A Case Series of Diverse Cardiac Abnormalities in Collegiate Athletes with COVID-19: Role for Multimodality Imaging Introduction Since the onset of the COVID-19 pandemic, there has been concern about subclinical cardiac pathology in collegiate athletes in the absence of clinical symptoms. We present 4 cases of abnormal left ventricular global longitudinal strain (LVGLS), a "red flag" for potential COVID-19 myocardial disease, following diagnosis, with diverse abnormalities reported via multimodality imaging weeks into recovery. Methods Cardiac imaging studies consisting of transthoracic echocardiography (TTE) and cardiovascular magnetic resonance imaging (CMR) were performed 10 days post-COVID-19 diagnosis and several weeks into recovery. Results Initial TTE revealed abnormal LVGLS, an identified "red flag" for potential COVID-19 myocardial disease. Further CMR imaging revealed potential recent/prior myocarditis in 1 athlete. Follow-up TTE several weeks later revealed a return to normal LVGLS. Conversely, 2 cases with normal CMR imaging had LVGLS that remained abnormal >30 days into recovery. Conclusions These individual cases highlight the substantial differences in echocardiographic and CMR abnormalities between athletes with confirmed COVID-19. Introduction With increased concern for coronavirus disease 2019 (COVID-19)-induced cardiac injury in athletes, numerous screening procedures have been proposed; however, the differential pattern of cardiac abnormalities in individual athletes remains unknown [1]. Moreover, recent work suggests the presence of myocardial and pericardial inflammation in some athletes during the acute stage, but it is unknown whether this persists weeks into recovery. We describe a series of 4 competitive athletes with COVID-19 cardiac abnormalities. All patients provided written and verbal consent for this study, which was approved by the Institutional Review Board of Kansas State University and conformed to the standards set forth by the Declaration of Helsinki. Case 1 An 18-year-old male collegiate athlete presented with fever, cough, and sinus congestion but denied chest pain and shortness of breath. The test for COVID-19 returned positive. Ten days into recovery, he had normal sinus rhythm, was normotensive, and had a normal 12-lead ECG. High-sensitivity troponin T (hsTnT) was normal (<0.010). Transthoracic echocardiography (TTE) showed mild left ventricular (LV) hypertrophy and cavity dimensions suggestive of athletic remodeling. Mild dilation of the right atrium (RA) and left atrium (LA) was noted with mild tricuspid valve regurgitation. LV ejection fraction (LVEF) was normal, but global longitudinal strain (GLS) was abnormal (-16%) with normal diastolic function (Figure 1). Cardiovascular magnetic resonance imaging (CMR) was performed 20 days into recovery, showing mild dilation of the LV and right ventricle (RV) along with mitral and tricuspid valve regurgitation but no signs of myocarditis (no edema, T1 alterations, or late gadolinium enhancement (LGE)). A repeat TTE 31 days into recovery showed improved GLS (-19.1%) and normal LVEF. Case 2 A 19-year-old male collegiate athlete tested positive for COVID-19, and 11 days into his recovery there were no indications of arrhythmias or murmurs, and hsTnT was normal (<0.010). Contemporaneous TTE revealed normal chamber dimensions, LVEF, and diastolic function, but abnormal GLS (-14.6%) with mitral and tricuspid valve regurgitation was observed.
CMR performed 19 days into recovery revealed no signs of myocarditis (no edema, T1 alterations, or LGE). A repeat TTE 35 days into recovery showed a still abnormal but improved GLS of -15.8%, with normal LVEF. Case 3 A 20-year-old male collegiate athlete presented with symptoms of headache, nausea, difficulty breathing, sore throat, and fatigue after testing positive for COVID-19 but denied any symptoms of chest pain or shortness of breath (SOB), and hsTnT was normal (<0.010). TTE was performed 10 days following diagnosis, at which time the athlete reported continuing symptoms of headache, stuffy nose, and fatigue. His blood pressure was 130/58 mmHg, and he had a normal 12-lead ECG and LVEF. TTE showed mild dilation of the RV and RA and an abnormal LVGLS of -13%. CMR was performed 22 days into recovery, showing no signs of myocarditis (no edema, T1 alterations, or LGE) and normal myocardial perfusion. Follow-up TTE, 31 days into recovery, revealed an abnormal but improved GLS (-14.9%). Case 4 An 18-year-old male collegiate athlete diagnosed with COVID-19 reported a mild cough during infection but denied chest pain, SOB, or fever. Ten days into recovery, he had a normal ECG and hsTnT (<0.010). TTE revealed an abnormal LVGLS of -16% but normal LVEF and diastolic function. CMR performed 15 days into recovery showed an area of midmyocardial LGE, indicating potential recent/prior myocarditis, with normal LV wall motion and LVEF (Figure 2). Discussion Early reports have indicated that SARS-CoV-2 infection elicits cardiac injury in up to 2 out of 5 hospitalized COVID-19 patients [2][3][4][5][6][7][8][9], with elevated cardiac troponin I, a marker of acute myocardial injury, within 24 hours of hospital admission associated with death (hazard ratio 3.23, 95% CI 2.59-4.02) [2]. Moreover, data from China have revealed that the mortality rate was higher among patients with vs. without cardiac injury, even after adjusting for age and prior comorbidities, with COVID-19-related cardiac injury associated with a 5-fold increase in the need for ventilation [4,6,[9][10][11][12]. Recent data from Puntmann et al. have also suggested that in middle-aged adults recently recovered from COVID-19, of whom 33% required hospitalization, a majority of patients had cardiovascular involvement, as detected by CMR, with 60% exhibiting ongoing myocardial inflammation [13]. Unfortunately, to date, most of our knowledge of COVID-19-induced cardiovascular complications is limited to hospitalized patients, with a paucity of information on the use of imaging modalities for diagnosis and follow-up of myocardial involvement in younger, nonhospitalized individuals. Recently, Joy et al. evaluated cardiac function in health-care workers (mean age 37 years) 6 months following COVID-19, of whom 85% were mildly symptomatic and 15% asymptomatic at the time of diagnosis [14]. Their measurements of cardiac involvement via CMR scanning revealed no difference in cardiac structure, function, or tissue characterization between recovered COVID-19 individuals and matched controls. While CMR imaging provides valuable insight into cardiovascular involvement with COVID-19, it is not readily available in all clinical settings, with TTE more commonly used as the method for early cardiac screening following infection.
Assessment of COVID-19-induced cardiac abnormalities, against the background of athletic remodeling, presents a unique challenge in identifying individuals at risk for pathologic outcomes following infection. In addition, most athletic departments do not have the capabilities or capacity for onsite cardiac evaluation following COVID-19. When follow-up evaluation for cardiac involvement is required, this challenge is further exacerbated when different imaging modalities are utilized. Thus, there is a critical need to improve our understanding of the differential pattern of cardiac abnormalities that may exist in this unique population. Here, we present 4 cases of abnormal LVGLS, an identified "red flag" for potential COVID-19 myocardial disease [1]. CMR revealed potential recent/prior myocarditis in 1 case, in whom LVGLS later returned to normal. Two cases had normal CMR outcomes, but LVGLS remained abnormal >30 days into recovery. This highlights the substantial variability in echocardiographic and CMR abnormalities between athletes with confirmed COVID-19. Recommendations for screening of athletes with COVID-19 include the use of hsTnT and echocardiographic assessment of chamber size, wall motion, and systolic and diastolic function, with subsequent CMR recommended when indicated [1]. We demonstrate variability in multimodality imaging in the characterization of potential COVID-19 cardiac injury in collegiate athletes, which highlights the challenges of managing COVID-19 and determining the appropriate workup in this population. At the individual patient level, an abnormal LVGLS and its temporal characteristics into recovery do not appear to be predictive of CMR-indicated myocarditis. To date, there are a limited number of studies investigating the potential cardiac consequences of COVID-19 in athletic populations, with fewer extending into recovery (Table 1). In a study of 22 COVID-19-positive athletes, all had normal troponin and LVGLS [15], with one meeting CMR criteria for suggested myocarditis. In agreement, a recent study by Rajpal et al. evaluated 26 COVID-19-positive student athletes and reported that no athlete had elevated troponin, but 46% showed CMR-indicated LGE, further highlighting how variable CMR data are between studies and the potential for myocarditis in the absence of elevated biomarkers [16]. Others have reported that ~40% of athletes with COVID-19 exhibit late pericardial enhancement, with more than one-half of athletes showing subclinical myocardial and pericardial disease [17]. Additional work in professional athletes with COVID-19 revealed acute cardiac injury in 2.5%, with CMR-confirmed inflammatory heart disease in 18.5%, myocarditis in 11.1%, and pericarditis in 7.4% [18]. To date, both direct and indirect effects of the SARS-CoV-2 virus on cardiovascular outcomes have been postulated but remain incompletely understood [19][20][21][22], thus limiting clinical decision-making regarding patient triage and treatment. Recent work by Greulich and Klingel suggests a very heterogeneous presentation of myocardial inflammation on endomyocardial biopsy in patients with a history of COVID-19, further highlighting the unknown pathogenesis of COVID-19-related cardiac inflammation [23].
Moreover, while CMR and other imaging modalities, like those used in the present study and previous work, provide valuable prognostic insight, they often provide different pathological information relative to endomyocardial biopsy, further complicating our understanding of the underlying mechanisms mediating COVID-19 myocardial injury [24]. Thus, this previous work, coupled with the divergent responses reported in the present cases, highlights the challenges associated with the clinical evaluation of cardiac abnormalities in athletes following COVID-19 diagnosis. The diverse temporal responses of our cases highlight our limited understanding of the time course of LV functional changes as they relate to CMR parameters. Taken altogether, the consequences of COVID-19 infection remain unclear, and future research into the long-term effects of this disease is warranted, with clear indication that no single screening modality provides a complete picture of potential cardiac abnormalities. Data Availability All data used in this case report are readily available through the cited literature or are protected patient information, which cannot be released. Disclosure We state that this manuscript is not under consideration elsewhere and that the research reported will not be submitted for publication elsewhere until a final decision is made as to the acceptability of the manuscript. There is no financial or other relationship that influenced the outcome of this paper. In addition, this manuscript represents original work without fabrication, fraud, or plagiarism. Conflicts of Interest The authors declare that they have no conflicts of interest.
Successful treatment of HIV-associated lupus-like glomerulonephritis with mycophenolic acid Abstract HIV-associated lupus-like glomerulonephritis is an uncommon but well-described entity. Treatment has traditionally focused on control of HIV viremia, with some using adjuvant steroids. Mycophenolic acid may prove to be a novel, nonsteroid therapy in patients with active glomerulonephritis despite control of the underlying infection. | INTRODUCTION A 34-year-old HIV-positive man was diagnosed with HIV-associated lupus-like glomerulonephritis. The patient's retrovirus was already well controlled on combination antiretrovirals. The glomerulonephritis was treated with mycophenolic acid with excellent response. We consider the potential role of mycophenolic acid as a novel therapy for this increasingly recognized entity. In the era of combination antiretroviral therapy (cART), HIV-associated glomerulonephritis is an uncommon cause of renal disease in HIV-positive individuals. There are limited data to guide treatment, but antiretroviral therapy, targeted to achieve a reduction in retroviral burden and normalization of CD4 T cell counts, renin-angiotensin system blockade, and systemic corticosteroids have previously been used. Given the metabolic consequences of systemic corticosteroids and the background significant burden of cardiovascular and metabolic disease in the HIV population, well-tolerated steroid-sparing or steroid-avoiding regimens are likely more desirable where renal disease persists despite suppression of detectable circulating virus. | CASE REPORT A 34-year-old HIV-positive Caucasian man presented with recurrent episodes of fever, myalgia, and macroscopic hematuria. Each episode was self-resolving, typically lasting 1-3 days. Investigations, including urinalysis and serum creatinine, had previously been unremarkable between clinical episodes. HIV infection had been diagnosed 12 years prior to the current presentation, predating the earliest onset of febrile hematuric episodes by approximately 2 years. Circulating retroviral load had been persistently below the detectable threshold by polymerase chain reaction since commencing cART 5 years prior. The decision to commence cART had been made in accordance with the evolution of international HIV treatment guidelines rather than any clinical or hematological indication. 1 The febrile episodes started approximately 10 years prior and typically occurred 2-3 times per year, though more recently had increased to once every 1-2 months, which prompted presentation for assessment. There was a significant decrement in renal function, with a serum creatinine of 160 μmol/L (eGFR 45 mL/min/1.73 m2), from 94 μmol/L (eGFR 90 mL/min/1.73 m2) one year prior. Investigation during an acute febrile episode demonstrated proteinuria (uPCR 80 mg/mmol) with hematuria (urinary erythrocytes > 1000 × 10^6/L) of glomerular morphology. Testing for co-existent infections, including hepatitis B and C, syphilis, gonorrhea, chlamydia, Brucella, Rickettsia, Q fever, Strongyloides, Mycobacterium tuberculosis, parvovirus, and malaria, was negative, as was screening for familial Mediterranean fever and porphyria. There was serological evidence of previous EBV and CMV exposure, but not of active or recent infection. There was no other past medical history of note, including no history of diabetes mellitus or hypertension.
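The serum creatinine to eGFR conversions reported above (160 μmol/L ≈ 45 and 94 μmol/L ≈ 90 mL/min/1.73 m2) are consistent with a standard creatinine-based estimating equation. The report does not state which equation was used, so the sketch below assumes the CKD-EPI 2009 formula purely for illustration; it reproduces the reported values to within a few mL/min.

```python
def ckd_epi_2009_egfr(creatinine_umol_l: float, age_years: float, female: bool = False) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the CKD-EPI 2009 creatinine equation.

    Assumption: the case report does not name its estimating equation; CKD-EPI 2009
    (without the race coefficient) is shown here only as an illustration.
    """
    scr = creatinine_umol_l / 88.4           # convert umol/L to mg/dL
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    return egfr

# Values from the case: presentation vs. one year earlier (34-year-old man)
for scr in (160, 94):
    print(f"Scr {scr} umol/L -> eGFR ~{ckd_epi_2009_egfr(scr, 34):.0f} mL/min/1.73 m^2")
```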
A renal biopsy was obtained showing renal cortex without any interstitial fibrosis or interstitial inflammation (Figure 1). A total of 36 glomeruli were present, none of which were globally sclerosed. By light microscopy, glomeruli showed a focal (10 glomeruli) mild segmental increase in mesangial cellularity. There was no endocapillary proliferation, no segmental sclerosis, no wire loops, no basement membrane spiking, and no hyaline thrombi. There were no crescents or necrotizing lesions. Immunoperoxidase stains showed granular capillary basement membrane staining for IgA, IgG, IgM, C3, and C1q with weak focal mesangial staining for IgA, IgG, and C1q. Electron microscopy was not performed. Concurrent serological testing was negative for antinuclear antibodies, anti-Smith, and anti-double stranded DNA. Serum levels of C3 and C4 were within the normal laboratory reference range, and rheumatoid factor was not detectable. Given his history of HIV positivity and negative systemic lupus erythematosus (SLE) serology, a diagnosis of HIV-associated lupus-like glomerulonephritis was made. Multiple treatment options, including active surveillance and systemic corticosteroids, were considered and discussed with the patient. The patient expressed a general reluctance to use systemic corticosteroids, due to the perceived burden of adverse effects. Given an established role in the treatment of lupus nephritis (LN) and a favorable side effect profile, a trial of mycophenolic acid (MPA) was also offered. Following informed consent, 1 gram of oral mycophenolate mofetil was commenced twice a day. Since commencing MPA, the patient has experienced a dramatic reduction in the frequency of febrile hematuric episodes, with only a single event now 12 months into treatment. Biochemical renal function has also returned to the previous baseline with a serum creatinine of 90 μmol/L (eGFR 90 mL/min/1.73 m2). | DISCUSSION HIV-associated lupus-like glomerulonephritis is a rare but well-recognized cause of renal disease in HIV-positive individuals. It is commonly included on the spectrum of "HIV immune complex kidney disease" (HIVICK), though whether this heterogeneous group represents distinct disease processes remains unclear. 2 HIV-associated lupus-like glomerulonephritis is described in patients with renal biopsy features that are "lupus-like" both histologically and by immunofluorescence markers but occur in HIV-infected patients who otherwise lack serological and clinical evidence of SLE. 3 Patients with primary LN and concurrent HIV infection are also acknowledged. While glomerular immune deposits may be analyzed for the presence of HIV antigens, this is not generally available outside of the research setting. 2 This, and the lack of fully objective criteria for primary SLE, creates the possibility of diagnostic misclassification. 4 Our patient did not meet conventional diagnostic criteria for SLE, 5,6 and while seronegative SLE represents a potential differential, this entity is also exceedingly rare. [5][6][7] Episodic fever was a prominent feature of our patient's presentation and, while not universal in this disease entity, has been previously described. 8 Importantly, extensive testing for concurrent infection was persistently negative, as were other markers of autoimmune disorders. While there was limited testing for periodic fever syndromes (the patient tested negative for familial Mediterranean fever), these entities would not account for the renal histopathological findings. 9
Given the lack of an alternative explanation and the presence of established HIV infection, which preceded the initial presentation of febrile hematuric episodes, the patient's renal pathology was attributed to a consequence of HIV rather than an alternative process. 10 Beyond empiric recommendations for renin-angiotensin system inhibition, there are limited data to guide treatment of patients with either HIV-associated lupus-like glomerulonephritis or HIVICK more broadly. 2,11 There is an apparent protective association between cART use and reduced risk of HIVICK, with cases typically arising where there is an established history of untreated HIV infection or a suppressed CD4 T lymphocyte count. Multiple case reports, generally from the era preceding the advocacy of universal cART upon HIV diagnosis, have suggested benefit from starting cART in untreated HIV patients with lupus-like renal disease. [12][13][14] Our patient was already established on cART with his retroviral infection well controlled. Mechanistically, we postulated that despite the absence of detectable HIV viremia, there were ongoing sequelae attributable to the initial immunostimulatory event, manifesting as ongoing fevers and active glomerulonephritis. This is perhaps analogous to the subgroup of patients with hepatitis-C-associated cryoglobulinemic vasculitis who have ongoing glomerular disease despite successful treatment of the causative hepatitis infection. In such patients, treatment with immunomodulatory therapies such as rituximab is now recommended in international guidelines. 15 The role of immunosuppression in HIVICK has a limited clinical evidence base. Experience with corticosteroids in HIV-associated nephropathy (HIVAN) has often been extrapolated and used as a rationale for therapeutic trials of systemic corticosteroids in individual cases of HIVICK, including in several cases of lupus-like glomerulonephritis, with some benefit. 2,8,11,16,17 The progressive frequency of febrile hematuria and the significant loss of renal function were considered the indication for specific immunomodulatory therapy in this case. Patients with HIV infection are at increased risk of metabolic and cardiovascular disease, making the potential adverse effects of systemic corticosteroids undesirable. 18 Our patient was reluctant to be treated with corticosteroids, which prompted consideration of alternative immune-modulating agents. MPA has become the standard of care for induction and maintenance therapy in proliferative lupus nephritis, with a favorable efficacy and tolerability profile. 19 Active LN is broadly characterized by reactive inflammation to immune complex deposition in the glomeruli. MPA appears to modulate this response, thereby reducing inflammatory-mediated kidney injury. 19 Through inhibition of inosine-5'-monophosphate dehydrogenase, guanosine nucleotides in lymphocytes are depleted, suppressing normal T- and B-cell proliferation pathways. 20 In HIVICK, circulating HIV antigen immune complexes have been found deposited in the kidneys. 21 Similar to LN, these circulating complexes are thought to trigger immune-mediated inflammatory kidney damage, which may explain why patients with HIV-associated lupus-like glomerulonephritis have histopathological lesions that closely resemble those of LN. These observations led us to postulate that our patient might be responsive to therapies traditionally used in primary LN.
The safety of using cytostatic immunosuppressive medications in patients with HIV infection is an important consideration, as are potential interactions with cART. MPA has been used successfully in HIV-positive renal transplant recipients, as an adjunct antiviral, and in other novel cases where HIV is thought to act as an immunostimulatory trigger, without apparent deleterious effect in terms of retroviral control, excess adverse events, or opportunistic infections. [22][23][24] Since commencing MPA therapy, there has been a clear demarcation in the trajectory of his clinical episodes, with normalization of biochemical renal function and urinalysis, a signal that immunomodulation is favorably altering the underlying disease process. MPA may therefore represent a novel, steroid-sparing therapy for HIV-associated lupus-like glomerulonephritis, particularly in those who have not responded to, or are already treated with, cART.
Determining Redundancy of Short-day Onion Accessions in a Germplasm Collection Using Microsatellite and Targeted Region Amplified Polymorphic Markers The U.S. National Plant Germplasm System is one of the world's largest national genebank networks focusing on preserving the genetic diversity of plants by acquiring, preserving, evaluating, documenting, and distributing crop-related germplasm to researchers worldwide. Maintaining viable germplasm collections is essential to world food security but comes at a cost. Redundancy within the collection can incur needless expense and occurs as a result of donations of similar material under different names from different donors. Alternatively, similarly named accessions from different donors can actually be genetically distinct. We evaluated 35 short-day onion (Allium cepa) accessions using microsatellite and targeted region amplified polymorphic (TRAP) molecular markers to compare newly acquired germplasm with existing accessions in the collection to determine differences and redundancies and to compare the use of each marker type in distinguishing the onion accessions. Both marker types distinguished differences and found similarities, but the results did not always agree. TRAP markers found one of the Italian Torpedo entries to be different, whereas the 10 microsatellite loci analyzed found no differences. In contrast, microsatellite analysis found all three Red Grano entries to be different, whereas TRAP analysis distinguished only one accession. The eight White Grano entries were separated into four groups by microsatellite markers and five groups by the TRAP markers. Discriminating among closely related accessions using molecular markers can require a large number of random marker loci, especially when differences may be limited to a single trait. TRAP markers were more efficient, uncovering ~10 random polymorphic loci per primer pair, whereas microsatellite markers each uncovered differences at a single locus. The U.S. National Plant Germplasm System (NPGS) is one of the world's largest national genebank networks focusing on preserving the genetic diversity of plants by acquiring, preserving, evaluating, documenting, and distributing crop-related germplasm to researchers worldwide.
Among the over 500,000 accessions of plant germplasm managed through the NPGS, 1100 accessions are of cultivated onion. Specific challenges for germplasm repositories include minimizing genetic change during seed increase, limiting redundancy in the collection, and maximizing genetic diversity in a collection that remains manageable in size. Maintaining viable germplasm collections is essential to world food security but comes at a cost. Redundancy within the collection can incur needless expense and occurs as a result of donations of similar material under different names from different donors. Alternatively, similarly named accessions from different donors can actually be genetically distinct. Thus, detection of genetic differences among accessions is particularly critical at germplasm repositories, which are uniquely challenged to develop collections that represent the genetic diversity of the crop species. Molecular markers have become an accepted and widely used tool for the measurement of genetic diversity, population structure, and evolution (Avise, 1994;Nei, 1987). Molecular marker technology can be used to characterize the extent of diversity within a collection and for the development of collection management strategies, which may include establishment of core collections (Gouesnard et al., 2001;Johnson et al., 2002;Marita et al., 2000), guidance for future collection efforts, and identification of gaps within collections of ancestral crop relatives. Additionally, analysis of worldwide genetic diversity can identify areas suited for the establishment of in situ conservation sites (Greene et al., 2008). Microsatellite markers, or simple sequence repeats [SSRs (Oliveira et al., 2006;Tautz and Renz, 1984)], are codominant polymorphic markers formed by 2-to 6-bp repeats flanked by conserved regions that can serve as primer sequences. Marker production using the polymerase chain reaction (PCR) can result in markers with high polymorphism information content (Anderson et al., 1993), but single reactions usually result in analysis at only one locus, which increases laboratory costs and limits the number of loci compared. Amplified fragment length polymorphism (AFLP) markers produce many dominant markers with a single PCR reaction (Vos et al., 1995). Although less informative of genetic variability within a locus, they allow for the efficient sampling of many loci (Gaudeul et al., 2004;Powell et al., 1996). Thus, AFLPs lend themselves to studies when more loci are needed to estimate differences between populations (Mariette et al., 2002). Despite being dominant markers, AFLPs have shown themselves to be effective in discriminating among populations and correctly assigning individuals to populations when compared with SSRs (Gaudeul et al., 2004;Woodhead et al., 2005). However, the production and scoring of AFLP markers in a genome as large as that of Allium species is problematic (Volk et al., 2004). Targeted region amplification polymorphism markers are amplified regions of polymorphic DNA between primers designed after specific gene analogs but that generate multiple fragments in a single PCR reaction (Hu and Vick, 2003). 
Using a ''fixed'' primer designed from a known gene sequence, a ''random'' primer with AT-or GC-rich cores designed to amplify intragenic fragments (Li and Quiros, 2001), and less-stringent annealing temperatures, multiple, easily discernible polymorphic markers can be generated that are generally distributed randomly across the genome of interest (van Treuren and van Hintum, 2009). A recently conducted, short-day onion germplasm plant exploration resulted in the collection of 70 to 75 lines that may be included in the onion collection to expand the number of short-day accessions. Some of the newly collected lines appear to be represented currently in the collection based on similar cultivar names of the newly collected lines and existing accessions. The inclusion of duplicate cultivars in the collection would result in additional cost for maintenance and regeneration without the benefit of additional genetic diversity. However, if the newly collected lines are different from existing accessions in the collection, even if similar cultivar names would suggest otherwise, then these newly collected lines should be included in the collection. A study to examine the relatedness of these newly collected lines to existing accessions currently in the collection would be helpful in deciding whether the collected material should be added or discarded. In addition to possible duplication of the newly collected lines, there are short-day onion accessions that share similar cultivar names. It is unclear whether these accessions represent different germplasm or are the same germplasm but with somewhat different cultivar names. A study to determine the relatedness of these existing accessions based on molecular marker data will be helpful in determining whether the accessions are unique or should be discarded as redundant. The objective of this study was to use molecular markers to evaluate newly collected, short-day onion lines and to compare that germplasm with existing accessions in the collection to determine if including that germplasm in the collection would result in redundancy. In addition, this project evaluated shortday accessions in the collection that appear to have similar cultivar names to determine if some accessions are redundant and compared the efficacy of SSR and TRAP markers toward this end. Materials and Methods PLANT MATERIAL. Entries, accessions and collected germplasm, were separated into eight groups (Eclipse, Italian Red Torpedo, Red Creole, Red Grano, White Creole, White Grano, White Mexican, and Yellow Grano) based on their cultivar name relatedness (Table 1). Entries in the Eclipse group of onions (U.S. Department of Agriculture, 2010) are very similar to 'Crystal White Wax', which was developed from 'White Bermuda' (Goldman et al., 2000). Onions produced by these cultivars tend to be early-maturing, medium to large in size, flat-shaped, of short storage, and possess a shiny, white dry outer scale and a mild-flavored, soft flesh (Magruder et al., 1941). Cultivars of the Italian Red Torpedo group produce a very characteristic onion bulb that is oblong in shape, has reddish purple dry outer scales, and white, mild-flavored fleshy scales. Creole-type onions are thought to be tropical in nature because they have a short-day to almost day-neutral bulbing response that would suggest adaptation to lower latitudes. Bulbs of Creole cultivars store for long periods of time, are small to medium in size, and have a flattened shape and pungent fleshy scales. 
Red Creole and White Creole cultivars produce bulbs that have reddish and shiny white dry outer scales, respectively. Bulbs produced from Grano-type cultivars tend to be large in size and top-shaped. The bulbs have mild-flavored fleshy scales and they store for short periods of time. Bulbs of 'Red Grano', 'White Grano', and 'Yellow Grano' have reddish purple, shiny white, and pale yellow dry outer scales, respectively. Grano-type onions are thought to have arisen from 'Babosa' or 'Valencia Grano' onions that were imported from Spain to New Mexico in the early 20th century (Goldman et al., 2000). Breeding work at this time in New Mexico resulted in the development of cultivars from 'New Mexico Early Grano' (Garcia and Fite, 1931; Goldman et al., 2000). 'New Mexico Early Grano' was taken to Texas, from which 'Texas Early Grano' was developed (Goldman et al., 2000). This cultivar served as the progenitor for 'Texas Grano', 'Texas Early Grano 502', and 'Crystal Grano' (Goldman et al., 2000). 'Texas Early Grano 502 PRR' was a pink root (Phoma terrestris)-resistant selection of 'Texas Early Grano 502'. Some additional material must have been introgressed into 'Texas Early Grano 502 PRR' because 'Texas Early Grano 502' possesses normal cytoplasm exclusively and 'Texas Early Grano 502 PRR' possesses sterile cytoplasm exclusively (Havey and Bark, 1994). 'White Mexican' is an onion cultivar that is adapted to the Tampico District of Mexico. Bulbs from this cultivar have a flat shape, short storage length, and pungent flesh. DNA EXTRACTION. Seeds of all accessions were grown in a greenhouse in Pullman, WA. Sixteen seedlings from each accession were sampled, and tissue was freeze-dried and placed in a 1.5-mL microcentrifuge tube with six 3-mm-diameter glass beads. Tubes were placed in a plastic tube rack and shaken in a Geno/Grinder 2000 (SPEX SamplePrep, Metuchen, NJ) until pulverized. Extraction of DNA was completed using the MagneSil® kit (Promega, Madison, WI). SIMPLE SEQUENCE REPEAT MARKERS. A total of 10 SSR primer pair loci were examined (ACM006, ACM013, ACM017, ACM078, ACM082, ACM091, ACM093, ACM097, ACM099, ACM102) with forward and reverse primer sequences reported by Jakše et al. (2005). Loci were selected that amplified 3-bp repeats to avoid stutter bands. The three-primer system developed by Boutin-Ganache et al. (2001) was used to reduce the cost of labeled primers. The sequence TGTAAAACGACGGCCAGT was added to the 5′ end of each forward primer selected from Jakše et al. (2005). Reaction volume was 10 µL and contained 0.1 unit of Biolase™ Taq polymerase (Bioline, Boston, MA); 25 ng of template DNA; and reagent concentrations of 150 µM each dNTP, 1.5 mM MgCl2, and 0.2 µM each of the forward and reverse primers. The amplification method started with an initial denaturing step at 94°C for 30 s, followed by 15 cycles beginning at 94°C for 10 s, then 65°C for 30 s, and then 72°C for 30 s, stepping down the annealing temperature 1°C at each cycle to finish at 50°C, and ending with 10 cycles of 94°C for 10 s, then 50°C for 30 s, and then 72°C for 30 s. Separation of the markers was done on a 6.5% polyacrylamide gel using an automated electrophoresis apparatus (GeneReadIR 4300; LI-COR Biosciences, Lincoln, NE). Images were visualized using GeneImager software (Scanalytics, Fairfax, VA), and fragments were scored by size. TARGETED REGION AMPLIFIED POLYMORPHIC MARKERS. TRAP markers were generated according to Hu and Vick (2003), adjusted to 10 µL reactions.
Each reaction contained 1.5 mM MgCl2, 200 µM each dNTP, 2 pmol fixed primer, 0.2 pmol of each arbitrary primer, and 1 unit of Biolase™ Taq polymerase with its associated buffer. The first combination of TRAP primers consisted of the fixed primer QHB14G14b and the random labeled primers Ga5 and Sa12 (Hu et al., 2005). The fixed primer, QHB14G14b, is from a sunflower (Helianthus annuus) expressed sequence tag (EST) with no homology to any known genes (Hu et al., 2005). The random primers, Sa12 and Ga5, were labeled with infrared (IR) dyes, IRD700 and IRD800 (Eurofins MWG Operon, Huntsville, AL), respectively. The second combination of TRAP primers consisted of the fixed primer miR156a and random-labeled primers Ga3 and Sa4 (Maher et al., 2006). The random primers, Sa12 and Ga5, were labeled with IR dyes, IRD700 and IRD800, respectively. TRAP marker fragments were separated on a GeneReadIR 4300 on a 6.5% polyacrylamide gel. Printed images were scored visually, markers being either present or absent. POPULATION DIFFERENTIATION. Accessions to be compared were grouped by individual plants, without a priori classification, into K clusters using the software STRUCTURE, which identifies genetically similar populations based on genotypes in Hardy-Weinberg equilibrium (Falush et al., 2003, 2007; Pritchard et al., 2000; Pritchard and Rosenberg, 1999). The program assumes models with each run of K hypothetical populations and assigns a probability [P(X|K)] that individuals (X) are correctly assigned to each of these K populations. Each individual plant is then assigned a membership coefficient, the fraction of its genome assigned to each of the K populations. Q-plots represent each individual by a thin horizontal line partitioned into K-colored segments that represent that individual's membership fractions in the K-estimated populations. Black lines separate individuals of different accessions. Estimation of the most probable K was facilitated by the technique developed by Evanno et al. (2005), which uses the change in the slope of the graph of [P(X|K)] and the variance of the probability [P(X|K)] at each value of K. Five replications with a burn-in length of 20,000 followed by a Markov chain Monte Carlo of 20,000 additional iterations were run at each assumed K until results indicated lowered and erratic values for [P(X|K)]. The parameter set included the admixture model with allele frequencies correlated. Average Q-plots over all replications for the best K and the resulting graphic display of ordered Q-plots were determined using the STRUCTURE ancillary programs CLUMPP (Jakobsson and Rosenberg, 2007) and DISTRUCT (Rosenberg, 2004), respectively. Results and Discussion Both marker types distinguished differences and found similarities, but the results did not always agree. Results of the STRUCTURE analysis for each of the groups of accessions are given as Q-plots in Figure 1. Both SSR and TRAP markers found 'Eclipse L303' to be distinct from the other Eclipse entries (Fig. 1A). Although SSRs found no differences among the Long Red Italian entries, TRAP markers distinguished 'Long Red Italian' (PI 546168) from the other two entries (Fig. 1B). The TRAP markers evaluated these accessions at 46 polymorphic loci.
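As an aside on the K-selection step described above, the Evanno et al. (2005) ΔK statistic can be computed directly from the replicate ln P(X|K) values that STRUCTURE reports. The sketch below illustrates the calculation; the input numbers are invented for illustration and are not values obtained in this study.

```python
import statistics as stats

def delta_k(log_prob_by_k: dict) -> dict:
    """Evanno et al. (2005) Delta-K from replicate STRUCTURE runs.

    log_prob_by_k maps each assumed K to the list of ln P(X|K) values from the
    replicate runs at that K. Delta-K is defined only for interior K values
    (it needs K-1 and K+1): |mean second difference of L(K)| / sd(L(K)).
    """
    ks = sorted(log_prob_by_k)
    mean_l = {k: stats.mean(v) for k, v in log_prob_by_k.items()}
    sd_l = {k: stats.stdev(v) for k, v in log_prob_by_k.items()}
    out = {}
    for k in ks[1:-1]:
        second_diff = abs(mean_l[k + 1] - 2 * mean_l[k] + mean_l[k - 1])
        out[k] = second_diff / sd_l[k]
    return out

# Hypothetical ln P(X|K) values from five replicate runs at each assumed K
example = {
    1: [-5210, -5215, -5208, -5212, -5211],
    2: [-4810, -4815, -4809, -4812, -4811],
    3: [-4795, -4805, -4790, -4800, -4798],
    4: [-4793, -4830, -4770, -4815, -4801],
}
print(delta_k(example))  # the K with the largest Delta-K is taken as the most probable
```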
Dominant markers, because of the greater number of loci typically uncovered, have shown themselves to be effective in discriminating among populations and correctly assigning individuals to populations when compared with microsatellite markers (SSRs) (Gaudeul et al., 2004;Woodhead et al., 2005). In contrast, TRAP markers found no distinction among the Red Creole entries, whereas SSRs found 'Red Creole' C5 and 'Red Creole' CSC to be different from 'Red Creole' ESC and 'Red Creole C5' ESC (Fig. 1C). Prevosti's distance, which is a measure of the average allele frequency differences between the two pairs of entries (Prevosti et al., 1975) was 0.154 (data not shown). Still, STRUCTURE, which separates individuals into populations based on Hardy-Weinberg equilibrium, repeatedly placed these two pairs into different groups as a result of these differences. Both marker systems found 'Red Grano' (PI 546234) to be different from the other Red Grano entries, but SSRs further distinguished the remaining two entries (Fig. 1D). Although the origins of the two commercial versions of 'Red Grano' are unknown, it is possible that differences observed between entries could have resulted from selection and/or genetic drift during seed production. Onion open-pollinated populations such as 'Red Grano' are very heterogenous between individuals and each individual within the population is quite heterozygous for most traits. With onions preferring to be cross-pollinated rather than self-pollinated, recombination among individuals during seed production would result in genetic variation not observed in the previous generation. In addition, onions are highly influenced by the environment in which they are grown. These environmental differences from one location to another can cause indirect selection and genetic drift of populations during seed production. This genetic drift could result in genetic differences between populations of the same cultivar if that cultivar was being produced by several different seed companies in several different locations. Both marker systems found similar groupings among the White Grano entries (Fig. 1E). 'Early White Grano' (PI 546094) and 'White Grano' (PI 546170) were similar. 'White Grano' CSC, 'White Grano Improved', and 'White Grano' ESC formed a similar group, and 'Extra Early White Grano' was distinct. However, although SSRs found 'S-1 White Grano' (PI 546161) and 'New Mexico White Grano PRR' to be similar, TRAP markers separated the two entries. The results for the Yellow Grano entries differed from all the other analyses in that although both marker types found a significant break at K = 2, the Q-plots did not divide those groups among entries. Rather, the Yellow Grano entries appeared to be an admixture of two populations (Fig. 1F). SSR alleles from the admixed populations showed a perceptible gradient from 'Texas Early Grano 502 PRR' developed by Ferry-Morse Seeds (Groupe Limagrain, Auvergne, France) to 'Texas Early Grano 502 PRR' developed by Asgrow Seed Co. (Monsanto Vegetable Seeds, Oxnard, CA). There was no perceptible gradient given by TRAP markers, but 'Texas Early Grano 502 PRR' developed by Asgrow Seed Co. was easily seen to be different from the rest of the accessions. This more clear delineation of this accession may have been the result of the greater number of loci analyzed by TRAP markers (31 polymorphic loci). There were no differences identified between the White Mexican entries or the White Creole entries by either marker system. 
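For reference, Prevosti's distance cited above is simply half the sum of absolute allele-frequency differences between two populations, averaged over loci. A minimal sketch follows; the locus names reuse two SSR loci listed in the methods, but the allele frequencies are invented and purely illustrative.

```python
def prevosti_distance(pop1: dict, pop2: dict) -> float:
    """Average over loci of 0.5 * sum(|p_allele - q_allele|) (Prevosti et al., 1975).

    pop1/pop2 map locus -> {allele: frequency}; frequencies at each locus
    should sum to 1 within each population.
    """
    total = 0.0
    for locus in pop1:
        alleles = set(pop1[locus]) | set(pop2[locus])
        total += 0.5 * sum(abs(pop1[locus].get(a, 0.0) - pop2[locus].get(a, 0.0))
                           for a in alleles)
    return total / len(pop1)

# Hypothetical allele frequencies at two SSR loci for two onion entries
entry_a = {"ACM006": {"180": 0.6, "183": 0.4}, "ACM013": {"210": 1.0}}
entry_b = {"ACM006": {"180": 0.3, "183": 0.7}, "ACM013": {"210": 0.8, "213": 0.2}}
print(round(prevosti_distance(entry_a, entry_b), 3))  # 0.25 for this toy example
```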
The results from this study found both differences and similarities among entries within each group. The decision whether to include or exclude newly collected lines or to remove redundant accessions ultimately rests with those individuals responsible for the collection. The goal of germplasm preservation is to maintain the greatest genetic diversity possible for a particular plant species. Although past policy has been to maintain every accession, knowledge of differences and similarities among accessions provides curators with tools to prioritize maintenance of accessions should resources become limiting. When subtle genetic differences among accessions may no longer be economically feasible to maintain, alternative methods to conserve diversity while reducing costs must be considered. One approach would be to combine closely related lines and/or accessions into a single line or accession that represents the cultivar in question. When this approach is taken, genetic diversity is maintained, whereas the number of lines or accessions is reduced. Another approach would be to designate the oldest, historical accession of the cultivar in question as the representative sample, maintain its genetic purity, and keep other representations only because they prove to be significantly different. As selection and genetic drift occur over time, newly acquired lines of a cultivar may diverge from the original cultivar. For example, the name Texas Early Grano 502 PRR would imply that this cultivar was only a pink root-resistant selection of 'Texas Early Grano 502'. This difference in itself might warrant the separation of the two cultivars. However, work by Havey and Bark (1994) suggested that the source of pink root resistance came from Excel 986 A, a male-sterile line used in the development of the hybrid cultivar Granex 33, and possesses sterile-type (S) cytoplasm. In doing so, the PRR strains of 'Texas Early Grano 502' are almost exclusively S cytoplasm, whereas 'Texas Early Grano 502' populations are almost exclusively normal (N) cytoplasm. This difference is important for the production of fertility restoration lines of 'Texas Early Grano 502' that have pink root resistance. Decisions made with regard to germplasm maintenance must rely on as much information that a curator can bring to bear. Passport data, common garden studies, and molecular analyses all provide valuable information for curators and increase germplasm use for researchers and plant breeders requesting samples. The results shown here demonstrate both the strength and the shortfalls of molecular markers for distinguishing among accessions. Analysis at multiple loci can clearly distinguish among entries based on unique alleles or differences in allele frequencies. However, when comparing closely related entries, the choice of marker systems or loci analyzed can ultimately determine the outcome. A near isogenic line, for example, can differ at one or a few loci, and the chances of choosing the marker locus associated with the important distinguishing trait can be remote if the differences and the linked marker are not known or suspected a priori. Subtle differences such as earliness or disease resistance can thus only be determined in a field study under which those differences are expressed. The advantages of TRAP markers are that they reveal more random loci in a single reaction and that if desired can also be directed toward specific traits. Miklas et al. 
(2006) used TRAP markers designed after ESTs associated with disease resistance in the Compositae Genomics Database or against sequenced resistance gene analogs from common bean (Phaseolus vulgaris). They found a proportion of the markers to map in the vicinity of R genes or to be linked with newly identified quantitative trait loci conditioning disease resistance in bean. Thus, TRAP markers designed for specific traits of interest could be used in conjunction with random genetic analysis to more specifically analyze differences among closely related accessions. In cases in which markers linked to specific differences are not known, random markers such as TRAPs with a high probability of wide genome coverage should be preferred. In the present study, a conservative approach toward germplasm preservation is recommended, in which the curator would maintain accessions as different, even if only one marker set defined them so. For example, although the 'Long Red Italian' accession is distinguished by TRAP markers, but not the microsatellites, that accession should be considered as distinct from this study. All three Red Grano accessions should be considered as different, and the eight White Grano accessions should be maintained as five separate accessions. 'Texas Early Grano 502 PRR' is clearly different from the other six accessions. However, further investigation into the background of these accessions might be warranted. Overall, the need for complete and accurate passport data with the inclusion of entries into the germplasm system cannot be overstated.
A preliminary study of microRNA expression in different types of primary melanoma MicroRNAs (miRNAs) have been proven to regulate the development and progression of cancer through various mechanisms. The aim of the present study was to compare miRNA expression between primary melanomas from different sites. We analyzed the expression of 84 miRNAs in 27 primary melanoma and 5 nevus formalin-fixed paraffin-embedded (FFPE) samples using the Human Cancer PathwayFinder miScript miRNA PCR Array. The FFPE samples were obtained from the archives of the Municipal Clinical Emergency Hospital of Timisoara and included 10 cutaneous melanomas, 10 uveal melanomas, 7 mucosal melanomas, and 5 cutaneous nevi. Out of 84 miRNAs, 11 miRNAs showed altered expression in all types of melanoma compared with the nevi. Among these, miR-155-5p, miR-9-5p, miR-142-5p, miR-19a-3p, miR-134-5p, and miR-301a-3p were upregulated, while miR-205-5p, miR-203a-3p, miR-27b-3p, miR-218-5p, and miR-23b-3p were downregulated. The highest similarity in miRNA expression pattern was found between the uveal and mucosal melanoma groups, i.e., 15 miRNAs had altered expression in both groups. Overall, we identified several miRNAs with significantly altered expression in primary melanomas, including those reported for the first time in this type of cancer. Among them, miR-9-5p, miR-203a-3p, miR-19a-3p, miR-27b-3p, and miR-218-5p showed altered expression in all three melanoma types vs. nevi. Further research should explore the potential of these miRNAs in melanoma. INTRODUCTION Primary melanoma, a malignant neoplasm of melanocytes, can be highly aggressive and has an increasing incidence worldwide [1]. In 2016, 76,380 new cases of melanoma were estimated in the United States (US), of which approximately 10,130 cases would be fatal [2]. Romania and other countries from Central and Eastern Europe show a higher incidence of primary melanomas at an advanced stage compared with Western Europe, which may be due to the lack of proper health education in these countries, among other reasons [3]. Based on this information, we consider melanoma to be one of the most important research areas in our country. Melanoma can occur in any tissue that contains melanocytes. These cells are predominantly present in the skin, but they can also be found in organs such as the eyes, inner ear, and brain (the substantia nigra and locus coeruleus) as well as in the mucosal lining of the leptomeninges, oral cavity, esophagus, rectum, anal canal, nasal cavity, paranasal sinuses, larynx, vagina, and cervix [4-13]. In each of these tissues melanocytes have different functions and are influenced by different local factors [13]. Numerous genomic studies have shown different mutational patterns in melanoma, which was followed by the investigation of epigenetic factors involved in melanoma development. MicroRNAs (miRNAs) are small noncoding RNAs (∼22 nt in length) that can regulate the development and progression of cancer through various mechanisms. Different miRNAs have been shown to be upregulated or downregulated in melanoma, which suggests their use as diagnostic and prognostic biomarkers as well as therapeutic targets [21]. Furthermore, circulating miRNAs can be used for non-invasive diagnosis and prognosis of early metastatic disease, representing a more cost-effective method for monitoring patients and deciding about treatment.
These biomarkers have a higher sensitivity for detection in early stages of disease than the current imaging techniques (e.g., computed tomography [CT], positron emission tomography [PET] scan, etc.), and they are less invasive compared with tumor excision and sentinel lymph node biopsy, used for the staging of melanoma [22]. Several studies have investigated miRNA expression changes in melanomas, most notably in the cutaneous and uveal types; however, there is little data on miRNA expression in primary mucosal melanoma. To the best of our knowledge, only two studies have investigated changes in miRNA expression in conjunctival melanomas [23,24] and none in mucosal melanomas involving other sites. Therefore, the current study is the first to analyze miRNA expression in primary mucosal melanomas involving mucosal surfaces other than the conjunctiva. Here, we compared miRNA expression among three primary melanoma types (cutaneous [CM], uveal [UM], and mucosal [MM]) and control cutaneous nevi. We identified several miRNAs with significantly altered expression in primary melanomas, including those reported for the first time in this type of cancer. Tissue samples We obtained 27 primary melanoma and 5 nevus formalin-fixed paraffin-embedded (FFPE) samples from the archives of the Pathology Department at the Municipal Clinical Emergency Hospital of Timisoara. The FFPE samples included 10 cutaneous melanomas (stage III-IV), 10 uveal melanomas, 7 mucosal melanomas (2 intestinal mucosa, 2 genital mucosa, 1 nasal mucosa, and 2 oral mucosa), and 5 cutaneous nevi (Table 1). To reduce genomic and transcriptomic changes that occur because of environmental factors, we collected the control nevus samples from younger individuals. All participants signed informed consent to participate in the study, and the study was approved by the Institutional Ethics Committee (approval number 1-015922/2019). Patients did not receive any treatment prior to tumor excision. Real-time polymerase chain reaction (real-time PCR) MiRNAs were purified from the FFPE samples using a miRNeasy FFPE Kit (Qiagen, MD, US), according to the manufacturer's instructions. We analyzed the expression of 84 miRNAs using the Human Cancer PathwayFinder miScript miRNA PCR Array (Qiagen, MD, US) on an ABI 7900HT real-time PCR instrument (Thermo Fisher Scientific, MA, US).
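The paper does not spell out its normalization procedure, but miScript-type PCR arrays are conventionally analyzed with the comparative Ct (2^-ΔΔCt) method; the snippet below is a generic sketch of that calculation, with hypothetical Ct values and an assumed reference RNA.

```python
# Generic 2^(-ddCt) fold-change sketch for a PCR array; values are hypothetical.
# dCt  = Ct(target miRNA) - Ct(reference RNA), computed per sample group
# ddCt = dCt(melanoma)    - dCt(nevus control)
# fold change = 2 ** (-ddCt); values < 1 are conventionally reported as
# negative fold regulation (-1 / fold change).

def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_nevus, ct_ref_nevus):
    d_ct_tumor = ct_target_tumor - ct_ref_tumor
    d_ct_nevus = ct_target_nevus - ct_ref_nevus
    dd_ct = d_ct_tumor - d_ct_nevus
    return 2 ** (-dd_ct)

# Hypothetical mean Ct values for one miRNA in melanoma vs. nevus samples:
fc = fold_change(ct_target_tumor=24.1, ct_ref_tumor=19.8,
                 ct_target_nevus=27.6, ct_ref_nevus=20.2)
regulation = fc if fc >= 1 else -1 / fc
print(f"fold change = {fc:.2f} (fold regulation {regulation:+.2f})")
```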
RESULTS The distribution of upregulated and downregulated miRNAs in the three primary melanoma types compared with control nevi is presented in Figures 1-3. The miRNAs with the most significant changes in expression in the primary melanoma samples compared with the nevus group are presented in Table 2. We used these results to generate a cluster dendrogram that highlights the segregation of miRNA expression according to the 4 studied groups (Figure 4). The samples clustered together for each melanoma type and for nevi, showing similar RNA expression patterns in all three repetitions of the miRNA expression analysis. Using Ingenuity Pathway Analysis (IPA), we determined the diseases, pathways, and biological functions associated with the miRNAs with altered expression. The IPA revealed that the dysregulated miRNAs are involved in many physiological and pathological processes. Table S1 shows the tumor stages, tumor sites, and diseases associated with the dysregulated miRNAs for each melanoma type. It is worth noting that both primary and metastatic disease are common to all three melanoma types. We used IPA for each data set, identifying the targets of the miRNAs that were significantly overexpressed or underexpressed in our study (p < 0.05). Matching our results against the IPA data sets, we generated pathway networks for each data set and, by identifying shared molecules, created merged networks for each type of melanoma (Figures S1-S3). The targets common to all three types of melanoma were Smad2/3, insulin, sirtuin 1 (SIRT1), and tumor protein p53; the other identified targets were either unique to each type of melanoma or shared by only two of the types. We found that miR-155-5p was upregulated in all three types of melanoma. Similarly, a previous study reported miR-150 and miR-155 to be upregulated in primary and metastatic melanoma compared with nevi [25]. In addition, miR-155-5p was suggested to play a role in the development of other solid and hematopoietic cancers [26-29]. Nevertheless, in melanoma cell lines, ectopic expression of miR-155-5p had an anti-proliferative and pro-apoptotic effect [30], and higher miR-155-5p expression in metastatic melanoma could predict longer post-recurrence survival [25,31]. The expression of miR-155-5p increases during the inflammatory response, especially during lymphocyte proliferation, and some authors suggest this supports the role of miR-155 in melanoma progression [32,33]. In this study, miR-205-5p was downregulated in melanoma vs. nevus samples, which is consistent with previous studies [34,35]. MiR-205 was shown to act as a tumor suppressor, inhibiting melanoma cell proliferation and inducing apoptosis by targeting vascular endothelial growth factor (VEGF) and the transcription factor E2F1 in vitro as well as in vivo [35-37]. MiR-142-5p was upregulated in our melanoma samples compared with control nevi, and previous research indicated that miR-142 is one of five miRNAs with important clinical implications and a high prognostic value in metastatic melanoma [38]. MiR-23b-3p was downregulated in our melanoma samples vs. control nevi. This miRNA was shown to have altered expression in different malignant tumors, including melanoma, with clinical, therapeutic, and prognostic significance [34,39]. We showed for the first time that miR-9-5p is upregulated in melanoma compared with nevi. MiR-9-5p promotes cell proliferation and metastasis in non-small cell lung cancer (NSCLC) and colorectal cancer [40,41]. It is involved in the differentiation of B lymphocytes into plasma cells through the negative regulation of the transcription factor PR domain zinc finger protein 1 (PRDM1/BLIMP-1), showing lower levels as the lymphocytes differentiate; this explains the high levels of miR-9-5p in primary large B-cell lymphomas [42]. On the other hand, miR-9 is downregulated in human ovarian cancer compared with normal ovary, and the overexpression of miR-9 inhibits cell growth in ovarian cancer in vitro through the negative regulation of nuclear factor NF-kappa-B p105 (NFκB1) [42,43]. MiR-9-5p may represent a new prognostic marker in melanoma and possibly a new therapeutic target. We found that miR-19a-3p is upregulated in melanoma vs. nevi, which is another novel finding in this cancer type. In gastric cancer, miR-19a-3p had a negative prognostic impact and promoted cell malignancy [44]. MiR-134-5p was upregulated in our melanoma samples compared with nevi. Previous studies showed that miR-134-5p is downregulated in NSCLC cells [45] and nasopharyngeal carcinoma cells [46] and that it has a role in inhibiting tumor progression.
Interestingly, another study on melanoma showed that miR-134-5p is downregulated in melanoma patients compared with healthy controls [47], and this discrepancy with our results should be further investigated. We also report in this study the upregulation of miR-301a-3p in melanoma vs. nevi. MiR-301a was previously reported to be upregulated in melanoma samples compared with benign melanocytic lesions [48]. In addition, in hepatocellular carcinoma cell lines, miR-301a-3p overexpression was shown to stimulate cell proliferation, invasion, and chemoresistance [49]. For the first time, we showed that miR-203a-3p is downregulated in melanoma compared with nevi. The overexpression of miR-203a-3p in colorectal cancer cell lines inhibited cell proliferation and reduced chemoresistance [50]; similarly, in nasopharyngeal carcinoma, overexpressed miR-203a-3p inhibited cell proliferation, migration, and invasion in vitro as well as xenograft tumor growth and lung metastasis in vivo [51]. On the other hand, in hepatocellular carcinoma cells, miR-203a-3p.1 overexpression was reported to be oncogenic [52]. The main limitation of our study is the small sample size. Thus, although we consider some of our findings to be groundbreaking in melanoma research, they still need to be confirmed in large-scale studies. To this end, we propose analyzing miRNAs in each melanoma sample separately and in combination with follow-up data, so as to determine the changes in miRNA expression specific to each cell type and site of origin, the impact of local factors, and the prognostic potential of miRNAs in melanoma. To achieve this, we plan to conduct a prospective study in the future. CONCLUSION Overall, we identified several miRNAs with significantly altered expression in primary melanomas, including those reported for the first time in this type of cancer. MiR-9-5p, miR-203a-3p, miR-19a-3p, miR-134-5p, miR-301a-3p, miR-155-5p, miR-142-5p, miR-205-5p, miR-23b-3p, miR-27b-3p, and miR-218-5p had altered expression in all three melanoma types. Further research should explore the potential of these miRNAs in melanoma. The primary contribution of this study is to demonstrate that despite originating from the same cell type, the three melanoma types are still separate entities characterized by different miRNA expression patterns.
Secure Outsourcing of Matrix Determinant Computation under the Malicious Cloud Computing the determinant of a large matrix is a time-consuming task, which appears more and more widely in science and engineering problems in the era of big data. Fortunately, cloud computing can provide large storage and computation resources and thus acts as an ideal platform for computations outsourced from resource-constrained devices. However, cloud computing also causes security issues. For example, the curious cloud may spy on user privacy through outsourced data, while a malicious cloud violating computing scripts, as well as cloud hardware failure, will lead to incorrect results. Therefore, in this paper we propose a secure outsourcing algorithm to compute the determinant of a large matrix under the malicious cloud model. The algorithm protects the privacy of the original matrix by applying row/column permutations and other transformations to the matrix. To resist malicious cheating on the computation tasks, a new verification method is utilized in our algorithm. Unlike previous algorithms that require multiple rounds of verification, our verification requires only one round without trading off the cheating detectability, which greatly reduces the local computation burden. Both theoretical and experimental analyses demonstrate that our algorithm achieves better local efficiency than previous ones across various matrix dimensions, without sacrificing the security requirements in terms of privacy protection and cheating detectability. Introduction The development of cloud computing provides great convenience to resource-constrained clients, who can outsource complex computations to the cloud by paying a fee. Computation outsourcing brings economic benefits to both resource-constrained clients and high-performance servers. Nevertheless, in practice, cloud servers are untrustworthy, which brings many security issues to computation outsourcing. According to [1], security has become a top issue that affects the business potential of cloud computing. The outsourced data usually contain private user information, and the curious cloud may spy on user privacy through them. Besides, the malicious cloud may violate the computing scripts and return incorrect results to the client. Even without considering the maliciousness of the cloud, computing errors caused by cloud hardware failures, software errors, and the like should also be detectable by the client locally. In addition to these traditional security issues, with the development of smartphones, virtual assistants (VAs) have become widely used; they are vulnerable to malicious attacks that upload voice recordings to the cloud without the user's knowledge or consent [2]. Therefore, cloud-based secure computation outsourcing algorithms have become an active research topic. There are generally two types of security assumptions in cloud-based secure outsourcing algorithms [3]. A cloud assumed to be semihonest (or honest-but-curious) is only curious about the privacy contained in the outsourced data. A cloud assumed to be malicious may first act semihonestly and then cause damage or forge results to sabotage the computation. Under both security assumptions, the local computational burden of the client should be as low as possible; otherwise, the efficiency benefit of outsourcing will be nullified.
Secure outsourcing algorithms under the malicious model must therefore balance efficiency, privacy protection, and cheating detectability. In order to reduce the local computational burden of cloud computation outsourcing while protecting privacy and detecting false results, researchers have proposed secure outsourcing algorithms for many commonly used and complex computations, including the computation of the matrix determinant. The computation of the matrix determinant has recently appeared more and more widely in science and engineering problems. Many studies in medicine and biology use matrix determinants for signal processing before applying machine learning algorithms to practical classification tasks. For example, when machine learning algorithms are used for medical diagnosis, the data collected by medical sensors are usually time series and contain private user information. Many of these studies divide the signal into several segments and arrange them into matrices; the determinants of these matrices are then taken as the feature values of the original signal. However, the computation of the matrix determinant is still unaffordable for the sensors, which usually need to outsource it to high-performance hardware. To improve the detectability of cheating behaviors by the malicious cloud, all previous algorithms must increase the number of verification rounds, which significantly increases the local computational burden of the client. Moreover, the currently known algorithm with the best privacy protection [4] uses significantly more local computations than other algorithms. Therefore, we aim to propose a novel algorithm for secure outsourcing of matrix determinant computation that resolves the conflict between efficiency and security. The contributions of this paper are summarized as follows: • We propose a secure outsourcing algorithm for matrix determinant computation under the malicious cloud model, which can not only ensure the confidentiality of the matrix but also detect forged results returned from the malicious cloud. We use the permutation, mix-row/mix-column, and split operations in our algorithm to protect privacy, which achieves the currently known lowest computation cost. • We propose a one-round verification method in the proposed algorithm, which achieves a high cheating detectability. Maliciously forged results can escape our local verification only with probability 1/(n!)^4, given a matrix of n × n dimensions. In all previous algorithms, the detectability of maliciously forged results depends on the number of verification rounds, and to achieve a high cheating detectability, multiple rounds of verification are required, which brings a high computational burden to the client. In the three previous algorithms [4-6], the success probability of maliciously forged results is 1/2^l, where l is the number of verification rounds, recommended to be greater than 20. • We conduct theoretical proofs of the correctness, efficiency, privacy protection, and cheating detectability of the proposed algorithm. Experimental results also demonstrate the superior efficiency of the proposed algorithm. The rest of the paper is organized as follows: We introduce the related work and a comparative analysis in Section 2. In Section 3, we introduce some background knowledge and the system model of our algorithm. Section 4 describes the proposed secure outsourcing algorithm for the matrix determinant.
We conduct the correctness, security, and complexity analysis of the proposed algorithm in Section 5. We evaluate the performance of our algorithm in Section 6. Finally, we conclude our work in Section 7. Related Work and Comparative Analysis Recently, privacy and security issues in lightweight devices have drawn wide concern (e.g., intrusion detection systems on lightweight devices [7] and secure computation outsourcing on resource-constrained devices). Although cloud computation outsourcing brings convenience to resource-constrained devices, it also causes privacy issues. Therefore, there is much research on how to protect user privacy and verify the correctness of results when using cloud computing. From the perspective of the cloud, data access control mechanisms can be used to protect user privacy and ensure data availability. Kayes et al. [8] gave a survey on context-aware access control mechanisms (CAAC) for data management in cloud and fog networks; they also proposed a new generation of Fog-Based CAAC (FB-CAAC) framework for accessing data from distributed cloud data centers. When computations on the encrypted data are required, access control mechanisms are not enough. From the perspective of the client, Fully Homomorphic Encryption algorithms (FHE) [9-12] and Attribute-Based Encryption algorithms (ABE) [13-15] have great application potential in cloud secure computation outsourcing, but their high computational complexities limit their practical applications, especially for resource-constrained devices. In addition, there is a large body of research on secure outsourcing algorithms for commonly used and complex scientific computations (e.g., modular exponentiation [16,17], extended Euclidean [18], bilinear pairings [19-21], polynomial multiplication [22]). Matrices have many applications in computer science, such as Digital Image Processing (DIP), computer graphics, computational geometry, Artificial Intelligence (AI), network communications, and so on, and many matrix computations are both commonly used and computationally expensive. Therefore, there is also much research on secure outsourcing algorithms for matrix computations. For example, the secure outsourcing algorithm for matrix multiplication has been widely studied [23-26]. In addition, Nonnegative Matrix Factorization (NMF) is widely used in DIP, face recognition, text analysis, and other fields; thus, there are many secure outsourcing algorithms for NMF [27-30]. Matrix inversion is also one of the most basic computations in large-scale data analysis, and computing the inverse of a matrix on resource-constrained devices such as sensors is usually costly; thus, there are also secure outsourcing algorithms for matrix inversion [31-33]. Besides, in the field of machine learning, Singular Value Decomposition (SVD) has a wide range of applications. It can be used not only for feature decomposition in dimension-reduction algorithms but also in recommendation systems, Natural Language Processing (NLP), and other fields. Securely outsourcing SVD to the cloud can greatly reduce the computation costs of the client. Chakan et al. proposed a secure outsourcing algorithm for SVD [34]; the local computational complexity of this algorithm is O(n^2) and the complexity on the cloud is O(n^3). Chen et al.
proposed a secure outsourcing scheme for SVD with fewer interactions between the client and the cloud [35]. Similar to the above matrix computations, the determinant is an important matrix computation in science and engineering. In the semihonest model, Kim et al. proposed a secure matrix determinant outsourcing method based on FHE [36]; their scheme computes the determinant by the standard definition of the matrix determinant, which results in a high computational burden on the client. Zong et al. introduced a division-free computational method for FHE-based secure matrix determinant computation outsourcing [37], which is significantly more efficient than the method in [36]. The above two algorithms only considered privacy under the semihonest model; however, in practice, the cloud may be malicious. To the best of our knowledge, the existing secure outsourcing algorithms for matrix determinant computation under the malicious model include [4-6]. In [5], Lei et al. used block-matrix and permutation techniques to protect privacy. In their algorithm, the client's local computations include (2 + l)n^2 + 2m^2 + 4mn + 2n + m multiplications, where m is the increase in dimension after encryption and n is the original dimension of the matrix. Liu et al. proposed a new secure outsourcing algorithm for the matrix determinant using the permutation and mix-row/mix-column operations, which avoids the increase in matrix dimension during encryption and reduces the number of local multiplications to (2 + l)n^2 + 3n [6]. Zhang et al. proposed a method that has better privacy [4]; however, because 8n elementary column/row transformations are involved in the process of encryption, it has a higher local computational burden than the other algorithms, requiring (10 + l)n^2 + 6n local multiplications. All three of the above algorithms and our proposed algorithm have the same cloud computational complexity, O(n^2.373). To prevent the malicious cloud from forging computation results, the above three algorithms adopted the idea of Freivalds' algorithm [38], with l·n^2 verification costs, and need to increase the number of verification rounds l to improve the cheating detectability; l is at least 20 in their works. In our proposed algorithm, only one round of verification is required, so there is no factor l in the computation cost. Table 1 compares our algorithm and the three existing algorithms in terms of local multiplications, privacy protection level, and cheating detectability. Since multiplications dominate the local computation, local additions are omitted here and will be discussed later in Section 5. As we will discuss in Section 5, our proposed algorithm uses the lowest overall local computation cost to achieve a high detectability of result cheating (i.e., a negligible probability of cheating success). Preliminary The notations and their implications used in this paper are shown in Table 2. This section introduces some background knowledge for our algorithm.
Table 2 (symbols and their implications): det(M), the determinant of matrix M; α ← K, choose an element α from the set K at random; λ, the security parameter; Prob_A^x(χ), the probability that attacker A obtains the secret input x using data χ; Prob_C^forge(χ), the probability that the client C detects forged results using data χ; M(i, j), the element in the ith row and jth column of matrix M; ⟨a, b⟩, the inner product of vectors a and b; f_LU(M), the lower-triangle and upper-triangle (LU) decomposition of matrix M. System Model The secure outsourcing algorithm for matrix determinant consists of the following five parts. • KeyGen(λ) → (SK1, SK2): λ is a security parameter related to key generation. The generated key SK1 is used to encrypt the input data, and SK2 is used to verify and decrypt the returned results. Both SK1 and SK2 should be kept private by the client C. • Encrypt(x, SK1) → σ_x: x is the input data. The client uses SK1 to encrypt the input x and gets the encrypted data σ_x, which is sent to the server S for computing. • Compute(σ_x, f) → σ_y: f is a function given by the client. The server computes σ_y using the given function f and the encrypted data σ_x. • Verify(σ_y, SK2) → (True/⊥): the client verifies the results returned from the cloud. If σ_y is valid, the output of this function is True; otherwise, the output is ⊥. • Decrypt(σ_y, SK2) → y: the client uses the secret key SK2 to decrypt σ_y and obtains the result y. The proposed algorithm is effective in the malicious cloud model: the malicious cloud can not only use its known information to infer the privacy of the client but also maliciously forge false computation results to tamper with the whole algorithm. The system model of secure outsourcing for the matrix determinant in this paper is shown in Figure 1. A semihonest attacker may only be curious about the privacy contained in the outsourced data. As shown in Equation (1), for any attacker A in the cloud server, if the probability Prob_A^x(σ_x, f, σ_y) of computing the secret input x from the attacker's known information (σ_x, f, σ_y) is so small that it can be ignored in polynomial time, the proposed secure outsourcing algorithm for matrix determinant is privacy-protected. A malicious attacker may cause damage or forge results to sabotage the computation. As shown in Equation (2), for any forged computation results returned from the cloud server, if the probability Prob_C^forge(σ_y) that the client C recognizes the forged results using the Verify(σ_y, SK2) function is infinitely close to 1, the proposed secure outsourcing algorithm for matrix determinant is cheating-detected, which means the cheating detectability of the algorithm is high and the success probability of the attacker's result cheating is negligible. If the local computation complexity of the client is substantially less than the complexity of computing the determinant without outsourcing, the secure outsourcing algorithm for matrix determinant is efficient. Secure Outsourcing of Matrix Determinant In this section, we propose a secure outsourcing algorithm for matrix determinant. In comparison with the previous algorithms, our algorithm aims to improve the privacy protection ability and cheating detectability with less local computation cost. We use the permutation, mix-row/mix-column, and split operations in our algorithm to protect the privacy of the matrix. Besides, we use a new result verification method to improve the cheating detectability, which is different from [4-6].
The main idea of the verification method is to ensure that at least the diagonal elements in the results of the lower-triangle and upper-triangle (LU) decomposition returned from the cloud are correct. The security analysis in Section 5 will prove that it is more difficult for forged results returned from the malicious cloud to pass this verification than the previous ones. The proposed algorithm is described as follows; the size of the input matrix is n × n in the rest of this section. Algorithm 1 (Procedure of Secret Key Generation): pick 8 random parameters satisfying n ≤ n1, n2, n3, n4, m1, m2, m3, m4 ≤ 2n; generate diagonal matrices P1, P2, P3, P4 and Q1, Q2, Q3, Q4 whose nonzero entries are drawn from the key space K; then, for i = 1 → 4: for j = 1 → n_i, randomly select two rows of P_i and exchange them; for j = 1 → m_i, randomly select two columns of Q_i and exchange them. In the KeyGen function, K can be {0, 1}^λ, given a security parameter λ. From the analysis in Section 5, the number of elements in K is associated with the security of the algorithm: when λ = 10, the probability that the cloud obtains the private input is less than 1/2^40, which is negligible. Other ways of defining K are also applicable, as long as the number of elements in the set K is sufficient to resist a brute-force attack by the cloud. Algorithm 2 (Procedure of Encryption): while keeping the other rows unchanged, the client randomly splits the ith and jth rows of M to get four matrices M1, M2, M3, and M4, as in Equations (3) and (4), and masks them into Y1, Y2, Y3, and Y4. Algorithm 3 (Procedure of Computation): the cloud performs the LU decompositions and determinant computations on the masked matrices and returns σ_y = {r1, r2, r3, r4, L1, U1, L3, U3} to the client. If the output of the Verify function is True, the client executes the Decrypt function; otherwise, the client rejects the results returned from the cloud. In the Verify function, for i = 1 → n the client picks j, k in [i, n] at random and checks the sampled entries of Y1 against the returned factors L1 and U1; for j = 1 → n, it picks i, k in [1, j] at random and checks the sampled entries of Y3 against L3 and U3; if any check fails, flag ← ⊥ is returned. Obviously, the input parameters of the Verify and Decrypt functions are the same. In fact, the result of the Decrypt function can be obtained during the execution of the Verify function in the proposed algorithm; we separate them into two parts for the sake of clarity and for convenience of comparison with the previous algorithms. The specific flowchart of the proposed algorithm is shown in Figure 2. In the next section, the correctness, security, and computational complexity of the proposed algorithm will be analyzed.
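To make the key-generation, splitting, and masking steps concrete, here is a minimal NumPy sketch assembled from the description above. It is an illustrative reconstruction rather than the authors' implementation: the matrix size, key range, and number of swaps are assumptions, and only one of the two row splits is shown. It also checks the decryption identity det(M) = r1/t1 + r2/t2.

```python
import numpy as np

rng = np.random.default_rng(0)

def gen_key_matrix(n, swaps):
    """Diagonal matrix with random nonzero entries from the key space,
    mixed by random row exchanges (a generalized permutation matrix)."""
    D = np.diag(rng.integers(1, 2**10, size=n).astype(float))
    for _ in range(swaps):
        a, b = rng.choice(n, size=2, replace=False)
        D[[a, b]] = D[[b, a]]          # exchange two rows
    return D

n = 6
M = rng.standard_normal((n, n))        # private input matrix

# Split one row of M so that det(M) = det(M1) + det(M2)
# (the determinant is linear in each single row):
i = rng.integers(n)
r = rng.standard_normal(n)
M1, M2 = M.copy(), M.copy()
M1[i], M2[i] = M[i] - r, r

# Mask each share before outsourcing: Y_k = P_k @ M_k @ Q_k
P1, Q1 = gen_key_matrix(n, 2 * n), gen_key_matrix(n, 2 * n)
P2, Q2 = gen_key_matrix(n, 2 * n), gen_key_matrix(n, 2 * n)
Y1, Y2 = P1 @ M1 @ Q1, P2 @ M2 @ Q2

# Local decryption from an honest cloud's determinants:
r1, r2 = np.linalg.det(Y1), np.linalg.det(Y2)
t1 = np.linalg.det(P1) * np.linalg.det(Q1)
t2 = np.linalg.det(P2) * np.linalg.det(Q2)
assert np.isclose(r1 / t1 + r2 / t2, np.linalg.det(M))
```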
Correctness Theorem 1. The proposed secure outsourcing algorithm for matrix determinant is correct. Proof of Theorem 1. When proving the correctness of a secure outsourcing algorithm, it is reasonable to assume that both the client and the cloud honestly follow the procedure of the algorithm. In the DECRYPT procedure, the determinant of the matrix is computed as det(M) = r1/t1 + r2/t2 (Equation (8)). According to the KEYGEN function, we can obtain t1 = det(P1)·det(Q1) and t2 = det(P2)·det(Q2); since det(Y_i) = det(P_i)·det(M_i)·det(Q_i) and the row split guarantees det(M) = det(M1) + det(M2) by the multilinearity of the determinant, Equation (8) is proved. This implies that the DECRYPT function always yields the correct determinant and the proposed algorithm is correct. Computational Complexity We analyze the computational complexities of the client and the cloud in this section. The KEYGEN, ENCRYPT, VERIFY, and DECRYPT functions are executed by the client; the COMPUTE function is executed by the cloud. We first analyze the computational complexity of the client. The ENCRYPT function involves matrix split operations and matrix multiplications. Obviously, the matrix split operation needs only 2n subtractions; the major computations are the masking products. Since P1, P2, P3, P4, Q1, Q2, Q3, and Q4 are obtained by randomly swapping rows/columns of diagonal matrices, the computations of Y1, Y2, Y3, and Y4 require a total of 8n^2 multiplications; therefore, the ENCRYPT function requires a total of 8n^2 multiplications and 2n subtractions. Two divisions and one addition are required in DECRYPT; thus, the computational complexity of the DECRYPT function is O(1). In conclusion, the client needs to undertake a total of 12n^2 + 12n multiplications and 4n^2 + 10n + 7 additions. Therefore, the computational complexity of the client side of the proposed algorithm is O(n^2). In Table 3, the number of multiplications required by every part of the proposed algorithm is demonstrated and compared with the three existing algorithms from [4-6]; in Table 4, the numbers of additions required by the proposed algorithm are listed and compared in the same way. Although the complexity of KEYGEN and ENCRYPT is higher than in the compared schemes [5,6], the complexities of VERIFY and DECRYPT in the compared schemes are higher than those of the proposed algorithm. According to [4-6], to improve security, the value of l is usually greater than 20 and the value of m is usually greater than 100, where l is the number of verification rounds and m is the increase of the dimension after encryption. In fact, when l in the compared algorithms is set to 20, the security of those algorithms is very poor, equivalent to the security of the proposed algorithm running on a matrix of dimensions 4 × 4. When pursuing higher security, the local computational cost of the compared algorithms will be significantly higher than that of the algorithm in this paper. Therefore, it is easy to see that the proposed algorithm has the lowest local computational burden. Theorem 3. The computational complexity of the cloud side of the proposed algorithm is O(n^2.373). Proof of Theorem 3. Only the COMPUTE function is performed by the server side. Four iterations of LU decomposition and 8n multiplications are required in Algorithm 3. If the server uses the fastest LU decomposition algorithm (e.g., Williams' algorithm [39]), the computational overhead of the cloud side can be reduced to O(n^2.373), which has been proven in [5]. In comparison with the previous algorithms, the specific theoretical performance of the proposed algorithm is shown in Table 5. The nonoutsourcing algorithm for the matrix determinant is O(n^2.373) using the fast LU decomposition method in [39]. Obviously, the local computation complexity of the client (O(n^2)) is substantially less than the computation complexity of nonoutsourcing (O(n^2.373)).
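The count of n^2 multiplications per masking product follows because each of the P and Q matrices has a single nonzero entry per row and column, so multiplying by one of them amounts to a permutation plus a scaling rather than a generic O(n^3) matrix product. A small sketch of this equivalence (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# A generalized permutation matrix: one nonzero entry per row and column.
perm = rng.permutation(n)
scale = rng.integers(1, 100, size=n).astype(float)
P = np.zeros((n, n))
P[np.arange(n), perm] = scale            # P[i, perm[i]] = scale[i]

M = rng.standard_normal((n, n))

# Generic product: O(n^3) scalar multiplications.
full = P @ M
# Exploiting the structure: each output row is one input row times a scalar,
# i.e., n^2 multiplications in total for the whole product.
fast = scale[:, None] * M[perm]

assert np.allclose(full, fast)
```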
Security A cloud server can be a passive or an active attacker. Next, we analyze the security of the proposed algorithm against both passive and active attacks. Privacy against Passive Attacks A passive (semihonest) attacker follows the procedure of the algorithm while exploiting the intermediate information to breach the privacy of the matrix. Theorem 4. The proposed secure outsourcing algorithm for matrix determinant is privacy-protected. Proof of Theorem 4. It is easy to see that the methods of encrypting matrices M1, M2, M3, and M4 are consistent. The private input matrix M can be obtained by computing {M1, M2} or {M3, M4}; we take {M1, M2} as an example to prove that the proposed algorithm is privacy-protected. As Y1 and Y2 are visible to the attacker, to restore {M1, M2} the attacker needs to guess P1, Q1, P2, and Q2 and then use their inverse matrices to restore {M1, M2}. When generating the original diagonal matrices P1, Q1, P2, and Q2 in the KEYGEN function, a total of 4n elements are selected from the key space K = {0, 1}^λ. The probability of attacker A correctly guessing the 4n elements is 1/(2^λ)^(4n). Besides, from the perspective of the attacker, as long as the frequencies of mix-rows/mix-columns (n1, n2, m1, m2) are large enough, the swapping is equivalent to repositioning the n nonzero elements of each diagonal matrix (P1, Q1, P2, Q2) so that every row/column of the new matrix has exactly one nonzero element. Thus, through the mix-row/mix-column operations of Algorithm 1, successfully guessing any one of these matrices requires n! attempts. Therefore, the passive attacker must make (n!)^4 · (2^λ)^(4n) brute-force guesses to obtain {M1, M2}, after which the attacker can easily compute M. The probability that the attacker A obtains the secret input M is shown in Equation (14): Prob_A^M = 1 / ((n!)^4 · (2^λ)^(4n)). When either the size of the matrix or the key space K is large enough, the value of Prob_A^M is so small that it can be ignored. As for the privacy of the output det(M): because det(M) = r1/t1 + r2/t2, the attacker must obtain t1 and t2 before computing det(M). As t1 = det(P1)·det(Q1) and t2 = det(P2)·det(Q2), computing t1 and t2 is equivalent to guessing P1, Q1, P2, and Q2. Thus, the probability that the attacker A obtains the secret output det(M) is the same as that of obtaining the input privacy. We can conclude that the proposed secure outsourcing algorithm for matrix determinant is privacy-protected.
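To get a feel for the guessing space in Equation (14), the snippet below evaluates log2 of (n!)^4 · (2^λ)^(4n) for a few parameter choices; the values of n and λ are illustrative.

```python
from math import lgamma, log

def guess_space_bits(n, lam):
    """log2 of the passive attacker's brute-force space (n!)^4 * (2^lam)^(4n).
    Uses lgamma(n + 1) = ln(n!) to stay in floating point for large n."""
    log2_factorial = lgamma(n + 1) / log(2)
    return 4 * log2_factorial + 4 * n * lam

for n in (4, 16, 256, 2000):
    print(f"n = {n:5d}, lambda = 10  ->  ~2^{guess_space_bits(n, 10):,.0f} guesses")
```

Even at n = 4 the space already exceeds 2^178, and it grows faster than exponentially with n.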
Security against Active Attacks An active (malicious) attacker injects false computation results into the algorithm to tamper with the whole procedure. Theorem 5. The proposed secure outsourcing algorithm for matrix determinant is cheating-detected. Proof of Theorem 5. There are three types of attacks with different complexity levels. In the first attack, the attacker returns random r1, r2, r3, r4, L1, U1, L3, U3 to the client with O(1) computational complexity. Obviously, such random values will not satisfy the required relations, so they cannot pass the verifications of Equations (5)-(7). The malicious cloud can also perform a small number of computations with O(n) complexity so that the returned values satisfy Equations (5) and (6), nullifying those verifications; however, due to the lack of t1, t2, t3, t4, it still fails to pass the verification of Equation (7), which requires the two splits to yield consistent determinants. The complexity of the second type of attack is O(n^2.373), and there are two ways of attacking. In the first way, the attacker computes the correct results σ_y but chooses a random ath element on the diagonal of L1 or U1 and tampers with it (e.g., L1'(a, a) = γ·L1(a, a)); in the same way, the attacker tampers with a random bth element on the diagonal of L3 or U3 (e.g., L3'(b, b) = γ·L3(b, b)). Besides, the attacker also changes r1, r2, r3, r4 to r1' = γ·r1, ..., r4' = γ·r4. This attack returns r1', r2', r3', r4', L1', U1', L3', U3' to the client and can successfully nullify the verifications of Equations (5)-(7); however, it cannot pass the sampled-position verification of the VERIFY function, which checks all the diagonal elements in L1, L3, U1, and U3 at least once. The nature of these verifications is to select 2n elements in the matrices Y1 and Y3, respectively, to verify the correctness of the diagonal elements in L1, L3, U1, and U3. As shown in Equations (15) and (16), the error in the ath/bth term of L1(a, −)/L3(b, −) propagates to the recomputed Y1(a, j)/Y3(b, k), which will not equal the stored Y1(a, j)/Y3(b, k); thus, the forged L1'(a, a) and L3'(b, b) are certainly detected. In the second way, before performing the COMPUTE function, the attacker tampers with one random element each in Y1 and Y3 (tampering with more items would be easier to detect), then performs the COMPUTE function with the forged inputs Y1', Y3' and returns the cheating result σ_y' = {r1', r2', r3', r4', L1', U1', L3', U3'} to the client. Since the sampled-position verification can detect this attack only with a probability of 2/n, this way of attacking can easily nullify it; however, as shown in Equation (17), it cannot pass the verification of Equation (7). Forging more elements in Y1 and Y3, or elements in Y2 and Y4, can also be detected by Equation (7) (e.g., the cloud returns σ_y' where r_i' is the cheating result and r_i is the correct result). The complexity of the third attack is much greater than the others. In order to pass all the verifications, before attacking, the attacker has to make (n!)^4 brute-force guesses to find the 4n positions that the client verifies in Y1 and Y3. Then, the cloud constructs two forged matrices Y1' and Y3' that leave the elements at the checked positions unchanged while changing the elements at other positions; meanwhile, the attacker ensures that the determinants of the forged matrices satisfy the relations required by the verification. Afterwards, the cloud tampers with r1, r2, r3, r4 by r1' = γ·r1, ..., r4' = γ·r4. The third attack is the only way to pass all the verifications in VERIFY, with a probability of 1/(n!)^4, and return a forged result y' = γ·y. The probability that the client C successfully detects the forged result is shown in Equation (18): Prob_C^forge(r1, r2, r3, r4, L1, U1, L3, U3) = 1 − 1/(n!)^4, which infinitely approaches 1 when n is large enough. Thus, we can conclude that the proposed secure outsourcing algorithm for matrix determinant is cheating-detected when the size of the matrix is large enough. As shown in Table 6, the theoretical security of the proposed algorithm is significantly higher than that of the other three algorithms while using the same key space K = {0, 1}^λ. According to [4-6], the parameter l is recommended to be greater than 20; thus, when n ≥ 5, the proposed algorithm achieves a high cheating detectability comparable to these three algorithms with the lowest computational cost, as demonstrated in Tables 3 and 4. In fact, it is common in DIP and machine learning to compute the determinants of large matrices with dimensions far greater than 5.
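The flavor of the sampled-entry check can be sketched as follows: sample random positions of a masked matrix and test each against an inner product of the returned LU factors, at O(n) cost per sample. This mirrors the description of the VERIFY function but is not the paper's exact pseudocode; folding the pivoting into L and the floating-point tolerance are simplifying assumptions.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(2)
n = 8
Y = rng.standard_normal((n, n))

# "Cloud" work: LU factorization with Y = L @ U (permutation folded into L).
L, U = lu(Y, permute_l=True)

def spot_check(Y, L, U, rounds, tol=1e-8):
    """Client-side one-round check: sample random positions of Y and test them
    against the inner product of the corresponding row of L and column of U.
    Each test costs O(n), so 2n tests cost O(n^2) in total."""
    for _ in range(rounds):
        i, j = rng.integers(Y.shape[0], size=2)
        if abs(Y[i, j] - L[i, :] @ U[:, j]) > tol:
            return False
    return True

print(spot_check(Y, L, U, rounds=2 * n))          # honest factors -> True

U_forged = U.copy()
U_forged[4, 4] *= 1.01                            # tamper with a diagonal element
print(spot_check(Y, L, U_forged, rounds=2 * n))   # mismatch is detected whenever
# a sampled position's inner product runs through the forged entry
```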
Performance Evaluation According to the above theoretical analysis, our algorithm can significantly reduce the local computational burden of the client. In this section, we implement the algorithm to assess its practical efficiency. The client and the cloud server functions in our experiments were conducted on the same machine, which has an Intel(R) Core(TM) i7-8550U CPU at 1.80 GHz with eight cores. We implemented the proposed algorithm in Matlab and used the LAPACK [40] package to perform the LU decomposition. The communication costs between the client and the server were ignored, since the computations dominate the running time. Table 7 shows the notations used in this section: t_c1, the time consumption of the client in key generation and encryption; t_c2, the time consumption of the client in decryption and verification; t_c, the total time consumption of the client. Our goal is to reduce the client's computational burden through outsourcing; therefore, the ratio of the time consumption without outsourcing to the time consumption with outsourcing is an important measure, referred to as the Acceleration Ratio of clients. In our comparisons, we only considered the existing algorithms proposed for the malicious model, without considering those merely for the semihonest model, since the former model is more secure. As far as we know, the currently existing algorithms for the malicious model include only Lei's algorithm [5], Liu's algorithm [6], and Zhang's algorithm [4]. In the experiment, we set the parameter l to 20 in the three previous algorithms and set the parameter m to 500 in Lei's algorithm. We compared the running time of every part of the algorithms on matrices of different dimensions. Table 8 and Figure 3 show that our algorithm has the highest acceleration ratio on matrices of all dimensions. The time consumption on the cloud in our algorithm is slightly higher than in the three previous algorithms, which is not a big issue since this follows the aim of outsourcing by moving the computational burden from local to cloud. As shown in Figure 4, compared with the previous algorithms and the nonoutsourcing algorithm, our proposed algorithm has the lowest local computational burden. As for the security level, even if l is set to 20, the security of the three previous algorithms is still much lower than that of the proposed algorithm. In order to show that the proposed algorithm achieves a higher security level with less local computation cost, we also compare the local running time of the proposed algorithm with the previous three algorithms for different values of the parameter l; in this experiment, the dimension of the matrix is fixed at 2000. As shown in Figure 5, because the proposed algorithm uses a new verification method that does not involve the parameter l, its local running time remains unchanged. As the parameter l increases, the cheating detectability of the previous three algorithms increases, since forged results can escape their local verifications with probability 1/2^l, while in our algorithm the cheating detectability remains at the probability of 1 − 1/(2000!)^4; however, the local computational burden of the three algorithms also increases significantly. We also apply the proposed secure outsourcing algorithm as a basic module in Cramer's rule to solve linear equations, comparing the time consumption of outsourced and nonoutsourced determinant computation when solving linear equations of different dimensions. As shown in Figure 6, outsourcing the determinant computations can significantly reduce the time consumption of solving linear equations, which also proves the efficiency superiority of the proposed secure outsourcing algorithm for matrix determinant computation.
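The Cramer's-rule application is straightforward to reproduce: each unknown is a ratio of two determinants, so every determinant evaluation is a candidate for outsourcing. The sketch below uses a local np.linalg.det as a stand-in for the secure outsourced call; it illustrates the application, not the experimental code.

```python
import numpy as np

def cramer_solve(A, b, det=np.linalg.det):
    """Solve A x = b by Cramer's rule. `det` stands in for the secure
    outsourced determinant call; here it is evaluated locally."""
    n = A.shape[0]
    det_A = det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the right-hand side
        x[i] = det(Ai) / det_A
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```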
Conclusions In this paper, we propose a new secure outsourcing algorithm for matrix determinant computation under the malicious model. We also conduct theoretical analysis to prove the correctness, efficiency, privacy protection level, and cheating detectability of the proposed algorithm. In comparison with the previous algorithms in [4-6], the proposed algorithm achieves higher cheating detectability with less computation on the client. The previous algorithms [4-6] use Freivalds' method [38] for verification, which achieves a high cheating detectability only by continuously increasing the local computational burden. The cheating detectability of the proposed algorithm does not depend on the number of verification rounds but is only related to the size of the matrix; even when the dimension of the matrix is not large (greater than or equal to 5), the cheating detectability of our algorithm is significantly better than that of the previous algorithms. As for privacy, the state-of-the-art algorithm [4] has the highest privacy, but its local computation cost usually nullifies the efficiency benefit of outsourcing when the dimension of the matrix is less than 2000; our algorithm, with the lowest local computation cost, achieves privacy comparable to the state-of-the-art works. We also conduct experiments to demonstrate the local efficiency superiority of the proposed algorithm. However, the theoretical analysis and experimental results also show that the proposed algorithm has a higher cloud computation cost than the three previous algorithms, which is not a big issue since this follows the aim of outsourcing by moving the computational burden from local to cloud. As future work, we will study how to reduce the cloud's computational burden in secure outsourcing algorithms for matrix computations.
WRN suppresses p53/PUMA-induced apoptosis in colorectal cancer with microsatellite instability/mismatch repair deficiency Colorectal cancer (CRC) initiates in the large intestine (colon or rectum) and is a leading cause of cancer-related deaths in the United States in both males and females and across all racial and ethnic groups (1). As much as 16% of these patients harbor a cancer susceptibility gene pathogenic variant, such as mutations in the adenomatous polyposis coli gene, the DNA polymerase genes POLE or POLD1, or the base excision repair genes MUTYH or NTHL1 (2). The most prominent of these hereditary CRC syndromes include pathogenic mutations in genes of the mismatch repair (MMR) pathway, including MLH1, MSH2, MSH6, and PMS2 (2). Patients with MMR deficiency are classified with Lynch syndrome (3) and have a high incidence of many cancers in addition to CRC, including cancer of the stomach, endometrium, and ovaries, among others (2, 3). As such, there is a dire need to uncover new therapies that might be selective for MMR-deficient cancers such as CRC. In PNAS, Hao et al. (4) uncover the mechanism that leads to apoptosis for a recently identified targeted therapy approach for MMR-deficient CRC (Fig. 1). An active area of discovery for tumor targeted therapeutic approaches relies on synthetic lethality, whereby treatments are designed to exploit compensatory (synthetic lethal) relationships among biological pathways essential for tumor growth, one of which is uniquely defective in the tumor (5, 6). Such an approach, for example, has been highly effective for the treatment of breast cancer with BRCA1/BRCA2 deficiency or with defects in other homologous recombination genes, shown to be selectively sensitive to inhibitors of the DNA damage response (DDR) signaling enzyme, PARP1, such as olaparib (7) or talazoparib (8). Recent efforts (6, 9–13) have indicated that MMR-deficient CRC tumor cells are highly sensitive to loss of expression of WRN, a RecQ-family ATP-dependent helicase/bifunctional 3′-5′ exonuclease, pointing to a synthetic lethal relationship between WRN and the MMR pathway in CRC. Hao et al. (4) build on their lab’s expertise on the mechanism of p53-dependent cell death, further documenting the significance of the WRN/MMR synthetic lethal relationship. Importantly, they find that the loss of or inhibition of WRN, in MMR-defective cells, triggers DNA damage that leads to p53-dependent and p53-independent PUMA activation that precipitates the onset of mitochondria-mediated apoptosis (Fig. 1). MMR is a post-replicative DNA repair pathway that recognizes and repairs base-base mispairs and DNA strand misalignments that arise during DNA replication (14, 15). Such lesions or DNA replication errors are recognized by the MSH2/MSH6 heterodimer (the MutSα complex) that in turn recruits the MLH1/PMS2 heterodimer (the MutLα complex) (16). The base-base or strand misalignment error is then corrected by excision of the ‘error-containing’ DNA strand followed by gap-filling DNA synthesis, improving the overall fidelity of DNA replication by ~1,000-fold (14, 16, 17). As such, pathogenic defects in MMR lead to elevated mutation rates (14, 18) and genetic variability characterized by microsatellite instability (MSI) (19, 20). Regions of the genome encoding microsatellites, or tracts of short (2 to 4 base) tandem repeats, are highly unstable when MMR is defective, giving rise to either expansions or contractions of these microsatellites (19, 20).
Close to 15% of CRC is classified by high levels of MSI (also called MSI-high), whereas 85% are found to have chromosomal instability but with genetically stable microsatellite regions, defined as microsatellite stable (MSS) (21). WRN loss (via RNA interference or CRISPR/Cas9-mediated gene knockout, KO) in MMR-deficient and MSI-high cells (9–13) leads to elevated DNA damage (DNA double-strand breaks, DSBs) and cell death (9), not seen in MSS cells (10). Interestingly, it is the helicase function of WRN that is required for viability of MMR-deficient cells, not the WRN exonuclease activity (9–11). WRN may help resolve abnormal genomic structures, such as long (TA)n repeat expansions (13), that accumulate in MMR-deficient and MSI-high cells (10, 11, 13). Upon loss of WRN in MMR-deficient/MSI-high cells, such genomic structures are likely not resolved, leading to an increase in DNA damage and the activation of the DDR signaling kinases ATM and CHK2. The increase in DSBs upon loss of WRN in MSI-high cells (10, 11, 13), with a high prevalence of end-resected breaks (13), is in line with an increase in ATM/CHK2 activation. Hao et al. (4) show in MSI CRC, but not MSS CRC, that WRN depletion triggers an increase in DNA damage, as measured by phosphorylation (activation) of ATM(Ser1981) and CHK2(Thr68). Further, they find that ATM inhibition suppresses the WRN loss-induced phenotype, highlighting the significance of DNA damage-induced apoptosis following WRN inhibition in MMR-deficient CRC cells (Fig. 1). WRN dependency in MMR-deficient/MSI-high cells has been documented in over 60 preclinical models (12), and loss of WRN in MMR-deficient cells induces p53 activation and increased p21 levels (11).
While this might suggest a role for PUMA, Noxa, or Bax, the mechanism of DNA damage-induced apoptosis upon WRN loss or inhibition in MMR-deficient/MSI-high cells had not been explored. However, the increase in DSBs would be predictive of induction and/or stabilization of p53 to trigger either cell cycle arrest or apoptosis. Here, Hao et al. (4) show that WRN loss in MMR-defective (MSI) cells, but not MSS cells, selectively induces activation of apoptosis, specifically by an increase in Annexin V, the release of cytochrome c, and cleavage of caspases 3 and 9 (Fig. 1). To better define the WRN-dependent apoptotic pathway in MSI cells, Hao et al. (4) used gene set enrichment analysis to evaluate the changes in mRNA species (RNA-seq) in an MMR-deficient/MSI cell line (HCT116), before and after chromosome 3+5 complementation to revert to MMR proficiency and MSS status. Consistent with the prediction that WRN-KO-induced DNA damage may activate p53, they found that the predominant gene changes were downstream targets of the p53 pathway, including p21, PUMA, and Noxa, among others (4). In cells displaying MSI, WRN loss (via RNA interference) induced elevated protein expression levels for p53, p21, PUMA, and Noxa. PUMA is directly induced at the mRNA level by p53 via binding to the PUMA promoter and is required for WRN-KO-induced apoptosis (4). Further, viability of the WRN-depleted cells can be rescued by CRISPR/Cas9-mediated PUMA or Bax gene KO, but not by KO of Bim, Noxa, or Bak (4). This apoptotic signature (phenotype) can be blocked by the pan-caspase inhibitor z-VAD and is indicative of mitochondria-mediated apoptosis via the p53/PUMA/Bax axis (Fig. 1) (4). In PNAS, Hao et al. define the mechanism of PUMA-induced activation of the mitochondria-mediated apoptosis pathway upon WRN loss, or inhibition of the helicase activity of WRN, in MMR-deficient/MSI-high CRC cells and tumor/PDX models. Specific post-translational modifications (PTMs) of p53 define cell fate, with different PTMs driving the onset of either cell cycle arrest or apoptosis. Upon loss of WRN in MSI cells, Hao et al. (4) show that the induction of apoptosis is dependent on p53(K120) acetylation, likely via TIP60 (4), and blocking p53(K120) acetylation prevents both PUMA induction and apoptosis (Fig. 1). This study, using a series of isogenic and genetically defined cell lines, documents that PUMA is the key facilitator of WRN loss-induced apoptosis in MSI CRC cells (4). There was evidence previously (11), and as shown here by Hao et al. (4), for p53-independent cell death/apoptosis upon WRN-KO in MSI cells, complicating the mechanistic clarity of the response. However, here Hao et al. demonstrate that while most of the phenotype reflects p53-dependent PUMA induction, there is clear evidence for p53-independent PUMA induction (4), likely via DNA damage (Fig. 1). Finally, Hao et al. (4) use PDX tumor models to evaluate the role of PUMA in the response to WRN loss in MSI CRC tumors. As with the cell line models, loss of WRN (via RNA interference) shows an increase in DNA damage in the MSI tumors, but not MSS tumors, and the onset of apoptosis is dependent on p53 and PUMA (4). The essential role for PUMA observed here may be anticipated, as PUMA is a BH3-only Bcl-2 family member that is important for apoptosis induction in CRC (4). Earlier reports suggested that only the helicase
activity of WRN is essential for viability of the MSI CRC cells (9-11). Similarly, Hao et al. show that inhibition of the helicase activity of WRN, with the small-molecule inhibitors NSC617145 and ML216, selectively kills MSI CRC cells, and they demonstrate selective efficacy of ML216 in MSI CRC PDX tumors (4). CRC with MMR deficiency and the resulting MSI or MSI-high genotype shows poor treatment outcomes and resistance to therapy, which has necessitated more selective therapeutic approaches. Such targeted therapies that exploit the unique genetic defects of CRC tumors have been used to great effect to selectively suppress tumor growth (22). The increased mutational load in MMR-deficient CRC (18) suggested that such tumors may encode a high level of 'non-self' antigens that could be exploited by immune checkpoint inhibition, such as anti-PD-1, shown to have significant clinical benefit (23). Although immune checkpoint monotherapies such as cytotoxic T-lymphocyte antigen-4 inhibitors (ipilimumab) and PD-1 inhibitors (pembrolizumab, nivolumab) have shown benefit in MSI CRC, especially regarding metastatic disease, acquired or intrinsic resistance is quite prevalent (>60%), and reliable biomarkers that can help predict responsiveness have not yet been defined (24). In this study (4), Hao et al. reinforce the findings that MMR-deficient CRC cells and tumors are dependent on WRN expression and WRN helicase activity and define this mechanistically as a p53/PUMA-dependent mechanism. Interestingly, p53 mutations in MSI CRC are rare (<20%) (4), suggesting that most MSI CRC may be responsive to WRN helicase inhibition (12, 25) to induce PUMA and the onset of apoptosis (4) (Fig. 1). ACKNOWLEDGMENTS. Research in the Sobol lab on DNA repair, the analysis of DNA damage, and the impact of genotoxic exposure is funded by grants from the NIH [ES014811, ES029518, ES028949, CA238061, CA236911, AG069740, and ES032522] and from the NSF [NSF-1841811]. Support is also provided by grants from the Breast Cancer Research Foundation of Alabama and from the Legorreta Cancer Center Endowment Fund (to R.W.S.). Special thanks to Aishwarya Prakash (University of South Alabama) for carefully reading this report and providing comment. The author apologizes that not all primary references could be cited due to space limitations.
2023-01-06T22:11:31.198Z
2023-01-04T00:00:00.000
{ "year": 2023, "sha1": "45a88cb6364ecfc5d022e11a14d0415e826826fe", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c88a42df307d17805010b4d747b2f94f3cfd91c6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119168415
pes2o/s2orc
v3-fos-license
Almost global existence for cubic nonlinear Schr\"odinger equations in one space dimension

We consider non-gauge-invariant cubic nonlinear Schr\"odinger equations in one space dimension. We show that initial data of size $\varepsilon$ in a weighted Sobolev space lead to solutions with sharp $L_x^\infty$ decay up to time $\exp(C\varepsilon^{-2})$. We also exhibit norm growth beyond this time for a specific choice of nonlinearity.

Introduction. We study the initial-value problem for the following cubic nonlinear Schrödinger equation (NLS) in one space dimension:
\[
(i\partial_t + \tfrac12\partial_{xx})u = \lambda_1 u^3 + \lambda_2 |u|^2 u + \lambda_3 |u|^2\bar u + \lambda_4 \bar u^3, \tag{1.1}
\]
where $u:\mathbb{R}_t\times\mathbb{R}_x\to\mathbb{C}$ is a complex-valued function of space-time and $\lambda_j\in\mathbb{C}$ for $j=1,\dots,4$. Our main result, Theorem 1.1 below, is almost global existence for small solutions to (1.1).

The most widely-studied cubic NLS is the gauge-invariant equation
\[
(i\partial_t + \tfrac12\partial_{xx})u = \lambda|u|^2u, \qquad \lambda\in\mathbb{R}. \tag{1.2}
\]
Gauge invariance (that is, the symmetry $u\mapsto e^{i\theta}u$ for $\theta\in\mathbb{R}$) corresponds to the conservation of the $L^2_x$-norm. As the cubic NLS in one dimension is $L^2_x$-subcritical, this conservation law (together with Strichartz estimates) leads to a simple proof of global well-posedness of (1.2) in $L^2_x$. As for long-time behavior, equation (1.2) in one dimension is a borderline case for the $L^2_x$ scattering theory: for 'short-range' nonlinearities $|u|^pu$ with $p>2$ there are positive results, while for the 'long-range' case $0<p\le 2$ there is no $L^2_x$ scattering [1,24]. For (1.2), small data in $\Sigma$ lead to global solutions that decay in $L^\infty_x$ at the sharp rate $t^{-1/2}$ and exhibit modified scattering, that is, linear behavior up to a logarithmic phase correction as $t\to\infty$ (see for example [4,7,17,14]).

For non-gauge-invariant equations like (1.1), the question of global existence is less well-understood. Hayashi-Naumkin have studied non-gauge-invariant cubic NLS in one space dimension extensively (see [8,9,10,11,12,13,19], for example). They have shown that, under specific conditions on the initial data, small data in weighted Sobolev spaces can lead to global solutions. In these cases they are also able to describe the asymptotic behavior. In this paper, we prove almost global existence for small (but otherwise arbitrary) data in $\Sigma$. Such a result is in the spirit of the well-known works concerning quadratic wave equations in three dimensions [15,16]. Similar results have also been established for the cubic NLS with derivative nonlinearities; see, for example, [21,23]. However, as mentioned in [8,19,21], there is a sense in which cubic nonlinearities containing at least one derivative can be considered 'short-range', while this is not the case for the problem without derivatives. We will discuss this in a bit more detail below in Section 1.1.

Our main theorem is the following. It is important to observe that without imposing further conditions on the initial data or the coefficients in the nonlinearity, Theorem 1.1 is essentially sharp. To demonstrate this, we consider the particular model
\[
(i\partial_t + \tfrac12\partial_{xx})u = i|u|^2u \tag{1.4}
\]
and show that solutions either blow up or exhibit norm growth after the almost global existence time. The idea is that for sufficiently small data, certain ODE dynamics will dictate the behavior of the solution. The particular model (1.4) has the following advantages: (i) solutions to the ODE blow up in finite time, and (ii) since $|u|^2u$ is gauge-invariant, we get better estimates for $u$ than those appearing in Theorem 1.1, specifically, a slower growth rate for the $L^2_x$-norm of $(x+it\partial_x)u$.
Thanks to (i) we need not fine-tune the initial conditions to make the arguments work, while (ii) allows us to show in a fairly straightforward fashion that the ODE can accurately model the PDE for long times. The precise result we prove is the following.

Theorem 1.2 (Norm growth). There exists $\varepsilon>0$ sufficiently small that the following holds. Suppose $u_1\in\Sigma$ satisfies
\[
\|u_1\|_\Sigma = \varepsilon, \qquad \|\hat u_1\|_{L^\infty_\xi} \ge \tfrac12\varepsilon, \tag{1.5}
\]
and let $u\in C([1,T_\varepsilon];\Sigma)$ be the solution to (1.4) with $u(1)=u_1$ given by Theorem 1.1. In particular, $T_\varepsilon=\exp(\frac{1}{c\varepsilon^2})$ for some $c>0$, and $\sup_{t\in[1,T_\varepsilon]}\|\hat u(t)\|_{L^\infty_\xi}\lesssim\varepsilon$. Denoting by $T_{\max}\in(T_\varepsilon,\infty]$ the maximal time of existence, there exists an absolute constant $K\gg\varepsilon^2$ and a finite time $T_K>T_\varepsilon$ such that either $T_{\max}\le T_K$, or the stated norm growth occurs at some $t\in[T_\varepsilon,T_K]$.

Remarks. • The proof will show that we could take, for example, $K=(200c)^{-1}$. This means that $K$ is a small but fixed constant independent of $\varepsilon$ and, in particular, large compared to $\varepsilon^2$. • The time $T_K$ is the time at which the associated ODE solution reaches size $4K$; see (5.10). • By the standard local theory (see below), if $T_{\max}<\infty$ then $\|u(t)\|_{L^2_x}\to\infty$ as $t\to T_{\max}$. • With trivial modifications, our arguments apply to (1.4) with a nonlinearity of the form $\lambda|u|^2u$ with $\operatorname{Im}\lambda>0$. (The case $\operatorname{Im}\lambda=0$ reduces to (1.2), while for $\operatorname{Im}\lambda<0$ one can prove that small solutions exist on $[1,\infty)$ and have 'dissipative' behavior, namely, additional logarithmic time decay [22]. This same dissipative behavior occurs for (1.4) in the negative time direction.)

The strategy described above, namely deducing the behavior of solutions from associated ODE dynamics, has been carried out in many previous works. In the case of NLS with the $|u|^2u$ nonlinearity, this approach leads to a proof of modified scattering [7,17,14]. In other cases, for some specific nonlinearities and well-prepared initial data, one can prove global existence and describe the asymptotics [8,9,10,11,12,13,19]. In our case, we pick an equation for which the ODE solutions blow up; accordingly, we can demonstrate norm growth. This example demonstrates that one cannot hope to improve on Theorem 1.1 without imposing some more specific conditions.

1.1. Strategy of the proof of Theorem 1.1. We begin by recalling the standard local theory for (1.1): for data in $L^2_x$, the initial-value problem is locally well-posed. The existence in $C([1,T];L^2_x(\mathbb{R}))$ follows from the standard arguments, namely contraction mapping and Strichartz estimates. The fact that the time of existence depends only on the norm of the data is a consequence of scaling. The existence in $C([1,T];\Sigma)$ follows from standard persistence-of-regularity arguments, which involve commuting the equation with $\partial_x$ and $J(t)=x+it\partial_x$. We refer the reader to the textbook [2] and the references cited therein.

For a solution $u$ we define the weighted norms used below. The proof of Theorem 1.1 will be based on a bootstrap argument in a properly chosen norm $X(t)$; to this end we introduce the corresponding notation. We record here two facts, namely the estimates (1.7) and (1.8), that we prove in Section 2.3.

Lemma 1.4. The estimates (1.7) and (1.8) hold.

The next two propositions are the main ingredients for the bootstrap argument used to prove Theorem 1.1; they constitute the heart of the paper, and their conclusions are the estimates (1.9) and (1.10). We prove Propositions 1.5 and 1.6 in Sections 3-4 by performing an analysis in Fourier space known as the space-time resonance method [5,6]. More precisely, we begin by looking at the integral equation (1.6) and expressing it in terms of the profile $f=e^{-it\partial_{xx}/2}u$ in Fourier space, as in (2.5)-(2.6).
We do not follow this approach for the gauge-invariant term $|u|^2u$, since it is amenable to a simpler treatment altogether, which in particular does not necessitate analysis via space-time resonance. We then proceed to study the oscillations in the integrals (2.5). The most delicate interactions arise when there is a lack of oscillation in $(\eta,\sigma,s)$, that is, when the phases in (2.6) vanish together with their gradients in $\eta$ and $\sigma$. The region of $(\eta,\sigma)$ in $\mathbb{R}^2$ where this vanishing occurs is known as the space-time resonant set. For the three non-gauge-invariant cubic nonlinearities, the space-time resonant set is the origin. To deal with the contribution of this set, our strategy is to introduce a time-dependent cutoff to a neighborhood of the origin, where we use volume bounds. We then decompose the complement of this neighborhood into regions where we can integrate by parts in either space (in $\eta$ or $\sigma$) or time (in $s$), using the identities
\[
e^{isA} = \frac{\partial_\eta e^{isA}}{is\,\partial_\eta A} \quad (\text{and similarly in } \sigma), \qquad e^{isA} = \frac{\partial_s e^{isA}}{iA},
\]
respectively, where $A$ is one of the phases appearing in (2.5)-(2.6). This procedure yields additional decay, either by introducing the factor $s^{-1}$ or by introducing more copies of the solution (cf. (1.8) and the fact that $\partial_sf=e^{-is\partial_{xx}/2}(\partial_s+\frac{i}{2}\partial_{xx})u$ is a cubic expression in $u$). Thanks to our decompositions of the frequency space, the multipliers of the form $(\partial_\eta A)^{-1}$ or $A^{-1}$ that appear after the integration by parts can be viewed as (powers of) antiderivatives acting on the highest frequency terms, up to multiplication by Coifman-Meyer multipliers. We point out that the contribution of the term $\bar u^3$ is the easiest to estimate, since away from the origin we have complete temporal non-resonance (the phase $\Phi$ in (2.6) is bounded below). For $u^3$ and $|u|^2\bar u$, we need to decompose the frequency space more carefully. The use of the Coifman-Meyer theorem (see Lemma 2.1 below) is crucial for our arguments, since it gives the sharp Hölder-type estimates that allow us to prove optimal lifespan bounds.

Note that from the perspective of space-time resonance, the presence of derivatives in the nonlinearity actually offers some improvement compared to the nonlinearities we consider in (1.1). Indeed, derivatives act as multiplication by the frequency on the Fourier side and hence provide some cancellation at zero frequency, that is, on the space-time resonant set. In particular, this can be thought of as a type of null condition (see [20], for example). We refer the reader especially to [6], which employs the space-time resonance method to prove global existence and scattering for a non-gauge-invariant quadratic NLS in two space dimensions, with a nonlinearity containing a derivative at low frequencies.

Assuming Propositions 1.5 and 1.6 for now, we prove Theorem 1.1.

Proof of Theorem 1.1. Let $0<\varepsilon<1$ be a parameter to be specified below, and let $\|u_1\|_\Sigma=\varepsilon$. If $u$ solves (1.1), then Proposition 1.5, Proposition 1.6, and Lemma 1.4 imply the bound (1.11) for some absolute constant $C>0$. We choose $\varepsilon=\varepsilon(C)>0$ and define $T_\varepsilon$ via (1.12). We now claim that the following estimate holds:
\[
\|u(t)\|_{X(t)} \le 2\varepsilon \quad \text{for all } t\in[1,T_\varepsilon]. \tag{1.13}
\]
This holds at $t=1$ by (1.7). By continuity, if it is not true for all $t\in[1,T_\varepsilon]$, there must be a first time $t\in(1,T_\varepsilon]$ such that $\|u(t)\|_{X(t)}=2\varepsilon$. Applying (1.11) at this time and using (1.12) yields a strictly smaller bound, which is a contradiction. This proves (1.13). To complete the proof, it suffices to show that if $u:[1,T]\times\mathbb{R}\to\mathbb{C}$ is a solution such that $T\le\exp(\frac{1}{c\varepsilon^2})$ and $\sup_{t\in[1,T]}\|u(t)\|_{X(t)}\lesssim\varepsilon$, then we may continue the solution in time. By the local theory it suffices to prove that $\|u(T)\|_{L^2_x}\lesssim\|u_1\|_{L^2_x}$.
We use the Duhamel formula (1.6), Lemma 1.4, and the bound on $u$ to estimate the $L^2_x$-norm of $u(T)$. Thus, by Gronwall's inequality and the bound on $T$, we deduce $\|u(T)\|_{L^2_x}\lesssim\|u_1\|_{L^2_x}$, as was needed to show. This completes the proof of Theorem 1.1.

The rest of the paper is organized as follows: In Section 2 we set up notation and collect some useful lemmas. The main trilinear estimates that we will use repeatedly in the proofs of Propositions 1.5 and 1.6 are given in Lemma 2.3. In Section 3 we prove Proposition 1.5, and in Section 4 we prove Proposition 1.6. As shown above, these two propositions imply the main result, Theorem 1.1. Section 5 contains the proof of Theorem 1.2, in which we demonstrate norm growth for a model nonlinearity. In Appendix A we discuss the construction of some cutoffs used in Sections 3 and 4.

Acknowledgements. J.M. was supported by the NSF Postdoctoral Fellowship DMS-1400706. F.P. was supported in part by NSF grant DMS-1265875.

2. Notation and useful lemmas

For nonnegative $X,Y$ we write $X\lesssim Y$ to denote $X\le CY$ for some $C>0$. We write $X\ll Y$ to denote $X\le cY$ for some small $c\in(0,1)$. We write $\mathcal{O}(X)$ to denote a finite linear combination of terms that resemble $X$ up to constants, complex conjugation, and Littlewood-Paley projections. For example, the nonlinearity in (1.1) is $\mathcal{O}(u^3)$. The Fourier transform and its inverse are denoted $\mathcal{F}$ and $\mathcal{F}^{-1}$. For $s\in\mathbb{R}$ we define the fractional derivative operator $|\partial_x|^s$ as a Fourier multiplier, namely $|\partial_x|^s=\mathcal{F}^{-1}|\xi|^s\mathcal{F}$. We define the homogeneous Sobolev space $\dot H^s_x$ via $\|u\|_{\dot H^s_x}=\||\partial_x|^su\|_{L^2_x}$. We employ the standard Littlewood-Paley theory. Let $\varphi:\mathbb{R}\to\mathbb{R}$ be an even bump function supported on $[-\frac{10}{9},\frac{10}{9}]$ and equal to one on $[-1,1]$. For $N\in 2^{\mathbb{Z}}$ we define the usual projections $P_{\le N}$ and $P_N$. These operators commute with all other Fourier multiplier operators. They are self-adjoint and bounded on every $L^p_x$-space and obey the estimate (2.1).

2.1. Linear theory. The free Schrödinger propagator $e^{it\partial_{xx}/2}$ is defined as a Fourier multiplier in (2.2). From (2.2) we can read off the factorization (2.3), in terms of the modulation $M(t)=e^{ix^2/2t}$ and the dilation $D(t)$.

2.2. Notation and Duhamel formula. Suppose that $u$ is a solution to (1.1) and denote $f(t)=e^{-it\partial_{xx}/2}u(t)$. Using the Duhamel formula (1.6) and taking the Fourier transform leads to the representation (2.5), where the phases $\Phi$, $\Psi$, and $\Omega$ are given in (2.6). We do not write out the phase for the gauge-invariant nonlinearity $|u|^2u$, since this term is amenable to a simpler analysis. It is convenient to introduce the shorthand (2.7); in this notation we may rewrite the phases as in (2.8). We also need to consider derivatives of the phases, which we record in (2.9). Finally, we set up notation concerning frequency cutoffs. For a function $f=f(s,x)$ and $s\ge1$, we define $f_{\mathrm{lo}}$ and $f_{\mathrm{hi}}$ via (2.10). Here $\varphi$ is the standard cutoff defined at the beginning of this section. We use the notation $\varphi_*$ to denote that either $\varphi_{\mathrm{lo}}$ or $\varphi_{\mathrm{hi}}$ may appear, and $f_*=P_*f$ to denote that either $f_{\mathrm{lo}}$ or $f_{\mathrm{hi}}$ may appear.

2.3. Proof of Lemma 1.4. In this section we prove (1.7) and (1.8). It suffices to show the claimed bounds for suitable modified projections; these share the same estimates as the usual projections, as $\widetilde P_{\le N}f=P_{\le N}\widetilde f$. We first use (2.1), Plancherel, and (2.4) to estimate the $L^2_x$-contribution. Second, we use Hausdorff-Young, Cauchy-Schwarz, (2.4), and the bound on the remaining term.

2.4. Useful estimates. For a function $m:\mathbb{R}^3\to\mathbb{R}$ we define the trilinear operator $T_m$ as in (2.12). If $a,b,c$ are functions of space-time, we employ the analogous notation, and we also make use of the notation introduced in (2.7).
The following multilinear estimate, due to Coifman-Meyer, is one of the primary technical tools used in this paper; for the original result, see [3, Chapter 13], and for a more modern treatment, see [18]. We will now establish some trilinear estimates that will be used frequently in Sections 3 and 4. The proofs rely on Lemma 2.1, together with the estimate (2.14), which is a consequence of Plancherel and Hölder. In all of the estimates (2.15)-(2.17) below, we can exchange the role of $a$ and $b$ on the right-hand side.

As explained above, we obtain a priori bounds on solutions to (2.5) by analyzing the cubic terms in Fourier space. This leads us to study trilinear expressions of the form (2.12) with symbols that have some degeneracies for small frequencies. By properly dividing frequency space, we will be able to show that these singularities are always of the form $\max(|\xi_1|,|\xi_2|,|\xi_3|)^{-1}$ or $\max(|\xi_1|,|\xi_2|,|\xi_3|)^{-2}$. We will therefore be able to use the bounds (2.15)-(2.17) to control these expressions; in particular, we will make the choice $N\sim s^{-1/2}$, where $s$ is the time variable.

Proof of (2.15). We decompose $a=a_{\le N}+a_{>N}$ and first estimate the contribution of $a_{\le N}$. Next, note that under our assumptions, $|\xi_1\xi_3|m$ is also a Coifman-Meyer multiplier; this handles the contribution of $a_{>N}$. Combining the two estimates above yields the first estimate in (2.15). We turn to the second inequality in (2.15). We decompose both $a=a_{\le N}+a_{>N}$ and $b=b_{\le N}+b_{>N}$. Using (2.1) as well, we first treat the all-low-frequency piece. Next, note that under our assumptions, $\xi_2^2m$ is also Coifman-Meyer. Noting that $\xi_1^2m$ is also Coifman-Meyer, we can similarly treat the mixed terms; the remaining case can be treated similarly, as $|\xi_1\xi_2|m$ is also Coifman-Meyer.

Proof of (2.16). The first estimate in (2.16) follows from a direct estimate. To obtain the second estimate in (2.16), we decompose $b=b_{\le N}+b_{>N}$. Using (2.1) as well, we first treat the low-frequency piece. Next, note that under our assumptions, $|\xi_2|m$ is also Coifman-Meyer, which handles the remaining piece.

Proof of (2.17). Suppose $|\xi_3|m$ is a Coifman-Meyer symbol supported on a region where $|\xi_3|\gtrsim\max\{|\xi_1|,|\xi_2|\}$. We can obtain the first estimate in (2.17) by a direct estimate. To obtain the second estimate in (2.17), we proceed as above and decompose $b=b_{\le N}+b_{>N}$. Using (2.1) as well, we first treat the low-frequency piece. As $|\xi_2|m$ is Coifman-Meyer, we can also estimate the remaining piece. This completes the proof.

3. Proof of Proposition 1.5

In this section we prove the estimate (1.9) for $f(t)$. Using (2.5) we see that it suffices to estimate the terms (3.1)-(3.4) in $L^\infty_\xi$, where the phases $\Phi,\Psi,\Omega$ are as in (2.8) and we use the notation from (2.7).

3.1. Estimation of (3.1). We recall the notation from (2.10) and write $f=f_{\mathrm{lo}}+f_{\mathrm{hi}}$ in the integrand of (3.1). Expanding the product, we encounter two types of terms: (i) the low frequency term (3.5), and (ii) terms containing at least one high-frequency factor. We estimate the contribution of the low frequency term by using volume bounds, which gives an acceptable contribution to the right-hand side of (1.9). For the terms of type (ii) we write $1=\chi_1(\vec\xi)+\chi_2(\vec\xi)+\chi_3(\vec\xi)$ for $\vec\xi\in\mathbb{R}^3$, where each $\chi_j$ is a smooth Coifman-Meyer multiplier such that
\[
|\xi_j| \ge \max\{\tfrac{9}{10}|\xi_k| : k\ne j\} \quad \text{for all } \vec\xi\in\operatorname{supp}(\chi_j). \tag{3.6}
\]
See Appendix A for the construction of such multipliers. We will show how to estimate the contribution from $\chi_3$; the same ideas suffice to treat the (almost symmetric) contributions from $\chi_1$ and $\chi_2$. Note that on the support of $\chi_3$ we need only consider the contribution of the terms containing $f_{\mathrm{hi}}(\xi_3)$; indeed, if $|\xi_3|\lesssim s^{-1/2}$, then $\max_j|\xi_j|\lesssim s^{-1/2}$ and we can estimate with volume bounds as we did for (3.5) above.
Thus, it suffices to consider the contribution of the term (3.7). In the region of integration in (3.7) we have $\Phi\ne0$, and in particular $|\Phi|\gtrsim|\vec\xi|^2\sim\xi_3^2$. We may therefore use the identity $e^{is\Phi}=(i\Phi)^{-1}\partial_se^{is\Phi}$ and integrate by parts in $s$, producing the terms (3.8)-(3.12).

Using the notation from (2.12), we notice that we can write (3.8) in the form $T_m$, where $m=\chi_3(\vec\xi)(i\Phi)^{-1}$ is a symbol satisfying the hypotheses of Lemma 2.3(i). That is, $m$ is supported on a region where $|\xi_3|\gtrsim\max\{|\xi_1|,|\xi_2|\}$, and one can check that $\xi_3^2m$ is Coifman-Meyer. We apply (2.15) with $N\sim s^{-1/2}$ (cf. (2.10)); in view of Lemma 1.4, this gives an acceptable contribution to the right-hand side of (1.9).

We next turn to (3.9). From the definition of the cutoffs $\varphi_{\mathrm{lo}}$ and $\varphi_{\mathrm{hi}}$ (cf. (2.10)), we have $\partial_s\varphi_*(\xi_j)=\pm\frac12s^{-1/2}\xi_j\varphi'(s^{1/2}\xi_j)$. As multiplication by $s^{1/2}\xi_j\varphi'(s^{1/2}\xi_j)$ corresponds to a bounded projection to frequencies of size $\sim s^{-1/2}$, we can write $\partial_s\varphi_*(\xi_j)\,\hat f(\xi_j)=s^{-1}\widehat{f_{\mathrm{med}}}(\xi_j)$, where $f_{\mathrm{med}}$ denotes such a projection of $f$. Distributing the derivatives and considering all of the possibilities, one can see that to treat (3.9) it ultimately suffices to estimate a term such as (3.13). This can be estimated as we did above for (3.8), using Lemma 2.3(i), which is acceptable in view of (1.8).

We next turn to (3.10). Noting that $\partial_sf$ is cubic in $u$, we can use Lemma 2.3(i) with $(a,b,c)=(u_*,e^{is\partial_{xx}/2}\partial_sf_*,u_{\mathrm{hi}})$ to get an acceptable bound. We can treat (3.11) in the same way, as we can exchange the role of $a$ and $b$ in (2.15). The term (3.12) can be treated similarly, using the second inequality in (2.15) with $(a,b,c)=(u_*,u_*,P_{\mathrm{hi}}e^{is\partial_{xx}/2}\partial_sf)$, which is acceptable (cf. (1.8)). This completes the estimation of (3.1).

3.2. Estimation of (3.2). For the remaining terms we once again write $1=\chi_1(\vec\xi)+\chi_2(\vec\xi)+\chi_3(\vec\xi)$ for $\vec\xi\in\mathbb{R}^3$, where each $\chi_j$ is a smooth Coifman-Meyer multiplier such that (3.6) holds. We will show how to estimate the contribution of $\chi_2$; similar ideas suffice to treat the contribution of $\chi_1$ and $\chi_3$ (see Remark 3.1 below for more details). We need only consider the contribution of $\chi_2$ in terms containing $f_{\mathrm{hi}}(\xi_2)$, since if $|\xi_2|\lesssim s^{-1/2}$, then $\max_j|\xi_j|\lesssim s^{-1/2}$ and we can simply estimate using volume bounds, as we did for the low frequency term. On the support of $\chi_2(\vec\xi)$ we further decompose $1=\chi_\eta(\vec\xi)+\chi_\sigma(\vec\xi)+\chi_s(\vec\xi)$, and let $\chi_{2,*}:=\chi_2\chi_*$ be smooth Coifman-Meyer multipliers such that the conditions (3.17) hold. See Appendix A for the construction of such multipliers. The subscripts indicate the variable with respect to which we will integrate by parts. According to this decomposition of the frequency space, we are faced with estimating the three terms (3.18)-(3.20).

3.2.1. Estimation of (3.18). Using (2.8)-(2.9), we see that on the support of $\chi_{2,\eta}$ we have the lower bound (3.21) on $|\partial_\eta\Psi|$. Thus we can use the identity $e^{is\Psi}=\partial_\eta e^{is\Psi}(is\partial_\eta\Psi)^{-1}$ and integrate by parts in $\eta$, producing the terms (3.22)-(3.25).

We first estimate (3.22). Using (2.9), we see that $\partial_\eta(1/\partial_\eta\Psi)=-2|\xi_2-\xi_1|^{-2}$. Recalling (3.21) and the fact that $|\xi_2|\gtrsim\max\{|\xi_1|,|\xi_3|\}$ in the integral above, we can write (3.22) in the form $T_m$, where the symbol $m=\partial_\eta(1/\partial_\eta\Psi)\chi_{2,\eta}$ satisfies the hypotheses of Lemma 2.3(i). Here and throughout Section 3.2, $\xi_2$ plays the role of $\xi_3$ in the application of Lemma 2.3. Applying (2.15), we obtain an acceptable contribution. We next turn to (3.23). Two types of terms arise, depending on where $\partial_\eta$ lands.
First, if $\partial_\eta$ lands on $\chi_{2,\eta}$ we are led to consider a term of the same form as above. Second, we note that $\partial_\eta\varphi_*(\xi_j)=\pm s^{1/2}\varphi'(s^{1/2}\xi_j)$, and that multiplication by $\varphi'(s^{1/2}\cdot)$ corresponds to a projection to frequencies $\sim s^{-1/2}$. As before, we denote this by $P_{\mathrm{med}}f=f_{\mathrm{med}}$. Considering all of the possibilities, one can see that to treat the terms that arise when $\partial_\eta$ lands on one of the $\varphi_*(\xi_j)$, it suffices to estimate terms of the form $T_m$, where $m$ is a symbol that satisfies the hypotheses of Lemma 2.3(ii); that is, $|\xi_2|m$ is a Coifman-Meyer symbol supported on a region where $\xi_2$ is the largest frequency (up to a constant). We apply (2.16) and (2.14) to get an acceptable estimate. As the term (3.28) can be estimated in the same way, this completes the treatment of (3.23).

To estimate the term (3.24) we proceed similarly. We can write it in the form $T_m$, where $m$ is a symbol satisfying the hypotheses of Lemma 2.3(ii). Using (2.16), we obtain an acceptable contribution to the right-hand side of (1.9). The last term (3.25) can be estimated in the same way.

3.2.2. Estimation of (3.19). This term is very similar to (3.18). In particular, we note that on the support of $\chi_{2,\sigma}$ we have $|\partial_\sigma\Psi|=|\xi_2-\xi_3|\gtrsim|\xi_2|\sim|\vec\xi|$. Thus we can use the identity $e^{is\Psi}=(is\partial_\sigma\Psi)^{-1}\partial_\sigma e^{is\Psi}$ and integrate by parts in $\sigma$. The ideas used to estimate (3.18) then suffice to handle the resulting terms.

3.2.3. Estimation of (3.20). Using (2.8) and (3.17), we note that on the support of $\chi_{2,s}$ we have the lower bound (3.29) on $|\Psi|$. We integrate by parts in $s$ using the identity $e^{is\Psi}=(i\Psi)^{-1}\partial_se^{is\Psi}$. This yields several terms; the first is an acceptable contribution to the right-hand side of (1.9). We next turn to (3.31). As observed earlier, we can write $\partial_s\varphi_*(\xi_j)\,\hat f(\xi_j)=s^{-1}\widehat{f_{\mathrm{med}}}(\xi_j)$, where $f_{\mathrm{med}}$ denotes the projection of $f$ to frequencies $\sim s^{-1/2}$. Considering all of the possibilities, one can see that to treat (3.31) it ultimately suffices to bound a term of the same type as before; we can estimate this term as we did (3.13), using (2.15). For the term (3.32), we can proceed as we did for (3.10)-(3.12): the hypotheses of Lemma 2.3(i) hold, and we can use (2.15) and (3.14) to obtain an acceptable estimate.

Remark 3.1. In estimating the contribution from $\chi_2$, the key idea was to decompose frequency space into regions such that at least one of $|\partial_\eta\Psi|$, $|\partial_\sigma\Psi|$, or $|\Psi|$ was suitably bounded below. In the support of $\chi_1$, using (2.9), we can achieve such a decomposition as follows: if $|\xi_2-\xi_1|\le\frac{1}{50}|\xi_1|$ and $|\xi_3-\xi_2|\le\frac{1}{50}|\xi_2|$, then $|\Psi|\gtrsim|\xi_1|^2\sim|\vec\xi|^2$; otherwise we may integrate by parts in $\eta$ or $\sigma$. Thus, we can use arguments similar to the ones above to handle the contribution of $\chi_1$. Similar ideas also suffice to treat the contribution of $\chi_3$.

3.3. Estimation of (3.3). We can estimate (3.3) in a very similar manner to (3.2). To wit, we split each function into low and high frequency pieces, and we handle the term containing all low frequencies with volume bounds. For the remaining terms, we decompose frequency space into regions where one of $|\xi_1|,|\xi_2|,|\xi_3|$ is (almost) the maximum, according to (3.6). On each such region, we decompose into regions where we have suitable lower bounds on either the phase $\Omega$ or its derivatives. Consider for example the region where $|\xi_2|\ge\max\{\frac{9}{10}|\xi_1|,\frac{9}{10}|\xi_3|\}$. Then we can define cutoffs $\chi_\eta$, $\chi_\sigma$, and $\chi_s$ so that $1=\chi_\eta+\chi_\sigma+\chi_s$ and the analogous lower bounds hold on each piece.

3.4. Estimation of (3.4). We can handle the term (3.4) in a relatively simple manner due to the gauge-invariance of $|u|^2u$.
Using (2.3) we can first rewrite this term in terms of the modulation $M(s)$. Writing $\mathcal FM(s)\mathcal F^{-1}=1+\mathcal F[M(s)-1]\mathcal F^{-1}$, it suffices to estimate the two resulting pieces. For the low frequencies we use the pointwise bound (2.11), Plancherel, and (2.4). Thus
\[
\|(3.35)\|_{L^\infty_\xi} \lesssim \int_1^t s^{-1}\|u(s)\|_{X(s)}^3\,ds,
\]
which is acceptable. This completes the estimation of (3.4), which in turn completes the proof of Proposition 1.5.

4. Proof of Proposition 1.6

In this section we prove the estimate (1.10).

4.1. Estimation of (4.4). We recall the notation from (2.10) and write $f=f_{\mathrm{lo}}+f_{\mathrm{hi}}$ in the integrand of (4.4). We expand the product and encounter two types of terms: (i) the all-low-frequency term, and (ii) terms containing at least one high-frequency factor. We estimate the contribution of term (i) by volume bounds, which is acceptable. We now turn to the terms of type (ii). As in Section 3, we write $1=\chi_1(\vec\xi)+\chi_2(\vec\xi)+\chi_3(\vec\xi)$ for $\vec\xi\in\mathbb{R}^3$ so that (3.6) holds, and note that it suffices to estimate the term (4.9). In the region of integration in (4.9) we have $\Phi\ne0$; in fact, $|\Phi|\gtrsim|\vec\xi|^2$. We may therefore use the identity $e^{is\Phi}=(i\Phi)^{-1}\partial_se^{is\Phi}$ and integrate by parts in $s$, producing the terms (4.10)-(4.13).

We turn to (4.10) and fix $s\in\{1,t\}$. In the support of the integral we have $|\xi_3|\gtrsim\max\{|\xi_1|,|\xi_2|\}$; thus, recalling (2.8) and (2.9), we can write (4.10) in the form $T_m$, where $m$ is a symbol satisfying the hypotheses of Lemma 2.3(iii); that is, $|\xi_3|m$ is a Coifman-Meyer symbol. Applying (2.17) with $N\sim s^{-1/2}$ as usual, and in view of (1.8), we obtain an acceptable contribution to the right-hand side of (1.10).

To estimate (4.11), we recall that we can write $\partial_s\varphi_*(\xi_j)\,\hat f(\xi_j)=s^{-1}\widehat{f_{\mathrm{med}}}(\xi_j)$, where $f_{\mathrm{med}}$ denotes the projection of $f$ to frequencies $\sim s^{-1/2}$. Considering all of the possibilities, one can see that to treat (4.11) it ultimately suffices to estimate a term such as (4.14). To this end, we write
\[
(4.14) = \int_1^t e^{is\xi^2/2}\,\mathcal F\big(T_m[\bar u_*,\bar u_*,\bar u_{\mathrm{hi}}]\big)(s,\xi)\,ds,
\]
where $m$ is a symbol satisfying the hypotheses of Lemma 2.3(iii). We now estimate using (2.17), as we did for (4.10) above, and obtain an acceptable contribution to the right-hand side of (1.10). Note that (4.12) is a term of the same type and can be estimated similarly. We skip the details.

4.2. Estimation of (4.5). For the remaining terms we proceed as we did in Section 3.2 and write $1=\chi_1(\vec\xi)+\chi_2(\vec\xi)+\chi_3(\vec\xi)$ for $\vec\xi\in\mathbb{R}^3$, where each $\chi_j$ is a smooth Coifman-Meyer multiplier such that (3.6) holds. As before, we will show how to estimate the contribution of $\chi_2$; similar ideas suffice to treat the contribution of $\chi_1$ and $\chi_3$ (see Remark 4.1 below). As before, we only need to consider the contribution of $\chi_2$ in terms containing $f_{\mathrm{hi}}(\xi_2)$, since if $\max_j|\xi_j|\lesssim s^{-1/2}$, then we can simply estimate by volume bounds, as we did for (4.8).

4.2.3. Estimation of (4.17). As in (3.29), we note that on the support of $\chi_{2,s}$ we have a suitable lower bound on $|\Psi|$. Thus, we can use the identity $e^{is\Psi}=\partial_se^{is\Psi}(i\Psi)^{-1}$ and integrate by parts in $s$, producing the terms (4.25)-(4.28). Thanks to the lower bound on $\Psi$, these terms are similar to the ones in (4.10)-(4.13). In fact, (4.25) can be estimated exactly like the term (4.10). For the term (4.26), we can argue as in the estimate of (4.11) (see also (4.14)). Furthermore, the term (4.27) is similar to (4.12), while (4.28) is similar to (4.13). In particular, applying the trilinear estimate (2.17) in each case leads to acceptable contributions.

Remark 4.1. We have estimated (4.15)-(4.17), which completes the estimation of the contribution of $\chi_2$ to (4.5). As in Remark 3.1, we can also decompose the support of $\chi_1$ and $\chi_3$ so that we have suitable lower bounds for $\partial_\eta\Psi$, $\partial_\sigma\Psi$, or $\Psi$. Thus, we can use similar ideas as above to estimate the contribution of $\chi_1$ and $\chi_3$.

4.3. Estimation of (4.6).
We can estimate (4.6) in a very similar manner to (4.5). Once again the heart of the matter is to decompose frequency space (away from the origin) into regions where one has suitable lower bounds on either the phase $\Omega$ or its derivatives. See Section 3.3 for a detailed discussion of this decomposition.

5. Norm growth for a model nonlinearity

In this section we study the model equation (1.4) and prove Theorem 1.2. Throughout the section, we suppose $u$ is a solution to (1.4) as in the statement of Theorem 1.2, with $f(t)=e^{-it\partial_{xx}/2}u(t)$. In particular, $\|u_1\|_\Sigma=\varepsilon$, $u$ is defined at least up to time $T_\varepsilon=\exp(\frac{1}{c\varepsilon^2})$ for some $c>0$, and $u$ satisfies the bounds given in (1.3). We write $T_{\max}\in(T_\varepsilon,\infty]$ for the maximal time of existence of $u$.

The plan is to exhibit growth in time of $|\hat f(t,\xi)|^2$ by comparing it to a (growing) solution to an ODE (cf. (5.5) and (5.3) below). Proving that the ODE accurately models the PDE requires good bounds for the solution. One of the benefits of working with (1.4) is that we can prove a better estimate for the $L^2_x$-norm of $Ju$ than the one given in (1.3). (Recall that the bound in (1.3) holds for an arbitrary cubic nonlinearity.) In particular, we have the following Lemma 5.1.

Proof. Comparing with (1.3), we need only consider the $L^2_x$-norms. A direct computation shows $J(t):=x+it\partial_x=M(t)\,it\partial_x\,\overline{M(t)}$, where $M(t)=e^{ix^2/2t}$. Thus, by the Duhamel formula (1.6) we can estimate $\|Ju\|_{L^2_x}$, and by Gronwall's inequality we have $\|Ju(t)\|_{L^2_x}\lesssim t^{(C\varepsilon)^2}\varepsilon$ for $1\le t\le T_\varepsilon$, which suffices if $\varepsilon$ is small enough. The same argument treats the $L^2_x$-norm of $u$.

Next, we prove that we can propagate bounds for $u$ as long as we can control $\hat f$ in $L^\infty_\xi$.

Lemma 5.2 (Propagating bounds). Suppose $T_\varepsilon\le T_1<T_2<T_{\max}$ and that $\hat f$ remains under control in $L^\infty_\xi$ on $[T_1,T_2]$.

Proof. The proof is similar to the arguments above. Define the bootstrap set $S$. By assumption, $T_1\in S$ for some appropriate choice of $C$. Suppose toward a contradiction that $S\ne[T_1,T_2]$. By continuity, we can find a first time $T\in(T_1,T_2]$ at which the bootstrap bound is saturated.

We turn to estimating the size of $\hat f$. We define $A(t,\xi):=2|\hat f(t,\xi)|^2$ and observe that for each $\xi\in\mathbb{R}$, the function $A(t,\xi)$ satisfies an ODE in $t$. Indeed, rewriting the equation (1.4) as $\partial_tf=e^{-it\partial_{xx}/2}(|u|^2u)$ and using (2.3), we deduce the equation (5.3) for $A$, with a remainder term $R$. We expect that as long as $u$ obeys good estimates, the remainder $R$ will decay in time. Thus the behavior of $A$ should be governed by a (growing) solution $B$ to the ODE (5.5). We first consider the issue of controlling the remainder.

Proof. The main ideas appear already in the proof of Lemma 1.4, but we include the details for completeness. First, note the pointwise bound $|M(t)-1|\lesssim t^{-\delta}|x|^{2\delta}$ for any $0\le\delta\le\frac12$. Taking $\delta=\frac15$, this together with Hausdorff-Young, Cauchy-Schwarz, (2.4), and (5.6) implies the first bound. Using (5.6) we also have $\|\mathcal FMf\|_{L^\infty_\xi}\lesssim\mu$. Estimating as above and using Plancherel, we obtain the second bound. Furthermore, the remaining term is $\lesssim t^{-\frac{1}{10}}\mu^3$, and the result follows.

In view of (5.9), combining Lemma 5.2 and Lemma 5.3 yields the desired conclusion. We now complete the proof of the theorem.

Proof of Theorem 1.2. Let $K\gg\varepsilon^2$ be a constant to be determined below. We define $T_K$ to be the time such that $B(T_K)=4K$; see (5.10). If $T_{\max}\le T_K$, the conclusion of the theorem holds. Thus it remains to consider the case $T_{\max}>T_K$, in which case it suffices to exhibit the growth of $|\hat f|^2$ for some $t\in[T_\varepsilon,T_K]$. We proceed by contradiction and suppose that no such growth occurs. Comparing $A$ with $B$, we deduce a bound on their difference $D$. Recalling $A_0\ge\frac15\varepsilon^2$ and (5.10) and rearranging, we find that for the choice $K=\frac{1}{200c}$ the above becomes a bound on $|D(T_K)|$ that yields the desired contradiction, completing the proof.
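As a schematic recap of the timescale mechanism in Section 5, the short computation below solves the model ODE obtained by dropping the remainder $R$. This is a heuristic sketch only: the normalization of $A$ and the constants are illustrative, not the paper's exact ones.

```latex
% Heuristic: with the remainder R dropped, A(t,\xi) = 2|\hat f(t,\xi)|^2
% is modeled (schematically) by
\[
  \partial_t B = \frac{B^2}{t}, \qquad B(1) = A_0 \sim \varepsilon^2 .
\]
% Separating variables, \int B^{-2}\,dB = \int t^{-1}\,dt, gives
\[
  B(t) = \frac{A_0}{1 - A_0 \log t} .
\]
% Thus B stays of size \varepsilon^2 while \log t \ll A_0^{-1} \sim \varepsilon^{-2}
% (almost global existence), reaches a fixed size 4K as \log t approaches
% A_0^{-1} (this defines T_K), and blows up at \log t = A_0^{-1}, that is, at
% t = \exp(A_0^{-1}) = \exp(c\,\varepsilon^{-2}).
```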
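For readers who want to watch this slow growth numerically, here is a minimal split-step Fourier sketch of the model (1.4). It is a toy illustration, not the paper's method: the periodic truncation, grid, time step, and Gaussian datum are arbitrary choices, and the nonlinear substep uses the exact solution of $u_t=|u|^2u$, whose modulus satisfies $\rho'=\rho^3$.

```python
import numpy as np

# Split-step Fourier sketch for (1.4): i u_t + (1/2) u_xx = i |u|^2 u.
# All numerical parameters and the datum are illustrative choices.
L, n = 200.0, 2048
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # angular frequencies

eps = 0.1
u = eps * np.exp(-x**2)                            # small Gaussian datum of size ~eps
dt, T = 0.01, 50.0
half = np.exp(-0.25j * xi**2 * dt)                 # linear half-step e^{-i xi^2 dt/4}

for _ in range(int(T / dt)):
    u = np.fft.ifft(half * np.fft.fft(u))          # half linear flow
    # exact nonlinear substep for u_t = |u|^2 u: phase fixed, rho' = rho^3
    u = u / np.sqrt(np.maximum(1.0 - 2.0 * np.abs(u)**2 * dt, 1e-12))
    u = np.fft.ifft(half * np.fft.fft(u))          # half linear flow

print("sup_x |u(T)| =", np.abs(u).max())           # compare against t^{-1/2} decay
```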
2016-05-10T23:43:58.000Z
2016-05-10T00:00:00.000
{ "year": 2016, "sha1": "9947ea284fc157ab74fa5d1a3722ea3d9b881512", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/dcds.2017089", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "9947ea284fc157ab74fa5d1a3722ea3d9b881512", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
35054348
pes2o/s2orc
v3-fos-license
Glutamine dipeptide supplementation improves clinical responses in patients with diabetic foot syndrome

1 Pharmaceutical Sciences Postgraduate Program, State University of Maringá, Maringá, PR, Brazil; 2 Department of Pharmacology and Therapeutics, State University of Maringá, Maringá, PR, Brazil; 3 Department of Morphological Sciences, State University of Maringá, Maringá, PR, Brazil; 4 Department of Clinical Analysis and Biomedicine, State University of Maringá, Maringá, PR, Brazil; 5 Department of Biochemistry, State University of Maringá, Maringá, PR, Brazil; 6 Department of Medicine, State University of Maringá, Maringá, PR, Brazil

INTRODUCTION

Diabetic foot syndrome has been defined as a pathological condition in which peripheral vascular disease, peripheral neuropathy, and infection lead to tissue destruction, resulting in possible lower-extremity amputation in people with diabetes (Canavan et al., 2008). Nerve damage in the feet is characterized by increased oxidative stress, which leads to loss of neurons by apoptosis, thereby reducing regenerative capacity (Vicent et al., 2004) and contributing to loss of foot sensitivity. Diabetic foot ulcers are very common in diabetic patients and may lead to amputation (Schirmer, Ritter, Fansa, 2013). Moreover, following amputation, 45% of patients with neuropathic ulcers and 55% of patients with ischemic ulcers die within 5 years (Armstrong, Wrobel, Robbins, 2007).

The amino acid glutamine is involved in many processes that are vital to cell function. The molecular mechanisms of glutamine action remain to be elucidated but may involve changes in gene and protein expression, protein activity, and oxidative status (Newsholme et al., 2003). For this reason, the enteral and parenteral administration of glutamine has been recommended for critically ill patients (Newsholme et al., 2011; Vasconcelos, Tirapegui, 2002). In addition, oral glutamine has been used by healthy individuals, in particular by athletes, to maintain immune function (Cury-Boaventura et al., 2008). Moreover, it has been reported that glutamine supplementation reduced systolic blood pressure, hyperglycemia, and abdominal circumference (Mansour et al., 2015) and improved insulin secretion (Samocha-Bonet et al., 2015).

Although oral glutamine treatment is beneficial for human health, its low solubility and stability in aqueous solutions limit its availability in the blood. Furthermore, about 50% of orally administered glutamine is extracted by the splanchnic bed in healthy humans (Matthews, Marano, Campbell, 1993). However, this problem can be overcome with the highly soluble and stable L-alanyl-L-glutamine, a synthetic dipeptide composed of alanine and glutamine (Minguette-Camara et al., 2014; Rogero et al., 2002), commonly known as glutamine dipeptide (GDP).

Thus, based on the therapeutic potential of GDP, we evaluated the impact of supplementation with this dipeptide on the metabolic profile, oxidative stress, hematological parameters, and blood levels of cytokines.
In this clinical investigation we used a well-established experimental approach in which each patient served as their own control (Sekhar et al., 2011; Borges-Santos et al., 2012; Nguyen et al., 2014), eliminating the interference of several factors such as age, duration of diabetes, and gender. During the consultation, patients were interviewed using a questionnaire to obtain information about their socio-demographic and disease factors (age, sex, medical history, educational level, marital status, duration of diabetes, diabetes-related disorders, etc.), therapeutic profile, and lifestyle.

PATIENTS AND METHODS

Regarding diabetes, 33.3% of the patients had been diagnosed up to 10 years earlier, 44.4% between 11 and 20 years earlier, and 22.2% more than 21 years earlier. Most patients were female (83.3%), over 60 years of age (61.1%), sedentary (55.6%), and non-smokers (94.4%), and most had at least 8 years of schooling (77.7%). Although most patients had a family history of type 2 diabetes (88.9%), the majority lacked knowledge about diabetic foot syndrome (55.6%).

Study Design

After the interview, a foot examination based on the National Hansen's Disease Program protocol (Baton Rouge, USA) was performed. This diabetic foot screening is not used to diagnose peripheral neuropathy, but to identify those patients who have lost protective sensation. The foot examination uses a 5.07 monofilament, which delivers 10 g of force to 12 locations on each foot, i.e., 24 points of sensation in total (Tan, 2010).

Venous blood was collected from each patient after an overnight fast, as previously described (Zubioli et al., 2013). After blood collection, hematological parameters were measured. In addition, serum glucose, triacylglycerol, total cholesterol, high-density lipoprotein cholesterol (HDL-C), aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma-glutamyltransferase (gamma-GT), total protein, albumin, urea, and creatinine were evaluated using kits from BioSys® and analyzed on Vitalab Selectra E® equipment. C-reactive protein was evaluated using the CardioPhase hsCRP kit (Siemens®) and analyzed on a Siemens® nephelometer. Moreover, antioxidant activity was evaluated by means of total antioxidant capacity (Erel, 2004) and protein thiol groups (Faure, Lafond, 1995).

Patients were required to ingest GDP (Ajinomoto North America, NC, USA), supplied in sachets (10 g) and dissolved in water immediately before use, after lunch or after dinner, to a total of 20 g/day, for 30 days. After this period of treatment, all clinical procedures were repeated (blood collection, biochemical and hematological evaluation, quantification of cytokines, and foot examination).

The effect of the treatment with GDP was evaluated by comparing each patient before (day 1) and after treatment (day 30). In this way, each patient served as his or her own control.

Statistical analysis

For statistical analysis we used the software R 2.10.1. Results were analyzed using the Wilcoxon test for comparing values before and after treatment. For quantitative variables, Spearman's correlation was used. Data were reported as the mean ± standard error (M ± SE). A p<0.05 level of probability was accepted as statistically significant for all comparisons.
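As a concrete illustration of this paired before/after design, the short sketch below runs a Wilcoxon signed-rank test and a Spearman correlation. The paper used R 2.10.1; Python is used here only for illustration, and the numbers are hypothetical stand-ins, not the study data.

```python
from scipy.stats import spearmanr, wilcoxon

# Hypothetical paired values (e.g., insensate foot points per patient on
# day 1 vs. day 30); each patient serves as their own control.
before = [9, 7, 6, 8, 5, 4, 7, 6]
after  = [7, 5, 6, 6, 4, 2, 5, 5]

stat, p = wilcoxon(before, after)      # paired, nonparametric comparison
print(f"Wilcoxon W = {stat}, p = {p:.3f} ->",
      "significant" if p < 0.05 else "not significant")

rho, p_rho = spearmanr(before, after)  # Spearman correlation for quantitative variables
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```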
RESULTS AND DISCUSSION

A total of 18 patients with type 2 diabetes completed the study; the remaining four patients were excluded because they did not take the GDP treatment as recommended.

Supplementation with GDP reduced (P=0.048) the number of areas on the foot that lacked sensation, from 5.9 ± 1.5 to 4.1 ± 1.3. Moreover, individual evaluation (Table I) showed that 10 patients (55.5%) experienced a reduction in the number of points without sensation after supplementation with GDP. In agreement with these results, supplementation with glutamine has been shown to reduce the loss of neurons in the duodenum of diabetic rats. This effect was attributed to a neuroprotective action of glutamine, which prevents oxidative stress by increasing the availability of reduced glutathione derived from glutamine (Zanoni et al., 2011).

It should be emphasized that partial recovery of sensation occurred in the presence of reduced (P=0.047) fasting hyperglycemia and increased (P<0.01) HDL-C after treatment with GDP. However, total cholesterol, triacylglycerol, total protein, and albumin remained unchanged (Table II).

The increase in HDL-C, i.e., 2.9 mg/dL (Table II), is very important considering that: a) an elevation of 1 mg/dL has been shown to reduce the risk of microvascular complications in type 2 diabetic patients (Toth et al., 2012); b) patients with diabetic foot syndrome have a higher risk of cardiovascular disease (Pinto et al., 2008); and c) hyperlipidemia is associated with diabetic neuropathy (Callaghan et al., 2012).

The increased urea values (P<0.001) after GDP supplementation (Table II) confirm the increased ingestion of this dipeptide. The blood values of creatinine, AST, ALT, and GGT remained unaltered (Table II), suggesting the absence of renal and hepatic toxicity as a consequence of supplementation with oral GDP. In agreement with these observations, it has been reported that glutamine (44-60 g/day) does not cause any side effects (Bushen et al., 2004).

In agreement with previous studies (Weigelt et al., 2009; Whitmont et al., 2013), we observed high blood levels of C-reactive protein (a marker of acute inflammation) before GDP treatment. However, C-reactive protein levels were not influenced by GDP treatment (Table II). This result could be partly explained by the fact that there was a simultaneous increase in the blood levels of pro-inflammatory (IFN-α, IFN-γ, IL-6, IL-7) and anti-inflammatory (IL-4, IL-13, IL-12 p40) cytokines (Table III).

However, how can the synchronous increase of pro-inflammatory and anti-inflammatory cytokines during GDP supplementation be accounted for? We suggest that this simultaneous rise is indicative of a pro-inflammatory and anti-inflammatory balance. In agreement with this suggestion, we previously reported a concurrent increase of blood pro-inflammatory and anti-inflammatory cytokines during an oral glucose tolerance test (Bazotte et al., 2016; Eik Filho et al., 2016). Furthermore, other studies have also demonstrated activation of pro-inflammatory and anti-inflammatory cytokines during sepsis (Mancilla-Ramírez et al., 1993), diabetes (Chatzigeorgiou et al., 2010), and infections (Ng et al., 2003).

This balance of pro-inflammatory and anti-inflammatory cytokines could represent an important negative feedback mechanism, which protects the body from excessive inflammation and its consequences.
Regarding the involvement of cytokines, it must be noted that these substances show pleiotropic effects in modulating immune responses and chronic inflammation (Akdis et al., 2011; Dinarello, 2007).

In summary, the significant increases in IFN-α, IFN-γ, IL-4, IL-6, IL-7, IL-13, and IL-12 p40 may improve immune responses after oral treatment with GDP. In agreement with this proposition, as shown in Table IV, oral supplementation with GDP also increased the number of circulating leukocytes (P=0.037), eosinophils (P=0.049), and typical lymphocytes (P<0.001).

Our results are of clinical relevance, as treatment with oral GDP (20 g/day) over a period of 30 days improved clinical responses in patients with diabetic foot syndrome.

TABLE I - Individual evaluations of the number of areas on the foot without sensation (NAFWS) in type 2 diabetic patients before supplementation (BS) and after supplementation (AS) with glutamine dipeptide. The numbers 1-18 represent each patient included in the study.

TABLE II - Biochemical and toxicological parameters (mean ± standard error) of diabetic patients before and after supplementation with glutamine dipeptide. Number of patients = 18. Nonparametric Wilcoxon test; a P value of <0.05 was considered statistically significant.

TABLE III - Serum cytokine levels (pg/mL) of diabetic patients before and after supplementation with glutamine dipeptide. N = number of patients.

TABLE IV - Hematological parameters of diabetic patients before and after supplementation with glutamine dipeptide (mean ± standard error). Number of patients = 18. Amplitude of the distribution of erythrocyte size (ADES), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC). *Nonparametric Wilcoxon test; a P value of <0.05 was considered statistically significant.
2017-11-12T08:17:05.560Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "04bce7358f5d579fbca5409192af6771715e56f4", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/bjps/v52n3/2175-9790-bjps-52-03-00567.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "04bce7358f5d579fbca5409192af6771715e56f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12039193
pes2o/s2orc
v3-fos-license
Personal Attributes Extraction in Chinese Text Bakeoff in CLP 2014: Overview

This paper presents the overview of the Personal Attributes Extraction in Chinese Text Bakeoff in CLP 2014. Personal attribute extraction plays an important role in information extraction, event tracking, entity disambiguation, and other related research areas. This task is designed to evaluate techniques for extracting person-specific attributes from unstructured Chinese text; it is similar to slot filling, but focuses on person attributes. The task raises challenging issues because Chinese names can coincide with common words and Chinese lacks the capitalization clues available in English. The task organizer manually constructs the query names and corresponding documents. The values of 25 pre-defined attributes are annotated in the corresponding texts to construct the training and testing datasets. The bakeoff results achieved by the participants show good progress in this field.

The Personal Attributes Extraction in Chinese Text Task is designed to evaluate techniques for extracting person-specific attributes, such as birth date, spouse, children, education, and title, from unstructured Chinese text. These techniques play an important role in information extraction, event tracking, entity disambiguation, and other related research areas. The slot filling task has been one of the shared tasks in the TAC KBP workshop since 2009 [1]. Generally speaking, the mainstream techniques for slot filling and person attribute extraction may be grouped into two major approaches, namely rule-based and statistics-based approaches [2,3,4]. The rule-based approach defines extraction rules manually or learns them automatically; the rules play the key role in this approach. Whenever text matching a rule's constraints is found, the system extracts the target information. The statistics-based approach, in turn, ports well to this extraction problem; statistical machine learning models such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) are employed. The shortcoming of this approach is that it requires a large amount of training data, which is often unavailable.

Currently, there is limited existing work on personal attribute extraction in Chinese text. Compared to work on English, characteristics of the Chinese language, including word segmentation, the confusion of named entities with common words, and the lack of capitalization clues, make personal attribute extraction in Chinese more difficult. The Personal Attributes Extraction in Chinese Text task in the CLP 2014 bakeoff is designed on the basis of the slot filling task in the TAC KBP workshop [1]. The task organizer provides a collection of documents corresponding to a target person and a knowledge base containing a partial list of attributes for the person. Participants are required to extract additional attributes from the collection of documents. The task is similar to slot filling, but it focuses on person attribute extraction. Furthermore, the collection of documents is not limited to news corpora.

Task description

The Personal Attributes Extraction in Chinese Text Task is motivated by a component of a full slot filling (SF) system. This task focuses on the refinement of output from Chinese slot filling systems. In particular, personal attributes extracted from unstructured text are useful for the construction of a Chinese knowledge graph.
The extraction task focused on extracting values for a set of pre-defined attributes ("slots") for a target person entity from given source documents. Given an entity, the system is required to extract the correct value(s) for each pre-defined attribute from the source documents and return the slot filler together with its provenance, which is a set of text spans from the source documents that justify the correctness of the slot filler. The extraction system need not extract the attribute values already given in the Wikipedia knowledge base.

Dataset preparation

The person names are manually selected from the web; 10 person names are used in the training dataset and 90 person names, including 48 Chinese and 42 foreign person names, are used in the testing dataset. The corresponding knowledge base is constructed from Wikipedia person entries, while the source documents in each folder are constructed from search engine output with manual selection. The personal attributes are categorized as Person (PER) slots based on the type of entity about which they seek to extract information. The attributes are also categorized by the content and quantity of their fillers [5].

Attribute slot content

Attribute slot content is divided into three categories, namely Name, Value, or String. Name slots are required to be filled by a name; they include the alternative name, spouse name, city of birth, country of death, and so on. The detailed slot descriptions are given on the Personal Attributes Extraction in Chinese Text Task website. Value slots are required to be filled by either a numerical value or a date, such as age or birth date. The numbers and dates in these fillers can be spelled out (forty-two; December 7, 1941) or written as numbers (42; 12/7/1941). String slots are basically a "catch all", meaning that their fillers cannot be neatly classified as names or values. The text excerpts (or "strings") that make up these fillers can sometimes be just a name, but are often expected to be more than a name. Typical string slots include cause of death and religion.

Attribute slot quantity

Slots are labeled as Single-value or List-value based on the number of fillers they can take. Since one slot may have different surface representations, participants are required to extract all of these representations. Single-value slots can have only a single filler. While most single-value slots are obvious (e.g., a person can only have one date of birth), some may be less apparent. List-value slots can take multiple fillers, as they are likely to have more than one correct answer in the source data. For example, people may have multiple children, employers, or alternate names.

In this task, the organizer collects the source documents for each person name by using a search engine. Using the person name and the related attribute names as queries, the top N high-quality web pages are manually selected as the source documents. During dataset construction, the organizer avoids overlap of attribute slots between different source documents. Table 2 gives statistical information for the source documents.

Table 2. Statistical information of source documents

Sets        Max  Min  Average  Total
Train set     4    1        2     24
Test set      5    1        2    235

An instance means one person's attribute slot appearing in one source document. Table 3 lists detailed information about the number of instances of one related person attribute in one source document.

Table 3. Instances in source documents

Attributes  Max  Min  Average
Single        6    0        1
List         47    0        1
Table 3 lists detailed information about the number of instances of a related person attribute in one source document. As mentioned above, the person attributes are divided into two categories, Single and List; the total instance numbers for the two categories in the training and testing sets are shown in Table 3.

Attributes  Max  Min  Average
Single        6    0        1
List         47    0        1

Table 3. Instances in source documents

Evaluation Metrics

In the evaluation, both lenient and strict evaluation are performed. In the strict evaluation, all instance attributes are compared to the answers, while in the lenient evaluation the offsets string_begin and string_end are ignored. The evaluation metrics are as follows.

Single attributes evaluation metric: the single-attribute score is computed from numCorrect, the number of correctly extracted single-slot instances; when numCorrect is zero, it is set to 1.0.

List attributes evaluation metric: with IP the instance precision and IR the instance recall, the ListSlotValue is a weighted F-measure of IP and IR with weight β = 2, i.e.

ListSlotValue = (1 + β²) · IP · IR / (β² · IP + IR),

and when both IP and IR are zero, the ListSlotValue is set to zero (a sketch of this computation is given after the participant results below). The overall evaluation metric, the SF_Value, is the average of the single-attributes score and the list-attributes score, and the participant systems are ranked according to the SF_Value.

Performance of the Participants

In this bakeoff, 6 teams submitted 6 valid results. The team IDs and the corresponding participants are listed in Table 4 (Table 4. The Bakeoff Participants). The performances achieved by these systems under lenient and strict evaluation are shown in Figure 2 and Figure 3, respectively. The SF_Value performances of Personal Attributes Extraction in Chinese Text are uniformly lower than 0.5, and the ListScore in particular is lower than 0.4. Three participants submitted technical reports for this task. Dong Yu et al. [6] use a mixed framework consisting of supervised learning, a rule-based extractor, and a human knowledge database. They first divide the 25 attributes into several groups and develop a specific combination of methods for extracting the values of each group. A CRF model and regular expressions are employed to extract the instances, and a protagonist-dependency-relationship filter and an attribute-keyword filter are employed to post-process the extracted answers. This system achieves an SF_Value of 0.309 under lenient evaluation and 0.293 under strict evaluation. Kailun Zhang et al. [7] propose a method based on a combination of trigger words, dictionaries, and rules. The system narrows the extraction scope by building attribute trigger words. Attributes such as state or province, school, cause of death, and similar fixed attributes are extracted directly by dictionary lookup, using purpose-built attribute dictionaries, and extraction rules are developed to extract the other instances. This system achieves an SF_Value of 0.363 under lenient evaluation and 0.352 under strict evaluation. Zhen Wang et al. [8] use dependency-pattern matching to extract the attribute instances. To obtain an ontology, they use patterns to match dependency relations and save the extracted information into RDF files; an alignment process groups identical classes and removes duplicates in the RDF files, and finally they align their ontology to the task's. The performance of this system may be limited by language-processing problems; it achieves an SF_Value of 0.0043 under lenient evaluation and 0.0025 under strict evaluation. The top-performing system, CASIA_CUC_PAES, did not provide a technical report; it achieves an SF_Value of 0.507 under lenient evaluation and 0.490 under strict evaluation.
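The list-attribute metric described above can be made concrete with a short sketch. This assumes the ListSlotValue is the standard weighted F-measure over instance precision and recall with β = 2; the official scorer may differ in details.

```python
def list_slot_value(num_correct: int, num_returned: int, num_gold: int,
                    beta: float = 2.0) -> float:
    """Weighted F-measure of instance precision (IP) and recall (IR)."""
    ip = num_correct / num_returned if num_returned else 0.0
    ir = num_correct / num_gold if num_gold else 0.0
    if ip == 0.0 and ir == 0.0:
        return 0.0  # the evaluation sets the score to zero in this case
    return (1 + beta**2) * ip * ir / (beta**2 * ip + ir)

# e.g. a system returning 5 fillers for a list slot, 3 of them correct,
# against 6 gold fillers:
print(round(list_slot_value(3, 5, 6), 3))  # 0.517
```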
Analysis

The SF_Value performances of the Personal Attributes Extraction in Chinese Text systems are lower than 0.5, while the SingleScore is lower than 0.7 and the ListScore is lower than 0.4. In this section, we analyze the factors that influence extraction performance.

(1) One object can have different expressions in Chinese: for example, May 6, 1990, 1990-5-6, or 5/6/1990, and so on. Extraction systems have difficulty extracting all of these instances.

(2) In this evaluation, most systems struggled to distinguish titles from alternate names. Generally, alternate names are names for the assigned person that are distinct from the "official" name. Alternate names may include aliases, stage names, alternate transliterations, abbreviations, alternate spellings, nicknames, or birth names. Compared with other slots, more inference is needed to select appropriate fillers for alternate names, because the canonical names of entities are often absent from the source documents. Titles or other extraneous information added to a name do not justify an alternate name, and a given name alone is not a correct alternate name unless the person is unambiguously known that way.

(3) Administrative region divisions differ between countries, so most systems struggled to distinguish cities from states or provinces. For example, 福冈县 in Japan is at the state/province level, but 浮山县 in China should be treated as city level. In the bakeoff, geopolitical entities are divided into three levels (city, town, or village), so these attributes are hard to distinguish, especially for statistics-based systems.

(4) Another problem is that string-valued attributes were not extracted exactly. For example, a mention of a serious illness is not an acceptable filler for cause of death unless it is explicitly linked to the death of the assigned person in the document. Assessors should be lenient in judging the completeness of selected strings for cause of death. These attributes are basically a "catch all": their fillers cannot be neatly classified as names or values, and the text excerpts (or "strings") that make them up can sometimes be just a name, but are often expected to be more than a name.

Due to various factors and the complexity of the evaluation, the organizer can only ensure relative fairness for each system. Meanwhile, it was observed that some errors in the submitted results stem from very small details, so careful development will be helpful. Furthermore, to make the evaluation results comparable, the organizer should use a uniform standard in the evaluation (beyond the SingleScore, ListScore, and SF_Value).

Conclusion

The Personal Attributes Extraction in Chinese Text task for CLP 2014 has raised the problem of Chinese personal attribute extraction. Besides the basic difficulties of Chinese natural language processing and information extraction, there are other difficulties such as common-word detection and co-reference resolution. Six teams submitted their results. Most teams use rule-based methods or matching techniques, while another team utilizes a statistics-based technique. Some of the proposed techniques are shown to be effective for person attribute extraction. The organizer expects this bakeoff to be helpful to research on person attribute extraction in Chinese text.
2015-03-21T21:52:17.000Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "cfa58f966618b138a12407a9cf1813813ef26067", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/W14-6817.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "cfa58f966618b138a12407a9cf1813813ef26067", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
234149161
pes2o/s2orc
v3-fos-license
Study on Failure Analysis of Crankshaft Using Finite Element Analysis

The crankshaft is one of the crucial parts of the internal combustion engine and requires effective and precise operation. The aim of this study is to identify the stress state in the crankshaft and to explain failure and fatigue life in automotive crankshafts using finite element analysis. The 3D solid model of the crankshaft was designed and developed using SolidWorks. Static structural and dynamic analyses of an L-twin cylinder crankshaft were used to determine the maximum equivalent stress and total deformation at critical locations of the crankshaft. The model was tested under dynamic loading conditions to determine fatigue life, safety factor, equivalent alternating stress, and damage using the fatigue tool. The results indicate that the crankshaft shows an obvious fatigue crack, consistent with fatigue fracture. The fatigue fracture was attributed solely to cracks initiating and propagating at the edges of the lubrication hole under cyclic bending and torsion. Overall, the crankshaft is safe for both static and fatigue loadings. In the dynamic analysis, the critical frequency obtained in the frequency response curve should be avoided, as it may cause failure of the crankshaft.

Introduction

For many decades, the internal combustion engine (ICE) has played a crucial role in daily life, and research has been actively carried out to improve engine designs. Such work aims to maximize ICE power while minimizing the fuel consumption of automotive vehicles and air pollution to the environment, such as the greenhouse effect [1]. The ICE plays an important part in the automotive industry because of the necessity of transporting goods and people [2]. The crankshaft is an essential mechanical part of the ICE because it transforms the linear movement of the pistons into rotational movement of the shaft. The crankshaft is supported by several main bearing journals, and its rotation is driven by the torque created by the connecting rod, which connects the piston to the crankshaft at the crankpin [3]. The engine may be unusable if the crankshaft is not working properly, and purchase and replacement are expensive. The repair cost involves not only the crankshaft but other parts as well, such as the cylinder head, connecting rod, and cylinders, and repairs require an extensive timeframe, principally because of the crankshaft's location inside the engine [4]. The crankshaft bears complex loads and operates in harsh conditions [5]. Significantly, the issues surrounding crankshaft failure remain unsolved, and manufacturers encounter numerous problems relating to multi-axial loads such as torsion and bending, stress concentration, stress gradients, and the effect of variable-amplitude loads. Advances in technology create the need for high-speed engineering machines; as a result, trade-offs between speed, efficiency, and size are needed in the development of an engine crankshaft. In the past, stresses in a crankshaft were identified using frame and beam models. With current technology, the stress in a crankshaft can be determined using finite element analysis (FEA). FEA simulates a physical engineering structure using a numerical technique.
It involves subdividing the structure into smaller elements, called a mesh. Several design analyses under different constraints can be performed with FEA. Computer-aided design (CAD) is used to design complex structures, and ANSYS is among the computer-aided engineering software packages that can perform such analyses. Computer-aided software can determine the optimal performance as well as the lifespan with regard to design failure. Harmonic analysis can be used to determine the stresses due to harmonically varying loads. The endurance limit can be defined as the fatigue limit: if the applied stress is lower than the endurance limit, the component has, in principle, an infinite fatigue life. Well-known theories for fatigue analysis include the Soderberg and Goodman failure theories [6] (a small sketch comparing these criteria is given at the end of this section). ANSYS can be used to analyze the crankshaft; harmonic analysis can determine the stress and the effect of components such as the flywheel on a crankshaft [7]. In previous research, the harmonic response for torsional deformation was identified using transient analysis [8]. Giakoumis et al. [9] analyzed crankshaft inertia torque harmonics to find the torsional deformation. Talikoti et al. [10] used the mode-superposition harmonic method to perform transient dynamic analysis and determine the stiffness, stress, and steady-state deformation values. Reddy [11] conducted static structural analysis to optimize the design of the crankshaft; static structural analysis provides details of the deformation and total stress of the crankshaft for different raw materials and designs. According to Mourelatos [12], the load applied to various areas of the crankshaft changes over time, so structural dynamic analysis should be performed. During operation, the crankshaft undergoes various types of vibration, such as torsional, flexural, axial, and coupled [13,14]. Different types of crankshafts have been designed and developed using various CAD packages such as PRO-E, CATIA-V5, or SolidWorks before analyzing the crankshaft model in ANSYS [14-17]. Most analyses have determined that the crankshaft is under maximum stress and deformation at the center of the crankpin and in the fillet areas [18-20]. A failure analysis of the L-twin crankshaft has not been reported in any open-source journal. In this paper, the fatigue tool in the static structural analysis and harmonic analysis were carried out in the simulation to evaluate the fatigue behavior of the crankshaft, estimate the fatigue life, and determine the stress distribution state of a crankshaft in the L-twin Superquadro engine subjected to bending and torsional loads. The Superquadro engine is a MotoGP-derived 90° V4 with desmodromic timing, a one-of-a-kind engine featuring a counter-rotating crankshaft and Twin Pulse firing order. The material used in the simulation analysis is AISI 4340 forged steel, applied in the mechanical solver for the ANSYS static structural and harmonic analyses.
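The Goodman and Soderberg criteria mentioned above (and the Gerber criterion used later in this study) can be compared with a short sketch. The material constants below are assumptions for illustration, not values taken from the paper's Table 1, and the function is a generic textbook implementation rather than the ANSYS one.

```python
S_ut = 745e6   # ultimate tensile strength, Pa (assumed for illustration)
S_y  = 460e6   # yield strength, Pa (matches the yield value used in this study)
S_e  = 300e6   # endurance limit, Pa (assumed for illustration)

def safety_factor(sigma_a: float, sigma_m: float, criterion: str) -> float:
    """Fatigue safety factor n for alternating stress sigma_a and mean
    stress sigma_m under the chosen mean-stress criterion."""
    if criterion == "goodman":    # sigma_a/S_e + sigma_m/S_ut = 1/n
        return 1.0 / (sigma_a / S_e + sigma_m / S_ut)
    if criterion == "soderberg":  # sigma_a/S_e + sigma_m/S_y = 1/n
        return 1.0 / (sigma_a / S_e + sigma_m / S_y)
    if criterion == "gerber":     # n*sigma_a/S_e + (n*sigma_m/S_ut)**2 = 1
        a, b = (sigma_m / S_ut) ** 2, sigma_a / S_e
        return (-b + (b**2 + 4 * a) ** 0.5) / (2 * a) if a else 1.0 / b
    raise ValueError(criterion)

for c in ("soderberg", "goodman", "gerber"):
    print(c, round(safety_factor(100e6, 50e6, c), 2))
# Soderberg is the most conservative criterion, Gerber the least.
```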
Solid modelling of the crankshaft model

SolidWorks version 2020 was used to create the 3D model of the automotive crankshaft. The crankshaft used for the simulation analysis in this study can be seen in Fig. 1. The 3D CAD model was designed and generated according to the geometric shapes of the commercial crankshaft, in order to obtain more precise and accurate simulation results. To conduct the simulation on the ANSYS platform, an IGES file of the 3D CAD model was required.

Simulation of the crankshaft model

The model developed at the initial stage had to be assigned a suitable material so that it would physically behave like the actual crankshaft. AISI 4340 normalized steel was assigned to the crankshaft model; the mechanical properties of the material are stated in Table 1. The element size was fixed on the basis of a Grid Independence Test (GIT) performed on the crankshaft geometry. Five meshes with different element sizes were used in this study, and the stress-strain relation was plotted for each mesh with fixed solver settings and boundary conditions. Based on the GIT reported in Fig. 2, the difference between the plots for element sizes of 0.0025 m (Mesh 4) and 0.0001 m (Mesh 5) was almost nil, and therefore the 0.0025 m element size was used for the further analyses (modal analysis, harmonic analysis, and fatigue analysis). When the maximum gas pressure of 112.5 bar is exerted on the piston, the force acting on the piston and transmitted to the crankshaft, calculated using a kinematic relation, is 118.2 kN. The Ducati Superquadro engine is an L-twin cylinder design of 1285 cc, and 118.2 kN is the calculated force acting on the crankpin; this is the maximum force on the crankpin during the power stroke (a back-of-envelope check of this value is sketched at the end of this section). Finite element analysis (FEA) was conducted in ANSYS 2019 R3 to determine the fatigue life of the simulation model under various loading conditions. Two boundary conditions, shown in Fig. 3, were used for the fatigue life-cycle evaluation. The analysis was carried out in three stages: fatigue analysis, modal analysis, and harmonic analysis. The fatigue analysis was performed using the ANSYS Workbench fatigue module. For the fatigue analysis, the solver was programmed for 2 steps and 50 substeps, with large deflection off and program-controlled nonlinear controls. The fatigue tool was used to evaluate fatigue damage and predict the life cycle. The case setup was tested for the same boundary conditions with zero-based and fully reversed loading, and the results were computed for the selected mean stress theory; the available mean stress theories were Goodman, Gerber, and Soderberg [6]. Modal analysis was carried out with the objective of determining the natural frequencies of free vibration. Ten orders of natural frequency were calculated, corresponding to the displacements at 10 modes of vibration. This provided the background for the harmonic analyses at various loads, which has guiding significance for the design and manufacture of crankshafts. For the harmonic analysis, the harmonic behavior of the L-twin cylinder crankshaft was investigated for the bending and torsional loading conditions. Rotating parts subjected to fluctuating and cyclic loads experience vibration, which can harm the mountings and produce high wear and tear. The crankshaft speed depends on the flow rate of charge inside the combustion chamber; it varies from 0 rpm to 10500 rpm, which is a very high speed for the generation of vibrations in this study. Hence, the vibration behavior is commonly analyzed by harmonic analysis. The minimum and maximum frequencies of the vibration were set to 2782.1 Hz and 7340.7 Hz respectively, as obtained from the modal analysis.
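As a rough cross-check of the 118.2 kN crankpin force quoted above, the gas force on the piston can be estimated from the peak pressure and the piston area. The bore value is an assumption (the published Superquadro bore of 116 mm), and this simple product ignores the connecting-rod kinematics used in the study, so the numbers differ slightly.

```python
from math import pi

p_gas = 112.5e5   # peak gas pressure, Pa (112.5 bar, as stated above)
bore  = 0.116     # piston bore, m (assumed from published engine specs)

area  = pi * bore**2 / 4    # piston crown area, m^2
force = p_gas * area        # gas force on the piston, N
print(f"piston force ~= {force / 1e3:.1f} kN")  # ~118.9 kN vs 118.2 kN quoted
```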
Results and Discussion

Results visualization is the post-processing stage and deals with the graphical representation of output values. Typically, the deformed configuration, mode shapes, and stress distribution are computed and displayed at this stage. The FEA experiment was conducted on the 3D simulation model discretized with an element size of 0.0025 m, for the boundary conditions and solver setup discussed above, and the results are reported in this section. The simulation was carried out for two different loading conditions: the crankshaft subjected to a bending load, and the crankshaft subjected to maximum torque. In this research, fully reversed loading and the Gerber theory were selected for the fatigue analysis.

Fatigue analysis of crankshaft subjected to bending load

The static structural case was solved at four different load values acting on the crankshaft, with the maximum taken as 118.2 kN. Von Mises stress was taken as the failure criterion for the ductile metal. Fig. 4(a) presents the stress values as a color map of the von Mises stress. The location of maximum stress is at the radius at the end of the crankpin, and the maximum stress was reported as 274.21 MPa at the crankpin. For the maximum loading condition, the equivalent stress values are below the yield stress of 460 MPa. The crankshaft has a higher chance of failure at this location under cyclic load. The material also has a tendency to shear due to the complex loading conditions. Fig. 4(b) shows the color map of the shear stress concentration throughout the crankshaft; the maximum shear stress was calculated as 42 MPa, near the corners of the crankpin. For the maximum loading condition, the shear stress values are below the shear yield stress of 265 MPa. The stress-strain curve for equivalent stress was plotted to check the response to the bending load, as shown in Fig. 5. The curve shows a linear stress-strain relationship, plotted on a logarithmic x-scale to handle the skew toward large values. The maximum stress calculated from the analysis is under the safe yield value (460 MPa). To represent the real dynamic behavior of the crankshaft, the four load values were assigned at the same point and the response was analyzed. During 360° of rotation, the crankshaft is subjected to different loads at different angles, which is a very complex situation to simulate. Von Mises equivalent stress versus equivalent strain was plotted at the different load values to check for yield or fracture, as shown in Fig. 6; the response was linear. The abscissa was set to a logarithmic scale and, for the same strain value, the stress generated in the crankshaft was plotted. The maximum stress developed for the 118.2 kN load was 274.21 MPa, which is under the permissible limit, and a fatigue life of 1.61e+9 hours was evaluated. The fatigue analysis was performed on the crankshaft to evaluate the life cycle, crack propagation, and yield strength, which can be used for product integrity and optimization.
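Before looking at the fatigue-tool outputs in detail, the relationship between the quantities it reports can be illustrated with a minimal sketch. The design-life value here is an assumption chosen for illustration; only the 1.61e+9 life, the 274.21 MPa stress, and the 460 MPa yield come from the analysis above.

```python
# Miner-style damage: fraction of the available life consumed by the
# required (design) life; damage > 1 means failure before the design life.
design_life    = 3.0e8    # required life (assumed for illustration)
available_life = 1.61e9   # life predicted above for the bending case

damage = design_life / available_life
print(f"fatigue damage ~= {damage:.3f}")   # ~0.186, i.e. well below 1

# Static check of the bending result against yield:
sigma_max, sigma_yield = 274.21e6, 460e6   # Pa, from the analysis above
print(f"static safety factor ~= {sigma_yield / sigma_max:.2f}")  # ~1.68
```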
The fatigue tool provided in ANSYS structural was used to evaluate the fatigue life, fatigue damage, safety factor, and alternating stress. The crankshaft was tested for a bending load of 118.2 kN with fixed ends. Table 2 and Fig. 7 show the results obtained from the fatigue test. Fatigue life represents the number of cycles until the part fails due to fatigue; the crankshaft fatigue life is shown in Fig. 7(a). Fig. 7(b) shows the contour plot of the fatigue damage, which is essentially the ratio of the design life to the available life of the component; a fatigue damage value greater than 1 indicates failure of the component before the design life is reached. It was observed that failure takes place at the critical location, the point of maximum stress concentration. Fig. 7(c) shows the fatigue factor of safety with respect to fatigue failure at a given design life. The maximum factor of safety is 15; a value of less than one indicates failure of the component before the design life. Due to the fatigue load, an equivalent alternating stress developed in the component (Fig. 7(d)); the alternating stress shows the maximum and minimum values for the cyclic load. The crankpin is the portion subjected to maximum stress and has the greatest possibility of failure. The maximum stress life of the crankshaft for high-cycle loading was determined as 1.61e+9 hours, under an alternating stress of 18.131 MPa. The maximum fatigue damage was reported as 1.88e-1 at the crankpin, which is less than one and ensures product durability. For the fatigue sensitivity, the fatigue results change as a function of the loading at the critical location of the crankshaft model; the results show a decrease in fatigue life as the load value is increased from 50% to 150%.

Fatigue analysis of crankshaft subjected to torsional load

The second case was formulated with the crankshaft subjected to both bending and torsional stress. A torque of 144 Nm was applied at the extreme ends and the crankpin was fixed; a load of 14 kN was applied for one time step. Fig. 8 shows the color map of the shear stress concentration throughout the crankshaft. The stress-strain curve plotted for the response of the crankshaft is presented in Fig. 9; the results validate the forces calculated at the crankpin. In the color map, the crankpin and the fillet areas are the locations of high stress concentration and have the highest possibility of failure. The results of the fatigue life-cycle assessment are shown in Table 3. To formulate the dynamic loading condition, four different torque values were assigned to the crankshaft and the response was recorded. The stress-strain curve for equivalent von Mises stress shown in Fig. 10 indicates a linear relationship; the maximum value was calculated as 5.503e+7 Pa, which is under the safe permissible yield stress, and a fatigue life of 1.61e+10 hours was evaluated. The maximum shear stress was calculated as 15.6 MPa, near the corners of the crankpin. The observed values were close to the results obtained for the crankshaft subjected to the bending load. The results obtained for both loading cases confirm the dynamic behavior of the crankshaft. The maximum torque applied to the crankshaft was 144 Nm, and it was decreased to 75 Nm to check the response of the stress-strain curve and to evaluate the stress life.
The fatigue life was calculated as 2.778e+11 hours in the absence of a bending load at the crankpins. The crankpin and the corners are the high-stress-concentration zones, with the maximum possibility of crack formation and failure. In actual practice, however, the crankshaft is subjected to both bending and torsional loads, and it must also deal with fluctuating loads and shocks, which depend on the engine operating conditions.

Harmonic Analysis of Crankshaft

The vibrational excitation of the structure was analyzed using modal analysis, a technique used to determine vibrational characteristics such as natural frequencies and mode shapes. The harmonic analysis was then performed at the various vibration frequencies obtained from the modal analysis, using finite element software (ANSYS Workbench). The advantage of using a finite element software package is that mode shapes can be accurately visualized and simulated, so the stress and deformation in the crankshaft can be precisely calculated. The major, critical locations of the crankshaft, where the possibility of failure could arise, can be identified, and suitable amendments to the crankshaft design can be suggested according to the constraints and boundary conditions. The values of deformation and stress obtained from the analysis are shown in Fig. 11, and graphs were plotted to represent the behavior of the crankshaft at an engine speed of 10500 rpm. Fig. 11 shows the equivalent von Mises stress developed at a frequency of 7340 Hz, the highest frequency obtained in the modal analysis, for both the bending and torsional loads. For the bending load, the two extreme ends of the crankshaft were kept fixed and the load was applied at the crankpin. Fig. 11(a) shows the color map of the equivalent stress values when the crankshaft is subjected to vibration at 7340 Hz. The maximum deformation developed in the counterweight section, at a stress of 8300 MPa; the portion of the crankshaft close to the counterweights is subjected to maximum stress and the possibility of crack formation. Fig. 11(b) shows the equivalent von Mises stress developed in the crankshaft when subjected to the torsional load. The fillets around the shaft near the journal bearing and crankpin are the locations of high stress, shown in red; the maximum stress observed is 33.8 MPa in the fillet region. The frequency responses for the normal stress values and total deformation are shown in Figs. 12, 13, and 14. A peak in the curve indicates that the vibration frequency is close to a resonance frequency that could cause failure of the component; such frequencies must therefore be avoided by introducing a suitable damping mechanism. Fig. 12 shows the normal stress plots over the frequency range of 3237 Hz to 7340 Hz for the crankshaft subjected to the bending load and the torsional load, respectively. The peak normal stress is generated at 3780 Hz for the bending load and at 3237 Hz for the torsional load. The material also has a tendency to shear under the directional deformation. The maximum deformation occurs at 3780 Hz for the bending load and at 5843 Hz for the torsional load, as shown in Fig. 13. The corresponding phase angle response and force response are shown in Fig. 14. The operating frequency range can be decided from the results of the harmonic analysis, and the required design modifications can be made to improve the design life. The critical frequency obtained in the frequency response curve (highlighted in the square at the bottom right) must be avoided, as it may cause failure of the component.
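The resonance behavior flagged by the harmonic analysis can be illustrated with a generic single-degree-of-freedom sketch; this is not the ANSYS crankshaft model, and the damping ratio is an arbitrary assumption, but it shows why response peaks appear near a natural frequency and why such frequencies must be avoided or damped.

```python
import numpy as np

f_n  = 3780.0   # natural frequency, Hz (peak seen for the bending case)
zeta = 0.02     # damping ratio (assumed)
f = np.linspace(2782.0, 7341.0, 2000)   # sweep over the modal-analysis range

r = f / f_n
# Dynamic amplification factor of a damped SDOF oscillator under
# harmonic forcing: |H| = 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2)
amplification = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

i = amplification.argmax()
print(f"peak amplification ~= {amplification[i]:.1f} at {f[i]:.0f} Hz")
# ~25x near 3780 Hz; heavier damping (larger zeta) flattens the peak.
```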
Conclusion

The aim of this research was to study the fatigue failure of a crankshaft using finite element analysis. The stress-strain curves obtained in this analysis show a linear relationship, confirming that the maximum stress developed is within the safe permissible range of the yield value. The results indicate that failure begins at the fillet region at the lubrication hole, caused by the high bending stress concentration. The high-stress-concentration zone was the focus of this study as the location of failure; the crankpin and the corners were evaluated as the high-stress-concentration and fatigue-failure zones. The dynamic results showed that a crankshaft design that is statically safe can still fail under dynamic conditions for both loading cases. The natural frequencies under the fixed boundary conditions were found to be in the range of 2782 Hz to 7340 Hz. In addition, the harmonic analysis shows that the maximum stress and deformation appear at the web edge of the counterweight, and the maximum stress intensity appears at the fillets between the crankshaft journal and the crankpin. The maximum stress intensity developed at the fillet region does not deviate from the design limit of the original crankshaft or from theoretical values.
2021-05-11T00:05:50.733Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "de09e9a685fde2c60bde08809038f9ce5282a4c6", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2021/04/matecconf_eureca2020_03001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3eda062ab76a653369f447fa1e8f5c847f4df726", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
52878028
pes2o/s2orc
v3-fos-license
Determinants of uterine rupture among cases of Adama city public and private hospitals, Oromia, Ethiopia: a case control study

Background Ethiopia is among the ten countries with the highest maternal death rates, which together account for more than 59% of global maternal deaths. Uterine rupture is one of the most dangerous obstetric problems, with high potential for causing maternal and neonatal morbidity and mortality. The case fatality rate of uterine rupture is high, and identifying factors associated with uterine rupture therefore remains important to guide decision makers and practitioners. The study aimed to identify factors associated with uterine rupture among clients managed in Adama city public and private hospitals from January 2011 to December 2015. Methods An unmatched case-control study design was employed. The sample size was determined using computer software under standard statistical assumptions; accordingly, a total of 432 women (144 with uterine rupture as cases and 288 with spontaneous vaginal delivery as controls) managed in all hospitals during the study period were included in the study. A data collection tool containing the available variables was designed and used to extract data from logbooks and client cards. Data were entered into Epi Info 7 and exported to Stata 12 for cleaning and analysis. The study participants were characterized using descriptive statistics. The associations between uterine rupture and independent variables were modeled using binary logistic regression analysis and estimated using odds ratios with 95% confidence intervals. Statistical significance was declared at P-value < 0.05. Results The odds of having a uterine rupture were more than six times higher among rural residents (AOR = 6.29; 95% CI: 3.39, 11.66) compared to urban residents. Other independent predictors included gravidity of five or more (AOR = 27.89; 95% CI: 8.42, 92.34), a history of cesarean section scar (AOR = 9.94; 95% CI: 3.39, 11.66), and not having an antenatal care visit (AOR = 9.64; 95% CI: 4.37, 21.29). Conclusion Rural residence, multigravidity, cesarean section scar, and not having an antenatal care visit were independent predictors of uterine rupture in the current study. Therefore, improving access to, and strengthening, essential obstetric care, antenatal care, and family planning services with complete packages are crucial interventions to reduce the odds of uterine rupture. In addition, strengthening the referral system is mandatory for women residing in rural areas.

Plain language summary

Background Ethiopia is among the ten countries with the highest number of maternal deaths, which together account for more than 59% of global maternal deaths. Uterine rupture is one of the most dangerous obstetric problems, with high potential for causing maternal and neonatal death. This study aimed to identify factors associated with uterine rupture among clients managed in Adama city public and private hospitals from January 2011 to December 2015. Methods The study compared cases (women with uterine rupture) with controls (women with normal spontaneous vaginal delivery). A total of 432 women (144 cases and 288 controls) managed in all hospitals during the study period were included in the study. Data were analyzed using Stata 12 software. The characteristics of the women who participated in the study were described using frequency distributions.
A statistical analysis method called logistic regression was used to assess the associations between uterine rupture and independent variables, and the chance of developing uterine rupture was estimated by odds ratios. Results A higher chance of developing uterine rupture was observed among women from rural areas compared to urban areas, women with a large number of pregnancies, women with a scarred uterus due to a previous operative delivery, and women who never attended antenatal care for the current pregnancy. Conclusion Improving access to, and strengthening, essential care during pregnancy and labor and family planning services with complete packages are therefore crucial interventions. In addition, strengthening the referral system is mandatory for women residing in rural areas.

Background

Uterine rupture is defined as a full-thickness separation of the uterine wall and the overlying serosa [1]. It is a catastrophic obstetric complication associated with high rates of maternal morbidity and mortality. Ethiopia was the fourth among the ten countries that collectively accounted for 59% of all maternal deaths worldwide [2]. Uterine rupture was one of the top four causes, contributing to 36% of these maternal deaths in addition to hemorrhage (22%), hypertensive disorders of pregnancy (19%), and sepsis/infection (13%) [3,4]. This unfortunate event remains a most significant problem in developing nations [5]. The overall incidence of uterine rupture is higher in developing countries than in developed countries, at around 74 in 10,000 [6,7]. Uterine rupture usually occurs during labor, but it can also occur during pregnancy [8] and has been reported in all trimesters of pregnancy [9]. Uterine rupture in primigravidas with no identifiable risk factor has also been reported [9]. The signs and symptoms of uterine rupture depend on the timing, site, and extent of the uterine defect. The classical signs and symptoms include fetal distress, loss of uterine contractions, abdominal pain, hemorrhage, recession of the presenting fetal part, and shock. The initial signs and symptoms are, however, non-specific, which makes diagnosis difficult and sometimes delays definitive therapy. This delay in diagnosis and treatment often leads to adverse maternal and perinatal outcomes, so it is important to maintain a high index of suspicion [10,11]. Uterine rupture is one of the most dangerous obstetric problems, with high potential for causing maternal and neonatal morbidity, if not death; the case fatality rate for uterine rupture is as high as 30.4% [12]. The consequences of uterine rupture depend on the time between diagnosis and delivery: it has been postulated that only 10-37 minutes are available from diagnosis to delivery before clinically significant fetal morbidity becomes inevitable due to catastrophic hemorrhage or fetal anoxia. Uterine rupture contributes significantly to maternal morbidity and mortality and to perinatal mortality. Fetal consequences include admission to the neonatal intensive care unit, fetal hypoxia or anoxia, and neonatal death. Maternal consequences include hemorrhage, hypovolemic shock, bladder injury, the need for hysterectomy, and maternal death. Morbidity and mortality following rupture of the uterus depend on the level of medical care [13-16]. Evidence on factors associated with uterine rupture is scarce in our study setting.
Therefore, the results of this study will inform both clinical practitioners and maternal health program planners about important areas of attention and hence contribute to the reduction of maternal morbidity and mortality. Moreover, the current study can serve as a source study for meta-analyses aiming to identify common predictors of uterine rupture.

Study setting, period and design

A hospital-based unmatched case-control study design was used to assess factors associated with uterine rupture among mothers who gave birth and were managed at Adama city public and private hospitals (one public and four private) from January 2011 to December 2015. Adama city forms a special zone of Oromia region and is located 99 km southeast of Addis Ababa (the capital city of Ethiopia). Based on the 2007 census conducted by the Central Statistical Agency (CSA) of Ethiopia [25], the city has a total population of about a quarter of a million. The city has one governmental and three private hospitals. These hospitals together serve a catchment population of over five million and serve as referral sites for neighboring zones and regions (Afar, Amhara, and Somali). The hospitals have operating rooms with nine functional operating tables, nine gynecologists, 15 anesthetists, and 30 midwives. The average annual number of deliveries of all types in these hospitals was 8,320.

Study participants

All mothers who delivered in Adama city governmental and private hospitals during the study period were considered the source population for the current study. Women with uterine rupture were taken as cases, and those with a normal spontaneous vaginal delivery (registered following each uterine rupture case) were considered controls. The sample size for the unmatched case-control design was calculated using Epi Info version 7 [26], considering the following assumptions: power of the study = 80%; confidence interval = 95%; case-to-control ratio = 1 to 2; percentage of controls exposed to the risk factor (cesarean section scar in this case) = 35%; odds ratio to be detected = 1.82 (the planned detection capacity of our study); and percentage of cases exposed (with a cesarean section scar) = 49.55% (from relevant literature) [17]. Accordingly, a total sample size of 432 women, that is, 144 cases of uterine rupture and 288 normal spontaneous vaginal deliveries, was included in the study. The sampling frame was developed based on the clients' medical registration numbers recorded in the logbooks of the labor wards and operating rooms over the 5 years from January 2011 to December 2015. Using this frame, mothers diagnosed with uterine rupture and managed in the selected hospitals, and the two subsequent mothers who had a normal spontaneous vaginal delivery, were selected as cases and controls, respectively.

Data collection

Data were extracted from both the logbooks and the client cards recorded over the 5 years from January 2011 to December 2015. The medical records of the selected cases diagnosed with uterine rupture, and the medical records of the two subsequent mothers with spontaneous vaginal delivery, were collected from the medical record units. A data extraction tool containing all required variables was designed and used to extract data from both the logbooks and the client cards. Data were collected by eight third-year residents in Gynecology and Obstetrics.
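The sample-size assumptions listed above can be reproduced with a short sketch. This is a hypothetical re-implementation of the Fleiss formula with continuity correction, commonly used by tools such as Epi Info; the exact rounding conventions of Epi Info 7 may differ slightly.

```python
from math import sqrt

z_a, z_b = 1.96, 0.84   # two-sided 95% confidence, 80% power
r  = 2.0                # controls per case
p0 = 0.35               # exposure (CS scar) among controls
OR = 1.82               # odds ratio to be detected

p1 = OR * p0 / (1 + p0 * (OR - 1))   # exposure among cases, ~0.4950
pbar = (p1 + r * p0) / (1 + r)

n = (z_a * sqrt((1 + 1 / r) * pbar * (1 - pbar))
     + z_b * sqrt(p1 * (1 - p1) + p0 * (1 - p0) / r)) ** 2 / (p1 - p0) ** 2
# Fleiss continuity correction:
n_cc = n / 4 * (1 + sqrt(1 + 2 * (r + 1) / (n * r * abs(p1 - p0)))) ** 2

print(f"cases ~= {n_cc:.0f}, controls ~= {r * n_cc:.0f}")
# ~145 cases and ~290 controls, matching the 144/288 used here up to rounding.
```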
Before data extraction, training was provided on the data collection tool, and the data collection process was conducted under the supervision of the investigators. A sample of completed data extraction tools was cross-checked against the original patient records, and feedback was given to ensure data quality.

Study variables

Independent variables: socio-demographic variables (age, place of residence); obstetric/gynecologic variables (parity, gravidity, gestational age, malpresentation, history of cesarean section scar, ANC visits, contracted pelvis, uterine anomalies, previous myomectomy); and management and follow-up variables (oxytocin/prostaglandin misuse, uterine instrumentation, fundal pressure application).

Data analysis

The collected data were coded and entered into Epi Info version 7, then exported to Stata version 12 for cleaning and analysis. Descriptive analysis was used to explore the characteristics of the mothers. The associations between uterine rupture and independent variables were modeled using binary logistic regression analysis. Bivariate logistic regression analysis was used to assess crude relationships between independent variables and uterine rupture; at this level, candidate variables for multivariate analysis were selected at the P-value < 0.25 significance level [27]. Multivariate logistic regression was applied to estimate the adjusted effects of independent variables on uterine rupture. The association between independent variables and uterine rupture was estimated using odds ratios with 95% confidence intervals, and the significance of associations was declared at a p-value of less than 0.05. The regression model was developed using a backward stepwise strategy. The final fitted model was assessed for multicollinearity using the Variance Inflation Factor (VIF) [28] and for goodness of fit using the Hosmer-Lemeshow test [29]. The model's ability to correctly classify subjects who experienced the outcome of interest and those who did not was assessed using the Receiver Operating Characteristic (ROC) curve [28,29]. The parsimonious model that best explained the data with the minimum of free parameters was selected using the Akaike Information Criterion (AIC) [30].

Socio-demographic characteristics of cases and controls

In the current study, we included 144 eligible cases of uterine rupture and 288 controls with normal spontaneous deliveries. A significantly higher proportion of the uterine rupture cases were older and resided in rural areas. The study revealed that about 74 (51.39%) of the uterine rupture cases and 92 (31.9%) of the controls were in the age range of 26-30 years. Among the cases of uterine rupture, 95 (75.40%) were from a rural residence, compared with 78 (27.66%) of the controls (Table 1).

Obstetric characteristics of cases and controls

Multiparous women, multigravidas, women with malpresentation, women with at least one gynecologic risk factor, women with a history of CS scar, and women with no history of ANC visits were significantly more common among cases of uterine rupture than among controls (p-value < 0.05) (Table 2).

Type of management performed

The study showed that a lower proportion of uterine rupture cases, 10 (7.35%), were managed with oxytocin/prostaglandin compared to controls, 45 (16.25%), but the proportion of women managed with uterine instrumentation was higher among cases, 8 (5.88%), than among controls, 6 (2.19%). The management of labor through fundal pressure was not much different between cases, 1 (1.14%), and controls, 1 (1.12%). The study revealed a higher proportion of cases, 76 (66.09%), referred by health professionals compared to controls, 117 (45.17%) (Table 3).
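As a minimal illustration of the odds-ratio estimation used in this analysis, a crude (unadjusted) OR with a Woolf-type 95% CI can be computed from a 2×2 table. The cell counts below are derived from the proportions reported above under the simplifying assumption of complete data for all 144 cases and 288 controls; the adjusted estimates in Table 4 come from the multivariate model, not from this calculation.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = exposed/unexposed cases, c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Rural residence: 95 rural cases and 78 rural controls (assuming complete
# data, 49 urban cases and 210 urban controls remain).
print(odds_ratio_ci(95, 49, 78, 210))   # crude OR ~5.2, before adjustment
```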
Factors associated with uterine rupture

The odds of having a uterine rupture in relation to different characteristics of the women were estimated by odds ratios using logistic regression analysis. Bivariate logistic regression analysis was used to select candidate variables for multivariate analysis at a p-value of less than 0.25. In the model development process, the existence of multicollinearity was assessed using the VIF; the assessment showed strong collinearity between parity and gravidity (mean VIF = 9.26), so we excluded parity from the multivariate model. The final fitted model was also tested for goodness of fit using the Hosmer-Lemeshow test. The test showed that the model became poor with the inclusion of the variable malpresentation (P-value = 0.0373); although malpresentation had a statistically significant association with uterine rupture, we fitted the final model excluding it. In the final model, the odds of having a uterine rupture for each independent variable were adjusted for confounding effects. Accordingly, women's place of residence, gravidity, history of CS scar, and history of ANC visits were found to be significantly associated with the odds of having a uterine rupture (p-value < 0.05) (Table 4). The multivariate analysis showed that women from rural residences had 6.29 (AOR = 6.29; 95% CI: 3.39, 11.66) times higher odds of having a uterine rupture compared to urban women. The odds of having a uterine rupture were 27.89 (AOR = 27.89; 95% CI: 8.42, 92.34) times higher for women of gravidity five and above, and 8.80 (AOR = 8.80; 95% CI: 2.96, 26.12) times higher for women of gravidity two to four, compared to primigravidas. Having a history of cesarean section scar was associated with 9.94 (AOR = 9.94; 95% CI: 3.39, 11.66) times higher odds of having a uterine rupture compared to their counterparts. The odds of having a uterine rupture among women with no history of ANC visits were 9.64 (AOR = 9.64; 95% CI: 4.37, 21.29) times higher compared to women with a history of ANC visits (Table 4).

Discussion

This study aimed to identify factors associated with uterine rupture. The analysis showed that the likelihood of uterine rupture was associated with rural residence, increasing gravidity, the presence of a CS scar, and having no history of ANC visits. The current study revealed that the chance of uterine rupture is higher for women from rural residences compared to urban, in line with prior studies done in Pakistan [31] and Debre Markos, Ethiopia [32]. This could be due to a lack of access to nearby health institutions in rural residential areas. For women residing in rural areas, health facilities are distant and access to information about institutional delivery is limited in comparison to women residing in urban areas. As a result, the higher chance of uterine rupture among rural residents may be attributed to two delays in the process of obtaining obstetric care: the first is a delay in deciding to seek health care as early as possible, and the second is a delay in reaching a health facility.
Additionally, there may be a failure of early referral for labor abnormalities, resulting in delayed intervention and leading to a ruptured uterus. These problems could be addressed by constructing health institutions capable of managing obstructed labor close to the community, increasing awareness of skilled birth attendance, and establishing a referral system. This study also showed that the chance of uterine rupture increased with gravidity. Similarly, studies done in Yemen [33] and Nigeria [34,35] identified gravidity as one of the risk factors for uterine rupture. The abdominal wall becomes weak and lax in mothers with a high number of pregnancies; this contributes to the fetal head not engaging early, which leads to various malpresentations. Malpresentation was found to be one of the contributing factors for rupture in some previous studies. In the current study, uterine rupture was higher among women with a history of a scarred uterus. A higher chance of uterine rupture among women with a history of CS scar was also observed in studies done in the United Kingdom [1], India [36], and Nigeria [35]. Women with a previous CS delivery are highly likely to develop scar dehiscence, especially when the uterine wall incision is vertical and when the inter-pregnancy interval after the CS is short; this can then lead to rupture. This study also showed that mothers without ANC care were more likely to have uterine rupture. The occurrence of uterine rupture among women without ANC has also been noted in other studies done in Nigeria [34,35,37]. Differences in the level of obstetric practice, and the availability and under-utilization of essential obstetric care services during pregnancy, would account for the high chance of developing uterine rupture.

Limitations of the study

This study has some limitations. It relied on a review of logbooks and medical records, in which some important socio-demographic and socioeconomic information was unavailable. Additionally, the data were not primarily collected for research purposes and lacked completeness. Being a case-control study, it cannot establish causal associations, and further study is needed to explore the causes of uterine rupture. Despite these limitations, this study will serve as baseline information for government organizations, stakeholders, and decision makers working on programs targeted at minimizing the odds of developing uterine rupture and its consequences.

Conclusion

In general, based on the current study, the chance of uterine rupture was significantly higher for women living in rural areas and for women with a larger number of previous pregnancies compared to their counterparts. Similarly, the chance of uterine rupture was significantly higher for women with a history of CS scar and for women not using ANC. Therefore, it is mandatory to improve access to emergency and essential obstetric care, with due attention to women in rural areas. It is also important to strengthen family planning services, with special attention to multigravid women. Furthermore, strengthening antenatal care with complete packages and strengthening the referral system are issues to be addressed in order to minimize the chance of uterine rupture.
2018-09-30T01:15:23.114Z
2018-09-27T00:00:00.000
{ "year": 2018, "sha1": "59bff61cec43cf1c2be0e66937b76c27b835badb", "oa_license": "CCBY", "oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-018-0606-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "59bff61cec43cf1c2be0e66937b76c27b835badb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3926582
pes2o/s2orc
v3-fos-license
Circulating tumor DNA as a biomarker and liquid biopsy in head and neck squamous cell carcinoma

The use of circulating biochemical molecular markers in head and neck cancer holds the promise of improved diagnostics, treatment planning, and posttreatment surveillance. In this review, we provide an introduction for the head and neck surgeon to the basic science, current evidence, and future applications of circulating tumor DNA (ctDNA) as a biomarker and liquid biopsy to detect tumor genetic heterogeneity in patients with head and neck squamous cell carcinoma (HNSCC).

Whereas some cancers have biomarkers that can diagnose disease and monitor pretreatment and posttreatment tumor burden, for example, prostate-specific antigen in prostate cancer or carbohydrate antigen (CA19-9) in pancreatic cancer, head and neck cancer has no such test. Thus, head and neck cancer surveillance relies on clinical and radiological findings. 3 Patients often present with advanced-stage disease, and the features of early invasion and metastasis create significant morbidity and impact on quality of life. 4 For these reasons, despite advances in treatment, head and neck cancer 5-year survival remains in the region of 60%, only slightly improved over the past few decades. 5 Measured with a blood test, circulating biochemical molecular markers in head and neck cancer hold the promise of improving diagnosis, planning, treatment monitoring, and surveillance. 6 A blood test carries little morbidity, can be repeated at various time points during treatment, and is cost-effective. One suggested biomarker is the level of circulating tumor DNA (ctDNA). 7 The discovery that a proportion of circulating DNA in patients with cancer may be tumor-derived has created the potential for a so-called "liquid biopsy," as an alternative to a tissue biopsy, to characterize tumor genetic features. 8,9 This review provides an introduction to the biological structure and function of circulating DNA. We discuss its use as a biomarker of tumor burden and the potential utilization of ctDNA as a liquid biopsy in head and neck squamous cell carcinoma (HNSCC) to identify tumor genetic heterogeneity. We deliberately focus on HNSCC and a ctDNA-based liquid biopsy. Although it is beyond the scope of this article, readers should be aware of other circulating components under investigation as potential liquid biopsies in head and neck cancer, for example, circulating tumor cells (CTCs) 8 or circulating viral DNA, such as plasma human papillomavirus DNA in oropharyngeal carcinoma 10 and plasma Epstein-Barr virus DNA in nasopharyngeal carcinoma. 11

| Circulating tumor DNA

Circulating DNA is extracellular DNA found in circulating blood, first identified as early as 1948. 12 This DNA is released into the circulation by both pathogenic and physiological mechanisms, including apoptosis, cellular necrosis, phagocytosis, and exocytosis. 13 Ordinarily, circulating DNA is rapidly degraded by blood nucleases and eliminated by the liver, spleen, and kidneys, and has a short half-life of around 10 to 15 minutes. 13 Therefore, systemic illness, such as liver or renal disease, can affect ctDNA levels and potentially bias the interpretation of blood results. 14 ctDNA is present in many forms: free DNA, bound to protein complexes, cell-surface bound, or in vesicles (apoptotic bodies, microvesicles, and exosomes). 13
In 1977, Leon et al 15 were the first to identify that patients with cancer had increased levels of circulating DNA fragments, prompting the hypothesis that tumors release DNA into the bloodstream. In 1989, Stroun et al 16 showed that a portion of these DNA fragments was in fact of tumor origin, owing to the presence of genome instability, and then, in 1994, Sorenson et al 17 demonstrated the presence of tumor-specific point mutations in the KRAS gene in ctDNA. The presence of cancer-specific genomic alterations (for example, point mutations) allows differentiation between ctDNA and DNA from normal healthy cells. 9,13 An additional discriminating factor is the difference in DNA fragment base-pair length: cellular apoptosis creates DNA fragments of around 100 to 200 base pairs, whereas necrosis, owing to more irregular digestion, creates larger fragments, sometimes many kilobase pairs in size. 13,18 The concentration of circulating DNA in healthy control patients is generally very low, in the region of <5 ng/mL, whereas patients with cancer can have elevated levels of several hundred ng/mL. 13,18 The mechanism of the increase in ctDNA in patients with cancer, and the exact origin of ctDNA, remain controversial. As tumor size increases, outstripping the metabolic supply, tissue hypoxia causes cellular necrosis, sloughing of tumor cells, and thus release of ctDNA. 13,18 CTCs are not discussed in this review but, in theory, lysis of these cells in the circulation may also contribute to ctDNA, although the evidence is mixed. 10 It is not clear whether ctDNA has an active role in carcinogenesis or is merely a byproduct of tumor shedding. García-Olmo et al 19 were one of the first groups to describe the concept that ctDNA could cause cancer metastases by transfecting healthy cells; they were able to induce tumors in healthy rats using plasma from tumor-bearing rats. The same group later demonstrated that serum from patients with colorectal cancer induced tumor formation in in vitro cultured cells. 20 The laboratory methods of ctDNA detection and analysis have changed greatly over the past few decades with the development of next-generation digital sequencing (NGS). The predominant method for the analysis of ctDNA is polymerase chain reaction (PCR) amplification of ctDNA gene targets followed by downstream analysis. A variety of methods is used for the sensitive detection of point mutations, including real-time PCR, digital droplet PCR, and Sanger-type sequencing. For more comprehensive molecular profiling of the circulating tumor genome, whole-genome amplification methods (such as multiple displacement amplification or random hexamer amplification) can be used to amplify limited ctDNA input, followed by library preparation and sequencing. 21 As can be seen, the protocol for the collection and analysis of a liquid biopsy assay is complex and not yet standardized, with a major hurdle being the sensitivity and error rate of NGS for ctDNA. 14,21
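The droplet digital PCR mentioned above quantifies rare mutant templates by Poisson statistics over thousands of partitioned droplets. The sketch below is a generic illustration, not any vendor's algorithm; the droplet volume (~0.85 nL, typical of Bio-Rad QX-series systems) and the droplet counts are assumptions.

```python
from math import log

def ddpcr_copies_per_ul(positive: int, total: int,
                        droplet_vol_ul: float = 0.00085) -> float:
    """Target concentration from droplet counts via Poisson correction:
    lambda = -ln(fraction of negative droplets)."""
    lam = -log((total - positive) / total)   # mean copies per droplet
    return lam / droplet_vol_ul

# Hypothetical mutant and wild-type assays on the same plasma sample:
mut = ddpcr_copies_per_ul(positive=38, total=15000)
wt  = ddpcr_copies_per_ul(positive=9500, total=15000)
print(f"mutant allele fraction ~= {mut / (mut + wt):.4f}")   # ~0.0025
```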
As opposed to a static biopsy that determines tumor characteristics, a biomarker should be an objective and quantitative test of disease progression and outcome. 22 We discuss the clinical application of ctDNA analysis in 2 broad categories: (1) a biomarker to assess tumor burden; and (2) a liquid biopsy to determine tumor genetic heterogeneity (Table 1).

| Biomarker of tumor burden

To date, the use of ctDNA to assess HNSCC tumor burden has focused on 2 areas: total ctDNA concentration and the detection of ctDNA as a tool in diagnosis and marker of prognosis. The use of a ctDNA liquid biopsy as a noninvasive screening tool is an interesting proposition and is currently under investigation. As with all screening tools, the technical and clinical demands of creating a sensitive and specific ctDNA-based test are immense. However, the development of low-cost NGS technology and complex bioinformatics data analysis make this concept a potential future reality. 23 The immediate application of ctDNA as a biomarker in the pretreatment phase will likely be for those patients in whom there is high risk or diagnostic uncertainty, for example, the monitoring of premalignant lesions in which the best course of treatment is still debated, or when a biopsy may miss potential malignancy in a severely dysplastic lesion. 24 In the posttreatment phase, the high sensitivity of ctDNA poses a real opportunity for the first biomarker in HNSCC to assess for residual disease or locoregional recurrence.

| Circulating tumor DNA levels and detection in patients with cancer

As discussed, the finding that total circulating DNA concentration was increased in patients with cancer was the foundation of the hypothesis that part of this DNA may be of tumor origin. However, total circulating DNA concentration, regardless of subsequent ctDNA genomic analysis, may also be used as a diagnostic and prognostic tool. Mazurek et al 25 assessed total circulating DNA levels in 200 patients with HNSCC compared with a control group of 15 patients. Mean total DNA levels were higher in the HNSCC group, but the difference did not reach statistical significance. Of interest, oropharyngeal SCCs had significantly higher levels of ctDNA (P = .011) than other HNSCCs (nasopharynx, hypopharynx, and larynx). They also demonstrated a significant relationship among nodal status (N0-1 vs N2-3), stage (I-III vs IV), and age (<63 and >63) with increasing ctDNA concentrations. To be used as a biomarker, ctDNA detection must be a sensitive test for HNSCC and correlate with severity of disease. In the largest study to date, Bettegowda et al 26 evaluated the detection of ctDNA in 359 patients with 15 various cancer types. They divided the patients into those with localized disease (n = 136) and those with metastatic disease (n = 223); unfortunately, the number of head and neck cancer cases was relatively low (n = 12). In the metastatic disease group, ctDNA was detected in 82% of patients, in contrast to 55% in the localized disease group. When evaluating ctDNA as a prognostic tool, by comparing the metastatic and localized groups in cancers with sufficient numbers (colorectal, gastroesophageal, pancreas, and breast), there was a significant relationship (P < .001) and also a clear trend with regard to advancing stage of disease and increased ctDNA quantity.

| Posttreatment surveillance

The ability to use ctDNA as a biomarker of disease recurrence in the posttreatment surveillance phase is arguably more valuable than its diagnostic merits. 27
Given the reliance on poorly sensitive clinical and radiographic tests, a biomarker of posttreatment tumor burden would be a valuable tool. 3 The study by van Ginkel et al 27 discussed the role of ctDNA in head and neck cancer surveillance and proposed a workflow for how this would be applied to clinical practice. 8,9,13 In a study of 18 patients with colorectal cancer, Diehl et al 28 were able to directly correlate ctDNA detection and fluctuating levels with recurrence-free survival after surgical treatment (P = .006). All but 1 of the patients who had detectable ctDNA postoperatively experienced recurrence, and none of the patients with undetectable ctDNA experienced recurrence. They were able to elegantly plot graphical representations of ctDNA levels over time to provide an accurate representation of "personal tumor dynamic burden." In a similar study, Dawson et al 29 compared ctDNA to CTCs and carcinoma antigen CA15-3 in 30 patients with metastatic breast cancer undergoing treatment. CtDNA was statistically more sensitive than CTCs or CA15-3 for measuring treatment response (P < .002) and was a significant marker of survival (P < .001). In a study of 47 patients with HNSCC, Wang et al 30 were able to collect postsurgical treatment samples from 9 patients. In 3 patients who developed recurrent disease, the presence of ctDNA in plasma predated the clinical/radiographic evidence of recurrence by 15 months, 9 months, and <1 month. Of the 5 patients with negative ctDNA results, all were recurrence free at a mean follow-up of 12 months. 30 Hamana et al 31 compared the detection of ctDNA in 64 patients with oral SCC in the preoperative and postoperative phase. Forty-four percent of patients (28/64) demonstrated ctDNA with tumor-specific microsatellite alterations preoperatively, and this dropped to 20% (13/64) postoperatively. Of the 28 preoperative patients with detectable ctDNA, 20 had no ctDNA detectable postoperatively, and all of these patients were disease-free with no recurrence. Four of the patients with detectable ctDNA at 4 weeks in the immediate postoperative phase went on to develop distant metastases. In this study, the presence of ctDNA was statistically correlated with early-stage (I/II) versus late-stage (III/IV) disease (P = .0378).

| Liquid biopsy to assess tumor genetic heterogeneity

The ability to identify driver mutations and epigenetic modifications of a tumor is an important step in the implementation of targeted therapy and improving outcomes in patients with head and neck cancer. HNSCC shows considerable tumor genetic heterogeneity, meaning that different parts of the tumor may harbor different mutations. 32 Knowledge of all the important "driver" mutations of a tumor is necessary to be able to provide targeted treatment for that tumor. The current use of tissue biopsy as a diagnostic technique is a major deficiency in this regard. A tissue biopsy captures 1 or 2 parts of a tumor and is, therefore, at high risk of bias and of missing important driver mutations due to intratumoral heterogeneity, together with the invasive nature and morbidity of the procedure itself. 27 In contrast to the "static biopsy" obtained from tissue samples, the ability to collect multiple liquid biopsies at different time points during treatment creates the reality of a "dynamic biopsy" to detect tumor clonal evolution and identify recurrence or treatment resistance.
This would allow the potential for real-time monitoring of cancer genetic mutational progression and the tailoring of personalized targeted molecular therapy. Recent studies in head and neck cancer have focused on the identification of tumor-specific genomic alterations in ctDNA. We discuss each applicable genomic alteration in turn.

| Mutations

Lebofsky et al 33 investigated tumor-specific mutations in the ctDNA of 34 patients with 18 various types of metastatic cancer (head and neck cancer, n = 5). In 27 patients, ctDNA mutations matched those from solid tumor biopsies. In the aforementioned study of 47 patients with HNSCC by Wang et al, 30 they were able to detect plasma ctDNA with tumor-specific point mutations in 87% of cases. They assessed the presence of mutations in 6 genes frequently associated with HNSCC (TP53, PIK3CA, CDKN2A, FBXW7, HRAS, and NRAS), as well as E7 (human papillomavirus) DNA (>85% had TP53 mutations). The majority of patients had advanced (stage III or IV) disease. When they combined findings from plasma with sloughed DNA fragment detection in saliva, this increased the diagnostic sensitivity to 96%. Of note, there was little variation in plasma detection rates among sites (80% oral cavity, 91% oropharynx, 86% larynx, and 100% hypopharynx). As expected, salivary DNA detection was significantly higher in oral cavity tumors (100%) when compared with other sites (47%-70%). Although detection was higher with advanced-stage than early-stage disease (92% vs 70%), this difference did not reach statistical significance. They concluded that the combination of DNA detection in 2 compartments (plasma and saliva) was a valuable tool to increase sensitivity.

| Microsatellite alterations

Microsatellite alterations include microsatellite instability (MSI) and loss of heterozygosity (LOH). Microsatellites are sections of DNA with short base pair motifs (usually 1-6 base pairs in length) repeated 5 to 100 times. In brief, MSI signifies a defective mismatch repair system, which in turn is a marker of mutations in DNA repair genes. 34 There is strong evidence for the role of MSI in colorectal cancer as a marker of prognosis and survival, but the role of MSI in head and neck cancer is unclear. 34 Some studies report no relationship, whereas others report that positive MSI confers a better prognosis. LOH results from the loss of one copy of a diploid gene and is a common mechanism of inactivation of tumor suppressor genes. In a recent review, De Schutter et al 34 highlighted LOH as a more useful prognostic predictive marker than MSI, due in part to an increased frequency of LOH compared with MSI in HNSCC. LOH is associated with advanced high-grade disease and is a negative prognostic indicator of survival, 35 with suggested evidence of a correlation with chemotherapy resistance. 36 Some of the earliest work to identify HNSCC tumor-specific genomic alterations in ctDNA was performed by Nawroz et al. 37 In a cohort of 21 patients, they identified microsatellite alterations in the ctDNA of 6 patients with HNSCC. All 6 patients had advanced (stage III-IV) disease and 5 had nodal metastases. The same group followed up these preliminary findings with a larger study of 152 patients with HNSCC. 38 Forty-five percent of patients (68/152) had tumor-specific microsatellite alterations in ctDNA, with 84% of the cohort having advanced-stage or recurrent disease (127/152). Those with advanced-stage disease and nodal metastases had a higher positive rate of microsatellite alterations than those with early-stage cancer.
Of note, there was a marked jump in detection between stage I and stage II disease (17% vs 47%), with broadly similar rates in stage III and IV disease (52% and 44%). With a mean follow-up period of 27 months, disease-free survival was decreased in the positive ctDNA detection group, but not to a statistically significant extent.

| Tumor suppressor gene hypermethylation

The silencing of tumor suppressor genes via hypermethylation of their promoter regions is one mechanism of gene suppression involved in carcinogenesis. 39 This epigenetic phenomenon of hypermethylation has been investigated and validated in HNSCC, with a host of tumor suppressor genes implicated as potential targets. 39 The detection of hypermethylation in ctDNA is, therefore, another potential prognostic application in HNSCC. Mydlarz et al 40 evaluated the methylation status of 100 patients with HNSCC compared with a control group of 50 patients. They specifically examined hypermethylation of the EDNRB, DCC, and p16 (CDKN2A) genes. Ten patients (10%) exhibited EDNRB hypermethylation, 2 of these patients had DCC hypermethylation, and 1 of these 2 patients also had p16 hypermethylation. EDNRB hypermethylation in HNSCC samples was statistically significant when compared with the control group (P = .02), but this was not the case for the DCC or p16 genes. It is difficult to ascertain the clinical utility of these data. Detecting hypermethylated regions in ctDNA is highly specific for a diagnosis of HNSCC, but the sensitivity is poor. Moreover, with tens of tumor suppressor genes implicated in HNSCC, 39 an assay with sufficient diagnostic sensitivity to be used as a screening tool would need to evaluate each of these genes individually. One solution is to perform genome-wide analysis of DNA methylation, which is a technique under investigation. 41

| Future questions to answer about circulating tumor DNA

For ctDNA to be used as a biomarker and liquid biopsy, the quantitation of ctDNA levels via the detection of genomic alterations must be standardized and validated in HNSCC. A gold standard investigation would need to take into account patient, tumor, procedural, and treatment factors to produce a risk-adjusted determination of prognosis and tumor heterogeneity (Figure 1). Unfortunately, each of these factors has unanswered research questions. A significant challenge is the harmonization of the methodology used to detect and analyze ctDNA. 8,9 Herein lies the problem of creating a validated and universally accepted biomarker/liquid biopsy assay with set parameters that can be reproduced by different institutions. Furthermore, the mutational ctDNA load seems to vary greatly between tumor type and site, 18,26,33 and stage of disease or therapy may not always correlate with ctDNA levels. 18 For example, Mazurek et al 25 noted that oropharyngeal SCC had significantly greater levels of ctDNA than other HNSCC sites; the cause for this is unknown, but presumably individual tumor biology has an impact. Thierry et al 13 noted that tumor growth kinetics and variance in cell proliferation and cell loss factors will impact upon ctDNA levels. In addition, the impact of systemic factors (comorbid disease, age, and smoking) on the levels and clearance of circulating DNA is not fully understood. The ability to quantify the above factors and apply these calculations to individual patient samples remains an unanswered task.

| CONCLUSION

As evidenced by this review, there are limited data relating to HNSCC and ctDNA.
The presence of ctDNA in HNSCC seems to correlate with early- versus late-stage disease, and in the postoperative phase it may predict recurrence or metastasis. The majority of studies provide proof-of-principle data that lay the foundation for future clinical trials. This review serves as an introduction for the head and neck surgeon to this flourishing field of research; in time, a systematic review will be required to further quantify all available data. Although other specialties have made great strides toward applying ctDNA analysis in clinical practice, progress has not been as rapid in head and neck oncology. Despite this, the evidence is encouraging that ctDNA as a biomarker and liquid biopsy holds great promise to provide a noninvasive, cost-effective, and tumor-specific test in HNSCC. The advent and further development of NGS technology is a turning point in this regard, and the need for robust clinical trials of ctDNA in HNSCC is paramount.
2018-04-03T05:09:42.003Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "bd99d0896fb51ffc651a324f7c7fae72a3e6f73b", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hed.25140", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "bd99d0896fb51ffc651a324f7c7fae72a3e6f73b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
28418951
pes2o/s2orc
v3-fos-license
Laser cooling with a single laser beam and a planar diffractor

A planar triplet of diffraction gratings is used to transform a single laser beam into a four-beam tetrahedral magneto-optical trap. This 'flat' pyramid diffractor geometry is ideal for future microfabrication. We demonstrate the technique by trapping and subsequently sub-Doppler cooling 87Rb atoms to 30 microKelvin.

A magneto-optical trap (MOT) [1] is the starting point for the vast majority of cold and ultracold atomic physics experiments. Atoms are trapped and cooled to sub-milliKelvin temperatures using light scattering modified by the Zeeman and Doppler effects, respectively. MOTs are typically formed at the center of a spherical quadrupole magnetic field, in the overlap region of six (or less commonly four [2]) appropriately polarised red-detuned laser beams. The original pyramid MOT (PMOT) [3], utilising a square-based pyramidal reflector with 90° apex angle between opposite sides, was devised as a means to turn a single laser beam into the six appropriately polarised beams required for a MOT. The PMOT simplifies optical alignment, saves a large number of optical components, and can also be modified to produce a beam source of cold atoms [4]. The original PMOT has since been used to make a compact gravimeter [5] and a millimetre-scale chip trap [6]. Recently we demonstrated a new kind of pyramid MOT, based on a four-beam tetrahedral geometry [7] originating from a single beam interacting with a triangular pyramid reflector. This geometry has many advantages over the original design: MOT formation outside the pyramid is possible, which simplifies optical access to the atoms, and the apex region of the pyramid is noncritical (the latter feature is also present in the PMOT design in Ref. [8]). The apex and mirror edges in the original PMOT will generate diffraction, and an incorrect apex angle will also generate intensity irregularities in the doubly-reflected beams counterpropagating with the input beam. As these irregularities pass directly through the MOT, they can hinder further cooling in optical molasses. Moreover, although sub-Doppler cooling is possible with small atom number [5], for larger PMOTs the counterpropagating beam will contain a shadow of the atoms from the input beam, creating an intensity imbalance that will hinder molasses [9]. This problem is obviated in the tetrahedral PMOT.

In this letter we have experimentally realized our proposal to extend the tetrahedral PMOT [7] to a 'flat' geometry using diffraction gratings. The grating magneto-optical trap (GMOT) has a very similar working principle to the tetrahedral PMOT, and its properties are again largely the result of intensity balance and polarisation decomposition [7]. One major difference is due to the fact that gratings spatially compress beams with a corresponding intensity increase (Fig. 1 a). The relationship between the intensities I_i and I_1 of the vertical incident beam and the first-order diffracted beam, respectively, is determined by the corresponding beam widths w_i and w_1 and the first-order diffraction efficiency R_1:

I_1 = R_1 (w_i / w_1) I_i,   (1)

where the Bragg condition yields the first-order diffraction angle α = arcsin(λ/d) for light with wavelength λ normally incident on a grating with groove spacing d. Note the relation α = 2θ allows direct comparison with mirror declination angle θ in our previous work [7].
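The following short script is only a sketch of the geometry behind Eqs. (1)-(3): it evaluates the first-order diffraction angle and beam compression for the 1200 grooves/mm gratings and 780 nm light used later in this letter, taking the footprint compression w_1 = w_i cos α suggested by Fig. 1 a), and shows that the balance condition then reduces to R_1 = 1/n regardless of that angle.

```python
import math

wavelength_nm = 780.0     # Rb D2-line light used in the experiment
grooves_per_mm = 1200.0   # groove density of the gratings used here
n_gratings = 3

d_nm = 1e6 / grooves_per_mm                 # groove spacing d ~ 833 nm
alpha = math.asin(wavelength_nm / d_nm)     # Bragg condition: alpha = arcsin(lambda/d)
compression = math.cos(alpha)               # assumed geometric beam compression w_1/w_i

# Eq. (1): I_1 = R_1 (w_i/w_1) I_i.  Vertical balance: I_i = n I_1 cos(alpha).
# With w_1 = w_i cos(alpha) the cosine cancels, giving Eq. (3): n R_1 = 1.
R1_balanced = 1.0 / n_gratings

print(f"first-order angle   alpha = {math.degrees(alpha):.1f} deg")  # ~69.5 deg
print(f"beam compression    w1/wi = {compression:.2f}")              # ~0.35
print(f"balanced efficiency   R_1 = {R1_balanced:.2f}")              # 1/3, independent of alpha
```

The computed angle of about 69.5° and the required efficiency of one third are consistent with the grating angle and balance condition quoted in the text.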
The condition for balanced optical molasses from beams with intensities I_j and wavevectors k_j is:

Σ_j I_j k_j = 0.   (2)

If we consider the configuration depicted in Fig. 1, with n identical gratings, then Eq. (2) is always satisfied radially, given the symmetry of the problem. Substituting Eq. (1) into Eq. (2) and projecting onto the vertical axis yields the very simple condition for balanced optical molasses:

n R_1 = 1,   (3)

which is completely independent of diffraction angle and hence grating period. For three beams this corresponds to a first-order diffraction efficiency of R_1 = 1/3. We consider only gratings for which second-order diffraction is absent (i.e. first-order angles α > 30°), as they are simpler to model and additionally small α = 2θ leads to drastically reduced trapping and cooling properties [7]. To create the maximum trap volume for a given beam size, the geometry in Fig. 1 c) could be used. Unlike the system corresponding to our experimental realization (Fig. 1 b), the zeroth grating order (a retroreflection with efficiency R_0) has to be considered, as it is present in the beam overlap volume. This modifies the balanced molasses condition Eq. (3) to R_1 = (1 − R_0)/n, and all cooling forces are reduced by the factor 1 − R_0. The effect on the vertical trapping force depends on the relative zeroth-order reflection phase shift between S and P polarizations, tending only to improve with non-zero relative phase.

A critical point in the achievement of a tetrahedral magneto-optical trap is the circular polarization of the first-order diffracted beams. Diffraction efficiency is usually specified in terms of S and P polarization, and can vary dramatically with wavelength and polarisation. However, the difference in phase accumulation between S and P components, φ_SP, also has to be taken into consideration. For our configuration, optimal cooling and trapping is achieved when the handedness (direction of circular polarization relative to beam propagation) of the incident vertical beam is reversed [7] and the total power drops by a factor 3 (i.e. all beams have equal intensity). One can show that the radial trapping constant of the GMOT, relative to the tetrahedral PMOT, is reduced by a correction factor η_SP if the S and P linear components of the first-order grating beams have an intensity ratio of I_SP and a relative phase shift of φ_SP, respectively. The effect of relative S:P intensity ratio and phase is shown in Fig. 2 and is surprisingly robust. For an ideal π/2 phase shift between S and P, even an intensity ratio I_SP ∼ 0.07 still yields ∼ 1/2 the trapping strength (black curve in Fig. 2). We measured the efficiency of our gratings for a circularly polarized incident beam to be R_1 = 45.3%, of which 90% has the correct circular handedness. If we consider the ensemble made of the grating and the quartz plate of the vacuum chamber (Fig. 3), the overall diffraction efficiency drops to a near-optimal R_1 = 32.2%, with 85% having the correct handedness, i.e. ζ_SP = 70% (red curve in Fig. 2).

Fig. 2. Relative grating MOT radial trapping strength compared to the tetrahedral PMOT [7], η_SP, as a function of the intensity ratio I_SP and relative phase φ_SP between the S and P components of the first-order beams at the MOT location. The black and red curves indicate 50% and 70% trapping reduction, respectively.

In the experiment, trapping and repumping light are provided by two independent external cavity diode lasers [10].
The lasers are overlapped and then spatially filtered by a 30 µm pinhole to remove rapid spatial intensity variation before and after diffraction from the gratings, which degrades the trap loading and can prevent sub-Doppler molasses. The beam is also over-expanded such that the intensity profile is as flat as possible within the 23 mm diameter apertured laser beam, to reduce intensity gradients in the three diffracted beams. A quarter-wave plate changes the polarization to circular just before the vacuum chamber. We used inexpensive Edmund Optics gratings NT43-752. These 1200 grooves/mm gratings deflect a normally incident 780 nm beam at an angle ≈ 69.4°, close to the ideal tetrahedron PMOT angle (arccos(1/3) ≈ 70.5°). This yields maximum trapping and cooling [7], albeit with a decreased capture volume using the grating geometry.

The grating triplet is positioned below the glass vacuum cell, the gratings forming a triangle with the blazed direction pointing towards the center (Fig. 1 a). The gratings have dimensions 12.7 mm by 12.7 mm and thus are not completely illuminated by the 23 mm diameter laser beam. After diffraction, the beams are vertically squeezed to about w_1 = 2.7 mm, due to compression on the gratings, and the overlap region, where atoms can be trapped, is approximately a flattened rhombohedron [7]. The overlap volume is ∼ 60 mm^3 and lies entirely above the 3 mm thick quartz vacuum window (Fig. 3). By deliberately tilting the gratings beyond the flat geometry to increase the beam overlap region, we found we could collect more atoms, indicating that the atom number is indeed overlap-volume limited. Ideally, gratings with a longer period could be used; however, for commercial gratings the variety in blaze angle (and hence polarization-dependent diffraction efficiency) is limited unless the groove density is a multiple of 600 grooves/mm.

With intensities of 1.3 mW/cm^2 in both the vertical and diffracted beams, a magnetic field gradient of 17 G/cm and 7 MHz red-detuning, we trap 10^5 87Rb atoms in our GMOT (Fig. 3), consistent with the beam overlap volume reduction from our previous work [7]. For our experimental parameters the atom number should only enter the volume-squared scaling regime [6] for beam diameters less than 2 mm.

Fig. 3. Photograph of the experimental setup from the side. The MOT forms in the overlap of the downward incident beam (dashed white lines) and the three first-order grating beams. The path of the diffracted beam (dotted white lines) from one grating (yellow schematic) refracts through the 3 mm thick quartz vacuum cell (blue), then propagates at the expected α ∼ 69°.

After initial MOT loading, the light frequency is further red-detuned for 20 ms to achieve sub-Doppler (< 140 µK) MOT temperatures. The cloud temperature is measured from the sizes of background-subtracted fluorescence images after 0 and 10 ms time of flight.
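As a minimal sketch of the time-of-flight estimate just described, the temperature follows from the ballistic growth of the Gaussian cloud radius, T = m(σ_t² − σ_0²)/(k_B t²). The widths below are placeholder values chosen to give roughly the 30 µK reported here; they are not measured data from the experiment.

```python
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_RB87 = 1.443e-25      # mass of a 87Rb atom, kg

def tof_temperature(sigma_0, sigma_t, t, mass=M_RB87):
    """Temperature from Gaussian rms cloud radii before (sigma_0) and after (sigma_t) a flight time t (SI units)."""
    return mass * (sigma_t**2 - sigma_0**2) / (K_B * t**2)

# Placeholder cloud radii (not measured values), 10 ms flight time as in the text.
T = tof_temperature(sigma_0=0.30e-3, sigma_t=0.615e-3, t=10e-3)
print(f"T = {T * 1e6:.0f} microK")   # ~30 microK
```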
Fig. 4 shows the evolution of temperature as a function of extra in-MOT detuning, reaching significantly sub-Doppler temperatures of 30 ± 10 µK for red-detunings > 30 MHz. For a fixed 30 MHz extra red-detuning we investigated optical molasses formation by reducing the MOT magnetic field gradient to a fixed value in the range 0 − 17 G/cm for the last 10 ms of the 20 ms in-MOT cooling phase. Although the temperature remains approximately constant, as the final magnetic field gradient reaches zero the 1/e diameter of the Gaussian cloud prior to time-of-flight imaging reaches a size (∼ 4 mm) comparable to the beam overlap region.

It appears that deeper molasses cooling is largely prevented by spatial intensity variation in our small beam overlap volume, particularly near the edge of the cloud. Under optimal conditions we have seen preliminary evidence for optical molasses; however, future experiments would be better performed in an optimized setup, with larger beam overlap and less dramatic diffracted beam compression. Both goals can be achieved using diffraction gratings with a larger groove period. A particularly appealing aspect of the GMOT is that it lends itself to custom microfabricated planar optical elements, with arbitrary groove spacing in a stand-alone planar element.

In conclusion, we have demonstrated a pyramid magneto-optical trap with 'flat' optics, extending our work on the tetrahedral pyramid MOT [7]. A single beam is split into three new beams by a planar diffractor. This diffractor is well-suited to microfabrication, as technically challenging deep etching is not required. Additionally, MOT formation above the plane of the gratings has clear advantages for detection and further manipulation of the atoms. One can envisage applications in portable MOT-based devices. Moreover, we have demonstrated sub-Doppler temperatures in our grating MOT, and, like the tetrahedral PMOT, sub-Doppler optical molasses should be achievable even with large atom number, suitable for applications requiring Bose-Einstein condensation.

We are grateful for stimulating discussions with Joseph Cotter and Ed Hinds. PFG received support from the RSE/Scottish Government Marie Curie Personal Research fellowship program.

Fig. 1. a) Side view of a diffracted beam, illustrating geometric beam compression. b) Top view of three gratings and the circular vertical incident beam. The arrows show the blaze direction. c) A design for improved use of the laser beam area. Tomographic movies of the beam overlap volume for b) and c) are shown in Media 1, Media 2.

Fig. 4. Temperature in the MOT as a function of the 20 ms extra detuning. For red-detuning jumps larger than 30 MHz, the temperature drops to 30 ± 10 µK.
2010-06-23T14:14:54.000Z
2010-06-23T00:00:00.000
{ "year": 2010, "sha1": "061a97592f11b2ac114d398bbde00dfc50213383", "oa_license": null, "oa_url": "https://hal.archives-ouvertes.fr/hal-01095913/file/Vangeleyn%20et%20al.%20-%202010%20-%20Laser%20cooling%20with%20a%20single%20laser%20beam%20and%20a%20planar%20diffractor.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "061a97592f11b2ac114d398bbde00dfc50213383", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine", "Physics" ] }
18779014
pes2o/s2orc
v3-fos-license
18F-Labeled Peptides: The Future Is Bright

Radiolabeled peptides have been the subject of intense research efforts for targeted diagnostic imaging and radiotherapy over the last 20 years. Peptides offer several advantages for receptor imaging and targeted radiotherapy. The low molecular weight of peptides allows for rapid clearance from the blood and non-target tissue, which results in favorable target-to-non-target ratios. Moreover, peptides usually display good tissue penetration and they are generally non-immunogenic. A major drawback is their potentially low metabolic stability. The majority of currently used radiolabeled peptides for targeted molecular imaging and therapy of cancer are labeled with various radiometals like 99mTc, 68Ga, and 177Lu. However, over the last decade an increasing number of 18F-labeled peptides have been reported. Despite obvious advantages of 18F, like its ease of production in large quantities at high specific activity, the low β+ energy (0.64 MeV) and the favorable half-life (109.8 min), 18F-labeling of peptides remains a special challenge. The first part of this review will provide a brief overview of chemical strategies for peptide labeling with 18F. A second part will discuss recent technological advances for 18F-labeling of peptides, with special focus on microfluidic technology, automation, and kit-like preparation of 18F-labeled peptides.

Introduction

The deciphering of the human genome led to the identification of 483 drug targets by the turn of the new millennium and an estimation of 5000-10,000 druggable targets on the basis of disease-relevant genes in the future [1]. Recently, Rask-Andersen et al. determined 475 potentially novel drug targets within the druggable human genome defined by Hopkins and Groom [1,2]. The vast majority of these drug targets (~88%) are represented by proteins. Highly specific targeting vectors comprise peptides, proteins, antibodies and antibody fragments. However, especially small peptides are ideal targeting vectors for numerous current and future drug targets. The prominent position of peptides among specific targeting vectors has attracted much interest from scientists of various disciplines over the last decades. In the emerging field of molecular imaging and nuclear medicine diagnosis and therapy, peptides have become indispensable tools for in vivo visualization and monitoring of physiological and biochemical processes on the molecular and cellular level. Peptides are also attractive targeting vectors for treatment of diseases. In oncology, radiolabeled peptides have gained remarkable attention for targeted diagnostic imaging and radiotherapy. The high interest in using radiolabeled peptides for imaging and therapy stems from the overexpression of numerous specific peptide-binding receptors in various cancers and inflammatory tissues [3]. The application of peptides is furthermore justified by a manifold of advantages. Automated solid-phase peptide synthesis (SPPS) provides simple and convenient synthetic access with a high degree of structural diversity to generate entire peptide libraries. Recent advances in molecular biology have resulted in the development of novel techniques such as biopanning, which uses phage-displayed peptide libraries for the identification of numerous molecular targets for peptide-based diagnostics and therapeutics, or to support the generation of lead structures for drug discovery.
In contrast to larger targeting compounds like antibodies, peptides are characterized by a small size which allows for rapid clearance from the blood pool and non-target tissues. Good tissue penetration properties and high tumor uptake of radiolabeled peptides can lead to favorable tumor-to-background ratios, an important requirement for good image quality and good cancer targeting properties in radiotherapy. Elimination from the body via excretory organs like the kidneys is generally fast. Moreover, peptides are usually non-immunogenic [4]. The history of radiolabeled peptides dates back three decades, when Reubi discovered an extraordinarily high density of somatostatin receptors in pituitary tumors for specific targeting with radiolabeled somatostatin analogues in 1984 [5]. The first study of a radiolabeled peptide in humans was published in 1989 by Krenning et al. using a 123I-radioiodinated somatostatin analogue ([123I]204-090) in patients with endocrine-related carcinomas [6]. The first radiolabeled peptide approved by the US Food and Drug Administration (FDA) was 111In-labeled DTPA-octreotide (Octreoscan®), which evolved to be the gold standard for imaging of neuroendocrine tumors and remained the only regulatory approved peptide in North America and Europe for a long time. To date, most peptides for targeted molecular imaging and therapy of cancer have been labeled with radiometals. Radiolabeling of peptides with the short-lived positron emitter fluorine-18 (18F) represents an attractive alternative to radiometal-based peptides. 18F is an ideal radionuclide for radiolabeling of small and medium-sized biomolecules like peptides. 18F is characterized by favorable physicochemical and nuclear properties. This positron-emitting radionuclide exhibits a high positron branching ratio of 97%, and 18F can be easily produced in high yields in a small biomedical cyclotron via the 18O(p,n)18F nuclear reaction using an 18O-enriched H2O target. This allows the production of high specific activity [18F]fluoride in radioactivity amounts of several hundred GBq. Its favorable half-life of 109.8 min allows for syntheses and imaging studies over a few hours. This also allows shipping and distribution of [18F]fluoride and 18F-labeled radiopharmaceuticals to facilities and hospitals without access to a cyclotron (illustrated by the brief decay calculation below). The low positron energy of 0.64 MeV provides images with high spatial resolution due to the short maximum range in tissue (2.4 mm in water) [7]. A more accurate value for spatial resolution and tissue positron range is represented by the full width at 20% of the maximum amplitude (FW20H) of the annihilation distribution, determined to be 0.42 mm in compact bone, 0.54 mm in soft tissue, 0.58 mm in adipose tissue and 1.52 mm in lung tissue [8]. Moreover, the relatively short half-life of 18F causes only minor radiation doses in patients, and 18F-labeled peptides would also meet the needs and experience of PET clinicians with instrumentation and interpretation of PET scans, as they are familiar with [18F]FDG (2-deoxy-2-[18F]fluoro-D-glucose), the gold standard of PET imaging in oncology and other diseases [9]. However, 18F-labeling of peptides remains a special challenge. Direct incorporation of [18F]fluoride via nucleophilic aromatic substitution, one of the most prominent synthesis routes in 18F chemistry, is usually not feasible in the case of peptides due to the required harsh reaction conditions.
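Because the 109.8 min half-life fixes the usable time budget, a rough decay calculation (with purely illustrative times and starting activity, not values taken from the literature cited here) shows how quickly activity is lost during synthesis and distribution.

```python
import math

T_HALF_MIN = 109.8                      # physical half-life of 18F in minutes
DECAY_CONST = math.log(2) / T_HALF_MIN  # decay constant, 1/min

def activity_after(a0_gbq, minutes):
    """Remaining activity after `minutes` of decay, starting from a0_gbq."""
    return a0_gbq * math.exp(-DECAY_CONST * minutes)

a0 = 100.0                                              # GBq at end of bombardment (example value)
after_synthesis = activity_after(a0, 60)                # assume 60 min radiosynthesis and QC
after_transport = activity_after(after_synthesis, 60)   # assume a further 60 min shipping
print(f"end of synthesis: {after_synthesis:.0f} GBq")   # ~68 GBq
print(f"after transport:  {after_transport:.0f} GBq")   # ~47 GBq
```

Roughly half the activity is gone after two hours, which is one reason short, few-step labeling routes are so strongly preferred for peptides.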
Other challenges include laborious and time-consuming labeling procedures and chemoselectivity aspects for the incorporation of 18F into peptides. Consequently, it was not until 11 years ago that the first human PET study based on a peptide labeled with the positron emitter 18F was initiated and conducted. Within this study, the diagnostic performance of [18F]FP-Gluc-TOCA, a carbohydrated octreotide derivative labeled with the prosthetic group 4-nitrophenyl-2-[18F]fluoropropionate, was evaluated in comparison to Octreoscan® in patients with somatostatin receptor-positive tumors [10,11]. Radiolabeled analogs of somatostatin, which target somatostatin receptors, became the prototype for imaging and radiotherapy of cancer of neuroendocrine origin and have been studied intensively. Somatostatin receptors belong to the family of G-protein coupled receptors. Beyond somatostatin-based peptides to visualize somatostatin receptors, a broad range of other important peptide ligand-receptor systems have been identified for targeted molecular imaging and therapy of cancer in nuclear medicine [12]. Other prominent G-protein coupled receptors are gastrin-releasing peptide receptors (GRPRs), which can be targeted with bombesin peptide derivatives in prostate, breast, pancreatic and small-cell lung cancer, or the cholecystokinin (CCK)/gastrin receptor system in colon and gastric cancers, as well as αvβ3-integrins [13]. Despite the vast number of 18F-labeled peptides that have been designed and preclinically evaluated over the last years, only very few 18F-labeled peptides (according to Li et al., only seven peptide-based 18F-radiopharmaceuticals by 2013 [14]) have been the subject of clinical patient studies [10,15]. A valid explanation can be found in the challenges of 18F-radiosynthesis routes towards 18F-labeled peptide PET radiopharmaceuticals. This review on 18F-labeled peptides is organized into two parts. The first part summarizes the most frequently used synthetic routes for the preparation of 18F-labeled peptides. The second part of the review is focused on recent technological advancements for peptide labeling with 18F, such as automation, application of microfluidic technology, and kit-like production. The review is concluded with a brief summary to highlight the potential of a bright future of 18F-labeled peptides for preclinical and clinical targeted molecular imaging.

General 18F Radiochemistry Concepts for Peptide Labeling

Two general chemical strategies are known for radiolabeling with 18F, using either nucleophilic substitution with no-carrier-added (n.c.a.) [18F]fluoride or electrophilic substitution with carrier-added (c.a.) [18F]fluorine gas. 18F-labeled peptides as radiotracers usually require high specific activity (1-10 Ci/μmol [16]), as their corresponding receptors in vivo are easily saturable (see the illustrative calculation below). Moreover, peptide-binding receptors are usually expressed at quite low receptor densities in vivo. Thus, electrophilic radiolabeling procedures generating 18F-labeled compounds at low specific activity, due to the presence of c.a. 18F-fluorine gas, are not suitable for peptide labeling with 18F for targeted molecular imaging. Established synthesis routes towards 18F-labeled peptides have focused on nucleophilic substitution approaches and are discussed in the sections below.
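To see why high specific activity matters for easily saturable receptors, the following sketch converts the 1-10 Ci/μmol range quoted above into the amount of peptide co-injected with a typical PET dose; the 370 MBq dose and the ~1500 g/mol peptide molecular weight are illustrative assumptions, not values from this review.

```python
CI_TO_GBQ = 37.0   # 1 Ci = 37 GBq

def peptide_nmol(dose_gbq, specific_activity_ci_per_umol):
    """Nanomoles of peptide accompanying an injected activity at a given specific activity."""
    sa_gbq_per_nmol = specific_activity_ci_per_umol * CI_TO_GBQ / 1000.0
    return dose_gbq / sa_gbq_per_nmol

dose_gbq = 0.370                 # 370 MBq, a typical PET injection (assumed example)
mol_weight = 1500.0              # g/mol, assumed peptide molecular weight
for sa in (1.0, 10.0):           # the 1-10 Ci/umol range quoted in the text
    n = peptide_nmol(dose_gbq, sa)
    print(f"SA = {sa:4.1f} Ci/umol -> {n:4.1f} nmol (~{n * mol_weight / 1000:.1f} ug peptide)")
```

Even at the lower end of the range only about ten nanomoles, i.e. microgram amounts of peptide, accompany the dose, keeping receptor occupancy low.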
Conventional 18F-labeling procedures require harsh reaction conditions such as high temperature, organic solvents and basic conditions to introduce [18F]fluoride directly into target compounds. These conditions are usually not appropriate for the direct labeling of peptides with [18F]fluoride. Moreover, acidic side chains such as glutamic or aspartic acid in the peptide backbone may also interfere with direct nucleophilic radiofluorination reactions [4]. Hence, alternative procedures involving milder reaction conditions are needed to prepare 18F-labeled peptides in sufficient radiochemical yields and pharmaceutical quality. Recently, an excellent and very detailed review on challenges and strategies for 18F-labeling of macromolecules was published by Kuhnast and Dollé, covering three decades of research activities [17]. Three main concepts for radiolabeling of peptides with nucleophilic n.c.a. [18F]fluoride have evolved over the last decades, as recently compiled and illustrated by Liu et al. [18]. Concept 1 can be described as the activation of n.c.a. [18F]fluoride followed by attachment to the peptide through bioconjugation chemistry via amine and sulfhydryl groups present in the peptide backbone. The activation of [18F]F− is achieved by generation of bifunctional labeling agents or prosthetic groups, which are further reacted under mild conditions with the peptide. In turn, concept 2 involves the functionalization and activation of the peptide itself and subsequent fixation of n.c.a. [18F]fluoride. This concept is also known as [18F]fluoride acceptor chemistry. Three approaches have been developed using either silicon-, boron- or aluminum-[18F]fluoride acceptor chemistry to radiolabel peptides within one step. This innovative methodology profits from the Lewis acid character of Si, Al, and B to form stable bonds with 18F. The third concept involves activation of both reaction partners, n.c.a. [18F]fluoride and the peptide. This dual activation concept is associated with the highly prominent click chemistry methodology.

Concept 1: The Prosthetic Group Approach for 18F-Radiolabeling of Peptides

Prosthetic groups, also referred to as bifunctional labeling agents, have been used in the majority of peptide labeling approaches with 18F. These prosthetic groups are generated through introduction of [18F]fluoride into a small-molecule compound bearing a second functional group that allows for bioconjugation to the peptide under mild conditions. Purification from unlabeled peptide and by-products via HPLC or solid phase extraction (SPE) ensures high specific activity of the 18F-labeled peptide. Over the years, a wide variety of different prosthetic groups have been generated that can be divided into two categories: (1) amine-reactive prosthetic groups targeting the N-terminal α-amino group or the lysine ε-amino groups of the peptide backbone via 18F-fluoroacylation and 18F-fluoroamidation reactions, and (2) thiol-reactive prosthetic groups for radiolabeling using cysteine residues and maleimides according to 18F-fluoroalkylation reactions. Figure 1 depicts a selection of the most frequently used prosthetic groups for peptide labeling with 18F. The 18F-acylation agent [18F]NFP (4-nitrophenyl-2-[18F]fluoropropionate) was mostly applied to radiofluorination reactions with cyclic glycosylated pentapeptides on the basis of the RGD (Arg-Gly-Asp) sequence to give 18F-labeled galacto-RGD. This approach was successfully translated into the clinic for molecular imaging of αvβ3-integrins in cancer patients [19,20].
Compared to [18F]NFP, the prosthetic group [18F]SFB is characterized by an aromatic [18F]fluorobenzoyl residue that is incorporated preferentially into peptides via conjugation to primary amine groups present in the peptide backbone. The synthesis of [18F]SFB was first reported in 1992 by Vaidyanathan and Zalutsky [21]. The radiosynthesis comprised a three-step procedure based on n.c.a. [18F]fluoride incorporation into 4-formyl-N,N,N-trimethylanilinium triflate, followed by oxidation to form 4-[18F]fluorobenzoic acid and subsequent dicyclohexylcarbodiimide (DCC) activation to yield [18F]SFB. The total synthesis time was 100 min, and the radiochemical yield was 25%. [18F]SFB was first used for the radiolabeling of a monoclonal antibody F(ab')2 fragment. In the following years, the acylation agent [18F]SFB became one of the most important and frequently used prosthetic groups for peptide labeling with 18F. The synthesis route of [18F]SFB was constantly improved over time. However, amine-directed prosthetic groups pose the distinctive challenge of achieving site-selective conjugation to the peptide of interest. Radiolabeling of peptides with 18F on-resin represents an interesting way to introduce prosthetic groups such as 4-[18F]fluorobenzoic acid [27] or [18F]SFB [28] selectively at the N-terminal amine group prior to cleavage of the peptide from the resin. However, reported procedures for on-resin peptide labeling via [18F]SFB are time-consuming, with total synthesis times over 130 min, while providing only low to moderate radiochemical yields of 5%-16%. Typical overall radiochemical yields for in-solution peptide labeling with [18F]SFB are reported to be in the range of 30%-46% [22,29]. Recently, radiochemical yields for solid-phase peptide conjugation using 4-[18F]fluorobenzoic acid could be increased to 35%-64%, depending on the solid support and cleavage conditions [30]. Also, 18F-fluoropropionic acid ([18F]FPA) has been employed as an alternative to 4-[18F]fluorobenzoic acid to radiolabel peptides on solid support, since 18F-FPA may not alter size and lipophilicity as much as the aromatic 18F-FBA [31,32]. Synthesis times were above 171 min, generating 18F-FPA-peptides conjugated selectively to either the N-terminus or the Lys side chain in radiochemical yields of 3%-10%. Also, various prosthetic groups based on thiol-reactive 18F-labeled maleimides (Figure 1) have been developed to address the challenge of chemoselectivity, since maleimides undergo site-specific reactions with sulfhydryl groups according to a Michael addition. Hence, cysteine-containing peptides, whether naturally occurring or modified with a cysteine residue, are suitable for this radiolabeling approach. Prominent examples of sulfhydryl-reactive maleimide-based prosthetic groups include N-[2-(4-[18F]fluorobenzamido)ethyl]maleimide ([18F]FBEM) (Figure 1).

Concept 2: 18F-Radiolabeling of Peptides via [18F]Fluoride Acceptor Chemistry

The [18F]fluoride acceptor chemistry represents a direct and elegant labeling method for peptides with fluorine-18, exploiting the formation of stable Si-18F, B-18F or Al-18F bonds. The radiolabeling proceeds through an isotopic exchange reaction of 19F against 18F at the acceptor moiety. The strong nature of the Si-F bond prompted the investigation of [18F]fluoride substitution at organosilicon synthons (SiFA, silicon-fluoride-acceptor) and modified peptides [37]. Positive attributes like small precursor amounts (μg range) and high specific activity illustrate the advantages of this reaction.
However, a challenge is the in vivo hydrolytic stability of 18F-organosilicon compounds, which depends on the substitution pattern of the silicon moiety [38]. Hydrolytic degradation can be significantly reduced by the introduction of bulky substituents like tert-butyl groups at the silicon moiety. However, bulky substituents like tert-butyl groups drastically increase the lipophilicity of the peptide and result in high intestine, liver and gall bladder uptake, as demonstrated by Hoehne et al. using several 18F-labeled organosilicon-bombesin derivatives [39]. The introduction of hydrophilic spacers like PEG and carbohydrates into 18F-SiFA-tagged bombesin and RGD derivatives partially compensated for the lipophilic nature and therefore reduced logD values, as demonstrated for the 18F-labeled SiFA-LysMe3-γ-carboxy-d-Glu-RGD peptide [40]. Recently, the development of the 18F-SiFA approach, including its application for peptide radiolabeling, has been extensively reviewed [41]. The Perrin group studied boron-18F acceptor chemistry as an alternative approach, which led to the development of [18F]aryltrifluoroborate ([18F]ArBF3) bioconjugates. In 2011, they reported the radiolabeling of a boronic acid ester-modified marimastat peptide for molecular imaging of matrix metalloproteinases in breast cancer [42]. Isolated radiochemical yields were only 2%-4%. A special challenge of this methodology is the need to work in low reaction volumes of about 1.5 μL. Recently, Perrin and colleagues improved the initial reaction conditions by replacing bulky aryltrifluoroborates with alkylammoniomethyltrifluoroborate (AMBF3) groups. Octreotate decorated with AMBF3 was subjected to an 18F-19F isotopic exchange reaction using n.c.a. [18F]fluoride to yield the corresponding 18F-labeled peptide within 25 min, including C18-SPE purification. Radiochemical yields were in the range of 20%-25%, and the specific activity was determined to be 111 GBq/μmol [43]. The repertoire of high specific activity 18F-labeled peptides based on 18F-B acceptor chemistry could successfully be extended to trimeric RGD peptides and a dual-mode fluorescent dimeric RGD peptide [44]. The Al-18F acceptor chemistry method combines convenient chelator-based radiolabeling using minute amounts of peptide (nmol range) with the favorable physicochemical characteristics of 18F. McBride et al. pioneered the radiolabeling of a hapten peptide with 18F according to this method. The reported uncorrected radiochemical yield was 5%-20%. An aqueous AlCl3 solution in sodium acetate buffer (pH 4) was mixed with cartridge-purified [18F]fluoride to give the Al-18F complex, which was reacted with the NOTA-functionalized peptide for 15 min without the need for further purification [45]. Conventional time-consuming azeotropic drying of [18F]fluoride was not necessary. The Al-18F complex is stable, and no defluorination was observed in vivo. The portfolio of Al-18F-labeled peptides reported in the literature is mostly based on octreotide [46], a dimeric cyclic RGD peptide (E[c(RGDyK)]2) [47,48] and bombesin [49,50]. Figure 2 gives an overview of a selection of prominent Si-18F, B-18F, and Al-18F building blocks for peptide radiolabeling via [18F]fluoride acceptor chemistry.

Concept 3: Click Chemistry for Radiolabeling of Peptides with Fluorine-18

Click chemistry is defined as a bioorthogonal, high-yielding, fast and chemo- and stereoselective reaction.
Over the last decade, click chemistry has become a powerful and versatile synthesis approach in radiopharmaceutical chemistry [51][52][53]. The term click chemistry was coined by Sharpless, and it initially referred to the 1,3-dipolar Huisgen cycloaddition, which is characterized by the formation of a triazole moiety through the copper(I)-catalyzed reaction of an alkyne with an azide [54]. Historically, the copper(I) catalyst was generated in situ from Cu(II) sulfate. More recently, copper(I) salts such as CuI or CuBr have been used directly [55]. The click chemistry methodology can also be considered a prosthetic group approach. Due to its advantageous reaction conditions involving fast, chemo- and regioselective reactions in aqueous media, click chemistry has also been exploited for 18F-radiolabeling of peptides. Marik and Sutcliffe pioneered this reaction in the field of radiopharmaceutical chemistry. They performed Cu(I)-mediated click reactions between azidopropionic acid-decorated model peptides and various ω-[18F]fluoroalkynes. The reactions proceeded within 10 min in excellent radiochemical yields of 55%-99% [56]. Subsequent work afforded an 18F-labeled triazole-peptide derivative at room temperature in 95% radiochemical yield within 15 min [59]. The first in vivo imaging approach utilizing an 18F-labeled peptide prepared via click chemistry was reported by Li et al. The 18F-PEGylated, alkyne-labeled RGD peptide dimer demonstrated better in vivo stability due to the multimeric nature of the peptide, which also led to more favorable tumor targeting efficiency [60]. A click-based 18F-carbohydration two-step synthesis procedure (including purification) for 18F-Galacto-RGD was reported by Maschauer et al. [61]. [18F]Fluoride acceptor chemistry was also combined with Cu(I)-mediated 1,3-dipolar cycloaddition, where an alkyne-modified 18F-aryltrifluoroborate anion was reacted with only microgram quantities of an azido-bombesin antagonist peptide [62]. Recently, inverse-electron-demand Diels-Alder reactions of electron-deficient tetrazines with ring-strained trans-cyclooctenes or norbornenes were explored as copper-free click chemistry approaches in radiopharmaceutical chemistry. These powerful reactions are very useful for innovative in vivo pretargeting approaches. Several strain-promoted reactions applying azide-decorated peptides (octreotate [63], the αvβ6 integrin-targeting peptide A20FMDV2 [64]) and 18F-labeled cyclooctyne species were reported as versatile novel bioconjugation tools. Click chemistry functionalities are interchangeable, as demonstrated for the reaction of an 18F-labeled aliphatic azide with a cyclooctene-modified bombesin peptide; the obtained radiochemical yield was 37% [65]. The very fast reaction of tetrazines with 18F-labeled trans-cyclooctene species led to the introduction of various tetrazine-functionalized peptides. Very small amounts of a tetrazine-functionalized RGD peptide (30 μg) were reacted with 18F-labeled trans-cyclooctene by Selvaraj et al. to afford the radiolabeled peptide in over 90% radiochemical yield within 5 min. PET imaging in U87MG-bearing mice revealed prominent tumor uptake of this copper-free click chemistry-generated 18F-labeled RGD derivative [66]. Among others, our group pioneered the application of copper-free click chemistry for the synthesis of a stabilized bombesin peptide functionalized with a tetrazine moiety [67].
Reaction of the tetrazine-functionalized bombesin with an [18F]SFB-derived norbornene derivative gave a dihydropyridazine-containing bombesin derivative as an alternative strain-promoted click chemistry methodology. The diversity of click chemistry reactions, combined with their simple, fast and chemoselective nature, equips (radio)chemists with a versatile chemistry toolbox for the production of 18F-labeled peptides with high potential for translation into clinical practice. Furthermore, click chemistry according to concept 3 also includes activation of peptides via aminooxy- or hydrazine-modification. Functionalized peptides can subsequently be radiolabeled with 4-[18F]fluorobenzaldehyde ([18F]FBA) or [18F]FDG to form the corresponding oximes or hydrazones. Click chemistry-related oxime and hydrazone formation represents another innovative tool for chemoselective bioconjugation reactions. These beneficial one-step, high-yielding syntheses (greater than 60%) require only small amounts of peptide in the sub-milligram scale.

Automated Synthesis of 18F-Labeled Peptides

Transition of 18F-labeled peptides into the clinic requires a radiosynthesis set-up which allows safe and reliable handling of large amounts of radioactivity to minimize radiation exposure. Automation and routine production preferentially require simple synthesis protocols with only a few reaction steps to yield the desired 18F-labeled radiotracer. In general, the synthesis, purification, analysis and formulation of the radiopharmaceutical should not exceed two half-lives of the radionuclide used. This also fully applies to the automated radiosynthesis of 18F-labeled peptides for clinical applications. Recent progress has led to the development of fully-automated, remotely-controlled radiosyntheses of 18F-labeled prosthetic groups and of semi-automated procedures for 18F-labeled peptides [25]. A short conference abstract by Marik et al. indicates the development of an automated on-resin synthesis for 18F-labeled peptides [26]. Peptides on solid support were radiolabeled with [18F]FBA and [18F]FPA and cleaved in a second step using a programmable automatic syringe pump equipped with an 8-port head, simulating a continuous flow synthesizer. Reported radiochemical yields were comparable to the manually performed 18F-labeling via SPPS, with crude yields above 90%. A recent report by Ackermann et al. points the way towards this goal, as a fully automated synthesis of the [18F]FBEM-labeled model peptide glutathione could be achieved in an iPHASE FlexLab synthesis module [72].

Application of Microfluidic Technology for 18F-Labeling of Peptides

Microfluidic technology represents a highly flexible and versatile approach addressing aspects of chemoselectivity, the required amount of peptide precursor, and synthesis time. Application of microfluidic devices allows for rapid synthesis of radiolabeled peptides in high radiochemical yields using only minute amounts of peptide precursor, making this technology a promising tool for the synthesis of 18F-labeled peptides as PET radiotracers for molecular imaging [73]. The fully-automated control of the microfluidic device supports safe handling of radioactivity. This novel technology is particularly advantageous for the radiolabeling of highly complex peptides, as it enables purer product formation and chemoselectivity.
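Part of the microreactor advantage described in this section, and illustrated by the phosphopeptide example below, is simply geometric: a narrow capillary channel has a far larger surface-to-volume ratio than a conventional reaction vial, which favors heat and mass transfer and contact with the channel wall. The dimensions in the following sketch are assumed, illustrative values, not those of the cited devices.

```python
def surface_to_volume(diameter_m):
    """Lateral surface area divided by volume for a long cylindrical liquid column: 4 / d."""
    return 4.0 / diameter_m

capillary_d = 200e-6   # assumed 200 um microreactor channel
vial_d = 8e-3          # assumed ~8 mm liquid column in a small reaction vial

sv_capillary = surface_to_volume(capillary_d)
sv_vial = surface_to_volume(vial_d)
print(f"capillary S/V: {sv_capillary:8.0f} per metre")
print(f"vial S/V:      {sv_vial:8.0f} per metre")
print(f"ratio:         {sv_capillary / sv_vial:.0f}x larger specific surface in the capillary")
```

A factor of several tens in specific surface is typical for such geometries and is consistent with the more efficient transfer of material and heat noted below.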
A prominent example is a cell-penetrating phosphopeptide containing several lysine and arginine residues, which gave a highly complex and difficult-to-purify reaction mixture when radiolabeling was performed with [18F]SFB under conventional labeling conditions in a small reaction vial. On the other hand, performing the labeling reaction in a microfluidic reactor predominantly led to reaction of [18F]SFB at the N-terminus of the peptide, which resulted in much cleaner product formation [74]. Radiochemical yields were increased to 26%, compared to 2% via the conventional radiolabeling procedure. Reaction times were reduced to 12 min, and peptide precursor amounts could also be significantly reduced. The improved chemoselectivity favoring acylation by [18F]SFB at the N-terminal end of the peptide can possibly be explained by masking of the arginine and lysine residues by the surface of the capillary-like microreactor. Moreover, the capillary design of the microreactor provides an enlarged specific surface area, which leads to more efficient transfer and exchange of material and heat in the course of the reaction [75]. Recently, microfluidic methodology was applied to the radiosynthesis of a clinically relevant octreotate (TATE) derivative, [18F]FDG-TATE [76]. The aminooxy-functionalized TATE derivative was labeled with [18F]FDG in high radiochemical yields of greater than 82%. Figure 4 depicts an outline of the microfluidic-based synthesis set-up to prepare [18F]FDG-TATE. Radioactivity levels relevant for the preparation of patient doses could also be applied. This result demonstrates the feasibility, in principle, of using microfluidic technology for the synthesis of 18F-labeled peptides for clinical applications.

Kit-Like Preparation of 18F-Labeled Peptides

"A next generation" of 18F-labeling methodology for peptides has recently been described using a "kit-like" labeling protocol analogous to the well-established kit preparation of 99mTc- and 188Re-labeled radiopharmaceuticals. This procedure allows for highly efficient and reproducible radiolabeling reactions, as required for clinical applications. Current efforts are directed towards the development of kit-like procedures exploiting 18F-SiFA and 18F-ArBF3 chemistry, as well as consideration of a recently developed simple and fast 18F drying method via anion exchange cartridges [77][78][79]. To date, only peptide labeling according to Al-18F chemistry has successfully been used in a true kit-like preparation. The formation of the Al-18F complex occurs in aqueous solution, eliminating time-consuming drying steps, and permits the use of USP-grade [18F]fluoride in saline [80]. Critical reaction parameters are pH (optimal: pH 4) and temperature (~100 °C). In vitro and in vivo stable Al-18F bonds can be generated by complexation with NOTA as the chelating agent [81]. The choice of the chelating agent is an important parameter. Another promising ligand for labeling with the (Al-18F)2+ species is NODA (1,4,7-triazacyclononane-1,4-diacetate), which lacks one acetic acid arm in comparison to NOTA. Shetty et al. reported consistently higher labeling efficiency for NODA compared to NOTA, suggesting an interference of the third carboxylic group with the binding of 18F-fluoride to aluminium.
Also, various NODA derivatives with carbonyl functions at least 3-4 carbons distant from the chelator, such as NODA-MPAA (methylphenylacetic acid), can be radiolabeled in high yields (>78%), as opposed to NODA derivatives having a carbonyl group adjacent to the chelator ring. Subsequent formation of 5- or 6-membered rings with NODA reduces the labeling yield [80,82]. Figure 5 gives an overview of the kit-like preparation of 18F-labeled peptides as reported in the literature by McBride et al. [83], in comparison to the dimeric RGD derivative alfatide reported by Wan et al. [84] (pictures of the kit vials were adapted from [83]). The scheme illustrates the simplicity of the 18F-Al chelation chemistry approach, which avoids HPLC purification. McBride and coworkers successfully developed a versatile and highly reproducible kit-labeling protocol for peptides, as exemplified with 18F-labeled NODA-MPAA and NOTA-modified hapten peptides. The kit-prepared 18F-labeled dimeric RGD peptide (18F-alfatide) reported by Wan et al. was introduced into the clinic, and its feasibility was demonstrated in lung cancer patients with squamous or adenomatous carcinoma. The simple handling of the lyophilized kits used for radiolabeling of the PRGD2 peptide affords ready-to-use 18F-AlF-NOTA-PRGD2 (18F-alfatide) in 42% radiochemical yield within 20 min, including C18-SPE purification [84]. More recently, attempts were made to adapt click labeling with 18F-ArBF3 into an easy-to-use "kit-like" procedure using cycloRGD peptides [78]. The overall synthesis time was 2 h, and the non-decay-corrected radiochemical yield was only 4%. The low specific activity obtained was due to the use of carrier-added [18F]KHF2. Further optimization is needed to apply this protocol to 18F-labeled peptides for subsequent clinical applications. Challenges and Trends in Peptide Receptor-Targeted Molecular Imaging Beyond the highly promising recent chemical developments towards the synthesis of clinically relevant 18F-labeled peptides, such as automation and kit-like preparation, additional pharmacological aspects of the use of 18F-labeled peptides in nuclear medicine have to be considered. Recently, two major "breakthroughs" have been published which have the potential to revolutionize the clinical application of 18F-labeled peptides in the future. One of them involves the peptide's interaction with its receptor. It was widely believed that peptides that act as receptor agonists are superior for optimal tumor targeting with high tumor uptake. Studies using the peptide receptor antagonist 177Lu-DOTA-sst2 demonstrated that more binding sites can be targeted merely by ligand-receptor interaction, instead of the subsequent internalization typical of receptor agonists. The work of Cescato et al. pinpoints this change in paradigm for radiolabeled peptides [85]. Furthermore, the absence of internalization and of the induction of second-messenger responses in the case of peptide receptor-based radiotherapy avoids pharmacological side effects. Another breakthrough is related to the recently introduced "to protect and serve" concept by Nock et al. [86]. This concept deals with the improvement of metabolic stability in vivo as a key element for successful tumor targeting with peptides in cancer.
It involves co-administration of protease inhibitors such as phosphoramidon with a range of unstabilized radiometal-labeled peptides (somatostatin, bombesin and minigastrin), instead of other, tedious classical synthetic stabilization methods for peptides, including multimerization. It was shown that the residence time of intact radiopeptides in the circulation was significantly extended when a protease inhibitor was co-injected. This resulted in significantly enhanced tumor uptake of the radiolabeled peptide in various mouse xenografts. This concept was first reported by Bergmann et al., who observed an enhanced half-life of a stabilized 18F-labeled neurotensin derivative in arterial rat blood in vivo when it was co-injected with the protease inhibitors thiorphan and bacitracin [87]. The scope and limitations of translating this concept into clinical practice need to be investigated in the future, since administration of pharmacological doses of protease inhibitors to patients has the potential to cause severe toxicological side effects. Summary and Conclusions In recent years, numerous radiolabeled peptides for diagnostic and therapeutic application in nuclear medicine have been designed and synthesized. This trend has stimulated the development of a multitude of innovative synthetic routes and technological advancements towards the preparation of 18F-labeled peptides. The majority of these procedures involve the incorporation of 18F via prosthetic groups. Click chemistry, as a versatile synthesis tool for bioconjugation, enjoys increasing popularity in the world of radiopharmaceutical chemistry. 18F-labeling of peptides using novel techniques such as microfluidic technology offers several advantages over conventional radiolabeling methods, resulting in shorter reaction times, more efficient radiochemistry, improved chemoselectivity, and more economical use of starting material. Developments towards efficient, fully-automated radiosyntheses of 18F-labeled peptides will further stimulate and inspire the field in the future, since automation represents a highly promising pathway for translating more 18F-labeled peptides into the clinic. Despite the successful and efficient automation of several 18F-prosthetic groups for peptide labeling in patient-relevant doses, a fully remotely-controlled synthesis procedure yielding 18F-labeled peptides starting from n.c.a. [18F]fluoride still awaits development. Thonon et al. were the first to produce up to 4 GBq of an 18F-labeled RGD peptide derivative on a GE FASTlab synthesis unit using a semi-automated procedure which still required manual addition of the peptide precursor into the system [25]. A fully automated synthesis of the model peptide glutathione with [18F]FBEM was reported in 2014 [26]. This report raises confidence that the first fully automated synthesis of an 18F-labeled peptide with clinical potential is only a minor step ahead. An appealing alternative to the preparation of 18F-labeled peptides via prosthetic group conjugation is [18F]fluoride acceptor chemistry. It benefits from the replacement of time-consuming, low-yielding multi-step synthesis procedures. Moreover, it is applicable to simple one-step, lyophilized kit-like preparation of 18F-labeled peptides. Successful implementation of this approach was reported very recently using Al-18F chemistry by McBride et al. and Wan et al. [83,84]. Hence, the Al-18F method represents the highlight procedure for the generation of 18F-labeled peptides in a clinical environment to date.
More kit-like approaches using silicon- and boron-[18F]fluoride chemistry to produce 18F-labeled peptides will probably follow soon. Recent innovative chemical and technological advancements, combined with recent important findings in radiopeptide pharmacology, will provide an efficient and elegant platform for the routine preparation and application of various 18F-labeled peptides in clinical research and practice in the near future. These developments will lead 18F-labeled peptides into a bright future.
Comparison of Intravenous Lignocaine and Dexmedetomidine for Attenuation of Hemodynamic Stress Response to Laryngoscopy and Endotracheal Intubation BACKGROUND The purpose of the present study was to evaluate the efficacy of intravenous lignocaine 1.5 mg / kg and intravenous dexmedetomidine 1 mcg / kg in attenuating the haemodynamic response to laryngoscopy and endotracheal intubation in patients undergoing elective surgery under general anaesthesia. METHODS In this prospective, randomised, comparative, clinical study, 60 patients were randomly divided into 2 groups: 30 patients were given an infusion of 1.5 mg / kg IV lignocaine, diluted to 10 ml with normal saline, 3 minutes before intubation, and 30 patients were given an infusion of dexmedetomidine 1 mcg / kg, diluted to 25 ml in normal saline, over 10 minutes through an infusion pump before induction. Heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, rate pressure product and oxygen saturation were measured at baseline, after the study drug, at intubation, and at L + 1, L + 3, L + 5, L + 7 and L + 10 minutes (L is the onset of laryngoscopy). Statistical analysis was done using descriptive and inferential statistics, with the chi-square test and Student's paired and unpaired t-tests, to assess the significance of five variables: mean heart rate (HR), mean systolic blood pressure (SBP), mean diastolic blood pressure (DBP), mean arterial pressure (MAP) and mean rate pressure product (RPP). RESULTS Dexmedetomidine provided better blunting of the stress response during laryngoscopy and intubation without causing clinically significant respiratory depression, bradycardia or hypotension. It is better at achieving a low RPP, which is a good predictor of myocardial oxygen consumption. Dexmedetomidine provides better cardio-protection against the pressor response than lignocaine. CONCLUSIONS In these 60 patients, dexmedetomidine (1 mcg / kg) was found to be superior to lignocaine (1.5 mg / kg) for attenuation of the pressor response. KEY WORDS Laryngoscopy, Endotracheal Intubation, Dexmedetomidine, Lignocaine, Rate Pressure Product BACKGROUND Safe airway management is an essential skill for an anaesthesiologist. Laryngoscopy and endotracheal intubation are the gold standard for securing the airway and delivering positive pressure ventilation. Direct laryngoscopy has been used for many years as a conventional and routine method to facilitate this procedure. Laryngoscopy and endotracheal intubation are required for most patients undergoing operations under general anaesthesia and are invariably associated with certain cardiovascular changes, such as tachycardia (on average 20 %) or bradycardia, a rise in blood pressure (> 30 %) and a wide variety of cardiac arrhythmias. Reid and Bruce in 1940 1 and King and Harris in 1951 2 described the circulatory response to laryngeal and tracheal stimulation as reflex sympathoadrenal stimulation and showed that the sympathetic reflex is provoked by stimulation of the epipharynx and larynx. The response is transient, occurring 30 seconds after intubation and lasting for less than 10 minutes. The tachycardic and hypertensive responses to laryngoscopy and intubation are short-lived and of little consequence in normotensive patients, but may prove hazardous in patients with medical problems such as hypertension, ischemic heart disease, thyrotoxicosis and cerebrovascular disease, where the circulation is already jeopardised.3
Even a moderate increase in heart rate (15 %) is accompanied by a decrease in coronary perfusion pressure (17 %). In such cases, acute left ventricular failure, acute myocardial ischemia and cerebral haemorrhage may occur. These changes may be fatal, and sudden deaths have also been reported in patients with hypertension, ischemic heart disease and cerebrovascular disease. Haemodynamic stability is an integral and essential goal of the anaesthetic management plan. Various methods and drugs 4 have been used to attenuate the response to laryngoscopy and intubation, including deep inhalation anaesthesia and various intravenous agents. Intravenous (IV) lignocaine is one of the oldest, cheapest and most easily available drugs used for attenuation of the hemodynamic response to laryngoscopy and intubation. Dexmedetomidine, introduced for human use in 1999, is a newer selective alpha-2 adrenergic agonist with 8 times more affinity for alpha-2 adrenoceptors than clonidine. Pre-treatment with dexmedetomidine attenuates the haemodynamic response to laryngoscopy and intubation. The present study was undertaken to compare the efficacy of 1.5 mg / kg of IV lignocaine and 1 mcg / kg of IV dexmedetomidine infusion in attenuating the hemodynamic response to laryngoscopy and intubation. Objectives To evaluate the degree of cardiovascular response evoked by laryngoscopy and endotracheal intubation, and to compare the effectiveness of IV lignocaine 1.5 mg / kg and IV dexmedetomidine 1 mcg / kg as premedication in attenuating this cardiovascular response (in terms of heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, rate pressure product and oxygen saturation (SpO2)) in patients undergoing elective surgeries under general anaesthesia. METHODS This was a prospective, randomised, comparative, clinical study conducted from 12th February 2018 to 10th February 2019. After approval by the institutional review board (IRB) committee, and with written informed consent obtained from the patients, 60 patients (groups L and D) belonging to American Society of Anesthesiologists (ASA) grade I / II who satisfied the inclusion criteria and were scheduled for surgery under general anaesthesia were included in this study. The sample size was calculated on the basis of previous studies. Exclusion criteria were as follows: • Patients with anticipated difficult intubation. • Patients with a history of sensitivity to the drugs used in the study. • Patients not willing to give consent. Study Protocol Preoperative Assessment A detailed history and physical and systemic examination of all patients were done on the day prior to the operation. Laboratory investigations such as CBC, renal function tests, liver function tests, blood sugar, serum electrolytes, urine analysis, chest X-ray and electrocardiogram (ECG) were reviewed. The nature of the study and the procedure were explained to the patient. Written informed consent was taken from the patient. Preoperative Preparation All patients were kept nil by mouth for at least 6 hrs before surgery. In the operation theatre, an intravenous line was secured; a pulse oximeter, non-invasive BP and ECG were attached; and baseline readings were taken. All patients were premedicated with Inj. glycopyrrolate 0.004 mg / kg IV and Inj. ondansetron 0.08 mg / kg IV before preoxygenation. All patients were randomly allocated into 2 groups. Randomisation was done using computer-generated random numbers contained in opaque sealed envelopes. Each group comprised thirty patients.
• Group L: infusion of 25 ml of plain normal saline over 10 minutes through an infusion pump before induction, followed by 1.5 mg / kg IV lignocaine diluted to 10 ml with normal saline 3 minutes before intubation. • Group D: infusion of dexmedetomidine 1 mcg / kg diluted to 25 ml in normal saline over 10 minutes through an infusion pump before induction, followed by 10 ml of normal saline 3 minutes before intubation. All patients were preoxygenated with 100 % oxygen. The respective study drug was administered as described above. Patients were induced with Inj. thiopentone sodium 6 mg / kg IV and Inj. suxamethonium 2 mg / kg IV and intubated with an endotracheal tube; anaesthesia was maintained with O2 (50 %), N2O (50 %), sevoflurane and Inj. vecuronium bromide 0.08 mg / kg. HR, SBP, DBP, MAP and SpO2 were monitored, and RPP was calculated. All parameters were recorded at the following stages: • Baseline • After study drug • At intubation • L + 1 (1 minute after laryngoscopy) • L + 3 (3 minutes after laryngoscopy) • L + 5 (5 minutes after laryngoscopy) • L + 7 (7 minutes after laryngoscopy) • L + 10 (10 minutes after laryngoscopy) At the end of surgery, residual neuromuscular blockade was reversed with Inj. glycopyrrolate 0.008 mg / kg IV and Inj. neostigmine 0.05 mg / kg IV. Extubation was carried out when the patient had adequately recovered from the effect of the neuromuscular blockade, with a regular breathing pattern, good muscle tone / power and haemodynamic stability, and was able to respond to verbal commands. Statistical Analysis Statistical analysis was done using descriptive and inferential statistics, with the chi-square test and Student's paired and unpaired t-tests, to assess the significance of the five variables: mean heart rate, mean systolic blood pressure, mean diastolic blood pressure, mean arterial pressure and mean rate pressure product. Data analysis was carried out using a Microsoft Excel spreadsheet and online software. RESULTS This study was conducted to evaluate and compare the efficacy of intravenous lignocaine and dexmedetomidine in attenuation of the haemodynamic response to endotracheal intubation. A total of sixty patients of either gender, belonging to ASA grade I or II, were selected for the study and divided into two groups of 30 patients each. Patient Demographics [Table 1: demographic details (patients' data, group L, group D, P-value)] Heart rate in both groups at baseline was comparable, and there was no statistical difference between them (P-value > 0.05). Heart rate after infusion of the study drug and during laryngoscopy and intubation was statistically different between the dexmedetomidine and lignocaine groups and significant (P-value < 0.05). Heart rate at 1, 3, 5, 7 and 10 minutes after laryngoscopy was statistically different between the dexmedetomidine and lignocaine groups and highly significant (P-value < 0.0001). Systolic blood pressure in both groups at baseline was comparable, and there was no statistical difference between them (P-value > 0.05). The difference in systolic blood pressure between the two drugs after infusion of the study drug was statistically significant (P-value < 0.05). During laryngoscopy and intubation and at 1, 3, 5, 7 and 10 minutes thereafter, the comparison of SBP between the dexmedetomidine and lignocaine groups was statistically different and highly significant (P-value < 0.0001). Diastolic blood pressure in both groups at baseline was comparable, as there was no statistical difference between them (P-value > 0.05).
After infusion of the study drug, and at laryngoscopy and intubation and 1, 3, 5, 7 and 10 minutes thereafter, DBP in the dexmedetomidine and lignocaine groups was statistically different and significant (P-value < 0.05). Mean arterial pressure in both groups at baseline was comparable, as there was no statistical difference between them (P-value > 0.05). After study drug infusion, laryngoscopy and intubation, at the 7th minute MAP in the dexmedetomidine and lignocaine groups was statistically different and significant (P-value < 0.05), whereas at 1, 3, 5 and 10 minutes after laryngoscopy and intubation MAP in the two groups was statistically different and highly significant (P-value < 0.0001). Rate pressure product in both groups at baseline was comparable, as there was no statistical difference between them (P-value > 0.05). After study drug administration, the comparison of rate pressure product between the dexmedetomidine and lignocaine groups was statistically different (P-value < 0.05). During laryngoscopy and intubation and thereafter at 1, 3, 5, 7 and 10 minutes, the comparison of rate pressure product between the dexmedetomidine and lignocaine groups was statistically different and highly significant (P-value < 0.0001). DISCUSSION This prospective study was conducted to determine whether dexmedetomidine, a newer α2-agonist, is more effective than the conventionally used agent lignocaine in attenuating the hemodynamic response to laryngoscopy and endotracheal intubation. The pre-operative HR and BP of the two groups showed no significant difference (P > 0.05). After infusion of dexmedetomidine, there was a fall in HR and BP in the study group. Drug and Dosage In 1985, Stanley Tam 9 concluded that 1.5 mg / kg lignocaine attenuates the increase in BP and HR only when given 3 min before intubation. We used the same dosage of lignocaine in our study. In 2014, Raval DL et al. 10 conducted a comparative study of two different doses of dexmedetomidine (0.5 mcg / kg and 1 mcg / kg) on the haemodynamic responses to induction of anaesthesia and tracheal intubation and concluded that dexmedetomidine 1 mcg / kg was more effective than 0.5 mcg / kg. We used 1 mcg / kg of dexmedetomidine in our study. Patient Demographics There was no statistically significant difference in mean age (years), weight (kg) or gender distribution between the two groups (P > 0.05). (Table 1) Hemodynamic Parameters Heart Rate (Table 2) The results of our study are as follows: • HR of both groups at baseline showed no statistical difference between them (P > 0.05). • Following administration of the study drugs, there was a fall in HR in both groups, group L (-3.4 %) and group D (-10.1 %). The fall in HR was reduced to (-1.0 %) in group L and to (-7.7 %) in group D during laryngoscopy and intubation, which was statistically significant (P < 0.05). • At L + 1, there was a maximum rise in HR from baseline in both groups, greater in group L (16.6 %) than in group D (0.8 %), which was statistically highly significant (P < 0.0001). • At L + 3, L + 5 and L + 7, HR in both groups fell from its L + 1 value. However, there was a rise in HR from baseline in group L (12.5 %, 6.1 % and 0.8 %) and a fall from baseline in group D (-1.7 %, -8.8 % and -13.0 %), respectively, which was statistically highly significant (P < 0.0001).
• At L + 10, HR in both groups was below baseline, (-3.2 %) in group L and (-16.4 %) in group D; the difference was statistically highly significant (P < 0.0001). The results of our study are similar to those of the following: 1. Malde A et al. 11 concluded that lignocaine and fentanyl both attenuated the rise in pulse rate. In our study also, HR was reduced after administration of lignocaine, which correlates with their study. 2. Gangappa RC et al. 8 conducted a clinical study of intravenous dexmedetomidine (1 mcg / kg) versus lignocaine (1.5 mg / kg) premedication for attenuation of the haemodynamic responses to laryngoscopy and endotracheal intubation and concluded that dexmedetomidine is more effective than lignocaine. Systolic Blood Pressure (Table 3) The results of our study are as follows: • SBP of both groups at baseline was comparable, and there was no statistical difference between them (P > 0.05). • Following administration of the study drugs, there was a fall in SBP in both groups, group L (-0.7 %) and group D (-5.4 %), which was statistically significant (P < 0.05). • SBP increased by (1.6 %) in group L and was reduced by (-4.2 %) in group D from baseline during laryngoscopy and intubation. This difference was statistically highly significant (P < 0.0001). • At L + 1, there was a maximum rise in SBP from baseline in both groups, greater in group L (17.2 %) than in group D (5.3 %), which was statistically highly significant (P < 0.0001). • At L + 3, L + 5 and L + 7, SBP in both groups fell from its L + 1 value. However, it was still above baseline in group L (12.0 %, 6.0 % and 1.2 %), respectively, while in group D it was above baseline at L + 3 (3.0 %) and decreased below baseline at L + 5 and L + 7 (-1.0 % and -5.0 %), respectively, which was statistically highly significant (P < 0.0001) in favour of dexmedetomidine. • At L + 10, SBP in both groups was below baseline, (-3.2 %) in group L and (-8.6 %) in group D; the difference was statistically highly significant (P < 0.0001). The results of our study are similar to those of the following: 1. J.M. Campbell et al. 12 showed that IV lignocaine 1.5 mg / kg offered complete attenuation of the post-intubation rise in HR and arterial BP when given 3 min prior to intubation. 2. Singh G et al. 13 concluded that dexmedetomidine (1 mcg / kg) was more effective in blunting the SBP response to laryngoscopy and endotracheal intubation than lignocaine (1.5 mg / kg), which correlates with our study. Diastolic Blood Pressure (Table 4) The results of our study are as follows: • DBP of both groups at baseline was comparable, and there was no statistical difference between them (P > 0.05). The results of our study are similar to those of the following: 1. Keniya VM et al. 14 showed that 1 mcg / kg dexmedetomidine effectively attenuated the pressor response to laryngoscopy and subsequent intubation when the dexmedetomidine group was compared to a control group. After intubation, the maximal average increase was 8 % in SBP and 11 % in DBP, compared to 40 % and 25 %, respectively, in the control group, which correlates with our study. 2. Surabathuni S et al. concluded that dexmedetomidine 1 mcg / kg and lignocaine 1.5 mg / kg were both effective in blunting the DBP response to intubation, but dexmedetomidine was superior in blunting the haemodynamic response to laryngoscopy and endotracheal intubation, which correlates with our study.
Mean Arterial Pressure (Table 5) The results of our study are as follows: • MAP of both groups at baseline was comparable, with no statistical difference between them (P > 0.05). • ... below baseline in group D (-2.6 %), which was statistically highly significant (P < 0.0001). The results of our study are similar to those of the following: 1. Samala S et al. 15 concluded that dexmedetomidine 1 mcg / kg was superior to lignocaine 1.5 mg / kg in blunting the MAP response to laryngoscopy and endotracheal intubation without any significant side effects, which correlates with our study. 2. Kalakeri et al. 16 showed that dexmedetomidine (1 µg / kg) attenuates MAP compared to the basal value after drug infusion and before induction; MAP remained lower than the baseline value even after intubation. These findings correlate with our study. Rate Pressure Product (Table 6) The results of our study are as follows: • RPP of both groups at baseline was comparable, with no statistical difference between them (P > 0.05). • Following administration of the study drugs, there was a fall in RPP in both groups, group L (-0.3 %) and group D (-15.0 %). This difference was statistically significant (P < 0.05). • During laryngoscopy and intubation, RPP increased above baseline in group L (4.5 %), but in group D RPP was still below baseline (-11.5 %), which was statistically highly significant (P < 0.0001). • At L + 1 and L + 3, there was a maximum rise in RPP from baseline in both groups, greater in group L (41.9 % and 30.9 %) than in group D (6.3 % and 1.3 %), respectively, which was statistically highly significant (P < 0.0001). • At L + 5 and L + 7, RPP was still above baseline in group L (17.0 % and 6.0 %) and below baseline in group D (-9.6 % and -17.4 %), respectively, which was statistically highly significant (P < 0.0001). • At L + 10, RPP in both groups was below baseline, in group L (-2.5 %) and in group D (-23.5 %); the difference was statistically highly significant (P < 0.0001). The results of our study are similar to those of the following: 1. Kapdi MS et al. 17 studied dexmedetomidine (1 mcg / kg) infused over 10 mins through an infusion pump and found that RPP decreased from baseline at 0, 1, 2, 3, 4 and 5 mins in the dexmedetomidine group. 2. Sale HK et al. 18 and Gangappa RC et al. 8 concluded that the efficacy of dexmedetomidine (1 mcg / kg) in attenuation of the pressor response to laryngoscopy and intubation, compared to lignocaine (1.5 mg / kg), was significantly higher in ASA I and II patients with respect to HR, SBP, DBP and MAP, which correlates with our study. SpO2: all patients in each group had normal arterial saturation throughout the study. ECG: in the present study, no abnormal changes were recorded in any patient throughout the procedure; Burstein CL et al. 19 reported similar findings. CONCLUSIONS Dexmedetomidine provides better blunting of the stress response during laryngoscopy and intubation without causing clinically significant respiratory depression, bradycardia or hypotension. It is better at achieving a low RPP, which is a good predictor of myocardial oxygen consumption. Dexmedetomidine provides better cardio-protection against the pressor response than lignocaine. In conclusion, dexmedetomidine (1 mcg / kg) was found to be superior to lignocaine (1.5 mg / kg) for attenuation of the pressor response. Limitations We did not measure plasma catecholamine levels, which are an objective means of measuring the hemodynamic stress response.
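As a concrete illustration of the two quantitative tools used throughout these results, the rate pressure product (RPP = HR × SBP) and the unpaired Student's t-test, the following Python sketch applies them to made-up readings; the numbers are hypothetical and are not data from this study.

```python
import numpy as np
from scipy import stats

def rate_pressure_product(hr_bpm, sbp_mmHg):
    """RPP = heart rate x systolic blood pressure, an index of myocardial oxygen demand."""
    return hr_bpm * sbp_mmHg

def percent_change(value, baseline):
    return 100.0 * (value - baseline) / baseline

# Illustrative readings (not study data): baseline vs. 1 min after laryngoscopy
rpp_base = rate_pressure_product(80, 120)   # 9600
rpp_L1 = rate_pressure_product(95, 140)     # 13300
print(f"RPP rose {percent_change(rpp_L1, rpp_base):.1f}% from baseline")

# Between-group comparison with the unpaired (independent-samples) t-test,
# mirroring the inferential analysis described in the Methods:
group_L = np.random.default_rng(0).normal(13000, 1500, 30)  # hypothetical RPP, n = 30
group_D = np.random.default_rng(1).normal(10000, 1200, 30)
t_stat, p_value = stats.ttest_ind(group_L, group_D)
print(f"t = {t_stat:.2f}, P = {p_value:.4g}")
```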
Data sharing statement provided by the authors is available with the full text of this article at jemds.com. Financial or other competing interests: None. Disclosure forms provided by the authors are available with the full text of this article at jemds.com.
Finite Temperature Effects in a One-dimensional Mott-Hubbard Insulator: Angle-Resolved Photoemission Study of Na0.96V2O5 We have made an angle-resolved photoemission study of the one-dimensional (1D) Mott-Hubbard insulator Na0.96V2O5 and found that the spectra of the V 3d lower Hubbard band are strongly dependent on temperature. We have calculated the one-particle spectral function of the one-dimensional t-J model at finite temperatures by exact diagonalization and compared it with the experimental results. Good overall agreement is obtained between experiment and theory. The strong finite-temperature effects are discussed in terms of the existence of the "Fermi surface" of the spinon band. The most striking and non-trivial theoretical prediction for 1D strongly correlated systems is spin-charge separation [1]: the degrees of freedom of an electron are decoupled into elementary excitations of spin and charge called "spinon" and "holon", respectively. Angle-resolved photoemission spectroscopy (ARPES) is a powerful technique for studying spin-charge separation: Kim et al. [2] performed pioneering ARPES work on the 1D charge-transfer insulator SrCuO2 and found that the spectra agree well with the theoretical one-particle spectra of the 1D t-J model with realistic parameters, identifying such spinon and holon excitations. We subsequently investigated whether the same scenario is applicable to the 1D Mott-Hubbard-type insulator NaV2O5 [3]. Strongly correlated systems are often characterized by the presence of a characteristic low energy scale in spite of the large energy scales of the bare interaction strengths. For the Hubbard model, the relevant low energy scale is set by the superexchange interaction J ∼ 4t²/U (≪ t, U) rather than by the bare interaction parameters, the transfer integral t and the on-site Coulomb repulsion U. Since the photoemission spectrum is a projection of the initial state onto the set of final states, drastic finite-temperature effects may be expected for a temperature change of the order of such a characteristic low energy scale. Finite-temperature effects can be particularly drastic in 1D systems [4] because of the existence of the "Fermi surface" of spinon excitations. In this Letter, we present the results of a temperature-dependent ARPES study of NaV2O5. While severe charging effects had previously prevented measurements of NaV2O5 below ∼ 300 K [3], the more conductive Na0.96V2O5 [5] enabled us to obtain ARPES spectra at temperatures as low as ∼ 120 K. Dispersive features of V 3d character in the lower Hubbard band were found to be dramatically dependent on temperature. In addition, we have made a comparison with theory and confirmed that the observed finite-temperature effect is due to strong correlation effects rather than simple thermal broadening. Two pictures have been proposed for the electronic structure of this compound: in the former, it can no doubt be regarded as a half-filled chain [7], while in the latter it is viewed as a quarter-filled ladder system [8,9]. At low temperatures, around or below its spin-Peierls-like (SP) transition temperature TSP ∼ 34 K [7], the difference between these two models is significant in terms of the charge-ordering pattern. On the other hand, the magnetic susceptibility χ(T) well above TSP is successfully fitted by the Bonner-Fisher curve with J ∼ 560 K [7], indicating that NaV2O5 behaves as a good 1D antiferromagnetic Heisenberg chain in this temperature region.
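These energy scales can be checked with a two-line calculation. The snippet below (Python, using SciPy's physical constants; the input values are those quoted in the text) converts J ∼ 560 K to electron-volts and reproduces the T/J ratios used for the measurements.

```python
from scipy.constants import e, k  # electron charge (C) and Boltzmann constant (J/K)

J_kelvin = 560.0                  # exchange coupling from the Bonner-Fisher fit
J_eV = k * J_kelvin / e           # ~0.048 eV, i.e. J ~ 0.05 eV as assumed later
print(f"J = {J_eV * 1e3:.1f} meV")

for T in (120.0, 300.0):          # measurement temperatures
    print(f"T = {T:5.1f} K -> T/J = {T / J_kelvin:.2f}")  # 0.21 and 0.54, as quoted
```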
Indeed, it is theoretically supported that this compound can be mapped onto the 1D Heisenberg chain [8]. Except that the SP transition is suppressed [5], the Na-deficient Na0.96V2O5 has almost the same magnetic properties as NaV2O5 [10]. Though remaining an insulator, Na0.96V2O5 is more conductive than NaV2O5 due to the doped holes. Single crystals of Na0.96V2O5 were prepared as reported in Ref. [11]. They could be easily cleaved parallel to the ab plane. The ARPES measurements were made using the He I resonance line (hν = 21.2 eV) and a hemispherical analyzer with an angular resolution of ±1° and an energy resolution of 80-100 meV. The measurement temperature ranged from T = 120 K (= 0.21J) to room temperature, 300 K (= 0.54J). The measurements were performed for several in situ cleaves, for which we carefully cycled the temperatures of cleavage and measurement in order to exclude extrinsic effects such as surface degradation and contamination. Before discussing the V 3d band features of our main interest, we note that the O 2p band structure was found to be very anisotropic: ARPES spectra with momentum parallel to the b-axis (k ∥ b) show rich dispersive features, while those with momentum parallel to the a-axis (k ⊥ b) show no dispersion, supporting the one-dimensionality of this compound. These results agree well with those for stoichiometric NaV2O5 [3], indicating that the Na deficiency has no appreciable influence on the O 2p band structure. Furthermore, the O 2p spectra show no obvious temperature-dependent changes, in clear contrast to the remarkable changes of the V 3d band described below. In addition, the absolute intensity at the Γ point was found to be less than half of that at φ > 10°. These observations can be explained by the facts that the occupied V 3d orbital has xy symmetry, lying approximately in the ab-plane [12], and that normal emission from this orbital is therefore forbidden by selection rules [13]. We therefore conclude that the temperature dependence results from intrinsic finite-temperature effects of the V 3d_xy band [14] and that the momentum dependence along the k ⊥ b direction is due to matrix-element effects of the d_xy orbital. Figure 1(d) shows the results for the k ∥ b cut. In order to avoid the Γ point, where matrix-element effects prohibit emission from d_xy, the k_a value was slightly offset from the b-axis, as shown in Fig. 1(b). The spectra obtained at 120 K show rich dispersing features. In going from k_b = 0 to π/2, a single peak centered at E_B = 0.9 eV splits into two features: the splitting becomes largest at k_b = π/2, with the two features located at E_B ∼ 0.7 and ∼ 1.4 eV. The 0.7 eV peak then decreases in intensity in going from k_b = π/2 to π, and only a single broad peak is left at E_B ∼ 1.1 eV at the BZ boundary k_b ∼ π. The k_b-dependence of the spectra between k_b = 0 and 2π is almost symmetric with respect to k_b = π, which excludes significant changes in the photoemission matrix elements between the first and second BZs. [Figure 1 caption, fragment: (c) spectra for the k ⊥ b cut and (d) those for the k ∥ b cut. Solid and dashed curves show spectra taken at 120 K and 300 K, respectively. Each spectrum is normalized to its area, and the absolute intensities of the spectra near θ = φ = 0° are much weaker than the others.] By contrast, the 300 K spectra, which agree well with the previous report [3,15], show less pronounced features than the 120 K spectra.
The spectra at k_b = 0 and π become broader, with longer tails towards high binding energies. As for the spectra at k_b ∼ 0.5π, the peak located at E_B ∼ 0.7 eV becomes weaker and that at ∼ 1.4 eV stronger in going from 120 K to 300 K. We also performed ARPES measurements at 200 K and confirmed that the changes are gradual as a function of temperature. These observations are more clearly recognizable in the intensity plots (b) and (c) of Fig. 2. Noting that the finite-temperature effects strongly depend on the momentum (k ≡ k_b), they are clearly not due to simple thermal broadening or charging effects. The following two points may be remarked: (i) in the temperature range studied here, there is no phase transition in this compound that could give rise to such a dramatic change; (ii) the energy scale of the spectral change is not of order kT ∼ 0.03 eV but of order ∼ 1 eV. These phenomena are obviously beyond the conventional band picture and should reflect strong correlation effects. In order to interpret the above observations quantitatively, we have adopted the 1D t-J model and calculated the one-particle spectral function A(k, ω) at finite temperatures by the exact diagonalization method [16]. As the behavior of χ(T) well above TSP indicates, the t-J model is valid as one of the simplest models describing this system in the temperature region considered. It is applicable not only to the half-filled chain case [6,7] but also to the quarter-filled ladder case, where each d electron is localized on a rung of two V atoms [8,9]. In the ladder case, only the half-filled bonding band is taken into account, because no electrons occupy the antibonding band [8]. The quarter-filled ladder can be mapped onto the half-filled chain by retaining only the half-filled bonding band, neglecting the empty antibonding band, with an appropriate modification of the parameters t and J [8,17]. In addition, the t-J model has the advantage over the more realistic Hubbard model that the former can treat larger clusters, which is crucial for the discussion of finite-temperature effects. We have calculated the one-particle (removal) spectral function

A(k, ω) = (1/Z) Σ_{i,f} e^{−βE_i^N} |⟨Ψ_f^{N−1}| c_{kσ} |Ψ_i^N⟩|² δ(ω − E_i^N + E_f^{N−1}),

where Z = Σ_i e^{−βE_i^N} is the partition function, E_i^N is the energy of the i-th eigenstate |Ψ_i^N⟩ of the N-electron system, and c_{kσ} annihilates an electron with momentum k and spin σ. Results for a half-filled 14-site t-J cluster with J/t = 1/3 at T = 0, J/4 and J/2 are shown in Fig. 3 [18]. A(k, ω) at T = 0 can be intuitively interpreted as a convolution of spinon and holon excitations, whose dispersion widths are ∼ J and ∼ 2t, respectively. In this scenario, in the ground state, the holon band is empty while the spinon band is half-filled up to the Fermi momentum k = π/2 [2]. The lineshape of the "spinon branch" (see Fig. 3) is determined by the band-edge singularity of the holon band, whereas that of the "holon branch" is determined by the Fermi-edge singularity of the spinon band [19]. At finite temperatures, the spectral weight of the spinon branch at ω/t > 1 (ω/t < −2) is transferred from 0 < k < π/2 (π/2 < k < π) to π/2 < k < π (0 < k < π/2). At the same time, the intensity of the holon branch decreases. In fact, spectral weight is transferred from the spinon branch to a wide energy region, making the spectral features less pronounced and more symmetric with respect to k = π/2. Moreover, the singularity of the holon branch, which is due to the existence of the spinon Fermi surface, is easily smeared out over the entire energy range of 4t (≫ T ∼ J) at finite temperatures of order J [4]. In Fig. 2 we show a comparison between the experimental and theoretical spectra.
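The Lehmann sum written above lends itself to a compact numerical sketch. The function below is not the authors' exact-diagonalization code; it assumes the eigenenergies of the N- and (N−1)-electron sectors and the squared removal matrix elements have already been computed for a fixed momentum k, and it replaces the delta function with a Lorentzian of width η.

```python
import numpy as np

def finite_T_removal_spectrum(omega, E_N, E_Nm1, M2, beta, eta=0.05):
    """Evaluate A(k, omega) at inverse temperature beta from the Lehmann sum.

    omega : (nw,) frequency grid
    E_N   : (nI,) eigenenergies of the N-electron sector
    E_Nm1 : (nF,) eigenenergies of the (N-1)-electron sector
    M2    : (nI, nF) squared matrix elements |<f| c_k |i>|^2 at fixed k
    """
    boltz = np.exp(-beta * (E_N - E_N.min()))  # shift energies for numerical stability
    Z = boltz.sum()                            # partition function of the N-electron sector
    A = np.zeros_like(omega)
    for i, wi in enumerate(boltz):
        # poles sit at omega = E_i^N - E_f^(N-1), weighted by the Boltzmann factor
        poles = E_N[i] - E_Nm1
        A += wi * (M2[i] * (eta / np.pi)
                   / ((omega[:, None] - poles) ** 2 + eta ** 2)).sum(axis=1)
    return A / Z
```

At T = 0 only the ground state contributes (β → ∞), while at T ∼ J several low-lying states acquire comparable Boltzmann weights, which is what redistributes spectral weight across the zone.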
For this comparison, we have assumed that t = 3J = 0.15 eV, which is plausible because J = 4t²/U ∼ 0.05 eV with U = 2-4 eV in typical vanadium oxides [20]. Comparing (a)-(c) and (d)-(f), the overall agreement is satisfactory. In the low-temperature region (T = 120 K, or J/4), the experimental shift of the peak position between k_b = 0.0 and 0.95π may be attributed to the existence of the spinon branch, resulting in the asymmetry of the spectra with respect to k = π/2. Besides, between k_b = 0.32π and 0.72π there are two dispersing features, which may be assigned to the two holon branches, as reproduced in the theoretical spectra. These findings are also substantiated by the comparison between (b) and (e). Unlike the 1D cuprates, where the intense O 2p structure obscures the higher-binding-energy part of the holon and spinon branches [2], the whole structure of the theoretical A(k, ω) can be compared with the experimental results for this compound. In the high-temperature region (T = 300 K, or J/2), both results become broader and more symmetric with respect to k = π/2. As a result, the tendency of the experimental spectral-weight redistribution with temperature is grossly reproduced by the theory, as seen in the intensity plots in Fig. 2. Around k ∼ 0, the agreement is quite excellent. To this extent, the experimental finite-temperature effects may be attributed to the existence of the spinon Fermi surface, which theoretically causes the dramatic spectral redistribution over the entire E-k space with changing temperature. To be more precise, however, there exist some discrepancies between theory and experiment. When the temperature is increased from 120 K to 300 K, the spectra change more dramatically than theoretically predicted. Furthermore, while the temperature dependence is rather well simulated by theory at k_b ∼ 0-0.32π, at k_b ∼ 0.5π the feature around E_B = 0.7 eV in experiment loses much of its spectral weight in going from 120 K to 300 K, in disagreement with theory. In addition, around k_b ∼ π, in going from 120 K to 300 K the experimental spectra lose spectral weight at E_B < 0.8 eV and a longer tail develops on the high-binding-energy side, unlike in the theoretical spectra. At low temperatures, the decay of a photohole is probably dominated by purely electronic mechanisms, while other decay channels may become available at higher temperatures. As candidates, we may consider the electron-phonon interaction and possible charge disorder, which may become important at higher temperatures. The difference between the t-J model and the Hubbard model, as well as the degeneracy of the V 3d orbitals, might be another origin of the discrepancy. In conclusion, we have made an ARPES study of Na0.96V2O5 as a function of temperature and found that a strong spectral-weight redistribution occurs in the lower Hubbard band. We have also calculated the one-particle spectral function of the 1D t-J model at finite temperatures by the exact diagonalization method. The overall agreement between theory and experiment implies that the spin-charge separation picture is valid in this system. Although they are more drastic than the theoretical prediction, the experimental finite-temperature effects are partly explained by the theory and may be expressed as a "Fermi surface" effect of the spinon band. We would like to thank H. Suzuura, H. Shiba, K. Penc, C. Kim, N. Kawakami, T. Mutou, and D. van der Marel for informative discussions.
This work was supported by a Special Coordination Fund from the Science and Technology Agency of Japan. One of us (KK) is supported by a Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists.
SOLIS/VSM Polar Magnetic Field Data The Vector Spectromagnetograph (VSM) instrument on the Synoptic Optical Long-term Investigations of the Sun (SOLIS) telescope is designed to obtain high-quality magnetic field observations in both the photosphere and chromosphere by measuring the Zeeman-induced polarization of spectral lines. With 1″ spatial resolution (1.14″ before 2010) and 0.05 Å spectral resolution, the VSM provides, among other products, chromospheric full-disk magnetograms using the CaII 854.2 nm spectral line and both photospheric full-disk vector and longitudinal magnetograms using the FeI 630.15 nm line. Here we describe the procedure used to compute daily weighted averages of the photospheric radial polar magnetic field at different latitude bands from SOLIS/VSM longitudinal full-disk observations. Time series of these measurements are publicly available from the SOLIS website at http://solis.nso.edu/0/vsm/vsm_plrfield.html. Future plans include the calculation of the mean polar field strength from SOLIS/VSM chromospheric observations and the determination of the true radial polar field from SOLIS/VSM full-Stokes measurements. Introduction Polar field measurements are extremely important for several reasons: 1) they dominate the coronal structure over much of the solar cycle (except when the polar fields reverse); 2) polar magnetic flux plays a role in determining the properties and evolution of the heliospheric magnetic field; 3) the polar magnetic fields are thought to be the direct manifestation of the Sun's interior global poloidal fields, which serve as seed fields for the global dynamo that produces the toroidal fields responsible for active regions and sunspots; and 4) the polar regions are the source of the fast solar wind. However, measuring the polar field is difficult due to foreshortening effects at the solar limb as well as the intrinsic weakness of the field near the poles, and interpretation of these measurements is complicated by a number of factors, including the complexity of the polar magnetic landscape. Hinode observations of the polar regions have revealed patches of magnetic field with different spatial extents and distributions. While some are isolated, others form patterns like chains of islands. Many of these patches are coherently unipolar and have field strengths reaching above 1 kG. Their size tends to increase with latitude, up to about 5×5 arcseconds. All of the large patches have fields that are predominantly vertical relative to the local surface, while those of the smaller patches tend to be horizontal. If a radial correction is applied to line-of-sight (LOS) magnetograms, then the horizontal fields are incorrectly amplified by a strongly varying radial function. Depending on the distribution of the horizontal fields, this may lead to a sign bias and inaccurate flux on any given day.
Furthermore, for a given latitude, these effects will change with the B0 angle. Because of projection effects, polar measurements obtained at a favorable B0 angle (around March/September for the southern/northern solar hemisphere) will be less noisy than those from other periods of the year. The sensitivity of the magnetic field measurement is also a significant factor, and seeing plays a role in ground-based observations. The impact of all of these factors on time series of polar field measurements is expected to be greater during solar minimum, when the strength of the poloidal field is stronger. This document describes the precise procedure used to compute an estimate of the mean polar magnetic field from SOLIS/VSM measurements. Wilcox Measurements The Wilcox Solar Observatory (WSO) provides measurements of the polar magnetic fields dating back to May 1976. Due to the long history of the program and the homogeneity of the dataset, they have been used as a reference in many studies. Daily measurements are made in the north and south hemispheres using 3′ square apertures that span the LOS field between approximately ±55° and the corresponding poles. The solar coordinates of the apertures shift due to the Earth's orbital motion, and their orientation differs somewhat with each measurement. Because of the relatively large aperture, the WSO pole measurements are weighted by limb darkening. Time series of these measurements are available from the WSO web site at http://wso.stanford.edu/Polar.html in two flavors: 1) daily values from an average of all usable measurements in a centered 30-day window, and 2) 20 nHz low-pass filtered values that eliminate yearly geometric projection effects. Figure 1 shows these measurements, in which a strong annual modulation due to the variable B0 angle is clearly visible in the unfiltered time series (top). Measurements and Analysis The current SOLIS/VSM pipeline provides, among other products, daily full-disk magnetograms of the longitudinal magnetic field using the FeI 630.15 nm spectral line. They are of sufficiently high resolution (1.0 arcseconds beginning in December 2009 and 1.14 arcseconds prior) not to be significantly weighted by limb darkening. Before the polar field measurements are made, these magnetograms are converted from line-of-sight to radial flux density by assuming that the fields are approximately radial at the photosphere. This is a reasonable approximation for network structures and weak fields outside of active regions, typical of the solar polar regions of interest. In general terms, the polar caps typically extend above approximately 60° latitude in the north and below −60° in the south. For our purposes, three separate (but overlapping) bands of latitude are considered for each hemisphere: ±60° to ±70°, ±60° to ±75°, and ±65° to ±75°. Higher latitudes are not included because they would significantly increase the noise of the derived time series. For all bands, the longitude range is restricted to between ±50°. The mean polar field strength is computed for the selected latitude bands following the approach described in Bertello et al.
(2014, Solar Physics 289, 2419). In short, the original magnetogram pixels are evenly divided into subpixels in order to more accurately resolve the boundaries of the latitude bands. Then, the pixels (and partial pixels) corresponding to the latitude band of interest are selected according to their computed Stonyhurst heliographic longitudes (L) and latitudes (B) on the solar disk. If (x, y) are the Cartesian coordinates of a subpixel relative to the center of the solar disk and the position angle between the geocentric and solar rotational north poles is zero (P = 0), then the Stonyhurst heliographic coordinates are given by

B = arcsin(cos ρ sin B0 + sin ρ cos B0 sin θ), L = arcsin(sin ρ cos θ / cos B), (1)

where ρ = arcsin(r) − S·r is the heliocentric angular distance of the subpixel from the center of the Sun's disk, r = √(x² + y²)/R⊙, θ = arg(x, y), R⊙ is the solar radius in pixels, S is the angular semi-diameter of the Sun, and B0 is the Stonyhurst heliographic latitude of the observer. Figure 2 shows how the total number of contributing full-disk pixels (including partial pixels) varies as a function of time for four of the six selected bands. The clearly visible transition in late 2009 corresponds to the higher spatial resolution of the Sarnoff cameras that replaced the older Rockwell cameras at that time. The strong annual modulation in the number of pixels is due to the combined effects of the varying B0 angle and the Sun-Earth distance. If f_i is the magnetic radial flux density and w_i is the fractional area (0 < w ≤ 1) of a pixel i that contributes to a particular latitude band, then the mean polar field strength is given by

B_r = Σ_{i=1}^{N} w_i f_i / Σ_{i=1}^{N} w_i, (2)

where N is the total number of contributors, and the variance is

σ² = Σ_{i=1}^{N} w_i (f_i − B_r)² / Σ_{i=1}^{N} w_i. (3)

Results A quick inspection of Figure 3 reveals some interesting facts: 1. The σ time series show a strong modulation in phase or anti-phase with the B0 angle. For the south measurements, this modulation is in phase with B0 (i.e., the errors are larger when B0 > 0), as expected. The opposite is true for the north pole measurements. Also, measurements taken before December 2009 with the Rockwell cameras show significantly larger errors, suggesting that the signal-to-noise ratio is higher in the Sarnoff era. 2. Polar field measurements taken with the older Rockwell cameras show a clear annual modulation. This correlates quite well with the modulation visible in the σ time series. This could, perhaps, be attributed to the larger uncertainties of these measurements compared to those obtained after December 2009 with the Sarnoff cameras. Differences in the derived uncertainties between the two cameras are expected, due to the different pixel sizes and signal-to-noise ratios. 3. A ∼27-day periodicity, due to solar rotation, is clearly detected in the VSM polar measurements.
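A minimal NumPy sketch of the pixel selection and weighting described by Eqs. (1)-(3) is given below. All function and variable names are ours, and the θ convention (measured from the +x axis, with y toward solar north) is an assumption consistent with the formulas as written above.

```python
import numpy as np

def heliographic_coords(x, y, R_sun_pix, S, B0):
    """Stonyhurst latitude B and longitude L (radians) for subpixels at disk
    coordinates (x, y), assuming P = 0; S is the solar angular semi-diameter."""
    r = np.hypot(x, y) / R_sun_pix
    rho = np.arcsin(r) - S * r   # heliocentric angle from disk centre, Eq. (1)
    theta = np.arctan2(y, x)     # position angle in the image plane
    B = np.arcsin(np.cos(rho) * np.sin(B0) + np.sin(rho) * np.cos(B0) * np.sin(theta))
    L = np.arcsin(np.sin(rho) * np.cos(theta) / np.cos(B))
    return B, L

def mean_polar_field(f, w):
    """Weighted mean field, Eq. (2), and its variance, Eq. (3), where f holds the
    radial flux densities and w the fractional pixel areas of the contributors."""
    W = w.sum()
    mean = (w * f).sum() / W
    var = (w * (f - mean) ** 2).sum() / W
    return mean, var

# Example band selection for the north 60-75 degree cap, |L| <= 50 degrees:
# B, L = heliographic_coords(x, y, R_sun_pix, S, B0)
# keep = (B >= np.radians(60)) & (B <= np.radians(75)) & (np.abs(L) <= np.radians(50))
# mean, var = mean_polar_field(f[keep], w[keep])
```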
In general, the reliability of magnetic field measurements in the polar regions depends on the value of the B0 angle. During times of positive B0 angles, the northern polar regions are more easily observable than the southern regions. The opposite is true when B0 is negative. One can account for this effect by properly weighting the daily measurements with some quantity that reflects this trend, and so produce a filtered (smoothed) version of the mean radial field time series. One such choice for the weights is the total number of full-disk pixels that contribute to the mean polar field strength of a given latitude band. Figure 4 shows the behavior of this quantity for 2012, both for the north and the south hemisphere. The total number of pixels in the north correlates extremely well with B0, while those in the south are anti-correlated. Assuming a 361-day interval centered on a given day i, a filtered (smoothed) daily mean radial polar field value can be calculated as

⟨B_r⟩_i = Σ_j N_j B_{r,j} / Σ_j N_j, (4)

where B_{r,j} is the daily unfiltered value (from Eq. 2), N_j is the corresponding total number of contributing full-disk pixels, and the sums run over the days j in the interval. The corresponding variance is then

σ_i² = Σ_j N_j (B_{r,j} − ⟨B_r⟩_i)² / Σ_j N_j. (5)

Prior to applying the above formulas, multiple measurements taken on the same day were averaged together, and the daily unfiltered time series were interpolated for missing days. Also, to account for the installation of new cameras in December 2009, the values of N_j were divided by the square of the camera's spatial resolution (i.e., by 1 before December 2009 and by 1.14² afterwards). Because the sums in Eqs. 4-5 are necessarily truncated at the edges of the time series, the most recent filtered values are computed using shorter time intervals. Results for two pairs of latitude bands are shown in Figure 5. A close look at this plot shows that during solar cycle 24, the mean polar fields measured in the lower latitude bands reversed their polarity for the first time about eight months before those at higher latitude. While the southern polar caps reversed only once, the behavior in the northern hemisphere is more complex. After a first reversal, in 2012, the strength of the northern magnetic polar caps decreased and stayed around zero for most of 2014. The northern higher-latitude band (60°−70°) never quite crossed the zero line again, but the lower band seems to have changed its polarity at least one more time before its final reversal in late 2015. For comparison purposes, we also plot in Figure 6 both the SOLIS/VSM results for the two broader latitude bands and the Wilcox filtered LOS polar field measurements. Except for the different scales, the overall behavior of the VSM and Wilcox time series is very similar. The times of polar reversal in both the northern and southern hemispheres also agree to within about a month, as indicated in Table 1. Future Work Future plans include calculation of the mean polar field strength from SOLIS/VSM chromospheric Ca II 854.2 nm observations and determination of the true radial polar field from the SOLIS/VSM photospheric Fe I 630.15 nm full-Stokes data. Although the assumption that the magnetic field is, on average, radial cannot be made in the chromosphere, determination of the polar field strength from line-of-sight measurements in the core of the Ca II 854.2 nm spectral line is expected to be more reliable than its photospheric counterpart due to the canopy effect of magnetic flux tubes in the chromosphere. Estimation of the polar field strength from full-Stokes data is challenging due to their limited
sensitivity in areas of relatively quiet Sun that are typical of the polar regions. However, SOLIS/VSM photospheric vector data at Fe I 630.15 nm uniquely provide both high spatial and high spectral resolution. Because the former is not required for determination of the mean polar field, the sensitivity of these measurements can be significantly improved by spatially averaging the spectra prior to the Milne-Eddington inversion. Finally, a project is underway to recalibrate older Kitt Peak magnetograms obtained with the 40-channel (Jan. 1970-March 1974), 512-channel (Jan. 1974-April 1993), and spectromagnetograph (April 1992-Sept. 2003) instruments, as well as the earliest SOLIS/VSM photospheric magnetograms (August 2003-May 2006). Once complete, the polar field strength time series can be extended to a period in excess of 45 years.

Figure 2: Number of full-disk pixels contributing to the calculation of the mean polar field strength as a function of time for selected latitude bands.

Figure 3: Daily SOLIS/VSM unfiltered photospheric mean radial polar field measurements since May 1st, 2006. The top panel shows north (red) and south (blue) polar measurements computed using heliographic bins in the latitude ranges [60°, 75°] and [−60°, −75°], respectively, and the longitude range [−50°, 50°]. The corresponding 1-σ uncertainty in the daily weighted mean value for each observation is shown in the middle panel. For reference, the value of the B0 angle for days when observations were taken is included in the bottom panel.

Figure 4: Total number of daily contributing full-disk pixels for the [60°, 75°] north (red) and [−60°, −75°] south (blue) latitude bands. Also shown are the values of the B0 angle (black) for days when observations were taken during 2012.

Figure 5: Daily SOLIS/VSM filtered photospheric radial polar field measurements from October 15, 2006 to July 15, 2015. The 3-σ error bars are shown as shaded color areas surrounding the values.

Table 1: Approximate times of the final polar reversals during solar cycle 24, determined from the Wilcox and SOLIS/VSM photospheric measurements shown in Figure 6.
2015-07-28T22:38:02.000Z
2015-07-28T00:00:00.000
{ "year": 2015, "sha1": "6769fad047f8280a47c84a7a07a8ec34d926c2c5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6769fad047f8280a47c84a7a07a8ec34d926c2c5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250191673
pes2o/s2orc
v3-fos-license
Influence of Nordic walking with poles with an integrated resistance shock absorber on carbohydrate and lipid metabolic indices and white blood cell subpopulations in postmenopausal women Background Regular and individualised physical activities have been shown to prevent adverse changes associated with the aging process. The main purpose of this study was to evaluate changes in carbohydrate and lipid metabolism and white blood cell (WBC) subpopulations in postmenopausal women participating in Nordic walking (NW) training and to compare the use of poles with an integrated resistance shock absorber (RSA) with the use of classic poles. Materials & Methods A total of 23 postmenopausal women participated in an 8-week programme of systematic physical activity between February and April. Before and after the training programme, somatic features and serum concentrations of 25-hydroxyvitamin D, glucose, and insulin were assessed, as well as the lipid profile and WBC count and its specific subpopulations. Results Analysis of differences in somatic features and biochemical indices before and after training in the group of women who used RSA poles showed significant decreases in fat mass content (p < 0.05), insulin (p < 0.05), homeostatic model assessment of insulin resistance (p < 0.05), triglycerides (p < 0.05), total cholesterol (p < 0.05) and monocytes (p ≤ 0.01). In the group of women who used classic poles (NW), there was a significant decrease in WBC (p ≤ 0.01), lymphocytes (p < 0.05), monocytes (p ≤ 0.01) and granulocytes (p < 0.05). Conclusion Increasing the training load through the use of RSA poles resulted in greater changes in carbohydrate and lipid metabolic indices compared to the use of classic NW poles. In turn, the more pronounced effect on WBC and its specific subpopulation counts in the NW programme, compared to the RSA training programme, may indicate that the specificity of the training load is an important factor in modifying the immune system response. INTRODUCTION During the postmenopausal period women often experience a number of hormonal and metabolic changes that can adversely affect their health (Stachowiak, Pertyński & Pertyńska-Marczewska, 2015). However, research has shown that regular physical activity of adequate intensity is an important factor modifying the functioning of most metabolic pathways and promotes the maintenance of good health (Khalafi, Malandish & Rosenkranz, 2021; Moreira et al., 2014; Sternfeld & Dugan, 2011). Furthermore, individualised and regular physical activities have been shown to prevent adverse changes associated with the aging process (Wang et al., 2020; Woods et al., 2012). Nordic walking (NW) is a physical activity that is popular, safe, and easily accessible (Bullo et al., 2018). NW is a marching activity using poles adapted from cross-country skiing. Using poles engages muscles that are not used during normal walking (Kocur & Wilk, 2006). Among the many different forms of physical training, NW is classified as an aerobic activity in which the whole body is engaged, promoting improvements in physiological parameters and muscle strength and fitness (Pérez-Soriano et al., 2014). An additional advantage of this form of training is that the exercise is performed outdoors, which may contribute to beneficial metabolic effects by increasing vitamin D concentrations in the body (Nowak et al., 2020).
Previous studies have pointed out the metabolic and anti-inflammatory effects of vitamin D, and the relationship between serum 25(OH)D concentrations and subpopulations of WBC has been documented (Mousa et al., 2020). Pérez-Soriano et al. (2014) found that NW differs from conventional walking in its effects on the musculoskeletal system; it is more stable and can be considered an intermediate mode between walking and running; higher locomotor speeds in comparison to walking result in increased physiological loads, without increasing the subjective perception of effort. Results of a systematic review showed positive effects of NW programmes on anthropometric parameters, body composition, cardiovascular parameters, and glucose tolerance in overweight and obese people (Gobbo et al., 2019). In a study comparing the effects of NW and conventional walking in middle-aged men and women, Muollo et al. (2019) found that NW resulted in more beneficial and faster changes in parameters such as body mass index (BMI), total body fat, android fat, and leg fat, and improved physical performance to a greater extent, compared to walking. Furthermore, in another study, based on a comparison of 6 weeks of NW training with regular walking in postmenopausal women over 55 years old, Cebula et al. (2020) found that for the same speed, NW generated higher energy expenditure than regular walking (without poles). Thus, NW may be a primary and more effective tool than walking for counteracting overweight and obesity in middle-aged adults. A new form of NW utilises poles with an integrated resistance shock absorber (RSA). In this type of physical activity, the poles used for walking are modified. The premise of the RSA pole design is to increase the load on the upper body by working with resistance (Marciniak et al., 2020). Marciniak et al. (2020) suggested that participants taking part in training with this type of pole had to perform additional work with their upper limbs, thus increasing the overall intensity of the exercise in comparison to the classical form of NW. A number of authors have demonstrated the effectiveness of NW as a systematic physical activity through, among others, analysis of lipid and carbohydrate metabolic indices (Hagner-Derengowska et al., 2015a; Hagner-Derengowska et al., 2015b; Prusik et al., 2018; Witkowska et al., 2021). In a review article on physical activity in people with type 2 diabetes, Pesta et al. (2017) suggested that current exercise recommendations for improving metabolic processes should point towards a combination of higher-intensity resistance exercise and lower-intensity resistance training or endurance training. In a study in overweight/obese postmenopausal women, Johannsen et al. (2012) observed a reduction in total WBC and neutrophil counts after an aerobic exercise program in a dose-dependent manner. Taking into account the fact that the degree of modification of metabolic and inflammatory processes in the body depends on the type of training load, the main purpose of this study was to evaluate changes in carbohydrate and lipid metabolism indices and white blood cell (WBC) subpopulations in postmenopausal women participating in training with the use of RSA poles, compared with NW training with the use of classic poles. An additional aim was to evaluate the response of these indices to systematic training with regard to body fat content and 25-hydroxyvitamin D (25(OH)D) concentrations. Participants and the study protocol A total of 40 postmenopausal women were enrolled in the study.
Women were randomly assigned to two groups according to the type of poles used: classic NW poles or RSA poles. Randomization was conducted as a simple blind random assignment using a computerized list. This allocation was performed by a person not involved in the conduct of the study. A questionnaire was used to obtain information on lifestyle, diseases, drugs and supplements used, and frequency of fish consumption. Subjects who used hormone replacement therapy or medication modifying lipid metabolism, who declared the presence of diabetes or liver disease, or who had stayed abroad in countries with high levels of sunlight during the two weeks preceding the study were excluded from further stages. Subjects who did not adhere to the study protocol through poor attendance at marching training or who declared regular participation in other physical activity were also excluded. Finally, 23 women (NW: n = 15, RSA: n = 8) aged 66 ± 3.65 years were eligible for the research analysis. Subjects participating in the study declared that they had not previously taken part in organised Nordic walking classes. Prior to the study, the purpose and method of the study were explained to all subjects, and all participants voluntarily consented to the study in writing. The study was approved by the Bioethics Committee of Karol Marcinkowski Medical University in Poznan (code no. 1041/18 and 245/19). The study was conducted between winter and spring (February-April), and the women participated in the training programme during this period. Before (1st term of measurement) and after (2nd term of measurement) the training programme, somatic features, serum concentrations of selected indices of carbohydrate and lipid metabolism and the vitamin D metabolite (25(OH)D), and WBC count and its specific subpopulations were assessed. Training programme The training programme lasted 8 weeks, with training sessions held twice a week, for a total of 16 sessions. Women were assigned to two groups based on the type of poles used: classic poles (NW group) and RSA poles with 4 kg resistance strength (RSA group). RSA poles (Slimline BungyPump, Sport Progress International AB, Sweden) have a built-in shock absorber with a total length of 20 cm; marching with the RSA poles therefore leads to different positioning of the upper limbs in comparison with classic NW poles. On pressing the RSA pole, muscles perform additional work to overcome the resistance of the elastic shock absorber. Pressing the shock absorber changes the length of the pole, which, when shortened by the maximum amount, is the same length as classic NW poles. Releasing the pressure causes the pole to return to its original length with equal force, potentially causing sensations of altered body balance (Marciniak et al., 2021). All women (NW and RSA groups) took part in the training at the same time. The training was always conducted by the same NW instructor (Polish Nordic Walking Federation-qualified). Each training session began with a warm-up that lasted 10-15 min. After each half of the planned distance (approximately 1.7-2.2 km, at a pace of around 1 km per 10 min), participants performed strength exercises and balance training (15 min). Stretching exercises then took place at the end of the planned distance training (15 min). During the sessions, the walking distance was gradually increased from 3.5 to 4.5 km, and the number of exercises performed was increased from 8 to 12 repetitions.
Exercise intensity corresponded to 50% heart rate reserve (HRR) during exercise sessions 1-8, while in sessions 9-16, intensity was increased to 65-70% HRR, measured using a heart rate monitor (Polar Electro Oy, Kempele, Finland). A minimum required attendance of 13 training sessions (80%) was adopted. Before the intervention, participants were familiarised with the equipment and trained in the correct marching techniques during a 60-minute tutorial session. The training took place in a city park; the subjects walked along the inner lanes of the park, on varied ground. The length of the route was measured using the Endomondo application (Marciniak et al., 2020). Fat mass measurement Fat mass was measured using dual X-ray absorptiometry (DXA) on the whole body. DXA measurements were acquired using a Lunar Prodigy Advance densitometer (General Electric, USA). All measurements were performed by the same technician, using the same instrument. Quality control for the DXA scanner was performed according to the manufacturer's recommendations, and analyses of the measurements were performed using the integrated software, likewise according to the manufacturer's recommendations. Height and weight were measured using a certified Radwag (Radom, Poland) device with an accuracy of 0.5 cm. Body mass index (BMI) values were assessed according to the recommendations of the Committee on Diet and Health, taking into account the age of the subjects (Babiarczyk & Turbiarz, 2012). Biochemical analysis Blood was collected from the ulnar vein between 7:30 and 9:30 am (after participants had fasted overnight) and centrifuged to obtain serum for biochemical analysis. Blood serum was stored at -70 °C until biochemical analyses were performed. Biochemical analyses were performed as previously described in Huta-Osiecka et al. (2021). Serum 25(OH)D concentration was determined by chemiluminescent immunoassay (CLIA), using the LIAISON 25 OH Vitamin D TOTAL Assay (DiaSorin Inc, Saluggia, Italy), with sensitivity 4 ng/ml. The concentrations of glucose and the lipid profile (TC, total cholesterol; TG, triglycerides; HDL-C, high density lipoprotein cholesterol; LDL-C, low density lipoprotein cholesterol) were determined using an automatic biochemical analyser (ACCENT 220S; Cormay, Warsaw, Poland) and dedicated enzymatic tests supplied by Cormay (Warsaw, Poland). Sensitivity of the tests was 0.41 mg/dl, 1.95 mg/dl, 1.4 mg/dl, 1.1 mg/dl, and 3.9 mg/dl, respectively. Insulin concentration was determined by immunoenzymatic ELISA (DRG Instruments GmbH, Marburg, Germany), with sensitivity 1.76 µIU/ml. Spectrophotometric measurements for the ELISA test were made using a multi-mode microplate reader (Synergy 2 SIAFRT, BioTek, Winooski, VT, USA). The insulin resistance index (HOMA-IR, Homeostatic Model Assessment) was calculated using the formula of Matthews et al. (1985): HOMA-IR = fasting insulin (µIU/ml) × fasting glucose (mmol/l) / 22.5. For determination of WBC and selected subpopulation counts (lymphocytes (LYM), monocytes (MON), and granulocytes (GRA)), blood was collected using S-Monovette tubes (Sarstedt, Germany) containing K2-EDTA (EDTA dipotassium salt) as anticoagulant. Statistical methods Data were collected as previously described in Wochna et al. (2019) and are presented as mean, standard deviation (SD), median and interquartile range. Normality of distribution was verified using the Shapiro-Wilk test. The t-test and Mann-Whitney U test were employed for normally and non-normally distributed variables, respectively, to evaluate the significance of differences between groups.
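A minimal numerical sketch of two of the computations described above, the Matthews HOMA-IR formula and the normality-gated choice between the t-test and the Mann-Whitney U test, is given below; function names and example values are illustrative and not taken from the study.

```python
from scipy import stats

def homa_ir(glucose_mmol_l: float, insulin_uiu_ml: float) -> float:
    """HOMA-IR after Matthews et al. (1985):
    fasting insulin (uIU/ml) x fasting glucose (mmol/l) / 22.5.
    (If glucose is given in mg/dl, divide the product by 405 instead.)"""
    return glucose_mmol_l * insulin_uiu_ml / 22.5

def compare_groups(group_a, group_b, alpha=0.05):
    """Between-group comparison as described above: Shapiro-Wilk for
    normality, then an unpaired t-test if both samples pass, otherwise
    the Mann-Whitney U test."""
    normal = (stats.shapiro(group_a).pvalue > alpha
              and stats.shapiro(group_b).pvalue > alpha)
    if normal:
        return stats.ttest_ind(group_a, group_b)
    return stats.mannwhitneyu(group_a, group_b)

# Example: fasting glucose 5.2 mmol/l and insulin 8 uIU/ml -> HOMA-IR ~ 1.85
print(round(homa_ir(5.2, 8.0), 2))
```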
The t-test and Wilcoxon test were used for normally and non-normally distributed variables, respectively, to evaluate the significance of differences over time (between the first and second times that subjects were tested). A 2 × 2 (group × time interaction) repeated-measures ANOVA was used to evaluate the influence of the training programme on the assessed indices (changes across time within each group). Pearson analysis for normally distributed variables and Spearman's rank analysis for non-normally distributed variables were used to calculate correlation coefficients. Statistical significance was set at an alpha of 0.05 for all statistical procedures. Statistical analysis of the results was performed using Dell Statistica data analysis software (version 13, software.dell.com; Dell Inc., Round Rock, TX, USA). Table 1 presents descriptive statistics of somatic features, metabolic indices, and counts of WBC subpopulations in the group of subjects (n = 23) measured before and after training (1st and 2nd measurements). Comparative analysis of these parameters before and after training revealed significant changes in body mass (p = 0.0153), BMI (p = 0.0099), fat mass (%; p = 0.0169), fat mass (kg; p = 0.0371), insulin (p = 0.0036), HOMA-IR (p = 0.0101), TG (p = 0.0455), TC (p = 0.0101), LYM (p = 0.0055), MON (p < 0.0001), GRA (p = 0.0152), and WBC (p = 0.0001). No significant changes were noted for other variables. Table 2 presents comparative analysis of somatic features, metabolic indices and subpopulations of WBCs (mean values and SD) measured before and after training for groups of women divided by the type of poles used during training (RSA, n = 8, and NW, n = 15). Comparative analysis of these variables between groups (RSA and NW) before and after training did not show any significant differences. In the RSA group, analysis of changes in somatic features and biochemical indices before and after training revealed significant decreases in fat mass content (%, p = 0.0066 and kg, p = 0.0142), insulin (p = 0.0326), HOMA-IR (p = 0.0267), TG (p = 0.0117), TC (p = 0.0430), and MON (p = 0.0038). On the other hand, in the NW group, there were significant increases in body mass (p = 0.0049) and BMI (p = 0.0047), and decreases in WBC (p = 0.0004), LYM (p = 0.0271), MON (p < 0.0001) and GRA (p = 0.0169). For the whole group (n = 23), correlation analysis was carried out to evaluate relationships between several parameters (body mass, BMI, fat mass (% and kg) and 25(OH)D concentrations) assessed before training and changes (Δ) in metabolic indices (glucose, insulin, HOMA-IR, TC, LDL-C, HDL-C, TG) and WBC count and its specific subpopulations (LYM, MON, GRA) assessed after training. The only significant correlation identified was between body mass and Δglucose (p = 0.0482). Notes. Results are expressed as mean (SD); median (interquartile range). An asterisk (*) indicates p < 0.05, two asterisks (**) indicate p ≤ 0.01; significant differences between the first and second terms of the study. Notes. Group RSA: poles with an integrated resistance shock absorber; group NW: classic poles. Results are expressed as mean ± SD. An asterisk (*) indicates p < 0.05, two asterisks (**) indicate p ≤ 0.01; significant differences between the first and second terms of the study. DISCUSSION Many studies have shown that regular exercise promotes the maintenance of good health and is one of the best methods to prevent and treat metabolic diseases.
In the present study, after a period of walking training with two types of poles, there was a significant improvement in carbohydrate and lipid metabolism for the entire group of subjects. Reductions in insulin concentration and HOMA-IR indicate an improvement in insulin sensitivity; decreases in lipid metabolic indices (TC and TG) and in fat mass (kg and %) were also observed. However, when comparing the above-mentioned indices in groups divided according to the type of poles used (NW or RSA), despite no significant interaction between RSA and NW, comparative analysis revealed significant differences after training only in the RSA group. Significant reductions were also observed in WBC and its specific subpopulation counts (LYM, MON, GRA). Interestingly, however, these significant changes occurred only in the NW group, the opposite of the metabolic indicators. It is important to note that improvements in carbohydrate and lipid metabolism occurred even though the duration of the training intervention used in this study was relatively short (8 weeks). In a study in postmenopausal women (average age 60 years), Akazawa et al. (2012) also showed that an 8-week aerobic training period of walking and cycling (average of 47 min, four times a week, 60-75% maximum heart rate) had beneficial effects on weight changes and also led to reduced TG levels and increased HDL-C blood levels. Regarding NW training, studies have indicated the effectiveness of this specific type of physical activity on metabolic indicators. A study conducted in women with type 2 diabetes showed that a 12-week NW programme (60-90 min, 3 times a week) resulted in significant improvements in anthropometric and metabolic parameters, including reductions in glycosylated haemoglobin (HbA1c) and TG concentrations and an increase in HDL-C concentration (Sentinelli et al., 2015); such significant changes were not observed in the control group, who were also physically active (50 min of any activity, 3 times a week). Ten weeks of NW training were also shown to result in statistically and clinically more significant changes in blood carbohydrate and lipid metabolic markers than Pilates and dietary intervention in overweight and obese women (Hagner-Derengowska et al., 2015a; Hagner-Derengowska et al., 2015b). NW has been shown to be a more physiologically demanding activity than walking (Cebula et al., 2020), while Muollo et al. (2019) concluded that this form of training can cause more beneficial changes in somatic parameters and increase physical capacity to a greater extent than conventional walking. On the other hand, in a study conducted in postmenopausal women (>55 years) who participated in walking and NW training for 12 weeks (60 min, three times a week), Witkowska et al. (2021) found comparable effects and improvement in blood lipid profile in both study groups (a decrease in LDL-C levels in women who performed NW training, and decreases in TC and LDL-C levels in women who performed walking training). The purpose of our study was to evaluate whether applying a higher load in RSA pole walking training would have a greater effect on metabolic outcomes than NW. However, we did not observe significant interactions in the response of most metabolic indices to the training programme when comparing groups of women divided by the type of poles used; a tendency towards an interaction (group × time) occurred only for changes in HDL-C concentration.
Nevertheless, it is worth noting that the applied training programme contributed to significant changes in carbohydrate and lipid metabolism only in the group of women using RSA poles (decreases in TG, TC, insulin concentration, and the HOMA-IR index), for whom changes in fat content were also observed. The results obtained in this study suggest that training loads applied through the use of RSA poles are more effective than those with classic NW poles. A previous study comparing different training programmes for groups of obese people with and without diabetes (45-65 years) showed that, while supervised NW and gym-based programmes were equally effective for improving several parameters (body weight, body composition, muscular flexibility and VO2max levels), only NW resulted in significant improvements in concentrations of HbA1c and total and HDL cholesterol (Pippi et al., 2020). In our study, we measured WBC and its specific subpopulation counts (LYM, MON, GRA). In addition to being immune system cells and non-specific indicators of inflammation, WBC have been reported to be related to carbohydrate metabolism (Lorenzo, Hanley & Haffner, 2014; Vozarova et al., 2002). In their study conducted in a population of nondiabetic Pima Indians, Vozarova et al. (2002) observed that a high WBC count was associated with reduced insulin sensitivity in this group; the authors therefore suggested that chronic activation of the immune system may play a role in the pathogenesis of type 2 diabetes. Furthermore, the Insulin Resistance Atherosclerosis Study, conducted over a period of 5 years in different nondiabetic ethnic groups (56% women; age range 40-69 years), revealed elevated total WBC, neutrophil (NEU) and LYM counts in individuals who were at increased risk of diabetes; LYM count was associated with insulin sensitivity, NEU and MON counts with subclinical inflammation, and total WBC with both insulin sensitivity and subclinical inflammation (Lorenzo, Hanley & Haffner, 2014). In our study, there was a significant decrease in WBC and its specific subpopulation counts (LYM, MON, GRA) after the training period; however, these parameters remained within the reference ranges. The decreases in WBC and GRA counts correlated with the decreases in insulin concentration and HOMA-IR observed in the study subjects, confirming the relationship between these indices. Timmerman et al. (2008) observed that a healthy, physically inactive group of subjects (65- to 80-year-old men and women) had a significantly higher percentage of circulating MON compared with an age-matched physically active comparison group; they concluded that training by the previously inactive subjects markedly reduced the percentage and count of these proinflammatory cells in the circulatory system. It should be noted, however, that in our study, when women were divided into groups, more significant changes with respect to WBC subpopulation counts were noted in the group of women using classic poles (NW), compared with women using RSA poles. Similar changes with respect to WBC were reported in a study on inactive postmenopausal, overweight, and obese women with an average age of 57 years; aerobic training (treadmill walking and semi-recumbent cycle ergometry) for 6 months (50% VO2peak intensity) was observed to reduce WBC and NEU counts in the study group, and the decreases were greatest in the group with the highest training load (Johannsen et al., 2012).
The differences between the groups of women (NW and RSA) in the response of WBC subpopulations to the training programme in our study are difficult to explain, but we can conclude that the specificity of the training load is important in modifying the immune system response; however, this requires further research. The results of other studies have also confirmed the significant influence of the nature of the training on these indices. For example, Horn et al. (2010) retrospectively analysed the blood test results of elite athletes participating in different endurance sports and showed that more aerobically-oriented sports tended to result in lower WBC and NEU counts, especially when compared to team sports or skill-based sports. Regarding 25(OH)D concentrations, we did not observe changes in this metabolite during the training period, either in the study group as a whole (n = 23) or when comparing the RSA and NW groups. Pilch et al. (2016) found that 6 weeks of NW training in late autumn contributed to lower blood 25(OH)D levels in women older than 55 years. The authors suggested that the reduced 25(OH)D levels may have been the result of either decreased vitamin D biosynthesis in the skin (due to decreasing UV intensity during the study period) or vitamin D involvement in muscle metabolism. Therefore, the timing of our study was chosen on the assumption that the intensity of UVB radiation would not change significantly. At the latitude of Poland, where our study took place, this period (February-April) is characterised by low UV intensity (Andersen et al., 2013). We therefore hypothesised that the timing of the study would avoid seasonal changes in serum 25(OH)D concentrations, allowing us to observe its levels and evaluate possible changes in response to exercise load. A number of studies have suggested that physical activity may modify vitamin D levels. For example, Fernandes & Barreto Jr (2017) suggested that physical activity may help to achieve higher vitamin D serum levels in the population, as factors other than sun exposure appeared to be responsible for the higher vitamin D levels in more active individuals; however, this phenomenon needs further investigation. In addition, increased vitamin D levels (p < 0.0001) were identified in women aged 65-74 years after outdoor Nordic walking training 3 times a week for 60 min, from April to June; no statistically significant changes were found in the control group (Podsiadło et al., 2021). In light of these results, the authors of that study concluded that physical activity of average intensity, carried out outdoors (with sun exposure), positively affected the level of vitamin D; however, taking other studies into account, they concluded that indoor activity (without direct exposure to sunlight) may also have a positive influence. In the present study, an additional aim of the analysis was to evaluate the effect of marching training on carbohydrate and lipid metabolic indices in relation to 25(OH)D levels and somatic features (body mass, BMI, fat mass), which were assessed prior to the training programme. A number of previous studies suggested an association between serum 25(OH)D levels and indices of carbohydrate and lipid metabolism (Grimnes et al., 2011; Jungert, Roth & Neuhäuser-Berthold, 2015). Receptors for vitamin D have been identified in pancreatic cells (Christakos et al., 2016), adipose tissue, and the liver (Cimini et al., 2019), indicating that this vitamin is involved in energy metabolism.
However, in our previous study, we did not find significant relationships between seasonal changes in 25(OH)D concentration and levels of carbohydrate and lipid metabolic indices in women who did not engage in physical activity during the study period (Huta-Osiecka et al., 2021). In the present study, we likewise found no association between 25(OH)D concentrations and metabolic indices, and no correlation between the changes in metabolic indices (comparing levels before and after training) and 25(OH)D levels, either for the whole group or for the RSA and NW groups. Thus, we assume that the changes in metabolic indices observed in our study were related to physical activity alone and that serum 25(OH)D levels did not modify these changes. Studies by other authors indicate a relationship between fat mass content and indices of carbohydrate and lipid metabolism; pathological changes in these indices are particularly observed in overweight and obese individuals (Jabłonowska-Lietz et al., 2017). Women participating in our study were mostly characterised by normal BMI values, with above-normal BMI found in only five people. Zegarra-Lizana et al. (2019) found an association between elevated body fat content (%) and the presence of insulin resistance in a Peruvian population, despite BMI being within the normal range. For the whole group of subjects, we did not observe any relationship between body weight, BMI, or fat mass (kg, %) measured prior to the training programme and the magnitude of the changes in metabolic indices due to the training programme. Therefore, we can conclude that the magnitude of changes in these indices was not determined by fat mass content. However, recent studies have revealed a significant relationship between seasonal changes in 25(OH)D concentration and body fat percentage measured at the beginning of the study (autumn period) in postmenopausal women (Huta-Osiecka et al., 2021). In the current study, we observed that for the whole group, fat mass content (kg and %) was significantly decreased at the end of the training programme, while for groups separated by type of pole, changes in these parameters were only observed in the RSA group. There was a positive correlation between the decrease in fat mass (kg and %) and the decrease in LYM count, while inverse correlations between the decrease in fat mass (kg and %) and the decreases in WBC and GRA counts were observed for the group as a whole. However, it is difficult to explain these differing directions of the correlations between changes in body fat and WBC subpopulations. Interestingly, in the NW group, although fat mass did not change significantly after the applied training programme, we observed a significant increase in body weight and BMI, which may indicate a possible increase in lean mass; we did not observe such changes in the group using RSA poles. The significant effect of NW on muscle tissue is confirmed by a study by Micielska et al. (2021) comparing the effects of NW training with high intensity interval training (HIIT). These authors found that NW was more effective than HIIT at inducing changes in blood exerkine concentrations in elderly people. A limitation of this study is that it was carried out on a small sample. However, the small size of the group allowed subjects to train at the same time with a single instructor (the same one each time); similar training loads were thus applied to all subjects, with the exception of differences in intensity resulting from the use of two types of poles.
In this study we did not analyse the influence of diet and supplements on the indices being studied (especially insulin concentration and the lipid profile). On the other hand, a strength of this study was that it excluded participants who were taking medication which could have affected the data, thereby limiting potential confounding factors. CONCLUSION A short marching training programme contributed to an improved profile of carbohydrate and lipid metabolic indices in postmenopausal women. These effects were not dependent on 25(OH)D levels or body fat content. Increasing the training load through the use of RSA poles resulted in greater changes in the aforementioned indices compared to classic NW poles. However, in the case of WBC subpopulations, significant changes occurred only in the group of women using NW poles, which may indicate that the specificity of the training load is important in modifying the immune system response; this finding may be the subject of further research.
2022-07-02T15:15:33.815Z
2022-06-30T00:00:00.000
{ "year": 2022, "sha1": "cad583dc9c7ad1e7599b7390862889c6afade51a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "9c769a2dce99decba142c19f329398b825aa08af", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14712001
pes2o/s2orc
v3-fos-license
Scant Extracellular NAD Cleaving Activity of Human Neutrophils is Down-Regulated by fMLP via FPRL1 The extracellular nicotinamide adenine dinucleotide (NAD) cleaving activity of a particular cell type determines the rate of degradation of extracellular NAD, with formation of metabolites in the vicinity of the plasma membrane, which has important physiological consequences. It has yet to be elucidated whether intact human neutrophils have any extracellular NAD cleaving activity. In this study, with a simple fluorometric assay utilizing 1,N6-ethenoadenine dinucleotide (etheno-NAD) as the substrate, we have shown that intact peripheral human neutrophils have scant extracellular etheno-NAD cleaving activity, much less than that of mouse bone marrow neutrophils, mouse peripheral neutrophils, human monocytes and lymphocytes. With high performance liquid chromatography (HPLC), we have identified ADP-ribose (ADPR) as the major extracellular metabolite of NAD degradation by intact human neutrophils. The scant extracellular etheno-NAD cleaving activity is decreased further by N-formyl-methionine-leucine-phenylalanine (fMLP), a chemoattractant for neutrophils. The fMLP-mediated decrease in the extracellular etheno-NAD cleaving activity is reversed by WRW4, a potent FPRL1 antagonist. These findings show that the extracellular etheno-NAD cleaving activity of intact human neutrophils, which is much lower than that of other immune cell types, is down-regulated by fMLP via the low affinity fMLP receptor FPRL1. INTRODUCTION In addition to its major role in the regulation of cellular redox-related metabolism, nicotinamide adenine dinucleotide (NAD) and its metabolites have been found to be important for various cellular signaling processes [1]. NAD can be metabolized extracellularly in a number of different ways by cell surface enzymes. Cell surface NAD glycohydrolases [2-4] and ADP-ribosyltransferases [5,6] cleave NAD at the N-glycosidic bond to produce ADP-ribose (ADPR) and nicotinamide. Cleavage by NAD glycohydrolases produces free ADPR, whereas ADP-ribosyltransferases transfer ADPR to an acceptor molecule. Another family of cell surface extracellular NAD cleaving enzymes comprises the pyrophosphatases, which can cleave NAD directly to adenosine monophosphate and nicotinamide mononucleotide [7]. The extracellular NAD cleaving activity of a particular cell type is physiologically important, as it determines the rate of degradation of extracellular NAD, with formation of metabolites in the vicinity of the plasma membrane, indirectly determining the interaction of the cells with extracellular NAD or with its metabolites. Extracellular application of NAD or its metabolites, especially ADPR and cyclic ADPR (cADPR), reportedly affects the intracellular signaling of several cell types: extracellular NAD increases the intracellular free calcium concentration ([Ca2+]i) in human neutrophils [8] and in human monocytes, where ADPR was also effective [9]. Further, extracellular cADPR increases [Ca2+]i and stimulates proliferation of human hemopoietic progenitors [10]. Thus, the cell-type-dependent extracellular NAD cleaving activity is likely to have physiological meaning and deserves careful study. However, it has yet to be clarified whether intact human neutrophils have extracellular NAD cleaving activity. Furthermore, no previous study has compared the extracellular NAD cleaving activity of intact human neutrophils with that of other immune cell types.
In this study, with a simple fluorometric assay utilizing 1,N6-ethenoadenine dinucleotide (etheno-NAD) as the substrate, we have shown that intact human peripheral neutrophils have scant extracellular etheno-NAD cleaving activity, which is much less than that of mouse bone marrow neutrophils, mouse peripheral neutrophils, human monocytes and lymphocytes. With high performance liquid chromatography (HPLC), it was identified that ADPR is the major extracellular metabolite of NAD degradation by human neutrophils. Furthermore, the scant extracellular etheno-NAD cleaving activity of intact human neutrophils is down-regulated by fMLP via the low affinity fMLP receptor FPRL1. Reagents used Etheno-NAD was obtained from Sigma-Aldrich Chemical, and a 20 mM stock solution was prepared in 10 mM potassium phosphate buffer (pH 7.4). fMLP and retinoic acid were also from Sigma-Aldrich Chemical. WRW4 was from Tocris Bioscience (Bristol, UK). Preparation of human peripheral neutrophils Neutrophils were purified from the venous blood of healthy volunteers. In brief, venous blood was collected by peripheral venous puncture and immediately anti-coagulated with 10 U/ml sodium heparin. Neutrophils were then isolated by density gradient centrifugation in Histopaque-1077, followed by dextran sedimentation. Residual erythrocytes were eliminated by hypotonic lysis. The purity of neutrophils counted by Diff Quik staining was on average >95%. Eosinophils were found to be <5%. The viability of neutrophils stained with trypan blue was >99%. Preparation of mouse bone marrow neutrophils Procedures for animal experiments were approved by the Animal Experimentation Committee at Hallym University. C57BL/6J female mice were sacrificed by cervical dislocation, and their femurs and tibiae were carefully cleaned of adherent tissues. After the bone ends were cut off, the marrow was collected. Residual erythrocytes were eliminated by hypotonic lysis. The bone marrow neutrophils were then isolated by density gradient centrifugation in Percoll and suspended at a density of 1×10^7 cells/ml in DMEM containing 10% FBS, 100 U/ml penicillin and 100 U/ml streptomycin. The purity of neutrophils counted by Giemsa staining was on average >90%. Cultures were kept at 37 °C in a humidified atmosphere containing 95% air and 5% CO2. Preparation of mouse peripheral neutrophils Mouse peripheral neutrophils were isolated according to the manufacturer's protocol (mouse neutrophil isolation kit, MACS Miltenyi Biotec, Germany). Briefly, peripheral blood was collected from C57BL/6J female mice and immediately anti-coagulated with 10 U/ml sodium heparin. Erythrocytes were eliminated from the whole blood by lysis with BD Pharm Lyse followed by hypotonic lysis. Leukocytes were then treated with a neutrophil biotin-antibody cocktail, and neutrophils were selectively isolated by magnetic separation with an LS column. Preparation of human monocytes and lymphocytes Monocytes and lymphocytes were isolated as described previously [11]. For monocyte isolation, peripheral blood mononuclear cells (PBMC) from healthy donors were isolated from buffy coats obtained by density sedimentation over Histopaque-1077. Residual erythrocytes were eliminated by hypotonic lysis, and then, to separate monocytes from lymphocytes, the PBMC suspension was carefully laid onto hyper-osmotic Percoll solution in each tube and centrifuged with the brake off. The monocyte layer at the interface was collected and suspended in RPMI-1640.
To obtain lymphocytes, after collecting the monocyte fraction at the interface, the bottom layer was collected and, after washing with PBS, the residual sediments were suspended in RPMI-1640 supplemented with 5% FBS. Cell culture HL-60 (a promyelocytic leukemic cell line), U937 (a promonocytic tumor cell line) and Jurkat (a T lymphocyte cell line) cells were cultured in RPMI-1640 supplemented with 10% FBS, 100 U/ml penicillin and 100 U/ml streptomycin and kept in a 5% CO2 humidified chamber. To differentiate HL-60 cells into the neutrophil lineage with dimethyl sulphoxide (DMSO), cells at a density of 0.3×10^6 cells/ml were grown in the presence of 1.3% DMSO for 3 days. Afterwards, the medium was replaced by fresh medium containing 0.65% DMSO and the culture continued for 3 more days. To differentiate HL-60 cells into the neutrophil lineage with retinoic acid, cells at a density of 0.2×10^6 cells/ml were grown in the presence of 100 nM retinoic acid for 4 days, as described previously [12]. HPLC Human peripheral neutrophils at a density of 1×10^7 cells/ml and mouse bone marrow neutrophils at a density of 3×10^6 cells/ml were incubated with or without NAD (1 μM) for 15 min or 1 hour at 37 °C. The extracellular media were then collected after centrifugation. Aliquots were analyzed by reverse-phase HPLC (Jasco Instruments) using a C18 column (4.6×250 mm, particle size 5 μm). Absorbance was measured at 254 nm using a UV detector (Jasco UV-2075 Plus intelligent UV/VIS detector) and data were processed by the ChromNAV data acquisition system from Jasco Instruments. Peaks were identified by comparison to known standards. Figure 2: ADPR is the major extracellular metabolite generated from the degradation of NAD by both intact human peripheral neutrophils and mouse bone marrow neutrophils, as determined by HPLC. Human peripheral neutrophils at a density of 1×10^7 cells/ml (A) and mouse bone marrow neutrophils at a density of 3×10^6 cells/ml (B) were incubated with or without NAD (1 μM) for 15 min or 1 hour at 37 °C. The extracellular media were then collected after centrifugation, and aliquots were analyzed by reverse-phase HPLC (Jasco Instruments) as described in Methods. Statistical analysis Data were analyzed with GraphPad Prism 5.0 using ANOVA. The Bonferroni test was used for post-hoc comparison. All data are presented as means±S.E.M. from at least three independent experiments. p<0.05 was considered to indicate statistical significance. Comparison of extracellular etheno-NAD cleaving activity of intact human peripheral neutrophils with other immune cell types Funaro et al. reported that human neutrophils are inactive in terms of extracellular ADP-ribosylcyclase activity [13]. However, there is no clear indication whether intact human neutrophils have any other extracellular NAD cleaving activities. To address this uncertainty, we undertook a simple fluorometric assay using etheno-NAD. As shown in Fig. 1A, intact human peripheral neutrophils showed scant extracellular etheno-NAD cleaving activity, much less than that of mouse (either bone marrow or peripheral) neutrophils. It is to be noted that mature mouse peripheral neutrophils have lower extracellular etheno-NAD cleaving activity than immature mouse bone marrow neutrophils (Fig. 1A). Also, there are no data to date comparing the extracellular NAD cleaving activity of intact human peripheral neutrophils with that of other immune cell types.
Thus, we next compared the extracellular etheno-NAD cleaving activity of human neutrophils with that of other human primary immune cells, namely monocytes and lymphocytes. As shown in Fig. 1B, human peripheral neutrophils showed much less extracellular etheno-NAD cleaving activity than monocytes and lymphocytes. We also compared the extracellular etheno-NAD cleaving activity of DMSO- and retinoic acid-differentiated neutrophil-like HL-60 cells with that of U937 and Jurkat T cells, to check whether the cell lines showed a similar pattern. As expected, DMSO-differentiated neutrophil-like HL-60 cells were devoid of extracellular etheno-NAD metabolizing activity, whilst U937 and Jurkat T cells, like primary human monocytes and lymphocytes, respectively, showed marked extracellular etheno-NAD metabolizing activity (Fig. 1B). However, retinoic acid-treated HL-60 cells showed a marked increase in extracellular etheno-NAD cleaving activity (Fig. 1B), as reported previously [14]. Identification of the major extracellular metabolite generated from extracellular NAD degradation by human peripheral neutrophils and mouse bone marrow neutrophils Next, we attempted to identify, by HPLC, the extracellular metabolites generated from extracellular NAD degradation by human peripheral neutrophils and mouse bone marrow neutrophils. As shown in Fig. 2, the major extracellular metabolite generated from extracellular NAD degradation by both human peripheral neutrophils and mouse bone marrow neutrophils was ADPR. Extracellular NAD degradation by human neutrophils, as expected, was scant following incubation of NAD with human neutrophils for 15 minutes or 1 hour (Fig. 2A). Though about three times fewer mouse bone marrow neutrophils (3×10^6 cells/ml) than human neutrophils (1×10^7 cells/ml) were used, degradation of extracellular NAD by mouse bone marrow neutrophils was remarkable (Fig. 2B). N-Formyl-methionine-leucine-phenylalanine (fMLP) down-regulates the extracellular etheno-NAD cleaving activity of intact human neutrophils Next, we investigated whether the scant extracellular etheno-NAD cleaving activity of human neutrophils is regulated following activation with fMLP, a chemoattractant for neutrophils. Since fMLP-like molecules could be abundant at bacterial infection sites, we examined whether the extracellular etheno-NAD cleaving activity of intact human neutrophils is affected by fMLP stimulation. As shown in Fig. 3A, stimulation with fMLP induced a concentration-dependent decrease in the extracellular etheno-NAD cleaving activity of intact human neutrophils, which became significant at 100 nM and was more marked at a 1 μM concentration. In the next experiment, it was of interest whether fMLP could affect extracellular etheno-NAD cleaving activity in other cells such as monocytes. Monocytes, however, were found to be unaffected by fMLP (Fig. 3B). WRW4, an FPRL1 antagonist, blocks the fMLP-induced decrease in the extracellular etheno-NAD cleaving activity of intact human neutrophils Human neutrophils express two types of fMLP receptors. The prototype receptor, formyl peptide receptor (FPR), is activated by low concentrations (in the picomolar to low nanomolar range) of fMLP and is considered a high affinity fMLP receptor. An FPR variant, FPR-like 1 (FPRL1), interacts with high concentrations (in the micromolar range) of fMLP and is defined as a low affinity fMLP receptor [15].
Since fMLP at micromolar concentrations showed the maximal inhibition of the extracellular etheno-NAD cleaving activity of intact human neutrophils (Fig. 3A), we hypothesized that the fMLP-induced decrease in the extracellular etheno-NAD cleaving activity might occur through FPRL1. To test this hypothesis, the effect of WRW4, a potent antagonist of FPRL1, on the fMLP-induced effect was examined. WRW4 at 10 μM, but not at 1 μM, reversed the fMLP-induced decrease in the extracellular etheno-NAD cleaving activity of intact human neutrophils (Fig. 4), suggesting that the fMLP-induced decrease in the extracellular etheno-NAD cleaving activity of intact human neutrophils occurs via FPRL1. DISCUSSION Neutrophils and monocytes are the major immune cells recruited to the site of inflammation [16]. During tissue damage, NAD may be leaked to the extracellular environment [17]. At the inflammatory site, it may also be released from mechanically stressed cells to the extracellular environment [17]. Thus, released or leaked NAD may impose autocrine or paracrine effects on the intracellular signaling of a variety of cells, including human neutrophils [8,9]. Our observation that intact human neutrophils have scant extracellular etheno-NAD cleaving activity (Fig. 1) suggests that intact extracellular NAD, rather than its metabolites, might be more important for the modulation of human neutrophil functions. This contention is in line with the previous report that extracellular application of as little as 1 μM NAD, but not ADPR, increases [Ca2+]i in human neutrophils [8]. However, an about 25-fold higher concentration of NAD is required for the [Ca2+]i increase in human monocytes [9]. Our observation indicates that human monocytes have a much higher extracellular NAD metabolizing activity than human neutrophils, with the consequence of a much higher NAD clearance in the vicinity of their cell membranes, and thus may explain why a higher concentration of extracellular NAD is required for the [Ca2+]i increase in human monocytes [9] than in human neutrophils [8]. Differentiation of monocytes into macrophages is accompanied by down-regulation of CD38 (the major extracellular NADase) expression [18]. Thus, the rather fully differentiated state of peripheral neutrophils could be compatible with their scant extracellular NAD metabolizing activity, compared with other kinds of immune cells. In line with this contention, the etheno-NADase activity of mouse peripheral neutrophils was less than that of mouse bone marrow neutrophils (Fig. 1A). The limitation of this study is the lack of definitive evidence for the specific enzyme responsible for the scant extracellular NAD metabolizing activity of human neutrophils. Prototypic members of the cell surface NAD glycohydrolases are CD38 and CD157, which show both ADP-ribosylcyclase and cADPR-hydrolase activity apart from their intrinsic NAD glycohydrolase activity [13,19-25]. The presence of enzymatically inactive CD157 in human neutrophils [13], and also our observation of the generation of extracellular ADPR from extracellular NAD degradation by human neutrophils (Fig. 2A), made us assume that CD38 might be involved in the scant extracellular NAD metabolizing activity of human neutrophils. Interestingly, the scant extracellular etheno-NAD cleaving activity of intact human neutrophils was further decreased by fMLP (100 nM-1 μM) (Fig. 3A). It is notable that fMLP-induced CD38 cleavage was previously reported in human neutrophils [26].
However, the phenomenon reported by Fujita et al. occurs at a much later time point (~1-2 hr) with 10 nM fMLP. Our study indicates that the fMLP-induced decrease in the extracellular etheno-NAD cleaving activity occurs at a much earlier time point, starting from around 5 min, with relatively higher fMLP concentrations (100 nM-1 μM) (Fig. 3A). Further, we were unable to find any changes induced by fMLP (1 μM) in the surface expression of CD38 determined by flow cytometry, in CD38 expression determined by Western blot, or in the modification of CD38 by phosphorylation or ubiquitination (data not shown). The molecular mechanism of the fMLP effect (Fig. 3A), as well as the identity of the enzyme responsible for the scant extracellular NAD cleaving activity of human neutrophils, needs to be clarified. To our knowledge, this is the first report that explicitly shows the scant extracellular etheno-NAD cleaving activity of intact human neutrophils compared to other immune cell types, providing a physiological basis for the more sensitive regulation of human neutrophils by NAD compared with other immune cells [8,9]. Interestingly, this scant extracellular etheno-NAD cleaving activity of human neutrophils was further decreased by fMLP in a WRW4 (an FPRL1 antagonist)-sensitive manner (Fig. 4B). The specific signaling processes downstream of the activation of FPRL1 [27] involved in the fMLP-induced inhibition of etheno-NADase activity remain to be delineated. However, though human monocytes express FPRL1 [28], the extracellular etheno-NAD cleaving activity of human monocytes was found to be unaffected by micromolar fMLP (Fig. 3B). The extracellular NAD-induced [Ca2+]i increase in human neutrophils has been shown to translate into neutrophil activation, as determined by superoxide and nitric oxide production and chemotaxis [8]. fMLP-like molecules as well as NAD would be abundant at bacterial infection foci, where neutrophils are highly accumulated. Our observation of fMLP-induced inhibition of the extracellular NAD cleaving activity of human neutrophils (Fig. 3A) would thus lead to an enhanced activation of neutrophils via NAD at bacterial infection foci, with a beneficial effect on local bacterial clearance.
2016-05-04T20:20:58.661Z
2014-12-01T00:00:00.000
{ "year": 2014, "sha1": "77aba03f257a150a97f2d0e39bcf0782194b338f", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4296039?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "77aba03f257a150a97f2d0e39bcf0782194b338f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119172213
pes2o/s2orc
v3-fos-license
The Jacobi-metric for timelike geodesics in static spacetimes It is shown that the free motion of massive particles moving in static spacetimes is given by the geodesics of an energy-dependent Riemannian metric on the spatial sections, analogous to Jacobi's metric in classical dynamics. In the massless limit Jacobi's metric coincides with the energy-independent Fermat or optical metric. For stationary metrics, it is known that the motion of massless particles is given by the geodesics of an energy-independent Finslerian metric of Randers type. The motion of massive particles is governed by neither a Riemannian nor a Finslerian metric. The properties of the Jacobi metric for massive particles moving outside the horizon of a Schwarzschild black hole are described. By contrast with the massless case, the Gaussian curvature of the equatorial sections is not always negative. Introduction An elegant device for implementing the Principle of Least Action of Maupertuis was introduced by Jacobi. One varies the action of a mechanical system along an unparameterized path γ in an n-dimensional configuration space Q with coordinates x^i, i = 1, 2, ..., n, and canonical momenta p_i, subject to the constraint that along the curve γ the energy E is conserved. An equivalent formulation is to lift the curve γ to the cotangent space T*Q and restrict variations to a level set of the Hamiltonian H(x, p) = E. In the simplest case the kinetic energy is T = ½ m_{ij}(x) ẋ^i ẋ^j, where the space-dependent mass matrix m_{ij}(x) endows the configuration space Q with a Riemannian metric m_{ij}(x) dx^i dx^j, and the Lagrangian giving the equations of motion is
\[ L = \tfrac{1}{2} m_{ij}(x)\, \dot{x}^i \dot{x}^j - V(x). \]  (1.1)
Jacobi showed that the unparameterized curves extremizing the constrained action are geodesics of the rescaled Jacobi metric
\[ j_{ij} = 2\bigl(E - V(x)\bigr)\, m_{ij}(x). \]  (1.2)
One recovers the parameterization of the motion as a function of physical time t by noting that length s with respect to the Jacobi metric is related to the original time parameter t by
\[ \frac{ds}{dt} = 2\bigl(E - V\bigr). \]  (1.4)
This procedure opened the way to investigations of the motion of the original mechanical system using the methods developed by differential geometers to investigate geodesic motion. Of particular interest is the influence of the curvature of the Jacobi metric [1,2]. An important application to gravity was the work by Ong [1], who studied the curvature of the Jacobi metric for the Newtonian N-body problem (see also [3]). One has Q = R^{3n} with the flat metric
\[ \sum_a m_a\, d\mathbf{x}_a \cdot d\mathbf{x}_a, \]  (1.5)
and the potential energy is
\[ V = -\sum_{a<b} \frac{G m_a m_b}{|\mathbf{x}_a - \mathbf{x}_b|}. \]  (1.6)
Since in this case the Jacobi metric is conformally flat, the evaluation of the curvature is straightforward. If N = 2, the problem reduces to Kepler's problem for the relative motion, and the relevant Jacobi metric is, up to an unimportant overall constant factor (in units with G = 1),
\[ \left(E + \frac{2Mm}{r}\right)\left(dx^2 + dy^2 + dz^2\right). \]  (1.7)
By symmetry, one may restrict attention to the equatorial plane θ = π/2, which is a totally geodesic submanifold. One then has a 2-dimensional axially symmetric metric
\[ \left(E + \frac{2Mm}{r}\right)\left(dr^2 + r^2 d\phi^2\right). \]  (1.8)
Ong [1] showed that the Gaussian curvature of this metric has the opposite sign to that of the energy E. If E > 0, which of course corresponds to unbound hyperbolic or parabolic orbits, he showed that the Jacobi metric (1.8) is well defined and complete for 0 ≤ r < ∞, and if E < 0, which corresponds to bound elliptical orbits, it is well defined for 0 < r < 2Mm/(−E). Ong also gave an isometric embedding of the Jacobi manifold into three-dimensional Euclidean space E^3 with Cartesian coordinates x, y, z as a surface of revolution z = f(√(x² + y²)).
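The 30° cone quoted in the next passage can be checked directly; the following short computation is a verification sketch, under the assumption that the conformal factor E + 2Mm/r reconstructed in (1.8) above is the intended one.
\[
  ds^2\big|_{E=0} = \frac{2Mm}{r}\left(dr^2 + r^2 d\phi^2\right),
\]
\[
  \text{circumference: } C(r) = 2\pi\sqrt{2Mm\,r}, \qquad
  \text{radial distance: } s(r) = \int_0^r \sqrt{\frac{2Mm}{r'}}\,dr' = 2\sqrt{2Mm\,r},
\]
\[
  \frac{C(r)}{2\pi\, s(r)} = \frac{1}{2} = \sin\alpha
  \quad\Longrightarrow\quad \alpha = 30^{\circ}, \qquad
  z = \cot\alpha\,\sqrt{x^2+y^2} = \sqrt{3}\,\sqrt{x^2+y^2}.
\]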
If E = 0, then f′′ = 0 and the surface is the cone z = √3 √(x² + y²), which has deficit angle π or, equivalently, semi-angle 30°. In the other two cases the surface approaches the cone near the origin. If E > 0, then f′′ < 0; the surface has negative Gauss curvature and remains outside the cone. If E < 0, then f′′ > 0; the surface remains inside the cone and asymptotes the cylinder of radius 2Mm/(−E). Ong also studied the three-body problem using these techniques. It is obviously of interest, if only to extend one's intuition by means of an easily visualised model, to see whether these ideas can be applied to General Relativity. At a formal level, one takes the configuration space {Q, m_ij} to be Wheeler's superspace equipped with its DeWitt metric (cf. [4]). In the vacuum case, the potential V is given by (1.9), and the Hamiltonian constraint implies that the energy vanishes, E = 0. One then obtains a picture of spacetime as a sheaf of geodesics in superspace. Having obtained the geodesic between two points in superspace, one obtains the time duration between them using (1.4), thus solving the much discussed "problem of time". Less ambitiously, one may confine attention to a mini-superspace truncation, and this has been done in attempts to investigate inflation [5] and the chaotic behaviour of Bianchi IX Mixmaster models (see e.g. [6]). The focus of the present paper is different. It is the motion of a test particle of rest mass m following a timelike geodesic in a stationary background. A limiting case would be a zero rest-mass particle. This latter case is well known, in the static case, to reduce to geodesic motion with respect to the optical or Fermat metric f_{ij}(x) = g_{ij}/(−g_{tt}). This was studied in [7,8], and a number of limitations on possible motions were obtained using the Gauss-Bonnet theorem. In particular, in [8] the Gaussian curvature of the Schwarzschild optical metric restricted to the equatorial plane was shown to be everywhere negative and to approach a constant value near the horizon r = 2M. In [9] this behaviour was found to be universal for the near-horizon regions of non-extreme static black holes. One purpose of the present paper is to extend this work to the case of massive particles. The extension of Fermat's principle to cover stationary spacetimes entails replacing the Riemannian metric by a Finsler metric of Randers type (see e.g. [10,11,12]). The Jacobi metric for static spacetimes If the action for a massive particle is
\[ S = -m \int \sqrt{-g_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu}\, d\lambda, \]
whence the Hamiltonian constraint on the canonical momenta is
\[ g^{\mu\nu} p_\mu p_\nu + m^2 = 0, \]
the Hamilton-Jacobi equation becomes, for a static metric ds² = g_{tt} dt² + g_{ij} dx^i dx^j and the ansatz S = −Et + W(x^i),
\[ g^{ij}\, \partial_i W\, \partial_j W = \frac{E^2}{-g_{tt}} - m^2, \]
where f_{ij} = g_{ij}/(−g_{tt}) is the optical or Fermat metric. Thus
\[ j^{ij}\, \partial_i W\, \partial_j W = 1, \]
which is the Hamilton-Jacobi equation for geodesics of the Jacobi metric j_{ij} given by
\[ j_{ij} = \bigl(E^2 + m^2 g_{tt}\bigr)\, \frac{g_{ij}}{-g_{tt}}. \]
Note that in the massless case, m = 0, the Jacobi metric coincides with the Fermat metric up to a factor of E², and as a consequence the geodesics, considered as unparameterized curves, do not depend upon the energy E. However, in the massive case, m ≠ 0, the geodesics do depend upon E. In general, if the spacetime is asymptotically flat and the sources obey the energy conditions … The Schwarzschild Case In the case of the Schwarzschild solution the Jacobi metric is
\[ j_{ij}\, dx^i dx^j = \left(E^2 - m^2\Bigl(1 - \frac{2M}{r}\Bigr)\right)\left(\frac{dr^2}{\bigl(1 - \frac{2M}{r}\bigr)^2} + \frac{r^2\bigl(d\theta^2 + \sin^2\theta\, d\phi^2\bigr)}{1 - \frac{2M}{r}}\right). \]  (3.1)
The first large bracket in (3.1) is the conformal factor and the second large bracket the optical metric.
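As a worked step between the general formula and (3.1), the following verification sketch, using only the expressions above, substitutes the Schwarzschild components:
\[
  g_{tt} = -\Bigl(1 - \frac{2M}{r}\Bigr), \qquad
  g_{ij}\,dx^i dx^j = \frac{dr^2}{1 - \frac{2M}{r}} + r^2\bigl(d\theta^2 + \sin^2\theta\, d\phi^2\bigr),
\]
\[
  j_{ij}\,dx^i dx^j = \bigl(E^2 + m^2 g_{tt}\bigr)\frac{g_{ij}\,dx^i dx^j}{-g_{tt}}
  = \Bigl(E^2 - m^2\bigl(1 - \tfrac{2M}{r}\bigr)\Bigr)
    \left(\frac{dr^2}{\bigl(1 - \tfrac{2M}{r}\bigr)^2}
        + \frac{r^2\bigl(d\theta^2 + \sin^2\theta\, d\phi^2\bigr)}{1 - \tfrac{2M}{r}}\right),
\]
and setting m = 0 leaves E² times the optical metric, in agreement with the massless limit noted above.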
The optical metric is defined for 2M < r < ∞, and the horizon at r = 2M is infinitely far away with respect to the radial optical distance, i.e. the tortoise coordinate

r* = r + 2M ln(r/2M − 1).

By spherical symmetry, in order to study geodesics it is sufficient to consider the equatorial plane θ = π/2, on which the restriction of the Jacobi metric is

( E² − m²(1 − 2M/r) ) [ dr²/(1 − 2M/r)² + r² dφ²/(1 − 2M/r) ].   (3.3)

By axisymmetry we have a conserved quantity, often called Clairaut's constant, which corresponds physically to angular momentum: l = j_φφ dφ/ds, where s is arc length with respect to the Jacobi metric. This agrees with the standard result that l = m r² dφ/dτ, where τ is proper time along the particle's worldline, once the relation between s and τ is taken into account. Thus l is indeed the angular momentum. In standard treatments u = 1/r satisfies Binet's equation, which for a massive particle orbiting a Schwarzschild black hole reads

d²u/dφ² + u = M/h² + 3Mu²,

where h = l/m is the conserved angular momentum per unit mass. If this were a classical central-orbit problem we would say that we have the sum of an inverse-fourth-power and an inverse-square-law attraction. There is a first integral

(du/dφ)² = 2Mu³ − u² + (2M/h²) u + C,   (3.12)

where C is a constant related to the energy per unit mass ℰ = E/m and the angular momentum per unit mass h by

C = (ℰ² − 1)/h².   (3.14)

Some Explicit Solutions

The behaviour of the orbits depends on two dimensionless quantities, the specific energy ℰ > 0 and M/h. In general, the solutions are given by elliptic integrals. However, there is a one-parameter family of explicit solutions of the form

u = A + B sech²(ωφ).

These solutions arise because two of the roots of the cubic in (3.12) coincide. If ω is real, the solutions are symmetric about φ = 0, where r = 1/(A + B), and end spiralling around a circular geodesic at r = 1/A. If h² = 12M², then B = 0 and we have the innermost stable circular orbit at r = 6M, for which the specific energy satisfies ℰ² = 8/9. If h² = 16M², the energy per unit mass is ℰ = 1. Since A = −B = 1/(4M), these orbits are in free fall from infinity, starting from rest, and spiral around a circular orbit at r = 4M. All orbits starting from rest at infinity (i.e. having ℰ = 1) with |h| < 4M fall through the horizon at r = 2M, while all such orbits with |h| > 4M are scattered back to infinity. These latter orbits are relevant for the theory of the BSW effect [13].

Bound States and Jacobi Functions

For a bound orbit we have three real positive roots of the cubic in (3.12), taken to satisfy

u₁ ≤ u ≤ u₂ ≤ u₃.   (3.20)

The first integral (3.12) then leads to an elliptic integral, and thus [16] the orbit may be expressed in terms of Jacobi elliptic functions, with modulus k given by

k² = (u₂ − u₁)/(u₃ − u₁),

where, from (3.20), u₂ = 1/r_p and u₁ = 1/r_a, with r_p ≤ r_a the radii at perihelion and aphelion respectively, and

L = 2 r_p r_a/(r_p + r_a),   e = (r_a − r_p)/(r_a + r_p).   (3.28)

These expressions generalise the Newton-Kepler case, for which the orbits are ellipses with foci at the origin, given by

u = (1 + e cos φ)/L,

where L is the semi-latus rectum and e is the eccentricity. Note that in both cases L is the harmonic mean of the perihelion and aphelion radii.

Relation to Weierstrass Functions and Photon Orbits

The term linear in u may be eliminated by setting

Mu = 2w + 1/6,

whereupon (3.12) becomes the standard Weierstrass equation

(dw/dφ)² = 4w³ − g₂ w − g₃,

with invariants g₂ = 1/12 − M²/h² and g₃ = 1/216 − M²/(12h²) − C M²/4. We then find that, with φ̂ = φ − φ₀, w = ℘(φ̂; g₂, g₃), and it follows that the massive particle orbits are given by

Mu = 2℘(φ − φ₀; g₂, g₃) + 1/6.

A particular example is provided by the cardioidal photon orbit [14], r = M(1 + cos φ). If we take the case for which 1 − 6Mc > 0, this gives a family of such orbits (3.39). These orbits run from the past singularity at r = 0 out to a maximum radius r_max and return to the future singularity at r = 0. Note that r_max ≥ 2M. For a recent treatment of photon orbits in the Schwarzschild metric using Jacobi elliptic functions, the reader is directed to [15].
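Two of the quoted facts can be checked independently. First, the ℰ = 1, h = 4M member of the explicit family, with A = −B = 1/(4M) as stated above and ω = 1/(2√2) (an inferred constant, since the original display is lost), satisfies Binet's equation exactly. Second, the standard Schwarzschild circular-orbit relations ℰ² = (1 − 2M/r)²/(1 − 3M/r) and h² = Mr²/(r − 3M), which are textbook results rather than expressions recovered from the lost displays, reproduce the special values quoted in the text:

```python
import sympy as sp

phi, M = sp.symbols('phi M', positive=True)

# 1) Binet check for the E/m = 1, h = 4M orbit: u = (1/4M) tanh^2(phi/(2*sqrt(2)))
u = sp.tanh(phi/(2*sp.sqrt(2)))**2/(4*M)
h = 4*M
residual = sp.diff(u, phi, 2) + u - M/h**2 - 3*M*u**2
print(sp.simplify(residual.rewrite(sp.exp)))          # -> 0

# 2) Circular-orbit relations evaluated at the quoted radii
r = sp.symbols('r', positive=True)
E2 = (1 - 2*M/r)**2/(1 - 3*M/r)                       # specific energy squared
h2 = M*r**2/(r - 3*M)                                 # specific angular momentum squared
for rc in (4*M, 6*M):
    print(rc, sp.simplify(E2.subs(r, rc)), sp.simplify(h2.subs(r, rc)))
# -> at r = 4M: E^2/m^2 = 1 and h^2 = 16M^2; at r = 6M (the ISCO): 8/9 and 12M^2
```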
Properties of the Jacobi metric

Since any spherically symmetric metric is conformally flat, we could adopt isotropic coordinates and avail ourselves of the results given in [1]. Alternatively, the calculation of the Gaussian curvature K with respect to the Jacobi metric on the equatorial planes is straightforward using equation (8) of [9]. However, unless m = 0, this leads to rather complicated and un-illuminating expressions. A qualitative analysis, based on the behaviour of circular geodesics, i.e. those with r = constant, seems preferable. We restrict attention to an equatorial plane.

If E² ≥ m², K tends to zero at infinity and, as r tends to the horizon at r = 2M (which is at infinite Jacobi radial distance), the curvature tends to a negative constant. If E² < m², the Jacobi manifold has an outer boundary at which the metric vanishes. This happens when

E² = m²(1 − 2M/r),   i.e.   r = 2M/(1 − E²/m²).

This outer boundary should be thought of as a point, since the Jacobi circumference vanishes there. In the vicinity of the boundary K is positive. Circular Jacobi geodesics are possible. These correspond to extrema of the Jacobi circumference and are located at values of r for which

d/dr [ r² ( E² − m²(1 − 2M/r) ) / (1 − 2M/r) ] = 0.

Circular geodesics exist with real energies for all values of r ≥ 3M. For every value of E²/m² > 1 there is a unique circular geodesic with radius r between 3M and 4M. For every value of E²/m² between 8/9 and 1 there are two circular geodesics: the inner, which is unstable, has its radius between 4M and 6M, while the outer has radius greater than 6M. Three interesting cases arise; the first is m² = 0, r = 3M: these are circular null geodesics, which are circular geodesics of the optical metric. These results are consistent with the standard approach to circular timelike geodesics, which is to require the simultaneous vanishing of the right-hand side of (3.7) and its derivative. Eliminating E² and solving for u gives h² = M/(u²(1 − 3Mu)), so that 12M²/h² varies from 0 to 3/4 as r varies from 3M to 4M. For the bound circular geodesics with (8/9) m² ≤ E² ≤ m², one finds that 12M²/h² varies from 3/4 to 1 as r varies from 4M to 6M, where it achieves its maximum value, and thereafter, as r varies from 6M to infinity, it decreases monotonically to zero.

Gauss Curvature and Isometric Embedding

We can use these results to say something about the Gauss curvature K. We begin by recalling that if a metric of the form

A(r)² dr² + C(r)² dφ²

is to be induced on a surface of revolution z = f(ρ), ρ = √(x² + y²), in E³, with ρ = C(r), then

(dz/dr)² = A² − (dC/dr)².   (3.47)

In our case, the embedding will extend from infinity towards the horizon at r = 2M as long as the r.h.s. of (3.47) remains positive. The Jacobi metric is conformal to the spatial metric of the Schwarzschild solution, whose equatorial plane is well known to be isometrically embeddable, all the way down to the horizon, as the Flamm paraboloid [18,19]

z² = 8M(r − 2M).

However, this is in general not possible for the Jacobi metric. For example, if m = 0 one is limited to the region r > (9/4)M. In that case the Gaussian curvature is

K = −(2M/r³)(1 − 3M/(2r)),

which is everywhere negative, and near the horizon the optical metric is asymptotic to one of constant negative curvature equal to −1/(4M)². This is analogous to the well-known fact that the metric with A = 1 and C = a e^(−r/a), a > 0, has constant negative curvature −1/a² and may be embedded into E³ as the surface of revolution whose meridional curve is a tractrix. However, this "Beltrami trumpet", for which ρ = a e^(−r/a), is incomplete and only occupies the region for which ρ < a, i.e. r > 0. By (3.3) the Jacobi metric is conformal to the optical metric, with conformal factor E² − m²(1 − 2M/r). It too tends to constant negative curvature near the horizon.
Thus in general one does not expect to be able to capture the near-horizon geometry of the Jacobi metric by an isometric embedding into E³ as a surface of revolution. This represents a practical obstruction to constructing black-hole analogues using such materials as graphene [20].

Consider now the case m² > E² > (8/9) m². Near the outer boundary the Gauss curvature is positive, and it is positive at the outer circular geodesic, which is a local maximum of the Jacobi circumference. By the time we get to the inner, unstable orbit, which is a local minimum of the Jacobi circumference, the Gauss curvature is negative, and it is negative near the horizon at r = 2M, near which the Jacobi circumference diverges. If E² < (8/9) m² there are no circular geodesics, and the curvature is positive near the boundary and negative near the horizon. Thus the Gauss curvature of the Jacobi metric restricted to the equatorial plane is not everywhere negative, as is the case for the Fermat metric.

The Jacobi metric for stationary spacetimes

We cast the spacetime metric in Zermelo form [10]

ds² = V² [ −dt² + h_ij (dx^i − W^i dt)(dx^j − W^j dt) ],   (4.1)

where I shall call h_ij the Zermelo metric, W^i the wind, and where K^μ ∂/∂x^μ = ∂/∂t is the timelike Killing vector field. Note that if the wind vanishes, the Zermelo metric coincides with the optical or Fermat metric. The Lagrangian L for a point particle of mass m undergoing geodesic motion in a spacetime with metric (4.1) is

L = −m V √( 1 − h_ij (ẋ^i − W^i)(ẋ^j − W^j) ),

where ẋ^i = dx^i/dt. The canonical momenta p_i are therefore given by

p_i = m V h_ij (ẋ^j − W^j) / √( 1 − h_kl (ẋ^k − W^k)(ẋ^l − W^l) ).   (4.4)

The Hamiltonian, H = p_i ẋ^i − L, is given by

H = √( m² V² + h^{ij} p_i p_j ) + W^i p_i,   (4.5)

where h^{ik} h_{kj} = δ^i_j. Note that in the massless case, m = 0, the Hamiltonian H becomes

H = √( h^{ij} p_i p_j ) + W^i p_i,

which coincides with equation (14) of [10]. Alternatively, one may start from the mass-shell condition

g^{μν} p_μ p_ν + m² = 0;   (4.7)

if one solves (4.7) for p_0 = −H, one obtains (4.5). In general, on the level set H = E of the Hamiltonian we have

√( m² V² + h^{ij} p_i p_j ) = E − W^i p_i.

If the mass m is non-zero, it is not possible to cast this in the form of an expression which is homogeneous of degree two in the momenta p_i equated to a constant. If it were so, say J^{ij}(x) p_i p_j = 1, then a Legendre transform would result in a Lagrangian which is of degree two in the velocities v^i = ẋ^i, and hence we would be dealing with a Finsler structure, possibly Riemannian, as in the case of a static metric. Thus we are faced with a geometric structure more general than a Riemannian or even a Finsler metric.
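The degree-one homogeneity that any Finsler norm must satisfy is easy to test symbolically. The toy check below uses a flat Randers-type norm as a stand-in (it is not the Zermelo data of the spacetime above); the massive-particle level set fails the analogous test precisely because of the additive mass term under the square root:

```python
import sympy as sp

vx, vy, lam = sp.symbols('v_x v_y lambda', positive=True)
b = sp.Rational(1, 2)                       # Randers one-form coefficient, |b| < 1
F = sp.sqrt(vx**2 + vy**2) + b*vx           # flat Randers norm F(v) = |v| + b.v

scaled = F.subs({vx: lam*vx, vy: lam*vy}, simultaneous=True)
print(sp.simplify(scaled - lam*F))          # -> 0: F is homogeneous of degree one
# Adding a constant m under the square root, as in the massive level set,
# destroys this homogeneity unless m = 0.
```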
2015-09-09T07:35:51.000Z
2015-08-27T00:00:00.000
{ "year": 2015, "sha1": "75dd38dbbdfef0a2e5a2a261413e5597b9afb29e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1508.06755", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "75dd38dbbdfef0a2e5a2a261413e5597b9afb29e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
234781684
pes2o/s2orc
v3-fos-license
Mycobacterium tuberculosis precursor rRNA as a measure of treatment-shortening activity of drugs and regimens

There is urgent need for new drug regimens that more rapidly cure tuberculosis (TB). Existing TB drugs and regimens vary in treatment-shortening activity, but the molecular basis of these differences is unclear, and no existing assay directly quantifies the ability of a drug or regimen to shorten treatment. Here, we show that drugs historically classified as sterilizing and non-sterilizing have distinct impacts on a fundamental aspect of Mycobacterium tuberculosis physiology: ribosomal RNA (rRNA) synthesis. In culture, in mice, and in human studies, measurement of precursor rRNA reveals that sterilizing drugs and highly effective drug regimens profoundly suppress M. tuberculosis rRNA synthesis, whereas non-sterilizing drugs and weaker regimens do not. The rRNA synthesis ratio provides a readout of drug effect that is orthogonal to traditional measures of bacterial burden. We propose that this metric of drug activity may accelerate the development of shorter TB regimens.

Since the advent of effective antibiotic therapy, there has been an enduring quest to shorten the length of treatment required to reliably cure tuberculosis (TB) [1-5]. Current standard regimens range in duration from 6 months for drug-susceptible TB to years for some drug-resistant TB infections [6,7]. Two factors considered crucial to treatment-shortening activity are the capacity of a drug to penetrate and accumulate in lung lesions (i.e., pharmacokinetic properties) [8,9] and the inherent activity of a drug against residual drug-tolerant Mycobacterium tuberculosis (Mtb) populations that survive initial drug killing [2,4,10,11]. Our limited understanding of drug activity against residual Mtb populations that withstand early killing impedes development of shorter treatments [2]. Conventionally, treatment-shortening activity has been viewed as synonymous with killing drug-tolerant Mtb, leading to the historical term "sterilizing activity" [12]. However, it is unclear that killing alone entirely explains the ability of drugs or regimens to cure TB [1,2]. Drug activity against residual Mtb populations is currently not directly measurable [2]. Instead, in a lengthy and expensive process, the activity of drugs has been cataloged empirically based on the degree to which drugs shorten the time needed to achieve non-relapsing cure in animal relapse studies and human trials [2,13]. Rifampin, pyrazinamide, and bedaquiline have been classified as potent sterilizing agents [12,14]. Many other drugs, including isoniazid, streptomycin, and ethambutol, are classified as non-sterilizing because they may be bactericidal during the first days of treatment but contribute only modestly to shortening the time needed to achieve non-relapsing cure [12,14]. To develop new, shorter TB regimens, there is a critical need to measure treatment-shortening activity at an early stage of drug and regimen evaluation [1,4,10,11]. In this work, we evaluated an additional dimension of treatment-shortening activity: whether sterilizing drugs and non-sterilizing drugs have different impacts on the physiologic state of Mtb, and whether this difference can be exploited as a biomarker for treatment-shortening potential [1,11]. Here we show that sterilizing and non-sterilizing drugs have distinct effects on rRNA synthesis, a fundamental bacterial physiologic parameter.
Drugs and regimens that shorten treatment and ensure durable, non-relapsing cure profoundly suppress Mtb rRNA synthesis, whereas drugs and regimens with lower sterilizing activity allow surviving Mtb populations to sustain ongoing rRNA synthesis. Quantification of rRNA synthesis may serve as a marker of the ability of a drug or drug regimen to shorten TB treatment.

Results

Precursor rRNA abundance provides a measure of Mtb physiology. Mtb synthesizes a pre-rRNA transcript that includes mature rRNA (16S and 23S) and short-lived spacer sequences [external transcribed spacer 1 (ETS1) and internal transcribed spacer 1 (ITS1)] (Fig. 1a). Since spacer regions are rapidly degraded while mature rRNA is stable, the abundance of pre-rRNA relative to mature rRNA serves as a measure of ongoing rRNA synthesis [15]. For internal normalization, we defined the rRNA synthesis (RS) ratio as the ratio of ETS1 to 23S rRNA copies (measured via droplet digital PCR) multiplied by 10^4. We also confirmed that measurement of ITS1 recapitulates results based on ETS1 (Supplementary Figs. 1, 5 and 7).

We used three experimental models to confirm that the RS ratio is a physiologic marker that correlates with growth [16,17]. First, a progressive oxygen depletion model [18] demonstrated that Mtb growth and rRNA synthesis decreased in parallel (Fig. 1b). Second, we evaluated the time course of rRNA synthesis in the untreated chronic murine TB model. During the first days after infection, the burden of Mtb colony-forming units (CFU) in the lungs increased dramatically and the RS ratio was high (Fig. 1c), consistent with rapid bacillary replication. After the onset of adaptive immunity [19,20], the Mtb burden plateaued and rRNA synthesis slowed (day 25). By day 53, the RS ratio was low but far from maximally suppressed, consistent with previous "replication clock" experiments showing that the plateau in CFU reflects a dynamic equilibrium between death and ongoing replication rather than a dormant non-replicating state [19,21,22]. Third, the RS ratio demonstrated that key histologic microenvironments of the C3HeB/FeJ mouse harbor distinct Mtb populations that have markedly different levels of ongoing rRNA synthesis. Unlike other murine models, after low-dose aerosol infection with Mtb, the C3HeB/FeJ mouse develops type 1 granulomas (Fig. 2a) that exhibit a large central area of hypoxic caseous necrosis surrounded by a rim of inflammatory cells, including viable and degenerate neutrophils and heavily vacuolated macrophages, bounded by a cuff of fibrotic compressed lung tissue and infiltrating leukocytes [9,23-26]. Using quantitative multiplexed RNA in situ hybridization (ISH), we measured the ratio of pre-rRNA to 23S rRNA signals within individual bacilli (Fig. 2a-g and Supplementary Fig. 3c). While 23S rRNA signals were similar in the rim and caseum (P = 0.62), the pre-rRNA mean fluorescent intensity (MFI) was significantly lower in the caseum (P < 0.0001), indicating a quiescent caseum Mtb population with decreased rRNA synthesis. Evaluation of the RS ratio at an individual-bacillus level revealed population heterogeneity in the RS ratio within both the inflammatory rim and the caseum (Fig. 2e). The Mtb population of the rim was more heterogeneous than the population of the caseum (variance = 0.64 vs. 0.38, respectively; P < 0.0001). These observations indicate that the RS ratio measures a fundamental physiological parameter of Mtb populations.
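As defined above, the RS ratio is simple arithmetic on the duplexed ddPCR copy numbers. A minimal sketch, with hypothetical input concentrations chosen only to illustrate the scaling:

```python
def rs_ratio(ets1_copies, r23s_copies, scale=1e4):
    """rRNA synthesis (RS) ratio as defined in the text:
    precursor (ETS1) copies per mature 23S rRNA copy, scaled by 10^4."""
    return scale * ets1_copies / r23s_copies

# hypothetical ddPCR readout: 120 ETS1 copies/uL against 2.4e6 23S copies/uL
print(rs_ratio(120, 2.4e6))   # -> 0.5
```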
Sterilizing potency and rRNA synthesis in vitro.

We tested canonical TB antimicrobials for effects on rRNA synthesis using in vitro time-kill experiments. Drugs with well-established sterilizing potency included rifampin, bedaquiline, and pyrazinamide. Pyrazinamide was excluded from in vitro analysis because its full activity requires in vivo conditions [27]. The well-established non-sterilizing drugs examined included isoniazid, streptomycin, and ethambutol. We used drug concentrations that maximally decreased Mtb CFU (up to 99% after 3 days). Sterilizing drugs (rifampin and bedaquiline) suppressed rRNA synthesis significantly more than non-sterilizing drugs (isoniazid, streptomycin, and ethambutol) (Fig. 3a-e and Supplementary Table 1). RNAseq results confirmed that sterilizing drugs reduce pre-rRNA abundance far more than non-sterilizing drugs (Fig. 3f). After 48 h of exposure, ETS1 was substantially reduced with rifampin or bedaquiline relative to isoniazid, streptomycin, or ethambutol. To confirm drug effect on rRNA synthesis via an alternative approach, we used radionucleotide incorporation to quantify de novo nucleic acid synthesis. Rifampin and bedaquiline suppressed synthesis of DNA and total RNA (comprised primarily of rRNA) markedly more than isoniazid, streptomycin, or ethambutol (Supplementary Fig. 6b).

The effect of drugs on the RS ratio was independent of their effect on CFU. Bedaquiline immediately inhibited rRNA synthesis, while CFU did not decrease appreciably for >8 days. In contrast, rifampin decreased the RS ratio and CFU rapidly and simultaneously. The three non-sterilizing drugs decreased CFU quickly but had modest effects on the RS ratio. As anticipated, resistance emerged during isoniazid monotherapy, but this resistant population did not expand sufficiently to have a substantial impact on the RS ratio until well after 2 days of drug exposure (Supplementary Fig. 5i).

Fig. 2 Single-bacillus RS ratio in situ indicates that Mtb rRNA synthesis is lower in caseum than in the granuloma's inflammatory rim. a H&E-stained section of a single typical lung granuloma from a C3HeB/FeJ mouse. Most of the lesion is comprised of caseous necrosis (1). The caseum is surrounded by a rim of inflammatory cells that include viable and degenerate neutrophils and heavily vacuolated macrophages (2). The granuloma is contained by an outer rim of compressed lung tissue, fibrosis, and infiltrating leukocytes (3). b Granuloma from C3HeB/FeJ mouse lung with multiplexed ISH overlay staining for 23S rRNA (green), pre-rRNA (red), and DAPI for host-cell nuclei (blue). The channels for ISH are shown individually in c and d. c 23S rRNA ISH identified Mtb present throughout the granuloma. d Pre-rRNA ISH indicated lower Mtb rRNA synthesis in the caseum compared with the inflammatory rim. e Graphical analysis and statistical testing of the RS ratio by ISH showed that there was greater population heterogeneity in rRNA synthesis in the inflammatory rim (orange) than in the caseum (blue). Components of this raincloud plot are: (1) density plots for the distribution of 164,878 RS ratio values for individual bacilli in a single granuloma on a log10 scale, (2) scatterplots to visualize all points measured, and (3) boxplots to present the range of values in the RS ratio. The center and ends of the box represent the median and first and third quartiles of the RS ratio. The boxplot whiskers represent the maximum and minimum values in each group. f Magnification of high-powered images depicts co-occurrence of rRNA signals within individual bacilli. g Further magnification demonstrates 23S signals distributed in a reticular pattern around a central confluence of pre-rRNA signals. Panels a-d were imaged at ×40; scale bars represent 500 µm. Z-stacked images surrounding this image are provided in Supplementary Fig. 4. Panels f and g were imaged at ×63; scale bars represent 10 and 5 µm, respectively. Panels a-g are results from a single lung granuloma. Replicate results from two additional granulomas from two different mice in separate independent experiments are provided in Supplementary Fig. 2.

As observed in the in vitro experiments, the effects of drugs on rRNA synthesis and bacterial burden in mice were independent (Fig. 4c). Consistent with its negligible bactericidal activity but potent sterilizing activity, pyrazinamide did not reduce CFU (P = 0.99) but did reduce the RS ratio 4.4-fold relative to the pre-treatment control. By contrast, streptomycin decreased CFU significantly (0.5 log10 CFU lung−1, P = 0.003) but reduced the RS ratio only 2.1-fold relative to control, consistent with its potent bactericidal activity but low sterilizing activity. Isoniazid and rifampin 10 mg kg−1 reduced CFU similarly (1.2 and 1.1 log10 CFU lung−1, respectively), but rifampin suppressed the RS ratio to a far greater degree (P < 0.00001). Bedaquiline had the most potent impact on both the RS ratio and CFU, indicating that it both stops rRNA synthesis and results in death. Co-plating revealed minimal acquired drug resistance (Supplementary Fig. 8a).

A quantitative marker of sterilizing activity in the relapsing mouse model. Using the conventional high-dose aerosol BALB/c relapsing mouse model that has historically been the backbone of pre-clinical TB drug and drug regimen evaluation [28], we evaluated the RS ratio as an indicator of sterilizing activity among four regimens with a well-established rank order of sterilizing activity in this model [29-33]. Based on the standard microbiologic relapse outcome, our results confirmed the established rank order of time required for non-relapsing cure, ranging from HRZE (slowest) < PaMZ < BPaL < BPaMZ (fastest) (Fig. 5a and Supplementary Table 2). After 2, 3, and 4 weeks, the most sterilizing regimen, BPaMZ, was clearly distinguishable from the other regimens, suppressing the RS ratio more than the second most potent regimen, BPaL (P < 0.01 at each timepoint). In turn, BPaL suppressed the RS ratio more than the third most potent regimen, PaMZ (P < 0.01 at each timepoint). The regimens with the lowest sterilizing activity (PaMZ and HRZE) were indistinguishable at weeks 2, 3, and 4 (Fig. 5b). The decline in the RS ratio tracked with the duration of therapy such that, for each regimen, the longer the duration of treatment, the lower the RS ratio became (trend test P value <0.01 for all regimens) (BPaL shown in Fig. 5c, other regimens in Supplementary Fig. 10). Importantly, the RS ratio was substantially more sensitive than culture. At the end of treatment, most mice were culture negative, but nearly all had quantifiable RS ratios (Fig. 5a), indicating that the RS ratio can quantify drug effect beyond the point at which mice become culture negative. The RS ratio tracked with the propensity to relapse. For example, after 4 weeks of treatment with BPaL, the RS ratio was partially suppressed (median = 4.0) but, after a 12-week drug holiday, the RS ratio rebounded (median = 24.9) (Fig. 5d) and all mice relapsed.
An additional 4 weeks of BPaL suppressed the RS ratio further (median = 2.2) at the end of treatment, but, after a 12-week drug holiday, the RS ratio remained quantifiable in 12 of 15 (80%) mice and 9 of 15 (60%) mice had microbiologic relapse. The longest BPaL arm (12 weeks) suppressed the end of treatment RS ratio to the lowest level (median = 0.95). After a 12-week drug holiday, the RS ratio was quantifiable in only four mice and no mice relapsed. A similar association between treatment duration, suppression of the RS ratio, and nonrelapsing cure was observed for all regimens ( Supplementary Fig. 10). An additional smaller study that included only HRZE and BPaL identified the same results ( Supplementary Fig. 11). Comparison of RS ratio and CFU results reinforces that they measure orthogonal properties. Consistent with in vitro testing of single drugs, combination regimens rapidly inhibited rRNA synthesis (10-to 100-fold decrease in RS ratio during week 1), well before there was a meaningful decline in CFU burden (Fig. 5b, c). During weeks 1-4, CFU grouped PaMZ with BPaL while the RS ratio grouped PaMZ with HRZE. However, after 20 weeks, PaMZ suppressed the RS ratio significantly more (P = 0.03) than HRZE. PaMZ was also more effective than HRZE in preventing both rebound in the RS ratio and relapse following the completion of treatment. After 14 weeks of HRZE and a 12-week drug holiday, the RS ratio rebounded (median = 11.0) and all mice relapsed. By contrast, after 12 weeks of PaMZ and a 12-week drug holiday, the RS ratio remained suppressed (median = 0.40) and 20% of mice relapsed. Effect of treatment on the RS ratio in human TB. To begin evaluation of the RS ratio as a marker of treatment response in humans, we quantified the RS ratio in serial sputa from 17 Ugandan and 28 Vietnamese patients treated with HRZE for drug-susceptible pulmonary TB (Fig. 6a) (Fig. 6b). Discussion We discovered that drugs and drug regimens that shorten the duration of TB treatment inhibit Mtb rRNA synthesis more than less potent drugs and regimens. The time needed to cure TB is determined primarily by drug activity against residual Mtb populations that survive initial drug exposure 2,4,10,11 . This activity has not previously been directly measurable. By quantifying the impact of drugs on rRNA synthesis rather than enumerating bacterial burden, the RS ratio provides a practical metric of drug activity that may enhance pharmacodynamic monitoring and accelerate development of shorter TB treatment regimens. Historically, the treatment-shortening activity of drugs has been characterized by observing the effects of a drug in a series of animal relapse studies and human clinical trials 2 . The length and expense of human and animal trials has impeded evaluation of candidate regimens. Several features suggest that the RS ratio may accelerate evaluation of drugs and regimens. First, the RS ratio measures a property that is distinct from bacterial burden. Drugs and regimens frequently affect CFU and the RS ratio differently. For example, bedaquiline suppressed the RS ratio within hours in vitro, indicating near-cessation of rRNA synthesis, yet CFU did not decline appreciably for 8-12 days. Similarly, the potent sterilizing agent pyrazinamide suppressed the RS ratio in mice but had no effect on CFU. Conversely, the effect of PaMZ on CFU early in treatment was greater than its effect on the RS ratio. The RS ratio is not a proxy for bacterial burden. 
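The trend test relating treatment duration to the RS ratio is not named in the text; one simple nonparametric choice is Kendall's tau, sketched here with medians that echo the BPaL values quoted above (4.0, 2.2, and 0.95) plus invented early-timepoint values:

```python
import numpy as np
from scipy import stats

weeks  = np.array([2, 3, 4, 8, 12])            # treatment duration
rs_med = np.array([18.0, 9.0, 4.0, 2.2, 0.95]) # median RS ratios (first two invented)

tau, p = stats.kendalltau(weeks, rs_med)       # monotone trend test
print(tau, p)   # tau = -1: the RS ratio falls monotonically with duration
```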
Second, in both mice and humans, there was a dose-response in which higher doses of sterilizing drugs suppressed the RS ratio to a greater degree than lower doses. Finally, the RS ratio correlated with regimen sterilizing activity in the conventional relapsing mouse model. The regimens that cured TB fastest were those that suppressed the RS ratio more rapidly and most profoundly. Collectively, these findings suggest that the RS ratio may provide a needed practical marker of the sterilizing activity of drugs 10,13 . The RS ratio was able to quantify drug effect beyond the point at which all mice were culture negative. Understanding of sterilizing activity has long been hamstrung by the limited sensitivity of culture. Like humans, mice become culture negative before they are cured 28,34 . This is highlighted by our results with HRZE. After 16 weeks of HRZE, all mice were culture negative. Yet, when held for a 12-week drug holiday, 80% of companion mice had microbiologic relapse. By providing sensitive, precise quantitative information on drug effect through the entire course of treatment, the RS ratio opens a window on the critical but hitherto inaccessible late sterilizing phase. This has immediate practical implications for regimen evaluation in pre-clinical animal models. Because it is measured in a relatively small number of animals (3-6 mice) early in treatment, the RS ratio saves time, resources, and animals. By greatly increasing the speed with which large numbers of drug regimens can be ranked in preclinical models, the RS ratio should accelerate selection of regimens for human testing. The central challenge to shortening TB treatment is eliminating the drug-tolerant persister population that withstands the initial rapid killing phase. Genetically drug-susceptible bacterial populations that have survived prolonged drug exposure (such as those studied here) have been defined functionally as drug-tolerant persisters 35 . The physiologic state of persisters is uncertain 35 , with evidence supporting the existence of both replicating 36 and nonreplicating phenotypes 37,38 . The concept that certain persisters may continue replication is based on reports that M. smegmatis sustains ongoing replication during lethal isoniazid exposure 36 while sterilizing drugs, bedaquiline and rifampin, halt Mtb replication 39 . Since it is well-established that rRNA synthesis and bacterial replication are fundamentally coupled 16,17 , our results with the RS ratio seem consistent with drug-specific effects on Mtb replication. Conversely, our findings are not necessarily at odds with the conventional model that persisters are nonreplicating with a low basal level of transcriptional, translational, and metabolic activity 37,38 . This work focuses attention on the down-stream physiologic consequences of drug stress rather than the specific drug mechanism of action. For all but rifampin, the connection between mechanism of action and suppression of rRNA synthesis is indirect. The interaction of a drug with its target molecule initiates a cascade of indirect secondary damage, perturbing other cellular processes. Secondary effects are complex and currently difficult to predict based on drug mechanism alone. We see several non-exclusive possibilities for how inhibition of rRNA synthesis may accelerate time to cure. 
First, when drug stress rapidly and profoundly impairs the ability of a bacterium to synthesize a key macromolecule (rRNA), this may be a signal of injury that is incompatible with pathogenicity or long-term viability. A bacterium that cannot synthesize rRNA is likely unable to remodel its proteome, committing it to a single physiologic program and limiting its fitness to respond dynamically to its environment. A depleted, incapacitated Mtb population may be less capable of withstanding immune stress, or may even elicit a different immune response. Finally, a bacterium that cannot synthesize rRNA will be unable to replicate [16,17]. A drug or regimen that abrogates rRNA synthesis will halt the production of new bacilli during treatment.

Our results suggest several future studies. First, understanding the full range of possible drug effects on rRNA synthesis will require testing emerging new chemical entities with additional mechanisms of action. A limitation of this report is that we did not test drugs that may have sterilizing activity, including moxifloxacin, clofazimine, and linezolid, individually in vitro and in mice. Second, we cannot be certain that the association between the RS ratio and treatment-shortening activity is generalizable to non-rifamycin, non-bedaquiline-based regimens that we have not yet tested. It remains possible that inhibition of RNA or ATP synthesis suppresses the RS ratio in a way that is independent of sterilizing activity. Confirming the RS ratio as a practical surrogate for relapse in animals will require additional relapse studies with diverse regimens in multiple animal models. Finally, while this report provides proof-of-concept data in humans, the value of the sputum RS ratio remains to be determined. Ongoing clinical trials (ClinicalTrials.gov identifier: NCT02410772) are testing the RS ratio as a pharmacodynamic monitoring tool in humans.

In summary, this study has identified a key difference in how sterilizing and non-sterilizing drugs and regimens affect Mtb rRNA synthesis. The RS ratio provides a needed molecular metric of drug activity that is based on a key physiologic property rather than on recapitulation of bacterial burden. The RS ratio may enable more intelligent design and evaluation of candidate regimens, accelerating development of regimens that can cure TB faster.

Methods

In vitro oxygen depletion model. M. tuberculosis H37Rv was grown in 125 ml Erlenmeyer flasks in 50 ml DTA medium (Dubos broth (BD Difco) supplemented with 0.5% bovine serum albumin (Research Products International), 0.05% Tween 80, and 0.75% glucose, pH 6.6), stirring at 200 r.p.m. with 50 × 8 mm stir bars using a Micro-Stir magnetic stirrer (Wheaton) at 37°C until mid-log phase (OD600 0.4). Cultures were diluted to OD600 0.004 in DTA medium in 16 ml volumes in sterile glass 20 × 125 mm tubes. Stopcock grease was applied to the threads of the glass tubes and tubes were sealed with phenolic caps. These cultures were stirred at 200 r.p.m. with 12 × 4.5 mm stir bars using a rotary magnetic tumble stirrer (V&P Scientific) for rapid oxygen depletion [18]. Cell pellets to assess rRNA synthesis ratios were collected as detailed in the Supplementary Information every 12 h starting at day 4, after cultures had begun to become hypoxic. Growth rates were determined based on optical density readings at 600 nm (OD600) every 6 h for the duration of the experiment.

Quantification of the rRNA synthesis ratio via droplet digital PCR.
RNA was reverse transcribed with SuperScript VILO cDNA synthesis kit (Invitrogen) according to the manufacturer's protocol, except that reverse transcription at 42°C was performed for 120 min. Transcript copies were quantified using the QX100 Droplet Digital PCR system (Bio-Rad). Primers and probe sequences and information are in Supplementary Table 3. Reaction were run in duplex, ETS1 with 23S and ITS1 with 23S with ddPCR SuperMix for Probes (no dUTP) (Bio-Rad). All primers were 900 nM final concentration and all probes were 250 nM final concentration. The thermocycling conditions for all ddPCR reactions were: initial denaturation at 95°C for 10 min, 40 cycles of 94°C for 30 s and 60°C for 60 s with a 2°C s −1 ramp rate, and a final hold at 98°C for 10 min. The ratio of ETS1/23S and ITS1/23S was calculated within each duplexed reaction by QX100 Droplet Digital PCR system software (Bio-Rad). RNA sequencing. RNA extracted from in vitro samples was reverse transcribed and prepared for sequencing using the Truseq Stranded Total RNA kit (Illumina), omitting the ribosomal depletion step but otherwise following the manufacturer's protocol. cDNA libraries were sequenced on a NovaSEQ 6000 (Illumina) at the University of Colorado, Anschutz Medical Campus Genomics and Sequencing Core. Sequence quality was evaluated using FastQC and adapters were trimmed using BBDuk (https://sourceforge.net/projects/bbmap/) with kmer = 23 and mink = 11. High-quality sequences were randomly subsampled to 100,000 sequences per sample with BBTools (v 35.85) 40 (https://sourceforge.net/projects/bbmap/) and mapped to M. tuberculosis Erdman ATCC35801 (accession number NC_020559.1) using Bowtie2 (ref. 41 ) with the default parameters, followed by visualization with IGV 42 . Bioinformatics analysis was performed on the Colorado Center for Personalized Medicine High Performance Computing Center at the University of Colorado, Anschutz Medical Campus. Animal efficacy studies. All animal studies were performed at Colorado State University in a certified animal biosafety level III facility. Ethics oversight was provided by the Colorado State University Animal Care and Use Program which is PHS Assured (A3572-01), USDA Registered (84-R-0003), and AAALAC accredited (#000834). The IACUC approved CSU protocol number is 17-7701A. Mice were housed socially (2-5 animals per cage) in a certified ABSL-3 facility in HEPA filter equipped techniplast cages on autoclaved bedding changed every 7-14 days. Mice had access to irradiated chow and water ad libitum. Twelve-hour light/dark cycles were employed and mice were maintained at temperatures between 65 and 75°F with 40-60% humidity. Infection of mice. Aerosol infection of mice with M. tuberculosis Erdman employed a Glas-Col inhalation exposure system 43,44 . Drug delivery and dose. Isoniazid, pyrazinamide, linezolid, and ethambutol were administered by oral gavage in a 0.2 ml volume at 25, 150, 50, and 100 mg kg −1 , respectively. Rifampin, bedaquiline, and pretomanid were administered by oral gavage in 0.2 ml volume at 10 or 30, 5 or 25, and 50 or 100 mg kg −1 , as indicated for each study. Streptomycin was given by subcutaneous injection at 200 mg kg −1 in 0.2 ml volume. In cases where individual mice were administered two oral drugs, each individual oral dose was separated by 1 h. The global standard HRZE regimen was prepared by combining isoniazid, ethambutol, and pyrazinamide in 0.2 ml volume and giving by oral gavage 1 h following delivery of rifampin as a separate oral dose. 
The PaMZ regimen was prepared by combining moxifloxacin and pyrazinamide and delivering~4 h after delivery of pretomanid as a separate oral dose. The BPaL regimen was prepared by combining bedaquiline and pretomanid and delivering~4 h prior to the delivery of linezolid as a separate oral dose. The BPaMZ regimen was prepared by combining moxifloxacin and pyrazinamide and delivering~4 h after delivery of bedaquiline and pretomanid both given as separate oral doses~1 h apart. All treatments were given once daily, 5 days a week (Monday through Friday). C3HeB/FeJ mouse experiments. Six-to 8-week old C3HeB/FeJ (Jackson Laboratories) female mice were exposed to a low-dose aerosol of M. tuberculosis Erdman using 1.5 × 10 6 CFU ml −1 to achieve~50-75 CFU in the lungs of each mouse 23,24 . Treatment was initiated on day 71 at the time when necrotic lesions have fully developed 23 and continued for 4 weeks. Each mouse was individually euthanized by CO 2 narcosis followed by cardiac puncture. Lung lobes were photographed and Type I caseating necrotic lesions of <5 mm were excised and placed into phosphate-buffered saline with 4% paraformaldehyde at 4°C for 3 days prior to further processing. BALB/c mouse chronic TB model. Six-to 8-week-old female pathogen-free BALB/c mice (Charles River Laboratories) were exposed to a low-dose aerosol of M. tuberculosis Erdman-Lux 44 using 2 × 10 6 CFU ml −1 to achieve~71 CFU in the lungs of each mouse. Three mice were individually euthanized by CO 2 narcosis on day 1, day 7, day 25, and day 53 post aerosol infection. Bacterial lung burdens were determined from the left lung lobe. Upper right lung lobes (superior and middle lobes) were flash frozen in liquid nitrogen prior to RNA extraction as described in the Supplementary Information. BALB/c mouse high-dose aerosol infection model. Six to 8-week-old female pathogen-free BALB/c mice (Jackson Laboratories) were exposed to high-dose aerosol of M. tuberculosis Erdman from broth culture (OD 600~0 .8) to achieve deposition of~3.8 log10 CFU in the lungs of each mouse 45,46 . Treatment was initiated on day 11 post aerosol and continued for 4 weeks. Groups of six mice were individually euthanized by CO 2 narcosis on day 11, prior to treatment initiation, and on the last day of treatment, to determine the bacterial loads in lungs. The left and lower right lung lobes (inferior and post-caval lobes) were used for bacterial enumeration. Upper right lung lobes (superior and middle lobes) were flash frozen in liquid nitrogen prior to RNA extraction as detailed in the Supplementary Information. BALB/c mouse relapse model. Six-to 8-week-old female pathogen-free BALB/c mice (Jackson Laboratories) were exposed to high-dose aerosol of M. tuberculosis Erdman from broth culture (OD 600~0 .8) to achieve deposition of~4.3 log 10 CFU in the lungs of each mouse 45,46 . Treatment was initiated on day 11 post aerosol and continued for up to 20 weeks. Groups of 3-6 mice, as indicated, were individually euthanized by CO 2 narcosis on day 11, prior to treatment initiation, and one day following the last day of treatment, to determine the bacterial loads in the lungs. Additional groups of 15 mice each from each treatment group were placed on a 12week drug holiday to allow bacterial relapse 25,[30][31][32]45 . The left and lower right lung lobes (inferior and post-caval lobes) were used for bacterial enumeration. 
Upper right lung lobes (superior and middle lobes) were flash frozen in liquid nitrogen prior to bead beating and RNA extraction as described in the Supplementary Information. Enumeration of CFU from lungs. The number of viable organisms was determined by serial dilutions of homogenates (Precellys Evolution, Bertin) prepared in phosphate-buffered saline plus 10% (w/v) bovine serum albumin from whole lungs (C3HeB/FeJ mice) or indicated lung lobes (BALB/c mice) and plating on 7H11-OADC agar plates containing 0.4% (w/v) activated charcoal to prevent drug carryover. Colonies were enumerated after at least 21 days of incubation at 37°C. For relapse assessments, tissues were homogenized in phosphate-buffered saline and plated in their entirety on 7H11-OADC agar plates without activated charcoal. Single-bacillary ISH. Mouse lung was formaldehyde-fixed and paraffin-embedded and stained by using multiplexed-ISH kit (Advanced Cell Diagnostics) according to the manufacturer's instructions 47 . Specimens were directly placed into 4% PFA upon excision and fixed for 48 h at 4°C before embedding. Next, 2 μm tissue sections were cut from FFPE blocks and mounted onto Superfrost Plus microscope slides (Fisher Scientific) prior to use. Multiplex-ISH was visualized after labeling with fluorescein isothiocyanate and cyanine 3.5 ( 47 ). Whole-slide digital images were acquired at ×40 magnification using the Axio Scan.Z1 slide scanning fluorescence microscope (Zeiss) or at ×63 magnification using the SP8 laser scanning confocal system (Leica). Image analysis was performed using the ilastik machinelearning-based (bio)image analysis (www.ilastik.org) 48 . The ilastik plugin for ImageJ (v. 2.1.0/1.53c) was used to export data from each region of interest in the ilastik HDF5 format. Image analysis statistics. The data tables were exported from ilastik as "*.csv" files for analysis using R software (version 3.5.2, R Foundation for Statistical Computing, www.R-project.org). Background intensity was corrected for each channel by subtracting the minimum 30-pixel neighborhood intensity from the MFI for each object. After correcting for background, the MFIs were analyzed and reported in log 10 . The heterogeneity of signal intensities within the inflammatory rim and the caseum was calculated by the variance. MFI values of each channel based on location were compared using an F-test. Measured MFIs in the two channels were tested for within group variance in both the inflammatory rim and the caseum. A non-parametric Kruskal-Wallis one-way analysis of variance was used to determine dominance in median values in observed differences between ratio values, channels, and locations. Human study subjects. This manuscript includes three human studies. The first was a longitudinal study of TB patients treated under routine care, conducted across eight outpatient clinics in Hanoi, Vietnam, by the US CDC TB Trials Consortium at the UCSF/Vietnam National TB Programme network, entitled "Study 36: A Platform for Assessment of TB Treatment Outcomes An Observational Study of Individuals Treated for Pulmonary Tuberculosis." The second was a longitudinal observational study in Uganda that included 17 adult inpatients (male and female) treated for drug-susceptible TB per local guidelines with the global standard four-drug regimen at standard doses. An analysis of Mtb mRNA in sputum from this cohort has been published 49 . 
The third was a biomarker substudy embedded in the Benin site of the RAFA trial, which enrolled adults living with HIV who were co-infected with drug-susceptible TB. Patients were randomized to a control arm, which was the standard of care at the time (standard antitubercular treatment with 10 mg kg−1 doses of rifampicin and start of ART 8 weeks thereafter), to early start of ART (2 weeks after initiating antituberculosis treatment), or to a high dose of rifampicin in the first 8 weeks of TB treatment (a 50% dose increase, i.e., 15 mg kg−1, with start of ART 8 weeks after initiating antituberculosis treatment). Aspects of the RAFA trial have been published [50]. All participants in the three cohorts (Vietnam, Uganda, and Benin) provided written informed consent for the use of their sputa and clinical information for the purpose of developing novel biomarkers of treatment response. Ethical approval and the supervising institutional review boards are fully described in the Supplementary Information.

Pharmacokinetic sampling in the RAFA trial. Four weeks after initiating treatment, patients were admitted overnight before pharmacokinetic sampling. Five serial blood samples were drawn: pre-dose (~15 min before a dose) and 2, 3, 6, and 10 h post-dose. Blood samples were processed, and plasma was stored immediately at −80°C before transfer on ice to the analytical laboratory (Division of Clinical Pharmacology, University of Cape Town, South Africa). Plasma samples were analyzed using a validated liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay [50].

Pharmacokinetic/pharmacodynamic modeling. The population pharmacokinetic model for rifampin was developed using nonlinear mixed-effects modeling in the software NONMEM (v 7.3; Icon Development Solutions). Absorption of rifampin was described using a first-order absorption model with a delay, implemented as a chain of transit compartments [51]. A one-compartment disposition model was used to describe the distribution pharmacokinetics of rifampin [52]. Allometric scaling was applied to all clearance and volume-of-distribution parameters to account for the effect of body size, using total body weight (TBW), fat-free mass (FFM), or body fat [53]. Between-patient variability in PK parameters (clearance, volume, absorption rate constants) was implemented using a log-normal distribution. The final pharmacokinetic model was validated using internal validation techniques, such as visual predictive checks and non-parametric bootstrapping. Estimates of individual areas under the concentration-time curve (AUC) and maximum serum concentrations (CMAX) were derived from the models by integrating the system of ODEs with the individually estimated primary pharmacokinetic parameters (CL, V, k_a).

The PKPD model was built using a sequential approach. The longitudinal pharmacodynamic biomarker, the rRNA synthesis ratio, was modeled using a linear (intercept-slope) model in which the baseline and the rate of change were estimated from the data. Between-subject variability was implemented on the baseline and slope parameters. A full variance-covariance model was estimated for the baseline-slope parameter distribution. After a baseline model was developed that describes the on-treatment biomarker response as a function of treatment and baseline, we then evaluated rifampin pharmacokinetics for its significance as a covariate impacting the slope (rate of change), using a linear model; a back-of-the-envelope illustration of the exposure metrics involved is sketched below.
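As flagged above, a non-compartmental AUC over the sampling times described (pre-dose and 2, 3, 6, and 10 h) can be computed by the trapezoidal rule. The concentrations here are hypothetical, and the actual analysis derived AUC and CMAX from the fitted NONMEM model rather than from trapezoids:

```python
import numpy as np

t = np.array([0, 2, 3, 6, 10])               # hours post-dose (sampling scheme above)
c = np.array([0.4, 9.8, 11.2, 6.1, 2.3])     # hypothetical rifampin plasma conc. (mg/L)

auc_0_10 = np.trapz(c, t)                    # non-compartmental AUC(0-10 h)
print(auc_0_10, "mg*h/L ; Cmax =", c.max(), "mg/L")
```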
Therefore, the change from baseline in the rRNA synthesis ratio was described in the final model as a function of treatment, baseline, and drug pharmacokinetic predictors (AUC or CMAX). The likelihood ratio test, which compares the −2 log likelihood between two nested models, was used to assess significance. A baseline model was developed first, followed by the addition of AUC or CMAX.

Statistics and reproducibility. The decision on whether a parametric or non-parametric test should be used was based on the Shapiro-Wilk test. All statistical tests were two-tailed unless otherwise noted. For single-drug exposures in vitro, data were evaluated by a Kruskal-Wallis one-way analysis of variance followed by a planned multiple-comparison analysis. For murine studies, data were evaluated by a one-way analysis of variance followed by multiple comparisons using a one-way Tukey test. The association between the rRNA synthesis ratio and rifampin PK was assessed using simple linear regression. For statistical analyses performed on MFI values, the one-way Kolmogorov-Smirnov test of significance comparing the empirical CDFs was performed first, followed by the Kruskal-Wallis test. Differences were considered significant at the 95% level of confidence (P < 0.05). SigmaPlot software (v 11) and R (v 3.2.3) were used for data manipulation, plotting, and post-modeling analysis.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

All raw sequencing data have been deposited in the Sequence Read Archive (SRA) under BioProject accession PRJNA615137. Individual samples have the following BioSample accession numbers: untreated, SAMN14446914; rifampin, SAMN14446915; isoniazid, SAMN14446916; streptomycin, SAMN14446917; ethambutol, SAMN14446918; bedaquiline, SAMN14446919. Data files related to image analysis of C3HeB/FeJ mouse lesions are available at https://github.com/JoshuaVasquezLab/Walter-et-al.2021 (ref. 54). Other source data are provided with this paper in the Source Data file.
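The parametric-versus-nonparametric decision rule described under "Statistics and reproducibility" can be expressed compactly. A sketch with simulated group data; the two-group setting and the 0.05 normality threshold are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.lognormal(size=12)     # hypothetical RS ratios, group A
b = rng.lognormal(size=12)     # hypothetical RS ratios, group B

# Shapiro-Wilk gates the choice of test, as described in the Methods
normal = min(stats.shapiro(a).pvalue, stats.shapiro(b).pvalue) > 0.05
test = stats.ttest_ind if normal else stats.mannwhitneyu
print(test.__name__, test(a, b).pvalue)
```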
2021-05-20T06:16:18.990Z
2021-05-18T00:00:00.000
{ "year": 2021, "sha1": "3f57ca0b4f96d34a2ab4320725ba88ecf77e75d3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-22833-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c9bdcff6da051a2afa634a3294892bcff7437c27", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
58948985
pes2o/s2orc
v3-fos-license
The role of osteomodulin on osteo/odontogenic differentiation in human dental pulp stem cells

Background: Extracellular matrix secretion and odontoblastic differentiation in human dental pulp stem cells (hDPSCs) are the cellular bases for reparative dentinogenesis. Osteomodulin (OMD) is a member of the small leucine-rich proteoglycan family distributed in the extracellular matrix, but little is known about its role in osteo/odontogenic differentiation. The objective of this study was to investigate the role of OMD during osteo/odontoblastic differentiation of hDPSCs.

Methods: hDPSCs were selected using immunomagnetic beads and their capacity for multilineage differentiation was confirmed. OMD knockdown was achieved using short hairpin RNA (shRNA) lentivirus and was confirmed by western blot. Gene expression was measured by real-time qPCR, and osteo/odontoblastic differentiation of hDPSCs was determined by alizarin red S staining.

Results: Compared with uninduced cells, the transcription of OMD was up-regulated 35-fold at the late stage of osteo/odontogenic differentiation. shRNA-mediated gene silencing of OMD decreased the expression of odontoblastic genes such as alkaline phosphatase (ALP), dentin matrix acidic phosphoprotein 1 (DMP1) and dentin sialophosphoprotein (DSPP). In addition, knockdown of OMD attenuated the mineralized nodule formation induced by osteo/odontogenic medium.

Conclusions: These results imply that OMD may play a pivotal role in modulating the osteo/odontoblastic differentiation of hDPSCs.

Background

The dental pulp contains a unique precursor population of mesenchymal stem cells (MSCs) [1]. MSCs are multipotent, highly proliferative, and have the ability to differentiate into odontoblasts/odontoblast-like cells in response to stimuli such as caries or dental trauma [2,3]. The odontoblasts can secrete reactionary/reparative dentine matrix, which underlies the formation of the dentinal bridge. Cultured dental pulp stem cells can also differentiate into odontoblast-like cells and form calcium nodules under certain circumstances in vitro [4,5]. Identification of the factors that regulate these processes is therefore of considerable importance.

Dentinogenesis is highly regulated by the expression of extracellular matrix (ECM) proteins. The dentin contains structural macromolecules and other proteins as extracellular matrix components, including type I collagen, osteonectin, osteopontin and dentin sialoprotein [6,7]. An important family of molecules with regulatory functions is the small leucine-rich proteoglycans (SLRPs), which are extensively involved in dentinal biomineralization [8]. In particular, it has been confirmed that biglycan and decorin are present in the matrices of dentin and implicated in dentinogenesis [9]. Osteomodulin (OMD), also known as osteoadherin, belongs to the SLRPs and was originally isolated as a keratan sulfate proteoglycan from bovine long bone [10]. SLRPs are normally distributed in extracellular matrices, but OMD is the only member specifically restricted to mineralized tissues [11,12]. Not only does OMD have a high affinity for hydroxyapatite crystals via its large and acidic C-terminal domain [10,13], but it can also directly regulate the diameter and alter the shape of type I collagen fibrils [14].
However, the functions of OMD in osteo/odontoblastic differentiation and mineralization have yet to be fully determined, although it has been shown that OMD expression starts in the polarized odontoblasts and increases in the odontoblast cell layer and alveolar bone during early crown formation [15,16]. It was therefore hypothesized that OMD may be positively correlated with osteo/odontogenic differentiation and, accordingly, the purpose of the present study was to investigate the influence of OMD deficiency on the biomineralization of hDPSCs.

Isolation and culture of hDPSCs

Healthy human third molars extracted for orthodontic treatment purposes were obtained from 17- to 20-year-old individuals at the oral surgery clinic of the Ninth People's Hospital affiliated to Shanghai Jiao Tong University School of Medicine. The primary cultured human dental pulp cells were isolated from ten molars using the explant method. The cells were pooled together, and hDPSCs were selected with STRO-1-labelled magnetic beads as described previously [17,18]. Cells were cultured in growth medium (GM): high-glucose Dulbecco's modified Eagle's medium (DMEM; Gibco-BRL, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (Gibco-BRL, Life Technologies, Paisley, UK), 100 U/mL penicillin, and 100 mg/mL streptomycin. The medium was renewed every 2 or 3 days. Cell cultures between the second and fifth passages were used.

Flow cytometric analysis

The cell surface markers present on hDPSCs were detected by flow cytometric analysis [19,20]. Briefly, hDPSCs were incubated with fluorescence-conjugated antibodies against CD73 (phycoerythrin, PE), CD105 (PE), CD166 (PE), CD34 (PE), CD90 (fluorescein isothiocyanate, FITC) and CD45 (FITC) (BD Biosciences, San Jose, CA, USA, and Biolegend, San Diego, CA, USA). Cell suspensions in phosphate-buffered saline (PBS) without antibodies served as controls. The cells were then washed three times with PBS to remove unbound antibodies and finally resuspended in 300 μL PBS. Cells were sorted using a flow cytometer (FACSCalibur; BD Biosciences, Mountain View, CA, USA) and analysed with FlowJo software (Tree Star, San Carlos, CA, USA).

Alizarin red S staining and oil red O staining assay

For osteo/odontogenic differentiation, hDPSCs were subcultured in human mesenchymal stem cell osteogenic differentiation medium (OM) (Cyagen Biosciences, Santa Clara, CA, USA), which contained dexamethasone, L-ascorbic acid and beta-glycerophosphate, for up to 21 days. Cells cultured in GM were kept as a control group. After 3 weeks of differentiation, cells were fixed in 4% paraformaldehyde for 30 min. After being washed with PBS three times, calcium deposition was visualized by staining with alizarin red S solution for 3-5 min. Excess stain was removed by washing with distilled water. To study adipogenesis, hDPSCs were cultured in GM until they reached 100% confluence, after which the medium was changed to adipogenic differentiation medium (AM) (Cyagen Biosciences), which contained insulin, IBMX, rosiglitazone and dexamethasone, according to the manufacturer's instructions. After three to five cycles of induction/maintenance, cells were fixed in 4% paraformaldehyde for 30 min and stained with fresh oil red O solution.

Lentivirus production and transduction

Sequences for constructing shRNA targeting human OMD were obtained from the RNAi Consortium (Broad Institute) [21]. shRNAs against the OMD gene and a non-target control were generated with the pLKO.1 vector (sequences are shown in Table 1).
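For orientation, a TRC-style pLKO.1 hairpin insert is assembled as sense-loop-antisense. The sketch below uses the standard CTCGAG TRC loop, omits the cloning overhangs, and the 21-nt target is a made-up placeholder rather than one of the Table 1 OMD sequences:

```python
def plko1_hairpin(target21):
    """Sense-loop-antisense hairpin body in the style of TRC/pLKO.1 shRNAs.
    CTCGAG is the standard TRC loop; vector overhangs are omitted here."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    antisense = "".join(comp[b] for b in reversed(target21))
    return target21 + "CTCGAG" + antisense

print(plko1_hairpin("GCTACGATCTGAAGGACTATA"))   # hypothetical 21-nt target
```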
9 μg of the ViralPower™ Packaging Mix (Invitrogen, Carlsbad, CA, USA) and 3 μg of the constructed PLKO.1-shOMD or PLKO.1-Ctrl vector were used to co-transfect 6 × 10^6 293FT cells in the presence of 36 μL Lipofectamine™ 2000 (Invitrogen). Lentivirus was harvested from the culture supernatant at 48 h and 72 h after transfection and filtered through a 0.45 μm filter. For infection, hDPSCs cultured to 30-40% confluence were exposed to recombinant lentivirus in the presence of 10 μg/mL polybrene for 24 h. After incubation in GM for another 24 h, cells were treated with 1 μg/mL puromycin for 48 h to generate stable cell lines.

Western blot analysis
Western blot was performed as described previously [22]. Briefly, cells were washed with PBS and harvested with EBC lysis buffer (50 mM Tris-HCl, pH 8.0, 120 mM NaCl, 0.5% Nonidet P-40) supplemented with protease inhibitors (Selleck Chemicals, Houston, TX, USA). Lysates were cleared by centrifugation at 10,000 g at 4°C for 10 min. After protein quantitation with the BCA protein assay, supernatant samples containing 50 μg of protein were subjected to 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis.

RNA isolation and determination
Total RNA was extracted at designated time points using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. Then, 500 ng of extracted RNA was reverse transcribed using the Omniscript Reverse Transcription kit (QIAGEN, Valencia, CA, USA) and the resulting cDNA was diluted to 5 ng/μL. 1 μL of this diluted product was used as a template for real-time quantitative PCR (real-time qPCR) and 0.5 μL for reverse transcription PCR (RT-PCR). Real-time qPCR was performed using the DNA Engine system; primer sequences are listed in Table 1. The mRNA levels of target genes were analysed according to the comparative Cq method [23] and normalized to β-actin.

Statistical analysis
Data were expressed as means ± standard deviation (SD) from at least three independent experiments. The statistical significance of differences was assessed using Student's two-tailed t-test. P < 0.05 indicated a significant difference between groups.

Characterization of the hDPSCs
The hDPSCs possess many in vitro phenotypic characteristics of bone marrow-derived MSCs [24,25], which can be assessed by flow cytometric analysis of cell surface molecules. hDPSCs showed the characteristic pattern of MSC-associated surface markers, including CD73, CD90, CD105 and CD166, and were negative for the hematopoietic stem cell surface markers CD34 and CD45. Isolated cells that highly expressed CD73, CD90, CD105 and CD166 were used for subsequent experiments (Fig. 1a). The hDPSCs retained multilineage differentiation capacity: calcium deposition was confirmed by alizarin red S staining (Fig. 1b) and lipid formation was revealed by oil red O staining (Fig. 1c), indicating differentiation of hDPSCs into osteoblast-like and adipocyte-like cells.

Up-regulation of OMD during osteo/odontogenic differentiation
To investigate the expression pattern of OMD mRNA during osteo/odontogenic differentiation, hDPSCs were cultured in OM for 3 weeks, after which samples were analysed by real-time qPCR and RT-PCR. Compared with the control group in GM, hDPSCs incubated in induction medium showed a significant 35-fold up-regulation of OMD gene expression (Fig. 2).

Knockdown of OMD in hDPSCs
Puromycin treatment for 2 days almost completely eliminated uninfected hDPSCs without affecting the growth rate and morphology of successfully infected hDPSCs.
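As a concrete illustration of the quantification workflow described above (comparative Cq values normalized to β-actin, with group differences assessed by a two-tailed Student's t-test at P < 0.05), the following minimal Python sketch shows how such fold changes and P values could be computed; the Cq values are invented placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical Cq values from triplicate wells (placeholders, not data from this study).
cq = {
    "ctrl_OMD":   np.array([26.1, 26.3, 25.9]),
    "ctrl_actin": np.array([17.0, 17.2, 16.9]),
    "kd_OMD":     np.array([28.4, 28.7, 28.5]),
    "kd_actin":   np.array([17.1, 16.9, 17.0]),
}

# Comparative Cq (2^-ddCq): normalize each group to beta-actin,
# then express the knockdown group relative to the control mean.
dcq_ctrl = cq["ctrl_OMD"] - cq["ctrl_actin"]      # dCq per replicate, control group
dcq_kd   = cq["kd_OMD"]   - cq["kd_actin"]        # dCq per replicate, knockdown group
ddcq     = dcq_kd - dcq_ctrl.mean()               # ddCq relative to control
fold_change = 2.0 ** (-ddcq)                      # relative OMD expression per replicate

# Two-tailed Student's t-test on the dCq values, significance at P < 0.05.
t_stat, p_value = stats.ttest_ind(dcq_ctrl, dcq_kd)
print(f"mean fold change (knockdown vs control): {fold_change.mean():.2f}")
print(f"P = {p_value:.4f} ({'significant' if p_value < 0.05 else 'not significant'})")
```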
hDPSCs infected with lentiviral constructs harbouring OMD shRNA or non-target shRNA survived puromycin treatment and were designated shOMD-hDPSCs and Ctrl-hDPSCs, respectively. The efficiency of shRNA-mediated knockdown was confirmed by real-time qPCR and Western blot without induction. The results showed that OMD mRNA in shOMD-hDPSCs was reduced, with a concomitant decrease in OMD protein levels in GM (Fig. 3a and b).

Inhibition of OMD impairs osteo/odontoblastic differentiation of hDPSCs
Stable gene knockdown was maintained throughout the differentiation period (Fig. 4a). The transcription level of OMD in Ctrl-hDPSCs increased progressively with induction. The results of real-time qPCR showed that the expression of the odontoblast markers DMP1 and DSPP in Ctrl-hDPSCs was substantially up-regulated during 7 days of induction, and ALP mRNA in Ctrl-hDPSCs reached its peak on day 14. As shown in Fig. 4, the mRNA levels of DMP1, DSPP and ALP in shOMD-hDPSCs were significantly lower than those in the control group at each time point. Specifically, DMP1, DSPP and ALP mRNA levels in Ctrl-hDPSCs were at least twice as high as those in shOMD-hDPSCs during the early induction period. After osteo/odontogenic induction for 14 days, DMP1, DSPP and ALP mRNA levels in Ctrl-hDPSCs were at least three times higher than those in shOMD-hDPSCs. In addition, unlike the control groups, shOMD-hDPSCs did not form calcified nodules or differentiate into osteo/odontoblast-like cells after 21 days of culture in osteo/odontogenic medium (Fig. 4e and f).

Discussion
hDPSCs play an essential role in dentinogenesis and dental repair [26]. Their capacity for self-renewal and multilineage differentiation into cell types such as odontoblasts/osteoblasts, adipocytes and neuron-like cells has attracted considerable research interest. hDPSCs have been considered an alternative therapeutic cell source for dental tissue and whole-tooth regeneration [27]. Therefore, identifying the genes that direct hDPSCs toward an osteo/odontogenic fate will help clarify the mechanisms underlying regenerative strategies. To date, multiple signaling molecules and transcription factors, including bone morphogenetic proteins, fibroblast growth factors, Wnt proteins, Hedgehog family members and Cbfa1/Runx2, have been implicated in mediating the differentiation and organization of osteogenic tissues in response to inductive signals [28]. However, other mechanisms remain to be elucidated. To the best of our knowledge, OMD is one of the genes whose expression pattern and function in hDPSC cytodifferentiation have seldom been investigated. In this study, the hDPSC subpopulation expressing the STRO-1 surface marker was used. In accordance with previous reports [29,30], the isolated hDPSCs expressed MSC-specific cell surface antigens such as CD73, CD90, CD105 and CD166, and were negative for the hematopoietic surface markers CD34 and CD45. In our previous study, the expression of STRO-1 was found to decline gradually with continued passaging (data not shown); thus, cell cultures between the second and fifth passages were used. In this study, mineral nodules (considered a late marker of osteo/odontogenic differentiation [31]) were found to increase with the induction period of hDPSCs.
ALP, DMP1 and DSPP (also referred to as osteo/odontogenic differentiation markers [32,33]) were up-regulated in the cells of the control group during the early induction period. DSPP is a pre-proprotein secreted by odontoblasts, and its cleavage products, dentin sialoprotein and dentin phosphoprotein, are found in significant quantities in the extracellular matrix of dentin [34]. DMP1 has been reported to be expressed during the initial stages of mineralized matrix formation in bone and dentin [35]. The expression levels of DSPP and DMP1 reflect the dentinogenic ability of dental pulp cells (DPCs) [36]. In the studies of Lin et al. and Qi et al., the transcription levels of DSPP and DMP1 increased continuously during odontoblastic differentiation of DPCs and reached their highest level after 14 days of induction [37,38], whereas in this study they peaked at day 7. Differences in the induction medium and in the cellular condition of the hDPSCs may explain this discrepancy. Meanwhile, OMD expression was up-regulated at the mRNA level after 21 days of induction. We therefore hypothesized that OMD may regulate the osteo/odontoblastic differentiation of hDPSCs. To test this hypothesis, OMD knockdown hDPSCs were established by infection with a lentiviral construct harboring shRNA targeting OMD. It is worth noting that the OMD transcription level in Ctrl-hDPSCs after induction for 21 days was three times higher than that in Ctrl-hDPSCs on day 0 (Fig. 4a), whereas OMD mRNA in induced, uninfected hDPSCs was around 35 times higher than in uninduced, uninfected hDPSCs (Fig. 2a). This difference may be attributable to the lentiviral infection procedure and its potential influence on gene expression. Nonetheless, OMD knockdown dramatically suppressed the differentiation of hDPSCs into osteo/odontoblasts, as indicated by the low expression of ALP, DMP1 and DSPP and the reduced formation of calcified nodules. Studies exploring the potential mechanisms of OMD have reported that its expression is regulated by the cytokines TGFβ1 and BMP2: TGFβ1 down-regulates OMD, whereas BMP2 up-regulates it [11]. Therefore, OMD may play a role in TGFβ and BMP signaling during osteo/odontogenic differentiation. However, the exact function of OMD during mineralization remains to be fully elucidated.

Conclusions
This study demonstrates that OMD knockdown can inhibit the osteo/odontoblastic differentiation of hDPSCs by suppressing mineralization and the expression of osteo/odontoblast-related genes. OMD may promote the osteo/odontogenic differentiation of hDPSCs. Further investigation is required to elucidate the mechanisms by which OMD regulates the biological characteristics of hDPSCs.
2019-01-23T21:23:06.557Z
2019-01-22T00:00:00.000
{ "year": 2019, "sha1": "10643453a7534a0e7c1e23b4072b9e67c998c618", "oa_license": "CCBY", "oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/s12903-018-0680-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "10643453a7534a0e7c1e23b4072b9e67c998c618", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
49314686
pes2o/s2orc
v3-fos-license
A 16-year odyssey of cardiac sarcoid masquerading as idiopathic premature ventricular contractions and then arrhythmogenic cardiomyopathy Key Teaching Points • In its early phase, cardiac sarcoidosis can present with isolated premature ventricular contractions, which makes the differentiation from benign premature ventricular contraction challenging. • The clinical course of cardiac sarcoidosis can resemble that of arrhythmogenic right ventricular cardiomyopathy (ARVC), evolving over years. • Myocardial scar in cardiac sarcoidosis can evolve with epicardial predominance, which also resembles ARVC. Introduction Cardiomyopathies can initially present with ventricular ectopy, which can be difficult to differentiate from idiopathic premature ventricular contractions (PVCs). Distinguishing cardiac sarcoidosis from arrhythmogenic right ventricular cardiomyopathy (ARVC) also can be challenging. We report the case of a patient who presented with benign PVCs that progressed to multiple recurrent ventricular arrhythmias over 16 years, was diagnosed as having ARVC, and eventually was found to have sarcoidosis. Case report A 44-year-old man with a diagnosis of ARVC was admitted for management of repetitive monomorphic ventricular tachycardia (VT) terminated by antitachycardia pacing from his implantable cardioverter-defibrillator. The patient's past history was remarkable for a diagnosis of idiopathic PVCs 16 years ago ( Figure 1). Over the years, his PVCs had been highly symptomatic. Because the patient was intolerant of multiple medications, including beta-blockers, he underwent repeat electrophysiological studies (EPS) with ablation. Each was followed by symptomatic improvement but subsequent recurrent arrhythmias. Notably, he had no family history of cardiomyopathy or sudden death. The patient exercised routinely, including running, but he was not a competitive athlete. At his third EPS performed 4 years after initial presentation, the right ventricular (RV) voltage map was normal (no areas of electrograms ,1.5-mV bipolar amplitude), and no sustained VT was inducible with programmed stimulation. Ablation targeted 3 different RV PVCs. Cardiac magnetic resonance imaging showed possible thinning of the anterior RV, but no late gadolinium enhancement was observed. Six years later at the fourth EP study, again performed to ablate symptomatic PVCs, inducible sustained monomorphic VT was found, and a small low-voltage (,1.5 mV) area at the RV outflow region was noted. No ablation was performed at this time. Cardiac magnetic resonance imaging showed RV enlargement and severe hypokinesis. Late gadolinium enhancement was observed at the RV base to mid-free wall and mid-inferior wall but not in the LV. An implantable cardioverter-defibrillator was inserted. A positron emission tomography (PET) scan for sarcoid, RV endomyocardial biopsy, and genetic testing for ARVC were unrevealing. The fifth EPS performed 1 year later because of recurrent PVCs and VT revealed no inducible VT. Endocardial and epicardial mapping showed lowvoltage areas (Figure 2, bottom), and substrate-guided ablation of these areas was performed. Symptoms improved, but frequent PVCs led to a sixth procedure 1 year later, targeting multiple morphologies of PVCs. The low-voltage endocardial scar was noted to have extended (Figure 2, middle). 
Symptoms again improved, but occasional symptomatic PVCs still occurred and preceded the development of increasingly frequent sustained VT. At the current presentation, the electrocardiogram (ECG) during sinus rhythm showed slight prolongation of the PQ interval of 210 ms, T-wave inversion in the inferior and precordial leads, and low-voltage QRS complexes (Supplemental Figure 1). The ECG of the current VT is shown in Figure 3. Echocardiography revealed significant dilation and wall-motion abnormalities of the RV, but left ventricular size and function were normal.

EPS was performed under general anesthesia, with mapping and ablation per our previously described methods. 1 Endo- and epicardial mapping were performed using a 3.5-mm irrigated-tip ablation catheter. The RV endocardial electroanatomic voltage map revealed an extensive low-voltage (<1.5 mV bipolar) area extending from the inferior to anterior free wall and inferobasal aspect of the septum (Figure 2, top), which was substantially more prominent than that seen at the ablation procedure 4 years ago. Interestingly, no sustained VT was inducible with up to 4 extrastimuli and burst pacing from the RV apex and RV outflow tract at baseline and during isoproterenol or epinephrine infusion. Therefore, an initial decision was made to start with ablation guided by voltage mapping and pace-mapping. Surprisingly, pace-mapping at the anterior inferior RV reproducibly induced sustained monomorphic VT (Figure 3). Limited mapping during VT with entrainment was consistent with a broad isthmus extending along the anterobasal RV along the tricuspid annulus, defined in part by regions of unexcitable scar that did not capture at 10 mA, 2-ms pulse width, in the mid-anterior RV (gray regions in Figure 2, top). Radiofrequency application in the isthmus region terminated VT. Additional radiofrequency lesions placed between the electrically unexcitable scar and the tricuspid annulus rendered the region unexcitable to pacing at 10 mA, 2 ms. No VT was then inducible by pacing, including burst pacing from the anterior RV. The subtle ECG change with slight prolongation of the PQ interval compared with the previous recording suggested an ARVC diagnosis.

Before the ablation, a PET scan was performed with 18F-fludeoxyglucose (FDG) imaging after a high-fat, no-carbohydrate diet to suppress cardiac glucose uptake and facilitate recognition of cardiac inflammation. The scan revealed multiple areas of 18F-FDG uptake in the RV anterior wall and ventricular septum that were not present 6 years previously and were not attributable to catheter ablation (Supplemental Figure 2). Multiple 18F-FDG-avid lymph nodes were now also present in the mediastinum and paratracheal area. Biopsy of a mediastinal lymph node revealed noncaseating granulomas. A diagnosis of cardiac sarcoidosis was made, and immunosuppressive steroid therapy was initiated. Over 6-month follow-up, the patient has been free from VT.
Discussion This patient is remarkable for a course of RV arrhythmias that evolved over more than 15 years, initially diagnosed as idiopathic RV outflow tract arrhythmias, subsequently diagnosed as ARVC based on 3 major and 2 minor criteria for ARVC, 2 and then finally determined to be cardiac sarcoidosis. Although we cannot absolutely exclude the possibility that his initial PVCs were idiopathic, this seems unlikely given their multiple locations. Although he had undergone multiple ablation procedures, the areas of low-voltage scar that evolved were more extensive than the ablation areas and unlikely were related to the procedures alone. A gradually progressive course of sarcoidosis seems most likely. Cardiomyopathies can initially present with ventricular ectopy, and the ectopy sometimes arises from the outflow septum, making differentiation from idiopathic PVCs challenging. 3 After evidence of multiple RV arrhythmia sites and RV functional abnormalities was noted, a diagnosis of ARVC was made after a PET scan performed for sarcoidosis and endomyocardial biopsy was negative. It is well known that differentiating cardiac sarcoidosis from ARVC can be challenging, and there have been several case reports of sarcoid initially diagnosed as ARVC when RV enlargement was the only structural abnormality identified. [4][5][6][7][8][9] The suspicion of sarcoidosis is increased by the presence of lymphadenopathy, parenchymal pulmonary nodules, and conduction disturbances, but these are often absent, and the diagnosis may not be established until examination of the heart at necropsy or after cardiac transplantation. 7 Our patient had undergone 6 EPS over 15 years, with catheter ablation targeting symptomatic PVCs. During those 6 studies (the last performed 12 years from the start of symptoms), sustained VT was not inducible. Up to the first 4 sessions (10 years from the start of symptoms), endocardial mapping depicted no low-voltage area except for what was interpreted as postablation scar localized to the RV outflow tract. In the fifth and sixth sessions (4 and 5 years ago), endocardial mapping showed the same finding as in previous sessions. In contrast, epicardial mapping depicted a large basal and inferior low-voltage area extending to the apex. Those areas increased in size within 1 year, and the PVCs arising from this region became dominant (Figures 2 and 3). The present ablation procedure was performed 4 years after the last session. Endocardial low-voltage areas were now located opposite the epicardial scars seen at the previous session. It is noteworthy that scar expansion seemed to progress from the epicardium to endocardium over time, as is believed to occur in ARVC. In a previous series of sarcoid VT patients, we noted that the area of RV low-voltage scar was often larger in the epicardium than in the endocardium. 10 This seems to be another feature that can be common to both sarcoid and ARVC. Conclusion We report a case of cardiac sarcoidosis that can mimic idiopathic ventricular arrhythmias and ARVC, and can have a slowly progressive course that eludes diagnosis for years despite advanced imaging. The patient also demonstrated gradual myocardial scar progression in an epicardial to endocardial direction, demonstrating that this feature, commonly seen in ARVC, can also occur in cardiac sarcoid.
2018-06-21T00:24:22.696Z
2018-04-05T00:00:00.000
{ "year": 2018, "sha1": "669c0f7768f32a69cb9e8494e7204fa9d438bc03", "oa_license": "CCBYNCND", "oa_url": "http://www.heartrhythmcasereports.com/article/S2214027117301963/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "669c0f7768f32a69cb9e8494e7204fa9d438bc03", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231592697
pes2o/s2orc
v3-fos-license
Deep Learning Assisted Calibrated Beam Training for Millimeter-Wave Communication Systems Huge overhead of beam training imposes a significant challenge in millimeter-wave (mmWave) wireless communications. To address this issue, in this paper, we propose a wide beam based training approach to calibrate the narrow beam direction according to the channel power leakage. To handle the complex nonlinear properties of the channel power leakage, deep learning is utilized to predict the optimal narrow beam directly. Specifically, three deep learning assisted calibrated beam training schemes are proposed. The first scheme adopts convolution neural network to implement the prediction based on the instantaneous received signals of wide beam training. We also perform the additional narrow beam training based on the predicted probabilities for further beam direction calibrations. However, the first scheme only depends on one wide beam training, which lacks the robustness to noise. To tackle this problem, the second scheme adopts long-short term memory (LSTM) network for tracking the movement of users and calibrating the beam direction according to the received signals of prior beam training, in order to enhance the robustness to noise. To further reduce the overhead of wide beam training, our third scheme, an adaptive beam training strategy, selects partial wide beams to be trained based on the prior received signals. Two criteria, namely, optimal neighboring criterion and maximum probability criterion, are designed for the selection. Furthermore, to handle mobile scenarios, auxiliary LSTM is introduced to calibrate the directions of the selected wide beams more precisely. Simulation results demonstrate that our proposed schemes achieve significantly higher beamforming gain with smaller beam training overhead compared with the conventional and existing deep-learning based counterparts. integrated into both base stations (BSs) and user equipment (UE) [4]. Therefore, large antenna arrays can be equipped at BS and UE sides to implement directional beamforming, such that high pathloss can be compensated by the beamforming gain [5]- [7]. In order to enhance the received power of mmWave signals, beam training has been widely adopted to search for the optimal transmitting and receiving beams with the maximum beamforming gain [8]- [12]. Since the beams are generally selected from the predefined finite-size codebook, the brute-force beam search, which exhaustively sweeps all the transmitting and receiving beam pairs in the codebook, is an optimal beam training scheme [8]. However, this may lead to excessively high training overhead. To address this problem, the two-level beam search scheme based on a hierarchical multi-resolution codebook was proposed in [9], [10], where the first-level search aims to find the optimal wide beam, and the secondlevel search confirms the optimal narrow beam direction in the range of the selected wide beam. Another low-trainingoverhead scheme is the interactive beam search, where the candidate transmitting and receiving beams are swept separately [11], [12]. Nevertheless, the two-level and interactive beam search schemes still bring considerable training overhead. Various schemes based on conventional beam search have been proposed to enhance beam alignment accuracy [13] or to reduce beam training overhead [14]- [16]. An optimized two-stage search algorithm was proposed in [13] to better utilize the fixed training budget and ensure good alignment accuracy. 
The work [14] proposed to calculate the optimal narrow beam based on the ratio of the beamforming gains between the selected wide beam and its neighboring wide beams, but this scheme is sensitive to noise and multipath interference. The study [15] proposed an interactive beam alignment procedure that minimizes BS transmit power subject to throughput constraints. Moreover, an adaptive sequential alignment algorithm based on the hierarchical codebooks was proposed in [16], which selects the most likely beam based on the posterior distribution. However, this algorithm is prone to errors due to under-exploration of the beam space. Recently, deep learning has been broadly adopted to enhance the performance of wireless communications [17], [18]. In order to reduce the overhead of beam training, deep learning was introduced to predict the optimal beam directly [19]- [25]. Specifically, a coordinated beamforming scheme was proposed in [19], which uses convolutional neural network (CNN) to predict the optimal beam from the pilot signals received at multiple BSs. Deep neural network (DNN) was applied to determine mmWave beams based on the low-frequency channel state information (CSI) in [20], [21]. The works [22], [23] proposed to exploit cameras to assist the mmWave beam prediction based on deep learning tools. However, the studies [19]- [23] rely on auxiliary information, which may not be available in most scenarios. By contrast, the works [24], [25] utilized DNN to predict the optimal beam according to the training results of the uniformly sampled beams. However, noise and multipath interference can degrade the prediction accuracy seriously when the directions of the sampled beams are not adjacent to the dominant path. Because of the relative stability of UE movement within a short time, prior channel information can be used to track UE locations and assist beam training [26]- [34]. The extended Kalman filter (EKF) has been widely applied to track the angle of the dominant path [26]- [28], but this method suffers from error propagation according to [30]. In [30], auxiliary beams were adopted for beam tracking, where two beams with perturbations from the former estimated angle are measured to track the angle variation. However, the method cannot be applied to the codebook based scenario, since the directions of auxiliary beams may not be perfectly matched. Differently, the works [31], [32] formulated the time-varying AoA and AoD as discrete Markov processes with known transition probabilities, and proposed beam tracking strategies to maximize the successful tracking probability. The BS handover was further taken into consideration to fight against beam misalignment and blockage in mobile scenarios [33]. Alternatively, longshort term memory (LSTM) network was adopted to infer the optimal mmWave beam at the target BS based on the prior CSI of the source mmWave BS [34]. However, the estimation of mmWave CSI may lead to huge pilot overhead due to large number of antennas. Motivated by the feasibility of estimating the angle of the dominant path based on the channel power leakage in the received signals of mmWave beam training [25], [35], this paper proposes to calibrate the narrow beam direction by utilizing the received signals of wide beam training. Also the prior received signals are used to track the movement of UE, which can further reduce beam misalignment caused by noise. 
Considering the complex nonlinear properties of the channel power leakage, we adopt deep learning to predict the optimal narrow beam directly. More specifically, three deep learning assisted calibrated beam training schemes are proposed. The first scheme leverages CNN to implement the prediction based on the instantaneous received signals of wide beam training. Since the prediction results are expressed as the probability that each candidate narrow beam is the optimal one, the additional narrow beam training according to the predicted probabilities can be performed to further calibrate beam directions. However, this scheme relies only on one wide beam training, which is sensitive to noise. To address this issue, in the second scheme, in order to enhance the robustness to noise, LSTM is utilized to track the movement of UE and calibrate the narrow beam direction according to the received signals of prior beam training. To further reduce the overhead of wide beam training, an adaptive beam training strategy is proposed in the third scheme, where partial wide beams are selected to be trained based on the received signals of prior beam training. Two cri-teria of the wide beam selection, namely, optimal neighboring criterion (ONC) and maximum probability criterion (MPC), are designed, where ONC selects the neighboring wide beams of the predicted optimal beam, while MPC selects the wide beams with the top predicted probabilities. Moreover, since the optimal beam direction may switch in mobile scenarios, auxiliary LSTM is introduced to predict the optimal wide beam corresponding to the current instant in advance for calibrating the directions of the selected wide beams more precisely. Simulation results demonstrate that our proposed schemes achieve significantly higher beamforming gain with smaller beam training overhead compared with the conventional and existing deep-learning based counterparts. The main contributions of this paper can be summarized as follows: • We propose a wide beam based training method to predict the optimal narrow beam, where CNN is applied to implement the prediction. • We propose to enhance the prediction accuracy by using the received signals of prior beam training, where LSTM is applied to extract the UE movement information for further calibrating the predicted beam direction. • We design an adaptive beam training strategy, where two criteria, ONC and MPC, are proposed to select partial wide beams to be trained based on the received signals of prior beam training. Moreover, we propose an auxiliary LSTM to calibrate the directions of the selected wide beams more precisely. The paper is organized as follows. Section II presents the channel model and beam training model. Our three calibrated beam training schemes are detailed in Sections III, IV and V, respectively. Section VI presents the simulation results. Our conclusions are drawn in Section VII. We adopt the following notational conventions. Z denotes the set of integers, N * is the set of positive integers, and C m×n denotes the m × n complex space. Boldface capital and lowercase letters stand for matrices and vectors, respectively, e.g., A and a, while calligraphic capital letters denote sets, e.g., A. The logical AND is denote by ∧, and j = √ −1, while ℜ(·) and ℑ(·) denote the real and imaginary parts of a complex number, respectively. The transpose and conjugate transpose operators are denoted by (·) T and (·) H , respectively, while | · | denotes the magnitude operator. 
The n × n identity matrix is denoted as I n and 0 n is the n-dimensional vector whose elements are all zero, while · 2 and · ∞ denote the 2-norm and infinite norm, respectively. · denotes the order statistics, e.g., for A = {a 1 , a 2 , . . . , a n }, A = {a σ1 , a σ2 , ..., a σn } with a σ1 ≤ a σ2 ≤ ... ≤ a σn . The notationˆon the top of a variable indicates the estimated value, e.g.,p, and mod stands for modulo operator. A. Channel Model Consider the downlink mmWave multiple-input multipleoutput (MIMO) communication system serving single user, where BS and UE are equipped with M Tx and M Rx antennas, respectively. Further assume that a single radio frequency (RF) chain is employed at both BS and UE sides. Since the line-ofsight (LOS) path is typically significant, exploiting the lowattenuation LOS path can efficiently enhance the coverage of mmWave signals [36], [37]. For simplicity, we assume the twodimensional (2D) channel model, where only azimuth angles are considered. We consider the narrowband frequency-flat channel model [34], [38] consisting of the LOS path and C clusters. Specifically, the channel matrix H ∈ C MRx×MTx can be expressed as In this model, the c-th cluster containing L c paths has pathloss ρ c , angle-of-arrival (AoA) θ c and angle-of-departure (AoD) φ c , while α c,l , θ c,l and φ c,l are the complex gain, AoA offset and AoD offset, respectively, corresponding to the l-th path in the c-th cluster. Similarly, the LOS path has pathloss ρ LOS , AoA θ LOS and AoD φ LOS . For convenience, we use H LOS and H NLOS to represent the LOS part and the non-line-of-sight (NLOS) part of the channel matrix, respectively. Furthermore, a Tx ∈ C MTx×1 and a Rx ∈ C MRx×1 denote the antenna response vectors of BS and UE, respectively. We assume that uniform linear arrays (ULAs) are adopted at both BS and UE sides, and thus the two antenna response vectors are expressed respectively as a Rx (θ) = 1 M Rx 1 e j2πdRx sin θ/λ · · · e j2π(MRx−1)dRx sin θ/λ T , where d Tx and d Rx are the antenna spacings at BS and UE, respectively, λ denotes the wavelength, φ and θ denote the corresponding AoD and AoA. For simplicity, we set d Tx = d Rx = λ/2. B. Beam Training Model We assume that phase shifter based analog beamforming is applied, where f ∈ C MTx×1 aligned with the direction γ Tx is denoted as the transmitting beam of BS, and w ∈ C MRx×1 aligned with the direction γ Rx is denoted as the receiving beam of UE. The transmitting and receiving beams are selected from the predefined codebooks F and W, which consist of N Tx and N Rx candidate beams, respectively. Assuming that the discrete Fourier transform (DFT) codebook is utilized [39], the candidate transmitting beam f m , m ∈ {1, 2, ..., N Tx }, and receiving beam w n , n ∈ {1, 2, ..., N Rx }, can be written respectively as f m = 1 M Tx 1 e jπ sin γTx,m · · · e jπ(MTx−1) sin γTx,m T , (4) w n = 1 M Rx 1 e jπ sin γRx,n · · · e jπ(MRx−1) sin γRx,n T , where γ Tx,m and γ Rx,n denote the beam directions of the mth candidate beam at BS side and the n-th candidate beam at UE side, respectively. 
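As a numerical companion to the antenna response vectors and the DFT-style candidate beams defined above, the short sketch below builds half-wavelength ULA steering vectors and a set of uniformly pointed candidate beams, then evaluates each beam's gain toward an example departure angle. The array size, sector width and direction sampling offsets are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def ula_response(angle_rad, num_ant):
    """Half-wavelength ULA steering vector: (1/sqrt(M)) * [1, e^{j*pi*sin(a)}, ...]."""
    n = np.arange(num_ant)
    return np.exp(1j * np.pi * n * np.sin(angle_rad)) / np.sqrt(num_ant)

def dft_codebook(num_ant, num_beams, sector_rad):
    """Candidate beams whose pointing directions uniformly sample the angular sector."""
    directions = (np.linspace(-sector_rad / 2, sector_rad / 2, num_beams, endpoint=False)
                  + sector_rad / (2 * num_beams))
    beams = np.stack([ula_response(d, num_ant) for d in directions])
    return beams, directions

# Illustrative sizes (assumptions, not the paper's Table I values).
M_TX, N_TX, SECTOR = 64, 64, 2 * np.pi / 3
F, dirs = dft_codebook(M_TX, N_TX, SECTOR)

# Beamforming gain of every candidate beam toward an example LOS departure angle:
# when the AoD is off-grid, power leaks into neighbouring beams' side lobes.
phi_los = np.deg2rad(17.3)
gain = np.abs(F.conj() @ ula_response(phi_los, M_TX)) ** 2
print("best beam index:", int(np.argmax(gain)), "gain:", float(gain.max()))
```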
To cover the whole angular spaces of BS Γ Tx and UE Γ Rx , we assume that the transmitting and receiving beam directions are uniformly sampled respectively in − Γ Tx /2, Γ Tx /2 and − Γ Rx /2, Γ Rx /2 [14], i.e., Given the channel matrix H and beam pair {f , w}, the received signal y can be written as where P is the transmit power and x is the transmitted signal with |x| = 1, while n ∈ C MRx×1 denotes the additional white Gaussian noise (AWGN) vector with the noise power σ 2 , i.e., n ∼ CN 0 MRx , σ 2 I MRx . A straightforward scheme of beam training to solve the above optimization is the brute-force beam search, where all the candidate transmitting and receiving beams are swept to find the beam pair with the maximum power of the received signal [8]. However, the scheme requires N Tx N Rx measurements, which leads to excessively huge training overhead. To tackle this problem, the two-level beam search based on a hierarchical multi-resolution codebook can be considered, where the codebook consists of the wide beam codewords in the first level and the narrow beam codewords in the second level [9], [10]. To illustrate this approach, consider obtaining the wide beams by switching on partial antennas [40]. Specifically, M Tx /s Tx antennas are utilized to generate N Tx /s Tx wide beams at BS, where s Tx ∈ N * defines the number of narrow beams within each wide beam. Similarly, the wide beams at UE can be implemented with M Rx /s Rx antennas, where s Rx ∈ N * denotes the number of narrow beams within each wide beam for UE. Therefore, the candidate transmitting wide beam f w,m , m ∈ {1, 2, . . . , N Tx /s Tx }, and receiving wide beam w w,n , n ∈ {1, 2, . . . , N Rx /s Rx }, can be written as where the beam directions of the m-th candidate wide beam at BS side γ w Tx,m and the n-th candidate wide beam at UE side γ w Rx,n can be expressed as Based on the hierarchical multi-resolution codebook, the beam search is divided into two levels. The first-level searches for coarse beam alignment based on the wide beam codebook, given by where H w ∈ C MRx/sRx×MTx/sTx is the sub-channel matrix corresponding to the antennas for wide beam training. The first level search requires N Tx N Rx /s Tx s Rx measurements. Recall that N Tx and N Rx antennas correspond to N Tx and N Rx candidate transmitting and receiving narrow beams of (4) and (5) with the directions of (6) and (7), respectively. The secondlevel search confirms the 'optimal' narrow beam pair in the range of the selected wide beam pair (14), given by The second-level search needs further s Tx s Rx measurements. Hence, the two-level beam search requires N Tx N Rx /s Tx s Rx + s Tx s Rx measurements, which is significantly smaller than that imposed by the brute-force beam search. Another overhead-reducing scheme is the interactive beam search, which selects the beams at BS and UE sides separately [11], [12]. Specifically, with UE antennas set to be the omni-directional pattern, BS sweeps all candidate transmitting beams to find the one with the maximum beamforming gain. Then with this 'optimal' transmitting beam, UE sweeps all candidate receiving beams to find the beam with the maximum beamforming gain. In other words, the 'optimal' beam pair are obtained by solving the following two optimization problems separately: This scheme requires N Tx +N Rx measurements, which is much lower than the brute-force search. A. Motivation As indicated in (16) and (17), the beam search can be implemented at BS and UE separately. 
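The two-level search described above can be prototyped in a few lines. The sketch below assumes a single-antenna user, a purely LOS toy channel, and wide beams formed by activating only the first M_Tx/s_Tx antennas; these are simplifying assumptions intended only to show the measurement-count saving relative to an exhaustive sweep.

```python
import numpy as np

rng = np.random.default_rng(0)

def steer(angle, m):
    """Half-wavelength ULA steering vector with 1/sqrt(m) normalization."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(angle)) / np.sqrt(m)

M_TX, S_TX, SECTOR, SNR_LIN = 64, 4, 2 * np.pi / 3, 10.0
N_TX = M_TX
N_WIDE = N_TX // S_TX

narrow_dirs = np.linspace(-SECTOR/2, SECTOR/2, N_TX, endpoint=False) + SECTOR/(2*N_TX)
wide_dirs   = np.linspace(-SECTOR/2, SECTOR/2, N_WIDE, endpoint=False) + SECTOR/(2*N_WIDE)

phi_los = rng.uniform(-SECTOR/2, SECTOR/2)          # unknown AoD of the LOS path
h = np.sqrt(M_TX) * steer(phi_los, M_TX).conj()     # toy LOS-only channel (row form)

def measure(f):
    """Received power of one beam measurement with additive complex Gaussian noise."""
    noise = (rng.normal() + 1j * rng.normal()) / np.sqrt(2 * SNR_LIN)
    return abs(h @ f + noise) ** 2

# Level 1: sweep wide beams built on the first M_TX // S_TX antennas.
wide_power = []
for d in wide_dirs:
    f_w = np.zeros(M_TX, complex)
    f_w[: M_TX // S_TX] = steer(d, M_TX // S_TX)
    wide_power.append(measure(f_w))
best_wide = int(np.argmax(wide_power))

# Level 2: sweep only the S_TX narrow beams covered by the selected wide beam.
cand = range(best_wide * S_TX, (best_wide + 1) * S_TX)
best_narrow = max(cand, key=lambda m: measure(steer(narrow_dirs[m], M_TX)))
print(f"true AoD {np.degrees(phi_los):.1f} deg -> beam {best_narrow}, "
      f"{N_WIDE + S_TX} measurements vs {N_TX} exhaustive")
```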
For simplicity, we investigate the selection of the transmitting beams at BS side, where the single-antenna UE is assumed and thus the receiving beam w is omitted. Since mmWaves have weak penetration ability and significant reflecting power loss, the power of the LOS path is considerably higher than its NLOS counterparts. Hence the LOS path is dominant in mmWave channels [41], [42]. To achieve the maximum beamforming gain, the transmitting beam direction γ Tx should be aligned with the AoD of the LOS path φ LOS , while other NLOS paths can be treated as the noise. Specifically, we can rewrite the received signal model (8) as where the equivalent noise n eq = √ where q(φ LOS ) = a H Tx (φ LOS )f reflects the alignment degree between γ Tx and φ LOS , which determines the beamforming gain. Since the number of beam directions is limited under the on-grid assumption, the AoD of the LOS path φ LOS may not be perfectly aligned, leading to the quantization error [32]. This error causes the channel power leakage in the received signals of beam training [35]. Specifically, assuming that the m-th candidate transmitting beam is applied, then q m (φ LOS ) = a H Tx (φ LOS )f m is given by where φ ∆ m = sin γ Tx,m − sin φ LOS . If φ LOS locates in the side lobe of the m-th candidate beam, i.e., |φ which indicates that the power of the LOS path leaks to the m-th candidate beam. An example of the channel power leakage is illustrated in Fig. 1, where it can be seen that the relative relations of the T are decided by φ LOS . Further assume that the transmitted signal x is fixed and the equivalent noise n eq is omitted. Then the relative relations among the elements of q(φ LOS ) are reflected in the received signals, which provides the feasibility of estimating φ LOS based on the received signals of beam training. B. Problem Formulation To reduce the beam training overhead, we propose to train a small number of candidate beams and calibrate the beam direction according to the received signals. How to find properly trained beams that achieve high accuracy under the given training overhead is crucial. Intuitively, a straightforward way is to uniformly sample partial beams [24], [25], but its performance may degrade significantly due to the low signalto-noise ratio (SNR) when the AoD of the LOS path φ LOS does not locate in the main lobe of any sampled beam. Motivated by the two-level beam search with low training overhead where the wide beam codebook can cover the whole angular space, we propose to measure the received signals of wide beams for calibrating the beam direction. Different from [15] and [16], the property of the channel power leakage is utilized to estimate the accurate AoD in our proposed approach. Specifically, the calibrated beam training scheme based on the wide beam codebook is proposed. For convenience, we define the received signal of the m-th candidate wide beam as y w,m and concatenate the received signals of all the wide beams into the received signal vector y w = y w,1 y w,2 · · · y w,NTx/sTx T . Since the narrow beam codebook enjoys higher angular resolution, the proposed calibrated beam training scheme aims to predict the index of the optimal narrow beam at BS side m ⋆ based on the received signal vector of wide beams y w . Because the number of candidate narrow beams is limited, the prediction can be formulated as a multi-classification task, where each classified category corresponds to one candidate narrow beam. 
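Before the classifier is formalized below, the following sketch illustrates how one (input, label) training pair for such a predictor could be generated: the noisy wide-beam received vector serves as the feature, and the index of the truly optimal narrow beam serves as the class label. The LOS-only channel, array sizes and SNR are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M_TX, S_TX, SECTOR, SNR = 64, 4, 2 * np.pi / 3, 10.0

def steer(angle, m):
    """Half-wavelength ULA steering vector with 1/sqrt(m) normalization."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(angle)) / np.sqrt(m)

narrow_dirs = np.linspace(-SECTOR/2, SECTOR/2, M_TX, endpoint=False) + SECTOR/(2*M_TX)
wide_dirs   = np.linspace(-SECTOR/2, SECTOR/2, M_TX//S_TX, endpoint=False) + SECTOR*S_TX/(2*M_TX)

def training_sample():
    """One (y_w, label) pair: noisy wide-beam responses and the best narrow-beam index."""
    phi = rng.uniform(-SECTOR/2, SECTOR/2)                 # LOS AoD for this sample
    a = steer(phi, M_TX)
    # Leaked power pattern across wide beams (subarray of M_TX // S_TX antennas).
    y_w = np.array([np.vdot(steer(d, M_TX//S_TX), a[:M_TX//S_TX]) for d in wide_dirs])
    y_w += (rng.normal(size=y_w.shape) + 1j*rng.normal(size=y_w.shape)) / np.sqrt(2*SNR)
    # Classification label: the narrow beam with the largest noiseless beamforming gain.
    label = int(np.argmax([abs(np.vdot(steer(d, M_TX), a)) for d in narrow_dirs]))
    return y_w, label

y_w, label = training_sample()
print("strongest wide beam:", int(np.argmax(abs(y_w))), "-> optimal narrow beam:", label)
```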
Mathematically, the prediction model can be represented by the classification function f 1 (·) as However, it is difficult to implement this prediction by conventional estimation methods for two reasons. First, the relationship between y w and φ LOS is highly nonlinear, and second, the distribution of the equivalent noise n eq is difficult to acquire since the NLOS paths vary with the propagation environment. These two reasons make the estimation too complicated by a conventional means. Consequently, deep learning with strong ability to learn complex nonlinear relations is utilized to implement the prediction [43]. Besides, we propose to perform the prediction at BS side, which has sufficient computational capability to ensure low prediction delay. Our proposed scheme consists of two stages, training and predicting. In the training stage, training data are collected to train the deep learning model, where each sample comprises a received signal vector as the model input and the index of the corresponding optimal narrow beam as the classification label, which can be obtained by conventional beam training schemes and fed back to BS. Because of the similar directional and power properties between uplink and downlink channels, the feedback overhead can be reduced by performing uplink beam training, where the received signal vector at BS side is used as the prediction input [44], [45]. After the model is well-trained with sufficient data, it switches to the prediction stage. In this stage, BS and UE only perform the wide beam search, and the corresponding received signals are leveraged to predict the optimal narrow beam by the well-trained model. Thus, the narrow beam search is avoided and the overhead of beam training is reduced considerably. It is worth emphasizing that our scheme can be extended to various application scenarios. First, the scheme can be utilized to predict the optimal receiving narrow beam at the UE side with multiple antennas, since w H a Rx (θ LOS ) can be analyzed in a similar manner to (20). The scheme can be extended to the wideband multicarrier case, since the channel power leakage occurs in the received signals on subcarriers. The scheme can also be adopted in the multi-user scenario, where the calibration of beam directions is performed for each user separately. Our proposed scheme still works well in the NLOS scenario with one dominant cluster, e.g., the reconfigurable intelligent surface (RIS)-assisted scenario [46], [47], where the beam direction is aligned with the AoD of this cluster while other clusters are treated as the noise. C. Model Design CNN is adopted to implement the prediction due to its outstanding performance in classification tasks [48]. The proposed CNN based model is depicted in Fig. 2, which can be divided into three parts, the preprocessing module, the convolution module and the output module. Preprocessing module: Since the received signal vector y w is complex-valued with large dynamic ranges, which cannot be fed to the CNN directly, the preprocessing module firstly normalizes y w by the maximum amplitude of its elements, which can be written as The normalized received signal vector y N w = ℜ y N w + jℑ y N w is divided into the two real-valued feature channels of ℜ y N w and ℑ y N w , which are fed to the following convolution module. layer is followed by the ReLU activation layer to provide nonlinear fitting ability. 
In order to avoid the overwhelmingly complex model, the pooling layer is introduced after the final ReLU activation layer, where each feature channel is downsampled to be a scalar. Output module: To predict the optimal narrow beam from all the candidate narrow beams, the fully-connected (FC) layer is introduced after the pooling layer to implement the transformation from the extracted features to the candidate narrow beams, followed by a softmax activation layer for normalizing the outputs into probabilities, which can be written aŝ wherep m is the predicted probability that the m-th candidate narrow beam is the optimal one, and v is the output vector of the pooling layer, while u m and b m are the weight vector and bias of the FC layer corresponding to the m-th output. Finally, the narrow beam with the maximum predicted probability is selected, i.e.,m ⋆ = arg max m∈{1,2,...,NTx}p m . The predicted probabilities provide the qualities of beams, and a beam with larger probability is predicted to enjoy higher beamforming gain over other beams with smaller probabilities. Therefore, the additional narrow beam training according to the predicted probabilities can be performed to further calibrate the beam directions. Specifically, the K n narrow beams with the top predicted probabilities are trained, whose indices L n are specified by p σ1 ,p σ2 , · · · ,p σN Tx = p 1 ,p 2 , · · · ,p NTx , Let y m denote the received signal corresponding to the m-th candidate narrow beam. The narrow beam with the maximum power of the received signal is chosen as the optimal one, i.e., Obviously, increasing K n can enhance the beamforming gain at the cost of imposing higher training overhead. Cross entropy loss is an evaluation metric widely used in classification tasks, which is utilized to train our proposed model. Mathematically, it can be expressed as where p m = 1 if the m-th candidate narrow beam is the actual optimal beam. Otherwise p m = 0. A. Motivation Although the scheme proposed in Section III is capable of reducing the overhead of beam training, the prediction depends on the received signals of only one wide beam training, which lacks robustness to noise. To address this problem, by exploiting the stability of UE movement within a short time, we can utilize prior information to track the UE movement and calculate the AoD of the LOS path φ LOS based on the estimated UE location, such that the beam misalignment caused by noise can be calibrated. Beam training is periodically performed in mmWave communication systems, where typical training periods are smaller than 160 ms [49]. Because the received signals of beam training are the results of the interaction between the transmitted signals and the propagation environment around BS and UE, these signals manifest an RF signature of the UE location [19], [50]. Consequently, prior received signals of beam training can be leveraged to track the UE movement and calibrate the beam direction without additional beam training overhead. B. Problem Formulation We now formulate the calibrated beam training scheme based on the received signals of prior beam training. Assume that beam training is performed periodically, and denote the received signals of the t-th wide beam training as y w,t . To predict the optimal narrow beam corresponding to the t-th wide beam training m ⋆ t , the received signals of both prior wide beam training y w,1 , y w,2 , · · · , y w,t−1 and current wide beam training y w,t are jointly utilized. 
The prediction can be formulated as a multi-class classification task with the classification function f 2 (·), i.e., m ⋆ t = f 2 y w,1 , y w,2 , · · · , y w,t , m ⋆ t ∈ {1, 2, · · · , N Tx }. (29) Since the AoD of the LOS path φ LOS varies with the UE movement nonlinearly, we adopt deep learning model to implement the prediction. Different from the model deployed in Section III, in order to extract the UE movement features, the received signals of wide beam training and corresponding optimal narrow beam indices are packed in time order for UE, which forms a training sample. C. Model Design LSTM is used as the prediction model due to its excellent capability in temporal sequence learning [51]. The basic structure of LSTM is shown in Fig. 3(a), where the input of the current time slot x t together with the cell state and output of the previous time slot {c t−1 , h t−1 } are jointly fed to the LSTM at the t-th time slot, so that LSTM can learn the features from prior inputs. The proposed LSTM based model is depicted in Fig. 3(b), which reuses partial structures in the CNN based model of Section III. Once the t-th wide beam training is performed, the corresponding received signals are firstly fed to the preprocessing and convolution modules to extract the preliminary features related to y w,t . Next, the LSTM module further calibrates the narrow beam direction based on the received signals of the current and prior beam trainings. Finally, the output module provides corresponding predicted probabilities p 1,t ,p 2,t , · · · ,p NTx,t , and the narrow beam with the maximum probability is selected as the predicted optimal beam, whose index is denoted asm ⋆ t . Cross entropy loss is also utilized to train the model, where the loss of one training sample is calculated as the average loss of all the narrow beam predictions for the UE. A. Motivation The schemes proposed in Sections III and IV share one disadvantage that wide beam training still imposes considerable overhead. As illustrated in Fig. 1, the leaked power of the beams far from the AoD of the LOS path φ LOS is small, where effective information is difficult to extract from the corresponding received signals due to low SNRs. Therefore, we can train partial wide beams with high SNRs and use the corresponding received signals to predict the optimal narrow beam, such that the training overhead can be further reduced at the expense of slight degradation in beamforming gain. To find the high-SNR wide beams, the stability of UE movement again suggests that φ LOS can be estimated from the received signals of prior beam training. Thus, we can select the wide beams to be trained based on the prior received signals adaptively. B. Basic Scheme Accordingly, we propose the adaptive calibrated beam training scheme, where partial wide beams are selected to be trained based on the received signals of prior beam training. To determine the initial AoD of the LOS path φ LOS from the whole angular space, one full wide beam training is performed firstly, where the received signals of all the candidate wide beams are measured. Afterward, only partial wide beams need training, where the corresponding received signals are utilized to predict the optimal narrow beam index. For the convenience of analysis, we focus on the beam selection for the t-th wide beam training with t > 1, and the corresponding AoD of the LOS path is denoted by φ LOS,t . Let K be the number of the wide beams to be trained, and the corresponding indices are denoted as L w,t . 
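Before turning to the two selection criteria, here is a minimal sketch of the LSTM-assisted predictor of Section IV: a small convolutional feature extractor is applied to each wide-beam received vector, and an LSTM carries the extracted features across successive beam trainings so that earlier measurements inform the current prediction, in the spirit of Fig. 3(b); the loss averages the cross entropy over all instants, as described above. Dimensions and layer sizes are again placeholder assumptions.

```python
import torch
import torch.nn as nn

class SeqBeamPredictor(nn.Module):
    """Per-instant conv features from each wide-beam vector, tracked over time by an LSTM."""
    def __init__(self, n_wide: int = 16, n_narrow: int = 64, feat: int = 32, hidden: int = 64):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Conv1d(2, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # (batch*T, feat)
        )
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_narrow)

    def forward(self, y_seq: torch.Tensor) -> torch.Tensor:
        # y_seq: (batch, T, n_wide) complex received vectors of successive beam trainings.
        b, t, n = y_seq.shape
        y = y_seq / y_seq.abs().amax(dim=-1, keepdim=True)
        x = torch.stack([y.real, y.imag], dim=2).reshape(b * t, 2, n)
        feats = self.feature(x).reshape(b, t, -1)
        out, _ = self.lstm(feats)                    # hidden state carries UE movement info
        return self.head(out)                        # (batch, T, n_narrow) logits per instant

model = SeqBeamPredictor()
y_seq = torch.randn(4, 6, 16, dtype=torch.cfloat)    # 6 consecutive wide beam trainings
labels = torch.randint(0, 64, (4, 6))                # optimal narrow beam at each instant
logits = model(y_seq)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 64), labels.reshape(-1))  # average over instants
loss.backward()
```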
Two criteria are proposed to select the wide beams with high SNRs, which are the optimal neighboring criterion (ONC) and maximum probability criterion (MPC), respectively. ONC: As indicated in Fig. 1, the wide beam adjacent to the LOS path is more likely to enjoy high received power. Thus ONC aims to select the wide beams whose directions are the nearest to φ LOS,t . Unfortunately, φ LOS,t cannot be accurately obtained. To find a proper approximation of φ LOS,t , it is noticed that the UE location at the t-th wide beam training is around the location corresponding to the (t − 1)-th wide beam training, and consequently we propose to use the direction of the previous predicted optimal narrow beam γ Tx,m ⋆ t−1 to approximate φ LOS,t . Mathematically, the ONC based beam selection is formulated as The direction difference γ w Tx,m − γ Tx,m ⋆ t−1 modulo π as expressed in (30) MPC: MPC is based on the property that the predicted probabilities reflect beam qualities, and it selects the wide beams with the top predicted probabilities in the (t − 1)-th beam prediction. However, the prediction results only provide the probabilities of narrow beams instead of wide beams. To obtain the approximation of the predicted probability for the m-th wide beamp wap m,t , the predicted probabilities of all the narrow beams within the m-th wide beam are added together. Therefore, the MPC based beam selection is mathematically formulated aŝ Once the t-th partial wide beam training is performed, the corresponding received signal vector y wp,t = y wp,t [1] · · · y wp,t [N Tx /s Tx ] T ∈ C (NTx/sTx)×1 can be obtained according to ∀m ∈ {1, 2, · · · , N Tx /s Tx }, where y w m,t is the received signal of the m-th candidate wide beam at the t-th wide beam training. Note that the format of y wp,t is similar to the received signal vector of full wide beam training y w,t , and hence the received signal vectors of both the full and partial wide beam training can be processed by the same model. Therefore, the prediction model represented by the underlying multiclassification function f 3 (·) is formulated as The proposed adaptive calibrated beam training model is illustrated in Fig. 4. The prediction model structures and loss function are the same as their counterparts in Section IV, but the model deployment is different from the previous case. Specifically, in the training stage, in order to learn to predict the optimal narrow beam from the partial received signals, the model simulates partial wide beam training, i.e., only the received signals of the selected wide beams are used as the model input. In the predicting stage, the proposed scheme only trains the selected wide beams and uses corresponding received signals to predict the optimal narrow beam, so that the overhead of wide beam training is significantly reduced. C. Enhanced Adaptive Calibrated Beam Training with Auxiliary LSTM The proposed scheme of Subsection V-B selects the wide beams to be trained according to the results of the previous beam prediction. Therefore, the prediction may become outdated if the AoD of the LOS path φ LOS varies in mobile scenarios. To track the varying AoD of the LOS path φ LOS , we can utilize the received signals of previous beam training to estimate the current UE location in advance, such that the indices of the selected wide beams to be trained can be calibrated to further enhance received SNRs. Specifically, we propose the enhanced adaptive calibrated beam training scheme. 
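The two selection rules just introduced can be written compactly. The sketch below implements ONC as a nearest-direction search using the modulo-π direction difference mentioned above, and MPC as an aggregation of the predicted narrow-beam probabilities within each wide beam; the beam counts and the random probabilities are illustrative stand-ins rather than outputs of a trained model.

```python
import numpy as np

def onc_select(prev_best_narrow_dir, wide_dirs, k):
    """ONC: the k wide beams whose directions are nearest (modulo pi) to the direction
    of the previously predicted optimal narrow beam."""
    diff = np.abs(wide_dirs - prev_best_narrow_dir)
    diff = np.minimum(diff, np.pi - diff)            # wrap-around direction difference
    return np.argsort(diff)[:k]

def mpc_select(narrow_probs, s_tx, k):
    """MPC: sum narrow-beam probabilities within each wide beam, take the top k wide beams."""
    wide_probs = narrow_probs.reshape(-1, s_tx).sum(axis=1)
    return np.argsort(wide_probs)[::-1][:k]

# Toy example: 64 narrow beams grouped into 16 wide beams of s_tx = 4 each.
rng = np.random.default_rng(2)
sector = 2 * np.pi / 3
wide_dirs = np.linspace(-sector/2, sector/2, 16, endpoint=False) + sector/32
probs = rng.dirichlet(np.ones(64))                   # stand-in for predicted probabilities
prev_dir = wide_dirs[5] + 0.01                       # previous predicted narrow-beam direction

print("ONC picks wide beams:", onc_select(prev_dir, wide_dirs, k=5))
print("MPC picks wide beams:", mpc_select(probs, s_tx=4, k=5))
```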
To find the wide beams with high SNRs corresponding to the t-th wide beam training, auxiliary LSTM is introduced to predict the t-th optimal wide beam index m ⋆ w,t in advance based on the received signals of prior wide beam training {y w,1 , y wp,2 , · · · , y wp,t−1 }, which can be expressed as where f au (·) denotes the above multi-classification function for the wide beam prediction. Similarly, the output of the model is expressed as the predicted wide beam probabilities {p w 1,t ,p w 2,t , · · · ,p w NTx/sTx,t }, where the wide beam with the maximum predicted probability is selected as the optimal wide beamm ⋆ w,t . In particular, ONC selects the wide beams whose directions are the nearest to the predicted optimal wide beam, and (30) becomes On the other hand, MPC selects the wide beams with the top predicted probabilities, and (34) is rewritten as The enhanced adaptive calibrated beam training model is depicted in Fig. 5, where the LSTM module and the proposed auxiliary LSTM module share the same preprocessing and convolution modules to reduce the model overhead, and the wide beam output module is used to obtain the predicted probabilities of the wide beam prediction. Auxiliary LSTM does not require to collect extra training data in the training stage. Once the t-th wide beam training is performed, the wide beam index with the maximum power of the received signal is used as the classification label of the wide beam prediction. Similarly, cross entropy loss is used as the loss function in wide beam predictions. auxiliary LSTM, the losses of both narrow beam predictions loss n and wide beam predictions loss w are combined with the weight coefficient µ, which can be expressed as loss = loss n + µloss w . A. Simulation System Setup We consider a mmWave wireless communication system with LOS serving one single-antenna user, and the mobile scenario is assumed. Unless otherwise stated, UE performs the rectilinear motion with uniformly randomly distributed speed v UE ∈ [10, 50] m/s and acceleration a UE ∈ [−8, 8] m/s 2 , and the motion direction is randomly generated in [0, 2π]. In order to simulate the channel variations with UE movement, we apply the COST 2100 channel model [52], [53], which defines several groups of far scatterers in the space and each group corresponds to one NLOS cluster together with a visible region, i.e., the area where the cluster exists. Based on the locations of BS, UE and far scatterers, the channel matrix H can be generated by (1). The default parameters of the simulated mmWave communication system are listed in Table I. The power of the AWGN σ 2 is calculated as (−174 + 10 log 10 W + N F ) dBm, where the noise factor N F = 6 dB. Moreover, the pathloss PL is obtained as PL = (26 log 10 d + 20 log 10 f c − 147.56) dB [53], where d denotes the propagation distance. For the proposed deep learning models, the detailed structures and parameters are specified in Table II, where f i and f o denote the numbers of input feature channels and output feature channels, respectively. The parameters in convolution layers (p 1 , p 2 , p 3 ) represent the kernel size, sampling stride and zero-padding size, respectively. To accelerate model training convergence, batch normalization (BN) is applied in the convolution module, which transforms the processed data to the standard distribution with mean 0 and variance 1 [54]. 
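The noise-power and pathloss expressions quoted in the simulation setup above translate directly into code; the sketch below reproduces that small link-budget calculation, with the bandwidth, carrier frequency, distance and transmit power chosen as illustrative values rather than the entries of Table I.

```python
import math

def noise_power_dbm(bandwidth_hz: float, noise_figure_db: float = 6.0) -> float:
    """Thermal noise power: -174 + 10*log10(W) + NF (dBm)."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + noise_figure_db

def pathloss_db(distance_m: float, carrier_hz: float) -> float:
    """PL = 26*log10(d) + 20*log10(fc) - 147.56 (dB), as quoted from [53]."""
    return 26.0 * math.log10(distance_m) + 20.0 * math.log10(carrier_hz) - 147.56

# Illustrative numbers (assumed, not taken from Table I).
W, FC, D, P_TX_DBM = 100e6, 28e9, 50.0, 30.0
snr_db = P_TX_DBM - pathloss_db(D, FC) - noise_power_dbm(W)
print(f"noise power {noise_power_dbm(W):.1f} dBm, pathloss {pathloss_db(D, FC):.1f} dB, "
      f"pre-beamforming SNR {snr_db:.1f} dB")
```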
For the proposed deep learning models, the detailed structures and parameters are specified in Table II, where f_i and f_o denote the numbers of input and output feature channels, respectively. The parameters in the convolution layers (p_1, p_2, p_3) represent the kernel size, sampling stride, and zero-padding size, respectively. To accelerate model training convergence, batch normalization (BN) is applied in the convolution module, which transforms the processed data to the standard distribution with mean 0 and variance 1 [54]. To enhance the model generalization ability, the LSTM layers and FC layers exploit the dropout strategy, randomly abandoning part of the neurons in the training stage to prevent overfitting [55]. We construct a dataset with 20,480 samples, where 80% and 20% of the dataset are used as the training set and the validation set, respectively. The model is trained for 80 epochs, where the Adam optimizer based on the back propagation algorithm is used to optimize the model parameters [56]. Three metrics specified below are utilized for performance evaluation. 1) The cross entropy loss of the narrow beam prediction loss_n defined in (28), where the subscript denotes narrow beam prediction. 2) The normalized beamforming gain G_N defined as

\( G_N = \frac{\big\|\mathbf{H}\mathbf{f}_{\hat{m}^{\star}}\big\|^{2}}{\big\|\mathbf{H}\mathbf{f}_{m^{\star}}\big\|^{2}}, \)

where \(\mathbf{f}_{m^{\star}}\) and \(\mathbf{f}_{\hat{m}^{\star}}\) are the actual optimal narrow beam and the predicted optimal narrow beam, respectively. 3) The effective spectral efficiency E [16], [33] defined as

\( E = \Big(1-\frac{T_{\mathrm{tra}}}{T_{\mathrm{tot}}}\Big)\log_{2}\big(1+\mathrm{SNR}\big), \)

where T_tot is the total time of a communication session, and T_tra is the beam training time. Typically, beam training is performed periodically with the period τ. Hence, T_tra equals the product of T_tot/τ, the number of trained beams required in each beam training, and the duration of one beam measurement t_s. In the simulation, we assume that t_s = 0.1 ms [33]. The average result over the entire validation set and 5 training runs is used as the evaluation metric value. The source code of our simulations can be found in [57].
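A small numpy sketch of the two beamforming metrics just defined follows. The channel vector and codebook are random stand-ins, and the SNR term in the effective-spectral-efficiency function follows the reconstructed definition above rather than the paper's exact equation.

```python
import numpy as np

def normalized_gain(h, codebook, m_opt, m_pred):
    """G_N = |h^H f_mpred|^2 / |h^H f_mopt|^2 for a single-antenna user."""
    gain = lambda m: np.abs(h.conj() @ codebook[m]) ** 2
    return gain(m_pred) / gain(m_opt)

def effective_se(snr_linear, period_ms, n_beams, t_s_ms=0.1, total_ms=1000.0):
    """E = (1 - T_tra/T_tot) * log2(1 + SNR),
    with T_tra = (T_tot / period) * n_beams * t_s."""
    t_tra = (total_ms / period_ms) * n_beams * t_s_ms
    return (1.0 - t_tra / total_ms) * np.log2(1.0 + snr_linear)

# Toy example: 64 narrow beams over a random 32-antenna channel.
rng = np.random.default_rng(0)
h = rng.standard_normal(32) + 1j * rng.standard_normal(32)
codebook = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
print(normalized_gain(h, codebook, m_opt=3, m_pred=5))
print(effective_se(snr_linear=100.0, period_ms=20.0, n_beams=16))
```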
B. Investigation of Training Parameters and Complexity

We now investigate the impact of the training parameters on the narrow beam prediction loss loss_n for the proposed calibrated beam training (CBT) schemes. The learning rate r_L is a key algorithmic parameter of the Adam optimizer [56]. Fig. 6 investigates the impact of r_L on the achievable loss_n for our proposed schemes. The number of trained wide beams in the enhanced adaptive CBT scheme is K = 7 with the weight coefficient µ = 1. As expected, a larger r_L leads to a higher loss_n but faster convergence. It can be seen from Fig. 6 that to achieve the best combined steady-state loss_n performance and convergence rate, r_L = 0.0003 is appropriate for the CNN and LSTM assisted CBT schemes, while r_L = 0.0001 is appropriate for the adaptive CBT and enhanced adaptive CBT schemes. Next, the impact of the weight coefficient µ on loss_n for the enhanced adaptive CBT scheme under K = 7 is investigated in Fig. 7. It can be seen that loss_n is minimized when µ is around 1 for both the ONC and MPC based schemes. Therefore, µ is set to 1 in our simulation study. The complexity of our proposed schemes is summarized in Table III; the runtimes in prediction are all smaller than 50 µs, which ensures fast beam alignment in mobile scenarios.

C. Investigation of Our Adaptive CBT Schemes

Four adaptive CBT schemes are proposed in Section V, namely, the adaptive CBT with ONC, the adaptive CBT with MPC, the enhanced adaptive CBT with ONC, and the enhanced adaptive CBT with MPC. We now compare these adaptive CBT schemes. We first investigate the impact of the number of wide beam trainings t on the normalized beamforming gain G_N in Fig. 8 for the four schemes, given the number of trained wide beams K = 5 and 7 as well as no additional narrow beam training. As expected, G_N increases with t for both the ONC and MPC based schemes, because more prior received signals provide more accurate UE movement information. The results thus demonstrate that both ONC and MPC can select the wide beams with high SNRs effectively. After t = 6, G_N appears to converge in all the cases. It can also be seen that increasing the number of trained wide beams from K = 5 to K = 7 improves the achievable G_N. Moreover, the performance of the enhanced adaptive scheme is better than that of its basic counterpart, which validates that the auxiliary LSTM is capable of enhancing the accuracy of tracking the UE location in mobile scenarios. Also the ONC based scheme outperforms its MPC based counterpart.

Fig. 9 shows the impact of the number of trained wide beams K on the achievable G_N, where the results are obtained by averaging G_N over t ≥ 6. Due to the symmetry of neighboring wide beams, only the results with odd K are shown for the ONC based enhanced adaptive scheme. It can be seen that G_N increases with K, since the deep learning model can extract more robust features from the received signals of more wide beams. The results confirm that the enhanced adaptive scheme outperforms its basic counterpart. Also the performance of the ONC based scheme is better than that of its MPC based counterpart, especially when K is small. This is because noise and multipath interference may lead MPC to select irregular indices of wide beams, which makes it difficult for the deep learning model to extract stable features. An example with K = 3 is illustrated in Fig. 10, where the predicted probabilities have several local maxima due to the noise and NLOS paths. This may make MPC ignore the neighboring wide beams of the strongest wide beam and lead the prediction model to fail to track the beam switch in mobile scenarios.

Fig. 10. Comparison between normalized predicted probabilities and normalized beamforming gains of wide beams for the MPC based enhanced adaptive CBT scheme, where red columns denote the selected wide beams to be trained.

The results of this investigation suggest that among the four adaptive CBT schemes introduced in Section V, the ONC based enhanced adaptive scheme performs the best. Accordingly, we use this scheme to represent our adaptive CBT approach in the following performance comparison.

D. Performance Comparison

We compare the performance of our proposed schemes with the following three baselines. Baseline 1: the noise-free beam prediction scheme in [14] with N_Tx/s_Tx measurements of wide beams. Baseline 2: the deep learning based beam prediction scheme in [25] with N_Tx/s_Tx measurements of uniformly sampled narrow beams. Baseline 3: the adaptive and sequential beam alignment scheme in [16] with N_Tx/s_Tx measurements of hierarchical beams and the target resolution N_Tx.

With no additional narrow beam training, i.e., K_n = 0, we first investigate the impact of the number of wide beam trainings t on the achievable normalized beamforming gain G_N in Fig. 11, where all three baselines and our CNN assisted scheme do not rely on the prior information and thus their G_N performance does not change with t. It can be seen that all our deep learning schemes significantly outperform all three baselines. Specifically, the performance of the deep learning based baseline 2 and the adaptive alignment based baseline 3 is dramatically better than that of baseline 1, but G_N of our CNN assisted scheme is 5% higher than those of baseline 2 and baseline 3. Moreover, the LSTM assisted scheme attains the best performance, and the enhanced adaptive scheme has the second-best performance. Similar to the enhanced adaptive scheme, G_N of the LSTM assisted scheme increases with t and converges after t ≥ 6. According to Fig.
11, the normalized beamforming gains G_N of all the models converge after t ≥ 6. Therefore, in all the following simulation experiments, we use the results of G_N after the models have converged, i.e., we use the average values of G_N over t = 6 to 10. The superiority of our proposed schemes over the three baselines is further demonstrated by the cumulative distribution functions (CDFs) of the predicted narrow beam gains of the various schemes shown in Fig. 12. The following general observations can be drawn from Figs. 11 and 12. Our CNN assisted scheme achieves better beam prediction than baseline 1, since it can extract more robust features from the received signals of all the wide beams. Our CNN assisted scheme can estimate the range of the optimal narrow beam more accurately than baseline 2, and it still obtains a suboptimal narrow beam even when the beam direction is not perfectly aligned. Thus our CNN assisted scheme outperforms baseline 2 because wide beams can achieve greater angular coverage than the sampled narrow beams used in baseline 2. Baseline 3 performs worse than our CNN assisted scheme due to its under-exploration of the beam space. The comparison between the CNN assisted scheme and the LSTM assisted scheme demonstrates that the prior information can effectively reduce incorrect predictions. It can also be seen that the enhanced adaptive scheme significantly reduces the training overhead at the cost of some degradation in prediction accuracy. In our proposed schemes, additional narrow beam training can be performed to further calibrate the beam directions according to the predicted probabilities. Here, we investigate the impact of the number of additional trained narrow beams K_n on the normalized beamforming gain G_N in Fig. 13. As expected, the additional narrow beam training improves the achievable G_N for all the schemes. Observe that the performance gains of our deep learning based schemes over the existing deep learning based baseline 2 increase with K_n. Moreover, with K_n = 4, the LSTM assisted scheme achieves almost perfect beam alignment with G_N = 99.0%, and the enhanced adaptive scheme given K = 5 attains G_N = 96.0%. The enhanced adaptive scheme of course has the additional advantage of requiring a significantly lower overhead of beam training. Next, Fig. 14 investigates the impact of the UE velocity v_UE on the achievable normalized beamforming gain G_N by varying v_UE from 10 m/s to 50 m/s. It can be seen that v_UE does not much affect the performance of the three baselines and our CNN assisted scheme, since these schemes do not rely on the prior information. By contrast, v_UE has some adverse effect on our LSTM assisted scheme and enhanced adaptive scheme. Specifically, when v_UE increases from 10 m/s to 50 m/s, their G_N performance reduces by around 7%. This is because UE movement information is more difficult to extract accurately under high UE velocities. We also investigate the impact of the transmit power P on the normalized beamforming gain G_N in Fig. 15 by varying P from 10 dBm to 25 dBm. Obviously, G_N increases with P owing to the higher SNRs. It can be seen that our CNN assisted scheme achieves a larger performance enhancement than the existing deep-learning based baseline 2 and the adaptive alignment based baseline 3 as P increases, which further verifies the advantage of our proposed scheme over baseline 2 and baseline 3 in high SNR scenarios.
The achievable normalized beamforming gains G_N for different candidate narrow beam numbers N_Tx with K_n = 0 are depicted in Fig. 16, where the wide beam number N_Tx/s_Tx is fixed to 16. Obviously, G_N decreases as N_Tx increases for all the schemes. Our schemes clearly outperform the three baselines for all N_Tx, which demonstrates the scalability of our prediction models. Finally, we investigate the achievable effective spectral efficiency performance, E of (43), for the various schemes over T_tot = 1,000 ms from initial access. Specifically, Fig. 17 depicts the effective spectral efficiency as a function of the beam training period τ. It can be seen that E initially increases with τ owing to the reduced beam training overhead. However, after achieving the maximum E, increasing τ further decreases E. This is because the loss of beam alignment is more likely to occur during longer data transmission. Observe that our enhanced adaptive CBT scheme with additional narrow beam training achieves the largest E, especially under small τ, since it can achieve almost perfect beam alignment with a smaller overhead of beam training.

VII. CONCLUSIONS

To reduce the overhead of mmWave beam training, a deep learning assisted calibrated beam training approach has been proposed in this paper, and the feasibility of estimating the angle of the dominant path based on the channel power leakage in the received signals of beam training has been elaborated. Three schemes have been designed to predict the optimal narrow beam according to the received signals of wide beam training, utilizing deep learning models to handle the highly nonlinear properties of the channel power leakage. Specifically, a CNN has been adopted in the first scheme to predict the beam based on the instantaneous received signals. Furthermore, additional narrow beam training according to the predicted probabilities has been proposed to further calibrate the beam directions. In the second scheme, an LSTM has been adopted to track the movement of the UE and calibrate the beam direction based on the received signals of prior beam training. In the third scheme, an adaptive beam training strategy has been proposed where partial wide beams are selected to be trained based on the prior received signals. Two criteria, namely, ONC and MPC, have been designed for the selection, where ONC selects the neighboring wide beams of the predicted optimal beam, while MPC selects the wide beams with the top predicted probabilities. To better cope with UE mobility, an auxiliary LSTM has been introduced to calibrate the directions of the selected wide beams more precisely. Simulation results have demonstrated that our proposed deep-learning based schemes achieve significantly higher beamforming gain while imposing a smaller beam training overhead, compared with the conventional and existing deep-learning based beam training schemes.
2021-01-14T02:15:49.170Z
2021-01-08T00:00:00.000
{ "year": 2021, "sha1": "eea6c1101683e7c9f06f3304342dd417171b8353", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2101.05206", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "eea6c1101683e7c9f06f3304342dd417171b8353", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
35389426
pes2o/s2orc
v3-fos-license
High-Fat Diet and Voluntary Chronic Aerobic Exercise Recover Altered Levels of Aging-Related Tryptophan Metabolites along the Kynurenine Pathway Tryptophan metabolites regulate a variety of physiological processes, and their downstream metabolites enter the kynurenine pathway. Age-related changes in the metabolites and the activities of associated enzymes in this pathway are plausible and would be potential intervention targets. Serum levels of tryptophan metabolites in C57BL/6 mice of different ages, ranging from 6 weeks to 10 months, were assessed using high-performance liquid chromatography, and the enzyme activities for each metabolic step were estimated using the ratios of appropriate metabolite levels. Mice were subjected to voluntary chronic aerobic exercise or a high-fat diet to assess their ability to rescue age-related alterations in the kynurenine pathway. The ratio of serum kynurenic acid (KYNA) to 3-hydroxykynurenine (3-HK) decreased with advancing age. Voluntary chronic aerobic exercise and a high-fat diet rescued the decreased KYNA/3-HK ratio in the 6-month-old and 8-month-old mouse groups. Tryptophan metabolites and their associated enzyme activities were significantly altered during aging, and the KYNA/3-HK ratio was a meaningful indicator of aging. Exercise and a high-fat diet could potentially recover the reduction of the KYNA/3-HK ratio in the elderly. Alteration of the kynurenine pathway has been described in various diseases such as ischemic stroke, Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis [4][5][6]. Recent studies have focused on identifying the underlying disease mechanisms and their potential targets for therapeutic intervention [4,6,7]. KYN metabolite levels have been taken to represent the overall status of the central nervous system (CNS) [5], and measuring TRP metabolites could be a useful biomarker of CNS aging. In addition, aerobic exercise and a high-fat diet have been proposed to reverse age-related somatic alterations and cognitive decline [8][9][10][11][12][13]. Based on the idea of TRP metabolites as a biomarker of aging, we examined whether there are differences in the TRP metabolites and enzyme activities of the kynurenine pathway according to age. In addition, we also attempted to evaluate whether the TRP metabolites can be modified by aerobic exercise or a high-fat diet. Mice. C57BL/6 mice (Orient Bio Inc.) were used for all experiments. Mice were housed three per cage (for 8-month-old mice, two per cage after day 5) under pathogen-free conditions with a 12/12 hour light-dark cycle and ad libitum access to water and food. All animal protocols were approved by the Institutional Animal Care and Use Committee of Seoul National University Clinical Research Institute. Aging, diet, and exercise assays. Six-week-old male C57BL/6 mice were arranged into 5 groups (total n=29) and were sacrificed at different ages: 6 weeks, 3 months, 6 months, 8 months (n=6 per timepoint), or 10 months (n=5). To evaluate the effect of a high-fat diet and exercise, separate groups of mice at the age of 2, 5, and 7 months (total n=57) were arranged into a high-fat diet group (n=5 per timepoint), an exercise group (n=5 per timepoint), or a control group (n=9 for 2 months and 5 months, n=10 for 7 months). Mice in the high-fat diet group were fed a high-fat diet (D12492, Research Diets, Inc., New Brunswick, NJ) for 1 month, which consisted of 20% protein, 20% carbohydrate, and 60% fat (in kcal%).
Fat composing the diet was formulated with soybean oil and lard. Mice in the exercise group were given voluntary chronic aerobic exercise for 1 month by providing a running wheel (Lafayette Instrument Co., Lafayette, IN) in the cage [14,15]. The number of revolutions was measured automatically by the rotation counter within the wheel, and the distance run was calculated (0.4 meters per revolution). For the control group, mice were fed an ordinary diet (PicoLab® Rodent Diet 20, LabDiet, St. Louis, MO), consisting of 20% protein, 52.9% carbohydrate, and 10.6% fat (in kcal%), without a running wheel in the cage. For each mouse, blood samples were taken at the time of sacrifice (3, 6, and 8 months) to quantify TRP metabolite levels after overnight fasting. Measurement of tryptophan metabolites and enzyme activity. The concentrations of TRP and its metabolites (KYN, KYNA, and 3-HK) in serum were determined using a modified version of a previously established method [16,17]. Briefly, sample protein was precipitated using methanol. Tryptophan methyl ester was used as an internal standard for the quantification of TRP and the above metabolites. An LC-MS/MS system with an Agilent 1200 series HPLC (Agilent Technologies, Santa Clara, CA, USA) coupled to an Applied Biosystems API4000 triple quadrupole mass spectrometer (AB Sciex, Framingham, MA, USA) was used for quantification. Chromatographic separation was conducted on a Synergi Polar-RP column (Phenomenex Inc., Torrance, CA, USA) with a mobile phase consisting of 5 mM ammonium formate in distilled water and 0.1% formic acid in methanol. The intra- and inter-day accuracies of this method ranged from 99.76% to 106.8%, and the intra- and inter-day precisions were better than 5.4% throughout. To estimate the enzyme activities of the TRP metabolic pathway, conversion ratios between pairs of metabolites were quantified. The activity of TDO and IDO was estimated by calculating the ratio of serum KYN to TRP levels (KYN concentration divided by TRP concentration) [18,19]. KMO activity was estimated using the ratio of 3-HK to KYN levels (3-HK concentration divided by KYN concentration), and KAT activity using the ratio of KYNA to KYN levels (KYNA concentration divided by KYN concentration). The ratio of KYNA to 3-HK, which represents the ratio of KAT activity to KMO activity, was also calculated.
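The enzyme-activity estimates described here are simple concentration ratios; a minimal Python sketch makes the computation explicit. The input values below are hypothetical placeholders, not data from this study.

```python
def kynurenine_ratios(trp, kyn, kyna, hk3):
    """Estimate kynurenine-pathway enzyme activities from serum metabolite
    concentrations (all inputs in the same units), as described above."""
    return {
        "KYN/TRP (TDO+IDO)": kyn / trp,
        "3-HK/KYN (KMO)": hk3 / kyn,
        "KYNA/KYN (KAT)": kyna / kyn,
        "KYNA/3-HK (KAT/KMO)": kyna / hk3,
    }

# Hypothetical example concentrations (arbitrary units):
print(kynurenine_ratios(trp=80.0, kyn=0.6, kyna=0.05, hk3=0.01))
```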
The effects of aging on tryptophan metabolites and enzyme activity. Among the age groups, there were significant differences in the levels of serum TRP metabolites (Table 1 and Fig. 1), specifically in TRP (p=0.039), KYN (p=0.013), and KYNA (p=0.01) concentrations. These metabolites showed lower levels in the older groups. However, 3-HK concentration did not differ significantly between the age groups (p=0.269). KYN/TRP ratios were significantly different, with the lowest level at 6 months (p=0.015). In addition, the KYNA/KYN ratio (p=0.001) and the KYNA/3-HK ratio (p=0.001) were significantly reduced in the older groups compared with the younger groups. However, there were no significant differences detected in the 3-HK/KYN ratio (p=0.067). The effects of high-fat diet on tryptophan metabolites and enzyme activity. In the high-fat diet group, TRP concentration was higher in 8-month-old mice (p=0.002), while the concentration of KYN was lower in the 3-month-old mice compared with control groups (p=0.003) (Table 2 and Fig. 2). 3-HK was decreased in the 3-month-old group compared with controls (p=0.028), while KYNA was increased in the 6-month-old (p=0.021) and 8-month-old groups (p=0.028) (Fig. 2). The KYN/TRP ratio was decreased in the 3-month-old mice compared with controls (p=0.006), while the 3-HK/KYN ratio was constant throughout the age groups. The KYNA/KYN ratio was increased in the 6-month-old (p=0.045) and 8-month-old age groups (p=0.011), resulting in the elevation of the KYNA/3-HK ratio in the 6-month-old and 8-month-old mice (p=0.005 and p=0.018, respectively) (Fig. 2). DISCUSSION Our data suggest that serum TRP metabolites and the activities of enzymes in the kynurenine pathway are altered by aging and are further affected by diet or exercise. The KYNA/3-HK ratio represents the overall status of the kynurenine pathway, and alterations in this ratio are strongly related to aging. There was also an age-related increase in the neurotoxic products compared with the neuroprotective products. These products signify the relative neurotoxic and neuroprotective processes downstream of the kynurenine pathway and are potential biomarkers and therapeutic targets of aging. On the other hand, exposure to a high-fat diet or exercise recovered these changes by raising the KYNA/3-HK ratio. These effects were shown only in certain age groups, which implies that age-specific intervention via diet or exercise might be beneficial. Two products of KYN metabolism, 3-HK and KYNA, are produced through separate metabolic pathways and have distinct effects. The effects of 3-HK are toxic, as it promotes the production of reactive oxygen species [20]. In contrast, KYNA has a neuroprotective effect through inhibition of N-methyl-D-aspartate (NMDA) [21], glutamate [22], kainate [23], and α7 nicotinic acetylcholine receptors [24], as well as potent antioxidant properties [25].

Table 3. Tryptophan metabolites and enzyme activities in 3-month-old, 6-month-old, and 8-month-old C57BL/6 mice subjected to voluntary chronic aerobic exercise (running wheel in the cage for the previous 1 month), with no running wheel in the control group.

Multiple inflammatory conditions activate IDO, including infection [26], cancer [27], atherosclerosis [28], obesity [29], and chronic heart disease [30]. For these conditions, IDO is considered to have anti-inflammatory effects [7]. The activities of KAT and KMO directly contribute to the production of KYNA and 3-HK, respectively. Therefore, these enzymes have a crucial role in this pathway and are potential targets for pharmacological intervention [6,7,31]. Several animal studies suggest TRP metabolism has a role in the aging process [32]. Previous studies have demonstrated changes in plasma TRP levels in humans, mostly showing a decline with aging [33][34][35]. TDO and IDO activity decrease with aging in most organs, while IDO activity in the brain increases. A mild inflammatory environment related to the aging process might explain IDO activation [7]. Thus, the decreased KYN/TRP ratio compared with the youngest age group and its mild elevation during aging in our results might be explained by both decreased TDO and IDO activity in the older age groups and a decline in TRP levels together with IDO activation by inflammatory processes during aging. Aside from the KYN/TRP ratio, we have demonstrated that there is a decrease in KYNA, the KYNA/KYN ratio, and the KYNA/3-HK ratio during aging, suggesting a reduced neuroprotective capacity as aging proceeds.
On the other hand, the 3-HK concentration and 3-HK/KYN ratio remained constant throughout the aging process, suggesting that the neurotoxic properties of this pathway do not change as aging proceeds. In addition, age-related blood-brain-barrier dysfunction could increase the transmission of TRP metabolites that do not usually enter the CNS, such as KYN and quinolinic acid. These metabolites might further contribute to neurodegenerative disorders and the cognitive decline related to the aging process [36,37]. Thus, although our study only demonstrated changes of TRP metabolites in the blood, there is a possibility that these can influence the age-related changes of the brain. Further studies focused on age-related TRP metabolite changes measured directly in the brain, along with behavioral studies of cognition and memory, might be helpful to reveal the potential linkage. A high-fat diet and exercise increased the KYNA/3-HK ratio in the 6-month-old and 8-month-old mice. However, these changes were not observed in the 3-month-old mice. This increase in the KYNA/3-HK ratio appeared to be the result of distinct processes, as both an elevated KYNA/KYN ratio and a decreased 3-HK/KYN ratio were observed in the exercise group, while a decrease in the 3-HK/KYN ratio seems to mainly contribute to the increase of the KYNA/3-HK ratio in the high-fat diet group. Exercise is known to accelerate TRP metabolism through activation of IDO [38]. One previous study showed that wheel running in rats increases plasma and brain free TRP levels via stimulation of lipolysis, as unesterified fatty acids decrease TRP binding to albumin [39]. A recent animal study indicates that exercise increases the skeletal muscle expression of KAT through the PGC-1α1-PPARα/δ pathway [40], which could be a potential mechanism explaining our results. In addition, our data show a reduced 3-HK/KYN ratio, suggesting that another beneficial result of exercise is reduced oxidative stress. The effect of a high-fat diet on the kynurenine pathway is not well understood. A previous study in rats revealed inhibition of liver TDO by a high-fat diet, with increased plasma and brain free TRP levels and lowered total TRP levels [41]. Another previous work in rabbits detected no changes in the enzyme activities of this pathway with a high-fat diet [42]. While exercise has well-characterized beneficial effects [8,9,11], a high-fat diet is known to negatively affect life span [11] and cognitive function [12,43]. The proposed mechanisms contributing to the harmful effects of a high-fat diet include induction of oxidative stress, inflammation, insulin resistance, and a decrease in the expression of neurotrophic factors [11,12,43]. However, recent studies indicate an ameliorating effect of a high-fat diet on premature aging and neuronal damage [10,44]. Increased NAD+ and sirtuin activity are proposed to be key mechanisms that lead to the beneficial effects on mitochondrial homeostasis [10,45]. These inconsistencies could be the result of age-specific effects of a high-fat diet, as the rescuing effect on the kynurenine pathway only occurred in the 6-month-old and 8-month-old mice. Since NAD+ is one of the known end products of the kynurenine pathway [4], a link between the kynurenine pathway and mitochondrial homeostasis might be suggested as the mechanism of the observed beneficial effect of a high-fat diet.
Moreover, a recent animal study of the association between gene expression and lifespan showed that lifespan correlated with inflammation, apoptosis, PPAR signaling, and various metabolic pathways [11], which are also known to be related to the kynurenine pathway. However, more focused studies are required to reveal the connection between these interventions and their beneficial effects in specific age groups. Our study has several limitations. First, older mice could be more pertinent for identifying the effect of aging; however, in our study, mice older than 10 months showed poor compliance with voluntary exercise and diet and were therefore excluded. Second, the effect of diet would have been verified more precisely by comparing the amount of food consumption or body weight between mice receiving the high-fat diet and the ordinary diet. A previous study demonstrated that animals receiving a high-fat diet showed increased body weight and body fat content [11]. Third, downstream effects should have been documented more directly by measuring metabolite levels in target organs or concentrations of downstream molecules such as NADPH.
2018-04-03T02:52:39.294Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "4c557b2692b22025be165514afe27d0dcbef8c5b", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5491581?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "4c557b2692b22025be165514afe27d0dcbef8c5b", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
260053116
pes2o/s2orc
v3-fos-license
Pharmacological Basis for Antispasmodic, Bronchodilator, and Antidiarrheal Potential of Dryopteris ramosa (Hope) C. via In Vitro, In Vivo, and In Silico Studies Background: Dryopteris ramosa has long been used as a traditional treatment for several diseases. D. ramosa fronds are eaten to treat gastrointestinal (GIT) issues and as an antibiotic. However, there is a dearth of literature justifying its traditional use. Aims and objectives: The current work used biological and molecular docking studies to support the traditional usage and elucidate D. ramosa's multitarget mechanism. Materials and methods: Bioactive compounds were docked in silico. Force-displacement transducers coupled with a PowerLab data acquisition system examined the effects of compounds on rabbit jejunum, trachea, and aorta tissues. Albino mice and rats were used for the in vivo studies. Results: Bioactive compounds interacted with inflammation, asthma, and diarrhea genes, according to the in silico studies. D. ramosa crude extract (Dr.Cr) calmed spontaneous contractions and K+ (80 mM)-provoked contractions in jejunum and tracheal tissue dose-dependently, showing the presence of a Ca++ channel-blocking (CCB) effect, further verified by a rightward parallel shift of CRCs equivalent to verapamil. Polarity-based fractionation showed spasmolytic activity in Dr.DCM and muscarinic receptor-mediated spasmogenic activity in the Dr.Aq fraction. Dr.Cr vasoconstricted the aortic preparation, an effect totally blocked by an angiotensin II receptor antagonist. This suggests that Dr.Cr's contractile effect is mediated through angiotensin receptors. In rats and mice, it showed anti-inflammatory and antidiarrheal action. Conclusion: This study supports the traditional medicinal uses of D. ramosa against GIT disorders, and the plant may be an important therapeutic agent in the future. INTRODUCTION Since the beginning of their evolutionary history, humans have had discrete pharmacological knowledge of the medicinal effects of plants, leaving traces in prehistoric and subsequent cultural heritage. 1 Ethnopharmacology is the scientific study of conventional medical procedures and how many cultures use plants, animals, and minerals for therapeutic purposes. But there have also been a number of disputes around this topic, including cultural appropriation and violation of the intellectual property rights of native people, as there are instances where pharmaceutical companies have patented traditional practices without the consent of indigenous communities. These debates show how important it is for ethnopharmacology to take ethical concerns and cultural sensitivity seriously in order to uphold and value indigenous knowledge and practices. 2 Besides this, recently, the transition from conventional ethnopharmacology to drug discovery has been facilitated by the introduction of specialized extraction techniques, sophisticated new methodologies such as high-performance liquid chromatography (HPLC), liquid chromatography-mass spectrometry (LC-MS), and gas chromatography-mass spectrometry (GC-MS), chemoinformatic techniques, the advancement of isolation and characterization techniques, and the rise in computing power, including molecular docking and gene target prediction of compounds. 1 A lot has been accomplished recently in the rapidly developing discipline of ethnopharmacology. Using cutting-edge methods like metabolomics and high-throughput screening, scientists are discovering new molecules in traditional medicinal plants.
These substances have the potential to be turned into novel treatments and medications. 3 The validation of the effectiveness and safety of conventional medicines has been greatly aided by ethnopharmacology. Clinical trials are being carried out by researchers to examine the efficacy of conventional treatments. Hepatotoxicity, total renal clearance, and the ability to inhibit P-glycoprotein were also calculated using pkCSM. 2.5.2. Molecular Docking. The previously reported method of Sirous et al. 27 was used to perform molecular docking studies for the proteins and bioactive compounds. 2.5.2.1. Ligand Preparation. The PubChem database (https://pubchem.ncbi.nlm.nih.gov, accessed on May 28, 2022) was employed to retrieve two-dimensional (2D) structures of the already reported bioactive compounds from D. ramosa, and these ligands were processed in the LigPrep module of Maestro (Schrodinger Suite 2018, Schrodinger, Inc., NY) for ionization, minimization, and optimization. The Epik tool of this module was used to produce the ionization states of the ligands at cellular pH (7.4 ± 0.5), and the OPLS3e force field was applied for the minimization and optimization of the ligand structures, producing the minimum-energy conformers of the ligands. 2.5.2.2. Protein Preparation. Maximum-resolution protein X-ray structures for molecular docking were retrieved from the Protein Data Bank (RCSB PDB) (https://www.rcsb.org, accessed on May 28, 2022). These structures were put through the Protein Preparation Wizard of Maestro (Schrodinger Suite 2018, Schrodinger, Inc., New York) to add H-atoms to the protein structure, remove extra solvent water molecules, assign bond orders, create disulfide bridges, fill in the absent side chains, and generate the protonation states of the protein assemblies for the ligands at cellular pH (7.4 ± 0.5) using the Epik tool. Following refinement, PROPKA was used at pH 7.0 to improve the protein structures. Using the OPLS3e force field, we carried out restrained minimization for energy and geometry optimization of the protein structures. 2.5.2.3. Receptor Grid Generation. The Receptor Grid Generation module of Maestro (Schrodinger Suite 2018) defined the active regions of the protein structures for molecular docking. With the use of already bound protein ligands and the existing literature, a cubic grid block for each protein was created. The grid box's dimensions were set to 16 Å per side. The potential of the receptor's nonpolar components was reduced with a scaling factor of 1.0 on the van der Waals radii of nonpolar protein atoms and a partial atomic charge cutoff of 0.25. 2.5.2.4. Molecular Docking. The prepared ligand structures and protein structures were put through the Ligand Docking (Glide) module of Maestro in extra precision (XP) mode (Schrodinger Suite 2018), utilizing the previously created receptor grid file. A van der Waals radii scaling factor of 0.80 with a partial charge cutoff of 0.15 was applied. Using the VSGB solvation model and the OPLS3e force field, the Prime MM-GBSA module was utilized to analyze the docking results and identify the binding dynamics of the ligand molecules with the target protein structures. 2.5.2.5. Inhibition Constant (K_i). The inhibition constant was calculated from the binding free energy of a ligand previously produced by Prime MM-GBSA using the standard relation K_i = exp(ΔG_bind/RT), where R is the gas constant and T is the absolute temperature.
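A minimal sketch of that conversion follows, assuming the standard relation above with ΔG_bind in kcal/mol (the paper's exact equation is not reproduced in this extraction), and a hypothetical binding free energy as input.

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def inhibition_constant(dg_bind_kcal_mol, temp_k=298.15):
    """K_i = exp(dG_bind / (R*T)); a negative dG_bind gives K_i < 1 M."""
    return math.exp(dg_bind_kcal_mol / (R_KCAL * temp_k))

# Hypothetical binding free energy of -8.5 kcal/mol -> K_i in molar units:
ki_molar = inhibition_constant(-8.5)   # ~5.9e-7 M, i.e. sub-micromolar
```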
2.6. In Vitro Assays. Isolated tissue responses under physiological conditions were recorded by Bioscience isometric and isotonic force-displacement transducers attached to a PowerLab data acquisition system (AD Instruments, Bella Vista, NSW, Australia), displaying results on a computer with LabChart software (version 6) installed. The effect of the test substance was measured as the percentage change in the tissue response recorded after administration of the test doses. 28,29 2.6.1. Preparation of Isolated Jejunum. Rabbits were sacrificed to obtain the jejunum. Mesenteries were removed from the tissue, and jejunal segments of 2−3 cm length were prepared and suspended in tissue organ baths prefilled with 15 mL of Tyrode's solution bubbled with carbogen (5% CO2 and 95% O2), containing NaCl (136.9 mM), NaHCO3 (11.90 mM), MgCl2 (1.05 mM), KCl (2.68 mM), glucose (5.55 mM), CaCl2 (1.8 mM), and NaH2PO4 (0.42 mM), and maintained at 37°C. All tissues were permitted to equilibrate for around 30 min and were then stabilized with ACh (1 μM) applied at 3 min intervals to obtain a consistent tissue response prior to the addition of any drug solution or plant extract. Before starting the experiment, the organ bath fluid was replaced with fresh Tyrode's solution, and spontaneous rhythmic contractions were recorded prior to testing the drug. The possible spasmolytic or spasmogenic response of Dr.Cr was studied on equilibrated jejunal preparations by cumulative addition of different doses of Dr.Cr. A dose−response curve was created, and the response was reported as a percentage of the control contractions. For determination of CCB activity, 80 mM KCl was used to precontract the jejunum. 30,31 To further illustrate the Ca++ channel antagonistic action, isolated rabbit jejunum tissues were stabilized in normal Tyrode's solution, which was subsequently replaced with calcium-free Tyrode's solution containing 0.1 mM EDTA (a chelating agent) for around 30 min. This solution was then replaced with a Ca++-free, K+-rich Tyrode's solution containing NaCl (91.0 mM), NaHCO3 (11.89 mM), C6H12O6 (5.6 mM), KCl (50.1 mM), Na2HPO4 (0.43 mM), EDTA (0.12 mM), and MgCl2 (2.0 mM) for about 30 min to achieve stability. Control concentration−response curves (CRCs) of Ca++ were constructed by cumulative application of Ca++. A gradual increase in the contraction of the jejunal tissue indicates the dependency of the contractile response of smooth muscle on extracellular calcium. 28,31 After two cycles, superimposable curves were attained; tissues were then washed and allowed to stabilize in the presence of various concentrations of Dr.Cr, and CRCs were reconstructed after incubation times of 50 ± 10 min and compared with the control curves. 31
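As an aside on preparing such physiological solutions, converting a millimolar recipe to weigh-out masses is simple arithmetic: g/L = (mM/1000) × molar mass. The sketch below uses standard molar masses of the anhydrous salts (hydrated forms would need adjusted values) and is an illustration, not the authors' protocol.

```python
# Standard molar masses (g/mol) of the anhydrous salts:
MOLAR_MASS = {"NaCl": 58.44, "KCl": 74.55, "NaHCO3": 84.01,
              "CaCl2": 110.98, "MgCl2": 95.21, "glucose": 180.16,
              "NaH2PO4": 119.98}

# Tyrode's solution composition (mM), as listed above:
TYRODE_MM = {"NaCl": 136.9, "KCl": 2.68, "NaHCO3": 11.90,
             "CaCl2": 1.8, "MgCl2": 1.05, "glucose": 5.55,
             "NaH2PO4": 0.42}

grams_per_litre = {salt: mm / 1000.0 * MOLAR_MASS[salt]
                   for salt, mm in TYRODE_MM.items()}
print(grams_per_litre)  # e.g. NaCl -> ~8.0 g/L
```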
2.6.2. Preparation of Isolated Trachea. The trachea was dissected and divided into 2−3 mm rings (2−3 cartilaginous rings). The tracheal rings were cut longitudinally on the side opposite the smooth muscle, creating a strip with the smooth muscle sandwiched between the cartilaginous ends. Following that, the tissues were fixed in isolated tissue organ baths containing Krebs solution bubbled with carbogen (5% CO2 and 95% O2), composed of NaCl (118.2 mM), NaHCO3 (25.0 mM), CaCl2 (2.5 mM), KCl (4.7 mM), MgSO4 (1.2 mM), and glucose (11.7 mM), and maintained at 37°C. A preload tension of 1 g was applied to the tissues, and they were permitted to equilibrate for 50 ± 10 min prior to any experiment. High-K+ (KCl, 80 mM) and carbachol (1 μM) were used to achieve a persistent agonist response for determining the broncho-relaxant action of Dr.Cr. A sustained contraction was attained after 45 min, at which point the extract was applied cumulatively to obtain a concentration-dependent inhibitory response. Isometric tissue responses were captured using Bioscience transducers. 31 2.6.3. Preparation of Isolated Aorta. To examine the impact of the plant extract on vascular resistance, the descending thoracic portion of the aorta was cut into 2−3 mm broad rings, placed individually in isolated tissue organ baths, and given 50 ± 10 min to equilibrate. At the beginning of the experiment and throughout, a preload tension of 2 g was applied. Each tissue organ bath contained Krebs solution bubbled with carbogen (5% CO2 and 95% O2), composed of NaCl (118.2 mM), NaHCO3 (25.0 mM), CaCl2 (2.5 mM), KCl (4.7 mM), MgSO4 (1.2 mM), and glucose (11.7 mM), and was kept at a temperature of 37°C. To clarify any potential vasorelaxant or vasoconstrictor effects, Dr.Cr was added cumulatively at the baseline resting state. 32 Prazosin, losartan, and cyproheptadine were used as pretreatments on the isolated aortic rings in order to clarify the mechanism of contraction. 33 2.7. In Vivo Activity. 2.7.1. Anti-Inflammatory Activity. The previously described carrageenan-induced rat paw edema assay, with minor adjustments, was used to test Dr.Cr for its possible anti-inflammatory effect. 34 Twenty Wistar albino rats (♀, ♂) weighing between 180 and 220 g were divided into four groups, i.e., Group I: control group (0.9% saline), Group II: drug group (aspirin 0.01 g/kg), and Groups III and IV: test groups of Dr.Cr doses (0.1 and 0.2 g/kg, respectively). Edema was induced by inoculating 1% carrageenan into the subplantar area of the right hind paw 55 ± 5 min after dose administration (IP) of Dr.Cr, and the extent of the edema was measured for up to 4 h afterward using a plethysmometer (UGO Basile, Italy). Results were expressed as the percentage inhibition of edema.
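A minimal sketch of the percentage-inhibition outcome measure used here, and used again with defecation counts in the diarrhea assay below (whose (D_cn − D_t)/D_cn formula is given in the next subsection), follows; the edema version assumes the analogous (control − test)/control form, and all input values are hypothetical.

```python
def percent_inhibition(control_mean, test_mean):
    """Generic (control - test) / control * 100 inhibition measure."""
    return (control_mean - test_mean) / control_mean * 100.0

# Hypothetical paw-edema volumes (mL) at the 4th hour:
print(percent_inhibition(control_mean=1.10, test_mean=0.45))  # ~59% suppression

# Hypothetical mean wet-feces counts over 4 h (D_cn vs D_t):
print(percent_inhibition(control_mean=8.0, test_mean=1.25))   # ~84% protection
```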
2.7.2. Castor Oil-Induced Diarrhea. Twenty mice of either gender (♀, ♂) were randomly divided into four groups of five animals each to test the antidiarrheal effects of the Dr.Cr extract. Before the experiment, all of the groups were kept in separate cages with free access to water; however, food was withheld overnight prior to testing. Group I, labeled as the negative control group, was orally given 0.9% saline (NS) at a dose of 10 mL/kg. Loperamide (10 mg/kg) was given orally to Group II, which was designated as the positive control group. Groups III and IV were given Dr.Cr doses of 0.2 and 0.4 g/kg, respectively. 30 min after the administration of treatment, all groups were orally administered 10 mL/kg castor oil to induce diarrhea and monitored for wet diarrheal spots for up to 4 h. 28 The mean amount of feces for each group was determined, and the outcomes were presented as percentage inhibition, calculated as

% inhibition = (D_cn − D_t)/D_cn × 100,

where D_cn is the mean defecation of the control group and D_t is the mean defecation of the test group. Statistical Analysis. All of the results are expressed as mean ± S.E.M. GraphPad Prism (GraphPad, San Diego, California: http://www.graphpad.com) was utilized to obtain the median effective concentrations (EC50 values) with 95% confidence intervals (CI). In the case of the in vivo studies, statistical analysis consisted of one-way and two-way ANOVA followed by Dunnett's test, where a probability of p < 0.05 was deemed statistically significant. 35 RESULTS AND DISCUSSION Natural medicine has seen a significant rise in popularity over the last two decades. 36 Ethnopharmacological methods offer hints in the search for bioactive substances. 24 Dryopteris ramosa has many traditional uses, including its use in GIT disorders and as a febrifuge. Therefore, this study was designed to validate the potential mechanisms of D. ramosa in the digestive and respiratory systems by an integrated strategy of molecular docking of D. ramosa bioactive compounds and its validation through different in vitro and in vivo experimental models. 3.1. Pilot Phytochemical Screening. Preliminary phytochemical analysis of the ethanolic extract (Dr.Cr) showed flavonoids, glycosides, saponins, phenols, tannins, and steroids among the secondary bioactive metabolites of the plant. Flavonoids have been shown to have antispasmodic and calcium channel-blocking properties. 37,38 3.2. In Silico Studies. 3.2.1. ADMET Analysis. A literature review shows that several bioactive compounds have been extracted from D. ramosa, including gallic acid, quercetin, caffeic acid, vanillic acid, cinnamic acid, iriflophenon glycoside, mangiferin, and isomangiferin. 19,23,24 These compounds were subjected to an ADMET analysis utilizing the Schrodinger QikProp module, 39 SWISS ADME, and pkCSM (Table 1). (Footnote to Table 1: MW: molecular weight of the molecule in Daltons, i.e., 130.0−500.0; QPlogPo/w: forecast octanol/water lipophilicity partition coefficient, ranging from 2 to 6.5; QPlogS: predicted aqueous solubility, −6.5 to 0.5; QPlogHERG: predicted IC50 for HERG K+ channel blockade, > 5; QPPCaco: predicted apparent Caco-2 cell permeability, a model for the gut−blood barrier, with weak permeability if < 25 and large permeability if > 500; QPlogBB: predicted brain/blood partition coefficient, −3 to 1.2; QPPMDCK: blood−brain barrier non-active transport predicted by apparent MDCK cell permeability in nm/s, with < 25 poor and > 500 excellent; QPlogKp: predicted skin permeability, −8.0 to −1.0 cm/s; QPlogKhsa: anticipated human serum albumin binding, −1.5 to 1.5; CNS permeability: greater than −2 indicates ability to penetrate.) The aqueous solubility, octanol/water partition coefficient, and a number of other physical characteristics may all be predicted. Cell permeability, the brain/blood partition coefficient, QPlogKhsa to forecast binding affinity to human serum albumin, QPlogHERG to estimate HERG K+ channel blockade, and the human oral absorption percentage were determined. 3.2.2. Voltage-Gated Calcium Ion Channel. Isomangiferin, iriflophenon glycoside, quercetin, mangiferin, and vanillic acid displayed a strong binding affinity for L-type voltage-gated calcium ion channels during the molecular docking analysis, and in addition to these, gallic acid, caffeic acid, and cinnamic acid showed an affinity for myosin light chain kinase. All of these binding affinities and predicted properties of the bioactive compounds are similar to those of verapamil, a known calcium channel blocker. Therefore, it can be presumed that D. ramosa has a powerful antispasmodic effect, which is caused by the significant binding affinity of the compounds for their intended protein targets, inhibiting the signal transduction process of smooth muscle contraction. But there is a discrepancy: these predictions conflict with the traditional use and reported in vivo activity of the methanolic extract of D. ramosa as a laxative. 44 To address this discrepancy, we separated the active constituents of the crude ethanolic extract of D.
ramosa (Dr.Cr) through further polarity-based fractionation into Dr.Aq and Dr.DCM. 3.3. In Vitro Activities. 3.3.1. Response on Rabbit Jejunum Preparations. For validation of this proposed mechanism of action of D. ramosa, all three extracts, Dr.Cr, Dr.DCM, and Dr.Aq, were evaluated on isolated jejunum tissue preparations. The jejunum is used because of its high reactivity among smooth muscles. 28 An isolated tissue is unaffected by any neurological or hormonal influences and only responds intrinsically, so it is employed for the study of underlying mechanisms. 45 When the extracts were added to spontaneously contracting jejunum, divergent results were found. Dr.Cr inhibited the spontaneous contractions and demonstrated spasmolytic action (Figure 2B) in a cumulative dose range of 0.003−1 mg/mL with an EC50 value of 0.41 mg/mL (95% CI: 0.20−0.85, n = 5), in a manner comparable to verapamil. 46 Dr.DCM also caused suppression of contractions, while Dr.Aq enhanced the contractile response (Figure 2D,E). The spasmogenic effect of Dr.Aq was blocked by atropine (1 μM), thus indicating the presence of some cholinomimetic constituents and supporting its use as a laxative. 47 There are a number of physiological mediators that regulate the motor tone of the gastrointestinal tract by controlling the movement of Ca++ into and out of cells. 48 These physiological agents raise cytosolic calcium ion concentrations either by increasing the influx of calcium from the extracellular fluid or by stimulating its release from cytosolic calcium stores. 41 PLC is stimulated by activation of the M3 muscarinic receptor, and this in turn causes the secondary messengers inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG) to be hydrolyzed from phosphatidylinositol 4,5-bisphosphate. Inositol 1,4,5-trisphosphate receptors (IP3R) on the sarcoplasmic reticulum are stimulated by IP3 to release calcium ions, which raises the level of calcium in the cytosol. DAG and calcium activate the regulatory protein kinase C (PKC), and a calcium/calmodulin complex is formed. This calcium/calmodulin complex activates myosin light chain kinase (MLCK), which phosphorylates the myosin light chains (MLCs). Phosphorylated MLCs and actin then form an interaction network to produce a contractile response. 47 The primary cause of spontaneous smooth muscle contraction is a transient rise or drop in free Ca++ in the cytosol. This readily available cytosolic Ca++ interacts with the contractile components of the muscle to generate a transient activation or deactivation of the contractile machinery, resulting in changes in the resting membrane potential and the contractile response of smooth muscle (Figure 10). In order to more thoroughly assess the antispasmodic mode of action of Dr.Cr, rabbit jejunum tissue was exposed to prolonged constriction by the addition of high-K+ (80 mM). Studies show that at high concentrations, K+ activates the voltage-dependent Ca++ channels, causing an influx of free Ca++ into the cytosol and a significant depolarization of the membrane potential, which produces a persistent, long-lasting contraction. 47 Repolarization results in relaxation of the smooth muscle. 29
Dr.Cr, when added cumulatively, completely relaxed the high-K+ (80 mM)-evoked contractions (Figure 2C) at organ bath concentrations of 0.003−3 mg/mL with an EC50 of 0.76 μM (95% confidence interval: 0.48−1.19, n = 5; Figure 3A), thus preventing the stimulated depolarization. Substances that inhibit smooth muscle contractions caused by high K+ levels appear to impede Ca++ influx into the cytosol and allow the membrane potential to repolarize. 47 Verapamil, a common calcium channel blocker used as a positive control, demonstrated a comparable response and relaxed both the spontaneous and the high-potassium-induced contractions, with EC50 values of 0.34 μM (95% CI: 0.22−0.51, n = 5) and 0.04 μM (95% CI: 0.02−0.10, n = 5), respectively (Figure 3B). For confirmation of the calcium channel-blocking response of Dr.Cr, CRCs of calcium were constructed in the absence and presence of the extract. All Ca++ channel blockers share the trait of blocking calcium's slow entry, and this effect can be reversed by adding Ca++. 49 Pretreatment with the extract Dr.Cr suppressed the calcium CRCs at doses of 0.3 and 1 mg/mL with a rightward parallel shift in the jejunum tissue preparation, comparable to that of verapamil at doses of 0.1 and 0.3 μM. 50 These results were compared with the common calcium channel blocker verapamil, which caused complete suppression of the CRCs at doses of 0.3 and 1 μM (Figure 3C,D). This outcome suggests the presence of Ca++ channel-blocking constituents in Dr.Cr, which are helpful in cases where the gut is overactive. 49 Wahid et al. reported a strong affinity of quercetin for PLC. 47 Antispasmodic agents disrupt this pathway, and they are used to treat overactive gastrointestinal ailments. 29,51 Numerous investigations have revealed that the Ca++ antagonistic effect of medicinal herbs is the primary basis of their mode of action. 50 Multiple diseases can be treated through the interactions between bioactive substances and their target proteins. 52 3.3.2. Response on the Trachea. The potential bronchodilator properties of D. ramosa were investigated, as it contains flavonoids, which, besides their antispasmodic properties, are also known to act as bronchodilators. 53−55 Capasso et al. 37 studied the bronchodilator effect of quercetin on rat tracheal tissues. Djelili et al. 38 reported that quercetin and rutin had a bronchodilator effect on isolated human bronchus tissues. Chang et al. 56 reported the antiasthmatic activity of quercetin and rutin. Ko et al. 57 studied the broncho-relaxant effect of quercetin on KCl (30 mM)- and carbachol (0.2 μM)-induced spastic contractions in isolated guinea pig tracheal tissue preparations. To explore the mechanism of relaxation, investigations of Dr.Cr were conducted in isolated rabbit tracheal preparations that had been precontracted with CCh (1 μM) or high-K+ (80 mM). 58 Dr.Cr inhibited the high-K+ (80 mM)-induced contraction in the rabbit tracheal preparation (Figure 4) in a dose-dependent manner with a corresponding EC50 of 0.6194 mg/mL (95% CI: 0.388 to 0.988, n = 5). The response of Dr.Cr to CCh-induced contraction was insignificant, since it did not totally relax the contraction caused by 1 μM CCh up to 10 mg/mL (Figure 5A). This confirmed the speculation that the extract contained both cholinomimetic and calcium channel-blocking constituents, 47 with CCB as a prominent mechanism for the bronchodilator effect. 59 Partial relaxation of CCh-induced contractions showed that Dr.Cr had some muscarinic M3 receptor activity, which is masked at higher doses by other bioactive agonist compounds, as depicted by the spasmogenic effect of Dr.Aq in the jejunum (Figure 2E). Comparatively, verapamil, a common Ca++ channel antagonist, reduced high-K+ (80 mM)- and CCh (1 μM)-provoked contractions with corresponding EC50 values of 0.063 μM (95% CI: 0.02−0.17, n = 5) and 0.09 μM (95% CI: 0.063−0.14, n = 5; Figure 5B).
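A minimal sketch of how an EC50 with approximate confidence bounds can be extracted from a cumulative concentration−response curve, analogous to the GraphPad analysis described in the methods, follows. The Hill-equation model, the SciPy-based fitting, and the response data are illustrative assumptions, not the study's actual data or software.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(logc, bottom, top, log_ec50, hill_slope):
    """Four-parameter logistic (Hill) inhibition model on log10 concentration:
    response falls from `top` at low doses to `bottom` at high doses."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((logc - log_ec50) * hill_slope))

# Hypothetical cumulative doses (mg/mL) and responses (% of control contraction):
conc = np.array([0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
resp = np.array([98.0, 95.0, 88.0, 70.0, 45.0, 18.0, 5.0])

popt, pcov = curve_fit(hill, np.log10(conc), resp,
                       p0=[0.0, 100.0, np.log10(0.3), 1.0])
ec50 = 10.0 ** popt[2]
se_log_ec50 = np.sqrt(np.diag(pcov))[2]             # SE of log10(EC50)
ci_95 = (10.0 ** (popt[2] - 1.96 * se_log_ec50),    # approximate 95% CI
         10.0 ** (popt[2] + 1.96 * se_log_ec50))
```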
Given that they act as tracheal relaxants, CCBs are also known to be useful as cough remedies. 60 3.3.3. Effect on the Aorta. Moreover, the effect of Dr.Cr on vascular smooth muscle was examined on isolated aortic rings, where it caused a contractile response in stable aortic tissue preparations (Figure 6A), potentially as a result of activation of one or more types of receptors on the aorta. This contractile response of Dr.Cr decreased when the tissue was pretreated with losartan (Figure 6B). Dose−response curves of Dr.Cr after pretreatment with increasing concentrations of losartan were constructed, and Figure 7 shows that increasing the concentration of losartan shifted the dose−response curves toward the right by suppressing the contractions at the initial doses of Dr.Cr, which are therefore concluded to be mediated through activation of angiotensin II receptors. 31 3.4. In Vivo Activities. 3.4.1. Anti-Inflammatory Activity. The presence of flavonoids in D. ramosa suggests an anti-inflammatory effect, as studies have shown a relation between the anti-inflammatory effect of plants and the presence of flavonoids. 18 Dr.Cr significantly reduced the inflammation by inhibiting edema in the paws of rats at dosages of 0.1 and 0.2 g/kg in the treatment groups compared with the control group. Carrageenan causes edema to develop in two stages. During the first hour following carrageenan exposure, histamine, serotonin, prostaglandins, and cytoplasmic enzymes are released from the cells neighboring the wounded tissue. After the first hour, the second phase begins, characterized by a rise in prostaglandin secretion and the release of leukotrienes and kinins in the inflamed area. 61,62 The suppression of edema following pretreatment with Dr.Cr (0.1 g/kg; I/P) at the 1st, 2nd, 3rd, and 4th hours was measured to be 17.218, 19.934, 28.6, and 36.226%, respectively. In contrast, pretreatment with Dr.Cr (0.2 g/kg, I/P) led to 38.65, 43.648, 50.826, and 59.554% suppression at the 1st, 2nd, 3rd, and 4th hours, respectively, while standard aspirin therapy (0.01 g/kg; I/P) resulted in 17.072, 21.968, 54.864, and 69.978% suppression of inflammation, respectively (Table 3 and Figure 8). Dr.Cr substantially reduced the edema in the second phase, suggesting that it works by inhibiting cyclooxygenase activity. This result is comparable to that of nonsteroidal anti-inflammatory medications. These results are also supported by our docking studies, which showed strong interaction of bioactive compounds from the ethanolic extract of D. ramosa, such as iriflophenon glycoside, isomangiferin, mangiferin, and the polyphenolic compounds quercetin, gallic acid, caffeic acid, vanillic acid, and cinnamic acid, with the human cyclooxygenase-2 enzyme, suggesting a role of these compounds in the anti-inflammatory activity. 3.4.2. Effect on Diarrhea Brought on by Castor Oil. Diarrhea is described as the abnormal passing of soft stools as a result of impaired colonic transport of water and electrolytes.
Effect on Diarrhea Brought on by Castor Oil. Diarrhea is described as the abnormal passing of soft stools resulting from impaired colonic transport of water and electrolytes. Castor oil alters colonic water and electrolyte transport, which results in diarrhea and increased peristaltic motions. 63 Dr.Cr caused a significant decline in the number of wet fecal droppings in mice (Table 4). Group I (negative control) receiving normal saline (NS) at a 10 mL/kg dose showed 8 ± 0.50 wet fecal droppings. Group II (positive control) receiving loperamide at a 5 mg/kg dose showed 0.5 ± 0.1 fecal droppings. Groups III and IV receiving the extract at 200 and 400 mg/kg doses showed 1.75 ± 0.29 and 1.25 ± 0.26 fecal droppings, respectively (Table 5 and Figure 9). As shown in Table 5, both the 200 and 400 mg/kg doses of the ethanolic crude extract decreased the frequency of defecation and diarrheal stools in the castor oil-treated mouse groups over the 4 h observation period in comparison to the negative control (normal saline) group. There was a significant postponement in the onset of diarrhea at the 400 mg/kg dose, comparable to loperamide. This dose-dependent increase in protection from diarrhea, together with the delayed onset of diarrhea at the high dose (84.38% at 400 mg/kg), is comparable to loperamide (93.75% at a 10 mg/kg dose; Table 5). Loperamide has a propensity to interfere with the calcium-mediated signaling system, hence regulating intestinal tone. 64 Thus, the in vivo results showed that the extract reduces intestinal tone by blocking calcium channels. These results are supported by the in vitro and in silico studies (Figure 10). CONCLUSIONS The prospective compounds quercetin, mangiferin, and isomangiferin are promising bioactive constituents of Dryopteris ramosa; they showed strong binding affinity to L-type voltage-gated Ca++ channels, human myosin light chain kinase, COX-2, and arachidonate-5-lipoxygenase in the computational studies, and these proposed effects were validated through biological experiments. The hydroethanolic extract (Dr.Cr) of D. ramosa (Linn.) exhibited spasmolytic, spasmogenic, bronchodilator, and vasoconstrictive activities through different mechanisms. The spasmolytic and bronchodilator activities are mediated through blockade of Ca++ channels, the spasmogenic activity through activation of muscarinic receptors, and the vasoconstrictive activity may be due to the presence of angiotensin II agonistic compounds. The extract also has anti-inflammatory activity, so it may be helpful in treating asthma as well as diarrhea by controlling the contractile effect via calcium-mediated signaling.
2023-07-22T15:27:18.287Z
2023-07-20T00:00:00.000
{ "year": 2023, "sha1": "e6d1577fa33e72688d4919712b5c2581675d6e29", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1021/acsomega.3c01907", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4e79be4b7fe1bde5d5b71ae7b644743ae4f70de", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
3926010
pes2o/s2orc
v3-fos-license
Effects of 17β-Estradiol on Activity, Gene and Protein Expression of Superoxide Dismutases in Primary Cultured Human Lens Epithelial Cells. PURPOSE Protective effects of estradiol against H2O2-induced oxidative stress have been demonstrated in lens epithelial cells. The purpose of this study was to investigate the effects of 17β-estradiol (E2) on the different superoxide dismutase (SOD) isoenzymes, SOD-1, SOD-2, and SOD-3, as well as estrogen receptors (ERs), ERα and ERβ, in primary cultured human lens epithelial cells (HLECs). MATERIALS AND METHODS HLECs were exposed to 0.1 µM or 1 µM E2 for 1.5 h and 24 h after which the effects were studied. Protein expression and immunolocalization of SOD-1, SOD-2, ERα, and ERβ were studied with Western blot and immunocytochemistry. Total SOD activity was measured, and gene expression analyses were performed for SOD1, SOD2, and SOD3. RESULTS Increased SOD activity was seen after 1.5 h exposure to both 0.1 µM and 1 µM E2. There were no significant changes in protein or gene expression of the different SODs. Immunolabeling of SOD-1 was evident in the cytosol and nucleus; whereas, SOD-2 was localized in the mitochondria. Both ERα and ERβ were immunolocalized to the nucleus, and mitochondrial localization of ERβ was evident by colocalization with MitoTracker. Both ERα and ERβ showed altered protein expression levels after exposure to E2. CONCLUSIONS The observed increase in SOD activity after exposure to E2 without accompanying increase in gene or protein expression supports a role for E2 in protection against oxidative stress mediated through non-genomic mechanisms. Introduction When comparing the incidence of cataract for men and women of the same age, women after menopause have an increased risk of developing cataract, as shown by several epidemiological studies. [1][2][3] The dramatic reduction of estradiol at menopause has been hypothesized to lead to increased risk of cataract in women and accordingly, with studies showing reduced risk of cataract by exogenous estrogens, that is, hormonal replacement therapy, estrogens have been suggested to protect against cataract. [4][5][6] The primary estrogen, estradiol, is found both in men and women and the most potent form, 17β-estradiol (E2), binds to estrogen receptors (ERs), ERα and ERβ. Both types of receptors have been found in the human eye lens. 7,8 Protective effects of E2 against H 2 O 2 -induced oxidative stress have been demonstrated in lens epithelial cells (LECs). [9][10][11] In addition, several animal models have demonstrated effects of estrogen in the lens indicating protective effects against cataract formation. 12,13 The mechanism for estrogen-mediated protection is not fully elucidated, and both genomic and non-genomic mechanisms have been demonstrated. Estrogens can exert their effects through the classic genomic pathway by binding to ERs, thereby regulating gene expression via estrogen response elements (EREs) or by a non-classical genomic mechanism through ligand-activated ER interactions with co-regulators and transcription factors, such as activator protein 1 (AP-1) and transcription factor Sp1. 14,15 Non-genomic effects of steroids in general do not depend on gene transcription or protein synthesis and involve cytoplasmic or membrane-bound regulatory proteins or membrane-localized ERs. 
There are also ligand-independent pathways where ER activity can be regulated through activation of several different signal transduction pathways such as extracellular signal-regulated kinases (ERKs) included in the mitogen-activated protein kinase (MAPK) pathway. 16 Nongenomic effects of E2 have been demonstrated in LECs by MAPK activation and prevention of mitochondrial membrane potential collapse during oxidative stress. 17 Moreover, studies have shown estrogen-mediated protection against oxidative stress through upregulation of antioxidative enzymes including superoxide dismutases (SODs). [18][19][20] The purpose of the present study was to investigate the effects of 17β-estradiol (E2) on the activity, immunolocalization, protein and gene expression of the different SOD isoenzymes, SOD-1, SOD-2, and SOD-3, as well as the effects on protein expression and immunolocalization of ERα and ERβ in primary cultured human lens epithelial cells (HLECs). Materials and methods Human lens epithelial cell culture Capsulorhexis specimens from patients undergoing cataract surgery were obtained, and primary cell cultures of HLECs were essentially cultured as previously described. 21 Capsulorhexis specimens and eventually HLECs were all cultured in a humidified CO 2 -incubator using Eagle's minimum essential medium (MEM) with phenol red supplemented with 100 U/ml penicillin, 100 µg/ ml streptomycin, 2 mM L-glutamine, 2.5 µg/ml amphotericin B (Sigma-Aldrich, St Louis, MO, USA), and 10% fetal bovine serum (FBS) (Thermo Fisher Scientific, Rockford, IL, USA). In all experiments, three or more different primary cell cultures of HLECs derived from separate individuals were used. Each cell culture was grown in monolayers, and passages between IV and XV were used. Prior to each experiment, cells were washed with Dulbecco's phosphate buffered saline (PBS) without calcium and magnesium (Thermo Fisher Scientific, Rockford, IL, USA), after which the medium was changed to MEM without phenol red (Gibco, Paisley, Scotland, UK) and 5% FBS for 22-24 h before exposure to 17β-estradiol (E2) in serum free medium. Stock solution of E2 (10 mM) was prepared in 99.5% ethanol (Sigma-Aldrich, St Louis, MO, USA). HLECs were incubated in triplicates with E2 (0.1 µM and 1 µM) for 1.5 h or 24 h. Control cells were used in all experiments and were incubated simultaneously, in an ethanol concentration equivalent to the highest E2 concentration, for 1.5 h or 24 h. Cells were cultured in 6-well culture dishes (TPP, Trasadingen, Switzerland) and collected with cell scrapers before further analyses in all experiments except for visualization with immunofluorescence, then cells were cultured in 8-well chamber slides (Lab-Tek, Nalge Nunc International, Rochester, NY, USA). The Regional Research Ethics Committee in Gothenburg approved the study, and the tenets of the Declaration of Helsinki were followed. Gene expression analysis After exposure to E2, the cells were collected and pellets were used for extraction of total RNA, performed on the Maxwell 16 Instrument (Promega Corporation, Madison, WI, USA) according to the manufacturer's protocol. The quality and integrity of RNA was determined in all samples with Agilent R6K ScreenTape on the Agilent 2200 TapeStation (Agilent Technologies Waldbronn, Germany). All samples had RNA integrity number (RIN) >8 and 28S/18S ratio >2, showing high quality and integrity of the total RNA extracted from HLECs. 
RNA concentration was measured on NanoDrop 1000 (Thermo Fisher Scientific, Rockford, IL, USA) and Infinite M200 PRO NanoQuant Plate (Tecan Group Ltd., Männedorf, Switzerland). cDNA synthesis was performed with reverse transcription polymerase chain reaction (RT-PCR) and the SuperScript VILO cDNA Synthesis Kit (Invitrogen, Carlsbad, CA, USA) using 0.6 µg of total RNA. Real-time quantitative polymerase chain reaction (qPCR) was performed using 2 µl cDNA (10 ng/µl) in a final volume of 10 µl with TaqMan Gene Expression Master Mix and TaqMan Gene Expression Assays specific for the genes studied: SOD1 (Hs00533490_m1), SOD2 (Hs00167309_m1) and SOD3 (Hs00162090_m1). Eight reference genes were tested, and ultimately the relative gene expression data were normalized to the reference genes RPLP0 (Hs99999902_m1) and PPIA (Hs99999904_m1). Each reaction was performed in triplicate on 384-well plates on the ABI 7900HT (Applied Biosystems, Foster City, CA, USA). Protein expression analysis HLECs exposed to E2 were rinsed in ice-cold PBS, followed by lysis in modified NuPage 0.5% lithium dodecyl sulfate (LDS) sample buffer (Novex, Life Technologies, Carlsbad, CA, USA). The cell lysates were heated at 70°C for 10 min and sonicated for 20 s at 50% amplitude (Branson Ultrasonic Corporation, Danbury, CT, USA). All cell and lysate handling was performed on ice. Immediately before gel loading, the reducing agent (DTT; dithiothreitol) was added to a final concentration of 50 mM. Triplicates of the samples were loaded on NuPage 4-12% Bis-Tris gradient minigels using NuPage MES or MOPS SDS running buffer and the Novex Sharp Pre-Stained Protein Standard (Novex, Life Technologies, Carlsbad, CA, USA). After electrophoresis, the proteins were transferred to nitrocellulose membranes, followed by blocking in 5% nonfat milk powder in PBS overnight at +4°C. Primary antibodies used for Western blotting included polyclonal rabbit anti-ERα (H-184; 1:50), ERβ (H-150; 1:50), and SOD-1 (FL-154; 1:500), as well as monoclonal mouse anti-SOD-2 (B-1; 1:500) and β-actin (C4; 1:500) (Santa Cruz Biotechnology, Dallas, TX, USA). Primary antibody binding was detected with the corresponding secondary antibodies conjugated to horseradish peroxidase (Santa Cruz Biotechnology, Dallas, TX, USA). Protein expression bands were visualized with Luminata Forte Western HRP Substrate (Millipore Corporation, Billerica, MA, USA) in ImageQuant LAS 500 (GE Healthcare, Piscataway, NJ, USA), followed by densitometric analysis using ImageJ software version 1.37 (National Institutes of Health, USA). β-actin was used for normalization of densitometric data, and results were expressed as area under the curve (AUC). Immunocytochemistry After E2 exposure, the cells were rinsed in PBS and fixed in 4% paraformaldehyde (pH 7.4). The cells were rinsed again and permeabilized with 0.25% Triton-X in PBS for 10 min at room temperature (Sigma-Aldrich, St Louis, MO, USA). Following standard protocols for immunocytochemistry, the cells were labeled with antibodies (the same as used for Western blotting) against ERα (1:50), ERβ (1:50), SOD-1 (1:50), and SOD-2 (1:50) and visualized with Alexa Fluor 488 Goat Anti-Rabbit or Anti-Mouse IgG (H + L) antibodies (1:200) (Molecular Probes, Eugene, OR, USA). Nuclear morphology was viewed using cell-permeable Hoechst 33342 (Sigma-Aldrich, St Louis, MO, USA) at a final concentration of 10 µg/ml.
Prior to fixation, cells were incubated with MitoTracker Deep Red FM (Molecular Probes, Eugene, OR, USA), which was used for mitochondrial localization at a final concentration of 500 nM. The cells were viewed using a fluorescence microscope (Nikon Eclipse TE300; Nikon, Tokyo, Japan). Total superoxide dismutase activity HLECs exposed to E2 were rinsed with PBS, and the cell pellets were sonicated, after which SOD activity was measured using the Superoxide Dismutase Assay kit according to the manufacturer's protocol (Cayman Chemical Company, Ann Arbor, MI, USA). Absorbance was measured at 440 nm on the microplate reader Infinite M200 PRO (Tecan Group Ltd., Männedorf, Switzerland). The SOD assay uses tetrazolium salt to detect superoxide radicals generated by xanthine oxidase and hypoxanthine. One unit (U) of SOD is defined as the amount of enzyme needed to exhibit 50% dismutation of the superoxide radical. The SOD assay measured total SOD activity (U/ml) in whole cell lysate, and protein concentration was determined with BCA protein assay reagent (Pierce, Perbio Science, Cheshire, UK) using bovine serum albumin as standard. Total SOD activity levels (U/ml) were related to cell protein levels (mg/ml) and expressed as U/mg. Statistical analyses Total SOD activity and Western blot experiments were repeated at least once with similar results, and data from triplicate samples (n = 3) are shown in figures as mean ± SD after analysis with one-way ANOVA with Dunnett's post hoc test. Relative gene expression data were normalized to the reference genes, RPLP0 and PPIA, and compared to the expression in control cells according to the 2−ΔΔCt method. 22 Data were analyzed using a linear mixed model and expressed as fold change. Statistical analyses were performed using IBM SPSS Statistics version 21 (IBM Corp., Armonk, NY, USA), and p-values ≤0.05 were considered statistically significant. Gene expression of superoxide dismutases After normalization to the reference genes, no significant changes in SOD1, SOD2, or SOD3 gene expression were seen after 1.5 h or after 24 h exposure to 0.1 µM and 1 µM E2, when compared to the expression in control cells (Figure 1). SOD3 gene expression was generally lower compared to SOD1 and SOD2 (qPCR amplification curves not shown). Protein expression of superoxide dismutases A slight increase in both SOD-1 and SOD-2 protein expression was seen at 0.1 µM after 1.5 h, and elevated SOD-2 levels were also seen at 1 µM E2 after 24 h. However, these results were not statistically significant compared to the control cells (Figure 2). Immunolocalization of superoxide dismutases Strong immunolabeling of SOD-1 was seen in the cytosol and nucleus, in contrast to SOD-2, for which mitochondrial localization dominated. No subcellular redistribution of SOD-1 or SOD-2 was seen with E2 exposure (Figure 3). Superoxide dismutase activity A significant increase in total SOD activity was seen in whole cell lysate from HLECs after exposure to 0.1 µM and 1 µM E2 for 1.5 h. By 24 h, however, the SOD activity was back to baseline values (Figure 4). Protein expression and immunolocalization of estrogen receptors Significantly decreased ERα expression was detected in cells exposed to the higher (1 µM) E2 concentration after 1.5 h as well as after 24 h exposure; whereas, significantly increased ERβ protein expression was seen at both 0.1 µM and 1 µM E2 after 1.5 h exposure and at 1 µM after 24 h, as compared to control cells (Figure 5).
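The relative gene expression values reported above were obtained with the 2−ΔΔCt normalization cited in the statistical analysis section. The sketch below is a minimal illustration of that calculation, not the authors' analysis: the Ct values are hypothetical placeholders, and averaging the two reference genes before normalization is an assumed (common) convention that may differ from the exact handling used in the study.

```python
# Minimal sketch (hypothetical Ct values): relative expression by the
# 2^-delta-delta-Ct method, normalizing a target gene to reference genes
# and to the expression in control cells.
import numpy as np

def fold_change(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """2^-ddCt fold change of a target gene in a treated sample vs control cells."""
    d_ct_treated = ct_target - np.mean(ct_refs)            # dCt, treated sample
    d_ct_control = ct_target_ctrl - np.mean(ct_refs_ctrl)  # dCt, control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a target gene in E2-treated vs control cells, normalized to two reference genes
fc = fold_change(ct_target=24.8, ct_refs=[19.1, 20.3],
                 ct_target_ctrl=25.1, ct_refs_ctrl=[19.0, 20.2])
print(f"Fold change vs control: {fc:.2f}")
```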
Both ERα and ERβ were present in the nucleus, and mitochondrial localization of ERβ was evident by colocalization with MitoTracker. The immunolabeling of ERβ increased slightly after exposure to 1 µM E2 (Figure 6). Discussion Reactive oxygen species (ROS) can induce oxidative modifications to lens proteins, lens fiber membranes, and DNA, thereby contributing to cataract formation. 23 The SOD isoenzymes are part of the antioxidative defense in the lens and catalyze the dismutation of superoxide into hydrogen peroxide, which is further processed by catalase and glutathione peroxidase (GPx). 24 In humans, SOD-1 is primarily found in the cytosol and the nucleus, and the predominant SOD isoenzyme in mitochondria is SOD-2, while SOD-3 on the other hand is secreted and found in the extracellular matrix of tissues. 25-27 The crystalline lens is largely built up of tightly stacked lens fibers containing cytoplasm devoid of organelles, where SOD-1 is the predominant isoenzyme. 28 In the lens epithelium and the superficial lens fibers (the only parts of the lens that contain mitochondria), both SOD-1 and SOD-2 are found; whereas, SOD-3 is secreted extracellularly and found in the cell culture medium when lens epithelial cells are cultured. Thus, only SOD-1 and SOD-2 were studied with immunocytochemistry, and immunolabeling of SOD-1 was evident both in the nucleus and cytosol, in contrast to SOD-2, which was localized mainly to the mitochondria. It has previously also been shown with immunocytochemistry that SOD-1 is widely distributed both in the nucleus and cytosol of human cells. 29 As expected, we observed lower gene expression levels of SOD3 in HLECs as compared to SOD1 and SOD2 levels. However, the relative gene expression data did not show any effects of E2 on gene expression of SOD1, SOD2, or SOD3 compared to control cells. The total SOD activity measured included all three SODs, and increased activity levels were seen after 1.5 h exposure to E2, although this effect did not correlate with protein or gene expression of the different SODs. These results are in accordance with Gottipati et al., who showed a significant increase in SOD-2 activity levels in transformed lens epithelial cells (HLE-B3) after exposure to E2 without any changes in either mRNA or protein expression levels. 30 In addition, studies have also reported that E2 increased SOD-2 activity levels without alteration of SOD-2 protein levels in mitochondria. 20,31 However, other studies demonstrated both upregulated gene and protein expression of SOD-2 and SOD-3 in an E2 concentration- and time-dependent manner mediated by ERs, in vascular smooth muscle endothelial cells. 18 In addition, E2 showed antioxidative effects through upregulated gene expression of GPx and SOD-2 via activation of the MAPK pathway through ERK phosphorylation. 19 The discrepancies regarding E2 effects on SOD protein and gene expression may be attributed to differences between cell lines. However, the rapid, transient increase in activity seen in HLE-B3 was explained by Gottipati et al. as not influencing mRNA or protein expression. This can also explain why we only observed an increase in activity after 1.5 h exposure in HLECs, indicating non-genomic mechanisms of E2. Even though we have observed a slightly higher rate of cell growth in capsule-epithelium specimens derived from female cataract patients, 32 no difference in results was seen between cells from different genders in our previous study of estrogen effects.
Therefore, only HLECs derived from women over 60 years of age undergoing cataract surgery were used in this study. 21 Flynn et al. have shown that estrogen protection and the distribution of ER splice variants in HLECs are gender independent, and that the estrogen-induced mitochondrial cytoprotection is wtERβ1 dependent. They also showed a difference in ERβ variant distribution and RNA expression, as well as in responsiveness to oxidative stress, between primary cultured HLECs and the transformed lens epithelial cell line, HLE-B3. 33 However, the subcellular localization of wild-type ERβ was the same for the primary cultured HLECs as for HLE-B3 and in accordance with our results: ERβ was found both in the nucleus and in mitochondria, colocalized with MitoTracker, while ERα was localized to the nucleus and cytosol and not found in mitochondria. 34,35 Under normal physiological conditions, the ERα to ERβ ratio in breast tissue is determined by the plasma E2 levels. In postmenopausal women, the dramatic drop in E2 levels leads to elevated expression of ERα, and Cheng et al. showed that ERα, and not ERβ, was downregulated when E2 levels increased. 36 Our results also showed reduced ERα expression levels as well as the reverse effect, elevated ERβ expression levels, with increased E2 concentration. This may be explained by several studies demonstrating that ERα binding to estrogen-responsive promoters is inhibited by ERβ. E2-dependent AP-1-mediated transactivation by ERα is also suppressed by ERβ, suggesting that ERβ exhibits an inhibitory effect on ERα-mediated gene expression when ERs are coexpressed. 37-39 Both estrogen receptors, ERα and ERβ, were immunolocalized in primary cultured HLECs and showed altered protein expression levels. The mitochondrial localization and elevated expression levels of ERβ after E2 exposure indicate mitochondrial involvement. Moreover, this is consistent with the suggestion that E2-induced mitochondrial cytoprotective effects are mediated through ER-dependent mechanisms in HLECs. However, no firm conclusions can be drawn from the present data, and further investigations of E2-mediated antioxidative effects are essential. We observed increased SOD activity levels after 1.5 h exposure to E2, thereby implying non-genomic mechanisms of E2, because no changes were seen in either gene or protein expression levels of SODs.
2018-04-03T03:07:55.869Z
2018-02-12T00:00:00.000
{ "year": 2018, "sha1": "c37e4ce44c0acbd583864a6d672d8bb3b137455a", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/02713683.2018.1437923?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "4d8200047e25cc20313a02a7997b95e1f5e64903", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
266616860
pes2o/s2orc
v3-fos-license
An uncommon case of metastatic undifferentiated pleomorphic soft tissue sarcoma during pregnancy: Literature review and case report Soft tissue sarcomas account for 1–2% of adult malignancies. Undifferentiated pleomorphic sarcoma (UPS) is a rare subtype that lacks immunohistochemical markers for a specific definition. About 18% of sarcomas are at a locally advanced stage, often requiring several cycles of chemotherapy and radiotherapy, in addition to surgery. For a young woman, this can mean delaying pregnancies, with a high risk of therapy-induced ovarian damage. For this reason, proper counseling on fertility preservation plays a key role. In addition, all women of childbearing age with cancer should be informed about the importance of planning a pregnancy to improve maternal and neonatal outcomes. We report a rare case of a 40-year-old woman with a UPS who, during a CT scan performed after chemotherapy to plan surgery, found out she was pregnant. After counseling, the patient decided to go ahead with the pregnancy. Introduction Cancer during pregnancy, although rare, is an important ethical and biological issue for the appropriate care of both mother and fetus. Despite being a growing public health problem, details on its epidemiology are scarce and conflicting due to a lack of publications and data [1]. The most common types of pregnancy-associated cancer (PAC) are breast cancer, melanoma, cervical cancer, lymphomas and leukemias [2]. However, the real incidence of PAC is probably underestimated because not all cases are recorded in databases: spontaneous abortions and voluntary terminations of pregnancy are often overlooked. Several studies have shown that the incidence of PAC is increasing [3,4]. Although delayed childbearing appears to be a risk factor, because the development of cancer is associated with older age, advanced diagnostic techniques and greater interaction with healthcare services during pregnancy could also be factors contributing to the increased incidence rates [4]. Eibye S. et al. estimated a rise in the incidence of pregnancy-associated cancer from 5.4% to 8.3% over a 30-year period in Denmark [5]. Similarly, Lee YY et al. showed the same increasing trend over a 14-year period, especially for mothers older than 35 years [4]. In addition, the probability of initiating and managing a successful pregnancy in metastatic cancer is scarcely reported in the literature [6]. In this article we present the case of a woman with a metastatic undifferentiated pleomorphic sarcoma (UPS) who started and carried on her pregnancy, combined with a literature review to discuss the etiology, clinical manifestations, diagnosis, treatment and prognosis of pregnancy complicated by STS. Case report A 40-year-old G2P0 woman presented at 14 + 4 weeks of gestation to the High-Risk Pregnancy Unit of Vittore Buzzi University Hospital in Milan, Italy, in November 2020. Regarding her obstetric history, in 2016 she underwent an urgent caesarean section with longitudinal hysterotomy for abruption of a placenta previa at 24 + 4 gestational weeks. The newborn, weighing 699 g, died of septic shock on day 4.
In 2018 she was diagnosed with a grade III undifferentiated pleomorphic sarcoma (UPS, WHO 2016) involving the left thigh, without a specific phenotype. No genetically transmitted diseases or malignancies were present in her family history. She underwent three cycles of chemotherapy with Adriamycin and Ifosfamide, plus radiotherapy at a dose of 50 Gy in 25 fractions with the VMAT technique, in view of surgery to remove the soft-tissue mass in the left obturator region. Surgery was performed in July 2018, and histological examination showed clear surgical resection margins. In March 2019, the patient underwent double thoracotomy for lung metastasectomy with two further cycles of chemotherapy with Adriamycin and Ifosfamide. Planned surgery for an increase in the size of the focal lobar lesion had to be postponed due to the incidental CT scan finding of an evolving pregnancy at the 9th gestational week. The probability of stochastic and genetic effects on the unborn child, given the estimated in utero dose of 6.3 mSv, was calculated as < 0.028% and < 0.0001%, respectively (ICRP publication 103). The pregnancy, monitored every fortnight, developed physiologically. At 22 + 4 weeks' gestation, MRI detected an increase in the volume of the focal apical lesion of the left lobe, so the patient underwent left lower pulmonary lobectomy: postponing the surgery to postpartum would have resulted in an increase in the volume of the lesion such that it would no longer have been surgically treatable. Nevertheless, the procedure had to be delayed two weeks later than planned due to an asymptomatic COVID-19 infection. The surgery was performed without intra- and post-operative complications. Histological examination confirmed lung metastases of undifferentiated pleomorphic spindle cell sarcoma with massive infiltration of the visceral pleura but without peribronchial lymph node involvement. Subsequently, the pregnancy was complicated by gestational diabetes at 25 weeks, which was treated with diet alone. At the beginning of the third trimester, a diagnosis of fetal growth restriction (FGR) was made, with an estimated fetal weight at the 6th percentile according to the growth curves. Weekly fetal Doppler velocimetry monitoring was always regular, but the mean pulsatility index (mPI) of both uterine arteries was always above the 95th percentile. At 34 weeks the patient was hospitalized for hypertension and was diagnosed with preeclampsia. The 24-hour proteinuria collection was negative (308 mg/L) and blood tests were normal, with no signs of organ damage. At 35 weeks and 5 days of gestation, the patient underwent an urgent caesarean section due to altered computerized cardiotocography (cCTG), reduced fetal movements, fetal growth arrest, preeclampsia and rhythmic uterine contractile activity. During the caesarean section, which was performed without any complications, both peritoneal lavage fluid and fetal blood from the umbilical artery were sent for cytological analysis to assess the presence of further metastases. A live male baby weighing 2180 g (12th centile) was delivered; APGAR scores at one minute and five minutes were 9 and 10, respectively. Arterial blood gas analysis of the umbilical cord blood showed no signs of acidosis: pH 7.29, pCO2 57.4 mmHg, base excess -1.0 mmol/L, Lac 1.44 mmol/L. The infant was transferred to the neonatal intensive care unit for monitoring due to prematurity: he had no complications and was discharged after 6 days. Breastfeeding was started, and the mother's postoperative course was uncomplicated; she was discharged with the baby.
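Returning briefly to the fetal dose estimate quoted earlier in this case: such percentages follow from multiplying the estimated in utero dose by a nominal risk coefficient. The sketch below uses the ICRP 103 whole-population nominal coefficients purely to illustrate the arithmetic; these coefficients are assumptions for illustration and do not reproduce the case's exact figures, which were derived with coefficients specific to in utero exposure.

```python
# Illustrative sketch only: risk ~= dose (Sv) x nominal risk coefficient (per Sv).
# Coefficients are the ICRP 103 whole-population nominal values (cancer ~5.5e-2/Sv,
# heritable effects ~0.2e-2/Sv); they are stand-ins and differ from the in
# utero-specific coefficients used for the figures reported in the case.
dose_sv = 6.3e-3                     # estimated in utero dose, 6.3 mSv

cancer_coeff_per_sv = 5.5e-2         # ICRP 103 nominal, whole population (assumption)
heritable_coeff_per_sv = 0.2e-2      # ICRP 103 nominal, whole population (assumption)

print(f"Illustrative stochastic (cancer) risk: {dose_sv * cancer_coeff_per_sv:.4%}")
print(f"Illustrative heritable-effect risk:    {dose_sv * heritable_coeff_per_sv:.5%}")
```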
Cytologic examination of the peritoneal washings performed before and after opening the uterine cavity did not show any malignant tumor cells. Cytologic examination of umbilical cord blood with "buffy coat" preparation was negative for malignant tumor cells [7]. Histological placental examination did not reveal the presence of neoplastic lesions. The histological and cytological reports of mother, foetus and placenta are shown in Table 1. The placenta showed areas of maternal vascular malperfusion: distal villous hypoplasia with a huge intervillous space (Fig. 1) and collapsed ghost villi diagnostic of an old infarct (Fig. 2), consistent with FGR. Although the extent of the ischemic areas is not in itself sufficient for a diagnosis of maternal vascular malperfusion, the combination of areas of old infarction and distal villous hypoplasia is suggestive of maternal vascular malperfusion. The patient died 18 months after the caesarean delivery due to complications from metastatic disease. Soft tissue sarcomas (STS) STS are tumors arising from muscles, tendons, synovial, adipose and connective tissues that affect all ages and genders. They represent 1-2% of adult malignancies [8]. About 16% of sarcomas are found at a locally advanced stage, with a 5-year relative survival rate of 17% and a median survival close to 18 months. The main site of metastasis is the lung [9,10]. Malignant fibrous histiocytoma (MFH) has been permanently removed from the 2013 World Health Organization (WHO) classification of soft tissue tumors and reclassified as undifferentiated pleomorphic sarcoma (UPS) [14]. Pregnancies associated with soft tissue sarcomas (STS) are rare [15], and data on pregnancies in women with histologically documented metastatic sarcomas are limited. Yazigi A et al. reported four cases of women with metastatic sarcomas who carried on their pregnancies after stopping systemic cancer treatment, with good maternal-fetal outcomes and prolonged maternal survival. Three histotypes were involved in the report: epithelioid hemangioendothelioma, low-grade fibromyxoid sarcoma and GIST. The only case of MFH in pregnancy reported in the literature concerns a 38-year-old woman who received the inauspicious diagnosis in the immediate postnatal period and died after 3 months [13]. Diagnosis of STS In most cases, the diagnosis of STS is already known at the time of pregnancy. A recent case series conducted in Toronto reported 48 women diagnosed with STS during a 10-year period: only 10 patients (20.8%) were diagnosed with cancer during pregnancy [12]. The guiding symptom is pain, and the main sign is a mass localized mainly to the abdomino-pelvic region and to the upper and lower limbs. Sometimes hemorrhagic syndromes are present due to involvement of large retroperitoneal vessels [12]. Diagnostic imaging methods include both CT scan and magnetic resonance imaging (MRI). Ionizing diagnostic imaging during pregnancy should be avoided or used only when essential for the management of the pregnancy [16]. Nonionizing imaging procedures such as ultrasound and MRI are safe during pregnancy, except for the use of gadolinium [18]. STS diagnosis with biopsy to define the histological subtype is difficult, with pathologists' discordance rates sometimes reaching 30% [11].
Surgery Surgery is the standard treatment in cases of localized adult-type STS [19]. The American College of Obstetricians and Gynecologists' Committee on Obstetric Practice has stated that there are no data to make specific recommendations for non-obstetric surgery during pregnancy. However, at any gestational age, teratogenic effects have never been demonstrated for anesthetic agents when used in standard concentrations, nor is there evidence that fetal exposure to anesthetic drugs affects neurodevelopment. Therefore, a pregnant woman should never be denied necessary and unpostponable surgery, regardless of trimester. It is reasonable to state that the first part of the second trimester should be preferred, to limit the risk of miscarriage. Laparoscopy appears to be safer than laparotomy if the surgery is performed by experienced surgeons, also because it allows better visualization of the abdominal cavity [20]. Post-operative radiotherapy ESMO-EURACAN Clinical Practice Guidelines recommend postoperative radiotherapy as the standard treatment in tumors with unfavorable prognostic factors such as high grade, tissue invasion or tumor diameter > 5 cm [10]. In clinical practice, it is usual to postpone treatment to the postpartum period to avoid fetal harm, unless there is an urgent clinical need and the irradiation site is sufficiently distant from the uterus [21,22]. Several adverse effects have been reported for the fetus after gestational radiotherapy, including intrauterine growth restriction (IUGR), risk of childhood cancer (solid tumor and/or leukemia) and subaverage intellectual functioning. The severity of adverse effects depends on the extent of the irradiation field, the time of radiation exposure and the gestation period [23]. Chemotherapy Regarding chemotherapy, its role remains controversial in both neoadjuvant and adjuvant settings: in case of metastatic disease, surgical treatment is recommended as the first choice for lung disease with a limited number of metastases and without other extrapulmonary localizations [10]. If the patient undergoes chemotherapy, either preoperatively or postoperatively, anthracycline and/or ifosfamide seem to be the most appropriate choice. Miller et al. conducted a multi-institutional retrospective study of 13 patients to evaluate the administration of anthracyclines and/or ifosfamide in pregnancy-associated sarcoma. They found a lower rate of live births in patients receiving a combination of doxorubicin and ifosfamide during pregnancy (5/9, 55.6%) compared to patients treated with anthracycline-based regimens without ifosfamide (4/4, 100%). In addition, they showed that combination therapies with doxorubicin and ifosfamide may carry higher risks of fetal harm when given early in the second trimester as compared with later in pregnancy [24]. In case of inoperable metastatic disease, chemotherapy is palliative and does not affect survival. Metastasectomies In case of metastatic disease, surgical treatment is recommended as the first choice for lung disease with a limited number of metastases and without other extrapulmonary localizations. There is consensus on repeating metastasectomies in case of disease relapse, always respecting the above-mentioned criteria of radicality and patient selection [11]. Management of pregnancy complicated by STS The aim is to carry the pregnancy to term. However, if the disease is severe and requires immediate intervention, early termination of pregnancy may be advised, especially in the first trimester [25].
Ultrasound monitoring of fetal growth every two weeks is strongly recommended: in the maternal cancer population, poor general health, malnutrition and chemotherapy or radiotherapy (if any) are risk factors for intrauterine growth restriction (IUGR) [26,27]. Delivery is usually planned. Iatrogenic preterm delivery should be avoided in order not to incur the long-term comorbidities of preterm infants: from 37 weeks of pregnancy, delivery can be considered unless there are major life-threatening complications for either mother or fetus [28]. Regarding delivery, several studies show that vaginal delivery could be the first choice unless there are contraindications. Nevertheless, other studies show an increased percentage (30%) of caesarean sections [4], mainly due to psychophysical stress and tumor mass effect with limited joint mobility, especially in STS [12]. Placental histology is recommended to assess the risk of fetal metastases, especially in patients with metastatic tumors. Metastases, if present, are usually found in the intervillous space. If metastases are present, they should also be investigated in the newborn by clinical examination and initially by ultrasound. Although there is limited evidence, it seems that the transfer of mother-to-fetus metastases only occurs if metastases are found at the villous level [26]. Discussion STS represent less than 1% of all tumors [29], and UPS is a rare subtype that lacks immunohistochemical markers for a specific definition. To our knowledge, ours is the first case reported in the literature of a pregnancy that occurred and was successfully carried to term in a woman diagnosed with metastatic UPS. The rarity of this case is tied not only to the uncommon type of tumor, but also to the low probability of the patient becoming pregnant given her clinical history. The first finding in the literature on the effects of chemotherapy drugs on the female reproductive system dates back to the 1970s and is related to cyclophosphamide therapy, which was linked to amenorrhea and follicular destruction [30]. As chemotherapy treatments usually involve combinations of several drugs, it is not easy to understand the effects of a single drug on the female reproductive system, but it is certain that the most severe long-term outcome of exposure to cytotoxic drug treatment is infertility due to premature ovarian failure (POF) or insufficiency (POI) [31]. Radiation treatment, on the other hand, can damage the uterine vasculature and the structure of the endometrium and myometrium. Although it is unclear whether this is a consequence of ovarian damage or the result of direct damage to the uterus, fertility may decline [32]. Considering all the cancer treatment protocols the patient underwent, without any methods to prevent or reduce the ovarian damage induced by chemotherapy and radiotherapy, the probability of pregnancy was very low. Given the relationship between chemo-radiotherapy, miscarriage, impaired organogenesis and unplanned pregnancy, the birth of a child without anatomical defects is to be considered an extraordinary event. It cannot be ruled out that IUGR was a consequence of the cancer treatment the woman underwent in previous years: damaged uterine tissue could alter placentation and trophoblastic invasion. Consequently, as is well known, aberrant spiral artery remodeling leads to altered maternal-fetal blood flow and thus fetal growth restriction [33].
The patient's placenta was histologically analyzed: villous immaturity suggested decreased utero-placental blood flow, which probably contributed to IUGR. The issue of sarcoma growth acceleration during pregnancy is debated in the literature. In our case, cancer progression was evident during pregnancy. However, it is still unclear whether this progression was due to pregnancy-related factors, when compared with similar tumors in women who are not pregnant, or whether it was a direct consequence of stopping chemotherapy or radiotherapy. In previous reports of STS during pregnancy, tumor enhancement was evident in several cases [34,35]. Since a general treatment strategy for pregnant women with sarcoma cannot be outlined because it is a rare condition, each case should be discussed by a multidisciplinary team and the diagnostic and therapeutic approaches should be tailored for every woman. Conclusions The rarity of this clinical case lays the groundwork for discussing the important ethical dilemma of cancer in pregnancy and, in particular, the complexity of managing pregnancy-associated STS. The poor prognosis of UPS and its tailored therapies seemed to be incompatible with this pregnancy which, contrary to expectations, was carried through and ended without any severe maternal-fetal complications. This unplanned pregnancy placed the patient in front of two options: termination of pregnancy or delaying cancer treatment until fetal viability was reached. After meticulous counseling of the couple, the patient chose to go through with the pregnancy. It is very hard for a woman to make the best choice, and for the physician to find the best approach to guide the patient in her decisions. A multidisciplinary approach involving oncologists, obstetricians, neonatologists, and other specialists is essential to navigate the intricate decisions required to balance maternal oncological care with fetal wellbeing. Table 1 Histological and cytological examination report of mother, foetus and placenta.
2023-12-30T16:12:57.869Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "854d92bd39ea10cf1541540c808d371c1fc6611f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.eurox.2023.100278", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "23b5bc7aa5f9ed8034f192d9bed97456880476ce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
234636676
pes2o/s2orc
v3-fos-license
Characterization of the skin and gill microbiomes in farmed seabass and seabream across different age groups Background Important changes in microbiome composition related to sexual maturation have been already reported in the gut of several vertebrates including mammals, amphibians and fish. Such changes in fish are linked to reproduction and growth during developmental stages, diet transitions and critical life events. We used amplicon (16S rRNA) high-throughput sequencing to characterize the skin and gill bacterial microbiomes of farmed seabass and seabream belonging to three different developmental age groups: early and late juveniles and mature adults. We also assessed the impact of the surrounding estuarine water microbiome in shaping the fish skin and gill microbiomes. Results Microbiome diversity, composition and potential metabolic functions varied across fish maturity stages. Alpha-diversity in the seabass microbiome varied significantly between age groups and was higher in older fish. Conversely, in the seabream, no significant differences were found in alpha-diversity between age groups, although it was higher in the skin of juveniles. Microbiome structure varied significantly across age groups. Different bacterial metabolic pathways were predicted to be enriched in the microbiomes of both species. Finally, we found that the water microbiome is significantly distinct from all the fish microbiomes across the studied age groups, although a high percentage of ASVs is shared with the skin and gill microbiomes. Conclusions We report important microbial differences in composition and potential functionality across the different ages of farmed seabass and seabream. These differences may be related to somatic growth and the onset of sexual maturation. Importantly, some of the inferred metabolic pathways could enhance the host coping mechanisms during stressful conditions. Our results provide new evidence suggesting that growth and sexual maturation have an important role in shaping the external mucosa microbiomes of fish and highlight the importance of considering different life stages in microbiome studies.
Background Research on animal microbial communities (microbiomes) is growing exponentially as the link between microbiome and host health is strongly validated by emerging evidence [1-8]. Age-related fluctuations in microbiomes are well studied in humans and are considered as "natural, inevitable and benign" [9]. Critical microbial changes occur during infancy and old age, coinciding with stages when the immune system is also more fragile [9]. Results linking changes in the gut microbiome to reproduction and growth (e.g. monkeys, [10]) or disease resistance in early life stages (e.g. amphibians, [11]) have also been found in other vertebrates. Ecological factors, such as diet transitions [e.g. 21] or critical life events (e.g. habitat transition, [20]), which in turn are intrinsically linked to sexual maturation, also play a major role in shaping the fish gut microbiome. Importantly, most studies testing the role of age on fish microbiomes were cross-sectional and based on a single time point or short time window [e.g. 16,18,19,21,22]. Thus, given the high susceptibility of the fish microbiome to environmental changes and the high inter-individual microbiome variability [e.g. 28-30], the compound effect of all these factors can be hard to interpret [31]. Fish skin and gills and their associated mucus and microbes form a natural physical and chemical barrier to pathogens [4,32,33]. Despite this protective role, little is known about potential host developmental effects on skin and gill microbiomes. Filling this knowledge gap is especially important in fish farming, where diseases are a main concern causing large mortality rates [e.g. 34]. Two previous studies in wild reef fish comparing the gill [14] and skin [35] microbiomes of juvenile and mature adult fish from several species showed a general pattern of differentiation between life stages, with the differences being attributed to intraspecific niche partitioning [14,35]. Additionally, increases in body weight were seen to be associated with an increase in the microbiome structure (i.e. beta-diversity) of the skin and gill microbiomes of wild rabbitfish [36].
The European seabass (Dicentrarchus labrax) and the gilthead seabream (Sparus aurata) are two of the most important farmed fish in Europe (global production of 191,003 and 185,980 tonnes, respectively, in 2016, [37]). The gilthead seabream is a protandric hermaphrodite, maturing first as males between years 1 and 2, with sex reversal occurring in the following 2-3 years [38-40]. The European seabass reaches sexual maturity between years 2 and 3 in males, and after year 3 in females [41-43]. Typically, in semi-extensive production systems, both fish are reared until they reach their first commercial size (18-24 months). However, demand for larger fish sizes has been increasing, meaning that both species can reach sexual maturation before harvest. Here we used amplicon (16S rRNA) high-throughput sequencing to characterize, over six months, the skin and gill bacterial microbiomes of farmed seabass and seabream of different ages (juvenile stages and mature adults). Our main aim was to describe differences in composition, structure and potential metabolic functions. Additionally, we investigated the impact of the microbial communities present in the water column on the skin and gill microbiomes. Results Skin, gill and water microbial samples from the different age groups of both species were collected simultaneously (same day) from separate ponds. Three age cohorts were sampled for the seabass, which included fish in their 1st, 2nd and 3rd year of age, and two for the seabream, which included fish in their 2nd and 3rd year of age. Due to non-invasive sampling, we coupled available information from the literature [38-43] with data provided by the fish farmers about the weight and age of maturation of both species to classify samples into age groups. The three seabass age cohorts were then classified as early juveniles, late juveniles and mature adults, respectively, while the two seabream age cohorts were classified as juveniles and mature adults, respectively (see the Materials and Methods section for more details). Differences in the average weight estimated for each age group at the beginning and end of our sampling indicate a 245% growth for the seabass early juvenile group, an 83% growth for the late juvenile group and a 43% growth for the mature adult group. For the seabream, a 143% growth was estimated for juveniles and a 16% growth for the mature adults. Descriptive analyses were performed for each age group separately and comparative statistical analyses were performed between groups. Microbiome diversity across age groups Alpha-diversity. Microbial alpha-diversity was calculated using Shannon, Faith's phylogenetic diversity (PD), ACE and Fisher indices. The skin microbiome showed higher alpha-diversity than the gill microbiome across all age groups in both fish species (Additional file 1). In seabass, the skin and gill microbiomes of late juvenile and mature adult fish presented higher alpha-diversity than the microbiomes of the early juveniles (Fig. 1A, Additional file 2). In seabream, the skin microbiome of juveniles presented higher alpha-diversity than the microbiome of mature fish, while the gill microbiome showed similar diversity in both cohorts (Fig. 1B, Additional file 2). Linear Mixed Effects (LME) model analysis (diversity ~ age group + (1|sampling date)) showed that most alpha-diversity estimates varied significantly between seabass age groups in both tissues.
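As a concrete illustration of the alpha-diversity modelling just described, the sketch below computes a Shannon index per sample from an ASV count table and fits a linear mixed-effects model with age group as a fixed effect and sampling date as a random intercept, mirroring the diversity ~ age group + (1|sampling date) formula. This is a minimal sketch with simulated placeholder counts, not the authors' pipeline, and the statsmodels mixed-model API stands in for whatever software was originally used.

```python
# Minimal sketch (simulated placeholder data, not the authors' pipeline):
# per-sample Shannon diversity followed by a linear mixed-effects model with
# age group as fixed effect and sampling date as a random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def shannon(counts):
    """Shannon diversity (natural log) from a vector of ASV counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
rows = []
for _ in range(30):
    counts = rng.integers(0, 200, size=100).astype(float)   # one sample x 100 ASVs
    rows.append({
        "shannon": shannon(counts),
        "age_group": rng.choice(["early_juvenile", "late_juvenile", "adult"]),
        "sampling_date": rng.choice(["t1", "t2", "t3"]),
    })
df = pd.DataFrame(rows)

# diversity ~ age group (fixed effect), sampling date (random intercept)
model = smf.mixedlm("shannon ~ C(age_group)", data=df, groups=df["sampling_date"])
print(model.fit().summary())
```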
Pairwise comparisons between age groups in seabass showed significant differences in alpha-diversity for almost all of the early vs late juvenile and early juvenile vs mature adult comparisons (p < 0.05, Table 1), while late juvenile vs mature adult comparisons were never significant (p > 0.05, Table 1) in both tissues. In the seabream, only the Shannon and PD indices of the gill microbiomes varied significantly between juveniles and mature adults (p < 0.04, Table 1). Table 1 Mean alpha-diversity values, and alpha- and beta-diversity comparisons for the skin and gill microbiomes of the different age groups of seabass Dicentrarchus labrax and seabream Sparus aurata. Variation in alpha-diversity was assessed using Linear Mixed Effect models, with age group as a fixed factor and sampling time as a random factor. Differences in beta-diversity were assessed using PERMANOVA. For each linear model effect test (alpha-diversity) we report the F statistic and significance (P value), and for each PERMANOVA test (beta-diversity) we report the R2 statistic and significance (P value). Beta-diversity. Microbial structure was estimated using phylogenetic UniFrac (unweighted and weighted) and Bray-Curtis distances. The PERMANOVA analyses of dissimilarities (diversity ~ age group, strata = sampling date) showed significant differences between the age groups of both species (p < 0.02, Table 1), except for the weighted UniFrac distance between the gills of early and late seabass juveniles (p = 0.1, Table 1), seabass late juveniles and mature adults (p = 0.2, Table 1), and the skin of juvenile and mature adult seabream (p = 0.3, Table 1). Principal Coordinate Analyses (PCoAs) were used to visualize microbial structure (dissimilarity) and depicted the differences between early and late juvenile/mature seabass groups in both tissues (Bray-Curtis distance, Fig. 2). For the seabream, however, differences between age groups were not evident (Fig. 2). Bacterial taxa. Proteobacteria and Bacteroidetes were the most abundant (≥ 5%) phyla in the skin (averaging 41 ± 4% and 39 ± 2% of the sequences in seabass and 55 ± 4% and 31 ± 4% in seabream) and gill (averaging 52 ± 7% and 25 ± 5% in seabass and 69 ± 4% and 12 ± 1% in seabream) microbiomes of all studied age groups (Table 2). The NS3a marine group and a genus belonging to the Flavobacteriaceae family were the most abundant (≥ 5%) genera in the skin (10 ± 1 and 11 ± 2, respectively) and gill (6 ± 1 for both) of all the age groups in seabass, while Burkholderia-Caballeronia-Paraburkholderia was the most abundant genus in the skin (17 ± 1) and gill (25 ± 0) of both age groups in seabream (Table 2). The most abundant microbial phyla and genera found in both fish species varied between age groups and tissues (Fig. 3, Table 2). LME models showed that the relative abundance of all those phyla was significantly different between age groups, except in the gill microbiome of the seabream, where the relative abundance of Cyanobacteria did not vary (Additional file 3). LME analyses also revealed that 100% and 63% of the genera varied in the skin and gill of the seabass, respectively, while 40% and 50% varied in the skin and gill of the seabream, respectively (Additional file 3).
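The beta-diversity testing described earlier in this section can be sketched as follows: a Bray-Curtis distance matrix is built from the ASV count table and PERMANOVA is run on the age-group labels. The example below assumes the scikit-bio package is installed and uses simulated placeholder counts; it is not the authors' pipeline, and the restriction of permutations by sampling date (the strata term above) is not reproduced here.

```python
# Minimal sketch (simulated placeholder data; assumes scikit-bio is installed):
# Bray-Curtis distances from an ASV count table, then PERMANOVA across age groups.
import numpy as np
from skbio.diversity import beta_diversity
from skbio.stats.distance import permanova

rng = np.random.default_rng(7)
n_samples, n_asvs = 24, 150
counts = rng.integers(0, 300, size=(n_samples, n_asvs))
sample_ids = [f"s{i}" for i in range(n_samples)]
age_group = ["early_juvenile"] * 8 + ["late_juvenile"] * 8 + ["adult"] * 8

bc_dm = beta_diversity("braycurtis", counts, ids=sample_ids)   # DistanceMatrix
result = permanova(bc_dm, grouping=age_group, permutations=999)
print(result)   # pseudo-F statistic and permutation p-value
```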
Pairwise comparisons of taxa across age groups in seabass yielded a higher percentage of significant differences between early juveniles and mature adults in both tissues (100% in the skin and 38% in the gill) than between early and late juveniles (67% in the skin and 13% in the gill), or between late juveniles and mature adults (0% in the skin and 25% in the gill) (Additional file 3). Table 2 Relative mean proportions (%) of the most abundant phyla and genera (≥ 5%) in the skin and gill microbiomes of the different age groups of the seabass Dicentrarchus labrax and the seabream Sparus aurata, and in the water column. Taxa with a ≥ 5% relative mean proportion in a group are indicated in bold. Unknown genera are identified as u.g. Microbial predicted functional diversity across age groups About 462 ± 18 KEGG pathways were inferred in the skin and gill microbiomes of the seabass, while 455 ± 4 pathways were inferred in the skin and gill microbiomes of the seabream. Linear discriminant analysis of the metagenomic predictions performed in LEfSe showed that different pathways were significantly enriched for each species and for each age group in both species (Fig. 4, Additional file 4). While there were no significantly enriched pathways in the skin of early juvenile seabass, enriched pathways in the gills of this age group were related to metabolic regulator biosynthesis, purine nucleotide degradation, sugar degradation and fermentation of pyruvate. In the skin of late juvenile seabass, enriched pathways were related to thiamine biosynthesis, aldehyde degradation and L-arabinose degradation, while in the gills they were related to denitrification, galactose degradation and nitrogen compound metabolism. In mature seabass, pyrimidine and purine deoxyribonucleotide de novo biosynthesis were enriched in both tissues. Additionally, the gills were also enriched in pathways related to the biosynthesis of chlorophyll, folate, hemiterpene, L-alanine, L-tyrosine, NAD, secondary metabolites and ubiquinol, chloroaromatic compound degradation, fermentation to lactate and glycolysis (Fig. 4, Additional file 4). In the skin of seabream juveniles, enriched pathways were related to amine and polyamine biosynthesis and degradation, choline biosynthesis, and sugar acid and toluene degradation, whereas in the gill only pyrimidine and purine deoxyribonucleotide de novo biosynthesis were identified. The enriched pathways of the seabream mature adults were related to fatty acid, L-methionine, NAD, palmitate, palmitoleate, siderophore, stearate and unsaturated fatty acid biosynthesis, pyrimidine and purine nucleotide salvage, the aspartate superpathway and the TCA cycle in the skin, whereas pyrimidine and purine deoxyribonucleotide de novo biosynthesis, autotrophic CO2 fixation and fermentation of pyruvate were enriched in the gill (Fig. 4, Additional file 4). Fish and water microbiome comparisons The microbiome of fishpond water showed higher alpha-diversity than the skin and gill microbiomes of seabass and seabream, except when compared to the Shannon index estimated for the seabass late juveniles (Additional file 1). The analyses of dissimilarities between the skin and gill microbiomes and the water microbiome were statistically significant for all pairwise comparisons (PERMANOVA, p < 0.001, Table 3). Moreover, results from Mantel tests revealed a correlation between the gill and water microbiomes of seabass and seabream across age groups (p < 0.03, Table 3), except in the case of late juvenile seabass (p > 0.05, Table 3).
PCoAs showed that the water microbiome clustered more closely to the skin microbiome than to the gill microbiomes in both shes (Additional le 5). In both species, the percentage of ASVs shared between skin and water microbiomes, and between gill and water microbiomes was very similar (14%±1 and 15%±1 of ASVs (amplicon sequence variants), respectively) (Fig. 5). Discussion We characterized the skin and gill microbiomes of different age groups of farmed European seabass and gilthead seabream using 16S rRNA amplicon high-throughput sequencing. By taking into account potential environmental and seasonal effects, the results of the present study show that sh age in uences skin and gill microbiome diversity and structure ( Fig. 3, 5) and predicted functions (Additional le 4; Fig. 4). Microbiome diversity across age groups Fish growth and sexual maturation is usually accompanied by extreme morphological and physiological changes [e.g. 44,45]. Importantly, some of the changes reported for the skin and gills have been suggested to also affect their microbiota. For example, changes in epidermal structure derived from sexual maturation (e.g. increases in the number, size and activity of the mucous cells) have been reported in several sh species [e.g. 44,46], and suggested to contribute to a higher infection with Saprolegnia fungus in the cases of the sea trout and brown trout [47]. Likewise, changes in the hormones expressed in the skin alter the biochemistry of the skin mucous and also potentially affect its microbiome [48]. Fish growth and sexual maturation also impact gill morphology and function in some sh species. For example, the ability to osmoregulate at different salinities was seen to increase throughout the developmental stages of the seabass (between larva and juvenile individuals, [45]. Additionally, body size was also identi ed as the main factor affecting morphological variation in gill rakes and the size of their pores in the Silver Carp and Gizzard Shad, suggesting that the overall ltering ability of these species is related to size and maturation [49]. Importantly, a recent study in rabbit sh showed that increases in body weight are accompanied by increases in the microbial community structure of the skin and gill of rabbit sh [36]. We thus hypothesize that such physiological and morphological changes occurring during sh growth have led to the changes in microbiome diversity, composition and predicted functionality observed in the present study. The skin and gill microbiomes of older age groups of seabass showed signi cantly higher alpha-diversity than early juveniles. Although all of the most abundant phyla were maintained between age groups, the skin and gill microbiomes of the seabass were highly dynamic, diversifying with age. Conversely, the skin microbiome of seabream juveniles showed a tendency to exhibit higher alpha-diversity than adults, though these differences were not signi cant. Variation in microbiome alpha-diversity between different age groups has been previously reported for many sh species. For example, studies on the zebra sh and salmon gut microbiome, have reported differences between mature and immature life stages; however those differences also coincide with other major ecological changes in the sh, such as diet [17] or environment transitions [20]. 
The differences found in the present study in microbiome structure across age groups, which were consistently signi cant in both species, have been already reported in other sh (e.g., several reef sh [14,35]; Salmo salar [19]), mainly in longitudinal studies several months long [13,17,20]. Microbial predicted functional diversity across age groups The predicted functional analysis suggests that distinct signi cantly enriched metabolic pathways are expressed in skin and gill microbiomes of both sh species across age groups. Following alpha-diversity patterns, the number and diversity of enriched pathways was higher in mature seabass adults when compared to juveniles, especially in the gill. In seabream, on the other hand, there were essentially no differences in both microbial diversity and number of enriched pathways between age groups. However, one must interpret these results with caution, since PICRUSt2 results are limited by the currently available genomes and biased towards human health microorganisms [50]. However, it is worth noticing that some of the enriched metabolic pathways detected in the present study could be driven by the high environmental variability of the Alvor estuary where these sh are reared. In estuaries, salinity variations occur on a daily basis due to tides and pollutants can be prevalent [e.g. 51]. Biosynthesis of fatty acids and unsaturated fatty acids were two of the predicted metabolic pathways enriched in the microbiome of mature seabream skin. These same pathways have also been enriched in previous analyses of the skin and gut microbiomes in the atlantic salmon [52,53] and in the skin microbiome of the common snook [54] when transitioning between freshwater and seawater. Additionally, two of the predicted metabolic pathways identi ed in both sh species were related to degradation of toxic compounds. Speci cally, biodegradation of the highly prevalent toxic pollutants toluene and chloroaromaric compounds by bacteria is essential to remove them from the environment and to prevent absorption through the skin and gills in aquatic animals [55][56][57]. Fish and water microbiome comparisons The water microbiome of shponds were signi cantly distinct and more diverse than the skin and gill microbiomes of both sh, regardless of their age. It is known that free-living microbial communities retain higher richness than host-associated communities [31], with many studies showing a higher bacterial diversity in water relative to sh skin [28,30,36,[58][59][60], gills [14,36], gut [7,15,18,21,61], stomach [36], hindgut [36] and whole larvae [22]. Although some studies in sh have shown that the microbial communities found in the water tend to be recovered in the larval gut microbiome [17,21], others have also shown that water microbiomes do not in uence directly the microbiomes of sh mucosa [7, 8, 13-15, 18, 19, 22, 28, 30, 34, 36, 58-60, 62, 63]. Importantly, a previous study of the skin microbiome of the seabass and seabream [59] also showed signi cant differences with plaktonic communities. However, in that study only a low number of OTUs (3%) was shared between skin and water microbiomes, whereas in the present study higher percentages of ASVs were shared between the skin (14%±1) and the gill (15%±1) of both sh species and the surrounding water. Microbiome dissimilarities depicted by PCoAs showed that, although signi cantly different, the skin microbiome of both species clustered more closely to the water microbiome than the gill microbiome. 
However, only a small percentage of the variation (PC 1 -average 18%±2; PC 2 -average 10%±1) was explained by this analysis. On the other hand, the results from the Mantel tests showed a correlation between the water and gill microbiomes (p < 0.03), but not the skin microbiomes. This suggests that although both skin and gill are permanently in contact with water, the gill environment may be more susceptible to variations in the water microbiome. Conclusions Skin and gill are important mucosal barriers that protect the sh from the external environment. They are in permanent contact with the water column and thus prone to pathogenic bacterioplankton colonization. However, most studies so far investigating microbiome changes related to sh age have either strictly focused on early life stages (i.e., larvae development) or on the gut microbiota. In the present study important differences were uncovered in the diversity, composition, and predicted function of the skin and gill microbiomes across age groups of farmed seabass and seabream. Besides the increments in biomass recorded at the end of our sampling and the onset of sexual maturity, the estimated growth rate of each cohort also changed. Growth rate decreased drastically with age, being much higher in juveniles (243% and 83% for early and late seabass juveniles, and 143% in seabream) relative to adults (43% and 16% in adult seabass and seabream, respectively). We, thus, conclude that growth and sexual maturation are likely the main drivers of the differences found herein. Overall, our results were in line with what has been previously found in the skin [35,36] and gill [14,36] microbiomes of several wild reef sh, suggesting this could be a general pattern across sh. Our results also highlight the importance of considering sexual maturation as a key factor shaping external sh mucosa microbiomes, especially in studies focusing on farmed sh, where the microbiome and disease dynamics can be very important. Fish species, sampling and preparation Fish were sampled at a semi-intensive open-water farm in the Alvor Estuary (Ria Formosa, Portimão, Portugal). In this sh farm, seabass and seabream production can take up to 36 months, so having a healthy mucosa during this time is of utter importance. The gilthead seabream is a protandric hermaphrodite, maturing rst as males between years 1 and 2 in the wild, with sex reversal occurring in the following 2-3 years [38][39][40]. The European seabass reaches sexual maturity between years 2 and 3 in males, and after year 3 in females [41][42][43]. In this particular sh farm, seabass typically reaches sexual maturity at approximately 275 g, whereas for seabream maturity is usually attained at 300 g. We monitored the skin and gill microbiomes of seabass and seabream of different age cohorts, including juveniles and adults. Due to sampling restrictions within the sh farm, sampling was strictly non-invasive and sh could not be dissected to con rm sexual maturation. The categorization of the age group cohorts was based on previous studies [e.g. 38,41] and the weight at maturity records available at this farm. Samples were collected every other week (12 sampling time points) between August 2017 and January 2018 (6 months). We simultaneously sampled three seabass age groups cohorts with approximately one year old difference. 
Fish were categorized as early juveniles (9 months and an average weight of 22 g at the beginning of the study and 15 months and an average weight of 76 g at the last sampling date), late juveniles (18 months and an average weight of 151 g at the beginning of the study and 24 months and an average weight of 277 g at the last sampling date), and mature adults (32 months and an average weight of 467 g at the beginning of the study and 38 months and an average weight of 669 g at the last sampling date). We also simultaneously sampled two seabream cohorts categorized as juveniles (15 months and an average weight of 103 g initially and 21 months and an average weight of 250 g at the last sampling date), and mature adults (37 months and an average weight of 411 g at the beginning of the study and 37 months and an average weight of 476 g at the last sampling date). Seabream of an intermediate age were not available. Each age group and species was reared in separated but not distant open-water ponds (maximum 344 m and 380 m apart for seabass and seabream, respectively). In this sh farm, all ponds shared the same in ow of estuarine, which circulates between ponds and is naturally recycled. Hence, sh share roughly the same water quality and environment. Additionally, sh of each species were bought from commercial hatcheries where genetic background is limited. Fish were caught from each tank using a sh line, and gill and skin samples were non-invasively taken using sterile swabs (Medical Wire & Equipment, UK). The right laments between the rst and second arches of the gill and the right upper lateral part of the sh skin from head to tail were swabbed. Afterwards sh were released unharmed. Water samples (1 L) were collected from the ve different culture ponds at the same time as sh swabbing was performed, except during the month of December, when no water samples could be collected. Water samples were ltered through 0.2 µm lters on collection day. Swabs and lters were immediately frozen at -20ºC and then transported in dry ice to the CIBIO-InBIO laboratory where they were kept at -80ºC until processing. Five sh were sampled per week per age group, totaling 60 individuals per species and age group. A total of 360 seabass samples (60 skin and 60 gills x 3 age groups) plus 29 water samples from their corresponding shponds and a total of 240 seabream samples (60 skin and 60 gills x 2 age groups) plus 16 water samples from their corresponding shponds were processed. The seabass and their corresponding water samples were processed using the PowerSoil DNA Isolation Kit (QIAGEN, Netherlands), while seabream and their corresponding water samples were processed using the PureLink Microbiome DNA Puri cation Kit (ThermoFisher Scienti c, UK). We used two different DNA extraction kits due to supply shortage at the time of extraction. This technical difference did not impact the goals of our study since we studied each sh species separately (i.e., microbiomes are not compared between sh species). DNA concentration and quality were measured in a NanoDropTM 2000 Spectrophotometer (ThermoFisher Scienti c, USA). DNA extractions were shipped on dry ice to the University of Michigan Medical School (USA) for ampli cation and sequencing according to the protocol of Kozich et al. [64]. Each sample was ampli ed for the V4 hyper-variable region of the 16S rRNA gene (~ 250 bp). All amplicon libraries were pooled and sequenced in a single run of the Illumina MiSeq sequencing platform. 
Approximately 8,313,608 and 6,943,265 16S rRNA sequences were retrieved for seabass and seabream, respectively. The number of sequences per sample ranged from 726 to 46,001 in seabass and from 5,145 to 151,713 in seabream. After normalization and removal of non-bacterial reads, 8,724 and 5,754 ASVs were assigned to the skin and gill, respectively, of seabass, while 5,308 and 3,423 ASVs were assigned to the skin and gill, respectively, of seabream. A total of 2,543 ASVs were retrieved from the water samples collected in seabass fishponds, while 1,440 ASVs were retrieved from the waters of seabream fishponds. Taxa showing a mean relative proportion ≥ 5% in any group were considered the most abundant in that group.

Data processing and statistical analysis

Raw FASTQ files were denoised using the DADA2 pipeline in R, with the filtering and trimming parameters trimLeft = 20, truncLen = c(220,200), maxN = 0, maxEE = c(2,2) and truncQ = 2 [65]. A midpoint-rooted tree of ASVs was estimated using the Quantitative Insights Into Microbial Ecology 2 package (QIIME2; release 2019.7). A table containing amplicon sequence variants (ASVs) was constructed and taxonomic inferences were made against the SILVA reference database (release 138) [66]. ASV abundances were normalized using the negative binomial distribution [67], which accounts for library size differences and biological variability. Microbial taxonomic alpha-diversity (intra-sample) was calculated using the Shannon, Faith's phylogenetic diversity (PD), ACE and Fisher indices as implemented in the R package phyloseq [68]. Variation in microbial composition (alpha-diversity) and in the mean proportions of the most abundant taxa (≥ 5% of all reads) was assessed using Linear Mixed Effects models (LME) with the lmer function of the lme4 R package [69]. Since we were interested in assessing whether microbial diversity varied across fish age groups (predictor), we used age group as a fixed factor and sampling date (with 12 sampling time points) as a random factor. The final general LME formula was expressed as: microbial diversity ~ fish age group + (1|sampling time point). Microbial structure (beta-diversity) was estimated using phylogenetic UniFrac (unweighted and weighted) and Bray-Curtis distances. Dissimilarity in microbial structure between samples was visualized using principal coordinates analysis (PCoA). Additionally, differences in community structure driven by fish age group were further tested using permutational multivariate analysis of variance (PERMANOVA) as implemented in the adonis function of the vegan R package [70]. We used the strata argument to constrain permutations within sampling dates and ran 1,000 permutations. Previous fish studies of skin and gill microbiomes [e.g. 7,8,36,61], including seabass and seabream [71], have shown remarkable differences in microbial composition and structure across host species and tissues. Additionally, a previous study by our group [72] showed that disease and antibiotic treatment in seabass lead to asymmetrical shifts in skin and gill microbial communities. Therefore, all our statistical analyses were carried out separately for each fish species and tissue. To assess to what extent water microbial communities shaped skin and gill microbiomes across fish age groups, we estimated the number of shared ASVs between fish and water microbiomes and constructed Venn diagrams in R. PERMANOVA and Mantel tests [73] were used to assess differences in community structure and correlations between tissue and water microbiomes, respectively, in both species.
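The denoising step with the filtering and trimming parameters listed above could be invoked along the following lines; the file paths are placeholders, and the downstream learnErrors/dada/mergePairs calls follow the standard DADA2 workflow and are omitted here.

library(dada2)

filterAndTrim(fwd = "raw/R1.fastq.gz",  filt = "filtered/R1.fastq.gz",
              rev = "raw/R2.fastq.gz",  filt.rev = "filtered/R2.fastq.gz",
              trimLeft = 20, truncLen = c(220, 200),
              maxN = 0, maxEE = c(2, 2), truncQ = 2)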
Finally, microbial potential metabolic functions were predicted using the metagenomic Phylogenetic Investigation of Communities by Reconstruction of Unobserved States software (PICRUSt2) embedded in QIIME2 [74], applying a weighted nearest sequenced taxon index (NSTI) cut-off of 0.03. Predicted metagenomes were collapsed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway metadata [75]. Differentially abundant metabolic pathways in the skin and gill microbiomes of seabass and seabream across age groups were identified using linear discriminant analysis (LDA) in LEfSe, with age groups as classes [76]. As suggested by the authors, we used a P value cut-off of 0.05 and an LDA effect size cut-off of 2 [76].

Declarations

Ethics approval and consent to participate: Not applicable.

Consent for publication: Not applicable.

Availability of data and material: The datasets generated and/or analyzed during the current study are available in the NCBI Sequence Read Archive (SRA) database within the BioProject ID XXXXX (will be added upon acceptance).

Competing interests: The authors declare that they have no competing interests.

… were performed with P value and LDA score cut-offs of 0.05 and 2, respectively.

Additional file 5: PCoA plot computed using Bray-Curtis distances for the water, skin and gill microbiomes of the seabass Dicentrarchus labrax (A) and the seabream Sparus aurata (B). Each dot represents a microbiome sample and is coloured by tissue/origin (skin, gill and water).

Figure 1. Mean values and standard deviations of Shannon alpha-diversity estimates plotted for the early juveniles/juveniles (green), late juveniles (yellow) and mature adults (orange) of the seabass Dicentrarchus labrax (A) and the seabream Sparus aurata (B). Pairwise comparisons of alpha-diversity were assessed using Linear Mixed Effect models with age group as a fixed factor and sampling time as a random factor. Statistically significant differences are denoted with an asterisk and non-significant differences with "ns".
Laser-sound reproduction by pulse amplitude modulation audio streams Recently, the possibility to reproduce complex continuous acoustic signals via pulsed laser-plasma sound sources was demonstrated. This was achieved by optoacoustic transduction of dense laser pulse trains, modulated via single- or multi-bit Sigma–Delta, in the air or on solid targets. In this work, we extend the laser-sound concept to amplitude modulation techniques. Particularly, we demonstrate the possibility of transcoding audio streams directly into acoustic pulse streams by analog pulsed amplitude modulation. For this purpose, an electro-optic modulator is used to achieve pulse-to-pulse amplitude modulation of the laser radiation, similarly to the multi-level Sigma–Delta method. The modulator is directly driven by the analog input stream through an audio interface. The performance of the system is evaluated at a proof-of-principle level for the reproduction of test audio signals such as single tones, double tones and sine sweeps, within a limited frequency range of the audible spectrum. The results are supported by computational simulations of the reproduced acoustic signals using a linear convolution model that takes as input the audio signal and the laser-generated acoustic pulse profile. The study shows that amplitude modulation allows for significant relaxation of the laser repetition rate requirements compared to the Sigma–Delta-based implementation, albeit at the potential cost of increased distortion of the reproduced sound signal. The nature of the distortions is analyzed and a preliminary experimental and computational investigation for their suppression is presented. Sound generation by laser induced breakdown (LIB) in ambient air or by laser ablation (LA) on solid targets by short or ultrashort laser pulses has been known since the early 60s 1,2 , shortly after the invention of the laser.Since then, laser-plasma sound generation has attracted significant scientific interest and has been extensively studied both experimentally and theoretically [3][4][5][6] .Laser-plasma sound sources (LPSSs) exhibit high practical interest for technological and scientific applications from the macro-scale to the micro-scale and from the very low frequencies up to the ultrasounds 6 .Point-like sources generated by tightly focused nanosecond laser pulses exhibit perfect omnidirectional emission across the entire frequency range, while line-like sources generated by loosely focused femtosecond pulses exhibit a cylindrical acoustic emission 7,8 .In the time domain, LPSSs have a rapid N-pulse pressure profile and a broadband and highly repeatable frequency spectrum 6 .Finally, LPSSs are effectively massless and spatially unbound, so that they can be reproduced over long distances without the need for in-situ power supply, receiver or demodulation devices. Over the years, laser technology has evolved to develop laser systems capable of emitting high-energy short and ultrashort pulses at a broad range of wavelengths, durations and energies.Nanosecond, picosecond and femtosecond lasers capable of inducing breakdown in ambient air or other gases have become compact and affordable, enabling the adoption of LPSSs in scientific and industrial applications, such as laser-induced breakdown spectroscopy (LIBS) 9 , non-destructive materials testing and diagnostics 10,11 , underwater or air-water communication signal transmission 12 and military applications 7 , while it has also been proposed for acoustic measurements 13,14 . 
In previous works, we demonstrated the possibility to generate arbitrary complex and continuous acoustic signals through laser-plasma optoacoustic transduction by utilizing digital pulsed modulations, such as Sigma-Delta (ΣΔ) modulation 15,16 .Based on this concept, Lengert et al. have produced tones through laser-induced breakdown Laser-plasma sound sources Laser-sound generation of complex continuous signals is based on optoacoustic transduction following laserinduced plasma generation by short or ultrashort high intensity laser pulses.For gas targets, e.g., ambient air, the non-linear interaction of such pulses with the neutral gas initially generates a hot electron cloud through photon absorption and electron-electron interactions.The free electrons interact with the colder ions and molecules in the excited volume and transfers energy to them, resulting in rapid and localized thermalization of the gas.The consequent thermal expansion and elastic collapse of the thermalized gas volume lead to the emission of a shock wave that exhibits a characteristic N-pulse shape, with the positive part corresponding to the expansion phase and the negative to the collapse phase.As the shock wave propagates away from the source, it gradually becomes a linear acoustic wave, as shown in Fig. 1a. For solid targets, the generated acoustic wave has the same characteristic shape in the time-domain, however the physical mechanism leading to the initial shock formation is different.Plasma generation and the resulting heating causes the material near the surface to sublimate in a process known as laser ablation 19 .The escaping particles collide with the surrounding air particles, which are forced away from the surface of the material.This rapid disposition and consequent relaxation of the air molecules leads to the generation of pressure and the emission of a shock wave with the characteristic N-pulse shape.A typical acoustic N-pulse generated by ablation of stainless steel via 532 nm, 10 ns, 1 mJ laser pulses is shown in Fig. 1b).Laser ablation can be achieved with a fluence of the order of 1 of 10 8 -10 10 W/cm 2 , which is well below the value required for breakdown in ambient air (~ 10 11 -10 12 ).As will be shown in the next Section, the reduction of the breakdown threshold is exploited here to lower the laser intensity required for the proof-of-principle experiments on laser-sound reproduction presented here. In the frequency domain, the laser-plasma acoustic pulses exhibit a 1st order high-pass profile up to a spectral peak, after which the spectral content drops with frequency.The frequency spectrum of the acoustic pulse of Fig. 1b) obtained via discrete Fourier transform (DFT) is shown in Fig. 
1c), where the characteristic high-pass profile is apparent up to 20 kHz, which is the bandwidth limit of the acoustic measurement system (see "Methods" section). Importantly for sound generation in the audible range, acoustic pulses generated by tightly focused nanosecond laser pulses with energies of several tens of millijoules exhibit spectral peaks in the low ultrasounds.

Pulse amplitude modulation

In traditional pulse amplitude modulation (PAM), the information of the modulating signal is encoded as a stream of pulses with constant width and an amplitude proportional to the instantaneous amplitude of the input signal. Assume an analog input signal x(t), where t is the time variable, and a time interval T_0 so that x(nT_0) is the amplitude of x(t) at the time instances nT_0, with n = 0, 1, 2, .... The PAM signal y_PAM(t) constitutes a pulse train s_p(t) with a pulse-to-pulse distance T_0 and amplitude of the nth pulse equal to x(nT_0). To express the PAM signal mathematically, we multiply the input signal by a periodic train of impulses with period T_0,

s_p(t) = \sum_{n=-\infty}^{+\infty} \delta(t - nT_0),   (1)

where \delta(t) is the Dirac delta function, so that

x_s(t) = x(t) s_p(t) = \sum_{n} x(nT_0) \delta(t - nT_0).   (2)

Then, the signal x_s(t) is convolved with the time profile of a single pulse p(t), resulting in the PAM signal

y_PAM(t) = x_s(t) * p(t) = \sum_{n} x(nT_0) p(t - nT_0).   (3)

The frequency spectrum of y_PAM(t) is calculated by the Fourier transform,

Y_PAM(\omega) = X_s(\omega) P(\omega),   (4)

where \omega is the angular frequency. The Fourier transform X_s(\omega) of x_s(t) is given by

X_s(\omega) = (1/T_0) \sum_{k=-\infty}^{+\infty} X(\omega - k\omega_0), with \omega_0 = 2\pi/T_0.   (5)

Finally:

Y_PAM(\omega) = (1/T_0) P(\omega) \sum_{k=-\infty}^{+\infty} X(\omega - k\omega_0).   (6)

The spectrum Y_PAM(\omega) is the sum of infinite copies of the input spectrum X(\omega), shifted by k\omega_0 and weighted by P(\omega). If the input signal is bandlimited in f ∈ [0, f_0] so that the copies X(\omega - k\omega_0) do not overlap, the spectrum of the PAM signal in the baseband is given by Eq. (6) for k = 0:

Y_PAM(\omega) = (1/T_0) P(\omega) X(\omega), for 0 ≤ f ≤ f_0.   (7)

This is the spectrum of the input signal weighted by the spectral profile of the N-pulse. Unlike most PAM implementations that use rectangular pulses, the laser-sound system reproduces the typical N-pulses p_N(t) with the 1st-order high-pass spectral profile P_N(\omega) in the band of interest, as is well known from existing works 6,13. Hence, the spectrum of the PAM-based laser-sound system in the baseband is a high-pass-filtered version of the input spectrum:

Y_LS-PAM(\omega) = (1/T_0) P_N(\omega) X(\omega).   (8)

Experimental platform

The basic structure of the PAM laser-sound system is shown in Fig. 2. Laser pulse amplitude modulation is achieved by an electro-optic modulator (EOM) based on the electro-optic Kerr effect. Initially, the laser beam is expanded by 4 times (L1, L2) in order to achieve stronger focusing on the target and is then directed into a polarizer (P1) that removes any unpolarized components. A quarter-wave plate (QWP) then converts the linear polarization into circular. Consequently, the beam enters the Pockels cell, a crystal whose birefringence is electrically controlled via a high-voltage generator. During propagation inside the Pockels cell, the pulse polarization is shifted to a desired polarization state depending on the amplitude of the applied voltage. On exiting the Pockels cell, the pulse is directed into a second polarizer (P2) that blocks the horizontally polarized components. As a result, the amount of light that reaches the focusing lens (Lf) depends on the amount of vertically polarized light at the exit of the Pockels cell. As shown in Fig.
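The baseband behaviour of Eqs. (3) and (8) can be checked numerically. The short R sketch below uses a synthetic, idealized bipolar pulse in place of the real N-pulse and placeholder parameter values, so it illustrates the convolution model rather than reproducing the experimental response.

fs <- 384e3; T0 <- 1/8000                       # sample rate and pulse period (8 kHz)
t  <- seq(0, 0.05, by = 1/fs)
x  <- 0.5 * (1 + sin(2*pi*500*t))               # unipolar 500 Hz test tone

tp <- seq(0, 20e-6, by = 1/fs)                  # crude bipolar stand-in for p_N(t)
pN <- sin(2*pi*tp/max(tp))

xs <- numeric(length(t))                        # impulse train scaled by x(nT0)
idx <- seq(1, length(t), by = round(T0*fs))
xs[idx] <- x[idx]

y    <- convolve(xs, rev(pN), type = "open")    # Eq. (3): x_s(t) convolved with p_N(t)
spec <- Mod(fft(y))                             # baseband: input spectrum shaped by P_N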
2, the high voltage generator (HVG) that controls the Pockels cell is driven by a function f (x(t)) of the analog audio signal s aud , delivered through an audio interface.This results in the laser pulse train and, consequently, the reproduced acoustic pulse train, being correlated with the input audio signal.Particularly, the possible amplitude levels of the laser pulses are theoretically infinite and continuous, which in turn results to infinite and continuous amplitude levels of the generated acoustic pulses, leading to an analog reproduced audio signal.Moreover, by choosing a linear function f lin (x(t)) , the modulation of the optical pulse energy is directly proportional to the instantaneous amplitude of x(t) , hence producing a linear analog PAM optical pulse stream.However, as is also shown in the "Results" section, this is not an optimal choice because the complex and highly non-linear physical processes resulting to laser-plasma sound generation (see also 6 ) introduce nonlinearities in the LS-PAM system's acoustic response.This becomes evident by considering the LS-PAM system as a three-step conversion process: In the first step, the audio input x(t) is used to modulate the amplitude of the laser pulses, so that for the nth laser pulse p laser (n) , it holds: The electro-optic modulation system allows for precise control of the laser pulse amplitude so that the relation between the instantaneous amplitude x(nT 0 ) of the audio signal at time t = nT 0 and the amplitude of the nth laser pulse p laser (n) can be considered linear and time independent.Also, it is well-known that the Pockels cell has a response in the picosecond scale, rendering any changes in its polarization state instantaneous compared to the time scales of the modulation.However, for the next step of the process, the conversion of the laser pulse to an acoustic pulse: is strongly non-linear.Hence, the selection of a linear function f lin (x(t)) is not optimal and leads to acoustic non-linearities in the reproduced signal.The selection of the optimal function requires an evaluation of the p laser (n) → p N (n) relation, which is here carried out experimentally.Inversion of the p laser (n) → p N (n) relation leads to improved acoustic reproduction with suppressed harmonic distortion.Further details on the Pockels cell driving are presented in the "Results" section. Results In the "Results" section, experimental results on the ability of the analog PAM laser-sound system to reproduce single sinewaves, two tone signals and sine sweep signals are presented.The impulse and frequency response of the system are obtained and compared to the simulated response.The possibility for system response equalization by input signal pre-filtering is also experimentally demonstrated for the two-tone signals.Two different control mechanisms, a linear and a non-linear, are investigated and compared in terms of generated harmonic distortion. 
Linear control Figure 3 shows the experimentally measured spectra of discrete sinewaves with frequencies 63, 125, 250 and 500 Hz, respectively, for a linear driving function f lin (x(t)) .It can be seen that the magnitude of the fundamental frequencies of the sinewaves increases by 6 dB when the frequency is doubled.This is in accordance with the linear behavior of the system, characterized by the 1st-order high pass profile, As described in the model presented in the subsection "pulse amplitude modulation".The higher harmonics also increase with increasing fundamental frequency, while the background noise remains the same, as expected.The measured spectra include additive noise, mainly originating from the rotor of the metal target.The contribution of the rotor to the background noise was measured to be approximately 55 dB-SPL.Another source of noise in the measured signal is a random deviation in the amplitude of the generated acoustic pulses from the expected.Despite the high repeatability of laser-plasma sound generation, pulse-to-pulse repeatability in the presented experiments was compromised by the low available optical power, which was close to the breakdown threshold.This, in combination with the use of the metal disc, whose progressive surficial degradation due to ablation lead to uncontrollable irregularities in the ablation conditions, resulted to a generally increased noise in the measurements.Higher optical power and careful design of the ablating target, or optimally the transduction in the air, would significantly improve the acoustic performance of the system.Moreover, higher order harmonics are present in the measured signals, especially the 2nd harmonic, while the 3rd harmonic is also apparent above the noise floor for the 250 and 500 Hz sinewaves.The harmonic distortion originates in the non-linear relation between the amplitude of the optical pulses and the resulting amplitude of the acoustic pulses.A preliminary investigation of the non-linearities of the PAM laser-sound system is presented in the next subsection.Figure 4a shows the experimentally measured frequency response of the system in comparison to the simulated frequency response, whereby a very good agreement can be observed.Both curves www.nature.com/scientificreports/have the characteristic 1st-order high pass profile, as predicted by the computational model of Eq. ( 8), for a flat input spectrum X(k) = 1: The main difference between the two curves is the apparent additive noise in the measured signal. The experimentally evaluated and simulated Impulse Responses (IRs) of the system are presented in Fig. 4b, where amplitude normalization for the two curves is carried out so that the energies of the two signals are equal.In the IR signal, three distinct features can be observed, with the prominent feature corresponding to the linear response and the preceding features corresponding to the non-linearities of the system, as described in 20 .From Eq. ( 9) and for x(t) = δ(t) we obtain the linear impulse response of the system: which effectively is the N-pulse pressure profile.In the zoomed window of the measured IR, a central N-pulse can be observed along with lateral ripples.The ripples originate from the frequency range of the input sine-sweep, which is limited to 2 kHz compared to the 192 kHz bandwidth available at the used 384 kHz sampling rate. 
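The sweep-based impulse-response estimate discussed above can be sketched as follows, using a Farina-type exponential sweep and its inverse filter; y_meas stands for the recorded microphone signal, the amplitude envelope is the usual exponential compensation of the sweep's energy distribution, and all parameter values are illustrative.

fs <- 384e3; f1 <- 20; f2 <- 2000; Tsw <- 5
t  <- seq(0, Tsw, by = 1/fs)
L  <- Tsw / log(f2/f1)
sweep <- sin(2*pi*f1*L*(exp(t/L) - 1))          # logarithmic (exponential) sine sweep

inv <- rev(sweep * exp(-t/L))                   # inverse filter: time-reversed sweep with
                                                # exponentially decaying amplitude envelope
h <- convolve(y_meas, rev(inv), type = "open")  # impulse response; harmonic components
                                                # appear ahead of the linear peak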
System response equalization As it was computationally demonstrated for the ΣΔ-based laser-sound system 5,6 , the high pass frequency response of the LS-PAM system can be equalized by pre-filtering of the input signal with a 1st-order low pass filter (see also "Signal processing" subsection in "Methods" section).Here, an experimental demonstration of system equalization is presented.In Fig. 5a-d the measured response of the LS-PAM system is presented for reproduction of two-tone signals with a distance of the musical interval of a major 3rd and fundamental frequencies 63, 125 250 and 500 Hz, respectively.It can be seen that the spectral magnitudes at f 0 and f 3maj are practically equal (less than 0.3 dB difference), while without pre-filtering the spectral magnitude at f 3maj would be 20log 10 5 4 ∼ = 2 dB higher than at f 0 .It is noted that, after equalization, each two-tone signal was individually normalized to the maximum possible amplitude, so that the best signal to noise ratio (SNR) to be separately achieved for each signal. Non-linear control The acoustic performance of the system can be significantly improved by considering the non-linear relation g(•) between optical excitation energy and reproduced acoustic amplitude.This can be done by measuring the reproduced acoustic pulses for a ramp control signal.Note that the relation between the input signal and the laser (9) www.nature.com/scientificreports/pulse amplitude is linear so that the two can be used interchangeably for the purposes of the following analysis. The results are shown in Fig. 6, where two main regimes can be identified in the p laser -p N curve: • a sub-threshold-or "no sound"-region, where the optical intensity of the laser pulses is not sufficient to ablate the metal target and hence, no sound is produced, • a "sound" region with a sigmoid-like profile. It should be noted that this curve corresponds to the specific laser system and the metal target and hence cannot be considered as general. The measured non-linear relation can be used to calculate the system's impulse response via the computational model, by applying g(•) in the calculated PAM signal y sw PAM (t) resulting from a sine sweep input as: where y sw LS−PAM (t) is the calculated output of the system when the non-linearity of the p laser -p N curve is consid- ered.The result is plotted against the measured IR in Fig. 7, where the curves are again normalized to have the same energy.It can be seen that the model correctly identifies the appearance of the three higher-order harmonics, hence demonstrating the influence of the p laser -p N curve on the non-linear behavior of the system.The overestimation of the amplitude of the second harmonic highlights the need for more precise evaluation of the p laser -p N . 
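A first-order low-pass pre-filter of the kind used here can be realized with a single-pole recursion. The sketch below uses an illustrative cut-off frequency, takes x_in as a stand-in for the input two-tone signal, and mirrors the per-signal renormalization described above.

fs <- 8000; fc <- 20                            # pulse rate and an illustrative cut-off
a  <- exp(-2*pi*fc/fs)                          # one-pole low-pass coefficient
lp1 <- function(x) as.numeric(stats::filter((1 - a)*x, filter = a, method = "recursive"))

x_eq <- lp1(x_in)                               # pre-filtered (equalized) input
x_eq <- x_eq / max(abs(x_eq))                   # normalize each signal to full scale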
It can be argued that suppression of the higher-order harmonics requires the elimination of the "no-sound" region and the linearization of the "sound" region.For the former, various methods related to the physics of lasermatter interaction could be considered, as for example the use of femtosecond laser pulses or the development of special targets with lower breakdown or ablation thresholds.For the non-linearity of the "sound" region, a general roadmap entails application of the inverse function g −1 (•) of the sigmoid curve to the input signal, so that: www.nature.com/scientificreports/It can be easily seen that such a predistortion of the input signal theoretically leads to a distortion-free PAM signal reproduced by the laser-sound system: Here, preliminary experimental results by measurements using predistortion of the input signal based on the measured curve of Fig. 6 are presented.It should be noted that, since the curve of Fig. 6 is valid only within the particular amplitude range of the input signal, effective suppression of the system non-linearities requires that the pre-distorted signal has the same amplitude range. Figures 8 and 9 show the measured impulse response and two-tone signals reproduced by the system, respectively.From the impulse response it becomes immediately apparent that the second harmonic, which is the strongest non-linear feature when linear control is used, is almost eliminated, while the third harmonic is mostly left unaffected.Quantification of the harmonic energy in the two curves shows a reduction of about 85% for the non-linear control method.This fact is also reflected in the spectra of the two-tone signals of Fig. 9. Particularly, for the 63 Hz fundamental frequency, no harmonic distortion is observed above the noise level, while for the 125, 250 and 500 Hz, the harmonic distortion is significantly suppressed compared to that of Fig. 5b-d.Particularly for the fundamental frequencies of 250 and 500 Hz, the magnitude of the higher harmonics is suppressed by more than 10 dB. Discussion In this work, a laser-sound system based on pulse amplitude modulation was developed and experimentally evaluated for the reproduction of test audio signals.The system effectively constitutes an extension of the original ΣΔ-based laser-sound system towards amplitude modulation without quantization of the permitted acoustic levels.It uses laser-plasma sound generation to form acoustic pulse trains by fast nanosecond laser excitation on a metallic target.Modulation of the laser radiation was achieved using an electro-optic modulator based on a Pockels cell.It was shown that the PAM laser-sound system achieves reproduction of acoustic signals within an intended band of interest by using a laser repetition rate two times the Nyquist frequency of the band.Experiments were carried out for reproduction of single sinewaves, two-tone signals and sine-sweep signals at a repetition rate of 8 kHz.This was only restricted by the specifications of the available laser unit but can be raised to 40 kHz or more in order to cover the complete audible spectrum. 
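One way to implement the predistortion is to fit a parametric sigmoid to the measured p_laser-p_N calibration points and invert it analytically. The R sketch below assumes a logistic shape and a hypothetical calibration data frame cal (columns p_laser, p_N) and target signal x_target; the measured curve of Fig. 6 need not follow this exact functional form.

g_fit <- nls(p_N ~ A/(1 + exp(-k*(p_laser - x0))), data = cal,
             start = list(A = max(cal$p_N), k = 10, x0 = median(cal$p_laser)))
cf <- coef(g_fit)

g_inv <- function(p) cf["x0"] - log(cf["A"]/p - 1)/cf["k"]    # analytic inverse of the logistic

x_pre <- g_inv(pmin(pmax(x_target, 1e-3), cf["A"] - 1e-3))    # clamp to the invertible range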
For the control of the laser pulse amplitude modulation from the audio input signal, two different driving approaches, a linear and a non-linear, were investigated.For the linear control, noticeable harmonic distortion was observed in the reproduced acoustic signals, which was associated with the non-linear relation between laser pulse and acoustic pulse energy.Owing to its pulsed nature, laser-based sound reproduction allows for simple experimental evaluation of this non-linearity, in contrast to non-linearities encountered in traditional electromechanical transducers 21 .Here, the non-linear relation was experimentally evaluated by measurements of the reproduced pulse trains for a ramp input signal.The results were used in computational simulations, which confirmed the experimental findings.Preliminary experimental results from the application of nonlinear control aiming to reverse the non-linearity of the system by appropriate predistortion of the input signal, showed significant reduction of the harmonic distortion in the reproduced signal.It is noted that predistortion is a well-established technique used in various applications e.g., in RF amplifiers 22,23 . In comparison to the previously presented digital ΣΔ-based lased-sound system, the PAM laser-sound system can significantly reduce the required repetition rate of the driving laser to the Nyquist frequency of the input audio signal.This is an important finding, as the repetition rate is a critical factor for the feasibility, efficiency and Impulse response of the non-linearly driven PAM laser-sound system. cost of a fully functional laser-sound system operating in the complete audible range.However, the susceptibility of the PAM laser-sound system to harmonic distortion and noise could impose limitations on the fidelity of the reproduced audio signals.This could potentially restrict its applicability to cases where there are no stringent requirements for reproduction accuracy.A full comparison of the performance of the ΣΔ and PAM laser-sound systems for high-fidelity audio reproduction will be carried out in the future. Experiments A picture of the prototype LS-PAM experimental platform can be seen in Fig. 10.It is based on an Nd:YAG laser (IS-200-2-L, EdgeWave, Germany) capable of emitting pulses with 532 nm wavelength, 10 ns duration and 1 mJ energy at a repetition rate of 8 kHz.In the experiments, a fixed repetition rate of 8 kHz was utilized.The acoustic streams were captured recorded by a special microphone (4192, B&K, Denmark, Germany) with a high dynamic range of 19-162 dB and frequency response spanning from the low infrasonic frequencies to the high audible frequencies f ∈ [3 Hz, 20 kHz] using a microphone preamplifier (2690-0S2, B&K, Denmark).The microphone was placed at a distance of 5 cm from the target point.The microphone signal was sampled by an audio interface (Adi-2 Pro, RME, Germany) at f s = 384 kHz with a 24-bit resolution.Recording was done via the Audacity software 24 enhanced by the Aurora plugin 25 . 
Optoacoustic transduction was done here by focusing the laser pulses on a metal disc, which allowed for a significant reduction of the required optical energy compared to focusing into ambient air.This was necessary due to limitations in the available optical power of the laser.The disc was mounted on a rotor rotating at approximately 300-500 rpm (7450 ES, EBM-Papst, Germany), in order to avoid repeated consecutive ablation on the same spot, which would lead to rapid localized degradation of the material with detrimental effects on the reproduced sound.It should be noted that the main acoustic characteristics of the system are not affected by the use of the rotating metal target, as the pressure profile of the acoustic pulses generated by laser ablation on metals is identical to that of laser-induced breakdown.This fact becomes apparent from the pressure profile and frequency spectra of the typical ablation-generated acoustic N-pulse shown in Fig. 1b and c. Signal processing The input signals used here for the evaluation of the system are single sine waves with frequencies 63, 125, 250 and 500 Hz (octave-bands central frequencies) as well as two-tone signals containing an octave-band central frequency and a frequency in the harmonic interval of a major 3rd.The higher tone f 3 in the major 3rd interval is related to the fundamental tone f 0 according to f 3 = 5 4 f 0 and is suitable for aural evaluations as it can be easily identified by its characteristic consonance.System equalization was carried out by pre-filtering of the two-tone input signals with a 1st order low pass filter.The filter's frequency response is shown in Fig. 11. Moreover, logarithmic sine sweeps x sw (t) from 20 Hz to 2 kHz are used to evaluate the impulse and fre- quency response of the system.The impulse response h PAM (t) is obtained by convolution of the measured signal y sw LS−PAM (t) with the inverse filter h inv (t) of the logarithmic sweep: where The frequency response of the PAM laser-sound system is derived from the impulse response by DFT: It should be noted that, due to the nature of laser-plasma sound generation, it is impossible to control the sign of the generated N-pulses.Simply, it is impossible to produce "negative" N-pulses as a typical PAM signal would require and hence unipolar PAM needs to be used.For this reason, a unipolar input signal is generated by adding a DC component to the input signal equal to the minimum value of the signal: Figure 1 . Figure 1.(a) Generation of a pressure wave following laser breakdown or laser ablation, initially propagating at supersonic velocity as a shock wave and progressively transforming into a linear sound wave, (b) typical pressure profile of an acoustic N-pulse generated by ablation of stainless steel via 532 nm, 10 ns, 1 mJ laser pulses and (c) frequency profile of the acoustic N-pulse obtained via DFT.Orange dashed line marks the bandwidth limit of the acoustic measurement system (20 kHz). Figure 2 . Figure 2. Block diagram of the LS-PAM prototype platform. Figure 4 . Figure 4. Measured and simulated (a) frequency response and (b) impulse response of the linearly driven PAM laser-sound system. Figure 5 . Figure 5. Measured two-tone signals with fundamental frequencies 63, 125, 250 and 500 Hz generated by the linearly driven PAM laser-sound system. Figure 6 . Figure 6.Measured curve of the relation between laser pulse energy and acoustic pressure. Figure 7 . Figure 7. 
Simulated impulse response of the linearly driven PAM laser-sound system accounting for the non-linear relation between laser pulse energy and generated acoustic pressure, compared with the measured impulse response. Insets show details of the linear and non-linear IR features.

Figure 9. Measured two-tone signals with fundamental frequencies 63, 125, 250 and 500 Hz reproduced by the non-linearly driven PAM laser-sound system.

Figure 10. Picture of the prototype LS-PAM experimental platform, where the EOM and driving systems can be seen.

Figure 11. Frequency response of the 1st-order low-pass filter used for equalization of the LS-PAM system.
Invader vs . invader : intra-and interspecific competition mechanisms in zebra and quagga mussels The zebra mussel, Dreissena polymorpha (Pallas, 1771), is considered to be one of the world’s worst invasive species with a large impact on local biodiversity and ecosystem services in Europe and North America. Recently, a large-scale displacement of the invasive zebra mussel by the similarly invasive quagga mussel, Dreissena rostriformis (Deshayes, 1838), is occurring in large parts of Western and Central Europe. While the exact reasons for the competitive advantage of the quagga mussel remain unknown, its potentially higher fitness might play a role. This replacement of one invasive species by a closely related invasive species offers a unique opportunity for unravelling patterns and processes of competition. To test whether the quagga mussel derives its competitive advantage from higher growth rates, a fully closed and controlled microcosm system was used to subject specimens of both species to different intensities of intraspecific and interspecific competition. The study revealed that both species reacted qualitatively similar to the different treatments. However, under all competition scenarios the quagga mussel showed substantially higher growth rates and larger growth ranges. Therefore, these characteristics might provide the quagga mussel with a higher flexibility in fluctuating environments and allow it to reach adult size earlier. This, in turn, can make the quagga mussel less prone to parasite pressure and other biological constraints during growth, and provides an advantage in the competition for space (hard substrates) and food. Introduction Invasive species are considered to be a major driver of biodiversity loss (e.g., Sala et al. 2009).They can have dire consequences for ecosystem services and may cause enormous economic damage (Pimentel et al. 2005;Charles and Dukes 2007;Connelly et al. 2007;Pejchar and Mooney 2009;Sousa et al. 2014).While today almost all ecosystems are affected by invasive species, brackish and freshwater systems are particularly vulnerable (e.g., Gherardi 2007;Gherardi et al. 2009). The zebra mussel Dreissena polymorpha (Pallas, 1771) is one of the "100 of the World's Worst Invasive Alien Species" (Global Invasive Species Database 2017).Native to the fresh and brackish waters of the Caspian and Black Sea drainage basins, it quickly spread throughout much of Europe after the construction of several inter-basin canals (e.g., Black Sea-Baltic Sea) at the end of the 18 th and the beginning of the 19 th century (Karatayev et al. 2007).The zebra mussel continues to spread throughout Europe and was, for example, first reported in the southern Balkans in 2010 (Wilke et al. 2010). Only 30 years ago, the zebra mussel arrived in North America, most likely by release of its larvae with ship ballast water into Lake St. Clair near Detroit, Michigan (Hebert 1989).From there it quickly spread throughout much of eastern North America and was recently discovered in Mexico (Naranjo-García and Castillo-Rodríguez 2017).However, despite hundreds of relevant studies, the invasion biology of the zebra mussel remains unclear.In fact, scientists only realized in 1991 that several years earlier a congener, the quagga mussel Dreissena rostriformis (Deshayes, 1838), had also been introduced into the Great Lakes (formerly known as D. rostriformis bugensis; for a revised nomenclature see Stepien et al. 
2013).Confined to the northeastern part of the United States for decades, it was only recently reported from southwestern states (Stokstad 2007).At about the same time, the quagga mussel was also introduced into western Europe and quickly spread along major water pathways (Bij de Vaate et al. 2013;Heiler et al. 2013;Marescaux et al. 2016b).Moreover, it is outcompeting the zebra mussel in many sympatric populations (Heiler et al. 2012(Heiler et al. , 2013;;Matthews et al. 2014;Marescaux et al. 2015Marescaux et al. , 2016b)). Strong competition between invasive species, in general, has been suggested before (e.g., Gérard et al. 2014), and the displacement of one invasive species by another is typically attributed to higher competitive strength in regard to resource exploitation (e.g., Braks et al. 2004).Specifically, Diggins et al. (2004) found that the quagga mussel is expelling the zebra mussel from hard substrate at sites in Lake Erie, while the latter is seeking refuge on macrophytes.In addition, the quagga mussel is able to colonize silty sediment and has a higher tolerance towards low oxygen concentrations, resulting in a better adaptation to the profundal zones of deep lakes (Karatayev et al. 1998(Karatayev et al. , 2014)). Preliminary analyses indicate that growth rates of quagga and zebra mussels, as a proxy for competition strength, are differentially affected by, for example, food availability and water temperatures.Accordingly, the quagga mussel outcompetes the zebra mussel at low food concentrations (Baldwin et al. 2002) and at lower temperatures (Karatayev et al. 2010).However, despite ample research aimed at understanding the competitive advantage of the quagga mussel, the direct influence of intra-and interspecific competition on the growth rates of both species has not yet received adequate attention.The major goal of this study was therefore to experimentally assess the effects of different intra-and interspecific competition levels on growth rates of quagga and zebra mussels.To minimize the possibility of other factors influencing the outcome, we established a closed microcosm system using artificial waters with controlled biotic and abiotic conditions and a defined food supply. Origin and acclimatization of mussels All mussels were collected from the back waters of the River Main in Hanau-Steinheim in Germany (50.1103ºN;8.9169ºE).The two species were kept separately in fish tanks and acclimated to laboratory conditions over four days. Experimental setups To examine the response of zebra and quagga mussels to changing densities of both intra-and interspecific competition, we studied their growth for 82 days.All experiments were conducted under fully controlled conditions in climate chambers (12h/12h light/dark cycle) at Justus Liebig University in Giessen, Germany (Figure 1).The basic experimental setup was inspired by Grudemo and Bohlin (2000). 
Individual competition experiments were performed in 900 mL polyethylene terephthalate containers with 800 mL of artificial water, a base sand layer of 3 cm, and a small brick of aragonite-sand cement (3 × 2.5 × 1 cm) as hard substrate.Each container was also equipped with a wadding-wrapped foam filter with a pore size of 1 mm.Prior to the experiments, substrate and filter materials were inoculated with nitrifying bacteria (Sera Bio-Nitrivec, sera GmbH, Heinsberg, Germany).Artificial water supply was prepared from de-ionized water supplemented with biocalcium (Tropic Marin, Wartenberg, Germany) to a final concentration of 0.25 g L -1 to increase water hardness to approximately 8 °dH.Approximately 5 µL L -1 vitamin/highly unsaturated fatty acid (HUFA) solution was added to support the growth of nitrifying bacteria and to avoid potential dietary deficits.The latter solution consisted of 2 mL Lipovit (Tropic Marin), 5.25 g lecithin and 25 mL glycerin.Air supply was provided through glass pipettes.Both water flow and water exchange were achieved by using medical infusion bags and a tube through which fresh water ran into the container at a rate of approximately 200 mL per day.Excess water overflowed through a small hole near the top of each container.During the experiments, mussels were fed daily with one drop (approximately 50 µL) per container of Rotifer Diet ® HD and two drops of Shellfish Diet ® 1800 (both products of Reed Mariculture, Campbell, CA, USA), which is the food concentration we have been using for our mussel cultures since 2010.Water temperatures (mean of 19.1 °C, minimum 18 °C, maximum 22 °C) were recorded with Hobo Pendant ® Temperature/Light Data Loggers (Onset, Bourne, MA, USA). Competition treatments For each species, five intra-and interspecific competition scenarios were studied (Figure 1 These densities were chosen to represent low, medium and high population densities that are known from natural settings (Heiler et al. 2011(Heiler et al. , 2012)).Mussels of roughly the same size were used as focal individuals and marked with a small dot of nonirritating nail polish for identification purposes.Twelve replicates were set up for each treatment and the positions of the containers in the climate chamber were randomized to minimize possible differences in inner chamber temperature.Deceased focal individuals and focal individuals with lost markings were excluded from the analyses.Other deceased mussels were replaced with individuals of a similar size to keep the competition pressure at a constant level throughout the experiment.Prior and after the experiments (i.e., after 82 days) the wet weight of each focal individual was measured with a highresolution balance and the differences between start and end weights were recorded as "growth rates". Statistics All statistical analyses were done using the R statistical environment version 3.4.1 (R Core Team 2017).Normality and variance were assessed with the Shapiro-Wilk test and the Bartlett's test, respectively.As the samples did not meet the normality or the equal variance assumptions, the two-sided test for the nonparametric Behrens-Fisher problem (Konietschke et al. 2015) was used to compare the overall reactions of the zebra and quagga mussels. To determine significant differences among the treatments, a linear mixed-effects model was generated using the package lme4 version 1.1-15 (Bates et al. 
Results

After excluding deceased individuals and individuals with lost markings from the dataset, the number of focal individuals at the end of the experiment was 55 for the quagga mussel in setups Aq–Eq and 50 for the zebra mussel in setups Az–Ez, resulting in replicate numbers ranging from 6 to 12 (see Figures 1 and 2). Weight distributions of the focal individuals at the beginning and end of the experiment are given in Table 1 and Supplementary material Table S1.

The quagga mussel showed an overall higher mean and median growth rate as well as a greater range of growth rates in response to the different treatments (Figure 2). The two-sided test for the nonparametric Behrens-Fisher problem showed that the overall growth rate of the quagga mussels tested tended to be greater than that of the zebra mussels (estimator: 0.16, 95% C.I.: 0.09–0.25, p < 0.01).

A pairwise comparison indicated that for all treatments, growth rates and ranges of growth rates were substantially larger for the quagga mussel (Figure 3, boxplots Aq–Eq) than for the zebra mussel (Figure 3, boxplots Az–Ez). This finding is largely confirmed by the linear mixed-effects model. Accordingly, the Tukey multiple comparison test (Table S2) revealed significant differences between the two species for all treatments (p = 0.0169 for Aq vs. Az; p < 0.01 for Bq vs. Bz; p < 0.001 for Dq vs. Dz and for Eq vs. Ez) except for the strong intraspecific competition treatment (Cq vs. Cz: p = 0.8976).

When comparing the treatment-specific growth rates within species, no significant differences could be found for the zebra mussel (p > 0.1, see Table S2). In contrast, the growth rates of the quagga mussel were significantly lower under strong intraspecific competition than under all other treatments (p < 0.001 for Cq vs. Aq and Cq vs. Dq; p < 0.01 for Cq vs. Bq; Cq vs. Eq significant at the 10% level, p = 0.0514). Moreover, quagga mussels showed higher growth rates under interspecific than under intraspecific competition, while the zebra mussel reacted to inter- and intraspecific competition in a similar way (Figure 4).

Discussion

Understanding the complex invasion mechanisms of the highly invasive zebra and quagga mussels is fundamental for maintaining native biodiversity and avoiding further losses of ecosystem services (sensu Pejchar and Mooney 2009). Typically, mechanisms of invasion are difficult to infer due to the complex interactions in multi-species competition systems. Here, the zebra-quagga mussel system has the advantage that the two congeners are ecologically similar, both invasive, and exert a strong competitive pressure on each other. This, in turn, enables the testing of selected species-based mechanisms.
Our study, which mainly aimed at testing whether the quagga mussel derives its competitive advantage from higher growth rates under different competition scenarios, showed that both species reacted qualitatively similarly to different intensities of intra- and interspecific competition. However, in four of the five treatments, the quagga mussel showed overall higher growth rates as well as greater growth ranges (see Figures 2, 3 and 4). Interestingly, growth rates of the zebra mussel were not substantially influenced by the presence of conspecifics or congeners, while the growth of the quagga mussel was affected by the presence of other dreissenids (Figure 4). Although not significantly so, the quagga mussel showed the highest growth rate and also the greatest range of growth rates under medium interspecific competition (Figures 3 and 4). For all other competition treatments, its growth was slightly lower, with growth under strong intraspecific competition being significantly lower than under all other treatments.

When alien species invade a new habitat with new environmental conditions, two mechanisms predominantly enable a population to persist: genetic adaptation and phenotypic plasticity (Chevin et al. 2010). As plasticity, in contrast to genetic changes, does not depend on favorable mutations, a population can almost instantaneously respond to new challenges (for a review see Pfennig et al. 2010). This has led to the assumption that greater plasticity provides a fitness advantage to invasive species (e.g., Richards et al. 2006). Recent studies have substantiated this claim by showing that invading species generally do have greater phenotypic plasticity than co-occurring non-invasive species (e.g., Davidson et al. 2011), thus directly linking plasticity with the potential for invasiveness. Although not specifically tested here, the notable differences observed in the growth ranges of the two species during our study point towards differential degrees of phenotypic plasticity, not only between native and invasive species but also between these two invasive species.

Higher growth rates allow a competitor to reach adult size earlier, making it comparatively less prone to, for example, parasite pressure or other biological constraints during growth (Dillon 2000). In combination with potentially higher phenotypic plasticity and the observed varying growth rates under different competition pressures, higher growth rates might provide the quagga mussel with greater flexibility in a fluctuating environment (Davis 2009). Quagga mussels potentially benefit from higher growth rates in situations where space (i.e., hard substrate) is limited, and greater growth ranges can provide more flexibility when hard substrate is more heterogeneous or when only less suitable substrate is available. Additionally, higher growth rates might influence filtering activities. When comparing larger mussels, filtration rates of quagga mussels are higher than those of zebra mussels (Diggins 2001). This difference might be further enhanced when quagga mussels reach larger sizes earlier than zebra mussels of the same age, increasing the competitive advantage of the former species.

The competition scenarios analysed in the present paper constitute an ecological snapshot tailored to the given laboratory conditions. Given that temperature ranges do have an important impact on adaptive physiological processes such as consumption, excretion, and filtration (Aldridge et al. 1995; Matthews et al. 2014; Marescaux et al.
2016a), an important question for further investigation will be to what extent the identified reaction patterns of quagga and zebra mussels are valid for broader ranges of temperatures. This is particularly relevant in the light of the continuing range expansion of both species (e.g., Naranjo-García and Castillo-Rodríguez 2017; Prié and Fruget 2017) and the Europe-wide changing water temperature regimes caused by global change (Floury et al. 2013).

Figure 1. Closed microcosm setup for competition experiments in quagga and zebra mussels. Left: climate chamber setup. Right: experimental design for the competition experiments. The growth of focal individuals (■ = quagga mussel; ▲ = zebra mussel) was measured under five competition treatments (Aq–Eq: focal individual = quagga mussel; Az–Ez: focal individual = zebra mussel). Each treatment was replicated twelve times. □ = quagga mussel individual, Δ = zebra mussel individual. Photograph by K. C. M. v. Oheimb.

Figure 2. Boxplots of overall growth rates for quagga and zebra mussel focal individuals after 82 days. Symbols indicate the species identity of the focal individual. n = number of focal individuals for each species.

Figure 3. Boxplots of treatment-specific growth rates of quagga (treatments Aq–Eq) and zebra (treatments Az–Ez) mussel focal individuals after 82 days of treatment. Symbols indicate the species identity of the focal individual. n = number of focal individuals for each species/treatment. Treatments according to Figure 1 (control = no competition, medium intraspecific competition = 7 conspecific individuals, strong intraspecific competition = 27 conspecific individuals, medium interspecific competition = 7 congeneric individuals, strong interspecific competition = 27 congeneric individuals).

Figure 4. Plot of competition-specific growth rates of quagga and zebra mussel focal individuals after 82 days under different competition treatments (control = no competition, medium = 7 competing individuals, strong = 27 competing individuals). Symbols indicate the species identity of the focal individual and line types the competition treatment (intraspecific = conspecific individuals as competitors, interspecific = congeneric individuals as competitors).

Table 1. Weight distributions of zebra and quagga mussel focal individuals at the beginning and end of the experiment (start/end weights in g; total duration 82 days).
2019-04-03T13:08:42.330Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "2de2d8fef724909721cc6664f23fcd18c3471826", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3391/ai.2018.13.4.05", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "2de2d8fef724909721cc6664f23fcd18c3471826", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
254109498
pes2o/s2orc
v3-fos-license
The Hawking temperature in the context of dark energy for Kerr–Newman and Kerr–Newman–AdS backgrounds

We show that the Hawking temperature is modified in the presence of dark energy in an emergent gravity scenario for Kerr–Newman (KN) and Kerr–Newman–AdS (KNAdS) background metrics. The emergent gravity metric is not conformally equivalent to the gravitational metric. We calculate the Hawking temperatures for these emergent gravity metrics along θ = 0. We also show that the emergent black hole metrics satisfy Einstein's equations for large r and θ = 0. Our analysis is done in the context of dark energy in an emergent gravity scenario having k-essence scalar fields φ with a Dirac–Born–Infeld type Lagrangian. In the KN and KNAdS backgrounds, the scalar field φ(r, t) = φ₁(r) + φ₂(t) satisfies the emergent gravity equations of motion at r → ∞ for θ = 0.

The motivation of this work is to calculate the Hawking temperature in the presence of dark energy for an emergent gravity metric which is also a black hole metric. We consider two cases: (a) when the gravitational metric is Kerr–Newman and (b) when the gravitational metric is Kerr–Newman–AdS. In Sect. 2, we describe k-essence and emergent gravity, where the metric Ḡ_μν contains the dark energy field φ and this field should satisfy the emergent gravity equations of motion. Again, for Ḡ_μν to be a black hole metric, it has to satisfy the Einstein field equations. The formalism for k-essence and emergent gravity used here is as described in [18-22]. In Sects. 3 and 5, we show that for both the Kerr–Newman and Kerr–Newman–AdS cases, the emergent gravity metrics are mapped onto Kerr–Newman and Kerr–Newman–AdS type metrics in the presence of dark energy.
The emergent metric satisfies Einstein's equations for large r, and the dark energy field φ satisfies the emergent gravity equations of motion along θ = 0 at r → ∞. We calculate the Hawking temperature for the emergent gravity metrics in Kerr–Newman and Kerr–Newman–AdS backgrounds in Sects. 4 and 6, respectively. We have clarified that the Hawking temperature is spherically symmetric under very general conditions, so taking θ = 0 does not affect this property of the Hawking temperature. It has been shown elaborately in [52] how the Hawking temperature is independent of θ, although the metric functions depend on θ. The Hawking temperature is also a purely horizon phenomenon of the spacetime, with a temperature that does not depend on θ; in this sense the Hawking temperature is spherically symmetric.

k-essence and emergent gravity

The k-essence scalar field φ minimally coupled to the gravitational field g_μν has the action [18-22]

$$S[\phi, g_{\mu\nu}] = \int d^4x\, \sqrt{-g}\; L(X, \phi),$$

where X = (1/2) g^{μν} ∇_μφ ∇_νφ. The energy-momentum tensor is

$$T_{\mu\nu} = L_X\, \nabla_\mu\phi\, \nabla_\nu\phi - g_{\mu\nu} L,$$

where L_X ≡ dL/dX and ∇_μ is the covariant derivative defined with respect to the gravitational metric g_μν. The equation of motion is

$$G^{\mu\nu}\, \nabla_\mu \nabla_\nu \phi + 2X L_{X\phi} - L_\phi = 0, \quad (3)$$

with

$$G^{\mu\nu} = \frac{c_s}{L_X^2}\left[L_X\, g^{\mu\nu} + L_{XX}\, \nabla^\mu\phi\, \nabla^\nu\phi\right], \qquad c_s^2(X, \phi) \equiv \left(1 + 2X\,\frac{L_{XX}}{L_X}\right)^{-1}, \quad (4)$$

and 1 + 2X L_XX/L_X > 0. Carrying out the conformal transformation, the inverse metric of G^{μν} is

$$G_{\mu\nu} = \frac{L_X}{c_s}\left[g_{\mu\nu} - c_s^2\,\frac{L_{XX}}{L_X}\, \nabla_\mu\phi\, \nabla_\nu\phi\right].$$

A further conformal transformation [13,14], Ḡ_μν ≡ (c_s/L_X) G_μν, gives

$$\bar{G}_{\mu\nu} = g_{\mu\nu} - c_s^2\,\frac{L_{XX}}{L_X}\, \nabla_\mu\phi\, \nabla_\nu\phi. \quad (6)$$

Here one must always have L_X ≠ 0 for the sound speed c_s² to be positive definite, and only then are Eqs. (1)-(4) physically meaningful: L_X = 0 implies that L is independent of X, so that, from Eq. (1), L(X, φ) ≡ L(φ), i.e., L becomes a function of the potential alone and the very definition of k-essence fields becomes meaningless, because such fields correspond to Lagrangians where the kinetic energy dominates over the potential energy. Also, the very concept of minimal coupling of φ to g_μν becomes redundant, so Eq. (1) is meaningless and Eqs. (4)-(6) are ambiguous. For non-trivial configurations of the k-essence field φ, ∂_μφ ≠ 0 (for a scalar field, ∇_μφ ≡ ∂_μφ) and Ḡ_μν is not conformally equivalent to g_μν. Thus the k-essence field φ has properties different from canonical scalar fields defined with g_μν, and its local causal structure is also different from that defined with g_μν. Further, if L is not an explicit function of φ, then the equation of motion (3) reduces to

$$\bar{G}^{\mu\nu}\, \nabla_\mu \nabla_\nu \phi = 0. \quad (7)$$

We shall take the Lagrangian as

$$L(X) = 1 - \sqrt{1 - 2X}, \quad (8)$$

which is a particular case of the DBI Lagrangian [13-17]. This is typical for the k-essence field, where the kinetic energy dominates over the potential energy. Then c_s²(X) = 1 − 2X and the emergent gravity metric (6) becomes

$$\bar{G}_{\mu\nu} = g_{\mu\nu} - \partial_\mu\phi\; \partial_\nu\phi. \quad (9)$$

Note the rationale of using two conformal transformations: the first is used to identify the inverse metric G^{μν}, while the second realises the mapping onto the metric given in (9) for the Lagrangian (8).

Kerr-Newman metric and emergent gravity

We consider the gravitational metric g_μν to be Kerr–Newman (KN) and denote ∂_0φ ≡ φ̇, ∂_rφ ≡ φ′. We take the k-essence scalar field to be φ ≡ φ(r, t). The line element of the Kerr–Newman metric is [44-48]

$$ds^2 = \frac{\Delta}{\Sigma}\left(dt - \alpha \sin^2\theta\, d\varphi\right)^2 - \frac{\sin^2\theta}{\Sigma}\left[(r^2 + \alpha^2)\,d\varphi - \alpha\, dt\right]^2 - \frac{\Sigma}{\Delta}\,dr^2 - \Sigma\, d\theta^2, \quad (10)$$

where Δ(r) = r² + α² − 2GMr + Q² and Σ = r² + α² cos²θ. It is to be noted that the above metric (10) was also rediscovered in [50,51]. In [52] it is shown elaborately how the Hawking temperature does not depend on θ, although the metric functions depend on θ. In our case the emergent gravity metric (9), Ḡ_μν, contains extra terms (first derivatives of the k-essence scalar field), but these extra terms still do not depend on θ. Therefore, the modified Hawking temperature will still be independent of θ. For this reason, we will carry out our evaluation at fixed θ, i.e., θ = 0 only.
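As a quick consistency check on the Lagrangian (8), the sound speed and the coefficient multiplying ∇_μφ∇_νφ in Eq. (6) can be verified symbolically. The following SymPy sketch is our own illustration, not part of the original derivation; it confirms that c_s² = 1 − 2X and that c_s² L_XX/L_X = 1, which is exactly what collapses Eq. (6) into the simple emergent metric (9).

```python
# Symbolic check that L(X) = 1 - sqrt(1 - 2X) gives c_s^2 = 1 - 2X and
# c_s^2 * L_XX / L_X = 1, so that Gbar_{mu nu} = g_{mu nu} - d_mu(phi) d_nu(phi).
import sympy as sp

X = sp.symbols('X', positive=True)
L = 1 - sp.sqrt(1 - 2*X)          # DBI-type k-essence Lagrangian, Eq. (8)

L_X = sp.diff(L, X)
L_XX = sp.diff(L, X, 2)

cs2 = sp.simplify(1 / (1 + 2*X*L_XX/L_X))   # sound speed squared, Eq. (4)
coeff = sp.simplify(cs2 * L_XX / L_X)       # coefficient in Eq. (6)

print(cs2)    # -> 1 - 2*X
print(coeff)  # -> 1
```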
Assuming the Kerr–Newman metric along θ = 0, the line element (10) becomes

$$ds^2 = F(r)\,dt^2 - \frac{1}{F(r)}\,dr^2, \quad (11)$$

with F(r) = Δ(r)/Σ and Σ = r² + α². In [53] it was also shown that the four-dimensional, spherically non-symmetric Kerr–Newman metric (10) transforms into the two-dimensional spherically symmetric metric (11) in the region near the horizon by the method of dimensional reduction.

The emergent gravity metric (9) then has the components

$$\bar{G}_{00} = F(r) - \dot{\phi}^2, \qquad \bar{G}_{11} = -\frac{1}{F(r)} - \phi'^2, \qquad \bar{G}_{01} = \bar{G}_{10} = -\dot{\phi}\,\phi', \quad (12)$$

so the emergent gravity line element along θ = 0 becomes

$$ds^2 = \left(F(r) - \dot{\phi}^2\right)dt^2 - \left(\frac{1}{F(r)} + \phi'^2\right)dr^2 - 2\,\dot{\phi}\,\phi'\,dt\,dr. \quad (13)$$

Now we transform the coordinates [13,14] from (t, r) to (ω, r) such that

$$\omega = t + \int \frac{\dot{\phi}\,\phi'}{F(r) - \dot{\phi}^2}\,dr \quad (14)$$

and, considering

$$\dot{\phi} = F(r)\,\phi', \quad (15)$$

we get the line element

$$ds^2 = \left(F(r) - \dot{\phi}^2\right)d\omega^2 - \frac{dr^2}{F(r) - \dot{\phi}^2}. \quad (16)$$

We consider the solution of Eq. (15) in the form φ(r, t) = φ₁(r) + φ₂(t). Then Eq. (15) reduces to

$$\dot{\phi}_2^2 = F^2(r)\,\phi_1'^2 = K, \quad (17)$$

where K is a constant and K ≠ 0, since the k-essence scalar field must have non-zero kinetic energy. From (17) we get

$$\phi(r, t) = \phi_1(r) + \phi_2(t) = \sqrt{K}\int \frac{dr}{F(r)} + \sqrt{K}\,t, \quad (18)$$

choosing the integration constants to be zero. Therefore the line element (16) becomes

$$ds^2 = \beta\,\frac{r^2 + \alpha^2 - 2G\mathcal{M}r + \mathcal{Q}^2}{r^2 + \alpha^2}\,d\omega^2 - \left[\beta\,\frac{r^2 + \alpha^2 - 2G\mathcal{M}r + \mathcal{Q}^2}{r^2 + \alpha^2}\right]^{-1} dr^2, \quad (19)$$

where β = 1 − K, 𝓜 = M/(1 − K) and 𝓠 = Q/√(1 − K). This new metric (19) is also of Kerr–Newman (KN) type along θ = 0 in the presence of dark energy. Note that K ≠ 1, since β cannot be zero, as the metric (19) would then become singular. Also, the total energy density is unity (Ω_matter + Ω_radiation + Ω_darkenergy = 1) [14,49], so the dark energy density, i.e., the kinetic energy (φ̇₂² = K) of the k-essence scalar field (in units of the critical density), cannot be greater than unity. Again, K cannot be greater than 1 because the metric (19) would then have the wrong signature. K = 0 is also excluded, because that would imply the absence of dark energy. Therefore, the only allowed values of K are 0 < K < 1, so there is no question of K approaching unity, and confusion regarding this limit is avoided. It can be shown that, for r → ∞, this metric (19) is an approximate solution of Einstein's equations. We also mention that the mass and charge of this type of black hole are modified as 𝓜 = M/(1 − K) and 𝓠 = Q/√(1 − K), respectively, in the presence of the dark energy density term K = φ̇₂².

Now we can show that the k-essence scalar field φ(r, t) given by Eq. (18) satisfies the emergent equation of motion (7) along the symmetry axis θ = 0 at r → ∞. For θ = 0, the emergent equation of motion (7) takes the form of Eq. (20). The first term vanishes since φ₂(t) is linear in t, and the last two terms vanish because Ḡ_01 = Ḡ_10 = 0. Using the expression (18), the second and third terms fall off for r → ∞. From the Planck collaboration results [54,55], the value of the dark energy density (in units of the critical density) K is about 0.696. Therefore, the second and third terms of (20) are negligible, as their denominators go to infinity, and in this limit the emergent equation of motion is satisfied.

A massless particle in a black hole background is described by the Klein–Gordon equation

$$\frac{\hbar^2}{\sqrt{-\bar{G}}}\,\partial_\mu\!\left(\sqrt{-\bar{G}}\,\bar{G}^{\mu\nu}\,\partial_\nu\Psi\right) = 0.$$

Expanding Ψ as

$$\Psi = \exp\!\left(\frac{i}{\hbar}\,S\right),$$

we obtain, to leading order in ℏ, the Hamilton–Jacobi equation

$$\bar{G}^{\mu\nu}\,\partial_\mu S\,\partial_\nu S = 0. \quad (30)$$

We consider S to be independent of θ and ϕ, and the action S is assumed to be of the form S = −Eω + W(r). Then Eq. (30) gives

$$W_\pm(r) = \pm E \int \frac{dr}{F(r) - K}. \quad (32)$$

The two values of W(r) correspond to the outer and inner horizons, respectively. Evaluating Eq. (32) around the poles at the horizons yields the imaginary part of the action, so the tunneling rates are

$$\Gamma_+ \propto \exp\!\left(-\frac{2}{\hbar}\,\mathrm{Im}\,S\right) = \exp\!\left(-\frac{E}{k_B T_+}\right) \quad (37)$$

and

$$\Gamma_- \propto \exp\!\left(-\frac{E}{k_B T_-}\right), \quad (38)$$

where k_B is the Boltzmann constant. From these two expressions, the corresponding Hawking temperatures of the two horizons are

$$T_+ = \frac{\hbar\,\beta\,(r_+ - G\mathcal{M})}{2\pi k_B\,(r_+^2 + \alpha^2)} \quad (39)$$

and

$$T_- = \frac{\hbar\,\beta\,(r_- - G\mathcal{M})}{2\pi k_B\,(r_-^2 + \alpha^2)}. \quad (40)$$

The usual Hawking temperature for the Kerr–Newman black hole is [52]

$$T = \frac{\hbar\,(r_+ - GM)}{2\pi k_B\,(r_+^2 + \alpha^2)}. \quad (41)$$

The temperatures (39) and (40) are modified in the presence of dark energy: they differ from the usual Hawking temperature (41) through the presence of the terms β = 1 − K, 𝓜 = M/(1 − K) and 𝓠 = Q/√(1 − K), where K is the dark energy density (in units of the critical density).
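To make the effect of the dark energy density K concrete, the modified horizons and temperatures can be evaluated numerically. The sketch below is our own illustration, assuming geometrized units (G = ℏ = c = k_B = 1) and arbitrary example values for M, Q and α; it solves the quadratic horizon condition for the emergent KN-type metric (19) and evaluates T_± from Eqs. (39) and (40).

```python
# Numerical illustration of the dark-energy-modified KN horizons and
# Hawking temperatures T_± = beta (r_± - M') / (2 pi (r_±^2 + a^2)).
# Units G = hbar = c = k_B = 1; M, Q, a are arbitrary example values.
import numpy as np

M, Q, a = 1.0, 0.4, 0.3   # mass, charge, spin parameter (illustrative)
K = 0.696                 # dark energy density, Planck-like value cited in the paper

beta = 1.0 - K
M_eff = M / (1.0 - K)           # modified mass
Q_eff = Q / np.sqrt(1.0 - K)    # modified charge

# Horizons: r^2 - 2 M' r + a^2 + Q'^2 = 0
disc = M_eff**2 - a**2 - Q_eff**2
assert disc > 0, "no real horizons for these parameters"
r_plus = M_eff + np.sqrt(disc)
r_minus = M_eff - np.sqrt(disc)

def T(r):
    # surface-gravity-based temperature of the emergent KN-type metric
    return beta * (r - M_eff) / (2.0 * np.pi * (r**2 + a**2))

print(f"r_+ = {r_plus:.4f}, r_- = {r_minus:.4f}")
print(f"T_+ = {T(r_plus):.6f}, T_- = {T(r_minus):.6f}")

# Comparison with the unmodified KN outer-horizon temperature (K = 0)
r0 = M + np.sqrt(M**2 - a**2 - Q**2)
print(f"usual KN T_+ = {(r0 - M) / (2*np.pi*(r0**2 + a**2)):.6f}")
```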
Kerr-Newman-AdS background

We consider the gravitational metric g_μν to be Kerr–Newman–AdS (KNAdS). The line element of the KNAdS metric [62-66] is

$$ds^2 = \frac{\Delta_r}{\Sigma}\left(dt - \frac{\alpha \sin^2\theta}{\Xi}\,d\varphi\right)^2 - \frac{\Sigma}{\Delta_r}\,dr^2 - \frac{\Sigma}{\Delta_\theta}\,d\theta^2 - \frac{\Delta_\theta \sin^2\theta}{\Sigma}\left(\alpha\,dt - \frac{r^2 + \alpha^2}{\Xi}\,d\varphi\right)^2, \quad (42)$$

where Δ_r = (r² + α²)(1 + r²/l²) − 2GMr + Q², Δ_θ = 1 − (α²/l²)cos²θ, Ξ = 1 − α²/l² and Σ = r² + α² cos²θ. The parameters M and α are related to the mass and angular momentum of the black hole, G is the gravitational constant, and l is the curvature radius determined by the negative cosmological constant (Λ < 0), Λ = −3/l². Again we choose the symmetry axis along θ = 0 as before, since [52] shows elaborately that the Hawking temperature is independent of θ. Then the line element (42) reduces to

$$ds^2 = F(r)\,dt^2 - \frac{1}{F(r)}\,dr^2, \quad (45)$$

with F(r) = Δ_r/Σ and Σ = r² + α². Using (45), the components of the emergent gravity metric (9) are

$$\bar{G}_{00} = F(r) - \dot{\phi}^2, \qquad \bar{G}_{11} = -\frac{1}{F(r)} - \phi'^2, \qquad \bar{G}_{01} = \bar{G}_{10} = -\dot{\phi}\,\phi'. \quad (46)$$

Again we consider the k-essence scalar field φ(r, t) to be spherically symmetric, so the emergent gravity line element for the KNAdS background along θ = 0 is

$$ds^2 = \left(F(r) - \dot{\phi}^2\right)dt^2 - \left(\frac{1}{F(r)} + \phi'^2\right)dr^2 - 2\,\dot{\phi}\,\phi'\,dt\,dr. \quad (47)$$

We transform the coordinates (t, r) to (ω, r) as

$$\omega = t + \int \frac{\dot{\phi}\,\phi'}{F(r) - \dot{\phi}^2}\,dr \quad (48)$$

and choose

$$\dot{\phi} = F(r)\,\phi'. \quad (49)$$

Then the line element (47) becomes

$$ds^2 = \left(F(r) - \dot{\phi}^2\right)d\omega^2 - \frac{dr^2}{F(r) - \dot{\phi}^2}. \quad (50)$$

We again consider the solution of Eq. (49) as φ(r, t) = φ₁(r) + φ₂(t). Then Eq. (49) gives

$$\dot{\phi}_2^2 = F^2(r)\,\phi_1'^2 = K, \quad (51)$$

where K is a constant and K ≠ 0. From (51) we get φ₂(t) = √K t and

$$\phi_1(r) = \sqrt{K}\int \frac{dr}{F(r)}, \quad (52)$$

whose explicit evaluation, together with the parameters appearing in it (Eqs. (53) and (54), with C = −1), depends on the roots of Δ_r. For this type of k-essence scalar field φ (52), the line element (50) reduces to

$$ds^2 = \left(F(r) - K\right)d\omega^2 - \frac{dr^2}{F(r) - K}. \quad (55)$$

For reasons similar to those given before, here also the only allowed values of K are 0 < K < 1. It can also be shown that this metric (55) is an approximate solution of Einstein's equations at r → ∞ along θ = 0. Note that the parameters M, Q and l are also modified in the presence of the dark energy density (K).

The Hawking temperature for the KNAdS type metric in the presence of dark energy

We calculate the Hawking temperature using the tunneling formalism [58, 64-66]. The horizons of the metric (55) in the presence of dark energy are determined by

$$\Delta_r - K(r^2 + \alpha^2) = \frac{\beta}{\bar{l}^2}\left[r^4 + r^2(\alpha^2 + \bar{l}^2) - 2G\mathcal{M}\,\bar{l}^2\, r + \bar{l}^2(\alpha^2 + \mathcal{Q}^2)\right] = 0,$$

where β = 1 − K, 𝓜 = M/(1 − K), 𝓠 = Q/√(1 − K) and l̄ = √(1 − K)·l are the dark-energy-modified parameters. This quartic equation has four roots: two real positive roots and two complex roots. We denote by r_d++ and r_d−− the complex roots and by r_d+ and r_d− the positive real roots in the presence of dark energy (K). Here we take r_d+ > r_d−, so that r_d+ is the black hole event horizon and r_d− is the Cauchy horizon of the KNAdS type black hole (55). Now we use the Eddington–Finkelstein coordinates (v, r) or (u, r) along θ = 0, i.e., the advanced and retarded null coordinates [14], v = ω + r* and u = ω − r*, with dr* = [(r² + α²)/Δ_r] dr, in which the emergent gravity line element (55) can be rewritten.
2022-12-01T15:53:46.636Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "d7fe9e53acc7bf7cbcbf9b6a35ccc6c1f738346b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-019-7066-z.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "d7fe9e53acc7bf7cbcbf9b6a35ccc6c1f738346b", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
79517499
pes2o/s2orc
v3-fos-license
An Atypical Presentation of Massive Pulmonary Embolism

Background: Pulmonary embolism (PE) is an obstructive disease of the pulmonary arterial system caused by the embolization of thrombus originating from the deep veins of the lower extremities. Almost 25% of patients with PE present with sudden cardiac death, and not all patients have classical symptoms. Hyper-coagulable states have been reported to cause cerebrovascular and myocardial thrombosis but rarely PE. Case Report: We present the case of a 27-year-old male who presented to the Emergency Department with complaints of low backache and giddiness. The patient was found to be tachycardic, tachypneic and in shock. The patient had a low probability of PE, with a Wells score of 1.5, but was diagnosed with massive bilateral acute pulmonary embolism with deep vein thrombosis secondary to protein C deficiency. Conclusions: It is imperative for emergency physicians to have a high index of suspicion in young patients presenting with atypical symptoms and a low clinical probability of PE in order to thrombolyse the patient on time.

Introduction

Pulmonary embolism (PE) is an obstructive disease of the pulmonary arterial system caused by the embolization of thrombus originating from the deep veins of the lower extremities [1]. PE is the third leading cause of cardiovascular-related deaths after coronary arterial disease and stroke [2]. Its incidence rises in older age groups [3]. The frequency of developing pulmonary embolism in young patients is low, but once developed it has a high potential for mortality if not diagnosed and managed early. Hyper-coagulable states like protein C and antithrombin III deficiency have been reported to cause cerebrovascular thrombosis but have rarely been reported to cause PE. It is often a dilemma for emergency physicians whether or not to pursue the diagnosis of PE in patients who present with atypical signs and symptoms of pulmonary embolism. Despite diagnostic advances, delay in diagnosing pulmonary embolism is common and represents an important clinical issue. We report the case of a young male who presented with atypical symptoms and no co-morbidities and was diagnosed with PE and treated for the same.

Case Report

A 27-year-old male patient was brought to the Emergency Department (ED) with a history of left low backache for 2 days. He also complained of giddiness followed by a fall on the presenting day. There was no history of fever, chest pain or shortness of breath. The patient did not have any significant past medical or surgical history. He was an occasional smoker, and there was no significant family history. On examination the patient was tachycardic, tachypneic and in shock, with a heart rate of 140/min, a respiratory rate of 34/min and an unrecordable initial blood pressure. The patient was normothermic and saturating at 80% on room air. On secondary survey the patient was pale, there was a lacerated wound over the left occipital region, and bilateral rhonchi were present on chest auscultation with bilateral equal air entry. The rest of the examination was unremarkable. The patient was started on oxygen inhalation and intravenous fluids, to which he responded well. An ECG showed sinus tachycardia, and a chest X-ray showed haziness in the left lung base. To rule out a cardiac cause, 2D echocardiography was done, which showed dilatation of the right atrium and right ventricle and trace tricuspid regurgitation with a normal ejection fraction [Fig. 1].
The patient's Wells score was 1.5, which falls in the low-probability group, but in view of his echocardiography findings he was further evaluated for PE. He had a D-dimer of >5000 ng/mL (reference 0.0-750.0), and computed tomography pulmonary angiography (CTPA) was subsequently planned. The CTPA showed a soft-tissue-density filling defect in the distal segment of the left main pulmonary artery extending up to the sub-segmental divisions of almost all lobes of the left lung, while on the right side there was a mild eccentric filling defect in the right middle lobar division not extending up to the sub-segmental level [Fig. 2-5]. The patient was diagnosed with bilateral acute massive thromboembolism and was thrombolysed with tenecteplase. Doppler study of the lower limbs revealed a large echogenic thrombus in the right common femoral vein [Fig. 6]. The patient was shifted to the Cardiac Care Unit for further management. Subsequent blood work revealed the patient to be deficient in protein C at 33% (reference 70-140) and antithrombin III at 35% (reference 83-128). The patient was further managed conservatively with anti-coagulants and other supportive measures. He responded well to the given medical therapy and was discharged in a stable condition with the final diagnosis of massive acute bilateral pulmonary thromboembolism and deep vein thrombosis (DVT) secondary to protein C deficiency.

Discussion

The risk of blood clots is increased by cancer, prolonged bed rest, smoking, stroke, certain genetic conditions, estrogen-based medication, pregnancy, obesity, and some types of surgery [4]. About 90% of emboli are from proximal leg DVTs or pelvic vein thromboses. Clinically apparent DVT is present in only 11% of confirmed cases of pulmonary embolism. The classic presentation of PE is the abrupt onset of chest pain, breathing difficulty and hypoxia, but some patients may have no obvious symptoms at presentation. The diagnosis of pulmonary embolism should be suspected in patients with respiratory symptoms unexplained by an alternative diagnosis. Evidence-based literature supports the practice of determining the clinical pre-test probability of pulmonary embolism before proceeding with diagnostic testing [5]. The three validated systems are the Modified Wells Scoring System, the Revised Geneva Scoring System, and the Pulmonary Embolism Rule-Out Criteria (PERC) [6-8]. Low-probability PE can be ruled out with D-dimer testing [9]. CTPA is the gold standard for diagnosing pulmonary embolism [10]. A hyper-coagulation workup should be performed if no obvious cause for embolic disease is apparent and the patient has no risk factors for it. Protein C is a 62-kD, vitamin K-dependent glycoprotein synthesized in the liver. The activation of the protein into activated protein C (aPC) is catalyzed by thrombin when it is bound to the endothelial glycoprotein thrombomodulin [11,12]. The catalytic activity of aPC is greatly enhanced by the vitamin K-dependent cofactor protein S [13]. A deficiency of protein C disturbs the delicate balance between pro-coagulant and anti-coagulant proteins and engenders a prothrombotic state. Protein C and antithrombin III deficiency lead to a three-fold to seven-fold increase in the risk of thrombosis [14]. Protein C deficiency leading to PE, however, has rarely been reported. For almost one-quarter of PE patients, the initial clinical presentation is sudden death [15]. Early thrombolytic therapy has been shown to have beneficial outcomes in patients with massive PE.
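Because risk stratification hinges on the pre-test probability scores discussed above, a small sketch may help illustrate how the Modified Wells score is tallied. The criteria and weights below follow the commonly published version of the score; the function itself is our illustration and not part of the case report.

```python
# Illustrative tally of the Modified Wells score for PE pre-test probability.
# Criteria and weights follow the commonly published Modified Wells criteria.
WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings):
    """Sum the weights of the criteria present in `findings` (a set of keys)."""
    return sum(w for name, w in WELLS_CRITERIA.items() if name in findings)

def risk_category(score):
    if score < 2:
        return "low probability"
    if score <= 6:
        return "moderate probability"
    return "high probability"

# The patient in this report was tachycardic but met no other criteria,
# giving a score of 1.5: the "low probability" group despite massive PE.
score = wells_score({"heart_rate_over_100"})
print(score, risk_category(score))  # 1.5 low probability
```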
Conclusion

In our case report, the patient had a Wells score of 1.5 and was found on CTPA to have massive PE along with DVT secondary to protein C deficiency. It is imperative for astute emergency physicians to have a high index of suspicion in young patients presenting with atypical symptoms and a low clinical probability of PE in order to thrombolyse the patient on time.
2019-03-17T13:12:40.547Z
2018-05-13T00:00:00.000
{ "year": 2018, "sha1": "5b67b213c45d0725e95d8fabc3d3f30eef2d9057", "oa_license": null, "oa_url": "http://www.casereports.in/filedownload.aspx?id=835", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8863ea7dd59792a4c64ffd2c17579ee3ef773c4c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235407941
pes2o/s2orc
v3-fos-license
Role and economic importance of crop genetic diversity in food security

Determination of genetic diversity and of the relationships among breeding materials is crucial in crop improvement strategies. Characterization and evaluation of germplasm are prerequisites to screening out the desired genetic materials for genetic improvement programs. The value of a germplasm collection relies on the number of accessions it possesses and on the genetic materials available in those accessions for yield and yield components. Climate change and geographical isolation are identified as two major drivers in the formation of new species. Other sources of germplasm diversification and evolution are biotic factors like competition and predation. Phenotypic characters are the most important conventional tools to analyse variation among genetic materials, and visible morphological traits are crucial tools in genetic diversity investigation. Plant breeding primarily relies on the variation existing in the genetic diversity of cultivated species and their wild relatives for further improvement. Plant phenotyping is defined as the investigation of plant characters for yield, quality and resistance to biotic and abiotic stresses. Genetic variation and selection are the two basic principles of plant breeding. Additive (heritable) variance and non-additive variance (dominance and epistasis) are the important components of the genetic variance of any quantitative trait. Hence, it is important to decompose the visible phenotypic variation into heritable and non-heritable components with suitable genetic parameters like the genotypic coefficient of variation, heritability and genetic advance. Genetic diversity is the totality of genetic differences in the genetic make-up of a species. Genetic diversity has a paramount role in the perpetuation of a species, offering adaptation mechanisms to biotic and abiotic environmental stresses and enabling changes in genetic composition to cope with changes in the environment. Eventually, plant genetic diversity plays a key role in the continuation of agricultural development, with significant improvement in different morphological and agronomical characteristics. Selection for improvement highly depends on the inherent level of genetic diversity present in the species, the rate of evolutionary response, and adaptation to environmental conditions. As genetic diversity increases, the ability to adapt to changing environments also increases within a given species; especially when climate fluctuations or new pests and diseases occur, species with large genetic diversity are capable of overcoming the challenges. Since crop improvement programs are integrated across different research disciplines, the availability and accessibility of diverse genetic materials ensure the sustainability of the global food production network.

Introduction

Biological diversity is the existence of variation among and within the living world, which can be harnessed for improvement, especially in crop plants [1]. Biological diversity is commonly decomposed into three major components: genetic diversity, species diversity and ecosystem diversity [2]. Genetic diversity is the availability of variability in heritable traits in a population of a given species [3].
Genetic variation is defined as differences in DNA sequence, biochemical characteristics, and physiological and morphological characters such as plant height, flower position, flower color and other functions. Ramanatha & Hodgkin [2] described genetic diversity as the presence of differences in alleles, genotypes, the result of their performance (phenotypes), and the overall sum of the genome. The utilization of genetic diversity is relevant for making advanced improvements in crop plants [4]. In the presence of narrow genetic diversity, crop species are susceptible to emerging pathogens and several other constraints, leading to loss of productivity, and this problem leads to a serious decline in the areas of adaptation [5]. Genetic variation is a major driver of evolutionary diversification and a source of phenotypic variation. Achievement in crop improvement primarily relies on a broad base of genetic divergence [6].

Genetic diversity is a natural gift that arises through mutation, gene flow, hybridization and polyploidy of genetic materials. Genetic variation is the allelic difference of genes in DNA or RNA arrangements in the gene pool of a population. Genetic diversity is the broadest term, consisting of all the variation existing between different genetic materials in relation to the genetic make-up of crop species. There are three levels of biological diversity: ecosystem diversity, which represents the variability between distinguishable communities of species at the highest hierarchy; species diversity, which indicates the different species within a community; and genetic diversity, which is the diversity present within distinguishable cultivars of a species. Genetic diversity has a significant role in ensuring food security by increasing farmers' income, and it matters for current and future food production [11]. The importance of plant genetic diversity has been very large from the very beginning of agriculture: natural genetic variability has been exploited within crop species to meet subsistence food requirements, and it is now being harnessed to produce surplus food for growing populations. Today's crop gene banks have emerged in response to two different intended purposes: first, the mobilization, management, and long-term storage of materials that can be readily used in crop variety improvement programs; and second, the long-term conservation of crop genetic diversity for the potential future use of humanity. Crop genetic resources are the basis of agricultural production, and significant economic benefits have resulted from their conservation and use. Genetic resources provide the fundamental mechanics that enable plants to convert soil, water and sunlight into something of critical value to humans: food. Diverse genetic resources allow humans to select and breed plants with desired characteristics, thus increasing agricultural productivity. Genetic diversity is the backbone of a nation's food security and the basis of economic development as a whole.

Advantages of genetic diversity

Genetic diversity is the base for crop improvement and for the existence of crop plants in nature. Genetic diversity offers the opportunity to improve cultivars with desired traits, comprising both farmer-preferred and breeder-preferred traits.
From the beginning of agriculture, genetic variability has been used to meet subsistence food requirements. Nowadays, climate-adapted cultivar development is a central issue for plant breeders, since fluctuating climate components cause adverse problems for the normal growth and development of crop plants. The availability of genetic diversity is directly related to the presence of desired alleles and helps in breeding climate-resilient varieties. The sustainability of crop production and food security is being threatened by the increasing unpredictability and severity of drought stress due to global climate change. The incorporation of adapted natural genetic variation into breeding programs can enrich the current genetic diversity for stress tolerance and improve yield under stress. Genetic diversity enables the development of high-yielding, improved-quality cultivars preferred by farmers and breeders. Genetic analysis is a good indicator of genetic diversity, though such systems need to be tested on wide ranges of crops to be verified [7]. The potential improvement of crop plants is determined by the magnitude of genetic diversity available in a given crop species. Characterization followed by cataloguing of genetic materials is an essential prerequisite for a successful improvement program. Genetic diversity analysis tools are used to measure the degree of genetic divergence among different populations [8]. The presence of genetic variation in plant populations is useful for conservation and breeding programs. Today, crop genetic diversity has decreased for several reasons compared with the past. It is crucial to enhance crop productivity by providing appropriate protection and conservation of the genetic diversity of crops, and the management practices of growing environments should be modified accordingly. Meanwhile, the human population is increasing alarmingly, with rising expectations of living standards, which has caused a scarcity of natural resources [10]. Therefore, knowledge of genetic variability is the key component in selecting genotypes that withstand changing environments, including new pests, diseases and new climatic conditions, for future breeding programs. The objective of this paper was to understand the role and economic importance of crop genetic diversity in food security.

Concept of diversity and its impact on crop improvement

Genetic diversity is defined as the availability of genetic variation in heritable traits in a population of a given species [3]. For the development of climate-resilient cultivars, the existence of genetic diversity in the form of wild species, related species, breeding stocks and mutant lines is the source of desirable alleles that assist plant breeders [10]. The Food and Agriculture Organization has reported the depletion of genetic diversity as one of the most serious environmental concerns [12]. In general, genetic diversity is strictly the amount of genetic variation available between crop species [13].

Variability and adaptability

Larger genetic variation in crop species offers greater opportunities for improvement and for adaptation to environmental conditions. Adaptability is the better survival and reproduction of a species under changing environmental conditions.

Effects of genetic erosion

Genetic erosion is the depletion of genetic variability due to several factors over a particular period of time in a particular location. The loss can include individual genes or combinations of genes. Genetic loss is the reduction of genetic diversity over time [29]. Genetic loss is frequently described as an alleviation in evenness [32]; viewing genetic loss as a depletion in evenness originates from the variability indicators used in population genetics, such as Shannon's index [33] and Nei's gene diversity index [34]. Genetic diversity is measured using the frequencies of genes within a group of genotypes in a specified region.
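Because these indices are computed directly from allele frequencies, a short sketch can make the calculation concrete. The snippet below is our own illustration (the allele frequencies are made-up example values); it computes Shannon's index and Nei's gene diversity (expected heterozygosity) for a single locus.

```python
# Shannon's index H' = -sum(p_i * ln p_i) and Nei's gene diversity
# (expected heterozygosity) h = 1 - sum(p_i^2), from allele frequencies.
import math

def shannon_index(freqs):
    return -sum(p * math.log(p) for p in freqs if p > 0)

def nei_gene_diversity(freqs):
    return 1.0 - sum(p * p for p in freqs)

# Example: one locus with three alleles (frequencies are illustrative)
p = [0.5, 0.3, 0.2]
assert abs(sum(p) - 1.0) < 1e-9
print(f"Shannon's index: {shannon_index(p):.3f}")            # ~1.030
print(f"Nei's gene diversity: {nei_gene_diversity(p):.3f}")  # 0.620
```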
Diversity level is reduced when single genotypes or alleles become dominant. Genetic loss in ex situ conservation can occur because of the depletion of genes resulting from regeneration and storage practices [35]. The main cause of a narrow genetic base is the replacement of diverse landraces with a few modern varieties. Genetic erosion has a negative developmental effect when the loss of genetic diversity profoundly narrows the genetic base of modern crop varieties [36]. A narrow genetic base is defined as the loss of genetic diversity and commonly refers to the reduction in the number of specimens of a species [37,38]. The Green Revolution was the transition from the cultivation of landraces to modern varieties to increase agricultural productivity using improved varieties, intensive agricultural inputs and mechanized agriculture. The replacement of landraces (which evolved with, and were genetically improved by, traditional agriculturists without the influence of modern breeding practices) by modern or high-yielding varieties is one of the most important reasons. The landraces of a primary centre of origin are assumed to contain many valuable genes, particularly for resistance or tolerance to various biotic and abiotic stresses, and hence hold promise for utilization in future plant-breeding programs. The term genetic erosion is sometimes used in a narrow sense, i.e., the loss of genes or alleles, as well as more broadly, referring to the loss of varieties. There are a number of different ways to represent the problems of genetic erosion; one of the most useful indicators is the narrowness of the food base. A narrow genetic base also refers to the depletion of population variation because of inbreeding and genetic drift, which largely causes the endangerment of small isolated populations. Narrowing of genetic diversity might result in the complete loss of crop plants.

Conclusion

Genetic diversity is the extent of genetic variation available among crop species for use in improvement programs. The presence of sufficient genetic variation is key to the success of a breeding program. Genetic diversity has paramount importance for the development of superior varieties in terms of yield and other desirable traits. It is also very crucial in the production of superior hybrids and desirable recombinants. Genetic diversity determines the efficiency and effectiveness of improvement, which may result in enhanced food production. From a plant breeding perspective, classification of genetic variability into respective heterotic groups is critical for the development of vigorous and outstanding hybrids in terms of economically important traits. Genetic diversity provides vital protection against climate change and against pest and disease stresses. Creating sufficient genetic variation remains a challenge in the effort to keep improving genetic yield potential.
Nowadays, plant breeders are utilizing genetic materials without knowing their genetic background, such as exotic non-adapted, exotic adapted and existing genetic material, as sources of new alleles that protect and improve genetic gain through selection. Genetic diversity contributes an ample share to ensuring food and nutritional security. Knowledge of the genetic diversity of breeding material is very critical in crop improvement. Effective selection is possible in any crop improvement program where sufficient genetic variation is available for different characters. Genetic variability analysis of crop cultivars for different agronomical and morphological characters is very important in providing the opportunity to select a number of promising cultivars. Genetic variation is the basic foundation of any crop improvement program.
2021-06-11T16:11:40.224Z
2021-04-17T00:00:00.000
{ "year": 2021, "sha1": "c20b56768cc1dc3f0516822d4d67538ec6b2aebe", "oa_license": "CCBY", "oa_url": "https://www.peertechzpublications.com/articles/IJASFT-7-204.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c20b56768cc1dc3f0516822d4d67538ec6b2aebe", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics" ], "extfieldsofstudy": [ "Biology" ] }
147089445
pes2o/s2orc
v3-fos-license
Declines in American Adults' Religious Participation and Beliefs, 1972-2014

Previous research found declines in Americans' religious affiliation but few changes in religious beliefs and practices. By 2014, however, markedly fewer Americans participated in religious activities or embraced religious beliefs, with especially striking declines between 2006 and 2014 and among 18- to 29-year-olds, in data from the nationally representative General Social Survey (N = 58,893, 1972-2014). In recent years, fewer Americans prayed, believed in God, took the Bible literally, attended religious services, identified as religious, affiliated with a religion, or had confidence in religious institutions. Only slightly more identified as spiritual since 1998, and then only those above age 30. Nearly a third of Millennials were secular not merely in religious affiliation but also in belief in God, religiosity, and religious service attendance, many more than Boomers and Generation X'ers at the same age. Eight times more 18- to 29-year-olds never prayed in 2014 versus the early 1980s. However, Americans have become slightly more likely to believe in an afterlife. In hierarchical linear modeling analyses, the decline in religious commitment was primarily due to time period rather than generation/birth cohort, with the decline in public religious practice larger (d = −.50) and beginning sooner (early 1990s) than the smaller (d = −.18) decline in private religious practice and belief (primarily after 2006). Differences in religious commitment due to gender, race, education, and region grew larger, suggesting a more religiously polarized nation.

Are Americans less religious than they used to be? In previous research, the answer depended on how religious commitment was measured. Most studies agree that religious affiliation has declined in the United States since the 1970s; for example, more Americans in recent years chose "none" when asked to identify their religion (e.g., Hout & Fischer, 2002; Lim, MacGregor, & Putnam, 2010; Pew Research Center, 2015). However, several recent studies have concluded that religious service attendance, belief in God, and prayer have not changed or have even increased in recent years (e.g., Dougherty, Johnson, & Polson, 2007; Presser & Chaves, 2007; Smith & Snell, 2009; P. Taylor, 2014; Wachholtz & Sambamoorthi, 2011). Based on data up to 2008, Chaves (2011) concluded that belief in God and frequency of prayer had not changed in the General Social Survey (GSS) since the 1980s. Examining 18- to 24-year-olds in the GSS 1972-2006, Smith and Snell (2009) found only small changes in religious affiliation and service attendance, and no changes in frequency of prayer and belief in God. They concluded that emerging adults "have not since 1972 become dramatically less religious or more secular . . . if such a trend is indeed perceptible, it would seem to be weak and slight" (pp. 99-100). Other sociologists of religion have echoed these sentiments.
Stark (1988, 2005) contended that the overall religiousness of the American public has remained relatively constant as a whole, although fluctuations in affiliation and expression have occurred. Similarly, Berger's (1999) work explored the overall constancy of American religious affiliation over time, with a particular focus on how religiousness moved back to the forefront of political and economic discourse in recent decades, sentiments also echoed in many other seminal works (Berger, 2011; Berger, Davie, & Fokas, 2008). Thus, at least up to the mid-to-late 2000s, research suggests that Americans' private religious practice and beliefs (e.g., those religious practices, disciplines, and beliefs that may be conducted alone or without explicit religious affiliation) and religious service attendance remained unchanged even as more did not affiliate with a particular religious tradition. Another possibility is that religious belief has been replaced by spirituality (Fuller, 2001; Saucier & Skrzypińska, 2006). In other words, the prevailing conclusion has been that Americans have remained just as religious and/or spiritual in a private or personal sense, but less religious in a public sense. This may be due to a more general disassociation from large groups; for example, Americans have become significantly less confident in virtually all large institutions, from government to medicine (Twenge, Campbell, & Carter, 2014). Such an explanation would also be consistent with many popular conceptions of religion as a socially organizing institution (e.g., Ysseldyk, Matheson, & Anisman, 2010) that transmits cultural values, mores, and rules (Graham & Haidt, 2010). As societal norms have shifted away from institutional identification toward individualism, one would expect commitment to religion, a ubiquitous social institution, to similarly decline. However, it is unclear whether such decreases in external commitment would also be associated with decreases in personal religious involvement or practice.

Despite popular conceptions that public religious involvement has decreased while private expressions of religion and spirituality have stayed about the same, stark distinctions between religion and spirituality may be more theoretical than practical. Although religion and spirituality are known to be distinct constructs (i.e., religion comprises social and ritualized aspects of personal belief, whereas spirituality includes the search for meaning or transcendence in daily life; Pargament, 1999), these two constructs often overlap, and highly religious individuals often identify as being highly spiritual as well (for a review, see Hill & Pargament, 2003). Moreover, although some individuals certainly do identify as spiritual but not religious (e.g., Saucier & Skrzypińska, 2006), a much larger proportion of individuals identify as both religious and spiritual (Pargament, 1999), and many people have difficulty substantively differentiating between the two on an individual level (Hill et al., 2000; Zinnbauer et al., 1997). Therefore, as religious commitment has decreased, one may also expect decreases in private religious practice and individual spirituality. In this article, we seek to examine whether Americans' religious service attendance, religious practice, religious beliefs, religiosity, spirituality, confidence in religious institutions, and religious affiliation have changed since the 1970s, with a particular focus on the years since 2006 and on 18- to 29-year-olds.
We take the additional step of calculating effect sizes and performing statistical significance testing to quantify the size of the changes. We draw from the nationally representative GSS of U.S. adults conducted 1972-2014. Because this survey draws from a multiage sample over 42 years, it can isolate the effects of age from those of time period and generation.¹ Thus, unlike some surveys conducted over a shorter period of time (e.g., 7 years: Pew Research Center, 2015), this data set can determine, for example, whether the Millennial generation (born approximately 1980-1994) is less religious because they are young or because of generational or time period change. That is, are Millennials less religious than Generation X (born 1961-1979) and Boomers (born 1943-1960) were when they were 18- to 29-year-olds? These data may also provide an early look at iGen (born 1995-2012) and their religious attitudes.

Changes over time and across generations in attitudes, values, and personality traits are rooted in cultural change (Stewart & Healy, 1989; Twenge, 2014), with cultures and individuals mutually influencing and constituting one another (Markus & Kitayama, 2010). One cultural change relevant for religious orientation is the rise in individualism, a cultural system placing more emphasis on the self and less on social rules (e.g., Bellah, Madsen, Sullivan, Swidler, & Tipton, 1985; Fukuyama, 1999; Myers, 2000; Twenge, 2014). Several studies have documented increases in focus on the self (Twenge, Campbell, & Gentile, 2013; Twenge & Foster, 2010) and declines in focus on institutions, empathy for others, and moral rules (Kesebir & Kesebir, 2012; Konrath, O'Brien, & Hsing, 2011; Twenge et al., 2014).

There are several reasons we would expect religion to decline with greater individualism. First, religiosity implies some level of commitment to a larger group or organization. As Welzel (2013) suggests, the trend in Western societies has been toward more freedom and less commitment to groups. Second, belonging to a religious group may require assent to the group's beliefs, opinions, and practices. This can create tension when differences in opinion arise between an individual and an organization (e.g., Exline, Pargament, Grubbs, & Yali, 2014; Exline & Rose, 2013). Third, religiosity usually involves some rule-following and submission to authority (e.g., Graham & Haidt, 2010), another characteristic that goes against emancipation and individualism. Fourth, religion often focuses on concerns outside of the self, such as helping others and serving God (e.g., Pichon, Boccato, & Saroglou, 2007; Shariff & Norenzayan, 2007). Thus, the increasing individualism of American culture may have produced decreased religiosity in more recent time periods and generations.

Based on previous research and cultural changes, we expect a decline in religious affiliation. We also predict declines in religious service attendance; while religiously unaffiliated Americans may attend services for a time, they may become less likely to do so as they feel more disassociated from religion. Most crucially, we predict declines in more private expressions of religious belief and practice, such as prayer, religiosity, and belief in God, with the declines especially evident among young people.
Belonging to a religion and more privately believing in its tenets are traditionally linked (e.g., Park et al., 2013; Smith, Denton, Faris, & Regnerus, 2002); as more Americans become unaffiliated with religion, a greater proportion may become not just unaffiliated but secular in their beliefs and practices. These declines may be especially evident in recent years and among 18- to 29-year-olds, given the generational and cultural trends toward emphasizing social rules less and individual freedom more (known as "Generation Me": Twenge, 2014; or "emancipative values": Welzel, 2013). Moving away from social institutions and community engagement would likely detract from one of the key facets of religion as a whole, that is, community involvement and social value transmission.

A secondary question is whether changes in religious orientation over time are caused by time period or generational (cohort) effects. If successive generations are less religious (forming their religious orientation while young and not changing), any decline would be due to generation. If people of all ages have become less religious during certain times, any decline would be due to time period. New hierarchical linear modeling techniques (called APC or age-period-cohort analyses) attempt to separate the effects of age, generation, and time period (Yang, 2008; Yang & Land, 2013). Some have argued that these techniques do not resolve the identification problem that has long plagued simultaneous analysis of age, period, and cohort effects (e.g., Bell & Jones, 2013); however, these criticisms appear to largely rest on untenable assumptions that are not consistent with basic APC models (Reither et al., 2015). In addition, APC techniques have become widely used. For example, Schwadel (2011) performed an APC analysis on some of the GSS religion variables up to 2006. However, at that time the data included only a handful of Millennials, a generation purported to be less religious; by 2014, Millennials were the entirety of 20- to 29-year-olds. In addition, further time period change may have occurred in the 8 years of data available since 2006. Thus, we perform APC analyses to examine whether shifts in Americans' religious orientation are due to generational or time period effects.

In addition, we examine possible moderators of change over time in religious orientation. Trends may differ between men and women, Blacks and Whites, education levels, and U.S. regions, as these groups differ in their levels of religiosity and cultural focus (Blaine & Crocker, 1995; Piff, Kraus, Cote, Cheng, & Keltner, 2010; Plaut, Markus, & Lachman, 2002; R. Taylor, Chatters, Jayakody, & Levin, 1996; Vandello & Cohen, 1999). We theorize that the decline in religious orientation will be larger among demographic groups and regions with higher social power and more individualism, including Whites, men, those with a college education, and those living in the Northeast and West, and lower or nonexistent among groups with lower social power and less individualism, including Blacks, women, those without a college education, and the Midwest and South (e.g., Piff et al., 2010; Vandello & Cohen, 1999). Groups with relatively high social power might not see themselves as having a significant need for religion or God, so these groups might pioneer the movement toward less religiosity.
Thus, we have three goals in this article: (a) to perform a comprehensive examination of American adults' religious orientation from 1972 through 2014, with a particular emphasis on 2006-2014 and 18- to 29-year-olds, and including effect sizes; (b) to examine whether these changes are due to generation or time period; and (c) to examine whether the trends differ by gender, race, education, or U.S. region.
Sample
We drew from the GSS, 1972-2014, a nationally representative survey of U.S. residents aged 18 and over. Depending on the item, ns range between 12,862 and 58,893. As suggested by the GSS administrators, we weight the analyses by the weight variable WTSSALL to make the sample nationally representative of individuals rather than households and to correct for other sampling biases. However, these weighted analyses differ only very slightly from unweighted analyses. Also as suggested by the administrators, we excluded the black oversamples collected in 1982 and 1987.
Items
We identified and analyzed all items on respondents' own religious orientation asked in at least six administrations of the GSS. They were as follows:
1. Religious preference: "What is your religious preference? Is it Protestant, Catholic, Jewish, some other religion, or no religion?" We analyzed the percentage of respondents who chose "no religion." Asked 1972-2014.
2. Strength of religious affiliation: "Would you call yourself a strong (Christian, Jew, etc.) or not a very strong (Christian, Jew, etc.)?" Response choices were "not very strong," "somewhat strong," and "strong." Asked 1974-2014.
3. Religious service attendance: "How often do you attend religious services?" Response choices were "never," "less than once a year," "about once a year," "several times a year," "about once a month," "2-3 times a month," "nearly every week," "every week," and "more than once a week." Asked 1972-2014.
4. Belief in the afterlife: "Do you believe there is a life after death?" Response choices were "yes" and "no." Asked 1973-2014.
5. Believing the Bible is literal: "Which of these statements comes closest to describing your feelings about the Bible?" Response choices were "The Bible is an ancient book of fables, legends, history, and moral precepts recorded by men"; "The Bible is the inspired word of God but not everything in it should be taken literally, word for word"; and "The Bible is the actual word of God and is to be taken literally, word for word." Asked 1984-2014.
6. Frequency of praying: "About how often do you pray?" Response choices were "never," "less than once a week," "once a week," "several times a week," "once a day," and "several times a day." Asked 1983-2014.
7. Belief in God: "Please look at this card and tell me which of the statements comes closest to expressing what you believe about God." Response choices were "I don't believe in God"; "I don't know whether there is a God and I don't believe there is any way to find out"; "I don't believe in a personal God, but do believe in a Higher Power of some kind"; "I find myself believing in God some of the time, but not at others"; "While I have doubts, I feel that I do believe in God"; and "I know God really exists and I have no doubts about it." Asked 1988-2014.
8. Confidence in religious institutions: "I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say that you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?"
One of the items is "organized religion." Response choices were "hardly any confidence at all," "only some confidence," and "a great deal of confidence." We excluded "don't know" and "refused" responses. Asked 1973-2014.
9. Identification as a religious person: "To what extent do you consider yourself a religious person?" Response choices were "not religious at all," "slightly religious," "moderately religious," and "very religious." Asked 1998 and 2006-2014.
10. Identification as a spiritual person: "To what extent do you consider yourself a spiritual person?" Response choices were "not spiritual at all," "slightly spiritual," "moderately spiritual," and "very spiritual." Asked 1998 and 2006-2014.
Of these, religious preference, strength of religious affiliation, religious service attendance, and confidence in religious institutions are public religious variables, and belief in an afterlife, believing the Bible is literal, frequency of praying, belief in God, identification as a religious person, and identification as a spiritual person are private religious variables.
Possible Moderators
We analyzed moderation by gender (men vs. women), race (White vs. Black, the only racial groups measured in all survey years), education level (high school graduate and below vs. attended some college and above), and U.S. region (Northeast, Midwest, South, and West).
Procedure
Data collected over time can be analyzed in many ways, including grouping by 20-year generation blocks, by decades, or by individual year. Given our focus on both overall change since the 1970s and change since 2006, we separated the data into 5-year intervals from 1972-2004 and reported data by individual year from 2006 to 2014. We report the effect sizes (d, or the difference in terms of standard deviations) and p values for t tests comparing 1972-1974 with 2014, and 2006 with 2014. We also include two figures with all of the year-by-year data for some variables. We report both continuous variables (e.g., the 0-8 scale for religious service attendance) and dichotomous variables (e.g., the percentage who never attend religious services). We use the tables for means and report percentage changes in the text.
For the APC models, we estimated random coefficient models allowing intercepts to vary across time periods (years) and generations (cohorts). Thus, effectively, an intercept (mean religious orientation) score is calculated (using empirical Bayes) for each cohort and each survey year. In addition, a fixed intercept (grand mean) is estimated along with fixed regression coefficients for age and age squared. This model has three variance components: one for variability in intercepts due to cohorts (τ_u0), one for variability in intercepts due to period (τ_v0), and a residual term containing unmodeled variance within cohorts and periods. Variance in the intercepts across time periods and cohorts indicates period and cohort differences, respectively (Yang & Land, 2013). Thus, the technique allows for a separation of the effects of generation/cohort, time period, and age. Weighting could not be used for the mixed-effects analyses because proper probability weighting for variance component estimation requires taking into account pairwise selection probabilities, which is not possible in current statistical software.
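To make the random coefficient structure concrete, the following is a minimal sketch of such a crossed-intercept APC model using Python's statsmodels. It illustrates the modeling approach described above rather than reproducing the authors' own code, and the synthetic data frame and column names (religiosity, year, cohort) are hypothetical.

```python
# Minimal APC-style mixed-effects sketch: fixed effects for age and age
# squared, crossed random intercepts for survey year (period) and birth
# cohort, expressed as variance components over a single grouping.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({"age": rng.integers(18, 90, n),
                   "year": rng.choice(np.arange(1972, 2015), n)})
df["cohort"] = (df["year"] - df["age"]) // 5 * 5     # 5-year birth cohorts
df["religiosity"] = rng.normal(0.0, 0.7, n)          # placeholder composite
df["age_c"] = df["age"] - df["age"].mean()
df["age_c2"] = df["age_c"] ** 2

model = smf.mixedlm(
    "religiosity ~ age_c + age_c2",
    data=df,
    groups=np.ones(n),                               # one group: fully crossed
    vc_formula={"period": "0 + C(year)",             # intercepts vary by year
                "cohort": "0 + C(cohort)"},          # and by birth cohort
)
result = model.fit()
print(result.vcomp)  # estimated variance components for period and cohort
```

Variance components that are large for the period term and near zero for the cohort term would correspond to the time-period-dominated pattern the article reports.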
Trends in Religious Orientation
American adults in the 2010s were less religious than those in previous decades, based on religious service attendance and more private religious expressions such as belief in God, praying, identifying as a religious person, and believing the Bible is the word of God (see Table 1 and Figure 1). These findings held when restricted to 18- to 29-year-olds (see Table 2 and Figure 2), demonstrating that Millennials are less religious than previous generations were at the same age. 2 While religious affiliation and service attendance have been declining since the 1990s, the decrease in more private religious expressions began fairly recently, becoming pronounced only after 2006 (see Figures 1 and 2). Effect sizes ranged from moderate (around d = .50; Cohen, 1988) to small (around d = .20). The increase in never praying among 18- to 29-year-olds was d = .80, equaling the guideline for a large effect.
As found in previous research, fewer Americans now affiliate with a religion. Although the majority of Americans are still religious, three times as many in 2014 (vs. the early 1970s) have no religious affiliation, and twice as many never attend religious services. Fewer have confidence in organized religion; the proportion who said they had "hardly any" confidence went from 14% in the early 1970s to 24% in 2014, a 71% increase, and the proportion who said they had "a great deal" of confidence was cut in half (from 41% to 20%).
By 2014, the declines in religious orientation extended to more personal and private religious beliefs. Five times as many Americans in 2014 (vs. the late 1980s) never prayed (eight times more among those ages 18-29). Slightly more Americans in 2014 (vs. the 1980s) said they prayed "several times a day" (28%, up from 26%), but the 20% who prayed "less than once a week" in the 1980s became only 11% in 2014, apparently moving to "never" praying (3% in the 1980s vs. 15% in 2014).
Americans in 2014 were less likely to say they believed in God. In the late 1980s, only 13% of U.S. adults expressed serious doubts about the existence of God (choosing one of the less certain response choices such as "I don't believe in God"; "I don't know whether there is a God and I don't believe there is any way to find out"; or "I don't believe in a personal God, but do believe in a Higher Power of some kind"; these responses were combined into "Do not believe in God" in Tables 1 and 2). By 2014, however, 22% expressed doubts, a 69% increase. Among 18- to 29-year-olds, 30% had serious doubts by 2014, more than twice as many as in the late 1980s (12%).
Americans have also become less likely to believe that the Bible is the word of God. In 1984, 14% of Americans believed the Bible "is an ancient book of fables, legends, history, and moral precepts recorded by men" rather than the word of God; by 2014, 22% of Americans believed this, a 57% increase. Among 18- to 29-year-olds, 29% believed this by 2014, nearly twice as many as in the late 1980s (15%).
Has religiosity been replaced with spirituality? It does not appear so. Identifying as a spiritual person increased between 1998 and 2006, but then declined between 2006 and 2014 (see Table 1 and Figure 1). In all, 62% identified as moderately or strongly spiritual in 1998, compared with 70% in 2006 and 65% in 2014.
Figure 1. Percentage of all American adults with no religious affiliation, who never attend services, never pray, do not believe in God, are not religious at all, and are not spiritual at all.
Thus, identification as a spiritual person increased 5% between 1998 and 2014, a small increase compared to the larger declines in religious belief and practice. In addition, the percentage of 18- to 29-year-olds identifying as moderately or strongly spiritual declined 6%, from 50% in 1998 to 47% in 2014. In 1998, 14% of 18- to 29-year-olds said they were not spiritual at all, rising to 19% by 2014, a 36% increase (see Table 2 and Figure 2). Thus, there is some suggestion that young people were less spiritual in 2014 versus 1998, though the decline was not statistically significant. In 2014, fewer 18- to 29-year-olds (Millennials) identified as spiritual (47%) than those 50 and above (72%). This suggests that identification as a spiritual person may continue to decline.
One increase in religious belief did emerge: Slightly more Americans believe in life after death (see Tables 1 and 2). Thus, more Americans believe in life after death even as fewer belong to a religion, fewer attend religious services, and fewer pray. In the 1970s, only about 7% of Americans never attended religious services but nevertheless believed in life after death; by 2014, twice as many (15%) showed this disconnect between behavior and belief, and 21% among young people.
Mixed-Effects Analyses to Separate Time Period, Generation, and Age
First, we performed a principal components analysis to determine whether the religion variables could be combined into a composite variable for use in the mixed-effects APC model analyses; combining these variables into an index increases internal reliability over single items. (The religious person and spiritual person variables were not asked in enough years to be included, so we limited this analysis to the other eight variables.) We used the continuous form of six variables (strength of religious affiliation, religious service attendance, frequency of prayer, belief in God, belief in the Bible as literal, confidence in religious institutions), with religious affiliation (none vs. affiliated) and belief in an afterlife (yes vs. no) dichotomous. We included only respondents who completed at least four of the eight items. Horn's (1965) parallel analysis of n = 8,513 cases with no missing values indicated that only a one-component solution had an eigenvalue greater than chance levels. Moreover, all variables loaded highly onto a single principal component explaining 46% of the variance, with a model fit of .94 (on a 0-1 scale). Using the omega function available in the {psych} package in R (Revelle, 2015) indicated that 50% of the common variance in the item scores could be accounted for by a general factor of religious orientation. In addition, the omega coefficient, which is the best estimator of single factor saturation (see Zinbarg, Revelle, Yovel, & Li, 2005), was .70, suggesting that a single factor accounted for much of the variability in these items. The principal components analysis indicated a single principal component across the decades of data collection. Therefore, all variables were z scored, and a composite religious orientation variable was formed (n = 52,497, M = 0.01, SD = 0.69, α = .83).
Next, we performed mixed-effects analyses to separate the effects of time period, generation, and age on the composite variable. 3 The SD in intercepts for period (survey year) was .12 [.09, .16] and for cohorts was .03 [.00, .04], suggesting that almost none of the variability in religious orientation was due to cohorts.
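As an illustration of the composite construction and reliability check reported above, here is a small Python sketch; the item names are hypothetical stand-ins for the eight GSS variables, and the random data is only a placeholder (the real composite yielded α = .83).

```python
# Sketch: z-score eight religiosity items, average them into a composite
# for respondents with at least four valid items, and compute Cronbach's
# alpha as a basic internal-consistency check.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
items = ["affiliated", "strength", "attend", "confidence",
         "afterlife", "bible", "pray", "god"]
X = pd.DataFrame(rng.normal(size=(1000, len(items))), columns=items)

Z = (X - X.mean()) / X.std(ddof=1)          # z-score each item
valid = Z.notna().sum(axis=1) >= 4          # require >= 4 of 8 items
composite = Z[valid].mean(axis=1)           # composite religious orientation

def cronbach_alpha(frame: pd.DataFrame) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    frame = frame.dropna()
    k = frame.shape[1]
    return k / (k - 1) * (1 - frame.var(ddof=1).sum()
                          / frame.sum(axis=1).var(ddof=1))

print(composite.describe())
print(cronbach_alpha(Z))
```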
Overall, there was a marked time period effect when generation and age were controlled (see Figure 3). Religious orientation declined d = −.38 from 1973 to 2014, and d = −.15 between 2006 and 2014. The generational effect was weaker, with religious orientation declining the most between those born in the 1930s and the Millennials born in the 1980s-1990s (d = −.06).
Although religious orientation formed a single factor, we also examined whether the pattern of change was different for public (affiliation, strength of affiliation, service attendance, confidence in religious institutions) and private (belief in the afterlife, belief that the Bible is literal, praying, belief in God) religious practice. Similar to the analyses with one combined variable, time period explained more of the change than birth cohort for both public and private religious practice. However, the pattern of change and its size differed (see Figure 4). The decline in public religious practice (d = −.50 between 1972 and 2014) was larger than the decline in private religious practice (d = −.42 between 1984 and 2014).
Moderators of the Decline in Religious Orientation
We next analyzed whether the time period and cohort decrease in religious orientation (controlled for each other and age) differed based on race, U.S. region, sex, and education level. The trends were moderated by race, with no change in religious orientation for Black Americans (d = .00) and a large decrease among White Americans (d = −.48). In the early 1970s, Whites and Blacks differed little in religious orientation (d = .15, 1973-1974), but by 2014, there was a marked racial difference, with Blacks higher (d = .67). Cohort effects were weak for both Whites and Blacks.
The effects also differed by U.S. region, with the decline in religious orientation largest in the West (d = −.42), followed by the Northeast (d = −.27), the South (d = −.10), and the Midwest (d = −.07). However, Midwesterners showed a pronounced cohort effect from those born in the 1880s to those born in the 1990s (d = −1.15), compared with the nonexistent cohort effects in the other three regions. In the early 1970s, Southern residents were only somewhat more religious than those in the Northeast (d = .23), but by the 2010s, Southerners were moderately higher in religious orientation compared to Northeasterners (d = .40). The West was the least religious region in both eras, with Westerners lower than Southerners in 1972 (d = −.47) but even more so in 2014 (d = −.78).
An intriguing pattern appeared when examining men and women separately: The time period difference was somewhat larger for women (d = −.28) than for men (d = −.12), but men showed a pronounced cohort decline in religious orientation (d = −.93), while women showed virtually no effect for cohort (d = −.02). Similarly, the time period decline in religious orientation was somewhat larger among those who had not attended college (d = −.28) compared with those who attended at least some college (d = −.15). However, there was a moderate cohort decline in religious orientation among those who attended college (d = −.39) and virtually none for those who did not attend college (d = −.02). Overall, gender, race, education, and regional differences in religious commitment grew larger between the 1970s and the 2010s or between cohorts born in the late 19th century and those born in the late 20th century.
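The group comparisons above all rest on the pooled-SD version of Cohen's d. A minimal sketch is below, with the two arrays standing in for any pair of survey waves or demographic groups; the values are synthetic, not the GSS data.

```python
# Sketch: Cohen's d as the mean difference in pooled-SD units.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

rng = np.random.default_rng(0)
wave_1970s = rng.normal(0.15, 0.70, 1500)   # e.g., composite in 1972-1974
wave_2014  = rng.normal(-0.12, 0.70, 1500)  # e.g., composite in 2014
print(cohens_d(wave_1970s, wave_2014))      # negative value => decline
```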
Discussion
By 2014, American adults were less likely to pray, believe in God, identify as religious, attend religious services, or believe the Bible was the word of God than they were in previous decades. Thus, the decline in religious affiliation found in previous research has now extended to religious service attendance and, by 2008 and afterward, to personal religious belief and practice. The only exceptions were an increase in belief in the afterlife and a small increase in identifying as spiritual between 1998 and 2006, limited to those over 30. The declines in religious orientation were particularly striking between the early 2000s and 2014 and among those 18 to 29 years old. Nearly a third of Millennials are not just religiously unaffiliated, but secular in other ways (doubting the existence of God, believing the Bible is a book of fables, not attending religious services, describing oneself as "not religious at all," never praying), and one out of five also say they are "not spiritual at all."
Although religious orientation is often conceptualized as a multidimensional concept (e.g., Cornwall, Albrecht, Cunningham, & Pitcher, 1986; Idler et al., 2003), the present data indicated that declines in religious affiliation extended across various measures of religious participation and commitment. The decline in religious affiliation and participation has now extended to private practices and beliefs, though the decline in private religious practice and belief is smaller and began later than the decline in public religious practice.
Mixed-effects analyses demonstrated that these trends were primarily due to time period. Millennials were less religious than their Boomer and Generation X predecessors were at the same age, demonstrating that their lower religious commitment is not solely due to their developmental stage of young adulthood. However, this appears to be due to a time period effect in which all generations are growing less religious over time. This suggests support for the idea that growing individualism has been accompanied by less religion on a larger cultural basis, with a larger linear cohort decline in some groups (men, Midwesterners, the college educated). These findings contradict popular culture notions of generations cycling back and forth with, for example, a less religious generation being followed by a more religious one. For example, generational theorists Howe and Strauss (2000), who adhere to the theory that generations come in cycles, proposed that Millennials would be more religious than GenX'ers. However, these data strongly suggest that the opposite is true.
Men and women, Blacks and Whites, the college educated and not college educated, and the South versus the Northeast are becoming more polarized in their religious orientation: While differences in religious commitment between these groups were small during the 1970s, they have grown larger in recent years and with recent cohorts. The decline in religious commitment was most pronounced among men, Whites, and those in the Midwest, Northeast, and West, and was nearly absent among Black Americans and small in the South. It appears that groups with relatively high social power have become less likely to see themselves as having a significant need for religion or God in recent years.
In comparison with those from earlier years and generations, American adults in recent years and generations were slightly more likely to believe in an afterlife.
Combined with the decline in religious participation and belief, this might seem paradoxical. One plausible, though speculative, explanation is that this is another example of the rise in entitlement, that is, expecting special privileges without effort (Campbell, Bonacci, Shelton, Exline, & Bushman, 2004; Twenge & Foster, 2010). Entitlement appears in religious and spiritual domains when people see themselves as deserving spiritual rewards or blessings due to their special status. Entitlement centered on afterlife beliefs could be seen as a modern rendition of Pascal's wager, in which the individual observes that believing in God and a positive afterlife has few downsides, but not believing has the major possible downside of condemnation to eternal suffering (Hájek, 2003). However, the current data make it difficult to determine the cause of rising belief in the afterlife.
Limitations and Future Directions
Using the GSS data set has several major advantages, including the ability to examine trends among carefully sampled U.S. adults over long periods of time. Nonetheless, this form of research also has its limitations. Responses are limited to self-report, and measures must be brief. As such, the GSS does not provide the opportunity for nuanced or in-depth measurement of specific ideas of interest over time.
Principal component and omega analyses demonstrated that a single factor captured the eight religious orientation variables. Although religiosity is usually conceptualized as multidimensional (Cornwall et al., 1986; Idler et al., 2003), in this data set, the majority of variation in religious orientation was determined by a single factor. We tried to strike a balance between internal reliability and diversity among individual items by presenting analyses of single items in the tables and focusing the APC analyses on the composite measure and on the public and private practice measures.
Our focus here was on individuals in the United States, so cross-cultural studies should examine temporal trends in religious orientation in other countries. Religious groups may also differ in how they manage the discrepancy between religious participation and afterlife beliefs, based on teachings about the afterlife and whether (and how) the afterlife is connected with choices or behaviors during this life.
Conclusion
The 2010s are a time of tremendous change in the religious landscape of the United States. Although the majority of Americans are still religious, the declines in public religious affiliation observed in previous research have, by 2014, extended to private religious belief and action (such as prayer, belief in God, and identifying as religious). This decline was not replaced by a substantial increase in those identifying as spiritual. The slight increases in afterlife belief represent a potentially important exception to this pattern. Overall, the data suggest a pervasive decline in religious participation and belief among Americans, with a burgeoning minority becoming decidedly nonreligious.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research and/or authorship of this article.
Notes
1. Birth cohort refers to everyone born in a given year, and generation to those born within a specified period.
Both refer to the effects of being born during a certain era and thus are somewhat interchangeable; we will use the term generation most of the time but will use birth cohort when we are specifically referring to birth year. Generational labels (such as Boomers and Millennials) use arbitrary birth year cutoffs; we use these labels only for ease of presentation.
2. In the 2014 survey year, the 18- and 19-year-olds were born after 1995 and thus are iGen instead of Millennials. The n of 18- to 19-year-olds was too small to justify a separate analysis (e.g., n = 51 in 2014). As a proxy, we examined 18- to 22-year-olds (n = 153 in 2014; total n 1972-2014 = 4,927), which in 2014 includes those born 1992-1996 (and thus those at the cusp between Millennials and iGen). In most cases, the decline in religious orientation was even more dramatic among 18- to 22-year-olds than among 18- to 29-year-olds. For example, the percentage of 18- to 22-year-olds who reported no religious affiliation rose from 11% in 1972-1974 to 36% in 2014; the percentage who reported they never prayed rose from 4% in 1980-1984 to 28% in 2014; and the percentage who said they were "not spiritual at all" rose from 13% in 2006 to 25% in 2014. Belief in God declined d = −.54, being a spiritual person declined d = −.21 (1998-2014), and attendance at religious services declined d = −.48. This suggests that iGen will continue the decrease in religious orientation rather than reversing it, even in spirituality.
3. Some controversy has surrounded the issue of which intervals to use in APC models (Bell & Jones, 2013). We analyzed the data in 2-year, 5-year, and 10-year intervals and found that they all produced very similar results.
2019-05-08T13:29:37.658Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "91d446d27b5a27377e23b3165fc6632172630997", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2158244016638133", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "3b0d3784ea0664af08d54f4b8735902157ef75dd", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Psychology" ] }
10758182
pes2o/s2orc
v3-fos-license
Coexistence of quantum and classical flows in quantum turbulence in the $T=0$ limit
Tangles of quantized vortex lines of initial density ${\cal L}(0) \sim 6\times 10^3$\,cm$^{-2}$ and variable amplitude of fluctuations of flow velocity $U(0)$ at the largest length scale were generated in superfluid $^4$He at $T=0.17$\,K, and their free decay ${\cal L}(t)$ was measured. If $U(0)$ is small, the excess random component of vortex line length firstly decays as ${\cal L} \propto t^{-1}$ until it becomes comparable with the structured component responsible for the classical velocity field, and the decay changes to ${\cal L} \propto t^{-3/2}$. The latter regime always ultimately prevails, provided the classical description of $U$ holds. A quantitative model of coexisting cascades of quantum and classical energies describes all regimes of the decay.
Vortices in superfluid 4He are different [4] in that all of them have filamentary cores surrounded by inviscid flow of identical velocity circulation κ = h/m = 0.997 × 10^-3 cm² s^-1 (h and m being Planck's constant and the atomic mass of 4He, respectively) [4]. Hence, turbulence in this system (quantum turbulence or QT) is a tangle of vortex lines [5,6]. Yet, QT might be an analog of the classical scenario in that there are two coexisting structures [7]: one (flow round bundles of vortex lines) possesses all properties of the classical coherent vortices, while another, incoherent component (flow round individual lines) is responsible for the transfer of energy towards the dissipative processes at smaller scales and adjusts its own extent self-consistently. Importantly, the concept of the energy cascade is still potent [8]. Of special interest is the limit of zero temperature, T = 0, at which QT is non-dissipative down to length scales much smaller than the typical distance between vortices, ℓ = L^{-1/2}, where L is the length of vortex line per unit volume. The nature and rate of the corresponding energy cascade and ultimate dissipative processes remain open questions of fundamental importance [9,10]. Two extreme cases of QT in the T = 0 limit have been studied [11-13] and revealed different types of free decay L(t). One ('ultraquantum' or 'Vinen QT') is a random tangle of vortex lines with negligible velocity fluctuations at length scales r ≫ ℓ. Such a tangle is fully described by L. Another limit ('quasiclassical' or 'Kolmogorov QT') is that of partially polarized tangles with the dominant contribution to energy coming from flow round many vortex lines. Here a second parameter is required, the amplitude of velocity fluctuations U at the integral length scale L_i, usually of order the container size D. Many questions remained. Which of these regimes is transient and which is the ultimate 'equilibrium' type? Which parameters describe their interplay? What is the ratio of the contributions from the coherent and random components to the vortex length in the 'equilibrium' state? Our experiment, in which vortex tangles were generated with a known value of U, answers these questions.
The energy, per unit mass, of the turbulent state is the volume-averaged E = (1/2)⟨v²⟩, with velocity v given by the Biot-Savart integral over all vortex lines. We consider a developed bulk QT, for which L_i ≫ ℓ. Then there are two major contributions to the energy, E = E_q + E_c. In the near field r ≪ ℓ, the 'quantum energy' is dominated by the velocity of fluid circulating round individual lines,
E_q ≈ γκ²L, (1)
where γ ≈ ln(ℓ/a_0)/4π and the vortex core radius is a_0 ≈ 1.3 Å [4]. In our experiments, ℓ is within the range 0.14-2 mm; hence, γ ≈ 1.2 ± 0.1 ≈ const. On the other hand, in the far field r ≫ ℓ ('classical length scales'), the flow velocity arises from contributions of many aligned vortex lines. If the forcing is at length scale D, and Re_s ≡ UD/κ ≫ 1 [14], then the coarse-grained velocity field should obey classical fluid dynamics with no dissipation. Hence, the Kolmogorov-like energy cascade [2] is expected, with the classical energy dominated by U,
E_c ∼ U². (2)
In the T = 0 limit, the energy could be removed either by phonon emission due to short-wavelength Kelvin waves [15,16] or by diffusion of small vortex rings [9,10,17-19]. Both processes are related to length scales r ≲ ℓ, and are fuelled by vortex reconnections [20,21]. The flux of energy towards these dissipative processes, ε_d, is expected to obey [22,23]
ε_d = ζκ(κL)². (3)
Whether the dimensionless parameter ζ ∼ 1 depends on the tangle's polarization [24] is still an open question. For ultraquantum tangles, equating Ė_q = γκ²(dL/dt) to −ε_d yields the free decay
L(t) = γ ζ^{-1} κ^{-1} (t + t_V)^{-1}, (4)
with t_V = γ/(ζκL_0) (where L_0 = L(0)). Such decay with a universal prefactor, corresponding to ζ ≈ 0.10, was observed in QT generated after a brief injection of ions in cells of different sizes [11,13] and also in numerical simulations of Vinen QT [25,26].
In the opposite limit of Kolmogorov QT with a dominant flux of classical energy, |Ė_c| ≫ |Ė_q|, the rate of the energy release is controlled by the lifetime of the largest eddies and the cascade time, both of order L_i/U. We hence assume that Ė_c ∼ −U³/L_i and the energy flux at the smallest lengths is
ε_c(t) = βD²(t + t_K)^{-3}, (5)
with t_K = aD/U_0 (where U_0 = U(0)) and the prefactors a, β ∼ 1 depending on the container shape and boundary conditions [13]. Equating ε_c = ε_d results in
L(t) = (β/ζ)^{1/2} κ^{-3/2} D (t + t_K)^{-3/2}, (6)
typical for decaying QT with the classical inertial length saturated by the container size [31]. Such decay with ζ ∼ 0.1 was observed for QT in the T = 0 limit, generated either by a towed grid or after a long intensive injection of ions [13].
In the present work, we developed a method of generating QT in which U can be controlled. A cubic volume with sides D = 4.5 cm, made of six earthed metal plates, contained 4He with a 3He fraction of 2 × 10^-11 [32] at a pressure of 0.1 bar. Experiments were conducted at temperature T = 0.17 K, at which the normal fraction ρ_n/ρ = 1.0 × 10^-7 [33] and the mutual friction parameter α = 8 × 10^-9 [34] are negligible. The mean density of vortex lines L in the cell was evaluated by measuring the losses of charged vortex rings (CVRs) propagating from an injector in the centre of a side plate to the collector at the opposite side [36]. In order not to affect the decaying QT by the injected CVRs, each realization of a vortex tangle, decaying for time t after turning off the injection, was only probed once.
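As a numerical illustration of the two regimes, the sketch below evaluates t_V and t_K and the asymptotic laws (4) and (6) with the paper's constants. U_0 here is an assumed value chosen to lie within the reported range of superfluid Reynolds numbers, and the equation forms follow the reconstructions given above.

```python
# Sketch: Vinen vs. Kolmogorov free-decay asymptotics for L(t), cgs units.
import numpy as np

kappa = 0.997e-3          # cm^2/s, circulation quantum
gamma, zeta, beta = 1.2, 0.10, 4.9
D, a = 4.5, 1.2           # cm; prefactor a ~ 1
L0 = 6.0e3                # cm^-2, initial line density
U0 = 0.05                 # cm/s, assumed (Re_s = U0*D/kappa ~ 2e2)

t_V = gamma / (zeta * kappa * L0)   # ~2 s for these numbers
t_K = a * D / U0                    # ~1e2 s

t = np.logspace(0, 4, 200)          # s
L_vinen = gamma / (zeta * kappa * (t + t_V))                        # Eq. (4)
L_kolm = np.sqrt(beta / zeta) * D * kappa**-1.5 * (t + t_K)**-1.5   # Eq. (6)
t_cross = t[np.argmin(np.abs(L_vinen - L_kolm))]  # rough crossover time
```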
The turbulence was generated by an injector of electrons in the middle of the bottom plate. This had a field-emission tip, to which a negative voltage of magnitude V* in the range 290-380 V was applied during a time ∆t* between 1 s and 500 s, resulting in a current of magnitude I*(V*) in the range 0.8-470 pA to a grid 2 mm from the tip. Injected electrons, each in a bubble of radius 2 nm [37] ('negative ions'), immediately nucleate small vortex rings which quickly (during the first ∼ 0.15 s) build up a dense vortex tangle between the tip and the grid [38]. The ions remain trapped on vortex lines until they reach the grid, where most of them terminate, while the jet of fluid continues into the cell. Thus, by exerting force on these ions, the turbulence is simultaneously forced both on small lengths ∼ ℓ, due to the ballooning out of the charged vortex segments leading to the growth of the line density L, and on large scales ∼ D, due to the increase in the mean velocity ∼ U of the jet.
We relate U to the total hydrodynamic impulse P through P ∼ ρD³U, where ρ = 145 kg m^-3 is the density of helium. Before reaching the grid, each ion transfers to the fluid the impulse eV*/v* (with v* ∼ 0.2 m s^-1 being the mean velocity of ions dragged by the electric field through the slower vortex tangle, a consequence of frequent reconnections at T < 0.7 K [39,40]). The rate of transfer of impulse to the jet into the cell (see Fig. 1) is hence Ṗ₊ ≈ V*I*/v*, while the rate of loss is Ṗ₋ ∼ −P/τ_P ∼ −P²/(ρD⁴), where τ_P ∼ D/U is the time required for the jet to reach the opposite wall, during which the impulse is conserved. The dependence U(t) during injection, which commenced at t = −∆t* and ended at t = 0, can be found from the solution of the equation Ṗ = Ṗ₊ + Ṗ₋:
U(t) = (D/τ*) tanh[(t + ∆t*)/τ*], (7)
where τ* = D²(ρv*/V*I*)^{1/2} is the time scale, for the given injection intensity I*(V*), that separates the regimes of growing U(t) and saturated U. In what follows, we will need a general expression for the value of t_K in (5), where U_0 = U(V*, I*) at t = 0 (here a, b ∼ 1):
t_K = aD/U_0 = aτ*/tanh(b∆t*/τ*). (8)
This relation was firstly tested for the limit of long injection, ∆t* ≫ τ*, in which turbulence with a steady classical energy flux is established. In Fig. 2 we plot experimental L(t) for several decaying vortex tangles generated by injections of the same duration ∆t* = 300 s but of different intensities (V*, I*), for which τ* takes values from 210 s to 25 s. The solid lines are Eq. 6 with the initial values corresponding to t_K = aτ* (from Eq. 8 in the ∆t* ≫ τ* limit) with a = 1.2, and the common late-time asymptotic with (β/ζ)^{1/2} = 7. We thus confirm that a long intensive injection can generate Kolmogorov turbulence whose decay follows Eq. 6, and that our model for the amplitude of the injected large-scale velocity U(V*, I*, ∆t*) and the associated time scale t_K, Eq. 8, is in agreement with experiment.
In Fig. 3, which is the main result, we show the measured L(t) for several decaying vortex tangles, created with different initial values of U_0 by varying ∆t* while keeping V* and I* the same. Except for the top dataset with the longest ∆t* = 500 s, the decay begins with a universal dependence L ∝ (t + t_V)^{-1}, expected for Vinen QT (4). This dependence continues for some time until it gradually switches to L ∝ (t + t_K)^{-3/2}, characteristic of wall-bounded Kolmogorov QT (6). The longer the injection time ∆t* (i.e., the greater the value of U_0), the earlier the switch occurs.
To model the dependence L(t) during free decay, we write the energy balance at length scales ∼ ℓ,
γκ²(dL/dt) = ε_c − ε_d. (9)
Following [41], we assume that the flux of classical energy (5) effectively reaches this length scale only after a delay time ∼ t_K(V*, I*, ∆t*) from the beginning of injection at t = −∆t*. We hence introduce a simple delay function multiplying ε_c in Eq. 9 [41], giving Eq. 10, which can be solved numerically for L(t) subject to the initial parameters L_0 and U_0 (via t_K = aD/U_0). In Fig. 3 we show solutions of Eq. 10 with ζ = 0.10 and β = 4.9 (i.e., with the same late-time asymptotic (6), with (β/ζ)^{1/2} = 7, as in Fig. 2), with values of t_K(V*, I*, ∆t*) calculated by Eq. 8 with a = 1.2, b = 0.7 and v* = 0.2 m s^-1, and with a one-for-all L_0 = 6 × 10³ cm^-2. The good agreement with all experimental L(t) suggests that the model (10) adequately represents the dynamics of QT of arbitrary degree of polarization. We will now discuss some implications of the model.
At early times, whether the decay begins from the Vinen or the Kolmogorov type depends on the interplay of the total L_0 and the vortex length L_0^∥ necessary to sustain the quasiclassical velocity field (from Eq. 6). If L_0 ∼ L_0^∥, only the Kolmogorov decay (6) will be observed from the very beginning (like the top dataset in Fig. 3). On the other hand, with L_0 ≫ L_0^∥, the Vinen regime (4) would firstly dominate. For ℓ ≪ D, any initially excessive quantum energy E_q always decays faster than the classical E_c, because the decay time associated with the Vinen regime (4), τ_V ≃ γ/(ζκL), is shorter than that for the Kolmogorov regime (6). For the ultimate Kolmogorov decay L ∝ t^{-3/2} to be restored while ℓ ≪ D, the condition is Re_s(0) ≳ 1 [42]. In the opposite limit, Re_s(0) ≪ 1, only the Vinen decay could be observed. Note that this criterion differs from the theory by Barenghi et al. [43]. They claim that if a spatially uniform injection of small vortex rings is stopped before the inverse cascade (which promotes large-scale velocity fluctuations upon the tangling of vortex rings) extends up to the largest length scale ∼ D, only the L ∝ t^{-1} decay can be observed. However, in all our experiments in which either a beam of vortex rings [11] or a vortex tangle (this work) is injected, the large-scale velocity component U_0 is present from the very moment of tangling, without the need of an inverse cascade. This is because of the collimated profile of the resulting jets.
Finally, approaching the crossing point of the asymptotics (4) and (6), the formal solution of Eq. 10 deviates from (6) and eventually switches to (4). However, the model of homogeneous QT might no longer be adequate at the corresponding ℓ ∼ D/6. Instead, it is expected that remnant vortices will replace the decaying tangle at similar densities, L_r ∼ 2 ln(D/a_0)D^{-2} ∼ 40 D^{-2} [44].
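A minimal numerical rendering of this model is sketched below, assuming the fluxes (3) and (5) as reconstructed above and a smooth exponential ramp standing in for the delay function, whose published form is not reproduced here; the parameter values follow the text, while U_0 and ∆t* are illustrative.

```python
# Sketch: free decay of L(t) from the energy balance at scale ~l,
#   gamma*kappa^2 * dL/dt = f(t)*eps_c(t) - zeta*kappa^3*L^2,
# with an assumed exponential ramp f(t) mimicking the delayed arrival
# of the classical flux at small scales. cgs units throughout.
import numpy as np
from scipy.integrate import solve_ivp

kappa = 0.997e-3
gamma, zeta, beta = 1.2, 0.10, 4.9
D, a = 4.5, 1.2
L0 = 6.0e3                 # cm^-2, one-for-all initial density
U0 = 0.05                  # cm/s, illustrative
dt_star = 300.0            # s, illustrative injection duration
tK = a * D / U0

def dLdt(t, y):
    L = y[0]
    eps_d = zeta * kappa**3 * L**2               # quantum dissipation, Eq. (3)
    eps_c = beta * D**2 / (t + tK)**3            # classical flux, Eq. (5)
    ramp = 1.0 - np.exp(-(t + dt_star) / tK)     # assumed delay function
    return [(ramp * eps_c - eps_d) / (gamma * kappa**2)]

sol = solve_ivp(dLdt, (0.0, 2.0e3), [L0], rtol=1e-8, dense_output=True)
t = np.logspace(0, 3, 60)
L = sol.sol(t)[0]   # compare against the (t+t_V)^-1 and (t+t_K)^-3/2 laws
```

With these numbers the dissipative flux dominates at t = 0, so the solution first follows the Vinen law and later crosses over to the Kolmogorov law, reproducing the qualitative behavior of the lower datasets in Fig. 3.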
Let us turn to the question of whether coherent bundles of vortex lines might be identifiable during the Kolmogorov decay L ∝ t^{-3/2}. This could be characterized by the ratio χ ≡ L^∥/L, where L^∥ is the length of aligned vortices which generate the quasiclassical velocity field, while the rest, L^× = L − L^∥, is made of random vortex segments. A similar decomposition was introduced previously [45-48] and found meaningful [7]. To estimate L^∥, we sum, in quadrature, contributions to the classical vorticity from different length scales [46],
(κL^∥)² ≈ ∫ k²E(k) dk, (11)
where E(k) = Cε^{2/3}k^{-5/3} is the Kolmogorov K41 spectrum with C ≈ 1.5, and x ∼ 1 defines the effective cut-off wavenumber ∼ 2πx/ℓ for the classical spectrum. If the classical energy flux ε_c dominates, ε_c ≈ ε_d, then, with Eq. 3, χ becomes a constant independent of time (12). Thus, the late-time decaying tangles maintain a substantial and constant degree of alignment [49].
In fact, the phenomenological expression (3) for the rate of dissipation ε_d might have alternatives. One could argue that the component L^× is passively advected by the classical flow and is hence involved in the transfer of energy at the same rate as in Vinen QT [45,47], while L^∥ might not contribute to the removal of energy as efficiently, because it is related to the classical velocity field which evolves at its own pace. Hence, as a special case,
ε_d = ζκ(κL^×)², (13)
with ζ = 0.10. Eq. 13 would still be compatible with all previous experimental observations, including those for grid turbulence [13], provided L^× ∼ L. Assuming that, as in the previous case of Eq. 12, χ is constant during the late-time decay, and using (11) and (13), we arrive at an implicit equation for χ (14). Its solution for x = 1 is χ = 0.57, indicating that L^∥ and L^× are indeed comparable. We solved Eq. 9 numerically with ε_d given by (11) and (13), instead of (3). It turned out that all the experimental data L(t), shown in Fig. 2 and Fig. 3, can be modelled nearly as satisfactorily, e.g., if one chooses ζ = 0.10, β = 0.8, a = 1.0, b = 0.7 and x = 0.5. Thus, the important question of which of Eq. 3 and Eq. 13 is more appropriate to describe the dynamics of QT of various degrees of polarization requires further investigation.
To conclude, we developed a technique of generating, in the T = 0 limit, QT with a known amplitude U_0 of the flow velocity at the integral length D. For the range of injection conditions as in Fig. 3, the superfluid Reynolds number Re_s spans the range between 25 and 650. Our model, which combines the fluxes of quantum and classical energy [11], describes all features of the observed decays L(t). If the initial line density L_0 greatly exceeds the aligned fraction L_0^∥(U_0) associated with the quasiclassical flow, L rapidly decreases following the universal decay law of Vinen QT, L ∝ (t + t_V)^{-1}. Yet, the initial quasiclassical flow decays more slowly, and when L^∥/L^× reaches ∼ 1, the late-time decay of Kolmogorov QT, L ∝ D(t + t_K)^{-3/2}, universal for the given container, is maintained. Only for very small initial U_0 ≲ κ/D (i.e., when Re_s ≲ 1 is too small to warrant classical behavior of even the largest eddies) can this ultimate regime never be reached.
We acknowledge fruitful discussions with Joe Vinen and Henry Hall, help by Alexandr Levchenko and Steve May in constructing equipment, and the supply by Peter McClintock of isotopically pure 4He. Support was provided by EPSRC under EP/E001009, GR/R94855, EP/I003738/1, and EP/H04762X.
FIG. 1. Side cross-section of the experimental cell with the injector tip and grid at the bottom. The top row, left to right, illustrates the development of the vortex tangle (blue) and large-scale flow (red) after a brief injection. The bottom row shows the development during a continuous injection.
FIG. 2. L(t) for decaying tangles, forced for the same duration ∆t* = 300 s but by different V* and I* (listed in legend). Lines correspond to Eq. 6 (see text).
2017-03-10T12:04:53.000Z
2017-03-10T00:00:00.000
{ "year": 2017, "sha1": "82150ae62ed16b19fe14841ab628b4be1c7f2994", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.118.134501", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "82150ae62ed16b19fe14841ab628b4be1c7f2994", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
637200
pes2o/s2orc
v3-fos-license
Acculturation and body mass index among marriage-based immigrant Vietnamese women in Korea
Objective
This study aimed to analyze the association of socioeconomic factors, acculturation, and body mass index (BMI), as the first large prospective cohort study to determine the state of health of Vietnamese-born migrant women residing in Korea.
Methods
Participants were Vietnamese marriage-based immigrant women living in Korea. Data (n=1,066) were collected during both the baseline (2006-2011) and follow-up (2012-2014) periods in 34 cities in Korea.
Results
The results show that acculturation stress is relatively low among participants. Current BMI showed a significant difference according to current age, monthly family income, and psychophysical stress. Depending on age, education level, and monthly family income, we identified a significant difference in the annual BMI change. In correlation analysis, current BMI was significantly associated with age at arrival, reading and writing in Korean language adaptation, and psychophysical stress. Annual BMI change was significantly associated with age at arrival and years since immigration.
Conclusion
Our analysis revealed that acculturation measured by the Acculturative Stress Scale for International Students had no association with current BMI or annual BMI change, but had an association with several socioeconomic statuses. This study had the advantage that subjects had a homogeneous background of marriage-based immigrant women, so we could examine the association of BMI and acculturation without major confounding factors.
Introduction
Overweight and obesity are currently critical global public health issues. According to the World Health Organization, approximately 1.6 billion of the world's population is estimated to be overweight, and 400 million are estimated to be obese [1]. In recent years, obesity has been considered a key disease because it contributes to cardiovascular disease, diabetes, cancer, inflammatory disease, and chronic metabolic diseases. In Korea, rapid economic growth and globalization have increased the number of obese people due to changes in dietary habits and a lack of physical activity. The health problems related to obesity have become one of the most important social issues. The prevalence of obesity has steadily increased in parallel with socioeconomic growth in Korea [2]. According to the Korea National Health and Nutrition Examination Survey, the percentage of adults who are obese (body mass index [BMI] of 25 or higher) increased from 25.1% in 1998 to 28% in 2013 [3].
In Korea, there has been an increase in international marriages as a result of economic growth and globalization. According to the National Statistical Office of the Republic of Korea, international marriage comprised 23.3% of marriages in 2014, and marriage between Korean men and Vietnamese women accounted for 29.4%, the second most common and rapidly increasing type of international marriage after marriage between Korean men and Chinese women [4]. As international marriages have increased, the health problems of married immigrant women have emerged as a new issue.
Acculturation refers to a change taking place in one or both groups when 2 cultures are in relatively long contact. The effect of acculturation appears across different cultural levels of interaction [5].
The culture of the host country, such as the ecological environment, cultural heritage, dietary habits, or education, influences immigrants to physically and psychologically resemble the resident population. Acculturation affects not only behavior patterns but also psychophysical wellbeing. To explain the mental problems arising in the process of acculturation, Berry and Annis [6] suggested the term 'acculturation stress' for the suffering and adverse effects immigrants experience during acculturation. Using acculturation scales might help to measure acculturation stress accurately and objectively and to understand the relationship between migration and obesity.
Overweight and obesity can be considered among the most important health-related factors arising from a morbid lifestyle. Migrants, through the acculturation process, undergo changes in lifestyle that can cause corresponding changes in their health status. Many studies have examined the relationship between obesity and acculturation. Studies on Asian Americans identified a positive association between acculturation and body weight [7,8]. Other studies have confirmed an increase in unhealthy weight gain among immigrants who have moved from a low-income country to a high-income country [9]. However, acculturation varies depending on ethnicity, sex, age, time of migration, and duration of residence. Using a prospective cohort study, this study aims to 1) measure acculturative stress by using the Acculturative Stress Scale for International Students (ASSIS), 2) analyze the association between acculturation, current BMI, and annual change of BMI, and 3) investigate the impact of socioeconomic factors of marriage-based immigrants on current BMI and annual change in BMI.
Materials and methods
A total of 1,074 women participated in both the baseline and follow-up studies, conducted in 34 cities where many Vietnamese immigrant women lived. Participants were recruited through advertisement by the Marriage Immigrant Family Support Center and the Korean Language School for Marriage Immigrants. After completing the survey and measuring BMI, 1,066 women were finally selected as participants of the study. The survey included basic questions, such as age, age at arrival, life habits (food adaptation, dietary change, Vietnamese dietary times, and language preference), and socioeconomic factors (monthly family income and education).
In addition, in order to measure the acculturation stress of Vietnamese immigrant women, the study used the ASSIS [10], adapted in Korean [11] and translated into Vietnamese. It consists of 36 items rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Total scores range from 36 to 180, and higher scores indicate higher acculturative stress.
We calculated BMI as weight in kilograms divided by the square of the height in meters (kg/m²). Research staff measured each participant's height and weight at the study site using a height measuring rod and a standard scale. Annual change of BMI was calculated as BMI in the follow-up study minus BMI in the baseline study, divided by the follow-up period.
Statistical analysis
We conducted descriptive analysis in order to identify the distribution of sociodemographic features and acculturation variables. Using analysis of variance and t-tests, mean BMI was compared across sociodemographic characteristics and acculturation statuses.
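The BMI bookkeeping just described is straightforward; the following is a small illustrative sketch in Python, with hypothetical column names and values.

```python
# Sketch: BMI at each visit and the annualized change between visits.
import pandas as pd

df = pd.DataFrame({
    "height_m":        [1.55, 1.60, 1.58],
    "weight_baseline": [50.0, 55.0, 48.5],   # kg
    "weight_followup": [52.0, 56.5, 49.0],   # kg
    "years_followup":  [3.0,  2.5,  2.0],    # follow-up period, years
})

df["bmi_baseline"] = df["weight_baseline"] / df["height_m"] ** 2
df["bmi_followup"] = df["weight_followup"] / df["height_m"] ** 2
# Annual change: (follow-up BMI - baseline BMI) / follow-up period
df["bmi_change_per_year"] = ((df["bmi_followup"] - df["bmi_baseline"])
                             / df["years_followup"])
print(df[["bmi_baseline", "bmi_followup", "bmi_change_per_year"]])
```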
We also conducted bivariate linear regression and multiple linear regressions to estimate the unadjusted, age-adjusted, and multivariate-adjusted association between each acculturation variable and BMI individually. In order to assess confounding effects, each of the covariates, which included age, monthly family income, and education, was added into the model. We used SPSS ver. 21.0 (IBM Corp., Armonk, NY, USA) for the analysis. Continuous variables were described as mean±standard deviation, and results were considered statistically significant when the P-value was less than 0.05.
Results
The sample of this study was a total of 1,066 Vietnamese marriage-based immigrant women. The average age of the participants was 28.51±4.25 years, mean age at arrival was 21.63±3.69 years, mean BMI was 21.53±2.71 kg/m², and mean annual BMI change was 0.135±0.431 kg/m²/year. The average ASSIS item score was 2.23±0.74, lower than the scale midpoint of 3, meaning that acculturation stress was relatively low. The largest percentage, 39.4% (420 participants), were educated from the age of 13-15, and 49% (511 participants) had a monthly family income of 1,500 to 3,000 dollars. Sixty point four percent of participants felt that they had no economic problems. Among the participants, 78.2% were first-generation immigrants who married and migrated to Korea before the age of 24. The largest percentage (41.5%) had lived in Korea for 6 to 8 years. In terms of language, 62% answered that they were good/excellent at speaking Korean, and 64% were good/excellent at reading and writing Korean. Sixty-one point seven percent of participants had dietary changes after immigration, and 62.6% had difficulty in adjusting to Korean food (Table 1).
Current BMI significantly differed in terms of current age, monthly family income, and psychophysical stress, but showed no difference by age at arrival, years since immigration, ASSIS quantile level, education level, language adaptation, emotional stress, economic problems, Korean food adaptation, or dietary change after immigration. Annual BMI change significantly differed with age, education level, and monthly family income, but showed no difference by age at arrival, years since immigration, ASSIS quantile level, language adaptation, psychophysical stress, emotional stress, economic problems, Korean food adaptation, or dietary change after immigration (Table 1).
Table 2 indicates unadjusted, age-adjusted, and multivariate-adjusted (adjusted for age, education level, and household income) parameter estimates for all acculturation variables. The associations of ASSIS with current BMI and with annual BMI change were β=−0.134 (standard error [SE]=0.132) and β=−0.007 (SE=0.023), respectively; neither was significant. In unadjusted models, current BMI was positively correlated with age at arrival (β=0.077; SE=0.023), but there was no correlation between them in the age-adjusted and multivariate-adjusted models. Annual change of BMI was not correlated with age at arrival in the unadjusted and multivariate-adjusted models but had a positive correlation in the age-adjusted model (β=0.015; SE=0.07). Less than 6 years since immigration showed a significant association with the annual change of BMI (β=0.088; SE=0.042) in the multivariate-adjusted model.
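Although the analyses were run in SPSS, the unadjusted and multivariate-adjusted models have a direct equivalent in any regression framework. The sketch below uses Python's statsmodels with synthetic data and hypothetical column names to show the structure of the two models; it is not the authors' analysis.

```python
# Sketch: unadjusted vs. multivariate-adjusted linear models for one
# acculturation variable (ASSIS score) against current BMI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1066
df = pd.DataFrame({
    "bmi":       rng.normal(21.5, 2.7, n),
    "assis":     rng.normal(2.2, 0.74, n),
    "age":       rng.normal(28.5, 4.3, n),
    "education": rng.integers(0, 3, n),    # coded education level
    "income":    rng.integers(0, 3, n),    # coded monthly family income
})

unadjusted = smf.ols("bmi ~ assis", data=df).fit()
adjusted = smf.ols("bmi ~ assis + age + C(education) + C(income)",
                   data=df).fit()
print(unadjusted.params["assis"], unadjusted.bse["assis"])
print(adjusted.params["assis"], adjusted.bse["assis"])
```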
Regarding the Korean language, a good adaptation level in reading and writing was significantly correlated with current BMI in the unadjusted (β=−0.470; SE=0.210), age-adjusted (β=−0.480; SE=0.207), and multivariate-adjusted models (β=−0.574; SE=0.210), but there was no significant correlation between current BMI and adaptation to spoken Korean. Annual change of BMI was not significantly associated with adaptation to Korean reading, writing, or speaking. The absence of psychophysical stress was significantly correlated with current BMI in the unadjusted (β=0.491; SE=0.171), age-adjusted (β=0.515; SE=0.169), and multivariate-adjusted models (β=0.560; SE=0.172), but was not correlated with annual change of BMI. Other acculturation variables, including economic problems, Korean food adaptation, dietary change after immigration, and Vietnamese dietary times, were not significantly correlated with either current BMI or annual change of BMI.
Discussion
This study examined the association between acculturation and BMI among Vietnamese marriage-based immigrant women in Korea. In the correlation analysis, current BMI was associated with age at arrival, reading and writing in Korean language adaptation, and psychophysical stress, and annual BMI change was significantly correlated with age at arrival and years since immigration. The participants were first-generation marriage-based immigrants whose mean age at arrival was 21.63±3.69 years. Participants were in general well acculturated.
Delavari et al. [12] have argued that previous studies on migration and acculturation used inconsistent methods to measure acculturation, relying on surrogate measures, which makes it difficult to conduct comparative or meta-analyses. They suggested using standardized acculturation scales in order to obtain accurate results. Our study conducted 2 surveys, the ASSIS and the Suinn-Lew Asian Self-Identity Acculturation Scale (SL-ASIA), both standardized acculturation scales used in many other studies, directed at 300 marriage-based immigrant women. Of the 2 surveys, we chose to use the ASSIS because it showed a higher Cronbach's α coefficient (0.966) than the SL-ASIA. This study complemented surrogate markers of acculturation by investigating years since immigration, language adaptation, and food adaptation.
A study on immigrant Hispanics in the US found weak relationships between acculturation and obesity when controlling for socioeconomic status [13]. For Mexican Americans, socioeconomic status and assimilation affected obesity differently depending on sex [14]. A systematic review found 9 papers that studied the association between acculturation and overweight/obesity among immigrant populations who moved from low- or medium-income countries to high-income countries using a standardized acculturation scale. Of these 9 studies, 6 showed that higher acculturation was positively associated with BMI, and 3 showed associations between higher acculturation and lower BMI, mainly among women [12]. A study on low-income Puerto Rican women in the US found that participants who were less acculturated had a lower risk of obesity [15]. However, our study showed no significant association between acculturation stress and BMI. Since the various studies on acculturation and obesity have found no consistent results, further follow-up studies are needed.
A previous study analyzed data from the US New Immigrant Survey (NIS) and found that immigrants who arrived in the US before the age of 20 had a higher chance of being overweight/obese than those who arrived at older ages, as their duration of residence increased [16]. This suggests that age at arrival may affect overweight/obesity prevalence. However, our study found no correlation between age at arrival and BMI, possibly because the variation in age at arrival was small (21.63±3.69 years).

Among immigrants living in the US, there were significant positive associations between weight status and duration of residence [17]. A study on Asian Americans found that foreign-born Asian Americans were more likely to be obese with increasing duration of residence [18]. A study in Canada also confirmed that obesity prevalence was positively associated with years since immigration [19]. Another study found that although immigrants were less obese than the general population on their arrival in Canada, their BMI increased with a longer period of residence [20]. Research using the US NIS reported that the risk of being obese/overweight was increased among immigrants who had migrated between the ages of 21 and 30 and had resided for more than 1 year [16]. Among Mexican-born women who had resided for more than 10 years, a significant difference in BMI was identified [16]. Similarly, a longitudinal study of Asian Americans' acculturation and BMI reported that, among foreign-born Asians, a short period of residence in the US was associated with larger 5-year increases in BMI compared with those who had resided in the US for more than 25 years [21]. Our study did not find a difference in current BMI or annual change of BMI by duration of residence in Korea or age at arrival. There was no significant change in BMI, possibly because the participants were all women and 79.3% of them had resided in Korea for a relatively short period (less than 8 years) since immigration.

In our study, current BMI differed significantly by monthly family income. A previous study on the association between economic status and body weight examined 4,647 participants and found that BMI was inversely associated with socioeconomic status, which might derive from divergent health behaviors [22]. A study of Finnish women and men using income data from a taxation register found a link between obesity and income disadvantage, particularly for women from a better socioeconomic background [23]. Other studies have also confirmed the association between economic status and body weight among the general population in the US [24-26]. Based on these studies, we infer that the difference in current BMI by monthly family income might derive from economic status rather than acculturation.

Past studies have shown a weak association between language and BMI [8]. Our study found an association between Korean language adaptation (reading and writing) and current BMI. This may reflect the unique situation of the participants: they were marriage-based immigrants who lived with Korean husbands and had to use the Korean language more than other immigrants who came to Korea with their whole family or resided alone.

As the duration of residence increases, immigrants are more likely to change their dietary habits to resemble those of the host country. Such dietary change may affect their body weight [27-29] and cause obesity.
A study of immigrants living in the US found that those who reported a high degree of dietary change were more likely to become overweight or obese than those who reported little dietary change [16]. However, our results showed no difference in BMI according to whether a person had experienced a dietary change. One possible explanation is that our survey asked only whether a dietary change had occurred, not the degree of change. It may also be explained by the fact that Korean food is not hugely different from Vietnamese food.

In this study, our analysis found a significant correlation between psychophysical stress and BMI among Vietnamese immigrant women. This result remained the same after adjusting for age, education level, and monthly family income; thus, we could confirm the importance of psychophysical stress among Vietnamese marriage-based immigrant women. We could also infer the relationship between acculturation and BMI, which is a strong point of this study. Furthermore, our study is reliable because heights and weights were measured with the same measurement tools and BMI was calculated accurately as a continuous value.

The participants in this study were young Vietnamese women who were first-generation marriage-based immigrants. Because the participants had a homogeneous background, we did not have to consider many other factors affecting acculturation. However, since the participants had resided in Korea for a relatively short period, they might be less acculturated; accordingly, there was no group spanning widely varying levels of acculturation, and few participants would have been fully acculturated. Compared to other immigrants who moved to Korea with their whole family or resided alone, the participants in this study were in a unique circumstance in that they immigrated to Korea through marriage and had to use the Korean language, eat Korean food, and encounter Korean culture.

Ideally, a longitudinal study would follow migrants from before migration up to 5 years post-migration [30], with a baseline survey of demographics, weight, and lifestyle and follow-up surveys collecting annual data on weight, lifestyle, and acculturation. This would show when individual behaviors change and how these changes explain weight changes. If behavior changes can be forecast, interventions can be provided to prevent immigrants from adopting detrimental changes [30]. Thus, our cohort study may provide policy implications for health programs for Vietnamese immigrant women in Korea. It can also serve as the basis for future studies on marriage-based immigrant women.

In conclusion, this study was the first large prospective cohort study to examine the health condition of Vietnamese marriage-based immigrant women living in Korea. The results show that acculturation measured by the ASSIS had no association with current BMI or annual BMI change, whereas current BMI was associated with monthly family income. Furthermore, the psychophysical stress of the participants was significantly correlated with BMI even after adjusting for age, education, and monthly family income, underlining the importance of managing their psychophysical stress to improve their health.
Investigations of Bragg reflectors in nanowire lasers

The reflectivity of various Bragg reflectors in connection with waveguide structures, including nanowires, has been investigated using modal reflection and transmission matrices. A semi-analytical model was applied, yielding increased understanding of the diffraction effects present in such gratings. Planar waveguides and nanowire lasers are considered in particular. Two geometries are compared; Bragg reflectors within the waveguide are shown to have significant advantages over Bragg reflectors in the substrate when diffraction effects are significant.

I. INTRODUCTION

Semiconductor nanowires, including nanowire lasers, are promising as building blocks for the realization of nanoscale photonic devices [1,2]. Various techniques such as molecular beam epitaxy (MBE) or metalorganic chemical vapour deposition (MOCVD) can be used to form nanowires with accurately controlled geometry and material composition, yielding a high level of flexibility [3]. To obtain an efficient laser resonator, the reflectivity at the end facet of the nanowire must be high. The refractive index contrast between the semiconductor nanowire and its surroundings is typically very large; the simplest nanowire-laser designs could thus use the cleaved end facets as reflectors. With such designs, the reflectivity of the guided lasing mode is quite moderate for single-mode semiconductor nanowire lasers (∼25% for GaAs-based nanowires, ∼18% for ZnO-based) [4-6].

Bragg gratings have been proposed to obtain a higher end facet reflectivity. Such gratings are fully compatible with most nanowire fabrication methods, e.g., MBE and MOCVD, and have already been realized experimentally [7]. Chen et al. have performed numerical analyses of nanowire Bragg structures, showing that a nanowire superlattice can be used to achieve near-unity modal reflectivity at single-mode operation [8]. Additionally, they have performed an optoelectronic analysis of nanowire lasers with distributed Bragg reflector mirrors [9], showing a significant improvement in output power. Friedler et al. [10] have used coupled mode analysis to calculate the reflectivity of a dielectric Bragg grating within a GaAs nanowire. They conclude that the reflectivity of such a grating is rather poor in the single-mode regime for a GaAs nanowire, and propose to rather use metallic mirrors.

When the lateral scale of a waveguide is of the order of the wavelength of the guided light, diffraction effects become significant. In this work we perform a detailed analysis of Bragg grating reflectors in connection with such diffractive waveguides, to investigate in which regimes a Bragg grating is efficient. A semi-analytical model is used; compared to finite element methods, this helps in explaining more of the mechanisms that influence the reflectivity. A substrate grating (Fig. 1 II) is found to have surprisingly low reflectivity compared to a grating within the waveguide (Fig. 1 I). Furthermore, we see that even for extremely small waveguides, where only a small fraction of the field is within the waveguide, a near-unity reflectivity may be obtained by having enough periods in the Bragg grating.

The reflection and transmission properties of an interface are fully described by reflection and transmission matrices. These matrices describe the amount of mode i that is reflected or transmitted into mode j.
In our previous work, a formalism was developed to calculate the transmission and reflection matrices for end facets of waveguides [6]. The method is particularly useful for highly diffractive waveguides and nanowire laser applications. In this paper we extend the formalism to examine the effect of Bragg reflectors, and consider two different geometries: (I) a Bragg reflector at the top of the nanowire (within the waveguide), and (II) a Bragg reflector in the substrate. The two geometries are sketched in Fig. 1. A heterostructure based on GaAs and Al0.3Ga0.7As is used throughout as an example.

The outline of this article is as follows. The multimode transfer matrix formalism is presented in Sec. II. The calculation model for the reflection and transmission matrices is presented in Sec. III, with a brief summary of previous work on the reflection at the end facet of a waveguide, as well as the generalizations necessary to describe a Bragg grating within a waveguide or in the substrate. Sec. IV contains a discussion of the design of the Bragg gratings. Numerical results for a planar waveguide structure are given in Sec. V, and results for nanowires, with 2D confinement, are given in Sec. VI.

II. TRANSFER MATRIX FORMALISM

The theory of transfer and scattering matrices can be found in standard textbooks [11]. We briefly review the concepts here to introduce our choice of notation. Consider a stack of layers, with layer boundaries perpendicular to the propagation axis z. Each layer is homogeneous with respect to z. The field in each layer can be described using its modes. Throughout this article we define modes as pairs of electric and magnetic fields that are eigenfunctions of the electromagnetic propagation operator along the z-axis. For an infinitely long waveguide that is homogeneous along the z-axis, the modes correspond to the eigenmodes of the whole structure. However, for waveguides of finite length or with inhomogeneities, the modes are merely local modes, not to be confused with the supermodes of the overall structure.

Let the forward propagating mode n in layer i have amplitude a_n^i, and the backward propagating mode have amplitude b_n^i. The vectors a^i and b^i contain the amplitudes of all forward and backward propagating modes, respectively. Let r^{ji} and t^{ji} be matrices describing the modal reflection and transmission, respectively, for light incident from layer i towards layer j; similarly, r^{ij} and t^{ij} describe the reflection/transmission from the opposite side. Using these matrices, we can relate the field in layer i to the field in layer j:

b^i = r^{ji} a^i + t^{ij} b^j,   a^j = t^{ji} a^i + r^{ij} b^j.   (1)

The matrix r^{ji} has elements r_{kl}^{ji}, i.e., r^{ji} = [r_{kl}^{ji}], and similarly for t^{ji}, r^{ij}, and t^{ij}. Eq. (1) can be rewritten in matrix form as

(b^i; a^j) = S^{ji} (a^i; b^j).   (2)

Here S^{ji} is the scattering matrix:

S^{ji} = [ r^{ji}  t^{ij} ;  t^{ji}  r^{ij} ].   (3)

When considering a sequence of layers, it is convenient to reformulate (2) so that the field in layer j is explicitly expressed using the field in layer i, i.e.,

(a^j; b^j) = M^{ji} (a^i; b^i).   (4)

The matrix M^{ji} is known as the transfer matrix; a general transfer matrix is illustrated in Fig. 2. In light of (1), it can be expressed in terms of the blocks of the scattering matrix S^{ji}:

M^{ji} = [ t^{ji} − r^{ij} (t^{ij})^{(+)} r^{ji}   r^{ij} (t^{ij})^{(+)} ;  −(t^{ij})^{(+)} r^{ji}   (t^{ij})^{(+)} ].   (5)

Here, the superscript (+) denotes the matrix inverse or Moore–Penrose pseudoinverse, depending on whether the matrix is square or rectangular. The presence of evanescent modes in a layer can cause ill-conditioned transfer matrices.
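As a concrete check of Eq. (5), the following sketch assembles a transfer matrix from given scattering blocks and verifies Eqs. (1) and (4) numerically. The mode count and the random, well-conditioned blocks are assumptions of the sketch, not values from the paper.

```python
# A minimal numerical sketch of Eq. (5): assembling the transfer matrix M
# from the four blocks of a scattering matrix.
import numpy as np

def transfer_from_scattering(r_ji, t_ji, r_ij, t_ij):
    """Build M^{ji} from the scattering blocks, using the Moore-Penrose
    pseudoinverse when t_ij is rectangular (different mode counts)."""
    t_ij_p = np.linalg.pinv(t_ij)                 # (t^{ij})^{(+)}
    return np.block([
        [t_ji - r_ij @ t_ij_p @ r_ji, r_ij @ t_ij_p],
        [-t_ij_p @ r_ji,              t_ij_p],
    ])

# Self-consistency check on random, well-conditioned blocks:
rng = np.random.default_rng(0)
n = 4
r_ji, t_ji, r_ij, t_ij = (rng.standard_normal((n, n)) for _ in range(4))
M = transfer_from_scattering(r_ji, t_ji, r_ij, t_ij)
a_i, b_j = rng.standard_normal(n), rng.standard_normal(n)
b_i = r_ji @ a_i + t_ij @ b_j                     # Eq. (1), first relation
a_j = t_ji @ a_i + r_ij @ b_j                     # Eq. (1), second relation
assert np.allclose(M @ np.concatenate([a_i, b_i]),
                   np.concatenate([a_j, b_j]))    # Eq. (4) holds
```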
As the transmission coefficients t_{kl}^{ij} involving evanescent modes may be extremely small, matrix inversion of the transmission matrix may cause numerical instabilities. To avoid such problems, it is preferable to use recursive relations derived from the transfer matrices rather than direct matrix multiplication. We consider a stack of three layers, 1, 2, and 3; the individual transfer matrices are multiplied to find the total reflection and transmission properties. Recall that r^{ji} (t^{ji}) denotes the reflection (transmission) from layer i to layer j. The combined reflection and transmission coefficients for the system of layers are given by:

r^{31} = r^{21} + t^{12} (I − r^{32} r^{12})^{(+)} r^{32} t^{21},   (6a)
t^{31} = t^{32} [I + r^{12} (I − r^{32} r^{12})^{(+)} r^{32}] t^{21},   (6b)
r^{13} = r^{23} + t^{32} r^{12} (I − r^{32} r^{12})^{(+)} t^{23}.   (6c)

Propagation in the z-direction within one layer can be described in the same manner. Mode k propagates according to

a_k → a_k e^{iβ_k d},   (7)

where β_k is the modal propagation constant in the z-direction and d is the propagation distance.

III. FINDING THE REFLECTION AND TRANSMISSION MATRICES

The problem of finding the reflection and transmission matrices at the end facet of a waveguide terminated in a homogeneous medium has been addressed by us previously [6]; here we briefly sum up the main results. The geometry of the problem is shown in Fig. 3, here exemplified using a circular waveguide. We describe the field at both sides of the boundary, z = 0, using a set of modes. The modes in the half-space z > 0 constitute a continuous set of radiation modes, whereas for z < 0 the modal spectrum consists of a discrete set of bound modes and a continuous set of radiation modes. The modal spectrum is discretized using periodic boundary conditions at each side; the width of the computational cell in both the x and y directions is 2L. The electric field of mode m in the ambient half-space can be written as

E_m(x, y, z) = E_m e^{i(k_x x + k_y y + k_z z)}.   (8)

The magnetic field, H_m, is described in the same way. The label m is a collection of the modal indices, m = (p, q, pol). The polarization, pol, is TE or TM, and the real transverse wavevectors are k_x = pπ/L, k_y = qπ/L, where p and q are integers. The modal propagation constant k_z is given by k_z^2 = n_a^2 ω^2/c^2 − k_x^2 − k_y^2, where n_a is the refractive index of the half-space z > 0 and c is the vacuum light velocity. The constant vectors E_m and H_m can be expressed in closed form [12], with normalization constant A = 1/√((k_x^2 + k_y^2)|k_z| 2L^2). We assume that the medium is nonmagnetic and that the permittivity of the medium is ε_a.

The modal fields of the waveguide are denoted e_i = e_i(x, y) and h_i = h_i(x, y), i = 1, 2, .... We now use the continuity of the transverse electric and magnetic fields. Assuming the incoming mode {e_i, h_i}, we can write

e_i^{(t)} + Σ_j r^{a,wg}_{ji} e_j^{(t)} = Σ_m t^{a,wg}_{mi} E_m^{(t)},   (10a)
h_i^{(t)} − Σ_j r^{a,wg}_{ji} h_j^{(t)} = Σ_m t^{a,wg}_{mi} H_m^{(t)},   (10b)

valid for all x and y. Here r^{a,wg}_{ji} is the reflection coefficient from mode i to mode j, and t^{a,wg}_{mi} is the transmission coefficient from mode i (z < 0) to mode m (z > 0). The superscript (t) stands for the transverse (x and y) components of the vector. Eqs. (10a) and (10b) can be combined as follows: take the vector product of (10a) with H_m^*, take the corresponding product for (10b), and integrate over the computational cell. This yields a linear system (11) for the reflection and transmission matrices, formulated in terms of inner products (12) of the form Ψ_mi ∝ ∫∫ (e_i × H_m^*) · ẑ dx dy between the waveguide modes and the ambient modes; ẑ is the unit vector in the z-direction.

It is straightforward to extend the formalism to describe the reflection and transmission for light incident onto the facet from the ambient medium. We assume the incoming wave {E_i, H_i} and consider the boundary conditions, similarly to (10).
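Returning briefly to the recursive relations (6) and the propagation rule (7), the sketch below indicates how they can be chained to build up a multilayer response without inverting ill-conditioned transfer matrices. The block naming is this sketch's own convention, and the per-interface matrices are assumed to be available from the facet calculation described in this section.

```python
import numpy as np

def combine(a, b):
    """Combine scattering element a (nearer the source) with element b,
    following the structure of the recursive relations (6). Each element is
    a dict with blocks r_f/t_f for incidence from the front (source side)
    and r_b/t_b for incidence from the back."""
    I = np.eye(a["t_f"].shape[0])
    q = np.linalg.pinv(I - a["r_b"] @ b["r_f"])   # forward multiple scattering
    p = np.linalg.pinv(I - b["r_f"] @ a["r_b"])   # same resolvent, other order
    return {
        "r_f": a["r_f"] + a["t_b"] @ p @ b["r_f"] @ a["t_f"],
        "t_f": b["t_f"] @ q @ a["t_f"],
        "r_b": b["r_b"] + b["t_f"] @ a["r_b"] @ p @ b["t_b"],
        "t_b": a["t_b"] @ p @ b["t_b"],
    }

def propagation(betas, d):
    """Homogeneous propagation over a distance d, Eq. (7)."""
    P = np.diag(np.exp(1j * np.asarray(betas) * d))
    Z = np.zeros_like(P)
    return {"r_f": Z, "t_f": P, "r_b": Z, "t_b": P}

# Sketch of assembling an N-period grating from precomputed (placeholder)
# interface elements and propagation sections:
# period = combine(combine(iface_c_to_b, propagation(betas_b, d_b)),
#                  combine(iface_b_to_c, propagation(betas_c, d_c)))
# total = termination
# for _ in range(N):
#     total = combine(period, total)
# R_fundamental = abs(total["r_f"][0, 0])**2
```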
For light incident from the ambient medium, the resulting expressions for the reflection and transmission take the same form, with the roles of the waveguide and ambient modes interchanged. We have here assumed that the waveguide modes are orthogonal and fulfill the power-orthonormality relation (16). This orthonormality relation can always be fulfilled for the modes of nonabsorbing waveguides [13]. Note that the transmission matrices may also be found directly from the inner products.

A mode is said to be real if it has a real-valued propagation constant and the transverse electric and magnetic fields of the mode can be written real for all values of x and y. For modes in nonabsorbing waveguides the transverse fields can always be written real [13]; modes with a real-valued propagation constant will therefore be real modes. For coupling between real modes i and m with real propagation constants, we have β_i/|β_i| = κ(m)^*/|κ(m)|. In this case Ψ_mi and Ψ̃_im are both real, and we have Ψ_mi = Ψ̃_mi and Φ_mi = Φ̃_mi. We then see directly that t^{a,wg} = t^{wg,a}, exactly as predicted by the reciprocity theorem.

When solving for the reflection matrices, it is necessary to have a well-defined system of equations. A minimum requirement is to use the same number of orthogonal modes at both sides of the boundary. This is however not an ideal solution, as the sampling in the spatial frequency domain is quite different at the two sides of the boundary; a large number of modes would thus be necessary for an accurate description of the interface. We have rather chosen to use a higher number of modes on the ambient side; this enables a good description of the forward reflection r^{a,wg}. However, for the backward reflection r^{wg,a}, we cannot directly find the reflection coefficients for all modes of the ambient medium. The procedure is as follows. First we find the reflection matrix for the ambient modes with the lowest spatial frequencies. These modes must be sufficiently well described by a superposition of waveguide modes; more precisely, they obey k_x^2 + k_y^2 ≤ (n_co ω/c)^2 − β_lim^2, where β_lim is the (possibly imaginary) propagation constant of the highest-order waveguide mode. For higher spatial frequencies, we approximate the reflection coefficients using the scalar Fresnel equations for reflection at an interface with index contrast n_a/n_cl. This approximation shows very good agreement provided the number of modes on the waveguide side is not too small.

For Bragg gratings within the waveguide, one also needs the reflection and transmission properties when there is a waveguide at both sides of the interface. One possibility would be to repeat the procedure described previously, using waveguide modes at each side of the interface. With waveguide modes at both sides of the boundary, however, the inner products similar to (12) and (15) could no longer be formulated as Fourier transforms. To avoid this problem, we formulate the reflection and transmission matrices for transitions between waveguides in terms of the previously acquired relations for a transition from a waveguide to an ambient medium (11). We start by formulating the boundary condition as in (10). The fields at both sides are then expressed in terms of the inner products (12) between each waveguide and a dummy ambient layer. Using this procedure we obtain expressions for the reflection and transmission between waveguides formulated in terms of their reflection and transmission matrices towards a dummy ambient medium.
Note that no additional assumptions are made here; the accuracy is set by the accuracy of the reflection and transmission matrices from each waveguide to the dummy ambient medium. The details of this calculation are given in Appendix A. For a transition from waveguide b towards waveguide c, the result expresses the reflection and transmission in terms of the single-facet matrices of waveguides b and c towards the dummy ambient medium, where the superscript a denotes the dummy ambient medium. The opposite transition is described by interchanging the indices b and c.

For a thin diffractive waveguide, the imposed boundary conditions cause artificial reflections from the boundary. To deal with these artificial reflections, we introduce some loss into the system, i.e., ε → ε + iγε_0, at both sides of the boundary. The loss parameter γ should be small enough not to alter the reflection properties of the boundary significantly [6]. Note that this loss is merely artificial, and it is only included when necessary. To describe the Bragg grating, we must treat the scattering at interfaces as well as propagation in homogeneous layers. The loss is not included in the propagation description, as this would lead to an underestimate of the reflection compared to the physical situation. Since the loss is included in the description of the interfaces but not in the propagation description, it represents a deviation from a physical structure, and some error is to be expected in the final result; decreasing γ decreases this error.

We previously assumed that the waveguide modes fulfill the orthonormality relation (16). This is, however, only generally true for nonabsorbing waveguides. For slightly absorbing waveguides we may assume that the deviation from (16) is small [13]. The orthonormality relation can even be fulfilled exactly for the planar step-index waveguides considered in this paper when ε → ε + iγε_0.

IV. DESIGNING THE BRAGG GRATING

In this section we consider the design of the Bragg grating structure in connection with a waveguide. Let a waveguide be terminated by a grating consisting of layers with refractive indices n_b and n_c and thicknesses d_b and d_c, respectively. The waveguide itself has refractive index n_c. The structure is designed to be a quarter-wave stack for the fundamental mode of the waveguide, i.e., the mode with propagation constant β_c^{(1)} in the z-direction. The thicknesses of the quarter-wave layers are given by

d_b = π/(2β_b^{(1)}),   d_c = π/(2β_c^{(1)}),

where β_b^{(1)} and β_c^{(1)} are the fundamental-mode propagation constants in the respective layers.

The response of the grating is highly dependent on which material constitutes the terminating layer. To illustrate this, we consider a planar structure with the grating within the waveguide (Fig. 1 I). The heterostructure is based on GaAs and Al0.3Ga0.7As, and the ambient medium is vacuum. The waveguide where the lasing is to occur consists of GaAs, i.e., n_c = n(GaAs), n_b = n(AlGaAs). The lasing wavelength for GaAs in the zinc blende (ZB) crystal phase is 870 nm at room temperature [14]. At this wavelength, the refractive indices of GaAs and Al0.3Ga0.7As are [15] n(GaAs) = 3.6 and n(AlGaAs) = 3.4. The structure is surrounded by air. The total reflection matrix is found using the recursive relations (6). To this end we need the propagation matrices, the reflection and transmission matrices describing the interfaces between the waveguide layers, and the reflection matrix for the transition from the terminating grating layer towards the surrounding ambient. We calculate the total reflection when the terminating layer consists of either the low-index or the high-index material.
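The terminating-layer effect described above can be illustrated with a heavily simplified, plane-wave version of the calculation: a scalar transfer-matrix model at normal incidence, with the modal effective indices replaced by the bulk indices quoted above. The layer sequences below are assumptions of the sketch, not the paper's full modal computation.

```python
# Scalar (normal-incidence) transfer-matrix illustration of the
# terminating-layer effect for a GaAs/AlGaAs quarter-wave stack on a GaAs
# waveguide, terminated towards air.
import numpy as np

def stack_reflectivity(n_in, layers, n_out, wavelength):
    """Normal-incidence reflectivity of a layer stack.
    layers: list of (refractive index, thickness) from the incidence side."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam = 870e-9                       # lasing wavelength
n_hi, n_lo, n_air = 3.6, 3.4, 1.0  # GaAs, Al0.3Ga0.7As, vacuum
d_hi, d_lo = lam / (4 * n_hi), lam / (4 * n_lo)

for n_periods in (1, 2, 4, 8, 16):
    # Grating terminated by the LOW-index material (AlGaAs towards air):
    low_term = [(n_lo, d_lo), (n_hi, d_hi)] * (n_periods - 1) + [(n_lo, d_lo)]
    # Grating terminated by the HIGH-index material (GaAs towards air):
    high_term = [(n_lo, d_lo), (n_hi, d_hi)] * n_periods
    print(n_periods,
          round(stack_reflectivity(n_hi, low_term, n_air, lam), 3),
          round(stack_reflectivity(n_hi, high_term, n_air, lam), 3))
```

Run as written, the high-index termination gives a reflectivity that climbs with the number of periods, while the low-index termination initially pushes the reflectivity below that of the bare facet, in line with the behavior described for Fig. 4 below.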
Let the frequency of the light be ω and the width of the waveguide be 2a. We include all modes with β^2 > β_lim^2, where β_lim is the cut-off limit. In this example, a(ω/c) = 1, β_lim = 10i(ω/c), L = 100a, and γ = 0.1. Fig. 4 shows the reflection coefficient for the fundamental even TE mode as a function of the number of periods, for gratings terminated by either GaAs or Al0.3Ga0.7As. Note that the behavior is fundamentally different depending on whether the grating is terminated by the high-index or the low-index material. When the material with the lowest refractive index terminates the grating, the reflection is reduced rather than increased for the first layers. This can be explained as follows. For a quarter-wave stack, all multiple reflections interfere constructively, as there is an additional phase shift of π at every second interface, where the refractive index goes from low to high. If the interface towards the ambient layer breaks this periodicity, the portion of the field reflected at this last interface interferes destructively with the rest of the field. Unlike in conventional Bragg gratings, this last reflection may be crucial, as there is such a large index contrast between the grating and the surrounding air. If the grating consists of several periods, most of the field is reflected before it reaches the last interface; the effect of this additional phase shift is therefore gradually reduced. Fig. 4 clearly shows that, for an efficient grating, the terminating layer should be made from the high-index material. An alternative solution, if one needs to terminate the grating with the lowest-index material, is to grow the terminating layer with twice the thickness to compensate for the phase shift.

A waveguide grating terminated by the highest-index material followed by an ambient of the same material would yield a similar effect, as the effective refractive index of the fundamental mode in the waveguide is lower than the refractive index of the bulk material. This may be part of the reason why Friedler et al. [10] obtain such low reflectivity for the thinnest waveguides. As the thickness of the waveguide increases, the index contrast and thus the reflectivity at the last interface decrease, and this effect diminishes.

The phase shift at the first interface has a similar effect. If the first interface breaks the periodicity, the contribution from this first reflection will be out of phase with the remaining contributions. This situation may occur, e.g., when a GaAs waveguide is terminated by an AlGaAs/GaAs Bragg grating in the substrate (Fig. 1 II). As the waveguide thickness decreases below a certain limit, the effective refractive index of the fundamental mode in the waveguide falls below that of the first layer of the substrate (consisting of AlGaAs). This causes the reflection at the first interface to interfere destructively with the remaining backscattered field, leading to reduced reflectivity. One possible solution to compensate for this phase shift is to adjust the thickness of the first layer accordingly.

V. PLANAR WAVEGUIDE STRUCTURE

We now look more closely at some numerical examples for a planar waveguide with a Bragg grating. Two situations are considered: the grating is either within the waveguide (Fig. 1 I) or in the substrate below the waveguide (Fig. 1 II). We also briefly consider an intermediate geometry. The planar waveguide with 1D confinement is less computationally demanding than the 2D case; in addition, both bound and unbound modes can be found analytically [6].
The planar case is therefore well suited to test qualitative relations and convergence criteria. In a planar waveguide there is no coupling between modes of different parity (odd/even) or between modes of orthogonal polarization (TE/TM). The discussion is therefore limited to even TE-polarized modes.

First, let the Bragg grating be within the waveguide, as shown in Fig. 1 I. Such structures can be realized by growing the Bragg grating at the end of the nanowire growth, by alternating the source materials during the epitaxial growth. In this example, the main part of the waveguide consists of GaAs, and alternating layers of AlGaAs and GaAs are grown at the top of the waveguide. The uppermost layer consists of GaAs, and the structure is surrounded by vacuum. We have performed calculations for up to 100 periods, to see the behavior in the limit of many periods. Note however that this is a very high number, which is not easily achieved with today's technology. We consider four normalized waveguide widths, a(ω/c) = 0.1, a(ω/c) = 0.5, a(ω/c) = 1, and a(ω/c) = 10; for reference, the single-mode regime for even TE modes in this GaAs waveguide extends only up to a normalized width somewhat below 1. Fig. 5 shows the resulting reflectivity as a function of the number of periods. It is seen that the end facet reflectivity of all the waveguides converges towards a value very close to 1. Even as the normalized waveguide width decreases below aω/c = 0.5, one can still obtain high reflectivity by increasing the number of periods in the grating. As will be seen later, this is contrary to what is observed for the case with the grating in the substrate.

Before we proceed, it is instructive to review how the modal fields are influenced by diffraction. Firstly, as the width of the waveguide decreases, a decreasing proportion of the modal field is confined within the core of the waveguide. As a consequence, the effective refractive index of the fundamental mode decreases towards the limit where it is close to the refractive index of the cladding material. Secondly, as the modes are confined to smaller areas in space, a corresponding spreading of the spatial frequencies of the modes must follow. This implies, e.g., that the waveguide modes couple more strongly to each other upon reflection, and that there is a larger angular spread of the beam upon transmission towards an ambient medium.

A grating within the waveguide with a relatively low refractive index contrast will roughly preserve the same set of modes along the grating. Except for the modes that are very close to their cut-off, each mode will thus experience a jump in the effective refractive index in a manner quite similar to what is seen for conventional Bragg gratings. Only a small amount of energy will therefore be transferred from, e.g., the fundamental mode to the higher-order modes. If the index contrast of the grating is larger, each transition in the grating represents a more significant perturbation to the modal field, and there will be a larger amount of cross-coupling between modes. Some of the energy from the fundamental mode may thus couple into other modes. Fig. 6 displays the reflectivity of a Bragg grating consisting of two materials with higher index contrast. Here GaAs has been replaced by a material with refractive index 4, and AlGaAs has been replaced by a medium of refractive index 2. The simulation parameters are the same as for the corresponding GaAs/Al0.3Ga0.7As structure. Note that the reflection coefficient of the fundamental mode now converges towards a value less than unity. As can be seen by comparing Fig. 5 and Fig. 6, there is a trade-off here in terms of the index contrast.
Higher index contrast enables quite high reflection using fewer periods. On the other hand, the maximum obtainable reflection is larger for the lower-index-contrast system.

We proceed to consider a Bragg grating in the substrate below the waveguide, as shown in Fig. 1 II. For nanowire applications, such structures can be realized by growing the substrate Bragg grating before the nanowire. This geometry has the advantage that it is easier to control the thickness and composition of the layers compared to the structure with the grating within the nanowire. The structure consists of the same materials as for the case with the grating within the waveguide, and we consider the same four waveguide widths as before. The half-width of the computational cell, L, used in the calculations was 1000/(ω/c), which is larger than for the case with the grating within the waveguide. The reason for this is that the transition towards this substrate Bragg grating represents a more significant change in the modal fields. The coupling from the fundamental mode to higher-order modes, including radiation modes, is therefore enhanced, and these modes are more influenced by the artificial boundary conditions. The modal cut-off limit was taken to be β_lim = 3i(ω/c). The resulting reflectivity as a function of the number of periods is shown in Fig. 7.

In the geometric optics limit, i.e., as the normalized width of the waveguide increases, this substrate grating and a corresponding grating within the waveguide should approach each other. In this limit the reflection and transmission coefficients of the bound modes can be approximated by those of plane waves at a homogeneous interface [6]. Comparing Fig. 7 and Fig. 5, we see that this approximation is indeed accurate for a(ω/c) = 10. However, in the highly diffractive regime there are large differences between the two Bragg geometries. With a substrate grating, increasing diffraction leads to decreased reflection, and it is not possible to compensate for this by adding more layers.

Fig. 8 helps us understand this effect. The upper plot displays the reflectivity of the quarter-wave stack separately, i.e., the reflectivity of plane waves incident from the first substrate layer towards the remaining quarter-wave stack. The lower plot displays the transmission coefficients from the fundamental mode of the waveguide into each of these plane waves. The Bragg grating has reduced reflectivity in the region from k_x = 0.75(ω/c) to k_x = (ω/c). Increasing the number of periods in the Bragg grating increases the frequency of the oscillations in this region, but it does not decrease the width of this region of reduced reflectivity. The energy transmitted into plane waves in this low-reflectivity region is therefore partly transmitted through the grating and transported away. This explains why we do not achieve high reflectivity. For highly diffractive waveguides, a significant amount of the energy is transmitted into evanescent modes (k_x > ω/c). The evanescent modes do not transport energy, so eventually they couple back into propagating plane waves, especially into the plane waves with similar spatial frequencies. It is thus natural to assume that most of the energy in the evanescent modes is coupled back into the plane waves with reduced reflectivity, so that a large part is transported away from the structure.
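The low-reflectivity window quoted above can be illustrated by computing the plane-wave reflectivity of the stack as a function of the transverse wavevector, in the spirit of the upper panel of Fig. 8. The sketch below is again hedged: TE polarization only, quarter-wave thicknesses fixed at normal incidence, and an assumed 20-period stack terminated by bulk GaAs.

```python
# TE reflectivity of an AlGaAs/GaAs quarter-wave stack versus the transverse
# wavevector k_x of the incident plane wave; layer count and the semi-infinite
# GaAs termination are assumptions of this sketch.
import numpy as np

lam = 870e-9
k0 = 2 * np.pi / lam
n_hi, n_lo = 3.6, 3.4                    # GaAs, Al0.3Ga0.7As
d_hi, d_lo = lam / (4 * n_hi), lam / (4 * n_lo)
n_periods = 20

def te_reflectivity(kx):
    """TE reflectivity for incidence from the first AlGaAs substrate layer
    onto the remaining (GaAs/AlGaAs)^N stack, terminated by bulk GaAs."""
    def kz(n):
        return np.sqrt(complex((n * k0) ** 2 - kx ** 2))
    M = np.eye(2, dtype=complex)
    for n, d in [(n_hi, d_hi), (n_lo, d_lo)] * n_periods:
        q, delta = kz(n) / k0, kz(n) * d          # TE admittance and phase
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / q],
                          [1j * q * np.sin(delta), np.cos(delta)]])
    q_in, q_out = kz(n_lo) / k0, kz(n_hi) / k0
    B, C = M @ np.array([1.0, q_out])
    r = (q_in * B - C) / (q_in * B + C)
    return abs(r) ** 2

for kx in np.linspace(0.0, 1.0, 6) * k0:
    print(f"kx = {kx / k0:.1f} (w/c): R = {te_reflectivity(kx):.3f}")
```

Because the quarter-wave condition is set at normal incidence, the stack detunes as k_x grows, and the reflectivity drops for the largest transverse wavevectors, consistent with the window discussed above.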
As aω/c → 0, the portion of the fundamental mode transmitted into the region with k_x < 0.75(ω/c) decreases, thus reducing the effect of the grating. For very small aω/c, the effective refractive index of the fundamental mode tends to the refractive index of the cladding medium (vacuum). The reflectivity of the fundamental mode would therefore converge to zero for very thin waveguides if the cladding material were the same as the substrate, as would be the case in the absence of a substrate grating. By adding a substrate layer of Al0.3Ga0.7As at the end of a GaAs waveguide, the effective index contrast for the fundamental mode first decreases and then increases again as the waveguide width is decreased. As a consequence, the reflection coefficient for the fundamental mode towards the Al0.3Ga0.7As substrate varies correspondingly. One may thus achieve relatively high reflectivity, but this is due to the high effective index contrast at the waveguide/substrate interface, not to constructive interference in the quarter-wave grating.

As a function of the number of periods, the reflectivity for the thinner waveguides decreases before it starts to increase (Fig. 7). This can be understood in terms of the phase shifts at the interfaces. As the effective refractive index of the waveguide decreases, the phase of the reflection at the interface between the waveguide and the substrate changes. As discussed in Sec. IV, this phase shift may lead to destructive interference between the backscattered contributions. For the two extreme waveguide widths, a(ω/c) = 10 and a(ω/c) = 0.1, the phase shift of the first reflection coefficient is close to 0 or π, respectively. This leads to destructive interference for the waveguide width a(ω/c) = 0.1. To compensate, we doubled the thickness of the first layer in the Bragg grating, which strongly increased the reflectivity; the result is shown in Fig. 7.

We have seen that the differences between the two grating geometries become large in the diffractive regime. Before we proceed to 2D calculations on nanowires, we therefore consider an intermediate geometry, in which the segments of the grating have a larger lateral width than the central waveguide.

Fig. 8. Transmission coefficients from the fundamental mode into the plane-wave components in the first layer of the substrate grating (lower plot), and the respective reflectivity when these plane waves are reflected by the grating (upper plot). Note that the reflectivity is higher than unity for spatial frequencies corresponding to propagating modes in GaAs but evanescent modes in Al0.3Ga0.7As; as no energy is transported by the evanescent modes, this does not violate energy conservation.

Such structures can be realized by first growing a substrate Bragg grating and then etching to reduce the lateral size of the reflector. This could be a potential way to overcome some of the weaknesses associated with substrate Bragg gratings, while maintaining a structure that is relatively easy to fabricate. The width of the central waveguide in the calculation was taken to be a(ω/c) = 1. Fig. 9 displays the reflection coefficient |r^{total}_{11}|^2 as a function of the number of periods for a grating of lateral width a(ω/c) = 2. The simulation parameters were β_lim = 5i(ω/c), γ = 0.1, and L = 100/(ω/c). As a reference, we also show the corresponding reflection coefficients for a substrate grating and for a grating of the same width as the central waveguide (taken from Fig. 5 and Fig. 7).
From Fig. 9, we see that the reflection of the a(ω/c) = 2 grating is intermediate between the two geometries discussed earlier. There are also large fluctuations as a function of the number of periods. In the transition from the central waveguide (a(ω/c) = 1) towards the segment with a(ω/c) = 2, the field of the fundamental mode experiences a significant alteration. This leads to large coupling into several higher-order modes of the a(ω/c) = 2 waveguide. The quarter-wave resonance condition is, however, only fulfilled for the fundamental mode. The total reflection coefficient |r^{total}_{11}|^2 therefore oscillates as a function of the number of periods, depending on the interference conditions for the energy transmitted into the large number of higher-order modes. For grating structures of increasing lateral width, the wavevector separation between neighboring modes decreases, and the fluctuations are smoothed out in the limit of very wide grating structures.

VI. RESULTS, NANOWIRE STRUCTURE

In this section we consider the effect of Bragg reflectors at the end facet of a semiconductor nanowire. A GaAs nanowire is used as an example, with a Bragg grating consisting of GaAs/Al0.3Ga0.7As. The nanowires have a hexagonal cross-section. The lateral size of the waveguide is described using the effective radius ρ_eff, defined such that a hexagon with effective radius ρ_eff has the same area as a circle with radius ρ_eff.

Substrate Bragg gratings were shown in the previous section to be inefficient for highly diffractive planar GaAs/AlGaAs waveguides, because only a small portion of the fundamental mode is transmitted into propagating plane waves with high reflectivity when diffraction effects are significant. In a 2D waveguide there is in general coupling between modes of different polarization, so both the TE and TM plane waves of the substrate have to be taken into consideration. Fig. 10 shows that, also for the nanowire, a large portion of the fundamental mode is transmitted into plane waves with transverse wavevectors between 0.75(ω/c) and (ω/c), i.e., into the region of reduced grating reflectivity. This is a strong indication of the limited effect of Bragg reflectors in the substrate, and a further study of substrate Bragg gratings is therefore omitted. Note, however, that due to the Brewster effect one might achieve very low reflectivity for TM-polarized light in the absence of a Bragg grating; a Bragg grating may therefore help somewhat in those cases.

Bragg gratings within the waveguide were seen to be promising in the planar case. The analysis is therefore extended to find the reflection properties of a hexagonal nanowire with such a grating. The two alternating materials are again taken to be Al0.3Ga0.7As and GaAs. The modes of the hexagonal waveguides were found using Comsol Multiphysics™. When extending from a 1D to a 2D analysis, there is a large increase in the required computational resources. We have therefore chosen to limit the 2D calculations to a more qualitative analysis, i.e., the simulation parameters are such that some inaccuracy should be expected in the results: we limit the number of modes included in the calculation and reduce the computational cell size. Two waveguide widths have been considered: ρ_eff ω/c = 1 and ρ_eff ω/c = 10. In the simulations we used L = 25 and γ = 0.1, with β_lim = (ω/c) for ρ_eff ω/c = 10 and β_lim = 0 for ρ_eff ω/c = 1. Fig. 11 displays the reflectivity of the fundamental mode as a function of the number of layers in the grating. The behavior is very similar to what was seen in the planar case, and the grating yields a significant increase in reflectivity.
For the largest waveguide, the reflectivity of the fundamental mode as a function of the number of periods is in fact almost identical to what was seen for the corresponding planar waveguide. For the smaller waveguide, where diffraction effects are more pronounced, the reflectivity converges towards a value around 0.93. The deviation from the convergence limit of the corresponding planar structure is within the uncertainty due to the limited simulation parameters.

The maximum width for single-mode operation in the ZB GaAs nanowire is around ρ_eff(ω/c) = 0.7. This implies an effective radius of 97 nm when the excitation light is at the lasing wavelength λ = 870 nm. The corresponding length of one period in the grating would be 237 nm, and a grating of 20 periods would thus be 4.7 µm long. Such long nanowire gratings might be challenging to achieve with today's technology; for practical purposes one is thus limited to growing much shorter gratings. Higher reflectivity for shorter gratings may be achieved by increasing the index contrast, as is clearly seen in Fig. 6. An increase in the aluminum composition up to x = 0.7 would yield a refractive index of 3.15. In a planar structure with a(ω/c) = 1, such a grating would be capable of achieving a reflectivity of 0.9 after 7 periods, compared to 16 periods for x = 0.3. A similar improvement is to be expected for nanowire structures. It might also be possible to perform wet oxidation of the AlGaAs layers, as has been done successfully for VCSELs [16,17]. Wet oxidation of AlGaAs with high aluminum content increases the refractive index contrast further; the refractive index of the oxidized AlGaAs layer is around 1.6 [18]. A reflector consisting of such oxidized layers would, however, be non-conductive. Wet oxidation would thus be challenging for electrically driven nanowire lasers, as the end facets could then not be used for current injection.

VII. CONCLUSION

A semi-analytical model has been used to analyze the reflection properties of Bragg reflectors intended to increase the end facet reflectivity of diffractive waveguides. Such gratings are promising for enabling high-quality nanowire laser cavities. We have considered a geometry with the grating within the waveguide/nanowire itself and a geometry with a substrate grating. The substrate grating has the advantage that the composition and thickness are more easily controlled compared to the grating within the waveguide. For diffractive waveguides, however, it was found to yield a surprisingly small reflectivity. On the other hand, using the geometry with the grating within the waveguide, one can obtain near-unity reflectivity even for extremely small waveguides, where only a small fraction of the field is within the waveguide. This does, however, require a high number of periods.

The semi-analytical model enables us to understand the mechanisms governing the efficiency of reflection gratings in connection with diffractive waveguides. The model is, however, not able to give very exact results for two-dimensional, highly diffractive waveguides unless a high number of radiation modes and evanescent modes are included. The structure with the grating within the waveguide is clearly seen to be the most promising when diffraction is significant. For GaAs waveguides terminated by such GaAs/Al0.3Ga0.7As gratings, one obtains maximum reflectivity after approximately 40 periods, both for the planar waveguides and for the nanowire waveguides.
This is a high number that is not easily achieved with today's technology. To reduce the number of periods in the grating, one might increase the refractive index contrast. For the example considered here, this could be achieved by increasing the aluminum composition in the AlGaAs layers, possibly in combination with wet oxidation. This creates a steeper increase in reflection as a function of the number of periods, but also somewhat reduces the maximum obtainable reflectivity.

ACKNOWLEDGMENTS

This work was supported by the "NANOMAT" program (Grant No. 182091) of the Research Council of Norway.

Appendix A: Reflection and transmission at the boundary between waveguides

This section describes the procedure for calculating the reflection and transmission matrices at a boundary between two waveguides. The situation is sketched in Fig. 12. Let waveguide b be in the half-space z < 0, and waveguide c in the half-space z > 0. The electromagnetic field is discretized at both sides of the interface using the waveguide modes. Note that the modes can be divided into two parts: the discrete bound modes and the continuous radiation modes. An artificial boundary condition, e.g., periodic or metallic, has to be applied in both half-spaces in order to fully discretize the modal spectrum.
Effect of H2SO4/H2O2 pre-treatment on electrochemical properties of exfoliated graphite prepared by an electro-exfoliation method

The effect of pre-treating graphite sheets in a H2SO4/H2O2 solution before electro-exfoliation is reported. It was revealed that the volume ratio of H2SO4 to H2O2 during pre-treatment could control the degree of exfoliation of the resulting exfoliated graphite (EG). X-ray diffraction (XRD), Raman, and Fourier transform infrared (FTIR) spectroscopy analyses suggested that the EG produced by first pre-treating the graphite sheet in a H2SO4/H2O2 solution with a H2SO4 : H2O2 volume ratio of 95 : 5 demonstrates the highest degree of exfoliation. This sample also demonstrated excellent electrochemical properties, with good electrical conductivity (36.22 S cm−1) and a relatively low charge transfer resistance (Rct) of 21.35 Ω. The same sample showed the highest specific capacitance of all samples, i.e., 71.95 F g−1 at 1 mV s−1 when measured over a voltage range of −0.9 to 0 V. Further measurement over an extended potential window down to −1.4 V revealed a superior specific capacitance of 150.69 F g−1. The superior morphology and the excellent electrical properties of the obtained EG underlie its exceptional performance. The pre-treatment of graphite sheets in H2SO4/H2O2 solution appears to lead to easier and faster exfoliation. The faster exfoliation presumably prevents massive oxidation, which frequently induces the formation of graphite/graphene oxide (GO) during prolonged processing. However, too large an H2O2 volume fraction during pre-treatment appears to cause excessive expansion and a frail graphite-sheet structure, which leads to early breakdown of the structure during electrochemical exfoliation and prevents layer-by-layer exfoliation.

Introduction

Exfoliated graphite (EG) is a type of graphene-like material formed when stacked graphitic layers undergo partial separation [1,2]. Even though the produced material is not as thin as graphene, it is considered a good compromise, as its properties resemble those of graphene while it is easier to produce and scalable for industry [3]. Like graphene, EG can be applied in various applications such as catalyst supports, gaskets, anodes of lithium-ion batteries, and supercapacitors, in which materials with large surface area and superior electrochemical properties are needed [1,4-6]. Among various feasible applications, graphene and EG applied as supercapacitors have been highly projected and widely studied [7,8]. The supercapacitor is a type of energy storage device with several advantages such as high power density, relatively long lifetime, and simple operation [9]. These advantages make it suitable for applications that require high power output and a fast charge-discharge process, such as load-levelling in various energy sources and energy recovery from regenerative braking in vehicles [10]. Compared to pristine graphene, EG is expected to show better long-term performance as a supercapacitor because it normally contains enough oxygen functional groups bound to its edges and basal planes [11]. The presence of oxygen-containing functional groups is beneficial, as they can prevent the re-stacking of graphene layers, thus increasing the specific capacitance [11-13]. However, excessive oxygen functional groups are detrimental for supercapacitor applications, since they also disrupt electron transport in the material, thus decreasing the specific capacitance [14,15].
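For context on how specific capacitance values such as those quoted above are extracted, the sketch below applies the common estimate C = ∮|I| dV / (2 m ν ΔV) to a cyclic voltammetry (CV) cycle. The formula is a widely used convention; the data file and electrode mass in the usage example are placeholders, not values from this study.

```python
# Sketch of the common gravimetric specific-capacitance estimate from a CV
# cycle: C = (loop area of |I| dV) / (2 * mass * scan_rate * voltage_window).
import numpy as np

def specific_capacitance(voltage, current, mass_g, scan_rate_V_per_s):
    """Specific capacitance (F/g) from one full CV cycle (both sweeps).
    voltage: potential values (V); current: measured current (A)."""
    window = voltage.max() - voltage.min()
    # Trapezoidal accumulation of |I| over |dV|, so both sweep directions
    # contribute positively to the enclosed-area integral.
    i_mid = 0.5 * (np.abs(current[:-1]) + np.abs(current[1:]))
    area = np.sum(i_mid * np.abs(np.diff(voltage)))   # units: A * V
    return area / (2 * mass_g * scan_rate_V_per_s * window)

# Hypothetical usage: a CV scan from -0.9 to 0 V at 1 mV/s.
# v, i = np.loadtxt("cv_EG_95_5.txt", unpack=True)
# print(specific_capacitance(v, i, mass_g=2e-3, scan_rate_V_per_s=1e-3))
```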
Various methods have been used to fabricate EG, including micromechanical exfoliation [16], chemical exfoliation through reduction of exfoliated graphite oxide [17], and electro-exfoliation [18]. Among them, electro-exfoliation has several advantages: it is cost-effective, easy, and highly scalable [19]. This method offers controllable properties of the EG products through the choice of applied voltage, aqueous or non-aqueous electrolyte, and type of graphite source [19,20]. Besides, this method also produces EG with lower oxygen content and higher electrical conductivity than that produced via the chemical exfoliation route [18,21]. For these reasons, the electro-exfoliation method is considered highly suitable for producing EG that fits the various applications in which EG with excellent electrochemical properties is required, including supercapacitors [21].

Based on the applied bias, the electro-exfoliation process can be classified into two types, i.e., cathodic and anodic exfoliation. Cathodic exfoliation is a process where a negative potential is applied to the graphite electrode. This process requires organic electrolytes or solvents such as dimethylformamide (DMF), dimethyl sulfoxide (DMSO), N-methyl-2-pyrrolidone (NMP), or propylene carbonate (PC). In anodic exfoliation, on the other hand, a positive potential is applied to the graphite precursor, and the process can easily be conducted in an aqueous solution of inorganic electrolytes [20]. The need for organic solvents limits the application of cathodic exfoliation, because organic solvents are relatively expensive and their use may pollute the environment. Therefore, anodic exfoliation is preferred, as it can be carried out in an aqueous solution using less expensive electrolyte materials. However, despite its advantages, conventional anodic exfoliation tends to produce a rather low yield [22,23].

To improve the exfoliation efficiency, Munuera et al. introduced a simple pre-treatment before the electrochemical process by immersing a graphite precursor in concentrated sulfuric acid (H2SO4) for 48 h [24]. They improved the yield of the graphene product up to five times that produced without the pre-treatment, i.e., to 50 wt%. The pre-treatment is believed to improve the exfoliation of graphite during the electrochemical process by letting H2SO4 molecules fill the voids and interstitial spaces within the graphite, turning the graphite into a hygroscopic material. This change in the graphite properties is supposed to allow easier intercalation of the anions and water from the electrolyte into the graphite during the electrochemical process, resulting in more extensive exfoliation of the graphene layers. The high yield of EG obtained via H2SO4 pre-treatment has made the anodic exfoliation of graphite more efficient. However, modification of the procedure is still needed, as a pre-treatment time of up to 48 h is rather long. Therefore, an improved pre-treatment procedure is needed to produce EG with good properties and high yield in a relatively short time.

Initial expansion of the graphite has become a powerful way to increase the efficiency and reduce the time required for graphite exfoliation [25-27]. This method involves interlayer gas evolution caused by a chemical reaction between graphite intercalant molecules (such as FeCl3, CrO3, and H2SO4) and reactive species (such as (NH4)2S2O8 and H2O2).
Lin et al. [27] have studied the initial expansion of graphite by treating the graphite in a mixture of CrO3 and H2O2. They successfully obtained highly expanded graphite (1000-fold its initial volume), which could easily be exfoliated to produce few-layer graphene. The use of H2O2 as the reactive species is interesting because the process can be carried out at room temperature without producing harmful byproducts [27]. Besides, Gu et al. [28] have reported the use of H2O2 and H2SO4 to open up the spacing of the graphite layers, forming expandable graphite. The expandable graphite was then subjected to thermal shock to produce worm-like expanded graphite (WEG), which was used as the precursor for graphene synthesis via ultrasonication.

In the present work, a simple modification of the pre-treatment process was carried out by involving H2O2 in the H2SO4 pre-treatment of graphite. During this process, the graphite is expected to experience an initial expansion due to the evolution of oxygen gas caused by H2O2 decomposition, which is probably beneficial for a more efficient electro-exfoliation process. To the best of our knowledge, no one has reported the pre-treatment of a graphite precursor in H2SO4/H2O2 solution before the electro-exfoliation procedure. The effect of graphite pre-treatment in H2SO4/H2O2 solution on the electro-exfoliation process and on the electrochemical properties (e.g., charge transfer resistance, specific capacitance, and potential window) of the EG products was investigated. This study is beneficial for the development of exceptional EG-based supercapacitors via a facile and efficient electro-exfoliation method.

Pre-treatment of graphite sheets

The graphite sheets were immersed in the H2SO4/H2O2 pre-treatment solution, leaving about a 1.5 cm portion unsubmerged for the alligator clamp. After pre-treatment for 3 min, the graphite sheet was carefully moved to a different chamber for the electro-exfoliation process. The graphite sheet (GS) samples after pre-treatment were named after the H2SO4 : H2O2 volume ratio used during pre-treatment, i.e., GS 100 : 0, GS 95 : 5, GS 93 : 7, and GS 91 : 9.

Preparation of electrochemically exfoliated graphite

In the electro-exfoliation process, the pre-treated graphite sheet and a Pt wire were set as the working electrode and counter electrode, respectively. As the electrolyte, 100 mL of a 0.1 M ammonium sulfate ((NH4)2SO4) solution was prepared by dissolving 1.32 g of (NH4)2SO4 in DI water. During the electrochemical exfoliation process, the electrodes were given a constant potential bias of 10 V until no more current could be observed, which marked the end of the process. After that, the exfoliated product was washed repeatedly with DI water until a pH of 7.0 was reached, to remove the remaining ions attached to the sample. After being dried in an oven at 40 °C for 6 h, the product was dispersed in DI water (1 mg mL−1) using an ultrasonic homogeniser for an hour (pulsed, 480 W). The resulting dispersion was centrifuged at 1000 rpm for 20 min to separate large particles from the dispersion. Finally, the top part of the dispersion was poured onto a Petri dish and oven-dried at 80 °C for 18 h. The resulting exfoliated graphite (EG) samples were named after the H2SO4 : H2O2 volume ratio used during pre-treatment, i.e., EG 100 : 0, EG 95 : 5, EG 93 : 7, and EG 91 : 9.

Characterisations

The structural ordering was characterised using an X-ray diffractometer (XRD, Bruker D8 Advance, Bruker) with Cu Kα radiation at 1.5406 Å.
Meanwhile, Raman spectroscopy (Modular Horiba Jobin Yvon iHR320, Horiba) with 532 nm laser excitation was used to study crystal defects and the exfoliation of the samples. Morphological observation and C/O elemental composition analysis were carried out using a scanning electron microscope and energy-dispersive X-ray spectroscopy, respectively (SEM and EDX, SU3500, Hitachi). Fourier transform infrared spectroscopy (FT-IR, Prestige 21, Shimadzu) was employed to observe the functional groups contained in the samples. The electrical conductivity of pelletised samples was measured using a four-point probe connected to a direct current (DC) source (R6240A, Advantest) and a multimeter (2100 Series 6 1/2-digit USB multimeter, Keithley). Electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV) measurements were carried out using a potentiostat/galvanostat (PARSTAT 3000A, Princeton Applied Research) with Ag/AgCl as the reference electrode and a Pt wire as the counter electrode in a three-electrode system. Before the measurements, the EG samples were first mixed with PVDF binder and conductive carbon black in an 8 : 1 : 1 ratio using NMP as a dispersant to form slurries. The resulting slurries were deposited on stainless-steel foil and dried in a vacuum oven at 100 °C for 12 h to make electrodes, which were then cut to a size of ca. 1 × 1 cm. EIS measurements were carried out with a 10 mV AC voltage over a frequency range of 100 000 to 0.1 Hz. CV measurements were conducted at scan rates of 1-200 mV s⁻¹ in 6 M KOH aqueous electrolyte.

Results and discussion

During pre-treatment, H2O2 decomposes within the graphite galleries; as a result, oxygen gas (O2) is generated within the graphite slabs and forces expansion of the graphite interlayers.27,30,31 The proposed mechanism is consistent with the observation that the expansion of the graphite sheet is greater with increasing H2O2 volume fraction during pre-treatment, as shown by the photographs and optical microscope images of all samples in Fig. 1(a)-(j). It also agrees with the visible evidence witnessed during pre-treatment of the graphite sheets: more bubbles were generated when more H2O2 was involved, indicating more oxygen was produced during the process. Optical microscopy of the pre-treated samples revealed several macro-cracks in the samples pre-treated with a higher proportion of H2O2 (H2SO4 : H2O2 of 93 : 7 and 91 : 9), as indicated by the yellow arrows. This suggests more extensive expansion due to the larger amount of oxygen bubbles generated within the graphite slabs. Fig. 2(a) shows the Raman spectra of the p-GS and EG samples. All spectra reveal three peaks typically seen in the Raman spectra of graphitic carbon materials, i.e., the D, G, and 2D peaks at Raman shifts of ca. 1350 cm⁻¹, 1580 cm⁻¹, and 2650 cm⁻¹, respectively. Arising from the stretching of C-C bonds (sp3), the D peak is often associated with structural defects and oxidation of graphitic carbon.32,33 In contrast, the G peak reflects the structural perfection of graphitic carbon, as it originates from the stretching of sp2-hybridised carbons (C=C).34 Thus, the ratio of D to G peak intensity (I_D/I_G) is often used to assess the extent of damage in carbonaceous materials.35
It can be seen from Fig. 2(a) that the intensity of the D peak and the I_D/I_G value of all EG samples are higher than those of the pristine graphite sheet, indicating that a more defective and oxidised structure was formed after electrochemical exfoliation. However, the I_D/I_G value of EG decreases with increasing H2O2 volume fraction used during pre-treatment, suggesting better crystallinity. The shoulder peak at a Raman shift of around 1620 cm⁻¹ corresponds to the D′ peak, which originates from a single-phonon intra-valley scattering process. The intensity of the D′ peak is known to be proportional to the average number of defects in the unit cell of the sample.36 Fig. 2(a) shows that the intensity of this peak decreases with increasing H2O2 used during pre-treatment. This result is consistent with the I_D/I_G values: samples pre-treated with a higher H2O2 volume fraction show lower structural disorder. While the D and G peaks are mainly used to assess damage and structural perfection, the 2D peak is often analysed to assess the layered character of graphene. Several studies have reported the dependence of the shape and position of the 2D peak on graphene thickness.37-39 The 2D peak of single-layer graphene is typically observed as a symmetric peak at ca. 2680 cm⁻¹ under 532 nm laser excitation.36 Graphite and multilayer graphene, by contrast, typically show a wide asymmetric peak over a Raman shift of 2600-2800 cm⁻¹ that can be deconvoluted into several smaller peaks. The enlarged Raman spectra of all samples over 2600-2800 cm⁻¹ (2D) in Fig. 2(b) show that none of the samples resembles single-layer graphene. However, the 2D peaks of the EG samples are shifted to the left compared with the pristine graphite sheet. The 2D peak gradually red-shifted in the spectra of samples pre-treated with H2SO4 : H2O2 volume fractions of 100 : 0 and 95 : 5, indicating thinner graphene layers in these samples. Further increasing the H2O2 volume fraction did not shift the peak further; instead, it pushed the 2D peak back towards that of the pristine graphite sheet, suggesting that thicker EG was obtained when the graphite sheet was pre-treated with an excessive amount of H2O2. Another sign was the resemblance of the 2D peak shape of these samples to that of the graphite sheet in Fig. 2(b). This is likely caused by the extreme early expansion of the graphite sheet during pre-treatment, especially at large H2O2 volume fractions (EG 93 : 7 and EG 91 : 9), marked by the several macro-cracks visible in the samples (Fig. 1(i) and (j)). The extreme early expansion and macro-cracking made the samples so friable that they broke into thick graphite chunks during the electrochemical exfoliation process, as observed by scanning electron microscopy (SEM) in Fig. 3. SEM images of the pristine graphite sheet and the EG obtained after centrifugation are shown in Fig. 3. The thick graphite sheet samples appear more delaminated, forming thinner layers of EG. The SEM images of EG 100 : 0 (Fig. 3(b)) and EG 95 : 5 (Fig. 3(c)) show the presence of thin graphene layers, whereas those of EG 93 : 7 (Fig. 3(d)) and EG 91 : 9 (Fig. 3(e)) reveal thick fragments of multilayer graphene resembling the SEM image of graphite (Fig. 3(a)).
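As a companion to the I_D/I_G analysis above, the sketch below shows one simple way to estimate the ratio from a spectrum: take the maximum intensity within a window around each band centre. The spectrum here is synthetic and the 50 cm⁻¹ window is an assumption; real spectra would normally be baseline-corrected and peak-fitted (e.g., with Lorentzian profiles) before the ratio is reported.

```python
import numpy as np

def peak_intensity(shift, intensity, center, half_window=50.0):
    """Maximum intensity within +/- half_window cm^-1 of a band centre."""
    mask = np.abs(shift - center) <= half_window
    return intensity[mask].max()

def lorentz(x, x0, gamma, amp):
    """Lorentzian line shape used to build a synthetic spectrum."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

# Synthetic Raman spectrum: D, G and D' bands on a small flat baseline.
shift = np.linspace(1000, 2000, 2000)
intensity = (lorentz(shift, 1350, 30, 0.6)     # D band (defects, sp3)
             + lorentz(shift, 1580, 20, 1.0)   # G band (sp2 order)
             + lorentz(shift, 1620, 15, 0.15)  # D' shoulder
             + 0.02)

i_d = peak_intensity(shift, intensity, 1350)
i_g = peak_intensity(shift, intensity, 1580)
print(f"I_D/I_G = {i_d / i_g:.2f}")  # higher ratio -> more disorder/oxidation
```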
The SEM observations agree with the Raman characterisation, indicating that delamination of the graphite sheet could not proceed well in samples pre-treated with a large volume fraction of H2O2. As noted earlier, the extreme expansion of the graphite sheet during pre-treatment with a large amount of H2O2 apparently caused the structure of the graphite sheet to crumble easily. Fig. 4(a) shows the normalised X-ray diffraction (XRD) patterns of all samples. They confirm the presence of graphitic material, indicated by the peaks around 2θ of 26° and 55°, which correspond to the (002) and (004) diffraction planes, respectively. However, small bumps can be seen at 2θ of ca. 12° and 43° in the sample prepared without H2O2, likely corresponding to the (001) and (101) diffraction peaks of graphene oxide (GO), respectively.40 The enlarged XRD pattern over 2θ of 10-14° shown in Fig. S1(a)† indicates that only sample 100 : 0 shows the (001) peak of GO, whereas the enlarged pattern over 2θ of 41-47° (Fig. S1(b)†) reveals that all pre-treated EG samples show very small bumps at 2θ of 43°, corresponding to the (101) plane of GO.40 The intensity of these bumps decreases gradually as the amount of H2O2 increases. The formation of GO in the sample pre-treated with only H2SO4 is presumably triggered by the formation of a graphite intercalation compound (GIC). GICs are widely known as an important stage in GO production via Hummers' and other chemical exfoliation methods.41,42 The intercalation of H2SO4 and HSO4⁻ in the graphite interlayer may help oxidising agents (e.g., HNO3 and KMnO4) enter the graphite slab, which may further lead to oxidation of carbon atoms on the basal plane.43,44 Cao et al. reported that the electrochemical exfoliation of GIC-graphite with a low stage index (n) leads to the formation of highly oxidised GO.44 GIC formation facilitates the entry of water during the electrochemical process, and the reaction between nucleophilic water and the positively charged graphite generates oxygen-containing functional groups.44,45 A GIC with lower n allows more water to enter the graphite slabs, increasing the chance of water attacking the positively charged graphite layers to form GO during the electrochemical process. In the case of less intercalation (high n), however, the product of this process is a mixture of GO and graphene layers.44 Fig. 4(b) shows the enlarged diffraction patterns of all samples over 2θ of 25-28°. The (002) graphitic peak shifts to lower 2θ in EG 100 : 0 and EG 95 : 5, and shifts back to higher 2θ in EG 93 : 7 and EG 91 : 9. This trend agrees well with the shift of the 2D peak in the Raman spectra (Fig. 2(b)), as the shift of the (002) peak to lower 2θ is also a sign of expansion of the graphite interlayer spacing along the c-axis (d002).46 The expansion can be caused by at least two factors: delamination of the graphite layers and oxidation of the graphite edges and basal plane. During electrochemical exfoliation, the positive electric potential drives the insertion of nucleophilic water molecules into the graphite interlayer, where they form oxygen-containing functional groups. The water molecules in the graphite slabs are then slowly oxidised to form oxygen gas.
The gases accumulate and generate pressure inside the graphite slab, which can overcome the van der Waals interactions between the graphite layers and cause delamination of the graphene layers.18,47 Fig. 5 shows the proposed mechanism of electrochemical exfoliation involving pre-treatment in H2SO4, with and without the addition of H2O2. The graphite sheet pre-treated in the H2SO4/H2O2 solution undergoes early expansion due to the decomposition of H2O2, which releases O2 gas. This shortens the delamination time, which is considerably longer in the electrochemical exfoliation of a graphite sheet pre-treated with H2SO4 alone. The shorter procedure also limits the massive attack of nucleophilic water on the graphite sheet that often leads to graphene oxide formation over prolonged times. To obtain information on the functionality of the obtained EG, Fourier transform infrared (FT-IR) spectra of the samples were collected and compared with the spectrum of the pristine graphite sheet (Fig. 6). All spectra show an absorbance peak at ca. 1620 cm⁻¹, attributed to C=C bending vibrations of sp2-hybridised carbons.48,49 Peaks related to oxygen functional groups originating from alcohol and water, such as O-H stretching, CO-H bending, and C-O stretching, are visible at ca. 3400 cm⁻¹, 1380 cm⁻¹, and 1090 cm⁻¹, respectively.50,51 While the O-H stretching peak at 3429 cm⁻¹ looks similar for all samples, the C-O stretching peak at 1116 cm⁻¹ is more prominent in the EG samples, indicating oxidation of the graphite sheet during the electro-exfoliation process. However, this peak decreases gradually with increasing H2O2 added during pre-treatment, suggesting less oxidation of graphite pre-treated with more H2O2. As mentioned previously, this can be attributed to the very fast breakdown of the fragile graphite sheets pre-treated with higher doses of H2O2. The oxidation of graphite during electrochemical exfoliation is often explained as a nucleophilic substitution reaction between the positively charged graphite sheets and water molecules. The reaction forms C-OH groups and may lead to C-O-C epoxide groups at prolonged times,44 while further oxidation of C-OH may form C=O (carbonyl) or O=C=O (carbon dioxide). However, the FT-IR spectra of EG in Fig. 6 reveal no peak indicating C=O carbonyl groups, suggesting low to moderate oxidation of the obtained EGs. The spectra show that the peak attributed to C-O stretching decreases, and that of CO-H bending increases, with increasing H2O2 used during pre-treatment. It appears that -COH groups transformed into -COC epoxide groups during the longer electrochemical reaction in samples pre-treated with a small amount of H2O2. The oxidation level of each sample, expressed as the relative percentage of oxygen-containing functional groups (RP_OCFG), was determined by comparing the integrated area of the peaks attributed to oxygen functional groups (1398 cm⁻¹ and 1116 cm⁻¹) to the integrated area of all peaks over 900-1850 cm⁻¹, as proposed by Kumar et al.48 Table 1 shows the calculated RP_OCFG value of each EG sample. The RP_OCFG value tends to decrease with increasing amounts of H2O2 involved during pre-treatment.
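The RP_OCFG calculation described above reduces to a ratio of integrated band areas. A minimal sketch follows, assuming Gaussian-shaped bands on a synthetic spectrum; the integration windows around the 1398 and 1116 cm⁻¹ bands are assumptions for illustration, since the exact limits used by Kumar et al. are not reproduced here.

```python
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Trapezoidal integral of an FT-IR band between lo and hi (cm^-1)."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    return np.trapz(absorbance[mask], wavenumber[mask])

def rp_ocfg(wavenumber, absorbance):
    """Relative percentage of oxygen-containing functional groups:
    area of the O-bands (CO-H bending ~1398, C-O stretching ~1116)
    over the total area along 900-1850 cm^-1 (band limits assumed)."""
    o_bands = (band_area(wavenumber, absorbance, 1350, 1450)
               + band_area(wavenumber, absorbance, 1050, 1180))
    total = band_area(wavenumber, absorbance, 900, 1850)
    return 100.0 * o_bands / total

# Synthetic spectrum for demonstration only.
def gauss(x, x0, s, a):
    return a * np.exp(-((x - x0) / s) ** 2)

wn = np.linspace(900, 1850, 1000)
ab = (gauss(wn, 1116, 40, 0.8)    # C-O stretching band
      + gauss(wn, 1398, 30, 0.5)  # CO-H bending band
      + gauss(wn, 1620, 30, 0.4)  # C=C bending band
      + 0.02)
print(f"RP_OCFG ~ {rp_ocfg(wn, ab):.1f}%")
```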
The lowest oxidation degree was demonstrated by EG 91 : 9, with an RP_OCFG of 87.49%. The same trend was revealed by elemental analysis using an energy-dispersive X-ray spectrometer (EDS), expressed as the O/C ratio in Table 1. The O/C ratios of the EG samples are higher than that of the pristine graphite sheet, confirming oxidation of the graphite sheets during electrochemical exfoliation. The O/C ratio decreases with increasing amounts of H2O2 used during pre-treatment, in line with the results of the RP_OCFG calculation. The lower oxidation of EG 91 : 9 reflects the friable structure of graphite sheets pre-treated in solutions containing more H2O2: the sample apparently broke prematurely into large graphite fragments during the early stage of the electrochemical process, preventing continuous delamination of the graphite layers and the evolution of oxygen functional groups from -COH to -COC. This proposed mechanism agrees well with the SEM observations in Fig. 3; the EG prepared with a higher amount of H2O2 during pre-treatment (EG 93 : 7 and EG 91 : 9) appeared less exfoliated, breaking into chunks of graphite instead of exfoliating into thin layers of EG. The electrical conductivity of the obtained EG as a function of the H2O2 volume fraction used during pre-treatment is displayed in Fig. 7. The electrical conductivity tends to increase with increasing H2O2, and the highest value was demonstrated by EG 91 : 9, at 103.72 ± 6.59 S cm⁻¹. This trend is inversely related to RP_OCFG and O/C, because electrical conductivity is negatively affected by the presence of defects and functional groups. Covalent functionalisation of graphene can halt the delocalisation of π electrons from the carbon atoms due to the transformation of planar sp2 hybridisation into tetrahedral sp3 hybridisation.52 In addition, oxygen in the sample can disrupt electron transport, decreasing the electrical conductivity.52 Fig. 8 shows the Nyquist plots obtained from the electrochemical impedance spectroscopy (EIS) measurements of the EG samples. The plots were fitted using the equivalent circuit depicted in Fig. S2.† The semicircle in the plot corresponds to the resistance between graphene sheets and the contact resistance between the electrode and current collector,53,54 and its diameter represents the charge transfer resistance (Rct) at the electrolyte-electrode interface.55 Among all samples, EG 91 : 9, the least oxidised sample, shows the lowest Rct of 2.59 Ω (Table 2). This suggests that the oxidation degree of graphitic materials is directly proportional to the Rct value: the oxygen functional groups in the EG samples act as an insulating layer that inhibits interfacial charge transfer and increases Rct.56 The Rct values are consistent with the electrical conductivity measurements, in that the sample with the lowest Rct has the highest electrical conductivity.57
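To illustrate how Rct is read off a Nyquist plot, the sketch below simulates a simplified Randles circuit (a series resistance plus a parallel Rct/Cdl element, with no Warburg term) over the same 100 kHz to 0.1 Hz range used above, and recovers Rct as the semicircle diameter on the real axis. The circuit values are illustrative, chosen near the reported 2.59 Ω rather than fitted to the actual data.

```python
import numpy as np

# Simplified Randles circuit: Z = R_s + R_ct / (1 + j*w*R_ct*C_dl).
R_s, R_ct, C_dl = 1.2, 2.6, 1e-4            # ohm, ohm, farad (assumed)
freq = np.logspace(5, -1, 200)              # 100 kHz down to 0.1 Hz
w = 2 * np.pi * freq
Z = R_s + R_ct / (1 + 1j * w * R_ct * C_dl)

# The high-frequency real-axis intercept is ~R_s and the low-frequency
# intercept is ~R_s + R_ct, so R_ct is the semicircle diameter.
r_hf = Z.real[0]    # highest-frequency point
r_lf = Z.real[-1]   # lowest-frequency point
print(f"estimated R_ct = {r_lf - r_hf:.2f} ohm")
```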
The capacitive behaviour of all samples was analysed by cyclic voltammetry (CV). Fig. 9(a) shows the cyclic voltammograms of pristine graphite and the various EG samples, measured at a scan rate of 1 mV s⁻¹ over −0.9 to 0 V. All samples except EG 100 : 0 showed electric double-layer capacitive (EDLC) behaviour, indicated by their pseudo-rectangular CV profiles.9,12 The CV profile of EG 100 : 0 is oblique in shape, which is likely caused by the high electrical resistivity of this sample. Table 3 shows the specific capacitance of the samples measured at a scan rate of 1 mV s⁻¹. The specific capacitance of each sample was calculated from its cyclic voltammogram using the following formula:58

C_s = (∫ I dV) / (m × v × ΔV)

where C_s is the specific capacitance, ∫ I dV is the integrated area of the cyclic voltammogram, m is the mass of active material (g), v is the scan rate (V s⁻¹), and ΔV is the potential window (V). All EG samples exhibited higher specific capacitance than the pristine graphite sheet because of the larger surface area resulting from successful electro-exfoliation. A large surface area is an essential factor determining the specific capacitance of a carbon-based supercapacitor, as samples with a large surface area can adsorb more ions on their surface.59 Among all samples, EG 95 : 5 showed the highest specific capacitance of 71.95 F g⁻¹, which exceeds that of electrochemically exfoliated graphene prepared without any pre-treatment as reported by Parvez et al.18 The reason behind these excellent capacitive properties is the high degree of exfoliation, as confirmed by the Raman spectroscopy measurements and SEM observations. Although EG 95 : 5 and EG 100 : 0 showed similar degrees of exfoliation, the more electrically conductive EG 95 : 5 enabled higher electron mobility on the electrode surface, leading to its higher specific capacitance.14,59,60 Fig. 9(b) shows the CV plots of EG 95 : 5 measured over three different voltage windows, i.e., −0.9 to 0 V, −1.2 to 0 V, and −1.4 to 0 V. The pseudo-rectangular form of the CV plots is maintained down to −1.4 V, suggesting that this sample has high potential as an electrode material for wide-voltage-window supercapacitors. The voltammogram measured over the −1.4 to 0 V window shows a hump at around −0.5 V, probably corresponding to a redox reaction of oxygen-containing functional groups in the sample, as also reported by Oh et al.61 Table 4 lists the specific capacitances of EG 95 : 5 measured at different voltage windows. The capacitance increases as the voltage window widens: more charge stored over the wider window produces a higher current response, thus increasing the capacitance. Fig. S3† shows the CV profiles of EG 95 : 5 measured at various scan rates from 1 to 200 mV s⁻¹ over −1.4 to 0 V. All CVs demonstrate rectangular-like curves even at the high scan rate of 200 mV s⁻¹, revealing ideal EDLC performance.21
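A minimal sketch of the capacitance calculation in the formula above, integrating a voltammogram by the trapezoidal rule. The current trace here is an idealised constant-current EDLC sweep with an assumed electrode mass, so the recovered value equals the target by construction; with real CV data the integral would be taken over the measured curve.

```python
import numpy as np

def specific_capacitance(voltage, current, mass_g, scan_rate_v_s):
    """C_s = (integral of I dV) / (m * v * delta_V), per the formula above.
    voltage in V, current in A, mass in g, scan rate in V/s; returns F/g."""
    area = np.trapz(np.abs(current), voltage)   # integrated CV area (A*V)
    delta_v = voltage.max() - voltage.min()     # potential window (V)
    return area / (mass_g * scan_rate_v_s * delta_v)

m, v = 0.002, 0.001                    # 2 mg active material, 1 mV/s (assumed)
V = np.linspace(-0.9, 0.0, 500)        # the -0.9 to 0 V window used above
C_target = 70.0                        # F/g, near the reported EG 95:5 value
I = np.full_like(V, C_target * m * v)  # constant current of an ideal capacitor
print(f"C_s = {specific_capacitance(V, I, m, v):.1f} F/g")
```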
Aside from improving the power and energy density of supercapacitors, lowering fabrication costs and ensuring environmentally friendly processing are key points of interest in the fabrication of supercapacitor materials.62,63 Although several groups have reported the fabrication of EG with higher specific capacitance, the use of additional organic substances in the exfoliation and dispersion processes is not economically or environmentally favourable.21,64,65 Our method offers a low-cost and easy process using a more environmentally friendly water-based route. Besides, it also demonstrates excellent exfoliation of graphene layers with outstanding capacitive properties.

Fig. 9 Cyclic voltammograms of (a) EG samples prepared using graphite sheets pre-treated in H2SO4 with various volume fractions of H2O2, compared with that of the pristine graphite sheet (at 1 mV s⁻¹), and (b) EG 95 : 5 at various potential windows using a scan rate of 1 mV s⁻¹.

Table 3 Specific capacitance (C_s) of EG samples extracted from the cyclic voltammograms measured at a voltage window of −0.9 to 0 V and a scan rate of 1 mV s⁻¹ (Fig. 9(a)).

Table 4 Specific capacitance (C_s) of EG 95 : 5 at various voltage windows measured using a scan rate of 1 mV s⁻¹ (Fig. 9(b)).

Voltage window (V) | C_s (F g⁻¹)
−0.9 to 0 | 71.95
−1.2 to 0 | 120.84
−1.4 to 0 | 150.69
Bayesian Hierarchical Modelling of Historical Data of the South African Coal Mining Industry for Compliance Testing

A Bayesian hierarchical framework for exposure data compliance testing is highly recommended in occupational hygiene; however, it has not been used for coal dust exposure compliance testing in South Africa (SA). Bayesian analysis incorporates prior information, which strengthens decision making regarding risk management. This study compared the posterior 95th percentiles (P95) obtained from Bayesian non-informative and informative priors (the latter derived from historical data) relative to the occupational exposure limit (OEL) and exposure categories, and against the South African Mining Industry Code of Practice (SAMI CoP) approach. A total of nine homogenous exposure groups (HEGs), comprising coal dust exposure data from 243 coal mine workers, were included in this study. A Bayesian framework with Markov chain Monte Carlo (MCMC) simulation was used to draw the full posterior distribution of the P95 relative to the OEL and investigate compliance. We obtained prior information from historical data and also employed a non-informative prior distribution to generate posterior findings, which were compared with the SAMI CoP. The SAMI CoP 90th percentile (P90) indicated that one HEG was compliant (below the OEL), while none of the HEGs were compliant under either Bayesian method. The analysis using the non-informative prior indicated higher exposure variability than the informative prior according to the posterior GSD. The median P95 values from the non-informative prior were slightly lower, with wider 95% credible intervals (CrI), than those from the informative prior. All HEGs in both Bayesian approaches fell in exposure category four (poorly controlled), with posterior probabilities slightly lower under the non-informative uniform prior distribution. All methods mainly indicated non-compliance of the HEGs. The non-informative prior, however, showed the potential to allocate HEGs to a lower exposure category, but with high uncertainty compared with the informative prior distribution derived from historical data. Bayesian statistics with informative priors derived from historical data should be strongly encouraged for coal dust overexposure assessments in South Africa to support correct decision making.

Introduction

South Africa (SA) is one of the largest producers of coal in the world, with an estimated 86,000 workers according to the Minerals Council South Africa, 2019 [1]. During coal mining, coal dust is generated, and when inhaled, it can cause coal mine dust lung disease (CMDLD) [2,3]. To lower the risk of coal dust overexposure and help prevent life-threatening CMDLD, the occupational exposure limit for respirable coal dust in SA is set at 2 mg/m³ [1,4]. As part of a continuous process of monitoring overexposure and compliance, the South African Mining Industry Code of Practice (SAMI CoP) stipulates that the identification of homogenous exposure groups (HEGs) is an important proxy for the assessment of personal exposures [5]. HEGs are defined as groups of employees who have similar exposure, such that a sample drawn from them can predict the exposure of all remaining workers [5,6]. In the SAMI CoP, HEGs are constituted by a stepwise process [5,6]: in step one, the mine is subdivided into ventilation districts based on areas with common intake and return air; in the next step (step two), each district is divided into activity areas (as found in coal mines).
The personal exposures in each activity area are then compared to the OEL, i.e., the eight-hour time-weighted average (TWA8h) coal dust concentration to which almost all workers may be repeatedly exposed without any adverse health effects. Each HEG is assigned to an exposure classification category according to its distance from the OEL [4] (Table 1). The sampling size for each HEG is equal to 5 or 5% of the HEG population, whichever is greater, whereas the sampling frequency is determined by the exposure classification category. The results of each sampling campaign are evaluated for compliance independently of previous (i.e., historical) data. Current practice in South Africa shows that HEGs are too heterogeneous with respect to exposure levels, resulting in over- or underestimation of exposure for individual workers [7]. Yet, in good practice, previous sampling results can be used to update current data for the categorization of HEGs by using a Bayesian framework to elicit prior information from the historical data [8]. This framework, with broad use of informative priors, is highly encouraged in occupational hygiene because it accommodates historical data in the empirical analysis of monitoring exposure data for accurate exposure grouping [8-12]. For compliance testing, the SAMI CoP approach assumes that the sample data are normally distributed and that the 90th percentile of the exposure data (P90) should be below the OEL. According to the European Standardization Committee (CEN) [13] and the British and Dutch guidelines, compliance requires that the 95th percentile (P95) of the lognormal exposure distribution be below the OEL [14]. It is important to note that the SAMI CoP is based solely on current data, yet incorporating historical data in a Bayesian framework could improve the identification of overexposed HEGs, as future risk management is based on the exposure profile of the data at hand. The SAMI CoP approach rests on a point estimate of the P90 of the current data and does not consider the uncertainty surrounding that estimate. The Bayesian framework uses the credible interval to describe uncertainty surrounding a parameter; the credible interval is interpreted as the probability that an estimate falls within a certain range, given the data [15]. Under the SAMI CoP approach, similar exposures in different areas must each be assessed repeatedly, whereas in a Bayesian sense this can be achieved naturally by using earlier sample results as a prior distribution from a common population distribution [16]. Bayesian inference is exact and easily understood by anyone; for example, "the probability that a person is overexposed to coal dust is 95%". Currently, the Bayesian framework has not been applied to coal dust exposure in the mining industry, and no study has emphasized the use of historical data in determining overexposure in routine occupational hygiene assessments. Therefore, the first objective of this study was to compare the posterior P95 of the non-informative and informative applications of the Bayesian framework, and the SAMI CoP, relative to the OEL. The second objective was to compare the grouping of the posterior probabilities of the P95 exposures according to the South African occupational exposure categories between the two Bayesian approaches.
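As a concrete reading of the SAMI CoP rules summarised above, the sketch below encodes the sampling-size rule (5 workers or 5% of the HEG, whichever is greater) and the point-estimate compliance test (P90 below the OEL). The exposure data are synthetic, generated only to exercise the functions.

```python
import math
import numpy as np

OEL = 2.0  # mg/m^3, respirable coal dust OEL in South Africa

def sami_sample_size(heg_population: int) -> int:
    """SAMI CoP rule: sample 5 workers or 5% of the HEG, whichever is greater."""
    return max(5, math.ceil(0.05 * heg_population))

def sami_compliant(exposures_mg_m3) -> bool:
    """SAMI CoP point-estimate test: the P90 of the measured exposures must
    fall below the OEL (no uncertainty around the estimate is considered)."""
    return np.percentile(exposures_mg_m3, 90) < OEL

# Illustrative campaign for a 120-worker HEG with lognormal exposures.
rng = np.random.default_rng(1)
n = sami_sample_size(120)                                # -> 6 samples
x = rng.lognormal(mean=np.log(0.8), sigma=0.9, size=n)
print(f"n = {n}, compliant = {sami_compliant(x)}")
```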
The present paper describes the development of an informative prior from historical coal dust exposure data, which is combined with the present exposure data to obtain posterior distributions in a Bayesian framework. Decision making on exposure risk management according to the SAMI exposure categories is compared with posteriors derived from a non-informative prior, and the strengths and limitations are discussed.

Study Design and Data Collection

This is a cross-sectional study. Respirable coal dust exposure data were collected periodically from mines in different geographic locations. The population included in this study comprised only male mine workers working in underground coal mines. The data were collected from workers in HEGs within each mine. In line with the SAMI CoP approach, a mixed stratified and random selection sampling frame was used, considering that "either 5% of the workers assigned to a HEG, or a minimum of five workers, should be selected for a measurement campaign" [5]. The sample collection and analysis methods have been described in a previous paper [7]. Briefly, each selected worker was issued with a size-selective cyclone with a mixed cellulose ester filter, which was attached to a dust sampling pump. The cyclone filters were analysed according to National Institute for Occupational Safety and Health (NIOSH) method 0600 [17].

Statistical Analysis

Statistical analysis was carried out in R version 4.1.1 (R Core Team, Vienna, Austria), using the RStan and bayestestR packages [18-20]. Consistent and comparable historical coal dust data for each HEG were used to update the current monitoring data to produce the posterior geometric mean, geometric standard deviation, P95, and the posterior probabilities of the P95 exposure falling in each exposure category. For the prior specification, we randomly selected a prior sample of size five from each HEG's historical data, as recommended in previous studies of occupational exposure assessment [21,22]. From an occupational exposure perspective, the prior sample size should be between 10% and 40% of the current data to obtain accurate information on the posterior distribution for decision making. A sample size of five was therefore used to keep the focus of the posterior distribution on the current data. This is important because, in Bayesian statistics, the posterior distribution is a compromise between the information in the prior and in the current data, and the posterior should be dominated by the current data as the sample size increases [15]. For the likelihood function, we used all available current monitoring data.

Model Specification Using Current Monitoring Data

The model was specified using the geometric mean (GM), given as exp(µ), and the geometric standard deviation (GSD), denoted exp(σ), which are the exponentials of the mean and standard deviation of the log-transformed data. The likelihood function is given by Equation (1):

L(µ, σ² | y) = ∏_{i=1}^{n} (2πσ²)^{−1/2} exp(−(y_i − µ)² / (2σ²))  (1)

where y_i is the log-transformed current monitoring data and n is the number of observations of current monitoring data. The OEL exposure categories were added as a random variable in the model directly, to produce the posterior probability distribution of the P95 over each of the categories [21,22]. From the OEL exposure categories (Table 1), the highest category was assumed to have P95 > OEL and the lowest P95 < 10% of the OEL.
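The step from posterior draws of (µ, σ) to category probabilities can be sketched as follows: the lognormal P95 is exp(µ + 1.645σ), and each draw is binned against the OEL cut-offs. Only the top (>OEL) and bottom (<10% of OEL) boundaries are stated in the text, so the 50% boundary between the middle bands below is an assumption, and the posterior draws are synthetic stand-ins for MCMC output.

```python
import numpy as np

Z95 = 1.645  # standard normal 95th percentile
OEL = 2.0    # mg/m^3

def p95_draws(mu_draws, sigma_draws):
    """Posterior draws of the lognormal 95th percentile: exp(mu + z95*sigma)."""
    return np.exp(mu_draws + Z95 * sigma_draws)

def category_probabilities(p95, oel=OEL):
    """Posterior probability of P95 falling in each exposure band
    (categories 1, lowest, through 4, poorly controlled). The 10% and
    100% cut-offs come from the text; the 50% boundary is assumed."""
    edges = [0.0, 0.1 * oel, 0.5 * oel, oel, np.inf]
    counts, _ = np.histogram(p95, bins=edges)
    return counts / p95.size

# Synthetic stand-ins for posterior draws of mu and sigma.
rng = np.random.default_rng(7)
mu = rng.normal(np.log(0.9), 0.10, 20_000)
sigma = np.abs(rng.normal(1.1, 0.05, 20_000))
p95 = p95_draws(mu, sigma)
print(f"median P95 = {np.median(p95):.2f}", category_probabilities(p95).round(3))
```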
Model Specification Using a Non-Informative Uniform Prior Distribution

In occupational health research, uniform prior distributions are highly encouraged. Consider a current monitoring data vector Y = (y_1, ..., y_n), where n is the sample size, with Y ~ Norm(µ, σ²), where µ represents the log of the geometric mean (GM) and σ the log of the geometric standard deviation (GSD). Then

µ ~ Uniform(a_µ, b_µ), σ ~ Uniform(a_σ, b_σ)

where a and b are the lower and upper bounds of the prior distribution, respectively. We took inspiration from Gelman 2006 [23], where the lower bound for µ was set to 0 and that for σ to −1/2, and the upper bounds were set to infinity.

Informative Prior Specification from Historical Data

For the informative prior, we assumed that the log-transformed historical data with n_0 observations had sample variance s²_y0 = Σ(y_i0 − ȳ_0)² / (n_0 − 1). If the historical data y_i0 ~ Norm(µ, σ²), then the mean of the historical data ȳ_0 ~ Norm(µ, σ²/n_0). Treating µ as a random quantity and replacing σ² with the prior estimate s²_y0, µ takes the form shown in Equation (2):

µ ~ Norm(ȳ_0, s²_y0 / n_0)  (2)

The full conditionals for µ and σ² were based on truncated normal and truncated inverse gamma prior distributions, respectively [9,22]. In the truncated prior distributions, we placed bounds on µ using the suggestion of the Bayesian decision analysis (BDA) of Hewett et al., 2006 [12], which was 0.005; we used 0.001 in this study to make it less likely to affect the results, while the upper bound was allowed to vary iteratively so that the prior would not be unfairly skewed toward a more favourable result. For σ², the lower bound from BDA was used and the upper bound was likewise allowed to vary iteratively. To develop the prior for σ², we started from the expression (n_0 − 1)s²_y0 / σ² ~ χ²_{n0−1}, where χ²_{n0−1} denotes the chi-square distribution with (n_0 − 1) degrees of freedom. Treating σ² as the random variable given s²_y0 from the historical data, the variance is given by Equation (3), for n_0 > 1:

σ² ~ IG((n_0 − 1)/2, (n_0 − 1)s²_y0 / 2)  (3)

where IG(a, b) is the inverse gamma distribution in Equation (4), with shape parameter a and scale parameter b:

f(x; a, b) = (b^a / Γ(a)) x^{−(a+1)} exp(−b/x), x > 0  (4)

Further details on the prior specifications for µ and σ² and the full conditional distributions are available in Supplementary File S1. Markov chain Monte Carlo (MCMC) algorithms in the form of the Gibbs sampler [24] were implemented to draw from the full posterior conditional distributions. The Gibbs sampler was applied because of its easy computational application: it samples from conditional distributions. For example, if a parameter has been divided into sub-parameters, the Gibbs sampler draws each sub-parameter conditional on the values of all the others, iteratively. Each sub-parameter is updated conditionally many times on the latest values of all other components of the parameter to produce the marginal posterior distribution. We used 20,000 MCMC iterations to draw samples from the posterior.
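A minimal Gibbs sampler along the lines described above, written in Python rather than R: the normal prior on µ is built from the historical mean with variance s²_y0/n0 (Equation (2)) and the inverse-gamma prior on σ² follows Equation (3). The bound truncation used in the study is omitted here for brevity, so this is a simplified sketch rather than the study's implementation.

```python
import numpy as np

def gibbs_lognormal(y, y0_mean, s2_y0, n0, n_iter=20_000, seed=0):
    """Gibbs sampler for log-exposures y ~ Normal(mu, sigma^2), with
    mu ~ Normal(y0_mean, s2_y0/n0) and
    sigma^2 ~ InvGamma((n0-1)/2, (n0-1)*s2_y0/2) built from historical data.
    Truncation of the priors is omitted (simplification)."""
    rng = np.random.default_rng(seed)
    n, ybar = y.size, y.mean()
    a0, b0 = (n0 - 1) / 2, (n0 - 1) * s2_y0 / 2
    mu, sig2 = ybar, y.var(ddof=1)
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # mu | sigma^2, y : conjugate normal update
        prec = n / sig2 + n0 / s2_y0
        mean = (n * ybar / sig2 + n0 * y0_mean / s2_y0) / prec
        mu = rng.normal(mean, np.sqrt(1.0 / prec))
        # sigma^2 | mu, y : inverse-gamma update, sampled as 1/Gamma
        a, b = a0 + n / 2, b0 + np.sum((y - mu) ** 2) / 2
        sig2 = 1.0 / rng.gamma(a, 1.0 / b)
        draws[t] = mu, np.sqrt(sig2)
    return draws  # columns: mu (log GM) and sigma (log GSD) draws

# Illustrative run: 30 current log-exposures, prior from 5 historical samples.
rng = np.random.default_rng(42)
y = rng.normal(np.log(0.9), 1.0, 30)
d = gibbs_lognormal(y, y0_mean=np.log(1.0), s2_y0=1.2**2, n0=5)
print("posterior median GM:", float(np.exp(np.median(d[:, 0]))))
```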
Posterior convergence diagnostics were carried out using the Gelman-Rubin convergence diagnostic, which compares the between- and within-chain variability of the model parameters to confirm whether the chains are stationary [25]. The between-chain component is the variance of the posterior means of the samples, while the within-chain component is the mean of the variances within each sample. If the test statistic, denoted R-hat, is ≤1.05, then convergence is achieved. The reliability of the posterior quantiles was confirmed using the bulk and tail effective sample sizes [26]; an effective sample size greater than 100 per chain is considered good. The convergence diagnostics in this study indicated an R-hat of less than 1.05 and effective sample sizes of more than 100 (not shown), implying that convergence was achieved.

Results

Table 1 shows the exposure classification of the SAMI CoP. The highest category is four, in which the P90 exceeds the OEL and exposure is classified as poorly controlled; in the lowest category, the P90 is less than 0.1 of the OEL, indicating highly effective control. Table 2 summarises the current monitoring data and corresponding historical data for the nine HEGs. HEGs C and G had the highest AM (2.42) in the current monitoring data; in the historical data, HEG A had the highest AM (2.00). HEGs D and G had the lowest AMs in the current and historical data, respectively. The GSD indicated high exposure variability in the current monitoring data for HEGs B, D, E, F, H, and I (GSD > 3), while in the historical data, exposure variability was high in HEGs B, D, F, G, and I. Table 3 shows the SAMI P90 and the medians and 95% credible intervals of the posterior GM, GSD, and P95 for the non-informative and informative priors. The SAMI CoP P90 values are much lower than the P95 values from the non-informative and informative Bayesian approaches. The SAMI approach was the only method under which any HEG fell below the OEL of 2 mg/m³ (HEG D, at a P90 of 1.62 mg/m³). The posterior median GM indicated that all HEG exposures were below the OEL of 2 mg/m³. There was high exposure variability in the majority of HEGs, as indicated by GSDs greater than 3; three and four HEGs under the non-informative and informative priors, respectively, showed lower exposure variability according to the GSD. The patterns of the medians of the posterior P95 and the 95% credible intervals (CrI) from Table 3 are shown in Figure 1. Overall, the medians and 95% CrIs were similar across HEGs between the non-informative and informative priors. For five of the nine HEGs, the P95 was lower under the non-informative prior, with wider 95% CrI bounds, than under the informative prior. Overall, there was higher uncertainty under the non-informative prior, indicated by wider 95% CrIs (and higher upper bounds) compared with the informative prior distribution. The comparison of the grouping of the HEGs' posterior probabilities of the P95 according to the different OEL categories (see Table 1) is presented in Table 4. Under both prior distributions, HEG D showed a lower posterior probability of the exposure level being in category four (poorly controlled) than the rest of the HEGs. All HEGs under both priors fell in the poorly controlled category four, with probabilities of more than 90% and 95%, respectively. Some posterior probabilities under the non-informative prior distribution, although all in category four, were slightly lower than under the informative prior.

Discussion

We used an informative prior from historical data to update current monitoring data under a lognormal distribution in the Bayesian framework to produce the posterior geometric mean, geometric standard deviation, and P95. Similarly, a non-informative prior was used, while the SAMI CoP was based on the P90, and the findings were compared.
The posterior probabilities of the P95 exposures were also grouped according to the SAMI exposure categories. The use of past data is important because decision making on exposure risk management based only on current data can be misleading. The weight of the historical data in the analysis matters: Symanski et al. [27] used equal weights for current and past data, whereas we decided the weight should be unequal, with a small prior sample size having limited influence on the current data. This is consistent with other studies in which a small prior sample size was thought to produce inferential benefits when the results were compared with non-informative priors [22,28]. The Bayesian framework is also known to be robust with small sample sizes [15], so even with scarce data, exposure risk analysis can be conducted with relative confidence. Both prior distributions indicated that the posterior estimates of the GM were below the OEL, meaning the level of coal dust risk control was similar in the past and the present. However, risk mitigation and decision making regarding exposure control should not be based on the central tendency (mean/median) of the data, but on the P95, below which at least 95% of the underlying distribution lies. The posterior GSDs were also quite similar across the two Bayesian prior distributions, although those of the non-informative distribution tended to be somewhat higher, indicating greater variability than under the informative prior. The comparison of the SAMI approach, using the P90 for compliance, with the Bayesian methods showed that the P90 was lower (with one HEG exposure below the OEL) than the P95. This is consistent with our previous study, in which the SAMI approach tended to underestimate overexposure risk [4]. The Bayesian approaches considered the uncertainty of overexposure rather than relying on a point estimate, as the SAMI CoP does. All HEGs under the Bayesian approaches had P95 values well above the OEL. The distributions of the median P95 were similar between the non-informative and informative priors (Figure 1). The majority of HEGs under the non-informative prior showed slightly lower P95 values with wider 95% CrIs, indicating high uncertainty compared with the informative prior derived from historical data. This underscores the importance of using historical data in coal mining occupational exposure assessment: decisions on overexposure risk can be made with greater confidence when historical data are brought in to update the current data, as they are naturally part of the same exposure history. We then compared the posterior probabilities of grouping the P95 in each exposure category between the non-informative and informative (historical data) prior distributions (Table 4). In both approaches, the highest probabilities (greater than 95%) of the P95 were observed in exposure category four, which indicates poorly controlled exposure; none of the HEGs' posterior P95 values fell in a lower exposure category. From these results, the use of historical data to update current data in Bayesian statistics for occupational exposure assessment is very important, as non-informative priors tend to assign HEGs to lower categories, which affects informed decision making with regard to overexposure risk mitigation.
The tendency to assign HEGs to a lower category is similar to a simulation study showing that non-informative uniform priors place the P95 probabilities in lower exposure categories [22]. The difference from the informative prior derived from historical data might also reflect the possible use of an incorrect prior for certain HEGs, resulting from a lack of adequately repeated measurements and sampling of prior data. As seen above, the uncertainty informing risk management decisions is not low, and higher exposure variability is shown by the non-informative prior distribution than by the informative prior from historical data. Although this study showed that non-informative priors tend to place the posterior probabilities of the P95 in a lower category and increase variability, this must be interpreted with caution, as the probability density function used to specify such a prior, usually over an infinite integral, might yield improper posterior distributions [29]. The specification of the non-informative uniform prior must therefore be considered carefully. Our findings indicate that the decision to regard these HEGs as compliant or non-compliant should also consider the variability of the data. A strength of this study is that the Bayesian analysis naturally allows prior information from historical data to be combined with current data within a solid decision-making framework [30]. With findings robust even to small sample sizes, the Bayesian analysis provided inferences that are conditional on the data, making them exact and easily interpretable; for example, the probability of the posterior P95 being in category four (the poorly controlled group) can be expressed quantitatively [31]. Regarding the limitations of this study, it is important to recall that the HEGs used here were created by grouping workers based on common intake and return air. This means the HEGs can be too heterogeneous, because a HEG may contain several job titles with different exposure variabilities [7]. As demonstrated earlier [7], HEGs tend to have high variability, which also affects their compliance with the OEL and their grouping by exposure category. From the Bayesian perspective, historical data are sometimes unavailable, or are not similar and consistent enough with the current data, making an informative prior infeasible.

Conclusions

It is clear from the findings that a Bayesian framework with an informative prior can support concise decision making on occupational exposure risk mitigation in the coal mining industry with great confidence. Bayesian analysis with a non-informative uniform prior distribution tends to place HEGs in lower exposure categories than an informative prior distribution derived from historical data. The non-informative prior findings also showed high uncertainty and variability, so decisions on exposure risk would likely be made with less confidence, and overexposure risk would likely be underestimated. We recommend increased use of the Bayesian framework with prior information from historical data in coal mining occupational exposure assessment. This will improve solid decision making concerning coal dust overexposure risk and compliance.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Landscape Connectivity Analysis and Optimization of Qianjiangyuan National Park, Zhejiang Province, China

As natural ecosystems in most parts of the world come under increasing human influence, fragmentation is becoming a major driver of the global biodiversity crisis, and connectivity between habitat patches is therefore becoming even more important. China began building national parks with the primary purpose of protecting nationally representative natural ecosystems and maintaining the integrity of their structure, processes, and functions. Research is needed to improve the internal connectivity of national parks and to inform existing functional zoning and biological corridors. In this study, Qianjiangyuan National Park was selected as an example park, and landscape fragmentation was evaluated using landscape indices and simulated visually. The habitat characteristics of protected species in the region, morphological spatial pattern analysis, and the delta of the probability of connectivity were used together to identify key habitat patches and their importance levels in the study area. Potential habitat corridors in the region were then obtained using least-cost path analysis and gravity modelling, based on the distribution of key habitats and the migration costs of target species. The results show that the disturbed landscape of the study area is dominated by tea plantations and drylands, with central roads being an important factor affecting overall landscape connectivity. In terms of distribution, key habitat patches in the mountains have high value. In terms of area, patch size is not directly proportional to importance for maintaining landscape connectivity in the region, although large patches are generally of higher importance. In terms of distance, key habitats that are closer to each other are more strongly connected, with greater potential for species migration. Considering the functional zoning of Qianjiangyuan National Park, the placement of strictly protected areas and recreational areas is reasonable, while traditional use areas and ecological conservation areas could be appropriately adjusted according to the distribution of key habitats. The important corridor in the middle of the ecological conservation area is crucial for the overall connectivity of the national park, and the connectivity between strictly protected areas will depend on successful protection of the ecological conservation area.

Introduction

National parks are one of the most important types of protected area in China. China has been building a national park system since 2015 with the main aim of protecting the integrity and authenticity of natural ecosystems; the first batch of 10 pilot national parks was completed in 2020. Qianjiangyuan National Park, which focuses on the forest ecosystem, is one of these 10 pilots and is located in Zhejiang Province, one of China's most economically developed provinces. With rapid urban and rural development, rates of habitat loss and fragmentation are rising.

Landscape Distribution Map of Qianjiangyuan National Park

We used high-resolution remote sensing images taken by China's Gaofen-1 (GF-1) satellite, successfully launched in 2013, as the data source. We selected high-resolution, cloud-free remote sensing (RS) images from March 2019 (resolution = 2 m; road areas = 0.8 m) based on the land cover characteristics of the study area.
The images were pre-processed in ENVI 5.1 (geometric correction, pan-sharpening, cropping, and other steps), followed by supervised classification and random sampling of the mapped patches (381 samples, sampling rate of 20%). The accuracy of the interpretation samples was tested against Google Earth imagery and field survey sample points, checking the degree of agreement between the interpretation layer and the image, as well as missed and misclassified patches. The interpretation accuracy was >95%, and the minimum mapped feature size was 1000 m², which met the accuracy requirements of this study. (The remote sensing image interpretation and its accuracy testing were commissioned to 21st Century Space Technology Co., Ltd., Beijing, China.) The landscape of Qianjiangyuan National Park was classified into 24 categories based on the habitat types and land cover characteristics of the target species: natural arboreal forest, shrubland, bamboo forest, other woodland, natural grassland, nursery land, tea plantation, pond, dry land, bare land, highway, road land, bridge, trail, urban residential land, rural residential land, detached house, house under construction, land for hydraulic construction, ditch, other construction land, reservoir, river, and lake.

Vegetation Type Distribution Map of Qianjiangyuan National Park

The 2017 forest resources survey data for Qianjiangyuan National Park, provided by the Zhejiang Forestry Resources Monitoring Center, were used to determine the distribution of forest types, including broadleaf forest, coniferous forest, mixed coniferous forest, bamboo forest, shrubland, grassland, cultivated land, and non-forest land.

Habitat Identification and Dispersal Distance Thresholds of Target Species

In this study, the analysis of landscape connectivity was based on the habitat characteristics of the target species in the study area. In landscape connectivity assessment, the selection and dispersal distances of the target species are crucial [30], and the landscape connectivity of protected species can guide the management of protected areas. Dispersal distance is a key process in determining distance thresholds and is species-specific. Two species in Qianjiangyuan National Park are important for protection at the national level in China, namely Elliot's Pheasant (Syrmaticus ellioti) and the Black Muntjac (Muntiacus crinifrons), both of which are also listed in CITES Appendix I. The landscape connectivity analysis was therefore conducted using the dispersal distance thresholds and habitat selections of these two species. Elliot's Pheasant is a typical ground-dwelling forest resident bird that can inhabit broad-leaved forests, coniferous forests, bamboo forests, and short-term shrublands close to forests, with evergreen or deciduous broad-leaved forests being the most suitable habitat [31]. Li [32] observed that the winter dispersal of Elliot's Pheasant can span two or three hills over a diameter of 1.5-2 km, while Shi and Zheng [33] tracked a spring dispersal of more than 3 km using radio telemetry. Peng and Ding [34] used telemetry to determine the spring breeding dispersal linear distance to be 1.5-2.1 km.
Zhang [35] observed the behaviour of Elliot's Pheasants released after wilderness training using radio telemetry equipment, GPS, compasses, and telescopes, and determined their dispersal distance to be 0.2-3.0 km. The habitat of the Black Muntjac includes broadleaf forest, mixed coniferous forest, coniferous forest, scrub, and bamboo forest, with mixed coniferous forest and broadleaf forest preferred [36]. The dispersal distance of the Black Muntjac is poorly studied; its activity patterns are territorial, generally involving movement within the home range, with some individuals moving up to 2.5-5.0 km [37,38]. Based on the literature described above, we set dispersal thresholds of 3.0 km for Elliot's Pheasant and 5.0 km for the Black Muntjac.

Landscape Fragmentation Analysis of Qianjiangyuan National Park

With reference to relevant studies [39], natural arboreal forests, shrublands, natural grasslands, bamboo forests, other woodlands, bare lands, rivers, lakes, and reservoirs in Qianjiangyuan National Park were classified as protected landscapes, whereas tea plantations, nursery lands, dry lands, highways, road lands, trails, bridges, ponds, ditches, rural residential lands, urban residential lands, detached houses, houses under construction, lands for hydraulic construction, and other construction lands were classified as non-protected landscapes. The area percentage (P), area-weighted mean patch fractal dimension (AWMPFD), fragmentation index (F), and relative aggregation (C) of protected landscape patches were calculated to analyse the spatial characteristics of protected landscape patches at the landscape level and as a whole. The landscape indices are summarised in Table 1.

Table 1 Landscape indices

Percentage of protected landscape patch area (P): P = (Σ_{i=1}^{n} S_i) / A. Range: 0 ≤ P ≤ 1; S_i is the area of the i-th protected landscape patch; A is the total area of the national park.
Area-weighted mean patch fractal dimension (AWMPFD) [6,23]: AWMPFD = Σ_i Σ_j [(2 ln(0.25 P_ij) / ln a_ij) × (a_ij / A)]. Range: 1 ≤ AWMPFD ≤ 2; P_ij is the perimeter of the j-th patch of type i; a_ij is the area of the j-th patch of type i.
Relative aggregation (C) [6,24]: Range: 0 ≤ C ≤ 1; P_ij is the perimeter of the j-th patch of the i-th type; n is the total number of patch types in the landscape.
Fragmentation index (F): quantified in this study via the average patch area of protected landscape patches (see below).

Fragmentation reduces patch area and increases the number of patches, so the average patch area decreases [30]; we therefore used the average patch area to quantify fragmentation. The remote sensing images were divided into grids (500 m × 500 m) using the Fishnet tool in ArcGIS. The area and number of patches in each grid cell were counted, and the average patch area was calculated to indicate the degree of landscape fragmentation.

Key Habitat and Connectivity Analysis of Qianjiangyuan National Park

This study follows the method of Guo et al. [30], combining morphological spatial pattern analysis (MSPA) and the delta of the probability of connectivity (dPC), to identify the habitats in Qianjiangyuan National Park.
First, according to the habitat types of Elliot's Pheasant and the Black Muntjac and the relevant literature, areas where vegetation, elevation, and slope matched the habitat selection of each species were identified as potential habitat, and the study area was reclassified into foreground (potential habitat) and background (all other areas) to obtain a binary map of potential and non-potential habitats. A MSPA was then conducted in the GuidosToolbox software (https://forest.jrc.ec.europa.eu/en/activities/lpa/gtb/) (accessed on 21 May 2021) to reclassify the landscape into seven categories based on morphological features: core, islet, loop, bridge, perforation, edge, and branch [40]. The edge width was set to five pixels, and "core" areas were extracted to identify habitats under the eight-neighbour rule. Unlike traditional methods that focus on the area or importance of individual patches without considering overall landscape connectivity, this method provides four- or eight-neighbour rules because the connectivity analysis is performed on a raster grid, allowing automatic classification based on pixel-level geometric concepts. The probability of connectivity (PC) is an area-based functional connectivity metric well suited to identifying the key elements that maintain overall habitat connectivity, quantitatively describing landscape connectivity and identifying patches with important connectivity roles [41]. The delta of PC (dPC) quantifies the contribution of each patch to the overall connectivity of the ecological network. They are calculated as follows:

PC = (Σ_{i=1}^{n} Σ_{j=1}^{n} a_i a_j p*_ij) / AL²

dPC = 100 × (PC − PC_remove) / PC

where a_i and a_j refer to the areas of habitats i and j, respectively; p*_ij denotes the strength of the connection between a pair of patches, describing the ease of dispersal between patches i and j; AL refers to the area of the national park; and PC_remove is the PC value recalculated after removing a given patch. Conefor Sensinode 2.6 software (http://www.conefor.org/) (accessed on 21 May 2021) was used to calculate the dPC values. According to the different dispersal abilities of the target species, the dispersal distance thresholds were set at 3.0 km for Elliot's Pheasant and 5.0 km for the Black Muntjac. If the distance between two patches was within the threshold, we set the probability of dispersal between them to 0.5 [42]. Finally, the 10 patches with the highest dPC values were selected as key habitats.
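Before turning to the corridor analysis, the PC and dPC calculations above can be made concrete with a toy example: a handful of patches, the binary 0.5 dispersal probability within the 3 km threshold specified above, and a maximum-product path closure standing in for Conefor's best-path probability p*_ij. All patch areas and distances are invented for illustration.

```python
import numpy as np
from itertools import product

def pc_index(areas, dist, al, threshold, p_step=0.5):
    """PC = sum_ij a_i * a_j * p*_ij / AL^2, where a direct step between
    patches closer than `threshold` has probability p_step and p*_ij is
    the best (maximum-product) path probability between i and j."""
    n = len(areas)
    p = np.where(dist <= threshold, p_step, 0.0)
    np.fill_diagonal(p, 1.0)
    for k, i, j in product(range(n), repeat=3):  # Floyd-Warshall closure
        p[i, j] = max(p[i, j], p[i, k] * p[k, j])
    return (areas[:, None] * areas[None, :] * p).sum() / al**2

def dpc(areas, dist, al, threshold):
    """Percentage drop in PC when each patch is removed in turn."""
    full = pc_index(areas, dist, al, threshold)
    values = []
    for k in range(len(areas)):
        keep = np.arange(len(areas)) != k
        part = pc_index(areas[keep], dist[np.ix_(keep, keep)], al, threshold)
        values.append(100.0 * (full - part) / full)
    return np.array(values)

# Toy landscape: 4 patches (areas in ha), edge-to-edge distances in km,
# 3 km dispersal threshold (Elliot's Pheasant). Values are illustrative.
areas = np.array([120.0, 45.0, 80.0, 10.0])
dist = np.array([[0.0, 2.5, 6.0, 2.0],
                 [2.5, 0.0, 2.8, 4.0],
                 [6.0, 2.8, 0.0, 5.5],
                 [2.0, 4.0, 5.5, 0.0]])
print(dpc(areas, dist, al=1000.0, threshold=3.0).round(2))
```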
Based on the published literature related to the habitat selection of Elliot's Pheasant and the Black Muntjac in the study area, the five resistance indicators of vegetation type, elevation, slope, distance from roads, and distance from settlements were selected to summarize the habitat selection of the target species reported in the literature and to assign habitat levels to them. The golden section method [44] was used to assign resistance coefficients to each habitat level, construct cost surfaces for the different resistance indicators based on the resistance coefficients, and superimpose the cost surfaces of the resistance indicators into a composite cost surface, which was used in the least-cost path analysis to identify potential corridors in this study. We used the top 10 patches with the highest dPC values as key habitat areas and used the gravity model to identify general and important corridors among the potential corridors. The gravity model was used to quantitatively evaluate the interaction force between the source and the target [45], with a greater force indicating a more important corridor. Therefore, the relative importance of each corridor was evaluated with the gravity model G_ab = N_a · N_b / D_ab², where G_ab is the interaction force between core patches a and b, N_a and N_b are the weight values of the two patches (computed from the resistance value P_a and the area S_a of patch a, and likewise for patch b), and D_ab = L_ab/L_max is the standardized value of the potential corridor resistance between patches a and b, with L_ab the cumulative resistance value of the corridor between patches a and b and L_max the maximum value of the cumulative resistance of all the corridors in the study area. In this study, we constructed an interaction matrix between the 10 key habitats based on the gravity model, and corridors with an interaction force greater than 0.1 were extracted according to the matrix evaluation results as important corridors, with the rest designated as general corridors. The general corridors, important corridors and key habitats were superimposed to construct a network of key habitats and corridors in Qianjiangyuan National Park. Analysis of Landscape Fragmentation The landscape type distribution map of Qianjiangyuan National Park was obtained after interpreting the high-resolution remote sensing image to extract land cover information (Figure 2). Table 2 shows the landscape classification results, in which natural arboreal forest covers the largest area, accounting for 78.37% of the total area of the national park, and other major landscape types include shrubs (accounting for 3.97%), bamboo forest (accounting for 2.85%), tea plantations (accounting for 7.41%), and dry lands (accounting for 4.39%). In Qianjiangyuan National Park, natural arboreal forests, shrublands, natural grasslands, bamboo forests, other woodlands, bare lands, rivers, lakes and reservoirs are classified as protected landscapes, whereas tea plantations, nursery lands, dry lands, highways, road lands, trails, bridges, ponds, ditches, rural residential lands, urban residential lands, detached houses, houses under construction, lands for hydraulic construction, and other construction lands are classified as non-protected landscapes. The distribution of protected and non-protected landscapes is shown in Figure 3. The average patch area of the protected landscapes was used to roughly simulate landscape fragmentation. As shown in Figure 4, the central and outer edge portions of the national park are more severely fragmented in comparison with other areas.
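The least-cost corridor step described above can be sketched as follows: a composite cost raster (here simply two hypothetical indicator surfaces summed) and a cheapest-route search between two habitat cells. The study used ArcGIS cost-path analysis, so this small Dijkstra implementation is only an illustrative stand-in with made-up cost values.

```python
# Illustrative least-cost path on a composite cost raster (hypothetical values).
import heapq

vegetation_cost = [[1, 1, 5], [1, 8, 5], [1, 1, 1]]
road_cost       = [[0, 0, 9], [0, 9, 9], [0, 0, 0]]
cost = [[v + r for v, r in zip(rv, rr)] for rv, rr in zip(vegetation_cost, road_cost)]

def least_cost_path(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

path, total = least_cost_path(cost, (0, 0), (2, 2))
print("corridor cells:", path, "cumulative cost:", total)
```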
The four landscape indices calculated for the proportion of protected landscape area (P), area-weighted mean patch fractal dimension (AWMPFD), fragmentation index (F), and relative aggregation (C) are shown in Table 3 (e.g., the percentage of protected landscape area was P = 0.87). Morphological Spatial Pattern Analysis Areas where land cover/vegetation type, elevation and slope conditions matched the habitat conditions were extracted based on the habitat characteristics of Elliot's Pheasant and the Black Muntjac (Table 4). The suitable (including most suitable, sub-preferred and generally suitable) vegetation types for Elliot's Pheasant are broad-leaved forest, coniferous forest, mixed forest, bamboo forest, shrub forest and farmland, at an elevation of 200 m, slope ≤50°, >700 m distance from roads and >700 m distance from settlements. Suitable vegetation types for the Black Muntjac are broad-leaved forest, mixed coniferous forest, coniferous forest and shrub forest, at altitudes of 600 m, with slopes of ≤45°, ≥50 m distance from roads and ≥200 m distance from settlements. The habitat characteristics are shown in Figure 5. GuidosToolbox (https://forest.jrc.ec.europa.eu/en/activities/lpa/gtb/) (accessed on 21 May 2021) was applied to analyze the morphological features of the habitat map and to reclassify the suitable habitats of Elliot's Pheasant and the Black Muntjac into seven categories: core, edge, perforation, bridge, loop, branch, and islet (Figure 6). In comparison with Elliot's Pheasant, the Black Muntjac had fewer patches and a smaller habitat area due to its habitat requirements, but the habitats of both species were found to be relatively evenly distributed. The Delta of the Probability of Connectivity Analysis The core area from the morphological spatial pattern analysis was extracted, and the top 30 patches were selected in descending order of area to generate the node files and link files required for the connectivity probability calculation, with different dispersal distances considered in the calculation (Elliot's Pheasant = 3 km, Black Muntjac = 5 km). The delta of the probability of connectivity (dPC) for the 30 core patches was calculated using Conefor Sensinode 2.6 software (http://www.conefor.org/) (accessed on 21 May 2021) (Table 5). The top 10 patches with the highest dPC values were selected as key habitats to obtain a map of key habitat importance ranking (Figure 7).
As shown in Table 5, the habitat patches that contributed most to maintaining landscape connectivity were not the patches with the largest area, and the size of each key habitat patch was not proportional to its role in maintaining landscape connectivity in the national park; nevertheless, large patches were of higher importance for both species, so the size of the key habitats remains important for maintaining landscape connectivity in the region. The strictly protected areas of Qianjiangyuan National Park are all located within large areas of habitat. Although there are strong interactions between a few small patches in the south with a high potential for species migration, these patches are small and are not key habitat areas that support the entire national park. Analysis of Potential Habitat Corridors The key habitat patches were transformed into particles, the minimum cost distance and cost back link between each particle were calculated, and then cost path analysis was performed. The cost raster data were crucial for this step, and the cost surface for each resistance indicator was constructed by determining the cost values of five resistance indicators: land cover/vegetation, elevation, slope, distance from road, and distance from residence (Figures 8 and 9). Tables 6 and 7 show the resistance and costs for each resistance indicator set at different levels for Elliot's Pheasant and the Black Muntjac, respectively, after reference to the relevant literature values. The final overlay yielded the respective cost raster data for Elliot's Pheasant and the Black Muntjac (Figure 10). Potential corridors between key habitats were obtained by cost-path analysis of the 10 key habitats (Figure 11). Key Habitat and Important Corridor Network Analysis The interaction matrix between the 10 key habitats was constructed using a gravity model (Tables 8 and 9). According to the matrix evaluation results, corridors with an interaction force greater than 0.1 for Elliot's Pheasant were considered to be important corridors, and the rest were designated as general corridors (Figures 12 and 13).
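The gravity-model ranking can be sketched as below. The functional form assumed here (G_ab = N_a·N_b/D_ab², with N_a = ln(S_a)/P_a and D_ab = L_ab/L_max) is a common choice in corridor studies and is consistent with the symbols defined in the Methods, but the paper's exact weighting may differ; the patch areas, patch resistances and corridor costs are hypothetical.

```python
# Illustrative gravity-model interaction matrix for ranking corridors.
import math

S = {1: 52.0, 2: 30.5, 3: 12.8}                       # key-habitat areas (km^2), hypothetical
P = {1: 1.2, 2: 1.5, 3: 2.0}                          # patch resistance values, hypothetical
L = {(1, 2): 800.0, (1, 3): 2400.0, (2, 3): 1500.0}   # cumulative corridor costs, hypothetical
L_max = max(L.values())

def gravity(a, b):
    N_a = math.log(S[a]) / P[a]          # assumed patch weight: ln(area) / resistance
    N_b = math.log(S[b]) / P[b]
    D_ab = L[(a, b)] / L_max             # standardized corridor resistance
    return N_a * N_b / D_ab ** 2

for (a, b) in sorted(L):
    g = gravity(a, b)
    kind = "important" if g > 0.1 else "general"      # threshold used in the study
    print(f"corridor {a}-{b}: G = {g:.2f} ({kind})")
```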
The key habitats and corridors of the two species were overlaid to construct a network of key habitats and corridors in Qianjiangyuan National Park (Figure 14). Discussion Landscape fragmentation in Qianjiangyuan National Park occurs in the northern and southern edges, where human disturbance is concentrated, and in the central part of the park. The disturbed landscapes in the southern and northern edges are dominated by tea plantations, while the central disturbed landscapes are dominated by drylands. Based on the distribution of key habitats, the fragmentation pattern and the distribution of corridors, the major roads in the central part of the national park are one of the main factors influencing the overall landscape connectivity in the region. Roads have also been highlighted in other relevant studies as major causes of landscape fragmentation and barriers to biological movement, resulting in reduced overall landscape connectivity for many native species [32]. The strictly protected areas are appropriately zoned settings for habitat protection, whereas the recreational areas are almost all within non-key habitats, which is reasonable. However, approximately half of the traditional use areas are located in key habitats, especially near the lower outer edges of the park, and excessive human intervention during recreational activities should be avoided in these areas. The ecological conservation area is the most widely distributed area within Qianjiangyuan National Park, and most potential corridors are located in this area; it is highly protected by strict Chinese laws and regulations, which favors the restoration of potential corridor areas. There are a number of distinct potential corridors in the ecological conservation area that are important for improving the overall connectivity of the national park, and because of their location close to traditional use areas, i.e., human disturbances such as settlements and roads, it is particularly important to consider enhancing conservation management in this area and elevating its conservation status [46,47]. Qianjiangyuan National Park has three separate strictly protected areas located in the northern and southern regions of the park, so connectivity between these strictly protected areas is dependent on successful protection of the ecological conservation area, highlighting the importance of protecting potential corridors in the central part of the park.
Functional zoning is a commonly accepted approach to national park planning and management. Previously, functional zoning schemes for national parks and protected areas have been based on the current status of natural resource characteristics and species distribution, developed by considering the compatibility of land use with landscape features or planning, or established by adjusting functional zones from the perspective of multiple stakeholders [48][49][50][51][52][53]. Habitat connectivity is severely affected by human activities and infrastructure, yet these factors are rarely considered in zoning. Few habitat corridor studies have been conducted to support the zoning design of national parks. However, relevant studies have shown that interconnected habitat areas are critical for biodiversity conservation, especially in the face of climate change [54]. Therefore, it is important to consider habitat corridors in zoning design and as parts of functional zones. Noss and Harris proposed a conceptual model of core areas connected by corridors as a means of long-term conservation of protected area species, and their model can also be applied to zoning within protected areas [55]. Compared with the current functional zoning method of national parks, which only considers the distribution of natural resources, this study provides support for the zoning design of national parks based on landscape connectivity and corridor design, which can improve the conservation efficiency of national parks. Conclusions This study analyzed landscape fragmentation in Qianjiangyuan National Park and identified key habitats and important corridors. We found that: (1) Roads, settlements, and cultivated land have a significant impact on the landscape connectivity of Qianjiangyuan National Park, with roads being one of the main reasons for the fragmentation of the overall landscape. We recommend that several potential corridors in the center of the park that connect key habitats on both sides of the road be protected to help link habitat patches and mitigate the impact of the road; in addition, appropriate vegetation restoration and reforestation of tea plantations and drylands in the study area will increase the landscape connectivity of Qianjiangyuan National Park.
(2) The area of each patch of key habitat is not proportional to its contribution to the landscape connectivity of Qianjiangyuan National Park, but the size of key habitats is important for maintaining landscape connectivity. At the landscape scale, large habitat patches of high importance should be prioritized for protection to promote habitat connectivity and species conservation in the study area. At the same time, groups of small patches with a high potential for species migration should also be protected as a whole to avoid further fragmentation or even area loss. (3) The locations and boundaries of strict protected areas and recreational areas of Qianjiangyuan National Park are relatively reasonable, and the scope of ecological conservation areas and traditional use areas could be adjusted to better match the distribution of key habitats. Special attention must be focused on protecting and managing ecological conservation areas because of the pressure of human disturbance around the area. However, at the same time, there are several important corridors in the area, and the connectivity between strictly protected areas depends on successful protection of ecological conservation areas. Using high-resolution remote sensing image data, the landscape connectivity of different types of connected protected landscapes was analyzed as an integrated mosaic at the scale of an individual national park. In comparison with the more commonly used method of functional zoning of national parks, in which only the distribution of natural resources is considered, this study used detailed habitat characteristics to grade resistance and analyze landscape connectivity and corridors to support the zoning design of national parks. This method can improve the conservation effectiveness of national parks by ensuring that ecological connections are maintained and strengthened where they exist and restored where they are lost. While this study used two detailed habitat selections in the habitat corridor analysis and graded the resistance produced by different species based on landscape categories, it did not consider the region's plant migration characteristics, the connectivity of aquatic organisms in the forest ecosystem, or the migration characteristics of the region's less studied and less data-accessible species, such as insects, amphibians, reptiles, etc. In addition, this study only analyzed and optimized connectivity recommendations within the study area, and further ecosystem integrity and connectivity analyses can be conducted in conjunction with surrounding potential habitat patches to make useful recommendations for range optimization in Qianjiangyuan National Park. The current study assumes that highways, railroads, and national roads have a segmentation effect on habitats for connectivity analysis, but the form of road construction varies in different regions of China, especially in mountainous areas, where bridges and tunnels are more common, and the extent of impact on habitats is not clear, so the calculations in this study may underestimate the connectivity in some areas. It would be useful to analyze the impacts of different road types on the park ecosystem in future research. Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
The Scheduling of Resources in Program Architecture —As programmers, we face difficulties in resource scheduling. Most of them are caused by the inappropriate acquiring and releasing of resources among concurrent transactions. The author analyzes the resource scheduling problem caused by inappropriate usage of synchronization mechanisms, and then provides several methods to resolve this problem from different perspectives. These methods can provide guidelines for computer programmers and ways to solve the resource scheduling problem. Procedure-Based [1] and Object-Oriented [2] Programming are the most classical programming models. They divide the whole system into a series of procedures or objects. The procedures and objects can be called and re-used easily to complete the whole system flow. But such models only focus on static information (the system composition); each component executes according to a pre-defined schedule. This does not adapt well to the dynamic character of system transactions. To resolve this insufficiency, the Event-Based Programming (EBP) [3,4] model was brought out. It is also called Event-Driven Programming. The essence of this model is the event and its handling. For convenience, an event is also called a message. Generally, in any system based on the EBP model, a message is triggered by some request, and message processing is performed by a component. The processing ability of the component has an upper limit, so it can only handle a limited number of messages at the same time. Messages which have not been handled in time are buffered in an explicit or implicit Message-Queue. Besides, the message processing provided by the component has to be performed in an executing context. Normally, such a context is provided by a process (or thread) in today's major operating systems. Thus the messages are processed by processes/threads one by one, in a circular fashion. The group of these processes/threads is called the Service-Buffer. II. THE RESOURCE ALLOCATION PROBLEM The resource allocation problem arises because of conflicts over the acquired resources. In the EBP model, the processes in the Service-Buffer are also a kind of resource; such a resource needs to be acquired first, before handling messages. Consider a set of transactions, each of which has two messages handled in sequence. In the first message, a resource R is required, and in the second message, the resource is released. Furthermore, assume there are three identical transactions T1, T2 and T3 executed concurrently, and two processes in the Service-Buffer: P1 and P2. This is shown in Fig. 1. The execution procedure is described as below: 1) T1, T2 and T3 start, and each sends its Message1 to the Message-Queue; 2) P1 obtains T1-Message1 to process; 3) P2 obtains T2-Message1 to process; 4) When P1 handles T1-Message1, it acquires R successfully; 5) When P2 handles T2-Message1, it cannot acquire R, so it has to wait; 6) When P1 finishes processing T1-Message1, it sends T1-Message2 to trigger the next step of the transaction; 7) Then P1 continues to choose the next message, T3-Message1, from the Message-Queue and processes it; 8) But because R has not been released, P1 cannot acquire R either when it handles T3-Message1, so P1 can do nothing but wait; 9) At this time, all processes in the Service-Buffer are waiting for R, but the message T1-Message2, which releases R, cannot be processed by any process. Accordingly, the system falls into a resource dead lock state. If we treat the processes in the Service-Buffer as another kind of resource, then for the above transactions the acquiring sequence of the resources is shown in Fig. 2. It can be found that there are two opposite resource acquiring sequences: 1) Acquiring R after acquiring a process. 2) Acquiring a process after acquiring R.
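The dead-lock scenario just described can be reproduced with a small sketch (Python here, names purely illustrative): a Service-Buffer of two worker threads and three transactions whose Message1 acquires a shared resource R and whose Message2 releases it. The bounded acquire timeout exists only so the demo terminates; a real blocking acquire would hang forever, and the release message T1-Message2 is indeed left unprocessed.

```python
# Demonstration of the resource dead lock in an EBP-style worker pool (illustrative).
import queue, threading

R = threading.Lock()
message_queue = queue.Queue()

def handle(name, msg):
    """Process one message; return False if this process ends up blocked on R."""
    kind, tx = msg
    if kind == "Message1":
        # A bounded timeout stands in for an indefinite blocking acquire so the
        # demo terminates; a real process would simply wait here forever.
        if not R.acquire(timeout=2.0):
            print(f"{name}: blocked waiting for R while handling {tx}-Message1")
            return False
        print(f"{name}: {tx}-Message1 acquired R")
        message_queue.put(("Message2", tx))      # trigger the next step of the transaction
    else:
        R.release()
        print(f"{name}: {tx}-Message2 released R")
    return True

def worker(name):
    while True:
        try:
            msg = message_queue.get(timeout=1.0)
        except queue.Empty:
            return
        if not handle(name, msg):
            return                               # this process stays blocked (dead lock)

for tx in ("T1", "T2", "T3"):                    # three concurrent transactions
    message_queue.put(("Message1", tx))

service_buffer = [threading.Thread(target=worker, args=(name,)) for name in ("P1", "P2")]
for t in service_buffer:
    t.start()
for t in service_buffer:
    t.join()

print("never processed:", list(message_queue.queue))   # T1-Message2 remains stuck
```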
Such a resource contention problem can be generalized to systems of larger scale: if there are N processes in the Service-Buffer, the resource-contention problem can be triggered once the number of concurrent transactions acquiring some resource reaches N+1. Therefore, the processes which handle the messages, as an implicit resource, conflict with the explicit resource R, and this finally causes a resource dead lock. There are two essential points leading to this phenomenon. Firstly, the operations of acquiring and releasing the resource are performed in the processing of two different messages. In the period between the two messages, all processes in the Service-Buffer may be blocked on acquiring resources, so the message for releasing the resources cannot be scheduled by an available process. Secondly, a process cannot schedule any other message once it is blocked on acquiring resources. A RAG (Resource Allocation Graph) is often used to describe how resources are acquired by concurrent transactions. A RAG is composed of nodes and edges. Nodes include resources and requesters. Edges represent the relationship between resources and requesters. An edge from a resource to a requester means the resource is held by the requester; an edge from a requester to a resource means the requester is waiting for the resource. So if some edges form a loop in the graph, then there is a dead lock in the corresponding resource allocation scenario. In the above procedure, the RAG is represented by formula (1) (SB means the processes in the Service-Buffer): apparently this RAG forms a loop, which implies dead-lock. All in all, when designing and implementing systems based on Event-Based Programming, one must be very cautious when acquiring exclusive resources; the conflict with service processes must be considered carefully. To resolve the resource dead lock problem, one of the four necessary conditions for dead lock must be broken. The methods to handle resource dead lock can be categorized into three types [5]: 1. Prevent dead lock; 2. Avoid dead lock; 3. Detect and relieve dead lock. The third category relieves the dead lock by terminating some participating processes when the dead lock is detected. The transaction logic then needs special handling to tolerate being suddenly terminated during execution; this brings huge complexity into the program design, so it cannot be commonly used in various systems. So our proposed solutions mainly focus on the first and second categories of methods. Three resolutions are proposed, as follows: A. Constrain Release Point of Resource This method requires that, if a resource is acquired when handling a message, the resource must be released in the same message-processing step. Apparently, this method makes sure that a process is not acquired after acquiring the resource, so the dead lock cannot happen. This is a method of preventing dead lock. With this method, the scenario in which the service processes are all blocked on the resource cannot occur, since the resource must be released after being acquired. But the restriction of acquiring and releasing a resource within one message is too strict, because a complicated transaction may have a lot of processing work to do after acquiring a resource, and this work may not be suitable for implementation in one message. For example, after acquiring a resource, a transaction may want to do a series of asynchronous I/O operations, and the resource cannot be released unless the I/O is finished.
To meet this requirement, the message processing must wait for the I/O's completion, so the service process keeps being occupied and cannot serve other messages. This affects the system's concurrency and throughput severely. This method actually constrains the asynchronous nature of EBP-based systems. Therefore, it can only be applied to simple systems which do not have high throughput requirements. B. Bind Message Handler In this method, when a process handles a message and a resource is acquired, the process is bound to the transaction which sent the message. All subsequent messages sent in this transaction must then be handled by this process, until the message which releases the resource has been processed. While a process is bound to some transaction, it cannot handle messages which belong to other transactions and need to acquire resources. In this way, this method also makes sure that a process is never acquired after the resource has been acquired. Because the process has been bound to the transaction, the process is always available after acquiring the resource. This is shown in Fig. 3. 1) T1, T2 and T3 start, and each sends its Message1 to the Message-Queue; 2) P1 obtains T1-Message1 to process; 3) P2 obtains T2-Message1 to process; 4) When P1 handles T1-Message1, it acquires R successfully; 5) Once R is acquired, P1 is bound to T1, so P1 can only handle T1's messages; 6) When P2 handles T2-Message1, it cannot acquire R, so it must wait; 7) When P1 finishes processing T1-Message1, it sends T1-Message2 to trigger the next step of the transaction; 8) P1 continues to choose the next message from the Message-Queue; since P1 is bound to T1, it cannot choose T3-Message1 and has to choose T1-Message2; 9) When P1 handles T1-Message2, R is released; 10) Then P2 can be woken up and acquire R successfully; no dead lock can happen. When P2 is blocked waiting for R, the RAG at that moment is described by formula (2). Apparently, there is no loop in this RAG, so the deadlock is impossible. The basic idea of this method is to reserve the process which holds the resource, so that the resource can be released by this process. This looks similar to method A, but the difference is that this method places no constraint on how the resource is acquired and released. The problem is resolved at the system-architecture layer; the actual transaction does not see any special processing (i.e., the logic of how the message is handled needs no special treatment). Therefore, this method belongs to the category of avoiding dead lock. But in the period when the process is bound, it cannot handle other transactions' messages which need to acquire resources, so this method also constrains the concurrency and throughput of the system. Compared with method A, however, even after binding the process can still handle those messages which do not need to acquire resources, so its concurrency and throughput are better than those of method A.
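A minimal sketch of method B follows (Python, illustrative names and simplified bookkeeping): a worker that acquires R is bound to the owning transaction; resource-acquiring messages of other transactions, and messages of transactions bound to a different worker, are deferred back to the queue, so the release message is always handled by an available process.

```python
# Sketch of "Bind Message Handler": the binding logic below is an assumption-laden
# simplification, not the paper's reference implementation.
import queue, threading

R = threading.Lock()
state_lock = threading.Lock()
binding = {}                      # transaction -> name of the worker bound to it
message_queue = queue.Queue()

def worker(name):
    bound_tx = None
    while True:
        try:
            kind, tx = message_queue.get(timeout=1.0)
        except queue.Empty:
            return
        with state_lock:
            owner = binding.get(tx)
        if (owner is not None and owner != name) or \
           (bound_tx is not None and tx != bound_tx and kind == "Message1"):
            message_queue.put((kind, tx))   # defer: not this worker's message right now
            continue
        if kind == "Message1":
            R.acquire()                     # may wait; the holder's worker stays available
            bound_tx = tx
            with state_lock:
                binding[tx] = name          # bind this worker to the transaction
            print(f"{name}: {tx}-Message1 acquired R (bound to {tx})")
            message_queue.put(("Message2", tx))
        else:
            R.release()
            print(f"{name}: {tx}-Message2 released R (unbound)")
            with state_lock:
                binding.pop(tx, None)
            bound_tx = None

for tx in ("T1", "T2", "T3"):
    message_queue.put(("Message1", tx))

service_buffer = [threading.Thread(target=worker, args=(f"P{i}",)) for i in (1, 2)]
for t in service_buffer:
    t.start()
for t in service_buffer:
    t.join()
```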
C. Multiple Level Message-Queues The two methods described above focus on ensuring that the messages which acquire and release a resource are handled by the same process. But this requirement is stricter than necessary. Actually it is only necessary that the message which releases the resource can be handled by some process; it is not required that this process be the same one that acquired the resource. In order to achieve this, this method defines a dedicated Message-Queue and Service-Buffer for the messages which acquire resources. Still considering the example in Section II: because Message1 needs to acquire R, Message-Queue2 and Service-Buffer2 are defined specifically for handling the messages which need to acquire R. This is shown in Fig. 4. 1) Assume there is only one process, P3, in Service-Buffer2. 2) When transactions T1, T2 and T3 start, they all send Message1 to Message-Queue2; 3) P3 processes T1-Message1 first; it can acquire R successfully and sends T1-Message2; 4) Then P3 processes T2-Message1, but since R has been acquired by T1-Message1, P3 has to wait; 5) Because T1-Message2 does not need to acquire R, it is not handled by Service-Buffer2 but by Service-Buffer1 instead; 6) So T1-Message2 is sent to Message-Queue1, and a process in Service-Buffer1 can handle it; 7) R can then be released properly; 8) P3 is then woken up and R can be acquired properly. When P3 is blocked waiting for R, the RAG at that moment is described by formula (3). There is no loop in this RAG either, so the dead lock cannot happen. In this way, because all messages which acquire R are handled by Service-Buffer2, the processes in Service-Buffer1 are never blocked by R; therefore the messages which release R can always be handled properly. Ideally, every single resource would be assigned a corresponding Message-Queue and Service-Buffer. A large-scale system may use many resources, however, and it is not reasonable to specify Message-Queues and Service-Buffers for every resource. As an optimization, they can be defined according to categories of resources. For example, some transactions acquire the resource R in Message1 and release R in Message2, while some other transactions acquire the resource S in Message1 and release S in Message2. If there is no relationship between R and S, i.e., no transaction needs to acquire R and S simultaneously, then R and S can be placed in the same category and handled with the same Message-Queue and Service-Buffer. Nevertheless, if some transactions acquire S after acquiring R, then R and S should be placed in different categories; they cannot share the same Message-Queue and Service-Buffer, otherwise the resource dead lock can be triggered. Resources are categorized depending on how they are acquired. First, the resource level is introduced by the following definition: 1) Within a transaction, if no other resource has been acquired before a resource is released, its level is 1; 2) Within a transaction, if N resources have already been acquired before a resource is released, its level is N+1; 3) For one specific resource, if different transactions give different levels, the maximum one is chosen as the resource's level; 4) For the resource in point 3, if its level is changed from X to Y in some transaction, then the levels of the resources which have a larger level in this transaction are increased by Y−X. Eventually, each level corresponds to one category: all resources with the same level are placed in the same category, and each category is assigned a unique Message-Queue and Service-Buffer.
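A minimal sketch of method C (Python, illustrative names): the messages that acquire R go through a dedicated Message-Queue2 served by a single process P3, while the release messages are routed to Service-Buffer1, so a release can always be scheduled and the dead lock cannot occur.

```python
# Sketch of "Multiple Level Message-Queues" with one resource category (illustrative).
import queue, threading

R = threading.Lock()
queue1 = queue.Queue()   # ordinary messages, including Message2 (which releases R)
queue2 = queue.Queue()   # messages that need to acquire R

def buffer1_worker():
    while True:
        try:
            _kind, tx = queue1.get(timeout=1.0)
        except queue.Empty:
            return
        R.release()                          # Message2: release R on behalf of tx
        print(f"Service-Buffer1: {tx}-Message2 released R")

def buffer2_worker():                        # the single process P3
    while True:
        try:
            _kind, tx = queue2.get(timeout=1.0)
        except queue.Empty:
            return
        R.acquire()                          # may wait; Service-Buffer1 stays free to release
        print(f"Service-Buffer2: {tx}-Message1 acquired R")
        queue1.put(("Message2", tx))         # the next step goes to the ordinary queue

for tx in ("T1", "T2", "T3"):
    queue2.put(("Message1", tx))

threads = [threading.Thread(target=buffer1_worker) for _ in range(2)]
threads.append(threading.Thread(target=buffer2_worker))
for t in threads:
    t.start()
for t in threads:
    t.join()
```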
It is worth mentioning that in recent years the operating-system research community has promoted an operating system based on the Servant/Exe-Flow Model [6]. Its synchronization mechanism is similar to the method described above. In this operating system, the saving of a thread's context is performed by an object named Mini-Port. Because this operating system natively supports a similar synchronization mechanism, an EBP architecture implemented on top of it will not suffer from the dead-lock problem. IV. CONCLUSIONS This paper discusses the resource contention problem that arises when using the Event-Based Programming model, and proposes three detailed solutions to this problem. The first method is very simple, but it imposes strict limitations on how resources are used, so it cannot adapt to asynchronous scenarios; its performance and applicability are poor. The other two methods impose no such restriction, so they can be applied to any scenario. The second method binds some processes; this decreases concurrency, so its performance is not as good as that of the others. The third method needs to categorize the resources, and more memory is required for the extra Message-Queues and Service-Buffers. The EBP model has the benefit of a loosely coupled architecture, which makes it easy to use in complex and large systems. But the more complex the system, the harder it is to perceive the dead-lock issue described in this paper. It is even possible that the dead-lock issue is caused by the interaction among multiple system components. So if the dead-lock issue is considered in the system design phase and eliminated by using the solutions described in this paper, the stability and robustness of the system can be greatly improved. Depending on the concrete application scenario, a different solution described above can be chosen.
Geometrodynamics in a spherically symmetric, static crossflow of null dust The spherically symmetric, static spacetime generated by a crossflow of non-interacting radiation streams, treated in the geometrical optics limit (null dust), is equivalent to an anisotropic fluid forming a radiation atmosphere of a star. This reference fluid provides a preferred / internal time, which is employed as a canonical coordinate. Among the advantages we encounter a new Hamiltonian constraint, which becomes linear in the momentum conjugate to the internal time (therefore yielding a functional Schrödinger equation after quantization), and a strongly commuting algebra of the new constraints. I. INTRODUCTION Besides the covariant description of general relativity, a Hamiltonian formalism of gravity based on the existence of foliations for globally hyperbolic space-times was developed by Arnowitt, Deser and Misner (hereafter ADM) [1]. In this approach the canonical coordinates are the components of the induced metric on the 3-leaves of the foliation, while the canonical momenta are related in a simple way to the extrinsic curvature of these spatial hypersurfaces. Gravitational evolution is therefore referred to as geometrodynamics. The freedom to perform coordinate transformations on the leaves of the foliation leads to the diffeomorphism (or momentum) constraints. The true dynamics is encompassed in the so-called Hamiltonian constraint. These four constraints (per point) form a Dirac algebra [2], which is not a true algebra in the mathematical sense, as its closure is obstructed by the appearance of the induced metric in the Poisson brackets of the Hamiltonian constraints. The problem of time in canonical gravity was reviewed by Isham in Ref. [3]. Approaches for introducing the concept of time are of three types: time is either identified before or after quantization, or in certain approaches time plays no fundamental role at all. In what follows, we are interested in identifying time at the classical level. Time is not preselected by any Hamiltonian description of gravity; there are infinitely many ways to choose the time (many-fingered time formalism) [4]. Despite this ambiguity, in certain cases it is possible to select a preferred time function, either by imposing coordinate conditions [5] or by filling space-time with an adequate reference fluid [6], [7] and letting gravity evolve in the (proper) time of the chosen reference fluid. In certain cases new canonical variables can be introduced, providing new constraints for gravity [8], [9], [10]. Among the advantages we count that the Dirac algebra transforms into a true algebra and that the quantization of the Hamiltonian constraint, which usually leads to the Wheeler–DeWitt equation (which has no linear space of solutions), rather gives a Schrödinger equation. This program has been particularly successful for incoherent dust, as presented by Brown and Kuchař in Ref. [6]. A similar formalism [11] was applied by Bičák and Kuchař to null dust, the geometrical optics approximation to non-gravitational radiation. Null dust, however, provides no natural time function, basically because, unlike the congruence of the incoherent dust particles, null world lines have no natural parametrization. While for ordinary dust the Hamiltonian and supermomentum constraints depend on four pairs of canonical variables associated with the proper time and the comoving coordinate frame of the dust, the constraint equations for null dust contain only three pairs of comoving coordinates.
The quantum theory of gravitational collapse can be modelled in the most simple spherically symmetric case by a collapsing thin shell of null dust [12], [13]. A second null dust shell can be introduced in the model in order to test the quantum behaviour of the geometry induced by the first shell. Motivated by certain problems in the above scenario, the canonical formalism in the presence of a null dust has been recently extended to the case of two cross-flowing, non-interacting null dust streams in a spherically symmetric space-time by Bičák and Hájíček [14]. This formalism combines ingredients of the canonical formalisms developed for null dust [11] with elements of the geometrodynamics of the Schwarzschild space-time [15], developed by Kuchař. The lack of a time-standard for a single null dust however deprived the canonical formalism of the cross-streaming null dust from a time-standard as well. This is because the starting point of the canonical description [14] is simply the sum of the spherically reduced Einstein-Hilbert action for gravity and two pieces of the null dust action, also reduced by spherically symmetry. The null dust variables are therefore doubled, without any of them becoming an internal time. The basic assumption of Ref. [14] is that the cross-flowing null dust streams interact only gravitationally, therefore the energy-momentum tensors of the components are conserved separately. The analysis of the equations of motion provides two pairs of integrals of motion (per point), one pair for each null dust component. Unfortunately the Hamiltonian density could not be explicitly expressed in terms of these quantities, except in the case when one of the null dust components is switched off. In this case the action can be transformed such that the matter part of the Liouville form contains the integrals of motion associated to the null dust component in question. The formalism derived in Ref. [14] is valid for certain known spherically symmetric space-times, for example the Vaidya space-time, describing the one-component null dust [16], and the static space-time found in Ref. [17] by one of the present authors. The latter space-time represents the geometry in the presence of a static crossflow of non-interacting null dust streams. Although it is asymptotically non-flat and it has a central naked singularity, it can be conveniently interpreted as the radiation atmosphere of a star. A second interpretation presented in Ref. [17] is of a 2-dimensional dilatonic model, in the presence of a pair of 2-dimensional scalar fields. While the dilaton is the square of the radial coordinate, the scalar fields are related to the energy densities of the null dust streams. The third interpretation, based on previous work of Letelier [18], is of an anisotropic fluid, with radial pressure equal to its energy density and no tangential pressures. The static solution [17] has a homogenous counterpart [19], which can be interpreted as a Kantowski-Sachs type cosmology. These two spacetimes obey a unicity theorem, as they are the only spherically symmetric solutions of the Einstein equation in the presence of a cross-flow of null dust streams with an additional (fourth) Killing vector [19]. Interestingly, for null dust streams with negative energy density, wormhole space-times emerge [20], [21]. The anisotropic fluid interpretation of the static spacetime the cross-flow of null dust streams with positive energy densities is particularly important for our purposes. 
The physical model of the anisotropic fluid has a preferred time, which is the time elapsed in the rest frame of the fluid. This suggests that in contrast with the single null dust model, for the two component null dust an internal time formalism can be constructed. In this paper we will explicitly construct the matter action for the static configuration of non-interacting null dust streams in terms of suitable variables, containing the internal time singled out uniquely by the cross-flow of null dust. In Sec. II we summarize the basic ingredients necessary for the purposes of the present work. We present: (A) the canonical formalism of ordinary incoherent dust [6], with special emphasize on how the proper choice of the internal time allows us to introduce a set of new constraints for gravity, such that the new super-Hamiltonian constraint becomes linear in the canonical momentum conjugate to the internal time; (B) the geometrodynamics of the spherically symmetric static vacuum [15], with special emphasize on the introduction of geometrically motivated canonical variables (including the Schwarzschild mass) in the gravitational sector; (C) the spherically symmetric, static space-time with crossflowing null dust streams [17] and (D) the anisotropic fluid interpretation of the cross-flow of non-interacting null dust streams [18], which provides the internal time for the two component null dust. In Sec. III we introduce an action functional of three scalar fields characterizing the static cross-flow of null dust minimally coupled to gravity. We show that variation with respect to the metric together with the equations of motion reproduces the energy-momentum tensor of two non-interacting radiation streams. Two pairs of conservation equations for the rest mass currents and the momentum currents also emerge. In Sec. IV we derive the contribution of the two null dust streams to the super-Hamiltonian and diffeomorphism constraints. Then we fulfill the program of replacing the total super-Hamiltonian and diffeomorphism constraints by an equivalent set, in which both momenta conjugate to the temporal and radial canonical variables appear linearly. We also prove that the new constraints form an Abelian algebra. Sec V. contains a discussion of the falloff conditions the gravitational variables, the lapse and the shift should obey. In Sec. VI we compare our findings with the results presented in Ref. [14] and we show that similar techniques can be employed in the more generic context of Ref. [14] as well. We also underline the connections between our canonical variables and those employed in Ref. [14], specified for the static case. Finally in Sec. VII we summarize our results. II. PRELIMINARIES In this section we present a more technical summary of the results of Refs. [6], [15], [17] and [18] needed later on in the paper. A. Geometrodynamics of space-times with ordinary dust The space-time action of ordinary dust was constructed by Brown and Kuchař [6] from eight scalar fields Z k , W k , T, M (k = 1, 2, 3) minimally coupled to the space-time metric (4) g ab : The four-velocity U a is expressed as the Pfaff form of seven scalar fields. The equations of motion are According to Eq. (4) the three vector fields Z k are constant along the flow lines of U a (they can be interpreted as comoving coordinates for the dust.) Eq. (3) shows that the four-velocity U a is a unit time-like vector field. Eq. (5) allows us to interpret M as the rest mass density of the dust and it represents mass conservation. Eq. 
(6) can be interpreted as the momentum conservation law. From Eqs. (2), (3) and (4) it is straightforward to deduce that T is the proper time along the dust world lines, measured between a fiducial hypersurface T = 0 and an arbitrary hypersurface with constant T . The dust energy-momentum tensor T ab can be found from the variation of the action (1) with respect to (4) g ab . From the conservation of T ab and M it follows that the dust particles evolve along geodesics. The Legendre transformed action is where g ab denotes the induced metric on the leaves, N and N a are the lapse function and shift vectors, respectively, and the momenta P and P k are conjugate to T and Z k . (The original variables W k were expressed in terms of P and P k .) The constraints are The dependence of the Hamiltonian constraint on the variable M is spurious. This can be shown as follows. By varying the action with respect to M we obtain an algebraic expression from which M can be given in terms of the other variables. Substituting this into the Hamiltonian constraint gives so the mass multiplier M is eliminated from the action. By employing that the total (gravitational + dust) constraints have to vanish, e.g. on the constraint hypersurface, and solving the constraints (10), (9) with respect to the momenta, we can replace the old constraints by an equivalent set. The new super-Hamiltonian constraint can be cast into the form where p ab are the momenta conjugate to g ab . Similarly the new supermomentum constraint is: The quantization of the linearized constraint (11) gives a Schrödinger equation [6]. B. Geometrodynamics of spherically symmetric static vacuum After the preliminary studies on the canonical formalism of the spherically symmetric space-times [23], a comprehensive analysis of Hamiltonian dynamics for Schwarzschild black holes was given by Kuchař [15]. In this section we summarize those results of his work which are relevant for our purposes. The space-time was foliated by spherically symmetric leaves Σ t which were labelled by the parameter time t. The induced metric on these 3-leaves can be characterized by two metric functions Λ and R, where r is a space-like coordinate and dΩ 2 is the line element on the unit sphere. Under coordinate transformations R behaves as a scalar and Λ as a scalar density. In the ADM decomposition of the spherically symmetric geometry, the shift vector has a non-vanishing component only in the radial direction, denoted with N r , which together with the lapse function N depend solely on the variables t and r. The metric functions R and Λ are chosen as canonical coordinates and their momenta, as derived in [15], are The vacuum action for the spherically symmetric geometry can be written as with super-Hamiltonian and supermomentum constraints There exists a canonical transformation, through which the only dynamical characteristics of the Schwarzschild space-time, the Schwarzschild mass M turns into a canonical variable. The new set of variables is (M, R; P M , P R ), where M (t, r) is expressed in terms of the old variables (Λ, R; P Λ , P R ) through the formula of the Schwarzschild mass derived by Kuchař: The remaining part of the canonical transformation is: The second advantage of the new set of canonical variables is that the momentum P M is the gradient T ′ of the Schwarzschild time (cf. Eq. (80) in Ref. [15]). 
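For orientation, two of the key formulas referred to in these preliminary subsections can be recalled explicitly, up to sign and factor conventions, from Refs. [6] and [15]; they are quoted here as a hedged reminder rather than as a reproduction of the original displays. The Brown–Kuchař dust action with the Pfaff form of the four-velocity reads

S^{D}[Z^k, W_k, T, M; {}^{(4)}g_{ab}] \;=\; -\tfrac{1}{2}\int d^4x\, \sqrt{-{}^{(4)}g}\; M\left( {}^{(4)}g^{ab}\, U_a U_b + 1 \right), \qquad U_a \;=\; -T_{,a} + W_k\, Z^k_{,a},

while the spherically symmetric induced line element and the Schwarzschild mass function of Kuchař are

d\sigma^2 \;=\; \Lambda^2(t,r)\, dr^2 + R^2(t,r)\, d\Omega^2, \qquad M \;=\; \frac{P_\Lambda^2}{2R} + \frac{R}{2} - \frac{R\, R'^2}{2\Lambda^2}.

Here M(t,r) reduces on a classical solution to the Schwarzschild mass, consistent with the statement above that its conjugate momentum P_M is (up to sign) the gradient of the Schwarzschild time.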
The gravitational constraints (18) and (19), written in terms of the new canonical variables, become where Λ rather than being a canonical variable, is only a shorthand notation for the following expression of the new canonical variables We will also introduce the canonical variable M in the description of the gravitational sector of the crossstreaming null-dust space-time. We mention here the related result of Varadarajan [22], who has derived a transformation from the usually employed canonical variables (induced metric + extrinsic curvature), to a set of new canonical variables, which have the interpretation of Kruskal coordinates. This transformation is regular on the whole space-time, including the horizon. The constraints simplifies in such an extent, that those are equivalent to the vanishing of the canonical momenta. C. The spherically symmetric, static space-time with crossflowing null dust streams The static superposition of two non-interacting null dust streams propagating along the null congruence u a and v a is characterized by the energy-momentum tensor with The same time-independent energy density ρ was chosen for both null dust components in order to assure no net energy flow (static configuration). The spherically symmetric, static space-time containing such a cross-flow of two non-interacting null dust streams has been presented in Ref. [17]: where Z and L are the time and radial coordinates adapted to the symmetry and R is the following expression of the radial coordinate: Here a is a positive constant and B is a parameter. The four-velocity null vectors of the null dust streams are then The energy density becomes The superposition of the in-and outgoing null dust streams can be interpreted as an anisotropic fluid. This indicates that there may be a possibility to use the same procedure as in the case of the incoherent dust to obtain an internal time for the canonical dynamics of cross-flowing (but otherwise non-interacting) null dust streams, minimally coupled to gravity. Letelier has shown that the energy-momentum tensor of two null dust streams is equivalent with the energymomentum tensor of a specific anisotropic fluid [18]. As consequence, the source of the static, spherically symmetric space-times (27) can be interpreted as an anisotropic fluid with radial pressure equaling its energy density and no tangential pressures: Here χ α is the (normalized) radial direction and U α is the unit four-velocity of the fluid particles, obeying −U a U a = χ a χ a = 1 , U a χ a = 0 . They are related to the null vectors by By employing Eqs. (29), we can also express the vector fields U a and χ a in the coordinate basis defined by Z and L: In the anisotropic fluid picture ρ represents both the energy density and the pressure, while no tangential pressure components to the spheres of constant L are present. The fluid is isotropic only about a single point, the origin. III. ACTION PRINCIPLE FOR THE STATIC, SPHERICALLY SYMMETRIC CROSS-FLOW OF TWO NON-INTERACTING RADIATION STREAMS A generic spherically symmetric space-time, in coordinates (T , R, θ, ϕ) is characterized by two metric functions h and f as: Let us introduce two scalar fields Z (T ) and L (R), and the following advanced-type and retarded-type combinations of the 1-forms dZ and dL, which span the (T , R) sector: with W given by Eq. (30). Thus in this co-basis the 1forms u a and v a do not have time-dependent components. 
They are entirely expressed in terms of the two scalar fields Z and L (as the coefficient functions W and R can be given in terms of L). Note that the expressions (36) are identical with (and in fact motivated by) Eqs. (29), but this time the scalars Z and L are not related to any exact solution, and in consequence the 1-forms u a and v a are not necessarily null for the generic spherically symmetric metric (35). They do have instead the same length: We also note that Let us define a dynamical system by the action: where ρ (L) is a third scalar field. We do not know at this stage, what is the dynamical system described by the action (39). Variation of the action with respect to the metric gives the energy-momentum tensor: while the variation with respect to the coordinates Z, L, and the parameter ρ give the Euler-Lagrange equations: In Eq. (42) we have employed the relation dW/dL = W (2RL − dR/dL) /2R. Eq. (43) together with Eq. (37) implies that both u a and v a are null vectors. Then the energy-momentum tensor (40) reduces to characterizing a non-interacting cross-flow (in the null directions u a and v a ) of null dust streams with energy density ρ. One can define rest mass currents as in [6] J a := − (4) gρu a , and momentum currents as In term of these Eq. (41) is a continuity equation for the net flow of radiation: As the vectors u a and v a are null, Eq. (42) simplifies to implying that both momentum currents are conserved individually: as expected for non-interacting radiation fields. We have shown that the action defined by Eqs. (36) and (39) describes a cross-flow of non-interacting null dust streams in a static configuration with energy density ρ. As the vectors u a and v a are null, we can partially normalize them as u a v a = −1. Also, from Eq. (37) we get dZ/dT = (f h) 1/2 RdL/dR. Then Eq. (38) allows to express both metric functions as By inserting these into the generic spherically symmetric metric (35), we obtain the metric form (27), however without the additional information (28) and (31). In order to recover these, we need the Einstein equations, derived from the sum of the Einstein-Hilbert action and the cross-flowing null dust action (39). These are identical to those presented in Ref. [17], thus lead to the solution summarized in Section II.C. At the end of this section we note that the equivalent action in the anisotropic fluid picture is with U a and χ a given by Eq. (34) . Due to the equivalence of the two interpretations, all equations are the same, irrespective of they being derived from the crossstreaming null dust action (39) or from the anisotropic fluid action (52). IV. CANONICAL FORMALISM In this section we present the calculations yielding linearized constraints for the two-component null dust, similar to Eqs. (11) and (13) derived for ordinary dust. A. 3+1 decomposition of the two null dust Lagrangian The ADM decomposition of any spherically symmetric metric yields [15]: where Λ and R are the metric functions from the induced line-element (15) and (t, r) are generic coordinates orthogonal to the (θ, ϕ) sector . The variables ρ, Z, L characterizing the radiation cross-flow thus depend on both coordinates: ρ = ρ(t, r), Z = Z(t, r) and L = L(t, r). From Eq. (53) (4) g . 
The (3+1)-split form of the Lagrangian density taken from the action (39) is Eq. (54). The canonical momenta $P_Z$ and $P_L$ conjugate to the radiation variables Z and L follow from it by variation; inverted with respect to the velocities, these relations express $\dot Z$ and $\dot L$ in terms of the momenta. By inserting the velocities into only one factor of the velocity-squared terms of (54), we obtain the Lagrangian in the "already Hamiltonian" form, in which the Hamiltonian and momentum constraints associated with the cross-flow of null dust streams are found to be Eqs. (58) and (59). Remarkably, the momentum constraint has the same form as the dust constraint (9).

B. Introduction of new dust constraints

If we vary the dust action (54) with respect to the comoving density ρ of the dust, we obtain an equation from which ρ can be expressed; by substituting this result into the Hamiltonian constraint (58), we get Eq. (62). Since (59) implies that the last two terms below the root appear in $(H^{2ND}_r)^2$, we eliminate them from (62). The final form of the Hamiltonian constraint is Eq. (63). We note that in the spherically symmetric case the momentum constraint $H^{2ND}_r$ can also be brought to a square-root form: from Eqs. (15) and (63) we obtain Eq. (64). Eq. (63) is of similar form to the Hamiltonian constraint of the incoherent dust derived in [6]. There is one difference, namely that the Hamiltonian constraint (11) of the incoherent dust depends on the momenta conjugate to the 3-dimensional coordinate frame variables only through the momentum constraint, while in (63) $P_L$ appears both explicitly and through $H^{2ND}_r$. In spite of this, we can still follow the algorithm of [6], as will become transparent in the following. The ADM decomposition of the total action leads to the super-Hamiltonian and super-momentum constraints (65) and (66), where the vacuum constraints $H^G_\perp$ and $H^G_r$ are expressed in terms of the canonical variables $(M, R; P_M, P_R)$ in Eqs. (22) and (23). Then $P_Z$ can be separated from the other variables in Eq. (67), which yields the new constraint (68), and from (68) we obtain Eq. (69); here we used the notation (12). By using (59) and (69), the constraint (66) can be rewritten, which gives Eq. (71). We will denote the constraint (71) by $H_{\uparrow L}$. Thus we have obtained a new, more convenient set of constraints: the super-Hamiltonian constraint $H_{\uparrow Z}$ and the super-momentum constraint $H_{\uparrow L}$. Both linearized constraints contain exactly one null dust momentum. The Dirac algebra of the old constraints turns into an Abelian algebra of the new constraints, $\{H_{\uparrow J}(r), H_{\uparrow K}(r')\} = 0$, where $H_{\uparrow J} = (H_{\uparrow Z}, H_{\uparrow L})$. This feature is similar to the case of the one-component ordinary dust [6], and in fact the proof proceeds exactly in the same way. Following [6], we first note that the Poisson brackets of the new constraints must vanish at least weakly (on the constraint hypersurface). However, due to the linearity of the constraints (68), (72) in the momenta $P_Z$, $P_L$, the brackets do not depend on either $P_Z$ or $P_L$. But then the constraints (68), (72) cannot contribute to turning the Poisson brackets to zero, and therefore the brackets have to vanish strongly.

V. FALLOFF OF THE CANONICAL VARIABLES

A. Falloff conditions for the eternal Schwarzschild black hole

The proof of Kuchař in Ref. [15] that the mapping $(\Lambda, R, P_\Lambda, P_R) \to (M, R, P_M, P_R)$ is a canonical transformation in the gravitational sector relies on the check that the difference of the Liouville forms is an exact form. This translates into showing that the boundary expression (74) vanishes on the boundaries of the domain of integration.
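For orientation, the mass function that serves as the new canonical coordinate in this mapping is, in Kuchař's construction [15], built from the original variables; quoting the standard result (the equation numbering in the present source differs, and a prime denotes ∂/∂r):

$$ M = \frac{P_\Lambda^2}{2R} + \frac{R}{2} - \frac{R\,(R')^2}{2\Lambda^2} \,. $$

On the Schwarzschild solution this expression evaluates to the Schwarzschild mass on every leaf of the foliation, which is what makes the pair $(M, P_M)$ a natural replacement for $(\Lambda, P_\Lambda)$.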
For the eternal Schwarzschild black hole discussed there, the desired behaviour was assured at r → ±∞ by imposing suitable falloff conditions on the canonical variables, based on the treatment of Beig and O'Murchadha [24]. These prescribe the proper falloff of the variables Λ, R, $P_\Lambda$, $P_R$, of the Killing time T, of the lapse function N, and of the shift $N^r$.

B. Falloff conditions for r → 0 in flat space-time

Hájíček and Kiefer have studied the evolution of a spherically symmetric null dust shell in the space-time generated by another spherically symmetric null dust shell [12]. The (innermost) region surrounded by the incoming null shell is Minkowski. In order to avoid the occurrence of a conical singularity at r = 0, following the method developed for cylindrical gravitational waves [25], they have imposed boundary conditions on both the coordinates and their spatial derivatives at the regular centre. Based on these, Bičák and Hájíček [14] have shown that the boundary term (74) also vanishes at r → 0. Louko, Whiting and Friedman have discussed the Hamiltonian dynamics of a thin (distributional) null-dust shell under both sets of boundary conditions: first at the two spatial infinities r → ±∞ of the Kruskal-like manifold, and second at r → 0 and r → ∞ [26]. In the latter case, the falloff conditions at r → 0 for the canonical variables, lapse and shift in the flat geometry within the null shell are given by their system of Eqs. (7.1),

$$ \Lambda = \Lambda_0 + O(r^2)\,,\quad R = R_1\,r + O(r^3)\,,\quad P_\Lambda = P_{\Lambda 2}\,r^2 + O(r^4)\,,\quad P_R = P_{R1}\,r + O(r^3)\,,\quad N = N_0 + O(r^2)\,,\quad N^r = N^r_1\,r + O(r^3)\,, \qquad (75) $$

where $\Lambda_0$, $R_1$, $P_{\Lambda 2}$, $P_{R1}$, $N_0$ and $N^r_1$ are functions of time. With these falloffs, the expression B(0) vanishes, in accordance with the conclusion of Ref. [14]. Given the falloff behaviours (75), all terms in the vacuum gravitational super-Hamiltonian constraint (18) are $O(r^2)$, with two exceptions. The leading term vanishes for the choice (77), $R_1 = \Lambda_0$. For this choice, the falloff conditions obey the vacuum gravitational super-Hamiltonian constraint. The gravitational super-momentum constraint (19), in turn, decays properly as well. Thus the falloff conditions are consistent with the vacuum constraints. They are also preserved in time, as noted in [26], but, as we will show, only for $P_{\Lambda 2} = 0$. This can be seen from the following argument. The time evolution of the super-Hamiltonian and super-momentum constraints is given by linear combinations of the constraints and of their covariant derivatives on the leaves. Here $K = \Lambda^{-1}R^{-2}(R P_R - \Lambda P_\Lambda) + 2R^{-2}P_\Lambda$ is the trace of the extrinsic curvature of the leaves $\Sigma_t$, given in Ref. [15]. From the falloff conditions (75) we find that the terms proportional to the gravitational constraints, whether they contain K or not, will decay at least as $O(r)$ and $O(r^2)$, respectively (provided $R_1 = \Lambda_0$ was chosen). Problems could arise only from the terms containing derivatives of the constraints. The falloff conditions (75), applied to the covariant derivatives of the scalar and vector densities $H^G_\perp$ and $H^G_r$ at r → 0, determine the decay of $\dot H^G_\perp$ and $\dot H^G_r$. By choosing the condition (77), the expression for $\dot H^G_r$ will decay as $O(r)$. As we exclude the possibility $N_0 = 0$ (which would freeze the time evolution at r = 0), the only possibility remaining for a proper decay of $\dot H^G_\perp$ is to set $P_{\Lambda 2} = 0$, which completes our proof.

C. Falloff conditions for the radiative atmosphere of a star

Now we study the question whether the Kuchař mapping $(\Lambda, R, P_\Lambda, P_R) \to (M, R, P_M, P_R)$ of the gravitational variables remains a canonical transformation in the configuration discussed in this paper. In order to answer this question, we first remark that the range of the cross-streaming null dust metric parameter B is restricted by R > 0.
This determines a lower boundary $L_{\min}$ of L, corresponding to R = 0. The quasi-local mass function m(L) (for a definition see [17]) vanishes at some $L_{m=0}(a, B)$ and takes negative values below it, in the interval $L_{\min} < L < L_{m=0}$. Besides, for L → ∞ the solution (27) is not asymptotically flat. One can escape these unpleasant features by cutting off the space-time between a certain $L_1 > L_{m=0}$ and an appropriately high value $L_2 > L_1$, and matching with appropriate metrics across these boundaries (see Fig. 1). The cross-streaming null dust region (27) is then matched from the interior with the interior Schwarzschild solution, representing a static star with mass $M_1$, whereas from the exterior it is bounded by incoming and outgoing Vaidya regions, and it touches three exterior Schwarzschild regions in three points (these are 2-spheres, if we take into account the angles θ, ϕ). Therefore the solution (27) is interpreted as a thick shell of 2-component radiation, created from the intersection of incoming and outgoing thick radiation shells. The intersection of the last incoming ray with the first outgoing ray is the point (2-sphere) where the junction to the outermost Schwarzschild region (characterized by mass $M_2$) is made. This region extends towards the spatial infinity $i^0$. As the fluid region is bounded, only the proper falloff at $i^0$ of the gravitational variables M, R, $P_M$, $P_R$ has to hold, as summarized in the first subsection of this section. The situation is not so trivial on the other boundary, at r → 0. There, in contrast with the previous treatments of Refs. [12], [14] and [26], we do not have vacuum, but rather the center of a static star represented by the interior Schwarzschild solution, where the falloff conditions are not yet known. The line element representing the gravitational field in the interior Schwarzschild solution is generated by a perfect fluid with energy-momentum tensor $T^{ab} = (\rho + p)\,u^a u^b + p\,g^{ab}$, where the energy density ρ and the pressure p (defined with respect to the four-velocity $u^a$ of the fluid particles) are given as ρ = const, with p determined by the constants a and b; here $\kappa^2 = 8\pi G$, and a, b are chosen such that p ≥ 0. As the canonical treatment of the interior Schwarzschild solution has not been developed yet (and it is beyond the scope of the present paper), we will impose the simplifying condition that the worldlines of the fluid particles of the stellar material are along the time-evolution vector ∂/∂t, that is, $u^a = \alpha^{-1}\,(\partial/\partial t)^a$, where α(t, r) > 0 is a scaling function. From the normalization condition of the four-velocity, $u^a u_a = -1$, we obtain $\alpha^2 = N^2 - (N^r)^2$. This choice of the allowable foliations is in accordance with the generic expectation that, whenever a reference fluid is present in the system, it is advantageous to introduce the parameter associated with the worldlines of the reference fluid as the time variable. Outside the interior Schwarzschild region, the leaves $\Sigma_t$ are still allowed to be arbitrary space-like hypersurfaces. The energy density and the energy current density of the fluid with respect to the chosen foliation become $\mu = T_{ab}\,n^a n^b$ and the corresponding flux, both expressible through the ratio N/α. With the falloff conditions (75) at r → 0, the condition B → 0 will continue to hold, thus the Kuchař transformation is canonical. But are these falloff conditions consistent with the constraints?
In order to respond affirmatively, we first note that the fluid variables obey corresponding falloff conditions at r → 0. These, together with $\sqrt{g} = \Lambda R^2 \sin\theta$, imply that the total constraints of gravity and fluid are obeyed for the chosen falloffs on the boundary, provided the condition (77) holds. The last question to address is whether time evolution conserves these falloffs. To see this we note that both $\dot H^{\mathrm{star}}_\perp$ and $\dot H^{\mathrm{star}}_r$ show the required decay. Alternatively, if we do not insist on the interpretation of the cross-streaming null dust space-time region as a radiation atmosphere of a star, we can let the outgoing radiation emerge from the origin and the incoming component be absorbed by the boundary at r = 0. In this case Cauchy surfaces can be chosen in such a way that their boundary at r → 0 is in a flat space-time, as in Fig. 3 of Ref. [14]. In this setup, the expression B again vanishes, and the Kuchař transformation is proved to be canonical.

VI. COMPARISON WITH PREVIOUS RESULTS

In this section we will establish the connections between the sets of canonical variables employed in this paper and in Ref. [14]. In order to do this, we first illustrate in Subsec. VI.A that an internal time can be introduced for a generic spherically symmetric cross-flow of radiation streams. We start from the variables employed in Ref. [14]. In Subsec. VI.B we show that the connection of those variables with our variables can be written out explicitly.

A. Constraints of null dust crossflow

Bičák and Hájíček [14] generalized the canonical formulation of the one-component null dust, presented in Ref. [11], to a two-component null dust, with the specialization to spherical symmetry. The gravitational part of their action was given by (17), whereas the energy-momentum tensor was chosen as that of a two-component null dust, with four-velocity null vectors for the ingoing and outgoing null dust streams. The latter were characterized by the canonical coordinates $\Phi_+$, $\Phi_-$ and their conjugate momenta $\Pi_+$, $\Pi_-$. The unit normal to the leaves was denoted $n^a$. The canonical action of the system then carried the super-Hamiltonian constraint $H^T_\perp := H^G_\perp + H^{BH}_\perp = 0$ and the super-momentum constraint $H^T_r := H^G_r + H^{BH}_r = 0$. Following the convention of Ref. [14], we assume that $\Pi_+ \Phi'_+ < 0 < \Pi_- \Phi'_-$. The constraints (95) and (96) can then be conveniently combined; the structure of the combined constraints was, however, a major inconvenience for the analysis presented there, where the one-component null dust limit (Vaidya space-time) is discussed in detail. If one does not aim to have this limit in the formalism, the situation is different. We have shown, on the example of the static cross-flow of radiation streams, how to construct an internal time for the two-component system. By suitable canonical transformations we have introduced the time function Z as a canonical coordinate and we have constructed the new super-Hamiltonian and super-momentum constraints, Eqs. (68), (72), which have strongly vanishing Poisson brackets. With this, we have turned the Dirac algebra of the original constraints into an Abelian algebra. The new constraints contain the momenta conjugate to the cross-flowing null dust variables linearly. This convenient feature can be further exploited in the process of quantization, which will turn the new super-Hamiltonian constraint into a functional Schrödinger equation. The latter has the obvious advantage over the Wheeler-DeWitt equation, obtained by the quantization of the original super-Hamiltonian constraint, that its space of solutions is linear.
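To make this advantage concrete, here is a schematic of the quantization step; the explicit Hamiltonian density h is the one fixed by Eq. (68) in the text, and the sketch only uses the linearity of the constraint in $P_Z$ (units with $\hbar = 1$). With

$$ H_{\uparrow Z}(r) = P_Z(r) + h(r)\big[\text{gravitational and remaining matter variables}\big] \approx 0 \,, $$

Dirac quantization with $\hat P_Z(r) = -i\,\delta/\delta Z(r)$ turns the quantum constraint $\hat H_{\uparrow Z}\,\Psi = 0$ into

$$ i\,\frac{\delta \Psi}{\delta Z(r)} = \hat h(r)\,\Psi \,, $$

a functional Schrödinger equation in which the scalar Z plays the role of a many-fingered internal time. Because the equation is linear in $\delta/\delta Z$, superpositions of solutions are again solutions, which is precisely the linearity of the solution space noted above.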
Further properties of the resulting functional Schrödinger equation are under investigation and we propose to discuss this topic in detail elsewhere.
Prediction of Freezing of Gait in Parkinson's Disease Using Unilateral and Bilateral Plantar-Pressure Data

Background
Freezing of gait (FOG) is an intermittent walking disturbance experienced by people with Parkinson's disease (PD). FOG has been linked to falling, injury, and overall reduced mobility. Wearable sensor-based devices can detect freezes already in progress and provide a cue to help the person resume walking. While this is helpful, predicting FOG episodes before onset and providing a timely cue may prevent the freeze from occurring. Wearable sensors mounted on various body parts have been used to develop FOG prediction systems. Despite the known asymmetry of PD motor symptom manifestation, the difference between the most affected side (MAS) and least affected side (LAS) is rarely considered in FOG detection and prediction studies.

Methods
To examine the effect of using data from the MAS, LAS, or both limbs for FOG prediction, plantar pressure data were collected during a series of walking trials and used to extract time- and frequency-based features. Three datasets were created using plantar pressure data from the MAS, LAS, and both sides together. ReliefF feature selection was performed. FOG prediction models were trained using the top 5, 10, 15, 20, 25, or 30 features for each dataset.

Results
The best models were the MAS model with 15 features and the LAS and bilateral models with 5 features. The LAS model had the highest sensitivity (79.5%) and identified the highest percentage of FOG episodes (94.9%). The MAS model achieved the highest specificity (84.9%) and lowest false positive rate (1.9 false positives/walking trial). Overall, the bilateral model was best with 77.3% sensitivity and 82.9% specificity. In addition, the bilateral model identified 94.2% of FOG episodes an average of 0.8 s before FOG onset. Compared to the bilateral model, the LAS model had a higher false positive rate; however, the bilateral and LAS models were similar in all the other evaluation metrics.

Conclusion
The LAS model would have similar FOG prediction performance to the bilateral model at the cost of slightly more false positives. Given the advantages of single-sensor systems, the increased false positive rate may be acceptable to people with PD. Therefore, a single plantar pressure sensor placed on the LAS could be used to develop a FOG prediction system and produce performance similar to a bilateral system.

INTRODUCTION

Parkinson's disease (PD) is a progressive neurodegenerative condition that presents various symptoms, including rigidity, bradykinesia (slowed movements), postural instability, tremor, and freezing of gait (FOG) (1). FOG is an intermittent walking disturbance, often experienced in mid-late stage PD (2) as a sudden inability to step despite the intention to walk. FOG can lead to falling, injury, and long-term effects such as fear of future falls and loss of mobility (3). Various wearable sensor-based systems have been developed (4,5) to detect FOG using data from the freeze episode or predict freeze onset using data preceding the freeze (6-9). Cueing using auditory, visual, and tactile stimuli has been used during a freeze to help end the freeze and help the person resume walking (10-12). However, an intelligent cueing approach that generates a stimulus before the freeze, based on freeze prediction, is preferable, since it may prevent FOG from occurring. Accelerometers and gyroscopes are the most commonly used sensors for FOG detection and prediction (4,5).
FOG prediction systems often use multiple sensors of the same type mounted on various body parts (7,9,13-17). Given that FOG identification systems would benefit from increased wearability and simplicity, researchers have developed FOG detection systems that use everyday devices and clothing such as smartphones (18-21) and pants (22,23). However, noise from sensor movement relative to the body can adversely affect performance. Plantar pressure insole sensors that can be easily worn in a shoe have also been effective for FOG detection (24,25) and prediction (26-28) and have advantages in terms of wearability and simplicity. In addition to sensor type considerations, attempts have been made to reduce prediction system complexity by using only a single sensor input, such as a single shank-mounted accelerometer (29) or a waist-mounted inertial measurement unit (IMU) (30). A single-sensor system would eliminate the need for multisensor synchronization, reduce the number of sensors worn, reduce the amount of data to acquire and process, and may be more acceptable to end users. However, additional study is required to determine if single-sensor FOG prediction systems could produce models comparable in performance to multisensor systems. One approach to reduce the number of sensors would be to limit sensors to one side of the body. While single-side (7,9,13-15,29) and bilateral (16) IMU sensors have been investigated for FOG prediction, the unilateral use of plantar pressure sensors compared to bilateral use has not been studied. An important factor in using sensors on only one side of the body is that PD motor symptoms manifest asymmetrically and commonly affect one side of the body more severely (31). The most affected side (MAS) and the least affected side (LAS) are person specific and do not correspond to the dominant leg or hand. Although FOG detection systems have been effective using only waist and left leg sensor locations [e.g., using the Daphnet dataset (32)] without consideration of the MAS and the LAS, FOG prediction models have lower sensitivity and specificity than FOG detection models that used similar methods (4,17,25) and could be improved. While previous studies have not considered PD motor symptom asymmetry in FOG prediction models, there is a potential advantage of considering the MAS and the LAS in FOG prediction model development, especially if a single sensor is used exclusively. Given the asymmetry in PD gait, the benefits of single-sensor FOG prediction systems, and the ease of wearing plantar pressure insoles, there is a need to determine if single-limb insole instrumentation can be as effective as bilateral instrumentation in FOG prediction and if there is a preferred leg for plantar pressure insole instrumentation in FOG prediction. This study aimed to determine whether MAS, LAS, or bilateral plantar pressure data were most useful for FOG prediction. Identification of the most appropriate implementation approach is important in developing optimal systems for end users and guiding clinicians in setting up future FOG cueing systems.

Data Collection

The dataset used in this study is the same as in Pardoel et al. (25), with the data collection methods summarized here. Walking data were collected from 11 males with PD who experienced freezing at least once per week.
Inclusion criteria were: the ability to walk independently (without a walking aid), no history of deep brain stimulation, and no conditions other than PD that impair the ability to walk. Data were collected during a single visit to the Human Movement Performance Laboratory, University of Ottawa. Ethics approval was obtained from the University of Ottawa (H-05-19-3547) and University of Waterloo (40954), and all the participants provided informed written consent. Participants were tested while on their regular antiparkinsonian medication dosage and schedule. Data collection was typically scheduled in the hours prior to the participant's next dose, so that the medication would be wearing off during testing and FOG would be more likely to occur. Participants provided disease history and were assessed using the New Freezing of Gait Questionnaire (NFOG-Q) and the Unified Parkinson's Disease Rating Scale (UPDRS)-Part III (motor examination). Participants were also asked whether their PD symptoms predominantly affected the right or left side of their body. Laterality and severity of symptoms were confirmed by the researcher (JN) conducting the UPDRS III. Pressure-sensing insoles (FScan, Tekscan, Boston, Massachusetts, USA) were used for plantar pressure measurement during walking and data were recorded at 100 Hz. FScan insoles are thin (<1 mm) and flexible, with a resolution of 3.9 pressure-sensing cells per cm². Prior to data collection, a new pair of insoles was equilibrated using a pressurized air bladder and trimmed to fit inside the participant's regular shoes. Immediately before starting the walking trials, the sensors were calibrated by having the participant stand on one foot, transfer their entire weight to the other foot, and repeat this starting with the second foot. The walking trials were video recorded using a smartphone camera (30 Hz). Participants walked a freeze-inducing path up to 30 times (Figure 1). The walking path included one 90° and one 180° turn in each direction around the cones. The path also included a 180° turn in a narrow hallway. Prior to data collection, participants were asked which turning direction was most likely to cause freezing. This direction was selected for each participant as the primary turning direction in the narrow hallway. In some cases, participants were asked to change the turn direction after some trials did not produce a freeze. Additional physical and verbal tasks were performed simultaneously to increase the likelihood of freezing. The physical task involved carrying a plastic tray with objects on it and the verbal task consisted of continuously saying words out loud beginning with a specific letter.

Data Labeling

Following data collection, the video and plantar pressure data were synchronized and labeled using a custom MATLAB 2019b program. Synchronization was achieved by performing a single leg stomp at the beginning of each trial and confirmed using multiple heel strike events. During data collection, FOG episodes were identified, and offline labeling was later performed by researcher SP to refine the FOG onset and termination times. In cases of uncertainty, a second labeler was consulted. Each video frame was labeled as FOG or non-FOG. The video labels were transferred to the synchronized plantar pressure data using linear interpolation to the closest timestamp. The beginning of a freeze was defined as "the instant the stepping foot fails to leave the ground despite the clear intention to step."
The end of the freeze was defined as "the instant the stepping foot begins or resumes an effective step." For example, a step was considered effective the instant the heel lifted from the ground, provided that it was followed by a smooth toe off with the entire foot lifting from the ground and advancing into the next step without loss of balance. As a special case, if a person froze, stopped trying to advance, and remained standing, the instant that the participant stopped trying to advance was considered the end of the freeze. This was determined by the complete absence of foot movement and known FOG characteristics such as trembling of the knee, medial-lateral weight shifting, or attempts at shuffling. Furthermore, gestures and facial expressions clearly indicated that the participant was no longer trying to advance. Only a few of these special cases occurred. Pre-FOG labels were applied to all the data within the 2 s period immediately prior to the onset of a freeze episode and non-FOG labels were applied to all data that were not FOG or pre-FOG. If two FOG episodes were less than 2 s apart, the data between the two FOG episodes were labeled as pre-FOG. The 2 s pre-FOG duration represents the duration of approximately two strides and has been sufficient for FOG prediction in previous studies (8,27). Furthermore, 2 to 3 s pre-FOG durations have led to higher pre-FOG classification accuracy than longer pre-FOG durations (17).

Data Windowing

Following data collection and labeling, data for each walking session were windowed using a 1 s window with a shift of 0.2 s between consecutive windows (Figure 2A). Prior to classifier model development, windows were grouped into target and non-target classes and models were trained to differentiate between the classes. The objective was to develop a single model that could predict and detect FOG. Therefore, the target class included data windows containing purely pre-FOG data (W9-W13), windows containing both pre-FOG and FOG data (W14-W18), and purely FOG data (W19) (Figures 2A,B). The non-target class included all the other windows (W1-W8, W20).

Feature Extraction and Ranking

Features were calculated from each data window and used to train FOG prediction models. Anterior-posterior and medial-lateral center of pressure (COP) positions and total ground reaction force (GRF) were extracted from the plantar pressure data. Prior to COP calculation, if the GRF of one foot accounted for less than 5% of the two-foot total, the GRF was set to zero to remove noise or residual pressure during swing phase. COP velocity and acceleration were determined from the first and second derivatives of COP position. The GRF and COP position, velocity, and acceleration were used to calculate 13 time-domain features and 22 frequency-domain features (Table 1). The features used have been previously used for FOG identification (25). In total, 166 unilateral features and 1 bilateral feature (number of weight shifts) were extracted from the plantar pressure data, resulting in a total of 333 features. Relief-F ranking feature selection was used to determine the best features (Relief-F was found to be better for feature reduction than minimum-redundancy maximum-relevance ranking, tested in earlier experiments). For bilateral limb models, all 333 features were ranked. For the unilateral models, separate datasets were created with the 166 MAS and the 166 LAS features. Relief-F feature ranking was performed for the MAS and LAS datasets.
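To make the windowing and feature extraction concrete, the sketch below shows how overlapping 1 s windows and a few representative time- and frequency-domain features could be computed. The original pipeline was implemented in MATLAB; this is an illustrative Python translation, and the function and variable names (sliding_windows, cop_derivatives, window_features) are hypothetical rather than taken from the authors' code.

```python
import numpy as np

FS = 100     # plantar pressure sampling rate (Hz), as in the paper
WIN = 100    # 1 s window -> 100 samples
SHIFT = 20   # 0.2 s shift between consecutive windows -> 20 samples

def sliding_windows(signal, win=WIN, shift=SHIFT):
    """Yield (start_index, window) for overlapping windows of a 1-D signal."""
    for start in range(0, len(signal) - win + 1, shift):
        yield start, signal[start:start + win]

def cop_derivatives(cop, fs=FS):
    """COP velocity and acceleration via first and second finite differences."""
    vel = np.gradient(cop, 1.0 / fs)
    acc = np.gradient(vel, 1.0 / fs)
    return vel, acc

def window_features(w, fs=FS):
    """A small, illustrative subset of time- and frequency-domain features."""
    power = np.abs(np.fft.rfft(w - w.mean())) ** 2
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    return {
        "mean": w.mean(),                          # time domain
        "std": w.std(),
        "range": w.max() - w.min(),
        "dominant_freq": freqs[np.argmax(power)],  # frequency domain
    }

# Example: features of every window of an anterior-posterior COP trace
cop_ap = np.random.randn(3000)                     # placeholder for a real signal
vel, acc = cop_derivatives(cop_ap)
feats = [window_features(w) for _, w in sliding_windows(cop_ap)]
```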
Freezing of Gait Prediction Model Training

All prediction models developed in this paper used the same parameters and training methods. The only difference between the models was the input dataset and input features. Separate prediction models were trained using the top-ranked 5, 10, 15, 20, 25, and 30 features from each of the MAS, LAS, and bilateral datasets. These values were based on previous testing that found no performance improvement when using more than 30 features. Additionally, using more than 30 features substantially increased model training time. Steps of 5 features were used to limit the total number of models trained and evaluated. Each data window was classified using a binary classification model. Decision tree ensembles using random undersampling boosting (RUSBoosting) were trained. Each of the 100 trees had 5 splits. A leave-one-freezer-out cross-validation was performed, as in Pardoel et al. (27). Participants who did not freeze were always included in the training dataset and never held out as the test participant.

Freezing of Gait Prediction Model Performance Evaluation

The trained models were evaluated using windows and FOG episodes, similar to the evaluation in Pardoel et al. (27). The window-based evaluation compared each window classification to the ground truth label and calculated sensitivity and specificity. The FOG episode-based evaluation determined if and when each episode was detected by the model. Classification of three consecutive windows to the target class (Figure 2B) resulted in a model trigger decision (MTD) (Figure 3), which would trigger a cue if applied in a cueing system. If a MTD occurred within the MTD target zone (explained below), then the corresponding FOG episode was successfully identified. Identification delay (ID) was the time between FOG onset and a successful MTD identification. A negative ID indicated that the model predicted the FOG episode before onset and a positive ID indicated that the model detected the FOG episode after onset. In the literature, pre-FOG gait has been identified 3 steps prior to FOG onset (37) and predictions have been reported 4-5 s in advance of FOG (7,38). Furthermore, model classification target zones have been defined as 8 s prior to FOG onset (9). In this paper, the MTD target zone was specific to each FOG episode, based on a prediction target zone that was initially set to 6 s prior to FOG onset (Figure 3). If another FOG, stand-to-walk transition, or turn-to-walk transition occurred within the 6 s period prior to FOG onset, the prediction target zone was shortened to exclude these turning, standing, or FOG data. This ensured that false positives caused by the end of the previous FOG episode, turn-to-walk transition, or stand-to-walk transition were not mistakenly interpreted as predictions of the upcoming FOG. To ensure that the turning data were not included in the MTD target zone, a 1 s delay was used, so that the prediction target zone started 1 s after the end of the turn. Similarly, for transitions from standing to walking, a 1 s delay was used to remove periods of gait initiation from the MTD target zone (Figure 3). Each MTD was considered to be either a true positive (within the MTD target zone) or a false positive (outside the MTD target zone). The MTD false positive rate (false positives/trial) was calculated for each participant using the number of false positives and the number of trials.
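The training and cross-validation protocol described above can be sketched as follows. The paper's ensembles were built in MATLAB; this sketch instead uses the RUSBoostClassifier from the imbalanced-learn package (the estimator keyword assumes a recent imbalanced-learn release; older releases call it base_estimator), and the shallow decision trees only approximate MATLAB's 5-split trees. The helper name train_eval_loso and the data layout are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from imblearn.ensemble import RUSBoostClassifier  # pip install imbalanced-learn

def train_eval_loso(X, y, groups, freezers):
    """Leave-one-freezer-out cross-validation.

    X: (n_windows, n_features) feature matrix
    y: 1 for target windows (pre-FOG/FOG), 0 for non-target windows
    groups: participant id of each window
    freezers: ids of participants who froze (only these are held out)
    """
    results = {}
    for held_out in freezers:
        train = groups != held_out      # non-freezers always stay in training
        test = groups == held_out
        model = RUSBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=3),  # shallow trees
            n_estimators=100,           # 100 boosted trees, as in the paper
            random_state=0,
        )
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        tp = np.sum((pred == 1) & (y[test] == 1))
        tn = np.sum((pred == 0) & (y[test] == 0))
        results[held_out] = {
            "sensitivity": tp / max(np.sum(y[test] == 1), 1),
            "specificity": tn / max(np.sum(y[test] == 0), 1),
        }
    return results
```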
In this analysis, false positive MTD that occurred during standing or gait initiation were ignored. Gait initiation was defined as the first second of walking after standing. As a final step in model development, a 2.5 s no-cue interval was used, wherein MTD were ignored if they occurred <2.5 s after the previous MTD (27).

RESULTS

Participant information is presented in Table 2. The number of FOG episodes that occurred during turning is presented in Table 3. Freezing of gait prediction model performance for each number of features used is shown in Figure 4. All values are means calculated over all held-out test participants (i.e., freezers). Overall, the highest sensitivity (79.5%) was for the LAS model with 5 features. The LAS model had the highest sensitivity for 5, 10, 15, and 25 features. The bilateral model had the highest sensitivity for 20 (74.6%) and 30 (66.7%) features. Specificity for all MAS, LAS, and bilateral models ranged between 81.3 and 88.0%. The highest overall specificity (88.0%) was for the bilateral model with 30 features. The LAS (87.5%) and MAS (83.9%) models using 30 features also had a high specificity. The highest percentages of identified FOG episodes ranged from 90.2 to 94.9% for all models that used 5, 10, or 15 features. For increasing numbers of features, the percentage of identified FOG decreased for all models. Overall, the highest percentage of identified FOG (94.9%) was for the LAS model with 5 features. For each model, some FOG episodes were predicted in advance of the freeze, while other FOG episodes were detected after onset. The ID is the average of all FOG identifications for each participant. The LAS and bilateral models produced similar identification delays using 5, 10, 15, and 20 features. Overall, the earliest identifications were for the bilateral and LAS models with 5 features, which both had a -0.8 s ID. For all the models that used 5, 10, or 15 features, the ID values were between -0.4 and -0.8 s. The MAS models had the lowest average false positive rate per walking trial for all numbers of features and the LAS models had the highest false positive rates. Overall, the lowest false positive rate was for the MAS model using 20, 25, or 30 features (1.0 false positives/trial). The highest false positive rate was for the LAS model using 10 features (3.3 false positives/trial). Generally, using more features tended to increase specificity, decrease sensitivity, decrease the percentage of identified FOG episodes, and decrease the number of false positives per trial. Increasing the number of features resulted in later predictions for the bilateral and MAS models. To select the best feature set, the model for each feature set (Figure 4) was ranked for each evaluation metric and the model (feature set) with the smallest rank sum was selected. For example, for the MAS, the model with 5 features was the third best model for sensitivity, fifth best for specificity, third best for percentage of identified FOG episodes, best (first ranked) for ID, and fifth best for false positive rate. These ranks (3, 5, 3, 1, and 5) were summed to produce a summed score of 17 for the MAS model feature set with 5 features. This ranking was done for the MAS, LAS, and bilateral models (Table 4). According to the ranking, the best MAS model used 15 features. The best LAS and bilateral models both used 5 features. The features used in the best models are given in Table 5.
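The trigger logic described above, where an MTD fires after three consecutive target-class windows and triggers inside the 2.5 s no-cue interval are ignored, can be sketched as below. The 0.2 s window shift converts window indices to time, and restarting the consecutive-window count after each trigger is an assumption of this sketch rather than a detail stated in the paper.

```python
def model_trigger_decisions(window_preds, shift_s=0.2, n_consec=3, no_cue_s=2.5):
    """Convert per-window classifications into model trigger decision (MTD) times.

    window_preds: 0/1 classification for each window, in temporal order.
    Returns the times (s) at which a cue would be triggered.
    """
    triggers = []
    run = 0                              # length of the current target-class run
    last_trigger = -float("inf")
    for i, pred in enumerate(window_preds):
        run = run + 1 if pred == 1 else 0
        t = i * shift_s                  # time of this window
        if run >= n_consec and t - last_trigger >= no_cue_s:
            triggers.append(t)
            last_trigger = t
            run = 0                      # restart the run count (assumption)
    return triggers

# Example: two bursts of target-class windows produce a single, debounced trigger
preds = [0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1]
print(model_trigger_decisions(preds))    # -> [0.8]; the second burst falls
                                         # inside the 2.5 s no-cue interval
```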
To examine model performance for each participant, the cross-validation results for the best MAS, LAS, and bilateral models are presented in Tables 6-8.

DISCUSSION

A model using gyroscope data from the shins predicted FOG with 84.1% sensitivity and 85.9% specificity (17). However, that model was developed using data from only 35 FOG episodes. For comparison, the models developed in this paper used data from 362 FOG episodes. Using many FOG episodes during model training is desirable, since it can help achieve good model generalizability and thus better classification performance when tested on previously unseen data. Other models in the literature achieved even higher sensitivity and specificity (9,14,15). For example, a person-specific model (i.e., a model tuned to a specific individual) using an ensemble of 9 support vector machine classifiers and data from 3 IMU sensors reported 93% sensitivity and 87% specificity (9). Using the same dataset, a 3-class (pre-FOG, FOG, and non-FOG) k-nearest neighbors classifier achieved 94.1% sensitivity and 97.1% specificity (14). However, these systems were person specific and may not be generalizable to new participants, or they used multiple sensors on various parts of the body and are thus not directly comparable to this study, which used a single sensor to create person-independent models. The sensitivity and specificity of the LAS model were comparable to other single-sensor FOG prediction studies in the literature (6,29,30,39). The best LAS model performed better for FOG prediction than a similar tree-based algorithm (AdaBoosted C4.5 decision tree) that used data from a single waist-mounted IMU (30). Compared to a FOG prediction model that used electroencephalography (EEG) signals, the LAS model had lower sensitivity (79.5% compared to 85.86%) and similar specificity (81.3% compared to 80.25%) (6). However, a single plantar pressure sensor could be integrated into regular footwear and could be used in a much more user-friendly wearable system than EEG sensors. The LAS model FOG episode identification performance was very good compared to other models in the literature. The LAS model identified 94.9% of episodes, which was similar to (9), where 94% of episodes were identified, and only slightly worse than a person-specific model used in Naghavi et al. (16) that identified 97.4% of episodes. The best MAS, LAS, and bilateral models in this paper all identified more than 91.0% of the FOG episodes. Furthermore, the LAS and bilateral models identified FOG 0.8 s before freeze initiation. Thus, if used as part of a cueing system, the LAS and bilateral models would cue most of the FOG episodes, with identifications made just under 1 s prior to FOG onset. The LAS model had higher sensitivity, earlier FOG identifications, and identified a higher percentage of FOG episodes than the MAS model. This may be explained by an increased role of the least affected limb in balance and postural stability during walking. Differences between the MAS and LAS have been identified in various motor tasks (40), and participants with PD (with and without FOG) preferentially adjusted the positioning of their least affected limb to retain balance after slipping (41). Therefore, the LAS limb may also be preferentially used for stability during walking, similar to how amputees rely on the intact limb for stability and balance (42).
Postural stability and FOG are intricately related (43), and stability in freezers can be negatively affected by dual-task walking, which is a common trigger for FOG (44). Furthermore, stability and postural control in PD can be assessed using COP (45,46). COP-based features that indicate postural instability may also indicate upcoming FOG. Therefore, if participants preferentially use the least affected limb for stability control when walking, the link between instability and FOG may lead to the LAS being the more informative limb for FOG prediction. The connection between postural stability, FOG, and the preferential limb for stability control during walking should be further investigated. The best MAS model had the highest specificity, lowest false positive rate, and latest predictions compared to the best LAS and bilateral models. Therefore, the MAS model predicted FOG less far in advance, but resulted in fewer false positive MTD. The best MAS model had 1.9 false positives per walking trial. Using the duration of each walking trial and averaging over all walking trials and all participants, the best MAS model thus produced one false positive approximately every 38 s of walking. Similarly, one false positive was produced approximately every 28 s for the bilateral model and every 24 s for the LAS model. However, since a specially designed freeze-inducing walking path was used in this paper, fewer false positives may be experienced during daily walking. In a clinical setting, the choice of limb to use for sensor mounting and data collection may depend on the person, their FOG history, and the intervention (cueing) approach. For someone who tends to recover independently following a freeze, minimizing false positives may be more important than early cueing. Thus, instrumenting the MAS may be preferable, to benefit from the higher specificity and fewer false positives. In contrast, for someone who frequently experiences loss of balance and potential falls when freezing, the LAS may be preferable, since FOG episodes would be identified earlier and with higher sensitivity. For this person, a late or missing cue may be more disruptive to overall walking than the increased number of false positives. The type of cue may also influence the decision to instrument the MAS or LAS limb. When using a low-intensity or minimally distracting cue, false positives may be better tolerated, thus supporting the use of the LAS model. An intense or potentially bothersome cue may be best used with MAS instrumentation to reduce unnecessary cueing. While the LAS model performance was similar to the bilateral model, the bilateral model is recommended for best FOG prediction performance, since it produced fewer false positives. On the other hand, the difference between false positive frequencies (LAS 1:24 s, bilateral 1:28 s, and MAS 1:38 s) may be imperceptible to the user. Furthermore, single-sensor systems can potentially be simpler, less expensive, and more user-friendly than systems with multiple sensors. These advantages may be more important than a slight decrease in false positive rate. Therefore, systems that use plantar pressure data from the LAS may be preferable to systems that use plantar pressure data from both feet. Of the total 362 FOG episodes, approximately half (n = 177) occurred during turning. Of the 177 turning FOG, 109 occurred when the MAS was the outside limb (LAS was the pivot limb).
The LAS model identified 90.9% of turning FOG when the LAS was the pivot limb and 98.7% when the LAS was the outside limb. The best MAS and bilateral models correctly identified more than 95% of turning FOG when the MAS was the pivot limb and correctly identified 83.0% (MAS model) and 89.3% (bilateral model) when the MAS was the outside limb. The performance of the models could be affected by most hallway turns having the MAS as the outside limb, the imbalance in the number of turns in each direction, and the differences in the number of freezes during turns for each participant. Future study may explore FOG identification for MAS or LAS turns. This study involved 11 participants, 7 of whom froze during testing. In total, 362 FOG episodes were recorded, with 221 FOG episodes from participant P07; further study with larger datasets is recommended. A larger dataset including participants with various FOG subtypes (e.g., shuffling, trembling in place, akinetic) would allow the exploration of connections between FOG subtypes and the preferred limb for instrumentation.

CONCLUSION

This study compared the performance of FOG prediction models that used plantar pressure data collected from the most affected side, least affected side, and both sides. Of the RUSBoosted ensembles of decision trees trained, the best models used 5 features for the LAS and bilateral models and 15 features for the MAS model. The LAS model had higher sensitivity and identified a higher percentage of FOG episodes, further in advance of FOG onset, compared to the MAS model. The MAS model had higher specificity and fewer false positives. In a system that uses a single plantar pressure sensor, the decision to instrument the LAS or MAS may be person specific. For someone who tends to recover independently from FOG, instrumenting the MAS may be preferable, since there would be fewer false positives. However, for someone who experiences loss of balance during freezing, cueing earlier may be more important than minimizing false positives, and thus instrumenting the LAS may be preferable. The LAS and bilateral model performance was similar for all evaluation metrics except the false positive rate. The LAS model had a slightly higher false positive rate than the bilateral model. Therefore, based on prediction performance, using plantar pressure data from both feet is recommended. However, since the difference in false positive rate between the LAS and bilateral models was small, the advantages of a single-sensor system may outweigh the increase in false positive rate. In practice, using a single-limb plantar-pressure-based FOG prediction system could enhance wearability and compliance, since fewer sensors would need to be worn.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary materials. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

This study involving human participants was reviewed and approved by the University of Ottawa (H-05-19-3547) and University of Waterloo (40954) Research Ethics Boards. The participants provided their written informed consent to participate in this study.
Advancing Mental Health and Psychological Support for Health Care Workers Using Digital Technologies and Platforms

Background
The COVID-19 pandemic is a global public health crisis that has not only endangered the lives of patients but also resulted in increased psychological issues among medical professionals, especially frontline health care workers. As the crisis caused by the pandemic shifts from acute to protracted, attention should be paid to the devastating impacts on health care workers' mental health and social well-being. Digital technologies are being harnessed to support the responses to the pandemic, which provide opportunities to advance mental health and psychological support for health care workers.

Objective
The aim of this study is to develop a framework to describe and organize the psychological and mental health issues that health care workers are facing during the COVID-19 pandemic. Based on the framework, this study also proposes interventions from digital health perspectives that health care workers can leverage during and after the pandemic.

Methods
The psychological problems and mental health issues that health care workers have encountered during the COVID-19 pandemic were reviewed and analyzed based on the proposed MEET (Mental Health, Environment, Event, and Technology) framework, which also demonstrated the interactions among mental health, digital interventions, and social support.

Results
Health care workers are facing increased risk of experiencing mental health issues due to the COVID-19 pandemic, including burnout, fear, worry, distress, pressure, anxiety, and depression. These negative emotional stressors may cause psychological problems for health care workers and affect their physical and mental health. Digital technologies and platforms are playing pivotal roles in mitigating psychological issues and providing effective support. The proposed framework enabled a better understanding of how to mitigate the psychological effects during the pandemic, recover from associated experiences, and provide comprehensive institutional and societal infrastructures for the well-being of health care workers.

Conclusions
The COVID-19 pandemic presents unprecedented challenges due to its prolonged uncertainty, immediate threat to patient safety, and evolving professional demands. It is urgent to protect the mental health and strengthen the psychological resilience of health care workers. Given that the pandemic is expected to exist for a long time, caring for mental health has become a "new normal" that needs a strengthened multisector collaboration to facilitate support and reduce health disparities. The proposed MEET framework could provide structured guidelines for further studies on how technology interacts with mental and psychological health for different populations.

Introduction

The COVID-19 pandemic, as a prolonged global public health crisis, has heavily burdened health care systems and the health care workers who are the direct responders safeguarding people's health. Positive and optimistic emotional states play important roles in stimulating the human body's immune system, which could enable health care workers to effectively engage in the fight against the pandemic. Excessive pressure, anxiety, and depression can be detrimental to mental health and may prevent health care workers from actively performing their duties in response to the pandemic.
The scale, pervasiveness, and complexity of the stressors associated with the ongoing pandemic are unprecedented, despite the fact that some countries have achieved milestones in controlling the pandemic and have moved forward to the initiation of vaccination [1]. With the realization that the end of the pandemic is far from close, the toll of the pandemic on the mental health and well-being of health care workers still requires urgent attention. Experiencing intense pressure at work for a long time may cause a series of problems that can affect physical and mental health, which can also affect workers' quality of life and work efficiency [2,3]. The threat of being infected by the virus, inability to complete work, the emotional impact of patients' deaths, and concerns regarding the safety of family members all increase the emotional pressure on health care workers. Although vaccines have been distributed in some countries, research indicates that vaccine compliance remains variable and inconsistent [4,5]. Existing mental health problems, in the face of extensive media coverage of the rising numbers of casualties, overburdened health care systems, and psychological issues caused by the COVID-19 pandemic, may have fostered health care workers' anxieties and distrust in preventative health care. These fears could also result in vaccine hesitancy [4]. Given that the COVID-19 pandemic is expected to persist for a long time, caring for health care workers' mental health has become a "new normal" that requires strengthened multisector collaboration to facilitate mental health support and reduce health disparities. However, to enhance their psychological preparedness for the new normal of the pandemic, there is a need to integrate resources and provide more comprehensive and concerted psychological support for health care workers. Understanding the risks of mental health issues that health care workers have been experiencing, identifying effective interventions to address the adverse effects of the pandemic, and proposing tailored strategies based on digital health will offer valuable support for health care workers. As we look to an uncertain future, a conceptual framework for the development and deployment of support will facilitate well-being endeavors and provide a foundation for addressing long-term mental health needs. The aim of this study is to develop a framework to describe and organize the mental health and psychological problems that health care workers are facing during the pandemic. Based on the framework, this study also proposes potential interventions from digital health perspectives that health care workers could leverage during and after the pandemic.

Conceptual Framework

In this study, we reviewed and analyzed the psychological problems and mental health issues that health care workers have encountered during the COVID-19 pandemic, and we developed the MEET (Mental Health, Environment, Event, and Technology) framework (Figure 1) to demonstrate the interactions of mental health, digital interventions, and social support. There is a mismatch between the societal and organizational sources of psychological problems, such as lack of personal protective equipment and overwhelming workload, and the attempts by health care systems to address mental health issues at an individual level [6]. In this framework, mental health includes cognitive status, activities of daily living, behaviors, and instrumental activities of daily living.
Environment refers to factors that are related to social support, family, and network composition. Events include the COVID-19 pandemic, lockdown, social distancing, and vaccine distribution [7]. Technology includes diverse types of digital interventions and platforms [8], such as online support forums, telehealth platforms, health apps, and wearable devices. Through the MEET framework, it is possible to better understand the interactions between mental health, event, environment, and technology.

Search Strategy

The search strategy and selection criteria were designed to search the PubMed, Health Source: Nursing/Academic Edition, Embase, and Scopus databases to identify relevant articles published up to the date of the study. Search terms included COVID-19, 2019-nCov, SARS-CoV-2, SARS-CoV, and coronavirus, in combination with health care worker, mental health, psychological health, technology, and digital intervention. The search strategy also included Medical Subject Headings terms for PubMed and Emtree terms for Embase. The search was not restricted by study design.

Study Selection and Eligibility Criteria

This study included any type of study about any type of health care worker during the COVID-19 pandemic with outcomes relating to their mental and psychological health, as well as studies about digital health technologies and platforms. The prevalence of mental health issues and effects and the interventions aimed at preventing or reducing negative mental health issues were analyzed and summarized narratively. The search strategy imposed no restrictions on study design, methodology, or language. This study focused on the health care worker population and identified references by searching titles and abstracts using the keywords from the four domains (mental health, environment, event, and technology) that are listed in Textbox 1.

Anger

The inner sense of security of health care workers has been threatened by the global pandemic. Health care workers may feel helpless and powerless. From the psychological perspective, anger is a type of psychological defense [10]. There are multiple triggers of anger: the sudden outbreak, helplessness during the spread of the pandemic, delayed vaccination, and others.

Anxiety

Health care workers have received professional medical training, which can help them address the pandemic objectively and rationally. However, they also have the same emotional responses as the general population, who are experiencing feelings of anxiety and panic. In addition to worrying about themselves and their families being infected with SARS-CoV-2, some health care workers are worried that the pandemic will continue to spread. Some health care workers pay too much attention to negative news and information. When they feel physical discomfort, especially with respiratory symptoms, they often manifest anxiety, nervousness, and restlessness. This sense of losing control will likely result in overthinking, pessimism, loss of appetite, overeating, or weight loss [11].

Obsessive-Compulsive Habits, Traits, and Disorder

Obsessive-compulsive disorder [12] refers to a mental disorder whose main symptoms are repeated compulsive actions or obsessive thinking. In the current pandemic situation, hand washing, opening windows, and wearing masks are effective means to prevent SARS-CoV-2 infection. However, health care workers may engage in compulsive, excessive disinfection behaviors.
They may overthink the negative consequences of the disease, and these abnormal behaviors can cause painful feelings.

Hypochondriasis

Hypochondriasis is a psychopathological condition [13] in which a person is convinced that they have a specific disease without clear medical evidence. In the context of the COVID-19 pandemic, health care workers are in close proximity to or in direct contact with a large number of patients, and the potential risk factors for infection are significantly increased, which can lead to hypochondriasis. When physical discomfort occurs, health care workers may overthink their symptoms, which may cause unnecessary anxiety and nervousness.

Depression

Pessimistic feelings are likely to trigger negative and hopeless emotions. These emotions are signs of depression. Various factors may contribute to depression, such as grief over the loss of lives, fear of becoming ill, and psychological trauma from the global pandemic [14].

Sleep Issues

Sufficient sleep is essential for health care workers to restore physical strength and improve immunity after high-intensity work. Having a good quality of sleep can reduce the risk of illness. However, during the pandemic, there are multiple barriers to sleep: isolation from society, a disordered life rhythm, mental fatigue, depression, and loss of interest and joy in life. Health care workers may have difficulty falling asleep even when in an exhausted physical state, or they may experience shortened sleep time, frequent waking and dreaming, and disordered sleep rhythms [15].

Physical Discomfort and Somatization

During the pandemic, some health care workers may feel physical discomfort, which may be caused by physiological or mental health issues. Strong psychological fluctuations will lead to physical discomfort involving organ systems throughout the body. When health care workers are under great pressure, negative emotions tend to be transformed into physical symptoms, which is commonly called somatization [16]. With these symptoms, psychological disorders and pain may not be detected but may be present in the psychopathological process in the form of physical discomfort or dysfunction. Common types of physical discomfort include palpitations, chest tightness, shortness of breath, airway obstruction, dizziness, bloating, fatigue, decreased appetite, unstable blood pressure, and menstrual disorders [17]. These experiences of physical discomfort tend to increase the tendency toward hypochondriasis and often lead to a sense of panic.

Cognitive Issues in Concentration

The human body will redistribute blood nutrients to the heart, muscles, and other organs when it is under stress. This process will reduce the essential supply to the brain and result in inattention, inability to focus, and decreased ability of judgment and perception. In addition, paranoia may be generated in such situations [18].

Behavioral Issues

In hospitals, when health care workers treat patients who are suspected SARS-CoV-2 carriers, they are likely to be sensitive to patients' coughing and prone to conflicts with them. An irregular lifestyle, such as an unhealthy diet, poor sleep, and lack of physical activity, will increase the likelihood of infection. Common behavioral problems include performance avoidance, decreased work enthusiasm and physical activity, increased dependence on families, and a disorderly lifestyle and self-management.
Health care workers may also adopt unhealthy lifestyle habits, such as smoking, drinking, staying up late, and overeating, or impulsive behaviors, such as panic buying and stockpiling of disinfection supplies, food, drugs, etc. [19]. Mental Health and Psychological Protection Interventions The COVID-19 pandemic has resulted in an increase in risk factors for mental health issues, which requires both short-term adaptations and sustained responses. Lack of training, social support, effective communication, and accommodative coping are common factors for developing psychological morbidities and adverse psychiatric outcomes [20]. Comprehensively integrated intervention approaches are often more effective than single treatment methods and have a longer-lasting effect. Emerging health information technologies [21], coupled with recent innovations in digital health, could enable health services to offer tailored and proactive mental health care for health care workers. Digital Communication Platforms During the COVID-19 pandemic, health care workers are most likely to communicate with colleagues with whom they work closely. This is because these colleagues are empathetic and understand the hardships and difficulties of frontline work, and their mutual consolation can be an effective intervention. Understanding and support from family are also important. However, due to the policy of social distancing, digital platforms may be more accessible during a pandemic [22]. These platforms could enable health care workers to communicate, which is an essential component of any universal, community-led response to the pandemic [23]. Furthermore, digital communication platforms could provide a peer-support network for health care workers to share their emotional feelings, challenges, and personalized resolutions, which may foster resilience and comradeship. Telehealth Platforms Another ideal communication partner is a psychologist. Communicating with psychologists through telehealth or remote platforms can allow health care workers to express negative emotions, actively talk about the difficulties they face, and share personal feelings encountered during their work. Primary mental health care modes such as counseling, psychotherapy, or pharmacological treatment should be provided through the health care workers' local health care system or organization as needed. Professional guidance from psychologists will help health care workers relieve negative emotions, adjust negative cognition, and restore a healthy mentality that enables them to better cope with work and interact harmoniously with their families. For those health care workers who are too busy to receive support from local psychologists, resources such as employee assistance programs [24], crisis hotlines [25], and other institutional resources may be good first steps. Self-Guided Psychological Interventions Nonpharmacological interventions, such as cognitive-behavioral therapy, meditation, mindfulness, breathing, and relaxation training through websites or mobile apps, are suitable for health care workers. Internet-based psychological intervention may be the most convenient, fast, and economical means for health care workers who are currently fighting the novel coronavirus.
With the help of information technology, these interventions can be transformed into audiovisual interactions, which enables health care workers to access web-based psychological intervention without being restricted to a particular time and place. This media can also effectively transmit scientific psychological crisis response strategies to frontline health care workers. In this way, health care workers can improve their mental health protection awareness and take timely action. Internet-Based Interventions Regularity, order, and a sense of control are effective means of coping with anxiety and panic. During the pandemic, despite the limited range and number of activities, health care workers are still expected to actively balance work and life. They should not overuse alcohol or tobacco to relieve pressure or negative emotions. Health care workers who have sleep issues need to pay attention to sleep hygiene and decrease their use of caffeine [26]. Studies have shown that evidence-based internet interventions can help address these issues [27]. For health care institutions that have not implemented internet-based interventions, providing mindfulness education or meditation interventions could significantly reduce stress and other psychological disorders [28,29]. Web-Based Learning Communities Obtaining mental health knowledge through web-based learning communities is another effective approach. Emotions such as anxiety and fear are normal psychological reactions, and moderate anxiety can help people increase their awareness of prevention [30] and avoid dangerous environments. However, excessive pressure and anxiety will weaken the human body's immune system and damage its protection mechanisms. Receiving mental health education and training enables health care workers to make rapid and scientific judgments about their psychological status and offers them keen insights into abnormal psychological reactions. This training includes education on the psychosocial impact of high-casualty events in different settings. Health care workers could develop a personalized resilience plan that involves the identification of anticipated responses. Meanwhile, they should also be taught how to use digital and mobile health technologies for delivering care [31,32]. The earlier the intervention, the more likely it is that negative moods and psychological situations will be adjusted in time. Furthermore, this training will help health care workers understand stress-related obstacles and approaches to adjusting their emotions in the face of catastrophic events, as well as establish sound psychological defense mechanisms against crises. Although training and education may not generate an immediate effect, these efforts will create a continuously improving support environment [33], reinforce the capacity to support increased access to care for mental health issues, and strengthen health care workers' readiness for the new normal of the postpandemic era. Artificial Intelligence in Health Care Systems The COVID-19 pandemic has increased the stress of health care workers who were already overwhelmed by high workloads. Many health care workers are close to reaching their physical and psychological limits. High stress and overwork not only damage health care workers' physical and mental health [34] but also affect their decision-making during clinical work [35].
Health care workers should objectively assess their ability to withstand pressure and stress and gauge their capacity to devote themselves to effective work. Using artificial intelligence approaches, such as machine learning and deep learning [36], to plan a reasonable schedule of shifts and assist in clinical decision-making [37] may help health care workers avoid physical and mental burnout [38]. Mobile Health Point-of-care systems such as portable smart devices [39], home diagnosis technologies based on the Internet of Things [40], and other digital interventions can help health care workers detect potential physical issues at early stages. In addition, these interventions could be tailored to health care workers and fit their personal needs and lifestyles. Short Videos Health care workers are always seeking a transparent understanding of the situation during the pandemic [41]. Short videos provide a panoramic and detailed record of the actual situation, and the intuitive way in which they present information greatly improves the audience's acceptance and understanding [42]. Some short videos could provide advice on ways to stay healthy by teaching health care workers how to include sufficient physical activity in their routine, eat fresh food, and consume natural supplements that can support their immune systems. In addition, short videos could facilitate wellness therapies to relieve stress and anxiety and help health care workers maintain a general sense of mental and physical well-being. New Media The timely disclosure and dissemination of information could help health care workers and their families understand the course of the incident, the truth, and the real situation [43]. Meanwhile, authoritative information also eliminates rumors and prevents excessive pressure on health care workers. Higher satisfaction with disseminated public information may contribute to lower psychological distress. In the current situation, authoritative news can be quickly and widely disseminated through health communication technologies (ie, social media, short videos) to address public concerns. This information can strengthen the credibility of official departments and help reduce or even eliminate the influence of rumors [44]. New media platforms are also enhancing the affinity and attractiveness of digital approaches. Social Media Social media platforms are important sources for the supervision of public opinion. During the pandemic, all departments, agencies, and institutions in society have been interlocked in their responses to the emergency, which requires an orderly, accurate, and efficient workflow. Mobile information and health communication technologies play prominent roles in media supervision, investigation, and filling in information gaps. Through mobile communication platforms, health care workers from different departments at the front line can share their perseverance, efforts, and strategies to prevent and control the pandemic from multiple perspectives [45]. Through social media, humanistic information and communication can not only calm health care workers and boost their confidence but also positively guide the public and help mitigate negative and anxious environments [46]. Social media is playing a comprehensive role in science popularization, as it is based on modern mobile communication technologies that convey scientific knowledge to the public in a fast, timely, and vivid fashion.
With multiplatform and multichannel support enabling rapid information coverage, the public can quickly gain a scientific understanding of the evolving situation and take the initiative in effective preventive actions, which is more efficient than passive instruction [47]. Principal Findings Health care workers and professionals have the critical responsibilities of saving lives and protecting people's health during the COVID-19 pandemic. The pandemic has undoubtedly created universal psychological distress. Efforts to address the problem and to prevent the long-term mental health deterioration of health care workers are paramount in the response to COVID-19. Understanding the risks of mental health issues that health care workers are experiencing, identifying effective interventions to address the adverse effects of the pandemic, and proposing tailored strategies based on digital health will offer valuable support for health care workers. We provide a conceptual framework for the allocation of the main sectors (mental health, environment, event, and technology) at the individual, organizational, and societal levels, focusing on addressing health care workers' well-being needs during and after the pandemic. To prepare for the long-term fight against the pandemic, these guardians of human life must maintain their physical and mental health to work effectively and take care of more patients. Providing health care workers with positive support will help mobilize their self-psychological protection capabilities, thus allowing them to continue their valuable work. The need for more mental health services will introduce additional burdens to health care systems, and digital health technologies are playing vital roles in relieving these overwhelmed systems. Leveraging hybrid solutions that offer web-based, telehealth-based, or blended face-to-face intervention and treatment may be more accessible and effective [48]. In addition to using digital technologies and platforms, health care workers should avoid information overload. Due to the modernization of communication approaches, the amount of information about the pandemic is overwhelming, which can increase the sense of insecurity and uncertainty. The traditional ways in which people obtain information, such as newspapers, radio, and television, have been transferred to the internet and mobile platforms such as social media, video, or live broadcast platforms. Mobile information and health communication technologies have revolutionized information dissemination, data exchange, media supervision, guidance of public opinion, and health communication [49]. Health care workers should pay more attention to authoritative information, actively avoiding negative news and preventing information from overwhelming them. Meanwhile, health care workers should also keep in regular contact with families and friends, which can not only increase emotional interaction and psychological support but also increase mutual encouragement. The pandemic may cause pressure, panic, and psychological trauma to health care workers, and technology cannot solve all of these problems. Mild emotional distress can be managed by health care workers themselves, whereas serious panic will severely affect their daily life. Self-regulation often has a limited effect, may require professional assistance, and is not suitable for every health care worker, especially young health care providers who have not experienced such a serious crisis.
Health care workers with insufficient clinical experience may experience more pressure and persistent depression, anxiety, insomnia, and other symptoms. Health care workers should request remote counseling from experts or go to a psychological clinic for consultation when necessary. If their psychological problems cannot be relieved after receiving professional psychological intervention or mental health services, psychiatrists should intervene in time and provide the corresponding diagnosis and treatment. Given that the COVID-19 pandemic is expected to continue for a long time, caring for mental health has become a new normal that needs strengthened multisector collaboration to improve social support and reduce health disparities. To enhance the psychological preparedness of health care workers for the new normal of the pandemic, there is a need to integrate resources and provide them with more comprehensive and concerted psychological support. Conclusion The COVID-19 pandemic has heavily burdened health care systems throughout the world. It is urgent and critical to protect the mental health and strengthen the psychological resilience of health care workers. The proposed MEET framework could aid understanding of the interactions among the mental health, event, environment, and technology sectors. In addition, this framework may provide structured guidelines for future research on mental and psychological health in different populations. Long-term, proactive individual, organizational, and societal infrastructures to support health care workers' mental health are needed to mitigate the psychological impact of the COVID-19 pandemic. Embedding these mental health practices as part of the new normal can be a stepping stone to a new future, with benefits and implications for other global public health issues far beyond the response to the COVID-19 pandemic.
2021-05-22T00:03:34.095Z
2020-07-02T00:00:00.000
{ "year": 2021, "sha1": "36a2616638ede2eac5cef2052b47c34f1ac991bb", "oa_license": "CCBY", "oa_url": "https://formative.jmir.org/2021/6/e22075/PDF", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "22c114d8ad7fa285b510f33fd4436e88940eabd5", "s2fieldsofstudy": [ "Psychology", "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
138037816
pes2o/s2orc
v3-fos-license
Research on Differences of Particles between Pine Smoke and Lampblack In this paper, differences in particle size between pine smoke powders and lampblack powders were studied. The results showed that the particle spectrum of the pine smoke powders had a unimodal distribution, while that of the tung lampblack powders was bimodal. The average particle size and volume of the lampblack powders were about 1/3 and 3% of those of the pine smoke powders, respectively. However, the specific surface area of the lampblack particles was about 5 times that of the pine smoke particles, which revealed that the lampblack powders were finer than the pine smoke powders. Introduction The Hui ink-stick, ranking second among China's "Four Treasures for Calligraphy", embodies the effort and sweat of many practitioners and has been very important in the development and spread of traditional Chinese culture [1][2][3][4][5][6]. The main material for ink making is soot, the smoke powder particles of incompletely burned pine or vegetable oil [7,8]. However, the mass production of ink seriously damaged pine forests, since pines have a long growing period [9,10]. In contrast, vegetable oils such as tung oil, rapeseed oil, and soybean oil, which come from crops with short growing seasons, are ecological materials [11]. In this paper, taking the pine smoke powders and tung lampblack powders as examples, differences in particle size and the feasibility of using lampblack to replace pine smoke were studied based on the particle size distributions of both ink-making materials. Instrument and Material A BT-9300H laser particle size analyzer was used in the study. The pine smoke powders and lampblack powders for the experiment were products of Jumotang Ink Industry Co., Ltd. The particle spectrum distribution was analyzed by placing 0.1 g of each powder in the laser particle size analyzer. 3.1 Particle Spectrum Distribution of the Pine Smoke Powders Results of the particle spectrum distribution of the pine smoke powders showed that the particle size ranged from 0.361 to 84.95 μm, the mean volume diameter was 21.14 μm, the mean area diameter was 9.461 μm, the mean length diameter was 2.331 μm, the median diameter was 19.11 μm, and the specific surface area was 234.8 m²/kg; the particle spectrum had a unimodal distribution with a peak near 22 μm (Figure 1). The particle size was relatively concentrated: particles from 21.12 to 23.51 μm accounted for 6.24% of the total, and particles from 17.05 to 32.41 μm were the main component of the pine smoke particle size, accounting for 35.8% of the total. 3.2 Particle Spectrum Distribution of the Tung Lampblack Powders Results of the particle spectrum distribution of the tung lampblack powders are shown in Figure 2. The particle size of the lampblack powders ranged from 0.100 μm to 68.58 μm; sizes below 0.100 μm could not be detected, being beyond the minimum testing limit of the laser particle size analyzer. Even if such particles existed, their proportion should be very small, since particles from 0.100 μm to 0.111 μm in diameter accounted for only 0.03% of the total.
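For reference, the summary statistics reported for these spectra (mean volume diameter, surface-weighted mean, median diameter, and specific surface area) can be derived from the binned analyzer output. The following is a minimal R sketch using a hypothetical volume-fraction histogram; it assumes spherical particles and an assumed soot density, whereas the BT-9300H reports these values directly:

```r
# Hypothetical binned output of a laser particle size analyzer
d  <- c(0.5, 1, 2, 5, 10, 20, 40, 80)            # bin midpoint diameters, um
qv <- c(0.01, 0.03, 0.05, 0.15, 0.25, 0.35, 0.12, 0.04)
qv <- qv / sum(qv)                                # normalized volume fractions

D43 <- sum(qv * d)       # volume-weighted mean diameter (De Brouckere, D[4,3])
D32 <- 1 / sum(qv / d)   # surface-weighted mean diameter (Sauter, D[3,2])
D50 <- d[which(cumsum(qv) >= 0.5)[1]]             # approximate volume median

rho <- 1800                                       # assumed soot density, kg/m^3
SSA <- 6 / (rho * (D32 * 1e-6))                   # specific surface area, m^2/kg
round(c(D43 = D43, D32 = D32, D50 = D50, SSA = SSA), 3)
```

Because the specific surface area of spheres scales as 1/D[3,2], the much finer lampblack distribution necessarily yields a severalfold larger specific surface area even though its particles occupy far less volume.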
Differences Between the Smoke Powders of Pine and Lampblack Results of the particle spectrum distribution of the lampblack powders showed that the mean volume diameter was 9.625 μm, the mean area diameter was 1.873 μm, the mean length diameter was 0.401 μm, and the median diameter was 6.163 μm, which revealed that the average (median) particle size was about 1/3 of that of the pine smoke powders, and the volume of the lampblack powders was only 3% of that of the pine smoke powders. In contrast to the volume difference, however, the specific surface area of the lampblack, 1186 m²/kg, was about 5 times that of the pine smoke. There were also significant differences from the pine spectrum. The particle spectrum distribution showed that the lampblack particle size had two peaks, appearing near 0.9 μm and 9.5 μm, respectively, and the fractions of particles near these peaks were much smaller than for pine. Particles from 0.850 to 0.947 μm, around the 0.9 μm peak, accounted for 1.3% of the total, and particles from 8.970 to 9.983 μm, near the 9.5 μm peak, accounted for 3.41%; hence the lampblack fractions near the peaks were much smaller than the corresponding pine fraction of 6.24%. These results suggested that the lampblack particles were highly dispersed and finer than the pine smoke particles. Conclusions The research concluded that the particle size of tung lampblack had a more complex structure than that of the pine particles, since the size peak of the pine smoke powders appeared at 22 μm while the peaks of the lampblack appeared at 0.9 and 9.5 μm, respectively. The lampblack powders were also finer than the pine smoke powders, since the average particle size and volume of the lampblack powders were about 1/3 and 3% of those of the pine smoke powders, respectively, while the specific surface area of the lampblack particles was about 5 times that of the pine smoke particles. As a whole, it would be difficult for the lampblack material to replace the pine smoke powders in production, as there are significant differences between the pine smoke powders and the tung lampblack powders. Acknowledgements I deeply appreciate teacher Yang, my supervisor, who walked me through the process of the writing. Without his consistent and illuminating instruction, the paper could not have reached its present form. This work was carried out as part of project 11007230, "Research of advanced protective development for traditional industry of Hui ink and She inkstone", for which funding is gratefully acknowledged.
2019-04-29T13:13:07.322Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "fff1cb947a2a8ee1f081560abcdc58d5f2c9ebfb", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/30/matecconf_smae2016_06104.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5462a16cca538ac59fc02a0c584ceac0b2d22428", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
236524119
pes2o/s2orc
v3-fos-license
Adaptation and psychometric validation of Diabetes Health Profile (DHP-18) in patients with type 2 diabetes in Quito, Ecuador: a cross-sectional study Introduction The Diabetes Health Profile (DHP-18), structured in three dimensions (psychological distress (PD), barriers to activity (BA) and disinhibited eating (DE)), assesses the psychological and behavioural burden of living with type 2 diabetes. The objectives were to adapt the DHP-18 linguistically and culturally for use with patients with type 2 DM in Ecuador, and to evaluate its psychometric properties. Methods Participants were recruited using purposive sampling through patient clubs at primary health centres in Quito, Ecuador. The DHP-18 validation consisted of a linguistic validation, carried out by two Ecuadorian doctors together with eight patient interviews, and a psychometric validation, in which participants provided clinical and sociodemographic data and responded to the SF-12v2 health survey and the linguistically and culturally adapted version of the DHP-18. The original measurement model was evaluated with confirmatory factor analysis (CFA). Reliability was assessed through internal consistency using Cronbach's alpha, and test-retest reproducibility was assessed by administering the DHP-18 to a random subgroup of the participants two weeks later (n = 75), using the intraclass correlation coefficient (ICC). Convergent validity was assessed by establishing previous hypotheses of the expected correlations with the SF-12v2, using Spearman's coefficient. Results Firstly, the DHP-18 was linguistically and culturally adapted. Secondly, in the psychometric validation, we included 146 participants; 58.2% were female, the mean age was 56.8 years, and 31% had diabetes complications. The CFA indicated a good fit to the original three-factor model (χ2(132) = 162.738, p < 0.001; CFI = 0.990; TLI = 0.989; SRMR = 0.086 and RMSEA = 0.040). The BA dimension showed the lowest standardized factorial loads (λ) (ranging from 0.21 to 0.77), while λ ranged from 0.57 to 0.89 and from 0.46 to 0.73 for the PD and DE dimensions, respectively. Cronbach's alphas were 0.81, 0.63 and 0.74, and ICCs were 0.70, 0.57 and 0.62 for PD, BA and DE, respectively. Regarding convergent validity, we observed weaker correlations than expected between DHP-18 dimensions and SF-12v2 dimensions (r > −0.40 in two of three hypotheses). Conclusions The original three-factor model showed good fit to the data. Although reliability parameters were adequate for the PD and DE dimensions, BA presented lower internal consistency, and future analyses should verify the applicability and cultural equivalence of some of the items of this dimension in Ecuador. Background Diabetes mellitus (DM) is a high-priority public health problem. It is the most frequent chronic disease in the world and, in 2014, affected 422 million people. According to the World Health Organization, people with type 2 diabetes mellitus (T2DM) represent 90% of all diabetics. The prevalence of T2DM has increased more rapidly in low- and middle-income countries than in high-income countries, as is the case in Latin America and Ecuador [1]. In 2016, the prevalence of T2DM in Ecuador was estimated at 7.3% and has been rising significantly in all age groups [2][3][4][5]. According to data from the STEPS Survey of Ecuador in 2018, the prevalence of diabetes was 6.6% in both sexes (6.6% in men and 6.5% in women) of the Ecuadorian population between 18 and 69 years of age, and increased to 10.7% in the age group between 45 and 69 years in both sexes [6].
T2DM is the most common metabolic cause of mortality, due to its complications and associated pathologies [7]. It negatively affects quality of life [8], defined as a person's individual perception of their physical, emotional and social state [9], as a result of associated physical disabilities and mental health problems [10]. Clinical measures can provide a good estimate of disease control, but the ultimate goal of DM care is to maintain or improve the patient's quality of life [11]. There are generic instruments to measure quality of life that can be used both in the general population and in all disease groups [12,13]. However, specific instruments have been developed to measure the specific effects of diseases and are more responsive to changes. Disease-specific instruments can help determine which conditions best explain a patient's limitations in physical and/or mental function and, therefore, are more useful in outcome research, health care cost studies, and clinical practice [14]. In Ecuador, advanced age, longer disease duration, hypertension and kidney disease are associated with a lower health-related quality of life in patients with T2DM [15,16]. In addition, a direct relationship was found between low socioeconomic status and the development of the disease [17]. Despite the rapid growth in the prevalence of T2DM and the existence of different instruments to measure quality of life in diabetic patients, none of them had been linguistically or psychometrically validated in Ecuador. A wide range of questionnaires to assess quality of life in diabetic patients has been described [18], such as the Diabetes Care Profile, which assesses factors important in a patient's adjustment to diabetes and its treatment in daily life and consists of 234 items; the Appraisal of Diabetes Scale, which assesses diabetes-related distress and consists of 7 items; and the Diabetes Distress Scale, which measures diabetes-related emotional distress for use in research and clinical practice and consists of 17 items, among others [19]. We chose the Diabetes Health Profile (DHP) because of its advantages over other diabetes-specific patient-reported outcome measures. It is a specific instrument to evaluate the psychological and behavioural impact of living with diabetes [20]. It generates a health profile that measures psychological distress, barriers to activity, and disinhibited eating. Each answer is rated on a scale, and the scores by dimension are presented on a common scale in which a higher DHP value is associated with a worse perception of quality of life. The short version of the DHP, with 18 items, has been used in different countries, demonstrating adequate metric properties [21][22][23]. The objectives of this study are to adapt the Diabetes Health Profile-18 (DHP-18) both linguistically and culturally for use with patients with T2DM in Ecuador, and to evaluate its psychometric properties. Participants We included type 2 diabetic patients who were at least 18 years of age, had been diagnosed for at least 12 months, resided in Quito with no intention of moving in the near future, and were native Spanish speakers.
Recruitment to the study used purposive sampling through a patient club for people with diabetes at the Chimbacalle Health Center and contacts from health promoters from several health centres in Quito (Número 1, Jardín del Valle, Cotocollao, Jaime Roldos Aguilera, Corazón de Jesus, Comité del Pueblo, San Antonio de Pichincha, Colinas del Norte, Pomasqui, Carcelén Bajo, El Condado, Mena del Hierro, La Bota, Pisulí, Puellaro, Chavezpamba, Cotocollao Alto and Calacalí). In this setting, diabetic patients' clubs are sometimes established in primary health care centres, either on the initiative of the health staff or of the patients themselves. The role of patient clubs is to motivate patients through the exchange of experiences among their members, in addition to the orientation, advice and guidance offered by health professionals on behaviour modification (physical activity/diets) [24,25]. Our selection sought to include a group of patients that was heterogeneous in terms of sex, age and level of education. All participants gave their consent to participate in the study. Procedure The interviews were carried out between February and July 2020. The DHP-18 validation process consisted of 2 phases. Linguistic and cultural adaptation Two Ecuadorian medical researchers reviewed the original version of the DHP-18 (English) and the existing translation (Spanish for the United States) to assess its cultural and linguistic relevance for use in Ecuador. They suggested some changes to the text, gave the reasons for these changes, and provided a new recommended translation. The changes were discussed with the other members of the team, and a new adapted version of the questionnaire was proposed. Subsequently, 2 different researchers carried out interviews to assess the linguistic and cultural understanding of the adapted questionnaire with 8 people with T2DM of Ecuadorian nationality at the Chimbacalle Health Centre. Participants were asked to answer the questions; the time needed was then recorded, the answer options were discussed, wording that was difficult to understand was commented on, and alternative wording was suggested based on the participants' own words. A second adapted version was proposed. The interviews were recorded and transcribed verbatim for analysis. Finally, participants' responses were summarized in a pilot test report including recommended changes and suggestions. The report was then sent to the original authors of the questionnaire for verification and approval. Psychometric validation Firstly, we recruited 146 participants for the baseline test, in which they responded to the previously linguistically validated Ecuadorian version of the DHP-18 and to another tool (the SF-12v2, in its version for use in Ecuador) [26], in order to assess the correlation with generic quality of life as a test of construct validity. Two weeks later, we assessed the intra-observer reliability of the new tool in a random sample of 75 of the previously interviewed patients, in which only the DHP-18 was readministered, along with the following question: "Compared to the last time you completed the questionnaire, how do you assess your condition today? (1) unchanged, (2) improved, (3) greatly improved, (4) impaired or (5) highly impaired".
Data collection The 8 interviews carried out during the linguistic and cultural adaptation were held face to face, but given the situation generated by the COVID-19 pandemic [27], the data for the psychometric validation were collected through individual telephone interviews. Responses were digitally recorded by the interviewer on electronic tablets using the free open-source Kobo Toolbox software (http://www.kobotoolbox.org/). Informed consents were provided orally and were audio recorded. DHP-18 questionnaire Participants responded to the adapted version of the DHP-18. We used the Diabetes Health Profile (DHP)-18 because it is a shortened version of the DHP-1, a specific instrument for measuring the psychological and behavioural impact of type 1 diabetes. We decided to use the short version of the DHP because it can be used in people with both type 1 and type 2 diabetes aged 11 and older, because the instrument has demonstrated adequate metric properties, and because its completion time is approximately 5-6 minutes. Items are scored using a 4-point Likert-type scale ranging from 0 to 3. Items are provided with one of four sets of responses: (1) never, sometimes, generally, always; (2) never, sometimes, often, very often; (3) not at all, a little, a lot, very much; and (4) very likely, quite likely, unlikely, not at all likely. The raw subscale scores are transformed into a common score range from 0 to 100, with 0 representing no dysfunction. The DHP-18 consists of three dimensions: psychological distress (with questions on, for example, feeling depressed because of diabetes; having more arguments or upsets at home than there would be without diabetes; losing your temper over unimportant things), barriers to activity (with questions on, for example, food controlling life; difficulty staying out late; avoiding going out when sugar is low) and disinhibited eating (with questions on, for example, finding it hard to say no to food you like; the ease of stopping when you eat; wishing there were not so many nice things to eat). SF-12 v2 The SF-12 v2 is an instrument for measuring health-related quality of life [26], based on the SF-36. It includes twelve items, has an application time of approximately two minutes, and is used to evaluate the degree of well-being and functional capacity of people over 14 years of age. The response options form Likert-type scales (where the number of options varies from three to six points, depending on the item), which assess the intensity and/or frequency of people's health status. The score ranges from 0 to 100, where a higher score implies a better health-related quality of life. The SF-12v2 has demonstrated adequate validity and reliability in the United States and internationally, and the Spanish version has been used successfully in Latin America and with Spanish-speaking populations in the United States. Investigations using these twelve SF items have verified that the instrument is a valid and reliable measure in the adult population of Latin American countries such as Colombia and Chile, and a translated version is available for Ecuador. The SF-12v2 includes questions related to health status; limitations in daily activities; problems with work or other regular daily activities due to physical health or emotional problems; pain; feelings; etc. Sociodemographic and clinical variables We collected sociodemographic and clinical variables (all self-reported by the participants): age, sex, marital status, ethnicity (mestizo or other minorities;
the mestizos are an ethnicity of combined Spanish and indigenous heritage), educational level, monthly income, employment status, smoking status, alcohol intake, weight, height, duration of illness, use of medications, diabetes complications and comorbidities. Statistical analysis We report descriptive statistics as frequencies, means (standard deviations) or medians (interquartile ranges), as appropriate. The psychometric characteristics of the DHP-18 were assessed according to the consensus-based standards for the selection of health status measurement instruments (COSMIN) guidelines [28]. Missing values for the DHP-18 and SF-12 v2 were substituted with the mean of the completed questions for those dimensions in which ≥ 50% of questions had been completed [29,30]. We evaluated floor and ceiling effects by calculating the percentage of patients scoring either the lowest or highest possible dimensional scores; if more than 15% of respondents achieve the lowest or highest possible score, floor or ceiling effects are present [31]. Statistical analyses were performed using Stata Version 15 (StataCorp LP; College Station, TX), and R software, version 4.0.0 (R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org), was used to perform the confirmatory factor analysis. The level of statistical significance was set at p < 0.05. Structural validity We performed a confirmatory factor analysis (CFA) because the factor structure had already been determined [32] and confirmed for other language translations [23]. We estimated the CFA using Diagonally Weighted Least Squares (DWLS) [33][34][35] to test the hypothesis that the general construct of the DHP is composed of three individual and correlated factors: psychological distress (6 items), barriers to activity (7 items) and disinhibited eating (5 items). To assess model fit, we used the following criteria: values > 0.95 for the Tucker-Lewis index (TLI) or the comparative fit index (CFI), a root mean square error of approximation (RMSEA) < 0.06, or a standardized root mean square residual (SRMR) < 0.08 are considered to indicate a good model fit [36,37]. Factor loadings with magnitudes of 0.3 or greater were considered suitable.
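As an illustration, this three-factor model test can be specified with the lavaan R package; the following is a minimal sketch in which the data frame dhp and the assignment of items to dimensions are hypothetical placeholders (the real analysis follows the DHP-18 scoring key):

```r
library(lavaan)

# Illustrative item assignment; the official DHP-18 key maps 6, 7 and 5
# items to psychological distress (PD), barriers to activity (BA) and
# disinhibited eating (DE), respectively
model <- '
  PD =~ dhp2 + dhp7 + dhp9 + dhp12 + dhp15 + dhp17
  BA =~ dhp1 + dhp3 + dhp5 + dhp6 + dhp10 + dhp13 + dhp16
  DE =~ dhp4 + dhp8 + dhp11 + dhp14 + dhp18
'

# DWLS estimation, treating the 4-point Likert items as ordered categories
fit <- cfa(model, data = dhp, estimator = "DWLS", ordered = TRUE)

fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
standardizedSolution(fit)  # standardized loadings and factor covariances
```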
Reliability To measure internal consistency reliability, we used Cronbach's alpha coefficient, where values > 0.7 are considered acceptable [36]. The homogeneity of items was verified by the analysis of item-rest and inter-item correlations for the items constituting each dimension of the scale. The usual rule of thumb is that an item should correlate between 0.3 and 0.7 with the total score of its factor (excluding that item), using Pearson's coefficient. Additionally, average inter-item correlations for items in the same factor should be moderate, between 0.15 and 0.5, to ensure that the items measure the same construct without being so closely related as to be redundant [38]. We measured test-retest reliability in patients reporting no change in the global assessment of change question. For this purpose, we considered that an individual's health was significantly better if they responded "much better" or "somewhat better" in the global assessment, or significantly worse if they responded "somewhat worse" or "much worse" [39]. We used the intraclass correlation coefficient (ICC) under a 2-way random effects model with absolute agreement [40], and its associated 95% confidence interval. We considered that a questionnaire exhibits substantial reliability when the ICC is between 0.40 and 0.75, while an ICC greater than 0.90 represents excellent reliability [36]. Measurement errors were determined by calculating the standard error of measurement (SEM) and the smallest detectable change (SDC). We calculated the SEM as the square root of the error variance derived from a two-way analysis of variance (ANOVA) with repeated measures [41]. The SDC at the individual and group levels was calculated with the following formulas [41]: SDC individual = 1.96 × √2 × SEM, and SDC group = SDC individual/√n, where n is the number of subjects in the sample. We estimated the minimally important difference (MID) for each DHP-18 dimension using three distribution-based methods: MID = 0.2 × SD, MID = 0.5 × SD, and MID = 1 × SEM, where SD is the standard deviation of the baseline score. We also estimated Cohen's d effect size (ES) of the change in DHP-18 dimensions for those reporting a small but important change and those reporting no change in the global assessment rating. Cohen's d was calculated with the following formula [42]: d = (mean follow-up score − mean baseline score)/SD baseline, where SD baseline is the standard deviation of the baseline score. An effect size of 0.2 was considered small, 0.5 moderate and 0.8 large [43]. Construct validity We assessed the construct validity of the DHP questionnaire using three approaches. Firstly, we assessed convergent validity using binary correlation analysis (Spearman's r, due to non-normal value distributions) of the DHP-18 and SF-12v2. Before starting the analysis, we set up the following a priori hypotheses: (1) scores on the "psychological distress" dimension of the DHP-18 correlate negatively with scores on the "mental health" dimension of the SF-12v2; (2) scores on the "barriers to activity" dimension of the DHP-18 correlate negatively with the "physical dimension" of the SF-12v2; (3) scores on the "disinhibited eating" dimension of the DHP-18 correlate negatively with the "physical dimension" of the SF-12v2. Secondly, we explored discriminant validity by comparing the correlations among the three dimensions of the DHP-18 scale. Thirdly, we evaluated known-group validity by comparing DHP-18 scores in patients according to sex, education level, obesity, and clinical characteristics such as duration of diabetes and the presence of comorbidities and/or diabetes-related complications, using Student's t-test or ANOVA. We tested the following pre-defined hypotheses: H1: Individuals with a longer duration of illness would have higher DHP-18 scores (poorer quality of life) than those with a shorter illness duration [44]. H2: Individuals with obesity (BMI ≥ 30 kg/m²) would have higher DHP-18 scores than those with a BMI < 30 kg/m². H3: Women would have higher DHP-18 scores than men. H4: Individuals with comorbidities would have higher DHP-18 scores than those without comorbidities. H5: Individuals with lower educational levels would have higher DHP-18 scores than those with higher educational levels. H6: Individuals with diabetes-related complications would have higher DHP-18 values (poorer quality of life) than patients without complications [44]. Linguistic and cultural adaptation Two Ecuadorian medical researchers modified some linguistic expressions of the Spanish version for the United States in items 2, 3, 4, 5, 6, 10, 11, 12, 13, 14, 15 and 18 and in some answer options. Six women and two men participated in the linguistic and cultural review: one person aged 28 years, one aged 49 and one aged 52, with the rest of the participants over 70 years old. They made further changes to items 5, 6, 10 and 12 and proposed the reformulation of some expressions. Most modifications were minor linguistic changes to use terms more commonly used in Ecuador; for example, the expression "staying out" was changed to "going out of the house", the term "edgy" was changed to "nervous", and the term "lose your temper" was changed to "get angry easily".
Other changes were made to improve comprehension by simplifying technical terms; for example, "influenza" was changed to "flu" and "depressed" was changed to "sad". One of the items was flagged as having potential difficulties because participants would be asked to reflect on their sugar levels, and there was very low availability of glucometers in homes. The expression "on the low side" was changed to "having low or very low sugar levels". Similarly, in item 6, the word "monitor" was replaced by the expression "take the sugar test" to improve its understanding. The original author approved the new tool, linguistically and culturally adapted to the context of Quito, Ecuador. Psychometric validation We recruited 146 patients diagnosed with T2DM. Table 1 describes the characteristics of the study population. The mean age of the participants was 56.8 years, 58.2% were women and 80.1% were mestizo. The population studied had relatively low educational qualifications, with 56.8% having primary or no education; 27.6% were not working and 61.4% had incomes of less than $375 per month. Regarding diabetes medication, the majority were on oral antidiabetic therapy (66.2%), 11.7% of patients were treated with insulin, 2.1% with diet only, and the rest (20%) were on combined therapy (oral + insulin). We found that 37.5% were overweight and 29.5% were obese. Seventy-five (51.4%) participants were retested with the DHP-18. There were no differences in sociodemographic or clinical characteristics between participants who were retested and those who were not (Table 1). In the DHP-18 retest, there were two missing values in item 4 and two in item 14; these corresponded to two participants who did not answer one item each and one participant who did not answer two items. Structural validity The CFA, with values of χ2(132) = 162.738, p < 0.001; CFI = 0.990; TLI = 0.989; SRMR = 0.086 and RMSEA = 0.040, indicated a good fit to the data, except for the SRMR. The standardized factorial loads (λ) of each item on its respective factor were all statistically significant (p < 0.001) and ranged from 0.57 to 0.89, from 0.21 to 0.77 and from 0.46 to 0.73 for psychological distress, barriers to activity and disinhibited eating, respectively. The covariances between the three latent variables ranged from 0.54 to 0.90, with psychological distress and disinhibited eating presenting the highest covariance. Two items (questions 1 and 3) showed a λ value below 0.3 using a one-factor model (Fig. 1). When we repeated the analysis excluding these 2 items, we observed a significant improvement in all the indicators, including the SRMR, which was the only one that showed a value slightly higher than recommended (CFI = 0.997; TLI = 0.996; RMSEA = 0.027 (90% confidence interval: 0.000-0.052); SRMR = 0.078). Reliability Overall Cronbach's alpha was 0.77, and dimensional alphas were 0.81, 0.63 and 0.74 for psychological distress, barriers to activity and disinhibited eating, respectively. The average inter-item correlations of the three dimensions were in a suitable range (0.15-0.50). When we repeated the analysis excluding question 1, which had an item-rest correlation value below 0.30 (value: 0.07) and a λ value < 0.3 in the barriers to activity dimension, the dimensional and overall Cronbach's alphas changed to 0.67 and 0.76, respectively. When question 17 (item-rest correlation value slightly higher than 0.7) was excluded from the psychological distress dimension, the dimensional and overall Cronbach's alphas changed to 0.75 and 0.76, respectively.
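The reliability statistics reported in this section can be computed along the following lines. This is a minimal R sketch with hypothetical objects (pd_items holds the baseline items of one dimension; test and retest hold paired dimension scores); note that the study derived the SEM from the two-way ANOVA error variance, whereas the SD × √(1 − ICC) form below is a common approximation:

```r
library(psych)  # Cronbach's alpha
library(irr)    # intraclass correlation

# Internal consistency of one dimension (items as columns of a data frame)
alpha(pd_items)$total$raw_alpha

# Test-retest reliability: two-way random effects, absolute agreement,
# single measure, as specified in the Methods
fit_icc <- icc(cbind(test, retest),
               model = "twoway", type = "agreement", unit = "single")
fit_icc$value

# Measurement error and smallest detectable change
sem_hat   <- sd(test) * sqrt(1 - fit_icc$value)  # SEM approximation
sdc_ind   <- 1.96 * sqrt(2) * sem_hat            # SDC, individual level
sdc_group <- sdc_ind / sqrt(length(test))        # SDC, group level
```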
ICC values for the total of 75 retested participants are shown in Table 2. Among the retest participants, thirty-nine (52%) reported that their condition was unchanged from baseline to retest (ICC values in Table 2) and 36 (48%) reported that their condition had changed from baseline to retest. Fifteen (20%) participants reported that their condition had improved and 21 (28%) reported that their condition had deteriorated. ICC values for participants reporting that their condition stayed the same were 0.69 (95% CI 0.48-0.83), 0.66 (95% CI 0.50-0.79) and 0.66 (95% CI 0.50-0.80) for psychological distress, barriers to activity and disinhibited eating, respectively. Construct validity Our assessment of convergent validity showed an inverse relationship between DHP-18 dimensions and SF-12v2 dimensions, and the results verified two of the three a priori hypotheses, with correlation values between 0.4 and 0.7 (Table 3). For discriminant validity, correlations between the DHP-18 dimensions were 0.4 or more, ranging from 0.40 to 0.74. The highest correlation was between psychological distress and disinhibited eating (r = 0.74), followed by the correlation between psychological distress and barriers to activity (r = 0.45); the lowest was the correlation between barriers to activity and disinhibited eating (r = 0.40). With regard to known-group validity, our results showed the expected tendency in three (H2, H3 and H6) of the 6 initial hypotheses. Compared to individuals with a BMI < 30 kg/m², those with a BMI ≥ 30 kg/m² (H2) showed higher values for each dimension, although only those associated with disinhibited eating were statistically significant. For H3 and H6, the expected tendency of scores for each dimension was obtained, with higher scores in women than in men and in patients with diabetes-related complications than in those without, but there were no statistically significant differences (Table 4). Regarding hypotheses H1, H4 and H5, score patterns were different from those expected. Individuals with a longer duration of illness (H1) had lower scores, reflecting better quality of life, although the differences were not statistically significant. Similarly, regarding educational level (H5), scores did not show a clear tendency, with the exception of lower scores for the barriers to activity dimension with increasing educational level. Finally, there were no differences by presence of comorbidities (H4), but we found differences between patients with or without specific comorbidities such as hypertension and depression. Having hypertension was associated with better evaluation of two dimensions (psychological distress and disinhibited eating), while depression was associated with worse evaluation of two dimensions (barriers to activity and disinhibited eating) (Table 4). Discussion In the present study, we linguistically and culturally adapted the DHP-18 and investigated its psychometric properties in people resident in Quito, Ecuador. Satisfactory psychometric properties were observed in a substantial number of aspects. The factor structure was adequate, but two items belonging to the barriers to activity dimension loaded below the recommended value. Although reliability parameters were adequate for the psychological distress and disinhibited eating dimensions, barriers to activity presented lower internal consistency, and future analyses should verify the applicability and cultural equivalence of some of the items of this dimension in Ecuador.
Except for the barriers to activity dimension, good internal consistency was found. The internal consistency of the barriers to activity dimension contrasts with another study [21] and may be related to the different patient populations investigated [21,22,32], since some studies included people with both type 1 and type 2 diabetes. Based on a more detailed analysis of the total item statistics, we observed that the elimination of items 1 and 17, with the lowest and highest item-rest correlation values, did not produce significant increases in overall or dimensional consistency, as observed in another study [23]. The test-retest analysis showed substantial reliability values in accordance with the recommendations of the literature [36], and the sample size used was within the recommended range for psychometric validation studies, which can be considered a strength of our study [49]. Regarding convergent validity, a strong correlation was shown between the psychological distress dimension of the DHP-18 and the mental health dimension of the SF-12v2, and between the barriers to activity dimension of the DHP-18 and the role physical dimension of the SF-12v2, corroborating two of the predefined hypotheses. Similar results have been observed in previous studies [21,23]. However, the disinhibited eating dimension was related to the role emotional dimension and not to the role physical dimension, as had been hypothesized based on other studies [21]. Discriminant validity showed adequate correlations between the 3 dimensions, higher than those indicated in the literature. These results differ from other studies that showed an overall low correlation between the dimensions of the DHP-18 [20,23]. Regarding known-group validity, our results showed the expected trend in three of the 6 initial hypotheses. Regarding the hypothesis related to educational level, barriers to activity scores decreased with increasing educational level. Regarding comorbidities, there were also significant differences for specific comorbidities such as hypertension and depression. These results are corroborated by other studies, in which the presence of hypertension resulted in a significantly lower score on the disinhibited eating dimension [50]. In the case of disease duration, and despite the fact that the differences were not significant, we did see that people with a longer-lasting disease reported a better quality of life. One possible explanation is that the longer the disease lasts, the more likely the patient is to have adapted to the care requirements, including behaviour modification [51,52]. The CFA indicated an adequate fit to the original three-factor model, with the exception of the SRMR indicator. The barriers to activity dimension showed the lowest standardized factorial loads, while those of the psychological distress and disinhibited eating dimensions were adequate. Using a one-factor model, two of the 18 items, both from the barriers to activity dimension, loaded at or below the recommended value of 0.3: item 3 loaded at 0.3 and item 1 below 0.3.
Regarding item 1, the problem may be due to a lack of understanding, perhaps not at the linguistic level but at the conceptual level, since in the linguistic comprehension and cultural adaptation interviews it was observed that the question was simple, short and easily understood, but people were sometimes unsure whether food "controlling one's life" referred to the need to observe and take care of one's diet, or to having one's life structured and organized around food, such as the timing of meals, adjusting physical exercise to the amount of food, etc. Perhaps a clarification with examples could be added to overcome this issue in future uses of the questionnaire in Ecuador. Item 3, about being "tied to meal times", was also flagged as potentially problematic in the initial round of reviews by the 2 medical researchers who evaluated linguistic understanding and cultural adaptation. It should be added that we carried out a new confirmatory factor analysis eliminating these 2 items and observed a significant improvement in all the indicators, including the SRMR, which was the only one that showed a value slightly higher than recommended. This study has some limitations in addition to the factors discussed above. Although the DHP-18 can be used with people with either type 1 or type 2 diabetes, the psychometric testing was not performed in patients with type 1 diabetes, limiting the applicability of the results to patients with type 2 diabetes. In addition, an important factor to take into account is the context in which the study was carried out: in a pandemic it is difficult to assess the changes produced, and factors external to the disease may influence the results, especially those concerning repeatability and concordance, because rapid changes in context can affect the quality of life of patients. Another limitation is that SF-12v2 summary scores for physical and mental health can be misleading if proprietary scores are used, as a low physical health summary score tends to inflate the mental health summary score and vice versa; this must be taken into account when interpreting the results [53,54]. Despite this, the results are significant and similar to those obtained in other studies. Conclusions The strength of this study lies in the fact that this is the first adaptation and validation of a questionnaire to assess quality of life in diabetic patients in Ecuador. Hence, it provides a practical tool to evaluate aspects such as self-control of food intake; limitations, barriers and anxiety related to daily activities; and feelings, emotions, mood and irritability in people with diabetes. The study adds to the evidence for the DHP-18, showing that it is a short, acceptable, valid and reliable instrument to measure the impact of living with diabetes from a patient perspective. However, future analyses should verify the applicability and cultural equivalence of some of the items of the barriers to activity dimension in Ecuador. Using the DHP-18 enables clinicians to conduct appropriate educational or therapeutic interventions to alleviate or address dysfunctional life outcomes for people living with diabetes.
2021-07-31T13:52:22.405Z
2021-07-31T00:00:00.000
{ "year": 2021, "sha1": "1f7d0dd7c0fc2149b400e2f348384e0b00db6bb1", "oa_license": "CCBY", "oa_url": "https://hqlo.biomedcentral.com/track/pdf/10.1186/s12955-021-01818-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1f7d0dd7c0fc2149b400e2f348384e0b00db6bb1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235707019
pes2o/s2orc
v3-fos-license
Weighted Gene Coexpression Network Analysis to Construct a Competitive Endogenous RNA Network in Chromophobe Renal Cell Carcinoma Aim This study is aimed at constructing the competing endogenous RNA (ceRNA) network in chromophobe renal cell carcinoma (ChRCC). Methods Clinical and RNA sequence profiles of patients with ChRCC, including messenger RNAs (mRNAs), microRNAs (miRNAs), and long noncoding RNAs (lncRNAs), were obtained from The Cancer Genome Atlas (TCGA) database. The "edgeR" and "clusterProfiler" packages were utilized to obtain the expression matrices of differential RNAs (DERNAs) and to conduct gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses. Weighted gene coexpression network analysis (WGCNA) was performed to screen the highly related RNAs, and the miRcode, StarBase, miRTarBase, miRDB, and TargetScan datasets were used to predict the connections between them. Univariate and multivariate Cox proportional hazards regressions were performed in turn to elucidate prognosis-related mRNAs in order to construct the ceRNA regulatory network. Results A total of 1628 DElncRNAs, 104 DEmiRNAs, and 2619 DEmRNAs were identified. WGCNA showed significant correlation in 1534 DElncRNAs, 98 DEmiRNAs, and 2543 DEmRNAs, which were related to ChRCC. Fourteen DEmiRNAs, 113 DElncRNAs, and 43 DEmRNAs were screened. Nine mRNAs (ALPL, ARHGAP29, CADM2, KIT, KLRD1, MYBL1, PSD3, SFRP1, and SLC7A11) significantly contributed to the overall survival (OS) of patients with ChRCC (P < 0.05). Furthermore, two mRNAs (CADM2 and SFRP1) appeared to be independent risk factors for ChRCC. Conclusion The findings revealed the molecular mechanism of ChRCC and potential therapeutic targets for the disease. Introduction As one of the three major renal cell carcinoma histological subtypes, chromophobe renal cell carcinoma (ChRCC) accounts for 4%-5% of renal cancer cases [1]. The average age at diagnosis of ChRCC is 58 years, and most patients are male [2]. Most patients with ChRCC have good prognoses, with 5-year survival rates of 78%-100%. However, metastases still occur in about 6%-7% of patients and usually affect the liver or lungs [3]. The tumor classification schemes of Fuhrman et al. and Paner et al. have been proposed for use in grading ChRCC over the past decades [4,5]. However, considering the ambiguity of the grading criteria and their limited applicability to the nuclear characteristics of ChRCC, their prognostic value appears to have been overestimated [6,7]. In order to better standardize treatment and improve patient prognosis, it is critical to elucidate highly specific biomarkers and effective therapeutic targets. In 2011, Salmena et al. described the competing endogenous RNA (ceRNA) hypothesis, which reexplored the regulatory function of long noncoding RNAs and the potential network between messenger RNAs (mRNAs), microRNAs (miRNAs), and long noncoding RNAs (lncRNAs) [8]. As a key element in the ceRNA network, miRNAs can simultaneously be competitively antagonized by lncRNAs, mRNAs, and other RNAs through shared microRNA response elements (MREs). Overexpressed MRE-containing transcripts (so-called "RNA sponges") can affect expression by absorbing multiple miRNAs connected to mRNAs [9][10][11]. This molecular internal regulation mechanism plays an important role in the occurrence and development of multiple cancers [12].
The Cancer Genome Atlas (TCGA) database, established by the National Cancer Institute and the National Human Genome Research Institute, has collected numerous genomic, epigenomic, transcriptomic, and proteomic data for 33 cancer types [13,14], facilitating exploration of the ceRNA network in ChRCC and the identification of prognosis-related biomarkers. Methods All clinical and RNA sequence profile data of patients enrolled in the TCGA database before May 2020, including the mRNA, miRNA, and lncRNA matrices, were downloaded and extracted from the dataset (https://portal.gdc.cancer.gov/). Inclusion criteria stipulated that the clinical data of every sample should, at least, include the patient's survival status and survival time. The R software (version 3.6.0) was used for all statistical analyses. As a public database was used, additional approval from an ethics committee was not required. The "edgeR" package of R was used to elucidate and compare the DElncRNAs, DEmiRNAs, and DEmRNAs of normal and cancer samples. |Log2FC| > 2 and FDR < 0.05 were considered statistically significant. We performed gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses using the "clusterProfiler" package (with P < 0.05 as significant) to construct the pathway-gene and pathway-pathway networks [15]. Finally, univariate and multivariate Cox proportional hazards regressions were performed in turn using the "survival" package of R to elucidate the most significant independent risk factor mRNAs associated with the OS of patients with ChRCC. Sample scores were compared to the median risk score and divided into high-risk and low-risk groups. ROC curves and C-indices were used to evaluate the stability and reliability of the mRNA-based prognostic model. The detailed flow chart is presented in Figure 1. The miRcode, StarBase, miRTarBase, miRDB, and TargetScan datasets were used to predict the lncRNA-miRNA and miRNA-mRNA connections for the competing endogenous network, and the Cytoscape software (version 3.6.1) was used to visualize the ceRNA network. Kaplan-Meier curves were used to analyze the reliability with which each RNA in the ceRNA network was able to predict the patient's OS (with P < 0.05 indicating significant reliability). Results The lncRNA, miRNA, and mRNA expression matrices of the 89 patients are shown in Figures 2(a)-2(c). GO analysis showed that the top five functions of the 2619 DEmRNAs focused on organic anion transport, regulation of membrane potential, regulation of ion transmembrane transport, modulation of chemical synaptic transmission, and regulation of trans-synaptic signaling (Figure 3(a)). Meanwhile, the top five KEGG pathways of these DEmRNAs were enriched in neuroactive ligand-receptor interaction, cAMP signaling pathway, complement and coagulation cascades, retinol metabolism, and chemical carcinogenesis (Figure 3(b)). Insulin secretion and the connections between pathways were presented in the pathway-pathway network (Figure 3(c)). In the pathway-gene network, multiple RNAs were related to five pathways: complement and coagulation cascades, metabolism of xenobiotics by cytochrome P450, neuroactive ligand-receptor interaction, retinol metabolism, and steroid hormone biosynthesis (Figure 3(d)). In the WGCNA, the power of the soft threshold of the lncRNA-miRNA matrix was 10 (Figure 4(a)), and that of the miRNA-mRNA matrix was 14 (Figure 4(b)), both of which achieved the best fit and consistency of the scale-free R-squared value.
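The scale-free criterion used for this soft-threshold choice can be reproduced outside of R. The following Python sketch is a simplified stand-in for WGCNA's soft-threshold selection (it is not the authors' code; the function name, histogram binning scheme, and R-squared cutoff are illustrative assumptions):

```python
import numpy as np

def scale_free_fit(expr, powers=range(1, 21), n_bins=10):
    """Scale-free topology fit index (R^2 of log10 p(k) vs. log10 k) for
    candidate soft-threshold powers. expr: samples x genes matrix."""
    corr = np.abs(np.corrcoef(expr, rowvar=False))   # |gene-gene correlation|
    fit = {}
    for beta in powers:
        k = (corr ** beta).sum(axis=0) - 1.0         # connectivity (minus self-edge)
        hist, edges = np.histogram(k, bins=n_bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        keep = (hist > 0) & (centers > 0)
        log_k = np.log10(centers[keep])
        log_p = np.log10(hist[keep] / hist.sum())
        slope, intercept = np.polyfit(log_k, log_p, 1)
        resid = log_p - (slope * log_k + intercept)
        fit[beta] = 1.0 - (resid ** 2).sum() / ((log_p - log_p.mean()) ** 2).sum()
    return fit
```

In such a scheme, the chosen power is typically the smallest one whose fit index exceeds a preset cutoff (e.g., 0.85), which is how powers such as 10 and 14 above would be selected.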
After calculating their adjacency and connectivity, these lncRNAs-miRNAs were classified into 10 modules (Figure 4(c)), and the miRNAs-mRNAs were classified into 11 modules (Figure 4(d)). Their topological overlap matrix heatmaps are presented in Figures 4(e) and 4(f). The red, yellow, brown, and grey modules of the lncRNAs-miRNAs were found to have significant correlation (Figure 5(a)), and greater connections were also observed in the green, turquoise, and grey modules of the miRNAs-mRNAs (Figure 5(b)). Modules in these two groups included a total of 1534 DElncRNAs, 98 DEmiRNAs, and 2543 DEmRNAs, which were also more closely related to ChRCC than the others (Figures 5(c) and 5(d)). Nine mRNAs (ALPL, ARHGAP29, CADM2, KIT, KLRD1, MYBL1, PSD3, SFRP1, and SLC7A11) were identified as prognosis-related genes when a univariate Cox analysis was conducted on the 43 mRNAs (P < 0.05). Moreover, the results of the multivariate Cox proportional hazards regression indicated that two of the nine mRNAs (CADM2 and SFRP1) were independent risk factors for ChRCC (Figure 6(a)). The C-index of this model was 0.91, and the 3- and 5-year AUCs (area under the receiver operating characteristic curve) were 0.996 and 0.989 (Figure 6(b)), which proved the stability and reliability of the model. Finally, six miRNAs (3/3, up/down) corresponded to 79 lncRNAs (31/48, up/down) and were associated with these nine mRNAs (Figure 7(a)). Additionally, Kaplan-Meier analyses for the ceRNA members showed that low expression of KLRD1 and high expression of LINC00520 significantly contributed to worse OS for patients with ChRCC (P < 0.05) (Figures 7(b) and 7(c)). Meanwhile, the low-risk group also showed obvious superiority over the high-risk group, despite its P value being slightly greater than 0.05 (P = 0.06016) (Figure 7(d)). Discussion With the progress of molecular biology, the function of the noncoding transcriptome has been extensively explored. Multi-RNA competition regulatory networks appear to play indispensable roles in the biological processes and courses of cancer [16,17]. Several studies have explored and verified ceRNA networks in the past. Wang et al. included 407 normal and 151 acute myeloid leukemia (AML) samples from the Genotype-Tissue Expression (GTEx) (https://commonfund.nih.gov/GTEx/) and TCGA datasets in their study. They found that the ceRNA network in AML involved 108 lncRNAs, 10 miRNAs, and 8 mRNAs, which appeared to influence prognosis and cancer progression [18]. Meanwhile, Yao et al. also established a ceRNA network from the TCGA database comprising 52 lncRNAs, 17 miRNAs, and 79 mRNAs in the breast cancer RNA matrix, in which five lncRNAs were found to significantly affect patients' OS. Furthermore, the results of GO and KEGG analyses of these mRNAs were also related to the biological characteristics of tumors [19]. WGCNA is a sensitive bioinformatic method that is especially suitable for large, high-dimensional data, as well as for genes with low abundance or small fold changes. It is able to cluster highly related genes from microarray samples into different color modules and explore the relationship between the genes and cancer traits [20]. WGCNA has already been used in various oncological studies to explore hub genes and the regulatory relationships between them [21][22][23]. In our study, we performed WGCNA to select highly related module genes, which helped us elucidate the more meaningful RNAs for further prediction.
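The prognostic screening itself (univariate filtering, a joint multivariate Cox model, and a median risk-score split, as described in the Methods) can be sketched in Python with the lifelines package. The data layout and column names below are hypothetical, and this is an illustration rather than the authors' R "survival" workflow:

```python
import pandas as pd
from lifelines import CoxPHFitter

def cox_screen(df: pd.DataFrame, genes, time_col="os_time",
               event_col="os_event", alpha=0.05):
    """Univariate Cox filter followed by a joint multivariate model
    and a median split of the resulting risk score."""
    cph, passed = CoxPHFitter(), []
    for g in genes:
        cph.fit(df[[time_col, event_col, g]],
                duration_col=time_col, event_col=event_col)
        if cph.summary.loc[g, "p"] < alpha:          # prognosis-related gene
            passed.append(g)
    cph.fit(df[[time_col, event_col] + passed],
            duration_col=time_col, event_col=event_col)
    risk = cph.predict_partial_hazard(df)            # per-sample risk score
    return passed, cph, risk > risk.median()         # True = high-risk group
```

The fitted joint model also exposes a concordance index (cph.concordance_index_), the statistic reported above as the C-index.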
Importantly, prediction in multiple datasets allowed us to rapidly lock down the shared high-value genes, similar to previous studies [24][25][26][27]. Another advantage of our study was the application of univariate and multivariate Cox proportional hazards regressions to the selected target mRNAs, from which we obtained a reliable and stable prognostic model and identified important genetic biomarkers for ChRCC within the ceRNA network. The excellent C-index and 3- and 5-year survival AUCs further proved the superiority of our model. The Kaplan-Meier curves showed that low-risk patients would achieve better long-term OS. Among the nine mRNAs, CADM2, a member of the cell adhesion molecule gene family, has been reported to be underexpressed in various cancers, which might contribute to the progression of prostate cancer, ovarian cancer, lymphoma, melanoma, and clear cell renal cell carcinoma (cRCC) [28][29][30][31][32]. CADM2 is believed to prevent tumor progression, invasion, and metastasis by maintaining cell polarity and adhesion [32]. Tyrosine protein kinase (KIT) is overexpressed in various cancers [33,34], especially in ChRCC and oncocytoma. Huo et al. reported that KIT was a more sensitive marker for ChRCC and oncocytoma than for other renal cancers, and hence, it would be useful in precise tumor classification and targeted therapy [35,36]. In the past, SFRP1 has been considered to be a tumor suppressor gene and possibly antagonistic to the Wnt signaling pathway [37]. It has been found that increased methylation levels in the SFRP1 promoter region might lead to SFRP1 silencing in cRCC [38,39]. Meanwhile, low SLC7A11 expression was found to be an important target in the p53 tumor suppression pathway, which is closely related to cell-cycle arrest, apoptosis, and senescence. As the main component of the cystine/glutamate antiporter, underexpressed SLC7A11 could inhibit cellular uptake of cystine and eventually lead to increased cell sensitivity to ferroptosis [40]. Additionally, upregulation of ARHGAP29 might be related to metastasis in gastric cancer [41]. ALPL is primarily related to hypophosphatasia [42]. Rao et al. found that high expression of ALPL led to poor survival outcomes for patients with prostate cancer [43]. However, another study proposed that ALPL could inhibit the motility and aggressiveness of serous ovarian cancer cells [44]. High expression of KLRD1 was reported to inhibit the function of natural killer cells and cytokine-induced killer cells [45,46]. PSD3 is considered to be a candidate metastasis suppressor gene, and its low expression has been observed to be associated with poor prognosis in ovarian cancer and metastasis in breast cancer [47,48]. Moreover, MYBL1 is highly expressed in adenoid cystic carcinoma and is often accompanied by genomic rearrangements [49]. These previous findings lend confidence to the hypothesis that the ceRNA network plays an important role in the occurrence and development of cancers. Moreover, to our knowledge, this is the first report regarding the role of these mRNAs in ChRCC, in which KLRD1 was found by Kaplan-Meier analysis (P = 2.344e-2) to significantly affect patients' OS. Previous studies involving cRCC have reported the importance of the six miRNAs (hsa-mir-222, hsa-mir-204, hsa-mir-206, hsa-mir-183, hsa-mir-372, and hsa-mir-221) in the ceRNA network.
In particular, hsa-mir-206, hsa-mir-204, and hsa-mir-372 were found to suppress cancer through corresponding biological functions [50][51][52], and hsa-mir-183 was considered to be a potential oncogene [53]. Kaplan-Meier analysis also showed that high expression of LINC00520 had an effect on OS. Chen et al., in their study based on the cBioPortal dataset, also emphasized its importance in cRCC [54]. However, more studies are needed to fully explore the biological functions of the lncRNAs in ChRCC. In this study, we constructed a ceRNA network including 79 lncRNAs, 6 miRNAs, and 9 mRNAs. Their possible competitive and synergistic biological functions might jointly regulate various processes in ChRCC, and, hence, they may provide new therapeutic targets and a new perspective for ChRCC genetic biology studies. However, there were some limitations to our study. First, the mRNA prognostic model has not been externally verified. Second, we lacked in vivo and in vitro experiments to verify our results. Conclusions We established the ceRNA network in ChRCC, which included 79 lncRNAs, 6 miRNAs, and 9 mRNAs. Among them, three mRNAs (CADM2, SFRP1, and KLRD1) and one lncRNA (LINC00520) showed promise as potential biomarkers for ChRCC. Our results offer new insights into the diagnosis and treatment of ChRCC and demonstrate the merit of further genetic biology research into ChRCC. Data Availability The dataset supporting the conclusions of this study is available in The Cancer Genome Atlas (TCGA) database. Conflicts of Interest The authors have no conflicts of interest to declare. Authors' Contributions Yong-Bo Chen, Liang Gao, Jin-Dong Zhang, Liang-You Tang, and Ying-Wen Liu designed the study. Yong-Bo Chen, Liang Gao, Jiang Guo, and Liang-You Tang selected and analyzed the data. Yong-Bo Chen, Ping-Hong You, Liang-You Tang, Liang Gao, and Ying-Wen Liu were involved in statistical analysis. Yong-Bo Chen, Jin-Dong Zhang, Jiang Guo, Liang-You Tang, Ping-Hong You, and Ying-Wen Liu drafted and revised the manuscript. All authors have reviewed and approved the final manuscript. Yong-Bo Chen and Liang Gao are co-first authors (these authors contributed equally to this work).
SEA WAVE MODELLING FOR MOTION CONTROL APPLICATIONS Abstract: The modelling of the sea environment is important in designing an effective motion control system for any marine vehicle. Inadequate representation of the components of a typical random sea might lead to poor performance of the control system. A multiple-output system, such as one having components of wave elevation and slope, facilitates designing the control system taking into account the different degrees of freedom. The method of modelling the sea environment presented here provides the basis for the design of motion control systems for multiple-degree-of-freedom cases, which give rise to excitation forces and moments acting on the marine vehicle. The method used here models the sea environment using Gaussian white noise and a shaping filter to generate a multiple-output form of the random sea state. In the first step, a given standard wave spectrum is approximated using a rational polynomial; the coefficients of the polynomial are obtained by the least squares fitting method to best match the spectrum. The established rational polynomial is then decomposed to get the transfer function of the shaping filter. The wave slope spectrum is similarly approximated using the same rational polynomial. The transfer functions of the two components of amplitude and slope, representing the filters, are combined to generate a state space form. Using the white noise as input, the state space form obtains the wave elevation and slope as outputs. By performing spectral analysis using the Welch method (1967), the quality of the obtained output is checked against the targeted spectrum. The application of the simulated wave slope spectrum in a closed-loop state space model is demonstrated as applied to the roll stabilization characteristics of a stationary ship using a passive tank. Introduction Modelling the stochastic nature of a seaway is important in designing and analyzing an effective motion control system for marine applications. The efficacy of any motion control system basically depends on its performance in the case of non-linearities and irregular changes in the environmental conditions. In ship hydrodynamics, the irregular environmental conditions are expressed in terms of a sea state spectrum which describes the nature of a particular sea. Random seas are essentially composed of an infinite sum of sinusoids at various frequencies and magnitudes, combined with a uniform probability distribution and random phase. It is necessary to have a meaningful mathematical description of this apparently random nature. Statistically, the sea surface follows the well-known Gaussian or normal distribution. In probability theory, the central limit theorem justifies the use of the normal distribution in many applications. By the central limit theorem, a random variable formed by the sum of a large number of independent random variables is characterized by an approximately normal distribution (Rice, 2007). Different methods are used to reproduce the characteristics of the sea, and many of them, which are used for numerical simulations, are not applicable for control system design strategy. In this context, the method of shaping filter has been cited as a valuable tool for modelling the stochastic nature of linear dynamical systems (Martin and David, 2007).
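This filtered-white-noise principle is easy to demonstrate numerically: white noise driven through a linear time-invariant filter yields a colored output whose spectrum is shaped by the filter. The Python sketch below illustrates the idea with scipy; the second-order transfer function and its coefficients are placeholders for illustration, not the filter fitted later in this paper:

```python
import numpy as np
from scipy import signal

# Placeholder second-order shaping filter H(s) = b1*s / (s^2 + a1*s + a2);
# these coefficients are illustrative, not the fitted values derived later.
b1, a1, a2 = 1.0, 0.4, 1.2
shaping_filter = signal.TransferFunction([b1, 0.0], [1.0, a1, a2])

rng = np.random.default_rng(0)
dt = 0.05                                          # time step (s)
t = np.arange(0.0, 600.0, dt)
white = rng.standard_normal(t.size) / np.sqrt(dt)  # approximate unit white noise

# Driving the filter with white noise yields a colored, wave-like record
_, wave, _ = signal.lsim(shaping_filter, U=white, T=t)
```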
The present paper describes the method of shaping filter for modelling a given sea state condition for motion control applications. The method of shaping filter is based on random process theory, established by filtered white noise. The output of a linear system having appropriate spectral density with an ergodic Gaussian white noise input is also an ergodic Gaussian random process possessing the required spectral density. When the Gaussian white noise is sent through the shaping filter, it shapes the white noise signal into the desired colored noise output, i.e., desired random waves of specified frequency. This is the theoretical basis of using a shaping filter to generate random waves. Methodology The methodology used to generate the components of the sea state spectrum using the method of shaping filter through white noise filtering is depicted in Fig. 1. The method has three parts, namely: approximation of the standard wave spectrum, parameter estimation, and finally, simulation of the desired sea state. The standard wave spectrum is approximated for the desired sea state using a rational filter, which consists of polynomials of finite degree in the numerator and denominator. The spectrum to be achieved is known, and therefore the degree of the polynomials can be selected to give the estimated shape of the desired spectrum. Secondly, the coefficients of the rational filter are determined by least squares fitting to best match the targeted spectrum, named the rational spectrum. The established rational spectrum is then decomposed to get the transfer function of the shaping filter. This can be carried out through spectral factorization (Kallstrom, 1981). The power spectral density function can be related, for a unit white noise input, by the equation:

$$S(\omega) = Z(j\omega)\,Z(-j\omega) = \left|Z(j\omega)\right|^{2} \quad (1)$$

where Z is the transfer function of the linear system. By finding Z from Eq. (1), the spectral density function can be modelled as a linear dynamical system with white noise as the input signal. For an even rational polynomial, the power spectrum is decomposed into the product Z(jω)Z(−jω), and the transfer function of the shaping filter for the power spectrum is simply Z(jω) (Xu et al., 2011). The rational polynomial is also used to approximate the wave slope spectrum. The filter transfer function of the standard wave spectrum with its slope component is represented in a state space form to obtain a single-input, two-output system. By inputting a stationary Gaussian white noise signal to the state space representation, the desired sea can be realized in the form of wave elevation and wave slope. Sea Wave Modelling The sea state spectrum used for the present study takes the form of the ITTC 2-parameter wave spectrum, which permits period and wave height to be assigned separately, defined by the following (Bhattacharya, 1978):

$$S(\omega) = \frac{173\,H_s^{2}}{T_1^{4}\,\omega^{5}}\exp\left(-\frac{691}{T_1^{4}\,\omega^{4}}\right) \quad (2)$$

where S(ω) is the wave amplitude spectral ordinate, H_s is the significant wave height in m, ω is the wave frequency in rad/sec, and T_1 is the period corresponding to the average frequency of the component waves. The definition of the sea state considered and the typical values associated with it are given in Table 1. Rational spectrum approximation The rational polynomial of Eq. (3), with real coefficients a_1, a_2, a_3, and b_2, was selected to approximate the standard ITTC spectrum (Kallstrom, 1981). We now derive the solution for the coefficients in the rational polynomial. Eq. (3) can be written in matrix form, and the ITTC spectrum can be approximated using this rational function. The parameters of the rational spectrum, Eq.
(3), can be best approximated by establishing N algebraic equations via the least squares method. In matrix notation, the equation for the polynomial fit is written in terms of a coefficient vector A, whose elements A(1)-A(4) are functions of the coefficients a_1, a_2, a_3, and b_2. The parameters of the rational spectrum are obtained by solving the matrix equation by the least squares method:

$$\mathbf{A} = \left(\mathbf{X}^{T}\mathbf{X}\right)^{-1}\mathbf{X}^{T}\mathbf{H} \quad (11)$$

In order to obtain the coefficient values of the rational polynomial a_1, a_2, a_3, and b_2, the expression for A(3) is rewritten as Eq. (12), a fourth-degree equation (Eq. (13)). The coefficient value for a_1 is obtained by solving Eq. (13), and from the expression for A(2), the coefficient value for a_2 is also obtained. The least squares fitted values are substituted in Eq. (3) to get the approximated rational spectrum of the standard ITTC spectrum, Eq. (2). The comparisons of the sea spectrum with the obtained rational spectrum are shown in Fig. 2. Shaping filter The spectral factorization of the rational function in Eq. (3) has the form in the S-domain given by Eq. (14) (Kallstrom, 1981). Oscillatory motion, such as roll motion, is more sensitive to wave slope than to wave height. The same formulation can be used to define the wave slope realization by approximating the wave slope spectrum. The wave slope spectrum can be represented with respect to the rational filter as Eq. (16). The slope spectrum obtained in Eq. (16) can be approximated by the filter of Eq. (17), where c is a constant. The value of c can be obtained in an analogous manner by approximating the slope spectrum in Eq. (16) with the slope filter, i.e., Eq. (17). Figure 3 shows the wave slope spectrum approximation for the given sea state with the filter of Eq. (17). The comparison shows a good match in the peak region of the slope spectrum. Thus, the value of c was estimated as -0.061 for generating the wave slope spectrum. By combining the filter Eqs. (15) and (17), a state space representation for a single-input, multiple-output system is achieved, which can be used as an input for control system design, where wave height (h) and wave slope (s) are the outputs of this system. The wave height and wave slope have been realized by passing the white noise signal through the shaping filter for the standard deviation corresponding to the given sea state. Spectral Analysis A spectral analysis of the output has been performed using the Welch method (1967) to test the filter output against the desired sea state spectrum. The Welch method estimates the spectrum of a given signal by grouping the data into overlapping segments, computing a modified periodogram of each segment, and then averaging the spectrum over all the sets. The Fast Fourier Transform (FFT) is used to estimate the spectrum of each set, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodograms. By permitting data sample overlap, the averaging of modified periodograms tends to decrease the variance of the estimated power spectral density.
In the method, for a given stochastic process, K segments are assumed which cover the entire record of the process; then, for each segment of length L, a modified periodogram is calculated. A data window W(j), j = 0, ..., L-1, is selected and applied to form the windowed segment sequences, and the finite Fourier transforms A_1(n), ..., A_K(n) of these sequences are taken. The spectral estimate is the average of the resulting modified periodograms. The estimator (Welch, 1967) is given as follows:

$$\hat{P}(f_n) = \frac{1}{K}\sum_{k=1}^{K} I_k(f_n), \qquad I_k(f_n) = \frac{L}{U}\left|A_k(n)\right|^{2}, \qquad U = \frac{1}{L}\sum_{j=0}^{L-1} W^{2}(j)$$

A periodic Hamming window is taken as the window function. The method reduces computations and gives better control over the variance characteristics of the estimated power spectral density. A close correlation is achieved and the spectral analysis confirms the sea state spectra. The estimated spectrum is shown in Fig. 5. State Space Modelling of Passive Tank Stabilization The generated wave slope components are used to demonstrate the effect of passive tank stabilization in irregular waves. Passive tanks are more effective in regular waves than in random waves. Phan et al. (2008) have validated the performance of an anti-roll tank in irregular wave conditions. A state space model of the ship system with a passive tank is presented here to demonstrate the effectiveness of the method and for the design of such motion control systems. The generated wave slope has been used to perturb the ship from its equilibrium position at zero speed in the beam sea condition. The particulars of the ship and the anti-roll tank are shown in Tables 3 and 4. The geometric representation and the nomenclature used for the tank dimensions follow Gawad et al. (2001), and the formulae to obtain the tank coefficients are given there as well. The ship motion coefficients have been obtained using a standard strip theory program, SEAWAY (Journee, 2001). For modelling the anti-roll tank stabilization, a state space representation is used:

$$\dot{x} = Fx + Gu + \Gamma w \quad (31)$$

where F is the system matrix, G is the input matrix, and Γ is the disturbance distribution matrix. For this case, i.e., passive stabilization, the input u is zero and the stabilization is only due to the movement of water inside the tank. For a beam sea, the wave-induced roll moment is calculated from the wave slope and the quasi-static excitation (Sgobbo and Parsons, 1999). Based on the above state space representation, a computational model is set up using MATLAB and Simulink. The corresponding mathematical expressions were specified through user-defined functions (Fcn blocks) in Simulink. The modelling of the system dynamics along with the wave slope is shown in Fig. 6. The performance of the passive tank in the irregular sea is shown in Fig. 7. The RMS value obtained from the simulation is presented in Table 5.
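As a sketch of this verification step, the Python snippet below checks a simulated elevation record against a target spectrum using scipy's implementation of the Welch method; the sampling rate, segment length, 50% overlap, and the synthetic stand-in record are assumed values, not the ones used in the paper:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 20.0                          # assumed sampling rate of the simulated record
wave = rng.standard_normal(60000)  # stand-in for the simulated wave elevation

# Welch estimate: Hamming-windowed segments with 50% overlap, averaged
f, Pxx = signal.welch(wave, fs=fs, window="hamming",
                      nperseg=1024, noverlap=512)

# Convert the one-sided density from per-Hz to per-(rad/s) before comparing
omega = 2.0 * np.pi * f
S_est = Pxx / (2.0 * np.pi)        # compare S_est(omega) with the target S(omega)
```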
Conclusion The simulation of an irregular sea in the form of a single-input, multiple-output system has been described in this paper. The sea wave simulated by feeding white noise to the shaping filter can be used to perturb the marine vehicle from its equilibrium position for control system design. The method can be extended to approximate the excitation forces and moments for analyzing multiple-degree-of-freedom problems in motion control. The least squares fitting approach for the rational polynomial in this paper can be used for any other sea condition to get the coefficient values that best match the spectrum. Through this approach, close approximations of seas are obtained. The wave slope spectrum is matched in the main region of the spectrum. Roll reduction modelling with a passive tank in a specific ship in irregular sea conditions using this method has been successfully demonstrated. Appreciable roll reduction is reported.
Fig. 2: Rational spectrum approximation for given sea state
Fig. 3: Estimated filter with slope spectrum
Fig. 5: Spectral estimation of wave spectrum for sea state 5
Table 2: Coefficient values for rational spectrum. (The zero points, pole points, and gain in the left half of the S-plane constitute the right-hand factor of Eq. (14); as mentioned earlier, this factor is the transfer function of the shaping filter.)
Table 3: Main particulars of the ship
Table 4: General characteristics of the tank. (The liquid level inside the tank is adjusted to get the highest effect, i.e., at the selected liquid level the natural frequencies of the tank and ship are best matched; the coupled equations of motion of the ship in the presence of a passive tank and of the fluid motion inside the tank are expressed accordingly.)
Table 5: Passive tank simulation report in sea state 5
Study on Acoustic Emission Characteristics of Low-Temperature Asphalt Concrete Cracking Damage In this study, asphalt concrete specimens were subjected to a semicircle bending test at −10 °C to simulate the process of the development of cracks in asphalt concrete at low temperature. The acoustic emission parameters were collected during the test, the variation characteristics of the acoustic emission parameters were analyzed, and the peakedness value was introduced to evaluate the damage of the asphalt concrete. The dynamic evolution of fracture development was analyzed by period with acoustic emission source location. The results indicate that the damage of asphalt mixtures shows an obvious brittle characteristic at low temperature; acoustic emission signals mainly originate from the crack damage caused by tensile stress, and the strength and number of signals can reflect the degree of crack development. Based on the acoustic emission parameters and load curves, the cracking damage of asphalt concrete at low temperature in this study can be divided into three periods: a calm period, a stable development period, and a rapid fracture period. The crack point occurred and propagated upward rapidly in the rapid fracture period. During this period, acoustic emission parameters such as ringing count, acoustic emission energy, and amplitude increased suddenly; furthermore, the peakedness value reached its peak in this period and corresponded well with the low-temperature damage of the asphalt concrete. Acoustic emission source location technology can track the position of crack points and the propagation path of cracks, reflecting the dynamic evolution process of asphalt concrete crack damage at low temperature. Introduction Asphalt concrete is a typical composite material that is widely used in the construction of road engineering in cold regions. Crack damage of asphalt pavement at low temperature is common; when the internal damage of asphalt concrete reaches a certain extent, micro-cracks will occur and expand, resulting in macro-cracks [1], which seriously affect the pavement performance and traffic safety. However, the unpredictability and seriousness of micro-cracks have always been a major problem in the field of highway engineering. Acoustic emission (AE) is a nondestructive testing technology that can detect tiny deformations and damage inside materials by using a high-speed digital AE detector and preamplifier; the signals received by the sensors are converted into and represented by corresponding parameters. In recent years, AE has been widely used in the study of relatively homogeneous brittle materials such as rock, metal, and concrete [2][3][4][5]. The essence of acoustic emission is the transient elastic wave generated by the rapid release of stress when a material is deformed or damaged [6,7]. By analyzing the characteristics of AE signals, the mechanical and physical behaviors of materials can be evaluated to reflect subtle changes inside the materials at the microscopic level, which are difficult to capture with macroscopic physical parameters such as the stress-strain curve. AE source location technology can also determine the location of damage within the material. Figure 1 presents some basic AE parameters such as amplitude, energy, ringing count, duration, etc.
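To make these parameters concrete, the Python sketch below extracts them from a digitized burst using simple fixed-threshold definitions; these toy definitions and the function name are illustrative assumptions, since commercial AE systems apply additional hit-timing parameters when separating events:

```python
import numpy as np

def hit_parameters(waveform, fs, threshold):
    """Toy extraction of basic AE hit parameters from one digitized burst."""
    above = np.abs(waveform) >= threshold
    hits = np.flatnonzero(above)
    if hits.size == 0:
        return None
    rising = np.flatnonzero(np.diff(above.astype(int)) == 1)   # upward crossings
    return {
        "ringing_count": rising.size,                 # threshold crossings
        "amplitude": np.abs(waveform).max(),          # peak level of the burst
        "duration": (hits[-1] - hits[0]) / fs,        # first-to-last crossing time
        "energy": np.sum(waveform[hits[0]:hits[-1] + 1] ** 2) / fs,
    }
```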
However, although the application of AE technology to asphalt concrete started late, much attention has been paid to the characterization of the fracture behavior of asphalt concrete through variation in AE parameters and to research on the AE source location of fracture points. Jiao et al. [8][9][10] carried out compression and splitting tests on permeable asphalt mixtures and obtained the evolution law of each AE parameter. The results showed that the variation characteristics of each parameter had a certain relationship with the macro damage of the asphalt mixture, and thus AE technology has great potential in the damage characterization of asphalt concrete. Qiu et al. [11,12] studied the continuous damage evolution behavior of an asphalt mixture and located the damage position using AE location technology. The results indicate that acoustic emission technology can characterize the fracture behavior of asphalt mixtures. Behzad [13] investigated the self-healing ability of an asphalt mixture by AE; the results showed that more cooling cycles could reduce the self-healing ability of the asphalt mixture. AE can be used to analyze the influence of the resting time between cooling cycles on the fracture energy of materials. AE technology has thus been gradually applied to research on the mechanical fracture behavior of asphalt concrete. However, it is worth mentioning that there are still many problems to be solved, such as describing and explaining the relationship between the crack damage of low-temperature asphalt concrete and the characteristics of the AE signal, as well as the corresponding relationship between the macro crack extension path and the AE source location at low temperature. Based on the 16-channel SEAU3H AE equipment manufactured in China, this paper aims to: (1) Study the variation characteristics and laws of AE parameters during the fracture and damage process of asphalt concrete at −10 °C. (2) Track the crack propagation path using AE source localization technology and study the dynamic variation characteristics of the AE source spatial distribution during the fracture process of asphalt concrete at −10 °C. Materials AC-13 asphalt concrete was utilized, with a penetration grade 70 bitumen at a content of 4.3% and a mineral filler content of 6%; the aggregate and mineral filler were limestone. The target aggregate gradation curve of the asphalt concrete was obtained from the mid-value between the upper and lower limits according to the Standard Test Methods of Bitumen and Bituminous Mixtures of Highway Engineering (JTG E20-2011), as shown in Figure 2. After mixing at 160 °C, the AC-13 was compacted by a gyratory compactor at 140 °C to form a cylindrical specimen with a diameter of 150 mm and a height of 180 mm. First, 15 mm was cut from each of the upper and lower surfaces, and the remaining specimen was cut into three circular slices with a thickness of 50 mm. Then, six semi-circular specimens with a radius of 75 mm and a height of 50 mm were cut through the central axis. Furthermore, a pre-cut notch of 10 mm in length and 2 mm in width was made at the bottom of the semi-circular specimens, which were numbered T1-T6. Figure 3 shows the formed specimens. Test Setup and AE System A UTM-25 manufactured by SANS in Guangdong, China, was utilized in the semicircular bending test; the loading rate was set at 2 mm/min until the end of the test. To ensure the reliability of the test, 6 parallel tests were carried out.
Before the test, the specimens were placed in a temperature chamber at −10 °C for more than 12 h. AE tests were simultaneously performed using a 16-channel SEAU3H AE system manufactured by Soundwel in Beijing, China. The test equipment is shown in Figure 4. Four AE sensors (G8) were arranged on the surface of the specimen as shown in Figure 5; the receiving frequency of the sensors ranged from 18 to 180 kHz. The preamplifier was set to 40 dB and the threshold value to 35 dB, with a sampling frequency of 1 MHz for the tests. AE Source Location AE source location is of great importance in the accurate determination and prediction of damage points. The three-dimensional coordinates of the source points which release elastic waves can be calculated by reasonably arranging the spatial positions of the sensors. AE location technology includes time-difference and region location, and time-difference location is an accurate but complex location method [14][15][16][17][18][19]. The schematic diagram is shown in Figure 6. As shown in Figure 6, an AE source is generated at a point inside the material, and the signal propagates to sensors 1# to 4# in turn. The distance-time relation of signal propagation is as follows:

$$\sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2} = v_p\,(t_i - t_0) \quad (1)$$

where (x_i, y_i, z_i) are the coordinates of the ith sensor, t_i is the signal arrival time at each sensor, and v_p is the propagation velocity inside the asphalt concrete. Furthermore, because the source coordinates and the emission time t_0 form four unknowns, at least four sensors are required to obtain the 3D coordinates of the AE source. Four sensors were applied in this paper to establish Equation (2):

$$\sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2} = v_p\,(t_i - t_0), \qquad i = 1, 2, 3, 4 \quad (2)$$

By solving Equation (2), the three-dimensional coordinates (x, y, z) and the time (t_0) of the AE source signal generation could be obtained.
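In practice, Equation (2) is a small nonlinear system that can be solved numerically. The Python sketch below does this with a least-squares solver; it is an illustrative reconstruction, not the algorithm implemented in the SEAU3H system, and the initial guess is an assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def locate_source(sensors, arrivals, v_p, guess=(0.0, 0.0, 0.0, 0.0)):
    """Solve Eq. (2) for the source position (x, y, z) and emission time t0.
    sensors: (4, 3) sensor coordinates; arrivals: (4,) arrival times t_i."""
    def residuals(p):
        x, y, z, t0 = p
        dist = np.linalg.norm(sensors - np.array([x, y, z]), axis=1)
        return dist - v_p * (np.asarray(arrivals) - t0)  # Eq. (1), one row per sensor
    return least_squares(residuals, guess).x             # (x, y, z, t0)
```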
Results and Discussion AE test data varied between specimens; however, the variation characteristics and laws of the AE parameters were consistent. Therefore, the load data of the representative specimen T2 were selected for discussion and analysis. Deformation Fracture Characteristics The load-displacement curve of the semi-circular bending test is shown in Figure 7. According to the curve variation characteristics, the peak load (Fp) was about 10.2 kN and the peak vertical deformation was about 0.75 mm. The load had a uniform rate of increase and increased slowly with the increase of displacement; when the vertical displacement reached about 0.75 mm, the stress decreased instantly, and the specimen was broken. When the specimen broke, the peak vertical deformation was about 0.75 mm and no obvious bending deformation had occurred. This is an indication that some micro-cracks were scattered inside the specimen before the bending failure. When the load level reached the peak, the cracking point occurred in the tensile zone and diffused rapidly along the weak zone. Micro-cracks permeated each other in a short time, then developed into the compression zone of the section, and finally formed macroscopic fracture cracks showing obvious brittle fracture characteristics. The main reason for this phenomenon is the high temperature sensitivity of the matrix asphalt. From the perspective of fracture mechanics, the mode II fracture toughness K_II = 0, since the specimen was only under vertical compression, and the compression was converted to tension at the pre-cut notch. The equation for the fracture toughness K_IC can be obtained according to the literature [20] from the fracture load F (10.2 kN in this test), the thickness of the specimen B (50 mm in this test), the crack length a (the pre-cut notch length was 10 mm in this test), and the specimen width W, together with the shape factor f(a/W) given in the literature [20]. According to Equations (3) and (4), K_IC is 1.138 MPa·mm^1/2, which means that the specimen will fracture when K_I exceeds 1.138 MPa·mm^1/2 and is safe below this value. Characteristics of AE Parameters Although the specific values of the AE parameters differed between specimens, the variation trends and rules were consistent. Therefore, the AE data of the representative T2 specimen were selected in this paper for discussion and analysis. The variation characteristics of the AE parameters during the semi-circular bending test are shown in Figure 8. Among them, ringing counts, cumulative ringing counts, and duration can reflect the number of AE activities, while energy, cumulative energy, and amplitude represent the strength of AE activities [21][22][23][24]. As shown in Figure 8, according to the variation characteristics of the different parameters, the whole process of asphalt concrete fracture damage at low temperature can be divided into three periods: I, a calm period; II, a stable development period; and III, a rapid fracture period. Calm period: During the initial 0-15 s, the load level was 0-0.7 Fp. The energy and the ringing counts were low; there was no obvious upward trend in the cumulative ringing counts and cumulative energy curves, and only sporadic low-strength signals of low amplitude and short duration were generated in the whole period. The main reason is that the specimen itself had tensile and bending strength at low temperature, and the influence of the external load was limited. The stress started to accumulate at the internal weakness points; there was no obvious damage inside the material, and the elastic deformation generated in this period could be recovered, so the AE activity in this period was weak. Stable development period: During this period, when the load level was 0.7-0.95 Fp and the corresponding time domain was 15-20 s, the ringing counts, energy, and amplitude all increased with the continuous increase of the load level, and several longer-duration signals occurred. The cumulative ringing counts and cumulative energy curves had a minor increase at 15 s and then rose steadily. As the influence of the external load increased, the stress level at the central section of the specimen began to rise, and the resilience modulus increased. The stress accumulated in the previous period was released locally at the internal weaknesses, leading to irreversible deformation and damage in the tensile zone, and internal micro-cracks began to occur. Therefore, AE activities were more active than in the previous period, while the cumulative ringing counts curve had a higher increase than the cumulative energy curve, indicating that the degree of stress release was limited and the micro-cracks only occurred within a local scope. Rapid fracture period: In this final period, the load reached its peak value and the corresponding time domain was 20-23 s.
From 20 s, the signal duration was obviously longer and the occurrences of high-level energy, ringing counts, and amplitudes increased significantly in number; the rates of increase of the cumulative energy and cumulative ringing count curves accelerated. When the load reached the peak, the main cracks occurred and the AE energy, ringing count, amplitude, and duration all increased to their highest level instantaneously; the cumulative energy and cumulative ringing counts curves rose in a straight line. Subsequently, however, all AE parameters fell to an ultra-low level; the cumulative energy and cumulative ringing counts curves no longer rose. This was mainly due to the increase of vertical deformation in this period, causing the internal micro-cracks to extend and interconnect, producing a large number of strongly penetrating AE signals. When the load reached the critical stress, the stress in the tensile zone exceeded the ultimate tensile strength; cracks first occurred at the crack point and rapidly propagated to the compression zone, leading to low-temperature brittle fracture. A large amount of energy was released and AE activities were the most active at this time. It can be seen from Figure 8 that there are obvious relationships between the variation of the AE parameters and the deformation and fracture of the asphalt concrete, and the AE parameters can reflect internal microscopic changes that are difficult to detect with load curves. At the initial part of the rapid fracture period, the concentrated and explosive growth of each parameter can be regarded as a forewarning of damage failure of low-temperature asphalt concrete. Damage Evaluation For unstable signals produced by bearing structure vibration, the peakedness (P), i.e., the kurtosis value, is often adopted in structural health monitoring to evaluate the damage of a bearing structure and its serviceability [25,26]. The value of P reflects the irregularity of a set of data; the greater the value of P, the greater the data irregularity and the more unstable the material interior tends to be; conversely, the smaller the value of P, the more stable the material interior tends to be. In this paper, the P value is introduced to evaluate the structural damage of asphalt concrete during the test. The P value is calculated as follows:

$$P = \frac{\frac{1}{N}\sum_{i=1}^{N}\left(x(i)-\bar{x}\right)^{4}}{\left[\frac{1}{N}\sum_{i=1}^{N}\left(x(i)-\bar{x}\right)^{2}\right]^{2}} \quad (5)$$

where P is the kurtosis value, N is the number of data samples in each group, x(i) is the ringing count in this paper, and x̄ is the group mean. The ringing counts in every two seconds of the test were considered as a set of data, and the P value of each set was calculated. The results of the division of damage periods and the P values of the six specimens are presented in Figure 9. As shown in Figure 9, the specific kurtosis values differed between specimens, while the variation trends and rules were consistent and corresponded to the damage periods. Taking sample T2 as an example, the P value was small in the initial calm period and ranged from 0.5 to 1.5; in the 16-18 s interval of the stable development period, the P value suddenly increased to 2.3, the first peak occurred, and it then decreased to a lower level; the P value then increased to 4.2 in the 20-22 s interval of the rapid fracture period, where the second peak occurred, which was also the maximum during the whole process; the P value then continued to maintain a higher value of 3.3 in the 22-24 s interval; when the specimen was completely fractured, the P value fell to 1.15.
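This grouping-and-scoring procedure can be sketched in a few lines of Python; the fourth-moment kurtosis below is assumed to match the definition of Eq. (5), and the grouping variable is hypothetical:

```python
import numpy as np

def peakedness(x):
    """Fourth-moment kurtosis P of one group of ringing counts (Eq. (5))."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    var = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / var ** 2

# score the test record in 2 s groups: [peakedness(g) for g in two_second_groups]
```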
It can be found from the variation of P that there was a certain correspondence between the value of P and AE activity; the P value was larger when AE activity was more intense and smaller in the other stages. P reached its maximum value in the rapid fracture period, indicating that the data irregularity was greatest at this time, and the asphalt concrete specimen was approaching its ultimate bending strength and was about to fracture. AE Source Location It can be seen from the foregoing that the AE parameters have a good ability to reflect cracking damage in low-temperature asphalt concrete. In order to investigate the accuracy of AE source location technology for low-temperature crack propagation, taking sample T2 as an example, planar projections on the front and vertical sides were performed for all the three-dimensional location points obtained in the test, and the results were compared with the actual crack path. The time nodes of the source location were selected as 10, 15, 20, 22, and 25 s and are represented in Figure 10 by different colors, while the solid lines represent the actual crack propagation path. Calm period: There were few AE events in this period. Only three AE events occurred at the initial stage (0-10 s), all of which were located in the compression zone of the upper center of the section in the frontside direction and were distributed in a line in the vertical direction, which was due to the extrusion deformation of the upper part. At the later stage of 10-15 s, four AE events occurred, one of which was located at the upper part of the section; the other three were distributed at the middle and lower parts of the section in the frontside direction and were evenly distributed in the vertical direction. AE events decreased in the compression zone and gradually spread to the middle and lower parts, indicating that the AE events mainly resulted from the initiation of a small number of internal micro-cracks caused by tensile stress in the specimens. Stable development period: During this period, AE events increased by eight and were mainly scattered in the middle and lower zones, indicating that many internal micro-cracks began to occur in the tensile zone of the section and began to expand in the tensile direction. The scattered distribution of the events in the vertical direction indicates that the internal micro-damage occurred randomly over the vertical section. The growth rate of the number of AE events was increasing; this indicates that the damage process of the specimens accelerated with the increase of displacement. Rapid fracture period: This period lasted only about 3 s, during which 20 AE events occurred; this is more than the sum of the previous periods and accounted for 53% of the total number of events. In this period, the micro-cracks rapidly expanded and connected with each other; the crack point occurred at the pre-cut notch and propagated upward rapidly, resulting in a macroscopic major crack and causing rapid brittle fracture of the specimen. The AE events in this period all gathered around the crack point and the initial path of propagation in the frontside direction; additionally, the aggregation of AE sources at the center of the vertical direction indicates that the crack was generated at the center of the vertical section.
This phenomenon indicates that the occurrence of AE source locations has a congruent relationship with the generation of cracks, and the number and density of AE events can reflect the severity of damage, anticipating the appearance of macroscopic damage regions. As the displacement continued to increase, the whole section of the specimen was in a state of tension. Finally, three AE events occurred, scattered over the upper part of the section, indicating that the crack continued to propagate upward, and the specimen was eventually completely damaged. It can be seen from comparison with the actual cracks that the AE source location points were all gathered around the actual cracks but did not coincide exactly with them. This is due to the time-difference method, which assumes that the propagation of the stress wave is stable in an ideal homogeneous material, whereas asphalt concrete is a heterogeneous composite material with large internal irregularity. As a result, the propagation of the stress wave was not in a stable state but was always attenuating, thus reducing the accuracy of positioning. However, the error between the AE location points and the actual cracks did not exceed 1 cm, which meets engineering requirements, indicating that the positioning results are real and effective and can be used as an engineering basis. Conclusions Based on AC-13 asphalt concrete semicircle bending tests and acoustic emission tests, we analyzed the AE parameters during the cracking process of the specimens. The main conclusions are as follows: (1) The fracture process and load-displacement curve of AC-13 asphalt concrete at low temperature showed obvious brittle fracture characteristics; the specimen fractured at the peak load and the corresponding displacement was about 0.75 mm. (2) The variation of the AE parameters during the fracture process of low-temperature asphalt concrete could be divided into three periods: a calm period, a stable development period, and a rapid fracture period. During the rapid fracture period, the AE energy and ringing count increased gradually and then suddenly; the cumulative energy and cumulative ringing counts reached a mutation point and then rose in a straight line. This variation characteristic can be regarded as a forewarning of macroscopic cracks. The P value reached its maximum in the rapid fracture period and corresponded well with the AE parameters, so it can be used to evaluate the damage stability in the process of specimen cracking. (3) The dynamic evolution of the spatial distribution of AE location points can be used to track the surface path of crack development, reflect the initiation and propagation of micro-cracks, and evaluate the whole process of crack development. It has great potential and prospects in the crack detection of asphalt pavement in cold regions. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data used to support the findings of this study are included within the article. Conflicts of Interest: The authors declare no conflict of interest.
Longevity Around the Turn of the 20th Century: Life-Long Sustained Survival Advantage for Parents of Today's Nonagenarians Abstract Members of longevous families live longer than individuals from similar birth cohorts and delay/escape age-related diseases. Insight into this familial component of longevity can provide important knowledge about mechanisms protecting against age-related diseases. This familial component of longevity was studied in the Leiden Longevity Study, which consists of 944 longevous siblings (participants), their parents (N = 842), siblings (N = 2,302), and spouses (N = 809). Family longevity scores were estimated to explore whether human longevity is transmitted preferentially through the maternal or paternal line. Standardized mortality ratios (SMRs) were estimated to investigate whether longevous siblings have a survival advantage compared with longevous singletons, and we investigated whether parents of longevous siblings harbor a life-long sustained survival advantage compared with the general Dutch population by estimating lifetime SMRs (L-SMRs). We found that sibships with long-lived mothers and non-long-lived fathers had 0.41 (p = .024) fewer observed deaths than sibships with long-lived fathers and non-long-lived mothers and 0.48 (p = .008) fewer observed deaths than sibships with both parents non-long-lived. Participants had 18.6 per cent fewer deaths compared with matched singletons, and parents had a life-long sustained survival advantage (L-SMR = 0.510 and 0.688). In conclusion, genetic longevity studies may incorporate the maternal transmission pattern and genes influencing the entire life-course of individuals. Earlier work showed that nonagenarian siblings lived significantly longer than members of comparable birth cohorts (11). Multigenerational studies into the sex-specific inheritance pattern of lifespan and longevity showed inconsistent results, however, with either paternal or maternal transmission patterns (Refs. 17-33, as reviewed in Ref. 34). Despite the generally observed survival advantage of first-degree relatives of longevous subjects, observations on the survival of their spouses and on longevity inheritance patterns remain inconclusive (11,35,36). The limitations of current inheritance pattern studies are twofold. First, secular trends, such as the increase of life expectancy over time, are not taken into account. Second, parent-offspring analyses usually focus on a single child per family, thereby omitting the potential of a complete sibship per family (37). Furthermore, studies have selected long-lived persons based on different criteria, focusing either on multiple siblings or on singletons (9)(10)(11)(16). It remains to be elucidated whether the stringency of long-lived case selection based on the presence or absence of a long-lived sibling provides a survival advantage in the selected persons compared with birth cohort- and sex-matched long-lived singletons. Apart from this, research into the survival of first-degree relatives and spouses of long-lived persons often struggles to obtain an accurate population-based control group, sometimes leading to the generalization of a single birth-year control group to other birth years (16). It is also difficult to compare the survival of parents of long-lived persons to population-based sex- and birth cohort-matched controls because representative cohort lifetables preceding 1900 are often unavailable, except for the Netherlands and Sweden (38).
Overall, research is still inconclusive about the following issues: the sex-specific inheritance pattern of longevity, the survival advantage of long-lived sibships compared with long-lived singletons, and the question whether their parents already had a life-long sustained survival advantage. To investigate these three issues, we used the data available in the Leiden Longevity Study (LLS). The LLS currently contains 421 complete three-generational families, which we denote as filial generations 0 to 2 (F0-F2). First, we grouped complete F1 sibships according to their parental longevity. We defined parental longevity as belonging to the top 1 per cent of their birth cohort (34,39) and constructed four parental groups: group 1: both parents were long-lived (N = 1); group 2: mother long-lived and father not long-lived (N = 17); group 3: father long-lived and mother not long-lived (N = 21); group 4: both parents not long-lived (N = 371). We subsequently compared the Longevity Family Scores (LFS) of the different groups. Next, we investigated whether longevous siblings had a survival advantage over sex- and birth cohort-matched singletons using standardized mortality ratios (SMRs). We compared the survival of spouses of longevous siblings to sex- and birth cohort-matched controls. Finally, we estimated lifetime SMRs (L-SMRs) to determine whether parents of longevous siblings had a life-long sustained survival advantage. Leiden Longevity Study The LLS was initiated in 2002 to study genetic determinants of human longevity. The LLS consists of 421 families and covers two generations of living subjects (F1 and F2) who were born between 1864 and 2017. Inclusion took place from 2002 until 2006. Men and women could participate if they were alive and aged ≥89 and ≥91 years, respectively. Both men and women were required to have a living sibling meeting the same criteria. Furthermore, the parents of the F1 participants had to be of Dutch Caucasian origin, and the siblings in one family had to descend from the same parents. The sex-specific age inclusion criteria represented individuals equal to or beyond the oldest 0.5 per cent of the Dutch population in 2001. There were no selection criteria on health or demographic characteristics. In total, 944 longevous F1 participants, who provided blood for research purposes, were included in the LLS (F1). In addition, their offspring and the spouses of their offspring were included (F2). Relevant for the current study is that genealogical information was collected for the siblings (F1; N = 2,302), parents (F0; N = 842), and spouses (F1; N = 809) of the longevous F1 participants (henceforth referred to as siblings, parents, spouses, and participants). All genealogical information was verified by birth or marriage certificates and passports whenever possible. Additionally, verification took place via personal cards obtained from the Dutch Central Bureau of Genealogy in The Hague. In 2017, we updated the ages at death and last observation via the currently centralized municipal personal records database. For this study, we used two generations (F0 and F1) consisting of 4,807 individuals from all 421 families (Figure 1 and Table 1) because 86 per cent of the third generation (F2) were still alive. Lifetables In the Netherlands, population-based cohort lifetables are available from 1850 until 2017 (40,41).
These lifetables contain, for each birth year and sex, an estimate of the hazard of dying between ages x and x + n (h_x), based on yearly intervals (n = 1) up to 99 years of age. Conditional cumulative hazards (H_x) and survival probabilities (S_x) can be derived from these hazards. In turn, we can determine to which sex- and birth year-based survival percentile each person in our study belonged. For example, person "A" was born in 1876, was a female, and died at the age of 92. According to the lifetable information, this person belonged to the top 3 per cent survivors of her birth cohort, meaning that only 3 per cent of the women born in 1876 reached a higher age than person A. We used the lifetables to calculate the birth cohort- and sex-specific survival percentiles for each individual in the LLS. Supplementary Figure A1 shows the ages at death corresponding to the top 10, 5, and 1 per cent survivors of their birth cohorts for the period 1850-1960.

Statistical Analyses
Statistical analyses were conducted using R statistics version 3.3.0 (42).

Standardized mortality ratios
To indicate excess mortality or excess survival of groups in the LLS compared with a reference population, we used SMRs. An SMR is estimated by dividing the observed number of deaths by the expected number of deaths. The expected number of deaths is given by the sum of all individual cumulative hazards based on the birth cohort- and sex-specific lifetables of the Dutch population. An SMR between 0 and 1 indicates excess survival, an SMR of 1 indicates that the study population shows a survival similar to the reference population, and an SMR above 1 indicates excess mortality. The SMR can be estimated conditional on the specific age at which an individual starts to be observed in the study. This was necessary to avoid selection bias when individuals in a study population were not at risk of dying before a specific age of entry:

$$\mathrm{SMR} = \frac{\text{observed number of deaths}}{\text{expected number of deaths}} = \frac{\sum_{i=1}^{N} d_i}{\sum_{i=1}^{N} \left[ H(t_i) - H(t_i^0) \right]},$$

where d_i = death status (1 = dead, 0 = alive), H(t) = the sex- and birth year-specific cumulative hazard based on the lifetable, t_i = timing, referring to age at death or last observation, t_i^0 = the lifetable age conditioning, in this case from birth (t_i^0 = 0), and N = the group sample size. SMRs were estimated for all first-degree relatives (F0 and F1) of the LLS participants (F1) to investigate their survival compared with the Dutch population. Direct or indirect selection effects were taken into account when estimating the SMR by conditioning the lifetable hazards on the age at first death of a specific group. SMRs were also estimated for participants by conditioning on the age of inclusion, which varies between 89 and 102 years (see Supplementary Table A1 for an overview of conditioning criteria). Note that the lifetables do not contain yearly interval information beyond the age of 99. For this reason, the SMR estimations were truncated at 99 years. To estimate the SMR at every possible starting age, we restricted age at death or last observation at yearly thresholds between 0 and 99 years for every group in the LLS, except for the participants, because they were selected to have survived ≥89/≥91 years (men/women). We will refer to these age-conditioned SMRs as L-SMRs. These L-SMRs provided insight into the specific moments at which the first-degree relatives and spouses had a survival advantage during their lifespan. SMR and L-SMR 95% confidence intervals were estimated using a family-based bootstrap with 500 resampling cycles to correct for familial dependencies in the LLS data.

Longevity family score
To summarize the survival of a specific study population or subsample at the level of families, we constructed a longevity family score (LFS). The LFS is related to the SMR, but it is estimated by subtracting, for each individual in the study population, the event status (1 if dead and 0 if alive) from the sex-, birth cohort-, and age-conditioned cumulative hazard. In a next step, the family mean is calculated, which adjusts for family size and results in the LFS. The LFS is related to the Family Mortality History Score described by Rozing and colleagues (43) and the est(SE) described by Sebastiani and colleagues (44). The LFS ranges between −1 and infinity. A score of 0 indicates that the familial longevity resembles that of the general Dutch population. A score above 0 indicates excess survival and a score below 0 indicates excess mortality. For example, family "A" scores an LFS of 1; this indicates that we observe one death fewer than expected based on the Dutch population:

$$\mathrm{LFS}_j = \frac{\text{expected deaths} - \text{observed deaths}}{\text{sibship size}} = \frac{1}{n_j} \sum_{i=1}^{n_j} \left[ \left( H(t_{ij}) - H(t_{ij}^0) \right) - d_{ij} \right],$$

where H(t) = the sex- and birth year-specific cumulative hazard based on the lifetable, t_ij = timing, referring to age at death or last observation of individual i in family j, t_ij^0 = the lifetable age conditioning, in this case from birth (t_ij^0 = 0), d_ij = death status, and n_j = the size of sibship j. To identify the presence of a sex-specific inheritance pattern, four groups of F1 sibships (participants + siblings) were constructed according to their parental longevity. We defined parental longevity as belonging to the top 1 per cent of their birth cohort. Group 1: both parents were long-lived (N = 1); group 2: mother long-lived and father not long-lived (N = 17); group 3: father long-lived and mother not long-lived (N = 21); group 4: both parents were not long-lived (N = 371). Group 1 was omitted from the analyses because its size was too small, and 12 sibships could not be grouped due to missing ages at death of their parents. The LFS was used to summarize F1 sibship survival relative to the parental groups. F1 LFS differences between the groups were tested using the nonparametric Mann-Whitney U test, and corresponding 95 per cent exact confidence intervals were reported (45).
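To make these estimators concrete, the following is a minimal sketch in Python (the analyses in the text were run in R, and no LLS code accompanies this article); the Person record, the lifetable layout, and the toy numbers are hypothetical illustrations of the formulas above, not LLS data.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Person:
    sex: str          # "m" or "f"
    birth_year: int
    entry_age: float  # t0: age at which observation starts (0 = from birth)
    exit_age: float   # t: age at death or last observation
    dead: int         # d: 1 = dead, 0 = alive at last observation

def cumulative_hazard(lifetable, sex, birth_year, age):
    """H(age) from a cohort lifetable of yearly hazards h_x for ages 0..99;
    truncated at 99 because yearly hazards beyond that age are unavailable."""
    h_x = lifetable[(sex, birth_year)]  # list of 100 yearly hazards
    age = min(age, 99.0)
    whole = int(age)
    return sum(h_x[:whole]) + h_x[whole] * (age - whole)

def survival_percentile(lifetable, sex, birth_year, age):
    """Fraction of the cohort still alive at `age`, S(x) = exp(-H(x)): a
    person dying at `age` belongs to the top 100*S(age) per cent survivors."""
    return math.exp(-cumulative_hazard(lifetable, sex, birth_year, age))

def smr(people, lifetable):
    """Observed over expected deaths, each person's expected hazard
    conditioned on his or her entry age to avoid selection bias."""
    observed = sum(p.dead for p in people)
    expected = sum(
        cumulative_hazard(lifetable, p.sex, p.birth_year, p.exit_age)
        - cumulative_hazard(lifetable, p.sex, p.birth_year, p.entry_age)
        for p in people
    )
    return observed / expected

def lfs(sibship, lifetable):
    """Mean over a sibship of (expected - observed) deaths per individual."""
    return sum(
        cumulative_hazard(lifetable, p.sex, p.birth_year, p.exit_age)
        - cumulative_hazard(lifetable, p.sex, p.birth_year, p.entry_age)
        - p.dead
        for p in sibship
    ) / len(sibship)

def family_bootstrap_ci(families, lifetable, stat, n_boot=500, alpha=0.05):
    """Percentile CI from resampling whole families (not individuals),
    which preserves the familial dependencies present in the data."""
    estimates = sorted(
        stat([p for fam in random.choices(families, k=len(families)) for p in fam],
             lifetable)
        for _ in range(n_boot)
    )
    return estimates[int(n_boot * alpha / 2)], estimates[int(n_boot * (1 - alpha / 2)) - 1]

# Toy usage: a flat 2% yearly hazard for women born in 1876 (hypothetical).
lifetable = {("f", 1876): [0.02] * 100}
family = [Person("f", 1876, 0.0, 92.0, 1), Person("f", 1876, 0.0, 95.0, 0)]
print(survival_percentile(lifetable, "f", 1876, 92.0))  # ~0.16 under this toy hazard
print(smr(family, lifetable), lfs(family, lifetable))
print(family_bootstrap_ci([family, family, family], lifetable, smr))
```

Resampling whole families rather than individuals is what keeps the interval honest in the presence of the sibling correlations the text mentions.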
Results
To investigate sex-specific inheritance and the presence of a life-long sustained survival advantage in the LLS, we used two generations covering longevous participants (F1; N = 944), their parents (F0; N = 842), siblings (F1; N = 2,302), and spouses (F1; N = 809) (Figure 1 and Table 1). [Notes to Table 1: *Participants are enrolled as siblings meeting the age criteria of 89 (men) or 91 (women) years. †Siblings are the siblings of participants who did not meet the age criteria yet or who had already been deceased at the time of enrolment. Age refers to either age at death or age at last observation; a missing age means that we have no observation at all. MAD = median absolute deviation; SD = standard deviation.] The participants were born between 1900 and 1916, and 63 per cent were female (N = 595). The participants' mean age at death or at last observation was 97 years, and 22 (2%) participants are currently alive. The parents were born between 1850 and 1894 and have all passed away, with a mean age at death of 77 years. We were unable to retrieve the age at death of 22 parents (3%). The siblings were born between 1875 and 1941 and 47% were female (N = 1,082).
The siblings' mean age at death was 69 years and the median age at death was 80 years. Three hundred sixty-five (16%) siblings are currently still alive, while we were unable to retrieve any information on the age at death for 33 (2%) siblings. The mean sibship size for F1 (participants + siblings) was 7.71 (SD = 3.4), with a minimum of 2 and a maximum of 17 siblings. The spouses were born between 1882 and 1950. Forty per cent of the spouses were female (N = 324) and their mean age at death was 75 years. Twenty-seven (3%) spouses are currently alive, and for 119 (15%) spouses no age at death or last observation was available (Table 1).

LLS Data Are of High Quality
We verified the observations as described by Schoenmaker and colleagues, based on the first 100 LLS families, by estimating SMRs for parents, spouses, and siblings of the completely enrolled LLS.

Maternal Transmission of Longevity
To determine an inheritance pattern based on information of not just single individuals but of an entire sibship, we used an LFS to summarize sibship survival. We grouped sibships (F1, participants + siblings) according to their parental (F0) longevity (parental longevity was defined as belonging to the top 1% survivors of their birth cohort) and compared the median group LFS of the complete sibships. Figure 2 shows that all F1 sibship groups, on average, had an excess survival compared with single individuals of the same birth cohorts and sex, as indicated by the median scores, which were all above 0. Sibships with a long-lived (LL) father and a non-long-lived (NL) mother had 1.21 (median LFS) fewer observed deaths in reference to the Dutch population and a mean sibship size of 8.34 (SD = 3.4). Sibships with an LL mother and an NL father had 1.62 (median LFS) fewer observed deaths with a mean sibship size of 5 (SD = 1.9), and sibships with both parents NL had 1.1 fewer observed deaths with a mean sibship size of 7.95 (SD = 3.4). As a result, sibships with long-lived mothers and non-long-lived fathers showed larger LFSs than sibships with long-lived fathers and non-long-lived mothers (median difference in LFS of 0.41; 95% CI = 0.07-0.77; p = .024). Similarly, they showed larger LFSs than sibships with both parents non-long-lived (median difference in LFS = 0.48; 95% CI = 0.15-0.79; p = .008). We did not observe differential survival between sons and daughters of a long-lived mother (Supplementary Figure A2). In conclusion, we observed a maternal transmission pattern of human longevity with no evidence of a differential survival advantage for sons and daughters.

Last Life-Phase Survival Advantage of Siblings Over Singletons
To test if longevous F1 participants had a survival advantage over birth cohort-, sex-, and inclusion age-matched singletons, we estimated sex-specific SMRs for the participants (Figure 3A).

Life-Long Sustained Survival Advantage of Siblings and Parents but Not for Spouses
Whether first-degree relatives and spouses of the participants had a survival advantage over their entire lifetime was studied by estimating L-SMRs. Figure 3B shows that siblings had a significant survival advantage compared with individuals of similar birth cohorts and sex at any point of their lifetime distribution until the threshold of 97 years, although the SMR at 98 years was again significant. The mean L-SMR was 0.680 and the median L-SMR was 0.660. No sex differences were identified at any age threshold.
We observed that spouses had a nonsignificant L-SMR until age 74, indicating that they were similar to sex- and birth cohort-matched individuals from the general population. [Notes to the SMR table: No significant differences between men and women were observed for any category. Observed deaths were counted after the age of the first death in a group for "parents of participants," "siblings of participants," and "spouses of participants." For the participants, observed deaths were counted after the age of inclusion for each individual separately, to correct for selection effects in the data. In line with the counting of the observed deaths, the Dutch lifetables were age-conditioned to match the counting of deaths in the different groups, likewise to correct for selection effects.] Beyond age 74, there was a small but significant survival disadvantage (min SMR = 1.09 and max SMR = 1.32), and from age 91 until 94 the effects were no longer statistically significant. Among spouses, no statistically significant differences between husbands and wives could be detected at any age threshold. The mean L-SMR was 1.050 and the median L-SMR over all age points was 1.030 (Figure 3C). Finally, we were able to study the life-long survival of parents of longevous participants (Figure 3D). Parents had a significant survival advantage compared with individuals of the same birth cohort and sex at any point of the parents' lifetime distribution until 93 years. After 93 years, the SMR estimates were still below 1, although not statistically significant, probably due to small sample size. The parental mean and median L-SMR were 0.510 and 0.688, respectively. No sex differences were identified at any age threshold. Exact values corresponding to Figure 3 can be found in Supplementary Table A2.

Discussion
We investigated the survival of the longevous F1 LLS participants (who are longevous siblings) selected in the Leiden Longevity Study, and of their F1 siblings, F0 parents, and F1 spouses. Based on the lifespan data of entire sibships (F1, participants + siblings), we observed a maternal transmission pattern of longevity with equal probability for sons and daughters. Compared with inclusion age-matched singletons of similar birth cohorts and sex, LLS participants had 18.6 per cent fewer observed deaths than expected, and thus a survival advantage. In the LLS, the spouses of the participants had a life-long sustained survival pattern similar to the general population. Finally, we conclude that parents and siblings of the LLS participants had a life-long sustained survival advantage compared with individuals matched on birth cohort and sex. Longevity family scores (LFS) were used to explore whether human longevity was transmitted preferentially through the maternal or paternal line, using the entire sibship information instead of only that of one single child per family. All sibships had an increased survival compared with individuals of the same birth cohort and sex, regardless of their parental longevity, because we selected LLS participants to have lived ≥89 and ≥91 years for men and women, respectively. However, the median LFS for sibships with a long-lived mother and a non-long-lived father was 0.41 (p = .024) higher than for sibships with a long-lived father and a non-long-lived mother, and 0.48 (p = .008) higher than for sibships with both parents non-long-lived.
This indicates that in the LLS longevity was transmitted preferentially via the maternal line. This maternal transmission of longevity is in concordance with the mitochondrial transmission hypothesis, which posits that longevity may be transmitted through mitochondrial DNA from mothers to their offspring (8). Although this theory argues that, because mitochondria are only maternally inherited, they are under selection pressure for optimized compatibility with only the female genome, we have no evidence that there is preferential transmission of longevity from mothers to daughters. Another explanation connects to Fogel's theory of technophysio evolution, which explains that at the turn of the 19th to the 20th century childhood and early-life mortality decreased significantly. This decrease was attributed to an increased birth weight and height of children and young adults, respectively (46). Since mothers are pivotal in this process, it might be that the long-lived mothers were able to give birth to such healthy children, whereas this may not have been the case for non-long-lived mothers, irrespective of the beneficial effect that 19th-century long-lived fathers may have provided. The similarity in LFS between sibships with a long-lived father and a non-long-lived mother (LFS = 1.21) and sibships with both parents non-long-lived (LFS = 1.14) indicates the small influence of paternal effects compared with maternal effects. This absence may indicate that paternal socioeconomic status in the LLS is of marginal influence on the intergenerational transmission of longevity (47,48). Sibships with a long-lived mother and a non-long-lived father not only had a higher LFS, but they also had a mean sibship size of 5, whereas the two other categories had mean sibship sizes of 8.34 and 7.95. In general, the probability of finding long-lived subjects in families increases with sibship size (49). The finding of longevity among children in small sibships (with a long-lived mother) may therefore indicate that the longevity is less likely to have arisen by chance. The smaller sibship size of LL mothers may be explained by a trade-off in longevity families, based on either environmental (i.e., limited economic resources) or biological (i.e., reproductive capacity) factors. The discordant parental groups were quite small (Figure 2). We identified sibships with a long-lived father but not mother, and vice versa (N_sibships = 21; N_individuals = 177 and N_sibships = 17; N_individuals = 85), which interestingly shows that the maternal transmission effects are found not in all, but in a subset of, LLS families. To investigate familial clustering of longevity, studies have selected long-lived subjects based on multiple siblings or on singletons (9-11,16). So far, it was unclear whether a sibling-based selection provides a survival advantage over singletons. We showed that longevous siblings (F1 LLS participants) indeed had an 18.6 per cent survival advantage over inclusion age-, birth cohort-, and sex-matched longevous singletons. The effect can be considered large because the observational period focuses on the last stage of life (ages ≥89 and ≥91 for men and women), especially when taking into account that siblings of LLS participants, whose full life course was observed, showed a 33.7 per cent survival advantage. It might even be expected that confining the sample to participants from families with three or more longevous siblings would increase the survival advantage.
We did not, however, have the sample size to stratify our analyses by the specific number of longevous participants within a family. Furthermore, we accounted for direct selection effects, although we could not directly account for the possibility that healthier persons enrolled in the LLS than unhealthy persons, or vice versa. We did not, however, expect that this has influenced our results, since the first participants died only a few weeks after inclusion. We conclude that, when compiling a long-lived study cohort, selecting longevous siblings is a more stringent selection than selecting longevous singletons of the same age. The literature is inconclusive about the potential survival advantage of spouses of long-lived persons (10,11,35,36). We showed, in a large group, that spouses of longevous LLS participants (N = 809) had an equal survival to the general population until the age of 74. Beyond 74 years, we observed a small excess mortality. We have no explanation for this finding other than that this excess mortality beyond 74 years may be a function of small sample size. Pedersen and colleagues observed a survival advantage in the Long Life Family Study for spouses of long-lived siblings when comparing them to a birth cohort- and sex-matched control group. The authors point to assortative mating as a factor explaining the survival advantage for spouses of longevous participants (10). An earlier Quebec study also reported a survival advantage of spouses (35), and a study in Southern Italy found male nonagenarians to outlive their spouses, whereas this was not the case for female nonagenarians (36). Clearly, biological, environmental, and cultural factors influence survival to advanced ages in longevous families. Because of unique Dutch lifetables dating back to 1850, we were able to show that parents of longevous LLS participants had a life-long sustained survival advantage compared with birth cohort- and sex-matched controls, until at least the age of 93 years. Beyond 94 years, the confidence intervals increased due to a limited sample size. The life-long sustained survival advantage of first-degree relatives indicates a familial clustering of human longevity, which may be the result of the absence of deleterious genetic mutations (50,51) or the presence of genetic mutations protecting from aging-related diseases (52). Genetic studies aimed at identifying longevity loci promoting a life-long survival advantage up to the highest ages require a focus on extreme individuals: cases belonging to the top 1%-5% survivors with comparable parents. Recent genetic studies in the large UK Biobank (50,51) focused on subjects of 70 years on average, either without a parental selection (51) or selecting on parents belonging to the top 10 per cent survivors (50). This selection resulted in loci known to influence healthy aging and mortality in middle and older ages rather than exceptional longevity. As an alternative to genetic influences, shared lifestyle or environmental factors may influence the longevity clustering in families. With the SMR analyses, we could not adjust for environmental and lifestyle factors. However, the fact that we found spouses to survive comparably to the general population and that first-degree relatives (siblings and parents) had a life-long sustained survival advantage suggests a familial/genetic influence on human longevity, possibly acting from early life onward.
Longevity clusters within specific families, and insight into this familial clustering is important in gaining knowledge of factors involved in a life-long survival advantage up to the highest ages. Knowledge about the inheritance pattern of longevity may be useful for genetic studies trying to discover longevity-related genes. For example, effects of mitochondrial genes on human longevity should be investigated in those families with a history of maternal transmission of human longevity. Furthermore, research aiming to establish a study cohort of long-lived persons should ideally take family information into account, because we have demonstrated an enhanced survival for longevous siblings (LLS participants) over birth cohort- and sex-matched singletons. In the LLS, spouses seem comparable to the general population, making them a suitable comparison group for various health-related phenotypes as well as longevity. Lastly, compared with sex- and birth cohort-matched individuals, parents of the LLS participants, born around the turn of the 20th century, had a life-long sustained survival advantage up to the highest ages, as was previously reported for the 20th-century survival of siblings of longevous singletons (9,10,16). This indicates that, when studying the determinants of longevity, factors involving the entire lifespan may contribute, and it emphasizes the importance of longitudinal population-based studies in the search for protective factors against age-related disease.

Informed Consent
Informed consent was obtained from all Leiden Longevity Study participants.

Supplementary Material
Supplementary data are available at The Journals of Gerontology, Series A: Biological Sciences and Medical Sciences online.

Funding
This work was supported by the Netherlands Organization for Scientific Research [360-53-180].
Modulation of the Root Microbiome by Plant Molecules: The Basis for Targeted Disease Suppression and Plant Growth Promotion

Plants host a mesmerizing diversity of microbes inside and around their roots, known as the microbiome. The microbiome is composed mostly of fungi, bacteria, oomycetes, and archaea that can be either pathogenic or beneficial for plant health and fitness. To grow healthy, plants need to surveil soil niches around the roots for the detection of pathogenic microbes, and in parallel maximize the services of beneficial microbes in nutrient uptake and growth promotion. Plants employ a palette of mechanisms to modulate their microbiome, including structural modifications, the exudation of secondary metabolites, and the coordinated action of different defence responses. Here, we review the current understanding of the composition and activity of the root microbiome and how different plant molecules can shape the structure of the root-associated microbial communities. Examples are given of interactions that occur in the rhizosphere between plants and soilborne fungi. We also present some well-established examples of microbiome harnessing to highlight how plants can maximize their fitness by selecting their microbiome. Understanding how plants manipulate their microbiome can aid in the design of next-generation microbial inoculants for targeted disease suppression and enhanced plant growth.

INTRODUCTION
Plants are sessile organisms anchored in the soil by their roots. In terrestrial ecosystems, plants are the main food producers supporting most other life. In nature, plants are continuously exposed to various biotic stresses caused by pathogens or pests, and to adverse environmental conditions such as drought, soil salinity, extreme temperatures, nutrient deficiencies, or exposure to heavy metals (De Coninck et al., 2015; Antoniou et al., 2017; Hacquard et al., 2017). To survive biotic stresses, plants have evolved an array of sophisticated immune responses which protect plant cells from the challenges they confront (Pieterse et al., 2014). For decades, the interactions between plants and pathogens were studied under the prism of an individual plant-microbe relationship, ignoring the complexity of such interactions and the involvement of many other groups of microorganisms that affect the outcome of infection (Mendes et al., 2011; Berendsen et al., 2012; Bulgarelli et al., 2013). Over the last years, focus has been diverted to the effect of the plant-associated microbial communities on plant growth and health. Increasing evidence suggests that services provided by plant-associated microorganisms can broaden the immune functions of the plant host (Vannier et al., 2019). It has even been postulated that plants actively recruit soil microorganisms by releasing compounds in the rhizosphere that selectively stimulate microorganisms that are beneficial to plant growth and health (Reinhold-Hurek et al., 2015; Sasse et al., 2017). Here, we review the current understanding of the composition and activity of the root-associated microbial communities, and we discuss how different plant molecules can shape the structure of these communities, also providing examples of the interactions between plants and soilborne fungi.
Game of Biomes: Plant Roots and Their Microbiome
Plants harbor a mesmerizing diversity of microbes both in their aboveground and their belowground tissues, collectively known as the plant microbiota, while the genomes of the microbiota living in close association with plants are commonly referred to as the plant microbiome (Berendsen et al., 2012; Bulgarelli et al., 2013). This review will focus on the interactions of the microbiome with the root, the plant organ "hidden" in the soil that mediates key functions for plant longevity and fitness (De Coninck et al., 2015). Some of these functions are the fixation of a plant in a position, the uptake and storage of nutrients and water from the soil, and the mediation of the interaction with soil-inhabiting microbes (Figure 1). Roots and their surrounding soil constitute one of the richest and most diverse ecosystems on Earth. The high concentration of microbial life in the thin soil layer surrounding the roots, known as the rhizosphere, is explained by the release of carbon-rich products of photosynthesis, which are a vital food source for the attracted microbes (Bais et al., 2006; Sasse et al., 2017). Rhizodeposits are quite diverse and include organic acids, amino acids, sugars, products of secondary metabolism, and even released dying root cap border cells (Dakora and Phillips, 2002; Bais et al., 2006; Driouich et al., 2013). Root-derived exudates, apart from supporting microbial proliferation in the rhizosphere, are also responsible for the formation of distinct microbial assemblages between soil and the rhizosphere, a phenomenon described as the "rhizosphere effect" (Hiltner, 1904; Berendsen et al., 2012). The microbes proliferating in the rhizosphere are therefore exposed to plant-derived compounds and signaling molecules and represent a subset of the highly complex microbial communities of the bulk soil (Berendsen et al., 2012).

FIGURE 1 | Plants respond to different environmental stresses and modulate their microbiome. (A) Plants not experiencing any biotic stress and having access to nutrients (green pentagons) constitutively release exudates (red arrows) that allow them to sustain a balance in the rhizosphere between pathogenic and beneficial microbes. (B) Upon infection by a pathogen (red microbe), the exudation profile of roots changes, and stress-induced exudates (blue arrows) aid the plants in inhibiting pathogenic growth in the rhizosphere while selecting at the same time for beneficial microbes. Some of these beneficial microbes, once established in the rhizosphere, can trigger ISR that can help plants deal with pathogenic infections in the leaves. (C) In the case of soil suppressiveness or "cry-for-help" conditions, beneficial rhizosphere communities are established and are further supported by the release of stress-induced exudates. Under these conditions, soilborne and foliar pathogens fail to cause disease. (D) Plants experiencing nutrient deficiencies (e.g. iron, nitrogen, phosphate) change the metabolomic profile of their roots to either make nutrients more available and soluble or to attract beneficial microbes (e.g. rhizobia, AMF, PGPR) that can help them deal with the nutrient deficiency. Font size indicates the abundance of beneficial or pathogenic subsets of the microbiota under different conditions. The figure was designed with Biorender (https://biorender.com).
A next layer of selection occurs when microbes grow on the root surface (rhizoplane) or inside roots (endosphere), and in turn less diverse microbial communities are observed (Bulgarelli et al., 2013; Reinhold-Hurek et al., 2015; Hacquard et al., 2017). These layers of selection are critical considering that the root-associated microbiota consist of microbes that can assist plants in nutrient assimilation or enhance their growth and defense potential, but also of microbes that can be detrimental to plant health (Lugtenberg and Kamilova, 2009; Pieterse et al., 2014; De Coninck et al., 2015). Therefore, maintaining a balance between plant health and the accommodation of this plethora of microbes in the root rhizosphere requires the coordination of complex processes in the rhizosphere from which all partners benefit.

The Identity of Root-Associated Microbiomes
Over the last decade, several studies have unearthed the composition of root-associated microbial communities. Most of these studies employed next-generation sequencing of microbial marker genes, such as 16S rRNA for bacteria and the nuclear ribosomal internal transcribed spacer (ITS) region for fungi (Claesson et al., 2010; Schoch et al., 2012), an approach known as amplicon sequencing (Sharpton, 2014), while others used shotgun metagenomics sequencing, in which not only selected microbial marker genes but all DNA present in an environmental sample is sequenced (Sessitsch et al., 2012; Ofek-Lalzar et al., 2014; Bai et al., 2015; Bulgarelli et al., 2015; Stringlis et al., 2018b). The latter approach allows not only the taxonomic profiling of the root-associated microbial communities but also the functional characterization of the microbiome (Sharpton, 2014). These culture-independent methodologies allowed the characterization of the microbiota in both the rhizosphere and the endosphere of different plant species. In the case of bacteria, analysis at the phylum level revealed that the microbiota of healthy Arabidopsis thaliana (hereafter Arabidopsis) plants originates from the more diverse soil communities and is dominated by the phyla Proteobacteria, Actinobacteria, and Bacteroidetes, and to a lesser extent Firmicutes (Bulgarelli et al., 2012; Lundberg et al., 2012). Similarly, the roots of species closely related to Arabidopsis in the Brassicaceae family (Cardamine hirsuta, Arabidopsis halleri, Arabidopsis lyrata, and Arabis alpina) display quite similar microbial assemblages to those of Arabidopsis (Schlaeppi et al., 2014; Dombrowski et al., 2017). In plant species not related to Arabidopsis, such as barley, citrus, rice, Lotus japonicus, poplar, sugarcane, and tomato, the phyla Proteobacteria, Actinobacteria, Bacteroidetes, and Firmicutes constitute the highest proportion among the identified bacteria (Bulgarelli et al., 2015; Edwards et al., 2015; De Souza et al., 2016; Zgadzaj et al., 2016; Beckers et al., 2017; Zhang et al., 2017; Kwak et al., 2018). For fungal communities, studies in Arabidopsis, Arabis alpina, poplar, and sugarcane have shown that mostly the phyla Ascomycota and Basidiomycota, and to a lesser extent Zygomycota and Glomeromycota, dominate the root microbiota of their host plants (Shakya et al., 2013; De Souza et al., 2016; Almario et al., 2017; Robbins et al., 2018; Bergelson et al., 2019).
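As a toy illustration of the phylum-level profiling step that these amplicon and metagenomics studies rely on, the following Python sketch collapses an OTU count table to per-sample phylum relative abundances; the counts, sample names, and phylum assignments are invented for illustration and are not data from any of the cited studies.

```python
import pandas as pd

# rows = OTUs, columns = samples; read counts are hypothetical toy values
counts = pd.DataFrame(
    {"rhizosphere_1": [120, 30, 15, 5], "bulk_soil_1": [40, 35, 30, 25]},
    index=["otu_1", "otu_2", "otu_3", "otu_4"],
)

# phylum assignment per OTU, e.g. as produced by a 16S rRNA classifier
taxonomy = pd.Series(
    ["Proteobacteria", "Actinobacteria", "Bacteroidetes", "Firmicutes"],
    index=counts.index, name="phylum",
)

phylum_counts = counts.groupby(taxonomy).sum()       # collapse OTUs to phyla
rel_abundance = phylum_counts / phylum_counts.sum()  # per-sample proportions
print(rel_abundance.round(3))
```

Comparing such per-sample proportions between rhizosphere and bulk-soil samples is one simple way to visualize the "rhizosphere effect" described above.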
The high representation of selected bacterial and fungal phyla in the roots and rhizospheres of different hosts suggests that members of these phyla constitute competitive and adaptable colonizers across various soil types and locations (Muller et al., 2016). Indeed, sequencing of microbiome DNA and RNA from the rhizosphere and the root of Brassica napus and citrus demonstrated that the phyla Proteobacteria, Actinobacteria, Acidobacteria, and Bacteroidetes are highly active in the root and the rhizosphere and assimilate most of the carbon released by the roots (Gkarmiri et al., 2017; Zhang et al., 2017). Metatranscriptomics, functional studies, and labelling of carbon absorption revealed that the overrepresentation of specific fungal phyla in the rhizosphere correlates with their increased activity around the roots or with services they provide to the host plants (Vandenkoornhuyse et al., 2007; Turner et al., 2013; Almario et al., 2017; Gonzalez et al., 2018).

Interactions of Plants With Beneficial and Pathogenic Microbes
Beneficial Associations With Plants
Symbiotic Plant-Microbe Associations. Research has unearthed that intimate interactions of plants with beneficial microbes first occurred millions of years ago. The first land plants were colonized by ancestral filamentous fungi that facilitated water absorption and nutrient acquisition for the host plant, while the fungi received photosynthetically fixed carbon in return (Field et al., 2015; Martin et al., 2017). This symbiotic association has coevolved so successfully that more than 90% of living plant species form symbioses with mycorrhizal fungi, of which about 80% are classified as arbuscular mycorrhizal fungi (AMF) (Parniske, 2008; Bonfante and Genre, 2010). As obligate biotrophs, AMF need to sense the presence of host plants to complete their lifecycle. The root-exuded plant hormone strigolactone has been recognized as the stimulatory signal for AMF mycelium metabolism and branching, and its concentration gradient from the roots reveals the proximity of the host plant (Parniske, 2008; Bonfante and Genre, 2010). Intriguingly, AMF signaling pathways are very similar to the one that coordinates the well-known symbiosis between the paraphyletic group of rhizobial bacteria and leguminous plants, and they are therefore named common symbiosis signaling pathways (CSSPs) (Maclean et al., 2017; Martin et al., 2017). In rhizobia, the symbiotic association begins with the perception by the microbes of specific root-exuded isoflavonoid compounds that stimulate root nodule formation (Begum et al., 2001; Oldroyd, 2013; Poole et al., 2018). Once symbiosis is established, there is a continuous exchange of nutrients between the host plant and the microbes. AMF can take up the poorly water-soluble inorganic orthophosphate (Pi) from soils and transport Pi through the extraradical mycelium network and fungal arbuscules into the root. AMF can also take up and transport other major nutrients; for example, nitrogen is transferred into plants in the forms of nitrate, ammonium, and amino acids by means of specialized transporters (Parniske, 2008; Bonfante and Genre, 2010; Maclean et al., 2017). In exchange, AMF receive their entire carbon requirements from plants, through specific fungal hexose transporters and fatty acids (Jiang et al., 2017; Maclean et al., 2017). In the rhizobia-legume symbiosis, rhizobia reduce atmospheric N2 to ammonia inside the root nodules and secrete it to plants, while plants provide rhizobia with dicarboxylates (Poole et al., 2018).
Nutrient Uptake and Growth Promotion by Beneficial Microbes. Plants can acquire nutrients even in the absence of symbiosis with AMF or rhizobia. Enhanced nutrient acquisition in plants is a very common mechanism of phytostimulation (Lugtenberg and Kamilova, 2009; Finkel et al., 2017; Jacoby et al., 2017; Verbon et al., 2017), and a wide array of microbes can accomplish this function in non-mycorrhizal plants (Almario et al., 2017; Castrillo et al., 2017; Fabianska et al., 2019). The AMF non-host plant Arabidopsis acquires Pi through its natural root endophytic symbiont Colletotrichum tofieldiae (Hiruma et al., 2016). Hiruma and colleagues (2016) demonstrated that Pi translocation is the main plant growth promotion mechanism provided by C. tofieldiae, and this mechanism is governed by the phosphate starvation status of the plant and requires an intact plant immune system. Endophytic fungi belonging to the order Sebacinales, such as Serendipita indica (formerly known as Piriformospora indica), can also promote plant growth through Pi acquisition (Yadav et al., 2010; Weiss et al., 2016). Similarly, Trichoderma fungi can produce chelating metabolites that solubilize phosphate and increase its acquisition by plants to promote plant growth (Altomare et al., 1999; De Jaeger et al., 2011). In non-leguminous plants, nitrogen acquisition is mediated by other microbes that do not belong to the N-fixing bacteria (Jacoby et al., 2017; Martin et al., 2017). Evidence also accumulates that during root colonization selected beneficial microbes can hijack the iron deficiency response of plants. In this case, bacterial colonization induces the expression of genes with a role in iron uptake, genes that plants commonly use to mobilize and take up iron when this element is present in unavailable forms in the soil (Zhou et al., 2016; Martinez-Medina et al., 2017; Verbon et al., 2017). Beneficial microbes can also promote plant growth by affecting the hormonal balance of plants. This beneficial effect can be induced by the secretion of small microbial secondary metabolites (SM) that act as hormone-like plant growth regulators, or by the production of SM and proteins that enable microbes to modulate the signaling of plant defense hormones to successfully colonize plant tissues (Verbon and Liberman, 2016; Patkar and Naqvi, 2017; Manganiello et al., 2018; Stringlis et al., 2018c). Numerous microbial species among plant-associated bacteria and fungi can produce indole-3-acetic acid (IAA) or auxin-mimicking molecules that play a direct role in plant growth and development (Duca et al., 2014; Garnica-Vergara et al., 2016). Other microbial phytohormones or phytohormone-like molecules, such as cytokinins, gibberellins, and analogues of defense-related hormones such as salicylic acid (SA) or jasmonic acid (JA)-isoleucine, are mainly produced to facilitate microbial colonization through modulation of plant immunity (Schafer et al., 2009; Stringlis et al., 2018c). Moreover, many plant-beneficial microorganisms produce 1-aminocyclopropane-1-carboxylate (ACC) deaminase, which cleaves ACC, the immediate biosynthetic precursor of ethylene (ET) in plants, and thereby promote plant growth, presumably by lowering plant ET, which can reach levels inhibitory to plant growth under stress conditions (Viterbo et al., 2010; Brotman et al., 2013; Glick, 2014; Stringlis et al., 2018c).
Induced Systemic Resistance.
Another well-studied mechanism of elevated plant defense potential is the so-called induced systemic resistance (ISR), which is triggered by beneficial members of the root microbiome in a wide range of plant hosts, making them resistant against various pathogenic threats. Systemic activation of plant defenses is ensured by a complex network of defense-related hormone signaling pathways, which carries the message of a beneficial interaction to different plant organs (Pieterse et al., 2009; Pieterse et al., 2014). The ISR phenomenon was first described for bacteria of the genus Pseudomonas, and this mechanism has been distinguished from "systemic acquired resistance" (SAR), which is induced by pathogens. ISR has also been described for many plant growth-promoting rhizobacteria (PGPR) of the genera Bacillus and Serratia and plant growth-promoting fungi (PGPF) of the genera Trichoderma, Fusarium, and Serendipita, as well as AMF (Harman et al., 2004; Kloepper et al., 2004; Shoresh et al., 2010; Jung et al., 2012; Pieterse et al., 2014), and it is determined by the perception of microbially secreted SM (Ongena and Jacques, 2008; Raaijmakers et al., 2010; Manganiello et al., 2018; Stringlis et al., 2018c). Interestingly, ISR is characterized by the activation of defense responses only after pathogen attack, sparing the plant a great energy expenditure. This mechanism of "upon attack" defense activation is known as priming and is an energy-saving evolutionary strategy that allows plants to keep their immune system silently alerted until a challenge by pathogens or insects occurs. Following this challenge, plants deploy all the cellular responses faster and/or more strongly, resulting in a more efficient and effective resistance (Martinez-Medina et al., 2016). All the beneficial associations presented above are based on the interaction between a host plant and a single beneficial microbe. Modern holistic approaches aim to correlate plant health to the entire plant-associated microbial community. In this case, microbial genes are considered an extension of the plant genetic repertoire that performs specific functions benefiting plant growth, reproduction, and disease resistance (Vandenkoornhuyse et al., 2015; Hassani et al., 2018). Community-level metagenomic studies can elucidate whether there is functional redundancy or overlapping genomic traits in most microbes promoting plant growth or inducing systemic resistance, enabling in this way the discovery of novel PGPR or PGPF (Lugtenberg and Kamilova, 2009; Pieterse et al., 2014; Bai et al., 2015; Zeilinger et al., 2016; Berendsen et al., 2018; Duran et al., 2018).
Soilborne Pathogens. In nature, plants are continuously threatened by infections caused by pathogenic microorganisms. Soilborne pathogens can affect hundreds of plant species, including economically important crops, and cause significant monetary losses due to significant reductions in yield and quality. For many crops, losses are estimated at 10%-20% of the attainable yield (Pimentel et al., 1991; Okubara and Paulitz, 2005; De Coninck et al., 2015). However, crop losses are often underestimated, as soilborne pathogens are not an immediate concern for growers, and their practices in many cases lead to increased inoculum reservoirs in soils (Chellemi et al., 2016). Also, their economic importance is expected to rise significantly due to the increasing implementation of conservation tillage or no-till farming practices in many countries (De Coninck et al., 2015) and to climate change, which can expand their geographical range on Earth (Cheng et al., 2019).
Soilborne pathogens reside in the soil for short or extended periods, and survive as saprophytes on plant residues and organic matter or as resting structures (e.g. sclerotia, chlamydospores, oospores, melanized mycelia) until triggered to grow by root exudates (Bruehl, 1987; Bais et al., 2006; De Coninck et al., 2015). For example, phenolic acids, sugars, and free amino acids in root exudates from watermelon significantly increased spore germination and sporulation of F. oxysporum f. sp. niveum (Hao et al., 2010). Similarly, tomato root exudates stimulated microconidia germination of the tomato pathogens F. oxysporum f. sp. lycopersici and F. oxysporum f. sp. radicis-lycopersici, and the level of stimulation was affected by plant age (Steinkellner et al., 2005). Moreover, root exudates can be detected by fungal pathogens, enabling fungal hyphae to orient their growth towards the root. For example, the chemotropic response of F. oxysporum towards tomato roots was recently characterized and involves the catalytic activity of root-secreted class III peroxidases (Turrà et al., 2015). Under favorable environmental conditions, soilborne pathogens invade plants through the root system, and in most cases roots and other belowground parts are directly affected; however, symptoms are often visible on aboveground parts of plants (Koike et al., 2003). Plants infected by soilborne pathogens suffer from root rots, inhibition of root development, stunted growth, seedling damping-off, stem and collar rots, wilting, or even plant death (De Coninck et al., 2015; Katan, 2017). Diseases caused by soilborne plant pathogens are notoriously difficult to control for several reasons: many soilborne pathogens produce persistent resting structures that can survive in the soil for many years, even in the absence of a susceptible host (Katan, 2017); measures targeting resting structures (e.g. chemical fumigation) are unsuitable for large-scale application due to public health and environmental issues and bans on chemical fumigants (Yadeta and Thomma, 2013); application of pesticides is often insufficient because of their poor accessibility in the soil matrix (De Coninck et al., 2015); and some soilborne pathogens infect a wide range of host plants, rendering cultural control measures ineffective (Antoniou et al., 2017). Moreover, in order to establish a parasitic relationship with plants, pathogens must interact with the complex rhizosphere community, which also influences the outcome of the infection (Raaijmakers et al., 2009). Pathogens are negatively affected by co-inhabiting microorganisms through antibiosis and competition for nutrients, processes that usually involve secreted molecules. Snelders et al. (2018) proposed that pathogens can fight back by delivering effector proteins which target the rhizosphere communities instead of the plant, to ultimately facilitate host colonization by the pathogen. Soilborne pathogens include species of fungi, oomycetes, bacteria, viruses, and nematodes (Katan, 2017). The most important soilborne fungal pathogens are Fusarium oxysporum (Michielse and Rep, 2009), Fusarium solani (Coleman, 2016), Rhizoctonia solani (Gonzalez et al., 2011), Verticillium spp. (Klosterman et al., 2009), and Sclerotinia sclerotiorum (Bolton et al., 2006), and destructive soilborne oomycetes are Phytophthora spp. (Van West et al., 2003; Lamour et al., 2012) and Pythium spp. (Van West et al., 2003). Among the many soil bacteria that are beneficial, only a few groups infect plant roots.
Examples are Ralstonia solanacearum (Peeters et al., 2013) and the causal agent of crown gall, Agrobacterium tumefaciens (Anand et al., 2008), which require a natural opening or wound to penetrate into the plant and cause infection. Only a small number of viruses can infect roots and, like bacteria, they require an opening to achieve penetration. They generally survive only in the living tissues of the host plant or in their vectors. In soil, viruses are transmitted by zoosporic fungi (Campbell, 1996) or by nematodes (Brown et al., 1995).

Effect of Root Exudates on the Root-Associated Microbiome
Plants produce and exude via their roots various metabolites that can affect the assembly of the root microbiome before microbes even reach the root surface, where they confront the plant immune system (Sasse et al., 2017). The age and developmental stage of the plant influence exudation and, subsequently, the microbes proliferating around the roots. Exudates of Arabidopsis plants collected at different plant ages varied in sugar levels, which accordingly affected microbial functions related to sugar and secondary metabolism. It was also shown that Arabidopsis plants during the early and late stages of their development can influence the abundance of Actinobacteria, Bacteroidetes, and Cyanobacteria, as well as microbial activity (Chaparro et al., 2014). Functions aligning with pathogens were more represented at early developmental stages, while later developmental stages were dominated by functions related to antibiosis and chemotaxis aligned to beneficial microbes, suggesting a selective pressure during plant aging towards microbes that provide their hosts with important services. In this direction, a recent study elegantly demonstrated that exudates change during the growth cycle of Avena barbata, with sucrose levels high at earlier stages, while amino acids and defense molecules are released more at later developmental stages (Zhalnina et al., 2018). Using exometabolomics, this study showed that selected metabolites, including aromatic organic acids (nicotinic, shikimic, salicylic, cinnamic, and IAA), are responsible for the proliferation or not of specific microbes around the roots during the different growth stages of the host plant (Zhalnina et al., 2018). Different rhizodeposits have been shown to influence the microbiome composition. Studies on how plants select root-associated microbes/microbiota are summarized in Table 1. Biosynthesis of aliphatic and indolic glucosinolates, which are components of the chemical defense of plants, occurs in the vascular stele. Early studies demonstrated that root exudation of aliphatic glucosinolates can affect the rhizospheric microbial communities (Bressan et al., 2009), while indolic glucosinolates accumulate in Arabidopsis roots upon pathogen infection (Bednarek et al., 2005). Combinations of exudates collected from Arabidopsis plants growing in vitro and applied to soil in the absence of plants revealed differential effects of phenolic compounds on the abundance of bacterial taxa. More specifically, phenolics seemed to have the biggest effect on the growth and attraction of bacterial operational taxonomic units (OTUs), followed by amino acids and sugars. A role of phenolics in affecting soil microbial diversity was also demonstrated with an Arabidopsis ABC transporter mutant (abcg30), which releases more phenolics but shows a reduced export of sugars (Badri et al., 2009).
In soil in which abcg30 plants were grown, an increased abundance of PGPR or bacteria involved in heavy metal remediation was observed compared to wild-type Col-0 plants, suggesting a role for phenolics in attracting beneficial microbes. More recent studies suggested that coumarins, which are also phenolic compounds, can shape the rhizosphere microbiome and display differential toxicity against beneficial and pathogenic microbes (Stringlis et al., 2018b; Stringlis et al., 2019a; Voges et al., 2019). Next to phenolics, more chemical players have been found to contribute to the balance between roots and the microbiome, including benzoxazinoids (Hu et al., 2018; Cotton et al., 2019), triterpenes (Huang et al., 2019), and camalexin (Koprivova et al., 2019). Other naturally occurring exudates, like flavonoids and strigolactones, act as signaling compounds for the establishment of the well-characterized symbiotic interactions of plant hosts with rhizobia and AMF (Akiyama et al., 2005; Subramanian et al., 2007). Moreover, border cells and border-like cells, which form an extra root layer between the root tip and the soil, have been shown to affect a group of soilborne bacteria because of proteins synthesized and released by them (Driouich et al., 2013). Arabinogalactan proteins were identified among the secreted molecules and were found to regulate Rhizobium and Agrobacterium attachment to roots (Gaspar et al., 2004; Vicre et al., 2005; Xie et al., 2012). Different parts of the root can release a different blend of exudates that can favor colonization by selected members of the microbiome (Baetz and Martinoia, 2014). Studies using modern techniques like microfluidics and bacterial biosensors responsive to selected root exudates have revealed the preferential colonization of the root elongation zone and of lateral roots by bacteria of the genera Bacillus and Rhizobium (Massalha et al., 2017; Pini et al., 2017).

Structural Root Defenses and the Microbiome
Plants have developed various ways to restrict microbial growth and colonization of plant tissues once microbes overcome niche competition with other microbes in the rhizosphere and can successfully grow on root exudates. In leaves, an armory of structural and chemical defense mechanisms has evolved to prevent disease caused by the colonization of harmful microbes inside plant tissues (Senthil-Kumar and Mysore, 2013). These structural defense components include the cuticle, lignin, suberin, and the deposition of callose, and they are also present in the roots. Roots are plant organs characterized by a radial organization, where each concentric layer corresponds to a different tissue (Wachsman et al., 2015). Lignin fortifies the xylem of Arabidopsis roots (Van De Mortel et al., 2008; Naseer et al., 2012) and, going outwards from the root core, lignin-composed Casparian strips (CS) and the hydrophobic polymer suberin make the endodermis a barrier between the xylem and the soil (Naseer et al., 2012; Geldner, 2013). Recognition of microbes or of microbial elicitors can induce callose deposition in the epidermal cells of the root (Millet et al., 2010; Jacobs et al., 2011; Hiruma et al., 2016). Finally, cutin, a waxy polymer of the cuticle coating the epidermis, has barrier-like properties similar to suberin and is present in the primary and lateral roots (Berhin et al., 2019). Evidence suggests that plant defense components exert some selective pressure on the microbes that can colonize the inner tissues of the root.
The first seminal studies in the root microbiome field demonstrated that the endosphere microbiota is a fraction of the rhizosphere microbiota, and structural defense components might have a role in this observation (Bulgarelli et al., 2012; Lundberg et al., 2012). Other structural modifications of the root system, like the emergence of lateral roots or the formation of root hairs, might be involved in creating micro-niches that host distinct subsets of the root microbiota. A study in barley comparing wild-type plants and mutant plants for root hair formation revealed that the microbial community in root hair mutants was simpler and less diverse compared to the microbial communities assembled in the roots of wild-type barley plants (Robertson-Albertyn et al., 2017). Despite the presence of structural defense components in roots and their dynamic contribution to plant growth, information on their role in the assembly of the root microbiome is still limited.

Interplay Between Plant Immunity and the Microbiome
Root Immune System
As already mentioned in this review, soil microbial populations consist of a mix of beneficial and pathogenic microbes. Hence, plants need to successfully recognize them and subsequently reprogram their defense strategies to allow or block their colonization (Yu et al., 2019a). To effectively and timely perceive microbial signals, plants have evolved a multilayered detection system that leads, depending on the trigger, to the activation of downstream defense responses (Dodds and Rathjen, 2010). In the first layer of this defense system, surface-localized pattern recognition receptors (PRRs) perceive conserved microbe-derived molecules, called microbe-associated molecular patterns (MAMPs). [Table 1 entry displaced here: whitefly infestation elicited SA and JA signaling in above- and belowground tissues and overexpression of PR genes in the roots, resulting in a differential microbiome assembly that induced resistance against Xanthomonas axonopodis pv.] In Arabidopsis, some MAMP/PRR pairs are well defined (Couto and Zipfel, 2016). Bacterial flagellin and the immunogenic epitope of flagellin, flg22, are perceived by the receptor kinase FLAGELLIN-SENSING 2 (FLS2) (Gomez-Gomez and Boller, 2000), while ELONGATION FACTOR-TU RECEPTOR (EFR) recognizes bacterial elongation factor Tu and its derived immunogenic peptide elf18 (Kunze et al., 2004). Additionally, CHITIN ELICITOR RECEPTOR KINASE 1 (CERK1) and LYSIN MOTIF CONTAINING RECEPTOR-LIKE KINASE 5 (LYK5) recognize hepta- or octamers of the fungal elicitor chitin (Miya et al., 2007; Cao et al., 2014). The recognition of a MAMP leads to the induction of immune responses in the host plant that constitute the first layer of defense, referred to as MAMP-triggered immunity (MTI). Based on their timing, the activated immune responses range from instant [medium alkalization, oxidative burst (ROS), protein phosphorylation] and early (ethylene biosynthesis, defense gene activation) to late (callose deposition and growth inhibition) (Boller and Felix, 2009). All these processes aim to halt any further growth of a microbe on/in plant tissues and have been elucidated by the extensive study of pathogen perception in the aerial plant tissues.
During the last decade, many studies have shown that roots can perceive MAMPs and generate MAMP-specific responses such as callose deposition, camalexin biosynthesis, and induction of defense-related genes, similar to leaves (Millet et al., 2010; Jacobs et al., 2011; Wyrsch et al., 2015; Poncini et al., 2017; Stringlis et al., 2018a; Marhavy et al., 2019). Constitutive activation of PRRs in microbe- and elicitor-enriched environments like roots and the surrounding rhizosphere could result in unnecessary MTI that in turn could cause growth and yield inhibition in plants (Gomez-Gomez et al., 1999; Vos et al., 2013). For this reason, researchers have aimed to define the involvement of different plant organs in flg22 perception by its receptor FLS2 (Beck et al., 2014) and the contribution of different root tissues to the induction of MTI upon flg22 elicitation (Wyrsch et al., 2015). Interestingly, inner tissues show higher expression of the FLS2 receptor and stronger MAMP responses (ROS production and induction of defense genes) than epidermal tissues. However, it is not only the plant side that adapts to the presence of MAMPs; the microbes themselves adapt to the presence of PRRs. Only a small fraction of the genomes of the culturable microbiome of Arabidopsis (3%-6%) contains genes coding for flg22 or elf18 peptides, while the peptide cold shock protein 22 (csp22), recognized by Solanaceae but not by Arabidopsis, is present in 25% of the isolated Arabidopsis-associated microbes (Hacquard et al., 2017). This suggests that the presence of PRRs in roots exerts a selective pressure on root-associated microbes, which need to develop mechanisms to mask the presence of their MAMPs and achieve colonization. Some PRRs can also identify "self" molecules known as host-derived damage-associated molecular patterns (DAMPs). In response to cellular rupture by nematodes or fungal attack, DAMPs are released and can induce strong tissue-specific responses in the roots of Arabidopsis (Poncini et al., 2017; Marhavy et al., 2019). Considering the potential of DAMPs to induce stronger defense responses in roots than MAMPs (Poncini et al., 2017), a role for them in the assembly of the root microbiome and in how plants discriminate between beneficial and pathogenic root colonizers can be expected.

Suppression of Root Defenses by Beneficial Microbes

Signaling pathways of the defense hormones SA and JA have long been involved in the responses of plants to infection by pathogens or colonization by beneficial microbes (Zamioudis and Pieterse, 2012; Pieterse et al., 2014), and studies using mutants in these hormonal pathways have demonstrated their role in shaping the root microbiome (Carvalhais et al., 2015; Lebeis et al., 2015). Beneficial members of the root microbiota have developed different strategies to suppress MTI and/or manipulate the homeostasis of defense hormones to achieve colonization and provide their host with benefits (Yu et al., 2019a). The symbiotic mycorrhizal fungus Rhizophagus irregularis and the ectomycorrhizal fungus Laccaria bicolor secrete mutualism effectors that manipulate the ET and JA hormonal signaling pathways (Kloppholz et al., 2011; Plett et al., 2011; Plett et al., 2014a; Plett et al., 2014b), while effectors of the endophytic fungus Serendipita indica target JA signaling to achieve defense suppression (Jacobs et al., 2011; Akum et al., 2015). JA signaling is also upregulated by the PGPF Trichoderma spp. to suppress the activation of immune responses during early colonization of the root (Brotman et al., 2013).
Beneficial bacteria employ different strategies to manipulate the host and accomplish colonization. The type III secretion system (T3SS) is important in the establishment of symbiosis between rhizobia and their legume partners. T3SS is a multicomponent apparatus that Gram-negative bacteria, mostly pathogenic ones, use to secrete effector molecules into host cells in order to restrict the defense responses mounted upon their recognition and achieve host colonization (Galan and Collmer, 1999). Sinorhizobium fredii HH103 with a defective T3SS is unable to suppress SA-dependent defenses and subsequently fails to promote nodulation on its legume host (Jimenez-Guerrero et al., 2015). Non-symbiotic PGPR such as Pseudomonas fluorescens SBW25, Pseudomonas brassicacearum Q8r1-96, and Pseudomonas simiae WCS417, as well as other root-associated pseudomonads, are also equipped with a T3SS; however, its role in root colonization remains elusive (Preston et al., 2001; Mavrodi et al., 2011; Loper et al., 2012; Berendsen et al., 2015; Stringlis et al., 2019b). Nevertheless, beneficial microbes can employ other mechanisms, independent of secretion systems, to mask their presence in the rhizosphere. The pathogenic bacteria Pseudomonas aeruginosa and Pseudomonas syringae release the extracellular alkaline protease AprA, which degrades flagellin monomers and allows these microbes to keep their MAMPs undetected by the immune systems of both mammals and plants (Bardoel et al., 2011; Pel et al., 2014). Plant-beneficial bacteria have AprA homologs in their genomes, so a role of this protease in their interaction with roots is possible (Pel et al., 2014). More recently, Yu et al. (2019b) suggested another mode of plant manipulation in which beneficial rhizobacteria of the genus Pseudomonas produce organic acids during root colonization that lower the environmental pH and in turn suppress the root immune responses that follow recognition of the flg22 peptide.

Building Up of Disease Suppressiveness

Soil microbial communities silently provide their valuable services in terrestrial ecosystems by increasing ecosystem resilience, making soil more resistant to disturbance-induced damage caused by environmental change (Berendsen et al., 2012). Disease suppression is a well-known microbiome-mediated phenomenon that provides a first line of defense against infections by soilborne pathogens (Weller et al., 2002). Disease-suppressive soils have originally been defined as "soils in which the pathogen does not establish or persist, establishes but causes little or no damage, or establishes and causes disease for a while but thereafter the disease is less important, although the pathogen may persist in the soils" (Baker and Cook, 1974). In contrast, in conducive soils the disease occurs readily. Two types of soil suppressiveness have been characterized: "general" and "specific" suppression. In general suppression, growth and activity of pathogens are inhibited to some extent, and the suppressiveness is attributed to the antagonistic activity of the collective microbial community, often associated with competition for available resources (Mazzola, 2002; Weller et al., 2002; Cook, 2014). General suppressiveness is enhanced by the incorporation of organic amendments or other management practices that increase the total microbial activity and competition in the soil (Weller et al., 2002; Bonanomi et al., 2010). It is often effective against a broad range of pathogens and is not transferable between soils (Cook and Rovira, 1976; Weller et al., 2002).
General suppressiveness is a pre-existing characteristic of soils and is fundamentally microbiological in nature (Weller et al., 2002; Raaijmakers and Mazzola, 2016). Specific suppression occurs when individual species or specific subsets of soil microorganisms interfere with the infection cycle of a pathogen (Weller et al., 2002; Berendsen et al., 2012). The biotic nature of specific suppression is demonstrated by the fact that it can be eliminated through soil pasteurization or biocides. In contrast to general suppressiveness, specific suppressiveness can be transferred by introducing small amounts (1%-10%) of suppressive soil into a conducive soil (Cook and Rovira, 1976; Mendes et al., 2011; Raaijmakers and Mazzola, 2016; Schlatter et al., 2017). Specific suppression is superimposed on general suppression and is more effective (Berendsen et al., 2012). In some soils, specific suppression is retained for prolonged periods even when soils are left bare, whereas in other soils it is induced by continuous monoculture of a susceptible host after a disease outbreak (Berendsen et al., 2012; Raaijmakers and Mazzola, 2016). Induction of specific suppression requires multilateral interactions between plants, the soil microbiome, and pathogens, and it is mechanistically complex. The interaction between plant and pathogen that occurs before a disease outbreak may induce the release of pathogen- or plant-derived metabolites that lead to alterations in microbiota composition and the activation of pathogen-suppressive microorganisms (Chapelle et al., 2016). In recent years, many studies using new culture-independent technologies have started to unravel the identity of the responsible microorganisms in disease-suppressive soils (Gomez Exposito et al., 2017). For instance, suppressiveness towards Verticillium dahliae was mainly associated with higher abundances of Actinobacteria and Oxalobacteraceae (Cretoiu et al., 2013). Another study, focused on fungi, revealed significant differences in fungal community composition between soils suppressive and non-suppressive for the disease caused by R. solani AG 8; Xylaria, Bionectria, and Eutypa were more abundant in the suppressive soil, whereas Alternaria and Davidiella dominated the non-suppressive soil (Penton et al., 2014). Also, higher abundances of the phyla Actinobacteria, Proteobacteria, Acidobacteria, Gemmatimonadetes, and Nitrospirae were found in soil with specific suppressiveness to Fusarium wilt of strawberry (Cha et al., 2016). More recently, it was shown that fungal and bacterial diversity differed significantly between a suppressive and a conducive soil for Fusarium wilt, and several of the fungal and bacterial genera known for their activity against F. oxysporum were detected exclusively or more abundantly in the Fusarium wilt-suppressive soil (Siegel-Hertz et al., 2018). Interestingly, studies analyzing the rhizobacterial community composition in soils suppressive or conducive to R. solani revealed that the relative abundance of specific bacterial taxa is a more important indicator of suppressiveness than the exclusive presence or absence of specific bacterial families (Mendes et al., 2011; Chapelle et al., 2016). In a study by Hu et al. (2016), defined Pseudomonas species consortia were introduced into naturally complex microbial communities to assess the importance of Pseudomonas community diversity for the suppression of R. solanacearum in the tomato rhizosphere.
Only the densest and most diverse Pseudomonas communities reduced pathogen density in the rhizosphere and decreased disease incidence, due to both intensified resource competition and direct interference with the pathogen. Recently, Wei et al. (2019) demonstrated that the composition and functioning of the initial soil microbiome predetermine the future disease outcome of R. solanacearum on tomato plants. Plant survival was associated with specific bacterial species, including the highly antagonistic Pseudomonas and Bacillus bacteria together with specific rare taxa. The mechanism behind the suppression could be the production of antibiotics, as a high abundance of genes encoding non-ribosomal peptide and polyketide synthases was found in the initial microbiomes associated with healthy plants. Intriguingly, they also demonstrated that this capacity can be transferred to the next generation of plants through soil transplantation, opening a new avenue for exploiting microbiomes for disease resistance.

Microbiome Modulation by Coumarins, Benzoxazinoids, and Other Root-Exuded Molecules

Coumarins

Coumarins are phenolic compounds produced via the phenylpropanoid pathway and have been extensively studied for their role in disease resistance but also for their involvement in the responses of dicotyledonous plants to iron deficiency (Tsai and Schmidt, 2017a). Coumarins are produced when iron is unavailable in the soil around the roots, and their exudation increases to make iron more available before it is imported into the roots (Tsai and Schmidt, 2017b; Tsai and Schmidt, 2017a). Coumarins with pronounced production and exudation in response to iron deficiency are scopolin, scopoletin, esculin, esculetin, fraxetin, and sideretin (Jin et al., 2007; Rodriguez-Celma et al., 2013; Fourcroy et al., 2014; Schmid et al., 2014; Schmidt et al., 2014; Fourcroy et al., 2016; Rajniak et al., 2018; Tsai et al., 2018). Recent studies have suggested a role for coumarins also in shaping the microbiome composition around the roots (Stringlis et al., 2018b; Voges et al., 2019). Stringlis et al. (2018b) showed that both under iron deficiency and during colonization of roots by beneficial, ISR-inducing rhizobacteria, there is increased accumulation of coumarins inside the roots. Key components of coumarin production and exudation in this study were genes with a central role in ISR, such as the root-specific transcription factor MYB72 and the beta-glucosidase gene BGLU42 (Verhagen et al., 2004; Zamioudis et al., 2014; Stringlis et al., 2018b). More specifically, in myb72 mutant plants no coumarin accumulation was observed inside the roots, while in bglu42 mutant plants the exudation of the coumarin scopoletin was reduced. Analysis of the rhizosphere microbiomes of these mutant plants, the coumarin biosynthesis mutant f6'h1 (Kai et al., 2008; Schmid et al., 2014), and wild-type plants revealed that coumarins can affect the composition of the microbiome around the roots (Stringlis et al., 2018b). The relative abundance of Proteobacteria increased and that of Firmicutes decreased in the f6'h1 rhizosphere compared to the wild-type rhizosphere. Further experiments showed that the coumarin scopoletin inhibited the growth of soilborne pathogens, whereas ISR-inducing rhizobacteria were insensitive to its antimicrobial activity (Stringlis et al., 2018b; Stringlis et al., 2019a). Voges et al.
(2019) showed that coumarins can shape the composition of a synthetic bacterial community inoculated onto in vitro-grown plants, with a Pseudomonas strain being enriched in f6'h1 compared to wild-type plants growing under iron deficiency. In this study, it was suggested that the antimicrobial effect of the catecholic coumarins fraxetin and sideretin, produced downstream of scopoletin (Rajniak et al., 2018; Tsai et al., 2018), is due to the hydrogen peroxide derived from catecholic coumarins under conditions of iron deficiency (Voges et al., 2019).

Benzoxazinoids

Benzoxazinoids are a class of compounds, quite abundant in the roots of maize, with a documented role in the attraction of beneficial microbes to the rhizosphere (Neal et al., 2012) and in the defense responses of plants to various pathogenic threats (Ahmad et al., 2011). Recently, studies have focused on characterizing how benzoxazinoids shape the assembly of root-associated bacterial and fungal communities (Hu et al., 2018; Cotton et al., 2019). Hu et al. (2018), using a benzoxazinoid-deficient maize mutant, observed that different bacterial and fungal communities assemble in the roots of the mutants compared to wild-type maize. Despite the prominent changes in the bacterial and fungal microbiome, the authors did not assess the effects of benzoxazinoids on specific bacterial or fungal taxa. Release of benzoxazinoids and the subsequent microbiome changes were sufficient to provide plants of a next generation growing in this soil with protection against a herbivorous insect. Next-generation maize plants growing in soil with and without benzoxazinoids displayed distinct bacterial and fungal communities both in the root and in the rhizosphere. Actinobacteria OTUs and some Ascomycota and Glomeromycota OTUs were mostly responsible for root and rhizosphere separation, but the effects on plant fitness were more strongly associated with changes in bacteria than in fungi in the rhizosphere of these next-generation plants (Hu et al., 2018). A subset of Proteobacteria increased in soils with benzoxazinoids, while Chloroflexi OTUs were enriched in soils without benzoxazinoids. In the case of fungal communities, Ascomycota OTUs were present in soils both with and without benzoxazinoids. Interestingly, Glomeromycota OTUs seemed to be less abundant in soils with benzoxazinoids. In the study by Cotton et al. (2019), the effect of benzoxazinoids on the metabolomic profile of roots and on microbiome assembly was assessed. The metabolomic profiles of mutants in benzoxazinoid production differed from those of wild-type plants, indicating a role of benzoxazinoids in the metabolic response of maize roots. The microbiome analysis revealed enrichment or depletion of bacterial and fungal OTUs between the rhizospheres of wild-type and mutant plants, and the authors correlated the changes in microbial abundance with metabolites present in the roots of wild-type and mutant plants (Cotton et al., 2019). Studies like those presented herein on coumarins and benzoxazinoids enrich our understanding of how specific exudates shape root-associated microbial communities, and unlocking how a beneficial microbiome can be selected via exudation could allow us to breed plants that manipulate their microbiome to maximize growth and health benefits (Vannier et al., 2019).
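At their simplest, the mutant-versus-wild-type comparisons described above reduce to per-taxon differential abundance tests between rhizosphere samples. The following is a minimal, hedged sketch of such a test in Python; the taxa and abundance values are hypothetical placeholders, not data from the cited studies, and a real analysis would additionally correct for multiple testing and use compositional-data-aware methods.

```python
# Per-taxon abundance comparison between wild-type and mutant rhizospheres.
# All sample values below are simulated placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
taxa = ["Proteobacteria", "Firmicutes", "Chloroflexi", "Ascomycota"]
# Hypothetical relative abundances (%) for 6 wild-type and 6 mutant samples.
wild_type = {t: rng.normal(25, 5, 6) for t in taxa}
mutant = {t: rng.normal(25, 5, 6) for t in taxa}

for taxon in taxa:
    stat, p = mannwhitneyu(wild_type[taxon], mutant[taxon])
    direction = "up" if mutant[taxon].mean() > wild_type[taxon].mean() else "down"
    # In a real study, p-values would be adjusted (e.g., Benjamini-Hochberg).
    print(f"{taxon}: {direction} in mutant, U={stat:.1f}, p={p:.3f}")
```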
Triterpenes and Camalexin

As already mentioned in the section Effect of Root Exudates on Root-Associated Microbiome, triterpenes and camalexin were recently found to be involved in microbiome shaping (Huang et al., 2019; Koprivova et al., 2019). Triterpenes are products of plant metabolism with a role in disease resistance and with antimicrobial activity (Papadopoulou et al., 1999). They are synthesized via the mevalonate pathway and can accumulate in plant tissues as triterpene glycosides (Thimmappa et al., 2014). Huang et al. (2019) observed that the triterpenes thalianin and arabidin are produced in roots and that the biosynthetic genes for their production are induced following treatment of roots with MeJA. Microbiome analysis of thalianin and arabidin mutants and wild-type plants revealed the assembly of distinct root microbial communities in the absence of triterpenes. These differences were explained by the enrichment of Bacteroidetes and the depletion of Deltaproteobacteria in the roots of triterpene mutants compared with the roots of wild-type plants (Huang et al., 2019). In the study of Koprivova et al. (2019), the authors performed a genome-wide association study (GWAS) and measured microbial sulfatase activity in the soil in which 172 accessions of Arabidopsis were grown. Through this screen, the authors found single-nucleotide polymorphisms (SNPs) explaining differences in microbial sulfatase activity. Some of these SNPs were located in the gene CYP71A27, and a mutant of this gene displayed reduced microbial sulfatase activity and impaired production of the antimicrobial compound camalexin. Interestingly, the authors observed that beneficial rhizobacteria could promote growth in wild-type plants, but only beneficial rhizobacteria without sulfatase activity could promote growth in cyp71a27 mutants. The fact that the beneficial rhizobacterium Pseudomonas sp. CH267 could promote growth in wild-type plants but not in nine Arabidopsis accessions with variation in the amino acid sequence of CYP71A27 suggested that camalexin is required in the interaction of roots with microbes in order for the plants to benefit (Koprivova et al., 2019).

"Cry for Help" During Infection of Plants

Plants experiencing infection by phytopathogens or insects actively recruit beneficial members from the rhizosphere microbiota that help them overcome biotic stresses, a phenomenon defined as "cry for help" (Bakker et al., 2018). Studies have shown that the build-up of a beneficial microbial community in the root is mediated by changes in gene expression and alterations in root exudation in response to pathogen attack (Figure 1). Rudrappa et al. (2008) showed that infection of Arabidopsis leaves by Pseudomonas syringae pv. tomato (Pst) induced the root exudation of malic acid, which in turn favored the recruitment of the beneficial Bacillus subtilis strain FB17, a strain that triggers ISR in Arabidopsis against Pst. Tomato plants experiencing different stresses produced exudates that acted as chemoattractants for the beneficial fungus Trichoderma harzianum (Lombardi et al., 2018). Other studies have shown that aphid feeding or whitefly infestation of pepper and tobacco leaves can cause transcriptional reprogramming in roots and changes in root microbiome composition, which makes plants more resistant to foliar and soilborne pathogens (Yang et al., 2011; Lee et al., 2012; Lee et al., 2018). Recently, Berendsen et al.
(2018) demonstrated that Arabidopsis leaf infection by the biotrophic oomycete Hyaloperonospora arabidopsidis (Hpa) can lead to the enrichment of three bacterial taxa (Xanthomonas spp., Stenotrophomonas spp., and Microbacterium spp.) in the rhizosphere. Isolation of these microbes and inoculation of Arabidopsis showed that the three microbes together could induce ISR against Hpa and promote plant growth, indicating the active recruitment of beneficial microbes by infected plants. Microbiome changes were also apparent in Arabidopsis infected with Pseudomonas syringae, and those changes were attributed to changes in root exudation (Yuan et al., 2018). In these studies, the beneficial effect on plant health due to microbiome changes could be transferred to the offspring of the infected plants, which displayed increased levels of resistance to these pathogens (Yuan et al., 2018). These findings indicate that in soils with infected plants, changes in exudation and the microbiome lead to the build-up of a microbial legacy that is inherited by the next generations of plants growing in this soil and favors their survival under phytopathogenic pressure (Bakker et al., 2018). Considering the continuity of plant-pathogen interactions during the lifetime of a plant in a field, a functional "loop" should be in action: when plants experience stress, they respond with changes in exudation that can favor the selection of beneficial microbial members from the rhizosphere, which in turn can help the plants deal with the stress (Liu et al., 2019a). Future studies should elucidate how different exudates contribute to the microbial recruitment and the subsequent soilborne legacy described above, considering the involvement of coumarins (Stringlis et al., 2018b; Stringlis et al., 2019a), malic acid (Rudrappa et al., 2008), benzoxazinoids (Hu et al., 2018; Cotton et al., 2019), and camalexin (Koprivova et al., 2019) in the selection of beneficial microbes in the rhizosphere.

Rhizosphere Microbiome as a Source of Benefits for the Plant

Beneficial Effects Against Biotic Stresses

It is well documented that plant genotype exerts a strong influence on the overall composition of root-associated communities through root exudates (Bulgarelli et al., 2012; Badri et al., 2013; Matthews et al., 2019). Recent evidence suggests that root exudates attract beneficial and pathogen-suppressing microbes or reshape microbiome assembly in the plant rhizosphere to suppress disease symptoms (Kwak et al., 2018; Mendes et al., 2018). The study of Mendes et al. (2018), using common bean cultivars with variable levels of resistance, showed that rhizobacteria belonging to the Pseudomonadaceae, Bacillaceae, Solibacteraceae, and Cytophagaceae families were more abundant in the rhizosphere of the Fusarium-resistant cultivar. Kwak et al. (2018) analyzed the rhizosphere microbiomes of tomato varieties resistant or susceptible to the soilborne pathogen R. solanacearum to assess the role of plant-associated microorganisms in disease resistance and proved that transplantation of rhizosphere microbiota from resistant plants suppressed disease symptoms in susceptible plants. By comparing the metagenomes of the rhizospheres from resistant and susceptible plants, a flavobacterial genome was identified as far more abundant in the resistant-plant rhizosphere. The isolated flavobacterium could suppress R. solanacearum in pot experiments with a susceptible tomato variety, suggesting that selection of native microbiota can protect plants from root pathogens.
Recently, it was shown that in natural populations of Arabidopsis, plants are protected against root-inhabiting filamentous eukaryotes by the presence of the co-residing bacterial root microbiota, which is essential for plant survival. In another microbiome study, the occurrence of potato common scab caused by Streptomyces was correlated with the composition and putative function of the soil microbiome (Shi et al., 2019). The community composition of the geocaulosphere soil samples revealed that Geobacillus, Curtobacterium, and unclassified Geodermatophilaceae were the most abundant genera significantly negatively correlated with scab severity level, the estimated absolute abundance of pathogenic Streptomyces, and txtAB gene copy number (the biosynthetic gene of the scab phytotoxin). In contrast, Variovorax, Stenotrophomonas, and Agrobacterium were the most abundant genera positively correlated with these three parameters (a minimal sketch of this kind of taxon-phenotype correlation analysis is given below). Direct pathogen suppression by rhizospheric microorganisms has been extensively reported (Mendes et al., 2011; Santhanam et al., 2015; Cha et al., 2016; Hu et al., 2016). Pathogen growth is affected by several highly diverse mechanisms, including microbial competition for resources or space (Zelezniak et al., 2015), secretion of antimicrobial compounds (Helfrich et al., 2018; Stringlis et al., 2018b; Koprivova et al., 2019), and hyperparasitism (Parratt and Laine, 2018). As mentioned previously, members of the rhizosphere microbiome can alter plant growth by producing phytohormones that modulate endogenous plant hormone levels (Stringlis et al., 2018c). In a recent study, two synthetic microbial communities were designed, consisting of bacterial strains that show ACC deaminase activity, produce an array of hormones and enzymes in vitro, and display antimicrobial activity against F. oxysporum f. sp. lycopersici. Inoculation of these synthetic communities into a poor substrate enhanced the growth of tomato plants and reduced the symptoms caused by F. oxysporum f. sp. lycopersici (Tsolakidou et al., 2019a). In another study, endophytic Enterobacteriaceae strains engineered to express ACC deaminase activity on their cell walls did not show any activity against a pathogenic strain of Fusarium oxysporum f. sp. cubense in vitro. However, they promoted banana plant growth and increased resistance to banana Fusarium wilt, suggesting that engineering the interactions between plants and their microbiome can provide valuable tools to deal with plant pathogens that are difficult to control. Pathogenic microbes can employ strategies similar to those of beneficial microbes to colonize their hosts. For example, overexpression of an ACC deaminase gene in V. dahliae significantly lowered ACC levels in the roots of infected tomato plants and increased both its virulence and the fungal biomass in the vascular tissues of the plants (Tsolakidou et al., 2019b). Therefore, future studies need to address how functions shared by both beneficial and pathogenic microbes are perceived by plants and how plants can maintain a balance in the rhizosphere.

Beneficial Effects Against Abiotic Stresses

Accumulating evidence suggests that the rhizosphere microbiome is not only involved in coping with biotic stresses but also in protecting plants against abiotic stresses (Figure 1). Rhizosphere bacteria have been shown to elicit so-called induced systemic tolerance to high salinity, drought, and nutrient deficiency or excess (Yang et al., 2009; Rolli et al., 2015).
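To make the kind of taxon-phenotype correlation analysis used in the common scab study concrete, here is a minimal, hedged sketch. The genus names echo the study, but all values are simulated placeholders rather than data from the cited work.

```python
# Correlate genus abundance with a disease severity score (Spearman),
# as is commonly done in suppressiveness/severity surveys.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
scab_severity = rng.uniform(0, 5, 20)  # hypothetical severity scores, 20 plots
genera = {
    # Hypothetical abundances; built to mimic protective vs. pathogen-associated trends.
    "Geobacillus": 10 - 1.5 * scab_severity + rng.normal(0, 1, 20),
    "Variovorax": 2 + 1.2 * scab_severity + rng.normal(0, 1, 20),
}

for genus, abundance in genera.items():
    rho, p = spearmanr(abundance, scab_severity)
    print(f"{genus}: rho={rho:.2f}, p={p:.3g}")
```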
A recent study found a diverse range of root-associated bacteria of soybean and wheat, including Pseudomonas spp., Pantoea spp., and Paraburkholderia spp., displaying mechanisms involved in improved nutrient uptake, growth, and stress tolerance, such as phosphate solubilization, nitrogen fixation, and indole acetic acid and ACC deaminase production (Rascovan et al., 2016). Accumulation of heavy metals, hydrocarbons, and pesticides in soil can deteriorate soil properties, negatively impact plant growth, or make the plant unsuitable for consumption (Kuiper et al., 2004). Interestingly, Sessitsch et al. (2012) found an enrichment of microbial functions for the degradation of aromatic compounds in the metagenomes of endophytes, highlighting a potential for bioremediation. Understanding how microbiome dynamics and functions change in response to perturbations can open new avenues for engineering microbial communities also for bioremediation purposes (Perez-Garcia et al., 2016; Eng and Borenstein, 2019). Indeed, soil tillage and compost amendment of contaminated soils could stimulate the indigenous microbial communities that are naturally adapted to the pollutants of these soils (Ventorino et al., 2019). In another study, the modification of the microbiota assemblage following the introduction of a natural and diverse microbiome transplant into an oil-contaminated soil led to more efficient contaminant degradation than the introduction of an artificial microbial selection (Bell et al., 2016). Phytoremediation is the use of plants to extract, sequester, or detoxify pollutants. This practice is often associated with microbial bioremediation, since the presence of plants can stimulate the microbial population in the rhizosphere, improve the physical and chemical properties of the soil, and increase contacts between microbes and soil contaminants (Kuiper et al., 2004). In a recent work, Fan and colleagues found that inoculation of Robinia pseudoacacia with rhizobia significantly affected the rhizosphere microbial population and functions and also improved the phytoremediation capacity of the plants (Fan et al., 2018).

Plant Microbiome as a Source of Variability in Plant Breeding

The efforts of plant breeding practices have always been directed towards the selection of desirable phenotypic traits, such as higher yield associated with improved edible characteristics. This domestication process progressively led to the loss of allelic diversity, also known as the genetic erosion of domesticated plants (Perez-Jaramillo et al., 2016; Pieterse et al., 2016). Recent studies indicated that in several plant species the rhizosphere microbiome composition may have been affected in domesticated plants compared with their wild relatives (Perez-Jaramillo et al., 2017; Perez-Jaramillo et al., 2018; Pérez-Jaramillo et al., 2019). For common bean, it was shown that the relative abundance of Bacteroidetes was increased in wild accessions, whereas Actinobacteria and Proteobacteria were enriched in modern accessions, and this shift was associated with plant genotypic and specific root morphological traits (Perez-Jaramillo et al., 2017). Interestingly, the transition of common bean from a native to an agricultural soil led to a gain of rhizobacterial diversity and to a stronger effect of the bean genotype on rhizobacterial assembly (Pérez-Jaramillo et al., 2019).
In a study using 33 sunflower (Helianthus annuus) strains with varying degrees of domestication, it was found that rhizosphere fungal communities were more strongly influenced by host genetic factors and plant breeding than bacterial communities. The authors also found that there was minimal vertical transmission of fungi from seeds to adult plants (Leff et al., 2017). A survey of the bacterial community structure of three barley accessions also pointed to a small but significant role of the host genotype in root-associated community composition (Bulgarelli et al., 2015). Perez-Jaramillo et al. (2018) conducted a meta-analysis integrating metagenomics data from six independent studies to address whether plant domestication affected the composition of the root-associated microbiome in various crop plant species, and observed a consistent enrichment of Actinobacteria and Proteobacteria in modern varieties in contrast to the enrichment of Bacteroidetes in their wild relatives. This evidence indicates that modern agriculture may not utilize the full potential the associated microbiome has to offer. In this framework, wild relatives have been suggested to provide new perspectives on plant genes associated with microbiome assembly, and this knowledge could open new horizons for future breeding strategies (Perez-Jaramillo et al., 2018).

Engineering Microbial Inoculants to Suppress Disease and Support Plant Growth: From the Lab to the Field

The Prospect of Using Synthetic Communities to Promote Plant Health

The successful application of microbial consortia as inoculants to protect plants from stresses and enhance their productivity relies mainly on the ability of microorganisms that show promise in the lab to overcome hurdles and retain their characteristics when applied in the field. The rationale behind this strategy is twofold: the selection and combination (i) of distantly related microorganisms with different or complementary characteristics tailored to promote plant growth, suppress pathogens, or tolerate different plant genotypes or environmental conditions (Compant et al., 2019), or (ii) of closely related strains, in order to expand the diversity of resources that these strains use (Wei et al., 2015; Hu et al., 2016). Species-rich communities are often more efficient and more productive than species-poor communities, as they use limiting resources more efficiently (Loreau et al., 2001). For instance, the introduction of high-diversity Pseudomonas consortia reduced R. solanacearum density in the rhizosphere of tomato plants and decreased disease incidence due to interference and intensified resource competition with the pathogen. Interestingly, increasing the diversity of the introduced Pseudomonas consortia also increased their survival (Hu et al., 2016). Furthermore, increasing the richness of Pseudomonas consortia resulted in enhanced accumulation of plant biomass and more efficient assimilation of nutrients in tomato plants; diversity effects were more important than the identity of the Pseudomonas strain, and the observed plant growth promotion was associated with elevated production of plant hormones and siderophores and solubilization of phosphorus in vitro (Hu et al., 2017). In contrast, increasing the genotypic richness of P. fluorescens communities disproportionately increased antagonistic interactions, causing community collapse and resulting in loss of protection of Medicago sativa against the oomycete Pythium ultimum (Becker et al., 2012).
It was recently proposed that microbial synthetic communities can be used as inoculants to produce plant growth substrates with desired characteristics, such as biocontrol of targeted pathogens and plant growth promotion (Tsolakidou et al., 2019a). The composition of the synthetic communities was a determining factor for plant growth and pathogen inhibition. The synthetic community consisting of different bacterial genera promoted the growth of tomato plants but failed to protect them against Fusarium wilt. The synthetic community consisting of Bacillus isolates suppressed Fusarium wilt symptoms and enhanced tomato growth, but to a lesser extent than the more diverse synthetic community (Tsolakidou et al., 2019a). A substantial number of studies suggest that complex inocula can provide plants with greater disease resistance and growth promotion than single strains (Rolli et al., 2015; Santhanam et al., 2015; Wei et al., 2015; Molina-Romero et al., 2017; Niu et al., 2017; Berendsen et al., 2018; Tsolakidou et al., 2019a). Bacterial strains that show little or no effect as single inoculants can exhibit plant growth-promoting effects when used in a consortium (Raaijmakers and Weller, 1998; Berendsen et al., 2018). The prospect of using microbial mixtures as plant inoculants that can positively affect plant properties is an emerging field of research (Figure 2). However, the complexity of experimentation increases exponentially when using synthetic microbial communities instead of single-strain inoculants. Thus, the successful implementation of microbial consortia with desired host outputs will depend on our understanding of how microorganisms interact with one another and with their hosts in natural ecosystems. In this direction, synthetic microbial communities have been widely adopted in fundamental plant microbiome research as a reductionist approach to simplify, and especially to control, each component of this complex system (Lebeis et al., 2015; Finkel et al., 2019). Indeed, as cleverly postulated by Vorholt and colleagues (2017), the true strength of a synthetic community is that each member can be singly added or substituted, and this can even be accomplished at a functional level by silencing or expressing specific genes. However, controlling each member of a large community would lead to a factorial number of possible combinations, making the system impossible to control exhaustively. Recently, Paredes and colleagues (2018) developed a machine-learning computational approach to design a bacterial synthetic community. The method was based on the "cry-for-help" theory and consisted of constructing a neural-network model that received as inputs the growth rates of a pool of bacterial isolates grown on the root exudates of phosphate-starved plants, together with the phosphate content of the shoots of plants in binary interaction with each of these single bacterial isolates. This method allowed the design of a synthetic community with consistent, predictable plant phenotypes. In parallel, the construction of a synthetic community based on the "cry-for-help" concept carried out by Berendsen and colleagues (2018) followed a more plant-driven approach, in which plants effectively attracted a consortium of beneficial bacteria that in turn produced desirable plant phenotypes.
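The following is a hedged, toy-scale sketch of the Paredes-style idea: train a small neural-network regressor to map isolate growth on exudates to a plant phenotype, then rank isolates for inclusion in a synthetic community. All features, targets, and the model architecture here are illustrative assumptions, not the published pipeline.

```python
# Predict a plant phenotype (shoot phosphate content) from how each
# isolate grows on root exudates of stressed plants, then shortlist
# candidate SynCom members. All values are simulated placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_isolates = 40
# Hypothetical inputs per isolate: growth rate on exudates of
# phosphate-starved plants and on a control medium.
X = rng.uniform(0.1, 1.0, size=(n_isolates, 2))
# Hypothetical target: shoot phosphate content of plants grown in binary
# interaction with each isolate (with noise).
y = 0.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.05, n_isolates)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# Rank isolates by predicted benefit and pick the top candidates.
predicted = model.predict(X)
top = np.argsort(predicted)[::-1][:5]
print("Candidate SynCom members (isolate indices):", top)
```

In practice, such a model would be validated on held-out isolates before any candidates are carried forward to plant assays.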
These examples show that the identification of the microbes that respond most strongly to plant stress signals can be used as a reliable predictor for the discovery of beneficial microbes.

FIGURE 2 | Integration of modern technologies to engineer microbial inoculants that boost plant growth and suppress pathogens. Plants respond to stresses and change their exudation. To unravel how changes in exudation affect microbiome composition and functions, we need to couple advanced metabolomic techniques with metagenomics sequencing (A) and culture-based methodologies (B). At the same time, there is promise in the use of exometabolomics and spatial metabolomics methodologies, which can help in finding where specific exudates are produced and how the microbes around the exudation site are affected (C). In-depth analysis of the generated data will allow the characterization of the microbial communities that respond to exudates and the identification of networks that reveal how microbes interact and contribute to microbiome assembly (A). The parallel isolation of a representative fraction of the root microbiome (B) will make it possible to link descriptive data with the isolated microbes and will guide the design of synthetic communities (D). Testing these synthetic communities with different hosts under different conditions (e.g., biotic/abiotic stress, in vitro, in soil, in the field) will facilitate the selection of synthetic communities that can promote plant growth (E) and suppress pathogens (F) in a consistent and reproducible manner. The figure was designed with Biorender (https://biorender.com/).

Techniques and Workflows to Harness Plants and Engineer Beneficial Microbiomes

Engineering microbiomes to promote plant fitness and health is an emerging scientific field and an approach holding great promise for the realization of sustainable future agriculture. However, many aspects and technical limitations need to be considered to exploit this technology effectively. Here, we aim to summarize some of these considerations, which are extensively discussed in a recent review by Lawson et al. (2019). First, to unravel the mechanisms underlying the interactions between hosts and microbiomes, multiple omics techniques need to be integrated (Jansson and Baker, 2016). Metabolomics, metagenomics, plant transcriptomics, metatranscriptomics, and plant genetics are some of the approaches that, combined, can disentangle the complex interactions occurring between members of the holobiont. A thorough description of these methodologies is beyond the scope of this review, but some recent focused reviews are available for further reading (Van Dam and Bouwmeester, 2016; Levy et al., 2018; O'Banion et al., 2019; Rodriguez et al., 2019). Here, we report some examples in which the application of a multi-omics approach revealed how selected plant exudates, produced under natural or stress conditions, can affect the colonization of roots by specific microbes. Hu et al. (2018) combined metabolomics and amplicon-based metagenomics analysis of two maize genotypes (wild type and a benzoxazinoid precursor mutant) and revealed how the defense-related benzoxazinoid metabolites structure the bacterial and fungal community of the maize rhizosphere. Stringlis et al. (2018b) also exploited the combination of shotgun metagenomics and metabolomics on an array of Arabidopsis mutants to demonstrate that root exudation of coumarins can shape the rhizosphere microbiome. Similarly, Huang et al.
(2019) utilized metabolomics and metagenomics to reveal the effect that root-exuded triterpenes have on the microbiota composition of the root. Building on the work by Berendsen et al. (2018), Yuan et al. (2018) revealed the metabolic drivers of the "legacy effect" by combining metabolomics of the root exudates of infected plants with metagenomics analysis of the rhizospheres of these plants. Furthermore, in an elegant combination of exometabolomics, metagenomics, and comparative genomics, Zhalnina et al. (2018) demonstrated how the temporally dynamic exudation of root metabolites during different plant developmental stages assembles specific microbial communities and enriches for specific microbial functions. As a next step, we need to link how released plant molecules affect microbial activity and unearth how plant secretions define which root niches can be colonized by beneficial microbes while at the same time excluding pathogenic ones (Jacoby and Kopriva, 2018; Levy et al., 2018). Furthermore, as the blend of root exudates is strictly dependent on plant genotype, it is to be expected that different plants attract different microbes that can produce similar effects on different hosts, owing to the redundancy of microbiome functions. Considering this, we propose to use desirable microbiome functions as selective markers to identify potentially beneficial microbes. By exposing different plant species to the same stress conditions, a comparative metatranscriptomics approach would allow the identification of common functions expressed by microbiomes upon sensing plant stress signals. Metatranscriptomics has already been used to highlight the most active members of microbiomes in different plant species and to identify bacterial genes expressed during different Arabidopsis life stages (Turner et al., 2013; Chaparro et al., 2014). To date, only a few metatranscriptomics studies have been conducted, due to the difficulties of mapping metatranscripts to reference genomes and metagenomes. Again, in this case, using synthetic communities composed of whole-genome-sequenced members would facilitate this task. Associating these studies with detailed metabolomic analysis of root exudates from stressed plants would then make the integration of multi-omics techniques more and more reliable (Figure 2). Altogether, these strategies will produce an enormous amount of data that still needs to be interpreted. For this reason, it is necessary to develop bioinformatics techniques that allow the reduction and summarization of these data. Systems biology approaches based on correlation networks have been proposed to discover microbial associations, where positive and negative correlations can be used to infer possible synergistic or antagonistic interactions (Agler et al., 2016; Poudel et al., 2016; Van Der Heijden and Hartmann, 2016); a minimal sketch of such a network analysis is given at the end of this section. With this methodology, it is also possible to identify the so-called microbial hub taxa, which represent the most interactive nodes in the networks. In this direction, Agler et al. (2016) established a computational method that identified the plant pathogen Albugo and the fungus Dioszegia as microbial hubs in the microbiome of the Arabidopsis phyllosphere. In a further experiment, through artificial manipulation of the microbiome, it was also demonstrated that the microbes identified as the hubs of the network represented "keystone taxa", as they drove the composition and function of the microbiome. The concept of "keystone" taxa has also been adopted by Niu et al.
(2017) when studying the contribution of individual members of a microbial synthetic community to the rhizosphere of maize plants. In this case, the removal of a single member caused the collapse of community functioning, with a corresponding decrease in richness indexes. These results clearly highlight that some microbial individuals play a key role in shaping microbial communities on plant hosts. Another very powerful computational approach is the metagenome-wide association study (MWAS). This method derives from genome-wide association studies, which rely on the construction of linear mixed models to relate genotypic variation to quantitatively observed phenotypes. MWAS has typically been used in human metagenomics studies, for example to identify microbial taxa or microbial functions associated with a host phenotypic trait, which could be a disease or the host metabolomics profile, by integrating a multi-omics approach (Gilbert et al., 2016). The genome-wide association approach has also been used in the study of plant-microbe interactions, for example to identify Arabidopsis loci associated with the ability of plants to maximize the benefit from the interaction with the beneficial Pseudomonas strain WCS417 (Wintermans et al., 2016). In a plant-microbiome context, Beilsmith and colleagues (2019) propose to use MWAS to find associations between host genes and microbial taxa. MWAS could be very useful for finding functional associations between either microbial genes and host genes, or microbial genes and host phenotype, which could also include root exudation profiles. Finally, to build synthetic microbial communities with consistent beneficial effects for plants in the field, it is essential to understand whether a specific trait of a single strain is expressed at the community level and under multiple contexts (different environmental conditions, hosts, other microorganisms, etc.) (Vannier et al., 2019). This is crucial considering that single strains or synthetic communities that have beneficial effects in vitro and under controlled conditions might behave differently in the field. We also need to be aware that the increasing complexity of a synthetic community decreases the feasibility of large-scale industrial production of microbial inoculants. This should be considered in future plant-microbiome studies with a translational intent, since a number of methodologies and tools need to be combined to design small and effective synthetic communities that can provide host plants with consistent and predictable outcomes.
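Two hedged, toy-scale sketches of the computational approaches just described follow. First, a correlation-network analysis that flags putative hub taxa; the OTU table is a random placeholder, and real studies use far larger datasets and sparsity-aware estimators such as SparCC rather than plain correlations.

```python
# Build a taxon-taxon correlation network from an OTU table and rank taxa
# by degree (number of strong edges) to flag putative hubs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
otu_table = pd.DataFrame(
    rng.poisson(20, size=(30, 8)),               # 30 samples x 8 taxa (placeholder)
    columns=[f"taxon_{i}" for i in range(8)],
)

corr = otu_table.corr(method="spearman")         # pairwise taxon correlations
edges = (corr.abs() > 0.6) & (corr.abs() < 1.0)  # threshold, drop self-edges
degree = edges.sum(axis=1).sort_values(ascending=False)
print("Putative hub taxa (highest degree):")
print(degree.head(3))
```

Second, a minimal MWAS-style association test: a linear mixed model relating a host phenotype to the abundance of a single taxon, with a random intercept per sampling site standing in for the structure correction used in real studies. All data are simulated placeholders, and in a real MWAS the test is repeated per taxon with multiple-testing correction.

```python
# One taxon-phenotype association test with a site-level random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 60
data = pd.DataFrame({
    "taxon_abundance": rng.normal(0, 1, n),
    "site": rng.choice(["A", "B", "C"], n),
})
data["phenotype"] = 1.0 + 0.4 * data["taxon_abundance"] + rng.normal(0, 0.5, n)

model = smf.mixedlm("phenotype ~ taxon_abundance", data, groups=data["site"])
result = model.fit()
print(result.summary())
```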
Induced Neurons for the Study of Neurodegenerative and Neurodevelopmental Disorders

Patient-derived or genomically modified human induced pluripotent stem cells (iPSCs) offer the opportunity to study neurodevelopmental and neurodegenerative disorders. Overexpression of certain neurogenic transcription factors (TFs) in iPSCs can induce efficient differentiation into homogeneous populations of the disease-relevant neuronal cell types. Here we provide protocols for genomic manipulations of iPSCs by CRISPR/Cas9. We also introduce two methods, based on lentiviral delivery and the piggyBac transposon system, to stably integrate neurogenic TFs into human iPSCs. Furthermore, we describe the TF-mediated neuronal differentiation and maturation in combination with astrocyte cocultures.

Introduction

Induced pluripotent stem cells (iPSCs) enable studying neurodevelopmental and neurodegenerative diseases such as autism spectrum disorders including fragile X syndrome and Rett syndrome, amyotrophic lateral sclerosis, Alzheimer's disease, Parkinson's disease, Huntington's disease, or spinal muscular atrophy [1]. Human iPSC lines are generated by reprogramming fibroblasts, hair, or blood samples [2], which are either donated directly by patients with a disease-relevant phenotype and a known genotype, or disease-causing mutations can be introduced into the genome of the iPSCs by genomic modifications such as CRISPR/Cas9 [3]. To study the effect of the mutations at the cellular level, iPSCs can be differentiated into the disease-relevant neuronal subtypes. Conventional differentiation protocols rely on the addition of specific soluble growth factors and compounds to the culturing media. These factors trigger intracellular signaling pathways affecting transcription factors (TFs), which in turn induce neuronal differentiation by changing gene expression levels and triggering gene regulatory networks. However, these protocols can be very delicate and time-consuming, lasting from several weeks to months, and they yield a heterogeneous mixture of different neuronal subtypes at different developmental stages together with glial cells. The forced expression of certain neurogenic TFs in human iPSCs shortcuts neuronal differentiation, resulting in rapid neurogenesis that yields highly homogeneous populations of neurons [4][5][6][7]. Here we describe the culturing of a robust inducible-neuronal iPSC line, as well as different methods to introduce neurogenic TFs and genomic modifications into human iPSCs and to differentiate those iPSCs into mature neurons. Neurogenic TFs under the control of a doxycycline-inducible promoter can be stably integrated into the genome of iPSCs either by lentiviral delivery [8] or via the piggyBac transposon system [9]. While lentiviruses have a high efficiency in delivering transgenes, the preparation of viral particles is laborious, time-consuming, and requires biosafety level 2. In contrast, the piggyBac transposon system offers a nonviral alternative to efficiently cut and paste transgenes into the genome. The production of plasmids is faster and cheaper, and the piggyBac system requires only standard laboratory biosafety levels. For genome editing of human iPSCs with great precision, the CRISPR/Cas9 technology is the method of choice since it is easy to use, efficient, and cost-effective. Genomically modified iPSCs can be differentiated into neurons by doxycycline-induced overexpression of TFs, and maturation is achieved by astrocyte coculture.

Materials

9. 4 M NaCl solution. Store at 4 °C.
10. PBS pH 7.2 without calcium and magnesium. Store at room temperature.

12. Antibiotic: if you would like to select the cells for the integrated lentiviral construct, use the appropriate antibiotic (such as blasticidin or puromycin). Store aliquots at −20 °C; after thawing, store at 4 °C, protected from light.

15. TaqMan® PCR master mix, such as TaqMan® Universal PCR Master Mix (Thermo Fisher Scientific). Store at 4 °C.

16. TaqMan® primer and probes for WPRE and albumin detection (see Table 1). Dilute in ddH2O to a concentration of 10 μM and store at −20 °C.

Store at −20 °C.

3. 1× PBS with calcium and magnesium. Store at 4 °C.

4. Doxycycline solution: dissolve 10 mg doxycycline hyclate powder in 20 ml PBS (0.5 mg/ml = 1000×), sterile-filter (0.22 μm). Store aliquots at −20 °C; after thawing, store at 4 °C, protected from light.

Put the tubes in a freezing container and store at −80 °C for at least 2 h. Subsequently, store in liquid nitrogen.

Nucleofection of iPSCs

1. In order to electroporate piggyBac and transposase vectors into iPSCs in suspension, use the X-Unit of the 4D-Nucleofector™ System in combination with the P3 Primary Cell 4D-Nucleofector™ X Kit according to the manufacturer's guidelines.

2. First, prepare the DNA, the Nucleofector™ solution, and the cell culture plates. For a nucleofection reaction in 100 μl cuvettes, mix 10 μg piggyBac vector and 2.5 μg transposase vector in less than 10 μl volume (maximum 10% of the final sample volume) in a 1.5 ml tube. In a separate tube, mix 82 μl Nucleofector™ solution with 18 μl supplement per nucleofection reaction and bring to room temperature. Prepare Matrigel-coated cell culture plates with the desired volume of mTeSR™1 medium with ROCKi and prewarm in the incubator (see Note 11).

3. Switch on the X-Unit of the 4D-Nucleofector™ System and choose the cell-type-specific program for the human embryonic stem cell line H9, the cuvette size, the P3 primary solution, and the pulse CB-156 or CB-150 (see Note 12).

4. Dissociate the cells to be nucleofected using TrypLE, centrifuge (400 × g, 4 min), and resuspend in mTeSR™1 with ROCKi. Determine the cell number, transfer 800,000 cells for each nucleofection into a 1.5 ml tube, and centrifuge (400 × g, 4 min). Aspirate the supernatant and resuspend the cells in 100 μl room-temperature Nucleofector™ solution with supplement, mix with the DNA, transfer into an electroporation cuvette, and close the lid. Avoid air bubbles while pipetting. Gently tap the cuvette to make sure that the sample covers the bottom.

5. Quickly put the cuvette(s) into the Nucleofector™ and press the start button to apply the pulse CB-156 or CB-150. Immediately after, carefully remove the samples, add mTeSR™1 with ROCKi into the cuvette, and mix by gently pipetting up and down.

6. The next day, wash the cells with 1× PBS w/o Ca2+ and Mg2+ and change the medium to mTeSR™1 w/o ROCKi. Change the medium every day until the next passaging (see Fig. 2b). Starting 48 h after nucleofection, select the cells with an integrated construct using the appropriate antibiotic (see Note 14).

In order to determine the number of integrated piggyBac constructs, use the piggyBac copy number kit from System Biosciences (see Note 15). To prepare genomic DNA, seed the cells in a 12-well plate (see Note 16). When confluent, wash once with 1× PBS w/o Ca2+ and Mg2+ and add 250 μl lysis buffer to each well. Freeze the cells at −80 °C and thaw the plate at room temperature to ensure complete cellular lysis.
Detach the cells by pipetting up and down, transfer the lysates to 1.5 ml tubes, and heat them at 95 °C for 2 min. Centrifuge at 17,000 × g for 2 min and transfer the supernatant to a new 1.5 ml tube. The lysates should be placed on ice if used immediately or stored at −20 °C.

9. Calculate the copy number as follows [11]: divide the ΔΔCt by 2, as there are two copies of the UCR1 sequence per genome.

Lentivirus Production and Transduction

1. For the production and transduction of lentiviruses, titration, and copy number determination, we follow the protocol from the Trono lab [8].

2. One day prior to transfection, seed 8,000,000 293T/17 cells in a 10 cm culture dish. The next day, replace the culture medium with 4 ml fresh DMEM with 10% FBS. The cells are transfected using 45 μg of polyethylenimine (PEI) combined with 15 μg DNA containing the plasmid of interest (see Fig. 3a, b), the viral packaging (psPAX2) plasmid, and the viral envelope (pMD2G) plasmid in a 4:2:1 ratio.

9. Plot the standard curve (see Fig. 3c) using the software of your qPCR machine or manually using other software such as Microsoft Excel, and calculate the quantity of albumin and WPRE for each sample using the equation of the standard curve.

10. Calculate the copy number for each sample as follows: copy number = (quantity mean of WPRE sequence/quantity mean of Alb sequence) × 2.

11. Calculate the viral titer with the following formula: titer (viral genomes/ml) = (number of target cells counted at day 1 × number of copies per cell of the sample)/volume of supernatant (ml). A helper implementing this arithmetic is sketched at the end of the Notes.

2. Seed the iPSCs at a density of 30,000-50,000 cells per cm² in mTeSR™1 medium with ROCKi supplemented with 0.5 μg/ml doxycycline. On the next day, wash the cells with 1× PBS w/o Ca2+ and Mg2+ and change the medium to mTeSR™1 w/o ROCKi supplemented with 0.5 μg/ml doxycycline. Change the medium daily until day 4 (Fig. 4).

3. When culturing the neurons for longer time periods, it is recommended to change the stem cell medium (mTeSR™1) to maturation medium (BrainPhys™ with supplements). Change half of the medium on day 5 of differentiation to BrainPhys™ medium with supplements. Repeat changing half of the medium 2 days later. After those two adaptation medium changes, it is sufficient to change half of the medium once per week. Volume loss due to evaporation should be compensated with ddH2O.

Coculturing with Astrocytes

1. In order to increase the maturation of neurons for electrophysiological measurements, coculturing with astrocytes is highly recommended [4,15]. We adapted the protocol from Kaech and Banker [16] to our cell culture.

2. Rat primary cortical astrocytes are cultured in astrocyte medium at 37 °C and 5% CO2 according to the manufacturer's instructions. For passaging, aspirate the culture medium and store it in a Falcon tube as a washing solution (see Note 26). Rinse the cells once with 1× PBS w/o Ca2+ and Mg2+. Add prewarmed Accutase and incubate the cells at 37 °C until all of them are detached (usually 5 min is sufficient). Stepwise, add the cell culture medium stored in the first step to flush the cells, and collect all cells into a prerinsed 15 ml Falcon tube. Centrifuge at 400 × g for 5 min. Aspirate the supernatant and resuspend the pellet in prewarmed astrocyte growth medium. Count the cells using Trypan Blue and seed the appropriate amount in uncoated tissue-culture-treated dishes at a seeding density of approximately 5000 cells per cm². Change the growth medium every 3-4 days.
3. For the coculture with neurons, prepare astrocytes to be ~80% confluent at day 4 of neuronal differentiation. One day before the reseeding of neurons, wash the astrocytes three times with 1× PBS w/o Ca2+ and Mg2+ and add BrainPhys™ medium with minimal supplements. 4. Thoroughly clean the coverslips in a big glass petri dish. First, rinse the coverslips in ddH2O for 2 h and then shake in 50 ml 1 M HCl (see Note 27). 8. After 2 h, place the coverslips with the differentiated iPSCs upside down into culture wells containing 80% confluent rat astrocytes. Every 7 days, exchange 50% of the BrainPhys™ medium and compensate the volume loss due to evaporation with ddH2O (see Note 30). Notes 1. Aliquot the Matrigel according to the protocol and the dilution factor provided with it (varies for each bottle of Matrigel). We prepare aliquots for dilution in 12 ml coating medium. Briefly, thaw the Matrigel on ice in the cold room or the fridge and prepare a box with dry ice to precool 1.5 ml tubes. Quickly distribute the Matrigel solution into the tubes and store at −20 °C. 2. The piggyBac vector backbone can be obtained from Addgene (to be submitted, containing the EGFP gene under the control of the doxycycline-inducible promoter). For cloning of transcription factors, the EGFP can be excised using NheI and XhoI; the transcription factor cDNA can be amplified by PCR and introduced into the piggyBac vector using Gibson Assembly cloning [17]. 3. There are two different lentiviral vector systems that can be used: the pLV system, which consists of two constructs, one expressing the rtTA transactivator from the constitutively active EF1α promoter and the other expressing the transgene under the control of the doxycycline-inducible TRE promoter [4] (Addgene plasmids #61472 and #61471, respectively), or the pLIX403 system, which expresses the rtTA transactivator and the transgene under the TRE promoter on a single construct (Addgene plasmid #41395). The pLV plasmids referred to in this protocol do not contain any selection markers. If selection for the integrated constructs is required, it should be cloned into the plasmids before the production of lentiviral particles. 4. For better attachment of the neurons, freshly add 1 μl of a 1 mg/ml laminin solution per 1 ml supplemented BrainPhys™ medium to a final concentration of 1 μg/ml. 5. For the coculture with astrocytes, we use BrainPhys™ medium with minimal supplementation, since we found that astrocytes do not grow well in the presence of cAMP. Addition of BDNF and GDNF was neither found to enhance maturation nor to affect astrocytes but might be beneficial depending on the experimental design. 6. Use 1 ml of diluted Matrigel solution per well of a 6-well plate, 0.5 ml per well of a 12-well plate, and 0.25 ml per well of a 24-well plate. 7. Use 2 ml mTeSR™1 medium per well of a 6-well plate, 1 ml per well of a 12-well plate, and 0.5 ml per well of a 24-well plate. If you would like to avoid feeding the cells on the weekend, add at least 1.5 times the amount of medium on Friday. 8. The optimal cell density depends on the growth rate of your iPSC line. For our cells, we seed 15,000-25,000 cells/cm2 for maintenance of stem cells and 30,000-50,000 cells/cm2 for differentiation experiments. 9. Check iPSCs at 4-week intervals for mycoplasma contamination using the Universal Mycoplasma Detection Kit (ATCC® 30-1012K™) according to the manufacturer's instructions. 10. The optimal density for freezing depends on your iPSC line.
For our cells, a density of 500,000-1,000,000 cells per cryotube in 0.5-1 ml mFreSR™ works well. 11. Cells of one 100 μl nucleofection reaction can be seeded into one well of a 6-well plate or distributed to multiple wells of a 12- or 24-well plate. 12. The pulse CB-156 is recommended if higher transfection efficiency is favored at the expense of a lower survival rate, whereas the pulse CB-150 results in higher viability with lower transfection efficiency. 13. Leaving the cells in Nucleofector™ solution for extended periods of time may lead to reduced transfection efficiency and viability, so it is important to work as quickly as possible. If you face problems such as low transfection efficiency due to very big plasmids, etc., you can try to incubate the cells after nucleofection in the Nucleofector™ solution at room temperature for approximately 10 min. 14. The concentration of antibiotic optimal for selection depends on the specific iPSC line of choice and should be determined with a killing curve. We use a final concentration of 20 μg/ml for blasticidin, 3 μg/ml for puromycin, and 250 μg/ml for hygromycin. 15. Alternatively, the copy number can be determined as described for the lentiviral transduction (see Subheading 3.3) by performing a TaqMan®-based qPCR on genomic DNA. Use the albumin gene for normalization and a gene specific for the piggyBac construct for counting the integration events. We recommend using primers and probes for the antibiotic resistance gene, if not otherwise present in the genome of your iPSC line. It is important to have both genes present on the same plasmid used for the standard curve, since the preparation of the serial dilutions is prone to small variations. 16. Before performing the copy number determination, the cells must be passaged at least once to avoid the interference of nonintegrated piggyBac plasmids with the qPCR. 17. The optimal settings of the qPCR protocol may vary with the qPCR machine and the SYBR® Green or TaqMan® mix used. 18. From this step on, the cells are producing viral particles and should be handled at biosafety level 2. All viral particles that are collected are also biosafety level 2. 19. If your centrifuge is not able to run at 7000 × g, the centrifugation step can be carried out at a lower g-force for a longer period of time (e.g., at 5000 × g for 30 min). 20. We usually use one aliquot of viral particles to transduce one well of a 6- or 12-well plate. In order to optimize the viral transduction, it is recommended to determine the viral titer. Therefore, transduce cells with different volumes of the virus-containing supernatant and perform a qPCR on genomic DNA counting the number of integrated copies per cell. 21. Directly after transfection, the iPSCs are biosafety level 2 and should be handled as such; after the medium change the next day, they are back at biosafety level 1. 23. Avoid placing the PAM sequence into your sgRNA-expressing vector and the potential donor construct. It will be cut once sgRNA and Cas9 are expressed. The vectors from the Zhang lab can be ordered in different versions, that is, with GFP or puromycin expression. If positive cells are to be sorted by flow cytometry, GFP is optimal. When expansion of single cells and subsequent picking of monoclonal colonies is preferred, use puromycin with version V2 on Addgene, which is corrected from a previous version. The success rate of this cloning strategy is usually very high.
24. One or two guanines can be added for higher efficiency of the U6 promoter if the designed sgRNA does not start with one. They have to be added to the bottom oligo as a reverse complement in addition to the sgRNA sequence. 25. The T7 endonuclease I assay is performed as follows: Transfect the sgRNA- and Cas9-expressing constructs into a test cell line (e.g., 293T/17; see Subheadings 2.3 and 3.3). Isolate the DNA using a DNA extraction kit, such as the DNeasy® Blood and Tissue Kit (Qiagen). Amplify the locus using flanking primers tested for specificity in advance. Purify the reaction using a PCR Purification Kit (Qiagen). Elute in 30 μl. Mix 200 ng purified PCR product, 2 μl NEBuffer™ 2, and water to a total volume of 19 μl. Hybridize the PCR product in a thermocycler by heating to 95 °C, ramping down to 85 °C at −2 °C/s, then to 25 °C at 0.1 °C/s, and holding at 4 °C. Add 1 μl T7 endonuclease I to the reaction and incubate at 37 °C for 20 min. Run on a 2% agarose gel (30 min, 90 V) to see if one or more bands appear. If two or three bands are visible, the sgRNA works fine. The T7 endonuclease cuts at wobbles that appear when nonmatching DNA strands reanneal [13]. This happens if Cas9 cuts part of the DNA of the population of cells used as the test cell line. 26. Rat primary cortical astrocytes stick to the plastic used in cell culture dishes and centrifuge tubes. Prior to use, rinse all material that will come in contact with the cells with medium to prevent cells from sticking to the plastic. 27. Since the cleaning of the coverslips is very time-consuming, it can also be done in 1 day. Briefly, rinse the coverslips two times in ddH2O, then add 50 ml 1 M HCl and shake for 1 h. Rinse three times with ddH2O by shaking for 2 min, and then rinse once more with ddH2O by shaking for 1 h. Shake three times in 100% ethanol for 2 min and one time for 1 h. Sterilize the coverslips at 225 °C for 2-3 h. Successful cleaning will be accompanied by an even spread of coating solution across the whole surface of the coverslip. If problems with adhesion occur, go back to the long protocol. 28. The purpose of the spacers is to allow growth of the induced neurons in close proximity to the astrocyte feeder layer but without physical contact. 29. The coverslips can have any size depending on the requirements of the experiment. We routinely use 12 mm coverslips equipped with three paraffin feet in a 24-well plate. It is recommended to add additional medium to the well to completely cover the coverslips in order to avoid floating. 30. We add approximately 50 μl ddH2O per week for a 24-well plate to compensate for volume loss due to evaporation. Store a test plate full of H2O in the incubator and weigh it at weekly intervals to check for evaporation.
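To make the copy-number and titer arithmetic from Subheading 3.3 (steps 10 and 11) concrete, here is a minimal Python sketch. The functions simply transcribe the two formulas given in the text; all input values below are hypothetical placeholders, not measured data.

```python
# Sketch of the qPCR copy-number and lentiviral-titer formulas from
# Subheading 3.3 (steps 10 and 11). All numbers below are hypothetical
# placeholders for illustration only.

def integrated_copies_per_cell(mean_qty_wpre: float, mean_qty_alb: float) -> float:
    """Copy number = (mean WPRE quantity / mean Alb quantity) x 2.

    The factor of 2 accounts for the two albumin alleles per genome.
    """
    return (mean_qty_wpre / mean_qty_alb) * 2


def viral_titer(cells_at_day1: float, copies_per_cell: float,
                supernatant_volume_ml: float) -> float:
    """Titer (viral genomes/ml) =
    (target cells counted at day 1 x copies per cell) / supernatant volume (ml).
    """
    return cells_at_day1 * copies_per_cell / supernatant_volume_ml


if __name__ == "__main__":
    copies = integrated_copies_per_cell(mean_qty_wpre=1.8e4, mean_qty_alb=1.2e4)
    titer = viral_titer(cells_at_day1=1e5, copies_per_cell=copies,
                        supernatant_volume_ml=0.5)
    print(f"integrated copies per cell: {copies:.2f}")
    print(f"titer: {titer:.2e} viral genomes/ml")
```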
Pregnancy and complete atrioventricular block: a case report Introduction: Bradycardia in pregnancy due to complete atrioventricular block (CAVB) is a rare but serious occurrence that can be life-threatening to the mother and fetus. Patients with CAVB may be asymptomatic, but symptomatic cases require urgent and definitive management. Case presentation: The case of a 20-year-old primigravida with previously undiagnosed CAVB who attended the obstetric emergency service in labor is presented. The route of delivery was vaginal, without complications. The decision was made to implant a permanent dual-chamber pacemaker on the third day of the puerperium, and the patient did not present cardiovascular symptoms during outpatient follow-up. Clinical discussion: CAVB is a rare but serious condition in pregnancy that can be congenital or acquired. While some cases are relatively benign, others can lead to decompensation and fetal complications. There is no consensus on the best delivery route, but vaginal delivery is generally safe unless contraindicated for obstetric reasons. Pacemaker implantation may be necessary in some cases and can be performed safely during pregnancy. Conclusion: This case highlights the importance of cardiac evaluation in pregnant patients, especially those with a history of syncope. It also highlights the need for adequate and urgent management in symptomatic cases of CAVB in pregnancy and for adequate evaluation to decide when to implant a pacemaker as a definitive measure. Introduction Bradycardia in pregnancy due to complete atrioventricular block (CAVB) is a rare but serious occurrence [1,2]. The incidence of CAVB is estimated at 1 in 15,000 to 20,000 live births [3], and it can be congenital or acquired. The acquired variety is rare during pregnancy, as it is mainly observed in individuals over 50 years of age [4]. Cardiac output is generally maintained by increasing stroke volume, but some individuals require additional management when facing physiological challenges such as pregnancy, especially if they have a cardiac condition such as CAVB [5]. Patients with CAVB may be asymptomatic, but symptomatic cases require urgent and definitive management. Definitive management requires the implantation of a pacemaker, but there has been controversy in the past regarding its necessity [6]. This case report highlights the challenges we faced due to a lack of experience and the solutions available using the available clinical evidence. This case report has been reported in accordance with the CARE criteria [7]. Case presentation A 20-year-old primigravida at 39 weeks of pregnancy attended the obstetrics emergency service in the latent phase of labor, with no significant personal or family cardiovascular history. She reported having experienced four episodes of syncope in her life: one episode at 11 years of age and three episodes at 18 years of age, not related to physical effort, evaluated by a neurologist who ruled out neurological involvement, for which no further studies were carried out, nor did she receive any treatment. She had no symptoms during the pregnancy. On admission, the patient was in good general condition and oriented, showing bradycardia with a heart rate of 36 beats/min, uterine height consistent with gestational age, cephalic presentation, fetal movements present with adequate fetal heartbeats, and cervical dilation of 3 cm.
During cardiovascular auscultation, a bradycardic heart rate was detected, for which she was referred to a hospital with a greater capacity for resolution, as she was a high-risk maternal patient.
HIGHLIGHTS • Pregnancy with complete heart block is rare. • Patients may be asymptomatic. • A multidisciplinary team is required to manage these cases. • Such patients can be managed conservatively or may require temporary or permanent pacemaker implantation.
Initially, 1 mg of atropine was indicated if the heart rate decreased to less than 30 beats/min. The patient arrived at the obstetric emergency room of our institution in the expulsive period of labor, with a heart rate varying between 25 and 32 beats/min, a respiratory rate of 22/min, and a blood pressure of 120/70 mmHg. CAVB was documented on electrocardiography, with a narrow QRS complex and a heart rate of 34 beats/min (Fig. 1A). In the registry of prenatal check-ups, the pregnant woman's heart rate was not recorded; blood pressure was normal, uterine growth was adequate for gestational age, serological tests were negative, and hemoglobin was 13 g/dl. As the patient arrived at our institution during the last stage of labor and maintained a heart rate greater than 30 beats/min, without major risk factors on the electrocardiogram indicating an urgent need for pacemaker implantation, we decided to immediately implement monitoring and expectant management and advised her to continue with labor. The delivery was vaginal, without complications. She delivered a healthy baby weighing 2860 g, with Apgar scores of 9 and 9 at 1 min and 5 min, respectively, without cardiovascular disease. The patient underwent an echocardiogram, which showed a preserved left ventricular ejection fraction and no structural alterations (Fig. 2A). It was decided to implant a definitive dual-chamber pacemaker on the third day of the puerperium (Figs. 1B and 2B). In the outpatient follow-up, the patient did not present any cardiovascular symptoms. Discussion CAVB, a disorder of the cardiac conduction system in which atrioventricular conduction is completely absent, is a common cause of permanent bradycardia [4]. The finding of CAVB in pregnancy is rare; if present, it is usually congenital. About 30% of congenital CAVB cases remain undiscovered until adulthood and therefore may be first diagnosed during the gestational stage [8]. CAVB can be acquired from secondary causes, such as ischemic heart disease, drug toxicity, nodal ablation, electrolyte disorders, and previous cardiac surgeries. Other acquired causes are systemic diseases such as amyloidosis, sarcoidosis, and systemic lupus erythematosus, which can also cause CAVB [2]. Congenital CAVB can occur as an isolated condition or in conjunction with other congenital heart diseases. Isolated CAVB, without associated structural disease, is relatively benign and consistent with a normal pregnancy, and there may be an increase in heart rate with exercise, atropine, and orciprenaline, since the block is in the AV node in these cases [9][10][11]. However, in other cases of CAVB, the heart rate does not increase and can become decompensated, especially in the last stages of pregnancy and during the second stage of labor or in the immediate postpartum period.
The Valsalva maneuver stimulates the vagus nerve and may exacerbate bradycardia or produce asystole or cardiac arrest [4,8]. In 30% of patients with congenital heart block, the first symptoms occur during pregnancy, probably due to the hyperdynamic circulation of pregnancy [1]. Regarding fetal complications, isolated cases of pregnancy with intrauterine growth retardation and preterm delivery have been reported [12]. In the review of the literature, there is no consensus regarding the best route of delivery for such patients [13]. There is no absolute contraindication to vaginal delivery, since it depends on the patient's condition and her cardiopulmonary tolerance [14]. Vaginal delivery carries no additional risk in a pregnant woman with congenital complete heart block, unless contraindicated for obstetric reasons [15]. Shortening the active phase of labor is recommended, and elective instrumental delivery is recommended to limit the duration of the active phase of the second stage, as these pregnant women are prone to developing syncopal attacks and seizures due to the decreased heart rate associated with Valsalva [6]. Cesarean section is recommended only when there is an obstetric indication [16]. For women who have a stable junctional escape rhythm, implantation of a pacemaker may not be necessary or may be deferred until after delivery if there are no risk factors such as syncope, pauses greater than 3 times the cycle length of the ventricular escape rhythm, a wide-QRS escape rhythm, a prolonged QT interval, complex ventricular ectopy, or a mean daytime heart rate of less than 50 bpm [17]. Otherwise, patients should undergo pacemaker implantation during pregnancy. Pacemaker implantation can be performed safely, especially after 8 weeks of gestation [8,11,17]. There are several reports of women undergoing permanent pacing during pregnancy without significant adverse effects; in some cases, transesophageal echocardiography was used to guide lead position, and in others, electroanatomical navigation was used, minimizing the use of fluoroscopy [17]. Conclusion CAVB in pregnancy is a rare condition. It can be completely asymptomatic during pregnancy and be diagnosed only at the time of labor, when the patient comes into contact with health facilities for the first time. Once diagnosed, a multidisciplinary team involving an obstetrician, a cardiologist, and an anesthetist should evaluate the case to plan the management, determine the best route of delivery, and decide the timing of pacemaker implantation. The medical team must be prepared for any adverse event that may arise. Ethical approval No ethical approval necessary. Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. Sources of funding There is no funding source for this study. Conflicts of interest disclosure I declare that there is no competing interest related to the study, authors, other individuals, or organizations. Provenance and peer review Not commissioned, externally peer reviewed.
Quantification of Irgafos P-168 and Degradative Profile in Samples of a Polypropylene/Polyethylene Composite Using Microwave, Ultrasound and Soxhlet Extraction Techniques: In polypropylene/polyethylene composite (C-PP/PE) production, stabilizing additives such as Irgafos P-168 are essential as antioxidant agents. In this study, an investigation was carried out covering different solid-liquid extraction methods (Soxhlet, ultrasound, and microwave); various variables were evaluated, such as temperature, extraction time, the choice of solvents, and the type of C-PP/PE used, and gas chromatography coupled to mass spectrometry (GC-MS) was used to quantify the presence of Irgafos P-168 in the C-PP/PE samples. The results revealed that microwave extraction was the most effective in recovering Irgafos P-168: a recovery of 96.7% was achieved when using dichloromethane as a solvent, and 92.83% when using limonene. The ultrasound technique recovered 91.74% using dichloromethane and 89.71% using limonene. The Soxhlet extraction method showed the lowest recovery percentages, 57.39% using dichloromethane as a solvent and 55.76% with limonene, especially when the C-PP/PE was in pellet form. The degradation products with the highest degradation percentages were Bis(di-tert-butylphenyl) phosphate and Mono(di-tert-butylphenyl) phosphate, obtained using the microwave method with dichloromethane as a solvent and the polymer in film form. Finally, possible mechanisms for the formation of the degradation compounds of Irgafos P-168 were postulated. Introduction In producing polypropylene/polyethylene composite (C-PP/PE) films, additives improve and modify the material's properties. These additives play a fundamental role in optimizing physical, chemical, and mechanical characteristics, allowing them to be adapted to the specific needs of various industrial applications, among which food packaging stands out [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. Stabilizing additives, such as antioxidants and UV stabilizers, protect C-PP/PE from oxidation and degradation caused by environmental factors [14]. Fluidity and slip-modifier additives improve film processing, reducing friction and facilitating slip during the production stages [14,15]. Plasticizers increase the flexibility and elasticity of the films, while reinforcing additives improve the mechanical resistance and rigidity of the material. Additionally, color additives and pigments provide a wide range of aesthetic options. These additives are essential to obtain high-quality, high-performance polypropylene and C-PP/PE films adapted to the needs of different industrial sectors [16][17][18][19][20][21][22][23][24][25][26].
One of the antioxidant additives in greatest demand in the production of C-PP/PE films is tris(2,4-di-tert-butylphenyl) phosphite, known as Irgafos P-168; it is a trifunctional ester of phosphorous acid, which contains three phenyl groups substituted with butyl groups in specific positions [27]. This structural configuration (Figure 1) gives Irgafos P-168 its antioxidant properties, allowing it to act as a free-radical scavenger and delay the oxidation processes that can lead to the degradation of C-PP/PE. Incorporating Irgafos P-168 into the molten polymer during the extrusion and thermomolding processes improves thermal stability, protecting it from oxidation and degradation [14]. These characteristics are essential to achieve an optimal appearance and stable physical properties, guaranteeing greater durability and a longer useful life of the products [28].
Previously, various extraction techniques have been used for Irgafos P-168 and other antioxidant additives in C-PP/PE and PP films [29,30]. These techniques include Soxhlet extraction, ultrasound-assisted extraction (UAE), and more advanced techniques such as microwave-assisted extraction (MAE) and supercritical fluid extraction (SFE) [31][32][33][34]. Soxhlet extraction is a classic method where the sample is placed in a cell-shaped extraction cartridge and a heating and cooling cycle is carried out. The solvent evaporates and condenses, efficiently extracting the antioxidant additives from the sample, but the method has certain limitations, such as long extraction times ranging from 6 h to 48 h and the larger volumes of solvent required [35]. In contrast, UAE uses ultrasonic waves to improve the extraction efficiency of antioxidant additives. This technique generates microcavitation and turbulence in the solvent, accelerating extraction and offering promising results in shorter times [36]. Likewise, MAE stands out as a fast and efficient technique, since it allows direct extraction from granulated or pelletized matrices using reduced volumes of solvent and without the need for exhaustive pretreatment of the samples; it has proven effective in obtaining extraction results in significantly shorter times. SFE, in turn, uses a fluid in a supercritical state, which exhibits properties intermediate between those of a liquid and a gas. Although it offers better selectivity and extraction efficiency, its application can be more complicated and expensive due to the need for specialized and costly equipment [33][34][35]. It is essential to remember that solvents play a crucial role in extracting the additives in all these solid-liquid extraction techniques. Although organic compounds such as dichloromethane, cyclohexane, and chloroform are recognized for achieving high recovery percentages, their high toxicity represents a significant risk for the personnel involved in their handling and for the environment in general [33][34][35]. Specifically, dichloromethane, also known as methylene chloride, is subject to use restrictions due to its harmful potential. Therefore, it is essential to select solvents that ensure the effectiveness of the extraction process, offer safety for human well-being, and minimize environmental impact [28][29][30][31][32]. Limonene emerges as a promising option, since its toxicity is considerably lower than that of traditional solvents [28][29][30][31][32].
Limonene (C10H16) is a monocyclic hydrocarbon belonging to the class of terpenes. It has a highly aromatic structure, is hydrophobic, can dissolve a range of organic products, and is present in the peel of citrus fruits, especially lemons. There are two optical isomers of Limonene, d-Limonene and l-Limonene, and a racemic mixture that combines both isomers [28][29][30][31][32]. Its pleasant lemon aroma makes it an additive widely used in the food industry to add flavor and fragrance to various products. However, its applications go further, and it is also found in household, cosmetic, and pharmaceutical products, where it has been considered safe. In addition, studies have revealed that Limonene has anticancer properties [28][29][30][31][32][33][34][35]. In recent years, Limonene has also begun to be used as a green solvent, since it is environmentally friendly and derived from natural sources. Limonene is biodegradable and non-toxic, which makes it an alternative to traditional solvents such as dichloromethane. It is safer and more ecological, minimizes the negative impact on ecosystems, and promotes more sustainable practices in the industry [28][29][30][31][32][33]. Unfortunately, the use of green solvents is still limited in the polymer industry for the extraction of additives such as Irgafos P-168, so in this research, extraction techniques assisted by microwaves, ultrasound, and Soxhlet were used, with a sensitive microextraction method coupled to gas chromatography, for the simultaneous determination of the concentration of Irgafos P-168 and its degradation products in C-PP/PE samples. Three pretreatments of the polymer (ground, pellets, and films) were carried out using a traditional solvent (dichloromethane) and a green solvent (Limonene). Multivariate analysis was used to evaluate performance, and the degradation products of Irgafos P-168 were quantified to establish their relationship with the results of each extraction. The results of this research are essential to the scientific community, industry, and regulatory bodies involved in the extraction and characterization of Irgafos P-168 in polymeric matrices. Reagents The Irgafos P-168 was acquired from Shanghai Tixiai Co., Ltd. (Shanghai, China). Butylated hydroxytoluene (BHT) was also used as an internal standard, provided by Campro Science GmbH (Berlin, Germany). Limonene (HPLC grade) was obtained from Scharlab (Barcelona, Spain). Hydrogen and nitrogen with 99.9999% purity were purchased from Linde (Cartagena, Colombia), and dichloromethane with 99.99% purity was obtained from Sigma Aldrich (Bangalore, India). GC-MS Analysis A specific method was designed using gas chromatography coupled with mass spectrometry (GC-MS) to evaluate the recoveries of the master mixture. The extracts obtained by solid-liquid extraction were analyzed by GC-MS, following the degradation products of each compound analyzed, together with an internal standard (BHT). These analyses were carried out using an Agilent 7890 gas chromatograph provided by Agilent JW Scientific (Diegem, Belgium). The chromatograph was coupled to an Agilent 7000 GC-MS triple quadrupole (QqQ) mass spectrometer, equipped with an electron impact ionization (EI) source, and operated in selected ion monitoring (SIM) mode. The quadrupole and ion source temperatures were maintained at 150 °C and 230 °C, respectively. The multiplier voltage was set to 2200 V.
To improve the acquisition speed, three acquisition segments were programmed with different dwell times (20, 15, and 20 ms, respectively). One microliter of extract was injected into a PTV injector in pulsed splitless mode, with an injection temperature of 280 °C. The column used in the gas chromatograph was a DB-5ms, 30 m in length, 0.25 mm internal diameter, and 0.25 µm film thickness. The oven temperature was held at 60 °C for 3 min and then increased to 300 °C at a rate of 10 °C per minute, maintaining this temperature for 15 min. The total analysis run time was 42 min. Helium was used as the carrier gas at a constant flow of 1.0 mL per minute. Prepare Irgafos 168 Calibration Standards and C-PP/PE Samples with Irgafos P-168 Preparation of the Curve for Calibration of the Chromatograph Figure 2 shows how a stock solution of Irgafos P-168 at 10,000 ppm was prepared (by adding 10,000 mg of Irgafos P-168 to 1 L of Limonene), along with an internal standard solution of butylated hydroxytoluene (BHT) at a concentration of 10,000 ppm. These solutions were used to generate four samples with known concentrations of 500, 1000, 1500, and 2000 ppm (a dilution cross-check appears after this subsection). The C-PP/PE samples with Irgafos P-168 were prepared following the procedure described in Figure 3, which had the following stages: (1) 0.0, 0.5, 1, 1. Extraction of Irgafos P-168 in C-PP/PE Samples Figure 4 presents the methodology used in this research to extract Irgafos P-168 from C-PP/PE samples in ground form, pellets, and films, to which Irgafos P-168 had been added. Solid-liquid extractions were carried out using two different solvents, dichloromethane and Limonene. These extractions were performed using three methods: Soxhlet, ultrasound (conventional laboratory sonic bath), and microwave oven (high-power programmable laboratory microwave oven), and a mass-coupled gas chromatograph was used to quantify the concentration of Irgafos P-168 and the degradation products present in the C-PP/PE film samples. In preliminary tests, it was identified that 90, 50, and 117 °C were the optimal temperatures for working with Soxhlet, ultrasound, and microwave, respectively. Therefore, this variable was left fixed in our experimental design so that we could evaluate how the other variables affect the extraction efficiency.
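As a quick cross-check of the standard preparation above, the dilution volumes for the four calibration levels follow from C1V1 = C2V2. The sketch below assumes a 10 ml final volume per standard, which is an illustrative choice; the paper does not state the volumes used. A second snippet verifies that the stated GC oven program adds up to the reported 42 min run time.

```python
# Dilution volumes for the calibration standards (C1*V1 = C2*V2).
# The 10,000 ppm stock is from the text; the 10 ml final volume per
# standard is an assumed, illustrative value.

STOCK_PPM = 10_000.0
FINAL_VOLUME_ML = 10.0  # assumption, not stated in the paper

for target_ppm in (500, 1000, 1500, 2000):
    stock_ml = target_ppm * FINAL_VOLUME_ML / STOCK_PPM
    solvent_ml = FINAL_VOLUME_ML - stock_ml
    print(f"{target_ppm:>4} ppm: {stock_ml:.2f} ml stock + {solvent_ml:.2f} ml solvent")

# The GC oven program duration is consistent with the stated 42 min run:
hold_start = 3.0                 # 3 min at 60 C
ramp = (300 - 60) / 10.0         # 24 min ramp at 10 C/min
hold_end = 15.0                  # 15 min at 300 C
print(f"total run time: {hold_start + ramp + hold_end:.0f} min")  # -> 42 min
```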
For microwave extraction, 5 g of C-PP/PE resin was extracted using a solution of dichloromethane or Limonene. It was determined that heating the solution in the microwave oven at 25-50% power for 45 min, stirring every 5 min, was sufficient to extract the antioxidant. Four different extractions were performed with the resin, pellets, and ground C-PP/PE, and the average results of the ultrasound, Soxhlet, and microwave extractions are presented in Table 1 and the corresponding figure. For the ultrasonic bath, 3 g of C-PP/PE placed in a 20 mL vial was used. Next, 10.0 mL of an internal standard solution was added using a 5.0 mL micropipette, as shown in Figure 4. Each test was replicated five times, and sonication was carried out for three hours in an ultrasonic bath, keeping the temperature under control, below 50 °C. After completion of sonication, the vials were removed from the bath and allowed to stand for 10 min before filtration of the extracted Irgafos P-168 sample solutions using disposable PTFE syringe filters. The extraction was carried out in three ways: ground C-PP/PE, pellets, and films. In the case of ground C-PP/PE, the extraction lasted 90 min, while for C-PP/PE pellets and films it was carried out for 60 min in the ultrasonic bath. During extraction, the solution was shaken for at least 30 s every 10 min. Notably, the microwave oven proved to be a fast and effective method for extracting Irgafos P-168 from the crushed resin, while the ultrasonic bath provided an economical and relatively fast alternative for extracting the additives. In contrast, the Soxhlet extraction method with these C-PP/PE resins required at least 7 h to extract most of the additives. In this study, Soxhlet extraction was extended for 1440 and 720 min, suggesting that it would possibly require more than 24 h to recover the additive completely. Multivariate Graphical Analysis This study conducted a graphical analysis to examine the recovery of the antioxidant Irgafos P-168 in the C-PP/PE samples. Minitab statistical software, widely recognized for its ability to perform advanced statistical analyses, was used to carry out this analysis. Since the study involves multiple variables, such as the different extraction techniques (Soxhlet, ultrasound, and microwave), the solvents used (dichloromethane and Limonene), and the types of C-PP/PE (ground, pellets, and films), a multivariate graphical analysis was performed. This made it possible to identify the existing relationships between the various extraction techniques, the solvents, and the C-PP/PE forms used to extract Irgafos P-168. Quantification and Recovery of the Additive Irgafos P-168 by GC For the analysis of Irgafos P-168 in the C-PP/PE samples, an internal standard method was implemented to check the validity of the GC-MS method. It should be noted that both the standard solutions and the samples were analyzed in duplicate to guarantee the precision of the results. The calibration curve demonstrated excellent linearity within the established range, with a coefficient of determination greater than 0.999.
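A minimal sketch of the internal-standard calibration just described: the analyte/IS response ratio is regressed against concentration, and the fit is checked against the R² > 0.999 criterion. Only the concentration levels and the acceptance criterion come from the text; the peak-area values are hypothetical placeholders.

```python
# Internal-standard calibration sketch. The peak-area values below are
# hypothetical placeholders; only the concentration levels (500-2000 ppm)
# and the R^2 > 0.999 acceptance criterion come from the text.
import numpy as np

conc_ppm = np.array([500.0, 1000.0, 1500.0, 2000.0])
area_analyte = np.array([1.02e5, 2.05e5, 3.01e5, 4.08e5])  # placeholder areas
area_is = np.array([2.00e5, 2.01e5, 1.99e5, 2.02e5])       # internal standard (BHT)

ratio = area_analyte / area_is
slope, intercept = np.polyfit(conc_ppm, ratio, 1)

predicted = slope * conc_ppm + intercept
ss_res = np.sum((ratio - predicted) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"ratio = {slope:.3e} * C + {intercept:.3e}, R^2 = {r_squared:.5f}")

# An unknown's concentration then follows from its response ratio:
unknown_ratio = 1.20
print(f"unknown: {(unknown_ratio - intercept) / slope:.0f} ppm")
```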
In the experimentation of this study, four different concentrations of Irgafos P-168 solutions were prepared using an internal standard (500, 1000, 1500, 2000 ppm). Following the procedure described in Section 2, GC-MS analyses were performed on Irgafos P-168 extracts obtained from C-PP/PE samples in various forms, such as ground C-PP/PE, C-PP/PE pellets, and C-PP/PE film. Multiple extraction techniques were used to evaluate the recovery of Irgafos P-168, including Soxhlet, ultrasound, and microwave, along with two different solvents, dichloromethane and Limonene. In addition, different forms of C-PP/PE were worked with, that is, ground, pellets, and film, and the extraction times were varied. To analyze the results effectively, a variability graph was constructed that allowed the identification of differences in the means and variations in antioxidant recovery at the combined levels (Figure 6). Figure 6 shows the relationship between the recovery percentage of Irgafos P-168 and the previously mentioned variables. The results highlight that the microwave extraction technique achieves the highest recovery percentages when applied for 45 min to ground C-PP/PE. Furthermore, no significant differences were observed between the traditional and green solvents, since the recovery percentages remained close: specifically, dichloromethane recovered 96.07%, while Limonene obtained 92.83%. The ultrasound technique obtained optimal results with an extraction time of 90 min, using ground C-PP/PE with the traditional solvent dichloromethane (91.74%) and Limonene (89.71%). The difference between these solvents remains minimal, regardless of the extraction technique used.
Lastly, the extraction performed by Soxhlet obtained lower recovery results than the microwave and ultrasound techniques. The highest percentages achieved with this technique occurred at 1440 min: 78.64% using dichloromethane as a solvent with ground C-PP/PE, and 76.66% with Limonene in ground C-PP/PE (these best-case figures are gathered in the sketch below). From all of the above, it can be stated that using ground C-PP/PE instead of C-PP/PE films or pellets can improve the recovery of Irgafos P-168 due to the larger contact surface, greater permeability, smaller particle size, and greater homogeneity of the material. Furthermore, the results indicate that better recoveries were obtained using microwave extraction than with the ultrasound and Soxhlet techniques. This can be explained by the fact that the microwave extraction technique selectively heats the solvent and the sample, which allows faster and more efficient heat transfer, speeding up the extraction process. By contrast, the ultrasound and Soxhlet techniques may require more time to reach the appropriate temperature and achieve complete extraction, as demonstrated in the experimental design: Soxhlet extraction required 1440 min to achieve good recoveries, which are still significantly below the recovery percentages obtained by microwave in only 45 min. Another relevant aspect that supports the effectiveness of the microwave extraction technique is its ability to generate more intense agitation and turbulence in the sample. This improved agitation facilitates the interaction between the solvent and the analyte, thus simplifying the extraction of Irgafos P-168 and improving recovery efficiency. Overall, the microwave extraction technique achieved comparable or better results in a shorter extraction time than the ultrasound and Soxhlet techniques. A shorter extraction time can minimize analyte degradation or loss and improve recovery, as seen in Section 2. In addition, it allows greater control of extraction conditions, such as temperature and pressure, so conditions can be optimized to maximize the recovery of Irgafos P-168 and minimize any possible interference with or degradation of the analyte. The dichloromethane solvent showed higher recovery percentages; however, the difference was not large enough to rule out Limonene as a green-solvent option. In these cases, it is essential to consider Limonene's additional benefits, such as its lower environmental impact and toxicity. The choice of solvent depends on other factors, such as current environmental regulations, specific application requirements, and personal or company preferences. Choosing a solvent such as Limonene is an ideal option for those who value sustainability and seek to minimize environmental impact. In previous studies, Camacho et al. [36] used microwave extraction to evaluate the quality of resins such as polypropylene and low-density polyethylene (LDPE) in recycled resins and successfully extracted phenolic antioxidants such as Irgafos P-168 and Irganox 1010 using a 50/50 mixture of cyclohexane and isopropanol as solvents, obtaining high recovery percentages of 97% for Irgafos P-168 and 93% for Irganox 1010. In addition, it is worth mentioning that short extraction times of 30, 45, and 60 min were used, with extraction temperatures of 70, 100, and 120 °C, in the development of the method.
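To summarize the comparison, the best-case recoveries reported above can be collected programmatically. Only the percentages, times, and polymer forms below come from this study; the script itself is an illustrative aid, not part of the published analysis, and the recovery definition (measured amount over spiked amount, times 100) is the conventional one.

```python
# Best-case recoveries reported in the text, organized for comparison.
# Recovery (%) = measured amount / spiked amount * 100 (conventional definition).

best_recoveries = {
    # (technique, solvent): (recovery_percent, time_min, polymer_form)
    ("microwave", "dichloromethane"): (96.07, 45, "ground"),
    ("microwave", "limonene"): (92.83, 45, "ground"),
    ("ultrasound", "dichloromethane"): (91.74, 90, "ground"),
    ("ultrasound", "limonene"): (89.71, 90, "ground"),
    ("soxhlet", "dichloromethane"): (78.64, 1440, "ground"),
    ("soxhlet", "limonene"): (76.66, 1440, "ground"),
}

for (tech, solvent), (rec, t, form) in sorted(
        best_recoveries.items(), key=lambda kv: -kv[1][0]):
    print(f"{tech:>10} + {solvent:<15} {rec:6.2f}%  ({t} min, {form})")

# The solvent gap per technique stays small (under 4 percentage points):
for tech in ("microwave", "ultrasound", "soxhlet"):
    gap = (best_recoveries[(tech, "dichloromethane")][0]
           - best_recoveries[(tech, "limonene")][0])
    print(f"{tech}: dichloromethane - limonene = {gap:.2f} points")
```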
The previously mentioned study and the present research work both achieved high recovery percentages by applying various techniques and solvents. Within the framework of this research, recovery percentages exceeding the 90% threshold were obtained under the conditions evaluated by microwave and ultrasound, using both solvents, different extraction times, and different forms of the polymer. It is essential to highlight that the experimental conditions differed between these studies, including aspects such as the type of polymer used, the particle size, the solvent combinations, and the time intervals used in the extraction process. These variations influence the results, making a direct comparison between the investigations difficult. Identification of Irgafos P-168 by Gas Chromatography Coupled to Mass Spectrometry (GC-MS) Analysis The primary purpose of extracting Irgafos P-168 was to obtain the maximum possible amount of the original substance while minimizing the presence of relevant contaminants. However, it is crucial to consider that, during this process, there is a possibility of Irgafos P-168 experiencing degradation, which could result in a decrease in the recovery percentages. To address this concern, subsequent analyses of the Irgafos P-168 recovered in the extractions were conducted using gas chromatography coupled with mass spectrometry (GC-MS) to examine the potential formation of degradation products. The degradation products generated may pose challenges both in their recovery and in their detection during the analytical process. The application of the GC-MS technique allowed for the precise identification of these degraded products, thereby providing crucial information to assess whether Irgafos P-168 had undergone significant degradation. When interpreting the obtained data, prior knowledge that the analyzed compounds were specific degradation products of Irgafos P-168 was taken into account (Figure 7). These research findings are of paramount importance in understanding the potential effects of degradation on the quality and integrity of the compound. Furthermore, they significantly contribute to advancing knowledge in this field by providing a deeper understanding of degradation processes and their implications for the practical application of Irgafos P-168.
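When assigning GC-MS peaks to the species in Figure 7, it helps to have their nominal molecular weights at hand. The sketch below computes average molecular weights from molecular formulas; the formulas listed are the standard ones for Irgafos P-168 (tris(2,4-di-tert-butylphenyl) phosphite), its phosphate, and 2,4-di-tert-butylphenol, stated here from general chemical knowledge rather than taken from the paper.

```python
# Average molecular weights of Irgafos P-168 and common degradation
# products, to support GC-MS peak assignment. Formulas are standard
# literature values, not taken from this paper.
import re

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "P": 30.974}

def mol_weight(formula: str) -> float:
    """Average molecular weight from a simple formula like 'C42H63O3P'."""
    return sum(ATOMIC_MASS[el] * int(n or 1)
               for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula))

compounds = {
    "Irgafos P-168 (phosphite)": "C42H63O3P",
    "Irgafos P-168 phosphate": "C42H63O4P",
    "2,4-di-tert-butylphenol": "C14H22O",
}

for name, formula in compounds.items():
    print(f"{name:<28} {formula:<10} {mol_weight(formula):7.1f} g/mol")
```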
Antioxidants play a crucial role in preserving copolymers like C-PP/PE, posing a significant challenge at the industrial level when they undergo thermo-oxidative degradation. This phenomenon not only compromises the durability and physical integrity of the polymer but also has significant implications given its final application in direct food-contact packaging. In this context, Irgafos P-168 is susceptible to oxidative degradation during solid-liquid extraction processes, especially in the presence of specific solvents such as Limonene and dichloromethane. This vulnerability arises from the chemical properties of Irgafos P-168 and its interaction with these solvents. During extractions, Irgafos P-168 may be exposed to environmental conditions conducive to oxidation, including the presence of oxygen and variations in temperature. Solvents like Limonene and dichloromethane can exacerbate this process by dissolving and transporting Irgafos P-168, thereby increasing its exposure to oxidative conditions. The oxidation or degradation of Irgafos P-168 during these extractions can have significant implications for its efficacy and quality. The resulting degradation products may be challenging to detect and recover, potentially impacting the purity and effectiveness of Irgafos P-168 in its final application.
Therefore, it is imperative to consider the susceptibility of Irgafos P-168 to oxidation or degradation when conducting solid-liquid extractions involving this compound along with solvents such as Limonene and dichloromethane. It is necessary to implement appropriate measures to minimize exposure to conditions that promote degradation and to ensure the compound's integrity, both in industrial and research applications. Furthermore, it is important to note that the oxidative degradation of Irgafos P-168 can result in the formation of compounds with different properties, such as more polar compounds. These modified compounds may have a lower affinity for the solvents used in the extraction, which could hinder their separation from the C-PP/PE polymer and, consequently, reduce the extraction yield. It is essential to consider the various factors that can influence the oxidative degradation of Irgafos P-168, such as temperature, the presence of catalysts, the duration of the extraction process, and the storage conditions of the C-PP/PE copolymer. Increased oxidative degradation of Irgafos P-168 may indicate a less efficient extraction process and, therefore, a lower yield. Measures should therefore be taken to minimize the oxidative degradation of Irgafos P-168 during the extraction process and during the storage of the copolymer, in order to optimize extraction performance. The results of this study suggest that measuring the degree of oxidative degradation of Irgafos P-168 in the copolymer indirectly provides an assessment of the extraction performance of the compound in said polymer. An illustration is provided in Figure 8 to clarify the degradation processes, showing how the breaking of the P-O bond and of the tert-butyl groups of the diphenyl moieties complicates the complete recovery of Irgafos P-168. This difficulty arises because part of the molecule is lost through fragmentation, leading to the formation of molecules with properties different from those of the original antioxidant. Additionally, Table 2 presents a detailed degradation profile of Irgafos P-168. This profile includes the recovery percentage of each compound obtained through the fragmentation of the crucial bonds present in the compound's structure. This detailed information provides a more comprehensive view of the resulting degradation products and their respective recovery rates, contributing to a deeper understanding of the effects of degradation on the original compound.
Determination of the Thermo-Oxidative Degradation Products of Irgafos P-168 Based on the results obtained previously, we present the possible mechanisms of the degradation of Irgafos P-168 in Figures 9-14, which show the formation processes of each of the products resulting from thermo-oxidative degradation. All these mechanisms share the characteristic of developing under conditions involving an abundance of hydrogen and oxygen radicals, which occur at the tertiary carbons present in the polypropylene structure [34][35][36]. Mechanism of the Phosphate Product of Irgafos P-168 In Figure 9, the mechanism carried out in the first two stages involves the steps common to all the other degradation products, since it shows how the hydrogen radicals that cause the degradation of Irgafos P-168 are formed. First, a homolytic cleavage occurs at the tertiary carbon of C-PP/PE, caused by temperature and the presence of the peroxide bond (O-O), so that this carbon undergoes oxidation while the hydrogen radical (H•) is stabilized by binding to oxygen. Subsequently, a homolytic cleavage occurs again at the peroxo bond (O-O), which on this occasion generates a hydroxyl radical (OH•) that attacks the phosphorus of Irgafos P-168, generating a double bond with it; once again, a homolytic cleavage by hydrogen stabilizes the carbon of the polymer chain. A subsequent cleavage of the P-O bond forms an alkoxyl radical (RO•). Simultaneously, this alkoxyl radical (RO•) is stabilized by bonding with a hydrogen radical (H•), forming a new alcohol bond (O-H) that generates the product of interest. Furthermore, due to the complexity of this type of molecule, other possible degradation products result.
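The net effect of the radical sequence above can be condensed into a single line: the phosphite consumes a hydroperoxide formed on the polypropylene backbone and is converted into the corresponding phosphate. This one-line summary reflects the generally accepted stabilizing action of aryl phosphites and is offered here only as a reading aid; it is not reproduced from the paper.

$$\mathrm{ROOH} + (\mathrm{ArO})_3\mathrm{P} \;\longrightarrow\; \mathrm{ROH} + (\mathrm{ArO})_3\mathrm{P{=}O}$$

where R denotes the polypropylene chain and Ar the 2,4-di-tert-butylphenyl group.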
Mechanism of Formation of the Bis(di-tert-butylphenyl) Phosphate Product

In Figure 11, the degradation product Irgafos P-168 Phosphate is again the starting point. The process begins with a homolytic cleavage between the carbon of the R group and the oxygen, which releases the alkyl group as a radical (R•) and leaves the oxygen attached to the phosphorus atom as another radical. At this point, hydrogen radicals (H•) stabilize both radicals, forming Bis(di-tert-butylphenyl) phosphate.

Mechanism of Formation of the Mono(di-tert-butylphenyl) Phosphate Product

In Figure 12, the formation mechanism begins from the previous product, Bis(di-tert-butylphenyl) phosphate; in the presence of hydrogen radicals and high temperature, a homolytic cleavage occurs between the carbon of the R group and the oxygen, releasing another R group, and both fragments are stabilized with hydrogen radicals. The degradation product of interest and a di-tert-butylphenyl group are obtained.

Mechanism of 2-Tert-butylphenol Product Formation

In Figure 13, starting this time from the product 2,4-di-tert-butylphenol and under the same conditions set out above, a homolytic cleavage occurs at the tert-butyl group in the para position, so the degradation product of interest and a tert-butyl group are obtained.
Mechanism for Obtaining the Product 4-Tert-butylphenol

The last mechanism, illustrated in Figure 14, forms the 4-tert-butylphenol product through the loss of the tert-butyl group in the ortho position. It begins with the 2,4-di-tert-butylphenol molecule, which undergoes a homolytic cleavage of the bond between the ring carbon and the tert-butyl carbon, yielding the product of interest, 4-tert-butylphenol, and a tert-butyl group.
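To connect the named products to measurable quantities (e.g., for matching GC-MS peaks), their nominal molecular weights can be computed from structure. The sketch below uses RDKit; the SMILES strings are the editor's renderings of the compounds named above, not structures taken from this paper.

# Sketch: nominal structures and molecular weights of the degradation products
# discussed above. SMILES are the editor's renderings of the named compounds.
from rdkit import Chem
from rdkit.Chem import Descriptors

AR = "Oc1ccc(C(C)(C)C)cc1C(C)(C)C"  # 2,4-di-tert-butylphenol (the aryl unit)

compounds = {
    "Irgafos P-168 (phosphite)":
        "P(Oc1ccc(C(C)(C)C)cc1C(C)(C)C)(Oc1ccc(C(C)(C)C)cc1C(C)(C)C)"
        "Oc1ccc(C(C)(C)C)cc1C(C)(C)C",
    "Irgafos P-168 Phosphate":
        "O=P(Oc1ccc(C(C)(C)C)cc1C(C)(C)C)(Oc1ccc(C(C)(C)C)cc1C(C)(C)C)"
        "Oc1ccc(C(C)(C)C)cc1C(C)(C)C",
    "Bis(di-tert-butylphenyl) phosphate":
        "O=P(O)(Oc1ccc(C(C)(C)C)cc1C(C)(C)C)Oc1ccc(C(C)(C)C)cc1C(C)(C)C",
    "Mono(di-tert-butylphenyl) phosphate":
        "O=P(O)(O)Oc1ccc(C(C)(C)C)cc1C(C)(C)C",
    "2,4-di-tert-butylphenol": AR,
    "2-tert-butylphenol": "Oc1ccccc1C(C)(C)C",
    "4-tert-butylphenol": "Oc1ccc(C(C)(C)C)cc1",
}

for name, smi in compounds.items():
    mol = Chem.MolFromSmiles(smi)  # parse the structure
    print(f"{name}: MW = {Descriptors.MolWt(mol):.1f} g/mol")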
Validation of Proposed Mechanisms

It is widely recognized that quantum chemistry offers an effective means of understanding the processes occurring in chemical reactions, allowing the calculation of charge distributions, molecular properties, and the potential energy surfaces associated with these reactions. Numerous studies have documented the utility of Density Functional Theory (DFT) as a powerful tool for predicting the trajectories, kinetics, and secondary products of compounds of interest under specific environmental conditions. We therefore employed computational tools to validate the formation of the degradation products proposed in the preceding section, further supporting our conclusions and enhancing our understanding of the underlying processes in the studied chemical reactions.

Based on the results provided in Table 3, all analyzed formation mechanisms and products exhibit negative values for both the Gibbs free energy change (∆G) and the enthalpy change (∆H), indicating that the reactions under study are thermodynamically favorable. Significant differences are observed between the ∆G and ∆H values of the different mechanisms, with the mechanism for obtaining the 4-tert-butylphenol product standing out as the most spontaneous, having the most negative ∆G. The close relationship between ∆G and ∆H suggests that enthalpy plays an important role in the spontaneity of these reactions, although other factors also have an influence.
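For reference, the spontaneity criterion applied to Table 3 follows the standard thermodynamic relations (a textbook statement, not an equation reproduced from this paper):

% Reaction thermodynamics used to judge the proposed mechanisms:
\Delta G_{\mathrm{rxn}} = \sum_{\mathrm{products}} \Delta G_f \;-\; \sum_{\mathrm{reactants}} \Delta G_f ,
\qquad
\Delta G = \Delta H - T\,\Delta S .
% A mechanism is thermodynamically favorable (spontaneous) when \Delta G_{\mathrm{rxn}} < 0;
% when |T\Delta S| is small, \Delta H dominates, which is why \Delta G and \Delta H
% track each other closely in Table 3.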
Percentage Analysis of the Degradation Products of Irgafos P-168

Figure 15 interprets the variations in Irgafos P-168 concentrations, illustrating the significant impact of the different extraction methods, solvents, and forms of C-PP/PE. All degradation products reached their highest percentages when the microwave extraction technique was used in combination with dichloromethane and C-PP/PE in the form of films. Although this technique involves short times and low temperatures and, in theory, should not cause significant structural changes, the greater extraction of degraded products may be because the samples were exposed to higher temperatures during the earlier preparation stages, which could have contributed to the degradation of the Irgafos P-168. The form of the C-PP/PE also clearly influences the amount of degraded products. Notably, the highest percentages of degradation products were found in C-PP/PE films, while the pellet and ground forms showed lower concentrations of these products. Film structures are less dense and more permeable, facilitating oxygen diffusion and, therefore, its reaction with Irgafos P-168. Films are also less protected against environmental factors such as sunlight and humidity, which can accelerate degradation processes. Furthermore, C-PP/PE films are more susceptible to mechanical stress during handling, owing to stretching or deformation during manufacturing, which increases the vulnerability of Irgafos P-168 to degradation.
Interestingly, the most predominant degradation product was Mono(di-tert-butylphenyl) phosphate, followed by Bis(di-tert-butylphenyl) phosphate. This is noteworthy because, according to the literature, when Irgafos P-168 degrades, Irgafos P-168 Phosphate and 2,4-di-tert-butylphenol are usually the products with the highest percentages [36,37]. This discrepancy raises essential questions about the exact degradation mechanisms in the presence of different solvents.
On the other hand, dichloromethane demonstrated notably higher recovery rates of degradation products than Limonene in all the techniques used. Again, Limonene, as a green solvent option, shows its effectiveness by yielding lower percentages of degradation products while extracting acceptable amounts of Irgafos P-168. These results highlight the complexity of the interactions between Irgafos P-168, the solvents, and the extraction conditions, and the need to find an optimal balance between efficient additive recovery and minimal degradation. They also underline the critical importance of carefully selecting extraction conditions to preserve the integrity of the additive, and the need for continued research to improve extraction techniques and minimize the degradation of polymers and their additives.
Conclusions

The results obtained in this study show that the microwave extraction technique surpasses the ultrasound and Soxhlet techniques in effectiveness, reducing extraction times and increasing the recovery efficiency of the compound of interest. Furthermore, ground C-PP/PE leads to notable improvements in the recovery of Irgafos P-168 compared with C-PP/PE in the form of films or pellets. These improvements are attributed to the advantages inherent in using ground C-PP/PE, such as a substantial increase in contact surface, greater permeability, the presence of smaller particles, and greater homogeneity in the composition of the material. More precisely, applying microwaves for 45 min, with dichloromethane as the solvent and ground C-PP/PE as the substrate, achieves the maximum recovery of Irgafos P-168, 96.07%; with the same 45 min duration, Limonene and ground C-PP/PE achieve a recovery of 92.83%; and the ultrasound extraction technique for 90 min with ground C-PP/PE yields 91.74% with dichloromethane and 89.71% with Limonene. The Soxhlet extraction technique, with a duration of 1440 min and ground C-PP/PE, gives the lowest recoveries: 78.64% with dichloromethane and 76.66% with Limonene. These results underline that the microwave extraction technique combined with ground C-PP/PE is the best choice, providing the highest recovery percentages at noticeably shorter extraction times. Although dichloromethane exhibits some advantages in terms of recovery, Limonene is a viable alternative solvent, providing additional benefits such as lower toxicity and reduced environmental impact.
Figure 2. Preparation of Irgafos P-168 standard samples.

Figure 3. Preparation of C-PP/PE samples with different concentrations of Irgafos P-168. The samples were prepared as follows: (1) 0.0, 0.5, 1.0, 1.5, and 2.0 g of Irgafos P-168 were weighed individually. (2) To each quantity of Irgafos P-168, 1 kg of virgin C-PP/PE resin was added. (3) The mixtures were premixed with a standard Prodex Henschel 115JSS mixer at 800 rpm for 7 min. (4) Each sample was then mixed in a Welex-200 24.1 extruder equipped with five temperature zones along its path; the temperatures used were 190, 195, 200, 210, 210, and 220 °C, which guaranteed uniform distribution of the mixture. (5) Finally, 20 g of each melt was fed into a CARVER 3895 hot press and compressed into films 300 mm in diameter with a thickness of ≈100 µm. The resulting films were identified as C-PP/PE (0 ppm of Irgafos P-168), C-PP/PE 2 (500 ppm), C-PP/PE 3 (1000 ppm), C-PP/PE 4 (1500 ppm), and C-PP/PE 5 (2000 ppm).

Figure 4. Extraction of Irgafos P-168 by Soxhlet, ultrasound, and microwave, and quantification by GC-MS.

Figure 5. Graphic representation of the extraction of Irgafos P-168 with different techniques, different solvents, and different types of C-PP/PE. Concentrations are expressed in parts per million (ppm); percentages indicate recovery relative to an initial concentration of 500 ppm.

Figure 12. Mechanism of formation of Mono(di-tert-butylphenyl) phosphate.

Figure 14. Mechanism of the formation of the 4-tert-butylphenol product.

GC-MS method note: the total analysis execution time was 42 min; helium was used as carrier gas at a constant flow of 1.0 mL per minute.

Table 1. Experimental design of the extraction of Irgafos P-168 with different extraction techniques, different solvents, and different types of C-PP/PE.

Table 2. Degradation profile of Irgafos P-168 in different solvents.

Table 3. Gibbs free energy and enthalpy for the proposed degradation mechanisms.
2024-04-24T15:02:59.737Z
2024-04-21T00:00:00.000
{ "year": 2024, "sha1": "4ef99ffff26b9e78dbde742cfd90518035c65376", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2504-477X/8/4/156/pdf?version=1713666580", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1c01cccb5f8db55f0be6d32a1dcf6d586a7a48a4", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [] }
236772645
pes2o/s2orc
v3-fos-license
Cybonto: Towards Human Cognitive Digital Twins for Cybersecurity

Cyber defense is reactive and slow. On average, the time-to-remedy is hundreds of times larger than the time-to-compromise. In response to the expanding, ever-more-complex threat landscape, Digital Twins (DTs) and particularly Human Digital Twins (HDTs) offer the capability of running massive simulations across multiple knowledge domains. Simulated results may offer insights into adversaries' behaviors and tactics, resulting in better proactive cyber-defense strategies. For the first time, this paper solidifies the vision of DTs and HDTs for cybersecurity via the proposed Cybonto conceptual framework. The paper also contributes the Cybonto ontology, formally documenting 108 constructs and thousands of cognition-related paths based on 20 time-tested psychology theories. Finally, the paper applied 20 network centrality algorithms in analyzing the 108 constructs. The identified top 10 constructs call for extensions of current digital cognitive architectures in preparation for the DT future.

I. INTRODUCTION

Humans are recognized to be the weakest link in the cybersecurity defense chain [1], [2]. Insider-threat incidents cost both small and large companies billions of dollars annually [3]. Cyber defenders are reactive and slow: on average, hackers need 15 hours to compromise a system, while defenders need 200 to 300 days to discover a breach [2]. Meanwhile, the cybersecurity threat landscape keeps expanding, and cyber defenders respond by enlisting interdisciplinary knowledge from numerous fields such as mathematics, psychology, and criminology [4], [5], [6], [2]. In such a climate, Digital Twins (DTs), and especially Human Digital Twins (HDTs), offer the capability of running large-scale simulations across multiple knowledge domains to improve proactive cyber-defense strategies. Digital Twins are computational models of physical systems, including humans. The DT market is growing rapidly at a compound annual rate of 45.4% [7]. Notably, massive DT projects such as the British National Digital Twin [8] are being built. Within these intertwined DT networks, individual smart DTs such as HDTs should be capable not only of executing mimetic behaviors but also of local and global awareness, self-learning, and self-optimization [7]. For the first time, this paper proposes a grounded vision of how DTs and HDTs can be applied to cybersecurity. The main goal is to make the case for expanding the current digital cognitive architectures that will be at the heart of future HDTs. The paper unified the twenty most cybersecurity-relevant of over seventy behavioral psychology theories. The theory-informed knowledge and other cybersecurity constructs were then encoded as the novel Cybonto ontology, and analysis of the Cybonto ontology informed the Cybonto conceptual framework. The key contributions are as follows. The Cybonto conceptual framework solidifies the vision of how human cognitive digital twins and digital twin systems can be leveraged to design proactive cybersecurity strategies. The Cybonto ontology provides research-based guidance on 108 constructs and thousands of possible paths among them. Analyzing the ontology's cognitive core using more than 20 network centrality algorithms yields Behavior, Arousal, Goals, Perception, Self-efficacy, Circumstances, Evaluating, Behavior-Controllability, Knowledge, and Intentional Modality as the top 10 most influential constructs.
These results call for the expansion of current cognitive architectures to better fit their future employment in DT systems.

II. LITERATURE REVIEW

The concept of HDTs previously appeared in human-computer interaction studies. In comparison with traditional models, HDTs for digital twin systems have broader scopes, with emphasis on both behavioral and cognitive activities. The work of Somers et al. is an excellent example, in which an HDT acts as a sensible personal assistant in organizing social events [9]. Notably, the HDT did not explicitly ask potential event participants for their preferences. Instead, it observed the people's social dimensions and then modeled the cognitive processes underlying an expert event planner's decisions. Zhang et al. [10] describe HDTs' self-awareness as a continuous process that involves dynamic knowledge acquisition and utilization; numerous feedback loops will be needed. Well-designed ontologies are essential for information exchanges among different models [11], [12]. Compared with an application ontology, a reference ontology is supposed to be much more canonical and reusable [13]. Ontological reusability begins with the adoption of a top-level ontology. Key papers in cognitive frameworks and cybersecurity ontologies are as follows.

A. Cognitive Frameworks

ACT-R [14] is representative of the psychological-modeling group, with Clarion and Epic as other members. SOAR [15] is representative of the agent functionality-focused group, which also includes Sigma, Lida, Icarus, and Companions. ACT-R and SOAR differ on architectural constraints, memory retrieval, conflict-resolution strategies, and exhaustive processing [16]. ACT-R's sequential architecture forces developers to watch out for bottlenecks, while SOAR's parallel architecture is more relaxed [16]. ACT-R provides two options for resolving conflicts, while SOAR offers none. Both SOAR and ACT-R share the same general cognitive cycle and common architectural modules such as perception, short-term memory, declarative learning, declarative long-term memory, procedural long-term memory, procedural learning, action selection, and action. While ACT-R, SOAR, and other cognitive systems rely on symbolic input/output and rule databases, their symbols may contain statistical metadata, and their architectures do allow the integration of deep learning systems.

B. Cybersecurity Ontologies

Ontologies are essential for symbolic operations, the building of a knowledge base, and explainability. Ontologies can be manually built from scratch [17], [18] or automatically extracted [19], [20]. DOLCE (https://lnkd.in/gTFR8Wt) vs. BFO (https://basic-formal-ontology.org/) highlights the importance of ontological commitments: DOLCE is grounded in natural language, while BFO is grounded in the real world [21]. Because objects can be conceptual or actual in a language-based ontology, there is always a risk of one actual object being recognized as two or more different conceptual objects. Oltramari et al. [22] introduced Cratelo, which has DOLCE as its top-level ontology; the ontology's human behavioral structures are confined within the cyber-operation scope. Costa et al. [23] used a natural language processing approach in building their Insider Threat Indicator Ontology (ITIO). The ontology inherited considerable amounts of language ambiguity and did not support the identification of deeper behavioral structures. In 2019, Greitzer et al.
[24] built upon their 2016 work and introduced the Sociotechnical and Organizational Factors for Insider Threat (SOFIT). Due to the absence of a top-level ontology and a behavioral language that leans heavily towards organizational insider-threat activities, SOFIT is an application ontology rather than a reference ontology. Greitzer et al. [24] also admitted that ontology validation exercises only covered 10% of the ontology. Meanwhile, Donalds and Osei-Bryson [25] reported that cybersecurity ontologies have been insufficient due to fragmentation, incompatibility, and inconsistent use of terminologies. The team proposed a cybercrime classification ontology structured around attack events [25]. While the ontology provides a holistic, multi-perspective view of cybercrime attacks, its behavioral components are limited and lack theoretical grounding.

C. Open Problems

While massive DT projects are underway, digital cognitive twin development pales in comparison, and HDTs for cybersecurity are non-existent. This paper examined both the ACT-R and SOAR published research repositories and found no dedicated cybersecurity track with topics such as cybersecurity, online ethical decisions, cyber criminology, or cyber attack/defense simulations. There is no grounded vision of how powerful DT systems with HDTs may improve proactive cybersecurity defenses. Recommended exploratory questions are: (i) What are the types of HDT (malicious hackers, groups as a single HDT, defenders, etc.) to be built? (ii) What will HDT-for-cybersecurity feedback loops look like? (iii) How will existing cognitive architectures be extended to best facilitate those feedback loops? (iv) What shall we learn from our continuous observation of those HDTs? Current cybersecurity-related cognitive models focus on narrow use cases and are far from HDTs that can automatically interact with other DTs while building up their own awareness. The main reason is that existing cognitive architectures do not provide enough granularity. This leads to further problems with multi-modal understanding and meta-cognition. For example, current long-term memory architectures can be further divided into experiences and beliefs. It is possible for two persons sharing a strong belief to have different interpretations of the same data; one may be significantly influenced by a past experience. Additionally, having access to too much data due to lack of granularity will lead to cognitive bottlenecks at the system level. Deciding which chunks of data should be loaded, excluded, or permanently erased from memory remains a challenge. Finally, we do not have a reference ontology for documenting and sharing behavioral-cybersecurity knowledge. Existing cybersecurity ontologies that have behavioral components are mostly application ontologies with no or weak ontological commitments. Such ontologies will not be fit for use in massive and complex DT systems.

III. THE CYBONTO CONCEPTUAL FRAMEWORK

The novel Cybonto conceptual framework aims to provide general directions for answering the previously mentioned questions regarding the vision of DTs and HDTs for cybersecurity. The framework targets the cognitive process of a malicious actor as an HDT within a DT system. The cognitive space is defined by the behavioral/cognitive component of the Cybonto ontology. The action space is limited by the HDT's set of encoded actions, its ability to improvise new moves, and the other DTs' interaction interfaces.
In the beginning, fifty theories were picked from the behavioral/cognitive psychology body of knowledge. Each theory was ranked based on its ability to generate research, its relevance to cybersecurity and criminology, and its consistency. Table I presents the top 25 theories. Each theory was then codified into tuples of (entity, "influence" relationship, entity). The combination of the 20 codified top theories formed the Cybonto cognitive core ontology, with over 100 constructs. A full description of Cybonto in an RDF store, a Neo4j graph database, and other documentation are available in the Cybonto-1.0 GitHub repository. The Cybonto conceptual framework was formed upon analysis of the Cybonto ontology. Figure 1 presents the Cybonto conceptual framework with three environment types and four groups of digital twins (DTs). The internal environment (INE) is private to each DT; it contains both cognitive and non-cognitive components. Opposite to the internal environment is the societal environment (SOE), where everything is public. In between, the in-group environment (IGE) connects the INE with the SOE. All environments follow Bronfenbrenner's Ecological Systems Theory [26], which describes influences as progressive, varying, and reciprocal forces among individuals and environments. For example, a seemingly distant public event may still be able to affect certain private mental processes. The IGE and the SOE are relative to the targeted HDT. The IGE is equivalent to Bronfenbrenner's micro- and meso-systems; the microsystem is the most influential external environment, with members such as family, close friends, school, lovers, and mentors. The SOE is equivalent to Bronfenbrenner's exo-, macro-, and chrono-systems. The Cybonto conceptual framework requires four representatives from four DT groups. We need one attacker HDT and one defender HDT. Unlike traditional models, to which data and feature specifications were explicitly fed, an attacker HDT must collect the data by itself. Group-related data cannot be inferred if the fundamental group structure is not present; hence, we need at least two more DTs to represent IGE and SOE identities. An HDT can perform two main types of behaviors: artifact-creating/altering behavior and non-artifact behavior. An artifact can range from a piece of code to a complex non-cognitive digital twin. Viewing a malware's code is a non-artifact behavior, while running the code can be an artifact-altering behavior if the code makes changes to other artifacts. The perceptual layer sits on the border between the internal and external environments (IGE and SOE). Different perceptual layers, in combination with different cognitive systems, will have different perceptions of the same data streams. Refined perceptions constitute only a small part of a digital cognitive system; the Cybonto ontology details thousands of cognitive paths for processing initial perceptions. The result of a cognitive processing chain will be either a non-artifact behavior or an artifact-creating/altering behavior. The behaviors (data streams) will be observed by other HDTs, and a new round of feedback loops begins. It is essential to note that a behavior can be kept secret within the in-group environment. In this framework: (a) HDTs have complete freedom to interact with other DTs per published protocols, and automatically seek whatever data is made available to them. (b) By releasing their behaviors, HDTs generate new data, which may then be consumed by other HDTs.
(c) The cognitive architecture within each HDT determines its cognitive capabilities, which should include awareness and adaptation. (d) The objectives of Cybonto DT simulations should be more about discovering new knowledge (the Why and How) than about mining specific data (the What).

IV. ANALYSIS OF THE CYBONTO ONTOLOGY

Cybonto elected the Basic Formal Ontology (BFO) as its top-level ontology from more than thirty candidates. BFO is the only top-level ontology that adopts materialism, commits to actual-world possibilia, and has an intensional criterion of identity. The Cybonto Core (the behavioral/cognitive component) is grounded further by employing the Mental Functioning (MF) ontology as its mid-level ontology. MF follows best practices outlined by the OBO Foundry and aligns with other projects in the Cognitive Atlas, a state-of-the-art collaborative knowledge base in Cognitive Science [27]. Materialism views the world as a collection of materialized objects existing in space and time [21], a core principle in DT strategies. Committing to materialism through BFO offers a fundamental distinction in the way Cybonto represents mental constructs. For centuries, cognitive activities were considered abstract particulars that could only be described through language; this tradition is the reason why most behavioral components in cybersecurity ontologies are language-based. Recent breakthroughs in brain-machine interfaces, such as those of Neuralink [28], enable measurements of brain activities that correspond to certain cognitive constructs. Therefore, it is now possible to ground behavioral/cognitive ontologies in materialism. Cybonto rejects conceptual objects, different linguistic descriptions of the same actual objects, process-based objects, and qualitative object labels that cannot be measured in real life. Figure 2 shows the network of Cybonto's horizontal relationships. Each node's size equals the log scale of the node's PageRank; a darker link color indicates a higher link value. Nodes were automatically arranged in a multi-circle layout, with higher betweenness-centrality nodes closer to the center. Figure 3 shows the most popular entities based on different network centrality scores. Top Authority Central (AC) constructs receive influence from the constructs that have the most influence on others. Top Betweenness Central (BC) constructs are the ones that sit on the shortest paths among other constructs; BC constructs can serve either as bridges or as gatekeepers of other constructs and processes. Top Eigenvector Central (EC) constructs are the leaders of their own cliques. A clique is a group of constructs in which each member has relationships with the others; in the context of the cognitive digital twin, a clique may represent a strong cognitive/behavioral pattern. Not only are the top EC constructs well connected with their own clique members, they also have relationships with other cliques. The top 10 constructs across 20 network centrality measures are Behavior, Arousal, Goals, Perception, Self-efficacy, Circumstances, Evaluating, Behavior-Controllability, Knowledge, and Intentional Modality. In this list, only Behavior, Goals, Perception, Evaluating, and Knowledge are parts of existing digital cognitive architectures, although some are not explicitly implemented. It is possible that, before this study, influential cognitive structures had been studied in independent use cases and thus could not collectively attract attention from conservative cognitive system designers.
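As a sketch of how such a multi-measure ranking can be reproduced: the theory-codified (entity, "influence", entity) tuples define a directed graph over which centrality measures are computed and aggregated. The triples below are hypothetical examples, and the Borda-style rank aggregation is the editor's illustrative choice, not the paper's exact 20-algorithm procedure.

# Sketch: build a Cybonto-style influence graph from (entity, influence, entity)
# tuples and rank constructs across several centrality measures. Illustrative
# triples and rank aggregation; NetworkX offers a representative subset of the
# 20 algorithms the paper reports.
import networkx as nx

triples = [  # hypothetical excerpt of theory-codified influences
    ("Self-efficacy", "influences", "Goals"),
    ("Goals", "influences", "Behavior"),
    ("Arousal", "influences", "Evaluating"),
    ("Perception", "influences", "Knowledge"),
    ("Knowledge", "influences", "Behavior"),
    ("Circumstances", "influences", "Perception"),
]

G = nx.DiGraph()
for head, _, tail in triples:
    G.add_edge(head, tail)  # directed "influence" edge

measures = {
    "pagerank": nx.pagerank(G),
    "betweenness": nx.betweenness_centrality(G),
    "in_degree": nx.in_degree_centrality(G),
    "out_degree": nx.out_degree_centrality(G),
}

# Borda-style aggregation: sum each node's rank position across measures
# (lower total = more consistently central).
scores = {n: 0 for n in G}
for vals in measures.values():
    ranked = sorted(vals, key=vals.get, reverse=True)
    for pos, node in enumerate(ranked):
        scores[node] += pos
top = sorted(scores, key=scores.get)[:10]
print("Top constructs across measures:", top)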
Now, with a bird's-eye view across 20 behavioral theories, these top 10 constructs deserve better attention. Within cognitive architectures, we may consider implementing Goals, Knowledge, Perception, and Evaluating explicitly and with finer granularity. For example, Perception is more than short-lived sensory perception: Alice may perceive Bob as a nice guy, and such a perception persists whether Bob is with Alice or not. Additionally, we should consider adding Arousal and Intentional Modality. Although Arousal is a non-cognitive construct, it is ranked second and influences several cognitive constructs within the top 10, such as Evaluating and Intentional Modality. Unfortunately, the current state of research regarding Arousal as a part of a digital cognitive process is almost non-existent: SOAR-related research results show a few papers studying the effects of general emotions, and the ACT-R research repository shows just four papers studying the effects of Arousal on memory management. Circumstances is another non-cognitive construct with significant influence on behavioral outcomes. The paper recommends expanding the Mental Image module in existing cognitive architectures to include non-physical environment variables such as urgency, group dynamics, and social sentiments. Finally, the paper recommends a new component, Imagining, to enable the HDT to run its own situational simulations and reason about possible circumstances.

V. CONCLUSION

Once massive non-cognitive digital twin systems are brought online, adding human cognitive digital twins will be the only logical next step. The vision of letting human digital twins run free in a digital twin world (and observing them) is realistic and offers a new paradigm in knowledge mining. The Cybonto conceptual framework demonstrates how such an ecosystem can be leveraged for shaping proactive cybersecurity defense strategies. Notably, HDTs are fundamentally different from deep learning models. Most cognitive systems can combine human cognitive reasoning (symbolic) with deep learning models (sub-symbolic). Cognitive reasoning with good enough granularity and a well-designed ontology allows us to observe and, more importantly, to understand what the digital twins are doing. Hence, the paper also proposes the Cybonto ontology as a set of specific recommendations on how existing cognitive systems can be expanded. Future work may involve further framework development, fine-tuning and expanding the ontology, and building a malicious HDT for demonstration purposes.
2021-08-03T17:07:14.101Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "fa502846be44d653d402d9343249f10babdd5e0f", "oa_license": null, "oa_url": "https://psyarxiv.com/2rbku/download", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0cf33636637805e62b1fa96e8f747ea4f6ad7c9a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
118536586
pes2o/s2orc
v3-fos-license
Variability of Solar Five-Minute Oscillations in the Corona as Observed by the Extreme Ultraviolet Spectrophotometer (ESP) on the Solar Dynamics Observatory Extreme Ultraviolet Variability Experiment (SDO/EVE)

Solar five-minute oscillations have been detected in the power spectra of two six-day time intervals from soft X-ray measurements of the Sun observed as a star using the Extreme Ultraviolet Spectrophotometer (ESP) onboard the Solar Dynamics Observatory (SDO) Extreme Ultraviolet Variability Experiment (EVE). The frequencies of the largest-amplitude peaks were found to match, within 3.7 µHz, the known low-degree (ℓ = 0–3) modes of global acoustic oscillations, and can be explained by a leakage of the global modes into the corona. Due to the strong variability of the solar atmosphere between the photosphere and the corona, the frequencies and amplitudes of the coronal oscillations are likely to vary with time. We investigate the variations in the power spectra for individual days and their association with changes of solar activity, e.g. with the mean level of the EUV irradiance and its short-term variations due to evolving active regions. Our analysis of samples of one-day oscillation power spectra for a 49-day period of low and intermediate solar activity showed little correlation with the mean EUV irradiance and the short-term variability of the irradiance. We suggest that some other changes in the solar atmosphere, e.g. magnetic fields and/or the inter-network configuration, may affect the mode leakage to the corona.

Introduction

The frequencies of individual resonant acoustic modes (Claverie et al., 1979) that are excited by turbulent convection are stable within about 0.4 µHz on the time scale of the solar cycle, as they correspond to intrinsic phase relations of resonant waves in the solar interior. Five-minute oscillations with frequencies centered at about 3.3 mHz are trapped below the solar photosphere. However, a number of observations in photospheric and chromospheric lines, in the UV passbands, and in the coronal Fe xvi line (33.5 nm) demonstrate some leakage of these oscillations into the upper layers of the solar atmosphere, e.g. Judge et al. (2001), O'Shea et al. (2002), McIntosh et al. (2003), Muglach (2003), De Pontieu et al. (2004). This leakage may be explained by the increased amplitude of the oscillations due to the rapid density decrease, e.g. Gough (1993), and/or by interaction with the network magnetic elements, which can channel the photospheric acoustic power to higher atmospheric layers at frequencies below the cutoff (Vecchio et al., 2007). For a discussion of these observations see Didkovsky et al. (2011). The acoustic response of the atmosphere to a single point-source driver has also been investigated. Malins and Erdelyi (2007) used a numerical simulation to show that widely horizontally coherent velocity signals from p-modes may cause cavity modes in the chromosphere and surface waves in the transition region, and that fine structures are generated extending from a dynamic transition region into the lower corona, even in the absence of a magnetic field. A detection of the response of the corona to the observed photospheric low-degree (ℓ = 0–3) p-modes was reported by Didkovsky et al. (2011). The authors studied the oscillation power spectra of two six-day-long time series using the soft X-ray band-pass from the Extreme Ultraviolet Spectrophotometer (ESP) (Didkovsky et al., 2012) onboard the Solar Dynamics Observatory (SDO) Extreme Ultraviolet Variability Experiment (EVE: Woods et al., 2012).
The largest-amplitude peaks in the five-minute spectral region were compared with the low-degree photospheric p-modes observed in Doppler velocity by the Birmingham Solar Oscillation Network (BiSON: Chaplin et al., 1998) and in visible-light intensity (red channel) by the SOHO/VIRGO instrument (Andersen, 1991; Frohlich et al., 1997). This comparison showed that the frequencies of the coronal oscillations may deviate from the frequencies determined from the photospheric observations, which can be explained by the significant influence of the non-uniform distribution of the irradiance sources and the variability of the upper atmosphere (Didkovsky et al., 2011). The mean standard deviation of the coronal frequencies from the photospheric p-mode frequencies was ≈ 3.7 µHz in the frequency range of 2.4 to 3.6 mHz, which is about two times larger than the uncertainty of the peaks in the power spectrum determined from the six-day time series. This deviation was also confirmed by comparing the power spectra for two consecutive six-day time series with the spectrum for the combined 12-day period: the power spectrum for a single six-day time series showed more significant peaks, with better correspondence to the photospheric p-mode spectrum, than the combined 12-day spectrum. However, it was not clear whether the observed coronal oscillations were transmitted but distorted photospheric p-modes, or whether these oscillations were excited in the corona by localized impulsive perturbations related to solar-activity processes (e.g. the modeling of Goode et al. (1992), Andreev and Kosovichev (1995), Andreev and Kosovichev (1998), and Bryson et al. (2005)). In the follow-up work presented here, we investigate whether the oscillations were excited in the corona by impulsive sources of solar activity (e.g. by flares) by studying observational data during higher solar activity. If they are caused by solar activity, then this study could reveal a correlation between the appearance of the coronal five-minute oscillations and such activity. In contrast, if the observed coronal oscillations (Didkovsky et al., 2011) were related to the transmission of photospheric p-modes through the upper atmosphere, then observations made during higher solar activity may reveal that solar-irradiance variability, and the significant increases of the soft X-ray irradiance during solar flares, add solar "noise" to the data time series, and low-amplitude oscillation peaks in the power spectra may be masked by these noise peaks. In this work we study the variability of coronal five-minute oscillations by analyzing 49 one-day power spectra for various solar-activity observing conditions, ranging from the lowest solar activity observed during the SDO mission in the middle of May 2010 to intermediate solar-activity levels in 2011. As in Didkovsky et al. (2011), we use soft X-ray observations without spatial resolution in the zeroth-order channel of SDO/EVE/ESP.

SDO/EVE/ESP Channels

EVE is one of three instrument suites on SDO. It provides solar EUV-irradiance measurements that are unprecedented in terms of spectral resolution, temporal cadence, accuracy, and precision. Furthermore, the EVE program will incorporate physics-based models of solar EUV irradiance to advance the understanding of solar dynamics based on short- and long-term activity of solar magnetic features. ESP (Didkovsky et al., 2012) is one of five channels in the EVE suite. It is an advanced version of the SOHO/CELIAS/SEM (Hovestadt et al., 1995; Judge et al., 1998).
ESP is designed to measure solar EUV irradiance in four first-order bands of the diffraction grating, centered around 19 nm, 25 nm, 30 nm, and 36 nm, and in a soft X-ray band from 0.1 to 7.0 nm (the energy range is 0.18 to 12.4 keV) in the zeroth order of the grating. Each band's detector system converts the photo-current into a count rate (frequency). The count rates are integrated over 0.25-second increments and transmitted to the EVE Science and Operations Center for data processing. An algorithm for converting the measured count rates into solar irradiance and the ESP calibration parameters are described by Didkovsky et al. (2012).

Observations

Our analysis of the ESP measurements was based on datasets for a long series of observations covering a much wider range of solar activity than the two six-day time intervals analyzed by Didkovsky et al. (2011). Due to the high sensitivity of the ESP zeroth-order soft X-ray signal to solar activity, and because of significant contamination of the power spectra in the five-minute band by the impulsive increases in irradiance during solar flares, we used data from periods of small-to-intermediate solar activity, without strong solar flares. Based on these conditions, five data intervals were chosen (Table 1, Figure 1). Figure 1 shows the five data intervals used for this analysis (thick horizontal bars) against the background of the variations of soft X-ray solar irradiance related to the rising phase of the solar activity cycle. Intervals one to three were chosen to represent the lowest periods of solar irradiance observed by SDO in 2010, with mean irradiances of 0.137, 0.181, and 0.160 mW m⁻², respectively. The fourth time interval was chosen between two periods of relatively high solar activity (Figure 1), with a mean solar irradiance of 0.531 mW m⁻². The fifth time interval represents a return to relatively low solar activity, with a mean irradiance of 0.276 mW m⁻². Thus, these five time intervals cover a wide range of solar conditions for periods of decreased solar activity. To establish a point of reference with our previous analysis, the first time interval matches the two six-day intervals analyzed previously (Didkovsky et al., 2011). Three of the time intervals (the second, third, and fifth) correspond to the lower-irradiance "spots" on the irradiance curve (Figure 1) with decreased solar activity, while the fourth time interval includes some C- and M-class solar flares, providing some stronger disturbances of the solar atmosphere. Table 1 shows some details of the five-interval database. The lowest daily mean irradiance of the SDO mission to that point, 0.126 mW m⁻², was detected for 13 May 2010 (the first day of the first time interval), and the lowest daily STD, 3.31 × 10⁻³ mW m⁻², for 15 May 2010. The largest flare-related mean irradiance value and STD were detected for 2011 DOY 047 and were 0.674 and 0.288 mW m⁻², respectively.

Data Reduction

The data reduction for this work was based on the ESP zeroth-order time series with the original (Level 0D) effective count rate (counts s⁻¹), corrected for energetic-particle events and temperature changes of dark counts (Didkovsky et al., 2012). The data were interpolated to eliminate short gaps (about two minutes total) that occur when the filters in the ESP filter-wheel and observing modes change during routine daily calibration. The first step was to calculate a power spectrum for each of the 49 days analyzed. Then, a power-law curve for each spectrum was determined, in a manner similar to that described by Didkovsky et al. (2011),
for the best fit of the spectrum in the range of frequencies from 2.0 mHz to 10.0 mHz, which includes our range of interest between 2.4 mHz and 4.0 mHz:

$$I1_i(f) = A_i \, f^{\,n_i}, \qquad (1)$$

where $I1_i$ is the power-law spectral density, $i$ is the day number, $A_i$ is a constant, $f$ is frequency, and $n_i$ is the power-law index. The third step was to calculate a running-mean [RM] curve, which represents the local power increase in the power spectrum, e.g. in the five-minute band. Due to the use of one-day power spectra with relatively low (11.6 µHz) frequency resolution and, thus, low confidence in the frequencies of individual peaks, the running-mean window of integration was chosen as 13 seconds, to reduce the influence of individual peaks in the power spectrum while preserving the whole power increase relative to the power-law curve:

$$I2_i(f) = \mathrm{RM}\!\left[\,P_i(f)\,\right], \qquad (2)$$

where $P_i(f)$ is the power spectrum for day $i$ and $I2_i$ represents the running-mean function; the running mean was computed with the standard IDL median procedure. The final step was to calculate the mean sum $S_i$ of the ratios between $I2_i$ and $I1_i$ over the frequency range of five-minute oscillations ($f_1 = 2.4$ mHz to $f_2 = 4.0$ mHz):

$$S_i = \frac{1}{m} \sum_{f=f_1}^{f_2} \frac{I2_i(f)}{I1_i(f)}, \qquad (3)$$

where $m$ is the number of frequency bins within the frequency range. If the power spectrum shows an increase in the five-minute band, then $S_i$ is greater than unity.
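A minimal numerical sketch of this three-step reduction (Python/NumPy) is given below. It assumes a uniformly sampled one-day count-rate series; the least-squares fit in log-log space and the median window width in bins are the editor's illustrative choices, not the paper's exact implementation.

# Sketch of the one-day reduction: power spectrum -> power-law fit I1 ->
# running median I2 -> S ratio over 2.4-4.0 mHz.
import numpy as np

def s_ratio(counts, dt=0.25, f1=2.4e-3, f2=4.0e-3,
            fit_lo=2.0e-3, fit_hi=10e-3, med_win=13):
    counts = counts - counts.mean()
    power = np.abs(np.fft.rfft(counts))**2          # one-day power spectrum
    freq = np.fft.rfftfreq(counts.size, d=dt)

    # I1: least-squares power-law fit, log P = log A + n log f, over 2-10 mHz
    fit = (freq >= fit_lo) & (freq <= fit_hi)
    n, logA = np.polyfit(np.log(freq[fit]), np.log(power[fit]), 1)
    I1 = np.exp(logA) * freq[fit]**n

    # I2: running median of the spectrum (window in bins; illustrative width)
    pf = power[fit]
    I2 = np.array([np.median(pf[max(0, i - med_win // 2): i + med_win // 2 + 1])
                   for i in range(pf.size)])

    band = (freq[fit] >= f1) & (freq[fit] <= f2)
    return np.mean(I2[band] / I1[band])             # S_i, Equation (3)

# Example: one simulated day of 0.25 s samples (white-noise placeholder).
rng = np.random.default_rng(0)
day = rng.normal(loc=1e4, scale=50, size=int(86400 / 0.25))
print(f"S = {s_ratio(day):.2f}")  # close to 1 for pure noise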
The gaps in the I2 column are either for days in which the power spectra do not show any increase in the five-minute region (the S-ratio is < 1) or for days where the S-ratio column is marked as N/A because of strong contamination from solar flares. As Table 2 shows, the largest number of such contaminated spectra corresponds to the fourth and fifth time intervals, which have significantly higher solar activity (see the STD in Table 2) than the first three time intervals.

How Solar Activity Affects the Oscillations in the Corona

To analyze how the oscillations in the five-minute band, represented by the S-ratios (Table 2) and by the amplitude of the filter curve I2 (Equation (2)), are related to the changes of the observing conditions, two parameters of these conditions, the daily mean irradiance and the standard deviation of the irradiance, were compared with the changes of the S-ratios and of the maximum amplitude of filter I2. Table 2 shows that S-ratios larger than unity are detected for the first, second, and fourth time intervals; Figures 4-6 show these time series. Note, the I2 column in Table 2 for the amplitude of the filtered curve, and the open circles in Figures 4-6, do not show any negative amplitude relative to the power-law curve (Equation (1)) for the days on which the S-ratio is less than unity: the filtered curve I2 (Equation (2)) for such days is below the spectral density I1 (Equation (1)) and represents nothing but noise. The S-ratios (Table 2) are plotted according to a linear scale shown on the right-hand edge of the plots. The gap in Figure 4, and the other gaps in Figures 5 and 6, correspond to ratios with S < 1.0 (Table 2, S-ratio column), for which there is no power increase and the spectra show only noise in the five-minute region. Table 3 summarizes the data from Table 2 for a more detailed comparison of the observing conditions.

The technique used for this analysis is based on a comparison of spectral amplitudes in the five-minute range of the power spectra. This technique is very sensitive to contamination of the spectra by flare-related increases of the irradiance. These increases affect both the daily mean solar irradiance and the standard deviation [STD] of this irradiance. If the local five-minute increase in the power spectrum is a result of such contamination (flare-related solar noise), one should expect a positive correlation between the S-ratio (or the maximum amplitude of filter I2) and the flare-related increases of the irradiance and its STD. Thus, the technical goal of this analysis was to extract and compare such information from the daily spectra for different levels of solar activity.

Maximum Amplitude of Filter I2

Figures 4-6 show the maximum amplitude of filter I2 as open circles (see also the last column of Table 2). Under conditions of minimum solar activity, the oscillation peaks in the power spectra are not masked by noise, and we assume that the maxima of the amplitude of filter I2 for the first and second time intervals show the amplitudes of the five-minute oscillations in the corona related to the photospheric p-modes. For such conditions the mean amplitude is similar for the first and second time intervals: 1.69 and 2.16 × 10^-4 counts s^-1, respectively. The fourth time interval shows significantly higher amplitudes (Figure 6 and Table 2), which indicates contamination of the spectra by much higher solar-flare activity.
For low solar-activity periods, the maximum amplitude of filter I2 is just another representation of the S-ratio (e.g. Figure 4).

Daily Mean Irradiance

Table 2 and Figures 4-6 show that the daily mean soft X-ray irradiance is a significant source of the S-ratio change. Table 3 shows that the largest mean S-ratio of the oscillations in the five-minute band was detected for the first time interval, which has the lowest daily mean irradiance. The cross-correlation between the changes of the irradiance and the S-ratios for the first time interval is low: 0.1. For the second time interval it becomes negative, -0.27, which indicates that an increase of irradiance (a mean of 0.181 compared to 0.137 mW m^-2) leads to a decrease of the S-ratios for the observed five-minute oscillations in the corona. For the fourth time interval the correlation is positive and high: 0.64. We interpret this as contamination of the power spectra by solar-flare events. The amplitudes of the filter-curve maxima in Figures 4-6 (open circles) and in the last column of Table 2 demonstrate such a flare-related increase. Since the spectral contamination is the result of a transfer of the solar-flare low-frequency power to the other frequency regions of the spectrum, including the five-minute region, we assume that it leads to the large positive correlation detected for the fourth time interval.

Standard Deviation

The soft X-ray signal that ESP detects in the zeroth-order channel is a very sensitive probe of solar variability. Assuming that the observed five-minute oscillations in the corona represent a response of the corona to the photospheric acoustic modes, and that the "transmission" of the solar atmosphere is a function of various disturbances and inhomogeneities between the photosphere and the corona, we can treat the standard deviation [STD] as an indicator of such solar "noise" in the five-minute oscillation signal. However, our results indicate that the STD is not a unique parameter for estimating this "transmission". For example, the cross-correlation between the S-ratio and the STD for the first and second time intervals, 0.1 and -0.44, respectively, is either low or negative. For the third time interval, for which the STD was the lowest (Table 3), the S-ratios were all < 1.0 (Table 2). This may indicate that, as suggested by a number of authors, e.g. Judge et al. (2001), O'Shea et al. (2002), McIntosh et al. (2003), Muglach (2003), and Vecchio et al. (2007), the connectivity between the photosphere and the corona depends on the configuration of the magnetic fields, which may also be a function of the STD. If the lowest STD during the third time interval is related to decreased magnetic-field strength and to a non-effective configuration of the network, this may explain the absence of power increases in the five-minute range of oscillations. The relatively high negative correlation (-0.44) for the second time interval, with an STD about two times larger than for the first time interval, allows us to conclude that the S-ratios for the second time interval, as well as for the first time interval, were not the result of spectral contamination in the power spectra. This evidence is consistent with another independent confirmation of the response of the corona to photospheric p-modes, e.g. based on the spectral analysis of the two six-day time series (Didkovsky et al., 2011).
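The cross-correlation coefficients quoted here are plain Pearson correlations between two short daily series. A minimal sketch, using made-up illustrative numbers rather than the values from Table 2:

```python
import numpy as np

# Illustrative daily series for one time interval -- not the paper's data.
s_ratio   = np.array([1.23, 1.37, 1.10, 1.05, 1.31, 1.18])
daily_std = np.array([3.3, 4.1, 3.6, 3.9, 4.5, 3.8])   # in 10^-3 mW m^-2

# Pearson cross-correlation coefficient between the two series.
r = np.corrcoef(s_ratio, daily_std)[0, 1]
print(round(r, 2))
```

A value of r near zero (or negative), as found for the first and second time intervals, argues against flare contamination driving the S-ratios.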
A Shift of Maximum Frequency as a Function of Activity

The oscillation spectra with the most significant power increases (S ≥ 1.1; see Table 2, S-ratio column) in the five-minute range were analyzed to investigate the correlation between the frequency of the maximum of this increase and solar activity (see Table 2, standard deviation). In addition to this correlation, the frequency ranges of the increases were also analyzed. Table 4 and Figure 7 show the results of this analysis. Figure 7 shows a positive correlation between the increases of the STD and the shift of the maximum frequency, R_1 = 0.62. A similar correlation (R_2 = 0.66) is found between the changes of the STD and the mean frequency of the increase. Certainly, the statistical significances of these correlations are small: using the t-distribution, t = 1.4, which is lower than the critical value of 1.638 for three degrees of freedom at a significance level of 0.1 (one tail). Two days with the largest STD show a significant shift of the right edge of the frequency range toward the cut-off frequency; see 2010 DOY 142 (5.3 mHz) and 2011 DOY 47 (5.3 mHz) in Table 4. This shift is consistent with the model proposed by Bryson et al. (2005).
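As a consistency check, the quoted t-value follows from the standard t-statistic for a correlation coefficient; the three degrees of freedom then correspond to N = 5 points, which is our inference rather than a number stated in the text:

$$ t = R_1 \sqrt{\frac{N-2}{1-R_1^{2}}} = 0.62 \sqrt{\frac{3}{1-0.62^{2}}} \approx 1.37 \approx 1.4, $$

which is indeed below the one-tailed critical value t = 1.638 for three degrees of freedom at the 0.1 significance level.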
Concluding Remarks

Global solar oscillations in the five-minute range were initially detected in the corona using soft X-ray irradiance measurements from SDO/EVE/ESP (Didkovsky et al., 2012). The variability of the five-minute S-ratios (Equation (3)) was analyzed here for five time intervals covering a wide range of levels of solar activity. The results of this analysis show that the best conditions for observing five-minute solar oscillations in the corona are when the solar activity is low. In the first time interval, 11 of the 12 days show power increases, with an estimated S-ratio ≥ 1.0. Our analysis shows that power increases in the spectra of coronal oscillations, interpreted as the response of the corona to photospheric p-modes due to their channeling (Vecchio et al., 2007) or leakage (Malins and Erdelyi, 2007), are not related to increased daily mean solar irradiance. This is clear from the S-ratio comparison between the first time interval and the other four time intervals (see Table 3): the larger daily mean irradiances observed for the second through fifth time intervals did not lead to larger S-ratios. The power increases in the oscillation spectra are not caused by increases in the mean STD (third column in Table 3) either. This conclusion is based on the analysis of the cross-correlations between the S-ratio and the mean STD; the correlation is low for all time intervals except the fourth. This result confirms that the detected increases of the S-ratios in the five-minute range are not created by spectral contamination of the power spectra by solar flares, and are not instrumental or data-reduction artifacts, but represent the leakage of photospheric p-modes to the corona. We interpret the significant positive correlation for the fourth time interval as an artifact and a reflection of much higher solar activity, with an STD about 27 times larger than for the first time interval. The maxima of the amplitude of filter I2 for the fourth time interval are significantly larger than the amplitudes for the first and second time intervals and may be a demonstration of contamination of the spectra in the five-minute region by the low-frequency flare signals. This is also clear from helioseismology results, which have shown solar-cycle-related shifts in the frequencies of the global modes but not in their amplitudes.

Another conclusion based on these results is that the large-scale solar oscillations detected in the corona are related to the leakage of the photospheric acoustic oscillations rather than to the excitation of these coronal oscillations by solar energetic events. The high sensitivity of the five-minute oscillations in the corona to the changes of solar irradiance may be used as a diagnostic tool to characterize the 'connectivity' in soft X-ray irradiance dynamics.
Platelet-to-Lymphocyte Ratio Multiplied by the Cytokeratin-19 Fragment Level as a Predictor of Pathological Response to Neoadjuvant Chemotherapy in Esophageal Squamous Cell Carcinoma

Background

The standard treatment for resectable advanced esophageal squamous cell carcinoma in Japan is neoadjuvant chemotherapy followed by surgery, and it is important to predict the effect of neoadjuvant chemotherapy before treatment. Therefore, this study aims to extract conventional blood examination data, such as tumor markers and/or inflammatory/nutritional index levels, that can predict the pathological response of patients with esophageal squamous cell carcinoma to neoadjuvant chemotherapy.

Methods

We retrospectively analyzed the medical records of 66 patients with thoracic esophageal squamous cell carcinoma who received neoadjuvant chemotherapy, followed by curative esophagectomy, at Tottori University Hospital between June 2009 and December 2019.

Results

We demonstrated that the product of the platelet-to-lymphocyte ratio (PLR) multiplied by the cytokeratin-19 fragment (CYFRA) level, termed "PLR-CYFRA," is the most accurate indicator for predicting the pathological response to neoadjuvant chemotherapy, with the highest area under the curve [0.795 (95% confidence interval: 0.665-0.925), P < 0.001] in receiver operating characteristic analyses. Therefore, we divided patients into the PLR-CYFRA-Low (< 237.6, n = 21) and PLR-CYFRA-High (≥ 237.6, n = 45) groups and found that the percentage of PLR-CYFRA-Low patients was significantly higher among patients with a better pathological response (P < 0.001). Furthermore, patients with a good pathological response had significantly better prognoses in terms of disease-specific survival (P = 0.014), recurrence-free survival (P = 0.014), and overall survival (P = 0.032). In the multivariate analysis, PLR-CYFRA was an independent predictor of the pathological response of patients with esophageal squamous cell carcinoma to neoadjuvant chemotherapy (P = 0.002).

Conclusion

Pretreatment PLR-CYFRA might be a useful and simple tool for predicting the pathological effect of neoadjuvant chemotherapy in esophageal squamous cell carcinoma.

Esophageal cancer is the sixth leading cause of cancer-related deaths worldwide, 1 and in Japan, the most common histological type of esophageal cancer is squamous cell carcinoma (more than 90%). 2 According to the results of the JCOG9907 trial, the standard treatment for locally advanced and resectable esophageal squamous cell carcinoma (ESCC) is esophagectomy with two- or three-field lymphadenectomy after neoadjuvant chemotherapy (NAC). 3 Despite this intensive combination therapy, we often find cases with a poor prognosis because of postoperative recurrence, and the 5-year survival rate after esophagectomy is only 59.3%. 2 One reason for this poor prognosis is an inadequate effect of NAC; it has been reported that pathological responders to NAC exhibit a better prognosis, and that their postoperative recurrence pattern is often confined to the regional field, predominantly as a solitary lesion without distant recurrence. 4 Therefore, although predicting the effect of NAC before treatment is important, as it determines the treatment strategy, no predictive method has been established. In recent years, several studies have reported that pretherapeutic values of tumor markers might be useful in predicting prognosis and NAC efficacy in ESCC. 5
Furthermore, various inflammatory/nutritional biomarkers, such as the neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), and prognostic nutritional index (PNI), have been reported to be associated significantly with prognosis and to be useful in predicting chemotherapeutic effects in ESCC. 6-10 However, it is unknown which indicator, alone or in combination, can predict the effects of NAC in ESCC with the highest accuracy. Therefore, this study aims to evaluate the predictive value of single indicators and of combinations of indicators, including tumor markers and inflammatory/nutritional biomarkers, in predicting NAC efficacy. This study also aims to establish the best predictor of NAC efficacy in ESCC.

Patients and NAC regimens

This study was based on a retrospective analysis of 66 patients with locally advanced thoracic ESCC who received NAC, followed by curative esophagectomy, at Tottori University Hospital between June 2009 and December 2019. The clinicopathological findings were determined according to the Japanese Classification of Esophageal Cancer (11th edition). 11,12 The criteria for NAC administration were clinical stage II, III, or IVa disease. As standard chemotherapeutic drugs, 5-fluorouracil (5-FU) and cisplatin (FP regimen) were used for all eligible patients, except those with impaired renal function, who were treated with 5-FU and nedaplatin (FN regimen). The FP regimen consisted of 80 mg/m2 cisplatin on day 1 and 800 mg/m2 5-FU infusions on days 1-5, whereas the FN regimen consisted of 90 mg/m2 nedaplatin on day 1 and 800 mg/m2 5-FU infusions on days 1-5. The length of one chemotherapy cycle of each regimen ranged from 21 to 28 days. Surgery was performed 6-8 weeks after the last NAC cycle. The standard surgical approach was thoracoscopic subtotal esophagectomy and reconstruction with a gastric tube, and lymphadenectomies, including two- or three-field procedures, were performed.

Criteria of pathological response to NAC

The pathological response was evaluated by pathologists using the primary tumor of the surgical specimens, according to the Japanese Classification of Esophageal Cancer (11th edition), 11,12 as follows: grade 0, no recognizable cytological or histological therapeutic effect is observed; grade 1a, viable cancer cells account for two-thirds or more of the tumor tissue; grade 1b, viable cancer cells account for between one-third and two-thirds of the tumor tissue; grade 2, viable cancer cells account for less than one-third of the tumor tissue; and grade 3, no viable cancer cells are apparent (pathological complete response; pCR).

Serum biomarkers

The results of peripheral blood tests, including serum albumin (g/dL), C-reactive protein (CRP) (mg/dL), squamous cell carcinoma antigen (SCC Ag) (ng/mL), and cytokeratin-19 fragment (CYFRA) (ng/mL) levels, and total platelet, lymphocyte, and neutrophil counts (/μL), were obtained from the patients' medical records. Blood test data were obtained within 1 month before NAC. The NLR and PLR were obtained by dividing the peripheral neutrophil count and platelet count, respectively, by the peripheral lymphocyte count. The PNI was calculated as 10 × peripheral serum albumin + 0.005 × peripheral lymphocyte count, as reported by Onodera et al. 13 The modified Glasgow prognostic score (mGPS) was scored as 0, 1, or 2 based on CRP (> 1.0 mg/dL) and hypoalbuminemia (< 3.5 g/dL), as described previously. 14 The PLR-CYFRA was first defined as the PLR value multiplied by the serum CYFRA level.
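These index definitions reduce to simple arithmetic on routine blood values. The sketch below is for illustration only; the function names and the sample numbers are ours, not the study's:

```python
def plr(platelets_per_ul, lymphocytes_per_ul):
    """Platelet-to-lymphocyte ratio."""
    return platelets_per_ul / lymphocytes_per_ul

def pni(albumin_g_dl, lymphocytes_per_ul):
    """Onodera prognostic nutritional index."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes_per_ul

def plr_cyfra(platelets_per_ul, lymphocytes_per_ul, cyfra_ng_ml):
    """PLR multiplied by the serum CYFRA level."""
    return plr(platelets_per_ul, lymphocytes_per_ul) * cyfra_ng_ml

# Illustrative patient: platelets 250,000/uL, lymphocytes 1,500/uL,
# albumin 4.0 g/dL, CYFRA 2.0 ng/mL.
score = plr_cyfra(250_000, 1_500, 2.0)              # ~333.3
print(score, "High" if score >= 237.6 else "Low")   # falls in the High group
print(pni(4.0, 1_500))                              # PNI = 47.5
```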
Our institutional review board approved this study (20A234). The need for informed consent was waived.

Statistical analyses

The Youden index was calculated using receiver operating characteristic (ROC) curve analysis and was defined as the maximum value of "sensitivity + specificity − 1". 15,16 The Youden index value was used as the optimal cut-off for the PLR-CYFRA with respect to the pathological response, and this cut-off was used to divide patients into the PLR-CYFRA-High and PLR-CYFRA-Low groups. Survival curves were calculated according to the Kaplan-Meier method, and differences between the curves were identified using the log-rank test. Univariate and multivariate analyses using Cox proportional hazards models were performed to evaluate prognostic factors for disease-specific survival (DSS). Moreover, to evaluate the effects of clinical variables on the pathological response, a univariate analysis was performed using χ2 tests, followed by a multivariate logistic analysis. P values < 0.05 were considered significant. GraphPad Prism (GraphPad Software, Inc., La Jolla, CA) and IBM SPSS Statistics 25 (IBM SPSS, Chicago, IL) software were used for the statistical analyses.

PLR-CYFRA was valuable in predicting the pathological response to NAC

ROC curves were constructed to evaluate the pathological response, and the area under the curve (AUC) values were compared to assess the discriminatory ability of SCC Ag, CYFRA, PNI, mGPS, NLR, and PLR (Table 2). In this analysis, the AUC values of CYFRA, NLR, and PLR were particularly higher than those of the other indicators. Therefore, we defined NLR-CYFRA and PLR-CYFRA as the products of NLR and PLR, respectively, multiplied by CYFRA. ROC analysis showed that PLR-CYFRA was the most accurate in predicting the pathological response, with an AUC of 0.795 [95% confidence interval (CI): 0.665-0.925, P < 0.001] (Table 2); the optimal cut-off PLR-CYFRA value was 237.6. Based on this cut-off, the sensitivity, specificity, positive predictive value, and negative predictive value of PLR-CYFRA for pathological response grade ≥ 2 were 0.81, 0.84, 0.62, and 0.93, respectively. We then divided the patients into the PLR-CYFRA-Low (PLR-CYFRA < 237.6, n = 21) and PLR-CYFRA-High (PLR-CYFRA ≥ 237.6, n = 45) groups. Figure 1 shows the percentage of PLR-CYFRA-Low or PLR-CYFRA-High patients according to the pathological response grade; the percentage of PLR-CYFRA-Low patients was significantly higher when the pathological response grade was higher (P < 0.001). Table 3 shows the correlations between the PLR-CYFRA and the clinicopathological variables in all patients included in this study. The value of PLR-CYFRA was significantly higher in younger patients (< 70 years) than in older patients (≥ 70 years; P < 0.001), in those with a low body mass index (< 18.5) than in those with a high body mass index (≥ 18.5; P = 0.014), and in those treated with the FN regimen than in those treated with the FP regimen (P = 0.028).

PLR-CYFRA was an independent predictor of pathological response to NAC

Finally, we evaluated the effects of clinical variables on the pathological response to NAC. The univariate analysis indicated that NLR (P = 0.009) and PLR-CYFRA (P < 0.001) were associated significantly with the pathological response (Table 5). In the multivariate analysis, PLR-CYFRA (P = 0.002) was an independent predictor of pathological response in patients with ESCC who received NAC (Table 5).
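The cut-off of 237.6 is the Youden-optimal point described under the statistical analyses. A minimal sketch of that selection, with made-up scores and responses (grade ≥ 2 coded as True), and using the study's convention that a low PLR-CYFRA predicts a good response:

```python
import numpy as np

def youden_cutoff(scores, responder):
    """Cut-off maximizing the Youden index J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, float)
    responder = np.asarray(responder, bool)
    best = (-np.inf, None)
    for cut in np.unique(scores):
        pred = scores < cut                    # low score -> predicted responder
        sens = pred[responder].mean()          # sensitivity among true responders
        spec = (~pred)[~responder].mean()      # specificity among non-responders
        best = max(best, (sens + spec - 1, cut))
    return best[1], best[0]

cut, j = youden_cutoff([120, 180, 240, 300, 90, 260],
                       [True, True, False, False, True, False])
print(cut, round(j, 2))   # 240.0 1.0 for this toy data
```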
DISCUSSION

The purpose of the present study was to extract valuable predictors of NAC efficacy in ESCC from various factors, including inflammatory/nutritional biomarkers and tumor markers. We demonstrated that pretreatment PLR-CYFRA was an independent predictor of pathological response and an independent prognostic factor for patients with ESCC treated with NAC. Furthermore, the pathological response to NAC was also correlated with patient prognosis.

In this study, we demonstrated that the inflammatory biomarker PLR and the tumor marker CYFRA were useful predictors of NAC effects in ESCC. It is well known that the systemic inflammatory response plays an important role in tumorigenesis and predicts the survival of patients with cancer, and inflammation can be assessed easily by counting neutrophils, lymphocytes, monocytes, and platelets in peripheral blood. 17 Specifically, it has been reported that PLR can predict the efficacy of chemotherapy in non-small cell lung cancer, colorectal cancer, and breast cancer. 18-20 The mechanism linking PLR to tumorigenesis may stem from the role of platelets in promoting angiogenesis, adhesion, and invasion by increasing the production of vascular endothelial growth factor and transforming growth factor-β in the tumor environment. 21 Additionally, cytokines and chemokines released from platelets promote the infiltration of other immune cells, including neutrophils and lymphocytes, into the tumor stroma, which induces the progression of inflammation. 22 In contrast, tumor markers are substances produced by tumor cells, or by non-tumor cells in response to tumor cells, that reflect the presence of tumors, tumor cell types, and tumor quantity. 23 Therefore, tumor markers directly reflect the disease activity of the tumor itself, and CYFRA is known to be a useful tumor marker in ESCC. 24,25 Furthermore, CYFRA has been reported to predict the response to chemotherapy in patients with non-small cell lung cancer. 26,27 On the other hand, it has been reported that other tumor markers, such as SCC Ag and serum p53 antibody, as well as PET-CT after NAC, are useful in predicting the effect of NAC in patients with ESCC. 5,28,29 Consistent with PLR-CYFRA in our results, these markers were shown to be independent predictors of pathological response to NAC in surgical specimens. The reason different markers emerged as independent predictors in those studies and in ours may be differences in patient backgrounds and in the NAC regimens used. Here, we showed that the value of PLR multiplied by CYFRA is a highly accurate predictor of chemotherapy efficacy and that pretreatment PLR-CYFRA might be an important biomarker for patients with ESCC who receive NAC.

To our knowledge, this is the first report that demonstrates the utility of PLR in predicting the effects of NAC in ESCC. Several meta-analyses have reported the impact of PLR on the prognosis of patients with ESCC, 30-32 and therefore PLR is an important indicator during ESCC treatment. However, these reports included patients with various treatment strategies, and consequently the clinical impact of PLR on the response to NAC (the standard treatment for patients with locally advanced resectable ESCC in Japan) was unclear. Yang et al.
reported that PLR was more useful than NLR and PNI in predicting prognosis and treatment responses in patients with nonmetastatic ESCC who received postoperative chemotherapy 10 ; however, the inflammatory status of patients who receive postoperative chemotherapy should differ from that of patients who receive NAC because of the effects of surgery. Therefore, this study, which revealed the ability of PLR to predict the effects of NAC, presents a novel finding that is useful for ESCC treatment.

We also showed that PLR-CYFRA was a useful prognostic factor for ESCC; this was significant for DSS and recurrence-free survival (RFS), but not for overall survival (OS) (Figs. 2d, e and f). This suggests that PLR-CYFRA may be more closely related to death from ESCC itself, although this is not definitive, because only 7 patients in this study died of other diseases.

We acknowledge that this study has several limitations. First, this was a retrospective study with a small sample size, and therefore a prospective study with a larger cohort is needed to validate the utility of PLR-CYFRA. Second, we did not evaluate the effect of NAC on metastatic lymph nodes, because the outcome of this study was the pathological response of the primary tumor, according to the Japanese Classification of Esophageal Cancer (11th edition). 11,12 However, approximately 20% of the patients in this study had no clinical lymph node metastasis, and the pathological response was correlated significantly with prognosis, as shown in Fig. 2; thus, we regard the results of this study as reliable.

In conclusion, pretreatment PLR-CYFRA was an independent predictor of the pathological response of patients with ESCC to NAC. According to these findings, we should consider more intensive NAC regimens for patients with ESCC with a high pretreatment PLR-CYFRA, because patients with poor NAC responses also exhibit a poorer prognosis. We believe that further prospective study of pretreatment PLR-CYFRA will lead to a novel and valuable biomarker for ESCC treatment.
Antiviral medication to prevent fetal transmission of maternal CMV during pregnancy

Background. Cytomegalovirus infection represents the most frequent congenital viral infection, with serious consequences for newborns. Neurosensorial hearing loss is the principal outcome, but the infection can also cause other anomalies of the central nervous system. Although CMV infection can have a major impact on fetal development, there are no clear directions to follow yet to prevent or treat this condition. Therefore, our purpose with this paper is to update the knowledge regarding treatment options to prevent fetal transmission of maternal CMV infection, based on the latest data from the specialized literature in this field.

Methods. Electronic research and analysis of the relevant articles published mainly in the last 5 years were performed, consulting the web platforms PubMed, ScienceDirect, Mendeley and ClinicalTrials.gov.

Results and conclusions. To date, there is not enough evidence to reach a consensus on therapeutic methods to prevent or to treat fetal CMV infections and, as a consequence, antenatal screening is not justified. Many pharmaceutical companies are working on vaccines to prevent CMV infection, but results are available only from second-phase studies. Information on the efficiency of hyperimmunoglobulin is mixed, and it is necessary to clarify the dosage. Among antiviral agents, valaciclovir, which was studied in recent clinical trials, seems to have the best efficiency in preventing fetal transmission of maternal CMV infection, and the best safety profile. Valganciclovir has possible embryotoxic effects but higher potency, and information on it is available only from case reports. The interest of the scientific community in this topic is high, and thus many studies are underway to bring new clarifications.

INTRODUCTION

Cytomegalovirus (CMV) or Herpesvirus 5 infection has a global prevalence between 50% and 85% [1] and is the most frequent congenital viral infection. It is estimated that 1 in 150 children is born with congenital CMV infection. This is the main cause of non-genetic neurosensorial hearing loss, and it can also cause anomalies of cognitive development or cerebral palsy [2,3]. Maternal CMV infection is usually asymptomatic (in about 80% of cases) and, because of that, it is diagnosed only when fetal signs are noticed [4]. Although this subject has an important impact, to date there is no efficient approved treatment to prevent this infection in pregnant women [5]. This is the reason why there is no screening program. Our paper intends to reveal the latest conclusions on the efficiency of antiviral medication in preventing fetal transmission of maternal CMV infection.

METHODS

We consulted the specialized literature on this topic, performing electronic research on the web platforms PubMed, ScienceDirect, Mendeley and ClinicalTrials.gov. The key words that we used were cytomegalovirus/CMV, pregnancy, materno-fetal transmission, prevention. The results of our research were filtered, and we chose only the relevant articles published mainly in the last 5 years.
Following analysis of the selected articles, we identified the most valuable results and scientific opinions, compared and discussed them, and the information that we obtained is presented below.

PRIMARY PREVENTION

Even though CMV infection is the most frequent neonatal infection and has severe consequences, women of childbearing age are not informed enough about the ways the disease is transmitted and, astonishingly, their knowledge about CMV infection has decreased in recent years (from 14% in 2005 to 9% in 2016). The main responsibility lies with their attending physicians, who do not offer them adequate counseling [6]. Knowledge and application of hygiene rules while women take care of young children can decrease the seroconversion rate from 7.6% (in uncounseled groups) to 1.2% (in counseled groups) [4]. The measures that can limit the transmission of CMV infection are: frequent handwashing with water and soap after changing diapers, feeding young children, or contact with pacifiers, toys, clothes, or surfaces that can be contaminated with saliva, urine, or nasal secretions; frequent cleaning of possibly contaminated objects; and avoiding sharing cutlery, toothbrushes, or other intimate objects [2].

Another method of primary prevention is vaccination, but to date this is not available. Research is ongoing, but phase III results are not available yet. Six vaccines are involved in advanced clinical trials (Sanofi Pasteur, MSD, City of Hope Triplex Vaccine, City of Hope Cytomegalovirus PepVax, Hookipa Pharma, Astellas), another 3 products are involved in incipient clinical studies, and 2 other preparations are in preclinical research. Early phase II conclusions show that vaccination can prevent CMV infection in seronegative patients who have been exposed to the virus and in seronegative patients who have undergone organ or bone marrow transplantation [7].

SECONDARY PREVENTION

At this moment, the options to prevent maternal-fetal transmission are limited and still under study. The two alternatives are hyperimmunoglobulin (HIG) and antiviral agents.

HIG is an immunoglobulin G-derived preparation, obtained from human plasma with a high concentration of anti-CMV antibodies. This might decrease fetal CMV transmission through neutralization of the virus by the high-avidity antibodies [2]. Initial research found a half-life of 22 days and, consequently, HIG was administered once every 4 weeks, 100 U/kg, intravenously. Nigro et al. reported a decrease in the transmission rate from 40% in the control group to 16% in the study group [8]. For the same posology, Revello et al. found a more modest effect, a transmission rate of 30% versus 44% in the placebo group, and they noticed an increase of obstetrical complications such as preterm birth, preeclampsia, or intrauterine growth restriction [1,9,10]. Recently, it has been discovered that the half-life of HIG is about 11 days and, consequently, it is necessary to administer one dose every 2 weeks. Kagan et al. increased the dose to 200 U/kg, once every 2 weeks, starting before 14 weeks of gestation and continuing until 20 weeks, and they reported a transmission rate of 7.5% compared with 35.2% in the control group, without noticing a higher proportion of obstetrical complications [1,11].

The alternative treatment option, cheaper and more studied, is antiviral drugs. Antiviral agents, administered as secondary prevention, aim to treat the maternal infection by lowering the viremia and thus decrease the risk of viral transmission to the fetus through the placenta.
To date, only one drug, valaciclovir, has been studied in clinical trials with pregnant women. The other medicines available to treat CMV infection are ganciclovir, valganciclovir, foscarnet, cidofovir, maribavir, letermovir, and fomivirsen, but their safety and/or efficiency for use in pregnancy has not been proven, and studies are ongoing [2,12].

Ganciclovir and its pro-drug, valganciclovir, are the most efficient antiviral agents available to treat CMV infection, and they act through inhibition of the viral DNA polymerase. Because of the low intestinal absorption of ganciclovir (8% bioavailability), its L-valine ester, valganciclovir, is preferred for oral administration [5]. These two substances seem to have teratogenic and cytotoxic effects, and they are classified by the Therapeutic Goods Administration of Australia in class D [1,13].

Valaciclovir is the L-valine ester of acyclovir; it is metabolized by the liver, but with a higher oral bioavailability (50% versus 10-20%), and its mechanism of action is also inhibition of the viral DNA polymerase. It is renally excreted, through glomerular filtration and tubular secretion. This substance is approved for use in adults and adolescents over 12 years, although its efficiency is below that of ganciclovir. In vitro and animal studies show a good safety profile, without genotoxic or carcinogenic effects. For use in pregnancy, valaciclovir is classified in safety class B [1,3,14,15]. Fourteen years ago, The Acyclovir in Pregnancy Registry was established, and 1234 pregnant women from 24 countries who received acyclovir in any of the three trimesters of pregnancy were followed. The researchers noticed that the rate and types of major defects in the fetuses exposed in utero to acyclovir were not different from those in the general population [16]. An inconvenience can be the large number of tablets that must be swallowed daily: the dose is 8 g/day, which means 16 tablets. However, treatment adherence among pregnant women was high [3].

The experience and the evidence on the use of valaciclovir to prevent fetal transmission of maternal CMV infection are supported by a few case series and some recent clinical trials. In 2020, an Italian group reported the first series of cases treated with valaciclovir for secondary prevention of congenital CMV infection. They identified 12 pregnant women with primary CMV infection in the first trimester of pregnancy, who received 8 g valaciclovir/day starting right after the moment of diagnosis and continuing until amniocentesis. For two cases with positive PCR tests in the amniotic fluid, treatment was continued until delivery. After birth, 3 more infected fetuses were identified, but asymptomatic, after a negative result of amniocentesis. Only one newborn of the two confirmed by amniocentesis developed unilateral moderate hearing loss, at 18 months of age. The data obtained in this way were compared with an older cohort from the same clinic, with the following results: the transmission rate at the time of amniocentesis was 17%, about half of the 37% in the control group, and 42% after birth, though also considering a false-negative amniocentesis rate of 30%. Of the 3 patients with viremia re-detectable after stopping treatment, 2 gave birth to infected fetuses, thus demonstrating delayed transmission of the virus to the fetus. These data are not statistically significant due to the small number of cases. The authors do not report any adverse effects attributable to valaciclovir [17].
In September 2020, the results of the first randomized, double-blind, placebo-controlled trial, conducted in Israel between 2015 and 2018 (NCT02351102) on 90 patients with CMV infection detected periconceptionally or in the first trimester of pregnancy, were reported [18]. Treatment with 4 g valaciclovir twice daily was initiated in 45 cases from the first post-diagnosis visit and lasted until the time of amniocentesis, accumulating at least 7 weeks of treatment and reaching 21 weeks of gestational age. In the study group, a total of 5 positive results for CMV at amniocentesis were detected, out of 45 performed (11%), compared with 14 out of 45 (30%) in the control group, p = 0.027. No significant differences were noticed for periconceptional infections, with positive results in 3 out of 26 amniocenteses (12%) versus 3 out of 24 (13%), p = 0.91, but for infections acquired in the first trimester there was a reduction in CMV transmission from 48% (11 out of 23 positive results in the control group) to 11% (2 out of 19 in the valaciclovir group), p = 0.020. It was concluded that patients with periconceptional infection started treatment later relative to the time of contracting CMV (on average at 60.58 days), and thus the positive results at amniocentesis were more numerous, while pregnant women diagnosed in the first trimester of pregnancy began treatment closer to the time of infection (on average 43.84 days), and thus the efficiency was higher and the percentage of positive results much lower. Postpartum, 7% of fetal infections in the study group were symptomatic, compared with 16% in the placebo group. Out of a total of 6 positive CMV cases despite negative amniocentesis, 4 cases were reported in the valaciclovir group and 2 in the control group. The observed side effects (thrombocytopenia, headache, nausea, abdominal pain) were not clinically significant. Shahar-Nissan et al. thus demonstrated the effectiveness of valaciclovir in preventing fetal transmission of maternal CMV infection and drew attention to the timing of treatment: the conclusion is that effectiveness is higher the earlier treatment is initiated after diagnosis, i.e. the closer it is to the time of infection [18].

A few months ago, Faure-Bardon et al. confirmed the benefit of valaciclovir for the secondary prevention of congenital CMV infection through a case-control study conducted between 2009 and 2020 [15]. During this period, 310 primary maternal infections were detected, and 65 of these patients received valaciclovir 8 g/day. The results were compared with 65 control cases. The duration of treatment was, on average, 35 days, and the average gestational age at the onset of treatment was 12.71 weeks. The transmission rate was 12% (8/65) in the study group, compared with 29% (19/65) in the control group, demonstrating a significant decrease in the vertical transmission of maternal-fetal CMV infection (OR = 0.318 [0.12-0.841], p = 0.021). This group of authors also notes a greater effect of treatment in reducing transmission for infections contracted in the first trimester of pregnancy versus periconceptionally, but they attribute it to the lower background risk of transmission of periconceptional infections. Regarding the safety of administration, there was a case of acute oliguric renal failure after 4 weeks of valaciclovir administration, but it resolved 10 days after stopping treatment. The possible mechanism was considered to be the accumulation and precipitation of crystals in the proximal renal tubular cells.
One opinion is that the dose of 8 g/day should be divided into 4 doses of 2 g each, the half-life of valaciclovir being 3 hours, thus decreasing the risk of accumulation and local renal toxicity. Therefore, careful monitoring of creatinine is recommended during treatment with valaciclovir [15].

Although still at an incipient stage, studies testing CMV-specific antivirals on placental cell cultures are ongoing. An analysis of the effects of letermovir, maribavir, cidofovir, acyclovir, and ganciclovir on first-trimester TEV-1 trophoblastic cell cultures and third-trimester ex vivo placental explant histocultures was performed. The drugs were found to have no cytotoxic effects and did not affect cell proliferation. Antiviral treatment of CMV-infected placental explants resulted in a statistically significant inhibition (p < 0.05) of viral replication, of 83.3% for letermovir, 83.6% for maribavir, 89.3% for cidofovir, and 82.4% for ganciclovir, but not for acyclovir [12].

POSTNATAL TREATMENT OF CMV INFECTION

Hyperimmunoglobulin and antivirals (ganciclovir/valaciclovir) remain the only tools available for confirmed fetal CMV infection. HIG was most often evaluated in studies that investigated mainly the effect of decreasing the maternal-fetal transmission rate, but which also analyzed the effect in cases of confirmed fetal infection. There is research identifying a higher proportion of infected but asymptomatic fetuses among pregnant women who received treatment with HIG, compared with those whose mothers were not treated with HIG, but there are also results that show no significant difference [2].

Regarding antiviral therapy initiated antepartum, the information is mixed. An observational pilot study did not show significant differences in symptoms between newborns whose mothers received valaciclovir during pregnancy (47.6%) and untreated cases (41.7%) [2], while Leruez-Ville's group provided results from a phase II multicenter study showing a positive association between the antenatal administration of 8 g/day valaciclovir and the birth of an asymptomatic newborn (82%, compared with 43% in a control cohort without treatment) [19].

Ganciclovir and valganciclovir are the preferred drugs for neonatal treatment of CMV infection. International recommendations and guidelines are in favor of administering a dose of 16 mg valganciclovir/kg twice daily, orally, for between 6 weeks and 6 months, to symptomatic newborns, starting in the first month of life. If the newborn's condition does not allow it, intravenous ganciclovir, 6 mg/kg twice daily, can be administered for the first 2 weeks of treatment, after which a switch to oral medication is recommended [6,20]. The aim is to prevent the onset of deafness or to prevent it from worsening in cases where it is already present. Although this is the recommended therapy, cases of resistance to treatment (approximately 4%) have also been reported [6,21]. Studies to determine the effectiveness of valganciclovir in infected but asymptomatic newborns, or in those with isolated deafness, are ongoing (NCT03301415, NCT03107871, NCT01649869) [22,23,24], and a study evaluating the pharmacokinetics of letermovir in the treatment program is scheduled for 2022 [1,6].

CONCLUSIONS

Currently, advice and education of patients on applicable hygiene measures remain the most accessible and safe methods of preventing CMV infection during pregnancy.
At present, there are insufficient reliable data to recommend treatment with hyperimmunoglobulin or with a specific antiviral agent for the prevention of maternal-fetal transmission or for the treatment of confirmed fetal infection. However, the results reported so far are favorable and promising. Numerous studies are underway, and we soon expect changes in screening recommendations, in the possibility of vaccination, and in treatment, to limit as much as possible the serious effects of this congenital infection.
Morphologic characterization and cytokine response of chicken bone-marrow derived dendritic cells to infection with high and low pathogenic avian influenza virus

Dendritic cells (DCs) are professional antigen-presenting cells, which are key components of the immune system and are involved in early immune responses. DCs are specialized in capturing, processing, and presenting antigens to facilitate immune interactions. Chickens infected with avian influenza virus (AIV) demonstrate a wide range of clinical symptoms, based on the pathogenicity of the virus. Low pathogenic avian influenza (LPAI) viruses typically induce mild clinical signs, whereas high pathogenic avian influenza (HPAI) viruses induce more severe disease, which can lead to death. For this study, chicken bone marrow-derived DCs (ckBM-DCs) were produced and infected with high and low pathogenic avian influenza viruses of the H5N2 or H7N3 subtypes to characterize innate immune responses, study effects on cell morphology, and evaluate virus replication. A strong proinflammatory response was observed at 8 hours post infection, via upregulation of chicken interleukin-1β and stimulation of the interferon response pathway. Microscopically, the DCs underwent morphological changes from classic elongated dendrites to a more generally rounded shape that eventually led to cell death, with the presence of scattered cellular debris. Differences in the onset of morphologic changes were observed between the H5 and H7 subtypes. Increases in viral titers demonstrated that both HPAI and LPAI viruses are capable of infecting and replicating in DCs. The increased activation of infected DCs may be indicative of the dysregulated immune response typically seen with HPAI infections.

Introduction

In recent years, avian influenza virus (AIV) has been one of the leading causes of infection-based poultry mortality and morbidity. Prior to the 1990s, AIV outbreaks in domesticated poultry were rare; however, ongoing outbreaks of highly pathogenic avian influenza (HPAI) have occurred globally for the past several years (1-3). The H5 A/goose/Guangdong/1996 (H5-Gs/Gd) lineage is responsible for most of the outbreaks, as the current clade 2.3.4.4b viruses appear to be highly adapted to migratory waterfowl (3). As a result of this adaptation, more spillover into domesticated poultry, mammals, and humans has been observed (4). High morbidity and mortality rates have led to reduced poultry production, embargoes on countries of origin, and increased expenses associated with vaccinating and controlling AIV within the global poultry industry (5,6). In 2022, a total of 67 countries reported HPAI outbreaks, resulting in the deaths of 131 million poultry and wild birds (7,8). In the U.S., the ongoing 2022-2024 HPAI H5N1 outbreak has resulted in the loss of over 60 million birds and $3 billion in economic damages (1,9).
Low pathogenic avian influenza (LPAI) viruses typically cause a mild disease in poultry that is restricted to the respiratory and intestinal tracts, because they contain a mono-basic cleavage site in the hemagglutinin (HA) protein that can only be cleaved by a few, localized cellular proteases (6,10,11). HPAI viruses contain a multi-basic cleavage site that allows several common proteases to cleave the HA, which leads to a severe, systemic infection (6,11). The rapid, multi-organ infection, coupled with HPAI-specific dysregulated cytokine responses, typically leads to death 1-6 days post infection in domesticated poultry (12).

Early responses against viral infections are predominantly mediated by host innate immunity, followed by migration of antigen-presenting cells (APCs) and lymphocytes into the lymphoid tissues to initiate adaptive immune responses. Increased expression of pathogen recognition receptors (PRRs), interferons, pro-inflammatory cytokines, and chemokines is generally observed during the early stages of an AIV infection (13). PRRs, such as Toll-like receptors (TLRs) and MDA-5, sense viral RNAs and initiate inflammatory responses by releasing proinflammatory cytokines (14). A rapid induction of the type I interferons, interferon-alpha (IFN-α) and interferon-beta (IFN-β), leads to the upregulation of interferon-stimulated genes (ISGs), which are essential for an antiviral response. In particular, the myxovirus resistance gene (Mx) is important because it promotes anti-AIV activity in various mammalian and avian species (14-17). The role of Mx in chickens is contested, as there are conflicting reports of its effectiveness against HPAIV; however, there is a known interaction between the viral nucleoprotein (NP) and Mx proteins (14,17-20).

Proinflammatory cytokines, including interleukin 6 (IL-6), interleukin 12 (IL-12), and interleukin 1 beta (IL-1β), upregulate inflammatory cytokine responses to limit infection, while anti-inflammatory cytokines such as IL-10 can inhibit the expression of proinflammatory cytokines to downregulate the inflammation process (21).

Several AIV proteins have been implicated in activating the necrotic and apoptotic cell-death pathways (22). Necrosis is a passive, uncontrolled cell death, which typically causes an inflammatory reaction and affects surrounding cells, whereas apoptosis is an active, controlled cell death that does not affect surrounding cells (12,22,23). While both can occur during an infection, AIV proteins have been shown to block Caspase-3 (Casp-3) and Caspase-8 (Casp-8) activation, causing a shift from the apoptotic pathway to the necrotic pathway (24-30). The expression of these innate immune modulators varies drastically by virus strain, host, and target tissue, making our understanding of the immune response to AIV incomplete (12,13).
APCs are crucial components of the primary immune response against pathogens and help bridge the innate and adaptive immune responses. Dendritic cells (DCs) are professional APCs that play a central role as regulators of the adaptive immune response by interacting with T and B cells (13). Avian DC progenitors originate from hematopoietic stem cells in the bone marrow and translocate to non-lymphoid tissue, where they become immature DCs (13,31,32). While immature DCs are capable of phagocytizing antigens, they are poor T-cell stimulators and lack proper antigen-presentation capabilities. Upon activation, chicken DCs migrate to T-cell regions, where they mature and upregulate several costimulatory molecules, including MHC-II, CD11c, CD40, and CD80. Mature DCs are specialized in antigen presentation to T cells (33). Recently, more emphasis has been put on understanding the immune modulation of chicken DCs and their ability to combat disease.

Previous studies reported that DCs could be grown in vitro by incubating chicken bone marrow (BM)-derived cells with chicken granulocyte-macrophage colony-stimulating factor (GM-CSF) and chicken interleukin 4 (IL-4) (34,35). In this study, we cultured chicken bone marrow-derived dendritic cells (ckBM-DCs) and examined gene expression levels of IFN-α, TLR-3, TLR-7, MHC-I, IL-1β, IL-6, Mx, Casp-3, and Casp-8 before and after infection with AIV. There are limited studies examining the interactions between chicken DCs and AIV (32,36-38), and the exact nature of how AIV infections affect DCs is largely unknown. We seek to determine whether active AIV replication can occur in DCs and whether antigen processing occurs. In this study, we compared immune responses, morphological changes, and replication of ckBM-DCs following infection with contemporary H5 and H7 HPAI and LPAI viruses. A better understanding of how chicken antigen presentation occurs is needed as the Gs/Gd lineage becomes entrenched in migratory waterfowl globally.

Chickens and chicken bone marrow dendritic cell isolation and culture

Four-week-old specific pathogen-free (SPF) white leghorn chickens were housed at the USDA-ARS U.S. National Poultry Research Center. The studies involving animals were reviewed and approved by the USDA-ARS U.S. National Poultry Research Center Institutional Animal Care and Use Committee (IACUC). All birds used in these studies were cared for and handled in compliance with IACUC guidelines and procedures. ckBM-DCs were generated as previously described, with minor modifications to the protocol (35). Briefly, following euthanasia with injected sodium pentobarbital according to AVMA guidelines, the femurs of the chickens were removed and placed into 10 cm petri dishes containing 1X PBS with 1% antibiotics (Sigma-Aldrich, St. Louis, MO). Both ends of the femur were cut across the tops with sterile bone scissors, a sterile iron wire was passed through, and the bone marrow was flushed with sterile 1X PBS using a 20 ml syringe with a 16G needle. Marrow clusters were gently meshed through a 70 µm screen using a syringe plunger to obtain single-cell suspensions. Cell suspensions were overlaid with an equal volume of Histopaque 1119 (Sigma-Aldrich, St. Louis, MO) and centrifuged at 1200 × g for 30 min at room temperature to remove red blood cells. Cells were collected and washed three times in RPMI-1640 media (Thermo Fisher Scientific, Waltham, MA). After collection, cells were resuspended in 1X PBS, mixed 1:1 with trypan blue solution (Thermo Fisher Scientific, Waltham, MA), and checked under a microscope using a hemacytometer for viable cells.
Cells were cultured in six-well plates at a concentration of 2×10^6 cells/ml at 41°C and 5% CO2 in RPMI-1640 supplemented with 10% chicken serum (Thermo Fisher, Waltham, MA), 1% L-glutamine, 1% non-essential amino acids, and antibiotics (Gibco, Thermo Fisher, Waltham, MA) for 7 days. Different concentrations (0, 10, 25, and 50 ng/ml) of yeast-produced recombinant chicken IL-4 and chicken GM-CSF (Kingfisher, St. Paul, MN) were added to the medium to optimize culture conditions. Fresh complete medium was mixed with conditioned media at a 3:1 ratio and added to the cells every 2 days. To induce maturation of bone-marrow cells into DCs, cells were stimulated with Escherichia coli LPS (500 ng/ml) (Thermo Fisher Scientific, Waltham, MA) for 30 hours. Images of the cells were taken at 30 hours using an EVOS 5000 (Invitrogen, Carlsbad, CA).

Morphology and phenotypic analysis

Cells were cultured for 6 days in the presence of different concentrations (0, 10, 25, and 50 ng/ml) of chicken GM-CSF and chicken IL-4. Cell morphology and cell growth were monitored daily. After stimulation with Escherichia coli LPS (500 ng/ml) for 30 hours, images were taken to check for changes in cell morphology.

Phagocytosis assay

Phagocytosis was assessed using FITC-labeled inactivated H5N9 virus and 0.5 µm carboxylate-modified fluorescent red latex beads (Sigma-Aldrich, St. Louis, MO). Briefly, non-stimulated ckBM-DCs were cultured for 6 days, followed by incubation with FITC-labeled inactivated H5N9 virus or chicken serum-opsonized red latex beads in RPMI-1640 medium at a density of 10^8 particles/ml at 41°C for 4 hours. Cells were washed five times with 1X PBS and visualized with immunofluorescence microscopy.

Immunofluorescence analysis

For sialic acid receptor staining, cells were fixed and stained by incubation with FITC-labeled MAA (SA-α2,3-Gal) and TRITC-labeled SNA (SA-α2,6-Gal) for 1 hour at room temperature. Following 3 rinses in 1X PBS, cells were stained for 5 minutes with DAPI (Thermo Fisher Scientific, Waltham, MA). The immunofluorescence assays for viral nucleoprotein (NP) detection were performed as previously described (40). Briefly, cells were infected with A/turkey/Virginia/SEP-4/2009 (H1N1) and A/turkey/Wisconsin/68 (H5N9) virus at an MOI of 1 for 20 hours. Cells were then washed with 1X PBS twice, fixed, and permeabilized with methanol. Viral antigens were detected with a mouse-derived monoclonal antibody specific for the type A influenza virus nucleoprotein (developed at Southeast Poultry Research Laboratory, USDA), then stained with a FITC-conjugated anti-mouse IgG antibody (Thermo Fisher Scientific, Waltham, MA).

Virus infection and analysis of cytokine expression by quantitative real-time RT-PCR

Cells were infected with either LPAI or HPAI H5N2 and H7N3 viruses at an MOI of 1 in serum-free DC medium for one hour, with gentle agitation applied every 10 minutes. Cells were washed twice with 1X PBS, resuspended in DC medium containing 2% chicken serum, and incubated at 41°C and 5% CO2. At 2, 8, and 24 hours post infection (hpi), supernatants were collected and stored at -80°C until titration. Virus titers are expressed as log10 50% embryo infectious dose (EID50/ml) and hemagglutination units (HAU). Cells were harvested for RNA extraction at 8 hpi. Relative gene expression levels of IFN-α, Mx, TLR-3, TLR-7, MHC-I, IL-1β, IL-6, Casp-3, and Casp-8 were evaluated by qRT-PCR as previously described (28).
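Fold changes from qRT-PCR of this kind are commonly computed with the 2^(-ΔΔCt) method against a housekeeping reference gene (28S in this study). The paper's exact pipeline is given in its cited reference (28), so the following is a generic, illustrative sketch with made-up Ct values:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (assumed; see reference 28)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # e.g. IL-1b vs. 28S, infected
    d_ct_control = ct_target_control - ct_ref_control   # same gene pair, mock cells
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Illustrative Ct values only: a ddCt of -3 corresponds to 8-fold upregulation.
print(fold_change(22.0, 18.0, 25.0, 18.0))   # 2**-(4 - 7) = 8.0
```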
Statistical analyses

Data are expressed as the mean ± standard error. Statistical differences were analyzed by one-way ANOVA with Tukey's post hoc test using Prism 9 (GraphPad Co., San Diego, CA).

Morphological characteristics of chicken bone marrow-derived DCs

Morphological characteristics of ckBM-DCs differed based on the levels of recombinant chicken GM-CSF and IL-4 (0, 10, 25, and 50 ng/ml) used. Untreated bone marrow cells displayed a rounded appearance, with follicle-like structures (cytoplasmic vacuoles) present within the cytoplasm (Figure 1A). Cells treated with 10 ng/ml or 25 ng/ml of GM-CSF and IL-4 retained a rounded appearance, but a few cells were observed to have some elongated morphology (Figures 1B, C). Cells treated with 50 ng/ml exhibited the greatest DC-like morphology, morphing into larger, elongated, and branched cells, as previously described (Figure 1D) (35). While no international consensus exists on how to determine units of activity for avian cytokines, 50 ng/ml of GM-CSF and 50 ng/ml of IL-4 were used in this study to maximize the number of cell aggregates.

Maturation of ckBM-DCs

To induce maturation of the ckBM-DCs, we stimulated cells with 500 ng/ml LPS on day 6 post culture for 30 hours. The cells were examined at different timepoints (0, 10, 20, and 30 hours) after the addition of LPS. At the 0 timepoint, cells displayed a veiled appearance, with small elongated branches on each cell (Figure 2A). After incubation with LPS for 10 hours, the DCs began displaying long and thin branch-like features with a spiny or sheet-like appearance (Figure 2B). At the later timepoints, 20 and 30 hours, most of the cells developed a dendritic-like appearance (dendrites), indicating the presence of activated, mature DCs (Figures 2C, D).

Mature ckBM-DCs share phenotypic similarities with mammalian DCs

Dendritic cells co-exist in both immature and mature states. In mammals, immature dendritic cells are characterized by moderate or low-level expression of surface marker molecules such as MHC-II, CD11c, CD40, CD80, CD83 and CD86, which increases upon maturation (41). Immunofluorescence microscopy demonstrated that immature ckBM-DCs had some level of surface marker expression when stained with anti-chicken MHC-II (Figure 3A1), anti-chicken CD11c (Figure 3B1) and anti-chicken CD40 (Figure 3C1). After stimulation with LPS for 24 hours, the expression level was increased for all 3 markers: MHC-II (Figure 3A2), CD11c (Figure 3B2) and CD40 (Figure 3C2). To quantify the level of surface marker expression, qPCR was used to determine the fold change in expression between immature and mature DCs. The level of surface marker expression was significantly enhanced in mature ckBM-DCs, approximately 40-120-fold compared with their immature counterparts. The greatest change was observed with CD80, CD83 and CD86 (Figure 3B).

Immature ckBM-DCs retain the capability to phagocytose foreign antigens

To test phagocytosis, 0.5 µm carboxylate-modified fluorescent red latex beads and FITC-labeled inactivated H5N9 avian influenza virus particles were added to immature ckBM-DCs. The cells were able to phagocytose both the beads (Figure 4A) and the viral particles (Figure 4B). The beads and virus were observed in the cytoplasm (Figures 4A, B).
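The fold changes quoted above come from qPCR normalized to the chicken 28S gene. The exact analysis follows the protocol cited in the methods (28), but the widely used Livak 2^(-ΔΔCt) calculation, sketched here with hypothetical Ct values, illustrates the arithmetic.

```python
# Sketch of the Livak 2^(-ddCt) fold-change calculation, assuming the qPCR
# data were analyzed this way; the Ct values are hypothetical, not from this
# study. The reference gene plays the role of the chicken 28S normalizer.
def fold_change(ct_target_mature, ct_ref_mature,
                ct_target_immature, ct_ref_immature):
    d_ct_mature = ct_target_mature - ct_ref_mature       # normalize to 28S
    d_ct_immature = ct_target_immature - ct_ref_immature
    return 2 ** -(d_ct_mature - d_ct_immature)

# Example: a surface marker in mature (LPS-stimulated) vs immature ckBM-DCs.
print(f"fold change = {fold_change(22.1, 16.0, 28.4, 16.2):.0f}x")  # ~69x
```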
AIVs are capable of infecting immature ckBM-DCs

To test whether immature ckBM-DCs can successfully be infected with AIV, immunofluorescence microscopy was performed to detect expression of SA-α2,3-Gal and SA-α2,6-Gal receptors on the DC surface. Both SA-α2,3-Gal (Figure 5A2) and SA-α2,6-Gal (Figure 5B2) receptors were extensively expressed on the immature DCs.

FIGURE 2 Morphology of immature ckBM-DCs stimulated with LPS. Cells were cultured in the presence of 50 ng/ml GM-CSF + 50 ng/ml IL-4 for 6 days and then stimulated with LPS (500 ng/ml). ckBM-DCs were observed by microscopy for 30 hours. Images show cells cultured at (A) 0 hours, (B) 10 hours, (C) 20 hours, and (D) 30 hours. A representative image is shown for each timepoint at 100x magnification. Differing levels of elongated dendrites are in black boxes.

FIGURE 3 Comparative analysis of surface markers on immature and mature ckBM-DCs. Cells were cultured in the presence of 50 ng/ml GM-CSF + 50 ng/ml IL-4 for 6 days, and then stimulated with 500 ng/ml LPS for 30 hours. (A) Immature cells (-LPS) are on the left (A1, B1, C1) and mature cells (+LPS) are on the right (A2, B2, C2). Immunofluorescence analysis was performed using a FITC-labeled mouse anti-chicken MHC-II antibody (green) (A1, A2). Cells were also stained with mouse anti-chicken CD11c (B1, B2) (red) and mouse anti-chicken CD40 (C1, C2) followed by a goat anti-mouse secondary (green). A representative image is shown for each at 100x magnification. (B) Cellular RNA was extracted to measure expression levels of surface markers in ckBM-DCs via qPCR. RNA was normalized using the chicken 28S housekeeping gene. The data are expressed as the fold change in mRNA levels between immature and mature ckBM-DCs for MHC-II, CD11c, CD40, CD80, CD83, and CD86. The data shown are representative of three independent experiments. Error bars represent the standard error.

ckBM-DCs can be infected with both LPAIVs and HPAIVs

ckBM-DCs were infected with HPAIV and LPAIV (H5N2 and H7N3 subtypes) to determine the effect on cell morphology and viral replication. At 8 hpi, all infected cells underwent some degree of morphological change; more rounded cells were observed when infected with H7N3 compared to the H5N2 strains (Figure 6A). At 24 hpi, cytopathic effect (CPE) was observed in the form of detached cells and changes in their morphology (rounding), regardless of subtype or pathogenicity (Figure 6A). There was little difference in severity of CPE between the H5N2 strains, but cells infected with HPAI H7N3 demonstrated more severe levels of CPE, with larger numbers of detached cells, compared to LPAI H7N3 (Figure 6A). In terms of viral growth, LPAI H5N2 demonstrated a titer of 10^5.5 EID50/ml at 24 hpi, compared to HPAI H5N2, which demonstrated a titer of 10^4.8 EID50/ml. In contrast, HPAI H7N3 demonstrated a higher titer compared to LPAI H7N3, with titers of 10^6.5 EID50/ml and 10^4.8 EID50/ml, respectively (Figure 6B). All virus titers increased by 3-5 logs between 2 and 8 hpi, indicating that AIV replicated in the ckBM-DCs (Figure 6B).
Discussion

The innate immune system plays a central role in detecting viral pathogens and mounting an early response by activating inflammatory and antiviral defense mechanisms. DCs are essential in bridging the gap between innate and adaptive immune responses because they process and present antigens to T cells and B cells. However, it is still largely unknown whether AIV can directly infect ckBM-DCs and whether infection causes morphological and physiological changes to the cells. This study established that chicken DCs can be infected by AIV and that viral growth occurs in them. We also demonstrated that ckBM-DCs were able to phagocytose viral particles as immature DCs. The ckBM-DCs could be infected by both HPAIV and LPAIV isolates. However, differences in cell morphology did exist between the virus strains and pathogenicities. LPAI H5N2 replicated better in ckBM-DCs than its HPAI counterpart. While the exact reason is not clear, this may indicate delayed replication in DCs or be attributed to a lack of adaptation of HPAI H5N2 to cell culture. In contrast, HPAI H7N3 demonstrated more severe CPE at 8 hpi compared to LPAI H7N3, suggesting some correlation between pathogenicity and CPE for H7 viruses. Viral titers also correlated with CPE and pathogenicity, with HPAI H7N3 demonstrating a 1.7 log difference in EID50/ml titers compared to LPAI H7N3. These results are consistent with a previous study in which HPAI H7N1 showed better replication in chicken DCs compared to LPAI (32).

Controlled cell death is normally induced by apoptotic genes, and up-regulation of related caspase genes is typically observed during a viral infection (23). In our study, Casp-3 and Casp-8 expression increased in all infected groups, regardless of pathogenicity or subtype. However, DCs infected with HPAI viruses demonstrated significantly higher expression levels of Casp-3 and Casp-8 compared to their LPAI counterparts, indicating a correlation between caspase gene expression and pathogenicity. Studies have demonstrated that AIV causes caspase-dependent apoptosis based on Casp-3 activation, which results in nuclear export of newly synthesized viral nucleoprotein (NP) and elevated virus replication. This suggests Casp-3 activation is a crucial event for AIV propagation and dissemination (42, 43). One study reported that primary duck cells infected with LPAI H2N3 and classical H5N1 strains underwent rapid cell death compared to primary chicken cells; both lines showed similar levels of viral RNA, but lower amounts of infectious virus were observed in the duck cells (44). Such rapid cell death was not observed in the same study with duck cells infected with a contemporary Eurasian H5N1 strain fatal to ducks, indicating that the rapid apoptosis may be part of a mechanism of host resistance against AIV (45). The increased expression of caspase genes demonstrated in our study may further support the notion that AIV can induce cell death via Casp-3 and Casp-8.
During AIV infection, ssRNA and dsRNA are recognized by a specific group of PRRs. In this study, HPAIV-infected cells demonstrated significantly higher expression levels of TLR-3 and TLR-7 compared to cells infected with LPAIV. However, the level of TLR expression did not correspond to the amount of viral load, as the titers gave mixed results between the HPAI and LPAI strains. Furthermore, the TLR-3 expression levels were significantly higher with HPAI H7N3 than with the HPAI H5N2s. One study reported that TLR-3 expression levels significantly increased at 4 hpi and 16 hpi with HPAI H7N1 infections, whereas the level of increase with HPAI H5N2s was more gradual (32). TLR-3 and TLR-7 closely interact with STAT-3, which is crucial for regulating cytokine-mediated responses, such as IL-6, to combat viral infections (46). One study reported that STAT-3 expression was not adversely affected by LPAIV H3N2 in chicken cells, but expression levels were significantly decreased in chicken cells infected by HPAI H5N1 (45). In contrast, STAT-3 expression levels were significantly elevated in duck cells, indicating that infection with the same H5N1 strain had a less adverse effect in duck cells. Thus, it can be speculated that differences in the cell signaling process, along with the specificity of the strains, may affect cytokine responses.

Our results demonstrated that expression levels of proinflammatory related genes were higher in the HPAI groups than in the LPAI groups in the early stages of infection. Geus et al. reported that levels of IFN-α were elevated in HPAI-infected DCs and were maintained up to 24 hpi, compared to the LPAI-infected DCs, where most of the IFN-α expression occurred only in the early stages (47). Our study demonstrated that the expression levels of IFN-α and IL-6 genes in DCs were higher in the HPAI H7N3 group than in the HPAI H5N2 group, suggesting that the ability to activate host innate responses may vary depending on the virus subtype and the host. Several studies have reported high levels of IL-6, IL-12 and IL-18 cytokine expression in the lungs and spleens of chickens infected with H5 HPAIVs, while type 1 interferons were mostly present in the plasma and tissues (48-51). Another study reported similar amounts of viral RNA and cytokine expression levels following infection with HPAI and LPAI H7N1 in chickens (52). Kuribayashi et al. (2013) demonstrated that H7N1 strains can replicate more efficiently in chickens than H7N7, especially in the brain, and are able to trigger excessive expression of inflammatory and antiviral cytokines, such as IFN-γ, IL-1β, IL-6, and IFN-α, in proportion to their proliferation. In contrast, another study reported that human-origin DCs infected with HPAI H7 showed delayed and decreased expression of cytokines, including type 1 interferons, compared to other AIV subtypes (53). Thus, the differing immune profiles of the host cells might be attributed to the specificity of the AIV. Furthermore, HPAI viruses may impair the regulatory activity of the TLR pathway, which is responsible for controlling the magnitude and duration of the inflammatory response, leading to an uncontrolled immune response and cytokine storm. The acute, uncontrolled innate immune response, which leads to overexpression of proinflammatory cytokines, may be one of the causes of swift death in mammals infected with HPAI. Thus, one can speculate that deregulation of these cytokines in chicken DCs may lead to multiple organ failure, as frequently seen in mammals.
Mx is a well-known antiviral protein, which can be induced by type 1 interferons (15). However, susceptibility to the inhibitory effects of Mx may vary by strain and host (14, 20, 54, 55). In this study, higher type 1 interferon expression levels were observed along with elevated expression of the Mx gene. However, despite the presence of elevated type 1 interferon and Mx expression levels, viral replication in DCs was not significantly inhibited. Rapid cell death and activation of caspase-dependent apoptosis did not appear to hinder the output of viral load. To date, the full complement of genes, and their exact roles, contributing to antiviral properties are not well defined in chickens. However, one might speculate that the PRR-dependent immune response plays a crucial role in mounting an antiviral defense, given the role of TLR-7 and RIG-I receptor signaling. For instance, it was shown that the presence of RIG-I in cells stimulates expression of several key genes involved in innate immune responses that are crucial against viral infections, such as influenza (56).

Overall, we were able to demonstrate that AIV can infect and replicate in chicken DCs regardless of pathogenicity. HPAI subtypes trigger a significantly higher expression of various immune factors compared to LPAI subtypes, suggesting a dysregulation of the immune system. The increase in DC activation following infection may be indicative of the dysregulated immune responses typically seen with highly pathogenic avian influenza infections.

FIGURE 4 Functionality of immature ckBM-DCs. Cells were cultured in the presence of 50 ng/ml GM-CSF + 50 ng/ml IL-4 for 6 days. (A) ckBM-DCs were incubated with 0.5 µm carboxylate-modified fluorescent red latex beads or (B) FITC-labeled inactivated H5N9 avian influenza virus for 4 hours. Following incubation, cells were counterstained with DAPI, washed 5x with 1X PBS, and visualized by immunofluorescence microscopy. A representative image is shown for each at 100x magnification.

FIGURE 6 Change in morphology and growth of ckBM-DCs infected with LPAIV and HPAIV. Cells were cultured in the presence of 50 ng/ml GM-CSF + 50 ng/ml IL-4 for 6 days. Immature ckBM-DCs were infected at a MOI of 1 with LPAIV (A/Chicken/Pennsylvania/21525/1983 H5N2 and A/Cinnamon Teal/Mexico/2817/2006 H7N3) and HPAIV (A/Chicken/Pennsylvania/1370/1983 H5N2 and A/Chicken/Jalisco/CPA1/2017 H7N3) viruses. (A) CPE (black arrows) and cellular morphological changes were observed via microscopy at 8 and 24 hpi. A representative image is shown for each at 100x and 200x magnification. (B) Supernatants were obtained at 2, 8, and 24 hpi and viral titers were evaluated by EID50. The data shown are representative of three independent experiments. Error bars represent the standard error of triplicate samples.
Simultaneous XMM-Newton and HST-COS observation of 1H0419-577: II. Broadband spectral modeling of a variable Seyfert galaxy

In this paper we present the longest exposure (97 ks) XMM-Newton EPIC-pn spectrum ever obtained for the Seyfert 1.5 galaxy 1H 0419-577. With the aim of explaining the broadband emission of this source, we took advantage of the simultaneous coverage in the optical/UV that was provided in the present case by the XMM-Newton Optical Monitor and by a HST-COS observation. Archival FUSE flux measurements in the FUV were also used for the present analysis. We successfully modeled the X-ray spectrum together with the optical/UV flux data points using a Comptonization model. We found that a blackbody temperature of T ∼ 56 eV accounts for the optical/UV emission originating in the accretion disk. This temperature serves as input for the Comptonized components that model the X-ray continuum. Both a warm (T_wc ∼ 0.7 keV, τ_wc ∼ 7) and a hot corona (T_hc ∼ 160 keV, τ_hc ∼ 0.5) intervene to upscatter the disk photons to X-ray wavelengths. With the addition of a partially covering (C_v ∼ 50%) cold absorber with a variable opacity (N_H ∼ [10^19-10^22] cm^-2), this model can also explain the historical spectral variability of this source, with the present dataset showing the lowest column density (N_H ∼ 10^19 cm^-2). We discuss a scenario where the variable absorber, getting ionized in response to the variations of the X-ray continuum, becomes less opaque in the highest flux states. The lower limit for the absorber density derived in this scenario is typical of the broad line region clouds. Finally, we critically compare this scenario with the different models (e.g., disk reflection) that have been used in the past to explain the variability of this source.

Introduction

The infall of matter onto a supermassive black hole (SMBH) supplies the energy that active galactic nuclei (AGN) emit in the form of observable radiation over a broad energy range. Indeed, in the radio quiet case, the AGN emission ranges mostly from optical to X-ray wavelengths. The optical/UV emission is thought to be direct thermal emission from the accreting matter. A standard geometrically thin, optically thick accretion disk (Shakura 1973; Novikov & Thorne 1973) produces a multicolor blackbody spectrum, whose effective temperature scales with the black hole mass as T ∝ M^-1/4. Therefore, for a typical AGN hosting a SMBH of M ∼ 10^8 M_⊙, the disk spectrum is expected to peak in the far UV range (e.g., for an Eddington ratio of L/L_Edd ∼ 0.2, T ∼ 20 eV). The Wien tail of the accretion disk emission is not expected to be strong in the soft X-rays. However, Seyfert X-ray spectra display a prominent "soft excess" (Arnaud et al. 1985; Piro et al. 1997) that lies well above the steep power law which describes the spectrum well at energies larger than ∼ 2.0 keV (Perola et al. 2002; Cappi et al. 2006; Panessa et al. 2008). The origin of the soft X-ray emission in AGN has been much debated in the last decades. Comptonization of the disk photons in a warm plasma is a possible mechanism to extend the disk emission to higher energies (Magdziarz et al. 1998; Done et al. 2012). Alternatively, relativistically blurred reflection of the primary X-ray power law in an ionized disk (Ballantyne et al. 2001) is another possible explanation.
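The ∼20 eV figure quoted above can be checked with a standard thin-disk estimate. The sketch below assumes a radiative efficiency of 0.1 and an inner radius at the Schwarzschild ISCO (assumptions not stated in the text); it yields temperatures of order 10 eV, rising by a factor of a few for a rapidly spinning black hole.

```python
import math

# Order-of-magnitude check of the quoted thin-disk temperature. Assumptions
# (not from the text): radiative efficiency eta = 0.1 and an inner radius of
# 6 GM/c^2 (Schwarzschild ISCO); high spin (smaller R_in) raises T by a few.
G, c, sigma_sb, k_eV = 6.674e-8, 3e10, 5.67e-5, 8.617e-5   # cgs; k in eV/K
M = 1e8 * 1.99e33                        # black hole mass [g]
L = 0.2 * (1.26e38 * 1e8)                # 0.2 L_Edd for 1e8 M_sun [erg/s]
mdot = L / (0.1 * c**2)                  # accretion rate [g/s]
r_in = 6 * G * M / c**2                  # inner disk radius [cm]

# Peak effective temperature of a Shakura-Sunyaev disk; the factor 0.488
# locates the maximum of the radial temperature profile.
t_max = 0.488 * (3 * G * M * mdot / (8 * math.pi * sigma_sb * r_in**3)) ** 0.25
print(f"kT_max ~ {k_eV * t_max:.0f} eV")  # ~8 eV; tens of eV at high spin
```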
Partial covering ionized absorption (see Turner et al. 2009, for a review) is also able to explain the soft X-ray emission without requiring extremely relativistic conditions in the vicinity of the black hole. Discriminating among these models through X-ray spectral fitting alone is difficult, even in high signal-to-noise spectra. In many cases, models with drastically different underlying physical assumptions can provide an acceptable fit (e.g., Middleton et al. 2007; Crummy et al. 2006). For instance, in the case of the well known nearby Seyfert 1 MCG-6-30-15, both a reflection model (Ballantyne et al. 2003) and an absorption-based model (Miller et al. 2008) have been successfully used to fit the spectrum. For these reasons, the nature of the soft excess is still an open issue (see also Piconcelli et al. 2005). Recently, the multiwavelength monitoring campaign (spanning ∼ 100 days) of the bright Seyfert galaxy Mrk 509 provided a possible discriminating piece of evidence. During the campaign, the soft excess component varied together with the UV continuum emission (Mehdipour et al. 2011, hereafter M11). This finding disfavors disk reflection, at least as the main driver of the source variability occurring on the few-days timescale typical of the campaign. Indeed, in the case of disk reflection, the soft excess component should rather vary in correlation with the hard (≳ 10 keV) X-ray flux, because of the broad reflection bump at ∼ 30 keV characterizing any reflection component. The M11 result is not a unique case: the soft excess variability has also been found to be independent of the hard X-ray variability on a longer (∼ years) timescale (e.g., Mrk 590, Rivers et al. 2012). The correlation between the soft X-ray and UV variability may be a natural consequence of Comptonization, because in this framework the soft X-ray emission is directly fed by the disk photons. Indeed, this interpretation explains the simultaneous broadband optical/UV/X-ray/gamma-ray spectrum of Mrk 509 obtained in the monitoring campaign (Petrucci et al. 2013, hereafter P13). In the P13 broadband model, two Comptonized components model the "soft excess" and the X-ray emission above ∼ 2.0 keV, respectively. Indeed, it is commonly accepted that the phenomenological power law characterizing AGN spectra above ∼ 2.0 keV is produced by Comptonization of the disk photons in a hot (T ∼ 100 keV), optically thin corona (Haardt & Maraschi 1991). The nature and the origin of the hot corona are still largely unknown. However, recent results from X-ray timing techniques (e.g., X-ray reverberation lags, Wilkins & Fabian 2013) and imaging of gravitationally lensed quasars (e.g., Morgan et al. 2008) indicate that it may be a compact emitting spot, located a few gravitational radii above the accretion disk (e.g., Reis & Miller 2013).

1H 0419-577: a variable Seyfert galaxy

1H 0419-577 is a radio quiet quasar located at redshift z=0.104 and spectrally classified as a type 1.5 Seyfert (Véron-Cetty & Véron 2006). The estimated mass of the SMBH harbored in its nucleus is ∼ 3.8 × 10^8 M_⊙ (O'Neill et al. 2005). The source has been targeted by all the major X-ray observatories, and in Fig. 1 we plot the historical fluxes in the 0.5-2.0 keV band. As noticed for the first time in Guainazzi et al. (1998), 1H 0419-577 undergoes frequent transitions between low and high flux states.
While the bulk of the flux variation occurs in the soft X-rays, in the hard (2-10 keV) X-ray band the spectral slope flattens drastically (down to Γ = 1.0 in the lowest state, Page et al. 2012; Pounds et al. 2004a). Due to this peculiar behavior in the X-rays, 1H 0419-577 is challenging for any interpretation, and for this reason it has been the subject of much discussion over the past years. According to Page et al. (2002), the cooling of the plasma temperature in the hot corona may produce the observed flux/spectral transition. Afterwards, Pounds et al. (2004a,b) carried out a systematic study of the spectral variability in this source, using five ∼15 ks long observations that were taken during one year with a time spacing of ∼3 months. These authors concluded that the spectral variability is dominated by an emerging/disappearing steep power-law component, which is in turn modified by a slightly ionized variable absorber. The fitted absorber becomes more ionized and less opaque as the continuum flux increases, supporting the idea that a fraction of the soft X-ray emission may be due to re-emission of the absorbed continuum in an extended region of photoionized gas. An alternative explanation of the same XMM-Newton datasets was however readily proposed in the framework of the blurred reflection model (Fabian et al. 2005, hereafter F05). This model prescribes that AGN spectral variability is due to the degree of light bending as the primary power-law emitting spot moves in a region of stronger gravity. Low, flat states, such as the ones observed in 1H 0419-577, are extreme reflection-dominated cases occurring when the primary emission is almost completely focused down onto the disk and does not reach the observer. The broader spectral coverage provided by two subsequent Suzaku observations of 1H 0419-577 did not break this model degeneracy. The variable excess observed above ∼15 keV can be explained either by reflection (Walton et al. 2010; Pal & Dewangan 2013) or by reprocessing of the primary emission in a partially covering, Compton-thick screen of gas (Turner et al. 2009). The high ionization parameter suggests that this absorber may be part of a clumpy disk wind located within the broad line region (BLR). In this paper we present the longest exposure EPIC-pn dataset of 1H 0419-577 obtained so far. The XMM-Newton observation was taken simultaneously with a HST-COS observation in the UV (Edmonds et al. 2011), and caught the source in an intermediate flux state (Fig. 1). In the Reflection Grating Spectrometer (RGS) spectrum of this dataset (already presented in Di Gesu et al. 2013, hereafter Paper I) we detected a lowly ionized absorbing gas (also observed with Suzaku, Winter et al. 2012). We found that the X-ray and the UV absorbing gas (Edmonds et al. 2011) are consistent with being one and the same. The low gas density estimated in the UV, together with the low ionization parameter that we measured in the X-rays, implies a galactic scale location for the absorbing gas (d ∼ 4 kpc). In this respect, the warm absorber in 1H 0419-577 represents a unique case, being the first X-ray absorber ever detected so far away from the nucleus. The absorbing gas does not have an emission counterpart, as more highly ionized lines, produced by e.g. O vii, O viii, and Ne ix, are the most prominent emission features in the X-ray spectrum. The photoionization modeling of the X-ray and UV narrow emission lines confirmed that they are produced by a more highly ionized gas phase, located closer (∼1 pc) to the nucleus.
In the present analysis we exploit the simultaneous UV and optical coverage (thanks to the XMM-Newton Optical Monitor) to model the X-ray spectrum in a broadband context using Comptonization. The paper is organized as follows: in Sect. 3 we explain the data reduction procedure; in Sect. 4 we present the spectral analysis of our dataset; in Sect. 5 we apply our best fit model to the past XMM-Newton datasets, with the aim of explaining the historical spectral variability; finally, in Sect. 6 we discuss our results and in Sect. 7 we outline our conclusions. The cosmological parameters used are: H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3, Ω_Λ = 0.7. The C-statistic (Cash 1979) is used throughout the paper, and errors are quoted at 90% confidence levels (∆C = 2.7). In all the spectral models presented in the following we consider the Galactic hydrogen column density from Kalberla et al. (2005, N_H = 1.26 × 10^20 cm^-2).

Table 1 notes: (a) Net exposure time after flaring filtering. (b) Observed fluxes in the quoted bands. (c) Archival datasets. (d) Datasets analyzed in this paper.

Observations and data preparation

1H 0419-577 was observed in May 2010 with XMM-Newton for ∼167 ks. The observing time was split into two observations (Obs. ID 0604720301 and 0604720401, respectively), which were performed in two consecutive satellite orbits. For the present analysis, we used the EPIC-pn (Strüder et al. 2001) and the Optical Monitor (OM, Mason et al. 2001) data. In the UV, besides the HST-COS observation simultaneous with our XMM-Newton observation, the source has been observed twice with the Far Ultraviolet Spectroscopic Explorer (FUSE), in 2003 and in 2006 respectively. In this analysis we used the FUSE flux measurements reported in the literature (Dunn et al. 2008; Wakker & Savage 2009). Finally, we retrieved all the available archival datasets from the XMM-Newton archive and used them to study the source variability.

The X-ray data

We processed the present datasets and all the archival Observation Data Files (ODF) with the XMM-Newton Science Analysis System (SAS), version 10.0, and with the HEAsoft FTOOLS, version 6.12. We refer the reader to Paper I for a detailed description of the data reduction. For the present datasets, we extracted the EPIC-pn spectra from both the 0604720301 and 0604720401 observations. We checked the stability of the spectrum in the two observations and found no flux variability larger than ∼ 7%. Therefore, we summed the two spectra into a single combined spectrum with a net exposure time of ∼ 97 ks after the background filtering. We used the FTOOLS mathpha and addarf to combine the spectra and the Ancillary Response Files (ARF), respectively. We reduced all the archival datasets following the same standard procedure described in Paper I, and we discarded the datasets with ID 0148000301 and 0148000701 because they show a high contamination by background flares. Hence, we created the EPIC-pn spectra and spectral response matrices for all the good datasets. We fitted all the X-ray spectra in the 0.3-10 keV band and rebinned them in order to have at least 20 counts in each spectral bin, although this is not strictly necessary when using the C statistic. In Table 1 we provide the most relevant information for each XMM-Newton observation and label the observations with numbers, following chronological order.

The optical and UV data

As also described in Paper I, in our XMM-Newton observation OM data were collected in 4 broad-band filters: B, UVW1, UVM2, and UVW2.
In the present analysis we used the OM filter count rates for the purpose of spectral fitting. Therefore, we also retrieved from the ESA website the spectral response matrices corresponding to each filter. We corrected the flux in the B filter to account for the host galaxy starlight contribution. For this, we used the same correction factor (56%) estimated in M11 for the stellar bulge of Mrk 509. Indeed, since Mrk 509 hosts a BH with a mass similar to the one in 1H 0419-577, the stellar mass of the bulge should also be similar in these two galaxies (e.g., Merritt & Ferrarese 2001; Edmonds et al. 2011). Therefore, we could safely fit the FUSE fluxes together with the COS, OM and EPIC-pn data that were simultaneously taken in 2010. For this purpose, we converted the UV fluxes back to count rates. We used the HST-COS sensitivity curve and the FUSE effective area (see also M11) for this. We outline all the UV and optical continuum values for 1H 0419-577 in Table 2. To check for possible variability of the source in the optical-UV, we also obtained the OM fluxes from the archival XMM-Newton observations. For all the archival datasets except Obs. 1, OM data were available in the U, B, V, UVW1, and UVW2 filters.

A phenomenological model

We started with a purely phenomenological modeling of the present EPIC-pn spectrum, using SPEX (Kaastra et al. 1996), version 2.04.00. We first attempted to fit the spectrum in the 2.0-10.0 keV energy region with a canonical simple power law (Γ ∼ 1.6). The broadband residuals (Fig. 2) show large deviations from this simple model. Indeed, besides a prominent soft excess in the 0.3-2.0 keV band, the model does not account for a broad trough between 2.0 and 4.0 keV. To phenomenologically account for this nontrivial spectral shape, a combination of 4 different spectral slopes would be required over the whole 0.3-10.0 keV band. Nonetheless, a prominent peak in the model residuals (Fig. 2), at ∼ 0.5 keV, remains unaccounted for. We identified this feature as due to the blend of the O vii-O viii lines that we detected in the simultaneous RGS spectrum (Paper I). Furthermore, a shallow excess is seen at ∼ 5.5 keV. Fitting this feature with a delta-shaped emission line, centered at the nominal rest frame energy of the Fe Kα line transition, does not leave any prominent structure in the residuals. If the line width is left free to vary, the fitted line width (σ = 300 ± 200 eV) is well consistent with what has previously been reported for this source (Turner et al. 2009; Pounds et al. 2004a,b). We also attempted to decompose the Fe Kα line into a combination of a broad plus a narrow component, but this exercise did not lead to a conclusive result. Despite the good data quality of the present dataset, the line width of the broad component and the normalization of the narrow component cannot be constrained simultaneously. In conclusion, the long exposure time of the present EPIC-pn spectrum unveiled a complex continuum spectral structure, which calls for a more physically motivated modeling to be fully understood.

Table 3. Best fit parameters for the reflection fitting (simple reflection model).

Reflection fitting

At first, we tested a disk reflection scenario for the present spectrum. As noticed in Sect. 1, this model has already been successfully applied to Obs. 1-5 (F05). Besides the main power-law continuum, the second relevant spectral component in this model is a relativistically smeared reflected power law, which is thought to be produced in an ionized accretion disk.
We fitted the spectrum with Xspec (Arnaud 1996), version 12.0, and used PHABS to account for the Galactic hydrogen column density along the line of sight. We used REFLIONX to model the reflected component, and left the ionization parameter of the reflector free to vary. Hence, we accounted for the relativistic effects from an accretion disk surrounding a rotating black hole (Laor 1991) with KDBLUR2. The free parameters in this component are the disk inclination and inner radius, along with the slopes and the break radius of the broken power-law-shaped emissivity profile. We kept the outer radius frozen to the default value of 400 gravitational radii (R_g), and set the iron abundance to the solar value (Anders & Grevesse 1989). We extended the model calculation to a larger energy range (0.1-40 keV) to avoid spurious effects due to a truncated convolution. We also attempted to fit the spectrum with a composite disk model (see F05), splitting the disk into two regions with different ionization, to mimic a more realistic scenario where the disk ionization parameter varies with the radius. However, with a simple reflection model we already obtained a statistically good fit, which was not strikingly improved (∆C = 29) by the more complex composite disk model. We list the best fit parameters of the reflection fitting in Table 3.

Overall, our result agrees with the main predictions of the physical picture proposed in F05. The black hole hosted in 1H 0419-577 may be rapidly spinning, as suggested by the proximity of the fitted disk inner radius to the value of the innermost stable orbit of a maximally rotating Kerr black hole. The steep emissivity profile of the disk indicates that it is illuminated mostly in its inner part, as expected if the primary continuum is emitted very close to the BH. In this framework, the historical source variability is due to variable light bending, which may produce a negative trend of the reflection fraction with the power-law flux. Our results are consistent with the general trend noticed in F05 (Fig. 3).

Figure caption (fragment): "... (Table 1). Note that the data point labeled with an "X" has not been used in the present analysis (see Sect. 3.1)."

Broadband spectral modeling

The AGN emission can also be produced by thermal Comptonization (see Sect. 1). This model has the advantage of explaining the AGN emission in a consistent way over the entire optical, UV, and X-ray energy range (e.g., Mrk 509, M11, P13). Indeed, the disk blackbody temperature that can be constrained from a fit of the optical/UV data serves as input for the Comptonized components that produce the X-ray continuum. The model includes both a warm (hereafter labeled "wc") and a hot Comptonizing corona (hereafter labeled "hc") to cover the entire X-ray bandpass. Given the simultaneous X-ray, UV and optical coverage available in the present case, it is worthwhile to test this scenario as well. We fitted the EPIC-pn spectrum of 1H 0419-577 together with the COS, FUSE and OM count rates with SPEX. We left the normalization of each instrument relative to the EPIC-pn as a free parameter, to account for the diverse collecting areas of the different detectors. In the fit we accounted both for the Galactic absorption and for the local warm absorber that we detected in Paper I. For the former, we used the SPEX collisionally-ionized plasma model (HOT), setting a low temperature (0.5 eV) to mimic a neutral gas. The cosmological redshift (z=0.104) was also considered in the fit. The final multicomponent model is plotted in Fig. 4.
We used the disk blackbody model (DBB) in SPEX to model the optical-UV emission of 1H 0419-577. This model is based on a geometrically thin, optically thick, Shakura-Sunyaev accretion disk (Shakura 1973). The DBB spectral shape results from the weighted sum of the different blackbody spectra emitted by annuli of the disk located at different radii. The free parameters are the maximum temperature in the disk (T_max) and the normalization A = R_in^2 cos i, where R_in is the inner radius of the disk and i is the disk inclination. We kept the ratio between the outer and the inner radius of the disk frozen to the default value of 10^3. The parameters of the disk blackbody best fitting the data (Fig. 4, long-dashed line) are: T_max = 56 ± 6 eV and A = (1.2 ± 0.6) × 10^26 cm^2. The fitted intercalibration factors of OM, COS and FUSE relative to EPIC-pn, with errors, are reported in Table 2. The effect of these intercalibration corrections is within the errors of the disk blackbody parameters given above. We used the COMT model in SPEX, which is based on the Comptonization model of Titarchuk (1994), to model the X-ray continuum. The seed photons in this model have a Wien-law spectrum with temperature T_0. In the fit we coupled T_0 to the disk temperature T_max. The other free parameters are the electron temperature T and the optical depth τ of the Comptonizing plasma. A combination of two Comptonizing components fits the entire EPIC-pn spectrum. The warm corona (T_wc ∼ 0.7 keV) is optically thick (τ_wc ∼ 7) and produces the softer part of the X-ray continuum, below ∼2.0 keV (Fig. 4, dotted line). On the other hand, the hot corona (T_hc ∼ 160 keV) is optically thin (τ_hc ∼ 0.5) and accounts for the X-ray emission above ∼2 keV (Fig. 4, dashed line). Hence, we identified the remaining features in the model residuals as due to the O vii-O viii (Fig. 4, dash-dotted line) and Fe Kα emission lines (Fig. 4, dash-dot-dotted line). We added to the fit a broadened Gaussian line, with the line centroid and the line width frozen to the values that we obtained in the RGS fit (Paper I), to account for the O vii emission. The fitted line luminosity is consistent with what was reported in Paper I. The shallower O viii line that was present in the RGS spectrum is instead undetected in the EPIC data. In a Comptonization framework, a possible origin for the Fe Kα emission is reflection from cold, distant matter (e.g., from the torus). We have shown in Sect. 4.1 that the Fe Kα line in 1H 0419-577 might also be broad. Detailed studies of the properties of the Fe Kα emission line produced in cold matter show that in some conditions the line may appear broadened because of the blend between the main line core and the so-called "Compton shoulder" (see e.g., Yaqoob & Murphy 2011). The predicted apparent line broadening is consistent with what we obtained in Sect. 4.1 from a phenomenological fit of the line width. We added a REFL component to the fit to test this possibility. We considered an incident power law with a cutoff energy of 150 keV, and with the same slope and normalization that we derived from the phenomenological fit (Sect. 4.1). We set a null ionization parameter and a low gas temperature (T ∼ 1 eV) to mimic a neutral reflector and, to adapt the model to the data, we left only the scaling factor (s) free to vary. A reflected component with s = 0.3 ± 0.1 satisfactorily fits the Fe Kα line and slightly adjusts the underlying continuum (∆C = −7).
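As a rough consistency check on the fitted coronal parameters (not part of the SPEX fit itself), the standard Compton y-parameter, y ≈ (4kT_e/m_e c^2) max(τ, τ^2), can be evaluated for both media:

```python
# Back-of-the-envelope Compton y-parameters for the two fitted coronae, using
# the common estimate y ~ (4 kT_e / m_e c^2) * max(tau, tau^2). This is an
# illustrative consistency check, not part of the SPEX fit.
def compton_y(kt_kev, tau):
    return 4.0 * kt_kev / 511.0 * max(tau, tau**2)

for name, kt, tau in [("warm corona", 0.7, 7.0), ("hot corona", 160.0, 0.5)]:
    print(f"{name}: y ~ {compton_y(kt, tau):.2f}")
# y ~ 0.3 (warm) and ~0.6 (hot): both media Comptonize the disk photons
# efficiently enough to reshape the continuum without fully saturating it.
```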
We list the luminosities of all the model components in Table 4, while the parameters and errors of the Comptonized components are outlined in Table 5.

The baseline model

In Fig. 5 we plot the historical X-ray, optical and UV fluxes of 1H 0419-577, from the archival XMM-Newton observations and from the present dataset. The optical-UV flux of the source has been stable throughout the ∼8 years spanned by the available OM observations (Obs. 2-6). Nevertheless, as already pointed out, the X-ray flux is instead strongly variable. We used the Comptonization model that successfully fitted the present dataset as a baseline model for the fit of the past XMM-Newton observations. Because the maximum observed variability in the optical-UV (∼ 20%) is within the errors of the disk blackbody parameters that we derived in the broadband fitting (Sect. 4.3), we assumed the same seed photon temperature as for the present dataset in the baseline model. In Fig. 6 we plot the historical fluxes of the Fe Kα emission line, which were measured from a phenomenological fit of the archival and present XMM-Newton datasets in the 2.0-10.0 keV band. Although the line is not well constrained in any of the archival datasets, its flux is consistent with having been stable over the ∼ 10 years covered by the XMM-Newton observations. Therefore, the cold reflection continuum associated with the Fe Kα line should also have remained constant. Since we are mainly interested in studying the source variability in the soft X-ray band, we included in the baseline model just a delta function to account for the Fe Kα emission line. Indeed, the addition of a cold reflection continuum is not critical for the resulting parameters. Finally, we included unresolved O vii-f and O viii-Ly α emission lines in the baseline model. Indeed, previous analysis of the RGS spectra (Pounds et al. 2004b) has shown that O vii and O viii lines were present in Obs. 2-5. The baseline model provides a formally acceptable fit for all the datasets except Obs. 2, namely the lowest flux state.

The low flux state

We show in Fig. 7 a comparison between the low-state spectrum and the present dataset. At first glance, the spectrum appears much flatter in the ∼1.0-2.0 keV band and displays a peak at soft energies.

Partial covering of the baseline model

Prompted by the results of the low-state fit, we checked whether the addition of a cold absorber could improve the fit of the other datasets as well. We outline the results of this exercise for all the archival datasets in Table 5. An additional partially-covering absorbing component, with a similar covering factor (∼ 50%) but a lower column density, provides a significant improvement of the fit for Obs. 3-5. In contrast, no absorbing component is statistically required in the fit of Obs. 1 and 6, namely the two highest flux states. For these datasets, we set an upper limit to the absorbing column density by keeping the covering factor fixed to ∼ 0.5. In Fig. 8 we plot the parameters of the absorber as a function of the source flux. The absorbing column density shows a negative trend with the flux, while the covering fraction is consistent with being constant. The variable absorbing component does not, however, account for all the source variability. As we show in Fig. 9, we still observe intrinsic variability in both Comptonized components after removing the effect of the variable absorption.
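The assumption of a fixed seed-photon temperature in the baseline model, made above, can be checked with simple blackbody scaling; the sketch below assumes the ∼20% optical-UV variation is purely a temperature change (L ∝ T^4), which implies a temperature shift well within the fitted uncertainty on T_max.

```python
# Quick check (assumption: the ~20% optical-UV flux change is purely a change
# of the disk temperature, with L ~ T^4 as for blackbody-like emission).
dL_over_L = 0.20
dT_over_T = dL_over_L / 4.0                  # dL/L = 4 dT/T
fitted_err = 6.0 / 56.0                      # error on T_max = 56 +/- 6 eV
print(f"implied dT/T ~ {dT_over_T:.0%}, fitted error on T_max ~ {fitted_err:.0%}")
# ~5% vs ~11%: keeping the seed temperature fixed is self-consistent.
```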
We also tested a different scenario, where the absorber is constant in opacity and its variability is driven only by a variable covering fraction. For this purpose, we attempted to fit Obs. 4 and 5 keeping the absorber column density frozen to the value observed in the low-flux state and letting only the covering factor free to vary. In both cases we had to release also the parameters of the underlying continuum to achieve an acceptable fit. In detail, for Obs. 4 we obtained a covering factor of ∼4%, but prominent model residuals remain between 2 and 5 keV. In Obs. 5 the fit erases the absorber, pushing the covering factor to a much lower value (∼0.03%). In both cases the fit is statistically worse (C/d.o.f. = 216/152 and 152/139, respectively) than what is reported in Table 5. Thus, we rejected this possibility, and concluded that the absorber model outlined in Table 5 better fits the data.

Figure caption (fragment): "The best fit Comptonization model is displayed as a solid line. Model residuals, in terms of σ, are also shown. We rebinned the spectra for clarity purposes. Note that for the present dataset, error bars are as large as the thickness of the model line."

Discussion

6.1. The X-ray spectrum of 1H 0419-577

The long exposure XMM-Newton observation of 1H 0419-577 that we presented in this paper provided a high-quality X-ray spectrum, suitable for testing physically motivated models against real data. Exploiting the simultaneous coverage in the optical/UV that was provided in the present case by the OM and by HST-COS, we successfully represented the broadband spectrum of 1H 0419-577 using Comptonization. In the emerging physical picture, the optical-UV disk photons (T ∼ 56 eV) are Comptonized both by an optically thick (τ_wc ∼ 7) warm medium (T_wc ∼ 0.7 keV) and by an optically thinner (τ_hc ∼ 0.5) and hotter (T_hc ∼ 160 keV) plasma to produce the entire X-ray spectrum. A similar interpretation has recently been proposed for the broadband simultaneous spectrum of Mrk 509 (M11, P13), and also for a sample of unobscured type 1 AGN. A reasonable configuration for these two media in the inner region of AGN is possible. Two different Comptonizing coronae may be present. The geometrically compact hot corona may be associated with the inner part of the accretion flow, while the warm corona may be a flat upper layer of the accretion disk (P13). Alternatively, according to a model proposed in Done et al. (2012), the warm Comptonization may take place in the accretion disk itself, below a critical radius beyond which the radiation cannot thermalize anymore. It is in principle also possible that the seed photons are provided to the hot corona by the soft excess component (e.g., PKS 0558-504, Gliozzi et al. 2013). We note that, provided a slightly thicker warm Comptonized component (τ ∼ 11), the broadband spectrum of 1H 0419-577 is consistent with such a "nested-Comptonization" scenario. Finally, we suggest that the Fe Kα emission line in 1H 0419-577 is produced by reflection in a cold, thick torus. The long-term flux stability of the line that we note in Fig. 6 supports this interpretation. Tombesi et al. (2010) reported the detection of an ultra fast outflow (UFO) in 1H 0419-577. According to these authors, the signature of the UFO is a blueshifted Fe xxvi-Ly α absorption line located at a rest-frame energy of ∼7.23 keV, possibly accompanied by a Fe xxv feature at ∼ 8.4 keV.
Despite the high signal-to-noise, none of these features is evident in the present spectrum. We estimated upper limits for the equivalent width (EW) of the main UFO absorption lines, assuming the same outflow velocity given in Tombesi et al. (2010, v ∼ 11100 km s^-1). The deepest UFO consistent with our dataset is much shallower than what was previously reported, because we found EW ≲ 12 eV and EW ≲ 9 eV for the Fe xxvi-Ly α and Fe xxv-He α transitions, respectively.

6.2. The historical spectral variability of 1H 0419-577

1H 0419-577 is well known for showing a remarkable flux variability in the X-rays, which is accompanied by a dramatic flattening of the spectral slope as the source flux decreases. Nonetheless, the optical/UV fluxes that were observed in diverse X-ray flux states are rather stable. We showed in Sect. 5 that a Comptonization model can explain the historical spectral variability of 1H 0419-577, provided an intrinsic variability of the two Comptonized components and a partially-covering, cold absorption variable in opacity. The intrinsic variability of the X-ray Comptonized continuum is easy to justify. Indeed, Monte Carlo computations of X-ray spectra from a disk-corona system show that, even for a given optical depth and electron temperature, coronae may still be intrinsically variable. Without requiring variations in the accretion rate, which would be inconsistent with the stability of the optical/UV flux, a substantial variability may be induced, for instance, by geometrical variations (e.g., Sobolewska et al. 2004) or by variations in the bulk velocity of the coronal plasma (e.g., in a non-static corona, Malzac et al. 2001). However, apart from the smaller intrinsic variation, the variable cold absorption causes the bulk of the observed spectral variability. In this respect, 1H 0419-577 is not a unique case. So far, cold absorbers variable on a broad range of timescales (few hours-few years) have been detected in a handful of cases (e.g., NGC 1365, NGC 4388, and NGC 7582; Risaliti et al. 2005; Elvis et al. 2004; Piconcelli et al. 2007), including also some optically unobscured, standard type 1 objects (e.g., NGC 4151, 1H0557-385, Mrk 6; Puccetti et al. 2007; Longinotti et al. 2009; Immler et al. 2003) and narrow line Seyfert 1s (e.g., SWIFT J2127.4+5654, Sanfrutos et al. 2013). In most of these cases, discrete clouds of cold gas, with densities and sizes typical of the BLR clouds, may cause the variable absorption. The estimated location of these clouds is within the "dust sublimation radius", nominally separating the BLR and the obscuring torus. This indicates that the distribution of absorbing material in AGN may be more complex than the axisymmetric torus prescribed by the standard Unified Model (see Bianchi et al. 2012, and references therein). The variation of the cold absorber in 1H 0419-577 is driven by a decreasing opacity, which seems to show a trend with the increasing flux (Fig. 8, upper panel). Indeed, in the two highest flux spectra, the absorbing column density is consistent with having been at least a factor ∼ 40 thinner than in the lowest flux spectrum. At the same time, the absorber covering factor is consistent with having remained constant (∼ 0.5). A simple explanation for this behavior is that in the highest flux datasets the neutral column became ionized, responding to the enhanced X-ray ionizing continuum. An inspection of the archival RGS spectra (Pounds et al. 2004a,b) may provide additional support to this hypothesis.
The broad emission features from O vii and O viii that we detected in the RGS spectrum of the present dataset (Paper I) are also present in all the past RGS spectra. We note, however, that in the present dataset the luminosity of both these lines is lower than what was observed, for instance, in the lowest flux state (by a factor ∼2 and ∼5, respectively). Moreover, we clearly detected a broad line from a more highly ionized species (Ne ix, see Paper I) that is undetected in all the archival RGS spectra. This may be a qualitative indication of a more highly ionized gas in the BLR during the higher flux state. The enhanced flux may have increased the ionization of the gas, causing the enhancement of the Ne ix line and the decrease of the O vii-O viii lines. The difference in unabsorbed continuum luminosity that we observed, for instance, between the lowest flux state of September 2002 and the following observation of March 2003 (∆L ∼ 3 × 10^44 erg s^-1, see Fig. 9) may indeed ionize a cloud of neutral hydrogen with typical BLR density on the observed timescale (∆t ≲ 7 months). Assuming that the absorber is a cloud of pure hydrogen that does not change in volume as a consequence of the ionization, the fraction of hydrogen that became completely ionized by March 2003 follows from Equation 1. If the cloud is illuminated by ∆L, the conservation of energy, assuming spherical symmetry, implies Equation 2, where C_v is the absorber covering factor, d is the absorber distance, U_H = 13.6 eV is the ionization threshold of hydrogen, n_H is the absorber density, and f = 10^-2 is the volume filling factor of the broad line region (Osterbrock 1989). Taking as an upper limit for the absorber distance the dust sublimation radius (R_DUST ∼ 0.6 pc), a lower limit on the absorber density follows from Equation 2. This lower limit is indeed well consistent with the typical range of densities of the BLR clouds (10^8 - 10^12 cm^-3, Baldwin et al. 1995). We note also that this scenario resulted in a statistically better fit of the data than a model mimicking a single cloud with constant opacity crossing the line of sight (see Sect. 5.3). Indeed, the ∼ month-long time intervals separating the XMM-Newton observations of 1H 0419-577 are inconsistent with the expected duration of an occultation event due to a single BLR cloud. For instance, in the case of NGC 1365, the occultation observed in April 2006 lasted ∼4 days. A similar eclipse, lasting only 90 ks, has been observed in SWIFT J2127.4+5654 (Sanfrutos et al. 2013). In the scenario we are proposing for 1H 0419-577, when the continuum source is found in a low flux state, the surrounding gas is on average less ionized. We suggest therefore that in these conditions the number of neutral clouds along the line of sight may be larger, making obscuration events more probable. We finally remark that in this framework a simple argument can explain the stability of the optical/UV continuum in 1H 0419-577. Indeed, the optical/UV continuum source is 10 times larger in radius than the X-ray source (see e.g., Elvis 2012). Therefore, a covering of the order of ∼ 50% of the X-ray source implies a negligible covering of ∼ 0.5% for the optical/UV source.
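Equations 1 and 2 are not reproduced in this version of the text. As a hedged numerical sketch of the energetics argument, one can assume the simplest budget, namely that the extra ionizing energy C_v ∆L ∆t must suffice to ionize all the hydrogen (density n_H, filling factor f) within a sphere of radius d; with the numbers quoted above, this reproduces a lower limit in the BLR range:

```python
import math

# Hedged numerical sketch of the energetics argument (Equations 1-2 are not
# reproduced in the text). Assumed budget:
#   C_v * dL * dt >= U_H * n_H * f * (4/3) * pi * d^3,
# i.e. the extra ionizing energy must ionize all hydrogen within radius d.
C_v = 0.5                      # covering factor
dL = 3e44                      # change in continuum luminosity [erg/s]
dt = 7 * 30 * 86400            # <~7 months between observations [s]
U_H = 13.6 * 1.602e-12         # hydrogen ionization threshold [erg]
f = 1e-2                       # BLR volume filling factor (Osterbrock 1989)
d = 0.6 * 3.086e18             # dust sublimation radius as upper limit [cm]

n_H = 3 * C_v * dL * dt / (4 * math.pi * d**3 * f * U_H)
print(f"n_H > {n_H:.1e} cm^-3")   # ~5e8: within the BLR range of 1e8-1e12
```

The geometry of AGN (Sect. 6.3) is discussed next.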
6.3. The geometry of AGN

We can put significant constraints on the geometry of 1H 0419-577 from the fitted parameters of the disk blackbody (Sect. 4.3). Indeed, the fitted disk blackbody normalization A is linked to the disk inclination angle i: analytically, cos i = A / R_in^2 (Equation 4), where R_in is the inner radius of the disk. The disk inner radius may be set by the radius of the innermost stable circular orbit (R_ISCO), which is determined by the space-time metric produced by the black hole mass (M ∼ 3.8 × 10^8 M_⊙ in this case, O'Neill et al. 2005) and by its spin. The two extreme cases are a maximally rotating black hole and a non-rotating Schwarzschild black hole (see e.g., Bambi 2012). Therefore, according to this general prescription, R_in may vary only in the range R_g/2 ≤ R_in ≤ 3 R_g (Equation 5), where R_g = 2GM/c^2 is the gravitational radius, G is the gravitational constant and c is the speed of light. Combining Equations 4 and 5, we obtain a range for cos i, and hence a lower limit on the disk inclination (Equation 6). To set a more robust lower limit for the disk inclination angle, we additionally considered in the calculation the error on the disk normalization (Sect. 4.3) and on the black hole mass (∼ 0.5 dex, see O'Neill et al. 2005, and references therein), which yields the tighter constraint of Equation 7. These inclination values are well consistent with the intermediate spectral classification of 1H 0419-577 as a Seyfert 1.5. It is therefore possible that our viewing direction towards 1H 0419-577 is grazing the so-called "obscuring torus" prescribed by the standard Unified Model (Antonucci 1993). In this framework, the inner part of the torus may be responsible for the X-ray obscuration. The structure of the obscuring medium in AGN may be more complex than the classical donut-torus paradigm. Indeed, this model faces difficulties in explaining several theoretical and observational issues, including for instance the wide range of X-ray obscuring column densities (see Elvis 2012, and references therein). A clumpy torus (Nenkova et al. 2008) may alleviate part of these problems. In the latter case, when the numerical density of clouds along the line of sight is low, even a source viewed from a high inclination angle may appear as a Seyfert 1 (see also Elitzur 2012). Moreover, in this model, the BLR and the torus itself are part of the same medium, which decreases in ionization as the distance from the central source increases. Indeed, it has been proposed (see Nenkova et al. 2008) that the clumpy torus extends inward beyond the dust sublimation point. The innermost torus clouds, being more exposed to the ionizing radiation, are probably dust free and may dominate the X-ray obscuration. Given the high viewing angle derived above, 1H 0419-577 may possibly fit in this framework.
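Since Equations 6-7 are likewise not displayed here, the inclination bound can be illustrated numerically. The sketch below evaluates Equation 4 at the smallest allowed inner radius from Equation 5, first with the nominal fitted A and black hole mass, and then loosened by the quoted uncertainties; the numbers are illustrative, not the paper's quoted limits.

```python
import math

# Illustrative evaluation of the inclination constraint cos(i) = A / R_in^2
# (Equation 4), with R_in bounded below by the ISCO of a maximally rotating
# black hole, GM/c^2 (Equation 5). Nominal values first; then a looser bound
# folding in the ~0.5 dex mass uncertainty and the error on A. These are
# illustrative numbers, not the paper's quoted limits (Equations 6-7).
G, c, M_sun = 6.674e-8, 3e10, 1.99e33
A, A_err = 1.2e26, 0.6e26              # fitted DBB normalization [cm^2]
M = 3.8e8 * M_sun                      # black hole mass (O'Neill et al. 2005)

def i_min_deg(a_norm, mass):
    r_isco_kerr = G * mass / c**2      # smallest allowed inner radius
    cos_i = min(1.0, a_norm / r_isco_kerr**2)
    return math.degrees(math.acos(cos_i))

print(f"i > {i_min_deg(A, M):.0f} deg (nominal)")
print(f"i > {i_min_deg(A + A_err, M / 10**0.5):.0f} deg (with uncertainties)")
```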
A comparison with other models

The model that we presented in this paper explains both the optical/UV/X-ray broadband spectrum and the historical variability of 1H 0419-577 in a reasonable geometrical configuration. However, the hard X-ray energy range above 10 keV is not covered by the present spectral analysis. In that range, 1H 0419-577 displays a "hard excess" (Turner et al. 2009; Pal & Dewangan 2013; Walton et al. 2010) over a simple power-law model, which shows some evidence of variability (a factor ∼2, see Turner et al. 2009). The extrapolation to harder energies of our broadband model predicts a flux of ∼2.9 × 10^-11 erg s^-1 cm^-2 in the 10-50 keV band. This flux is higher than the 70-month averaged flux observed with BAT in the same band (∼2.2 × 10^-11 erg s^-1 cm^-2, Baumgartner et al. 2013), but well consistent with the latest Suzaku measurement taken only 5 months before our XMM-Newton observations (∼2.7 × 10^-11 erg s^-1 cm^-2, Pal & Dewangan 2013). Because of this relatively short time interval between the Suzaku and XMM-Newton observations, it is indeed likely that Suzaku caught the source in the same flux condition as our observation (see also Fig. 1).

In the context of the Suzaku data analysis (Pal & Dewangan 2013), the authors also attempted to fit a Comptonization model to the May 2010 EPIC-pn spectra. However, they obtained a poor result, mainly because of a prominent excess in the residuals near ∼0.5 keV. In the present analysis, thanks to the higher resolution provided by the RGS, we could easily identify that feature as due to the O vii line complex. Apart from this discrepancy, the parameters they obtained for the warm Comptonized component reasonably agree with our result. On a different occasion (July 2007), Suzaku caught the source in a bright state, similar to what was observed by XMM-Newton in December 2000 (Obs. 1). The hard X-ray curvature that was observed in that case (F_15-50 keV ∼ 2.6 × 10^-11 erg s^-1 cm^-2) can be fitted using a Compton-thick, highly ionized absorber covering ∼66% of the line of sight. To check whether our model could also explain this historical hard X-ray maximum, we compared the hard X-ray flux extrapolated from the fit of Obs. 1 with the one observed by Suzaku. We noted a small disagreement, with the flux predicted by our model being a factor ∼1.6 lower than the observed one. Thus, we cannot definitively rule out that a partially covering ionized absorber was present in the high flux state of July 2007. In the framework proposed in this paper, it may in principle be the ionized counterpart of the cold absorber present in the low flux state.

Besides our interpretation, the light-bending model may also explain the variable X-ray spectrum of this source, and it fits both the XMM-Newton and the Suzaku datasets (F05; Walton et al. 2010; Pal & Dewangan 2013; this paper). We note, however, that even in this disk-reflection framework a variable absorption is required to fit the data. Indeed, a cold absorber showing the same trend noticed here (with a column density dropping from N_H ≃ 10^21 cm^-2 to 0) is present in the F05 model. Additionally, an O vii edge with a variable depth, mimicking an ionized warm absorber, is also included. The latter is somewhat at odds with what we reported in Paper I, because the warm absorber in 1H 0419-577 is too weakly ionized to produce any strong O vii absorption features. Moreover, a short-timescale variation of the ionized absorption edges, as required in F05, is difficult to reconcile with the galactic-scale location (see Paper I) of the warm absorber in this source. In our analysis, solar abundances are adequate to fit the data. In the light-bending model, the metal abundance in the disk is a free parameter, and it has been reported to vary from supersolar in September 2002 (∼3.8, F05) to undersolar in January 2010 (∼0.5, Pal & Dewangan 2013). This is another issue that is difficult to explain. Finally, we note that a physical interpretation of the source variability observed with Suzaku is not possible in the context of the light-bending model alone; additional variability in the disk-corona geometry, possibly caused by a variability in the accretion rate (which would, however, be inconsistent with the stability of the optical/UV flux noticed here), has to be invoked (Pal & Dewangan 2013).
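The flux comparisons in this subsection reduce to simple ratios of the quoted values; a minimal sketch (all fluxes in erg s^-1 cm^-2, as quoted above):

# Hard X-ray flux comparison, using only the values quoted in the text.
f_model_2010 = 2.9e-11    # 10-50 keV, extrapolation of our broadband model
f_bat_70mo = 2.2e-11      # 10-50 keV, Swift/BAT 70-month average
f_suzaku_2010 = 2.7e-11   # 10-50 keV, Suzaku, ~5 months earlier

print(f"model / BAT    = {f_model_2010 / f_bat_70mo:.2f}")    # ~1.32, higher
print(f"model / Suzaku = {f_model_2010 / f_suzaku_2010:.2f}") # ~1.07, consistent

# July 2007 bright state: the Obs. 1 extrapolation falls a factor ~1.6 below
# the observed 15-50 keV flux of 2.6e-11, i.e. it predicts roughly:
print(f"Obs. 1 extrapolation ~ {2.6e-11 / 1.6:.1e}")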
In conclusion, the Comptonization model proposed here for the present and historical broadband spectrum of 1H 0419-577 does not rule out other possible interpretations. It has, however, the advantage of explaining all the observational evidence collected in the last ten years over the broadest range of wavelengths available, without requiring any special ad hoc assumption. A future observation over the entire X-ray band with XMM-Newton and NuSTAR would be important to resolve the long-standing ambiguity in the interpretation of the spectral variability in this peculiar Seyfert galaxy.

Summary and conclusions

We modeled the broadband optical (XMM-OM), UV (HST-COS, FUSE) and X-ray (EPIC-pn) simultaneous spectrum of 1H 0419-577 taken in May 2010 using Comptonization. The X-ray continuum may be produced by a warm (T_wc ∼ 0.7 keV, τ_wc ∼ 7) and a hot Comptonizing medium (T_hc ∼ 160 keV, τ_hc ∼ 0.5), both fed by the same optical/UV disk photons (T_dbb ∼ 56 eV). The hot medium may be a geometrically compact corona located in the innermost region of the disk. The warm medium may be an upper layer of the accretion disk. Reflection from cold distant matter is a possible origin for the Fe Kα emission line. Despite the long exposure time of our dataset, we do not find evidence of the ultra-fast outflow features that have been reported in the past for this source.

Provided with a partially covering (∼50%) cold absorber of variable opacity (N_H ∼ 10^19 − 10^22 cm^-2) and a small variability intrinsic to the source, this model can also reproduce the historical spectral variability of 1H 0419-577. The opacity of the absorber increases as the continuum flux decreases. We argue that the absorber may have the typical density of the BLR clouds and that, as it becomes ionized in response to the enhanced X-ray continuum, it becomes optically thinner in the higher flux states. Relativistic light bending remains an alternative explanation for the spectral variability in this source. We note, however, that in this scenario a variable elemental abundance and a variable absorption are required. The latter is difficult to reconcile with the UV/X-ray absorber that we have determined to be located at a ∼kpc scale. Finally, we suggest that 1H 0419-577 may be viewed from a high inclination angle, marginally intercepting a possibly clumpy obscuring torus. In this geometry, the X-ray obscuration may be associated with the innermost, dust-free region of the obscuring torus. The present spectral analysis in the optical/UV/X-ray band represents a substantial step forward in the comprehension of this intriguing Seyfert galaxy. However, further investigations (e.g. with NuSTAR) are needed to understand the true nature of the spectral variability of this source.
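As a quick, approximate illustration of the Comptonization parameters summarized above, one can estimate the Compton y-parameter of the two media with the standard non-relativistic textbook expression y ≈ (4kT/m_e c^2) max(τ, τ^2). This is a sketch for orientation only, not the fitted model itself (and for the ∼160 keV medium a relativistic treatment would be more accurate).

M_E_C2_KEV = 511.0  # electron rest energy, keV

def compton_y(kT_keV, tau):
    # Non-relativistic estimate of the Compton y-parameter.
    return 4.0 * kT_keV / M_E_C2_KEV * max(tau, tau**2)

print(f"warm medium (0.7 keV, tau~7):  y ~ {compton_y(0.7, 7.0):.2f}")   # ~0.27
print(f"hot medium (160 keV, tau~0.5): y ~ {compton_y(160.0, 0.5):.2f}") # ~0.63

Both values fall in the unsaturated regime (y ≲ 1), consistent with the picture of a warm disk layer producing the soft X-ray continuum and a compact hot corona producing the hard power law.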
The Virocene Epoch: the vulnerability nexus of viruses, capitalism and racism

COVID-19 has ushered in a new planetary epoch—the Virocene. In doing so, it has laid bare the limits of humanity's power over nature, exposing the vulnerability of 'normal' ways of living and their moral and pragmatic bankruptcy in coping with those vulnerabilities. 'Normal' is powerless against the virus and has not worked for a majority of the world's human and non-human population. Whatever new normal humanity fashions depends on the socio-ecological change set in motion by mutations between human and non-human species. The outcomes of society's responses to the pandemic depend on how human agency, as an embodiment of social, ecological, and metaphysical relations, transforms the relations now shaped by capitalism and racism—the two mutually reinforcing processes at the root of the Virocene's social and ecological vulnerabilities. A deeper understanding of vulnerabilities is necessary to avoid recreating a 'new normal' that normalizes the current oppressive and vulnerable social order, while inhibiting our ability to transform the world. At the same time, the sweeping possibilities of alternative ways of organizing humanity's mutual wellbeing and nature lie at our fingertips. The emancipatory political consciousness, rationalities, and strategies inherent in such intuitively sensible and counter-hegemonic approaches, first and foremost, are matters of justice, embodied in the power that shapes human-nature metabolism. The Virocene is thus a battleground for social and ecological justice. To be effective partners in these struggles for justice, political ecology needs a universal perspective of social and ecological justice that functions both as a form of critical inquiry—that is, as a way to understand how social and ecological inequalities and justices arise and function—and as a form of critical praxis—that is, as a way to reclaim and transform capitalism and racism's power in valuing and organizing social and ecological wellbeing.

Introduction

The old is dying and the new cannot be born. In this interregnum there arises a great diversity of morbid symptoms. - Antonio Gramsci

The moral arc of the universe is long, but it bends toward justice. - Martin Luther King, Jr.

The 2020 COVID-19 pandemic has moved humanity into what I term the "Virocene" epoch, following the era known as the Anthropocene. The prefix "viro" refers to "virus"—a sub-microscopic family of infectious agents that multiply and grow using the living cells of their hosts, causing disease in humans, animals, and plants. The suffix "cene" derives from the Greek kainos, meaning "new" or "recent", signifying a historically unique moment of interaction between humans and ecosystems. The novelty of this epoch lies in the intensity with which the vulnerabilities of the pandemic are directly connected to the contradictions of racism and capitalism.

The Virocene also invokes three forms of fear. The first is the fear of sickness and loss of life, shared by all social classes. The second, held primarily by the economically and racially privileged, is the fear that resistance against capitalism will take an aggressive turn in response to its social and ecological failures, brought to the fore by the pandemic. The third is that marginalized social groups themselves have either internalized the same worldview as the privileged or are unwilling to take the risks and uncertainties necessary to embrace the idea of an alternative world order.
In this context, well-founded doubts about, and challenges to, the sustainability and replicability of alternative ways of responding to vulnerabilities during and beyond the pandemic raise questions about the moral basis of human agency and rights, the nexus between justice and power, and the power to translate imagined alternatives into reality while working against the hegemonic power of neoliberal governmentality (Balasubramanian 2015; Bauhardt 2014; Büchs and Koch 2019; Dogan 2010; Fletcher 2020; Malette 2009; Seki 2009; Youde 2009). Hence, the challenge for justice-minded scholarship involves reassessing hegemonic and counter-hegemonic ways of organizing life-worlds in light of the certainty of uncertainty in our human capacity to cope with the clinical and systemic vulnerabilities of the Virocene era, which is proving to be a battleground for both power and justice.

In this article (the first of two parts), I turn to Political Ecology (PE), since it is especially well equipped to engage with the tensions between reproducing a world that has failed to respond to the vulnerabilities of the Virocene and the promise of a socially just and ecologically sustainable order. It also offers necessary insights into how unjust political ecologies impinge upon human and non-human species alike. PE is particularly useful because it is simultaneously concerned with how hegemonic environmental orthodoxies and inequalities, as well as the institutions they embody, shape the organization of nature-society relations. These, in turn, shape policies that maintain hegemony (over people and the environment) at multiple levels and scales in the world order (Batterbury 2001; Bryant 1992; Cash et al. 2006; Fairhead and Leach 1995; Forsyth 2003; Peet and Watts 1993; Rocheleau et al. 1996; Stott and Sullivan 2000). PE's emphasis on how power relations produce particular social, economic and cultural interactions with non-human species (for example, in its examination of global livestock production) provides a basis for considering environmental justice from the perspective of human and non-human species, and yields valuable insights into understanding and controlling the zoonotic spread of various diseases (Emel and Wolch 1998; Notzke 2013). Additionally, PE examines the role of neoliberal government policy, as captured by Mbembé and Meintjes' (2003) concept of 'necropolitics.' Neoliberalism is, for example, operationalized through wildlife conservation practices, highlighting the importance of viewing human-nonhuman relations from a non-anthropocentric point of view (Hobson 2007; McIntyre and Nast 2011; Sundberg 2014). Moreover, PE is a field searching for emancipatory pathways towards socially and ecologically just, equitable, and sustainable coproduction of nature-society relations (Adger 2000; Berkes and Ross 2013; Marshall and Marshall 2007; Nelson et al. 2007; Walker 2005).

But PE also contains weaknesses. In particular, limitations in PE's theories of the justice-power nexus (discussed in Bryant 1998; Moore 1998; Svarstad et al. 2018; Wisner 2015) have hamstrung its emancipatory aspiration to reorganize social and ecological vulnerabilities in the Virocene era. A fruitful discussion of power is impossible without a theory of justice. However, PE is neither explicit about, nor firmly grounded in, the moral cultures that shape human and ecological rights and justice, thus failing to conceptualize the "moral ontological" and "moral epistemological" frames needed to understand and transform multispecies relations.
Hence, it may be vulnerable to co-option by the very forces it seeks to transform. From a perspective of emancipatory politics, reflecting upon the vulnerabilities of the Virocene era provides an opportunity to build synergies between ideas of the Capitalocene, political economy, and political ecology, provided they are built upon robust perspectives of the moral basis of the rights-justice-power nexus that governs the organization of multispecies relations.

Novelty and boundaries of the Virocene Epoch

The Virocene, I argue, is both the current moment and a distinct epoch in the lineage of other epochs: Ecocene, Holocene, Anthropocene, Capitalocene, and Chthulucene (Table 1). The Virocene is a historic moment in which interoperation between human and non-human actors becomes existentially threatening on a planetary scale. Consequently, there has arisen a sense of urgency to question, challenge, rethink, reimagine and act on our current ways of being in, and with, the world. For emancipatory scholars, the Virocene opens a moment for praxis towards social and ecological justice, which must accommodate the contingency, uncertainty and fragility imposed by ongoing viral epidemics and pandemics.

What, then, is the evidence for the Virocene epoch, and when can we say it started? These questions are critical for those who study the broader trajectories of environmental change through the lens of pandemics and the social and economic vulnerabilities they create. Debates over the boundaries of geological and sociological meta-planetary periods are ongoing (Lewis 2012; Nordhaus et al. 2012; Rockström et al. 2009). For example, critics of the 'Anthropocene' have pointed out the impossibility of demarcating the time frame during which human activity significantly reshaped non-human environments. Taken as a whole, the emergence of the human species as a geophysical force marks the boundary between the Holocene and Anthropocene eras, but such a universal boundary fails to consider that "because the ecological impacts of human activity have been and remain diachronous, significant environmental signatures evident in one part of the world (e.g., Western Europe) may not be replicated elsewhere until the last few years or next few decades" (Castree 2016: 4). As a result, it is difficult to speak of a singular, asocial concept of nature to justify various management, conservation, remediation, preservation, or restoration measures (p. 11). Additionally, it is possible that "future environmental markers reflective of present-day human activities will prove to be more compelling indicators of the Anthropocene's onset" (p. 5). And, finally, period markers can create confusion in disciplines other than those which first assigned them.

Nonetheless, in spite of ambiguity and uncertainty concerning the boundaries of periodization, boundaries do matter: we experience, make sense of, and act upon social and environmental phenomena based on specific spatial and temporal scales. Boundaries shape how regimes of knowledge, subjectivities, and power inform human-nature relationships. The popularity of a particular term (in this case for a temporal epoch) depends on the confluence of several factors: the scope, magnitude, and public appeal of the phenomena described by the given period; the status and popularity of the person or persons describing the period; and the receptiveness of dominant social institutions.
For example, Paul Crutzen and Eugene Stoermer (1995), who are credited with marking the boundary of the Anthropocene, were, respectively, a Nobel Prize-winning chemist and an ecologist, and Jan Zalasiewicz, who helped popularize the term, is a geologist (Zalasiewicz et al. 2014). Their attempt to chart the scale of catastrophic environmental change lent the term scientific authority.

For Tim Luke (2017), policies inspired by the concept of an Anthropocene epoch "appear to be developing a moral rhetoric of, and operational plans for, managing the Anthropocene to create specific outcomes for those who are the managers as well as the managed" (p. 80). However, "the fact that human beings do not, in fact, have this measure of technical control is ignored by advocates of Anthropocene politics to advance their policy agendas" (ibid: p. 81). The notion of an Anthropos, or 'humanity', as a global, unified 'geological force', "employed in the concept of 'the Anthropocene'", Frank Biermann et al. (2016) argue, "[masks] the diversity and differences in the actual conditions and impacts of humankind, and does not do justice to the diversity of local and regional contexts" (p. 349). Universalized accounts of human agency and its experience "gravitate towards western ontologies and epistemologies of living in the Anthropocene" (Simangan 2020: 218). The Anthropocene's mistaken understanding of human agency is consistent with the notion of Homo economicus as a rational and autonomous actor. The hegemonic status of this model of agency is influenced by neoliberal policy regimes that effectively function as incubators of racism and socioeconomic stratification (Peck and Tickell 2002). Similarly, the Chthulucene narrative for the survival of humanity on a troubled planet, which calls on humanity to reconceptualize its relationship with the Earth and its nonhuman inhabitants as responsible kin relationships (Haraway 2015), does not directly address or destabilize neoliberalism and its racist manifestations.

A unique characteristic of the Virocene, then, is that it highlights the roles of capitalism and racism in producing human-nature relations that exacerbate human and nonhuman vulnerabilities to viral activity (which originates in nature). Although the Capitalocene highlights, unpacks, and problematizes human agency in the Anthropocene, its conceptualization of human agency is weak, as it cannot explain why humans behave as they do. Why do humans often do what they do not want to do? From whence comes the wide gap between their knowledge and their actions, and what are the moral bases for shaping human actions to overcome the crisis of world ecology?

The Virocene epoch is also unique in several important respects. First, climate, human agency, capital, gender, and (metaphorically, Cthulhu) environmental and social degradation are the driving forces in the Holocene, Anthropocene, Capitalocene, Gynocene, and Chthulucene eras, respectively. In the Virocene epoch, capitalism is inextricably linked with the racialized appropriation of society and nature, contributing to virogenic activities and the multifaceted human and non-human vulnerabilities they cause. Uncertain and complex mutations of the coronaviruses stand in the way of providing clarity as to their epidemiology and pathogenesis (Rothan and Byrareddy 2020).
Coronaviruses are the largest enveloped, single-stranded, positive-sense RNA viruses, and the origins, nature, and life cycles of viruses, the main drivers of the Virocene epoch, are much harder to detect, predict, and manage using current human, social and political rationalities (Andersen et al. 2020). They are a force external to the human body, over which humans have even less control than they do over, for example, the so-called invisible hand allegedly controlling capitalist markets. The autonomous power of COVID-type viruses as a natural force is further exemplified by their rapid spread and mutating genome, both of which are outpacing humanity's capacity to develop preventive measures such as vaccines and curative treatments.⁷ Hence the mutation and spread of COVID-19 make uncertainty about its future appearances and disruptions a permanent consideration in how society organizes its own and nonhuman species' futures (Ge, Wang, Yuan et al. 2020; Chen et al. 2020).

⁷ Viruses are not infectious organisms per se. Rather, they are microscopic packages of genetic instructions, bundled in a protein shell, that require an organism to serve as a host so that they can replicate and complete their "life cycle" (Harvard Health 2020). In the process, the virus copies itself and spreads to other cells in the organism, causing disease. A host organism, once infected, becomes a virus factory. In the process, viral RNA may mutate, leading to divergent strains of the same virus (Rutgers University 2020). Viral mutation is a natural phenomenon that has played a role in many prior pandemics.

Pandemics worse than COVID-19 have occurred throughout recorded history (Smith et al. 2014). In the sixth century, the Justinian plague (AD 541-750) killed an estimated 35 million people (approximately half the population of Europe) and permanently weakened the remnants of the Roman Empire (Barry 2005). As a result, other civilizations began reconquering formerly Byzantine holdings in the Middle East, Northern Africa, and parts of Asia. Kyle Harper's (2017) grand narrative describes the role of climate change and infectious diseases in the collapse of the Roman Empire as a story of nature's triumph over human ambition (p. 226). To paraphrase Edward Gibbon's (2001) view of reactions to Justinian's plague in The decline and fall of the Roman Empire, the plague created opportunities for some even as it unrelentingly ended Roman power in the Mediterranean, bringing nothing short of the end of the world for others (pp. 340-341). Critical reasons for the collapse of the empire, however, originated before the pandemic and were an integral part of the empire's evolution. They emerged from the economic and political weaknesses resulting from the same factors that had made it an expansive empire (e.g. the monetization of the economy, price-driven grain shortages, and militarized territorial control) (Harper 2016; Kessler and Temin 2005; McNeill 1976; Postan 2016: 41-56; Vacsia 2016).

Later pandemics continued to destabilize and reshape Mediterranean and European society (Bayer 1986; Cohn 2017). They exhibited locally wide-ranging, yet overall somewhat similar, historical patterns. The bubonic plague pandemic, the infamous Black Death of the 14th century, produced contractions in the agricultural economy that weakened both European states and the Mamluk Sultanate (Dols 1977).
Commoners, empowered by new labor shortages, sought new rights and privileges, while the established noble classes of Europe in turn introduced new laws to maintain the existing social order (Herlihy 1997). Many of the immediate and most disruptive effects manifested as violence against urban dwellers, migrants, and so-called outsiders (Dols 1977). In Italy, the "Black Death of 1347-1351 unleashed mass violence on Catalans in Sicily, clerics and beggars in Narbonne, and . . . pogroms against Jews" (Cohn 2017: 8). Campbell (2016) writes that "nature as much as society needs to be acknowledged as a protagonist of historical change", while warning that "[to] privilege endogenous human processes over ostensibly exogenous environmental events is … to create a false dichotomy, since there is nothing in this model that is not endogenous" (p. 22).

More recent pandemics have produced similar economic, social, and racial impacts and reactionary social contractions (Bollyky 2019; Cohn 2012; Wade 2020). HIV has caused tens of millions of deaths in poor countries worldwide, and a capitalist and increasingly racialized nation-centric world order has used this pandemic, along with subsequent viral epidemics such as SARS and Zika fever, to consolidate its hegemony over society and nature (Bell 2020; Chase-Dunn and Roberts 2012). Current responses to the COVID-19 pandemic are, likewise, far from exceptional.

Pandemics also have a long history of causing extraordinary political, demographic, and psychological effects on society and nature (Barry 2005; Snodgrass 2017), fundamentally shifting the trajectories of social and political relationships. A year after Columbus built his first town on the island of Hispaniola (Dominican Republic/Haiti), the indigenous population dropped from "at least 60,000 and possibly as many as 8 million" to less than 500. The Arawak/Taíno people lacked immunity to pathogens carried by the Spanish and fell "victim to terrible plagues of smallpox, influenza, and other viruses" (Pringle 2015). In the 15th-17th centuries, smallpox killed approximately 20 million people, nearly 90 percent of the indigenous American population. This contributed to European colonization by creating the illusion that much of the American continent was terra nullius, or empty land and "white man's country" (Bush 2016: 150).

The Spanish flu of 1918 took an estimated 50 million to 100 million lives around the globe, including 675,000 in the United States. Coming as it did toward the end of World War I, the disease spread mostly by people coming into contact with soldiers. Jeremy Brown's Influenza: the 100-year hunt to cure the deadliest disease in history (2019) shows how the world economy plunged into a deep recession beginning in January 1920, with the influenza labeled a "war disease" (Francis Jr. 1947: 10). Mindful of these things, in 1941 the U.S. military established the Armed Forces Epidemiological Board, researching influenza vaccines as World War II raged overseas (Hoyt 2006, in Kamradt-Scott 2020). In 1946, the World Health Organization (WHO) was established to research and mitigate viruses. The collective experience of the subsequent 1957 and 1968 pandemics conclusively showed that influenza vaccines were effective at reducing human morbidity and mortality.
As a result, a number of governments in high-income countries (HICs), where the majority of vaccine manufacturers were located, focused their efforts over the next few decades on ensuring greater vaccine yield over faster time frames (Kamradt-Scott 2020: 539). Increased recognition of the utility of antiviral medications as a second line of defense added to the influenza pharmaceutical "arsenal" (Glezen 1996, in Kamradt-Scott 2020; Mendel and Sidwell 1998). Public-sector involvement in pandemics continued until the end of the Cold War, with health considered a public good, an important part of the social contract between the state and society (Kamradt-Scott 2020: 541). The bipolar division of the world further necessitated public intervention to contain communism's push toward egalitarian policies amid global competition between the United States and the USSR for territorial control, at a time when the egalitarian ideology of the socialist bloc saw health as a basic human right that the state should address according to need, rather than ability to pay. In many parts of the world, egalitarian health care systems have collapsed or are struggling to survive, and since the end of the Cold War the frequency of pandemic occurrences has continued to increase.

Since 1967, scientists have identified forty strains of coronavirus. Regardless of whether a vaccine is devised, the virus will mutate. Indeed, according to Andrew Rambaut, a molecular evolutionary biologist at the University of Edinburgh, "over the length of its 30,000-base-pair genome, SARS-CoV-2 accumulates an average of about one to two mutations per month" (Nafie 2020: 10). The pace and geographical spread of COVID-19 is far more rapid, global, and uneven across nations when compared with other recent pandemics, due to vulnerabilities created as a result of "space-time compression" under capitalist modernity (Harvey 1989), with movement of commodities and people occurring at high volumes and increasing speeds. What was initially believed to be a virus infecting the elderly now affects all demographic groups. It can spread by multiple means, with international experts warning that the coronavirus can float and be transmitted via air droplets. Furthermore, asymptomatic persons can transmit it, so that isolation seems to be the only option available for avoiding contracting the disease. Viruses, the defining agents of the Virocene epoch, are an embodied force, so that their representations and the vulnerabilities they create for humans and nonhumans are constituted by the same forces (values, knowledge, rationalities, and power) that transform the relations between humans and nature. Yet those forces have an extremely limited capacity to predict the variety of forms that the virus can take, let alone its behaviors or the disruptions it may cause to human and nonhuman species. Viral activity has now evolved as a self-imposing framework in contestation with current and future systems of social ordering of "world ecology", to use Jason Moore's phrase (Moore 2015).

Secondly, COVID-19 has revealed the link between social and ecological vulnerabilities to pandemics and the global economy, through ever-increasing economic exchanges and interdependencies between nations seeking economic growth (McKibbin and Sidorenko 2006; Peláez and Peláez 2008; World Bank 2020). According to the U.S. Census Bureau (2020), China is the United States' third-largest trading partner (p. 1).
Total U.S. exports of agricultural products to China totaled US$9.3 billion in 2018 (Minnesota DoA 2019), making it the fourth-largest agricultural export market; in the same year, U.S. total imports of agricultural products from China totaled US$4.9 billion, making China the largest supplier of agricultural imports. U.S. exports of services to China totaled an estimated US$58.9 billion in 2018, 2.2 percent (US$1.3 billion) more than in 2017 and 272 percent more than 2008 levels. U.S. imports of services from China were an estimated US$18.4 billion in 2018, 5.5 percent (US$963 million) more than in 2017 and 68.3 percent more than 2008 levels (OUSTR 2018). The global demand for meat has also grown, bringing a quadrupling of meat production over the past 50 years (FAO 2020). Intensive export-oriented agriculture, particularly meat production, causes negative environmental effects, such as increased emission of greenhouse gases, and exhausts agricultural land and freshwater resources (Alexander et al. 2016). The economic policies of the U.S. and China are deeply connected with the economic growth trajectories of developing countries (Freund et al. 2020; Kose et al. 2020), specifically through the extraction of natural resources from these countries and the trade deficits that enable both countries to compete in the global economy. This connection, in turn, has implications for the spike in viruses, as well as for their global spread and their social and economic impacts, which "transcend national frontiers" (Poore and Nemecek 2018; United Nations 2019a). The United States and China have recorded higher numbers of COVID-19 infections and deaths than many other nations, being the two largest high-growth and interdependent economies in the world and the biggest contributors to climate change. As of July 14, 2020, the total number of global infections was 12.8 million persons, including 3.2 million in the United States and 86,000, or probably more, in China (Our World in Data 2020: 1; https://covid19.who.int/; Richie et al. 2020). Trade wars, technology disputes, and nationalist rhetoric driven by economic growth priorities are creating a new Cold War environment "that undermines global action for fighting global change" (Loh and Gottlieb 2019: 6; Hodgson 2020).

Third, the globalization of neoliberal policies that have emerged in response to the disciplinary needs of capital continues to subjugate social and ecological well-being to market rationality, notably through reductions in social policies, including health care expenditures (such as for medical care and public health services) and the privatization of health services imposed by many governments (Harvey 2007; Lobao et al. 2018; Previtali 2016; Viens 2020). Conversely, governments are being called on to respond to the pandemic as well as to bring economic recovery. Government capabilities are severely constrained by neoliberalism's growing vulnerability, amid internal crises arising from constraints on its continuity of accumulation and the challenges involved in consolidating popular legitimacy caused by inequality and the social and ecological crises that accumulation generates (Clark 2012; Harvey 1989; Wolfson 2003). Under neoliberalism's continuing pressure to replace social welfare rationality with market rationality as the guardian of human and ecological well-being, the COVID-19 pandemic has brought calls for coordinated state intervention to stimulate the economy and mitigate the crisis, recognizing that some preventative actions came too late (IMF 2020; Tufekci 2020).
Several nation-states have resorted to wellbeing-focused interventions, unprecedented in the neoliberal era, with which to meet the pandemic challenge. The Virocene epoch has made the state a focus for resistance and change (Horgan 2020), promoting the concern that "however deep the economic carnage and regardless of its source, those who seek to drive this country towards socialism will exploit it for all it's worth" (Henry 2020: 1), and that a common thread among the political left is to "pounce upon every inequity for maximum political impact" (p. 9).

Fourth, the global spread of viral diseases also parallels the global effects of climate change in that it is being driven by aggressive neoliberal growth policies. Climate change negatively affects responses to COVID-19, undermines environmental determinants of health, and places additional stress on health systems (United Nations 2020a). Growth drives the destruction of forests and brings people into closer contact with animals than ever before through intensive farming, the local and global trading of livestock and livestock products with cruelty to animals, as well as through the cohabitation of human and non-human species in marketplaces. Neoliberal economic policies have disproportionately affected populations already distressed by climate change (Jordan 2019; Luber and Knowlton et al. 2014), including through the "re-emergence of pathogens that have been familiar for a long time, but now threaten new, immunologically vulnerable populations" (WHO 2018b: 18). For example, a study in April 2016 found that the habitat of Aedes aegypti, a mosquito that spreads the viruses causing dengue fever, chikungunya, Zika fever, Mayaro, and yellow fever, could increase by up to 13 percent under the high greenhouse gas emission scenario RCP 8.5, which would be reached sometime between 2061 and 2080. In this scenario, up to 460 million additional people could be exposed to these diseases (Monaghan et al. 2018). The United Nations' World economic situation and prospects (WESP) report of 2019 warns that steady economic growth in 2019-2020 "at the global level . . . is excessively dependent on carbon-intensive fossil fuels" (United Nations 2019b: 1). The growth in gross domestic product and carbon dioxide emissions also remain closely linked. Between 1990 and 2015, as the global level of production doubled, anthropogenic GHG emissions increased by 45 percent (ibid). Global efforts to mitigate climate change will likely face setbacks as nations spur economic growth through stimulus packages that do not prioritize climate change. Against this backdrop, we imagine the Virocene as a planetary epoch in the same way that Dipesh Chakrabarty (2009: 222) described the universality of climate change:

Climate change poses for us a question of a human collectivity pointing to a figure of the universal that escapes our capacity to experience the world. It is more like a universal that arises from a shared sense of a catastrophe. It calls for a global approach to politics without the myth of a global identity, for, unlike a Hegelian universal, it cannot subsume particularities. We may provisionally call it a "negative universal history".
We must think about how the Virocene, as an epoch, shapes the way we organize our lifestyles; how its universality is grafted onto other universals such as racism, capitalism, and climate change; and how this universality affects the hegemony of capitalism as it operates within the nation state and globally. We must also explore counterhegemonic models of organization that can transform both the political relationships between humans and the relationship of humankind to nature.

Fifth, the Virocene's vulnerabilities felt by marginalized groups across the globe reveal an intersection between neoliberalism and structural racism, which are mutually constituted by and frame each other (Chowkwanyun and Reed 2020; Cooper et al. 1981; Kiple and Kiple 1980; McDonald 2020). The neoliberal narrative is based on the belief that the market is a self-regulating and socially, politically, and economically neutral agency shaping the freedom, dignity, and well-being of humans and nonhumans (Kurien 2015). The reproduction and survival of this narrative are inextricably linked to racism. "In effect, neoliberalism has rendered an enormous and growing racial inequality culturally palatable by effectively relegating racism to historical legacies and translating contemporary social problems into individual choices and personality traits" (Mascarenhas 2016: 3). The vulnerabilities associated with COVID-19 that disproportionately impinge upon racially marginalized groups underscore the importance of critically evaluating the lived "contradictions at the core of neoliberal capitalism" (Comaroff and Comaroff 2000: 298), as well as how powerfully neoliberalism recomposes experience in the present, with effects on public life, relationships, and identities (Giroux 2008), and the need to engage with the culture of neoliberalism if we are to resist its ideology. Meanwhile, the rise of antiracist protests around the world shows signs of creating solidarities among counterhegemonic political movements.

Sixth, we are now seeing a rejuvenation of counterhegemonic debates on the ideological and pragmatic limits of, and the social and ecological vulnerabilities arising from, growth-centered economic policies. These debates, which have been ongoing since the Club of Rome met in the 1970s, have included advocates of degrowth, solidarity, and social economics; climate change activists; and proponents of cooperative, community and mutual aid systems (Kallis 2018). Even growth-driven financial markets are ready to accept the realities of pandemics: "[When] normalcy returns, banks and capital markets firms will likely have learned a few lessons. These may include how to best retain operational resilience when confronted with future pandemics, and possibly how to design new operating models such as alternate work arrangements" (Barret et al. 2020: 9). At the same time, attempts to decouple social and ecological wellbeing from the capitalist growth economy are now finding expression in protests against racism and climate change, with emerging movements pursuing climate justice and racial justice as inseparable. Nonetheless, emerging trends in monetary and fiscal policies and the current governmental rush to "reopen the economy", to quickly recover economic growth and establish the popular legitimacy of the government, provide little hope for fundamental ideological and policy changes away from neoliberalism.
The international response to the pandemic has varied, but states the world over show few signs of deviating from the neoliberal economic growth paradigm, despite its colossal moral and pragmatic failure to effectively cope with the vulnerabilities of COVID-19 (Lapavitsas 2020; McCloskey 2020).

Finally, viruses embody the processes that shape natural and social worlds and the interactions between them. In the Virocene era, they have become a permanent planetary force, disrupting the well-being of human and nonhuman species and imposing historically unprecedented power over the battles among humans seeking to defend, resist, and pursue alternatives to capitalism's and racism's dominance over the moralities, rationalities, and power that organize society-nature relations. Although I recognize that the intersectionality of race, sexuality, gender, physicality, class, and territoriality are important determinants of vulnerabilities in the Virocene era, I do not want to undermine the uniqueness of anti-racism as an epistemological and political practice for restoring human dignity in responding to the Virocene's vulnerabilities. The contributors to a recent anthology on Rod Bush, a prominent African American scholar-activist, concur that there is an inextricable link between racism and capitalism, but the link does not explain the reasons for the persistence of racism, which stands in the way of creating a more just, democratic, and egalitarian world (Bush et al. 2019). Thus, I consider the focus on anti-racism to be a critical prerequisite and ethical imperative in forging solidarity and opening pathways for emancipatory politics against all other forms of exclusion and domination upon which capitalism and its growth model rest (Crenshaw 1989; Davis et al. 2019; Grillo and Wildman 1991; Hill Collins 2019; Smith 2000; West 1993). The analysis of the Virocene's vulnerabilities in relation to capitalism and racism in the following section highlights the need for an alternative way of organizing human-nature relations, which is fundamentally an issue of a multispecies rights-justice-power nexus.

Social and ecological vulnerabilities of the Virocene epoch

If we conceptualize the reasons for vulnerability during emergencies, we can see that their ideological framing and representations, as well as their social and ecological consequences and functions, do not radically differ from the reasons that shape vulnerability during non-emergencies (Blaikie et al. 1994; Salama et al. 2004). Human and ecological vulnerability during the Virocene epoch is largely determined by how humans organize their individual and collective identities and the relationships between them. Insecurities arising from a lack of means to satisfy basic needs (food, water, sanitation, mental health, mobility and the desire to belong to an inclusive and just society) are experienced differently by different people. Nonetheless, they are all, to various degrees, rooted in a capitalist ideology and racist systems of governance that predate the COVID-19 pandemic. In the Virocene epoch, capitalism is the primary organizer of human-nature relations (Moore 2016a, 2017b). The epoch also embodies the crises generated by capitalism and racism, which shape its vulnerabilities, power, and politics.
Now, however, the Capitalocene is encircled by the Virocene into the indefinite future, contingent upon how both epochs are experienced in different societies and how their respective social, political, economic, and ecological relations have defined their position within the global political economy, revealing the uneven geographical development of capitalism. Viral activity hinders capitalism's and racism's central roles in shaping "humanity [as] a species-environment relation" (Moore 2005: 11), as well as societies' alternative responses to disruptions. It forces capitalists to reimagine their own future, and those anti-capitalist elements interested in planetary justice to imagine how they "might dismantle, analytically and practically, the tyranny of man and nature" (Moore 2017b: 34). As Arundhati Roy notes,

[…unlike] the flow of capital, this virus seeks proliferation, not profit, and has, therefore, inadvertently, to some extent, reversed the direction of the flow. It has mocked immigration controls, biometrics, digital surveillance and every other kind of data analytics, and struck hardest—thus far—in the richest, most powerful nations of the world, bringing the engine of capitalism to a juddering halt. Temporarily perhaps, but at least long enough for us to examine its parts, make an assessment and decide whether we want to help fix it, or look for a better engine. (Roy 2020: 5)

The pandemic has exposed the vulnerabilities of the social, economic and military structures that currently maintain oppressive, unjust and exploitative systems (capitalism, sexism, religious oppression, militarism, racism, racist nationalism, etc.). Although the pandemic's threat to capitalism may be temporary, it is moving at a staggering pace through nations and population groups, with the potential to destabilize human aspirations to shape the world order. Donna Haraway (2015) noted that in the Chthulucene epoch, humans "must collect up the trash of the Anthropocene, the extremism of the Capitalocene, and chipping and shredding and layering like a mad gardener, make a much hotter compost pile for still possible pasts, presents, and futures" (p. 3). Likewise, in the Virocene, the autonomy and power of virogenic activity over humans requires that human responses to social and ecological catastrophes be "thinking-with" and "becoming-with" the reality of virogenic activity, while cultivating a praxis with new kin-relations and "response-ability" for survival (p. 34).

The Virocene period has seen the social and ecological effects of virological activities become an autonomous and unpredictable force playing a major role in human and ecological well-being. The natural "autonomy" that COVID-19 commands causes abrupt, unpredictable, and irreversible changes in the web of social and ecological life that are not easily comprehended or mitigated using current intellectual and technological capabilities. The occurrence of "virogenic" social and ecological change during the Virocene does not mean that humanity caused the pandemic; rather, it permits distinctions between changes caused solely by virogenic activity and changes that have happened due to human involvement and social systems. Human agency is implicated in a fear-driven consolidation of the neoliberal and racist world order in far more socially and ecologically debilitating ways after a pandemic.
Juliana Fadil-Luchkiw (2018: 1) captures the spirit of the Virocene epoch when she characterizes humanity's impact on the world as a "parasitical infestation of the Earth—including, but not limited to, climate change, overpopulation, pollution, extractive capitalism, environmental devastation, and mass extinction." Human agency can also, however, invigorate egalitarian ideas and movements, and inclusive and just ways of organizing human and ecological relations to build resilience against current and future emergencies. The COVID-19 pandemic has allowed humanity no time to find a cure with which to prevent fatalities and mitigate damage; it is impossible to predict what will happen once a cure is found, leaving the fear that SARS-CoV-2 or another fatal virus will someday surprise the world again. When it reappears, it will likely be resistant to the treatments used for the preceding variant. Both denialism (the idea that we are immune to illness) and triumphalism (the idea that dominant systems will overcome illness) are now shattered. The powerful and powerless alike are becoming accustomed to the idea that the virus will likely invade their lives and communities voraciously. As the virus encircles the globe and shifts its epicentre from one place to another, humanity must reconsider the idea of normalcy during the Virocene in relation to the consequences of the current disease and the imminent threat of its return in more debilitating forms. We are forced to accept that staying alive in a world encompassed by the Virocene is far more prudent than projecting optimism about defeating it. While a vaccine may successfully inoculate victims against the virus, it cannot protect them from capitalism, racism, and climate change. As Vijay Prasad (2020: 1) notes,

We won't go back to normal because normal was the problem. Now, amid the novel coronavirus, it seems impossible to imagine a return to the old world, the world that left us so helpless before the arrival of these deadly microscopic particles. Waves of anxiety prevail; death continues to stalk us. If there is a future, we say to each other, it cannot mimic the past.

Scientific and social origins of viruses

The tensions, gaps, and negotiations among scientific explanations of the SARS-CoV-2 virus and their political representations are indicative of the neoliberal political economy's struggle to consolidate its dominance during and beyond the pandemic. The manufactured disjuncture between scientific understandings of the epidemiology, clinical characteristics and methods of treating COVID-19, on the one hand, and certain social understandings of the virus shaped by social and political determinants that predate the pandemic, on the other, has serious consequences for cross-fertilization between scientific and social knowledges, for the understanding of human-nature relations, and for the discriminatory community-level impacts of the pandemic.

Despite advances in our understanding of viral origins and evolutionary history through epidemiological studies that examine the relationships between viruses and their hosts, there is still much to understand. We do not know enough about the molecular mechanisms of viral entry and replication, modes of viral transmission, or the length of viral infection in relation to population density, living conditions, climate cycles, and viral stability. To date, no clear explanation for the origins of viruses exists.
Viruses may have arisen from mobile genetic elements that gained the ability to move between cells. They may be descendants of previously living organisms that adopted a parasitic replication strategy; perhaps viruses existed before cellular life and led to its evolution (Wessner 2010: 32-37). Most studies express reluctance to precisely date the origins of viruses, but indicate that they might date back millions or even billions of years. Part of the difficulty in dating viruses, according to Ed Rybicki, a virologist at the University of Cape Town in South Africa, is that viruses "don't leave fossils" and that they use tricks "to make copies of themselves within the cells they've invaded" (2018: 4). They sometimes can stitch their own genes into those of the cells they infect, so that "[understanding] their ancestry requires untangling it from the history of their hosts and other organisms" (p. 5). To buttress the notion of the potentially natural origins of viruses, Rybicki points to theories that viruses could have existed in insects millions of years ago and at some point in their evolution began infecting other species, and/or "emerged either from a type of degenerate cell that had lost the ability to replicate on its own or from genes that had escaped their cellular confines" (2018: 2). The challenge of studying the origins of viruses is also compounded by the difficulties of distinguishing "a specific mode of evolutionary change, such as the explosive radiation of lineages leading to different viral families" (ibid). Thus, the origin of viruses continues to be debated, especially with respect to "RNA viruses, for which evolutionary history [is] especially difficult to resolve" (Holmes 2020: 2).

Better scientific knowledge about coronaviruses dates back to the 1960s, with subsequent studies having found an enormous variety of animal coronaviruses, five new forms of which have been discovered since 2003 alone. SARS-CoV-2, a new strain of coronavirus that belongs to the Nidovirales order, replicates using a nested set of RNAs; epidemiologists are still studying its transmission, symptoms, and severity. The WHO has classified it as the source of the pandemic because of its rapid spread over a wide geographic area and its effects on an exceptionally high proportion of the population in the absence of promising measures for mitigation and cure (Anthony et al. 2017). Despite facing severe global political scrutiny, on December 31, 2019, Chinese authorities alerted the WHO to an outbreak of a novel strain of coronavirus causing severe illness. Several novel COVID-19-infected pneumonia (NCIP) cases were recorded in the Wuhan region of China, and the disease spread rapidly across the country, and thereafter even more rapidly across the world, especially affecting Italy. The U.S. is now (in July 2020) considered likely to have seen far more infections and potential deaths than Italy and China (https://covid19.who.int). Human-to-human spread in Wuhan led to the virus's detection, but it is not a certain indicator of the virus's geographical origin. Sampling of coronavirus cases since 1965 has indicated the presence of the virus on most continents (ibid).
At the root of the popular idea that the virus originated where it was first detected, in a location where humans have more regular contact with bats and pangolins, are the possible reasons for such proximity, including a globalist, interconnected, growth-oriented capitalist economy that cannot be reduced to the policies and politics of a specific place or country. On January 10, 2020, China publicly released the genome sequence of the COVID-19 virus (Holmes 2020). Yet many continue to believe that the Chinese government failed to share information about its COVID-19 outbreak in a timely manner (O'Donnell and Associates 2020). Such misinformation is, in part, a consequence of neoliberal doctrine, which asserts that the legitimacy of the state rests on the performance of the economy, and which led several nations to withhold or ignore scientific warnings about SARS-CoV-2 or to dismiss them as trivial. Other governments, including the U.S., also ignored both scientific advice and the lessons of previous pandemics. Despite repeated calls by Democrats for urgent pandemic readiness, President Donald Trump reduced federal financial allocations and staff at the Centers for Disease Control (CDC) by three quarters in 2018 (Baptiste and Washington 2020). Trump also abolished the White House Office on Pandemic Preparedness and the National Security Council Pandemic Unit, and ignored an Obama-era 69-page National Security Council playbook, which included hundreds of tactics and policy decisions to "prevent, slow, or mitigate the spread of an emerging infectious disease threat" (Diamond and Toosi 2020).

While the authoritarian Chinese state was largely successful in enforcing mitigation and treatment plans and materially helping many other countries to do the same, several democracies continue to struggle against capitalist market forces that impede their ability to coordinate resources to fight the pandemic. These difficulties stem from the commodification of health care and, to a great extent, of the practice of scientific research itself. Until the early 1970s, capitalism expanded by dispossessing people of their land and their productive labor (also known as 'accumulation by dispossession', Harvey 1996). Although scientific knowledge grew in parallel with growth in capitalist productivity, its development was not directly bounded by capitalist accumulation, maintaining a far higher degree of autonomy than seen at present (Freudenthal and McLaughlin 2009). Public-interest and public-sector institutions often dominated the production and dissemination of scientific knowledge. As limits on 'accumulation by dispossession' emerged, scientific knowledge assumed a greater role in capital's quest to increase profits by replacing labor with technology (Kleinman and Vallas 2011; Slaughter and Rhoades 2009). Global capital's search for new areas of investment after the end of the Cold War viewed state-backed production of scientific knowledge as a constraint on expansion and increased pressure on countries to privatize it, by orienting its production, dissemination and application to attract private-sector investments (Aspragathos 2013; Huws 2012; Olssen and Peters 2005). During this period, scientific knowledge began to emerge as fertile ground for capital accumulation (Edgerton 2006), and those countries that held an edge in technological development achieved faster economic growth.
Science itself became rapidly commodified as market rationality came to dictate the production, dissemination, and use of scientific knowledge (Lave et al. 2010; Mirowski 2011; Moore et al. 2011). This, in turn, imposed constraints on the autonomy of scientific knowledge production and its ability to serve the public interest (Huws 2012; Vohland et al. 2019). Within a short time, the knowledge economy, spurred by science, reached its limits in resolving the crisis of low profitability and unemployment, for the production of scientific knowledge itself became vulnerable to the vicissitudes of the market economy and subject to the speculative behavior of financial markets (Brenner 2002).12 Scientific inquiry as a field of critical inquiry, however, did not entirely lose its autonomy in producing knowledge, or its public purpose. The knowledge economy created an interdependent world that threatened to disrupt the neoliberal narrative: one in which the production and dissemination of knowledge could bypass constraints on capital and state power, and in which knowledge flows were increasingly fluid, spreading around the world more quickly than commodity flows. As Bob Jessop has noted, "[k]nowledge is a collectively generated resource and, even where specific forms and types of intellectual property are produced in capitalist conditions for profit, this depends on a far wider intellectual commons" (Jessop 2002: 129). The survival of neoliberal regimes thus depends not only on the production but also on the suppression of scientific knowledge that is detrimental to the neoliberal narrative. Anti-science trends in politics have intensified, especially since the 1990s. At the root of antipathy toward scientific knowledge about the COVID-19 pandemic is the struggle of various political regimes to maintain their competitive edge in the global economy. Neoliberal and ethnonationalist governments are averse to evidence-based knowledge, as well as to evidence that disputes the morality and ethics of the assumptions underlying their claims. Mirowski (2011) argues that neoliberal political ideology operates on the belief that "corporations can do no wrong", "competition always prevails", and "the state should be governmentalized through privatization of knowledge" for the benefit of the markets (p. 30). The discord between scientific and neoliberal political rationality intensifies when scientific rationality begins to expose the social and ecological limits of capitalism and inspires anti-capitalist resistance. For example, the vast body of knowledge about the dangers of climate change presents significant threats to the expansion of capitalism.
12 At the same time, several East Asian and developing countries emerged as hubs for labor-intensive and technology-intensive industries, with companies in Western countries outsourcing production to take advantage of lower costs and relaxed labor and environmental regulations (Amsden 1988; Haggard 1990; Santasombat 2019). Western investors overlooked the environment-related health impacts of their investments, despite increased outbreaks of epidemics and pandemics. The shortages of various products, including essential medical supplies, seen in many countries during the COVID-19 pandemic arose from disruptions to global supply chains originating in East Asian economies, "invented" by global capital's relentless search for easily exploitable labor and lax environmental enforcement.
Evidence shows that political hostility to scientific knowledge also exacerbates the vulnerability of racially marginalized communities to COVID-19. The economic and ecological costs of the subsuming of scientific knowledge by capitalism have reduced both their clinical immunity and their economic and social resilience. Angela Saini (2019) convincingly argues that the "problem of the color line still survives today in 21st-century science" (p. 26). Saini recalls Du Bois's belief that "the problem of the twentieth century is the problem of the color line - the relation of the darker to the lighter races of men in Asia and Africa, in America and the islands of the sea" (Du Bois 2014: 62), at a time "when the scientifically backed enterprise of eugenics - improving the genetic quality of white, European races by removing people deemed inferior - gained massive popularity" (Skibba 2019: 2). The issue at stake here is power over, and power exercised through, scientific knowledge. The challenge of reclaiming the power of science to produce social and environmental wellbeing requires a foundational theory of justice from which to deliberate on the competing interests vying for power over scientific knowledge. In short, "epistemic justice" cannot be achieved by distributing the products of knowledge (that is, redistributive justice) or by "technopolitical struggles" alone (Moore 2011). What is needed, rather, is a theory of justice that can dismantle epistemic injustice and create alternatives that are "inclusive of locally situated counterexpertise", resolving the tensions between experts and non-experts through a "knowledge justice framework" that accommodates "justice for counter-expertise" (Baigorrotegui 2019).
Social distancing and quarantine - disparate impacts on women and vulnerable populations
In the absence of a promising treatment, social distancing and self-quarantine are the most promising ways to combat the SARS-CoV-2 virus. Both offer specific ways of organizing relationships between humans and physical spaces. Peoples' varied responses to these requirements, their ability to abide by them, and their diverse experiences in these spaces exemplify certain facets of the political economy of the organization of spatial relations, resulting from the enclosure of common spaces and the creation of diverse geographies of deprivation and dispossession across different scales (Sevilla-Buitrago 2013: 1). These factors shape risk perceptions with regard to the pandemic, as well as people's responses to social distancing measures, in ways that are unevenly distributed across space. The role of identity politics and political rationalities in shaping risk perceptions is also evident in the racialized representations of those violating social distancing and quarantine requirements. For example, in India and Sri Lanka popular media representations highlight the religious identity of the Muslims and Christians who violate these requirements. The religious identities of politically privileged communities are not mentioned; instead, they are referred to as 'people', 'pilgrims', 'irresponsible people', or 'returnees' from abroad. The origins of these bigoted representations predate the COVID-19 pandemic, and as Amir Ali notes, "Islamophobia has been transposed onto the coronavirus issue" (in Perrigo 2020: 1). Likewise, "one of the key features of anti-Muslim sentiment in India for quite a long time has been the idea that Muslims themselves are a kind of infection in the body politic" (Arjun Appadurai, in Perrigo 2020: 3).
This perspective highlights the affinity between long-standing bias and new anxieties surrounding COVID-19 (ibid). Amid the pandemic, Nalaka Gunawadenne, a Sri Lankan media analyst, says, "it is very disturbing and disheartening to see anti-Islamic sentiments and anti-Muslim hate speech raise their ugly head again…" (Qazi and Thasleem 2020: 24). Similarly, in the U.S. and in Australia there has been a spike in racism against Chinese people and their cultural practices, with Chinese people accused of being responsible for the spread of the SARS-CoV-2 virus. 13 This spike in racism, primarily evident in social media, may be beyond the reach of government; there is, however, no concerted official effort to dispel such sentiments and punish those who stigmatize and physically harm minorities. The popular representations of the origins of viruses, and the stigmatization of certain populations as their carriers in specific geographical locations (i.e. nation states and ethnic communities), ignore the fact that the divergent epidemiological realities of viruses in specific local spaces are often shaped by economic and environmental changes imposed on those spaces by extra-local forces. While the current pandemic demands extraordinary measures to enforce social distancing and quarantine, these measures present significant challenges to vulnerable communities in temporary housing such as slums, shantytowns, homeless shelters and migrant workers' camps; to those without access to shelter or private automobiles; and to daily wage earners (Corburn et al. 2020). Workers in crowded labor-intensive industries (e.g., the textile and meat industries) are especially vulnerable to disease. Farm workers live in congested environments already distanced from the rest of the population, yet in order to restore food supply chains they are often compelled to work in crowded workplaces without access to appropriate safeguards from the virus (Willingham and Mathema 2020). While homeless shelters are already cramped, social investments to expand shelter facilities have fallen drastically, and the number of homeless people has continued to increase in developed countries such as the U.S. and UK (NAEH 2020; Ritchie 2019). In the UK alone, homelessness has increased more than 250% since 2010, the early days of government austerity programs (Ritchie 2019: 1). Gentrification and the rising costs of housing in major cities, which displace populations into congested areas, began well before COVID-19 as part of neoliberal urban housing development policies, the commodification of land, and forceful land grabs. In the U.S., some 552,830 people were homeless on a single night in 2018, equal to 17 out of every 10,000 people in the country (National Alliance 2020). Similarly, as the CEO of Shelter says, "It's unforgivable that 320,000 people in Britain have been swept up by the housing crisis and now have no place to call home" (Neate 2020: 2). At the end of 2019, 79.5 million people worldwide were displaced due to persecution, conflict, violence, human rights violations or events seriously disturbing public order, including 11.8 million people displaced within the borders of their own nations; some 85 percent of them are hosted in developing countries (UNHCR 2019: 1). According to the UNHCR, of the 196 countries affected by COVID-19 globally, 79 are refugee-hosting countries that have reported local transmission (UNHCR 2020b: 1).
Refugee populations already live in substandard and overcrowded conditions, with limited access to safe water and sanitation, and often suffer from poor health and nutrition. These inequalities substantially increase their risk of infection. Measles, diarrheal diseases, acute respiratory infection, and malaria account for 60-80 percent of all reported causes of death among refugees (Wise and Barry 2017). Refugee communities disproportionately bear the burden of pandemic control measures, including restrictions on movement and border closures, both of which restrict their access to resources. Displaced populations are "frequently neglected, stigmatized, and may face difficulties in accessing health services that are otherwise available to the general population" (UNHCR 2020b: 1). In this regard, climate refugees are more vulnerable because they are not covered by international law, despite the fact that "Climate-related causes are a growing driver of new internal displacement, surpassing those related to conflict and violence by more than 50%" (Grandi 2019: 46). Once infected, a displaced population runs the risk of being pushed farther from areas where it can access resources, and further constraining a population's options for relocation and resettlement can adversely affect host communities (UNHCR 2020a, 2020b). Unlike in past epochs, the global nature of the Virocene will further increase restrictions on the mobility of humanitarian workers and access to international humanitarian aid flows, forcing humanitarians to rethink ways of working with displaced populations. Yet, focusing on humanitarian assistance alone will not help increase displaced communities' resilience to the COVID-19 pandemic. Forced displacement is also associated with climate change and climate adaptation (Afolayan 1999; McMichael 2015; Ryan et al. 2019; Scheffran et al. 2011). Climate change exacerbates displacement's effects on infection rates, for "…with warming temperatures, animals that [are] known to transmit the viruses to humans are expected to move into new areas, bringing the disease with them" (Redding et al. 2019: 16). Climate change increases stress on species that are more susceptible to the spread of viruses, bringing them into closer contact with humans.
13 The Atlantic Monthly notes that "[w]herever a pandemic goes, xenophobia is never far behind. Since the outbreak of the coronavirus, reports of racism toward East Asian communities have grown apace" (Serhan and McLaughlin 2020). Naming SARS-CoV-2 a 'Chinese virus' and an 'Asian virus' has led to stigmatization, denial of access to services, and verbal and physical attacks on Asian-appearing people, rekindling anti-Asian racism that predates the pandemic. During the 1853 yellow fever epidemic in the United States, European immigrants, and more recently, during the Ebola and HIV epidemics, Africans, were scapegoated as carriers and subjected to more intense racism. This in turn limited their access to health care, deepening their vulnerability to disease and deprivation. To prevent the spread of the virus, some nations have imposed discriminatory quarantine measures on other nations without clear scientific justification. For instance, Sri Lanka's 2020 ban on all European travelers, but not Chinese travelers, was a matter of geopolitical and personal political relations between countries.
A recent study found "that 33 viruses, 28 of which had previously been unknown to scientists, had been entombed for 15,000 years in ice cores within a melting glacier in Tibet" (Zhong et al. 2020: 7; also see Hotaling et al. 2017). Climate change, precipitated by rising temperatures, deforestation, and changing rainfall patterns, increases the "effect on the burden of infectious diseases that are transmitted by insect vectors and through contaminated water" (Shuman 2010: 362). Degraded habitats are breeding grounds for viruses, as viruses are more adaptable to those environments than are humans (Khan et al. 2019; Ogden 2018). Sixty percent of all infectious diseases in humans are zoonotic, as are 75 percent of all emerging infectious diseases, and they are connected with forest loss, which brings wildlife into closer contact with human settlements and with intensive agriculture and livestock industries. Climate change affects food systems. Animals create polluted environments and increase people's vulnerability to viruses; conversely, climate-change-related poverty and displacement make people less able to access health-care facilities (Fischer et al. 2013; Redding 2019; Shuman 2010). An estimated 1 billion people will face their first exposure to mosquito-transmitted viruses in the coming century through climate change-related migration processes, and contexts will shape migrant and host community health outcomes in a variety of ways (McMichael 2015). Nations concerned primarily with protecting their own communities from viral pandemics could restrict displaced persons' mobility and/or push them toward uninhabitable regions and areas that are vulnerable to human-induced climate change. Pressure on governments to increase economic growth to aid in recovery from the COVID-19 pandemic will further worsen climate change, especially when international aid for economic recovery is conditioned on national implementation of measures designed to spur such growth (Oldekop et al. 2020). Two-thirds of the world population live on less than US$10 per day, and every tenth person lives on less than US$1.90 per day, the majority of them in congested environments. They are unable to maintain reserves of essential items and now face job loss, loss of remittances, rising prices, and disruptions to the channels of services available to them (Roser and Ortiz-Ospina 2019). Of the 164 million migrant workers worldwide, approximately 111.2 million live in developing countries (ILO 2018: 1). An increasing majority of the world's vulnerable population, who live on daily or weekly wages, are already food insecure or soon will be. The epidemiological vulnerabilities of migrant communities in their workplaces and the collapse of global supply chains increase their chances of unemployment. Workers leaving worksites are either stuck in cramped housing, forced to use crowded means of transport, or, in some cases, walk hundreds of miles to return home. In India, tens of thousands of migrant workers walked home, often in close proximity to one another, because trains were shut down (Chatterjee 2020; Carballo et al. 2018). In India, 74 million people, one sixth of the urban population, live in slums, and in some areas slum dwellers have one toilet for 1,440 people. Residents, mostly women, congregate in a few places in large numbers to gather water that is often supplied for only a limited time.
They are forced to walk in large numbers through narrow roads and congested open markets to purchase their daily supplies, and the situation worsens when curfews restrict mobility. In one settlement, "the lanes are so narrow that when we cross each other, we cannot do it without our shoulders rubbing against the other person" and "We all go outdoors to a common toilet and there are 20 families that live just near my small house", said a slum dweller (Sur and Mitra 2020: 7-8). The socio-physical borders of these vulnerable communities were already impenetrable before the pandemic, for reasons of unequal economic development, racism, and xenophobia. Losing everyday social connections comes with psychological costs for everyone, yet vulnerable populations in congested places do not have the luxury of coping within (absent) private spaces in their homes or by accessing entertainment in the ways the privileged can. In addition, while social distancing and quarantine are necessary to save lives, for many they also mean being subjected to vulnerabilities that extend far beyond economic survival. A study by Johns Hopkins University noted the high probability of an increase in "suicide, substance abuse, domestic violence, homelessness and food insecurity" (DeLuca et al. 2020). News reports from around the world also note increasing incidences of domestic abuse and violence, "more so now with abusers finding themselves frustrated and at home far more than normal" (DeLuca et al. 2020; Godin 2020). In the state of Oregon, "[perpetrators] are threatening to throw their victims out on the street, so they get sick" (Mahdawi 2020). A WHO study noted that during emergencies, gender-based violence tends to increase and go unreported. In these situations, "women's bodies too often become battlefields" (WHO 2018b), as women are more likely to absorb the frustrations and anxieties of their households. Both past and present studies show that women are far more adversely impacted by pandemics than men. For example, even after economic distress levelled off at the micro level, men's controlling behavior toward their partners continued to increase. School closures also affect girls' education and life opportunities (Roy, I. 2020). "As many girls dropped out of school, it also showed a rise in teenage-pregnancy rates. And predictably, domestic and sexual violence rose" (p. 10). Due "to mass school closures women will bear much of the responsibility for child and elderly care. The lockdown will only exacerbate the burden since women already do three times as much unpaid care work than men. In India it is 9.8 times more" (ibid).
Shelters from domestic abuse are either crowded or do not accept clients for fear of spreading the virus. Even if the pandemic ends, the scars of abuse will last for lifetimes. The difficulties vulnerable communities face in adhering to demands for social distancing, and the disproportionate impacts of distancing on these communities, are fundamentally rooted in the already extant distance-proximity dialectic that emerged from capitalism's and racism's control of living spaces, health care, social safety networks, food systems, and politics. Coping with the Virocene in socially distanced and quarantined spaces demands a "new normal way of living", with radically different ontologies, tools, and strategies to resist the further consolidation of capitalism's and racism's power over human and non-human lives. The Virocene epoch thus calls us to explore ways to replace anthropocentric views about human and non-human species relations with a multispecies perspective.
Health
Disparities in preparation for the COVID-19 pandemic and in access to needed medical supplies and urgent health-care resources are visible along class, race, and gender lines. In many cases these disparities demonstrate the extent to which the governments and political ideologies prevalent in these countries prioritize human wellbeing or market wellbeing. In many countries, COVID-19 has exhausted public health measures meant to prevent, detect, and treat pandemic illnesses. Globally, public investments in pandemic mitigation continue to drop, despite evidence "that a wide range of preventive approaches are cost-effective, including interventions that address the environmental and social determinants of health, build resilience and promote healthy behaviors, as well as vaccination and screening" (WHO 2014: ii). A major reason behind these cutbacks is the commodification (privatization) of health systems (Attard 2020). As private firms seek profits, they cut 'fat' from the system, including investment in areas such as disease prevention and mitigation. Moreover, privatization has led to a lack of coordination in health services, which is critical in times of crisis. Thus, the real cost of cuts in preventative care is inevitably borne by the public and the state - privatizing profits while socializing risks. Consequently, hospitals around the world are experiencing shortages of key equipment needed to care for critically ill patients, including beds, testing equipment, ventilators, and personal protective equipment such as gloves, face shields, gowns, and hand sanitizer for frontline medical personnel (Ranney et al. 2020). Apart from disruptions in supply chains, these shortages of protective materials derive from several factors. First, despite vast knowledge that virogenic activity is occurring with increasing frequency, private health systems and governments have not been stockpiling emergency medical supplies. Governments, under pressure to support private businesses, have not encouraged or demanded that health institutions stockpile such materials. In addition, neoliberal governments, operating within an ideology of private-sector 'efficiency' and fiscal austerity, have not stockpiled such goods and services either. Second, profitability depends on commodification, which essentially means creating artificial scarcity in the process of resource production, allocation, or distribution.
Global pharmaceutical companies do not share information with each other out of fear of giving an edge to their competitors, slowing the pace of research and undermining their capacity to find a profitable product (Millar 2019). Acquiring patents and litigating over trade secrets as part of the profit motive results in these companies restricting or blocking other firms, or even countries, from producing cheaper generic formulations (Boseley 2006; Cooper et al. 2001; t'Hoen 2002; O'Manique 2004). In a world structured by competitive capitalism, the pooling of knowledge and resources to develop an effective treatment and vaccine for COVID-19 is impossible to imagine, especially when nation-states are beholden to corporations that are key players in economic growth, provide employment, and financially patronize politicians. COVID-19's exposure of the unpredictable powers of nature is "nowhere more true than in the continuous evolution of new infectious threats to human health that emerge" (WHO 2018a: 14). At the same time, countries that are already in deep economic crisis are unlikely to access enough international aid. Jennifer Kates et al. (2020) cite "growing concern about its impact in low- and middle-income countries (LMICs), … particularly those in sub-Saharan Africa, home to more than one billion people" (p. 1), and note that "it is highly likely that many other LMIC countries not identified as COVID-19 high priority will experience growing case-loads and require enhanced assistance" (p. 5). For some countries, prejudice and geopolitical biases can override the severity of the pandemic in determining their access to critical medical aid. Elizabeth Rosenberg of the Center for a New American Security think tank pointed out that "while Iran is an epicentre of this virus outbreak and facing true economic catastrophe … there will be no relief on sanctions" (in Mohammed et al. 2020: 10). After much delay, in the last week of March 2020, Britain, France, and Germany bypassed US sanctions to send medical aid to Iran to battle the virus (Rothwell 2020). Economic sanctions imposed by advanced industrialized countries on international trade flows in developing countries, for example US sanctions on Venezuela, are likely to worsen the economies of these countries as they struggle to cope with the pandemic. Some developed economies have themselves failed to respond to the COVID-19 pandemic, despite ample warning, because the rate at which people are dying outpaces a market system that is unable to provide necessary assistance. Currently COVID-19 is spreading rapidly in developing countries. Yet there are no signs of an ideological shift toward restructuring the healthcare industry along non-capitalist lines. For example, the World Bank approved US$1.9 billion in aid to assist 25 countries, and this could increase to US$160 billion in the next few years. Despite using the language of 'global public good' in its pandemic preparedness literature, there is no indication of a shift in the Bank's neoliberal ideology toward aid and long-term recovery (World Bank 2017). As Stein and Sridhar (2017) point out, the purpose of aid transfers is to "create a market for pandemic risk" (p. 5), through an "insurance arrangement that does not simply pool donor money but creates a market for private sector investment" (p. 1). As they further note: "Yet, in putting particular emphasis on market-based solutions to health concerns, the [World Bank] risks creating a financial mechanism that is inefficient and opaque.
This points to the wider tensions between the immediate pursuit of profit and the goal of providing healthcare to the world's poorest people." (p. 22) The embrace of market-based solutions in the name of building 'economic resilience' also means channeling investments, mostly in the form of debt, into changing economic activities and modes of governance according to market rationality. The World Bank's folly is manifest in its contradictory positioning of health as a public good within the market economy, demonstrating how global health crises are rooted in the organization of healthcare systems under capitalism. Thus, the Bank's pandemic interventions are likely to impose more constraints on diverse health systems focused on human wellbeing rather than profit. On a broader scale, the pandemic thus points to the roots of current deficiencies across global healthcare networks in preventing, preparing for, and responding to infectious diseases: neoliberal economic reforms. Investments in public health have largely focused on anchoring public health within a market-driven economy. For example, in 2003, the severe acute respiratory syndrome (SARS) virus dragged world economic output down by US$50 billion. Given that China's share of global GDP in 2019 was four times higher than in 2003, however, and with confirmed cases of COVID-19 in 2020 more than double the total for SARS, the coronavirus outbreak is estimated to cost the global economy up to US$360 billion. This would have a domino effect on the economies of poor countries: Should Chinese demand fall by 1% due to the coronavirus outbreak, low- and middle-income countries would lose $4 billion worth of goods exports and $0.6 billion of tourism receipts. If oil prices fall by 5% amidst lower global demand following the outbreak, sub-Saharan African countries would face a $3 billion cut in their mineral fuel export revenues. (Raga 2020) Epidemics such as Ebola and HIV have already pushed poorer countries into the dominant growth paradigm and led to the collapse of health-care systems as the latter were brought in line with the growth imperative of capitalism. Cuts in fiscal allocations for basic needs and social safety networks, as well as currency devaluation, have increased the share of GDP allocated to debt servicing. This, in turn, has reduced the real incomes of the poor. Growth-inducing policies also speed up the extraction of natural resources in poor nations to boost growth in developed countries, worsening global climate change. While economic growth continues to be pivotal in measuring levels of economic development, wide national differences in resilience and effective responses to the SARS-CoV-2 virus do not necessarily correspond to rates of economic growth. Countries that succeed in mobilizing resources and providing care based on need give priority to, and have the political will to ensure, access to healthcare as an entitlement. The importance of political values in mobilizing health resources is exemplified in the cases of Cuba and Sri Lanka. For example, Cuba currently has about 37,000 medical workers in 67 countries, most on longstanding missions. In the city of Crema in the hard-hit Lombardy region of northern Italy, 52 Cuban doctors and nurses set up a field hospital with 32 beds equipped with oxygen and three ICU beds.
(Associated Press 2020) Sri Lanka mobilized its public health services and military, all at government expense, far more quickly and efficiently than the US, even though its knowledge, human resources, and financial capacities are far more limited. Within a short period of time, the Sri Lankan government set up quarantine centers for the self-isolation of people with COVID-19 and mobilized the public health system to care for the affected rather than sending them home. The state enforced the wearing of masks, restricted public gatherings, closed borders between districts, and made sure that private and public institutions provided sanitization and temperature-checking facilities free of charge (Kohona 2020). This is in contrast to countries such as the US, where the biomedical industrial complex, rather than the state, is enormously powerful in controlling the supply of physical and financial resources needed to ensure immediate care (Gaffney 2020). The complex derives its power from politically influential segments of the population advocating for access to health care based on ability to pay as opposed to need. In addition, the politicized ideological divide in the US between those who support and those who oppose state enforcement of preventive measures against COVID-19 is a critical determinant of its failure to contain the pandemic. The reasons that pandemics exhaust the resource capacities of developing and developed countries to meet health care needs are systemic, but they are also matters of morality and political will. The global biomedical industrial complex constrains the freedom of countries to develop inexpensive medical supplies by allocating more finance to the private sector than to the state, enforcing austerity measures that curtail public health systems, and providing incentives for private-sector investments in the health care system (Baru and Mohan 2018; Mackintosh and Koivusalo 2005; Sparke 2019). The growth of health-care systems in the neoliberal world is predicated on manufactured scarcities intended to maximize profits for private health-care corporations (Cassell et al. 2017). Neoliberal forces allow public health systems to exist so long as they do not interrupt the commodification of health care and other complementary systems (Goodell 2020). Neoliberalism's hold over the healthcare system in any country depends on personal and governmental values: whether universal health care is seen as an entitlement for all human beings, or whether access to health is based on individuals' ability to pay. Both perceptions ultimately shape a country's political realities. Shamasunder et al. (2020: 1083) noted that the COVID-19 pandemic demonstrates the critical need to reimagine and repair the broken systems of global health. Specifically, the pandemic demonstrates the hollowness of the global health rhetoric of equity, the weaknesses of a health security-driven global health agenda, and the negative health impacts of power differentials not only globally, but also regionally and locally. Reimagining the global health system is fundamentally a matter of the values of human and non-human wellbeing that shape the power of capitalism and racism over the system.
Food
Food insecurity during the COVID-19 pandemic is not primarily caused by viral activity, but rather by the pre-existing capitalist ethics that govern supply chain systems, and the pursuit of economic growth predicated on those chains.
According to the Food and Agriculture Organization (FAO), the COVID-19 pandemic will worsen global food insecurity for an additional 820 million people, more than the Global Financial Crisis (GFC) of 2008 did (United Nations 2020b). Countries are unprepared to meet food needs during pandemics, especially because capitalist systems fail to see food as part of complex social and ecological systems (Hall 2015). The political crisis that could emerge from a food crisis during and in the aftermath of the pandemic is thus rooted in the capitalist framing of the food system (Allen and Guthman 2006). Activities related to food production, storage, distribution, processing, packaging, retailing, and marketing assume profit maximization. Market rationality determines production choices, pricing, and product accessibility. Agriculture in and of itself is not a priority unless it directly adds to economic growth, preferably via export markets - a fundamental factor in the contemporary political ecology of food (Hall 2015). Environmentally destructive intensive farming practices and the increasing cost of agricultural production hamper food security by blocking better forms of land use. As profits are controlled by agribusinesses, farmers become less innovative in agricultural production, even for their own subsistence. The FAO reports that an estimated 1.3 billion tons of food is wasted globally each year, equal to one third of all food produced for human consumption (FAO 2011). The FAO, however, is not explicit about how this waste is an essential part of the commodification of food systems, which creates scarcities in the marketplace that, in turn, are integral to maximizing commercial profits and adding to economic growth as measured by Gross National Product. Food shortages thus are indicative of institutional failure, arising not from limits on production imposed by the COVID-19 pandemic, but from the profit-oriented organization of food systems. For example, food growers and producers in the US are letting food rot and dumping crops as they face massive surpluses of highly perishable food (Cagle 2020). Thus, the root of the food waste problem "lies in the hegemonic agri-food system and the unequal power relationships between the actors in the agri-food chain" (Gascón 2018: 587). Most food growers also sell their produce through highly commercialized food chains, where each unit in the chain seeks to maximize profit. According to the National Sustainable Agriculture Coalition's report, the total loss to the industry is US$1.32 billion (Cagle 2020). Now that the pandemic has interrupted the supply/production/sales chain, farmers have no storage facilities or means of getting produce to consumers. Local retail shops, food banks, and non-profits do not have the physical and financial capacity to access and absorb food surpluses (ibid). Even countries such as Sri Lanka, endowed with vast agricultural resources, are fast losing the capacity to guarantee food security to their citizens (WFP 2017), as their food systems have been rapidly transformed from addressing people's wellbeing to maximizing profits. Prior to 1977, Sri Lanka had a complex network of production and distribution cooperatives, storage facilities, and transport facilities.
However, after the introduction of neoliberal policies in 1977, the supply of agricultural inputs under the government system virtually collapsed, and inputs are no longer controlled locally but by transnational agribusiness companies. The post-1977 economic policies intensified the impact of climate change on food security as the rationale behind agriculture changed from 'feeding the population' to maximizing economic growth. 14 The capacity of the Sri Lankan government to efficiently coordinate food access during an emergency, when people's mobility is severely restricted, has now been undermined. This is a global reality, and food systems thus structured by capitalism continue to fail to provide needed nourishment to the most vulnerable. The intersection between food and COVID-19 is rooted in the way food systems are structured, valued, and positioned in the global 'ecopolitical economy', and in how they intersect with and manifest in different cultures of food production and consumption. Neoliberal economic policies do not support alternative food systems unless they at least complement pro-growth food systems, regardless of their implications for social and ecological wellbeing (Slocum 2007). The capitalist food system's vulnerabilities exemplify the existing hierarchies of resource access and power upon which the reproduction of the political economy of food production under capitalism rests, rather than a lack of options for people to organize their food systems around human and non-human wellbeing. As Holt-Giménez (2017: 172) writes: This hegemonic food discourse not only reflects the dominant ideology of the corporate food regime, it avoids addressing how the capitalist food system is inextricably based on the oppression and exploitation of women, people of colour, and workers. Worse, this dominant food narrative lulls us into the magical belief that somehow, we can change the food system without changing the capitalist system in which it is historically embedded. This is the political fetishization of food. Moreover, the difficulties of mobilizing power to create deracialized, decommodified, and culturally appropriate food regimes are issues of politics and culture (Appadurai 1981). Food systems affect climate change and the ability of capitalism to reproduce itself by incorporating diverse food regimes, including those organized by non-capitalist and non-racist rationalities (Holt-Giménez 2017; Plahe et al. 2013; Wald and Hill 2016). Food sovereignty movements attempt to shift the focus away from food security toward food sovereignty: "we are arguing for a different approach to mainstream capitalism focusing in particular on the spatial and temporal aspects that underlie its role in perpetuating marginalization and inequality" (Wald and Hill 2016: 233). In recent times, promising movements for organizing food systems centered on multispecies justice have emerged, seeking to address a broad array of economic, racial, and environmental justice issues, including the ethical relationships of human and non-human animals, the politics of food production and consumption, and even larger questions of alienation, authenticity, and mindfulness (Belkhir and Charlemaine 2016; Holmes and Peterson 2017; Joassart-Marcelli and Bosco 2014; McConnell 2017; Noll 2017; Schanbacher 2017).
These movements face the challenges of being localized and of being unable to replicate at a scale powerful enough to transform commodified and racialized food systems, which mostly benefit privileged social groups. By being complicit with normalizing political rationalities and forms of power, rather than creating alternatives to, or dismantling, the neoliberal food system, these alternatives often function as modes of neoliberal governmentality (Balasubramanian 2015; Guthman 2008; Seki 2009; Youde 2009).
Employment
COVID-19 demonstrates the disastrous consequences of predicating human wellbeing on wage labor that remains alienated from the wealth it creates. At the same time, the profits of large corporations, especially in computer-related industries, continued to soar. For example, in 2020 Jeff Bezos of Amazon made an extra US$6.8 billion on top of the US$118 billion he had already made, Mark Zuckerberg of Facebook made an extra US$6.2 billion, and Warren Buffett's and Elon Musk's wealth grew US$5 billion and US$4.2 billion, respectively, despite the fact that the world's richest people lost US$36 billion in the fourth week of February (Vega 2020). In the United States, the COVID-19 pandemic has caused 47 million job losses, and the country's unemployment rate is likely to reach 32 percent, its highest since the Great Depression (Cox 2020; Gopinath 2020). Nearly 200 million full-time workers are expected to lose their jobs in mid to late 2020, and "More than four out of five people (81 percent) in the global workforce of 3.3 billion are currently affected by full or partial workplace closures" (ILO 2020). The pandemic has underscored the failure of capitalism to secure the wellbeing of wage-dependent labor: workers must depend on state and community aid for survival, and can expect no relief from capitalists, who now compete with labor for the limited resources offered by the more affluent states for economic support during COVID-19. In the United States, the COVID-19 pandemic occurred during a time of rapid deterioration in social safety networks and widespread labor agitation to increase the minimum wage, despite the overall increase in employment. Now social safety nets in the United States are stretched beyond capacity, and there are miles-long lines at food banks (Gordon and Bruch 2020). COVID-19 has also exacerbated pre-existing divisions in the labor market. About two-thirds of job losses in the United States have occurred in locales and industries that are high-intensity contact places (Torry 2020: 6). Workers in these areas are typically low-skilled and low-paid, and thus cannot stay at home and work: "Low wages correlate with closer personal interactions at work, and they are more vulnerable to contagious diseases, except for health care workers who are equally vulnerable yet earn relatively higher wages." Industries where the majority work remotely have suffered fewer job losses, and in fact added jobs in March 2020. Workers in these sectors are high-skilled and better paid. "Work-from-home and telework are now seen as a privilege activity and for a privileged class" (ibid). Stable employment and living wages were already under threat prior to the pandemic due to profit-maximizing policies removing constraints on capital (Albarracín and Naron 2000; Burrows 2013; Crouch 2012; Kotz 2002).
In many cases, temporary and contractual employment without benefits has replaced permanent and well-compensated jobs, while price levels, debt, and debt-servicing rates are increasing and real income has dropped or stagnated. The issue at stake here is the relationship between wellbeing predicated on the economy's profit-maximization imperative and the limits (i.e. decline in purchasing power) it enforces on profitability arising from an inability to end overproduction. The idea of wage labor is not natural or inevitable but a product of the way capitalism (via normalized ideologies of consumerism) appropriates human power and nature for profit, rather than sustaining human and ecological wellbeing. The pandemic has thus exposed the Janus-faced realities of the capitalist system's impact on wage labor. On the one hand, the decline in human wellbeing is a direct product of the mediation of wages and of the legal system of capitalism, which dispossesses humans of direct access to, and alternative means of, satisfying their needs (by extracting from nature). Capitalism, and its elites, expand and overcome crises by finding novel ways of exploiting and disciplining labor (Marx 1976: 376-377; Richards 2016). The pandemic occurred at a time when organized labor was at a low point, and Western economies faced systemic crises of underconsumption, personal debt, employment insecurity, and institutional suppression of the labor movement (The Economist 2020; Meyerson 2020). On the other hand, climate change restricts the expansion of capitalism, even as less privileged workers toil in environmentally unsafe areas, further depriving them of access to nature. Policies based on mainstream economic narratives that propose to diagnose, measure, and prevent the current economic system from failing show remarkable continuity with those used during previous epidemics and general economic crises. The diagnosis goes as follows: because consumption is organized around profit maximization, the economy cannot be resilient once the pandemic disrupts the production and distribution chains. The rate and pace at which people are losing jobs and purchasing power threaten people's basic survival. Fiscal and monetary stimuli focus on tax cuts for consumers and producers, bailout packages for businesses (Judge 2020), lower interest rates and different loan-repayment options, as well as transfer payments to keep production, employment, and purchasing power afloat and prevent asset crashes. Until the fear of the pandemic ends, the current economic policies to contain it will not help economic recovery, as these policies are designed only to be effective against cyclical economic downturns (Saiz 2020). Restrictions on mobility, including the imposition of self-isolation, and disruption of supply chains will slow any restarting of production and consumer spending. Consumers are more concerned about limiting consumption to essentials, including debt payments, than about increasing their spending. In short, the market-based economic system is failing catastrophically, as current economic policies were not designed to meet the needs of emergency situations that fracture production and supply chains and quarantine their constituents. The Virocene epoch has highlighted the importance of delinking human wellbeing from the capitalist wage market, with decommodification of the means of survival as the only reliable means of addressing vulnerabilities during and beyond pandemics.
For this to happen, alternatives to growth-centered ways of securing human and ecological wellbeing are urgently needed, given that economic policies in response to COVID-19 show no signs of fundamental changes in macroeconomic realities. As Mulvaney (2019) writes: "…given how growth depends on natural resources, and control over natural resources figures in geopolitical contests, the pursuit of growth will necessitate the continuation of militarized capitalism, with all of the tortured and unequal socio-ecological relations that tends to reproduce." Degrowth, solidarity, and social economics, which emerged as responses to the social inequities, exclusions, and unsustainability of neoliberalism, aspire to create a voluntary transition toward a just, participatory, cooperative, decentralized, inclusive, and ecologically sustainable society. Despite these challenges, these new modes of organizing economic relations, political rationalities, and solidarities are sites of emancipatory political consciousness that "…struggle between being palliative and transformative" (Raffaelli 2016).
Racism
S.K. Cohn's (2017) study of racism and pandemics from the Black Death to the HIV crisis notes that "Instead, both in the popular imagination and the scholarly literature, violent hatred and even pogroms are held to have been pandemics' normal course, supposedly engrained in timeless mental structures - to use René Baehrel's words, 'certaines structures mentales, certaines constantes psychologiques'" (Cohn 2017: 4). In the wake of COVID-19, the racism manifested in states' and societies' pandemic responses masks how neoliberalism and the hierarchically structured power of the state function in the current world order. Indeed, the argument that the effects of the SARS-CoV-2 virus are universal and do not discriminate along the lines of race, class, and gender is itself racist, as it masks how societal responses to the pandemic do not benefit all groups equally. For example, the Navajo Nation in the United States, with 170,000 people, had more coronavirus cases per capita than any US state in mid-2020. The Navajo or Diné nation is the largest Native American reservation in the United States. Forced by the federal government onto reserved land, the Nation is plagued by poverty despite its low population density. Reservations are also well-known food deserts, lacking basic infrastructure and often experiencing acute shortages of hospitals and medical supplies. In communities already grievously affected by colonial federal policies, transportation to hospitals virtually stopped during the pandemic, and additional federal services have been slow to reach the reservations (Capatides 2020). Similarly, African Americans, who comprise 13 percent of the US population, make up 28 percent of current COVID-19 cases. 15 This is directly related to the years of segregation that pushed African American communities into neighborhoods with precarious environmental conditions, continually worsened by structures of environmental racism. African Americans are 75 percent more likely than other Americans to live in places bordering a polluting facility like a factory or refinery, and are exposed to air that is 38 percent more polluted than that breathed by white Americans (Fleischman and Franklin 2017: 7).
Oil and chemical companies take advantage of poor communities with low levels of political power, resulting in negative health impacts on these communities and in poverty that lessens their access to health care (p. 6). A report by the National Alliance to End Homelessness (2020) also found that homeless African Americans are more vulnerable to COVID-19, as "they make up 40 percent of the U.S. homeless population..." (p. 1). That number has increased in recent years, even as the rate of homelessness for other ethnicities has gone down (p. 6). In addition, workplace racism has contributed to the overrepresentation of African Americans in labor-intensive and low-paid jobs. The disproportionate impacts of COVID-19 on people of color (POC) are hence effects of racist social policies that began decades ago and that "led people of color to be more exposed and less protected from the virus and has burdened them with chronic diseases" (Wallis 2020: 2). Many POC continue to suffer disparities in health care and economic prospects as they live in segregated neighborhoods and work on pandemic frontlines for low wages, including in nursing homes, jails, prisons, and homeless shelters, with very limited access to personal protective equipment (p. 7). In 2018, the poverty rate for the US population was 11.8 percent, with variations along racial lines (ibid). The report by the National Alliance to End Homelessness (2020) noted that African Americans have the highest poverty rate, at 20.8 percent, and non-Hispanic whites the lowest, at 8.1 percent. "The poverty rate for Blacks and Hispanics is more than double that of non-Hispanic Whites" 16 and "One in three Native Americans are living in poverty, with a median income of $23,000 a year" (Muhammad et al. 2019: 6). According to the Asian American Federation, the number of Asian Americans living in poverty grew by 44 percent between 2000 and 2016 (in Hassan and Carlson 2018). Even though national poverty and unemployment rates are at historic lows, "income inequality has reached the highest level since the Census Bureau started tracking it more than five decades ago", reported the Washington Post in late 2019 (Telford 2019). Inequality is disproportionately higher among poor people of colour; for example, "across America, black people remain disproportionately poor. More than 20% live in poverty, twice the rate of whites" (The Economist 2019). Moreover, even as the economy booms, the number of Americans without health insurance is rising (Associated Press 2019). These inequalities are the product of the same processes that generated higher levels of economic growth and employment from 2017 until the COVID-19 pandemic halted much economic activity. Universal generalizations around COVID-19 divert public attention from coronavirus-related hate crimes, xenophobia, and racism against minorities, all of which have spiked dramatically (Human Rights Watch 2020). Anti-Muslim racism during the pandemic has spiked under political regimes in South Asia, aided by state-sanctioned Islamophobia and ethnoreligious nationalism, as previously described. Islamophobia justifies violence against Muslims by depicting them as responsible for economically displacing previously privileged communities, a claim that has become a defining factor in political parties' quest for state power.
In the United States, white racism against Asians dates to the 19th century, when Chinese laborers were brought into the United States to work on railways and in mines; the spread of diseases during this period was associated with Asian racial identity (Li 1998). The revival of racial myths during the pandemic, especially through anti-Chinese rhetoric (Hvistendahl 2020), must be understood within the context of the capital-labor relations between China and the United States forged during the historical development of capitalism, which took advantage of China's labor and authoritarian governance policies, benefited the U.S. capitalist class, and made an abundance of products available to U.S. consumers. Recession, inequality, and debt increase demand for lower-priced goods, and Beijing, determined to keep its export machine humming, is finding ways to deliver. "Delivering," in this context, means that the Chinese government is doing whatever is necessary to ensure the "ability of Chinese [based] manufacturers to quickly slash prices by reducing wages and other costs in production zones that often rely on migrant workers" (Barboza 2009; Hart-Landsberg 2020: 64). The pandemic has disrupted the flows of goods and services from China and deprived it of markets. The narratives of economic nationalism emerging from such contexts provide legitimacy to efforts aimed at removing environmental regulations, introducing austerity measures to attract investments that purportedly spur growth, and forcefully suppressing dissent against economic growth-related injustices and environmental degradation. These trends are likely to produce several outcomes. First, resistance to racism and climate change takes a back seat when the majority of the population accepts neoliberal growth as normal and views the alternatives to it as ineffective and weak. More importantly, the racism that functions as the legitimizing ideology of state power provides the state with the flexibility (even the popular legitimacy) to use whatever means necessary, including extensive involvement of the military, as is the case in Sri Lanka, where the military has been tasked with creating a "disciplined and virtuous" society for the pursuit of growth, prosperity, and security (Colombo Telegraph 2020: 1). Secondly, the spike in racist nationalism and militarism in the wake of the state's failure to serve the general interests of society, particularly when the state is constrained by debt repayments and low levels of economic growth, paradoxically serves as a robust source of state legitimacy. Thirdly, the state's simultaneous strengthening of social welfare policies, including public health systems, and pursuit of rigorous neoliberal policies disregards the economic and ecological outcomes of both. Finally, sustainable platforms incorporating dissent against climate change and capitalism are evolving into political movements for change, arising from global demands for racial justice.
Governance
There is no clear relationship between the nature of political regimes, the levels of economic development they generate, and the effectiveness of their pandemic response strategies. Yet those economic and political pandemic responses driven by neoliberalism and racist nationalism could have debilitating impacts on democratic freedoms, society, and nature.
For one, the COVID-19 pandemic is likely to increase the pressure on nation states to further liberalize financial markets, especially those with cash-strapped economies depending on financial assistance. Contrary to the expectations of neoliberal orthodoxy, however, the actions that many states have taken during the pandemic (nationalization of hospitals, direct supply of resources to people, releasing prisoners, and increasing worker compensation) were unimaginable just months ago, in 2019. While the survival of these practices in different states beyond the 2020 emergency is doubtful, the state currently bears the major burden of managing the health and economic crises of the pandemic. While the private sector is unwilling to invest without the promise of profit, or without being subsidized and safeguarded by neoliberal states, these states are fiscally incapacitated and ideologically constrained from operating outside the boundaries of neoliberalism. The economic crisis in countries where state legitimacy relies on ethnoreligious nationalism, nepotism and the military could produce disastrous social and ecological consequences. The emergence of autocratic economic nationalism in countries such as the U.S. has also meant an expanded public role for nation states, which compete to increase economic growth within their own economies through new policies. For example, the U.S. (under Trump) imposed tariffs to limit imports and encouraged citizens to buy local products, contravening its demands that the rest of the world follow free-trade policies. Amidst the pandemic, the Environmental Protection Agency (EPA) "issued a sweeping suspension of its enforcement of environmental laws…telling companies they would not need to meet environmental standards during the coronavirus outbreak" (Beitsch 2020: 1). Such pro-growth neoliberal policies are likely to have disastrous economic and environmental consequences. They might also widen economic disparities within and between nations, making them vulnerable to "zoonotic and vector-borne diseases - two disease groups that are of particular concern because they are climate sensitive, and comprise the majority of emerging or re-emerging infectious diseases" (Estrada et al. 2016: 10). How countries adjust social policies to manage the implications for economic growth arising from the intersection between climate risks and financial market risks will shape policies targeted at coping with the Virocene's vulnerabilities. As "domestic policies are essential…to fending off financial crises" (IMF 2020: 21), developing countries will face greater difficulties in coping with the financial crisis, which would result in "growing fiscal and current account deficits and a shift toward riskier debt" (p. 16). The failure of the United States to respond effectively to COVID-19 could also reset debates about the viability of democratic forms of governance during the current emergency and beyond (Benedikt et al. 2020; Vlaicu 2020). The political regimes that may potentially emerge in place of democratic modes could have far more debilitating impacts on developing countries than on the developed world. There are several instances of governments in developing countries publicly comparing the successes of authoritarian pandemic responses (e.g. military control of logistics and information in pandemic operations) with the failures of Western democracies (Ben-Ghiat 2020; Diaz and Mountz 2020; Perera 2020).
The resulting public sentiments are then mobilized to justify an electoral mandate to enhance the powers of the state, a move that would be difficult were it not for the pandemic. This is particularly dangerous for countries that frame the pandemic in terms of national security, breathing new life into militias and racialized political forces, and further empowering corrupt plutocrats and politicians known for human rights violations and abuses of power. It is naïve and dangerous to assume that military involvement, which may be necessary to combat the pandemic, is completely independent of a wider process of militarization. This is particularly true of countries where military rationales dominate governance modalities and where national security is viewed as critical to neoliberal economic development (Perera 2020). History is full of examples of national security apparatuses developed for emergencies being applied to safeguard ethnonationalist interests, distracting public attention from social and environmental problems caused by neoliberal economic policies, and suppressing the emergence of dissent opposing such injustices. The post-pandemic era will be far more devastating for vulnerable groups, unless these political trends are overpowered by creative thinking and by social and political movements premised on ecological wellbeing and justice. 17 In the midst of the uncertainty of economic recovery, decisions regarding aid to pandemic-stricken poor countries are subject to the parochial domestic economic, political and geopolitical interests of the donor countries (Oldekop et al. 2020). Ostensibly to curtail human trafficking, in 2018 the Trump administration started to place restrictions on "$700 million allocated for important U.S.-funded aid programs around the developing world, including money that could have helped alleviate the new Ebola outbreak in Central Africa" (Gramer 2019; Sarukhan 2016). Policies asserted by authoritarian or would-be authoritarian leaders no longer even require an underlying logic beyond perceived personal prestige. For example, President Trump on March 17, 2020 justified his action of withholding funding from the WHO on the basis of his displeasure with its lack of control over the Chinese government's pandemic response (Berglund 2020). An earlier tweet from Trump, on January 24, "praised and thanked China on behalf of American people for its efforts and transparency in fighting the Coronavirus" (Yeung 2020). Trump's policy shift on China happened in a context of media charges levied against him for delaying and sabotaging the United States' response to the pandemic, especially by dismantling and/or reducing the capabilities of the Center for Disease Control and other agencies put in place by previous governments to manage pandemic outbreaks (Diamond 2020; Garrett 2020; McGraw and Cook 2020; UN News 2020). As governments move away from containing the spread of the pandemic to mitigating the resulting economic fallout, systems of governance will likely prioritize recovering economic growth, compromising social equality and climate change mitigation in the process. Governments have the opportunity to extend their unprecedented pandemic-era control over their citizens to build the power to 'discipline' society for economic recovery and to restore the government's popular legitimacy. COVID-19 has not fundamentally changed the basic features of neoliberal governance, and has in fact made them seem, conservatively, even more necessary for economic recovery. The end of the pandemic would also mean a revival of dissent against neoliberal governance, as the extraordinary redistributive policies undertaken by normally conservative, pro-market regimes are likely to end. Funds may be redirected to competitive growth sectors. Countries risk a breakdown of democratic governance, a rise in racist nationalism, and the militarization of their societies, and the coercive apparatuses developed during the pandemic will be extended to silence critics. Post-pandemic crises may mean expanded surveillance, and entrenched power in the name of 'crisis management.' The increased presence of the military in civilian affairs will have serious consequences for women. "Militarized masculinity", a product of masculinities nurtured on myths about male leadership and efficiency and on systems of obedience, discipline, and punishment during military training, makes its way into society and shapes gender relations in private and public domains (Enloe 2014; Whitworth 2004; Williams 1994), disproportionately impacting racially and economically marginalized women. These dark realities are vulnerable to becoming permanent fixtures of society, unless they are checked by new emancipatory political rationalities, strategies, politics, and power for multispecies justice, backed by global solidarity movements (Fernando 2020).

17 Anti-austerity protests are already underway. Tracing the rise and fall of 'expansionary austerity', Mark Blyth (2015) "argues austerity policies worsened during the Great Depression and created the conditions for the rise of Adolf Hitler and the Japanese militarists" (p. 57). Also see McKee et al. (2012) and Blickle (2020). There is also recognition that financial markets and big businesses are part of the problem, not the solution, echoing John Maynard Keynes' notion that "the boom, not the slump, is the time for austerity at the Treasury" (Keynes 1937: 388; see also Horagan 2020; Quiggin 2020: 18).

Human-animal relations

COVID-19 calls us to treat vulnerabilities, and planetary, animal and human health, as deeply interconnected. Viral interactions with humans evolve as the relationship between humans and nature evolves. In the case of zoonotic or potentially zoonotic viruses such as SARS-CoV-2 it is important to raise questions about human and non-human proximity, and how political economy organizes human-nature relations. For example, Alex de Waal notes that "the Ebola epidemic was ultimately the product of disruptions to West Africa's ecology caused by the expansion of commercial agriculture into forest zones" (De Waal 2007: 13; also see Mansfield 2008). The evolution of the world's food regimes (their respective ontologies, production, distribution, consumption, cultures, and politics) is an important determinant of the zoonotic transmission of viruses and of the immunity deficiencies that disproportionately impact marginalized social groups (Galt 2017; PAHO/WHO, n.d.). Industrial livestock farms supply more than "90 percent of meat globally - and around 99 percent of America's meat"; in these farms, "animals are tightly packed together and live under harsh and unsanitary conditions" (Samuel 2020).
These industrial livestock farms hosting millions of domesticated animals grew in tandem with the growth of commercial agriculture, whose land use patterns negatively impacted climate change and food security. As evolutionary biologist Rob Wallace notes, "[factory] farms are the best way to select for the most dangerous pathogens possible" (Samuel 2020: 16). Capitalist agribusinesses also use large amounts of land to produce animal feed, decreasing the availability, and increasing the price, of food for humans in marginalized communities, thus also decreasing their immunity to viruses. Meat factory workers have also proven to be particularly vulnerable to COVID-19 (Dyal et al. 2020) and are under pressure to resume work without sanitary improvements in their workplaces (Secard 2020). Pandemic pressures expedited the slaughter of animals in large numbers to meet demand, and the disruption of supply chains also forced farmers to cull at least two million animals across the U.S. (Kevany 2020; Scott-Reid 2020). 18 Human and non-human vulnerabilities to COVID-19 are deeply connected, but rendered unequal by the profit-driven organization of relations between human labor and animals in production chains. The "wet markets" where COVID was first detected are hosts for "all kinds of natural commodities, from exotic wild animals like snakes to domesticated livestock like hogs" (Huber 2020: 3), but these wet markets are created by globally interdependent economic growth policies and related political cultures (McMullen 2015). According to Quammen (2020: 14), "We cut the trees; we kill the animals or cage them and send them to markets. We disrupt ecosystems, and we shake viruses loose from their natural hosts. When that happens, they need a new host. Often, we are it." The root causes of human and animal health-care crises during the pandemic (and beyond) therefore do not originate in 'nature' but in the values and power of the political economy which organizes the relations between humans and animals (Benton 1993; Nibert 2013; Massé 2016; Mullen 2015). Since the privatization of the commons, the "global expansion of capitalism, together with its requisite increase in structures of power and domination, are responsible for the intensification of similar injustices for non-human animals" (Painter 2016: 126), including the mass euthanasia of animals during pandemics. Alternatives to the growth paradigm call for a nature-based approach to human and non-human wellbeing, based on an understanding that human survival and nature's survival are both operationally and ideologically linked, and that quests for transformation and justice inclusive of humans and non-humans should also occur in tandem (Emel and Nirmal 2020 forthcoming; Hribal 2003). The campaigns for non-anthropocentric multispecies justice could easily be co-opted by the cultural politics of ethnonationalist neoliberal political regimes to incentivize violence against vulnerable minorities. For example, we find a relationship between the demands of "cow vigilante" groups and racism and violence against Muslims in South Asia (Jain 2019; Human Rights Watch 2019).

18 A US webinar and website hosted by The National Pork Board, 'COVID-19: Animal welfare tools for pork producers', lists among its resources on emergency planning links to 'euthanizing' animals by methods including gunshot, electrocution, carbon dioxide, manual blunt force trauma and 'ventilation shutdown.' https://library.pork.org/media/?mediaId=7BD2613C-7E2A-452A-9D9C5CC76F7AF2E9
Conclusions

Achieving clinical immunity from the SARS-CoV-2 virus will not address the underlying forces of capitalism and racism that are responsible for social and ecological vulnerabilities in the Virocene. If these forces remain unchecked, the world order of the Virocene era might become more racialized, nepotistic, unequal, unjust, autocratic, and militarized to an extent not previously seen. Likewise, the very forces responsible for the vulnerabilities and insecurities of the Virocene epoch might emerge victorious, rendering society even more vulnerable to further pandemics. There is also a real danger that the COVID-19 pandemic will be exploited to allocate and consolidate resources under the dominant neoliberal economic growth model, tinged in some states with authoritarianism, as a pathway to economic recovery, resulting in a highly militarized and racialized world order (Transparency International 2020). The Virocene epoch foregrounds the urgency of moving forward with counter-hegemonic ways of responding to social and natural vulnerabilities. COVID-19 has exposed the moral and pragmatic failures of capitalism and racism as ways of organizing human-nature relations. Moving forward with counter-hegemonic modalities of response requires us to problematize and de-normalize society's complicity with capitalism and racism as the 'normal' way of organizing human-nature relations and work. We need to begin a critical reflection on the subjectivities, regimes of truth, relationships, and power structures necessary to realize the possibility of a new world order, by reflecting on how the vulnerabilities of the current world order came into being and are sustained. The vulnerabilities of the Virocene epoch (shortages, inequalities, sociological hardships, insecurities, and anxieties) result from the ways in which capitalism and racism organize relations between nature and society. The terms through which capitalism organizes relations between humans and nature are best captured through Marx's understanding of the 'appropriation of nature' as a universal phenomenon of social metabolism:

In the social production of their life, [people] enter definite relations that are indispensable and independent of their will, relations of production which correspond to a definite stage of development of their material productive forces. The sum total of these relations of production constitutes the economic structure of society, the real foundation, on which rises a legal and political superstructure, and to which correspond definite forms of social consciousness…. From forms of development of the forces of production these relations turn into their fetters. Then comes the period of social revolution. With the change of the economic foundation the entire immense superstructure is more or less rapidly transformed. In considering such transformations the distinction should always be made between the material transformation of the economic conditions of production, which can be determined with the precision of natural science, and the legal, political, religious, aesthetic, or philosophic - in short, ideological - forms in which men become conscious of this conflict and fight it out.
Just as our opinion of an individual is not based on what he thinks of himself, so can we not judge such a period of transformation by its own consciousness; on the contrary, this consciousness must rather be explained from the contradictions of material life, from the existing conflict between the social forces of production and the relations of production (Marx 1859: 43-44).

Historically, the social and ecological vulnerabilities arising from metabolic processes led to various forms of consciousness that reinforced capitalist orders and opened possibilities for alternatives (e.g., socialist and social welfare economies). These alternative orders failed to replicate and sustain themselves globally against the ideological and militant forces of capitalism, making capitalism the longest-surviving order, one showing a remarkable ability to recover from multiple crises (Foster 1999). The Virocene moment illustrates the unique reality that capitalism has become helpless in coping with the challenges of climate change, and is hardly better at constraining pandemics. Nonetheless, humans are themselves showing signs of becoming a formidable force against capitalism by organizing globally around shared norms of social and ecological justice. The Virocene epoch also points to the capitalist system as being fundamentally responsible for producing human and non-human vulnerabilities. It also reveals how capitalism's own rationalities, tools, and power pose challenges to its survival. The disruptions caused by the Virocene and its social and environmental consequences have not completely de-normalized the economic, cultural and political orthodoxies on which capitalism's power rests through consensual and coercive means. De-normalization faces the challenge of the continuing political and cultural dominance of capitalist rationality. At the same time, the Virocene has not ruptured the widespread belief that human-nature relations mediated by wage labor and capital are the most efficient means of achieving human and ecological wellbeing. In addition to the deprivations of labor created by wage-labor relations, capitalism also creates a metabolic rift between humans and nature (Foster 1999), further constraining flexibility in organizing the relations between them in ways other than those aligned with the capitalist worldview, ways which would free humans and nature to be co-creators of mutually sustainable survival. Marx noted:

Large landed property reduces the agricultural population to an ever decreasing minimum and confronts it with an ever growing industrial population crammed together in large towns; in this way it produces conditions that provoke an irreparable rift in the interdependent process of the social metabolism, a metabolism prescribed by the natural laws of life itself. As a result, the vitality of the soil is squandered, and this prodigality is carried by commerce far beyond the borders of a particular state (Marx 1999 [1894]: 588).

The vulnerabilities of the Virocene epoch have highlighted the need for radical changes in capitalism's and racism's hold over governance and in the persistence of this metabolism, both locally and globally. These are unlikely to happen as long as the views of the oppressed are no different from those of the oppressor, the former ending up reproducing the same world order either out of fear of taking risks to create change, or from fear of not knowing alternative world orders, or because they view the oppressor's world as 'normal' (Freire 1970).
The dissent against the failures of capitalism, projected onto the state during the Virocene, has acquired some flexibility (seen for example in Black Lives Matter protests). How state policies will evolve in the Virocene remains uncertain, as those movements with the political power to produce change are more geared toward changing the "form of the state" than its "capitalist nature." Hence, they enable the state to cope with challenges to its legitimacy arising from crises of capitalism and racist nationalism. 19 As is evident in the history of capitalism, the role of the state will continue to be the primary determinant of how dissent against capitalism and racism, evident in current protest movements, will metamorphose into emancipatory politics reflecting society's desire to create a new normal way of living. 20 The Virocene epoch thus presents us with the challenge of creating a new social order without falling into the anthropocentric tropes of "nihilistic protest and fascistic accommodations" (Harvey 2018) now proliferating in the name of sustainability, conservation, solidarity, and "the web portal 'Ecosystem Marketplace' [that] offers information updates and investment and price trend data on carbon, water and biodiversity markets" as green alternatives to capitalism (Fairhead et al. 2012: 238; see also Castree 2010; Robbins and Luginbuhl 2005). The new subjectivities, regimes of truth, and power embodied in green capitalism demonstrate its enormous capacity to appropriate the language and practices of its potential antagonists to reproduce its social and political power and legitimacy, serving interests beyond those of capitalism itself in expanding accumulation. I do not dismiss the idea that these alternatives embody dissent against, and hope amid, capitalism. My contention, rather, is that these alternatives are vulnerable to cooption by hegemonic powers because they are not grounded in a theory of social and ecological justice that is radically different from that of their antagonists. We have seen society's yearning for justice, which is often heightened during emergencies, subverted over and over again, especially when framed in terms of rights, obligations, responsibility, equal opportunity, the distribution of wealth, opportunities, and privileges within a society, or the creation of a society that works for everyone and for the environment. We can learn from the Haitian revolution (1791-1804) that "pushed the universalism of natural rights to its ultimate fulfilment in actualizing human freedom by overthrowing slavery" (Fick 2007: 395), and we must recognize that "the substantive meanings of such abstract concepts as emancipation, liberty, equality, citizenship, or even independence, were by no means self-evident", then or today (p. 396). Their meanings are constructed, made self-evident and normalized in particular contexts. After their 13-year struggle against incredible odds, the Haitian revolutionaries sought to forge their own future by tailoring their rights, aspirations, and strategies to suit the needs and divergent concerns of rival factions (Fick 2007: 396). Like the revolutionaries of Santo Domingo (Saint Domingue as it was then known), we must historicize the specific content of how moral norms naturalize ideas of freedom, justice, and equality and turn them into lived realities that "justified their overthrow of the governing structures oppressing them" (ibid: 395).
To do so, we must upend the ways in which rights are conceived under capitalism and redefine the meaning of freedom. We must turn the abstract concepts of natural rights that were used to legitimize oppression into human and ecological rights, then position those rights as universal rights that form a moral basis for attempts to overthrow a system of oppression that considers human beings and nature as property. Dissent and emancipatory politics must evolve from the sites of oppression, in the same way as

…[t]he French and American revolutions had initially set forth declarations of natural and inalienable rights and defined these by the standard of universal human equality, but the slaves of the Haitian revolution, the unimaginable event, actually fulfilled the ultimate meaning of natural rights by overthrowing slavery in their struggle for self-emancipation and then national self-determination. (Fick 2007: 414)

The emancipation of Haiti was based on universal norms about equality and liberty for all: freedom from slavery, "not in the western metaphorical sense of political oppression or subservience, but as a system grounded in the individual ownership of other human beings as a form of property" (ibid: 395). The power of these emancipatory struggles is deeply linked with justice, which itself is linked with the moral bases of rights. In political ecology, discussions of the power-justice nexus occupy a central place amid efforts to foster an understanding of the complex ways that power is produced and the "critical role of nonhuman elements in co-constitution of society - technology - nature" (Ahlborg and Nightingale 2018: 381; also see Castree and Braun 2001; Rocheleau et al. 1996). Political ecologists have argued that "combining power perspectives is one of political ecology's strengths, which should be nurtured through a continuous examination of a broad spectrum of social science theories on power" (Svarstad et al. 2018: 350). Discussions of power in political ecology focus on how power is implicated in resource choices and governance, the relations between human agency and constitutive power, and the ways in which conflict over access to environmental resources has been linked to the political and economic legacies of the colonial era (Bryant 1998). The emancipatory potential of political ecology in the Virocene era is nonetheless constrained, because its theories of power are not firmly grounded in a robust theory of justice built on the moral bases of social and ecological rights. Moreover, following O'Keefe (forthcoming), political ecology's theories of justice could better emphasize how the universalist worldview of capitalism is localized and how it appropriates, configures, and disciplines subjectivities and power relations in localities to function according to the imperatives of capital. The limitations of political ecology's theories of justice have weakened the field's emancipatory potential and its ability to address the challenges of the neoliberal state and of ecological crises. The Virocene is an epoch of well-founded pessimism about humanity's survival. Despair, evil, and helplessness, rather than hope, goodness, and redemption, now dominate the consciousness of both the powerful and the powerless. Yet, paradoxically, periods of isolation and fear also create opportunities for critical reflection on human and societal vulnerability. No virus can completely overcome humanity's potential or nullify its inborn capacity for self-preservation.
Will there be a sober reflection on the certain but unpredictable causes and timing of death, and on the limits of human powers to comprehend and act upon the forces of nature? Will the "human power now overwhelmed by the power of nature" bring "nature and power into a sustainable balance"? (Radkau 2013: 329). Will the current economic crisis give new life to radical thinking about the highly romanticized and fetishized attachment to individual freedoms and capitalism, and about how people view the role of the state? Will such thinking transform the way we want to live, our relations with each other, and the environment? Will critical reflection lead to rediscovering ourselves and our agency in the world as communal and relational beings? How will political choices about ethics and values, and critiques of our leaders' performance, affect social and ecological well-being? Will they lead to demilitarization and the dismantling of racism, and cultivate an inclusive, equitable and just sense of belonging to a nation and connection with each other? Our answers to these questions must be powered by creative thinking and by social and political movements premised on ecological wellbeing and justice. Promising counter-hegemonic ideas, practices and politics that problematize what is considered normal, and that seek to create new ways of living by radically restructuring the political economy of human-nature relations, abound across the world. But their success at becoming 'mainstream' depends on the extent to which they let rights-centered perspectives of justice drive their intellectual and policy efforts to reclaim power over neoliberal and racialized regimes and to mobilize society around their alternative ways of organizing human-nature relations. Rather than taking for granted that justice and power are coextensive, the challenge for political ecology is to focus on a moral basis of rights as the connection between justice and power. As T.D. Campbell (1974) notes, "formal justice is insufficient to establish general equivalence between justice and rights" (p. 449). The efforts of political ecologists to understand and create alternative pathways for freedom from growth-driven nature-society relations must therefore be pursued within a social and ecological justice framework. While the appropriation of nature by capitalism and racism is the primary impediment to coping with the Virocene's social and ecological vulnerabilities, the emancipatory struggles needed to address them are issues of justice rather than ecology. Likewise, the confrontation and transformation of capitalism's and racism's valuation of humans and nature are also firstly a matter of justice. By investing more effort toward theorizing the rights-justice-power nexus to cope with the Virocene's vulnerabilities, political ecology could enhance its contribution to emancipatory movements seeking to defeat the "morbid symptoms" emerging in the midst of the historically unprecedented possibilities for revolutionary change that the Virocene offers, such that these symptoms do not become harbingers of a possible catastrophic future. What I seek to present in my second article (Fernando 2020) is a theory of social and ecological justice that functions both as a form of critical inquiry, through which to understand how social and ecological inequalities and justice arise and function, and as a form of critical praxis, that is, as a means of both challenging and eventually overturning capitalism and racism, and of creating alternatives.
2020-07-23T09:09:38.066Z
2020-01-21T00:00:00.000
{ "year": 2020, "sha1": "8f93426c8a2baaef92e7bf039d367cb307489b77", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2458/v27i1.23748", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "97c866f47a879edc4d0746a984e25dbbbfa03d07", "s2fieldsofstudy": [ "Sociology", "Political Science", "Environmental Science" ], "extfieldsofstudy": [ "Sociology" ] }
174808070
pes2o/s2orc
v3-fos-license
Chiral magnetic interlayer coupling in synthetic antiferromagnets

The exchange coupling underlies ferroic magnetic ordering and is thus the key element that governs the statics and dynamics of magnetic systems. This fundamental interaction comes in two flavors: symmetric and antisymmetric coupling. While symmetric coupling leads to ferro- and antiferromagnetism, antisymmetric coupling has attracted significant interest owing to its major role in promoting topologically non-trivial spin textures that promise high-speed and energy-efficient devices. So far, only a rather short-ranged antisymmetric exchange coupling, limited to a single magnetic layer, has been demonstrated, while the symmetric coupling also leads to long-range interlayer exchange coupling. Here, we report the missing component: a long-range antisymmetric interlayer exchange coupling in perpendicularly magnetized synthetic antiferromagnets with parallel and antiparallel magnetization alignments. Asymmetric hysteresis loops under an in-plane field unambiguously reveal the unidirectional and chiral nature of this novel interaction, which cannot be accounted for by existing coupling mechanisms and which results in canted magnetization alignments. This can be explained by spin-orbit coupling combined with the reduced symmetry in multilayers. This new class of chiral interaction provides an additional degree of freedom for engineering magnetic structures and promises to enable a new class of three-dimensional topological structures.

Ferromagnets (FMs) and antiferromagnets (AFMs) possess collinear spin alignments within magnetic domains due to the symmetric, or Heisenberg, exchange coupling. While this conventional coupling is well known, a different coupling, the antisymmetric exchange or Dzyaloshinskii-Moriya interaction (DMI), has recently moved into the forefront of interest, as it leads to non-collinear and chiral spin textures [8-19]. DMI only manifests in systems with spin-orbit coupling (SOC) and with a bulk or structural inversion asymmetry, e.g., in cubic B20 alloys or at interfaces between FMs and heavy metals (see Fig. 1a) [11]. Besides the intralayer exchange coupling, in magnetic multilayers consisting of alternating ferromagnetic and non-magnetic spacer layers, the FMs can also be coupled to each other by interlayer exchange coupling (IEC) [14,21]. Phenomenologically, the IEC shares common features with the symmetric Heisenberg exchange within each magnetic layer: it is bilinear in spins and isotropic under rotation, favoring collinear spin alignment. In complete analogy to the experimentally established and theoretically understood symmetric and antisymmetric exchange within a single magnetic layer, one can anticipate that multilayers exhibit not only a symmetric but also an antisymmetric IEC. Specifically, based on simple symmetry considerations, it is natural to expect the emergence of such an antisymmetric IEC in systems with broken inversion symmetry (yellow and green boxes in Fig.
1b) and strong SOC provided by a non-magnetic spacer. A remarkable feature of the antisymmetric IEC is that it promotes chiral magnetization configurations perpendicular to the film plane, in contrast to the interfacial DMI, which leads to chiral spin structures within individual layers. This suggests the possibility of designing three-dimensional topological structures based on this novel interaction. Despite its fundamental importance as well as the associated technological promises, 10,14,22,23 clear evidence of the antisymmetric IEC has so far remained remarkably elusive. In this Letter, we present the experimental demonstration of such a hitherto unobserved antisymmetric IEC in perpendicularly magnetized synthetic antiferromagnets (SAFs) with parallel and antiparallel magnetization alignments. We study the magnetization reversal in different multilayer stacks and, using judiciously designed field sequences, identify the presence of an antisymmetric IEC from the unidirectional and chiral magnetization reversal.

We start by developing the necessary concepts to unambiguously identify the effect of the antisymmetric IEC. In general, the magnetization reversal in FMs is invariant upon inversion of the magnetic field direction. However, this field-reversal invariance does not hold if the inversion symmetry is broken in a given physical system. One particular example is the interfacial DMI. 17 In the presence of interfacial DMI, domain walls (DWs) experience different effective fields according to their magnetic orderings, up-to-down (U-D) and down-to-up (D-U), under an in-plane magnetic field HIN, as the core magnetizations within U-D and D-U DWs align along opposite directions due to the handedness preferred by the DMI. Consequently, when the DW moves, its velocity becomes asymmetric with respect to HIN, depending on the magnetic ordering. 1,3,24,25 Analogously, the antisymmetric IEC can break the field-reversal symmetry of the magnetization reversal. In the absence of the antisymmetric IEC, HIN cannot break the inversion symmetry but only assists in lowering the energy barrier for the magnetization reversal, independent of the switching polarity (left panels of Fig. 1c and 1d). However, if the antisymmetric IEC is present, the chiral magnetization configurations are affected differently by HIN, being assisted or hindered in their magnetization switching depending on the sign of HIN and the magnetization configuration. In particular, they exhibit contrasting energy barriers for magnetization switching from parallel to antiparallel and from antiparallel to parallel alignments, as well as for the switching of D-U and U-D, as shown in the right panels of Fig. 1c and 1d (Supplementary Note 1). Accordingly, one would expect different switching fields with respect to the sweeping direction of the magnetic field, which in turn results in asymmetric magnetic hysteresis loops.

To test experimentally whether the aforementioned asymmetric switching exists, which would indicate the presence of an antisymmetric IEC, we measure the switching fields of typical SAFs of Ta(4)/Pt(4)/Co(0.6)/Pt(0.5)/Ru(tRu)/Pt(0.5)/Co(1)/Pt(4) (layer thicknesses in nanometers) by sweeping the out-of-plane magnetic field, Hz, whilst simultaneously applying HIN (Methods section). Here, the two Co layers are coupled to each other via the symmetric IEC and are perpendicularly magnetized, with either parallel or antiparallel magnetization alignment at remanence. The magnetic hysteresis loops are measured via the anomalous Hall effect (AHE), using the measurement configurations shown in Fig.
1e. For comparison, we also measure the switching fields of the reference sample Pt/Co/Pt/Ru, which is nominally the same as the bottom half of the SAFs but without any IEC.

Figure 2a shows the magnetic hysteresis loops of Pt/Co/Pt/Ru and of Pt/Co/Pt/Ru/Pt/Co/Pt with tRu = 0.4 and 2.7 nm, for which the symmetric IEC is ferromagnetic and antiferromagnetic, leading to parallel and antiparallel alignment of the layers, respectively. Square hysteresis loops are clearly seen for all structures, showing that they have strong perpendicular magnetic anisotropy (PMA). Importantly, we find that the hysteresis loops for the SAFs with parallel and antiparallel coupling become significantly asymmetric when HIN is applied. For the parallel coupling case, at |μ0HIN| = 100 mT, a difference of approximately 0.7 mT in the switching fields (Δμ0HSW) between U-D and D-U is found. For the antiparallel coupling case, the hysteresis loop is seemingly biased to the left (right) at μ0HIN = 100 mT (-100 mT), giving rise to Δμ0HSW = 1.1 and 1.4 mT for switching from parallel to antiparallel and from antiparallel to parallel alignments, respectively. Such asymmetric behavior is in striking contrast to the results obtained from our reference sample Pt/Co/Pt/Ru, where the magnetic hysteresis loops are symmetric with respect to Hz = 0 irrespective of the sign of HIN. The measured absence of inversion symmetry in the hysteresis loops is in obvious disagreement with field-reversal symmetry, demonstrating the presence of a symmetry-breaking interaction, such as the antisymmetric IEC, in our SAFs. Moreover, we note that the field-reversal symmetry observed for Pt/Co/Pt/Ru in the same setup also excludes any possible artifact from a misalignment of the in-plane magnet, which could otherwise cause an asymmetry in the hysteresis loop.

To understand the origin of the asymmetric switching behavior, we next measure the azimuthal-angular dependence of HSW, as shown in Fig. 2b and 2c. Here, the magnitude of the in-plane field is kept at |μ0HIN| = 100 mT, while its direction is rotated from 0° to 360°. In systems with inversion symmetry, one expects to see an isotropic or uniaxial (or multiaxial) anisotropy depending on the crystalline properties of the thin films, which is indeed found in our reference sample (see Fig. 2b).
Notably, however, we find that the magnetization switching for the SAFs with both parallel and antiparallel alignment exhibits a unidirectional anisotropy: for the parallel (antiparallel) alignment, the symmetric (S) axis lies along the direction HIN // 75° (150°) and the asymmetric (AS) axis along HIN // 165° (240°), respectively (this will be discussed in detail later). This highlights the unidirectional nature of the observed interlayer coupling. Interestingly, for the antiparallel coupling, we obtain markedly different unidirectional features in the two magnetic layers: for the top Co layer (FMtop), the value of |μ0HSW| for the U-D (D-U) switching is biased toward 60° (240°), while for the bottom Co layer (FMbottom), it is biased along the opposite direction. This opposite unidirectional behavior between the two magnetic layers unambiguously reveals that the observed unidirectional effect has a chiral nature (see Supplementary Note 1), in line with an antisymmetric IEC. Here, we would like to note that the observed chiral behavior is radically different from that expected from currently known magnetic interactions. For example, the biquadratic IEC 26 can also introduce similar non-collinear configurations; it leads, however, to isotropic behavior without preferred handedness, contrary to our observations, as seen in Fig. 2c. Furthermore, the interfacial DMI cannot account for such asymmetric switching behavior, as this interaction cannot produce the observed asymmetric hysteresis on its own unless it is combined with additional symmetry-breaking effects such as DC spin currents 27 or laterally asymmetric nanostructures 28 (see Supplementary Note 2).

The antisymmetric IEC is expected in particular to modify the dependence of HSW on HIN, which we plot in Fig. 3. For the structure with parallel coupling, the asymmetric behavior between U-D and D-U switching is again clearly found when HIN is applied along the AS axis, while almost symmetric behavior is seen for HIN // S (Fig. 3a and 3c). In particular, for the antiparallel coupling case, one can see that the values of HIN at the local maxima (or minima) are shifted away from HIN = 0 mT for HIN // AS, and the direction of the shift reverses for the opposite switching polarity (Fig. 3b). This shift of HSW along the HIN axis is a robust indicator of the presence of the antisymmetric IEC: the offset in the curves of HSW vs. HIN indicates the presence of a built-in effective field, the sign and magnitude of which rely on the relative orientation of the magnetization between the top and bottom Co layers. This is analogous to the internal fields from the interfacial DMI, which depend on the magnetic ordering of the DW structures. 24 However, this is in sharp contrast to the case without the antisymmetric IEC, where HIN always assists in switching the magnetization of perpendicularly magnetized materials irrespective of the sign of HIN and the switching polarity.

To validate that the observed asymmetric switching behavior arises from the antisymmetric IEC, we perform numerical calculations based on a macro-spin model incorporating the symmetric and antisymmetric IEC (Methods section). The calculated azimuthal-angular and field dependences of HSW for the parallel and antiparallel couplings are presented in Fig. 3c and 3d. The numerical calculations are qualitatively in good agreement with the experimental data, clearly reproducing the asymmetric and off-centered HSW vs.
HIN, as well as the unidirectional and chiral azimuthal-angular dependence of HSW (see Supplementary Note 3). This firmly supports our conclusion that the unidirectional switching behavior is attributable to the antisymmetric IEC. The quantitative values of the switching fields and the switching sequences of the top and bottom Co layers are found to differ from our numerical calculations. This is most likely due to the chosen computational parameters, and to thermal effects and the dipolar interaction, which are not taken into account in the calculations but are present in the experiments. 29

To put our experimental findings on solid theoretical foundations and uncover the minimal ingredients that give rise to the observed antisymmetric IEC, we employ theoretical ab initio methods to scrutinize this coupling in thin magnetic heterostructures (Methods section and Supplementary Note 4). In particular, we focus on the system Co/Ru/Pt/Co with collinear magnetization within each layer. To explore the effect of the in-plane symmetry of multilayers on the antisymmetric IEC, we consider in our calculations various C1v in-plane locations of the top Co layer between the hollow sites "a" and "b" of C3v symmetry, as illustrated in Fig. 4a. One of the key manifestations of the antisymmetric IEC, of the form Dinter · (S1 × S2), is a relativistic contribution to the total energy that is asymmetric with respect to the relative angle α between the magnetic moments S1 and S2 in the two Co layers. Indeed, our electronic-structure calculations demonstrate such a unique signature of the antisymmetric IEC in the low-symmetry C1v structures (see Fig. 4b and Fig. S6), generally favoring a non-zero canting between adjacent ferromagnetic layers due to the complex interplay with the conventional symmetric IEC. To assess the overall relevance of such a chiral interlayer interaction, we estimate for comparison the magnitude of the symmetric IEC, of the form Jinter (S1 · S2), by using an effective parameter Jinter that describes the small-angle region of the non-relativistic energy dispersion. Figure 4c presents the calculated values of both interlayer exchange interactions as a function of the position of the top magnet for an originally ferromagnetic or antiferromagnetic coupling between the magnetic layers. While the symmetric coupling exceeds the typical energy scale of the chiral IEC of 1.0 meV by one to two orders of magnitude in the studied system, the latter interaction is more susceptible to changes in the symmetry of the crystal lattice. In particular, the characteristic vector Dinter is required to be perpendicular to any mirror plane connecting interaction partners in the two layers, which renders the net antisymmetric IEC zero in C3v systems but generally finite in the case of reduced symmetry (see Fig.
4b). Emphasizing the key role of in-plane symmetry breaking for this novel magnetic interaction, we note that any effective symmetry breaking, e.g., from a thickness gradient or from a lattice mismatch between different atomic layers leading to dislocations, can give rise to the appearance of the antisymmetric IEC. Indeed, we experimentally demonstrate that a small thickness gradient in our samples gives rise to an effective symmetry breaking, allowing the antisymmetric IEC with a fixed Dinter perpendicular to the thickness-gradient direction (see Supplementary Note 5). Additionally, for an appropriately asymmetric system, our ab initio calculations clearly confirm the presence of the antisymmetric IEC, which predominantly acquires its microscopic contribution from heavy metals like Pt, as a direct consequence of SOC. Therefore, we anticipate that the predicted antisymmetric IEC, as well as the corresponding chiral spin textures, can be designed by adjusting the interface chemistry 17 or by tuning the thickness of the Ru spacer layer 30 to alter the coupling between adjacent magnetic layers.

Complementing the ensemble of magnetic interactions in systems with broken inversion symmetry, our combined experimental and theoretical work establishes the antisymmetric IEC of two adjacent magnetic layers mediated by a non-magnetic spacer as an integral part of understanding and controlling three-dimensional magnetic textures. Specifically, we experimentally demonstrate the existence of this novel IEC in SAFs with parallel and antiparallel alignments, leading to asymmetric switching behaviors under in-plane bias fields. The observed asymmetric magnetization reversal is a unique signature of the chiral magnetization in the interlayer exchange-coupled layers. We identify the interplay of SOC and the reduced symmetry as the microscopic origin of the observed antisymmetric IEC. Our findings not only uncover a hidden magnetic interaction in SAFs with parallel and antiparallel coupling but also open the possibility for three-dimensional topological structures.

Sample preparation and anomalous Hall measurement. The magnetic multilayers were grown on a silicon wafer coated with 100-nm-thick SiO2 using a UHV magnetron DC sputtering system at a base pressure of 9.5 × 10⁻⁸ mbar and a working pressure of 2 × 10⁻² mbar. Multilayers of Si/Ta(4)/Pt(4)/Co(1.0)/Pt(0.7)/Ru(t)/Pt(0.7)/Co(0.9)/Pt(4) were grown at room temperature (layer thicknesses in nm). The Ru is used as the spacer, which provides strong IEC, and the Pt layers between the top and bottom Co layers are used to enhance the PMA of both ferromagnetic layers. To investigate the Ru-thickness-dependent interlayer coupling, a wedge-shaped sample of Ta/Pt/Co/Pt/Ru/Pt/Co/Pt, in which the Ru thickness was varied from 0 to 4 nm, was grown first, and the oscillatory behavior of the magnetic hysteresis loops was measured by the magneto-optical Kerr effect in a polar configuration (pMOKE). The Ru thicknesses used in the main text and supplementary notes were selected from this result. The hysteresis loops of the magnetic multilayers were measured via the anomalous Hall signal on approximately 5 × 5 mm² continuous films using the Van der Pauw method. For the transport measurement, a sinusoidal current with a frequency of 13.7 Hz and a peak-to-peak amplitude of ~1 mA was used as the current source, and a lock-in technique was used to detect the Hall signal.
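As a schematic primer for the macro-spin modeling described next, consider the simplest two-spin picture of the competition between the symmetric and antisymmetric IEC. This aside is illustrative only and is not part of the original Methods: anisotropy and Zeeman terms are neglected, and both unit moments are confined to the plane perpendicular to Dinter, so that the interlayer energy depends only on the angle α between them.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With $\alpha$ the angle between the two layer moments, the interlayer energy is
\begin{equation}
  E(\alpha) = -J_{\mathrm{inter}}\cos\alpha - D_{\mathrm{inter}}\sin\alpha ,
\end{equation}
so $dE/d\alpha = J_{\mathrm{inter}}\sin\alpha - D_{\mathrm{inter}}\cos\alpha = 0$
gives the equilibrium canting
\begin{equation}
  \tan\alpha^{*} = \frac{D_{\mathrm{inter}}}{J_{\mathrm{inter}}}, \qquad
  E(\alpha^{*}) = -\sqrt{J_{\mathrm{inter}}^{2} + D_{\mathrm{inter}}^{2}} .
\end{equation}
% The sign of D_inter fixes the sign of alpha*, i.e., one handedness of canting;
% for |D/J| = 0.1 (the parallel-coupling ratio used below), alpha* is about 5.7 deg.
\end{document}
```

In the full model below, the strong PMA terms penalize any in-plane tilt and therefore reduce this canting well below the zero-anisotropy estimate, but the fixed sign of α*, set by the sign of Dinter, survives; this is the chirality probed in the experiments above.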
Macro-spin modeling. In order to explore the effect of the antisymmetric IEC and other magnetic interactions on the magnetization reversal, we employed a macro-spin model that finds the equilibrium magnetization configuration through minimization of the total free energy functional, consisting of anisotropy, Zeeman, and symmetric and antisymmetric exchange energies, which is given by

F = -Kbottom tbottom (mbottom · ẑ)² - Ktop ttop (mtop · ẑ)² - μ0 Ms tbottom (mbottom · B) - μ0 Ms ttop (mtop · B) - Jinter (mtop · mbottom) - Dinter · (mtop × mbottom).

Here, Ms is the saturation magnetization, m the magnetization unit vector, K the effective anisotropy constant, μ0 the vacuum permeability, B the external magnetic field, t the thickness of a magnetic layer, ẑ the unit vector normal to the surface, Jinter the coefficient of the symmetric IEC, and Dinter the DMI-like vector of the antisymmetric IEC. The subscripts "top" and "bottom" denote the top and bottom magnetic layers, respectively. For a model system of Pt/Co/Pt/Ru/Pt/Co/Pt, we used the following material parameters: Ms = 1.1 × 10⁶ A/m; K = 2.24 × 10⁵ and 5.25 × 10⁵ J/m³ for the bottom and top layers, respectively. The coefficients of the symmetric IEC, Jinter = 2.1 × 10⁻⁴ and -2.0 × 10⁻⁴ mJ/m², and of the antisymmetric IEC, Dinter, corresponding to |Dinter/Jinter| = 0.1 and 0.03, were used for the SAFs with parallel and antiparallel coupling, respectively.

First-principles calculations. Using material-specific density functional theory as implemented in the full-potential linearized augmented-plane-wave (FLAPW) code FLEUR, 31 we studied the electronic structure of a thin Co/Ru/Pt/Co film in a super-cell geometry. The lattice constant of the in-plane hexagonal lattice was 5.211 a0 (where a0 is the Bohr radius), the distance between the two Co layers was 12.765 a0, and we assumed face-centered cubic stacking but variable in-plane positions of the top magnetic layer. Based on the generalized gradient approximation, 32 the self-consistent calculations of the system without SOC were performed using a plane-wave cutoff of 4.0 a0⁻¹, and the full Brillouin zone was sampled by 1024 points. By including the effect of SOC to first order, we unambiguously determined the magnitude of the antisymmetric interlayer exchange interaction from the change in the energy dispersion of coned spin spirals 33 propagating perpendicular to the film. In these force-theorem calculations with SOC, the Brillouin zone was sampled by 4096 points. By choosing a large enough distance between different super cells, we explicitly ensured that periodic images of the slab do not contribute to the obtained magnetic interaction parameters.
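The following minimal sketch is an illustrative re-implementation of the macro-spin model above, not the authors' code. Ms, K, Jinter, |Dinter/Jinter| and the Co thicknesses follow the stated parameters (thicknesses from the growth recipe); the direction of Dinter, the starting configuration, and the numerical scaling are assumptions made here.

```python
# Two-macro-spin relaxation under the free energy F defined above (a sketch).
import numpy as np
from scipy.optimize import minimize

MU0 = 4e-7 * np.pi                     # vacuum permeability (T m/A)
MS = 1.1e6                             # saturation magnetization (A/m)
K_BOT, K_TOP = 2.24e5, 5.25e5          # effective PMA constants (J/m^3)
T_BOT, T_TOP = 1.0e-9, 0.9e-9          # Co thicknesses (m), from the growth recipe
J_INT = 2.1e-7                         # symmetric IEC: 2.1e-4 mJ/m^2 (parallel case)
D_INT = np.array([0.1 * J_INT, 0, 0])  # |Dinter/Jinter| = 0.1; x-axis is an assumption
SCALE = 1e9                            # rescales J/m^2 to O(1) for the optimizer

def unit(theta, phi):
    """Unit magnetization vector from polar/azimuthal angles."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def free_energy(x, h_ext):
    """Areal free energy: PMA + Zeeman + symmetric and antisymmetric IEC."""
    m_bot, m_top = unit(x[0], x[1]), unit(x[2], x[3])
    e = 0.0
    for m, k, t in ((m_bot, K_BOT, T_BOT), (m_top, K_TOP, T_TOP)):
        e -= k * t * m[2] ** 2                  # uniaxial anisotropy along z
        e -= MU0 * MS * t * np.dot(m, h_ext)    # Zeeman term, h_ext in A/m
    e -= J_INT * np.dot(m_top, m_bot)           # symmetric (Heisenberg-like) IEC
    e -= np.dot(D_INT, np.cross(m_top, m_bot))  # antisymmetric (chiral) IEC
    return SCALE * e

# Relax from a slightly tilted up-up state at zero external field.
x0 = np.array([0.02, np.pi / 2, 0.02, -np.pi / 2])
res = minimize(free_energy, x0, args=(np.zeros(3),), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 50000})
m_b, m_t = unit(res.x[0], res.x[1]), unit(res.x[2], res.x[3])
alpha = np.degrees(np.arccos(np.clip(np.dot(m_b, m_t), -1.0, 1.0)))
print(f"equilibrium canting angle: {alpha:.4f} deg "
      f"(small, since the PMA terms dominate Dinter)")
print(f"chirality sign (x-component of m_top x m_bot): "
      f"{np.sign(np.cross(m_t, m_b)[0]):+.0f}")
```

Sweeping the z-component of h_ext while seeding each minimization from the previous minimum would trace out hysteresis loops of the kind shown in Fig. 3; the sketch above only demonstrates the equilibrium canting and its fixed handedness.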
… experiments. D.-S.H. performed the sample fabrication with support from R.L. and H.J.M.S. D.-S.H. and K.L. performed the transport measurements with W.Y. and the data analysis under the supervision of M.K. and M.-H.J. J.P.H. and Y.M. performed the first-principles calculations and the analysis of the relevant data. D.-S.H. and C.-Y.Y. performed the numerical calculations based on a macro-spin model. D.-S.H. wrote the paper with K.L., J.H., and M.K. All authors discussed the results and commented on the manuscript.

Figure 2. Chiral and unidirectional magnetization switching behaviors. a, Magnetic hysteresis loops measured by the anomalous Hall effect for the reference Pt/Co/Pt/Ru (top panel) and for SAFs of Pt/Co/Pt/Ru/Pt/Co/Pt with parallel (middle panel) and antiparallel (bottom panel) coupling. The black and red curves indicate the hysteresis loops under application of a negative and a positive in-plane field of |μ0HIN| = 100 mT, respectively, applied along the AS axis as indicated in Fig. 2b and 2c. For the Pt/Co/Pt/Ru/Pt/Co/Pt, the difference in switching fields between U-D and D-U, Δμ0HSW, corresponds to ~0.7 mT. Four representative magnetization configurations appearing during the magnetization reversal are indicated by black arrows. b, Azimuthal-angular dependence of the switching field of Pt/Co/Pt/Ru (top panel) and of the ferromagnetically coupled multilayers of Pt/Co/Pt/Ru/Pt/Co/Pt (bottom panel). The red and blue symbols denote the U-D and D-U switching polarities, respectively. The lines are guides to the eye. c, Azimuthal-angular dependence of the switching field of the top (top panel) and bottom (bottom panel) Co layers of the antiferromagnetically coupled Pt/Co/Pt/Ru/Pt/Co/Pt. AS and S denote the asymmetric and symmetric axes, respectively.

Figure 3. In-plane field dependence of the magnetization switching fields. Experimentally measured switching field HSW as a function of HIN, applied along the AS (top panel) and S (bottom panel) axes as defined in Fig. 2, in SAFs with parallel (a) and antiparallel (b) coupling. The right panels in each column of (a) represent the averaged |HSW| of the U-D and D-U switching for HIN and -HIN, respectively. For both the parallel and antiparallel coupled cases, symmetric (asymmetric) HSW with respect to HIN = 0 is found when HIN is applied along the S (AS) axis. Calculated HSW as a function of HIN for SAFs with parallel (c) and antiparallel (d) coupling, using a macro-spin model (Methods section).

Figure 4. Antisymmetric interlayer exchange from first principles. (a) Top and side view of the thin Co/Ru/Pt/Co film. The high-symmetry locations "a" and "b" are marked, and the colored arrow indicates the direction of the considered displacements of the top Co layer. (b) Microscopic schematic of the chiral interlayer exchange in the C1v structures. The collinear magnetization (grey arrows) of adjacent magnetic layers acquires a relative canting due to the antisymmetric interlayer interaction mediated by Dinter, which is perpendicular to the shaded mirror plane. (c) Effective interlayer coupling constants Jinter (solid lines) and -Dinter (dotted lines) as a function of the position of the top Co layer, where squares and circles refer to the cases of parallel and antiparallel coupling, respectively.
2018-09-04T16:40:18.000Z
2018-09-04T00:00:00.000
{ "year": 2018, "sha1": "5d3eccc033694a48a29b2164278da3e7c7c9f4fe", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1809.01080", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e0f68949e9fae0f0199924a481dbdf70cb611a56", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science", "Medicine" ] }
46940704
pes2o/s2orc
v3-fos-license
Angular Photochromic LC Composite Film for an Anti-Counterfeiting Label

In harsh application environments, improving the mechanical properties of liquid crystal materials is a fundamental and important problem in the design of anti-counterfeiting materials. In this paper, by a stepwise polymerization of first photo-polymerization and subsequently thermal-polymerization, a coexistent polymer-dispersed network was constructed for the first time in cholesteric liquid crystal materials containing a photo-polymerizable system of urethane acrylate and a thermo-polymerizable system of isocyanate. Results revealed that the coexistent polymer-dispersed network exhibited largely enhanced mechanical performance, and that the networks obtained by the different methods contributed differently to the enhancement of the peel strength and toughness of the composite films. An angular photochromic anti-counterfeiting label based on a coexistent polymer-dispersed network, with enhanced mechanical properties and apparent angular discoloration characteristics suitable for practical applications, was then successfully achieved.

Introduction

Anti-counterfeiting technologies such as holograms [1], watermarks [2], coated labels [3], and so on have been developed to tackle counterfeiting problems for many decades. However, traditional anti-counterfeiting technologies are becoming known to counterfeiters, and they are not able to meet the various demands from different fields. In the negotiable-instrument field, covering currencies, bank documents, and so on, being easy for the public to identify is the preferred property of anti-counterfeiting materials. Furthermore, due to the harsh application environment, good mechanical properties are also urgently needed to prevent the materials from wearing out. Liquid crystal (LC), which exhibits excellent optical properties, controllability, and the characteristics of self-assembled soft matter, can be used as an anti-counterfeiting material, as has been widely reported previously [4][5][6]. Owing to its unique helical supra-molecular structure, cholesteric LC (ChLC) can selectively reflect circularly polarized incident light whose handedness is identical to that of the helix [7]. The selective reflection phenomenon can be easily observed by the naked eye and also detected by instruments. Moreover, the reflection wavelength and the circular polarization of the reflected light can both be artificially tuned [8,9]. The reflection wavelength is λ = nP sin θ, where n = (n_o + n_e)/2 is the average of the ordinary (n_o) and extraordinary (n_e) refractive indices of the ChLC, P is the cholesteric pitch corresponding to the length of a 2π molecular rotation, and θ is the angle between the surface and the viewing direction. Accordingly, for a specific ChLC material, the reflection color (supposing that the reflection wavelength is in the visible region) is directly determined by the viewing angle, which is very suitable for anti-counterfeiting purposes. However, the mechanical properties of pure LC materials are not satisfactory for application. Reports have shown that there are many different methods to reinforce the mechanical properties of a material, such as nanoparticle filling [10][11][12][13], mineral reinforcing [14][15][16], polymer compositing [17][18][19], and so on. Research on improving the mechanical properties of liquid crystal materials, by contrast, has still received little attention. Based on the excellent mechanical properties and easy processing characteristics of polymers, Prof. H.
However, the mechanical properties of pure LC materials are not satisfactory for application. Many methods exist to reinforce a material's mechanical properties, such as nanoparticle filling [10-13], mineral reinforcement [14-16], and polymer compositing [17-19], but improving the mechanical properties of liquid crystal materials specifically has received little attention. Exploiting the excellent mechanical properties and processability of polymers, Prof. H. Yang composited ethylene-vinyl acetate (EVA) with cholesteric side-chain liquid crystal polymers (ChSCLCP) to improve the mechanical properties of an infrared-light-shielding LC film [20]. By adjusting the ratios and manufacturing processes, a composite film was prepared with mechanical properties as good as those of pure EVA film, without loss of transmittance. Another route is to introduce a polymer network into the LC material. The polymer-dispersed liquid crystal (PDLC) system, which consists of a continuous polymer matrix with micrometer-sized LC droplets dispersed in it, can be manufactured by either UV or thermal curing. During polymerization, phase separation occurs and the LC forms a microphase-separated structure within the polymer network. The strong interaction between the continuous polymer network and the substrate endows the PDLC film with high peel strength [21-24]. Prof. Yang invented a coexistent system of polymer-dispersed and polymer-stabilized liquid crystals (PD&SLCs), in which a homeotropically aligned polymer network (HAPN) forms within the LC droplets after microphase separation between the LC and the polymer matrix, combining the advantages of the PDLC and PSLC systems. Compared with the corresponding traditional PSLC film, the PD&SLC film showed greatly improved shearing force [25,26]. Prof. Wang prepared a light-scattering display with body-temperature-controlled optical and thermal information storage based on a special "loofah-like gel network" in a super-strong liquid crystalline physical gel, finding that adding gelators to the 5CB host lets the material withstand a substantial external force [17].

In the present paper, a series of polymer dispersed cholesteric liquid crystalline films was prepared. In contrast to previously reported work based on a single polymer dispersed network formed by photo-polymerization or thermal-polymerization alone, a coexistent polymer dispersed network formed by stepwise polymerization (photo-polymerization first, thermal-polymerization second) was realized for the first time; the impact of the different polymer dispersed networks and ChLC materials on the mechanical and optical properties of the ChLC films was systematically investigated. An angular photochromic anti-fake label based on the coexistent polymer dispersed network, with enhanced mechanical properties and pronounced angular discoloration suitable for practical applications, was then successfully prepared.

Measurements

A PerkinElmer DSC8000 (PerkinElmer, Waltham, MA, USA) with a mechanical refrigerator was used to obtain the phase transitions of the polymers under dry nitrogen at heating and cooling rates of 20 °C min⁻¹; the temperature and heat-flow scales were calibrated with zinc and indium standards. Polarized optical microscopy (POM) was carried out on a Carl Zeiss Axio Vision SE64 polarized optical microscope (Carl Zeiss, Oberkochen, Germany) with a Linkam LTS420 hot stage. Spectral characterization was done on an unpolarized UV/Vis/IR spectrophotometer (PerkinElmer Lambda 950, PerkinElmer, Waltham, MA, USA) in transmission mode at normal incidence. Peel strength was measured on a universal tensile test machine (Instron 5969, Instron, Boston, MA, USA) at an extension rate of 0.5 mm s⁻¹; the samples were sandwiched between two PET films and peeled from the horizontal direction of the film, with a horizontal cross-sectional area of 1 cm².
Preparation of the Samples

The components were mixed thoroughly in the proportions specified in Table 1 until homogeneous. The mixture was then filled between two PET substrates, with a thickness of 20 ± 1 µm controlled by a spacer. Samples A1, A2, A3, and C1 were irradiated with a UV lamp (365 nm, 35 W Hg lamp, PS135, UV Flood, Stockholm, Sweden) for 30 min at room temperature; samples B1, B2, B3, and C5 were thermally cured in an oven at 363.15 K for 7 h; samples C2, C3, C4, D1, D2, D3, and D4 were first irradiated with the UV lamp for 30 min and then thermally cured in the oven at 363.15 K for 7 h.

Mesomorphic and Optical Properties of the Cholesteric Liquid Crystal Materials

Previous reports have shown that the central reflection wavelength of chiral compounds depends strongly on the content of the chiral component [29]. Accordingly, we designed three different ChLC systems — a small-molecular-weight liquid crystal system, a side-chain liquid crystal polymer system, and a polymerizable liquid crystal system — with selective reflection wavelengths covering the visible range, as shown in Table 1. Scheme 1 shows the chemical structures and basic physical parameters of the monomers, the ChSCLCP, the small-molecular-weight nematic LC, etc. Among them, the small-molecular-weight nematic LC SLC1717 is a commercial product, the ChSCLCP was obtained by conventional free-radical polymerization of different liquid crystalline monomers (for details of the synthetic route see our previous work [27,28]), and the nematic LC monomer C6M was prepared in our own laboratory. As expected, all three ChLC systems exhibited a wide cholesteric temperature range, and their central selective reflection wavelengths were 525 nm, 668 nm, and 675 nm, respectively.
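The pitch required for a target reflection color can be estimated by inverting the Bragg relation above. The sketch below also uses the standard dilute-dopant relation P = 1/(HTP·c); that relation, the average index, and the helical twisting power (HTP) value are assumptions for illustration — only the three target wavelengths come from this paper.

```python
import numpy as np

def pitch_for_target(lambda_nm, n_avg, theta_deg=90.0):
    """Invert lambda = n_avg * P * sin(theta) for the cholesteric pitch P."""
    return lambda_nm / (n_avg * np.sin(np.radians(theta_deg)))

def dopant_weight_fraction(pitch_nm, htp_per_um):
    """Dilute-dopant relation P = 1 / (HTP * c), with HTP in um^-1.
    Both the relation's applicability here and the HTP value used below
    are assumptions, not data from this paper."""
    return 1.0 / (htp_per_um * pitch_nm / 1000.0)

for lam in (525.0, 668.0, 675.0):  # the three target wavelengths in the paper
    p = pitch_for_target(lam, n_avg=1.6)
    c = dopant_weight_fraction(p, htp_per_um=11.0)  # hypothetical HTP value
    print(f"lambda = {lam:.0f} nm -> P ~ {p:.0f} nm, dopant ~ {100*c:.1f} wt%")
```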
Dependence of the Polymer Dispersed Network on the Optical and Mechanical Properties

In our previous studies, we showed that introducing a polymer dispersed network into an LC considerably improves the peel strength of the material [25]. However, the polymer dispersed network can be obtained by either photo- [21] or thermal- [22] polymerization, and the influence of the preparation method on the mechanical properties, and the interplay between the two network types, remained unclear. We therefore designed single polymer dispersed networks formed by photo-polymerization or thermal-polymerization alone, as well as a coexistent polymer dispersed network formed by stepwise photo- and thermal-polymerization, in order to find the optimal route to improved mechanical performance.

Scheme 1. The chemical structures of the materials used.

As shown in Table 2, a series of polymer dispersed ChLC films was prepared. In sample series A (A1-A3), a polymer dispersed network formed by photo-polymerization was introduced into the designed ChLCs, while in sample series B (B1-B3) the network was formed by thermo-polymerization. In sample series C, sample C1 was prepared by photo-polymerization, sample C5 by thermo-polymerization, and samples C2, C3, and C4 by photo-polymerization followed by thermo-polymerization. In sample series D (D1 and D2), a polymer dispersed network formed by photo-polymerization followed by thermo-polymerization was introduced into the designed ChLCs.

The optical and mechanical properties of all the samples were characterized by a combination of POM, UV/Vis/IR spectroscopy, and peel strength measurements. Figure 1a-f shows photographs of the samples. Samples A1, A2, and A3, with a photo-polymerized network, were more transparent than samples B1, B2, and B3, with a thermally polymerized network, as confirmed by the UV/Vis/IR spectra discussed below. Moreover, samples A1, A3, B1, and B3 displayed selective reflection, indicating that SLC1717/S811 and ChSCLCP spontaneously formed a planar orientation after introduction of the polymer dispersed network (demonstrated by the oily-streak-like texture under POM in Figure 1g,i,j,l). Samples A2 and B2, by contrast, could not spontaneously form a planar orientation, so a scattering state developed during curing, which is unsuitable for anti-fake use. Furthermore, samples B1 and B3, obtained by thermal-polymerization, scattered more than samples A1 and A3, prepared by photo-polymerization, probably because the planar orientation was partly disrupted during the heating step. To further investigate the light transmittance of the samples, UV/Vis/IR spectra were recorded (Figure 2). The overall transmittance and selective reflection intensities of A1, A2, and A3 were higher than those of B1, B2, and B3, further demonstrating that the heating step can disrupt the planar orientation of the ChLC, in accordance with the POM results. Among samples containing the same polymer dispersed network — taking A1 and A3 as an example — sample A3 showed higher transmittance and stronger selective reflection than A1; a similar trend was found for B1 and B3, indicating that ChSCLCP gives a better angular photochromic effect and is more suitable for anti-fake use. Samples A2 and B2 did not exhibit selective reflection, so they were excluded from the later tests. Although the overall optical properties of the photo-polymerized samples were better than those of the thermally polymerized ones, the mechanical properties showed an interesting trade-off. As shown in Figure 3, the maximum peel strengths of samples B1 and B3 were nearly 10 N higher than those of A1 and A3, while the maximum elongations of A1 and A3 were markedly greater, indicating that the thermally polymerized network contributes more to peel strength and the photo-polymerized network contributes more to toughness.
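A common way to quantify this peel-strength/toughness trade-off is to take the peak of the measured force-extension trace as the peel strength and the area under the trace as a toughness proxy. The sketch below does exactly that on synthetic stand-in traces; the curve shapes and numbers are invented for illustration, not digitized from Figures 3 and 4.

```python
import numpy as np

def peel_metrics(extension_mm, force_n):
    """Summarize one peel test trace: peak force (peel strength) and the
    area under the force-extension curve (a toughness proxy, in N*mm)."""
    peak = float(np.max(force_n))
    toughness = float(np.trapz(force_n, extension_mm))
    return peak, toughness

ext = np.linspace(0.0, 12.0, 200)
thermal_cure = 30.0 * np.exp(-((ext - 2.0) / 1.2) ** 2)  # high peak, short tail
photo_cure = 18.0 * np.exp(-((ext - 4.0) / 3.5) ** 2)    # lower peak, long tail

for name, trace in (("thermal", thermal_cure), ("photo", photo_cure)):
    peak, tough = peel_metrics(ext, trace)
    print(f"{name:8s} peak ~ {peak:.0f} N, toughness proxy ~ {tough:.0f} N*mm")
```

On these synthetic traces the thermally cured film wins on peak force while the photo-cured film absorbs more total energy — the same qualitative split the measurements above report.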
From these results, we expected that introducing both kinds of polymer dispersed network into one system might produce an interaction between them, with an equilibrium point. A coexistent polymer dispersed network was therefore prepared by stepwise polymerization — photo-polymerization first, thermal-polymerization second — and the effect of the ratio of photo-polymerizable to thermal-polymerizable monomer on the mechanical properties was investigated. Figure 4 shows the mechanical properties of the single and coexistent polymer dispersed networks. As expected, the coexistent polymer dispersed network exhibited enhanced mechanical performance; when the weight ratio of the two polymerizable monomers was close to 1:1, the film showed the best overall combination of peel strength and toughness.

Preparation of Angular Photochromic Films with a Coexistent Polymer Dispersed Network

According to the results above, the coexistent polymer dispersed network performs best mechanically when the photo-polymerizable and thermal-polymerizable monomers are present in similar weight ratios, which favors anti-fake applications. Two kinds of angular photochromic films, based on the ChLCs SLC1717/S811 and 3HG2080 with a coexistent polymer dispersed network (denoted D1 and D2, respectively), were therefore prepared. Figure 5 shows their mechanical properties, which were comparable in peel strength and toughness to those of the corresponding coexistent polymer dispersed network. Comparing the two angular photochromic films, both benefited from the excellent mechanical and processing properties of the polymer materials, and the film based on 3HG2080 performed somewhat better. Figure 6a shows a typical light transmittance spectrum of sample D2 in the visible region; the angular photochromic region spanned more than 100 nm across the range of viewing angles. A 10 mm × 10 mm angular photochromic label was then manufactured from 3HG2080 with a coexistent polymer dispersed network. As shown in Figure 6b, as the viewing angle varied from 90° to 60°, clear color changes from green through cyan and blue to purple were observed, indicating that an angular photochromic label for anti-fake purposes had been successfully obtained.

Conclusions

In summary, by exploiting the selective light reflection of ChLC and introducing a coexistent polymer dispersed network via stepwise polymerization (photo-polymerization first, thermal-polymerization second), an angular photochromic anti-fake film with enhanced mechanical properties and pronounced angular discoloration was successfully developed. Detailed investigation showed that networks formed by the two methods affect the mechanical properties differently: the thermally polymerized network contributes more to peel strength, and the photo-polymerized network contributes more to toughness. We believe the ChLC anti-fake film will find practical application in the negotiable instrument field, for example in currencies and bank documents.
Monosomal karyotype and chromosome 17p loss or TP53 mutations in decitabine-treated patients with acute myeloid leukemia

TP53 aberrations reportedly predict favorable responses to decitabine (DAC) in acute myeloid leukemia (AML). We evaluated the clinical features and outcomes associated with chromosome 17p loss or TP53 gene mutations in older, unfit, DAC-treated AML patients in a phase II trial. Of 178 patients, 25 had loss of 17p on metaphase cytogenetics; 24 of these had a complex (CK+) and 21 a monosomal karyotype (MK+). In analyses of all patients and analyses restricted to CK+ and MK+ patients, 17p loss tended to be associated with higher rates of complete remission (CR), partial remission (PR), or antileukemic effect (ALE). Despite these favorable response rates, there was no significant OS difference between patients with or without loss of 17p, either in the entire cohort or in the CK+ and MK+ cohorts. TP53 mutations were identified in eight of 45 patients with material available; five of the eight TP53-mutated patients had 17p loss. TP53-mutated patients had similar rates of CR/PR/ALE but shorter OS than those with wild-type TP53 (P = 0.036). Moreover, patients with a subclone based on mutation data had shorter OS than those without (P = 0.05); only one patient with TP53-mutated AML had a subclone. In conclusion, 17p loss conferred a favorable impact on response rates, even among CK+ and MK+ patients, that however could not be maintained. The effect of TP53 mutations appeared to be different; however, patient numbers were low. Future research needs to further dissect the impact of the various TP53 aberrations in HMA-based combination therapies. The limited duration of favorable responses to HMA treatment in adverse-risk-genetics AML should prompt physicians to advance allografting for eligible patients in a timely fashion. Electronic supplementary material: the online version of this article (10.1007/s00277-020-04082-7) contains supplementary material, which is available to authorized users.

Introduction

The hypomethylating agents (HMA) decitabine (DAC) and azacitidine (AZA) are a standard of care in AML and higher-risk MDS patients not eligible for intensive treatment. While dynamic features, such as early platelet response [1], can be used to estimate eventual treatment response, no pre-treatment markers are in routine clinical use. Here, we evaluated the genetic and clinical characteristics associated with loss of chromosome 17p or gene mutations affecting TP53 in older, unfit AML patients treated with DAC within a phase II trial.

Patients and treatment

Patients were enrolled onto the phase II trial 00331 (German Clinical Trials Registry DRKS00000069), the results of which have been previously reported [3]. Briefly, 227 patients with AML (by French-American-British classification) who were ineligible for intensive chemotherapy were treated with DAC (15 mg/m² every 8 h for 3 consecutive days, total dose of 135 mg/m² per course, every 6 weeks). In case of an antileukemic effect (ALE) or stable disease (SD) after course 1, administration of the second course of DAC was followed by all-trans retinoic acid (ATRA; 45 mg/m²/day) for 28 days. Patients with complete remission (CR), partial remission (PR), or ALE after completion of 4 courses were eligible to receive maintenance treatment with DAC at 20 mg/m²/day (for 3 consecutive days, every 6-8 weeks). Bone marrow aspirates were performed after courses 1, 2, and 4. Morphology was centrally reviewed.
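As a sanity check on the schedule, the per-course dose arithmetic stated above (15 mg/m² every 8 h for 3 days) can be reproduced in a few lines; this merely restates the trial's arithmetic and is not dosing guidance.

```python
def course_dose_mg_per_m2(dose=15.0, interval_h=8, days=3):
    """Total decitabine dose per induction course on the trial's 3-day
    schedule: 15 mg/m2 every 8 h for 3 consecutive days -> 9 doses."""
    doses_per_day = 24 // interval_h  # 3 doses per day at q8h
    return dose * doses_per_day * days

assert course_dose_mg_per_m2() == 135.0  # matches the stated 135 mg/m2 total
# Maintenance as described: 20 mg/m2/day for 3 days = 60 mg/m2 per course.
print(course_dose_mg_per_m2(), 20.0 * 3)
```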
The following response definitions were applied [3]. CR: BM blasts < 5%, platelets > 100 × 10⁹/L, white blood cells (WBC) > 1.5 × 10⁹/L, and no extramedullary leukemia. PR: BM blasts 5-25%, platelets > 100 × 10⁹/L, WBC > 1.5 × 10⁹/L, and no clinical or imaging evidence of leukemia; or BM blasts < 5%, platelets < 100 × 10⁹/L, WBC < 1.5 × 10⁹/L. ALE: > 25% reduction of BM blasts relative to the initial blast percentage, but not enough to fulfill the criteria for a PR. The study was approved by the institutional review boards of each center. All patients had given written informed consent for collection and use of data and specimens. All procedures followed were in accordance with the ethical standards of the responsible committee and with the Helsinki Declaration.

Cytogenetics and gene mutations

Metaphase karyotypes were centrally reviewed, and CK+ and MK+ status was assigned as previously described [3,15]. MK+ required the presence of a single autosomal monosomy plus a structural aberration, or two or more autosomal monosomies [15]. Loss of 17p was evaluated based on the available karyotype data. Data on mutations in DNMT3A and NPM1 and on FLT3-internal tandem duplications (ITD) had been previously reported [17]. For the present study, bone marrow (n = 27) and peripheral blood (n = 18) samples of 45 patients were analyzed using the Illumina TruSight Myeloid Sequencing Panel (covering 54 genes relevant in myeloid neoplasms) for library preparation and an Illumina MiSeq device for sequencing. Variants located in introns, synonymous variants, and known single nucleotide polymorphisms were excluded. Variants had to feature a variant allele frequency (VAF) of > 5% for missense variants, or had to be hot-spot mutations or mutations known to be present in the given patient, or had to be large insertions or deletions. Variants had to be covered by > 100 reads, and the variant had to be observed in > 10 reads. Four of six amplicons covering CEBPA gave insufficient reads for analysis, so CEBPA mutations may be underestimated. The genetic data were used to derive the clonal architecture (detailed in the supplemental material).
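A minimal sketch of the variant filters described above is shown below, assuming a hypothetical per-variant record; the field names are invented for illustration, while the thresholds are the ones stated in the text.

```python
def passes_filter(v: dict) -> bool:
    """Apply the filters described in the methods: exclude intronic and
    synonymous variants and known SNPs; require coverage > 100 reads and
    > 10 variant-supporting reads; missense variants additionally need
    VAF > 5% unless they are hot-spot or patient-known mutations."""
    if v["effect"] in ("intronic", "synonymous") or v["known_snp"]:
        return False
    if v["total_reads"] <= 100 or v["alt_reads"] <= 10:
        return False
    if v["effect"] == "missense":
        return v["vaf"] > 0.05 or v["hotspot"] or v["known_in_patient"]
    return True  # large insertions/deletions and other qualifying classes

example = {"effect": "missense", "known_snp": False, "total_reads": 450,
           "alt_reads": 38, "vaf": 0.084, "hotspot": False,
           "known_in_patient": False}
print(passes_filter(example))  # True
```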
Statistical analyses

CR, PR, ALE, SD, progressive disease (PD), early death (ED), and OS (time from start of treatment to death) were defined as previously described [3]. All patients who had received at least one dose of DAC were included in the analysis. Fisher's exact test and the Wilcoxon rank-sum test were used to compare categorical and continuous variables, respectively. Estimated probabilities of OS were calculated using the Kaplan-Meier method. Group differences were assessed using the log-rank test and univariate Cox proportional hazards models.

Results

Association of loss of 17p with pre-treatment characteristics and outcomes

As previously published [3], cytogenetic data were available for 177 patients; 120 patients had clonal cytogenetic aberrations. Of these, 25 patients were identified as having loss of 17p; 24 of them were CK+ and 21 were MK+ (Table 1). We evaluated the outcome of patients with loss of 17p compared with those without, in the entire cohort of patients with cytogenetic data and in the subgroups of CK+ or MK+ patients (Table 2, Fig. 1a-c). Patients with loss of 17p overall tended to have favorable response rates in comparison with patients without loss of 17p (CR/PR/ALE vs SD/PD/ED, P = 0.08). This was also true when analyses were conducted only among patients with CK+ (P = 0.01) or MK+ (P = 0.05). However, despite these favorable response rates, and although median OS was longer for patients with loss of 17p, especially in the CK+ and MK+ cohorts, there was no significant difference in OS between patients with or without loss of 17p in the entire cohort or in the CK+ and MK+ cohorts, as the OS curves crossed shortly after the 6-month mark. Of the patients with cytogenetic data, 77 had received ATRA in addition to DAC, 49 of them over the entire planned period of 4 weeks during course 2. Only six of the 49 patients had a loss of 17p. The low number of patients and the bias in the selection of patients receiving ATRA precluded further analyses.

Based on the sequencing and cytogenetic data, the clonal architecture could be derived in 33 patients (Table S2). In nine patients, one or more minor subclones (defined through mutations present in a cell fraction that was > 20% smaller than the major clone) were present, while in the remaining 24 patients no subclones in addition to the major clone could be identified. Five patients with a TP53 mutation also had a cytogenetic loss of 17p. In one of these patients, the VAF of the TP53 mutation indicated loss of the TP53 wild-type allele (i.e., VAF > 60%); one other patient with loss of 17p had two mutations in TP53. TP53-mutated AML was more often CK+ (P < 0.001) and MK+ (P = 0.02), and more often harbored a loss of 17p (P < 0.001), than TP53 wild-type AML (Table 3). The TP53 mutations were all present in the major AML clone of the respective patient. Only one patient (13%) with TP53-mutated AML had a minor subclone, while 8 (32%) of the patients with TP53 wild-type AML did. Patients with a TP53 mutation harbored a median of only one additional mutation (range, 0-4).

Association of TP53 mutations with clinical features and outcome

Compared with TP53 wild type, patients with TP53 mutations were younger (P = 0.01; median, 71 vs 77 years), and AML with TP53 mutation tended more often to develop from antecedent MDS (P = 0.12) (Table 3). In the outcome comparisons between AML patients with mutated or wild-type TP53, there were no differences in response rates, but patients with TP53 mutations had a shorter OS than those with wild-type TP53 (P = 0.036) (Table 2, Fig. 1d). Twenty-four of the patients with sequenced samples had received ATRA. Of these, only three harbored a TP53 mutation, which precluded outcome analyses.

Association between other mutations and outcome

Other markers previously reported to be associated with outcomes in DAC-treated patients are mutations in SRSF2 [11], DNMT3A [2,17], TET2 [7], IDH1, or IDH2 [8,14]. We found no differences in response rates or OS between patients with or without mutations in these genes or in at least one RNA splicing gene (data not shown). Moreover, there were no differences in response rates and OS between patients with ≤ 3 mutated genes and those with > 3 mutated genes (Supplemental Table S3). However, the presence of subclones was associated with a shorter OS compared with their absence (P = 0.05) (Fig. 1e, Supplemental Table S3). Only one of the 9 patients with a minor subclone also harbored a TP53 mutation. Since both a TP53 mutation and the presence of a minor subclone were associated with shorter OS, we combined patients with at least one of these features into one group and compared them with the remaining patients.
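For readers who want to reproduce this style of analysis, a minimal Kaplan-Meier/log-rank sketch using the lifelines package is shown below on simulated data; the group sizes echo the 8 mutated vs 37 wild-type split among sequenced patients, but the survival times themselves are invented, not the trial's individual-level data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical OS times in months (all deaths observed), illustration only.
t_mut = rng.exponential(6.0, size=8)    # TP53 mutated
t_wt = rng.exponential(12.0, size=37)   # TP53 wild type
e_mut, e_wt = np.ones_like(t_mut), np.ones_like(t_wt)

kmf_mut = KaplanMeierFitter().fit(t_mut, event_observed=e_mut, label="TP53 mut")
kmf_wt = KaplanMeierFitter().fit(t_wt, event_observed=e_wt, label="TP53 wt")
print("median OS (months):",
      kmf_mut.median_survival_time_, kmf_wt.median_survival_time_)

res = logrank_test(t_mut, t_wt, event_observed_A=e_mut, event_observed_B=e_wt)
print(f"log-rank P = {res.p_value:.3f}")
```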
Compared with patients with TP53 wild type and absence of a minor subclone, those with a TP53 mutation or a minor subclone expectedly had shorter OS (P = 0.01); no differences in the response rates were observed (Fig. 1f, Supplemental Table S3).

Fig. 1 Overall survival according to (a-c) the presence or absence of loss of 17p among (a) all patients with cytogenetic information, (b) patients with CK+, and (c) patients with MK+; (d) the presence or absence of a TP53 mutation among all patients with samples subjected to panel sequencing (corresponding Cox model: HR 2.31, 95% CI 1.03-5.16, P = 0.041); (e) the presence or absence of minor subclones among patients with available data (corresponding Cox model: HR 2.29, 95% CI 0.98-5.39, P = 0.056); and (f) the presence of a TP53 mutation or a minor subclone, or absence of both, among patients with available data (corresponding Cox model: HR 2.63, 95% CI 1.20-5.79, P = 0.016).

Discussion

HMAs have become a standard of care in AML patients not eligible for intensive chemotherapy. Chromosomal or molecular aberrations of TP53 are likely central in the investigation of markers and biological pathways associated with the response to HMAs [5, 7, 10-12, 14, 16, 18, 19]. Thus, we sought to investigate the impact of loss of 17p and TP53 mutations in our phase II trial 00331, in which older, unfit AML patients were treated with 3-day DAC. We observed that patients with a loss of 17p tended to have higher rates of CR/PR/ALE, both in analyses including all patients and in those restricted to CK+ and MK+ patients. Among CK+ and MK+ patients, patients with loss of 17p also had longer median OS, but this favorable course could not be maintained over time. Patients with TP53-mutated AML had similar rates of CR/PR/ALE but shorter OS than those with wild-type TP53 (P = 0.036). Published data on the impact of chromosome 17p aberrations on response to HMA treatment are scarce. Nazha et al. [20] observed no difference in response rates according to chromosome 17 aberrations in MK+ and CK+ patients treated with HMAs. In an explorative retrospective analysis of the AZA-AML-001 study, patients with chromosome 17p aberrations had a strong trend toward better OS when treated with AZA as compared with conventional care regimens (mainly low-dose cytarabine) [16]. The impact of TP53 mutations (or expression) on outcomes in HMA-treated MDS or AML patients has been assessed in several studies and has yielded heterogeneous results [7, 10-12, 14, 16, 19]. Welch et al. [11] observed in patients with AML or MDS that achievement of CR was more frequent in patients with a TP53 mutation. Moreover, in contrast to the poor OS of TP53-mutated AML patients after standard induction, there was no OS difference according to TP53 mutation status in patients receiving DAC. In the aforementioned analysis of the AZA-AML-001 study, patients with TP53-mutated AML had strong trends toward improved OS when treated with AZA compared with alternative therapies [16]. However, in studies among MDS patients treated with DAC or AZA, TP53 mutations had no impact on response rates but were associated with shorter response duration and/or OS [7,10]. In another report on patients with MDS (mostly with blast excess) who received DAC, most patients with TP53 mutations achieved a CR, but they still had inferior OS [14]. In MDS, mono-allelic TP53 mutations are associated with more favorable disease features (including less frequent complex karyotype and better OS) than multi-hit TP53 mutations [21].
The role of TP53 mutations under consideration of their allelic state remains to be established. Welch et al. [11] described that two-thirds of the patients with TP53 mutations potentially had both alleles affected. In our study, 5 patients fulfilled the criteria of a TP53 multi-hit mutation (i.e., multiple gene mutations, or gene mutation plus genomic loss) according to Bernard et al. [21]. The low patient numbers precluded meaningful outcome analyses. Hopefully, future studies will be able to decipher the allelic state of TP53 and its impact on response to DAC. Considering our results and those reported by others, it currently remains elusive whether the TP53 aberrations themselves, or rather associated genetic features, confer sensitivity to HMAs. As observed in the present study, TP53 mutations and chromosome 17p aberrations coincide with CK+ and MK+ [22-26]; and for trial 00331, which was the subject of the present study, we previously reported that MK+ patients had higher response rates and similar OS compared with MK− patients [3]. Similar results have been reported by Wierzbowska et al. [27] from a post hoc analysis of the phase 3 DACO-016 trial in AML, and by our group from the phase 3 EORTC trial 06011 in MDS [9,28]. In the study by Welch et al. [11], almost all patients with TP53 mutations had unfavorable cytogenetics, and achieving a CR was more frequent in patients with unfavorable than in those with intermediate or favorable cytogenetics. In the study by Chang et al. [14], almost all TP53-mutated patients who achieved a CR were CK+ or had monosomies. Welch et al. [11] suggested that the variable response of TP53-mutated AML to DAC may be due to the presence of TP53 mutations in subclones instead of the major clone. We observed no superior response to DAC, although the TP53 mutations were all present in the major clone and despite other features supporting their disease-driving effect, i.e., TP53-mutated AML only rarely harbored a minor subclone and had a low number of additional mutations [7]. However, we did observe that patients with a minor subclone had shorter OS than those without, although only one of the nine patients with a minor subclone also harbored a TP53 mutation. The heterogeneity of the reports on associations between TP53 aberrations and response to HMAs may be due to weaknesses that are variably shared by the studies, including the present study. First, analyses are often based on relatively small patient numbers [11,14,16]. Second, if provided, the information on 17p loss normally stems from conventional cytogenetics, although the (presumably lost) TP53 allele may be present in unidentified chromosome material [16,29]. Third, cohorts variably comprise MDS or AML patients or both, although DAC may have higher efficacy in patients with higher blast counts [30,31]. Fourth, there is heterogeneity in treatment. In several studies, patients treated with DAC or AZA were combined into one group [7,10], or patients were included who received DAC combined with another agent [2,3,7,10]. Moreover, DAC was administered according to different protocols. The majority of patients received DAC according to the 5-day protocol (total of 100 mg/m² over 5 days) [10,14]. In the study by Welch et al. [11], the majority of patients received DAC according to the 10-day protocol (total of 200 mg/m² over 10 days).
Patients in our present study received DAC according to the 3-day protocol (total of 135 mg/m² over 3 days, every 6 weeks), in part followed by a reduced-dosage maintenance phase. Moreover, in the present study the patients with loss of 17p or TP53 mutation had received only a median of 2 (range, 1-12) or 1 (range, 1-6) DAC courses, respectively, while several courses are normally required to achieve best response. While the clinical observation of the (counter-intuitive) response to HMAs in adverse-genetics AML/MDS is increasingly accepted within the clinical community, the underlying mechanism of the interaction between hypomethylating activity and these genotypes is still unresolved. Monosomal chromosomal regions may preferentially attract epigenetic silencing [32,33], providing a particularly sensitive target for DNMT inhibition. Despite the present lack of a conclusive model of this interaction, clinicians need to be aware that the responses, while surprisingly frequent, are often short-lived. Hence, they can also be quite deceptive, raising unfounded optimism regarding their duration. Thus, patients with adverse genetics who are eligible for allografting should transition to this curative treatment in a timely manner, i.e., before HMA resistance sets in. In summary, within the specifications of the patient cohort studied, loss of 17p was associated with trends toward higher DAC response rates, both within the entire cohort and among patients with CK+ or MK+ AML. Patients with a TP53 mutation achieved similar response rates as patients with wild-type TP53, but had a shorter OS. Our data further support the potential applicability of TP53 aberrations as a predictor for HMA treatment, and emphasize a possible role for subclonal mutations in this regard. Isolated TP53 mutation analyses apparently are not sufficient for prediction of HMA response. Cytogenetic analysis remains standard and allows for evaluation of MK+ status and 17p loss. The landscape of HMA-based therapy is changing, and favorable responses in adverse-genetics patients are also observed when these drugs are combined with the BCL-2 inhibitor venetoclax [34] or all-trans retinoic acid [35]. Hence, the impact of the different types of adverse cytogenetics and TP53 alterations (e.g., cytogenetic and molecular-genetic mono- or bi-allelic loss) on outcome after HMA combination therapies will be of great interest.

Acknowledgments

We wish to thank Gabriele Greve, Ruhtraut Ziegler, Tobias Ma, Philipp Sander, Christoph Niemöller, and Dennis Zimmer for help during the project.

Authors' contributions

HB: concept design; data acquisition, analyses, and interpretation; manuscript preparation; critical revision and final approval of the manuscript. DP: data acquisition, analyses, and interpretation; critical revision and final approval of the manuscript. GI: data acquisition, analyses, and interpretation; critical revision and final approval of the manuscript. MP: data analyses and interpretation; critical revision and final approval of the manuscript. JW: data analyses and interpretation; critical revision and final approval of the manuscript. BHR: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. LB: data acquisition, analyses, and interpretation; critical revision and final approval of the manuscript. BH: treatment of patients and specimen acquisition; data acquisition, analyses, and interpretation; critical revision and final approval of the manuscript.
UG: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. AK: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. UP: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. KD: treatment of patients and specimen acquisition; data acquisition, analyses, and interpretation; critical revision and final approval of the manuscript. AG: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. AH: data acquisition, analyses, and interpretation; critical revision and final approval of the manuscript. PWW: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. HD: treatment of patients and specimen acquisition; critical revision and final approval of the manuscript. JD: concept design; data interpretation; critical revision and final approval of the manuscript. ML: concept design; treatment of patients and specimen acquisition; data interpretation; manuscript preparation; critical revision and final approval of the manuscript.

Compliance with ethical standards

The study was approved by the institutional review boards of each center; patient consent and ethical conduct were as described under Patients and treatment above.

Conflict of interest

The authors declare that they have no competing interests.
Renal manifestations of hepatitis E among immunocompetent and solid organ transplant recipients

Hepatitis E virus (HEV) infections are generally self-limited. Rare cases of hepatitis E-induced fulminant liver failure requiring liver transplantation are reported in the literature. Although HEV infection is generally encountered in developing countries, a recent uptrend has been reported in developed countries, where consumption of undercooked meat and zoonosis are considered the likely transmission modalities. Renal involvement in HEV generally follows a benign and self-limited course. Although rare cases of cryoglobulinemia are reported in immunocompetent patients, glomerular manifestations of HEV infection are more frequently encountered in immunocompromised patients and solid organ transplant recipients. The spectrum of renal manifestations of HEV infection includes prerenal failure, glomerular disorders, and tubular and interstitial injury. Kidney biopsy is the gold-standard diagnostic test that confirms the pattern of injury. Management is predominantly conservative; reduction of immunosuppressive medications and ribavirin (for 3-6 mo) are considered in patients with solid organ transplants. Here we review the clinical course, pathogenesis, renal manifestations, and management of HEV among immunocompetent patients and solid organ transplant recipients.

INTRODUCTION

Hepatitis E virus (HEV) has a pronounced worldwide distribution. It is a spherical, single-stranded RNA virus whose genome contains three partially overlapping open reading frames (ORFs): ORF1, ORF2, and ORF3 [1]. HEV belongs to the Hepeviridae family, and eight genotypes of HEV (HEV1 to HEV8) have been identified [2,3]. Genotypes HEV1 and HEV2 are routinely encountered in developing countries and are transmitted through the fecal-oral route. HEV3 and HEV4 are associated with sporadic autochthonous infection in western countries and are predominantly transmitted through animal reservoirs and ingestion of undercooked meat [4-6]. Additionally, HEV genotype 3 infection is associated with solid organ transplant recipients and immunocompromised patients. Other, less common modalities of transmission are blood products and solid organ transplants [7,8]. Transfusion-related transmission is not common in the United States but is reported in countries such as China and Japan [9,10]. Lastly, vertical transmission of HEV infection from mother to fetus could be as high as 100%, as reported by Kumar et al. [11], and is associated with fatal outcomes.

CLINICAL COURSE

HEV infection commonly follows a benign, self-limiting course, and the case-fatality rate in developing countries is estimated at 0.5%-4% [12,13]. The clinical presentation of HEV infection is similar to that of hepatitis A. The majority of infected patients have a mild, asymptomatic course. Acute HEV infection is accompanied by jaundice, icteric eyes, malaise, anorexia, and abdominal discomfort. Severe infection is usually reported in patients with underlying chronic liver disease and is associated with increased mortality [14]. Additionally, solid organ transplant recipients encounter a more protracted course [15]. In such patients, HEV antibody production may be delayed, often leading to sustained viremia with progression to chronic hepatitis and cirrhosis [16,17]. Pregnant women can suffer a complicated course with fulminant HEV infection and sustain higher mortality rates than non-pregnant cohorts.
It is estimated that fatality rates reach 10%-40% among pregnant women [11,18]. Both obstetric and non-obstetric complications are encountered; non-obstetric complications include fulminant hepatic failure, acute liver failure, and acute cerebral edema.

Non-glomerular manifestations

Renal manifestations of hepatitis B and hepatitis C (HBV, HCV) infection are well described. The association between HEV infection and the kidney is established, as HEV particles have been isolated from the urine of infected patients [22,23]. Additionally, when urine of infected monkeys was inoculated into healthy animals, HEV infection developed, confirming the infectious nature of the viral particles shed in the urine [23]. HEV-associated renal manifestations include prerenal and intrinsic renal disorders. Among intrinsic renal conditions, the glomeruli and tubules are the affected sites [24,25]. HEV infection is less commonly associated with progression of kidney disease in immunocompetent patients. Chronic HEV infection with subsequent development of decompensated liver cirrhosis is frequently encountered among solid organ transplant recipients. A hepatorenal physiology secondary to increased circulating vasoactive agents such as nitric oxide is often noted. As in other cirrhotic patients, patients with HEV-associated liver dysfunction may release increased vasodilatory mediators secondary to shear stress on the portal vasculature, leading to splanchnic vasodilatation, portosystemic shunting, and bacterial translocation. The resulting reduction in effective arterial blood volume perpetuates a decrease in renal perfusion that ultimately leads to renal vasoconstriction [26]. Urine sodium levels remain low, indicating prerenal failure. However, prolonged renal hypoperfusion causes ischemic injury of the proximal tubule, with manifestations of acute tubular necrosis [13]. Bile cast nephropathy, also called cholemic nephrosis, is typically encountered in patients with cholestasis secondary to advanced cirrhosis or acute liver failure; Nayak et al. [27] reported a case of cholemic nephrosis secondary to acute HEV infection. Historically, the diagnosis is made by kidney biopsy showing bile casts obstructing the distal tubules. The pathogenesis of cholemic nephrosis is not completely understood but is hypothesized to involve intraluminal obstruction by bile casts along with direct tubular toxicity [28,29]. Cases of hemolysis with subsequent renal failure are reported with HEV infection. Karki et al. [30] reported a case of massive hemolysis in a patient with glucose-6-phosphate dehydrogenase (G6PD) deficiency, with heme pigment causing direct proximal tubular toxicity. Formation of hemoglobin casts further leads to intratubular obstruction and subsequent acute kidney injury. It is hypothesized that liver dysfunction secondary to acute HEV leads to accumulation of toxins along with depletion of antioxidants such as glutathione; in patients with underlying G6PD deficiency, massive hemolysis and acute kidney injury are then encountered [31] (Figure 1).
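The prerenal pattern mentioned above (low urine sodium in the setting of reduced renal perfusion) is often screened for at the bedside with the fractional excretion of sodium. The formula and the < 1% cutoff in the sketch below are standard nephrology teaching, not specific to this review, and the lab values are invented.

```python
def fena_percent(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium:
    FENa = (UNa x PCr) / (PNa x UCr) x 100.
    By convention, FENa < 1% suggests a prerenal state, while higher
    values raise concern for acute tubular necrosis."""
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100.0

# Hypothetical values: UNa 12 mEq/L, PNa 138 mEq/L, UCr 95 mg/dL, PCr 1.4 mg/dL
fena = fena_percent(urine_na=12, plasma_na=138, urine_cr=95, plasma_cr=1.4)
print(f"FENa = {fena:.2f}% -> "
      f"{'prerenal pattern' if fena < 1 else 'consider tubular injury'}")
```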
Glomerular manifestations

Glomerular manifestations of HEV infection are reported among solid organ transplant recipients and are associated with HEV genotype 3; it is unclear, however, whether the renal manifestations and presentation differ among recipients of different organs. While glomerular manifestations are most commonly noted in immunocompromised patients [32,33], an autochthonous HEV-induced membranoproliferative glomerular pattern has been reported in an immunocompetent individual [33]. A study by Kamar et al. [34] evaluated the renal function of solid organ transplant recipients with HEV infection. Of 51 cases of genotype 3 HEV infection, 43.2% cleared the virus spontaneously within 6 mo of infection, whereas 56.8% progressed to chronic hepatitis. Among 36 kidney and kidney-pancreas transplant patients, the glomerular filtration rate (GFR) decreased significantly from a baseline of 52.9 ± 17.7 mL/min (at a median of four months before HEV infection) to 48.8 ± 18.7 mL/min during acute HEV infection (P = 0.04). Acute rejection episodes, infection, modification in immunosuppressant type or dose, and functional renal insufficiency were ruled out, and the GFR decline was attributed to acute HEV infection. Proteinuria increased significantly in four kidney transplant patients at HEV diagnosis and subsequently improved with improvement in renal function and HEV clearance. Kidney biopsies performed during the acute phase revealed patterns of membranoproliferative glomerulonephritis, type II and III cryoglobulinemia, and IgA nephropathy [34]. Additionally, among patients who developed chronic hepatitis, 12 patients who received anti-viral therapy with ribavirin for three months cleared HEV, with subsequent improvement in GFR at 6 mo of follow-up. Interestingly, in the subgroup who received anti-viral therapy, cryoglobulinemia was detected in 70% of patients before therapy and became undetectable in all patients after viral clearance. The renal manifestations of the reported cases of HEV infection among immunocompetent patients and solid organ recipients are summarized in Table 1.

PATHOPHYSIOLOGY OF HEV-INDUCED RENAL INJURY

The pathophysiology of HEV-induced kidney injury is not completely understood. HEV-mediated renal manifestations are thought to result either from direct cytopathic injury by the viral infection itself or from immune-mediated mechanisms. As with HBV and HCV, it is hypothesized that HEV precipitates glomerular injury through immune-complex-mediated mechanisms [35]. A study by El-Mokhtar et al. [36] assessed the role of immune-mediated mechanisms in HEV-induced renal dysfunction. CD10- and CD13-positive proximal tubular epithelial cells were isolated and challenged in vitro with HEV inoculum. In the absence of peripheral blood mononuclear cells, HEV infection only minimally upregulated inflammatory markers, and no measurable changes were noted in lactate dehydrogenase (LDH) levels, kidney injury molecules, or transcription of chemokines. However, when the HEV-infected proximal tubular cells were co-cultured with peripheral blood mononuclear cells, inflammatory molecules, kidney injury markers, and LDH levels were upregulated, indicating that HEV infection per se might not be solely responsible for glomerular injury. Rather, it is the intersection of immune cells, HEV infection, and proximal tubular epithelial cells that contributes to renal injury [36].

Diagnostics

HEV laboratory testing has been refined drastically in recent years. The two main approaches are indirect and direct serological tests.
For the indirect approach, commercially available kits detect anti-HEV IgM and anti-HEV IgG, relying on the presence of antibodies in the serum to detect infection [37]. Because indirect studies rely heavily on the patient's immune response to HEV infection, their sensitivity is somewhat decreased in immunocompromised patients [38]. Direct testing predominantly uses nucleic acid testing, which detects viral genetic material (HEV RNA) to determine the presence or absence of infection, along with detection of viral capsid antigens [39,40]. In immunocompetent patients, it is advised to check anti-HEV IgM first for suspected HEV infection [41]; a negative test rules out the disease, while a positive test should be followed by HEV RNA analysis. In immunocompromised patients, by contrast, it is recommended to test for HEV RNA in blood and stool even when anti-HEV IgM is negative, before ruling out HEV infection [37]. Urine studies and electrolytes give subtle clues for identifying the various causes of AKI, and urine microscopy adds the ability to diagnose acute tubular necrosis when muddy brown granular casts are present. Kidney biopsy remains the gold-standard diagnostic test for glomerular disorders and tubular obstruction, including bile cast nephropathy, when evaluating renal manifestations of HEV. Patients with acute or chronic hepatitis and new-onset proteinuria should be considered for kidney biopsy [42].
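The testing strategy just described reduces to a small decision sketch; this mirrors the review's wording rather than any formal guideline algorithm.

```python
from typing import Optional

def hev_workup(immunocompromised: bool,
               igm_positive: Optional[bool] = None) -> str:
    """Testing path as described above: immunocompetent patients start
    with anti-HEV IgM, and only a positive IgM triggers HEV RNA testing;
    immunocompromised patients need HEV RNA (blood and stool) even when
    IgM is negative."""
    if immunocompromised:
        return "test HEV RNA in blood and stool regardless of IgM result"
    if igm_positive is None:
        return "check anti-HEV IgM first"
    return "confirm with HEV RNA" if igm_positive else "HEV infection ruled out"

print(hev_workup(immunocompromised=False, igm_positive=True))
print(hev_workup(immunocompromised=True))
```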
Treatment

Management of HEV-associated renal manifestations depends on the clinical presentation. Treatment is predominantly conservative, given the benign course of the disease. Acute HEV infection usually does not require anti-viral therapy; in patients with severe acute infection or acute-on-chronic liver disease, ribavirin therapy is considered [42]. For patients with acute kidney injury secondary to acute tubular necrosis or bile cast nephropathy, routine care to maintain mean arterial pressure and to avoid nephrotoxic agents and further insults is recommended; the indications for initiating renal replacement therapy are the routine indications for dialysis. Management of HEV-associated glomerular disorders should be based on the underlying pathology. Guinault et al. [33] reported a case of HEV-induced cryoglobulinemic glomerulonephritis in an immunocompetent patient with a serum monoclonal IgG kappa light-chain type II cryoglobulin; renal biopsy was consistent with lobular membranoproliferative exudative glomerulonephritis with fibrinoid necrosis and cellular crescents with a ruptured Bowman capsule. The patient was treated with seven sessions of plasma exchange along with pulse steroids, with improvement in HEV RNA titers and cryoglobulin levels. Occasionally, acute HEV infection follows a fulminant course, as reported in pregnant individuals, and can manifest as acute cerebral edema, seizures, and acute fatty liver, with increased mortality [43]. When managing patients with solid organ transplants, the benefits of treatment need to be weighed against the risk of rejection. Reduction of immunosuppression is considered the first-line approach [44], allowing HEV clearance in about one-third of patients. Ribavirin, an anti-viral agent, is considered in patients with severe acute or acute-on-chronic liver failure [45,46]. Ribavirin is postulated to act by inhibiting HEV viral replication and by increasing the expression of interferon-stimulating genes, leading to immune modulation [47]. In the study by Kamar et al. [34], among patients who received anti-viral therapy with ribavirin, cryoglobulinemia was detected in 70% of patients before therapy and became undetectable in all patients after viral clearance. Ribavirin has also been used successfully to treat HEV-associated membranoproliferative glomerulonephritis in a solid organ transplant recipient [32] (Figure 2).

Figure 2. Management of acute kidney injury in acute hepatitis E infected patients.

In a multicenter retrospective study by Kamar et al., solid organ transplant recipients were treated with ribavirin at a median dose of 600 (range, 29-1200) mg/d for three months. Similar virological remission was observed in patients who received ribavirin for three months and in those treated for more than three months. In patients with detectable HEV RNA in the serum and/or stool at the end of three months, ribavirin monotherapy can be continued for an additional three months [48]. Hence, it is indicated to treat with ribavirin for three months initially and evaluate the response; with non-sustained virological remission, ribavirin is recommended to be continued for a total of 6 mo. Among liver transplant recipients, interferon (IFN) α has been shown to achieve sustained virological remission in patients with HEV after liver transplant; however, the use of IFNα is not recommended in other solid organ transplant recipients because of the risk of graft rejection (Table 1). Sofosbuvir, a nucleotide analog, has been evaluated along with ribavirin in patients who failed ribavirin monotherapy. Wezel et al. [49] evaluated two solid organ transplant recipients who failed ribavirin monotherapy and observed that sofosbuvir showed variable antiviral activity in chronic HEV; sofosbuvir was ineffective in achieving a sustained virological response. Pegylated IFNα has shown efficacy in achieving a sustained virological response in hemodialysis patients and liver transplant recipients [50]; however, given the concern of interference with the graft and the risk of acute rejection, IFNα is contraindicated in patients with other solid organ transplants [47].
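The ribavirin duration logic for transplant recipients described above reduces to a simple rule, sketched below; this is a restatement of the review's summary, not treatment advice.

```python
def ribavirin_plan(month3_rna_detectable: bool) -> str:
    """Duration rule as summarized above for solid organ transplant
    recipients on ribavirin monotherapy: treat for 3 months, reassess
    HEV RNA in serum and stool, and extend to a total of 6 months if
    RNA remains detectable."""
    if month3_rna_detectable:
        return "continue ribavirin for 3 more months (6 months total)"
    return "stop after 3 months; monitor for relapse"

print(ribavirin_plan(month3_rna_detectable=True))
print(ribavirin_plan(month3_rna_detectable=False))
```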
CONCLUSION

HEV infection is a global health concern and is uncommonly associated with mortality and morbidity. HEV infection is not restricted to developing countries but is increasingly identified in developed countries. Renal manifestations of HEV range from prerenal failure and acute tubular necrosis to glomerular disorders and intratubular obstruction from bile cast nephropathy. As with HBV and HCV infections, immune-mediated mechanisms are hypothesized in the development of HEV-associated glomerular diseases. A conservative approach is routinely employed in cases of renal involvement from acute hepatitis in immunocompetent patients. Among solid organ transplant recipients with chronic HEV infection, ribavirin for a duration of 3-6 mo is considered, along with reduction of immunosuppression. IFNα has been shown to achieve sustained virological remission in patients with HEV after liver transplant, but is not recommended in other solid organ transplant recipients because of the risk of graft rejection. In patients who fail ribavirin monotherapy, sofosbuvir has been evaluated in conjunction with ribavirin, with variable anti-viral effects. Plasma exchange, in addition to pulse steroids, is occasionally used in the management of crescentic glomerulonephritis associated with HEV infection.
Non-extreme Calabi-Yau Black Holes

Non-extreme black hole solutions of four-dimensional, N = 2 supergravity theories with Calabi-Yau prepotentials are presented, which generalize certain known double-extreme and extreme solutions. The boost parameters characterizing the non-extreme solutions must satisfy certain constraints, which effectively limit the functional independence of the moduli scalars. A necessary condition for being able to take certain boost parameters independent is found to be block diagonality of the gauge coupling matrix. We present a number of examples aimed at developing an understanding of this situation and speculate about the existence of more general solutions.

Introduction

Considerable effort has been devoted recently to studying black hole solutions in four-dimensional, N = 2 supergravity theories [1-16]. Interest has so far focused on extreme black holes, which satisfy additional supersymmetry constraints and saturate a BPS bound. A key discovery [3] in this case is that the values of the scalar moduli fields of the N = 2 vector multiplets are fixed at the black hole horizon in terms of the electric and magnetic charges carried by the black hole. In particular, the horizon values of the scalar fields are independent of their values at infinity. The evolution of the scalar fields moving inward from infinity towards the horizon can then be thought of as motion in a kind of attractor [3]. Of particular interest are the "double-extreme" solutions, for which the scalar fields stay fixed at their horizon values throughout the spacetime [9]. These are "doubly" extreme in the sense that, in addition to having degenerate horizons, the black hole mass is minimized for the given charges. "Singly" extreme solutions with non-constant scalars are given in [11]. In this paper we look at non-extreme black hole solutions of N = 2 theories in four dimensions, obtained by dimensional reduction of Type II supergravity on a Calabi-Yau threefold. Since the basic form of the extreme solutions in this case [11] is quite similar to certain supersymmetric, intersecting brane solutions of torus compactifications [17,18], a simple ansatz for the non-extreme N = 2 black holes arises from the known non-extreme intersecting brane solutions in torus compactifications [19]. This ansatz is also analogous to the non-extreme generalization of the extreme black brane solutions of M-theory [20]. In this ansatz, given below, there is a single "non-extremality" parameter μ and a number of "boost parameters" γ_Λ related to the individual charges. We find below, however, that this ansatz does not in general solve the equations of motion. Rather, the equations of motion reduce to a condition which may be regarded as a constraint on the boost parameters. The only general (i.e., valid for all Calabi-Yau manifolds) solution to this constraint which we have found is to take all the boost parameters equal. For specific models, such as the STU model and others discussed below, it is possible to take separate boost parameters. We have not yet explored these constraints fully. In the case of torus compactifications of D = 11 supergravity, the general non-extreme solutions of [19] may be obtained from the D = 10 Schwarzschild solution via various combinations of boosts, dimensional upliftings and reductions, and duality symmetries.
We note that these same methods cannot be used to similarly construct the non-extreme N = 2 solutions. (After this work was completed, we found that the same ansatz for the non-extreme solutions had been made in [13]. We disagree with the claim there that the ansatz generally satisfies the equations of motion.)

The Basic Setup: N = 2 Lagrangian

We give only a brief summary of the formalism here. A more complete treatment may be found in, e.g., [9]. An N = 2 supergravity theory in four dimensions includes, in addition to the graviton multiplet, n_v vector multiplets and n_h hypermultiplets. In our work we consistently take the hypermultiplet fields to be constant and will ignore them below. The bosonic part of the action is then built from the spacetime metric G_μν, the complex scalar moduli fields z^A, A = 1, ..., n_v, parametrizing a special Kähler manifold, and the field strengths F^Λ_μν = 2∂_[μ A^Λ_ν], Λ = 0, 1, ..., n_v, of the n_v + 1 U(1) gauge fields A^Λ_μ (we use the normalization ε_trθφ = 1). Here, the complex scalars are related to the holomorphic symplectic sections X^Λ by the inhomogeneous coordinates condition

z^A = X^A / X^0.

The Kähler potential K, scalar metric g_AB̄ and gauge couplings N_ΛΣ are all determined in terms of the prepotential F(X), which is a holomorphic, second-order homogeneous function. The Kähler potential K is given by

K = -ln[ i ( X̄^Λ F_Λ - X^Λ F̄_Λ ) ],

where F_Λ = ∂F/∂X^Λ. The Kähler metric on the scalar moduli space is then given by

g_AB̄ = ∂_A ∂_B̄ K,

where ∂_B̄ = ∂/∂z̄^B, and the gauge field couplings N_ΛΣ are determined by the prepotential through its second derivatives F_ΛΣ = ∂²F/∂X^Λ ∂X^Σ (equations (5) and (6)). For type II supergravity compactified on a Calabi-Yau space, the prepotential takes the form

F = d_ABC X^A X^B X^C / X^0,

where the constants d_ABC, with ABC completely symmetric, are the topological intersection numbers of the manifold. We further restrict our interest here to the axion free case, in which all the moduli scalars z^A are pure imaginary. The gauge coupling matrix N_ΛΣ is then pure imaginary, and its nonzero components, as well as the Kähler metric, are given explicitly in terms of the d_ABC and the moduli. The equations of motion following from the action (with Re N = 0) are the gauge field equation (8), the scalar field equation (9) and the Einstein equation (10).

Non-Extreme Solutions

We want to generalize certain double-extreme and extreme solutions, which were given in [9] and [11] respectively. In these solutions, the gauge field F^0_μν carries only electric charge, while each gauge field F^A_μν carries only magnetic charge. As discussed in [9,11], regarded as a compactification of M-theory on S¹ × CY, these solutions correspond to fivebranes wrapping 4-cycles of the Calabi-Yau space, with a boost along the common string. For the special case of a torus compactification, the corresponding non-extreme solutions are given in [19]. It is straightforward to modify the solutions there to obtain an ansatz, equation (11), for the non-extreme solutions in the present case, in which a prime denotes ∂_r and the nonzero components of the gauge field strengths are fixed by the charges. The ansatz (11) reduces to the "singly" extreme solutions given in [11] when the limit μ → 0, γ_Λ → ∞ is taken with μ sinh²γ_Λ ≡ k_Λ held fixed, and further to the "doubly" extreme solutions, with constant moduli scalars, in [9] when all the k_Λ are the same. It is straightforward to check that the ansatz (11) satisfies the gauge field equation of motion (8). Equation (10) for the curvature reduces to the condition (13), and the scalar field equation (9) leads to the condition (14). In deriving these last two equations we have made use of the fact that, for the extreme solutions, (13) and (14) vanish identically.
We also note that Im N_BC = -i N_BC, by virtue of (6), is a first order homogeneous function of the z^A and that, in particular, z^A ∂_A Im N_BC = Im N_BC. This property can be used to "contract" equation (14) with z^A to obtain equation (13). Thus it is only necessary to show that the ansatz (11) (with H_0 = H^0 = 1) satisfies (14). It is not difficult to see that, for an arbitrary choice of the constants d_ABC in the prepotential, the condition (14) is not satisfied unless the parameters γ_A are taken to be equal. This differs from the case of intersecting branes on a torus [19], for which the parameters γ_A may be specified independently for each set of branes. We do not at present fully understand the significance of the restrictions placed by (14) on the parameters γ_A. Note that, if all the boost parameters, including γ_0, are set equal to some common value γ in (11), then the scalars z^A will be constant, their values being fixed by the asymptotic flatness condition h_0 d_ABC h^A h^B h^C = 1. This case is then a non-extreme version of the "doubly" extreme black holes in [9]. Taking γ_0 to be different, as may always be done, makes the scalars z^A non-constant, but keeps their ratios constant. Clearly, if some, or all, of the γ_A's may also be taken unequal, then there will be additional functional independence between the scalars. In the next section, we will explore some simple examples of prepotentials for which some, or all, of the γ_A's may be specified independently.

Examples

We list below some choices for the d_ABC which allow some of the γ_A's to be different from each other. It follows from (13) that a necessary condition for (at least) some of the γ_A's to be independent is that the gauge coupling matrix Im N_AB be block diagonal. In this case there turns out to be one independent parameter per block. From this point of view, it seems consistent that γ_0 may always be specified independently of the γ_A, since N_0A vanishes, as is evident from (6), and hence N_00 forms a 1 × 1 block. Our first example is the STU model [9], for which the only nonzero d_ABC is d_123. In this case the coupling matrix Im N_BC is diagonal and the three parameters γ_1, γ_2, γ_3 may all be specified independently. However, when quantum corrections are added to the STU model [9,11], d_333 becomes nonzero. This makes the coupling matrix Im N_BC completely nondiagonal, which in turn implies that the γ_A's must be taken equal. As a second example, we can take only the constants d_1AB to be nonzero, with A, B ≠ 1 (a similar model is considered in [5]). The coupling matrix Im N_BC in this case is block diagonal, having a 1 × 1 block and an (n_v - 1) × (n_v - 1) block. It follows that γ_1 can be chosen independently of the γ_A for A ≠ 1, which must all be the same. A specialization of the previous example is to take only d_12B nonzero, with B = 3, ..., n_v. This makes Im N_BC block diagonal with two 1 × 1 blocks and one (n_v - 2) × (n_v - 2) block, and one can have three different γ's: γ_1, γ_2 and one more γ_B common to B = 3, ..., n_v. (Notice that if one specializes this last example one step further, one ends up with the STU model, without the quantum correction.) As a final example we consider a simple toy model where only d_112 and d_111 are nonzero. In this case Im N_BC is diagonal if and only if d_111 = 0, i.e. γ_1 = γ_2 is required unless d_111 = 0. In each of these cases block diagonality of the gauge coupling matrix Im N_BC appears to be both a necessary and a sufficient condition for being able to take independent γ's, though we have not been able to show this generally.
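The block-diagonality criterion can be checked directly in special-geometry terms. Below is a minimal sympy sketch of ours (not from the paper) that evaluates the standard period-matrix formula N_ΛΣ = F̄_ΛΣ + 2i (Im F · X)_Λ (Im F · X)_Σ / (X · Im F · X), whose overall sign conventions vary between references, for the STU prepotential F = X¹X²X³/X⁰ (d_123 = 1) at an axion-free point, and confirms that Im N comes out diagonal, so all three boosts may be taken independent.

```python
import sympy as sp

# Symplectic sections for the STU model: F = X^1 X^2 X^3 / X^0 (d_123 = 1).
X = sp.symbols('X0 X1 X2 X3')
F = X[1] * X[2] * X[3] / X[0]

# Second derivatives F_{ΛΣ} = ∂²F/∂X^Λ∂X^Σ of the prepotential.
FLS = sp.Matrix(4, 4, lambda i, j: sp.diff(F, X[i], X[j]))

# Axion-free point: X^0 = 1, X^A = i t^A (pure imaginary special coordinates).
t1, t2, t3 = sp.symbols('t1 t2 t3', positive=True)
point = {X[0]: 1, X[1]: sp.I*t1, X[2]: sp.I*t2, X[3]: sp.I*t3}

Fc = FLS.subs(point)
ImF = (Fc - Fc.conjugate()) / (2*sp.I)
Xv = sp.Matrix([1, sp.I*t1, sp.I*t2, sp.I*t3])
u = ImF * Xv

# Standard special-geometry period matrix (sign conventions differ by reference).
N = Fc.conjugate() + 2*sp.I * (u * u.T) / (Xv.T * ImF * Xv)[0]
ImN = sp.simplify((N - N.conjugate()) / (2*sp.I))
print(ImN)  # diagonal for the STU model, so all boosts can differ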
Physical Parameters and Discussion

We examine the physical properties of the non-extreme solutions. In particular, we want to check, given the restrictions on the γ_A's, that the charges may still be specified arbitrarily, as they can in the extreme limit [9,11]. We will first display all formulae as if the γ_A's can be specified independently, and then discuss the actual solutions, in which the γ_A's are restricted. After imposing the asymptotic flatness condition, the set of independent parameters for the solutions can be taken to be {μ, γ_0, h_A, γ_A}. These can be exchanged for the more physical set {E, q_0, p^A, γ_A}, where E is the ADM mass, q_0 the electric charge for F^0_μν and p^A the magnetic charges for F^A_μν. The ADM energy is expressed in terms of K_C ≡ h_C k_C, with k_Λ = μ sinh²γ_Λ as above. The electric charge q_0 and magnetic charges p^A are defined in the standard way; evaluating them for the ansatz yields the relations (18) between charges and parameters. The Hawking temperature (19) and the Bekenstein entropy (20) involve λ_0 = h_0 cosh²γ_0 and λ_A = h_A cosh²γ_A. First, note that equation (18) implies that, even in the case that all boost parameters are set equal, the charges q_0, p^A may still be chosen arbitrarily, by virtue of the constants h_A and the single boost parameter γ. As we observed above, the restrictions on the γ_A should be regarded as restrictions on the functional independence of the scalars z^A with respect to one another. Next, we note that, for all the examples discussed in the last section, the formulae for the temperature (19) and the entropy (20) simplify considerably. The square roots in (19) and (20) can be eliminated in these cases, because the λ factors appearing in each term of the sums are identical. For example, in the d_1AB model, the entropy (20) reduces to the form (21), with γ = γ_A for A = 2, ..., n_v. It remains an open question whether or not more general non-extreme solutions (static, axion-free and carrying only the charges q_0 and p^A) exist. These might, for example, have independent boost parameters for each of the Calabi-Yau 4-cycles. In the case of orthogonally intersecting branes on a torus [19], there are at most four independent parameters, corresponding to a boost and three sets of branes. However, the most general black hole solutions in type II theory compactified to 4 dimensions on a torus are described by 28 electric and 28 magnetic charges (see e.g. [21]). The extreme solutions in this case arise via collections of branes intersecting non-orthogonally [22]. It may be necessary to look at a non-extreme solution based on branes intersecting at angles to get the most general solution in the Calabi-Yau case as well. It would also be interesting to try to construct the solutions which we have found here using the available symmetry transformations, which in the present case include boosts in the time direction and symplectic transformations. Finally, it should also be possible to find non-extreme solutions in N = 2 theories with prepotentials not of the Calabi-Yau form. We note that since (13) and (14) are derived using the extreme solution, and since they are displayed not in terms of the particular prepotential we have used in this paper, they are generally applicable to finding non-extreme black hole solutions for other prepotentials. In particular, block diagonality of Im N_AB is a necessary condition for the existence of more than one independent γ_A.
We emphasize that the derivation of (13) and (14) does not depend on any specific expression for e^{2U}, and depends only on the fact that Re N = 0, that F^0_μν is purely electric, and that F^A_μν carries only magnetic charge.
Better performance for right-skewed data using an alternative gamma model

Background: The Maximum Likelihood Estimator (MLE) for parameters of the gamma distribution is commonly used to estimate models of right-skewed variables such as costs, hospital length of stay, and appointment wait times in economics and healthcare research. The common specification for this estimator assumes the variance is proportional to the square of the mean, which underlies estimation and specification tests. We present a specification in which the variance is directly proportional to the mean.

Methods: We used simulation experiments to investigate finite sample results, and we used United States Department of Veterans Affairs (VA) healthcare cost data as an empirical example comparing the fit and predictive ability of the models.

Results: Simulation showed the MLE based on a correctly specified alternative has less parameter bias, lower standard errors, and less skewness in distribution than a misspecified standard model. The application to VA healthcare cost data showed the alternative specification can have better R square, smaller root mean squared error, and smaller mean residuals within deciles of predicted values.

Conclusions: The alternative gamma specification can be a useful alternative to the standard specification for estimating models of right-skewed continuous variables.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12874-023-02113-1.

Introduction

Estimating model parameters for right-skewed distributions, such as for costs, hospitalization events, and lengths of hospital stay, is important to research on healthcare and health services, and more generally to research in economics and social sciences, among other fields. For example, statistical models of the impact of long-term services and supports on overall costs of care from a health system perspective, if misspecified, can provide poor predicted costs for older frail populations, thereby impacting real-world budgeting and policy decisions. Since costs tend to be right skewed, they, and other right-skewed variables, are often modeled using a gamma distribution [1-7]. For example, Hong et al. used the gamma Generalized Linear Model (GLM) to analyse total and out-of-pocket health care expenditures from the Medical Organizations Survey linked to the Medical Expenditure Panel Survey to evaluate the impact of Accountable Care Organizations on adults in the United States [8]. Barnett et al. used the gamma GLM to evaluate determinants of healthcare cost among Veterans with HIV from United States Department of Veterans Affairs (VA) data [9]. Graves et al. used the gamma GLM to analyse the effect of respiratory tract, urinary tract, and hospital-acquired infections on both hospital length of stay and costs among adults in a prospective cohort study of two Australian hospitals [6]. Nikolovaa et al. used the gamma GLM to analyse, and compare to alternative models, appointment waiting times using the Scottish morbidity record [7].
Methods for predicting values of right-skewed variables and estimating characteristics of their distributions are continually being investigated: for example, Machine Learning and nonparametric techniques are being developed in this context [10-12]. Nonetheless, parametric methods remain a common and useful approach [13-15]. However, because the characteristics of right-skewed variables can vary across populations and types of outcome, no single technique or model is universally preferred. As Basu and Manning conclude in their review of cost modeling methods, "No current method is optimal or dominant for all cost applications" [16]. Rather than seek a methodological panacea, researchers can benefit from greater flexibility by expanding their toolbox and thereby increase the ability to select an approach appropriately aligned with their goals and data. In this paper, we recommend expanding the parametric modeling toolbox to include an alternative specification to the common gamma model.

Although use of the gamma distribution is common [1-9], this specification of such models is not always the best option. For example, using VA data, Wagner et al. [17] investigated the gamma distribution in their analysis of cost-based risk scores, and Gao et al. [18] investigated the gamma distribution in their development of a case-mix algorithm for hospitals and payers to compare their providers' cost performance. Both studies found the gamma model performed poorly. In this paper we show that such results do not imply the gamma distribution is necessarily inappropriate for the data. Another specification for the gamma model may perform well.

Gamma distribution models of positive variables are commonly based on two parameters: a scale parameter and a shape parameter. The mean of the distribution is equal to their product. In these models, the conditional expectation is commonly specified with the influence of covariates through the scale parameter (the gamma scale model), and consequently the conditional variance is specified as being proportional to the square of the conditional mean. Both the Maximum Likelihood Estimator (MLE) and the Maximum Quasi-Likelihood Estimator (MQLE) tend to encode this specification. For example, see the Stata glm function [19] and the SAS statistical software's proc glm procedure [20]. Specification tests for the gamma distribution also often take advantage of this moment condition: for example, see the Modified Parks Test [3,21].

In this paper we focus on the gamma shape model, a model specification in which covariates influence the distribution through the shape parameter, such that the variance is directly proportional to the mean [22,23]. We show that the gamma shape model can be important for identifying a statistically adequate model using Monte Carlo simulation, and we show that, relative to the standard specification, it can provide better predictions, using data from the United States Department of Veterans Affairs.

Methods

In this section, we present the gamma shape model and the methods we used for evaluating the estimator's performance.

Alternative gamma model specification

A random variable, Y, with a range on the positive real line, has a gamma distribution if its probability density function is

f(y) = y^(α-1) e^(-y/β) / (Γ(α) β^α),
in which the shape parameter is denoted as α and the scale parameter is denoted as β. In terms of these parameters, the mean of Y is the shape parameter multiplied by the scale parameter:

E[Y] = αβ.

The variance of Y is the shape parameter multiplied by the square of the scale parameter:

Var(Y) = αβ².   (1)

The standard gamma scale model takes advantage of the fact that, by multiplying and dividing the variance by the shape parameter α, the variance can be expressed as

Var(Y) = (αβ)² / α.

Because α·β is the mean, the variance is proportional to the mean squared:

Var(Y) = E[Y]² / α.   (2)

However, by inspecting Eq. 1, the variance is also directly proportional to the mean. The variance can be expressed as

Var(Y) = (αβ)·β,

which is

Var(Y) = β·E[Y].   (3)

The difference between Eqs. 2 and 3 is of little importance when considering a single distribution, as they equate to the same value. However, the distinction can be important when considering conditional distributions across the range of covariate values. If distributions are nontrivially conditional on other variables, those variables must modify the parameters of the distribution. Consequently, in terms of the preceding gamma distribution, if predictors influence the mean, they must do so by influencing either the shape parameter or the scale parameter (or both). For example, if the distribution of costs among those who are 60 years old is different from the distribution of costs among those who are 65 years old, then, assuming both are gamma distributed, either the shape parameter or the scale parameter (or both) must be different across the two groups.

If covariates affect only the scale parameter, then the conditional mean can be expressed as

E[Y | X] = α·β(X),

in which the scale parameter, β, is a function of random variables X, and the shape parameter, α, is constant: the mean is proportional to the scale parameter across the range of X. In this case, because we hold the shape parameter constant, the conditional variance, as a function of covariates, is proportional to the conditional mean squared, which is the typical specification:

Var(Y | X) = E[Y | X]² / α.   (4)

If, instead, variables affect the mean through the shape parameter, α(X), and the scale parameter β is constant, then the conditional mean is proportional to the shape parameter across the range of covariates X:

E[Y | X] = α(X)·β.

Therefore, the conditional variance, as a function of covariates, is directly proportional to the conditional mean across the range of covariates X:

Var(Y | X) = β·E[Y | X].   (5)

Note that Eq. 5 does not contradict the exponential family consequence for the gamma distribution that the variance is proportional to the mean squared: this relationship holds for any specific gamma distribution (and thereby for any specified set of covariate values) because Eq. 1 implies Eq. 2, as well as Eq. 3. Equations 4 and 5, however, become different specifications when they are treated as functions of covariates rather than as specific distributions.

The difference between these two specifications can influence the parameter and standard error estimators. Although the regression functions of both specifications have the same parametric form, if the regressors are included in the wrong parameter (either the shape or the scale parameter), then the wrong moment condition across covariates is established in the estimation of the conditional likelihood function (i.e. either Eq. 4 or Eq. 5).
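Before turning to estimation, a small numerical illustration of Eqs. 4 and 5 (ours, not the paper's; the parameter values are arbitrary) makes the contrast concrete: under the scale parameterisation the variance tracks the squared mean across covariate values, while under the shape parameterisation it tracks the mean.

```python
import numpy as np

mu = np.exp(1.0 + 0.5 * np.array([0.5, 1.0, 1.5]))  # hypothetical conditional means

# Gamma scale model: alpha fixed, beta(x) = mu/alpha -> Var = mu**2/alpha (Eq. 4)
alpha = 3.0
var_scale = alpha * (mu / alpha) ** 2
print(var_scale / mu**2)   # constant 1/alpha: variance proportional to mean squared

# Gamma shape model: beta fixed, alpha(x) = mu/beta -> Var = beta*mu (Eq. 5)
beta = 2.0
var_shape = (mu / beta) * beta**2
print(var_shape / mu)      # constant beta: variance proportional to the mean
```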
Consequently, with the wrong moment condition imposed, the model coefficients and standard error estimates are inconsistent. To maximize the likelihood function, the MLE will find the parameters that balance the regression function and the mean-variance moment condition. The MLE will adjust the regression coefficients to account for the incorrect moment condition, thereby generating an inconsistent estimator and an inconsistent standard error estimator.

The MQLE used to estimate parameters of Generalized Linear Models does not generally address this concern. Li and Xiru [24], Chen et al. [25], and Yin and colleagues [26,27], among others, show that the MQLE is consistent; however, these results assume the variance is correctly specified up to a scale parameter of a function of the mean [28], which is the issue of concern here. Although misspecification of the variance can be addressed using nonparametric quasi-likelihood methods [29], a parametric estimator may be preserved if the gamma shape specification is statistically adequate.

In this work, we focus on the common application of the gamma distribution to random variables defined on the positive real line. However, the implications of this work also apply to random variables with greatest lower bounds other than 0, i.e. with L any real number in the more general statement for the distribution

f(y) = (y - L)^(α-1) e^(-(y-L)/β) / (Γ(α) β^α)

for all y > L.

Simulation

The asymptotic properties of MLE are known to depend on correct specification [30]; consequently, if a model is misspecified, whether as the gamma scale or gamma shape model, then MLE may not perform well. We used Monte Carlo experiments to provide an example of the finite sample properties of the properly specified gamma shape model and the consequences of using the misspecified gamma scale model in this case. The purpose of these results is not to prove that proper specification of a likelihood function is required for asymptotic efficiency and consistent estimation, which is well established in the MLE literature [30], nor are the results intended to prove that the models are always importantly different, as that depends on the data. We present the results as an example to show that specification can meaningfully matter, and thereby support the claim that the gamma shape model should be considered when modeling positive right-skewed data, particularly if the gamma scale model is not fitting well.

For the purpose of this investigation, we used the common log-link function for the GLM, i.e. we specified the mean as the natural exponential of predictors:

E[Y | x] = exp(θ₀ + θ₁x + θ₂x²),

in which θ denotes the vector of coefficients. We used 10,000 Monte Carlo samples, with sample sizes of 1000 observations each, to compare the scale and shape models. The Monte Carlo samples were drawn from a gamma distribution with a uniformly distributed predictor variable on the interval of 0 to 2, denoted below as x, influencing the shape parameter, α(x) = E[Y | x]/β, with the scale parameter β specified as a constant. We estimated both the gamma scale and gamma shape specifications with MLE on each data set, and we compared the averages and standard deviations of the coefficient estimates, the average standard error estimates, and the skewness and kurtosis of the estimate distributions.
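The experiment can be reproduced in outline as follows. This is a minimal Python sketch of one Monte Carlo replication, assuming hypothetical values for θ and β (the paper's exact values are reported with its Table 1); both specifications are fitted by direct maximization of the gamma likelihood rather than with the Stata code of the Appendix.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x, x**2])

theta_true = np.array([1.0, 0.5, -0.2])   # hypothetical coefficients
beta_true = 2.0                            # constant scale parameter
mu = np.exp(X @ theta_true)
y = rng.gamma(shape=mu / beta_true, scale=beta_true)  # shape-model data

def negloglik(params, spec):
    theta, c = params[:3], np.exp(params[3])  # log-parametrised nuisance c > 0
    m = np.exp(X @ theta)
    if spec == "shape":    # alpha(x) = m/c, beta = c   -> Var = c * mean
        a, scale = m / c, c
    else:                  # alpha = c, beta(x) = m/c   -> Var = mean**2 / c
        a, scale = np.full(n, c), m / c
    return -gamma.logpdf(y, a, scale=scale).sum()

for spec in ("shape", "scale"):
    fit = minimize(negloglik, np.zeros(4), args=(spec,), method="Nelder-Mead",
                   options={"maxiter": 5000})
    print(spec, np.round(fit.x[:3], 3))   # compare recovered coefficients
```

Repeating the fit over many simulated data sets and tabulating the estimates reproduces the kind of bias and sampling-variability comparison reported in the paper's Table 1.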
Modeling costs of healthcare in the U.S. Department of Veterans Affairs

We examined an adaptation of a Medicare Advantage capitation index (the CMS V21 risk score) based on Hierarchical Condition Categories (HCC), which is a projection to a cost index used to capitate Medicare payments to Medicare Managed Care organizations [31]. Acknowledging the under-reporting of diagnoses in VA data and its negative impact on the use of the CMS V21 risk score in the VA, Wagner et al. [17] created the Nosos risk score. The Nosos is a risk-adjusted cost model that uses the V21 risk score and adds 48 mental health diagnoses relevant to the cost of VA care and 24 drug categories, as well as age, age squared, indicators of being white, being male, having insurance, being married, and being on a VA chronic illness registry, and priority status group indicators (which indicate levels of service-related disability). We further modified the Nosos by including the individual indicators of the HCCs that comprise the V21 risk score rather than incorporating the V21 score itself.

We used data from VA inpatient, outpatient, and fee-basis files, cleaned Managerial Cost Accounting costs, and enrollment and vital status for fiscal year 2017. Medicare inpatient, Carrier, and certain outpatient claims supplemented VA diagnoses. The dependent variable was the total outlier-corrected, Consumer Price Index-adjusted VA cost. We applied the cost models to five populations of Veterans with varying frailty levels: (1) all Veterans using VHA; (2) all Veterans using any one of the VA Geriatrics & Extended Care services (GEC cohort); (3) all Veterans using VA's Home-Based Primary Care, a program that provides interdisciplinary care (physicians and nurse practitioners, nurses, social workers, therapists, dieticians, clinical pharmacists, and other professional care) at their homes to Veterans who are unable to leave their homes to receive care in clinics; (4) all Veterans who used VA services with a JEN Frailty Index (JFI) less than six (corresponding to having no more than 1 Activity of Daily Living (ADL) impairment); and (5) those with JFI between six and twelve, corresponding to having two or more ADL impairments [32].

We estimated both the gamma shape and scale models as well as the more complex gamma shape/scale model, in which covariates influence both parameters. We used MLE in the Stata statistical software, version 17. See the Appendix for the Stata code to estimate the gamma shape model. To evaluate the within-sample predictive ability of each model, we used the R square to compare prediction (calculated as 1 minus the ratio of the residual variance to the total variance), the square root of the mean squared error to compare model precision, and the maximum average residual across deciles of predicted values to compare maximum residual deviation.
Simulation results

Table 1 presents the average across 10,000 Monte Carlo samples of the coefficient estimates, the standard deviations of the estimates, the average of the standard error estimates, and the skewness and kurtosis of the estimates for each model and parameter. The correctly specified gamma shape model has biases in the data of 0%, 2%, and 0.2% for the coefficients on x and x², and the constant, respectively, whereas the misspecified gamma scale model has larger biases of 2%, 28%, and 2% for the coefficients on x and x², and the constant, respectively. More striking, however, is that the standard deviation of the estimates for the gamma shape model is approximately half of the corresponding standard deviation for the gamma scale model for each coefficient. Moreover, the mean estimated standard errors are the same as the standard deviations for the gamma shape model, but they are 26%, 14%, and 40% lower than the standard deviations in the gamma scale model for the coefficients on x and x², and the constant, respectively. Regarding the distribution of the coefficient estimates, there is no evidence that the gamma shape model estimates deviate from the Normal distribution in skewness and kurtosis; however, for the gamma scale model, estimates deviate from Normal in terms of skewness (p values < 0.000 for each coefficient), which can explain why its estimated standard errors do not match the standard deviations.

Healthcare cost results in the U.S. Department of Veterans Affairs

Table 2 presents the R square, root mean squared error, and maximum mean decile errors for the two models on each of the five populations of U.S. Veterans. The R square was larger for the gamma shape model than the gamma scale model across all populations. Indeed, the R squares are negative for the gamma scale model estimated on the overall non-institutional population and the population with JFI less than six, whereas the gamma shape model has R squares of 0.57 and 0.41 in these populations, respectively. The negative R squares indicate that the estimated regression function strongly deviates from the underlying true conditional expectation. The root mean squared error was 15-85% smaller for the gamma shape model across all populations. In addition, the maximum mean error across deciles of predicted values was smaller by approximately one order of magnitude for the gamma shape model in all populations. In contrast, for these data, when allowing covariates to impact both shape and scale parameters, results fell between those of the two simpler specifications.

Discussion

Simulation results showed what we expected, given the requirements for efficiency and consistent estimation: the correctly specified model, in this case the gamma shape specification, showed less bias in the data (up to 93% lower percent bias on estimated coefficients in this example), which can affect the accuracy of predicted values. Notably, however, the correctly specified model can also have smaller standard deviations of the coefficient sampling distributions (up to 58% lower in this example) and more accurate estimated standard errors (the standard deviations and estimated standard errors were the same for the correct specification but up to 40% off for the misspecified model). Moreover, the correct specification had less skewness in the distribution of estimates (up to 89% lower magnitude in skewness in this example).
These results indicate the potential for improved statistical inference and more appropriate use of standard test statistics based on the normal distribution. The larger R square, smaller root mean squared error, and smaller maximum mean decile errors in the empirical example provide evidence that the gamma shape model can have better predictive ability than the gamma scale specification in real-world data; in some cases, considerably better. Because the difference between R square values across two models estimated on the same population using the same covariates is proportional to the difference in the variation of the model bias across the range of covariates, these results show that the gamma shape model had considerably lower model bias than the typical gamma scale model for these data. Although not a general replacement for the gamma scale model, which would outperform the gamma shape specification if it better matched the underlying data generating process, these results strongly suggest researchers consider the gamma shape specification among the set of models they use to model right-skewed variables.

An additional implication of the alternative specification concerns the interpretation of the commonly used Modified Park's Test (MPT) for specification of the GLM family [3,21]. The MPT tests whether the variance is proportional to the square of the mean across the range of covariates. If the hypothesis is rejected, then the MPT is taken to imply the distribution is not a gamma. Clearly, this need not be true. Under the gamma shape specification, the variance is directly proportional to the mean across the range of covariates. Rejecting the moment condition of the gamma scale specification leaves open the possibility that the distribution remains a gamma, but covariates influence the distribution through the shape parameter. Moreover, if we reject both moment conditions, this still does not imply the distribution is not gamma, because if predictors influence the mean through both the shape and scale parameters, then a consistent moment condition does not hold across the range of covariate values and the Modified Park test simply does not apply.

To identify misspecification and differentiate the gamma shape and gamma scale models, one can use a model fit statistic such as the Veazie-Ye goodness-of-fit test [33] for each specification, which tests the gamma distribution with the specified parameterization. Or, one can estimate the gamma shape/scale model and use a joint test of each coefficient vector to determine which is affected by the variables; this assumes the gamma distribution but tests the parameterization (see the Appendix for estimation and testing code for use with the Stata statistical software program).

This paper does not present a survey of all possible gamma specifications. However, if neither the scale nor the shape model is statistically adequate, other parameterizations can be considered as well. See Venter [23] for examples in which the gamma can be specified such that its variance can be expressed as any power of the mean. Moreover, if regressors affect both the scale and shape parameters, then the regression function can be based on the model in which regressors affect both parameters, as mentioned in the preceding paragraph. However, due to the greater complexity of the gamma shape/scale model, which has twice the number of parameters associated with the same number of covariates, estimation can be problematic for small samples and computationally challenging for larger samples.
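As an aside on diagnostics, the MPT logic discussed above can be rendered in a few lines. The sketch below is our own illustration, not code from the paper or its appendix: it regresses squared residuals on log predictions with a log-link GLM, so the fitted slope estimates the power k in Var(Y | x) ∝ E[Y | x]^k, with k ≈ 2 pointing to the gamma scale moment condition and k ≈ 1 to the gamma shape condition.

```python
import numpy as np
import statsmodels.api as sm

def modified_park_power(y, mu):
    """Estimate k in Var(y|x) ~ E[y|x]**k from fitted means mu, in the
    Modified Park style: GLM of squared residuals on log predictions."""
    resid2 = (y - mu) ** 2
    design = sm.add_constant(np.log(mu))
    fit = sm.GLM(resid2, design,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    return fit.params[1]   # k ~ 2: scale model; k ~ 1: shape model
```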
Conclusion

Parametric modeling is common for estimation, risk adjustment, and the identification of predictors of right-skewed outcomes. In this paper, we presented an alternative to an otherwise common gamma specification and showed that it can have better empirical performance. We recommend the alternative gamma shape model be added to the toolkit for modeling right-skewed continuous conditional distributions.

Table 1: Monte Carlo simulation results for the distribution of estimates from 10,000 iterations for the gamma scale (incorrect specification) and gamma shape (correct specification) models. (a) Joint test of skewness = 0 and kurtosis = 3.

Table 2: Prediction criteria results of the gamma scale, gamma shape, and gamma shape/scale models for noninstitutionalized Veterans in fiscal year 2017. (a) Scale = gamma scale model; Shape = gamma shape model; Both = gamma shape/scale model with covariates impacting both shape and scale parameters. (b) Neg denotes negative R square values, which is possible if the regression model does not match the true conditional expectations.
Review: How dynamic prestress governs the shape of living systems, from the subcellular to tissue scale

Cells and tissues change shape both to carry out their function and during pathology. In most cases, these deformations are driven from within the systems themselves. This is permitted by a range of molecular actors, such as active crosslinkers and ion pumps, whose activity is biologically controlled in space and time. The resulting stresses are propagated within complex and dynamical architectures like networks or cell aggregates. From a mechanical point of view, these effects can be seen as the generation of prestress or prestrain, resulting from either a contractile or growth activity. In this review, we present this concept of prestress and the theoretical tools available to conceptualise the statics and dynamics of living systems. We then describe a range of phenomena where prestress controls shape changes in biopolymer networks (especially the actomyosin cytoskeleton and fibrous tissues) and cellularised tissues. Despite the diversity of scale and organisation, we demonstrate that these phenomena stem from a limited number of spatial distributions of prestress, which can be categorised as heterogeneous, anisotropic or differential. We suggest that in addition to growth and contraction, a third type of prestress -- topological prestress -- can result from active processes altering the microstructure of tissue.

Introduction

A peculiarity of living systems is their ability to constantly rearrange their structure in order to perform biological function. Cells, for example, transition from static to migrating while changing shape, or split in the process of cell division. At the tissue scale, specific shapes are acquired during development and maintained at adult age to accomplish organ function, but can be lost, as in the case of cancer. These rearrangements are in most cases the result of active processes taking place within the cells or tissues themselves, rather than being imposed from the exterior through boundary conditions. Across the scales and specificities of systems, one finds a number of ways for these internal stresses to be generated, ranging from protein synthesis or pumping of ions that give rise to osmotic pressure gradients, to ATP hydrolysis-fuelled changes of conformation of crosslinkers within biopolymer networks. One common point of these mechanisms is that they are controlled by biological pathways, and that they can be triggered or modulated dynamically, enabling systems to change shape or state. In this review, we aim to show how these very different force-generation mechanisms can be usefully understood with a common concept of prestress (and prestrain). While this concept has been useful in describing stable tissue shape in adult tissues [Taber, 1995], here we focus on how dynamic changes in prestress can alter shape. Often, prestress generation interplays with other, less specific properties of living systems, like their complex rheological properties, thin-sheet geometry, and foam or network architecture. In some cases, it appears challenging to distinguish between phenomena stemming from dynamic prestress and those stemming from these complex material properties. An example developed below is the cell neighbour exchange (also called T1 transition) occurring in epithelia, which can have either active or passive origins.
In section 2 of the review, we will present the concept of prestress and the theoretical frameworks available to describe prestress in statics and dynamics. Following on, we will briefly present in section 3 the strategies available to experimentalists to identify and measure prestress in living systems. Then, in the final two sections, we will describe strategies of prestress generation in biological systems and the macroscopic effects obtained. On our way, we will show that common categories of spatial regulation of prestress (heterogeneous, anisotropic or differential) are used in systems with different compositions and scales.

Two key concepts will be used throughout this review: growth and contraction. By growth, we mean any active process that leads to an increase in the equilibrium size of some of the components of a system. This can be due to the addition of material within the system by some out-of-equilibrium process, e.g. the polymerisation of filaments fed by monomers diffusing into it. This can also be more simply due to osmotically-driven attraction of more water into the system. By contraction, we mean any active process that leads to the reduction of the equilibrium size of some of the components of a system.

In section 4 of the review, we will focus on crosslinked networks of semiflexible biopolymers, either within cells (in which case they are part of the cytoskeleton) or external to them (part of the extracellular matrix, ECM). These networks can be remodelled and crosslinked by other proteins which have out-of-equilibrium dynamics, fuelled by active processes. We will show how contraction and growth of these networks govern shapes and deformations of subcellular compartments, cells and fibrous tissue. Within the cytoskeleton, one network of particular interest is the one formed by actin and the crosslinker (and molecular motor) myosin.

Finally, in section 5, we will describe how prestress affects the shape of cellularised tissue. Here, the material is formed by cells of regulated volume, mechanically connected through adhesions, molecular bonds joining their plasma membranes. The adhesions are formed of transmembrane complexes, allowing tension to be transmitted between the cytoskeletons of neighbouring cells. In this section, the elements experiencing growth and contraction interact with these other elements, making the material heterogeneous and bringing additional effects. Prestress can also be built through topological changes of the cell contours. We refer to this type of prestress as topological prestress. In particular, we suggest that, from a mechanical point of view, morphogenesis through cell proliferation is conveniently described if one distinguishes, within the effect of cell division, between growth prestress (increase of volume of cells) and topological prestress (due to the appearance of new cell-cell junctions following cytokinesis events).

Concept of prestress in living systems

In engineering, generally, "pre" in "prestress" refers to the fact that it is due to an operation done before establishing the system, for instance imposing boundary conditions such as tension on a structure such as wires, called tendons, before putting them in parallel with a compression-bearing material, e.g. by casting concrete. This results in a system whose reference configuration is not compatible with the reference configuration of each of its components, which are thus prestressed.
In systems of linear elastic components, one can equivalently consider that they are prestrained, the prestress and prestrain being simply related by the elastic modulus. The source of prestress is not necessarily an externally imposed force applied to a component of the system, but can also be due to an internal change in the system. Prestress can thus arise in a system which is already connected and is originally stress-free, if one of the components, which we will call the active component, changes its equilibrium configuration. As an example, a porous material (passive component) whose pores are occupied by a liquid will become prestressed if that liquid (active component) changes volume due to freezing or crystallisation. In that case, "pre" is not understood anymore as referring to a process in time, but rather as making reference to the fact that prestrain corresponds to the deformation between the initial configuration and a virtual configuration where the active component has assumed its new equilibrium shape. However, this stress-free configuration is virtual because the shapes of the active and passive components are not compatible anymore. The actual configuration in the absence of external load results from the mechanical balance between the active and the passive components, neither of which will be stress-free: the observed stress field in the absence of external loads is called residual stress [Fung, 2013, Taber, 1995, Goriely, 2017].

While residual stress has this narrow definition, the term prestress is somewhat broader. In the engineering community, prestress is typically due to external loads [Gower et al., 2017]. The prestressed configuration is then used as a reference configuration, from which another elastic deformation (for instance the wave propagation in the prestressed body) is studied [Parnell, 2012]. In biophysical models, prestress can also be of active origin, for instance due to a morphogenetic event [Ben Amar et al., 2018]. We take advantage of this relative freedom to define prestress so that it corresponds to the notion of active stress which was defined in the context of actomyosin systems. Indeed, if one chooses to describe the prestressed material with respect to its original shape, that is, the configuration it would have in the absence of prestress, as a reference configuration, then one finds that there is now a stress field associated to it, which for this choice is a prestress field [Salençon, 1994]. Since this prestress does not need to preexist the system, it can be adjusted at any time by nonmechanical processes, such as biological pathways, so that it can drive dynamics. We now illustrate the concept using elementary mechanical elements.

In order to describe the behaviour of individual cells within tissue, a leading approach is to use cell-based discrete models, which can capture cell-cell interactions and dynamical changes in topology. On the other hand, continuum descriptions offer the benefit of giving access to quantities such as the Young's modulus or Poisson's ratio, and offer a vast array of tools for simplification through mathematical analysis [Jones and Chapman, 2012]. For the sake of simplicity of presentation, we present ideas in terms of continuum models when possible, and discrete models when necessary.

Figure 1: Prestrain and prestress in simple 1D systems. (a) Illustration of deformation gradient decomposition for contractile prestrain.
An active element (red) and a passive one (gray) are put in parallel in between force-bearing walls (black vertical lines). The active element is composed of a spring whose length is actively decreased (or increased) from L_0 to L, with an anelastic stretch, or prestrain, λ_a (here λ_a < 1) imposed via a crank. In the virtual configuration, both elements remain stress-free but the system's topology (dashed line connections) is not respected. In the current configuration, even at equilibrium (no net force on the walls), the structure is under stress. Operating the crank the other way, λ_a > 1, gives the effect of growth prestrain. (b) A system equivalent to the one in (a) can be obtained by replacing the active crank-and-spring element by a stress generator element (pulleys and weight system) whose magnitude is the prestress. (c) For topological prestress, no active spring is necessary; the activity consists of disconnecting and reconnecting elements into a new network. Initially, springs k_1 and k_2 are in parallel, and the pair is connected in series with k_3. The topological change reconnects the springs, such that k_1 and k_3 are in series, this pair connected in parallel with k_2. Due to the change of topology (see directed graph insets) the initially stress-free structure becomes prestressed; spring k_2 is in tension, springs k_1 and k_3 are in compression. (d) A viscous passive element (dashpot) in parallel with a stress generator gives a permanent regime of contraction at a constant strain rate.

Static description

We illustrate the concept of a virtual configuration in Fig. 1a based on a 1D example, meaning that we only consider deformations in the horizontal direction. An active element (red) is connected in parallel to a passive element (gray). The vertical black lines represent force-bearing walls. In the initial configuration, both elements have length L_0, and the system is unstressed. A contraction reduces the length of the active element to L, acting effectively like a spring that reduces its rest length through an active process. We refer to this active stretch as λ_a = L/L_0. In this example, λ_a < 1, since an active contraction occurs. In the virtual configuration, the system is still stress-free, but incompatible, since the two elements now have different rest lengths, making their existing connection impossible [Eckart, 1948]. Compatibility is restored through building stress: in the current configuration, both elements have length l, which is the new equilibrium length. The elastic stretch of the active element is λ_e = l/L. In this new equilibrium configuration, the active element is in tension (λ_e > 1), and the passive spring is in compression (its elastic stretch is l/L_0 < 1). In this framework, growth can be treated the same way, with one crucial difference: the active element increases, rather than decreases, its rest length, so that the active stretch is λ_a > 1. It is useful to understand the relationship between prestrain and prestress, since it allows one to make a link with the work on active stress in the context of contractile networks. To understand this relationship, let us consider one active element in isolation, with initial length L_0, virtual length L, and current length l. The Hookean stress-strain relationship of this element is

σ = E(λ_e - 1) = E(l/L - 1),   (1)

where E is Young's modulus. If we now deform the spring to the formerly stress-free state (l = L_0), we will be met with a resisting stress σ equal to the prestress, σ = σ_a = E(λ_a^{-1} - 1).
In this simple case, there is thus an explicit relation that can be written between the prestress field σ_a and the active stretch that characterises prestrain, λ_a = L/L_0. How can this be made sense of in the context of the stress-strain relationship of the spring, Eq. (1)? It is possible to split (1) as follows:

σ = σ_a + (E + σ_a)(l/L_0 - 1).   (2)

This shows a Hookean behaviour near the initial, formerly stress-free state, with the modified Young's modulus Ẽ = E + σ_a. At the formerly stress-free configuration l = L_0, we recover the prestress σ = σ_a. Up to the change of elastic modulus, we thus see the equivalence between a system in which the active element is a growing (σ_a < 0) or contracting (σ_a > 0) elastic material and a system in which the active element is a "stress generator" of magnitude σ_a, see Fig. 1b.

In order to describe more generally mechanical systems with prestress, and to allow the prestrain to be either due to growth or active contraction, we use a framework of anelasticity [Rodriguez et al., 1994, Lubarda, 2004, Epstein, 2012]. At its core is the decomposition of the deformation gradient as F = F_e F_a, where F is decomposed into an anelastic part F_a and an elastic part F_e, see Fig. 1. The elastic deformation is taken to be hyperelastic and isotropic, for example neo-Hookean, captured by a strain-energy density W = W(F_e). The anelastic deformation is associated with an irreversible process that in some way modifies the microstructure: F_a can mean active contraction, like energy-consuming contraction due to myosin, in which mass is conserved or reduced (i.e. transferred from the solid phase to an extracellular reservoir), det F_a ≤ 1. Alternatively, it could mean growth, in which mass is added into the solid phase from an extracellular reservoir, det F_a ≥ 1. Cauchy stress is then defined as σ = (det F_e)^{-1} (∂W/∂F_e) F_e^T, and mechanical equilibrium is div σ = 0 in the absence of body forces, respecting that anelasticity (active contraction or growth) occurs at time scales much larger than elastodynamics.

Figure 2: Creating residual stress from different patterns of prestrain in the anelastic framework for a disk. The disk is incompressible (det F_e = 1) and of neo-Hookean material. The boundary conditions are no traction, σ·e_R = 0. We denote the components of the prestrain in polar coordinates F_a = diag(γ_R, γ_θ). Here we illustrate contraction, γ < 1; however, equivalent situations are found for growth, γ > 1. (a) We consider spatially heterogeneous prestrain which is isotropic (γ := γ_R = γ_θ). The initial configuration shows the undeformed, uncontracted, stress-free disk. In the virtual configuration, most contraction occurs towards the periphery, leading to an incompatible body. The prestrain is explicitly shown in the inset, where γ = 1 (no prestrain) at the center and γ = 0.5 (contraction) at the boundary of the disk. The result in the current configuration is a residually stressed body, with tensile hoop stress (σ_θ > 0) at the disk boundary, and compressive stress (σ_R < 0, σ_θ < 0) at the disk center. Due to the tensile hoop stress at the boundary, the disk would open if incised in the periphery. (b) We consider the case of anisotropic (γ_R ≠ γ_θ) but spatially homogeneous (dγ_R/dR = dγ_θ/dR = 0) prestrain. This corresponds to pizza-slice shaped pieces being cut out in the virtual configuration. The resulting stress field is qualitatively the same as in the heterogeneous case. (c) We consider the case in which the outer part of the disk has a different prestrain than the inner part, prestrain being isotropic. The scenario is a discrete version of the heterogeneous case. The hoop stress is discontinuous.
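Returning to the 1D system of Fig. 1a, the force balance between the active and passive springs can be solved in a few lines. This is our own minimal sketch (all stiffness and length values are arbitrary illustrations, not from the review): it computes the equilibrium length of the parallel pair and checks that the contracted active element ends up in tension while the passive one is compressed, consistent with Eqs. (1)-(2).

```python
# 1D parallel active/passive springs (Fig. 1a), linear (Hookean) elements.
L0 = 1.0            # common initial rest length
lam_a = 0.8         # active prestrain (contraction: lam_a < 1)
L = lam_a * L0      # new rest length of the active element
E_active, E_passive = 1.0, 1.0   # arbitrary stiffnesses (stress units)

# Force balance with no external load: E_a*(l - L)/L + E_p*(l - L0)/L0 = 0
l = (E_active + E_passive) / (E_active / L + E_passive / L0)

sigma_active = E_active * (l / L - 1.0)     # > 0: active element in tension
sigma_passive = E_passive * (l / L0 - 1.0)  # < 0: passive element compressed
print(l, sigma_active, sigma_passive)       # the two stresses sum to zero
```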
In this view, prestress will exist in the system due to the incompatibility [Eckart, 1948, Skalak et al., 1997, Aharoni et al., 2016, Truskinovsky and Zurlo, 2019, Lee et al., 2021] of the anelastic strain F_a: for instance, if the system is composed of multiple components with different incompatible reference configurations (F_a in layer 1 is different from F_a in layer 2), or anisotropy (F_a in, e.g., the radial direction does not match F_a in the hoop direction), or some heterogeneity in the anelastic strain (say, F_a(x) ≠ F_a(x + Δx)), which could capture a spatial gradient in growth or active contraction. Fig. 2 illustrates how residual stress from different patterns of prestress can be created from the anelastic point of view.

A distinctly different possibility for building prestress is through changes in the microstructure due to rearrangements in the network. An example are T1 transitions, which are neighbour exchanges between cells in epithelial tissues, see section 5.3. We offer to name this type of prestress topological prestress. We define it as prestress that is added to or removed from an interconnected network of mechanical elements by breaking and reconnecting network elements rather than by prestressing individual elements. For instance, a passive T1 transition relaxes the stress in the system purely by exchanging which cell is connected to which, and not through modifying the reference configuration of any of the cells. The concept of a microstructural rearrangement leading to topological prestress is illustrated in Fig. 1c. A network of three springs is presented in the initial configuration, but breaking and reconnecting bonds changes the network topology in the virtual configuration (topology refers to the connectivity of a network). Such changes of connectivity, which are meant to describe networks of discrete elements such as cell adhesions or polymer crosslinks, are challenging to describe with a continuum field like F_a that is meant to describe the larger tissue scale. For example, discrete deformations like slip lines in crystal plasticity have been successfully described with the continuum framework F = F_e F_a, where F_a describes the macroscopic plastic deformation, F_e the elastic deformation, and F the total deformation [Reina and Conti, 2014, Reina et al., 2016]. But for biological tissue, which is generally amorphic and has no crystalline structure, the definition of macroscopic topological prestrain has not fully been achieved yet, although it is an active area of research [Chenchiah and Shipman, 2014, Murisic et al., 2015, Erlich et al., 2020, Kupferman et al., 2020].

Dynamic description

Both the active and passive elements of the system can exhibit a time-dependent behaviour. The growth rate and actomyosin contractility both depend on some nonmechanical processes, such as protein synthesis, nutrient intake, or ATP hydrolysis. All these are tightly regulated by biochemical pathways in physiological conditions, and in large part the dynamics of the mechanical systems can be enslaved to biochemical clocks [Michaux et al., 2018, Nishikawa et al., 2017, Heer et al., 2017]. Active elements are also sensitive to the mechanical context. For instance, individual molecular motors are known to stall beyond some maximal load [Liepelt and Lipowsky, 2009].
In the context of growth, the timescale is rather large, since it is that of the cell division cycle, which takes place over hours or days. Therefore, the passive elements are considered to be always at equilibrium. On the other hand, regimes of cell motility often rely on the dynamics of the passive component with a constant prestress. This may be required to achieve movements which are faster than the rates at which prestress can be created. This is the case e.g. for carnivorous plants [Forterre et al., 2005], where an elastic instability is used to suddenly release elastic energy that had been stored by the slow build-up of prestress. While the full complexity of interacting timescales between the active prestress one and the passive viscoelastic ones is encountered in some cases, we will focus here on models that describe cases where they are sufficiently well separated. In the case of growth, the system is often considered to be purely elastic, and the models thus focus on the timescale of the evolution of prestrain. In the case of contractile networks, viscous dissipation in the microstructure is often important for the observed dynamics and sets their timescale, whereas the active stress is often assumed to be slowly varying.

Dynamics governed by the active component: the example of growth laws

The timescale of elastodynamics in soft tissue (i.e. wave propagation in soft elastic media) is on the order of milliseconds, and a typical viscous time scale due to internal friction is on the order of seconds to minutes. The timescale of growth, on the other hand, varies from minutes (doubling time of Escherichia coli) to years (slow growing tumours). Growth is thus a case where the separation of timescales is generally sufficient to consider that the passive components instantaneously reach their equilibrium configuration [Goriely, 2017]. The dynamics is then governed by a law that prescribes the evolution of the active component's prestrain as a function of the current configuration. Thermodynamic arguments based on the entropy inequality or a dissipation principle motivate appropriate forms of such a law [Epstein and Maugin, 2000, Lubarda and Hoger, 2002, DiCarlo and Quiligotti, 2002, Ambrosi and Guillou, 2007, Ganghoffer, 2010]. By following a standard set of arguments and derivations, one arrives at a variant of the growth law

Ḟ_a F_a^{-1} = K (σ_E - σ_E*),   (3)

where σ_E = (W I - F_e^T ∂W/∂F_e)/ρ_r is the Eshelby stress, W is the strain-energy density, and ρ_r the density in the virtual configuration. The homeostatic Eshelby stress is σ_E* and K is a positive-definite coefficient matrix. The principle of homeostasis states that organisms have the ability to self-regulate some of their properties so as to optimise function in a physiological state, such as the ability of mammals to maintain a constant body temperature. In the context of mechanics, homeostasis can be understood as a living tissue's ability to grow and remodel to accommodate a preferred (homeostatic) stress, i.e. to reshape itself to reduce the difference between its actual stress and the a priori known or genetically encoded homeostatic stress [DiCarlo and Quiligotti, 2002, Erlich et al., 2019].
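To illustrate how a homeostatic growth law of the type (3) behaves, here is a minimal 1D sketch of ours (not from the review, and with arbitrary parameter values): a clamped bar of fixed total stretch grows according to λ̇_a = k λ_a (σ - σ*), with a Hookean stress standing in for the Eshelby stress. The stress relaxes towards the homeostatic value σ* as the prestrain evolves.

```python
import numpy as np
from scipy.integrate import solve_ivp

E, k, sigma_star = 1.0, 1.0, 0.1   # modulus, growth rate constant, homeostatic stress
lam_total = 1.3                     # clamped boundary: total stretch held fixed

def rhs(t, y):
    lam_a = y[0]
    lam_e = lam_total / lam_a       # 1D version of F = F_e F_a
    sigma = E * (lam_e - 1.0)       # Hookean proxy for the driving stress
    return [k * lam_a * (sigma - sigma_star)]   # 1D homeostatic growth law

sol = solve_ivp(rhs, (0.0, 50.0), [1.0])
lam_a_end = sol.y[0, -1]
print(lam_a_end, E * (lam_total / lam_a_end - 1.0))  # stress -> sigma_star
```

At steady state the elastic stretch satisfies λ_e = 1 + σ*/E, so the prestrain converges to λ_a = λ_total/(1 + σ*/E); growth stops exactly when the homeostatic stress is reached.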
Growth laws of the type (3) which employ a homeostasis mechanism have been applied to morphogenesis problems, like sea urchin gastrulation [Taber, 2009], the formation of ribs in Ammonite's seashells [Erlich et al., 2018], and the intestinal crypt [Almet et al., 2021], as well as other applications such as wound healing [Bowden et al., 2015, Taber, 2009] and discrete networks such as plant cell networks [Erlich et al., 2020].

Dynamics governed by the passive component: the example of contractile networks

On the other hand, there are systems in which the limiting rate of strain is set by the passive component. Large amplitude motion, such as muscle contraction or intracellular retrograde flow, could not take place in a purely elastic medium. Indeed, in the elastic models of Fig. 1a, an obvious upper bound for the amplitude of strain is the magnitude of the prestrain. However, if the passive spring is replaced by a viscous dashpot (Fig. 1d), then a constant rate of strain is achieved. This modelling is consistent for instance with the theory of sliding filaments for muscles, as delineated by [Huxley, 1957], where the regime of maximum contraction speed is explained in terms of a balance between active stress and a sliding friction. This compound element is thus governed by an equation of the form

$$\sigma = \sigma_a + \eta\, \dot{\varepsilon}. \qquad (4)$$

Here we find a striking similarity with Eq. (3), with the difference that the strain rate of the whole system appears directly. In both cases, as long as the stress differs from a given homeostatic stress ($\sigma^*$) or active stress ($\sigma_a$), an internal length, the virtual length $L$ or a length of 'telescoping' of filaments, is being adjusted at a rate set by a parameter which has the dimensions of a viscosity. We will come back to the molecular-scale understanding of the sliding friction in section 4.1.2. In effect, this sliding friction can be likened to the fluidisation of any viscoelastic liquid beyond its relaxation time. In transiently reticulated networks, such as the actomyosin network, this relaxation time is related to the residence time of crosslinkers [Larson, 1999], and thus the maximum speed of actomyosin contraction can be related to these crosslinker dynamics [Étienne et al., 2015], yielding a Maxwell model for the passive component:

$$\sigma + \tau_a\, \overset{\nabla}{\sigma} = E\, \tau_a\, \dot{\varepsilon}, \qquad (5)$$

where $\dot{\varepsilon}$ is the rate of strain tensor, $E$ the elastic modulus of the crosslinked actin network, and $\tau_a$ a characteristic relaxation time. The time derivative $\overset{\nabla}{\sigma}$ has to be an objective time derivative of the stress tensor $\sigma$; starting from rubber elasticity theory one obtains an upper-convected Maxwell derivative [Yamamoto, 1956, Étienne et al., 2015]. This can be related to the fact that the network structure is based on linear elements under stretch deformation [Hinch and Harlen, 2021]. Note that corotational derivatives are widely used in the field. This constitutive relation is consistent with the general framework of active gels, which provides a thermodynamic framework relating the active prestress $\sigma_a$ to the chemical potential difference associated with the myosin activity. At the molecular scale, the corresponding continuous injection of energy drives these systems out of equilibrium and is at the origin of a spectacular violation of the fluctuation-dissipation relation [Mizuno et al., 2007], although effective equilibrium descriptions can be restored at higher scales [O'Byrne et al., 2022]. Contrary to growing systems, contractile ones generally deform while keeping a constant mass.
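Two limiting regimes of the compound element, both used repeatedly below, follow directly from Eq. (4) (written here in one dimension): free contraction at zero external load, and stress transmission at fixed length,

$$\dot{\varepsilon}\big|_{\sigma = 0} = -\frac{\sigma_a}{\eta}, \qquad \sigma\big|_{\dot{\varepsilon} = 0} = \sigma_a,$$

with $\eta = E\,\tau_a$ for the transiently crosslinked network of Eq. (5): the maximal rate of strain is set by the ratio of prestress to viscosity, and the stress measured at fixed boundaries equals the prestress.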
In order to sustain a deformation rate that will in general not be volume-preserving, the density $\rho$ of the network needs to be actively regulated to a constant value $\rho_0$ by a reaction term, which in its simplest expression writes, in one dimension,

$$\partial_t \rho + \partial_x (\rho v) = -\frac{\rho - \rho_0}{\tau_n}, \qquad (6)$$

where $v$ is the velocity in the $x$ direction. Here $\tau_n^{-1}$ provides another bound for the maximum rate of sustained flow. On the other hand, the reaction term in the mass balance can itself be a source of growth-related prestress. Assume a density-dependent rheology of the material, such as $\sigma = -E(\rho - \rho^*)/\rho^*$ in its simplest form, with $\rho^*$ an equilibrium density. When the density is close to this equilibrium, $\rho \approx \rho^*$, we find again Eq. (4) with $\eta = \tau_n E$ and $\sigma_a = E(\rho^* - \rho_0)/\rho^*$ [Putelat et al., 2018]. Finally, one situation which is encountered in several living systems is a stationary system size emerging from an enduring permanent internal flow regime. It can easily be seen e.g. that a closed system governed by Eq. (6), but with different regulation densities $\rho_0^a < \rho_0^b$ in different geometrical regions $a$ and $b$, will establish a flow from region $b$ to $a$. The total system size will adjust as the combination of the local growth and shrinkage, and there can be geometries and parameters for which this balance yields a constant total size. This sort of dynamic equilibrium will be exemplified below in actomyosin networks and cell spheroids.

Obtaining detailed and reliable expositions of the prestresses which shape living matter has presented a great technical and conceptual challenge. A major difficulty is the large number of components which are in fact represented by the simplified components of Fig. 1. In every cell and tissue, numerous stress-bearing and -generating elements are mechanically coupled in complex (and often unknown) arrangements. Current techniques typically allow us to probe only small subsets of those components at once, so we are liable to overlook the many connected parts which remain invisible. On top of this, of course, the reference configurations of the components of Fig. 1 cannot be deduced from the reference configuration of the ensemble alone; they can only be revealed through perturbations. But living matter has a great propensity to react and adapt to the perturbations we introduce in order to measure, so that there is always a real risk of measuring artefacts. In order to face these challenges, in recent years, a multiplicity of experimental methods have been developed by biologists and physicists to study dynamic prestress. The most commonly used approaches are:

• live imaging of molecular actors generating prestress and subsequent strain of the biological material;
• biological perturbation of prestress generators via drugs or molecular loss of function;
• prestress release by cutting and ablation followed by measurement of resulting strains;
• insertion of or embedding into stress-sensing deformable elements, functioning from the tissue scale down to the subcellular scale.

These developments have been reviewed in detail elsewhere for both cell [Polacheck and Chen, 2016] and tissue scale measurements [Gómez-González et al., 2020] and will be described when required in the sections below.

Prestress in the actin cytoskeleton

The cytoskeleton is made of three categories of dynamic filaments (namely actin, microtubules and intermediate filaments), but only actin and microtubules are found to interact with molecular motors, which are major actors in prestress generation.
We focus here on the actin cytoskeleton, although the framework defined above can be applied to the microtubule network, for instance to understand the force balance within the mitotic spindle [Gay et al., 2012]. The actin cytoskeleton is made of polar semiflexible filaments composed of G-actin monomers, which turn over within filaments on time-scales ranging from seconds to minutes depending on the cell types and actin structures considered [Amato and Taylor, 1986, Elkhatib et al., 2014, Saha et al., 2016, Clément et al., 2017]. Growth rate and geometry of the actin networks are regulated by a variety of actin-binding proteins which can either nucleate, elongate or sever actin filaments, or cap their ends [Pollard, 2016]. Actin also binds to a specific type of crosslinker, the myosin molecular motors, which walk along actin filaments by using ATP hydrolysis (Fig. 3b). Myosin II filaments in particular can attach to two actin filaments thanks to head domains at their two ends. Together, actin and myosin form a zoology of network structures which range from the crystalline structure found in muscle sarcomeres to the less ordered actomyosin cortex, a thin actin gel lying underneath the cell membrane.

Figure 3: Examples of (a,b) subcellular and (c-g) tissue scale deformations due to active prestress related (a-e) to microscale rest shape change or (f,g) to a change of connectivity, which we refer to as topological prestress. (f', f'') and (g', g'') represent how the topological change in the tissue can be obtained by a subcellular active process, involving myosin prestress; however, this level of detail can usefully be ignored when modelling tissue-scale deformations using the concept of topological prestress.

Intermediate in terms of organisation are linear bundles of actin enriched in myosin, such as the so-called stress fibres [Burnette et al., 2014] and the junctional cortex in epithelia [Bertet et al., 2004]. The signalling pathways controlling the formation of these respective organisations are beyond the scope of this review and have been reviewed elsewhere [Tojkander et al., 2012].

Mechanical balance of prestressed actomyosin

Contractile prestress. Importantly, myosin generates contractile prestress within actin networks. The contractile nature of stress fibres was demonstrated by measuring the rate and amplitude of the viscoelastic recoil of individual stress fibres after laser ablation. These two quantities were shown to be reduced when Myosin II activity was inhibited by drug treatment [Kumar et al., 2006]. Furthermore, dose-dependent treatments of Blebbistatin (an inhibitor of Myosin II contractile activity) on single cells isolated in a parallel-plates traction force apparatus revealed that the cell-scale traction exerted by the actomyosin cortex is proportional to Myosin II ATPase activity, indicating that myosin is the main generator of contractile prestress in the cell cortex [Mitrossilis et al., 2009]. It can thus be established that the actomyosin meshwork exerts a contractile active stress, proportional to the chemical potential of myosin, which can be understood as a prestress. How is it that contraction at the cell scale dominates over expansion despite the disordered nature of these networks? Various hypotheses have been proposed. The most documented one posits that because actin filaments buckle under compression, myosin activity will result only in a tensile contribution [Lenz, 2014, Belmonte et al., 2017, Koenderink and Paluch, 2018].
This contractile prestress can be balanced by the mechanical resistance of three types of other mechanical elements: the cell environment, the other cytoskeletal networks and the fluid component of the cytoplasm (or cytosol). The tension-compression balance between tensile actomyosin and the external environment to which cells adhere was revealed thanks to the development of deformable substrates (elastomers, hydrogels, or micro-post arrays). It was shown that adherent cells seemingly "at rest" apply tensile stresses radially directed towards the cell center [Harris et al., 1980, Pelham and Wang, 1997, Dembo and Wang, 1999, Tan et al., 2003]. This stress is transmitted to the substrate at the sites of focal adhesions, mechano-sensitive protein aggregates connecting cells to the ECM [Tan et al., 2003, Burridge and Guilluy, 2016]. Larger deformations are observed in the direction parallel to stress fibres [Mandal et al., 2014]. In line with this, the recoil of the cell substrate away from the site of incision after ablation of stress fibres demonstrated the mechanical connection between those dense actomyosin fibres and the ECM [Kumar et al., 2006]. Actomyosin tensile stress could also be balanced by compression of other cytoskeletal components. Among them, the microtubule network was suggested as a major mechanical actor bearing actomyosin tensile prestress, forming a biological illustration of the tensegrity model [Ingber, 1993, Ingber et al., 2014]. Depolymerising microtubules using a drug increases traction forces on the substrate, suggesting that the compression borne by microtubules is transferred to the substrate [Stamenović et al., 2002]. Actomyosin pretension can however also be biochemically affected by microtubule depolymerisation [Rape et al., 2011]. Finally, the tension developed within the actomyosin cortex can be balanced by the cytosol, an incompressible fluid which permeates the whole cytoplasm of the cell [Salbreux et al., 2012]. The cytosol is restricted from escaping by the cell plasma membrane, to which the actomyosin cortex is adhered via the ERM family of proteins [Mangeat et al., 1999]. This mechanical balance is spectacularly broken when this cortex or its adhesion with the plasma membrane is locally ruptured: on such occasions, a high-curvature spherical protrusion, called a bleb, forms at the wounded site and inflates with cytosol from the cell body [Charras et al., 2008, Tinevez et al., 2009].

Figure 4: (a) In podosomes, protruding forces applied by the actin core (growing at a rate $v_p$) onto the substrate are balanced by a contractile actomyosin network of prestress ($\sigma_a$), organised as a dome and attached to the substrate at the periphery via adhesion proteins [Labernadie et al., 2014]. (b) In adherent cells, actomyosin prestress ($\sigma_a$) is balanced by cytosol pressure ($\Delta P$) and substrate deformation. Cell shape is further refined by anisotropic and heterogeneous actomyosin network contraction. Here, orthoradial stress fibres are connected to radial stress fibres, which are attached to the substrate via adhesion proteins at the cell periphery [Burnette et al., 2014]. (c) In the zebrafish semicircular canal, pressure ($\Delta P$) is generated within the ECM via synthesis of hyaluronan, pumping in interstitial fluid. This deforms the overlying epithelium, which is further shaped by an anisotropic prestress ($\sigma_a$) generated by actin- and cadherin-rich protrusions [Munjal et al., 2021].

Reducing myosin activity, e.g.
with the fittingly-named drug Blebbistatin, decreases the volume and rate of expansion of blebs, evidencing that the pressure driving the flow is due to the prestressed actomyosin cortex. The actomyosin cortex and stress fibres constitute a continuous network and both contribute to cell-scale prestress [Labouesse et al., 2015, Vignaud et al., 2021], although their mechanical properties and regulatory pathways are different [Labouesse et al., 2015]. The spatial arrangement of these prestressed structures in equilibrium with the passive elements described above gives rise, for instance, to the typical three-dimensional shape of crawling cells. In these cells, a flat compartment is formed at the cell front and an inversion of curvature of the cell profile is observed at the junction between this compartment and the dorsal cortex (see Fig. 4b). This specific shape results from the presence of orthoradial fibres whose pretension opposes cytosol pressure. These fibres are then connected to radial stress fibres which are attached to the substrate near the cell edge via focal adhesions. Depolymerising the orthoradial fibres via biochemical treatment restores a constant curvature along the cell profile [Burnette et al., 2014].

Growth prestress. The actin cytoskeleton can also exert growth prestress. This was shown in vitro, where actin networks nucleated by Arp2/3 under an AFM cantilever [Bieling et al., 2016] or at the surface of spaced magnetic cylinders [Bauër et al., 2017] were shown to generate compression within the network. This effect can be conceptualised as in Eq. (6) (also Fig. 3a), although the growth is often localised at the boundary. In vitro and in vivo, the mechanical activity of network growth is mechanosensitive, as shown by a force-velocity relationship in line with an increase of the network density in response to load [Bieling et al., 2016, Mueller et al., 2017]. The generation of such growth prestress is involved in various biological functions. First, it is at the origin of the motility of the Listeria pathogen [Theriot et al., 1992]. Here, like on spherical beads immersed in actin in vitro [Marcy et al., 2004], growth occurs first homogeneously at the surface, which generates residual stress at the periphery of the network. The network ultimately fails via an elastic instability [John et al., 2008]. This symmetry breaking thus forms an anisotropic gel at the surface of the object, resulting in a directional movement. In mammalian cells, a length increase of actin filaments can increase cortical thickness and, more importantly, counteract tension generated by myosin in the network [Chugh et al., 2017]. The best known example of a cell function in which actin network growth is involved is cell protrusivity, where actin can form a variety of structures pushing the cell membrane forward. This topic is a field of research in itself and we refer the reader to reviews treating it specifically [Blanchoin et al., 2014].

Example: podosomes are shaped by growth and contractile prestress. An example of a prestressed structure combining growth and tensile prestress at the subcellular scale is the podosome, whose mechanics have recently been clarified (see Fig. 4a). Podosomes are micron-scale structures present in various cell types (macrophages, cancer cells, endothelial cells) known to probe cell substrate mechanical properties and to be the site of ECM digestion.
They are formed of a dense core of actin filaments, oriented normally to the substrate, and bound to a corona of radially-oriented filaments which are tangential to the substrate and adhere to it thanks to focal adhesion proteins [Luxenburg et al., 2012, van den Dries et al., 2019]. Protruding forces generated by actin polymerisation within the podosome's core were proposed to be a major contributor to the compression of the substrate [Labernadie et al., 2014]. While this growth-related prestress remains a possible player, recent findings show that the mechanical balance is dominated by the peripheral actomyosin filaments, which exert tensile forces between the tip of the core and the substrate, and hence press the core into the substrate [Jasnin et al., 2021]. This system shows that, despite their complexity, the mechanical equilibrium of biological structures can be unravelled by combining force measurements, imaging and careful biological perturbation experiments.

Dynamics

As mentioned in section 2.2, the dynamics of the system can come either from an evolution of the prestress or from a passive relaxation of the material. With some notable exceptions [Fierling et al., 2022], the evolution of the prestress is generally slower than the relaxation of the material. Indeed, biopolymer networks are in general very dynamic, as they are constantly remodelled by processes of (de)reticulation and (de)polymerisation. While the time scales of ECM proteolysis and synthesis remain largely unknown, the rapid turnover of actin filaments entails its liquid-like behaviour at long times [Kruse et al., 2006]. Its effective viscosity scales like the product of the network's short-time elastic modulus $E$ and its characteristic turnover time $\tau_a$, see Eq. (5). In the muscle, the only crosslinkers between actin and thick filaments are the myosin heads themselves. In order to function as sliding filaments, myosin heads need to cycle between attachment to and detachment from actin in order to perform repeated steps [Caruel and Truskinovsky, 2018], and the dissipation associated with the maximal contraction velocity is due to an internal friction associated with the rate of detachment [Huxley, 1957]. This is also the case in most motile and tissue cells, where the turnover time of myosin was found to be close to that of actin monomers in actin filaments and of α-actinin (one of the major actin crosslinkers) [Khalilgharibi et al., 2019]. This explains persistent flows such as the retrograde flow observed in migrating and spreading cells, whose maximal rate of strain is thus set by the ratio of the prestress $\sigma_a$ to the viscosity, as in Eq. (4) [Kruse et al., 2006, Étienne et al., 2015]. This is also what sets the rate at which actomyosin-rich subcellular components or tissues straighten after having been buckled by external compressive forces [Tofangchi et al., 2016]. It is also of interest to consider the case in which macroscopic-scale deformation of the network is prevented by the boundary conditions. In those conditions, forces at the boundaries will need to balance the internal tension of the network. The energy input from myosin motors will then be dissipated internally by a microscopic-scale creep, corresponding to the elastic energy loss incurred when elements in the network detach while under tension [Huxley, 1957, Étienne et al., 2015]. In this case, the stress measured at the boundaries will be equal to the prestress.
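These two regimes, and the crossover between them when the element contracts against a compliant environment, can be illustrated with a minimal numerical sketch: a one-dimensional active Maxwell element pulling on an external spring of stiffness $k$, with the contractile strain capped to mimic the finite shortening available to a cell. All parameter values and the strain cap are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np

# 1D active Maxwell element (viscosity eta = E*tau, active prestress
# sigma_a) contracting against an external spring of stiffness k
# (expressed in stress units). The contractile strain is capped at
# eps_max to mimic the finite shortening available to a cell.
E, tau, sigma_a = 1.0e3, 10.0, 100.0   # Pa, s, Pa (illustrative)
eta = E * tau                          # Pa.s
eps_max = 0.05                         # maximal contractile strain
dt, t_end = 0.05, 5.0e3                # s

def steady_traction(k):
    """Stress transmitted to the spring once the element has stalled."""
    eps = 0.0
    for _ in range(int(t_end / dt)):
        # Constitutive law tau*dsigma/dt + sigma = eta*deps/dt + sigma_a,
        # combined with the force balance sigma = -k*eps, gives:
        deps = -(sigma_a + k * eps) / (eta + tau * k)
        eps = max(eps + dt * deps, -eps_max)  # cap the shortening
    return -k * eps                           # traction borne by the spring

for k in [1e1, 1e2, 1e3, 1e4, 1e5]:
    print(f"k = {k:8.0f} Pa  ->  steady traction = {steady_traction(k):7.2f} Pa")
# The traction grows linearly with k (~ k*eps_max) for soft environments
# and saturates at sigma_a for stiff ones: the boundary stress equals the
# prestress in the rigid limit, as stated above.
```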
Note that in energetic terms, work is continuously performed internally by the myosin in both the case of zero external load and that of zero contraction of the system, the corresponding energy being respectively dissipated by internal friction and by internal creep [Étienne et al., 2015]. These observations demonstrate a peculiarity of actomyosin networks, where prestress and dissipation possess common molecular origins. At the microscopic scale, active processes increase the intensity of fluctuations in the medium. This has been evidenced both for the out-of-equilibrium (de)polymerisation of cytoskeletal filaments [Robert et al., 2010] and for myosin activity [Mizuno et al., 2007], by comparing the fluctuations to those that would be expected from thermal forces only. The actomyosin cortex and myosin contractility also participate in the dynamic equilibration of prestress when substrate stiffness is varied. Cell traction assays in parallel plate geometry performed in various contractile cell types revealed that the loading rate during traction increased with substrate stiffness, with the same trend as the maximum traction force [Mitrossilis et al., 2009, Lam et al., 2011]. While the models most often evoked to explain the variations of cell tractions in response to changes in substrate stiffness at long time-scales rely on mechanotransduction and subsequent changes in biochemical activity, it was found that the loading rate adapts to real-time changes of cantilever stiffness on a sub-second time-scale [Mitrossilis et al., 2010, Crow et al., 2012], making purely mechanical explanations appealing [Fouchard et al., 2011]. Independently of the adaptation of myosin prestress to mechanical cues, a simple model such as Eq. (5) already predicts a biphasic behaviour, with a maximal traction for very large stiffness of the exterior and a traction proportional to that stiffness when it is below a threshold [Étienne et al., 2015]. When the actomyosin cortex continuously adheres to a deformable substrate, the interplay of this system with the elastic length scale of the substrate yields complex interactions, which translate into a biphasic behaviour of the cell crawling velocity as a function of the elastic modulus of the substrate [Chelly et al., 2022]. The combination of growth at the leading edge of cells and active contraction at the back is the hallmark of cell crawling on a flat substrate [Mitchison and Cramer, 1996, Kruse et al., 2006], these two effects giving rise to a dynamic equilibrium setting the size of the system [Étienne et al., 2015, Ambrosi and Zanzottera, 2016].

Prestress in fibrous tissues

The ECM is a biopolymer network made of assemblies of filamentous proteins (the most abundant being collagen I), proteoglycans and glycosaminoglycans (GAGs). It is the main component of fibrous tissues (also called soft connective tissues) and of the basement membrane on which two-dimensional epithelial tissues lie. In most tissues, matrix content is dominated by collagen (especially type I collagen), which has the ability to self-organise into fibrils and fibres through physical bonds [Giraud-Guille et al., 2008]. These fibres can measure up to dozens of microns and often form a crosslinked gel in vivo. In soft fibrous tissues, the ECM is synthesised by cells of the fibroblast family, which are embedded in and mechanically connected to the matrix. Here, both cells and ECM lie in an interstitial fluid.
Mechanical interactions between these three phases generate a large variety of architectures [Wershof et al., 2019] and mechanical properties [Levental et al., 2007], which vary from organ to organ and according to patho-physiological conditions. The difficulty of performing live imaging in these three-dimensional systems, combined with their physical and biological complexity, has so far hampered the understanding of their active mechanical properties. Nevertheless, recent data show how prestress can be generated in fibrous tissues and affects tissue development and pathology. Despite being largely composed of inert protein, the ECM is not mechanically passive.

Contractile prestress. First, contraction of the ECM can result from variation in water content within the interstitial fluid. In particular, collagen molecules, known to resist tensile stresses in fibrous tissues, change conformation with decreasing water content of the surrounding medium. This effect induces high contraction of the network, which could be important for the function of the load-bearing tendon. But the ECM is also made contractile through the traction forces that fibroblasts exert within it. Since fibroblasts are polarised mechanically and bound to the ECM through focal adhesions, they apply force dipoles on the collagenous network, acting like active crosslinkers. Isometric contraction of fibroblast assemblies self-organising in collagenous matrix could be measured between parallel cantilevers and was shown to be dependent on myosin activity [Delvoye et al., 1991, Legant et al., 2009]. In these systems, like in the actomyosin cortex, the nonlinear mechanical properties of collagenous networks resulting from the semiflexible nature of the fibres are thought to play a major role in the propagation and amplification of contractile stress at the tissue scale [Ronceray et al., 2016, Han et al., 2018]. Because cell traction forces are sensitive to the stiffness of the extracellular environment [Discher et al., 2005, Mitrossilis et al., 2009], stiffening of the ECM fibres generated by cell traction forces triggers a positive feedback loop amplifying stiffening in fibrotic reactions [Calvo et al., 2013] or during directed cell migration [Van Helvert et al., 2018]. This mechanical activity of fibroblasts contracting the ECM influences tissue function during development and in adulthood. First, in confirmation of a long-standing hypothesis [Harris et al., 1984], recent work shows that the patterning of multicellular aggregates in the chick dermis initiating feather follicles depends on fibroblast cell contractility, preceding differentiation of the epidermis [Shyer et al., 2017]. This indicates that fibroblast contractile activity can indeed act as an organising factor of fibrous tissues during development. Second, during lymph node physiological function, immune cells signal their arrival to the fibroblast reticular cells and tune their contractility in order to relax the tissue. This is thought to help maintain lymph node integrity while the lymph node expands [Acton et al., 2014]. Finally, cancer-associated fibroblasts, which shape the fibrous tissue of the tumour microenvironment and are more contractile than normal fibroblasts [Sahai et al., 2020], could also apply active stresses at the global scale of the tumour.
Along this line, it was shown that their collective contraction, concomitant with an orthoradial assembly around tumour aggregates, compresses tumour compartments in vivo, as well as cylindrical micropillars in vitro [Barbazan et al., 2021].

Growth prestress. The interstitial fluid can generate growth prestress within the ECM. Indeed, the osmotic pressure within the ECM can be tuned in particular by the presence of GAGs and proteoglycans, thanks to their long, negatively charged chains. This property is used during developmental morphogenesis, where localised synthesis of hyaluronan (a common GAG) contributes to the bulging of an epithelial monolayer lying on top of the swelling ECM [Munjal et al., 2021] (see Fig. 4c). Such water influx is then balanced by the matrix fibres under tensile load [Ehret et al., 2017]. Thus, in contrast to the poro-elastic behaviour observed in the cell cortex, tension relaxation correlates with expelled interstitial fluid [Ehret et al., 2017]. In the context of cancer, the deregulated fibrous tissue defining the mechanical properties of the tumour stroma (i.e. the tumour micro-environment) is also affected by an increase in hyaluronan synthesis and a subsequent elevated interstitial fluid pressure. In pancreatic adenocarcinoma, this effect generates collapse of blood vessels, which could participate in organ loss of function and impairs the delivery of therapeutic agents [Provenzano et al., 2012].

Prestress in cellularised tissue

At the tissue scale, a new structural unit becomes fundamental: the cell. From a purely mechanical viewpoint, one crucial aspect is that cells tessellate the space occupied by a tissue into units of regulated volume, which have interactions via adhesion molecules. Being functional units, cells may have individual biochemical activity which can translate into mechanical activity, in turn giving rise to strains that can alter the tissue geometry. This biochemical activity can be patterned at the scale of single cells. For example, in epithelial tissues, which typically form thin sheets of a single cell layer, apical (cell 'top' surface), basal ('bottom') and lateral (where cells are in contact) surfaces are, to some extent, independent biological and hence mechanical units. Since tight junctions between cells allow tissues to form impermeable layers, cells which actively and directionally pump ions can in this way impose a pressure difference between apical and basal surfaces, which can generate and maintain a liquid-filled cavity called a lumen. Lumens, like neighbouring tissues and the ECM network synthesised by cells, in turn impact tissue mechanics through boundary conditions that vary in space and time. Cells also define a tissue topology via the neighbour relations created by cell-cell adhesion. This topology can evolve over the course of time as cell-cell junctions are assembled and disassembled, which can in turn give rise to further tissue-scale strain that can be modelled as the result of a topological prestress, as opposed to contractile or growth prestress. With these differences in microstructure come processes with new timescales [Khalilgharibi et al., 2016]. While a single actin filament may turn over in seconds, a cell-cell junction requires at least minutes to disassemble and reassemble when a cell changes neighbour [Tlili et al., 2020, Clément et al., 2017]. The creation of a new junction during cell division similarly occurs over minutes.
However, the full timescale of the cell cycle, which we may consider to encompass the prestress changes associated with cell growth and cytokinesis, varies greatly between animal cell types, from minutes to years. These cell-level processes may be synchronised throughout a tissue, generating tissue-level deformations on a similar timescale, or they may be asynchronous and so add up gradually over longer timescales.

Supracellular scale prestressed networks in interaction with their environment

Although actomyosin is restricted to the cell cytoplasm, it can form supracellular structures by means of adhesion molecules that mechanically connect one subcellular network to that of a neighbouring cell, forming multicellular structures under tension [Fernandez-Gonzalez et al., 2009]. The emergence of such a mechanical continuum is illustrated during the reformation of a dissociated epithelial monolayer in vitro by an increase of apparent tissue stiffness, which is coincident with the development of cell-cell junctions and dependent on actomyosin activity [Harris et al., 2014]. The tissue-level prestress generated by the continuous actomyosin network can be measured directly in in vitro epithelial monolayers devoid of ECM and suspended between the arms of a force cantilever. Here, a ramp of compressive strain imposed at the tissue boundary results in a linear decrease of tissue stress. As would be the case for a prestressed thin elastic plate, the tissue then buckles when it reaches a compressed state. Strikingly, the dynamics of stress recovery upon a rapidly applied compressive strain match those of an isolated actomyosin network, indicating that, in this case, the cellularised structure has little impact on the overall mechanics. Such a response, consistent with a continuous model of an epithelium, has also been observed in vivo, in several Drosophila epithelia, where anisotropies of prestress were revealed by the recoil of circular regions of tissue after laser-cutting [Bonnet et al., 2012]. Just as epithelia can change length and generate prestress through their actomyosin networks, they can also be connected to an active element and play a passive role. Their challenge is then to bear the stress generated, in order to maintain epithelial integrity [Bonfanti et al., 2022]. This is illustrated by the blisters that epithelia form under active ion pumping directed towards their basal side. In analogy with the pressure in the cell cytoplasm, the pressure within the blister is balanced by the tension in the actomyosin network. Under increased stress, cell deformations can reach hundreds of percent. When the actin pool is exhausted, so that the filament network can no longer cover the cell surface area, the keratin network (an intermediate filament network connected through another type of cell-cell junction called desmosomes) takes over to resist mechanical stress [Latorre et al., 2018]. While the passive response of an epithelium leads to a dome shape in the system above, an active participation of the epithelium is required to generate a tubular shape, as is the case in the zebrafish inner ear [Munjal et al., 2021]. Here, anisotropic multicellular cables which are both contractile and adhesive form in the direction orthoradial to the cylinder axis, and so, by breaking the symmetry of prestress, allow an anisotropic shape to arise. Notably, in this example the element driving growth is an increase of osmotic pressure in the ECM, actively regulated by cell synthesis of hyaluronan.
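The force balance in such a pressurised blister can be sketched with the law of Laplace (a thin, spherical-cap idealisation; the geometry of real blisters is more complex): the excess pressure $\Delta P$ is balanced by the cortical tension $T$ through

$$\Delta P = \frac{2T}{R},$$

with $R$ the radius of curvature, so that for a given tension a more strongly curved protrusion sustains a larger pressure difference.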
Spatially patterned prestress and tissue bending

The generation of anisotropic prestress, as in the zebrafish inner ear, is one example of a common strategy of spatially organising prestress in order to control tissue shape. The most studied example is perhaps epithelial tissues, where prestress, tangential to the plane of the tissue, varies along its transverse direction and drives bending via differential prestress. This aspect is particularly important during animal development, and detailed reviews of this process have been made by others [Pearl et al., 2017, Tozluoǧlu and Mao, 2020]. For the sake of this review, note that a variety of fine-tuned prestress regulation mechanisms have been documented, including an increase of prestress in the apical domain [Martin et al., 2009], an increase or decrease of prestress in the basal domain [Sidhaye and Norden, 2017, Krueger et al., 2018], and an increase in lateral prestress [Brodland et al., 2010, Gracia et al., 2019]. This differential prestress can be revealed by laser ablation in cultured suspended epithelia by measuring the spontaneous curvature generated orthogonal to the tissue plane at a newly created free edge. This also allows for measurement of the out-of-plane forces involved, by unfurling the curled tissue with a force cantilever. Three-dimensional geometries other than linear folds can be produced through the same mechanism, via patterning of differential prestress throughout the plane of an epithelium. For instance, in 2D-cultured gut organoids, three cell types are organised within the tissue plane into concentric circular islands. Traction force microscopy and laser ablations revealed that these regions display different mechanical behaviours, with apical-basal myosin polarisation in the central region leading to the doming of crypts (cup-shaped structures of the digestive tract). In 3D gut organoids, in which cells embedded in ECM form cysts, experiments revealed that crypt morphogenesis is also driven by membrane transporters which cause liquid transfer from the crypt cavity to the tissue [Yang et al., 2021]. Interestingly, the apical-basal polarisation of the crypt region was found to be large enough, in comparison to that outside of the crypt region, for the overall morphology to be robust to organoid volume changes [Yang et al., 2021]. Recent advances in optogenetics (optical activation of engineered biomolecules) have also allowed for the experimental control of this patterned differential prestress. For example, at the stage preceding gastrulation in Drosophila, localised apical activation of RhoA, an activator of myosin contractility, was shown to be sufficient to initiate folding in a variety of directions and locations where invagination does not normally occur [Izquierdo et al., 2018]. In vitro measurements showed that the spontaneous curvature generated by active differential prestress is so high (on the order of the inverse of tissue thickness) that a competition between in-plane elastic energy and bending energy takes place. In this way, tissue folding is continuously modulated by external tension, and reciprocally. Nevertheless, differential prestress is not the only way to achieve folding. Recent data from ventral furrow formation in the Drosophila embryo showed the importance of the global ellipsoidal geometry of the embryo for an elongated ventral patch of myosin to achieve a fold along its long axis.
Indeed, heterogeneous prestress at the surface of a thin shell respecting this geometry leads to surface buckling initiating folding with the correct pattern of strain and dynamics [Fierling et al., 2022].

Growth prestress

Physiological growth in living tissues often results in material being added (or lost) in a non-uniform manner, forcing the neighbouring tissue to accommodate the newly added material through elastic deformation. These nonuniformities, as illustrated in Fig. 2, include heterogeneous, anisotropic and differential prestrain $F_a$. As discussed in Section 2, this nonuniform prestrain is revealed by residual stress: an internal stress that remains in an originally unloaded configuration when all external loads are removed. Tissues actively build these internal stresses both during morphogenesis (when they rapidly change shape and add mass) and in the adult physiological state (when mass and volume changes serve the purpose of maintenance and are comparatively small). A classic example is the residual stress in arteries, which has been theoretically and experimentally described by Fung and others [Fung, 1991, Vaishnav and Vossoughi, 1987]. The observation is that arteries, when radially cut, open up due to compressive stress built in the hoop (circumferential) direction. The opening angle can be used to quantify residual stress [Fung, 1991]. Experiments also suggest that arteries are residually stressed in the axial direction [Goriely and Vandiver, 2010]. In a series of seminal studies, Fung and co-workers demonstrated residual stress in other cardiovascular systems such as the heart [Omens and Fung, 1990], veins [Xie et al., 1991], and the trachea [Han and Fung, 1991]. Residual stress was also identified in other physiological tissues and organs such as the brain [Budday et al., 2014] and bones [Yamada et al., 2011], as well as in morphogenetic systems such as the optic cup [Oltean et al., 2016] and the developing embryo [Beloussov and Grabovsky, 2003]. It was also found in pathological tissue such as solid tumours [Ambrosi and Mollica, 2004]. Proliferation also produces compression within the core of tumours and orthoradial tension at the periphery, as revealed by cutting an excised tumour along its radius [Stylianopoulos et al., 2012]. In this context, growth-related prestress is referred to as 'solid stress'. Apart from physiological systems, growth-induced prestress has been measured in a multitude of ways in cultured systems. Stress in growing multicellular spheroids has been measured with great precision: the growing cells were encapsulated inside permeable, elastic, hollow microspheres which deformed as the spheroids grew inside of them, allowing the traction forces to be reconstructed [Alessandri et al., 2013]. The external pressure applied on the spheroid by the elastic coating leads to a steady-state size of the spheroid, in which there is an equilibrium between a necrotic core and a proliferating rim. The existence of a pressure at which spheroids reach a stationary size was theoretically proposed by Basan et al. [2009] and independently addressed experimentally by applying osmotic pressure to the spheroid [Montel et al., 2011]. A similar technique for measuring growth-induced stress, by the elastic deformation of the environment, was demonstrated for yeast cell colonies (which do not form cell-cell junctions) [Delarue et al., 2016].
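An order-of-magnitude sketch of how such elastic capsules report pressure can be given with a thin-shell idealisation (an illustration, not the full analysis of Alessandri et al. [2013]): for a spherical shell of radius $R$, thickness $h \ll R$, Young's modulus $E$ and Poisson ratio $\nu$, an internal pressure $P$ produces the dilation

$$\frac{\Delta R}{R} = \frac{(1 - \nu)\, P R}{2 E h},$$

so that the measured dilation $\Delta R / R$ gives direct access to the pressure exerted by the growing spheroid.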
A distinctly different method was used in [Cheng et al., 2009], where three-dimensional aggregates of tumour cells were co-embedded with fluorescent micro-beads in agarose gels. The displacement of the beads allows a reconstruction of the spatial stress distribution. Prestress generated by proliferation can also be observed in two-dimensional tissues, for example during the early development of the Drosophila wing disc. In a 3D finite element simulation of the wing disc as one heterogeneous layer, the growth rate and the mechanical coupling to the elastic basement membrane of the tissue provoke the formation of spatially regulated folds [Tozluoǧlu et al., 2019]. The doming of a part of the wing disc, the wing pouch, was recently explained through a combination of differential growth and differential growth anisotropy between tissue layers [Harmansa et al., 2022]. Similarly, differential growth rates between adhered tissues have been shown to regulate the looping of the chick gut [Savin et al., 2011], the formation of villi in the chick gut [Shyer et al., 2013] and the gyrification of the brain [Tallinen et al., 2016]. Up to here, we have considered growth to be dictated by the proliferation rate, but an increase of cell density can occur independently of proliferation. For instance, during the formation of the zebrafish optic cup, the migration of cells from the outside of the organ is analogous to local tissue growth. This process increases tissue curvature and is required to produce correct bending of the organ, in parallel to differential contractility [Sidhaye and Norden, 2017]. Finally, the removal of cells, through cellular processes such as apoptosis, can be seen as reverse growth and can play equally important roles during tissue morphogenesis. For example, in a transient extra-embryonic epithelial tissue named the amnioserosa, which sits between two embryonic epithelia during Drosophila embryogenesis, the gradual apoptosis of amnioserosa cells drives the final stage of dorsal closure, in which those two sheets are brought together [Pasakarnis et al., 2016].

Topological prestress

The cellular microstructure of tissues offers a further mechanism through which to generate or relax prestress: changing the topology through cell-cell neighbour exchange events, termed intercalations (or T1 transitions in the foam literature) [Blanchard, 2017]. At the subcellular scale, this process requires the coordinated action of many biomolecular players to disassemble and reassemble cell-cell junctions, and has often been found to depend on the active generation of stress to shrink or expand junctions [Bertet et al., 2004, Nestor-Bergmann et al., 2022]. The resulting T1 defines an orientation, as specified by the orientation of the removed and of the added junctions. In some tissues, T1 transitions occur throughout the tissue with no preferred orientation, in which case they relieve local cell packing stresses, allowing tissue ordering [Curran et al., 2017] or facilitating flow during migration [Tlili et al., 2020]. The transition from such a liquid-like state to a 'jammed' state has been shown theoretically to relate to cell density, junctional tension and fluctuations [Lawson-Keister and Manning, 2021]. However, the most dramatic examples occur when intercalations are globally aligned throughout a tissue, such as during Drosophila germband extension [Wieschaus, 2004, Tetley et al., 2016].
This results in creating neighbour relations which, in the initial configuration of the tissue, strain the cells in the transverse direction: their relaxation thus entails a tissue-scale deformation in the longitudinal direction [Collinet et al., 2015], see Fig. 3g. Similarly, multilayered epithelia can expand (here isotropically in the plane) through 'radial intercalation', in which cells in lower layers intercalate into upper layers, creating a precompression which is relaxed by expansion [Szabó et al., 2016]. Since these are relaxation processes following out-of-equilibrium topological changes, they can be conceptualised as topological prestress. A major challenge in the analysis of such deformations is to disentangle this process from boundary stresses that may also act on the tissue [Lye et al., 2015, Collinet et al., 2015]. Although epithelial topology is often represented as a two-dimensional network, a full three-dimensional treatment revealed different connectivity in the apical and basal domains [Gómez-Gálvez et al., 2018]. Topological transitions were indeed observed along the apico-basal axis in a range of in vivo tissues, resulting in a three-dimensional cell shape named a scutoid. This solution is favourable when tissues are curved by different amounts with respect to their principal planar axes (i.e. tubular rather than spherical), unless the radius of curvature is large compared with the tissue thickness. It remains to be found whether these intercalations could drive bending itself, rather than simply relax stress. A separate class of topological change which affects prestress is the introduction of a new junction into the network, which occurs during cytokinesis, see Fig. 3f. Again this is an oriented process, as the degree of freedom is the orientation of the new interface. Indeed, much has been discovered about the cell- and tissue-level signals read by a cell when choosing a division orientation. These signals include biochemical cues such as tissue polarisation [Gong et al., 2004, Gho and Schweisguth, 1998], as well as mechanical and geometrical signals such as tissue stress [Niwayama et al., 2019, Fink et al., 2011, Scarpa et al., 2018] and cell shape [Wyatt et al., 2015, Bosveld et al., 2016]. In turn, the choice of division orientation alters prestress. A prestrained mother cell, depending on the choice of division orientation, could be divided into two daughter cells with either an increased or a reduced anisotropy in shape. In tissues, the latter choice is usually observed, so as to homeostatically regulate cell shape anisotropy [Mao et al., 2013, Wyatt et al., 2015, Xiong et al., 2014]. In the zebrafish, this alignment of division with cell shape (which coincides with the principal axis of tissue stress) was shown, using laser ablations, to reduce the stress that builds up as a cell layer migrates over the embryo's surface [Campinho et al., 2013]. Similarly, the alignment of cytokineses along a given axis can contribute to driving directional growth [da Silva and Vincent, 2007, Li et al., 2014]. Altogether, a complete description of tissue morphogenesis can be achieved by a linear combination of the active and passive mechanical modules we have described so far (cell stretching, oriented cell division, T1 transitions), provided that one has a good knowledge of the boundary conditions, which can often be dynamic in living systems [Etournay et al., 2015].
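The definition of topological prestress given earlier can be made concrete with a deliberately minimal numerical sketch: a single spring in a network of pinned nodes is disconnected and reconnected, T1-style, without any change to its rest length, and the stress state changes nonetheless. The node positions, stiffness and rest length below are arbitrary illustrative values; real intercalations of course involve force balance and relaxation of all vertices.

```python
import numpy as np

# Topological prestress in its simplest form: reconnecting one spring in
# a network of pinned nodes changes the stress state even though no rest
# length (reference configuration) is modified.
k, l0 = 1.0, 1.0   # spring stiffness and rest length (illustrative)
nodes = {"A": np.array([-0.8, 0.0]), "B": np.array([0.8, 0.0]),
         "C": np.array([0.0, 0.5]), "D": np.array([0.0, -0.5])}

def tension(n1, n2):
    """Tension k*(l - l0) carried by a spring joining two pinned nodes."""
    l = np.linalg.norm(nodes[n1] - nodes[n2])
    return k * (l - l0)

# Before the T1-like swap the spring joins C and D (a 'vertical' junction):
print("C-D tension:", tension("C", "D"))  # 1.0 - 1.0 = 0.0: relaxed
# After the swap the same spring, same rest length, joins A and B:
print("A-B tension:", tension("A", "B"))  # 1.6 - 1.0 = 0.6: under tension

# No element was individually prestrained: stress appeared purely by
# breaking and re-making a connection, and its orientation (a horizontal
# rather than vertical force dipole) reflects the orientation of the swap.
```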
Figure 5: Example biophysical systems where mechanics is governed by heterogeneous, anisotropic or differential prestress of either sign, corresponding to contraction or growth. (a) Contractile actomyosin with local accumulation, in parallel with a length-regulating element and in continuous adhesion with a substrate, generates a friction pattern that enables motility. (b) Anisotropic pretension of the apical surface of cells regulates their shape [Burnette et al., 2014]. (c) Differential prestress between the weakly contractile apical (top) surface and the strongly contractile basal (bottom) surface causes tissue curling. (d) Residual stress due to heterogeneous growth is characterised by cutting experiments in tumours, revealing tensile hoop stress at the periphery [Stylianopoulos et al., 2012]. (e) The core of podosomes grows within a confined space, generating anisotropic prestress [Labernadie et al., 2014]. (f) Arteries change curvature if cut; this is believed to be caused by differential growth and remodelling of concentric layers [Goriely and Vandiver, 2010].

Prestress can thus be heterogeneous, anisotropic or differential, and correspond to either contractile or growth activity. Figure 5 provides an example for each of the six typologies that thus emerge. Additionally, we define topological prestress as the prestress that is added to, or removed from, an interconnected network of mechanical elements not through prestressing individual elements, but purely by breaking and creating connections between the network elements. This latter type of prestress corresponds to a higher level of phenomenology, since it does not in itself describe by which (active) process the connectivity is being changed, see the examples in Fig. 3f,g. It also has the property of always providing anisotropic prestress.

From an experimental point of view, characterising living systems as active materials is highly challenging. One difficulty is linked with the necessity to test systems while they are maintained in a state of function as close as possible to physiological conditions. For this, recent developments in organoid systems present great opportunities, since elements of tissue morphogenesis can now be recapitulated in an in vitro setting which is much simplified and much more amenable to experimental perturbations. Observing the system simultaneously at different scales, especially combining global-scale stress and strain measurements with very local measurements, could allow a better characterisation of how the microstructure dynamics give rise to emergent properties. For instance, tools measuring strain at the molecular scale, like Förster resonance energy transfer (FRET) [Borghi et al., 2012], or at the meso-scale, like micro-magnets [Laplaud et al., 2021], micro-droplets [Mongera et al., 2018] or optical tweezers [Han et al., 2018], could be coupled to cell-scale techniques, like parallel plate rheometry or TFM, to decipher the subcellular contributions to cell shape changes. On the other hand, cell-scale in parallel to tissue-scale mechanical testing appears necessary to understand the cellular origins of tissue flows [Moisdon et al., 2022]. In all cases, the interpretation of force-displacement relation measurements must be supported by careful modelling, as exemplified by the violation of the fluctuation-dissipation relation [O'Byrne et al., 2022]. This sheds light on the vital need to combine any experimental approach with a theoretical understanding, and to test this understanding by employing multiple modalities.
Notably, extending the existing models towards geometric or material nonlinearities is a necessity given the large deformations that are commonly encountered in real systems. How the prestrain and prestress fields relate in a nonlinear context also remains to be clarified. Solving mechanical models coupling different biological structures, in relevant geometries in three dimensions, also remains an important challenge: for example, a full understanding of the three-dimensional mechanical balance of single cells or cells in tissues, or of the dynamics of fibrous tissues, which is governed by both cells and ECM, is still lacking. Another challenging theoretical task is to describe topological prestrain and prestress in a continuum framework. A missing intermediate step is the linking between the cellular and tissue scales. At the scale of several cells, vertex models [Farhadifar et al., 2007] are a highly studied family of models for biological tissue, both for their strength at capturing various tissue properties [Alt et al., 2017, Cheddadi et al., 2019, Latorre et al., 2018] and for their interesting physical behaviour [Bi et al., 2015, Farhadifar et al., 2007, Schoetz et al., 2013]. However, the coarse-graining of vertex models to continuum descriptions, such as anelasticity, remains highly challenging, and is being tackled with approaches based on nearly periodic lattices [Murisic et al., 2015, Chenchiah and Shipman, 2014, Kupferman et al., 2020] and discrete calculus [Jensen et al., 2020, Nestor-Bergmann et al., 2018]. These efforts may in the long term lead to the possibility of encoding topological transitions, like active or passive neighbour exchanges (T1 transitions), into a continuum field usable, for instance, in anelasticity approaches. Seventy percent of the total cell volume is water, and there is growing evidence in favour of a coupling between cell mechanics and osmotic gradients controlling volume [Xie et al., 2018, Cadart et al., 2019]. Cells maintain a prestress inside the membrane by carefully controlling the flow of water: water mobility in and out of cells relies on the permeation of water through the plasma membrane, which can be regulated by aquaporin channels [Kedem and Katchalsky, 1958], as well as ion pumps which actively consume energy. Therefore, there has recently been considerable interest in the modelling of water mobility. This has been approached via a fluid phase in the framework of poroelasticity [Ambrosi et al., 2017, Fraldi and Carotenuto, 2018, Xue et al., 2016], as well as by explicitly tracking water fluxes in a vertex model [Cheddadi et al., 2019]. In parallel, there are new developments coupling the electrochemistry of ion fluxes, the mechanics of cell volume regulation, and active pumping [Cadart et al., 2019, Deshpande et al., 2021]. Water mobility should ultimately add contributions to coarse-grained models such as anelasticity. From the experimental side, this requires the development of non-perturbative pressure sensors, which is an important challenge for the future.
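A minimal way of writing this water coupling, in the spirit of Kedem and Katchalsky [1958] (a sketch with a reflection coefficient of one; the cited models add further structure), is

$$\frac{dV}{dt} = L_p A \left( \Delta\Pi - \Delta P \right),$$

where $V$ is the cell volume, $A$ the membrane area, $L_p$ the hydraulic permeability of the membrane (set in part by aquaporins), and $\Delta\Pi$ and $\Delta P$ the osmotic and hydrostatic pressure differences across it; active pumping enters by shifting $\Delta\Pi$ away from its passive equilibrium value.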
SMS for Life: a pilot project to improve anti-malarial drug supply management in rural Tanzania using standard technology. BACKGROUND Maintaining adequate supplies of anti-malarial medicines at the health facility level in rural sub-Saharan Africa is a major barrier to effective management of the disease. Lack of visibility of anti-malarial stock levels at the health facility level is an important contributor to this problem. METHODS A 21-week pilot study, 'SMS for Life', was undertaken during 2009-2010 in three districts of rural Tanzania, involving 129 health facilities. Undertaken through a collaborative partnership of public and private institutions, SMS for Life used mobile telephones, SMS messages and electronic mapping technology to facilitate provision of comprehensive and accurate stock counts from all health facilities to each district management team on a weekly basis. The system covered stocks of the four different dosage packs of artemether-lumefantrine (AL) and quinine injectable. RESULTS Stock count data was provided in 95% of cases, on average. A high response rate (≥ 93%) was maintained throughout the pilot. The error rate for composition of SMS responses averaged 7.5% throughout the study; almost all errors were corrected and messages re-sent. Data accuracy, based on surveillance visits to health facilities, was 94%. District stock reports were accessed on average once a day. The proportion of health facilities with no stock of one or more anti-malarial medicine (i.e. any of the four dosages of AL or quinine injectable) fell from 78% at week 1 to 26% at week 21. In Lindi Rural district, stock-outs were eliminated by week 8 with virtually no stock-outs thereafter. During the study, AL stocks increased by 64% and quinine stock increased 36% across the three districts. CONCLUSIONS The SMS for Life pilot provided visibility of anti-malarial stock levels to support more efficient stock management using simple and widely available SMS technology, via a public-private partnership model that worked highly effectively. The SMS for Life system has the potential to alleviate restricted availability of anti-malarial drugs or other medicines in rural or under-resourced areas. Background Artemisinin-based combination therapy (ACT) is recommended by WHO for first-line treatment for uncomplicated Plasmodium falciparum malaria [1], in recognition of the superior efficacy and faster symptomatic improvement observed with ACT compared to other treatments [2,3], as well as a reduction in gametocyte carriage among ACT-treated patients that could potentially contribute to a lower rate of disease transmission [1,4,5]. Maintaining adequate supplies of ACT at the health facility level in rural areas of sub-Saharan Africa, however, can be highly challenging. Poor supply chain management, including limited or non-existent stock control and forecasting, means that even though anti-malarial drugs may be available centrally there can be frequent stock-outs at the local level, which often last for extended periods. As a result, patients may have to travel long distances to obtain ACT or, all too often, remain untreated with the consequent risk of developing severe disease, organ damage and death. Tanzania has the third largest population at risk of malaria, with 11 million cases of malaria occurring each year [6]. 
ACT represents first-line therapy in the country, although rapid diagnostic tests (RDTs) are only used to confirm the diagnosis where health facilities have this resource; otherwise, the diagnosis is made on the basis of clinical symptoms. Anti-malarial therapies are distributed via one of two mechanisms in Tanzania. Products can be issued to health facilities automatically in fixed quantities on a quarterly basis, with requirements determined at district level by the District Medical Officer (DMO) and at national level by the National Malaria Control Programme (NMCP) (the 'push' system). Alternatively, they can be distributed every month in response to individual requests from health facilities that are sent by the DMO for approval by the Ministry of Health, after which medicines are dispatched via an Integrated Logistics System (ILS) (the 'pull' system). In both cases, medicines are stored and dispatched from one of nine Zonal Stores in the country.

Recognizing that standard, widely available technology has the potential to improve supply management for anti-malarial medicines in rural regions, a collaborative partnership of public and private institutions was set up under the auspices of the Roll Back Malaria Partnership to undertake a 21-week pilot project in Tanzania. The objective of the project was to improve the supply, planning and access to ACT therapy through use of mobile telephones, SMS messages and electronic mapping technology. The results of this pilot project, 'SMS for Life', are reported here.

Objectives
The objectives of the SMS for Life pilot were three-fold:
(1) to demonstrate that visibility of weekly stock levels of key anti-malarial medicines at the health facility level will promote action to eliminate and/or reduce stock-outs;
(2) to demonstrate that a state-of-the-art data-gathering infrastructure can be made available via simple tools such as SMS and mobile telephones in remote locations in sub-Saharan Africa;
(3) to demonstrate the effectiveness of a public-private partnership model.

Location
Of the 131 districts in Tanzania, three rural districts (Lindi Rural, Ulanga and Kigoma Rural) were selected by the NMCP for inclusion in the pilot, covering a total population of 1.2 million. The selected districts met all four criteria for inclusion. First, the districts were to differ in terms of level of health service delivery and access, with the aim of providing a broadly representative sample of the entire country. Lindi Rural is an 'average' district. Ulanga is a challenging district in terms of staff shortages, skill level and remote location. Kigoma Rural also presents problems, due to its large geographic size and long distances between the Zonal Store and remote health facilities. Second, the districts were all to be in different regions of the country, and supplied by different Zonal Stores. Third, all districts were to be malaria endemic, with malaria the most common cause of death. Fourth, selected districts were not to be involved in other pilot projects. The Lindi Rural, Ulanga and Kigoma Rural districts included 48, 30 and 51 health facilities, respectively, i.e. 129 health facilities in total. The Lindi Rural and Kigoma Rural districts operate anti-malarial supply using a 'pull' system via the ILS. The Ulanga district is undergoing a transition from a 'push' system to the 'pull' system.

Duration and scope of the SMS for Life pilot
The pilot study was 21 weeks in duration. This period was chosen because it covered two quarterly order cycles and five monthly delivery cycles.
Data collection started on 1 October 2009 and ended on 25 February 2010. The system covered stocks of artemether-lumefantrine (AL, Coartem®, Novartis Pharma AG, Basel, Switzerland) and injectable quinine (provided by multiple manufacturers). Stocks of four different dosage packs of AL were included: 'yellow' packs used for babies weighing 5 kg to < 15 kg, 'blue' packs for children weighing 15 kg to < 25 kg, 'red' packs for children weighing 25 kg to < 35 kg and 'green' packs for children weighing 35 kg or more and for adults.

The SMS for Life system
The system consists of two components: an SMS management tool and a web-based reporting tool.

SMS management tool (Figure 1)
The SMS application stores a single registered mobile telephone number for one healthcare worker at each health facility. Once a week, a stock request is sent by SMS to each of these telephone numbers. Stock messages are sent back in reply using a free short code number at zero cost to the healthcare worker, i.e. telephones do not need to be in credit to reply. A standard message format is used to capture stock quantities of AL and quinine, and formatting errors are handled through follow-up SMS messages to the facility. A sketch of this message-handling logic appears after the steps below.

Step 1: A personal mobile telephone number for one healthcare worker at each health facility in the three pilot districts was obtained during training sessions and registered with the SMS application. Only stock count messages from registered personal mobile telephone numbers are accepted.

Step 2: Every Thursday at 14:00 an SMS message is sent to all registered health facility workers requesting stock counts.

Step 3: Full boxes of AL in the storeroom of each facility are counted, and individual quinine injectable vials are counted in the storeroom and dispensary (the difference in accounting methodologies was at the request of the NMCP).

Step 4: An SMS message is composed by the health facility worker, comprising a code for each type of medicine and the quantity, following an agreed format.

Step 5: The health facility worker either replies to the stock request SMS or sends a new SMS using the free short code number. If the message is sent in an incorrect format, the system automatically informs the sender. After three unsuccessful attempts, the district management is informed and asked to intervene.

Step 6: The SMS system sends an automatic reminder to all health facilities that have not replied by Friday at 14:00.

Step 7: The SMS system credits the healthcare worker's mobile telephone with a fixed amount of money (1,000-1,500 TZS, depending on the district) for personal use if the stock count reply is received before 17:00 on Friday. Late SMS replies are accepted until 13:00 on the following Thursday, but no credit is applied to mobile telephones for late replies.

Step 8: The system provides a weekly status report to the DMO indicating (a) which health facilities did not provide a stock count and (b) which health facilities have a stock-out.
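The following is a minimal sketch of the parsing and validation described in Steps 4-5. The medicine codes and message layout shown here are hypothetical placeholders for illustration; the pilot's actual format is not specified at this level of detail.

```python
from typing import Optional

# Hypothetical product codes: four AL dosage packs plus quinine injectable.
# The real pilot's codes and message layout are assumptions here.
MEDICINE_CODES = {"ALY", "ALB", "ALR", "ALG", "QI"}

def parse_stock_sms(text: str) -> Optional[dict]:
    """Parse a stock-count SMS of the assumed form
    'ALY 12 ALB 4 ALR 0 ALG 30 QI 150'.
    Returns {code: quantity} if well formed, else None (which would
    trigger the automatic error reply described in Step 5)."""
    tokens = text.strip().upper().split()
    if len(tokens) != 2 * len(MEDICINE_CODES):
        return None
    counts = {}
    for code, qty in zip(tokens[::2], tokens[1::2]):
        # Reject unknown codes, duplicated codes and non-numeric quantities.
        if code not in MEDICINE_CODES or code in counts or not qty.isdigit():
            return None
        counts[code] = int(qty)
    return counts

# Example: a valid reply parses; a malformed one returns None.
print(parse_stock_sms("ALY 12 ALB 4 ALR 0 ALG 30 QI 150"))
print(parse_stock_sms("ALY twelve"))  # None -> error SMS back to sender
```

In a deployment of this kind, the None branch would drive the retry counter (three attempts before district management is alerted) described in Step 5.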
The website provides (a) current and historical data on AL and quinine injectable stock levels at the health facility and district level, (b) Google mapping of district health facilities with stock level overlays and stock-out alerts, (c) SMS messaging statistics, e.g. errors and received messages, and (d) usage statistics.

District-level management
The DMO appointed one person in the district to redistribute medicines in response to stock-outs identified by the SMS for Life system. Redistribution could be undertaken by telephoning health facilities with stock-outs to inform them of excess stock in a neighboring health facility, or by contacting the Malaria Focal Person in the district to request that they move stock from a health facility with a high stock level to a neighboring facility.

Participant training
Training was provided at three levels:
(i) At the national level, core project and system training was provided at a half-day session for NMCP, Medical Stores Department and additional staff, to explain the project objectives, use of the reporting system and action to be taken based on the stock count information provided.
(ii) At the district level, a half-day training session was provided for the DMO, Malaria Focal Person, District Pharmacist and Zonal Store representative for each district. Training covered use of the reporting system, action to be taken based on the stock count information provided, and education and assistance for health facility workers.
(iii) At the health facility level, a half-day training session was provided by the NMCP in-country project lead for health facility workers within each district, in the local language. The session included registration of personal mobile telephone numbers, the procedure for counting stock, composition of the SMS stock count messages, live simulations of counting, composing and sending SMS messages, and best practice for stock management and storage of anti-malarials.

Monitoring and evaluation
Weekly stock reports, stock-out statistics, error rates, deliveries and system access were monitored daily online during the 21-week pilot study. Surveillance visits were undertaken for 116/129 health facilities (89.9%) at least once to validate the accuracy of stock count data provided by health facility workers. District management team members were interviewed towards the end of the pilot study to assess stock movement during the study; obtain feedback on use and ease of access to the data system and on use of the registration/de-registration function for health facility mobile telephone numbers; seek views on training and training materials; and elicit opinions on the SMS for Life project versus other stock management practices and the potential for future implementation of the scheme. Throughout the project, information on every order and delivery of AL or quinine injectable from Zonal Stores was collected.

Project partnership and contributions
The project partnership had a fixed-term commitment of less than one year, with no centralized budget, formal contract or memorandum of understanding. The Tanzanian Ministry of Health and Social Welfare, the Roll Back Malaria Partnership, Novartis Pharma AG, Vodafone and IBM took part in the pilot project. Each partner funded their own activities. The NMCP in Tanzania, operating as part of the Ministry of Health and Social Welfare, was the owner and main user of the SMS for Life pilot and coordinated all project activities in the country, i.e.
planning, implementation and evaluation, including provision of a project leader and vehicles with drivers. The Roll Back Malaria Partnership provided project oversight, including the work of the steering committee, and led advocacy activities. Novartis initiated and led the pilot, defining the solution, sourcing partners, establishing the steering committee, and providing the necessary resources and funding (e.g. to support health professional training). Vodafone and its partner, Matssoft, supported the design, funding and development of the system application and the implementation of the technical solution, and funded all technical operational costs of the pilot. IBM supplied management resource support to the project and provided an online collaboration tool, 'Lotus Live', which allowed all the project partners to coordinate their inputs across company networks.

Data collection
During the 21-week study, the average response rate to SMS requests for stock count data was 95%. The response rate did not fall below 93% at any point (Figure 2). The proportion of late replies (i.e. after 17:00 on Friday) was low, averaging 3% overall. The rate of responses, and the proportion of late responses, did not vary markedly during the pilot, other than after the request sent on 14 January 2010, when there was a national problem with connectivity on one mobile telephone network (Figure 2). The highest response rate was in Lindi Rural (99%), compared to 93% in Ulanga and 94% in Kigoma Rural, which is likely to have been the result of disciplinary action in the Lindi Rural district, consisting of warning letters and interviews at the district office for non-compliant health facility staff. Across all three districts, feedback from district management and data from questionnaires completed by health facility workers indicated that the financial incentive of airtime credit was an important contributor to the high response rate.

The error rate for composition of SMS responses was low, averaging 7.5% throughout the study (Figure 2). In Lindi Rural, 100% of erroneous SMS responses were corrected, and although data on correction rates were not routinely collected for the other two districts, the fact that the accepted response rate did not fall below 93% at any point indicates that incorrect messages from those districts were also usually corrected. Stock counting, as assessed by surveillance visits to 116 of the 129 health facilities in the three districts, showed a data accuracy of 94%, i.e. the most recent stock message matched the inventory inspected at the health facility.

System usage
The central NMCP log-in was activated on average once a day. The central Medical Stores Department and the Zonal Stores in the three districts virtually never accessed the system. At the district level, the weekly emails sent by the SMS for Life system were read by at least one team member in the district management team of each district every week during the pilot, with the exception of a single email to the Kigoma team. System usage in the Lindi Rural district decreased as stock-outs were eliminated after week 8, declining from 45 log-ins during October 2009 to 13 log-ins during February 2010. In Ulanga, log-ins increased (35 log-ins during October-December, rising to 70 log-ins during the last 6 weeks of the project) after the Clinical Officer in the District Medical Office was given a BlackBerry and more prescriptive input from the SMS for Life project team.
In the third district, Kigoma Rural, access to the system was low in the early phase (33 log-ins during October-December) but rose sharply in frequency, to 28 log-ins in the last 6 weeks alone, after the District Pharmacist and Malaria Focal Person were each given a BlackBerry device to access stock count data.

Anti-malarial stock levels
At the start of the pilot (week 1), 78% of health facilities had no stock of one or more of the four different AL dosage packs or of quinine injectable. By the end of the pilot (week 21), this proportion had fallen to 26%. The reduction in stock-outs was largely related to improvements in stocks of AL, since the proportion of health facilities with stock-outs of quinine at the start of the study was lower (18%, compared to 77% of facilities with a stock-out of AL) (Figure 3). Stock-outs of all dosages of AL showed a progressive decline over the first two months of the pilot, with a gradual increase from the middle of December to the second half of January, reflecting the ILS delivery schedule. By the end of the pilot, stock-outs of AL blue, green and yellow were almost eradicated, but a fifth of health facilities still had no AL red, almost entirely due to continuing stock-outs in the Kigoma Rural district (Figure 4a). Over 80% of facilities held stocks of quinine injectable at baseline, which increased to more than 95% by the end of the pilot (Figure 4b). Over the same period, total AL stock across the three districts increased by 64%, from 2,696 boxes at week 1 to 4,411 boxes at week 21, while the number of quinine vials increased by 36%, from 12,536 to 16,981. Stock levels showed a small increase for all AL dosages by week 21, with similar levels of AL blue, green and yellow, but stocks of AL red remained lower than for other dosages, again primarily due to the Kigoma Rural district (Figure 5a). Quinine injectable stock levels also showed a small increase during the pilot (Figure 5b).

There were marked differences between the three districts in terms of achievement of full stocking and in stock levels, for a variety of reasons. The Lindi Rural district was the most successful in managing stock levels, eliminating stock-outs for all five categories of medicine by week 8 and maintaining stocks of all anti-malarials at almost all health facilities thereafter. Two key factors contributed. First, after receiving the first set of stock count data, the district management team made an emergency order to the Zonal Store. This delivery was distributed to health facilities according to priority, based on their urgency of demand, during weeks 2, 3 and 4, thereby eradicating most stock-outs. Second, when a health facility reported having only one box of any AL dosage pack, the district pharmacist either issued further stock or moved stock from a neighboring health facility in a pre-emptive manner.

In the Ulanga district, the rate of stock-outs at week 1 was high (87% of health facilities), largely because no blue dosage packs of AL had been delivered to the district for almost a year. Also, Ulanga was transitioning from the 'push' system to ILS delivery during the pilot. As a result, deliveries were delayed and there were discrepancies between stock orders and the items delivered, for example with no blue AL dosage packs included and only very small quantities of other AL dosages. Furthermore, an emergency delivery was not received. Following two ILS deliveries, the second of which included blue AL dosage packs, 78% of all health facilities in Ulanga became fully stocked by week 21.
The proportion of health facilities with no quinine injectable, however, increased from 3% at week 1 to 7% at week 21.

In the third district, Kigoma Rural, almost all health facilities (93%) had a stock-out of at least one type of anti-malarial at week 1, and 36% were out of stock of all five products. There was an ongoing shortage of red AL dosage packs until the end of January 2010, with 42% of health facilities still having no red packs at the end of the study. Over 90% of health facilities, however, had stocks of all other products by week 21. The district relied only on regular ILS deliveries. Following the two ILS deliveries that were received during the pilot, the district management took 3-4 weeks to distribute medicines from the first delivery to all health facilities, and after delivery of a complete ILS order in late December, including red AL dosage packs, stock counts of red packs only rose from 21 January onwards.

Several factors contributed to outcomes in the Kigoma Rural district. First, the ILS delivery quantity for red dosage packs of AL was only sufficient to prevent stock-outs for three weeks, such that stock-outs were inevitable. Second, when red dosage packs were delivered they were distributed unevenly between health facilities, with some facilities receiving none, and no active redistribution was undertaken subsequently. Lastly, no emergency orders were submitted from the Kigoma Rural district despite severe stock shortages for the majority of the pilot.

Discussion
The SMS for Life pilot achieved all three of its objectives. First, visibility of anti-malarial stock levels at the health facility level supported more efficient stock management. Across all three districts, the proportion of health facilities fully stocked with all five anti-malarial products increased from approximately one quarter to three quarters over the 21-week pilot. Second, the SMS for Life system brought accurate stock level information to all relevant parties using simple and widely available SMS technology that was easily accessed by appropriate users. Third, the public-private partnership model worked highly effectively and proved to be a major contributor to the success of the project.

Achieving full stocking of all five anti-malarial products required both an adequate starting level of products across the district and proactive redistribution of products by district management between health facilities. Redistribution is always likely to be required to compensate for delivery of varying quantities to different health facilities and varying consumption rates, particularly when there is a shortage of stock. By providing visibility of stock levels, the SMS for Life system meant that both of these criteria could be met, as demonstrated in the Lindi Rural district, where health facilities were virtually all fully stocked after week 8 of the pilot.

Comprehensive stock information was provided from health facilities, with an average response rate of 95%. Stock level information was accessible even in the remotest areas, and was provided via both weekly emails and secure web-based data to maximize usage. All aspects of the system proved easy to use after only a short training session. It was important to track log-ins by district staff and intervene as necessary by offering further training or additional access solutions (e.g. provision of BlackBerry devices or computer modems); such interventions prompted dramatic increases in log-in rates in both the Ulanga and Kigoma Rural districts.
By tracking weekly usage of all malaria products (ACTs, quinine and RDTs where used) by individual health facility, the system can profile annual requirements by facility, to inform and improve the accuracy of ordering and supply chain efficiency. From weekly usage of RDTs and ACTs, the system can also calculate a proxy for the number of positive versus negative tests. While expiry dates were not tracked, a significant finding was that weekly visibility by facility led to DMOs being extremely active in implementing ongoing redistribution of stock between facilities, thus reducing the risk of stock going out of date.

The pilot was implemented through a novel public-private partnership under the umbrella of the Roll Back Malaria Partnership. The SMS for Life solution was designed, built and implemented in less than a year, with no formal budget or legal contracts between partners. With a short timeframe and no ongoing financial commitments, this model was appealing to potential commercial partners, without whom the pilot could not have been undertaken.

A number of critical success factors were identified (Table 1). Government commitment at a high level is essential to ensure the system is workable and sustainable, and that its use is mandatory. Mobile telephone coverage within an acceptable distance of the health facility (a maximum of 2-3 hours' walk, although no more than 15-30 minutes would be ideal) is a necessary prerequisite to participation. It is also crucial for health workers to use their personal mobile telephones, with which they are familiar and for which maintenance is not the responsibility of the project. Accordingly, a free number for sending stock information is mandatory, since messages can still be sent if the telephone has no credit, a situation that can arise frequently. Although the pilot did not include a control arm without a financial incentive, feedback from health workers, district management and the NMCP indicated that a credit incentive for timely responses was key to the high response rates observed. The training sessions for health care workers were essential, and learning points from this pilot include notifying delegates in advance to bring a personal mobile telephone, providing a practical session on how to send SMS text messages, and expanding the live scenario workshop component.

Other uses of cell phones and SMS texts to improve health care delivery have previously been explored in resource-constrained settings in Africa [7-10]. These have typically focused on improving patient adherence to treatment for HIV/AIDS or tuberculosis, and on enhancing communication between healthcare workers and remotely-located patients [7-9]. One innovative pilot study in Zambia used weekly SMS reports of new cases of malaria from rural health centers to provide prompt detection of positive diagnoses and thus facilitate timely intervention to prevent an upsurge in transmission [10]. Such approaches have proved technically feasible and achieved good outcomes, such that mobile phone-based systems appear likely to expand as part of rural health care provision in Africa. The current study, which to our knowledge is the first to apply an SMS-centered system to manage stock levels at a local level, has demonstrated another practical and successful application of the technology.

As the use of RDTs expands in Tanzania, ACT stock levels would be reduced accordingly and tight management of stocking would become even more critical to avoid stock-outs.
The current system would then become even more valuable, and would additionally offer weekly monitoring of RDT supplies to avoid reversion to clinically-based diagnosis, for which ACT stocks would then be inadequate.

In conclusion, this innovative pilot shows that the SMS for Life system has the potential to alleviate restricted anti-malarial drug availability in rural areas, one of the major barriers to effective management of the disease. The system is flexible, scalable and compatible with any mobile telephone network, and can be implemented in any country with minimal tailoring. Costs for implementing the system on a wider scale would be low, at approximately US$5,000 per district in Tanzania, with the largest single item being the per diem payment to health facility staff to attend training sessions. Ongoing post-implementation costs would be approximately US$7,000 per district per year, including the weekly incentive payments. The system could also usefully be applied to stock management of other priority medicines in similar settings. Finally, the public-private partnership model piloted here effectively harnessed a series of diverse skills and expertise and could be utilized to tackle other societal problems.

Authors' contributions
Jim Barrington, as programme director, developed the initial concept for the project, established the project team, contacted and liaised with all project partners, coordinated activity throughout, and prepared the project report upon which the manuscript is based. Olympia Wereko-Brobby provided organizational support throughout the project to the programme director and contributed to development of the project report. Peter Ward was the project manager throughout and, in addition, significantly refined the final project reports and rewrote the guidance document.
Circulating tumor DNA in the immediate post-operative setting

Background: Circulating tumor DNA (ctDNA) has emerged as an accurate real-time biomarker of disease status across most solid tumor types. Most studies evaluating the utility of ctDNA have focused on time points weeks to months after surgery, which for many cancer types is significantly later than decision-making time points for adjuvant treatment. In this systematic review, we summarize the state of the literature on the feasibility of using ctDNA as a biomarker in the immediate postoperative period.

Methods: We performed a systematic review evaluating the early kinetics, defined here as within three days of surgery, of ctDNA in patients who underwent curative-intent surgery across several cancer types.

Results: Among the 2057 studies identified, we evaluated eight cohort studies with ctDNA levels measured within the first three days after surgery. Across six different cancer types, all studies showed an increased risk of cancer recurrence in patients with a positive early postoperative ctDNA level.

Discussion: While ctDNA clearance kinetics appear to vary based on tumor type, across all studies detectable ctDNA after surgery was predictive of recurrence, suggesting that early postoperative time points could feasibly be used for determining minimal residual disease. However, larger studies need to be performed to better understand the precise kinetics of ctDNA clearance across different cancer types, as well as to determine optimal postoperative time points.

Synopsis: This systematic review analyzed the use of ctDNA as a biomarker for minimal residual disease detection in the early postoperative setting and found that ctDNA detection within three days after surgery is associated with an increased risk of recurrence.

Introduction
The prognosis for cancer patients following surgical resection largely depends on post-operative disease status. Accurately identifying minimal residual disease (MRD) after surgery is crucial yet currently presents a challenge to clinicians. Current approaches for determining MRD typically rely on estimating risk based on clinicopathologic factors, which have poor individualized predictive and prognostic value 1,2. Ideally, MRD could be determined with certainty immediately following surgery, to allow real-time treatment manipulation when disease levels are most actionable.
Circulating tumor DNA (ctDNA) has emerged as an accurate real-time biomarker of disease status across most solid tumor types [2][3][4][5][6][7][8]. However, the performance metrics of ctDNA for detecting MRD immediately following surgery remain poorly understood, due to the scarcity of data available, variability in the approaches used, and the difficulty of correlating MRD with recurrence when adjuvant treatment is delivered. Most studies evaluating the utility of ctDNA have focused on time points weeks to months after surgery, which for many cancer types is significantly later than decision-making time points for adjuvant treatment. While evaluating ctDNA levels as a prognostic biomarker in the preoperative period could be useful, data in this clinical context are highly variable across cancer types and patients [9][10][11], as there are a myriad of features that impact absolute ctDNA levels. Ideally, detection of MRD could be accomplished in the immediate postoperative period, giving immediate feedback on the success of surgery and the need for additional treatment. In this systematic review, we summarize the state of the literature on the feasibility of using ctDNA as a biomarker in the immediate postoperative period, defined here as within three days of surgery.

Materials and Methods
A systematic review was performed by a medical librarian (L.C.) following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).12

Literature Search
A search of published articles and studies in Legacy PubMed (1946-), Embase.com (1947-), and Web of Science Core Collection (1900-) was performed on March 12, 2021, with an updated search performed on March 2, 2022. Search strategies were developed for each database (Methods S1). Each search utilized a combination of controlled vocabulary and keywords focused on the following concepts: ctDNA, curative treatment, and treatment outcome. The search was designed to exclude animal studies using the Cochrane search filter 13. No filters for language, study design, date of publication, or country of origin were applied. All references were exported into Endnote 7.8 for deduplication and then to Covidence for further deduplication, study screening, selection, and data extraction. The search produced 3408 studies before deduplication, and 2057 after deduplication.

Study Selection
Studies examining ctDNA levels before and after curative surgery in adult patients with a history of cancer were considered eligible for inclusion. We considered studies that included a postoperative blood sample within the first three days after surgery. Studies that did not include a specific blood sample collection timeline, did not collect a post-operative blood sample within the first three days after surgery, or did not provide an assessment of the relationship between post-operative blood samples and recurrence/survival were excluded. Studies with small sample sizes were also subject to exclusion.

Extracted data comprised cancer type, ctDNA detection and quantification method, target ctDNA, monitoring of ctDNA levels postoperatively, patient outcome, recurrence rate, and residual disease status. During screening, any study written in a language other than English or German (the languages spoken by the authors) was excluded. Titles and abstracts were screened by two authors independently (V.E. and M.H.)
for full-text review. The same two authors independently conducted the full-text review. Any disagreements in the screening process were settled by discussion and consensus between the two authors. Disagreements that could not be settled in this manner were settled in consultation with a third author (D.F.). All eligible studies were screened for duplicate data by comparing authors, timeframe of data collection, and outcomes. After the full-text screening, eight studies remained for the final synthesis.

Results
Eight studies were identified for inclusion (Figure 1, Table 1). Three studies were in lung cancer, and one each was in colorectal cancer, melanoma, HPV-associated oropharyngeal squamous cell carcinoma (HPV+OPSCC), Epstein-Barr virus (EBV) nasopharyngeal carcinoma, and pancreatic adenocarcinoma. Six studies used polymerase chain reaction (PCR)-based approaches and two used next-generation sequencing (NGS). Of the six studies that implemented a PCR-based approach, three used digital droplet PCR (ddPCR), two used quantitative PCR (qPCR), and one used BEAMing (beads, emulsion, amplification, and magnetics) PCR. Of the two studies that implemented an NGS-based approach, both used targeted NGS.

Next-Generation Sequencing
Targeted NGS
Chen et al.14 performed a prospective study on 26 newly diagnosed non-small cell lung cancer (NSCLC) patients undergoing surgery with curative intent. Plasma was collected at the following time points: immediately before surgery, during surgery, post-operative day (POD) 1, and POD 3. The plasma was analyzed for mutations in seven genes using the cSMART NGS detection platform. This cohort had a median follow-up time of 532 days. In this period, neither recurrence-free survival (RFS) nor overall survival (OS) correlated with ctDNA levels measured on POD 1 (p = 0.65, p = 0.462). However, patients with undetectable ctDNA levels on POD 3 had significantly better RFS (p = 0.002) and OS (p = 0.018) than those with detectable ctDNA. Furthermore, the kinetics of ctDNA differed in patients with MRD, which was defined as positive based on the detection of ctDNA on POD 1, POD 3, or POD 30. The ctDNA half-life was longer in patients positive for MRD (103.2 minutes vs. 29.7 minutes, p = 0.001) than in patients negative for MRD.

Xia et al.15 conducted a prospective cohort study in 330 NSCLC patients who underwent curative-intent surgery. Plasma was collected before surgery, on POD 3, and on POD 30. Plasma samples were analyzed using a custom 769-gene panel. Patients positive on POD 3 (n = 19) and/or POD 30 (n = 19) were defined as MRD positive (n = 26). The median follow-up period was 1,068 days. At POD 3 and POD 30, the ctDNA level had a high positive predictive value for relapse (p < 0.001). Recurrence rates were significantly higher in MRD-positive patients (21/26) compared to MRD-negative patients (49/303) (p < 0.001). Additionally, MRD-positive patients had poorer RFS (p = 0.008) independent of pathologic subtype, EGFR mutation status, and TNM stage. Adjuvant therapy was shown to improve RFS only in MRD-positive patients (p = 0.002), after adjusting for clinicopathologic features.
Polymerase Chain Reaction
Quantitative PCR
Hu et al.16 performed a prospective cohort study of 168 patients treated for lung cancer (155 patients with NSCLC, 2 with small-cell lung cancer, and 11 with undetermined histology). Mutation status was determined using tissue samples, identifying 36 patients as positive for an EGFR mutation and 16 as positive for a KRAS mutation. Plasma samples were collected immediately before surgery, on POD 1, POD 3, the day of discharge (POD 3-7), and POD 30. Using competitive allele-specific TaqMan PCR (CAST-PCR), one mutation was detected in EGFR and seven mutations in KRAS from plasma samples. The median follow-up time was 638 days. A correlation with the total level of cell-free DNA (cfDNA) in plasma was shown for both patients with KRAS mutations (p < 0.0001) and patients with EGFR mutations (p < 0.0009). Interestingly, a greater increase in plasma cfDNA levels was shown in patients who recurred within four months (5/16) as compared to patients who recurred after four months (6/16) and patients who did not recur (5/16). At earlier time points, there were no significant differences seen in cfDNA levels between these groups. EGFR mutations detectable in cfDNA surged 24 hours after surgery for all patients with incomplete resections. Levels peaked (median = 336 copies per sample) on POD 3, then rapidly dropped by the day of discharge (POD 3-7). The EGFR mutation remained detectable in the plasma of only two patients on POD 30, both of whom experienced recurrence within 4 months. Quantitative KRAS levels were not analyzed due to the small sample size.

To et al.17 recruited 21 patients with either recurrent (17/21) or persistent (4/21) EBV+ nasopharyngeal carcinoma (NPC). Plasma samples were collected in the immediate preoperative period, during surgery, and post-operatively. Plasma samples were analyzed using real-time qPCR for the BamHI-W fragment region of the EBV genome. Time to follow-up was variable (range 2-18 months). Of the 17 recurrent cases, 16 showed detectable ctDNA (median pre-operative concentration 458 copies/ml). Additionally, one of four cases of persistent disease had detectable ctDNA levels (3.5 copies/ml). The other three cases of persistent disease with undetectable ctDNA levels showed no tumor on histological examination. Serial monitoring of ctDNA concentration was also performed in 11 patients, including one patient who underwent two operations for local recurrence, for a total of 12 serial monitoring cases. The median duration of serial monitoring was 6.7 days. In eight of twelve cases, the ctDNA levels peaked at a median of 15 minutes after the first excision. In eight of the eleven patients, ctDNA was undetectable at the end of the monitoring period. Two of the three patients with detectable ctDNA had a recurrence within four months. In the two patients with documented recurrence, ctDNA levels increased from the postoperative time point to the time point when recurrence was diagnosed. In the patient without recurrence, ctDNA concentration fell to undetectable at 28 h, then rebounded at 43 h and fluctuated until the end of the study.
Digital Droplet PCR
Gouda et al.18 performed a prospective cohort study including 80 patients with newly diagnosed early-stage melanoma who underwent definitive surgery. Plasma samples were collected before surgery, one hour after surgery, on POD 2, POD 3-7, and at additional follow-up time points. ddPCR was used to detect BRAF mutations. 76 patients had samples at baseline, one hour after surgery, and on POD 1. Of the 28 patients with cfDNA-detected BRAF mutations before surgery, 15 showed no detectable ctDNA one hour after surgery. One hour after surgery, 20 patients had detectable BRAF-mutated ctDNA. Those with mutated ctDNA had a higher likelihood of overall recurrence (p < 0.001), recurrence risk at six months (p = 0.004), and recurrence risk at 24 months (p = 0.042). Patients with BRAF-mutated ctDNA showed shorter DFS and OS. On POD 2, 24 patients had detectable BRAF-mutated ctDNA; these patients had a higher rate of recurrence (p = 0.023), but no difference in median DFS or OS. At all other time points, ctDNA detection of mutant BRAF was not associated with a difference in recurrence risk, DFS, or OS.

O'Boyle et al.2 conducted a prospective cohort study in 33 patients with HPV+OPSCC treated with curative-intent surgery. Plasma samples were collected preoperatively, on POD 1, and serially in follow-up. ddPCR assays were used to detect five high-risk HPV genotypes (HPV16, 18, 33, 35, 45). The median follow-up time was 1 year. Of the 33 patients, those without pathologic risk factors for recurrence had undetectable ctDNA on POD 1 (8/8). In patients with risk factors for macroscopic residual disease, ctDNA was markedly elevated on POD 1 (>350 copies per ml) and remained elevated until adjuvant treatment (n = 3/3). Patients with intermediate POD 1 ctDNA levels all had pathologic risk factors for microscopic residual disease (n = 9/9). POD 1 ctDNA levels were higher in patients who had known adverse pathologic risk factors, showed increased lymph node involvement, or received adjuvant treatment. Two of the 33 patients, both with detectable ctDNA levels on POD 1, had recurrent disease. None of the patients with undetectable ctDNA on POD 1 had a recurrence by their one-year follow-up. Early ctDNA kinetics were determined in a cohort of twelve patients who had plasma samples collected immediately following tumor removal and then every 6 hours for the first 24 hours after surgery. Four of the 12 patients had no pathologic risk factors for recurrence and received no adjuvant treatment. In these patients, ctDNA levels decreased precipitously within 6 hours after surgery and remained undetectable by POD 1. Three additional patients with unclear pathologic risk factors had ctDNA levels clear by POD 1. The remaining five patients, all of whom had adverse pathologic risk factors, had detectable ctDNA on POD 1.

Yamaguchi et al.19 performed a prospective cohort study in 97 patients who underwent surgical treatment of pancreatic ductal adenocarcinoma (PDAC). KRAS mutations were detected using tumor samples in 78 patients (80%). Plasma was collected before surgery and on POD 3. Samples were analyzed using ddPCR for three hotspot KRAS mutations. The median follow-up time for this cohort was 882 days. ctDNA was detected in 24 patients (25%) before surgery and in 27 patients (28%) on POD 3.
POD 3 ctDNA levels were predictive of RFS (p = 0.027), showing a significantly shorter time to recurrence in patients with positive ctDNA levels (6.9 months) compared to patients with negative ctDNA levels (19.2 months). POD 3 ctDNA levels were also predictive of overall survival, which was significantly lower in ctDNA-positive patients (18.2 months) compared to ctDNA-negative patients (56.7 months). Additionally, patients who were positive for ctDNA at any time point (n = 43) had worse OS (p < 0.001) and RFS (p = 0.003) compared with patients who were negative at both time points (n = 54).

Diehl et al.20 conducted a prospective cohort study in 18 patients with primary or metastatic colorectal cancer (CRC) who underwent surgical treatment. The mutation status of four genes (APC, KRAS, PIK3CA, TP53) was determined using tissue samples. Plasma samples were collected before surgery, on POD 1, the day of discharge (POD 2-10), and at follow-up (POD 13-56). Plasma samples were analyzed using BEAMing digital PCR. Follow-up time was 547 days. An estimated half-life of ctDNA of 114 minutes was determined by sampling one subject several times after surgery. In all subjects who underwent complete resection, a sharp drop in ctDNA was observed, with a 96.7% median decrease evident on POD 1 and a 99.0% decrease on the day of discharge (POD 2-10). In five patients with incomplete resection, ctDNA changes were variable. In two patients, the concentration decreased only slightly (55-56%) on POD 1. In the other three cases, the ctDNA concentration increased (141%, 325%, and 794%). While the quantity of ctDNA generally decreased in cases with complete resection, it did not decrease to undetectable by the first follow-up visit (POD 13-56) in 16 of the 20 cases. Recurrence occurred in 15 of these 16 patients. In contrast, no recurrence occurred in the four patients with undetectable ctDNA at the first follow-up visit. Detectable ctDNA at the first follow-up visit was a significant predictor of recurrence rate (p = 0.006).

Discussion
ctDNA has emerged as a real-time biomarker for detecting MRD after surgical resection with curative intent. Our systematic review of the existing literature determined that the presence of ctDNA in the early postoperative setting was associated with an increased risk of disease recurrence. Overall, the absence of ctDNA was associated with a positive prognosis across multiple cancer types. However, most studies also showed that some patients with undetectable ctDNA levels in the postoperative period experienced recurrent disease. Thus, if ctDNA is to be used clinically as a tool to identify MRD, more sensitive approaches are needed to differentiate true negatives from false negatives in the immediate postoperative period.
All studies demonstrated that detectable ctDNA levels after surgery were an effective predictor of recurrent disease. In six of the eight studies, detectable ctDNA levels in the early postoperative period (POD 0-3) were associated with a statistically significant increase in the risk of recurrence 2,14,15,17-19. In the two remaining studies, elevated ctDNA levels were predictive of recurrence at later time points. Hu et al.16 saw a sharp increase in ctDNA levels in the immediate postoperative period for all patients with incomplete resection; the levels rapidly dropped and were predictive of recurrence by the day of follow-up (POD 30). In contrast, Diehl et al.20 showed that all patients who underwent complete resection experienced a sharp drop in ctDNA levels by the day of discharge (POD 2-10), but the presence or absence of ctDNA at the first follow-up visit (POD 13-56) was most predictive of recurrence.

This review was conducted across several tumor types, and clearance kinetics were highly variable. Median ctDNA half-life varied for NSCLC (35 minutes)14, CRC (114 minutes)20, and NPC (139 minutes)17. The time point at which ctDNA levels most effectively predicted disease status varied as well. According to Chen et al.14, ctDNA levels on POD 3 most accurately predicted recurrence-free survival (p = 0.002) for NSCLC. O'Boyle et al.2 demonstrated that ctDNA levels on POD 1 were the best predictor of residual disease for HPV+OPSCC. Gouda et al.18 found that ctDNA levels measured one hour after surgery most accurately predicted recurrence in melanoma patients. As such, timing is a critical factor in using ctDNA as a predictor of MRD and may also depend on the overall tumor burden, which was not accounted for in most studies.

In most studies, some patients with undetectable ctDNA levels after surgery experienced recurrent disease 2,14,15,17-19. This could be explained by the intrinsic limitations of the methods currently used to detect ctDNA. PCR-based approaches, like ddPCR and qPCR, require prior knowledge of the target mutation. Further, there are a limited number of targets that can be multiplexed per reaction. Studies that used targeted NGS panels also targeted a limited number of mutations, decreasing the detection rate of ctDNA. Increasing the sensitivity of MRD detection is essential to establish ctDNA as a clinically reliable tool in the immediate postoperative period. Newer approaches such as MAESTRO, which dramatically improved limits of detection for MRD, will be key in advancing the field to clinical utility 21.

Further, the current body of literature describing ctDNA as a tool for detecting MRD is limited. Increasing the number of studies using more sensitive, comprehensive approaches across different cancer types is critical for providing a better understanding of ctDNA clearance kinetics overall. Improved datasets would also allow us to determine optimal time points for MRD detection in each cancer type.

In summary, our review of the current body of literature shows that ctDNA levels in the early postoperative period can be used to predict recurrence and prognosis across multiple cancer types. Thus, ctDNA is a promising biomarker of MRD that could guide decision-making after surgical resection with curative intent; however, more sensitive and comprehensive MRD approaches are needed to decrease the rate of false negatives. Our review was limited by the small number of studies currently available and the variability of clearance kinetics across cancer types and tumor stages.
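Half-life figures like those compared above are typically derived from serial plasma measurements by assuming first-order clearance and fitting ln(concentration) against time. The following is a minimal sketch under that assumption; the input values are synthetic illustrations, not data from any of the reviewed studies.

```python
import math

def ctdna_half_life(times_min, concentrations):
    """Estimate half-life (minutes) assuming first-order decay,
    via a least-squares fit of ln(C) against time.
    Assumes the fitted slope is negative (i.e. ctDNA is clearing)."""
    logs = [math.log(c) for c in concentrations]
    n = len(times_min)
    t_bar = sum(times_min) / n
    y_bar = sum(logs) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times_min, logs))
             / sum((t - t_bar) ** 2 for t in times_min))
    return math.log(2) / -slope

# Synthetic series generated with a true half-life of 114 minutes
# (the CRC estimate cited above), starting at 100 copies/ml.
times = [0, 60, 120, 240]
conc = [100 * math.exp(-math.log(2) / 114 * t) for t in times]
print(round(ctdna_half_life(times, conc)))  # ~114
```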
Discrepancies in Ratings of Behavioral Healthcare Interventions Among Evidence-Based Program Resources Websites

Decision makers in the behavioral health disciplines could benefit from tools to assist them in identifying and implementing evidence-based interventions. One tool is an evidence-based program resources website (EBPR). Prior studies documented that when multiple EBPRs rate an intervention, they may disagree. Prior research concerning the reason for such conflicts is sparse. The present study examines how EBPRs rate interventions and the sources of disagreement between EBPRs when rating the same intervention. This study hypothesizes that EBPRs may disagree about intervention ratings because they either use different rating paradigms or they use different studies as evidence of intervention effectiveness (or both). This study identified 15 EBPRs for inclusion. One author (M.J.L.E.) coded the EBPRs for which “tiers of evidence” each EBPR used to classify behavioral health interventions and which criteria they used when rating interventions. The author then computed one Jaccard index of similarity for the criteria shared between each pair of EBPRs that co-rated interventions, and one for the studies used by EBPR rating pairs when rating the same program. The authors used a combination of chi-square, correlation, and binary logistic regression analyses to analyze the data. There was a statistically significant negative correlation between the number of Cochrane Risk of Bias criteria shared between 2 EBPRs and the likelihood of those 2 EBPRs agreeing on an intervention rating (r = −.12, P ≤ .01). There was no relationship between the number of studies evaluated by 2 EBPRs and the likelihood of those EBPRs agreeing on an intervention rating. The major reason for disagreements between EBPRs when rating the same intervention in this study was differences in the rating criteria used by the EBPRs. The studies used by the EBPRs to rate programs do not appear to have an impact.

Introduction
Service providers in the behavioral health disciplines are increasingly expected to use evidence-based interventions in their treatment of various behavioral health problems. [1][2][3][4] Evidence-based interventions are interventions that have "demonstrated positive outcomes through high quality clinical or organizational research" (p. 1). 5 One type of publicly available resource for locating information about evidence-based interventions is the evidence-based program resources website (EBPR). 1,2,4 These EBPRs are databases of "reports that summarize the available evidence of programs' effectiveness, including programs in social services, education, public health, and criminal justice" (p. 409). 2 EBPRs use sets of criteria to evaluate the merit and worth of social interventions, typically using existing research and evaluation studies. 1,2,6 EBPRs tend to follow well-accepted hierarchies of evidence that define randomized controlled trials (RCTs) as producing the highest quality of evidence for intervention effectiveness. 1,6 The ultimate result of EBPR evaluations of social interventions is a summary rating of evidence, or placement in an evidence category, depending on the EBPR.

Statement of the Problem
Even though these EBPRs use similar hierarchies of evidence to rate interventions, the ratings they produce may appear to disagree, or even be contradictory. 1,4,6,7 These discrepancies may lead to confusion for users who have neither the time nor the expertise to conduct further investigation.
This could ultimately threaten the credibility of the conclusions presented by the EBPRs, thereby making their utilization less likely. 1,4 Two major sources of such discrepancies found in the literature are variations in the criteria that constitute the intervention rating categories and variations in the bodies of literature used as evidence of intervention effectiveness. 4,8,9

Background
Differences in Evidence Criteria Among Intervention Rating Paradigms
Definitions of what constitutes an evidence-based intervention vary depending on who or what agency defines them. 3,10,11 Likewise, variations in the criteria used by EBPRs to construct those rating categories are examined in several studies. For example, the Pew-MacArthur Charitable Trusts (PEW) found that specific details about the way different EBPRs construct their rating categories make it difficult to compare the ratings given to the same interventions. 12 Some EBPRs require positive outcomes from an RCT as a minimum for acceptance as an evidence-based intervention (eg, Social Programs That Work), while others allow quasi-experimental designs (eg, CrimeSolutions.gov). Still others require only that an intervention's impact be statistically significant as compared with an alternative (eg, Blueprints for Healthy Youth Development), sometimes also requiring a minimum effect size (eg, Promising Practices Network).

EBPR rating paradigms tend to use 2 approaches in assessing interventions: the quality of evidence in support of the intervention and the strength of evidence in support of the intervention. Quality of evidence refers to the way primary evaluation studies of the intervention are conducted, and strength of evidence refers to the degree of positive outcomes (ie, statistical significance or effect size). The scales used by the EBPRs in the present study tend to primarily consider the quality of evidence, although many also include a requirement that a statistically significant outcome be demonstrated in the evaluation studies. The application of these scales leads to a rating of the effectiveness of an intervention. On the whole, these scales are constructed in one of 2 ways: they either include or exclude interventions, or they place the intervention in a tier, or category, of evidence (eg, Effective, Promising, etc.). No matter which of these is used, the categories of evidence that are assigned to interventions by the EBPRs are delineated by arbitrary "cut points" into discrete categories, with the exception of a few EBPRs that use a rubric producing a continuous rating of the strength of evidence (such as the What Works Clearinghouse).

In 2008, the Cochrane Collaboration (now simply Cochrane) released a Risk of Bias (ROB) tool, which has been updated several times and is considered by researchers to be the gold standard in component analysis. 29 Risk of bias here means the degree to which study operations in each component domain are likely to result in over- or underestimation of a treatment effect. 17 Studies with low risk of bias are thought to be more likely to provide a valid assessment of a treatment effect than those with higher risk of bias. 17 The ROB tool is the preferred method for assessing study quality in Cochrane Reviews. 19 There are 15 criteria areas, sorted into the following general domains: selection bias; attrition and missing data; reliability and validity of measures; implementation fidelity; blinding and other reactivity; analytic methods and power analysis; and reporting of effects.
The present study uses these ROB criteria as the foundation for the analysis of the impact of rating scale rigor on disagreements in ratings of interventions. Research "rigor" has been defined as the "strict application of the scientific method to ensure unbiased and well-controlled experimental design, methodology, analysis, interpretation, and reporting of results" (n.p.). 30

Differences Based on Different Evaluation Literature Used for the Ratings
The way that EBPRs screen studies varies across the EBPRs, and EBPRs do not treat all supporting studies equally. Many of the EBPRs have minimum requirements for studies to be assessed. For example, the What Works Clearinghouse (WWC) 31 evaluates studies in support of interventions using 2 categories: "meets WWC standards without reservations" and "meets WWC standards with reservations." If a study does not fall into one of those categories, then the WWC does not consider that study as part of the supporting evidence for an intervention. However, other EBPRs do not have the same standards as the WWC, and therefore may assess studies that do not fall into one of the WWC categories. As a result, when both the WWC and another EBPR assess the same interventions, they may not use the same supporting studies. Therefore, important information may be missing from one of the evidentiary assessments, which would impact whether 2 EBPRs reach the same conclusion. Two prior studies attribute differences in intervention ratings to differences in the primary studies reviewed by different EBPRs when assessing interventions. 8,9

Purpose of the Study
As discussed, the literature points to 2 major reasons for discrepancies in intervention ratings across EBPRs: differences in the content and relative strictness of criteria used to rate interventions, and differences in the bodies of literature assessed in support of interventions. 4,8,9,32 The present study seeks to answer the question, "What accounts for the variations in the ratings of effectiveness given to the same interventions by different EBPRs?"

Coding
The present study examined the ratings given to behavioral health interventions by 15 EBPRs included in Burkhardt et al. 1 and Means et al. 6 The present study includes all behavioral health interventions included in the EBPRs, with no exclusions. The list of EBPRs included in the present study appears in Table 1. EBPR rating systems use variable numbers of rating categories across EBPRs, and these categories do not directly align with each other as the EBPRs originally define them. One author (M.J.L.E.) placed the ratings assigned to all interventions included in the 15 EBPRs into 4 analytic categories based on a qualitative review of the criteria used by each EBPR to define their ratings. Author M.J.L.E. assigned these categories, while author S.M. reviewed these classifications. These categories are:
1. Category 1 (Top category/evidence-based) - Ratings that show that interventions are superior. In the case of single-category EBPRs, any interventions listed on the EBPR are considered to be in Category 1.
2. Category 2 (Mid category/promising) - Ratings that show uncertainty in intervention effects. These categories may also reflect methodological deficiencies that may bias intervention effects. Also known as promising interventions.
3. Category 3 (No effect/harmful) - Ratings that show that an intervention has either no effect or a negative effect.
4. Category 4 (Not reviewed/not rated) - Ratings where an EBPR considered an intervention, but that intervention did not meet minimum criteria for full assessment.

Author M.J.L.E. coded each EBPR according to which Cochrane Risk of Bias criteria the EBPR incorporated into its rating system by observing which ROB criteria were present or absent. The author's coding was confirmed in interviews with the primary managers (ie, administrators) responsible for the operation of each EBPR to ensure accuracy. Additionally, author M.J.L.E. computed a Jaccard index of agreement (JI) for the interventions rated by each possible pairing of EBPRs. The Jaccard index is a measure of overlap between 2 sets of data, defined as the degree of intersection of the 2 sets expressed as a proportion of the degree of union of the 2 sets. 33 The Jaccard index is calculated as:

JI = (items shared by both EBPRs) / (EBPR A unique items + EBPR B unique items + shared items)

Thus, for the case where EBPR A has 7 unique items, EBPR B has 7 unique items, and they share 23 items, JI = 23/(7 + 7 + 23) = 0.62. It should be noted that in the context of this study, "items" can mean interventions rated by 2 EBPRs, ROB criteria shared between 2 EBPRs, or studies assessed by 2 EBPRs. Author M.J.L.E. coded each EBPR pair for level of agreement in ratings for each intervention they co-rated. The possible outcomes of these paired ratings are:

1. Substantial agreement - this occurs when both EBPRs in a pair rate the same intervention in the same category (Category 1 or Category 2).
2. Partial disagreement - this occurs when one EBPR rates an intervention in Category 1 and the other EBPR rates the intervention in Category 2.
3. Substantial disagreement - this occurs when one EBPR rates an intervention in Category 1 or 2 and the other EBPR rates the intervention in Category 3.

Author M.J.L.E. calculated a Jaccard index of agreement for the ROB criteria shared by each possible pair of EBPRs. Some pairs of EBPRs used more criteria overall than others, leading to an issue where a pair that used 2 criteria with one overlapping had the same Jaccard index as a pair that used 8 criteria with 4 overlapping. To account for this, the author calculated a weighted Jaccard index by dividing the total number of criteria used by either EBPR by 15 (the total possible number of ROB criteria that could be used by a pair) and multiplying the Jaccard index by this weighting factor. For example, if 2 EBPRs together used a total of 10 criteria, the weighting factor would be 0.67 (10 criteria used divided by 15 total criteria). To understand agreement or disagreement between EBPR ratings in the context of supporting-study overlap, the authors created a matched sample of interventions that had at least one discrepant rating (one EBPR rates the intervention in Category 1 and another rates the intervention in Category 3) and interventions that had at least one substantial agreement rating. This sampling method was used to highlight differences that would otherwise be ambiguous if the partial disagreement category were included in the analysis. Figures 1 and 2 below present the number of interventions identified in each phase of the sampling process. Once the final agreement sample was derived, the disagreement sample was noted to have only 30 cases with data. The samples were balanced by removing 3 cases at random from the agreement sample. The final number of paired ratings per agreement/disagreement group was 30, for a total of 60 paired ratings overall.
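To make the index concrete, the following is a minimal sketch of the Jaccard and weighted Jaccard calculations described above. The function names, the example sets, and the use of Python sets are illustrative assumptions, not part of the study's actual tooling; the worked example simply reproduces the 23-shared-items case in the text.

```python
# Illustrative sketch of the (weighted) Jaccard index described above.
# The EBPR item sets and TOTAL_ROB_CRITERIA constant are assumptions for
# this example, not data from the study.

TOTAL_ROB_CRITERIA = 15  # total possible Cochrane ROB criteria per pair

def jaccard_index(set_a, set_b):
    """Intersection of the two sets as a proportion of their union."""
    union = set_a | set_b
    if not union:
        return 0.0
    return len(set_a & set_b) / len(union)

def weighted_jaccard_index(set_a, set_b, total=TOTAL_ROB_CRITERIA):
    """Jaccard index scaled by the share of all possible criteria the pair uses."""
    weight = len(set_a | set_b) / total
    return jaccard_index(set_a, set_b) * weight

# Hypothetical EBPR pair with 7 items unique to each and 23 shared items,
# reproducing the worked example in the text (JI = 23/37, about 0.62).
ebpr_a = set(range(30))                        # items 0-22 shared, 23-29 unique to A
ebpr_b = set(range(23)) | set(range(30, 37))   # items 0-22 shared, 30-36 unique to B
print(round(jaccard_index(ebpr_a, ebpr_b), 2))  # 0.62
```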
A Jaccard index was calculated for the studies assessed in support of interventions rated by each possible EBPR pair in this matched subsample.

Data Analysis

Descriptive statistics were obtained for the distribution of intervention ratings by analytic category and the frequencies of each type of disagreement in rating outcome between each possible EBPR pair when rating interventions. A correlation analysis was conducted between 3 indices of agreement in criteria used by each pair of EBPRs and the outcome of agreement or disagreement in ratings. These indices were the raw number of criteria shared between 2 EBPRs, the Jaccard index of agreement for shared ROB criteria, and the weighted Jaccard index of agreement for shared ROB criteria. A Jaccard index was also calculated for the number of references used by both EBPRs in a rating pair when rating each intervention in the disagreement analysis sample. This Jaccard index was used in a chi-squared analysis of high versus low Jaccard index against substantial agreement versus substantial disagreement in the intervention ratings given by each EBPR pair. To assess the impact of both shared rigor and shared studies on the likelihood of agreement or disagreement in intervention ratings between pairs of EBPRs, a binary outcome (0 = substantial disagreement, 1 = substantial agreement) was regressed on the weighted Jaccard index for shared rigor and the Jaccard index for shared references.

Distribution of Intervention Ratings

In total, 1,151 ratings were given to the included interventions by the EBPRs. Of those, 23% were Category 1 (top category/evidence-based) ratings, 38% were Category 2 (mid category/promising) ratings, 13% were Category 3 (harmful/no effect) ratings, and 27% were Category 4 (not reviewed/not rated) ratings.

Overall Rigor of the Rating Scales

The largest proportion of EBPRs (40%) used 7 to 9 criteria. Roughly 47% of the EBPRs used 6 or fewer criteria, while roughly 53% used 7 or more criteria (Table 2). The most used criteria were selection bias controls and implementation fidelity (Figure 3). The EBPRs varied in their degree of use of the other ROB controls (Table 3).

Impact of Shared Rigor on Shared Intervention Ratings

Correlations were obtained between the percentages of intervention ratings in each category for each EBPR pair and the 3 agreement indices for each EBPR pair (Table 4). An example of the data for this analysis is presented in Table 5, and the results of the analysis are presented in Table 6. A statistically significant negative correlation was observed between the weighted Jaccard index for a rating pair and the percentage of ratings where the rating pair substantially agreed on an intervention's rating. Additionally, a statistically significant positive correlation was found between the weighted Jaccard index for a rating pair and the percentage of any type of disagreement in paired ratings. The correlations in Table 6 indicate that EBPR pairs that are more similar in their rating criteria are less likely to agree on intervention ratings. The most commonly shared criteria (Figure 4) included controls for selection bias in general, controls for attrition, and implementation integrity. It should be noted that a substantial proportion of EBPRs required that studies report statistically significant impacts only.
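Before turning to the shared-literature results, the following is a minimal sketch of the analysis pipeline just described (correlation, chi-square test, and binary logistic regression). All values, counts, and column names are invented placeholders used only to show the shape of the computation; they are not figures from the study.

```python
# Illustrative sketch of the analyses described above. All data are invented.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, chi2_contingency
import statsmodels.api as sm

# Per-EBPR-pair summary table (hypothetical values).
pairs = pd.DataFrame({
    "weighted_jaccard_criteria": [0.10, 0.25, 0.33, 0.40, 0.55],
    "pct_substantial_agreement": [60.0, 55.0, 48.0, 40.0, 35.0],
    "pct_any_disagreement":      [25.0, 30.0, 34.0, 42.0, 50.0],
})
r1, p1 = pearsonr(pairs["weighted_jaccard_criteria"], pairs["pct_substantial_agreement"])
r2, p2 = pearsonr(pairs["weighted_jaccard_criteria"], pairs["pct_any_disagreement"])
print(f"agreement r={r1:.2f} (p={p1:.3f}); disagreement r={r2:.2f} (p={p2:.3f})")

# 2x2 chi-square test: rows = low/high study-overlap Jaccard index,
# columns = substantial disagreement / substantial agreement (invented counts).
table = np.array([[14, 16],
                  [16, 14]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.3f}, df={dof}, p={p:.3f}")

# Binary outcome (0 = substantial disagreement, 1 = substantial agreement)
# regressed on the two Jaccard-based predictors (simulated data).
rng = np.random.default_rng(0)
n = 60
X = np.column_stack([rng.uniform(0, 1, n),   # weighted Jaccard, shared criteria
                     rng.uniform(0, 1, n)])  # Jaccard, shared studies
y = rng.integers(0, 2, n)
logit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(logit.params)
```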
Impact of Shared Evaluation Literature on Shared Intervention Ratings

A chi-square test for independence did not indicate any significant relationship between the number of studies shared in a rating pair and the outcome of the rating (substantial disagreement or substantial agreement; χ² = .267, df = 1, P ≥ .05). The contingency table for this analysis appears in Table 7.

Predictive Model

A binary logistic regression analysis was also conducted by regressing a binary rating outcome variable (0 = substantial disagreement, 1 = substantial agreement) on the weighted Jaccard index for criteria and the Jaccard index for shared studies. Partial disagreement was not used as an outcome category in the regression because, per the study protocol, no data were collected on studies shared in partial disagreement cases. The results of the binary logistic regression indicated that these variables, taken as a whole, were not significant predictors of agreement on intervention ratings as defined in the present analysis (Cox & Snell R² = .01, Nagelkerke R² = .014, df = 8, P > .05).

Discussion

Approximately 46% of paired ratings resulted in substantial agreement on the intervention ratings, while the number of substantial disagreements was low (6%). If partial disagreements are considered, then 33% of paired ratings resulted in some type of disagreement. However, the situation where one EBPR gives a Category 1 rating and another gives a Category 2 rating (partial disagreement) is a special case that requires interpretation on the part of the decision maker. One could view this situation as a disagreement in intervention ratings, since the second EBPR is clearly saying that the intervention is not a Category 1 intervention. However, this situation could also be interpreted as meaning that the intervention is recognized by each EBPR as having some degree of evidentiary support. This speaks to the issue of granularity: the more rating categories each EBPR has, the more potential there is for discrepancies in ratings that require additional interpretation by decision makers.

There was a statistically significant negative correlation between the number of shared Cochrane ROB criteria and the probability of substantial agreement. This means that as the percentage of shared Cochrane ROB criteria between EBPRs increases, the similarity of their ratings of the same intervention decreases. One would expect the opposite: it would be reasonable to assume that as agreement in the rating criteria increases, so should the likelihood that 2 raters would agree, because they would essentially be using the same rating scale. It is possible that this negative correlation is associated with which criteria the 2 EBPRs in a pair shared. For example, sharing the requirement for an RCT may lead both EBPRs to rate the intervention into a specific category (such as Category 1) more frequently than if both EBPRs shared a requirement for adequate blinding of participants. This would increase the likelihood that the 2 EBPRs would agree. It is also possible that as the complexity of the rating paradigms used by each EBPR increases, the likelihood of either EBPR placing an intervention in Category 1 decreases, because it would be harder for an intervention to be classified into Category 1 by either EBPR. Thus, the cumulative impact of the use of a larger number of rating criteria by either EBPR would decrease the likelihood of agreement between the 2 EBPRs by chance alone.
Finally, evaluation is a human process, so even seemingly objective criteria can be applied subjectively. Under this idea of human subjectivity, it would seem that the more criteria used by 2 raters, the more chances there are for disagreement based on a fine point of contention. One of the most important criteria typically used in determining the rigor of research is the use of randomized controlled trials. In the present study, the specific requirement of randomization (vs simply requiring use of general selection bias controls) was not found at all in over 66% of the pairings and was unique to one EBPR in approximately 30% of the pairings. It is possible that if both EBPRs in a rating pair required randomization specifically as a Category 1 criterion, the chance that they would agree on an intervention rating would increase. The same is true for 2 other important aspects of methodological rigor: the use of studies with adequate statistical power and the use of supporting studies that report all effects (not just significant ones). The requirement for adequate statistical power as an assessment criterion was missing or unique to one EBPR in almost 90% of the rating pairs included in this study. The requirement that studies report all effects was missing or unique in approximately 96% of the pairs. This again means that important methodological criteria were not agreed upon in many of the EBPR pairings.

As with overlap in the Cochrane ROB criteria used, one could infer that the more shared studies used by an EBPR pair, the greater the likelihood that those 2 EBPRs would agree on an intervention rating, as they are using the same pool of evidence to rate the interventions. However, this study demonstrated no relationship between the number of shared studies and the outcome of the paired rating. This appears to indicate that, between the 2 explanatory factors (ie, rigor of the rating paradigm and the number of shared studies), the rigor of the rating paradigm has the more salient relationship with rating agreement. The analysis of shared studies and shared criteria together as predictors of rating agreement was statistically non-significant. This finding, when paired with the findings of the individual analyses of the relationship between shared Cochrane ROB criteria and shared ratings, appears to indicate that the idiosyncrasies of the current rating paradigms may contribute more to the differences in paired ratings than shared evaluative rigor or shared evidence. This appears to align with prior research indicating that using methodology-based scales with arbitrary cut points (such as those currently used by the EBPRs) is not as effective as a bias reduction model in rating interventions as evidence-based.

Conclusions and Recommendations

The major finding of this study is that when a perceived substantial disagreement in intervention ratings occurs, it is likely due to differences in the intervention rating paradigms used, particularly in relation to the number of shared Cochrane ROB criteria. Thus, when decision makers encounter conflicting ratings, they may need to make themselves aware of how the structure of the EBPR rating paradigms affects those ratings. Future work in this area could involve the creation of a comprehensive user guide that includes multiple EBPRs and gives advice on how to manage conflicting ratings between the EBPRs. One other aspect of understanding and resolving seeming conflicts in ratings is understanding the degree of the conflict.
Substantial disagreements (ie, when one EBPR rates an intervention as evidence-based and another EBPR rates an intervention as having no effect) are the most serious type of conflict, in that the 2 EBPRs directly contradict each other. Resolving such a conflict is imperative, in that the user must decide which EBPR is correct if they are going to justifiably implement a given intervention. In this case, the user would be well advised to consult another EBPR, carefully read the primary research, or seek the advice of a trusted expert to resolve the conflict. The second type of disagreement is a partial disagreement (or partial agreement). This type of disagreement occurs when one EBPR rates an intervention as top-tier and another rates the intervention as mid-tier. This is the less serious of the 2 types of disagreements. In this case, a decision maker could correctly infer that the intervention does possess some evidence base. If decision makers are permitted to implement mid-tier/promising interventions, then the seeming conflict is not consequential. If decision makers are only permitted to implement top-tier interventions, then they could still make the case that there are indications that the intervention is, in fact, evidence-based.

One final thought on the issue of conflicting ratings based on the rating paradigms themselves is that EBPRs vary in the number of rating scales used. Some EBPRs, such as NREPP, had both a quality of research scale and a readiness for dissemination scale. Other EBPRs, such as the CEBC, consider relevance to the population of interest, and still other EBPRs have specific impact ratings that go beyond the quality of the research (eg, WWC). These additional scales may also provide information that can help decision makers manage conflictual ratings. They may decide to use an intervention even if it has conflictual ratings, so long as that intervention shows readiness for implementation and is appropriate for the desired clinical context.

Implications for Future Research

The present study examined 2 major reasons why intervention ratings may vary among different EBPRs. However, more research is needed to further understand these variations. Future research may ask, for example, how decision makers resolve conflicts in ratings between EBPRs. Answering such questions could ultimately help the EBPRs improve their assessment processes and help decision makers understand how to resolve conflictual ratings between EBPRs. This could improve the value of EBPRs for decision makers and other EBPR users.

Limitations of the Present Study

The data for the present study were collected in 2014 and 2015 for a parent study. However, the use of these older data for the present study is justified for the following reasons. First, the identity of the EBPRs has remained relatively stable over the last decade; only three of the original 15 that were studied have ceased operations. Moreover, inspection of the websites indicates that the substantial majority of the original interventions remain listed. Finally, the original data were collected in part through contact with EBPR managers, who confirmed data coding concerning the use of ROB criteria in EBPR rating paradigms. Because the funded study has concluded, the authors do not have access to the EBPR managers required to confirm any updated coding. This means that any new coding could be less valid than coding from the original dataset.
A second limitation is that single-category EBPRs were conceptualized as rating an intervention as Category 1 if they included the intervention as an evidence-based intervention, with no gradations of evidence. This was done because the assumption was that including the intervention indicated it was evidence-based, in the same way that the concept of Category 1 for multiple-category EBPRs also indicates that an intervention is evidence-based. However, it is unclear whether that assumption holds. These 2 types of categories may be semantically different. For example, include/exclude-type ratings simply tell whether an intervention met minimum criteria for inclusion, but do not indicate whether some interventions are truly superior to others. However, Category 1 ratings from multiple-category EBPRs do imply superiority, because some interventions clearly did not make the cut.

A third limitation is that this study used the number of Cochrane ROB criteria as an indicator of methodological rigor only. Some EBPRs also include requirements that an intervention show a significant effect, and that requirement may contribute to some of the disagreement between EBPRs that is outside the influence of the Cochrane ROB criteria alone. In other words, the fact that the EBPRs do not explicitly use the Cochrane ROB criteria as their rating paradigms may make the comparisons in the present study less reliable. However, this may have been offset by the fact that the EBPR managers confirmed which Cochrane ROB criteria were approximated within their rating paradigms. The final limitation is that we only hypothesized 2 explanations for disagreements in intervention ratings: variations in the content and relative strictness of EBPR rating paradigms, and variations in the bodies of literature used by EBPRs as evidence of intervention effectiveness. There may be other explanatory factors, such as considerations of readiness for dissemination and consideration of intervention effects, that may also explain differences in intervention ratings. Additionally, the present study did not control for which Cochrane ROB criteria were shared by the EBPRs when rating interventions, and that may also have affected the rating outcomes.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Data collection was partially supported by National Institute on Drug Abuse grant # 1R21DA032151-01.

Declaration of Ethics

Our study did not require ethical board approval because it did not involve human subjects research.

Declaration of Informed Consent

Our study did not require informed consent as it did not involve human subjects research.
Broadband Pulsations from PSR B1821-24: Implications for Emission Models and the Pulsar Population of M28

We report a 5.4σ detection of pulsed gamma rays from PSR B1821-24 in the globular cluster M28 using ~44 months of Fermi Large Area Telescope (LAT) data that have been reprocessed with improved instrument calibration constants. We constructed a phase-coherent ephemeris, with post-fit residual RMS of 3 μs, using radio data spanning ~23.2 years, enabling measurements of the multi-wavelength light curve properties of PSR B1821-24 at the milliperiod level. We fold RXTE observations of PSR B1821-24 from 1996 to 2007 and discuss implications for the emission zones. The gamma-ray light curve consists of two peaks, separated by 0.41±0.02 in phase, with the first gamma-ray peak lagging the first radio peak by 0.05±0.02 in phase, consistent with the phase of giant radio pulses. We observe significant emission in the off-peak interval of PSR B1821-24 with a best-fit LAT position inconsistent with the core of M28. We do not detect significant gamma-ray pulsations at the spin or orbital periods from any other known pulsar in M28, and we place limits on the number of energetic pulsars in the cluster. The derived gamma-ray efficiency, ~2%, is typical of other gamma-ray pulsars with comparable spin-down power, suggesting that the measured spin-down rate (2.2×10^36 erg s^-1) is not appreciably distorted by acceleration in the cluster potential. This confirms PSR B1821-24 as the second very energetic millisecond pulsar in a globular cluster and raises the question of whether these represent a separate class of objects that only form in regions of very high stellar density.

INTRODUCTION

Since the launch of the Fermi Gamma-ray Space Telescope in 2008, significant high-energy (HE, ≥0.1 GeV) pulsations have been detected from more than 40 millisecond pulsars (MSPs, Abdo et al. 2013, mostly in the Galactic field) using the Large Area Telescope (LAT, a pair-production telescope sensitive to photons with energies from 20 MeV to >300 GeV, Atwood et al. 2009), the main instrument aboard Fermi. Additionally, HE emission has been detected from the directions of more than a dozen globular clusters (Abdo et al. 2010a; Kong et al. 2010; Tam et al. 2011), known or thought to host many MSPs, and the observed spectra of these point sources are consistent with the superposition of emission from several MSPs (predicted by Chen 1991). The one exception is PSR J1823−3021A (in the globular cluster NGC 6624, Biggs et al. 1994), from which significant gamma-ray pulsations have been detected and which accounts for all of the observed HE emission associated with the parent cluster (Freire et al. 2011). To date, all LAT sources associated with globular clusters are consistent with point-like emission, with reported 2σ upper limits on any extension of <16′ assuming a two-dimensional Gaussian profile (Abdo et al. 2010a). MSPs are thought to be old "recycled" pulsars that have reached rapid rotation rates via accretion from a companion star (e.g., Alpar et al. 1982). However, the confirmation of PSR J1823−3021A as a very energetic MSP suggests an unusual formation scenario, such as the collapse of a white dwarf to a neutron star induced by accretion or a merger with another white dwarf (Ivanova et al. 2008), which may be more likely in globular clusters. As such, it is important to search for and/or confirm similar MSPs in other globular clusters.
Detecting gamma-ray pulsations from more MSPs in globular clusters will help to constrain models for the broadband emission from the clusters (e.g., Cheng et al. 2010; Zajczyk et al. 2013; Kopp et al. submitted). Constraining the models will generally determine the expected flux level and, once the number of sources is known, may be important for extracting the associated particle conversion efficiency from such modeling, thereby constraining the reacceleration that particles may undergo within the clusters once they leave the MSP magnetospheres.

PSR B1821−24

Located within the core of the globular cluster M28 (NGC 6626), PSR B1821−24 is an isolated MSP with a spin period (P) of ∼3.05 ms and was the first pulsar ever detected in a globular cluster (Lyne et al. 1987). The observed period derivative (Ṗ) of ∼1.62 × 10^−18 s s^−1 (Foster et al. 1988) leads to an inferred rotational energy-loss rate of Ė ≡ 4π²IṖ/P³ ∼ 2.2 × 10^36 erg s^−1, where I is the moment of inertia of the neutron star and is taken to be 10^45 g cm². This is the highest Ė of any known rotation-powered MSP, according to version 1.46 of the ATNF Pulsar Database 2 (Manchester et al. 2005). While it is possible that the Ṗ could be artificially enhanced by the gravitational field of the cluster, Phinney (1993) showed that the Ṗ is largely intrinsic. PSR B1821−24 is also the first MSP from which non-thermal pulsed X-ray emission was detected (Saito et al. 1997, using the Advanced Satellite for Cosmology and Astrophysics). Rots et al. (1998) used data from the Rossi X-ray Timing Explorer (RXTE) and the Green Bank Telescope to determine that the first X-ray and radio peaks were separated by only 0.02 in phase. Using data from the Chandra X-ray Observatory, Rutledge et al. (2004) and Bogdanov et al. (2011) found that ∼15% of the non-thermal X-ray flux of PSR B1821−24 was unpulsed. PSR B1821−24 was the first MSP ever observed to undergo a glitch (Cognard & Backer 2004). A glitch has also been observed from the mildly recycled PSR B1913+16 (Weisberg et al. 2010). Romani & Johnston (2001) and Knight et al. (2006) reported the detection of giant radio pulses of up to 50 and 91 times the mean pulse intensity, respectively, from PSR B1821−24. The giant pulses are concentrated in a narrow phase window coincident with the first X-ray peak, similar to what has been observed in the original MSP, PSR B1937+21 (Cusumano et al. 2003). Even at a distance (d) of 5.1±0.5 kpc (from optical observations of stars in M28, Rees & Cudworth 1991), the relatively large Ė of PSR B1821−24 makes it a promising candidate for gamma-ray studies. A 4.2σ HE pulsed detection was reported by Pellizzoni et al. (2009) using data from the Astrorivelatore Gamma a Immagini LEggero (AGILE) satellite, but pulsations were only significant in the first 5 days of the observation, the HE pulse profile observed with AGILE does not match the LAT profile (see Section 4.2 and Wu et al. 2013), and the observed flux above 100 MeV was greater than the 3σ upper limit set using data from the Energetic Gamma-Ray Experiment Telescope (Fierro et al. 1995). The 2FGL catalog (Nolan et al. 2012) associates 2FGL J1824.8−2449 with M28, and Abdo et al. (2010a) estimated the number of MSPs in the cluster, based on the HE spectrum, to be 43 (+24/−21). Wu et al. (2013) found a 4.3σ pulsed detection using ∼42 months of Pass 7 LAT data (Ackermann et al. 2012), without the updated instrument calibration constants discussed in Section 3.3, and using the timing solution of Ray et al.
(2008), which is not contemporaneous with the LAT data thus leaving open the possibility that the gamma-ray peaks have moved with respect to the radio emission due to timing noise or unmodeled dispersion measure (DM) variations. OBSERVATIONS AND DATA ANALYSIS PSR B1821−24 is timed under the auspices of the LAT Pulsar Timing Consortium (Smith et al. 2008), within which ephemerides are provided from radio observatories around the world for 208 pulsars ranked by Ė /d 2 . The timing solution described in Section 3.1 will be made available through the Fermi Science Support Center 3 . RADIO TIMING The radio timing solution for PSR B1821−24 has been constructed with the Tempo2 4 pulsar timing package (Hobbs et al. 2006), using times of arrival (TOAs) recorded at the Nançay Radio Telescope (NRT) in France, the Westerbork Synthesis Radio Telescope (WSRT) in the Netherlands, and the Lovell Telescope at the Jodrell Bank Observatory in the United Kingdom. In order to encompass X-ray and Fermi LAT observations of PSR B1821−24, we used 2994 TOAs spanning from 1989 October 3 (MJD 47802) to 2012 December 1 (MJD 56262). Between 1989 October and 2004 November, Nançay pulsar observations were carried out by mixing the signal with a swept frequency local oscillator mimicking the dispersion caused by the interstellar medium, as described in Cognard et al. (1996); while after late 2004 observations were made using the Berkeley-Orléans-Nançay backend (Cognard & Theureau 2006). Although the bulk of radio observations were conducted at 1.4 GHz, the timing data set also included TOAs recorded at different frequencies from 1.6 to 2 GHz, in order to measure and monitor long-term changes in the DM, necessary for comparing profiles at different wavelengths. In addition, a total of 81 1.4 GHz WSRT TOAs recorded with the PuMa and PuMa-II backends (Voûte et al. 2002;Karuppusamy et al. 2008) between 2004 October 10 (MJD 53288) and 2012 September 14 (MJD 56184), as well as 29 1.5 GHz Jodrell Bank TOAs (Hobbs et al. 2004) recorded with the DFB backend between 2009 August 31 (MJD 55074) and 2012 September 13 (MJD 56183) were included. Figure 1 shows phase-aligned Nançay radio profiles recorded at 1.4 and 2 GHz, based on ∼58.1 hours of observations made between 2008 July 11 and 2011 February 25 and 40.9 hours of observations made between 2004 December 20 and 2008 May 13, respectively; and a 0.35 GHz Westerbork profile obtained by integrating ∼8.5 hours of observations conducted between 2013 June 6 and 2013 June 19 with a frequency bandwidth of 0.08 GHz. The relative phase alignment between the 0.35 GHz light curve and the higher frequency radio profiles was estimated by extracting four TOAs from the 0.35 GHz Westerbork data and calculating the average offset between the low frequency and the high frequency Westerbork TOAs with the ephemeris for PSR B1821−24 obtained from the analysis described below. We estimate that the statistical uncertainty on the relative alignment is on the order of 5 milliperiods (mP). The few 0.35 GHz TOAs were not included in the TOA data set for the timing analysis, having large uncertainties and being affected by strong scattering from the interstellar medium. For the radio profiles we use the peak naming convention of Backer & Sallmen (1997), though we shift the first peak to be at phase zero rather than ∼ 0.3. At 0.35 GHz the P2 is not visible while P1 and P3 appear to broaden and have comparable peak heights. We first constructed a timing model covering the total TOA data set with good accuracy. 
At this stage the free parameters were the pulsar position, proper motion, and pulse frequency and its first two time derivatives. The published parameters from the glitch in 2001 March (Cognard & Backer 2004) were included, and refit in the timing model. We then fixed the parameters at the best-fit values and used the Nançay timing data set to determine the DM and its variations. The data set was split into seven intervals spanning two to three years of data, over which TOAs were recorded with a single backend and at multiple frequencies. A DM value was obtained for each of these intervals using Tempo2. A least-squares fit of the seven DM values with a linear function was performed, yielding the values for the DM at MJD 52400 and its first time derivative listed in Table 1. The DM and first time derivative were included in the timing model, and frozen at those best-fit values in subsequent analyses. We note that the uncertainty in the DM leads to an uncertainty of ∼1.1 mP in the conversion of 1.4 GHz TOAs to infinite frequency at the epoch of the ephemeris. Finally, the timing model was updated by refitting the total TOA data set using the independently determined DM value and its first time derivative, and leaving other parameters free. The best-fit parameters obtained from this analysis, displayed in Table 1, give an RMS of timing residuals of 9.2 µs, with a maximum excursion of 17 mP. As can be seen from Figure 2, the TOA residuals exhibit low-frequency structures consistent with rotational irregularities (so-called "timing noise", see e.g., Hobbs et al. 2004, 2010), which we modeled using eight harmonically related sinusoids, using the "FITWAVES" option of Tempo2, and fixing all other timing parameters.

(Figure 1 caption: 2 GHz Nançay profile, 1.4 GHz Nançay profile, and 0.35 GHz Westerbork profile. We denote the second-highest radio peak at 1.4 GHz, near phase 0, as P1; the highest radio peak at 1.4 GHz, near phase 0.3, as P2; and the lowest radio peak at 1.4 GHz, near phase 0.5, as P3. Both P1 and P3 are also visible at 0.35 and 2 GHz, while P2 has no obvious counterpart at 0.35 GHz.)

After the whitening procedure, the timing residuals had an RMS of 3.1 µs, with a maximal excursion of 10 mP. The whitened timing residuals are displayed in the lower panel of Figure 2. The X-ray and gamma-ray timing analyses presented in Sections 3.2 and 4.2 were carried out with the whitened timing solution, including the FITWAVES parameters. The observed Ṗ of a pulsar can be increased from the true value by contributions from the proper motion (Shklovskii 1970). At a distance of 5.1 kpc and with a total proper motion of 8.5 mas yr^−1, this effect contributes ∼2.7 × 10^−21 s s^−1 to the measured Ṗ of PSR B1821−24, three orders of magnitude less than the value reported in Table 1. Therefore, we do not correct for this effect in the observed and derived parameters of PSR B1821−24. The latest proper motion measurement for M28 (Casetti-Dinescu et al. 2013) agrees well with our values, with a total difference of 21 km s^−1 at a distance of 5.1 kpc. This difference is less than the estimated escape velocity of 63.8 km s^−1 (Gnedin et al. 2002), suggesting that PSR B1821−24 is in fact bound to the cluster.

X-RAY DATA

The RXTE observations we report on here were performed by the Proportional Counter Array (PCA, which consists of 5 individual proportional counter units, PCUs) from 1996 September 16 (MJD 50342.261) to 2007 April 26 (MJD 54216.252), accumulating a total integration time of ∼469 ks.
These observations employed anywhere from 1 to 5 PCUs in various combinations during each observation with data recorded using GoodXenon or GoodXenonwithPropane mode. The PCA data were analyzed using the HEASoft version 6.12 data analysis suite. We employed a variety of bit masks 5 to select events from the PCUs in the 3 to 16 keV range that were on during each individual observation. In addition, Ray et al. (2008) reported that including events from the first and second anode layer improved the signal-to-noise of the pulsed detection and we followed that prescription here. We did not apply a background correction. The PCA is not an imaging instrument. Rather, it has a field of view approximately represented by a Gaussian with FWHM of 14 ′ (Jahoda et al. 2006). This means that other X-ray sources known to be in M28 and to have significant flux above 3 keV (e.g., Becker et al. 2003) will contribute to the total count rate in each observation. Because the contribution from these additional sources will add incoherently to the pulsed signal from PSR B1821−24and we cannot know which events are from PSR B1821−24, we do not attempt to account for these additional X-ray sources in our analysis or to estimate a resulting background level for the pulsed analysis in Section 4.2. The events that satisfy our selection criteria were barycentered with the faxbary tool using the DE405 solar system ephemeris and including the RXTE fine clock corrections yielding an individual event timing accuracy of ∼ 6 µs (Rots et al. 1998;Jahoda et al. 2006). The proper motion of the pulsar was incorporated into the position used to barycenter the data at each epoch. Pulse phases Table 1 were calculated utilizing the Photon Events plugin 6 for Tempo2 and the radio ephemeris described in Section 3.1. LAT DATA: P7REP Pass 7 LAT data have been reprocessed 7 using updated calibration constants for the detector subsystems, most importantly for the calorimeter (CAL) to more accurately describe the positiondependent response of each scintillator crystal and the slight decrease in scintillation light yield with time (∼1% per year) from radiation exposure on orbit. This reprocessing affected the LAT data (P7REP, hereafter) in several ways. First, the pointspread function (PSF) is significantly improved above a few GeV, with a reduction in the 68% containment radius of 30% (40%) for events converting in the front (back) of the tracker (Bregeon et al. 2013). At these energies, the improved calibration constants result in more accurately calculated centroids of energy deposition in the CAL to constrain the incident event direction. Second, the significance of detection and precision of measured photon flux is increased slightly for most sources -more strongly for sources with hard spectra than for those, like pulsars, with cutoffs at a few GeV. Third, spectral features such as cutoff energies are shifted upward slightly in energy (∼few %) by the change in energy scale. We selected events from the P7REP data corresponding to the SOURCE class recorded between 2008 August 4 and 2012 March 31; with reconstructed directions within 11. 
• 5 of the pulsar radio position, allowing us to construct a 16 • × 16 • square region with no blank corners for a binned likelihood analysis (see Section 4.1); energies from 0.1 to 100 GeV, the lower limit is that recommended for analysis of P7REP data and the upper limit adequately covers the range of known pulsar cutoff energies; and zenith angles ≤100 • , to reduce contamination of gamma-rays from the limb of the Earth. Good time intervals were then selected corresponding to when the instrument was in nominal science operations mode, the rocking angle of the spacecraft did not exceed 52 • , the limb of the Earth did not infringe upon the region of interest, and the data were flagged as good. All LAT analyses were performed using the Fermi Science Tools v9r27p1. The recommended instrument response functions (IRFs, which include the PSF, effective area, and energy dispersion) for analyzing P7REP data are P7REP V15. These IRFs are derived from detailed simulations of the instrument (Ackermann et al. 2012) with some modifications based on on-orbit performance checks, which are detailed below. The accuracy with which incoming event directions are reconstructed is dependent on the energy (E), interaction point within the instrument, and angle with respect to the boresight 8 (θ). For a SOURCE class event converting in the front of the instrument, the energy-dependent 68% confidence-level containment radius, averaged over the acceptance, can be approximated as Although the reprocessing significantly improved the PSF at high energies, the angular distribution of gamma rays around point sources used for in-flight calibration of the PSF above 3 GeV was still found to be slightly broader in the P7REP data than predicted by the Monte Carlo (MC) PSF. The on-orbit PSF for the P7REP V15 IRFs was derived by rescaling the MC PSF to match the angular distribution of gamma rays around the Vela pulsar below 10 GeV and a sample of bright, high-latitude blazars above 10 GeV. This correction to the MC PSF model rescales the size of the PSF as a function of energy while preserving the dependence on θ; formerly, for the P7 V6 IRFs, recommended for analyzing the original Pass 7 data, the θ dependence was not preserved in making this correction (Ackermann et al. 2013). There is a known discrepancy between the fluxes arising from analyses using only events that convert in the front or the back of the tracker subsystem (see Figure 47 and Section 5.6 of Ackermann et al. 2012). This discrepancy occurs mainly at energies below 300 MeV with differences of 10%. The P7REP V15 effective area tables include an empirical correction for this that does not modify the overall effective area inferred from MC studies. The total effective area for a near on-axis, 1 GeV, SOURCE class gamma ray is ∼7000 cm 2 . SPECTRAL AND SPATIAL ANALYSIS A binned maximum likelihood analysis was performed on a 16 • × 16 • region centered on the pulsar position using the P7REP SOURCE V15 IRFs. All sources from a three-year source list, produced following the same procedure used for the 2FGL catalog, using the original Pass 7 data, and P7SOURCE V6 IRFs, within 15 • of PSR B1821−24 were included in the model of the region and all spectral parameters of sources within 8 • (23 point sources and 2 extended sources) were left free. The Galactic diffuse emission was modeled using the gll iem v05.fit model while the isotropic diffuse emission and residual instrument background were jointly modeled using the iso source v05.txt template 9 . 
These diffuse models were produced specifically for the P7REP data using a refined approach in which residuals in the LAT data were used to fit components of the diffuse emission not derived from observations at other wavelengths (see Ballet & Burnett 2013). We modeled the spectrum of PSR B1821−24 as both a simple power law (Eq. 1) and an exponentially-cutoff power law (Eq. 2). Using the likelihood ratio test, a simple power-law shape is ruled out, in favor of an exponentiallycutoff power law, with a confidence level of 5.6σ. We detect a point source at the position of PSR B1821−24 with a likelihood test statistic (TS, Nolan et al. 2012) of 438. The best-fit spectrum has E C = 6.1 ± 2.1 GeV, Γ = 2.2 ± 0.1, and gives integral photon and energy fluxes (from 0.1 to 100 GeV) of F = (7.2 ± 0.9) × 10 −8 cm −2 s −1 and G = (3.8 ± 0.3) × 10 −11 erg cm −2 s −1 , respectively, all uncertainties being statistical. PSR B1821−24 is a relatively faint source for the LAT and statistical uncertainties in these measurements dominate the systematic uncertainties; thus, we do not attempt to estimate systematic uncertainties on the best-fit parameters. The 2FGL catalog and Wu et al. (2013) have both reported flux values for point sources associated with PSR B1821−24 using the original Pass 7 data and the P7SOURCE V6 IRFs in the 1 to 100 GeV and 0.2 to 300 GeV energy ranges, respectively. Integrating our phase-averaged results over the same energy ranges yields higher values than reported by those authors, by on the order of 20%. These differences are larger than expected from switching to P7REP data alone. We note that the disagreement with the 2FGL flux is at the 2σ level and is likely just statistical fluctuation, while the disagreement with Wu et al. (2013) is < 1σ. We repeated the analysis described in Wu et al. (2013) using similar time, energy, and angular selections and the original Pass 7 data; with the same 2FGL point sources free and fixed in our model of the region; and with the same diffuse components. However, we found values more consistent with results from our analysis, described previously. Additionally, our re-analysis only found a TS of 248 for a point source at the position of PSR B1821−24, much less than the value of 825 reported by Wu et al. (2013). We note that Nolan et al. (2012) reported a significance of ∼ 11σ for 2FGL J1824.8−2449, using two years of data, which corresponds to a TS of ∼144. Extrapolating to ∼42 months we expect a TS of ∼200, for a non-variable source, which agrees with our re-analysis when accounting for differences in event selection. While the differences in Γ and E C may be related to the choice of minimum energy and differences in the diffuse model, the disagreement between the TS values is not understood. Using the initial phase-averaged results, we were able to detect significant pulsations (> 5σ, see Section 4.2) from PSR B1821−24, the gamma-ray light curve is characterized by two peaks at phases of ∼0.0 and ∼0.5, similar to the results of Wu et al. (2013). However, there was a clear offset above the estimated background level observed in the gamma-ray light curve. While it is possible that PSR B1821−24 has a near 100% duty cycle (as seems to be the case for PSR J1836+5925, Abdo et al. 2010c) we performed an analysis of the off-peak phase interval (defined to be φ ∈ (0.24, 0.34) ∪ (0.58, 0.82)) to study the emission in more detail. 
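The two spectral shapes referred to above as Eq. 1 and Eq. 2 are, in their conventional LAT forms, a simple power law and an exponentially cutoff power law. The sketch below assumes those conventional parametrizations; the normalization N0 and pivot energy E0 are placeholder values (not fitted quantities from this analysis), while Gamma and E_cut use the best-fit values quoted in the text.

```python
# Sketch of the conventional forms of Eq. 1 (power law) and Eq. 2 (exponentially
# cutoff power law). N0 and E0 are placeholders; Gamma and E_cut follow the text.
import numpy as np

def power_law(E, N0, E0, Gamma):
    """dN/dE = N0 (E/E0)^-Gamma"""
    return N0 * (E / E0) ** (-Gamma)

def plec(E, N0, E0, Gamma, E_cut):
    """dN/dE = N0 (E/E0)^-Gamma * exp(-E/E_cut)"""
    return N0 * (E / E0) ** (-Gamma) * np.exp(-E / E_cut)

E = np.logspace(2, 5, 300)                      # 0.1-100 GeV, in MeV
pl  = power_law(E, N0=1e-11, E0=1e3, Gamma=2.2)
cut = plec(E, N0=1e-11, E0=1e3, Gamma=2.2, E_cut=6.1e3)
# Above the ~6.1 GeV cutoff the two shapes diverge rapidly:
print(cut[-1] / pl[-1])                         # << 1
```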
We first attempted to ascertain if this emission could be attributed to any of the other known pulsars in M28 10 (11 MSPs and one young, non-recycled pulsar, Bogdanov et al. 2011, and Bégin et al. in preparation). At a distance of 5.1 kpc, it is possible that the combined emission from these and any other unknown pulsars, less that of PSR B1821−24, may account for the observed off-peak emission. We obtained timing solutions for PSRs J1824−2452B-L (detailed in Bégin et al. in preparation) and searched for a periodic signal from each pulsar, at the spin and orbital periods, in the LAT data using event weights (a probability for each event to have originated from the source of interest based on the spectral and spatial model of the region, Kerr 2011) calculated from the initial phase-averaged analysis. We used both the full data set and the off-peak interval but found no signal with more than 2σ significance. Using the off-peak interval, the best-fit LAT position for this emission is right ascension (J2000) = 18:25:02.4, declination (J2000) = −24:43:48.0, with r 95 = 6 ′ . This position is 11 ′ 24 ′′ from the core of M28, nearly twice r 95 . All of the other known pulsars in M28 are within 18 ′′ of PSR B1821−24 except for J1824−2452F which is 2 ′ 45.6 ′′ away but still inconsistent with the off-peak emission (∼ 1.5 r 95 away). Our model of the region includes only one other point source within 1. • 5 of the timing position of PSR B1821−24. This source has an integral flux, from 0.1 to 100 GeV, of ∼ 0.9 × 10 −8 cm −2 s −1 and a photon index of ∼ 2. There is one additional source within 3 • of PSR B1821−24 with an integral flux, from 0.1 to 100 GeV, of ∼ 4.5 × 10 −8 cm −2 s −1 and a photon index of ∼ 2.5. All other sources are > 3. • 5 from PSR B1821−24. Therefore, the localization of the off-peak emission should not be strongly affected by known nearby sources. To verify the gtfindsrc position, we built TS maps in the off-peak interval with different minimum energies and a 3 • × 3 • region centered on the pulsar (using the Fermi Science Tool gttsmap in binned mode, see Figure 3). These maps are constructed by calculating the TS value of a hypothetical point source, with a power-law spectral model, at a grid of positions (constructed by dividing the region into pixels 0. • 1 on a side). While there may be some residual emission associated with M28, the peaks of the TS maps agree well with the best-fit position, except for the TS map above 5 GeV for which we find no significant TS at any position. The ∆TS contours of the 0.1 to 100 GeV TS map agree well with the off-peak r 95 from gtfindsrc. Spectral analysis of the off-peak emission shows no evidence for a cutoff in the spectrum; a power-law fit yields Γ = 2.5 ± 0.1 with F = (6.7 ± 1.1) × 10 −8 cm −2 s −1 and G = (3.0 ± 0.2) × 10 −11 erg cm −2 s −1 , where the flux values have been rescaled to the full phase interval. Within the LAT 95% confidence-level error circle of the off-peak emission, we found no cataloged NVSS (Condon et al. 1998) radio or RASS (Voges et al. 2000) X-ray sources down to the typical flux limits of ∼ 2.5 mJy (1.4 GHz) and ∼ 3 × 10 −13 erg cm −2 s −1 (0.1 − 2.4 keV) of the respective surveys. The lack of a bright radio / X-ray source, combined with the steep LAT gammaray spectrum makes a background blazar counterpart unlikely (see Abdo et al. 2010d). The Sun does pass close to M28 and is a significant and persistent source of HE gamma rays (Abdo et al. 
2011b); however, the off-peak emission is at an ecliptic latitude of approximately −1. • 4, which is sufficiently offset from the ecliptic plane to rule out an association with the Sun. As can be seen in Figure 3, the error circle is still consistent with the tidal radius of M28 (11.27 ′ , Trager et al. 1995;Chun et al. 2012) so we cannot completely rule out an association with the cluster, but the interpretation of this emission as the combination of unresolved pulsars is uncertain unless there is a systematic shift in the best-fit localization. PSR J1824−2452F is several core radii away from the center of M28, providing some evidence for the possibility of pulsar ejection from the center of the globular cluster. Therefore, it is possible that the off-peak emission is an energetic pulsar that has been ejected from M28. However, the lack of spectral curvature in the off-peak emission (cutoff only preferred at the 1.5σ level) might argue against such an interpretation. Under the hypothesis that the off-peak emission described above is not associated with M28, we performed a spectral analysis in the off-peak interval with a source at the position found previously (not consistent with the cluster) and with a source at the position of M28. The M28 source is found with a TS of 0.05, which is not significant. Therefore, we calculated 95% confidence-level upper limits on the integral photon and energy fluxes from the direction of M28 in the off-peak interval of F ≤ 6.3 × 10 −9 cm −2 s −1 and G ≤ 7.0 × 10 −12 erg cm −2 s −1 , assuming a power-law spectral model with Γ = 2. We find no evidence for significant flux variability in the off-peak emission but do note a possible slow rise in the flux on 6-month to 1-year timescales. We repeated the phase-averaged analysis with the off-peak source included in the model, at the best-fit position and with all spectral parameters fixed. We find a point source at the position of PSR B1821−24 with TS = 76. A simple power-law model is rejected in favor of an exponentially-cutoff power-law model at the 3.9σ level. The best-fit spectrum yields E C = 3.3 ± 1.5 GeV, Γ = 1.6 ± 0.3, and integral fluxes of F = (1.5 ± 0.6) × 10 −8 cm −2 s −1 and G = (1.3 ± 0.2) × 10 −11 erg cm −2 s −1 . Given the disagreement between the location of the off-peak emission and the timing position of PSR B1821−24, we consider these values, rather than those from the initial phase-averaged analysis, to best represent the spectrum of the pulsar. The gamma-ray spectrum of PSR B1821−24 is shown in Figure 4. The flux points are derived from fits to the indicated energy bands in which the spectrum of PSR B1821−24 was modeled as a power law with Γ fixed to 2. The center of each bin is the weighted average energy using the spectral shape of the full energy range fit as the weights. This leads to the center energies moving closer to the low side of each bin with increasing energy since the pulsar is modeled with a cutoff in the full energy range fit. We required the source to be detected with a TS of at least 9 (∼ 3σ for 1 degree of freedom) or else a 95% confidence-level upper limit on the flux was calculated. -Phase-averaged gamma-ray spectrum of PSR B1821−24 with the off-peak source included in the model. The black line shows the best-fit model from the likelihood fit over the full energy range; dashed lines show the 1σ confidence region. 
The pulsar was assumed to have a power-law spectrum in each energy band and was required to be found with a TS of at least 9, or else a 95% confidence-level upper limit was calculated.

PULSATIONS

We selected events with reconstructed directions within 2° of PSR B1821−24 and used our best-fit, phase-averaged spectral model, with the off-peak source included in the model, to calculate a probability for each event to be associated with PSR B1821−24. Events triggering the LAT are time stamped using an on-board GPS receiver that is accurate to within <1 µs relative to UTC (Abdo et al. 2009b). We then folded the events at the radio period using the fermi Tempo2 plugin (Ray et al. 2011) and calculated the spectrally weighted H-test significance (Kerr 2011), resulting in a 5.4σ pulsed detection. The light curves of PSR B1821−24 at different wavelengths are shown in Figure 5. The uncertainties for each bin of the gamma-ray light curve and the background level are calculated as described in Guillemot et al. (2012). This confirms the periodic signal candidate reported by Wu et al. (2013) and firmly establishes PSR B1821−24 as a gamma-ray pulsar. We used photon-weighted maximum likelihood (Abdo et al. 2013) to fit parametric functions (light curves) to the LAT and RXTE data. The gamma-ray light curves were fit using an unbinned analysis. The X-ray event phases were binned into 1000 bins, yielding time resolution comparable to that of the radio ephemeris. For a set of event phases and weights (φ_i and w_i), this likelihood is given by log L = Σ_i log[ w_i f(φ_i; ψ) + (1 − w_i) ], where f is the assumed functional form with parameters ψ. We fit each peak of the gamma-ray and X-ray data with a symmetric Gaussian shape, because asymmetric peaks were not significantly preferred by the likelihood, and report the best-fit values in Table 2. We considered Lorentzian shapes for each peak but found comparable likelihood values and, thus, report only results of the Gaussian fits. The weights for gamma-ray events are from the phase-averaged spectral fit, while we set w_i = 1 for RXTE data. For the X-ray and gamma-ray light curves we identify peaks 1 and 2 in the order they appear in phase (as labeled in Figure 5). Using these fits, the first gamma-ray peak spans the phase range φ ∈ [0.0, 0.23] ∪ [0.87, 1.0) and the second spans the phase range φ ∈ [0.36, 0.56], where the quoted ranges correspond to the peak positions plus and minus twice the best-fit widths. Romani & Johnston (2001) and Knight et al. (2006) reported that the first X-ray peak was consistent with the phase at which giant pulses were observed in the radio (∼0.02 in phase after the first radio peak). While the phases of the first X-ray and gamma-ray peaks are not consistent with 0.02 within uncertainties, we note that 0.02 is only an estimate and thus confirm that the first X-ray peak, and now the first gamma-ray peak, are consistent with the phase of giant pulses. Knight et al. (2006) also observed a single giant pulse occurring 0.55 in phase after the bulk of the giant pulses, which they contend represents a second population of giant pulses from PSR B1821−24 based on the fact that this pulse had 21 times the mean pulse energy and that Romani & Johnston (2001) detected pulses at similar phase. With our phase convention, this corresponds to phase 0.57, which is consistent with the phase of the second X-ray peak. Given the very large spin-down luminosity of PSR B1821−24, Venter (2008) proposed this MSP as a potential very-high-energy target for H.E.S.S. (see also Frackowiak & Rudak 2005).
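A minimal sketch of the photon-weighted light-curve likelihood described above is given here, assuming a wrapped two-Gaussian pulse shape on top of a flat unpulsed component. The peak parameters, toy phases, and weights are illustrative stand-ins, not the fitted values reported in Table 2 or the actual LAT event list.

```python
# Sketch of the photon-weighted light-curve likelihood,
# log L = sum_i log[ w_i f(phi_i; psi) + (1 - w_i) ],
# with an assumed two-Gaussian-plus-flat pulse shape f. All inputs are toys.
import numpy as np

def wrapped_gaussian(phi, mu, sigma, n_wrap=3):
    """Gaussian in pulse phase, wrapped onto [0, 1)."""
    k = np.arange(-n_wrap, n_wrap + 1)
    return np.exp(-0.5 * ((phi[:, None] - mu + k) / sigma) ** 2).sum(axis=1) \
        / (sigma * np.sqrt(2 * np.pi))

def log_likelihood(params, phi, w):
    mu1, sig1, mu2, sig2, n1, n2 = params
    f = (n1 * wrapped_gaussian(phi, mu1, sig1)
         + n2 * wrapped_gaussian(phi, mu2, sig2)
         + (1.0 - n1 - n2))                 # flat component of the pulsed model
    return np.sum(np.log(w * f + (1.0 - w)))

# Toy data: 500 random phases and weights standing in for LAT events.
rng = np.random.default_rng(1)
phi = rng.uniform(0, 1, 500)
w = rng.uniform(0, 1, 500)
print(log_likelihood((0.0, 0.03, 0.55, 0.04, 0.2, 0.15), phi, w))
```

In a real fit the parameters would be varied to maximize this quantity (e.g., with scipy.optimize), and the comparison of maximized likelihoods is what the text uses to choose between Gaussian and Lorentzian peak shapes.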
The expected spectrum was very geometry-dependent, but some flux above 100 GeV would have been expected in a screened polar cap model for an optimistic geometry. The measured E C and the gamma-ray light curve shape presented in Figure 5 disfavor this model for PSR B1821−24. Note. -Peak positions are given by Φ 1 and Φ 2 with widths σ 1 and σ 2 (standard deviations) for the first and second peaks, respectively. All peaks are fit with Gaussians. The last row reports the phase separation (∆) between the first and second peaks in each waveband. MULTI-WAVELENGTH LIGHT CURVES The relative phasing of the multi-wavelength light curve components in Figure 5 presents a challenge to pulsar emission models. Our preliminary attempts to explain the gamma-ray and radio light curves of PSR B1821−24 using geometric models yielded the following general conclusions. It is extremely difficult, if at all possible, to obtain three radio peaks of the correct shape and position in phase by invoking only a single radio cone per magnetic pole (e.g., Story et al. 2007). If instead one attempts to model the first and third radio peaks as originating from opposite magnetic poles, an interpretation supported by the 0.35 GHz profile, the chosen value of the observer angle (ζ) has to be within ∼ 4 • of 90 • with a magnetic inclination angle (χ) between 40 • (required so that both P1 and P3 would be visible) and 60 • (to provide the correct radio peak multiplicity). This geometry results in the correct radio phase separation but cannot produce the correct gamma-ray peak positions (and shapes in some cases) when using standard, geometric realizations of outermagnetospheric emission models (e.g., Cheng et al. 1986;Dyks & Rudak 2003). Stated in a different way, one may find reasonable gamma-ray profile fits (e.g., at χ = 40 • and ζ = 85 • , although the peak separation is somewhat small and we have to choose a different fiducial phase), but then the radio peak multiplicity and / or peak positions are not correct. There is therefore a tension between the gamma-ray and radio profiles in terms of the most preferred fit. It is also possible to model the first two radio peaks using a radio cone above a single pole. This interpretation would be consistent with polarization measurements indicating high linear and low circular polarization as well as a nearly constant position angle in these peaks (indicative of non-caustic, conal emission, Backer & Sallmen 1997;Stairs et al. 1999). The third peak may arise from the opposite pole. However, this is problematic when using the standard prescription for radio emission height (e.g., Kijak & Gil 2003;Story et al. 2007). The maximum peak separation for the radio P1 and P2 is obtained when χ ∼ ζ (i.e., a small impact angle), and matching the observed peak separation requires χ and ζ to be 25 • , which does not reproduce the observed gamma-ray profile well and predicts roughly symmetric radio peaks, contrary to the data. On the other hand, choosing a large χ and ζ to more closely match the gamma-ray profile leads to too small a radio peak separation. Backer & Sallmen (1997) attempted to fit the polarization position-angle swing of PSR B1821−24 under this assumption but were unable to match the gradient across P1. Assuming that P1 and P3 were from opposite poles and P2 was a distant conal component from the same pole as P1, Backer & Sallmen (1997) found a reasonable fit to the polarization position-angle swing of PSR B1821−24 with χ = 50 • and ζ = 90 • . 
Such a solution gives the correct phasing for P1 and P3, but cannot reproduce the radio or gamma-ray profile shapes in the context of the above emission models. Alternatively, Venter et al. (2012) predicted that this pulsar may plausibly have (some) aligned gamma-ray, X-ray, and radio peaks based on the near alignment of the first X-ray and radio peaks. In fact, a subset of gamma-ray MSPs exists in which the radio and gamma-ray peaks occur at nearly the same phase (Abdo et al. 2010b; Freire et al. 2011; Guillemot et al. 2012; Espinoza et al. 2013); however, while the first radio and gamma-ray peaks are nearly aligned and the second gamma-ray peak is nearly aligned with the third radio peak, no gamma-ray feature matches the second radio peak, which is not visible at 0.35 GHz. In this sense, PSR B1821−24 is similar to PSR B1957+20, for which the two peaks in the 0.3 GHz pulse profile both have counterparts in the gamma-ray light curve but the additional component at 1.4 GHz, which occurs between the two lower-frequency peaks, does not (first noted by Espinoza et al. 2013). When comparing to the 0.8 GHz radio profile presented by Rots et al. (1998), we note that this peak is less prominent at lower frequency. The radio spectral indices of MSPs with aligned radio and gamma-ray peaks tend to be softer than those of other gamma-ray MSPs (Espinoza et al. 2013); with a spectral index of ∼ −2.4 (Lyne et al. 1987), PSR B1821−24 could plausibly belong to this subset of gamma-ray MSPs. A possible explanation for the near alignment of the first gamma-ray, X-ray, and radio peaks and of the second gamma-ray peak with the radio P3 is that they are all caustic peaks formed in the outer magnetosphere due to relativistic effects. Backer & Sallmen (1997) discussed such a model for the radio emission, assuming that P2 was a polar cap beam while P1 and P3 came from the outer-gap region. In such a model, assuming co-located emission regions (Venter et al. 2012), the small phase differences of the first peaks in all wavebands may be reproduced by invoking slightly offset emission altitude ranges (constrained by the peak shapes). The phase difference between the second gamma-ray peak and third radio peak may be similarly explained. In this case, then, the radio P2 could come from nearer the polar cap, since it occurs at the phase expected for one of the magnetic poles. It is not clear if shifted altitude ranges could explain the larger offset between the second gamma-ray and X-ray peaks. Also, it would be difficult to model both the gamma-ray peaks and the radio P1 and P2 using altitude-limited models, given the relative phase lags between these peaks. For a low-altitude geometry, the position of the second radio peak may indeed be reproduced, but then it is very difficult to reproduce the actual position of the first radio peak, given the fact that the radio emitting region cannot be too extended, or it would yield peaks that are much too broad. A caustic origin in the outer magnetosphere for the non-thermal X-ray emission could also plausibly explain both the pulsed and unpulsed components, as noted by Bogdanov et al. (2011). Modeling the actual pulse shapes across all wavebands will be difficult, and this scenario may be in conflict with expectations from the polarization data (aligned MSPs typically have no observed radio polarization; Venter et al. 2012; Espinoza et al. 2013).
Clearly, understanding the nature of the multi-wavelength light curves of PSR B1821−24 will require moving beyond the standard assumptions (e.g., fine tuning the azimuthal dependence of the emissivity of high-altitude caustic radio emission) about radio and gamma-ray emission geometries. LUMINOSITY The gamma-ray luminosity of PSR B1821−24 can be calculated as L_γ = 4π f_Ω G d², where G is the phase-averaged energy flux, d is the distance, and f_Ω is a geometric correction factor accounting for the fact that the pulsar emission is not isotropic; f_Ω is typically ∼1 for outer-magnetospheric emission models (Watters et al. 2009; Venter et al. 2009). Using this formula and the results of the phase-averaged analysis with the additional off-peak source, we calculate L_γ/f_Ω = (4.0 ± 1.0) × 10³⁴ erg s⁻¹. Assuming f_Ω = 1, we calculate the efficiency with which rotational energy is turned into HE gamma rays to be η_γ ≡ L_γ/Ė = 0.018 ± 0.005. Foster et al. (1988) noted that the period of PSR B1821−24 is nearly a factor of 2 smaller than the theoretical minimum assuming a mass of 1.4 M⊙ and accretion at the Eddington limit. The minimum period they derive depends on the pulsar's surface magnetic field (which is derived from Ṗ), mass, and radius (e.g., Alpar et al. 1982; Verbunt et al. 1987), as well as on models of accretion by neutron stars (e.g., van den Heuvel 1977; Ghosh & Lamb 1979), which could be uncertain by 50%. This discrepancy may imply either a more massive neutron star, super-Eddington accretion, or that the observed Ṗ is artificially increased by the gravitational acceleration field in the cluster along our line of sight, as given by Eq. 3, where a_l is the line-of-sight acceleration: (Ṗ/P)_obs = (Ṗ/P)_int + a_l/c. (3) The latter explanation was deemed unlikely by Foster et al. (1988), and Phinney (1993) showed that the maximum |a_l| for M28 was 9 × 10⁻⁹ m s⁻², which suggests that ≤6.6% of the observed Ṗ is not intrinsic. Using Eq. 6 in the appendix of Freire et al. (2005), the central velocity dispersion parameters from the Harris catalog (Harris 1996), and the distance of M28, we find a slightly higher maximum |a_l| of 2 × 10⁻⁸ m s⁻². However, this still suggests that, at most, only 14% of the observed Ṗ of PSR B1821−24 is not intrinsic. We can use η_γ to assess the need for any line-of-sight acceleration contribution to Ṗ_obs. The average η_γ for pulsars with Ė ∈ [0.4, 4] × 10³⁶ erg s⁻¹ in the second LAT catalog of gamma-ray pulsars, excluding those pulsars with no distance estimate or with distance uncertainties leading to systematic uncertainties on η_γ of more than 50%, is 0.116 with a large spread (RMS = 0.090) (Abdo et al. 2013). While the value of η_γ we calculate is somewhat below the average, it is not uncommon in this Ė range; in particular, out of the 16 pulsars we use for this average, 4 (25%) have η_γ < 0.02. Thus, we see no strong indication from η_γ that the measured Ṗ is significantly enhanced by the cluster potential, supporting the findings of Phinney (1993) that the observed Ṗ of PSR B1821−24 is nearly 100% intrinsic. This differs from the conclusion of Wu et al. (2013), but we note that they compared results for PSR B1821−24 to those of MSPs in the Galactic field that have significantly lower values of Ė and thus are not expected to have similar efficiencies.
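As a quick numerical cross-check of the acceleration argument, the snippet below evaluates the maximum fractional Doppler contribution to Ṗ via Eq. 3. The spin period and observed Ṗ are not quoted in this excerpt, so the values used are assumed (the commonly cited ones for PSR B1821−24), and the result should only be read as an order-of-magnitude check of the ≤14% statement.

```python
C = 2.998e8  # speed of light [m/s]

P = 3.054e-3         # assumed spin period of PSR B1821-24 [s]
Pdot_obs = 1.62e-18  # assumed observed period derivative [s/s]
a_l_max = 2e-8       # maximum line-of-sight acceleration, Freire et al. (2005) [m/s^2]

# Eq. 3: (Pdot/P)_obs = (Pdot/P)_int + a_l/c, so the maximum
# non-intrinsic contribution to Pdot is (a_l/c) * P.
Pdot_ext_max = (a_l_max / C) * P
print(f"max non-intrinsic fraction: {Pdot_ext_max / Pdot_obs:.1%}")
# -> roughly 13%, consistent with the <=14% quoted in the text
```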
MSP POPULATION IN M28 Assuming that the off-peak emission discussed in Section 4.1 is in fact from other pulsars in M28, despite the positional offset, and following the prescription of Abdo et al. (2010a), we can estimate the number of energetic MSPs in M28 as N_MSP = L_γ,off / (η_γ,MSP ⟨Ė⟩) (Eq. 4). Using the off-peak luminosity, L_γ,off = (9.4 ± 2.0) × 10³⁴ erg s⁻¹; the average Ė of MSPs in globular clusters, ⟨Ė⟩ = (1.8 ± 0.7) × 10³⁴ erg s⁻¹ (Abdo et al. 2009a); and the average MSP gamma-ray efficiency, η_γ,MSP = 0.245, calculated from Abdo et al. (2013) excluding 10 MSPs for which the distance uncertainties lead to systematic uncertainties on η_γ greater than 50% and one with an unrealistic η_γ > 1, we calculate N_MSP = 20 ± 9 for M28, not counting PSR B1821−24. We note that this value is highly dependent on the value of η_γ,MSP chosen, and thus the systematic uncertainty of this estimate is greater than the statistical value we quote. If the off-peak emission is in fact not associated with M28, we can use the upper limit calculated at the cluster position in the off-peak interval to limit N_MSP ≤ 5, not including PSR B1821−24. This is less than the number of pulsars known in M28, but is also highly dependent on the value of η_γ,MSP used, as noted previously. Therefore, from this upper limit we can say only that there is no strong evidence for many pulsars in M28 beyond those already known. We can make another estimate of the gamma-ray flux contributed by the other pulsars in M28 if we statistically correct the Ṗ of the other known pulsars in M28 for the effect of a_l. While we do not know a_l for the individual pulsars, we can estimate the maximum acceleration at the projected distance from the cluster core and compute the probability distribution of a_l following Phinney (1993). Using the known projected distances of each object, this gives us a probability distribution for the intrinsic spin-down rate of each pulsar, solving Eq. 3 for Ṗ, and hence for the intrinsic spin-down luminosity. Using a King-type cluster model with pulsar density n_PSR ∝ r^(−3/2), a simple L_γ ∝ Ė efficiency law, and assuming the off-peak emission is associated with M28, we estimate that PSR B1821−24 should contribute 0.33 ± 0.05 of the combined gamma-ray energy flux of the 12 known pulsars in M28. This agrees well with the observed ratio of the phase-averaged energy fluxes with and without the additional off-peak source of 0.34 ± 0.06. This analysis suggests that the other known pulsars in M28 easily provide enough luminosity to account for the off-peak emission. In turn, this implies that the number of energetic pulsars in M28 may not be much larger than 12 and that MSP radio beams cover a large fraction of the sky, comparable to that of the gamma-ray beams. It also suggests that the next brightest pulsar (likely C, I, or K) could provide as much as ∼1/4 the gamma-ray flux of PSR B1821−24. The high incidence (5/12, after correcting for a_l) of Ė > 10³⁵ erg s⁻¹ MSPs in M28 implies that not so many unknown pulsars need to contribute to the unpulsed flux, unless they are much fainter in gamma rays than PSR B1821−24. Though lower, this estimate does agree with the value of N_MSP = 20 ± 9 MSPs using Eq. 4. Our first estimate relies on comparison with the average η_γ of nearby field MSPs with typical Ė ∼ 10³⁴ erg s⁻¹, while this last estimate relies on the simple L_γ ∝ Ė scaling. It is likely that the true pulsar efficiency at very low Ė departs from this law (e.g., Harding et al. 2002; Zhang et al. 2004; Takata et al. 2010).
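A one-line arithmetic check of Eq. 4 with the numbers quoted above (a minimal sketch; the quoted uncertainties are not propagated):

```python
L_off = 9.4e34      # off-peak luminosity [erg/s]
Edot_avg = 1.8e34   # average spin-down luminosity of globular cluster MSPs [erg/s]
eta_avg = 0.245     # average MSP gamma-ray efficiency

N_msp = L_off / (eta_avg * Edot_avg)
print(round(N_msp))  # -> 21, consistent with N_MSP = 20 +/- 9
```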
While our analysis indicates that magnetospheric emission from the other known pulsars in M28 can plausibly account for the off-peak emission, eight of these pulsars are in binary systems, two are observed to eclipse, and three are estimated to have low-mass (≲0.02 M⊙) companions. Shocked emission from interactions between the pulsar wind and the companion stars in these systems may contribute to the emission observed by the LAT (Harding & Gaisser 1990; Takata et al. 2012). The classic example of such emission is PSR B1259−63 (Abdo et al. 2011a), from which unpulsed GeV emission is only detected near periastron. However, searches for orbitally modulated emission from energetic gamma-ray MSPs have resulted in no firm detections (Guillemot et al. 2012; Pletsch et al. 2012), with the best evidence to date a 2.3σ indication of orbital modulation above 2.7 GeV from PSR B1957+20 (Wu et al. 2012) and a 2σ indication for PSR J0610−2100 above 3 GeV (Espinoza et al. 2013). Thus, any non-magnetospheric emission from the known energetic binary MSPs in M28 is not expected to be strong and should not affect our previous conclusions. However, we did fold the data at the orbital periods of the M28 pulsars in binary systems and found no significant signal. CONCLUSIONS PSR B1821−24 is the second MSP located in a globular cluster from which significant gamma-ray pulsations have been detected. Similar to PSR J1823−3021A, the derived efficiency of PSR B1821−24 supports previous assertions that the observed Ṗ is largely intrinsic, providing further evidence that this is an unusually energetic MSP. This is further highlighted by other properties of PSR B1821−24 (such as the giant radio pulses and HE emission) that are generally observed in young, very energetic, and fast-spinning pulsars. PSR B1821−24 and PSR J1823−3021A have Ṗ values ∼100 times larger than typical of other MSPs with comparable spin periods, which implies that their lives as MSPs will be ∼100 times shorter: a few tens of millions of years. This means that these pulsars must be forming at a rate comparable to that of other MSPs in globular clusters, which are ∼100 times more numerous but also ∼100 times longer lived. It is not clear whether these energetic MSPs formed by the same processes that formed the more normal MSPs, or by some alternative process (e.g., Ivanova et al. 2008). If the formation process is the same, then they do not represent a separate population and are part of the same continuum. This would indicate that the 'normal' formation mechanism is able to produce MSPs with a wider range of magnetic fields than is typically assumed. This would also imply that such very energetic MSPs should be observed in the Galaxy outside of globular clusters. To date, the only such field MSP that might belong to this class is PSR B1937+21. If no pulsars like PSR B1821−24 and PSR J1823−3021A are found in the Galaxy, that would lend credence to the hypothesis that these two MSPs are part of a separate population that forms only in globular clusters or other environments with very high stellar density. Verbunt & Freire (submitted) note that all "young" pulsars in globular clusters are found only in clusters with a high rate of stellar encounters per binary, where there is a reasonable chance of X-ray binaries being disrupted during recycling. This may be one way to explain why both PSR B1821−24 and PSR J1823−3021A are isolated, without invoking alternate formation scenarios.
Only improved statistics, from new MSP discoveries in globular clusters and the Galactic field, will tell. The multi-wavelength light curves of PSR B1821−24 suggest a complex relationship between the different emission regions. The first gamma-ray and X-ray peaks (and possibly the second X-ray peak) are consistent with the phase of giant radio pulses. While the association of the off-peak emission with M28 is unclear, we find no strong evidence, in any case, that the population of energetic pulsars is much larger than the 12 pulsars already known. Multi-wavelength models of globular cluster spectra make different assumptions on the origin of the HE emission and create different expectations for the spectral shape. In the case where the HE emission results from the cumulative pulsed curvature radiation from MSPs, an additional unpulsed inverse-Compton component may dominate in the TeV band (e.g., Zajczyk et al. 2013; Kopp et al. submitted). This second component is expected to be much lower and would largely leave the curvature radiation signature unaffected, consistent with the observed spectrum that cuts off at several GeV and with the detection of gamma-ray pulsations from two globular cluster MSPs. Conversely, if the HE emission is due to inverse-Compton scattering, the spectral shape may mimic a curvature radiation spectrum in the GeV range for some parameter choices, sometimes also predicting TeV spectral components. This detection was enhanced by the use of LAT data that have been reprocessed with improved instrument calibration constants and demonstrates that, as the Fermi mission continues, improvements in the data reconstruction and analysis methods will continue to enhance LAT science. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Études Spatiales in France. The Nançay Radio Observatory is operated by the Paris Observatory, associated with the French Centre National de la Recherche Scientifique (CNRS).
Cytological Grading of Chronic Lymphocytic Thyroiditis and Correlation with Thyroid Profile Background: Chronic lymphocytic thyroiditis is a thyroid-specific autoimmune disease often seen in middle-aged women, although it rarely occurs in men and children. This disease is characterized by an antibody directed against thyroid peroxidase, called the antimicrosomal antibody. The present study was undertaken to evaluate the various cytological features occurring in HT and to correlate them with clinical and serological findings. Method: The study was conducted in the department of Pathology from May 2017 to August 2017. The cases diagnosed as HT by FNAC were taken up for the study. Cytomorphologic features were reviewed microscopically and graded as per Bhatia et al. Results: Fifty cases were diagnosed as lymphocytic thyroiditis. The age of the patients ranged from 7 to 56 years. Clinically, 41 of 50 cases (82%) presented with diffuse thyroid enlargement. In our study we had 31 cases (62%) of grade 2 thyroiditis, and 15 and 4 cases of grade 1 and grade 3, respectively. We observed increased TSH values in 100% of G3 thyroiditis and 64.5% of G2 thyroiditis. None of the Grade 1 thyroiditis cases had increased TSH levels. The statistical correlation of the grades of thyroiditis with T3, T4 and TSH levels was found to be significant, with p values < 0.05. Conclusion: FNAC is a simple, cost-effective and quick method for diagnosing HT. Combined evaluation of HT with clinical findings and the thyroid profile promotes more accurate diagnosis and early institution of therapy and follow-up. FNAC is also necessary to rule out malignant lesions like lymphoma and papillary carcinoma at the preliminary cytological level. Introduction Chronic lymphocytic thyroiditis, synonymous with Hashimoto's thyroiditis (HT), is a thyroid-specific autoimmune disease often seen in middle-aged women, although it rarely occurs in men and children 1. Clinically, HT presents as minimal to moderate diffuse enlargement of the thyroid gland. HT is considered the most common cause of hypothyroidism, although in the initial phase mild Hashitoxicosis is known 2. This disease is characterized by an antibody directed against thyroid peroxidase, called the antimicrosomal antibody 3, often seen in high titres. Other antibodies are specific for thyroglobulin, colloid antigens and the thyroid-stimulating hormone (TSH) receptor. Classical destruction of the thyroid by a lymphocytic infiltrate is a feature of HT, which is hence also called struma lymphomatosa. In cytology, HT falls under diagnostic category II (Benign) in The 2017 Bethesda System for Reporting Thyroid Cytopathology. FNA enables primary diagnosis of HT in most cases and also early diagnosis in some cases where serological changes are not yet seen 2. The present study was undertaken to evaluate the various cytological features occurring in HT and to correlate them with clinical and serological findings. Materials and Methods The study was conducted in the department of Pathology from May 2017 to August 2017. The cases diagnosed as HT by FNAC were taken up for the study. Clinical details were obtained from the patient case files. FNAC was performed by both aspiration and non-aspiration techniques using a 24-gauge needle. Slides stained with Leishman and H&E stains were evaluated. Cytomorphologic features were reviewed microscopically and graded into 3 grades as per Bhatia et al 5. Grade 1 thyroiditis shows few lymphoid cells infiltrating the follicles.
Mild to moderate lymphocytic infiltrate with Hurthle cell change is seen in grade 2 thyroiditis. Grade 3 thyroiditis shows a florid lymphocytic infiltrate with germinal center formation and scant follicular cells. Thyroid function tests were advised routinely for all patients prior to FNAC, and the recorded values of T3, T4 and TSH were noted. Cytological grades of thyroiditis were further correlated with the clinical findings and the thyroid hormone assay. Results Fifty cases were diagnosed as lymphocytic thyroiditis. The age of the patients ranged from 7 to 56 years, with a mean age of 36.3 years. The majority of the patients (32%) were in the age group of 40-49 years. Only one patient belonged to the age group of less than 10 years. Similar to literature studies, we also observed a female preponderance, with a female-to-male ratio of 24:1 4,5,6,7. Clinically, 41 of 50 cases (82%) presented with diffuse thyroid enlargement, while 9 (18%) cases exhibited nodular enlargement of the thyroid. One patient with grade 1 thyroiditis had an increased BMR and weight loss. Twenty-four out of 50 patients clinically presented with cold intolerance. On cytology, most of the smears belonged to Grade 2 thyroiditis and showed chiefly Hurthle cells in small clusters, singly and scattered, along with a polymorphic population of lymphoid cells, epithelioid cells and giant cells (Fig 3 & 4). Smears with few lymphocytes infiltrating the thyroid follicles and with occasional Askanazy cells, in a hemorrhagic background, were of grade 1 HT (Fig 1). Smears categorized as grade 3 exhibited a florid lymphocytic infiltrate with few follicular cells (Fig 5 & 6). In our study we had 31 cases (62%) of grade 2 thyroiditis, and 15 and 4 cases of grade 1 and grade 3, respectively (Table 1). TSH values were obtained in all cases. We observed increased TSH values in 100% of G3 thyroiditis and 64.5% of G2 thyroiditis. None of the Grade 1 thyroiditis cases had increased TSH levels. The statistical correlation of the grades of thyroiditis with T3, T4 and TSH levels was found to be significant, with p values < 0.05. Chi-square test values were also obtained (Table 4). Discussion Hashimoto's thyroiditis was first described by Hakaru Hashimoto in 1912 8. HT is caused by a breakdown of self-tolerance to thyroid autoantigens, causing activation of CD4 T helper cells. Recruited autoreactive B cells secrete a variety of circulating antibodies, such as antithyroglobulin and antithyroid peroxidase antibodies 6, which cause progressive depletion of thyrocytes, their replacement by a mononuclear cell infiltrate, and eventually organ destruction and fibrosis. Diagnosis of these lesions is necessary, as the patient subsequently becomes hypothyroid and needs lifelong thyroid supplementation. Long-term follow-up is needed, in view of the reported risk of transformation to extranodal B-cell lymphoma and thyroid carcinoma. HT is more prevalent between the ages of 45 and 65 years and is more common in women than in men, with a female-to-male ratio of 10:1 to 20:1. 6 In our study, the majority of the patients were seen in the 3rd and 4th decades, which is comparable to other studies (Table 2). Thyroid lesions are common in females owing to the effect of female gonadal hormones, mainly prolactin and estrogen, and of X chromosome inactivation on the gland. The immune system also contributes greatly to the development of thyroid goitre, nodules and cancer 9.
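The association between cytological grade and TSH reported above can be checked with a chi-square test on the corresponding contingency table, as done in the study. The sketch below is a minimal illustration: the counts are reconstructed from the percentages quoted in the Results (0/15, roughly 20/31, and 4/4 cases with elevated TSH in grades 1-3), so they may differ slightly from the authors' actual Table 4.

```python
from scipy.stats import chi2_contingency

# Rows: thyroiditis grade (1, 2, 3); columns: [elevated TSH, normal TSH].
# Counts reconstructed from the reported percentages (0%, 64.5%, 100%).
table = [
    [0, 15],   # Grade 1: 15 cases, none with elevated TSH
    [20, 11],  # Grade 2: 31 cases, ~64.5% with elevated TSH
    [4, 0],    # Grade 3: 4 cases, all with elevated TSH
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.5f}")  # p < 0.05, i.e., significant
```

Note that with such small expected counts in the Grade 3 row, the chi-square approximation is rough, which is one reason the reconstructed statistic should be read as illustrative only.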
Clinically, both lobes of the thyroid are usually affected, diffusely enlarged and firm, but asymmetry with localized nodular enlargement can occur 10. Grossly, the classic form is characterized by diffuse, symmetric, firm and rubbery enlargement of the thyroid 11. Hurthle cell neoplasm should be looked for in Hurthle cell-rich aspirates, but the characteristic features of monomorphic Hurthle cells with prominent nucleoli and abundant cytoplasm, together with the absence of a lymphocytic infiltrate, enable HT to be ruled out. Cytologically, large follicular cells in clusters with nuclear inclusions are also known to occur in HT, mimicking papillary carcinoma. However, the lack of the cytologic findings of papillae, frequent nuclear grooves, nuclear chromatin clearing, and thick, stringy colloid enables us to rule out the latter. 3 One case of grade 2 thyroiditis was diagnosed as suspicious for papillary carcinoma on cytological examination, in view of papillary clusters with occasional nuclear grooves seen in addition to a polymorphous population of lymphoid cells admixed with Hurthle cells. This patient presented with a nodule in the right lobe of the thyroid. A diagnosis of Hashimoto's thyroiditis with suspicion for papillary carcinoma, Bethesda category V, was made on cytology. Histopathology confirmed the cytological diagnosis of papillary carcinoma with associated HT. Conclusion FNAC is a simple, cost-effective and quick method for diagnosing HT. Destructive infiltration of thyroid follicles by lymphocytes on cytology remains the gold standard for diagnosing HT in most cases. Combined evaluation of HT with clinical findings and the thyroid profile promotes more accurate diagnosis and early institution of therapy and follow-up. FNAC is also necessary to rule out malignant lesions like lymphoma and papillary carcinoma at the preliminary cytological level.
Reductions of particular hypergeometric functions $_3F_2(a,a+1/3,a+2/3;p/3,q/3;\pm 1)$ We principally present reductions of certain generalized hypergeometric functions $_3F_2(\pm 1)$ in terms of products of elementary functions. Most of these results have been known for some time, but one of the methods, wherein we simultaneously solve for three alternating binomial sums, may be new. We obtain a functional equation holding for all three of this set of alternating binomial sums. Using successive derivatives, we show how related chains of $_3F_2(\pm 1)$ values may be obtained. It may be emphasized that we make no reliance on the WZ method for hypergeometric summation. Additional material on Pochhammer symbols and certain of their products is presented in an Appendix to supplement the pedagogical content of the paper. However, in the words of one of the surviving authors of [6], as to the original proof, "it is impossible to find the sources now" [4]. Herein we provide a detailed proof of this Proposition, making use of the properties of closely related alternating binomial sums. We avoid any use of, or reliance on, the WZ method for hypergeometric summation (e.g., [2], Section 3.11). There are several known transformations for functions $_3F_2(1)$, as illustrated in Appendix B. Therefore the left sides in Proposition 1 may be rewritten in terms of other $_3F_2(1)$ functions with altered parameters. Proposition 3 supplements the following expressions for $_3F_2(-1)$ [6] (p. 547), which we restate. In light of the proofs of Propositions 1 and 3, we forego giving a proof. The finite series special cases $f_{30}(-n/3)$ and $f_{31}(-n/3)$ for integer $n \geq 0$ occur in the online database OEIS as sequences A057681 and A057682, respectively. Thus, for these integer sequences, generating functions are readily available. Proof of Propositions We may note that the full alternating sum vanishes unless $a = 0$, in which case the sum is 1, this being a special case of the binomial summation. We may write this relation, by using the definition (1.1), in terms of the sums $f_{3j}(a)$. By using the recurrence of binomial coefficients $\binom{a+1}{\ell+1} = \binom{a}{\ell} + \binom{a}{\ell+1}$, we obtain the following relations: (2.4), which may be rewritten in terms of $f_{31}(a - 1/3)$, and (2.5), which expresses $_3F_2(a, a+1/3, a+2/3; 4/3, 5/3; 1)$ in terms of $f_{32}$. Evaluation of $f_{30}(a)$. We will evaluate this binomial sum as a case of the more general sum $f_{30}(a; z)$. We note the factorization (2.6), which in turn implies the corresponding identities for $\ell \equiv 1$ and $2 \pmod 3$. We then obtain the stated evaluation; similarly, for the generally nonterminating sum, with the same decomposition, we obtain $f_{30}(a; z)$, and hence, via identity (2.6) and the accompanying modulus relations, the reduction follows. An extension of Proposition 1 would be to consider the following $_3F_2(1)$ function, using another product of Pochhammer symbols coming from (A.1) together with (A.2). Here the last sum on the right side may be directly related to $f_{31}(a - 1/3)/(1 - 3a)$, as occurs in (2.4). By introducing another summation, the second summation on the right side may be written in terms of $f_{32}$, as appears in (2.5). However, using this approach, the first summation on the right requires several new summations. Via Proposition 3 we determine this $_3F_2(1)$ value. Proposition 5. The differential equation follows from the explicit expression for $u(a)$ given in Proposition 1. Corollary. Using the explicit expression for $u(a)$ from Proposition 1 and the relation for $\partial_b (b)_j$ given in Appendix A, the summation identity follows.
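The relation for $\partial_b (b)_j$ invoked in the Corollary is not shown in this excerpt; it is presumably the standard logarithmic-derivative identity $\partial_b (b)_j = (b)_j[\psi(b+j) - \psi(b)]$, with $\psi$ the digamma function (our assumption). A quick numerical spot check:

```python
from mpmath import rf, digamma, diff

b, j = 1.7, 5  # arbitrary test point; (b)_j is the Pochhammer (rising factorial) symbol
lhs = diff(lambda x: rf(x, j), b)               # d/db (b)_j, computed numerically
rhs = rf(b, j) * (digamma(b + j) - digamma(b))  # (b)_j [psi(b+j) - psi(b)]
print(lhs, rhs)  # the two values agree to working precision
```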
Remarks. In regard to Proposition 5, the second-order linear differential equation for $u(a)$ has positive constant coefficients. As such, it admits a ready physical interpretation as the equation of a damped harmonic oscillator, with damping proportional to $(\pi^2 + \ln^2 27)/4$ and spring constant proportional to $3 \ln 3$. One may also consider inserting integral representations for binomial coefficients into the summands of $f_{3j}(a)$. However, it appears difficult to ensure convergence of the resulting expressions with this approach. We freely make use of the relation $\binom{n}{k} = (-1)^k (-n)_k/k!$. Letting $\psi = \Gamma'/\Gamma$ denote the digamma function, we also have the corresponding digamma-based representations.
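The binomial-Pochhammer relation just quoted is easy to spot-check numerically; a minimal sketch using mpmath:

```python
from mpmath import binomial, rf, factorial

for n in range(8):
    for k in range(n + 1):
        lhs = binomial(n, k)
        rhs = (-1)**k * rf(-n, k) / factorial(k)  # (-n)_k is the rising factorial
        assert abs(lhs - rhs) < 1e-12
print("binomial(n, k) == (-1)^k (-n)_k / k! verified for 0 <= k <= n < 8")
```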
Territorializing International Travel Emissions: Geography and Magnitude of the Hidden Climate Footprint of Brussels In the present article we investigate the geography and magnitude of the climate footprint of long-distance travel with Brussels, Belgium, as a destination. The internationally networked position of this city goes hand in hand with a strong dependence on international mobility, which largely materializes in impressive volumes of long-distance travel and the associated consumption of important amounts of fossil fuel. Despite a surge in concerns about global warming, the climate footprint of most international travel, notably air travel, is not included in the official national and regional climate inventories; in other words, it is not territorialized. The official climate footprint of the Brussels-Capital Region attained 3.7 Mton CO2eq per year (in 2017). Based on our exploratory calculations, however, the total estimated climate footprint of all Brussels-bound international travel equalled an additional 2.7 Mton CO2eq. In terms of geographical distribution, over 70% of international travellers to Brussels come from Europe, while these represent only 15% of the climate footprint of all international travel to Brussels. We conclude that the practice of not allocating emissions caused by international travel to territorial units has kept the magnitude and complexity of this problem largely under the radar and contributes to the lack of societal support for curbing growth of international aviation. This invisibility follows from greenhouse gas inventory regulations that do not allocate such emissions to individual countries (Warnecke, Schneider, Day, La Hoz Theuer, & Fearnehough, 2019). The complexity of the climate issue, to which both embedded emissions in imported products and the contribution of long-distance travel are of great importance, is hardly recognized in governmental climate policy plans. Although an inventory of such plans is beyond the scope of this article, we quote here the official climate policy plan of our case study, the Brussels-Capital Region, in which neither of these themes is mentioned (Brussels-Capital Region, 2019). The current territorial approach to the allocation of climate footprints causes an important bias in the way the climate issue is viewed by the public and by policy makers. However, both emissions from international transport and those embedded in imported products are caused by consumers, citizens, and organizations that are established in specific, identifiable countries and regions. The emissions from international transport are not only absent from the climate inventories but also seem underexposed in the climate debate itself. In fact, the territorial focus of climate inventories ignores the internationalization of production chains and the structural shift towards service industries (tertiarization) of the economies of the most developed countries. Emissions are viewed as soil-bound affairs, while economic activities have increasingly become footloose. The shift from a manufacturing to a service economy means that emissions became detached from geolocalized production processes and shifted towards the geographically diffuse sector of long-distance transport. Reductions within national industrial production are clearly visible in the national climate inventories. However, increases in international travel associated with the rise of the service industry remain invisible in these inventories (Afionis, Sakai, Scott, Barrett, & Gouldson, 2017; Davis & Caldeira, 2010; Ottelin et al., 2019).
The Case of Brussels, Belgium: A Focal Point of the Travel-Climate Issue The aim of this article is to provide insight into the geography and magnitude of the climate footprint of the international attractiveness of a city with an important international position as a business and political centre, in relation to the official, territorialized climate footprint of this city. We will explore this issue for the case study of Brussels by taking a traditional bottom-up approach that estimates the climate footprint based on the distribution of transport modes used by travellers (Sun & Drakeman, 2020). The choice for Brussels was inspired by the role played by this city as a forum for international political decision-making, which includes European climate policy, while the city and the activities it hosts are an important generator of international travel and the related climate footprint (Van Parijs & Van Parys, 2010). In what follows, we consider the Brussels-Capital Region, which is one of the three administrative regions in Belgium (next to Flanders and Wallonia), home to 1.2 million residents, out of 11.5 million Belgians. We start with a look at the official climate footprint of Brussels, in relation to its geographical context. In 2017, according to the Belgian greenhouse gas inventory, the total climate footprint amounted to 114.5 Mton CO2eq (FPS Public Health, Food Chain Safety and Environment, 2019), of which only 3.7 Mton CO2eq (3.2%) was accounted for by the Brussels-Capital Region (Bruxelles Environnement, 2019). This remarkably modest contribution is even more noteworthy when we learn that in 2017 the Brussels-Capital Region not only housed 10.5% of the Belgian population, but even generated 17.8% of the Belgian gross domestic product. These figures are grist to the mill of those who claim that city dwellers, by definition, live more sustainably than suburban or rural dwellers, or as Banister (2008, p. 73) put it: "The city is the most sustainable urban form." Indeed, the official carbon intensity of the Brussels economy is around 5.5 times smaller than that of Belgium as a whole. However, just as Belgium is externalizing an important part of the emissions for which the Belgian economy is responsible to low-wage countries and to all sorts of foreign travel destinations, Brussels is externalizing an even larger part of its emissions to its hinterland, being an important consumer of food and industrial products, almost none of which are produced on its own territory. Also, no airports (Boussauw & Vanoutrive, 2019) or seaports are located within the modest area of the territory of Brussels, which means that even the climate intensity of travel by Brussels' residents, which may well be higher than the Belgian average (Czepkiewicz, Heinonen, & Ottelin, 2018), is invisible in any relevant databases. Mapping the actual climate footprint of the Brussels-Capital Region is beyond the scope of this article. Instead, we aim to understand the geography of the climate footprint of inbound international travel, and to identify any knowledge gaps that may prevent us from doing so in a comprehensive and reproducible manner. This concerns all international journeys with Brussels as a destination, regardless of the purpose of the trip (business, politics, science, education, tourism). In this way, we subscribe to an existing tradition of research into sustainable tourism (Gössling et al., 2005; Le & Nguyen, 2021; Sun, 2014), although we expand the scope from leisure to include business travel.
In that context, Peeters and Schouten (2006), for example, already investigated the ecological footprint of tourism to and in Amsterdam. A similar assessment was recently carried out for Barcelona (Rico et al., 2019). In both cases, the results show that the overwhelming majority of the climate footprint of tourist visits is attributable to travel to the destination, in particular to long-distance air travel. These studies take into account the climate footprint related to touristic activities in the destination (accommodation, leisure and professional activities, intra-urban transport). However, they measure the climate footprint of transport to the destination only roughly, distinguishing between large categories (e.g., short, medium, and long haul travel; or classifying trip origins merely by continent). In our case, we have sought to measure the climate footprint of travel from each country of origin. Such an approach, which considers at the same time the territory where the tourist activities take place (here Brussels) and the territories where the tourists come from, is still quite rare in the research field of the climate footprint of tourism (see Becken, 2002, for international passenger air travel to New Zealand; Dawson, Stewart Lemelin, & Scott, 2010, for polar bear viewing tourism in Churchill, Canada; El Hanandeh, 2013, for the pilgrimage to Mecca; Lenzen et al., 2018, for tourism-related global carbon flows between 160 countries; and Sharp et al., 2016, on Iceland). Finally, it is important to note that our bottom-up approach is only one possible option, prompted by our research question and the availability of data. By nature, this approach suffers from many limitations (Lenzen et al., 2018). In order to arrive at a more global picture of the climate footprint of international travel patterns, it might however make more sense to consider the resident as a statistical unit, rather than the visitor, as was argued by Larsson, Kamb, Nässén, and Åkerman (2018). Method Various bottom-up methods have been developed to assess the importance of the climate footprint of tourist trips to specific destinations, which usually and deliberately do not include outward trips made by residents of the city or region in question (e.g., Dwyer, Forsyth, Spurr, & Hoque, 2010; Peeters & Schouten, 2006; Rico et al., 2019). Other studies focus specifically on estimating the climate footprint of the residents of a certain area, such as Eijgelaar, Peeters, de Bruijn, and Dirven (2017) or Larsson et al. (2018). In what follows we will stick to the first of these two approaches. The studies referred to above combine data on the number and origin of international overnight visitors (or 'tourists' according to the definition of the World Tourism Organization (2010)) with modal split figures that vary according to their origin, trip lengths, and standardized emission rates per passenger kilometre. In this article, we will use the terms 'overnight visitor' and 'tourist' as synonyms. When making a distinction between overnight visitors or tourists who are on holiday or on a business trip, we will use the concepts of 'leisure' versus 'business.' The time frame of our study is the year 2018 and the unit of analysis is one round trip of inbound travel of one international passenger. Number and Origin of Overnight Visitors With respect to the number and the origins of overnight visitors, the quality of available data sets varies considerably between countries and even between cities.
Two key determinants are, first, the way in which the geographical basis of data collection is demarcated, and second, the tourist counting method that was applied. In the case of the Brussels-Capital Region, the statistical basis includes all officially registered tourist accommodation. This comprises around 180 hotel and hotel-like branches with a total capacity of 35,000 beds, 9 hostels offering around 1,400 beds, and around 100 other accommodations such as bed and breakfasts and tourist residences additionally offering about 500 beds. However, this statistical basis covers only part of the actual offer of commercial accommodation. According to Wayens et al. (2020), covering the year 2017, nearly 34,000 beds available on the Airbnb and Home Away platforms would be off the radar. Not taking into account this vast set of unregistered accommodation, which is more or less equivalent to the capacity in registered branches, will lead to underestimating tourist arrivals by around 30%. Furthermore, it should be borne in mind that these figures are still exclusive of informal accommodation offered by friends and family members, a phenomenon which is probably important in Brussels, taking into account the high proportion of foreign residents, particularly those originating from wealthy regions such as the European Union, North America, and Japan. According to a survey carried out in 2018-19 in the Brussels museums, one fifth of all international overnight visitors in Brussels were staying with friends or family members (Decroly & Tihon, 2019). Even though statistics of tourist accommodation in the Brussels-Capital Region are incomplete, they provide detailed data on international arrivals in officially registered accommodation. In these, for each guest or group of guests, staff members are required to collect information about the state of residence, the purpose of the stay, the day of departure, and the number of nights spent. The data are then transferred to Statistics Belgium, which produces detailed tables of the number of arrivals and overnight stays by purpose, for each country of residence. Residence is an important variable here, since it corresponds more frequently to the actual place of departure of the trip, compared to nationality (a variable that is more commonly collected than residence). Travel Modal Split According to Country of Origin Official statistics on tourist arrivals in Brussels do not contain information on the mode of transport used. Therefore, we complement these statistics with data from visitor surveys collected by the Art Cities Research project (Toerisme Vlaanderen, 2018). This survey was conducted between April 2017 and April 2018 among 1,400 people staying in Brussels for leisure purposes and includes travel mode choices by tourists from the nine most important sending countries visiting Brussels. At first glance, a surprising share, larger than or equal to 60%, of incoming trips by leisure tourists from Russia, China, Japan, and the United States appears to be overland travel (car and coach statistics cover ferry trips from the UK; Figure 1). This result is indicative of the way in which many international tourist trips materialize. A majority of intercontinental overnight visitors take advantage of the opportunity to visit multiple destinations, e.g., using the format of the low-cost coach tours that are offered by many non-European tour operators and have become popular, in particular among Chinese tourists (Arlt, 2013; Bui & Trupp, 2014; Xiang, 2013).
Independent multi-destination tours are also common practice among Japanese, Korean, or Chinese tourists (Pendzialek, 2016). Although less well documented, this phenomenon is probably common as well among individual overnight visitors from other distant markets, such as the United States, Canada, or Australia. But even if tourists from distant markets frequently visit Europe in the form of a tour, which mainly involves surface transport, the initial trip to Europe was mostly a flight. The Art Cities Research (Toerisme Vlaanderen, 2018) summary tables confirm that practically 100% of these incoming trips consist of air travel. This illustrates how difficult it is to determine the footprint of travel, which becomes even more problematic in attempts to allocate corresponding climate footprints to territorial units (such as the Brussels-Capital Region). It is not obvious whether we need to take into account the mode of transport used to get to Brussels, the one used to reach Europe, or both at the same time. Ideally, both would be combined, by distributing the emissions linked to transport to Europe across the various destinations visited, and by calculating the specific emissions that are associated with intra-European travel to Brussels. However, given the lack of data on intra-European tours by leisure tourists from distant markets, we cannot implement such a strategy. Instead, in line with the Art Cities Research summary tables, we assumed that all incoming trips of leisure overnight visitors to the Brussels-Capital Region originating from a remote location at 2,000 km or more were made by air. In the current article, we use the Art Cities Research data to estimate the distribution of international arrivals in Brussels by travel mode, according to the overnight visitors' origins. Although the data relate only to a limited number of origins, only cover leisure trips, and do not resolve the complicated question of the multi-destination tours in which tourists from distant markets take part, they offer the advantage that they represent real trips instead of modelled ones, as was done by Gunter and Wöber (2019), among other studies. However, Fiorello, Martino, Zani, Christidis, and Navajas-Cawood (2016) show that for equal trip lengths the modal split differs, depending on travel purpose. Statistics on international arrivals in Brussels distinguish between leisure and business trips, which urges us to correct the modal split of business trips, a category of travel that is not included in the Art Cities Research survey. Therefore, we apply data from the annual outbound trip survey conducted in Norway (Statistics Norway, 2019), which provides a breakdown of international trips made by residents into travel purpose and travel mode. Mode choice of business travellers from Norway is not necessarily representative, partly because air travel is more common in Norway than in the rest of Europe and most of the world. That is why we only consider these data as indicative with respect to the use of cars and coaches. Results show that business overnight visitors do not use coaches, and that they have a much lower propensity to use cars and a higher propensity to use airplanes and trains compared to leisure tourists.
On this basis, we assume that in the case of international business arrivals, the modal share of coaches would be systematically zero, that the share of car travel would be five times lower compared to leisure arrivals, and that the remaining trips would be shared between airplanes and trains in line with the distribution that was observed for leisure travel. In the case of Brussels-bound trips from France, for example, this leads to an increase in the share of plane travel from 10% to 20%, while train travel goes up from 35% to 70%, car travel is reduced from 50% to 10%, and coach travel from 4% to 0%. The modal split of arrivals from countries that were not included in the Art Cities Research survey was reconstructed as follows. In cases where the trip length was less than 1,500 km, we applied the modal split as observed in a country or (sub-national) region located at a comparable distance or in a similar spatial context. As an example, survey figures for Italy were equally applied to tourists from Croatia, figures for Piemonte to Austria, and for Ireland to Northern Ireland. For origins located at a distance between 1,500 and 2,000 km, we applied correction factors derived from a 2014 survey of tourists in the Netherlands, which was carried out by NBTC Holland Marketing (2015). The NBTC survey is rare of its kind, since it collects modal split data with respect to countries or country sets of origin. Correction factors were applied for business trips up to 2,000 km. For longer trips, we opted for a maximalist solution, assuming that all trips were made by airplane. Although this is one of the most accurate feasible approximations, it is still important to realize that the outlined method attributes the entirety of emissions associated with travel to Europe to the Brussels-Capital Region as a single destination. It is important to keep in mind that this choice causes an upward bias in the results, which could not be corrected for because of the lack of data on multi-destination tours. This is one of the reasons why we want to underline the exploratory nature of our study, and urge the reader to put the results obtained from our calculations in perspective. Also, it is important to bear in mind that the outlined method was only applied to estimate the modal split of tourist arrivals in Brussels in 2018. Estimating Distance between Origins and Destinations Distance calculation between countries and the centre of Brussels was based on centroid locations that were weighted by the geographical distribution of population, as computed by the Center for International Earth Science Information Network of Columbia University. Nevertheless, the distances obtained are still imperfect approximations of the actual distances travelled when arriving in Brussels. This approach not only treats all flights originating from a single country in the same manner, regardless of the (unknown) origin city or region (for example, no distinction is made between New York and Los Angeles in the United States), it is also based on the assumption that air travel always follows the shortest path (great-circle distance). Dobruszkes and Peeters (2019) show that the majority of commercial flights actually take longer routes, which on average adds 7.5% of distance. Therefore, we have corrected all 'shortest distances' between origins and destinations by means of the distance class-based coefficients as provided by Dobruszkes and Peeters (2019).
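The business-trip correction described above can be expressed as a small reallocation routine. The sketch below (our reconstruction, not the authors' code) reproduces the France example: the coach share is set to zero, the car share is divided by five, and the freed-up share is redistributed over air and rail in proportion to their leisure shares.

```python
def business_modal_split(leisure):
    """Derive business modal shares (%) from leisure shares, per the rules above."""
    car = leisure["car"] / 5.0
    coach = 0.0
    rest = 100.0 - car - coach          # share left for air and rail
    air_rail = leisure["air"] + leisure["train"]
    return {
        "air": rest * leisure["air"] / air_rail,
        "train": rest * leisure["train"] / air_rail,
        "car": car,
        "coach": coach,
    }

# Leisure modal split for France-to-Brussels trips, as quoted in the text.
france_leisure = {"air": 10, "train": 35, "car": 50, "coach": 4}
print(business_modal_split(france_leisure))
# -> {'air': 20.0, 'train': 70.0, 'car': 10.0, 'coach': 0.0}
```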
Climate Footprint per Passenger by Travel Mode We distinguished between modes of transport with respect to emission rates per passenger kilometre travelled. We started from the figures provided by Peeters, Szimba, and Duijnisveld (2007), a well-cited source that nonetheless needed a slight update with respect to the air and car travel data, which date back to 2004. Indeed, both modes mentioned have seen fleet renewal, which has led to lower emissions per passenger kilometre during operations. In the case of air transport, we have updated the rates ourselves, based on real air services at Brussels Airport (see Table 1 for a more detailed explanation). Depending on the distance, the obtained rates are 15 to 30% lower than those calculated back in 2004. With respect to car transport, we used the results of a recent study in Denmark (Christensen, 2016), which shows that emissions per passenger kilometre were 25% lower in 2015 compared to 2004. Updating was not necessary, however, for emissions from trains and buses, as the current figures are very close to those measured in 2004 (see, e.g., Prussi & Lonza, 2018, for trains; and DEFRA, 2020, for coaches). For overland motor vehicles, only CO2 emissions were calculated, given the limited contribution of other emissions to the climate footprint.

Notes to Table 1 (climate footprint per passenger kilometre, by class of distance and travel mode): For air travel, the distance between origin and destination was multiplied by a coefficient to take into account the existence of detours (i.e., longer itineraries than the great-circle distance). We used the coefficients computed by Dobruszkes and Peeters (2019): 1.143 for distances of less than 1,000 km, 1.073 for 1,000-4,000 km, and 1.048 for more than 4,000 km. For airplanes: own calculations based on CO2 emissions for all the flights to/from Brussels Airport in 2018. The data on the provision of regular air services at Brussels Airport were extracted from the 2018 OAG Schedules Analyser (OAG, 2018). For each flight, CO2 emissions were calculated using the Eurocontrol Small Emitters Tool (Eurocontrol, 2019). Based on the World Airline Rankings 2018 (Flightglobal, 2019), a seat occupancy rate of 80% was used to estimate the number of passengers for each flight. The calculated emission factors by class of distance (expressed in kg CO2 per pkm) are: 0.144 for distances of less than 500 km, 0.108 for 500-1,000 km, 0.090 for 1,000-1,500 km, 0.084 for 1,500-2,000 km, and 0.093 for more than 2,000 km. In a second stage, following the literature (DEFRA, 2020), the emission factors were multiplied by 1.9 to convert CO2 emissions into CO2eq ('climate footprint').

Given the importance of the radiative forcing (RF) effect, however, it would be unacceptable to maintain this simplification with regard to aviation. So, in order to estimate the total climate footprint of air travel, effects caused by non-CO2 forcing agents (nitrogen oxides [NOx], water vapour, soot and sulfate aerosols, contrail cirrus) were accounted for by applying a multiplier of 1.9 to the amount of CO2 emissions, a conversion factor that was derived from Lee et al. (2010) and is recommended by DEFRA (2020). This conversion factor is defined as the ratio between total CO2-warming-equivalent emissions from all forcing agents and those from CO2 alone, with a 100-year time horizon (Global Warming Potential or GWP100).
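Putting the pieces of this subsection together, the climate footprint of a single air round trip can be sketched as follows. This is our illustrative reconstruction, using the detour coefficients, emission factors, and 1.9 RF multiplier quoted above; whether the emission-factor class is chosen on the corrected or the great-circle distance is not specified in the text and is an assumption here.

```python
def air_trip_footprint_kg(distance_km: float) -> float:
    """Climate footprint (kg CO2eq) of one round trip by air, per passenger."""
    # Detour coefficients (Dobruszkes & Peeters, 2019), applied to great-circle distance.
    if distance_km < 1000:
        detour = 1.143
    elif distance_km < 4000:
        detour = 1.073
    else:
        detour = 1.048
    d = distance_km * detour

    # Emission factors in kg CO2 per passenger-km, by distance class (Table 1 notes).
    if d < 500:
        ef = 0.144
    elif d < 1000:
        ef = 0.108
    elif d < 1500:
        ef = 0.090
    elif d < 2000:
        ef = 0.084
    else:
        ef = 0.093

    RF_MULTIPLIER = 1.9  # converts aviation CO2 into CO2eq (DEFRA, 2020)
    return 2 * d * ef * RF_MULTIPLIER  # factor 2: inward plus outward leg

print(round(air_trip_footprint_kg(5900)))  # ~5,900 km: roughly New York-Brussels
```

Analogous per-trip footprints for trains, cars, and coaches follow the same pattern without the detour correction and RF multiplier, which is why air travel dominates the totals reported below.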
In a recent paper, Lee et al. (2020) have updated their estimates, based on new models of the RF effect of contrail cirrus. When using the same metric (GWP100), the conversion factor obtained is slightly lower (1.7 as opposed to 1.9). However, when using another metric that is assumed to better reflect warming potential under the current growth conditions of air travel, the conversion factor rises to 3.0. On this basis, it is concluded "that aviation emissions are currently warming the climate around three times faster than that associated with aviation CO2 emissions alone" (Lee et al., 2020, p. 8). Therefore, the climate footprint of aviation as an outcome of our analysis likely underestimates the impact of non-CO2 agents. However, given the persistent uncertainties about these impacts, it seems more cautious to use a conversion factor that has been recommended for several years than one that was only recently published. Besides, taking RF into account is the reason behind the deliberate use of the term 'climate footprint' in this article instead of the more common 'carbon footprint.' Table 1 provides more detail about the sources used and the calculation methods employed. In order to estimate the entirety of CO2 emissions linked to international tourist arrivals, we performed the calculation for each of the 247 countries from which overnight visitors arrive in Brussels. First, the number of arrivals was disaggregated by purpose and by travel mode, and for air travel additionally by distance class. Then, the results obtained per travel purpose and mode were added up and multiplied by two in order to account for both the inward and the outward trip, as we want to allocate the emissions of the entire journey to Brussels. Amount and Geography of International Arrivals In 2018, the Brussels-Capital Region registered around 2.9 million international arrivals in registered tourist accommodation. As such, Brussels represents an important, although not a major, urban destination in Europe. Its attractiveness remains modest not only compared to Paris (13.2 million international arrivals) and London (13.0 million), the two main poles of urban leisure and business travel in Europe, but also compared to cities that are well established as destinations for tourists from distant markets, both as city-trip destinations and as part of intra-European tours, whether visited individually or as part of a group (Rome, 9.6 million arrivals; Barcelona, 7.4 million; Amsterdam, 6.9 million; Prague, 6.7 million; Vienna, 6.3 million; Madrid, 5.2 million; Berlin, 4.9 million; Lisbon, 4.3 million; Venice, 4.3 million; Budapest, 3.8 million). Even Munich and Copenhagen, which are less well known as international tourist attractions, welcome more international overnight visitors than Brussels.
The map also highlights the significant volume of arrivals from Canada (32,000), India (27,000), and Australia (25,000). Given the important presence of international political bodies and the rather limited attractiveness of Brussels as a leisure destination, for decades the number of arrivals with a leisure purpose has been significantly lower than the number of business trips. Since the early 2000s, the ratio between both kinds of travel has gradually become more balanced. Currently, the overall shares are more or less equal, although the relative importance of the two purposes still depends on the origin (Figure 2). Looking at origin countries, business overnight visitors are generally overrepresented in Europe (except for Spain), the United States, the Arab-Persian Gulf countries, and Southeast Asia including Japan, while the reverse is true for arrivals from Latin America, Russia, India, China, Australia, and New Zealand.

In line with related research (e.g., Le & Nguyen, 2021; Wu, Liao, & Liu, 2019), we hypothesize that the geography of the origin of the flows of international tourists staying in Brussels results from the combined effects of distance, the economic and population-based potential for sending travellers in the origin countries, and local preferences in terms of destination choice behaviour. In an attempt to disentangle the influence of these different factors, we have broken down international arrivals by distance class (Table 2). The results show that the volume of flows decreases rapidly with distance: nearly half of the arrivals come from within a radius of less than 1,000 km from Brussels, a fifth from a radius of between 1,000 and 2,000 km, while barely 2.5% originate from countries located at a distance of between 2,000 and 3,000 km. Beyond 2,000 km, the relationship between distance and number of trips is altered by variations in population size and per capita income between distance classes. The two distance classes between 7,000 and 9,000 km each produce more international overnight visitors to Brussels than those between 2,000 and 7,000 km, because they respectively include India and the United States (7,000 to 8,000 km) and China and Brazil (8,000 to 9,000 km). The expected negative relationship between distance and number of arrivals is only partly compensated for by the larger population in more remote distance classes, as shown by the number of arrivals in Brussels per 100,000 inhabitants in the origin classes (Table 2). Indeed, while the relative volume of flows to Brussels decreases steadily up to 5,000 km, it increases between 5,000 and 8,000 km, and again between 9,000 and 10,000 km. These variations result in part from the effect of differences in per capita income on the number of tourists sent. It is clear that those intermediate distance classes, which represent lower numbers of arrivals per 100,000 inhabitants, are generally characterized by a fairly modest per capita GDP (see, for example, the classes of 4,000 to 6,000 km).

Volume and Geography of Climate Footprints

According to our calculations, international tourist arrivals in the Brussels-Capital Region generated a total of 1,452 kilotonnes of CO2 (or 1.45 Mton CO2) in 2018, taking into account both inward and outward trips.
After applying the 1.9 multiplier to air trips, the climate footprint of all international travel to Brussels that is included in our analysis amounts, for 2018, to around 2,701 kilotonnes of CO2 equivalent (i.e., 2.70 Mton CO2 eq), which equals about 73% of the entire climate footprint (all activities combined, including the residential sector and internal transport, but obviously excluding international travel) that was officially reported by the Brussels-Capital Region in 2018. Examination of the distribution of the tourism-induced climate footprint reveals a geography that is radically different from the geography of tourist arrivals. In fact, while the number of flows sharply decreases with distance, the amount of emissions increases with distance (Table 3). Thus, while visitor flows from Europe account for 70.5% of arrivals, they generate barely 15% of emissions, while flows from outside Europe, which represent less than 30% of tourists, generate nearly 85% of the climate footprint. This striking result can be explained by the specific relation between air transport and climate footprint, which is illustrated by Figure 3, a map that links emissions by origin country to journeys to Brussels. The very significant climate footprints of flows from the United States (21% of the footprint for 7.6% of the flows) and China (10% versus 3%) stand out, but so do those of Japan (6% versus 1.7%) and Australia (5.5% versus 0.9%). One European state, Spain, is also present among the top ten countries in terms of emissions; it is the only origin country that combines a very large number of tourists to Brussels with an important share of air travel.

Conclusions

Territorializing the international share of Brussels's climate footprint is not an easy task. In the above analysis, numerous methodological choices had to be made, and furthermore, the scarce availability of data imposes important limitations. In our calculation, we chose to include only the climate footprint of tourists with Brussels as a destination, assuming that the climate footprint of journeys undertaken by Brussels's residents needs to be allocated to the destination territory. Furthermore, we were unable to cover international overnight visitors who stayed in unregistered accommodation, which means that our analysis significantly underestimates the total number of tourists to Brussels. In addition, we were not able to redistribute the climate footprint of tourists arriving in Brussels among the often multiple destinations they visit within Europe, which implies that we overestimated the climate footprint of long-distance overnight visitors. We are also aware that the climate footprint resulting from our calculations covers only one, albeit an important, aspect of Brussels's international position. Embedded emissions in imported products were not included, nor was the share of the Brussels economy in the climate footprint of international sea shipping. A final caveat is the significant degree of uncertainty associated with the multiplier (set at 1.9) that was applied to convert air-transport-related CO2 emissions into the overall climate footprint. Therefore, an important initial conclusion of our study is that resources should be made available to collect better data. An extensive sample of detailed questionnaires about travel itineraries could be obtained from arriving tourists, especially at airports, but also in a variety of other venues, which would lead to more accurate insights.
Such information could be supplemented with big data, in particular from mobile telephony, which allow travel itineraries to be reconstructed (see, e.g., Ahas, Aasa, Mark, Pae, & Kull, 2007; Saluveer et al., 2020). Despite all the reservations that need to be taken into account, and the exploratory nature of our calculations, we can still report a number of interesting findings on the geography and magnitude of the climate footprint of international travel to Brussels. In terms of geographical distribution, over 70% of international travellers to Brussels come from Europe, while they represent only 15% of the climate footprint of all international travel to Brussels. It is clear that distance matters. The climate footprint of a journey from a non-European country is not only greater in absolute terms, due to the larger distance, but also in relative terms (expressed in CO2 eq/km) due to the more favourable modal split of intra-European journeys. Besides, we note that Brussels is very conveniently located within Europe, centrally between the two main European travel destinations, London and Paris, and with convenient high-speed train connections to all surrounding major cities.

In terms of magnitude, the calculated climate footprint of international journeys with Brussels as a destination equalled 2.7 Mton CO2 eq in the year 2018, which is equivalent to about three quarters of the official total amount of emissions of the Brussels-Capital Region as recorded by the Belgian national climate inventory (3.7 Mton CO2 eq in 2017). Moreover, emissions from international journeys are increasing at a rapid pace, with an average growth of more than 4% per year over the past 18 years (up to 2019, before the Covid-19 crisis). If the current growth rate persists, by 2036 the climate footprint of international travel to Brussels will be more than twice as high as the official climate footprint of Brussels, a ratio that would be even higher if the emission reduction targets in the other sectors are achieved. The problematic nature of this finding is nuanced only to a limited extent by the observation that the climate footprint of international journeys to Brussels is smaller, both per trip and in total, than that of comparable cities such as Munich, Budapest, or Zurich (Gunter & Wöber, 2019).

The typical position of Brussels as a centre of political decision-making invites reflection on the finding that some locations may be better positioned than others to host such functions. Our analysis shows that Brussels is in fact doing remarkably well, since the climate footprint of intra-European travel to Brussels is rather low, while the overall climate footprint of inbound long-distance travel is considerably lower in comparison to other cities with a strong international position. Although Brussels's central location helps to keep the climate footprint of its incoming business travel modest, we should not forget that the favourable score of Brussels compared to cities such as Barcelona, Prague, or Amsterdam is largely due to the relatively limited touristic appeal of Brussels compared to the cities mentioned.
From a wider perspective, we can conclude that in a rapidly globalizing and at the same time warming world, it is no longer tenable to omit territorializing the climate footprint of international transport, while this is well-established practice for emissions caused by industrial activities, agriculture, buildings, and domestic transport. Not including these emissions in climate inventories leads to major biases in the climate debate itself. While climate movements argue for the adaptation of Global Northern consumption patterns and production processes, a less visible threat seems to be situated in the increasingly globalized and networked nature of society. Dependence on long-distance travel not only makes the economy more carbon intensive; education, research, culture, leisure activities, and even family visits also rely ever more on the consumption of tremendous amounts of kerosene. Long-distance travel patterns seem to be increasingly anchored in society, and ever less reversible. And even though less carbon-intensive alternatives such as trains and coaches are available for medium-distance journeys in Europe, an absolute reduction in the number of aircraft kilometres travelled remains a particularly unattractive idea for many citizens, businesses, and organizations, and broad societal support for it is virtually non-existent. Nevertheless, it is clear that a carbon-neutral future is one in which jet aircraft will no longer play a substantial role.
OPTIMIZATION OF THE MATERIAL REMOVAL RATE IN TURNING OF UD-GFRP USING THE PARTICLE SWARM OPTIMIZATION TECHNIQUE In this paper the particle swarm optimization technique is applied to experimental results in order to optimize the turning of unidirectional glass fiber reinforced plastics composite with consideration to the material removal rate. Taguchi’s L18 orthogonal array is used to conduct experimentation. The parameters considered are tool nose radius, tool rake angle, feed rate, cutting speed, cutting environment (dry, wet, cooled) and depth of cut. ANOVA is used to find out significant parameters (feed rate, cutting speed and depth of cut). The most significant parameters are feed rate and cutting speed. The maximum value of material removal rate is found to be 394.33mm 3 /sec., which is at feed rate (0.200 mm/rev), cutting speed (159.58 m/min) and depth of cut (1.3996 mm). PSO is an efficient and effective optimization tool for finding the optimum machining parameters for maximizing MRR. The results give a positive indication of the potential offered by PSO. INTRODUCTION Composite structure materials have successfully replaced traditional materials in respect of their high strength, high stiffness, good dimensional stability, higher fracture toughness, higher oxidation and corrosion resistance, directional properties, good resistance to heat, cold moisture and ease of fabrication applications (Bachtiar, Sapuan, & Hamdan, 2010;Umar, Zainudin, & Sapuan, 2012).As a result, the use of composites has grown considerably, particularly in the aerospace, aircraft, automobile, sporting goods, transportation, power generation and marine industries.Machining of these materials poses particular problems that are seldom seen with metals, due to the inhomogeneity, anisotropy and abrasive characteristics of the composites (Abrate & Walton, 1992).Composite materials are two different materials that, when combined together, produce a material with properties that exceed the constituent materials.There are two categories of constituent materials (Gordon & Hillery, 2002). 1. Reinforcement phase (e.g., fibers): The reinforcements impart their special mechanical and physical properties to enhance the matrix properties.2. Binder phase (e.g., compliant matrix): The matrix material surrounds and supports the reinforcement materials by maintaining their relative position. 
Composite materials may have ceramic, metallic or polymeric matrix.Most engineering materials can be classified into one of four basic categories as metals, ceramics, polymer or composites (Janardhan, 2005;Jeffrey, Tarlochan, & Rahman, (2011); Adebisi, Maleque, & Rahman, 2011;Bhaskar, & Sharief, 2012).Fiber-reinforced plastics (FRP) have been widely used in industry due to their excellent properties such as high specific modulus, specific strength and damping capacity.They are being commonly used in the aerospace and automotive industries, marine applications, sporting goods and biomedical components.Most of the FRP components are manufactured by molding operations almost to the final size of the desired product.However, postproduction machining is sometimes needed to remove excess material at the edge of the component by trimming and to drill holes for dimensional tolerance and assembly requirements.However, it has been reported that the strong anisotropy and inhomogeneity of FRP introduces many specific problems in machining, such as fiber pullout, delamination, surface damage, burrs and burning (Hu & Zhang, 2004).The machining of fiber-reinforced materials requires special considerations about the wear resistance of the tool.High speed steel (HSS) is not suitable for cutting owing to the high tool wear and poor surface finish.Hence, carbide and diamond tools are used as suitable cutting tool materials (Paulo Davim & Reis, 2004;Hariprasad, Dharmalingam, & Praveen Raj, 2013).Konig et al. (1985) found that measurement of surface roughness in FRP is less dependable than in metal, because protruding fiber tips may lead to incorrect results or at least large variations of the reading.The machined surface of Kevlar fibers reinforced plastics (KFRP) exhibits poor surface finish due to the fussiness caused by delaminated, dislocated and strain ruptured tough Kevlar fibers.Lee (2001) investigated the machinability of glass fiber reinforced plastics by means of tools made of various materials and geometries.Three parameters, namely cutting speed, feed rate and depth of cut, were selected.Single crystal diamond, poly crystal diamond and cubic boron nitride were used for the turning process.It was concluded that the single crystal diamond tool is excellent for GFRP cutting.Dhavamani and Alwarsamy (2011) presented a new methodology for the optimization of the machining parameters for drilling aluminum silicon carbide (AlSiC).Taguchi's method was used for the experimental design.Three parameters, cutting speed, feed and diameter of cut, were selected to minimize the surface roughness, volume fraction, machining time, metal removal rate, specific energy and flank wear.It was found that the machining performance can be improved effectively through this approach.Murthy, Rodrigues, and Anjaiah (2012) developed a thrust force and torque prediction model for the machining of GFRP composites using response surface methodology by using a solid carbide drill bit.Four parameters, spindle speed, feed, drill diameter and point angle, were selected to minimize the thrust force and torque.It was found that the spindle speed is the main contributing parameter for the variation in the thrust force and that the drill diameter is the main contributing factor for variation in torque.Suresh Kumar Reddy and Venkateswara Rao (2005) developed a surface roughness prediction model for the machining of AISI 1045 steel using a genetic algorithm by using a coated (TiAIN) carbide four fluted end mill cutter.Four parameters, tool 
geometry (nose radius and radial rake angle) and cutting conditions (cutting speed and feed rate), were selected to minimize the surface roughness.The predictive capability of the surface roughness model was improved by incorporating the tool geometry in the modeling. Surinder Kumar et al. (2012) developed a cutting force prediction model for the machining of UD-GFRP using regression modeling by using a polycrystalline diamond cutting tool.Three parameters, cutting speed, depth of cut and feed rate, were selected to minimize the cutting force.It was found that the depth of cut is the factor which has the greatest influence on the radial force, followed by the feed rate factor then other parameters, whilst the feed rate is the least significant parameter.Also, the authors concluded that the experimental values agreed with the predicted results, indicating the suitability of the multiple regression models.Kumar et al. (2012) investigated the turning process of the unidirectional glass fiber reinforced plastic (UD-GFRP) composites.A polycrystalline diamond (PCD) tool on the turning machine was used and the influence of six parameters, tool nose radius, tool rake angle, feed rate, cutting speed, depth of cut and the cutting environment (dry, wet and cooled (5-7° temperature)), on the surface roughness was measured.It was found that the feed rate is the factor which has the greatest influence on surface roughness, followed by cutting speed.Palanikumar (2008) evaluated the effect of cutting parameters on the surface roughness of the GFRP composites using a PCD tool.Three parameters, cutting speed, feed rate and depth of cut, were selected to minimize the surface roughness.It was found that depth of cut has the least effect on the surface roughness compared to the other parameters.Hussain et al. 
(2010) developed a surface roughness prediction model for the machining of GFRP pipes using a response surface methodology by using a carbide tool (K20).Four parameters, cutting speed, feed rate, depth of cut and workpiece (fiber orientation), were selected to minimize the surface roughness.It was found that the depth of cut has the least effect on the surface roughness compared to the other parameters.Palanikumar, Latha, Senthilkumar and Karthikeyan (2009) investigation focused on the multiple performance optimizations of machining characteristics of glass fiber reinforced plastics composites by using a non-dominated sorting genetic algorithm.Three parameters, cutting speed, feed rate and depth of cut, were selected to minimize the surface roughness and tool flank wear and to maximize the material removal rate.A polycrystalline diamond tool was used for the turning operation.Khan, Rahman, Kadirgama, Maleque, & Ishak (2011) proposed an approach for turning of a glass fiber reinforced plastic composites using two different alumina cutting tools: namely, a Ti[C, N] mixed alumina cutting tool (CC650) and a SiC whisker reinforced alumina cutting tool (CC670).Three parameters, cutting speed, depth of cut and feed rate, were selected to minimize the surface roughness.It was found that the performance of the SiC whisker reinforced alumina cutting tool is better than that of the Ti[C, N] mixed alumina cutting tool for machining GFRP composite.Kennedy and Eberhart (1995) suggested a particle swarm optimization (PSO) based technique for optimization on the analogy of a swarm of birds and a school of fish.The algorithm, which is based on a metaphor of social interaction, searches a space by adjusting the trajectories of moving points in a multidimensional space.The individual particles are drawn stochastically toward the position of present velocity of each individual and the best previous performance of their neighbors (Abido, 2001).The main advantages of the PSO algorithm are summarized as: simple concept, easy implementation, robustness to control parameters and computational efficiency when compared with mathematical algorithms and other heuristic optimization techniques (Dautenhahn, 2002).Zhang and Ishikawa (2008) proposed a new method to prevent premature convergence and for managing the exploration-exploitation trade-off in PSO search, Particle Swarm Optimization with Diversive Curiosity.It was observed that the ratio of success in finding the optimal solution to the given optimization problem was significantly improved and reached 100% with the estimated appropriate values of parameters in the internal indicator.Zhou et al. (2006) presented a particle swarm optimization technique in training a multi-layer feedforward neural network which was used for a prediction model of diameter error in boring machining.It was observed that the networks for diameter error prediction trained by the PSO algorithm or by the back propagation algorithm both improved the precision of the boring machining, but the neural networks trained by the PSO algorithm performed better than those trained by the back propagation algorithm. 
Verma (2012) used a fuzzy inference system and multi performance characteristic index (MPCI) for modeling and prediction of an FRP-polyester/epoxy composites workpiece.Three parameters, cutting speed, feed rate and depth of cut, were selected to minimize the surface roughness and maximize the material removal rate.It was found that the FIS and MPCI modeling technique can be effectively used for the prediction of the surface roughness and material removal rate in machining of FRP composites. Ravi Sankar, Srikant, Vamsi Krishna, Bhujanga Rao, & Bangaru Babu, (2013) discussed the comparison between the computational effectiveness and efficiency of the GA and PSO using a formal hypothesis testing approach.The results of this test could prove to be significant for the future development of PSO.It appeared that PSO outperformed the GA with a larger differential in computational efficiency when used to solve unconstrained nonlinear problems with continuous design variables.This paper investigates the optimization problem of the cutting parameters in the turning of unidirectional glass fiber reinforced plastic (UD-GFRP) composite rods.The material removal rate is the response variable.The experiments are performed using a Taguchi L 18 orthogonal array.The particle swarm optimization technique is used to find the optimum process parameters. MATERIAL AND EXPERIMENTAL TECHNIQUE Pultrusion processed unidirectional glass fiber reinforced composite rods are used.The fiber used in the rod is E-glass and the resin used is epoxy, while the properties of the material used are shown in Table 1.Workpiece material specimens having a size of 840 mm in length and 42 mm in diameter are used.The experiments are carried out on an NH22 lathe machine with 11 kW spindle power and a maximum speed of 3000 rpm using a PCD tool.A cutting tool insert with various rake angles (-6°, 0°, +6°) and tool nose radii (0.4 mm & 0.8 mm) are used.A tool holder SVJCR steel EN47 is used during the turning operation.The experimental results of turning of unidirectional glass fiber reinforced plastics composite are evaluated to ascertain the material removal rate (MRR).The experimental design based on the Taguchi L 18 orthogonal method is used. The Taguchi mixed level design is selected as it is decided to keep two levels of tool nose radius.The remaining five parameters are studied at three levels.The two level parameter has 1 DOF and the remaining five three level parameters have DOF i.e., the total DOF required is 11 [= (1*1+ (5*2)].The most appropriate orthogonal array in this case is L 18 (2 1 * 3 7 ) OA with 17 [= 18-1] DOF.The standard L 18 OA with the parameters assigned by using linear graphs is used.The unassigned columns are treated as error. 
The process parameters, their designated symbols and ranges are also given in Table 2. The plan consists of 18 tests (array rows) in which the tool nose radius, tool rake angle, feed rate, cutting speed, cutting environment (dry, wet and cooled) and depth of cut are assigned to columns 1 to 6, respectively, as shown in Table 3. The cutting environment (dry, wet and cooled) is set during the machining of the rod so as to obtain a comparative assessment of the performance of the cutting environment, which has not been studied before. The material removal rate (MRR), in mm3/sec, is the volume of material removed from the workpiece per unit time and is calculated using Equation 1:

MRR = π (D^2 - d^2) L / (4 T_C)   (1)

where T_C = L/(C N) is the machining time, N = spindle speed in rpm, D = initial diameter in mm, d = final diameter in mm, L = length in mm, and C = feed rate in mm/rev.

REGRESSION ANALYSIS

A multiple regression equation is modelled for the relationship between the process parameters in order to evaluate the material removal rate for any combination of factor levels in a specified range. The functional relationship between the dependent output parameter and the independent variables under investigation is postulated by Eq. (2):

Y = k x1^a x2^b x3^c   (2)

where Y is the dependent output variable (here the material removal rate), x1, x2 and x3 are the independent variables (feed rate, cutting speed and depth of cut), k is a constant, and a, b and c are the exponents of the independent variables. To convert this nonlinear equation into a linear form, a logarithmic transformation is applied, giving Eq. (3):

ln Y = ln k + a ln x1 + b ln x2 + c ln x3   (3)

This is one of the most commonly used data transformations for empirical model building. The equation can then be written as Eq. (4):

η = β0 + β1 x1 + β2 x2 + β3 x3   (4)

where η is the true value of the material removal rate on a logarithmic scale, x1, x2 and x3 are the logarithmic transformations of the respective parameters, and β0, β1, β2 and β3 are the parameters to be estimated. Due to experimental error, the true response is η = y - ε, where y is the logarithmic transformation of the measured material removal rate and ε is the experimental error. For simplicity, the equation is rewritten as Eq. (5):

Ŷ = b0 + b1 x1 + b2 x2 + b3 x3   (5)

where Ŷ is the predicted value of the material removal rate after logarithmic transformation and b0, b1, b2 and b3 are the estimates of β0, β1, β2 and β3, respectively.

The values of b0, b1, b2 and b3 are found by regression analysis (second-order model), conducted with MINITAB standard version software (MINITAB 15.0 for Windows) using the experimental data. The first-order model for the material removal rate showed a lack of fit, with high prediction errors, so the second-order model given below was developed. Here x1, x2 and x3 are the logarithms of the feed rate, cutting speed and depth of cut.
The empirical model developed by regression analysis for the material removal rate (MRR) is given below:

MRR = 0.005 + 1.52 x1 + 2.65 x2 + 1.08 x3 - 0.684 x1 x2 - 0.347 x1 x3 - 0.334 x2 x3 - 0.325 x1^2 - 0.651 x2^2 - 0.250 x3^2   (6)

The predicted values of the material removal rate are calculated with this equation and the coefficients shown in Table 6. The multiple regression coefficient R^2 of the second-order model is found to be 99.5%. On the basis of R^2, it can be concluded that the second-order model is adequate in representing this process (Table 6). Table 7 shows the analysis of variance, in which the P value of 0.000 (<0.05) for the regression indicates that at least one of the terms in the model has a significant effect on the mean response of the material removal rate (Montgomery, Peck, & Vining, 2001). Thus, the empirical equation built using the second-order model can be used. The relative error between the predicted and measured values of the material removal rate is calculated and presented in Table 7. The significance of the predictors shown in Table 6 is analysed further in Table 8. (Table 6: Empirical expressions developed by the second-order model. Table 7: ANOVA for the second-order model (MRR). Table 8: Comparison between experimental and predicted values of the material removal rate.)

Goodness of Fit for Surface Roughness and Material Removal Rate

To test whether the discrepancies between the observed and expected frequencies can be attributed to chance, the chi-square goodness-of-fit statistic for the material removal rate is used, as given by Eq. (7):

χ2 = Σ (Oi - Ei)^2 / Ei   (7)

where Oi and Ei are the observed (experimental) and expected (predicted) values, respectively. The criterion chosen for either accepting or rejecting the null hypothesis is: if χ2 > 8.672 (tabulated value), reject the null hypothesis. Table 9 shows that χ2 = 1.1078 for the material removal rate with 17 degrees of freedom, where the degrees of freedom are given by (rows - 1) × (columns - 1) = (18 - 1) × (2 - 1) = 17. Therefore, the analysis suggests that the discrepancies can be attributed to chance at the 95% confidence level; in other words, there is reason to believe that the model gives correct output, as shown in Table 9.
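The second-order fit and the goodness-of-fit check can be reproduced in a few lines. The sketch below is illustrative only and was not part of the original study (which used MINITAB): the run data are synthetic placeholders standing in for the 18 Taguchi runs, and the function of feed, speed and depth used to generate them is an assumption made purely so that the example runs.

```python
# Illustrative sketch: second-order regression on log-transformed cutting
# parameters, with R^2 and the chi-square goodness-of-fit statistic (Eq. 7).
import numpy as np

rng = np.random.default_rng(0)
n = 18  # number of runs, mirroring the L18 array

# Synthetic placeholder data (NOT the study's measurements)
feed = rng.uniform(0.05, 0.20, n)    # mm/rev
speed = rng.uniform(55.0, 160.0, n)  # m/min
depth = rng.uniform(0.2, 1.4, n)     # mm
mrr = 1000.0 * feed * speed * depth / 60.0 * rng.normal(1.0, 0.05, n)  # mm^3/s

# Log-transform, as in Eq. (3)-(5)
x1, x2, x3, y = np.log(feed), np.log(speed), np.log(depth), np.log(mrr)

# Design matrix of the second-order model: intercept, linear, interaction, squared terms
X = np.column_stack([
    np.ones(n), x1, x2, x3,
    x1 * x2, x1 * x3, x2 * x3,
    x1**2, x2**2, x3**2,
])

# Least-squares estimates of the coefficients
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b

# Coefficient of determination R^2 on the log scale
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Chi-square goodness of fit between observed and predicted MRR (Eq. 7)
observed, expected = mrr, np.exp(y_hat)
chi2 = np.sum((observed - expected) ** 2 / expected)

print(f"R^2 = {r2:.3f}, chi-square = {chi2:.3f}")
```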
PARTICLE SWARM OPTIMIZATION

PSO is a global optimization technique developed by Kennedy and Eberhart (1995). The particle swarm intelligence technique combines social psychology principles of socio-cognitive human agents with evolutionary computation. PSO was motivated by the behaviour of organisms such as fish schooling and bird flocking, and guides swarms of particles towards the most promising regions of the search space. Generally, PSO is characterized as a simple concept that is easy to implement and computationally efficient. Unlike other heuristic techniques, PSO has a flexible and well-balanced mechanism to enhance the global and local exploration abilities. Thus, a PSO algorithm can be employed to solve an optimization problem. Each particle in the swarm represents a candidate solution to the optimization problem. In a PSO, each particle moves to a new position and makes use of the best position encountered by itself and the best position of its neighbours to move itself towards the global optimum. The principle of the PSO algorithm is as follows (Esmin, Lambert-Torres, & de Souza, 2005). The PSO considers a swarm S containing N particles (i = 1, 2, ..., N) in a d-dimensional continuous solution space. The position and velocity of particle i are represented as the vectors x_i = (x_i1, ..., x_id) and v_i = (v_i1, ..., v_id), respectively. A bird adjusts its position in order to find a better position, according to its own experience and the experience of its companions. Using this information, the velocity of particle i is updated using Eq. (8), which in its original form (Kennedy & Eberhart, 1995) reads:

v_id = v_id + c1 r1 (p_id - x_id) + c2 r2 (p_gd - x_id),   x_id = x_id + v_id   (8)

where p_i is the best position found so far by particle i, p_g is the best position found by its neighbourhood, c1 and c2 are acceleration constants, and r1 and r2 are random numbers in [0, 1].

Coding of Particles

Each particle is generated using binary coding. The binary-format particle is decoded using Eq. (9):

X_i = X_i_min + [(X_i_max - X_i_min) / (2^n - 1)] S_i   (9)

where X_i is the decoded feed rate, cutting speed or depth of cut, X_i_min and X_i_max are the lower and upper limits of that parameter, n is the substring length (= 4), and S_i is the decoded value of the i-th substring of the chromosome. The accuracy is given by Eq. (10):

Accuracy = (X_i_max - X_i_min) / (2^n - 1)   (10)

Figure 1 and Table 10 show the flow diagram and the algorithm of PSO.

The Algorithm
Step 1: Generate the initial swarm involving N particles at random.
Step 2: Generate the initial velocities randomly.
Step 3: For each particle, find the best solution achieved so far by that particle and the best value obtained so far by any particle in its neighbourhood.
Step 4: Update the velocity and position.
Step 5: If the termination condition is satisfied, stop. Otherwise, go to Step 3.
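The overall loop can be sketched compactly. The snippet below is a simplified, real-valued PSO written for illustration only: the study itself used binary-coded particles decoded via Eq. (9) and a MATLAB implementation, and the objective function and parameter bounds here are placeholders rather than the fitted second-order MRR model or the study's exact ranges.

```python
# Illustrative real-valued PSO maximizing a placeholder MRR objective.
import numpy as np

rng = np.random.default_rng(1)

# Assumed search bounds: feed rate (mm/rev), cutting speed (m/min), depth of cut (mm)
lower = np.array([0.05, 55.0, 0.2])
upper = np.array([0.20, 160.0, 1.4])

def mrr(params):
    """Placeholder objective standing in for the fitted second-order MRR model."""
    f, v, d = params
    return 1000.0 * f * v * d / 60.0  # rough volumetric removal rate, mm^3/s

n_particles, n_iter = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed values)

x = rng.uniform(lower, upper, size=(n_particles, 3))  # positions
v = np.zeros_like(x)                                  # velocities
pbest = x.copy()                                      # personal best positions
pbest_val = np.array([mrr(p) for p in x])
gbest = pbest[np.argmax(pbest_val)]                   # global best position

for _ in range(n_iter):
    r1 = rng.random((n_particles, 3))
    r2 = rng.random((n_particles, 3))
    # Velocity and position update (Eq. 8, plus an inertia weight w)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lower, upper)                  # keep particles inside the bounds
    vals = np.array([mrr(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best parameters (feed, speed, depth):", gbest)
print("best MRR estimate:", mrr(gbest))
```

With a monotone placeholder objective like this one, the swarm simply converges to the upper bounds; with the study's fitted second-order model the optimum would instead lie wherever that response surface peaks inside the feasible region.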
RESULTS AND DISCUSSION

Experiments are performed on a turning machine according to the L18 orthogonal array shown in Table 3, and Table 4 shows the experimental results of the material removal rate (Table 4: Test data summary for the material removal rate). From Table 5, it is clear that parameters C, D and F significantly affect both the mean and the variation of the material removal rate. The percent contributions of the parameters, as quantified under column P of Table 5, reveal that the influence of the depth of cut on the material removal rate is significantly larger than that of the feed rate and the cutting speed. The percent contributions of the depth of cut (52.168%), feed rate (26.179%) and cutting speed (8.838%) to the variation of the material removal rate are significantly larger (95% confidence level) than the contributions of the other parameters; the pooled version of the ANOVA of the raw data for the material removal rate is given in Table 5 (in the ANOVA tables, SS = sum of squares, DOF = degrees of freedom, variance V = SS/DOF, T = total, SS' = pure sum of squares, P = percent contribution, e = error, and F-ratio = V/V_error; the tabulated F-ratio is taken at the 95% confidence level, and * marks significance at the 95% confidence level).

The PSO code is developed using MATLAB. The input machining parameter levels are fed to the PSO program, making it possible to determine the conditions at which the turning operation has to be carried out in order to obtain the optimum material removal rate. Figure 2 shows the MRR versus the number of iterations. Table 11 shows the performance of the material removal rate with respect to the input machining parameters for PSO (Table 11: Output values of the PSO with respect to the input machining parameters). The maximum value of the material removal rate is found to be 394.33 mm3/sec, at a feed rate of 0.200 mm/rev, a cutting speed of 159.58 m/min and a depth of cut of 1.3996 mm. Hence, it can be concluded from the optimization results of the PSO program that it is possible to select a combination of feed rate, cutting speed and depth of cut to achieve the required material removal rate. The application of a PSO approach to obtain the optimal machining conditions will be very useful at the computer-aided process planning (CAPP) stage in the production of high-quality goods with tight tolerances by a variety of automated machining operations, and in adaptive control of machine tools. With known boundaries of the material removal rate and machining conditions, machining can be performed with a relatively high rate of success under the selected machining conditions.

CONCLUSIONS

The turning test is performed on GFRP using a PCD tool. The Taguchi L18 orthogonal array is used to perform the experiments and to analyse the MRR. In this paper, an approach based on particle swarms is used to solve the optimization problem of maximizing the MRR. The simulation results show that the approach converges quickly. PSO is an efficient and effective optimization tool for finding the optimum machining parameters for maximizing MRR, and the results give a positive indication of the potential offered by PSO. It can be concluded that better optimization of the cutting parameters is necessary to obtain a high material removal rate. The maximum value of the material removal rate is found to be 394.33 mm3/sec, at a feed rate of 0.200 mm/rev, a cutting speed of 159.58 m/min and a depth of cut of 1.3996 mm.
Risk Stratification Study of Indeterminate Thyroid Nodules with a next-generation Sequencing Assay with Residual ThinPrep® Material Objective: The management of indeterminate thyroid nodules is challenging. Molecular testing has emerged as a promising method for stratifying this gray area of fine-needle aspiration (FNA) cytology. Next-generation sequencing (NGS) can be used to test a large variety of genetic changes with very small amounts of nucleic acids obtained from FNA samples. Methods: Thyroid FNA assays were classified according to the Bethesda System for Reporting Thyroid Cytopathology after routine ThinPrep® slide preparation. Indeterminate nodules with surgical outcomes were assayed with an 18-gene NGS panel with the residual ThinPrep® material, including nodules categorized as atypia of undetermined significance (AUS)/follicular lesions of undetermined significance (FLUS) or follicular neoplasm (FN)/suspicious for a follicular neoplasm (SFN). We evaluated the diagnostic efficacy of the 18-gene panel for thyroid malignancies and potential malignancies and compared it with a well-accepted examination, ThyroSeq v2 testing. Results: A total of 36 indeterminate nodules were assayed, seven were categorized as AUS/FLUS and 29 as FN/SFN. All of them had adequate DNA for the NGS procedure. When noninvasive follicular thyroid neoplasm with papillary-like nuclear features (NIFTP) was considered malignant, the risk of malignancy was 71.4% for AUS/FLUS nodules, and 69.0%for FN/SFN nodules. The 18-gene panel showed 72.0% sensitivity, 72.7% specificity, 85.7% positive predictive value (PPV), and 53.3% negative predictive value (NPV) in identifying malignancies and potential malignancies in the indeterminate nodules. Compared with a multicenter report from ThyroSeq v2 testing, 18-gene panel showed a lower NPV (p=0.005), but a higher PPV (p=0.02). Conclusions: NGS assays are feasible on residual ThinPrep® material, with the advantage of not requiring additional FNA procedure. The 18-gene panel testing can be used as a 'rule in' test for surgical management based on indeterminate nodules and showed a lower NPV but a higher PPV compared to ThyroSeq v2 testing. Introduction With the wide application of thyroid ultrasound in physical examinations, thyroid cancer has become the fastest growing type of cancer identified throughout the world, including on the Chinese mainland [1]. Fine-needle aspiration (FNA) is the most effective diagnostic method for thyroid cancer. FNA allows the diagnosis of cancer or a benign nodule in most patients, although about 20% of FNA samples yield an indeterminate diagnosis [2]. These indeterminate nodules include two subcategories of cytological diagnosis: atypia of undetermined significance (AUS)/follicular lesion of undetermined significance (FLUS) and follicular neoplasm (FN)/suspicious for a follicular neoplasm (SFN) [3]. A predictor of indeterminate nodules that may place certain nodule types at higher malignancy rates is required. Molecular testing is recommended by the 2015 American Thyroid Association guidelines as an adjunct technique to further stratify the risk of cytologically indeterminate nodules [4]. To date, various commercial molecular tests, such as Afirma, ThyroSeq, or ThyGen X, have been approved by the U.S. Food and Drug Administration to evaluate cytologically indeterminate thyroid nodules [5]. However, there is currently no molecular test that can definitively rule malignancy either in or out. 
More molecular data on indeterminate nodules are required. Next-generation sequencing (NGS) can be used to test a large variety of genetic changes simultaneously with a very small amount of nucleic acids, and allows multiple genes to be tested in FNA samples. In this study, we used an 18-gene panel based on the NGS technology to test cytologically indeterminate nodules with residual ThinPrep® material. We evaluated the risk of malignancy in patients with a positive mutation detected with the 18-gene panel, and also compared the risk stratification achieved with the 18-gene panel to ThyroSeq v2 testing, which was also based on NGS platform and has been well-accepted. Case selection The thyroid FNA samples were collected from the National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College between November 2017 and June 2019 and analyzed retrospectively. The patients were selected on the basis of the following criteria: 1) cytological diagnosis of AUS/FLUS or FN/SFN according to the Bethesda System for Reporting Thyroid Cytopathology (TBSRTC) [3]; 2) they had undergone thyroid surgery and had correlated cytological-histological results; and 3) an adequate residual specimen was available for DNA extraction after the routine cytological diagnosis. The cytological-histological correlation was performed by matching the locations and sizes of nodules in both the ultrasound and pathology reports. All the patients gave their informed consent before FNA. This study protocol was reviewed and approved by the Ethics Committee of the National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital. Specimen preparation All FNA biopsies were performed under ultrasound guidance by radiologists. The aspirates were rinsed into a vial of CytoLyt ® (Hologic Inc., Marlborough, MA, USA) and prepared as slides with ThinPrep ® 2000 (Hologic Inc.). The slides were fixed in 95% alcohol and stained with Papanicolaou stain. They were then interpreted by two cytopathologists with experience ranging from 14 to 19 years. The residues were collected for DNA extraction. The residue selection criterion was defined as ten groups of cells on a slide in 10 ml of PreservCyt ® solution [6]. The liquid materials were stored at −20 °C and used for molecular testing within 3 months. DNA extraction After centrifugation, the cells were incubated in 500 μl of DNA lysis solution (1 mg/ml proteinase K, 10 mmol/l Tris-HCl (pH 8.0), 0.1 mol/l EDTA (pH 8.0), 0.5% (w/v) SDS) at 55 °C for approximately 12 h. The DNA was then extracted with the phenolchloroform method and stored at -20 °C for future use. The concentration and purity of the DNA were measured with a NanoDrop ND-1000 (NanoDrop Technologies, Wilmington, DE, USA) spectrophotometer. Targeted DNA sequencing Targeted DNA sequencing was performed for all patients with available DNA. The DNA was profiled with a capture-based targeted sequencing panel (Burning Rock Biotech, Guangzhou, People's Republic of China) that targets 18 genes (BRAF, NRAS, HRAS, KRAS, RET, NTRK1, ETV6, ALK, PPARG, TERT, EIF1AX, PTEN, AKT1, PIK3CA, TP53, CTNNB1, TSHR, and GNAS) and spans 140 kb of the human genome. In this way, we detected all single-nucleotide variants in these 18 genes and any gene fusions involving RET, NTRK1, ETV6, ALK, and PPARG. The design of 18-gene panel is based on data from public database and previous study in histology specimens [7,8]. 
bioanalyzer high-sensitivity DNA assay was performed to assess the quality and size of the fragments. The available indexed samples were sequenced on a NextSeq 500 sequencer (Illumina Inc., San Diego, CA, USA]) as pair-end reads. Sequence data analysis The sequence data were aligned to the human genome (hg19) with Burrows-Wheeler Aligner 0.7.10. Local alignment optimization and variant calling were performed with GATK v3.2-2. Both TopHat2 and Factera 1.4.3 were used for the DNA translocation analysis. To assess the level of DNA degradation, the insert size distribution and library complexity of each sample were computed. To avoid false positive mutation calls arising from DNA damage, different mutation calling thresholds were applied to DNA samples of different quality. Variants with population frequencies > 0.1% in the ExAC, 1000 Genomes, dbSNP, and ESP6500SI-V2 databases were grouped as common single-nucleotide polymorphisms and removed. Integrative Genomics Viewer (Broad Institute, USA) was used to visualize the variants aligned against the reference genome to confirm the accuracy of the variant calls by checking for possible strand bias and sequencing errors. Copy number variation was assessed by normalizing the read depth in each region to the total read number and region size, and correcting for GC bias using the LOESS algorithm. Statistical analysis The cytological and molecular results were correlated with the histopathological results. A χ 2 test was used to assess the differences in categorical variables. All statistical analyses were performed with SPSS 17.0, and p < 0.05 was considered statistically significant. Baseline characteristics of patients and nodules Between November 2017 and June 2019, 434 thyroid nodules showed indeterminate cytology. Thirty-six indeterminate nodules with correlated surgical outcomes and residual ThinPrep® materials were enrolled. The patient and nodule characteristics are listed in Table 1. The mean age of patients was 49 years, and the ratio of females to males was 3:1. The median nodule size was 1.3 cm. Surgical pathology showed that 23 nodules were malignant, and the follicular variant of papillary thyroid carcinoma was the commonest malignant histopathology (47.8%). Benign pathologies included two adenomas and nine nodular hyperplasias. Two nodules were classified as noninvasive follicular thyroid neoplasm with papillary-like nuclear features (NIFTP). Risk of malignancy in indeterminate nodules Among the 36 indeterminate nodules, seven were categorized as AUS/FLUS and 29 as FN/SFN. The ROM was 71.4% in the AUS/FLUS nodules and 69.0% in the FN/SFN nodules when NIFTP was considered malignant. If NIFTP was classified as benign, the prevalence of malignancy was 71.4% and 62.1%, respectively ( Table 2). Diagnostic capacity of 18-gene panel and comparison with ThyroSeq v2 assay The performance characteristics of the 18-gene panel testing for the diagnosis of thyroid malignancy including their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) are shown in Table 3. When NIFTP was considered malignant, the 18-gene panel had 72.0% sensitivity, 72.7% specificity, 85.7% PPV, and 53.3% NPV. If NIFTP was considered benign, the PPV decreased to 76.2%, NPV maintained a value of 53.3%. Table 3 also shows the comparison between our 18-gene panel and the previous reports from ThyroSeq v2. 
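The performance measures reported above follow directly from a 2x2 cross-tabulation of the panel result against the surgical diagnosis. The short sketch below only illustrates that arithmetic: the counts are back-calculated from the reported percentages (with NIFTP counted as malignant) rather than copied from the paper's tables, so they should be read as an illustration, not as the study data.

```python
# Illustrative sketch: diagnostic performance of the 18-gene panel from a 2x2 table.
# Counts back-calculated from the reported percentages (NIFTP considered malignant).
tp, fn = 18, 7   # malignant/potentially malignant nodules: panel positive / panel negative
fp, tn = 3, 8    # benign nodules: panel positive / panel negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)  # positive predictive value
npv = tn / (tn + fn)  # negative predictive value

print(f"sensitivity = {sensitivity:.1%}")  # 72.0%
print(f"specificity = {specificity:.1%}")  # 72.7%
print(f"PPV         = {ppv:.1%}")          # 85.7%
print(f"NPV         = {npv:.1%}")          # 53.3%
```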
A statistical analysis was performed between present study and a multicenter report from ThyroSeq v2 in 2019 and 18-gene panel showed a lower NPV (p=0.005), but a higher PPV (p=0.02) with ThyroSeq v2. Two representative cases with positive molecular results are shown in Figure 1. Discussion Over the past decade, molecular testing has emerged as a promising method for stratifying indeterminate thyroid FNAs. Several molecular testing panels, such as the Afirma Gene Expression Classifier, ThyroSeq v2, and ThyGenX/ThyraMIR, are commercially available in the USA [5,9]. However, none of them has been approved or is available in mainland China. Ultrasound characteristics and clinical features are currently the main criteria used to stratify indeterminate nodules. The decision to operate is rarely made with reference to molecular results. We used NGS that targeted 18 genes to retrospectively analyze 36 cytologically indeterminate samples of thyroid lesions. All the samples were diagnosed as AUS/FLUS (7 cases) or FN/SFN (29 cases) with cytology, and the patients had undergone surgery based on their clinical and ultrasound features. The molecular analysis was performed after surgery with the residual liquid cytology samples after routine ThinPrep® slide preparation, which had been stored at -20 °C. To guarantee enough DNA for analysis, the selection criterion for the residues was defined as ten groups of cells on the slide in 10 ml of PreservCyt solution, as in our previous study [6]. The DNA quality of all 36 samples fulfilled the requirements for NGS. Molecular testing based on residual ThinPrep® material has the advantage of not requiring an additional FNA. In this study, the rates of malignancy for AUS/FLUS regardless of whether NIFTP was considered malignant or not were both 71.4%. The ROM of the FN/SFN nodules was 69.0% when NIFTP was considered malignant and fell to 62.1% when NIFTP was reclassified as benign. ROM was higher than the idealized ROMs described by TBSRTC [10]. This discrepancy is mainly attributable to the surgical cohort selected. Our hospital is the national cancer center of China. The experiences of surgeons and radiologist make this selection more effective. The various differences in ROMs described in the present and previous studies emphasize the need for surgeons to understand their individual data, rather than rely on TBSRTC predictions [11][12][13]. When NIFTP was considered malignant, 18-gene panel showed 85.7% PPV and 53.3% NPV for the diagnosis of thyroid malignancy in indeterminate nodules. If NIFTP was considered benign, PPV decreased to 76.2%, NPV maintained a value of 53.3%. As previously reported, we considered NIFTP malignant when evaluating molecular tests in clinical practice because the recommended treatment fort NIFTP is surgical excision [14,15]. Our 18-gene panel showed moderate NPV and high PPV when NIFTP was considered malignant, and may serve as a 'rule in' test for surgery. Our 18-gene panel involves next-generation sequencing that detects gene mutations and fusions. The design of our panel is similar to a well-accepted test designated 'ThyroSeq v2', which was initially suggested to be both a "rule in" and "rule out" test because both its PPV (83%) and NPV (96%) were high [16]. However, validation in the real world has suggested that its PPV may be lower than initially reported [17][18][19]. Recently, a multicenter study reported a PPV of 59% and an NPV of 86% when NIFTP was considered malignant [19]. 
Compared with that multicenter study, our 18-gene showed a lower NPV (p = 0.005), but a higher PPV (p = 0.02). As in previous reports, RAS was the most frequent mutation identified in the indeterminate nodules in this study, and was not specifically associated with malignant or potentially malignant outcomes [20][21][22]. Another false positive molecular result in our study was the mutation of PTEN. Two nodules with mutations in PTEN were both shown to be hyperplasia nodules. False PTEN mutations have rarely been reported in other studies [15,17,18,23]. This highlights the need for larger clinical studies to evaluate each mutation individually and to better characterize the risk of malignancy. The surgeon's familiarity with this information will allow more-appropriate clinical practices. This study was not without limitations. Because residual ThinPrep® FNA samples were collected for molecular testing, more nodules categorized as AUS/FLUS were excluded than those categorized as FN/SFN because there were fewer cells in the AUS/FLUS residues. Fewer samples in AUS/FLUS subcategory than that in FN/SFN subcategory may have weakened the power to demonstrate the diagnostic capacities of the 18-gene test in AUS/FLUS nodules. The small sample size may have also limited our understanding of the malignant risk associated with specific gene changes. Overall, residual ThinPrep® samples are suitable for NGS, and our 18-gene panel showed high PPV and moderate NPV for malignancy and potential malignancy. Therefore, it can be used as a 'rule in' test for stratifying indeterminate nodules. A lower NPV but a higher PPV was found with the use of 18-gene panel testing compared to the well-accepted ThyroSeq v2 testing.
Cost-effectiveness of 20-valent pneumococcal conjugate vaccine compared with 23-valent pneumococcal polysaccharide vaccine among adults in a Norwegian setting Background The morbidity and mortality of adult diseases caused by S. pneumoniae increase with age and presence of underlying chronic diseases. Currently, two vaccine technologies against S. pneumoniae are used: the 23-valent pneumococcal polysaccharide vaccine (PPV23) and the pneumococcal conjugate vaccines, one of which is the 20-valent pneumococcal conjugate vaccine (PCV20) that has recently been approved for adults. Objective This study was conducted to investigate the cost-effectiveness of implementing PCV20 in a reimbursement scheme for Norwegian adults aged 18–99 years at risk of pneumococcal diseases and those aged 65 years and older at low risk compared to PPV23. Methods An established Markov model was adapted to a Norwegian setting to estimate the economic and clinical consequences of vaccinating the Norwegian population in specific age and risk groups against pneumococcal diseases. Inputs for the model were found in Norwegian or Danish real-world evidence or retrieved from available studies. The costs and clinical outcomes were assessed using a health sector perspective and a lifetime time horizon. Results The results showed that PCV20 was associated with better health outcomes including fewer disease cases, fewer disease-attributable fatalities, a higher gain of life years and quality-adjusted life years compared to PPV23. In addition, PCV20 had a lower total cost compared to PPV23. Therefore, PCV20 was the dominant vaccination strategy. The base case result was investigated in multiple sensitivity analyses, which showed that the results were robust to changes in input parameters and methodological assumptions, as PCV20 remained the dominant vaccination strategy in almost all scenarios. Conclusion Results showed that vaccinating the Norwegian adults with PCV20 was cost-effective compared to PPV23. Changes in the hospital cost of pneumonia, the price of PCV 20, the effectiveness of PCV20 against pneumonia, and the pneumonia disease incidence had the highest impact on the ICER, i.e., were the main drivers of the results. Introduction Pneumococcal diseases are common infections caused by the bacterial species S. pneumoniae.Worldwide, it is an important cause of infection and death among both children and adults [1,2].Infections with S. pneumoniae include both invasive pneumococcal diseases (IPD), described as meningitis, bacteraemia, and bacterial pneumonia, and non-invasive pneumococcal diseases, such as community-acquired pneumonia (CAP) [1][2][3].In Europe, S. pneumoniae is responsible for 20-30% of all CAP cases, and it is well known that the non-invasive pneumococcal diseases are three times more frequent than IPD in hospitalised adults [4,5].The pathogenicity and invasiveness of S. pneumoniae is determined by the composition of the polysaccharides in the capsule, which define the serotypes of the bacteria; currently, 100 distinct serotypes of S. pneumoniae are known [2,6]. 
The morbidity and mortality of pneumococcal diseases increase with age and the presence of underlying chronic diseases [4,7,8]. To prevent IPD and CAP, vaccines have been developed, and currently two types of pneumococcal vaccines are available: the 23-valent pneumococcal polysaccharide vaccine (PPV23) and pneumococcal conjugate vaccines (PCVs). The vaccines are based on different technologies and thus induce different immune responses [9]. Vaccination with PCVs provides a robust T-cell-dependent immunisation as well as immunological and mucosal memory, which is not induced by PPV23 [10,11]. In addition, vaccination with PCVs provides longer-lasting effects than PPV23 [9,12]. In a recent review of studies that investigated the effectiveness of the 13-valent pneumococcal conjugate vaccine (PCV13) and PPV23 on the same outcomes using similar methods and populations, the conjugate vaccine was found to provide superior protection against both pneumococcal disease and respiratory infections more broadly [13]. On the other hand, PCVs in Norway carry a higher price than PPV23 [14].

In Norway, vaccination against pneumococcal diseases is primarily financed through the childhood vaccination programme, in which the 7-valent pneumococcal conjugate vaccine (PCV7) was introduced in 2006 and replaced by PCV13 in 2011 [5]. Introduction of PCVs in the childhood vaccination programme has resulted in a decrease in the incidence of IPD caused by S. pneumoniae serotypes covered by PCV13 in all age groups [15]. In addition, PCV13, as well as the recently approved 15- and 20-valent pneumococcal conjugate vaccines (PCV15 and PCV20), are currently financed for selected medical high-risk groups and given in series with PPV23 according to the blue prescription ("blå resept") for people who are stem cell-transplanted or HIV-positive and people with functional or anatomic asplenia [5,16]. PCV13 and PCV15 are approved for both children aged six weeks to 17 years and adults, whereas PCV20 is approved only for adults. Even though pneumococcal vaccination is not financed for any other risk groups in Norway, it is currently recommended that people with increased risk of IPD receive PPV23, as it covers more serotypes than PCV13, PCV15 and PCV20.

However, the broader serotype protection of PCV15 and PCV20 compared to PCV13 narrows the serotype coverage gap between PCVs and PPV23 [11]. In 2020 and 2021, 53% and 65% of the reported cases of IPD in Norway were caused by serotypes covered by PCV20 and PPV23, respectively [11]. The protection of PCV13 against CAP and IPD has been demonstrated in the Community-Acquired Pneumonia Immunization Trial in Adults (CAPiTA) study [17]. As the serotypes of PCV13 are all included in PCV20, and PCV20 has shown non-inferiority to PCV13, similar effects against CAP and IPD can be expected. In contrast, studies have shown inconclusive results regarding the vaccine efficacy of PPV23 against CAP [13,18,19].
Based on the burden of pneumococcal diseases and their economic impact, this study aimed to investigate the cost-effectiveness of PCV20 vaccination of Norwegian adults aged ≥ 18 years at risk of pneumococcal diseases, and of all adults aged ≥ 65 years at low risk or at risk of pneumococcal diseases, compared to PPV23. In the model, the at-risk group includes both immunocompetent adults (typically considered at moderate risk of pneumococcal diseases) and adults with immunocompromising conditions who are not part of the current blue prescription scheme (typically considered at high risk of pneumococcal diseases).

Methods

To investigate the cost-effectiveness of PCV20 for adults in Norway, a cost-utility model was adapted to a Norwegian setting. The model has previously been adapted to a Danish setting and is also described in a study by Olsen et al. [20]. The cost-utility analysis was conducted using a Markov transition model with one-year cycles. In the model, the possible transitions of each cycle relate to patients experiencing an event of IPD, defined in the model as meningitis or bacteraemia, or pneumonia with or without hospital contact. When patients experience a pneumococcal disease, they can either die or recover. In addition, patients can experience all three types of pneumococcal disease within one cycle. Fig. 1 provides an overview of the model structure and the possible transitions.

The study population was stratified by age and risk of pneumococcal diseases. The age groups included in the model were adults aged 18-49 years, 50-64 years, 65-74 years, 75-84 years, and 85-99 years. Based on 2019 data from Statistics Norway, the number of people in each age group was estimated. 2019 data were used because they represent the most recent data not influenced by the COVID-19 pandemic [21]. Due to the initiatives introduced during the COVID-19 pandemic, such as lockdowns and social distancing, fewer cases of pneumococcal diseases were observed; however, the number of disease cases is expected to return to pre-pandemic levels. This is already apparent in Norwegian IPD data: the total number of IPD cases in 2018 and 2019 was 582 and 600, respectively, whereas during the pandemic, in 2020 and 2021, it was 294 and 318. In 2022, the total number of IPD cases was 517 and had thus almost returned to the level seen in 2018 and 2019 [22]. In each age group, the population was stratified as being at low risk, at risk or at high risk of pneumococcal disease. The shares of people in each age and risk group were determined using 10th edition International Classification of Diseases (ICD-10) codes to identify at-risk and high-risk diseases, applied to data from the Norwegian Patient Registry (NPR) for anyone with a corresponding diagnosis between 2015 and 2019 [23]. The at-risk group is, as stated, a combination of patients at both moderate and high risk of pneumococcal disease for whom the pneumococcal vaccines are currently not reimbursed. People at low risk constituted the remaining share of the population in each age group. The share included in each age and risk group is provided in Table 1. In the model, it is possible for the population to change risk group to a higher level of risk (see Fig. 1). A closed cohort was used, and the model therefore did not include a new generation in each cycle.
The model included a lifetime time horizon to capture the costs and effects of the different vaccination strategies. The time horizon is based on the model having an upper age limit of 99 years. As the model used a closed cohort, the length of the lifetime time horizon is restricted to the age of the cohort at cycle 0, implying a time horizon of 81 years (99 years minus 18 years).

[Table 1 note: Risk groups refer to an individual's risk of pneumococcal diseases based on the presence of underlying chronic conditions. Low risk refers to adults with no underlying chronic conditions. At risk refers to immunocompetent adults with an underlying chronic condition and immunocompromised adults who are not currently included in the Norwegian blue prescription ("blå resept"). High risk refers to immunocompromised adults who are currently included in the blue prescription. Sources: [21,23]]

[Fig. 1: Model structure]

Based on Norwegian guidelines, the discount rate was 4% in model years 0-39, 3% in model years 40-74, and 2% from model year 75 onwards [24]. The model applied a healthcare sector perspective, meaning that only costs accrued by the public healthcare sector were included.

Incidence and mortality of IPD and pneumonia

The disease incidences for meningitis and pneumonia with or without hospital contact are based on data from the NPR and the Norwegian Registry for Primary Health Care and were estimated per 100,000 [23,25]. Based on the available data, the incidence of bacteraemia was calculated as the incidence of IPD minus the incidence of meningitis. The incidence of IPD cases was identified through the Norwegian surveillance system for communicable diseases registry [22]. Incidence inputs are presented in Table 2.

The mortality inputs included both the mortality rate of the general Norwegian population and the case fatality of IPD and pneumonia. As there are no Norwegian data on the case fatality of IPD and pneumonia, these inputs were based on Danish data from the Danish National Patient Registry and included as the average case fatality in the years 2017 and 2018 [26]; it was thus assumed that case fatality is similar across the two countries. Mortality is specified per 100 people for the general population and per 100 cases of disease for IPD and pneumonia. In addition, it was assumed that anyone who died would be hospitalised beforehand; thus, no mortality was assumed for pneumonia without hospitalisation. Mortality inputs for the general population and the case fatality are presented in Table 2.

Vaccine coverage, efficacy and waning

Consistent with the study by Nymark et al., the vaccine coverage of both PCV20 and PPV23 was assumed to be 75% of the population [5]. Therefore, in the first cycle of the model, 75% of the 18-64-year-olds at risk of pneumococcal diseases and 75% of the population aged 65 years and older at either low risk or at risk of pneumococcal diseases were modelled to receive vaccination.

As previously confirmed by Essink et al., the immune response induced by PCV20 is non-inferior to PCV13 for all 13 serotypes [12], so the vaccine efficacy and waning of PCV20 used in the model were assumed to be equivalent to those of PCV13. PCV20 vaccine efficacy and waning were therefore based on data from the CAPiTA study, which investigated people aged 65 years and older [17]. For persons aged 65 years or older at low risk or at risk, the initial PCV20 vaccine efficacy was assumed to be 45% for pneumonia and 75% for IPD.
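To make the tiered discounting described above concrete, the sketch below shows one way the Norwegian schedule (4% in model years 0-39, 3% in years 40-74, 2% from year 75) can be turned into a cumulative discount factor over the 81-year horizon. It is a minimal illustration, not code from the authors' model.

```python
def discount_factor(year: int) -> float:
    """Cumulative discount factor for a cost or QALY accruing in a given
    model year, using tiered rates: 4% (years 0-39), 3% (40-74), 2% (75+)."""
    factor = 1.0
    for y in range(year):
        rate = 0.04 if y < 40 else 0.03 if y < 75 else 0.02
        factor /= 1.0 + rate
    return factor

# Example: present value of EUR 1,000 accruing in model year 50
print(f"{1000 * discount_factor(50):.2f} EUR")
```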
Using data from Mangen et al. on the age-specific relative changes in vaccine efficacy, the vaccine efficacy was extrapolated to people aged 50-64 years [29]. The initial vaccine efficacy of people aged 18-49 years was assumed to be the same as that of persons aged 50-64 years. Based on data and post-hoc analyses of the CAPiTA study, it was assumed that the vaccine efficacy of PCV20 did not decrease within five years of vaccination [17,30]. After five years, annual waning of PCV20 was included in the model based on estimates by Mangen et al., who specified an annual decline in vaccine efficacy of 5% in years 6-10 and 10% in years 11-16; after year 16, no vaccine efficacy was assumed [29].

The vaccine efficacy of PPV23 against IPD used in the model was based on data from Public Health England identified through a study by Djennad et al. [19]. To estimate the vaccine efficacy for all age groups included in the model, a logarithmic curve was fitted to the data available from Djennad et al. for the age groups 65-74 years, 75-84 years, and 85-99 years. Waning of PPV23 against IPD was also estimated based on Djennad et al. [19], with a linear decline to 76.2% of initial vaccine efficacy by year 5, followed by a linear decline to no efficacy by year 10. It was thus assumed that after 10 years PPV23 had no vaccine efficacy against IPD, which is supported by Berild et al. [31]. As multiple studies have documented a lack of vaccine efficacy against pneumonia, it was assumed that PPV23 had no effect against non-bacteraemic pneumonia [32][33][34][35][36]. An overview of the vaccine efficacy of both PPV23 and PCV20 is presented in Table 3.

For the vaccine efficacy of both PCV20 and PPV23, an adjustment was performed based on the definition of risk groups in the model, where the high-risk group comprises only patients who currently hold a blue prescription (stem cell-transplanted, HIV-positive or without spleen function), meaning that the at-risk group includes immunocompromised patients who would usually be classified as high risk. Therefore, a weight representing the proportion of people in the at-risk group who would typically be categorised as high risk (20%) was multiplied by the high-risk vaccine efficacy, and the remaining proportion (80%) was multiplied by the vaccine efficacy for at-risk adults.

The vaccine serotype coverage, presented in Table 4, refers to the percentage of IPD and pneumonia cases that the vaccines protect against. Based on 2019 data from the European Centre for Disease Prevention and Control (ECDC), it was possible to identify Norwegian data on serotype coverage against IPD in the age groups < 1 year, 1-4 years, and ≥ 65 years [37]. No data for the age group 5-64 years were presented; however, as the total number of IPD cases was provided, it was possible to calculate the serotype coverage for this group. As it was not possible to stratify the data further into the age and risk groups used in the model, the ECDC data for 5-64-year-olds were used for the age groups 18-49 years and 50-64 years, and the ECDC data for ≥ 65-year-olds were used for the model's age groups of 65-74 years, 75-84 years, and 85-99 years. The vaccine serotype coverage for pneumonia cases was assumed to be the same as that for IPD.
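As an illustration of the waning and risk-weighting assumptions just described, the sketch below builds a year-by-year PCV20 efficacy multiplier (full efficacy through year 5, a 5% annual decline in years 6-10, 10% in years 11-16, and zero afterwards) and applies the 20%/80% high-risk/at-risk weighting. This is a minimal reading of the text, not the authors' code, and the multiplicative compounding convention is an assumption.

```python
def pcv20_waning_multiplier(years_since_vaccination: int) -> float:
    """Fraction of initial PCV20 efficacy remaining, per the assumed schedule:
    no waning through year 5, 5%/year decline in years 6-10,
    10%/year decline in years 11-16, no efficacy after year 16."""
    remaining = 1.0
    for year in range(1, years_since_vaccination + 1):
        if year > 16:
            return 0.0
        decline = 0.0 if year <= 5 else 0.05 if year <= 10 else 0.10
        remaining *= 1.0 - decline
    return remaining

def weighted_at_risk_efficacy(ve_at_risk: float, ve_high_risk: float) -> float:
    """Efficacy for the model's at-risk group, weighting the ~20% of the
    group that would usually be classed as high risk."""
    return 0.2 * ve_high_risk + 0.8 * ve_at_risk

# Example: initial IPD efficacy of 75%, eight years after vaccination
print(0.75 * pcv20_waning_multiplier(8))
```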
The IPD serotype distribution is applied only to pneumonia cases thought to be caused by S. pneumoniae, which is assumed to account for 30% of all-cause pneumonia [5]. Therefore, the IPD vaccine serotype coverage was multiplied by 30% to estimate the serotype coverage against pneumonia.

Herd immunity

The model included the effect of herd immunity from the childhood vaccination programme on IPD and pneumonia incidence. The effect of herd immunity was estimated based on a report by the Norwegian Institute of Public Health, which provided the IPD incidence before and after the introduction of PCVs in the childhood vaccination programme in 2006 for the population aged 65 years and older [38]. Before introduction (based on the years 2004 and 2005), the IPD incidence was 75.6 per 100,000 persons, which was reduced to 37.8 per 100,000 persons by 2017. Based on these estimates, it was possible to determine the annual reduction in IPD cases attributable to herd immunity over the 12.5 years (from mid-2004 to 2017), which was found to be 3.02%. As the report by the Norwegian Institute of Public Health covered only the population aged 65 years and older, it was assumed that the estimated effect of herd immunity also applied to the population aged 18 to 64 years. In addition, the report included only the effect of herd immunity on IPD and not on pneumonia; it was thus assumed that the IPD herd immunity also applied to pneumonia. However, as S. pneumoniae according to Nymark et al. accounts for only 30% of all pneumonias [5], the effect of herd immunity against pneumonia amounts to only 30% of the yearly reduction in IPD cases of 3.02%, equal to a yearly reduction in pneumonia cases of 0.91%.

Health-related quality of life

There are no health-related quality of life (HRQoL) data for the general Norwegian population stratified by both age and risk group. Therefore, this model used HRQoL data from a study by Ara and Brazier, which investigated utility values in an English population based on age and medical history [39]. The age-specific utility data for people at low risk in the model were based on those with no medical history, whereas a medical history of diabetes, heart attack, heart disease or hypertension was used for the at-risk population. The utility data for the high-risk population were based on individuals with cancer. Table 5 presents the utility values by age and risk group.

In the model, people who experience an event of IPD or pneumonia with hospital contact have a reduction in annual HRQoL of 0.13, regardless of age and risk group. This reduction is based on a study by Mangen et al., who investigated the quality of life of patients aged 65 years and older hospitalised with pneumonia in the Netherlands [40]. An event of pneumonia without hospital contact was associated with an annual reduction in HRQoL of 0.004, based on a study by Melegaro and Edmunds [41].
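To show how these disutility inputs enter a single model cycle, the following sketch subtracts the event disutilities quoted above (0.13 for IPD or pneumonia with hospital contact, 0.004 for pneumonia without hospital contact) from a baseline utility. The baseline value is a placeholder, and the per-cycle calculation is a simplification rather than the authors' implementation.

```python
EVENT_DISUTILITY = {
    "ipd": 0.13,                    # IPD event (meningitis or bacteraemia)
    "pneumonia_hospital": 0.13,     # pneumonia with hospital contact
    "pneumonia_outpatient": 0.004,  # pneumonia without hospital contact
}

def cycle_qalys(baseline_utility: float, events: list[str]) -> float:
    """QALYs accrued in one one-year cycle: the age/risk baseline utility
    minus the annual disutility of any pneumococcal events experienced."""
    return baseline_utility - sum(EVENT_DISUTILITY[e] for e in events)

# Placeholder baseline utility of 0.80 for an at-risk adult:
print(cycle_qalys(0.80, ["pneumonia_hospital"]))  # -> 0.67
```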
Costs

As a healthcare sector perspective was applied in the model, the analysis included costs associated with vaccination and with treatment of IPD and pneumonia. All costs are presented in euros (EUR) using an exchange rate of 0.0966 from Norwegian kroner (NOK) to EUR, based on the average exchange rate over the period ending 6 December 2022 [42]. All costs are presented at the 2022 price level. Costs included in the model are presented in Table 6.

The vaccination cost included both the price of the vaccine and administration costs. The vaccine prices were identified through Legemiddelsok.no in October 2022 [14], based on the maximum pharmacy retail price excluding value added tax (VAT), to comply with guidelines from the Norwegian Medicines Agency [24]. Administration costs included the fee-for-service of a general practitioner (GP) consultation, calculated by multiplying the remuneration amount from "Normaltariffen" by two to comply with guidelines from the Norwegian Medicines Agency [24], and the cost of one subcutaneous injection. Because of the closed cohort, all vaccination costs occurred at the beginning of the model. All treatment of IPD and pneumonia with hospital contact was assumed to occur within the hospital sector, assuming no outpatient care. All costs associated with inpatient care were identified using the Norwegian diagnosis-related group (DRG) tariffs [43]. Patients experiencing events of pneumonia without hospital contact received outpatient care, for which one GP visit and the cost of antibiotics were included, based on the studies by Nymark et al. and Wolff et al. [5,44].

Sensitivity analyses

The uncertainty of the model was investigated in one-way sensitivity analyses (OWSA), scenario analyses and a probabilistic sensitivity analysis (PSA). In the OWSA, the uncertainty of input parameters and methodological assumptions was evaluated by varying each parameter one at a time, using ± 20% for inputs for disease incidence, mortality, vaccine efficacy of PCV20 and PPV23, and costs. For the remaining inputs of utility, disutility, the proportion of pneumonia due to S. pneumoniae and the proportion of IPD due to vaccine serotypes, the uncertainty was investigated with ± 10%.

Multiple scenario analyses were conducted to evaluate the impact on the base case results of alternative inputs for the methodological assumptions regarding the vaccine efficacy of PPV23 against pneumonia, the vaccine coverage, the use of the IPD serotype distribution for pneumonia, the length of the time horizon (5 and 10 years), the discount rate (0% and 7%), the disutility of IPD and the choice of comparator. The scenario analysis on the vaccine efficacy of PPV23 against pneumonia used inputs from a study by Lawrence et al., who found an efficacy of 25.7% among persons aged 16-74 years and 4.7% among persons aged 75 years and older [45]. The vaccine coverage was assumed to be 75% based on Nymark et al.; however, this assumption was associated with uncertainty, as data from the Norwegian Immunisation Registry SYSVAK and the NPR indicated a lower pneumococcal vaccine coverage [23,25]. Therefore, scenario analyses with the vaccine coverage set to 25%, 50% and 100% were conducted. Scenario analyses were also performed to investigate the impact of serotype distribution and PCV20 coverage on the number of pneumonia cases requiring hospitalisation. Data regarding the percentage of pneumonia caused by serotypes of S. pneumoniae covered by PCV20 used for these sensitivity analyses were based on a Danish study by Benfield et al. and a Swedish study by Theilacker et al. [46,47]. The serotype distributions of Benfield et al. and Theilacker et al. were adjusted by the 30% of pneumonia cases that are caused by S. pneumoniae. Accordingly, the PCV20 serotype coverage was changed to 15.2% for patients aged 65 years and older and 20.7% for patients aged 18-64 years using Theilacker et al., and to 16.9% using Benfield et al. [46,47].
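Returning to the OWSA described at the start of this section, the sketch below shows the mechanics of varying one input ± 20% while holding the others at base case and recording the resulting ICER. The `toy_model` function and all of its parameter values are invented stand-ins for the published Markov model; only the ± 20% procedure mirrors the text.

```python
def toy_model(p: dict) -> tuple[float, float]:
    """Stand-in for the Markov model: returns (incremental cost, incremental
    QALYs) for PCV20 vs PPV23. Purely illustrative relationships."""
    d_cost = p["price_diff"] * p["vaccinated"] - p["hosp_cost"] * p["cases_averted"]
    d_qaly = p["qaly_per_case_averted"] * p["cases_averted"]
    return d_cost, d_qaly

def owsa_one_param(base: dict, param: str, spread: float = 0.20):
    """Vary one input by +/- spread, holding the rest at base case, and
    return the ICER (EUR per QALY) at the low and high bound."""
    icers = []
    for factor in (1 - spread, 1 + spread):
        p = dict(base)
        p[param] *= factor
        d_cost, d_qaly = toy_model(p)
        icers.append(d_cost / d_qaly)
    return icers

base = {"price_diff": 50.0, "vaccinated": 1_000_000,
        "hosp_cost": 5_000.0, "cases_averted": 25_000,
        "qaly_per_case_averted": 0.30}
print(owsa_one_param(base, "hosp_cost"))  # hospital cost of pneumonia, a key driver
```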
A scenario analysis in which the disutility of IPD was increased by 100%, to capture that IPD is assumed to constitute more severe illness than pneumonia with hospital contact, was also conducted. In addition, a scenario analysis comparing PCV20 with no vaccination was included to compare PCV20 to the current standard practice of vaccination against pneumococcal diseases in Norway. Finally, a scenario assuming linear waning to 0% between years 5 and 15 for PCV20 and a scenario setting the herd immunity effect to 0% were included.

To assess the joint uncertainty of the input parameters, a probabilistic sensitivity analysis (PSA) with 1,000 iterations was performed. A normal distribution was used for input parameters related to disease incidence, mortality and the vaccine prices. Beta distributions were used for parameters of utility, disutility, vaccine efficacy, the proportion of pneumonia due to S. pneumoniae and the proportion of IPD vaccine serotypes. Costs related to treatment were included in the PSA using a gamma distribution. Standard deviations for the applied distributions were derived.

Results

Results of the base case analysis are shown in Table 7. The results indicate that PCV20 is associated with fewer cases of pneumococcal diseases and fewer associated deaths compared to PPV23 over a lifetime time horizon of 81 years. PCV20 resulted in 1,539 fewer cases of bacteraemia, 98 fewer cases of meningitis, 26,867 fewer cases of pneumonia with hospital contact, and 30,149 fewer cases of pneumonia without hospital contact compared to vaccination with PPV23. Similarly, PCV20 resulted in 330 and 1,055 fewer deaths due to IPD and pneumonia, respectively. In addition, the life year gain was 7,584 higher and the QALY gain was 7,966 higher for PCV20 than for PPV23.

PCV20 is a more costly vaccine than PPV23; therefore, the cost of vaccination was EUR 67,200,826 higher when vaccinating with PCV20. When the costs associated with treatment of IPD and pneumonia in the hospital and primary sectors were investigated, PCV20 resulted in costs that were EUR 139,712,512 lower in the hospital sector and EUR 1,095,659 lower in the primary sector than PPV23. This resulted in PCV20 having a total cost that was EUR 73,607,345 lower than PPV23.

As PCV20 resulted in a higher QALY gain at a lower total cost compared to PPV23, the estimated incremental cost-effectiveness ratio (ICER) was negative (-9,240 EUR per QALY gained), indicating that PCV20 is the dominant vaccine strategy compared to PPV23.

Sensitivity analyses

The OWSA showed that the results were robust to changes in the input parameters when they were varied one at a time; in all OWSAs, PCV20 remained the dominant vaccine strategy compared to PPV23. A tornado diagram of the 12 parameters with the highest impact on the result is shown in Fig. 2. The scenario analyses showed that despite using alternative inputs for the vaccine coverage of the population, including a vaccine efficacy of PPV23 against pneumonia, changing the time horizon, using different discount rates, increasing the disutility associated with IPD, and using different inputs for PCV20's waning and for herd immunity, the results were robust. However, when the time horizon was changed to five years or PCV20 was compared to no vaccine, the results showed that PCV20 was associated with a greater health gain than PPV23 or no vaccine but at an additional cost, resulting in positive ICER estimates of EUR 1,437 per QALY gained and EUR 2,292 per QALY gained, respectively; see Table 8.
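The dominance result can be verified directly from the incremental figures quoted above, since the ICER is simply the incremental cost divided by the incremental QALYs:

```python
delta_cost = -73_607_345  # EUR: PCV20 total cost minus PPV23 total cost
delta_qaly = 7_966        # QALYs gained with PCV20 over PPV23

icer = delta_cost / delta_qaly
print(f"ICER = {icer:,.0f} EUR per QALY gained")  # ~ -9,240: PCV20 dominant
```

A negative ICER produced by lower costs and higher QALYs is what 'dominance' means here; the same arithmetic applied to the PSA averages (EUR −73,198,717 / 7,990 QALYs) reproduces the −9,161 figure reported below.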
Results of the PSA are illustrated in a cost-effectiveness plane in Fig. 3. The plane shows that in all 1,000 iterations performed during the PSA, PCV20 had a greater QALY gain at a lower cost compared to PPV23; thus, PCV20 remained the dominant strategy in every iteration. Based on the 1,000 iterations, the average incremental QALYs were estimated at 7,990 and the average incremental costs at EUR −73,198,717, resulting in an ICER of EUR −9,161 per QALY gained.

[Table 7 note: The table shows the clinical and economic outcomes of the base case analysis for each vaccination strategy and the increments between the strategies. Both the accumulated and per-person QALYs, life years and total costs are presented. QALY: quality-adjusted life years; ICER: incremental cost-effectiveness ratio; IPD: invasive pneumococcal diseases; PCV20: 20-valent pneumococcal conjugate vaccine; PPV23: 23-valent pneumococcal polysaccharide vaccine]

Discussion

This study investigated the health benefits and costs of vaccinating Norwegian adults aged 18 years and older at risk of pneumococcal diseases, and adults aged 65 years and older at low risk or at risk of pneumococcal diseases, with PCV20 compared to PPV23. The results showed that PCV20 was associated with a higher QALY gain and lower total costs than PPV23, and thus PCV20 constituted the dominant vaccine strategy. The robustness of the results was evaluated through multiple sensitivity analyses, the majority of which showed that PCV20 remained the dominant vaccine strategy. Similar results of PCV20 being the dominant vaccine strategy compared to PPV23 were identified in an English setting by Mendes et al., who evaluated the cost-effectiveness in adults aged 18 to 64 years with underlying conditions and all adults aged 65 to 99 years [48]. Thus, even though the serotype coverage of PPV23 is broader (four more serotypes than PCV20), the outcomes are better for PCV20, as PCV20's efficacy is expected to be higher, its duration of protection longer, and it is expected to confer protection against non-bacteraemic pneumonia similar to PCV13. These attributes outweigh the serotype gap among adults in Norway. Indeed, while there is PPV23-type disease that is not preventable by PCV20, the majority of it is non-bacteraemic pneumonia, which is also not preventable by PPV23.

However, when the time horizon was reduced to five years or when PCV20 was compared to no vaccine, the analysis resulted in positive ICERs, indicating that PCV20 is associated with a greater gain in health at a higher cost than the comparator. As there is no official cost-effectiveness threshold in Norway, it is not possible to determine which of the vaccine strategies would be deemed cost-effective. Despite this, it should be emphasised that the ICERs were very low (EUR 1,437 and EUR 2,292 per QALY gained) in both the scenario with a short time horizon and the comparison with no vaccine.

The cost-effectiveness of implementing a universal pneumococcal vaccination programme of PPV23 for older adults in Norway has previously been investigated by Nymark et al., who found that a universal vaccination programme was expected to be cost-effective among those older than 75 years (ICER in the lower end of the cost-effectiveness threshold range). Among those older than 65 years, a universal vaccination programme was likely to be cost-effective according to Nymark et al.
[5]. Therefore, it should be emphasised that implementing PCV20 in a universal pneumococcal vaccination programme in Norway could be favourable, as the current study found PCV20 to be cost-effective compared to PPV23. The cost-effectiveness of implementing PCV20 in a universal vaccination programme was also investigated in a scenario analysis that compared PCV20 to no vaccine, the current standard practice in Norway. This scenario analysis showed that PCV20 was associated with an additional cost of EUR 2,292 per extra QALY gained compared to no vaccine. As this is a low ICER, it is possible that PCV20 would be deemed cost-effective. Notably, an ICER of NOK 275,000 (EUR 26,572) per QALY gained was considered cost-effective even at the lowest level of disease severity by a task force formed by the Norwegian Ministry of Health and Welfare in 2015 [49], well above the ICER identified in this study. Further, in 2010 the Norwegian Medicines Agency assessed that HPV vaccination of 14-16-year-old girls was cost-effective, with an estimated ICER of NOK 48,000 (EUR 4,638) per QALY [50].

Differences in the efficacy and effectiveness of PPV23 and PCV20 have been observed, especially with regard to protection against pneumonia, for which the effectiveness of PCV vaccines has been estimated to be as high as 72.8%, whereas PPV23 has shown little or no effect, offering only 2-3% protection [13,[51][52][53]. For the base case analysis, no vaccine efficacy of PPV23 against pneumonia was assumed. Therefore, the higher vaccine efficacy of PCV20 against pneumonia used in this model results in better avoidance of pneumonia cases with PCV20. Using different studies of the vaccine efficacy of PPV23 against pneumonia could yield different cost-effectiveness results. However, when the vaccine efficacy of PPV23 against pneumonia was changed to that found by Lawrence et al. [45], the re-evaluated cost-effectiveness showed that PCV20 remained dominant.

[Table 8 note: The table shows the results of the scenario analyses according to the total incremental costs and QALYs between PCV20 and PPV23 and the calculated ICER]

The assumption that the serotype distribution of pneumonia is identical to that of IPD was investigated in a series of sensitivity analyses. The purpose was to investigate the impact of the serotype distribution and serotype coverage of PCV20 on the number of pneumonia cases requiring hospitalisation compared to the base case. When the serotype coverage was changed to both a higher and a lower level than the base case, PCV20 remained dominant; thus, this assumption did not substantially impact the cost-effectiveness results.

The percentage of cases due to vaccine serotypes was assumed to be constant over time, which is a limitation, as paediatric introduction of PCV15 or higher-valency PCVs not yet available may lead to serotype replacement. Predicting future serotype replacement is challenging and was not attempted in the current analysis. However, only replacement by the four PPV23 serotypes not contained in PCV20 would be expected to affect the ICER in the comparison against PPV23.
The assumption of 75% vaccine coverage was based on Nymark et al., who assumed a 75% vaccination coverage among adults aged 65 years [5]. Therefore, it is possible that the vaccine coverage among adults aged 18-64 years is overestimated. In general, the vaccine coverage used in this analysis was high, as pneumococcal vaccine uptake among adults in Norway is approximately 15% [15], indicating that the reality in Norway differs considerably from the base case analysis of this model, in which a 75% vaccine coverage was assumed. The use of a higher vaccine coverage is also supported by the current vaccine coverage of COVID-19 and influenza in Norway, which is above 90% and 62%, respectively [54]. As the vaccine coverage is the same for PCV20 and PPV23, using a higher coverage will affect the results by producing both higher health gains and higher costs. When the impact of vaccine coverage was investigated through multiple scenario analyses, the ICER remained negative, with PCV20 being the dominant vaccine strategy.

Inputs regarding the disease incidence and the mortality rate of the general Norwegian population at low risk were based on real-world evidence from Statistics Norway and Norwegian patient registries. When registry data are used, it is important to emphasise that they reflect only what has been reported; the registry data may therefore deviate from the true figures. Nevertheless, the use of registry data is a strength, as it ensures that the model reflects a Norwegian setting. As Norwegian data were not available for all inputs, it was necessary to use foreign data, which can be a limitation, as the transferability of data across countries can be questionable due to differences in the delivery of healthcare and the demographics of the population [55]. The mortality data were based on Danish real-world evidence, used because of the generally similar populations of the Nordic countries. When Danish mortality data for people at low risk were compared to those of Norway, the data were found to correspond well. However, as risk groups are defined differently in Norway and Denmark (cf. the Danish adaptation of the model [20]), the Danish mortality rates applied in the Norwegian model adaptation could result in either higher or lower mortality estimates.

The utility data used in the model were based on the study by Ara and Brazier and an English population [39]. This study was used to include utility values stratified by both age and risk group, as no Norwegian utility data stratified by risk group are available. The use of utility values from Ara and Brazier was validated by comparing the values of the no-risk group with the Norwegian age-specific utility values used by the Norwegian Medicines Agency [24]. A great similarity between the utility values of the age groups was found; however, the Norwegian utility values tended to be slightly lower than the English values from Ara and Brazier, e.g., the population aged 71-80 years in Norway has a utility value of 0.808. This difference could be explained by the Norwegian utility values representing the general population across all risk groups; that is, the Norwegian utility values represent a weighted average across all risk groups.
The utility values of the high-risk group were, as stated, based on people with cancer in Ara and Brazier [39]. This does not match the description of high-risk patients in the Norwegian adaptation, as the high-risk group comprises only patients who currently hold a blue prescription. The decision to base high-risk utility values on cancer reflects the available literature from Ara and Brazier, in which cancer represents the best-documented disease usually categorised as high risk, even though it is not included as high risk in the current model. This is not expected to impact the results, as the main difference in utility values between the risk groups is seen between low risk and at risk, not between at risk and high risk, indicating that using cancer as a proxy for high risk will not affect the model outcomes. In addition, the at-risk utility values were based on only a few diseases, i.e., diabetes, heart attack, hypertension, and other heart diseases. The at-risk utility values are therefore estimated from a limited number of diseases categorised as at risk and may be over- or underestimated.

The model included conservative assumptions for both cost and utility inputs, all of which could influence the results by either overestimating or underestimating the cost-effectiveness of PCV20 and were therefore assessed in sensitivity analyses. One conservative assumption was that the disutility of IPD was the same as that of pneumonia with hospital contact. In general, IPD would be expected to be associated with a higher level of disutility than pneumonia with hospital contact, as IPD is expected to constitute more severe disease. This assumption was investigated in a sensitivity analysis in which the disutility of IPD was increased by 100%; the analysis showed that PCV20 remained the dominant vaccine strategy. Therefore, the decision to use a conservative input for IPD disutility did not influence the results of the analysis. Another conservative assumption was the exclusion of revaccination with PPV23. According to the guidelines of the Norwegian Institute of Public Health, people should be revaccinated with PPV23 every six years, whereas no revaccination with PCV20 is needed [11]. The exclusion of PPV23 revaccination therefore resulted in a lower vaccination cost for PPV23; its inclusion would only have increased the cost-effectiveness of PCV20.

The model used a healthcare sector perspective; therefore, the analysis did not include costs of patient time and transportation or any indirect costs from productivity loss. However, according to guidelines from the Norwegian Medicines Agency, an extended health-service perspective should be used, in which both patient time and transportation are included [24]. These costs were excluded owing to a lack of data on the number of contacts patients have and on travel time.
Conclusion

The results of this study showed that vaccinating the Norwegian population aged 18 to 99 years at risk of pneumococcal diseases, and the population aged 65 years and older at low risk, with PCV20 was cost-effective compared to PPV23. The anticipated reduction in cases of pneumococcal disease and deaths would offset the incremental cost of PCV20 through reduced treatment costs, and this result of PCV20 being the dominant vaccine strategy remained consistent through numerous sensitivity analyses; in the few scenarios that indicated otherwise, PCV20 would incur only a small incremental cost for the health benefits accrued. Broadening access to adult pneumococcal conjugate vaccines beyond the highest-risk patients in Norway should be considered.

[Fig. 2: Tornado diagram showing the 12 input parameters with the highest impact on the incremental cost-effectiveness ratio. NBP: non-bacteraemic pneumonia]
[Fig. 3: Results of the probabilistic sensitivity analysis illustrated in a cost-effectiveness plane]
[Table 1: Number of people in each age group and their distribution into risk groups]
[Table 2: IPD and pneumonia incidence and mortality according to age and risk group]
[Table 3: Vaccine efficacy against IPD and pneumonia. The table shows the vaccine efficacy and waning of PPV23 and PCV20 against IPD and pneumonia between years 1 and 16 after vaccination. In the base case analysis, PPV23 was assumed not to have any efficacy against pneumonia. Confidence intervals are provided in parentheses. Sources: [17,19,29]]
[Table 4: Vaccine serotype coverage of PCV20 and PPV23 against IPD, based on data from 2019. In the base case analysis, the vaccine serotype coverage against pneumonia was assumed to be identical to that against IPD. Source: [35]]
[Table 5: Utility values based on age and risk group and disutility associated with events of IPD and pneumonia]
[Table 6: Vaccination and treatment costs. DRG: diagnosis-related groups]
[Table 7: Base case results]
[Table 8: Results of scenario analyses]
Implementation of an active instructional design for teaching the concepts of current, voltage and resistance

In the present work we show the implementation of a learning sequence based on an active-learning methodology for teaching physics. The proposal aims to promote better learning in high-school students through the use of a comic book combined with different low-cost experimental activities for teaching the electrical concepts of current, resistance and voltage. We consider that this kind of strategy can be easily extrapolated to higher education levels, such as engineering at the college/university level, and to other disciplines of science. To evaluate this proposal, we used some conceptual questions from the Electric Circuits Concept Evaluation survey developed by Sokoloff; the results from this survey were analysed with the normalized conceptual gain proposed by Hake and the concentration factor proposed by Bao and Redish, to identify the effectiveness of the methodology and the conceptual models that the students presented before and after the instruction, respectively. We found that this methodology was more effective than the implementation of traditional lectures alone. We consider that these results cannot be generalized, but they gave us the opportunity to view many important approaches in physics education; finally, we will continue to apply the same experiment with more students, at the same and upper levels of education, to confirm and validate the effectiveness of this methodological proposal.

Introduction

The innovation and implementation of new educational practices require constant reflection on the elements and actors involved in the learning process [1]. On the one hand, at least three elements can be clearly identified. The first is the set of learning activities and materials that students can use during their instruction to facilitate the comprehension and understanding of part or all of a concept or procedure in the topic addressed. The second is the set of methodologies and learning sequences that can be implemented with our students, and how these will impact their learning process through the use of the first element, guided by a teacher. The third is the evaluation process, which measures progress not only at the end of instruction but throughout the learning process. These three elements, taken together, generate the learning process itself, based on how students are guided by a well-oriented methodology that includes materials and activities with their corresponding evaluation evidence. Particularly in hard disciplines like engineering and science, these elements require a wide variety of learning materials and activities because of the learning styles and motivations present in the students. A well-oriented methodology is necessary because the topics in these disciplines are aimed at long-term learning: the concepts covered can impact students' future academic and personal lives. Evaluation must be a tool that gives the teacher a general view of the students and of how the whole learning process is taking place in the learning environment, whether that environment is the classroom or a virtual one. On the other hand, the actors involved in the learning process are the students, the teachers and researchers in education.
The authorities and the family can also be considered actors participating in the learning process, but the three actors mentioned first are the most directly involved.

Currently, the curricula of many science and engineering disciplines encourage the use of various types of learning and collaborative activities with students in the classroom at different educational levels. These let students carry out their learning process and promote a more dynamic interaction between the teacher and the students. As a consequence, they facilitate the integration of the knowledge that students acquire, yielding a better connection between formal and concrete knowledge and facilitating its transfer to daily activities. We must take care with the design of the learning sequences that students have to perform, because they can result in either good or poor learning.

The comic has been employed in various ways in the learning of different physics concepts [2]. One way is as a motivational element that introduces students to a physical concept [3]. A second way is the use of the comic as an educational tool serving as the bridge between the process of finding information and achieving meaningful learning [4]. Finally, the comic can be seen as a goal in itself, used to analyze and synthesize what the student has learned, that is, to evaluate the process of creation and to foster teamwork. In France, Jean-Pierre Petit created a series of comics called the "scientific comic" and founded the association "Knowledge without Frontiers" together with Gilles d'Agostini [5], with the aim of freely distributing their science comics. These have been translated into 28 languages to popularize several fields of science, especially physics and mathematics.

The aim of this work is to present the results of the design and implementation of a learning sequence that used a comic and different low-cost experiments for the learning of the electrical concepts of current, voltage and resistance. The learning sequence was implemented with a group of 10 high-school students aged between 15 and 18 years; a control group was introduced to compare the results of this proposal. The organization of this paper is as follows. In section II we present the description of the learning sequence designed and implemented with the students, and we describe the evaluation instrument used to measure the effectiveness of the learning sequence. In section III we present the results generated by the implementation of the learning sequence, together with some discussion of the most important findings. Finally, in section IV the conclusions are presented, analyzing the general results and future work for this kind of experiment in the fields of physics, science and engineering education.

Research Methodology Description

Orlaineta et al. [7] have designed and implemented a learning sequence based on active learning as a general methodology, which considers that students must work in a hands-on, minds-on environment. Their learning sequence encourages students to get involved in their learning process and promotes long-term learning through the activities the students have to do. Partial results of this proposal have already been reported briefly in [5], but in this contribution we present a complete analysis, ensuring a stronger conclusion.
Design of the Learning Sequence

Garcia and Sanchez [2] have proposed designing learning sequences with the students as the principal and final actors of the learning process; they consider that students must be guided from a concrete view of the world toward a formal view of the discipline. A six-step instructional design is proposed to achieve this goal, defined with some flexibility so that teachers can design more focused learning sequences. A brief description of the learning sequence follows.

- TO START. A very concrete question must be proposed; it is strongly recommended to formulate an everyday-life question that can introduce the concept and disrupt the students' minds. For this experiment the question was: Why does the light bulb not turn on?
- TO PREDICT, OBSERVE AND THINK. First, the students must make predictions to answer the question and write them down on a piece of paper. Then, low-cost experiments must be performed by the students in small teams (groups of three students are recommended), so that each student can observe the phenomena carefully and think about them individually. For this experiment we implemented the following low-cost experiments: communicating vessels for the potential-difference concept; a lemon used as a battery and a tickle on the tongue for the electric-current concept; and communicating vessels and the filament of a light bulb for the electric-current concept, with an intention different from that of the potential difference.
- INTRODUCING NEW IDEAS. The same small groups discuss their beliefs and thoughts about the low-cost experiments observed, generating a brainstorm within the groups. Afterwards they individually read the comic [9], which can be downloaded for free from [6], and draw a mental map of the concepts identified. Finally, they share their mental maps and discuss their ideas. At the end, the teacher gives a brief explanation of the concepts observed in the experiments.
- TO APPLY. Students perform experimental activities related to the operation of an electric circuit: they measure resistance, current and voltage, and they must identify how the brightness of a bulb changes when the voltage varies. As evidence of this activity, students share their ideas with their peers and discuss in small teams.
- TO SYNTHESIZE. Learning must become more specific with respect to the topics covered; at this stage, students must identify the behavior of an electric circuit, and the serial and parallel configurations of resistances must be clearly identified.
- TO EVALUATE. Students must complete the comic. This is an important stage, because the comic used during the instruction had some dialogues left blank so that the students could fill them in with the correct concepts, developing their creativity in writing down their conclusions and ideas about the physical phenomena.
Evaluation of the Learning Sequence

To evaluate the learning sequence, two evaluation instruments were designed. The first was a multiple-choice test composed of thirteen items, used as a pretest and as a posttest to evaluate the students' prior ideas and their final ideas after the implementation of the learning sequence, respectively. These items were disciplinary: ten were taken from the Electric Circuits Concept Evaluation (ECCE) developed by Sokoloff [11], selected because they cover the concepts addressed in the learning sequence, and the other three items were selected from the proposal of Periago and Bohigas [12] to analyze students' preconceptions.

To evaluate the students' perceptions of and attitudes toward the learning sequence, a semantic differential [13] was applied. This instrument let us identify, in a semi-qualitative way, how students felt about the implementation of the learning sequence. It uses a qualitative scale with descriptors such as "funny, difficult, useful, important, time-saving, pleasant" and their corresponding antonyms, which we afterwards converted into a discrete numerical scale.

Methodology implementation and study population description

The learning sequence was implemented at the 'Instituto de Educacion Media Superior, IEMS' of Mexico City; this institute works with a free-syllabus scheme based on a competency learning model. The subject in which the learning sequence was implemented was Physics II (Mechanics and Electromagnetism). In this research work, two groups were defined: group 'A', with 9 students, as the control group, and group 'B', with 10 students, as the experimental group; both groups comprised students between 15 and 17 years old. Two teachers were involved; one worked with the control group and the other with the experimental group.

The methodology implemented with group 'B' was defined in section 2.1; here we add that it was divided into 8 sessions of 90 minutes each. In the first session the teacher explained how the students would work throughout the learning sequence, and the pretest was applied, with a duration of 45 minutes. From session 2 to session 8 the learning sequence was implemented. Finally, two weeks later, the posttest and the semantic differential were applied to the students.

For group 'A', the methodology also comprised 8 sessions of 90 minutes each. In session 1 the teacher explained to the students how the work would be conducted during the rest of the sessions, and the pretest was applied with a duration of 45 minutes. The remaining sessions were lecture-style, with the teacher presenting the topics in an expository way. Two lab sessions were guided by the teacher, in which the students had to follow instructions and perform activities with little reasoning about the activities and results: they filled out some schematics, measured values of resistance, current and voltage, identified the colour code of a physical resistor, and covered the serial and parallel circuit arrangements. During their instruction the students used an exercise book containing a set of questions and problems related to the topics. Finally, two weeks later, the posttest and the semantic differential were applied.
Results and discussion

Once we collected the data from the assessment instruments (pretest, posttest, semantic differential and the students' evaluation work, i.e., the continuation of the comic), we proceeded to analyse them. For the conceptual normalized gain (CNG), group 'B' obtained a value of 0.43 (considered a medium gain), whereas group 'A' obtained a CNG of 0.20 (a low gain). With these results we can observe that the learning sequence implemented in group 'B' performed better than the instruction of group 'A'. We would like to mention that the students showed particular improvement on items concerning the electric current flowing through a point in a simple circuit, the equivalent resistance of a parallel circuit, the brightness of a light bulb connected to a simple and to a parallel circuit, and the way the electric current flows from a battery and passes through a light bulb.

To analyze the students' learning based on their test answers, we applied the concentration factor (CF) proposed by Bao and Redish [14]. In the control group's pretest we found that 90% of the students were in a low-gain region, where the conceptual models were distributed as follows: 30% of students had random conceptual models, another 30% had at least two different misconception models, and another 30% presented a single wrong conceptual model; the remaining 10% were in a middle-gain region. In the experimental group's pretest we found that 100% were in a low-gain region, distributed as follows: 46% of students had random conceptual models, another 46% presented two predominant erroneous conceptual models, and the remaining 8% presented a single wrong conceptual model.

In the control group's posttest we found that 46% stayed in the low-gain region, distributed as follows: 8% kept random conceptual models, 15% kept two different misconception models, and 23% presented a single wrong conceptual model. Of the rest of the group, 38% moved toward a middle-gain conceptual region and 16% of the class reached a high-level region of understanding. In the experimental group's posttest we found that 24% remained in the low-gain region, distributed as follows: 8% kept a random conceptual model, another 8% presented a two-misconceptions model, and the last 8% kept a single wrong conceptual model. It is important to notice that 53% of the students reached a middle-gain conceptual region, which indicates the effectiveness of the learning sequence, and 23% of the students demonstrated a high-level region of conceptual understanding, higher than that of the control group.

The semantic differential showed us important aspects of how students perceived the learning sequence. In general, they commented that the use of comics and low-cost experiments promoted self-confidence and understanding of the topics covered. The students also confirmed that these learning resources were fun and enjoyable compared to other courses in which the teacher only explains the topics and they have to solve some basic exercises. Time consumption and the difficulty of the activities performed were not perceived as negative issues in this learning sequence. While the learning sequence was performed, we could observe that students constructed clearer and better-oriented short ideas from the dialogues they had read previously.
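For readers unfamiliar with the two analysis tools, the sketch below implements the normalized gain of Hake, g = (post − pre)/(100 − pre), and the concentration factor of Bao and Redish, C = (√m/(√m − 1))·(√(Σnᵢ²)/N − 1/√m), where m is the number of answer choices, nᵢ is the number of students choosing option i, and N is the class size. The example data are invented for illustration.

```python
import math

def hake_gain(pre_pct: float, post_pct: float) -> float:
    """Normalized conceptual gain g = (post - pre) / (100 - pre)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

def concentration_factor(counts: list[int]) -> float:
    """Bao-Redish concentration factor for one multiple-choice item.
    counts[i] = number of students choosing option i.
    C ranges from 0 (answers spread uniformly) to 1 (everyone picks one option)."""
    m, n_total = len(counts), sum(counts)
    rm = math.sqrt(m)
    return rm / (rm - 1) * (math.sqrt(sum(n * n for n in counts)) / n_total - 1 / rm)

# Invented example: a 5-choice item answered by 10 students
print(hake_gain(pre_pct=35, post_pct=63))      # ~0.43, a medium gain in Hake's bands
print(concentration_factor([1, 4, 4, 1, 0]))   # answers concentrated on two options
```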
In addition, the students decided to reuse the same vignettes taken from the comic because they did not feel comfortable creating new ones. Once the comics were completed, we observed that the students had retained the concepts seen in the low-cost experiments and used them to explain and conclude the comic, showing a good understanding of the concepts, appropriate scientific language and a clear presentation of the physical ideas.

Conclusions

In this study we can see that the use of an active-learning methodology and a well-designed learning sequence can lead students to a good comprehension of different physical concepts, such as the electrical circuit concepts in this particular case. Following the ideas proposed by Garcia and Sanchez [2], we could see that with a well-defined learning sequence, students can learn and understand different topics within a well-oriented scheme. The design and implementation of the activities promoted in the students a better comprehension of physical concepts and greater self-confidence, and increased motivation in the learning process was obtained for topics that are in many cases considered difficult to learn and understand.
Artificial intelligence-based computer-aided system for knee osteoarthritis assessment increases experienced orthopaedic surgeons' agreement rate and accuracy

Purpose
The aims of this study were to (1) analyze the impact of an artificial intelligence (AI)-based computer system on the accuracy and agreement rate of board-certified orthopaedic surgeons (= senior readers) in detecting X-ray features indicative of knee OA, in comparison to unaided assessment, and (2) compare the results to those of senior residents (= junior readers).

Methods
One hundred and twenty-four unilateral knee X-rays from the OAI study were analyzed regarding Kellgren-Lawrence grade, joint space narrowing (JSN), sclerosis and osteophyte OARSI grade by computerized methods. Images were rated for these parameters by three senior readers using two modalities: plain X-ray (unaided) and X-ray presented alongside reports from a computer-assisted detection system (aided). After exclusion of nine images with incomplete annotation, intraclass correlations between readers were calculated for both modalities among 115 images, and reader performance was compared to ground truth (OAI consensus). Accuracy, sensitivity and specificity were also calculated, and the results were compared to those from a previous study on junior readers.

Results
With the aided modality, senior reader agreement rates for KL grade (2.0-fold), sclerosis (1.42-fold), JSN (1.37-fold) and osteophyte OARSI grades (3.33-fold) improved significantly. Reader specificity and accuracy increased significantly for all features when using the aided modality compared to the gold standard. On the other hand, sensitivity only increased for OA diagnosis, whereas it decreased (without statistical significance) for all other features. With aided analysis, senior readers reached similar agreement and accuracy rates as junior readers, with both surpassing AI performance.

Conclusion
The introduction of AI-based computer-aided assessment systems can increase the agreement rate and overall accuracy for knee OA diagnosis among board-certified orthopaedic surgeons. Thus, use of this software may improve the standard of care for knee OA detection and diagnosis in the future.

Level of evidence: Level II.

Introduction
Characterized by functional disability and chronic pain, knee osteoarthritis (OA) accounts for approximately one-fifth of the OA of all joints [1]. OA may be diagnosed clinically or radiologically, though symptoms may be present years prior to the first appearance of X-ray signs indicative of OA [2,3]. The most frequently used classification for knee OA is the Kellgren-Lawrence (KL) scale, which differentiates five stages (0-4) of OA severity [4]. However, the KL scale is criticized for its assumption of linear OA progression [5] as well as its differing interpretations leading to aberrant classification of especially low-grade knee OA [6]. Therefore, the Osteoarthritis Research Society International (OARSI) has developed an OA classification system based on an atlas with exemplary X-rays of distinct features [7]. While magnetic resonance imaging (MRI) has gained importance in the diagnosis of musculoskeletal pathologies, the advantages of plain X-ray over MRI include its prevalent availability and cost efficiency. However, early-stage OA signs are invisible on plain X-rays, as cartilage degeneration cannot be directly assessed, and OA constitutes a three-dimensional problem [8].
This is reflected by fair to moderate interobserver reliability for knee OA assessment using X-rays alone, with measured quadratic kappa values between 0.56 and 0.67 [9-11]. To overcome these issues, different solutions, including novel quantitative grading methods and automatic knee X-ray assessment tools, have been proposed [6,12-14]. Currently, artificial intelligence (AI) and deep learning are used in medical image classification related to the musculoskeletal system [15-17]. In this study, the authors aimed to characterize two aspects of the impact of a novel, AI-based image annotation tool with regard to changes in the radiological judgement of knee OA [18]. First, we analyzed the intra- and interobserver reliability of board-certified orthopaedic surgeons (herein termed senior readers) regarding knee OA grade assessment using either AI-annotated or plain X-rays. Second, we compared the outcome of senior readers to that of senior residents (termed junior readers) with aided analysis in terms of agreement rate and overall performance.

Methods
Three board-certified orthopaedic surgeons (= senior readers) from a single hospital rated X-ray images, and readings with and without AI aid were compared to a gold standard (OAI consensus). The findings of a similar previous study [19] involving senior residents (= junior readers) were used as a comparator for senior reader performance.

Data
In the present study, plain knee X-rays were acquired from a publicly available dataset by the Osteoarthritis Initiative (OAI) [20]. From this dataset, 124 knee X-rays (a sample size comparable to a previous study in this field [21]) were semi-randomly selected using a selection probability proportional to the frequency of KL grades across the baseline visits, as previously described [19]. Thereby, a uniform distribution of KL grades was ensured in the sample set. The images used in the present study and the training data of the AI were drawn from the OAI but segregated at the patient level to avoid biasing AI performance due to overfitting. Overfitting implies that an AI model has been trained in a way that the learned methodology is only applicable to the training set but not to another independent dataset [22]. A few additional images from the OAI dataset, outside of the study set, were randomly chosen for training of the readers on the user interface of the study's annotation tool (see below). Table 1 depicts the distribution of KL and OARSI grades of the final cohort (n = 115; 9 images with incomplete annotation by readers excluded), as reported by consensus readings of the OAI study (i.e. ground truth). Table 2 contains the patient demographics of the final cohort stratified by sex. All knee X-rays from the OAI study followed a "fixed flexion" protocol, with standing X-rays in posterior-anterior (PA) projection, feet externally rotated by 10° and knees flexed to 20-30° (until the knees and thighs touch the vertical X-ray table anteriorly) [23-25].

Knee osteoarthritis labeling assistant
KOALA (Knee OsteoArthritis Labeling Assistant) is a software tool providing both metric assessments of anterior-posterior (AP) or PA knee X-rays and proposals for clinical OA grade. These proposals should aid clinicians in assessing the degree of knee OA in adult patients or their risk of developing it. This is enabled by standardized quantitative measurements of morphological features (i.e. joint space width [JSW] and joint space area [JSA]) on AP or PA knee X-rays.
The AI software subsequently provides numerical results together with graphical overlays on X-rays showing measurement points. Furthermore, OA severity is assessed with the AI software by proposing the following grades (the higher, the more severe) to the clinician/rater: maximum OARSI grade for sclerosis, joint space narrowing (JSN) and osteophytes (each between 0 and 3), and KL grade (between 0 and 4). Grading proposals as well as metric assessments are summarized in a report that can be viewed on any DICOM viewer workstation approved by the FDA. The AI software applied in this study is based on several CNNs trained on large datasets of over 20,000 individual knee X-rays. It combines several low- and high-level modules, with the low-level modules being responsible for knee joint detection and landmarking. Subsequently, information is transferred to the high-level modules responsible for joint segmentation and measurement of KL, JSW, and OARSI grades, as previously described [19].

Labelling process
The labelling process was divided into three steps (Fig. 1). First, three senior readers were trained on the structure of the AI software report, the OARSI grading system [7], the labelling process and the platform used (i.e. Labelbox, a data labelling tool designed for machine learning procedures [26]). Second, readers assessed, unaided (i.e. without AI annotations), 124 plain knee X-rays and defined KL grade (0-4), osteophytes (0-3), sclerosis (0-3), and JSN (0-3) by completing a list. The readers were able to work remotely at their preferred time and were allowed to interrupt and resume labelling at any time, without time restrictions for labelling individual images or the entire dataset. However, the time it took readers to label each image, as well as the duration of the entire labelling process, was ascertained. Third, after a minimum of 2 weeks after the second step had been completed, the same 124 knee X-rays were relabelled by the readers, with images provided in random order (to avoid creating observer bias; Fig. 1). At this point, however, each image was supplemented with the AI software's report together with a binary score of whether OA was present on the X-ray. As mentioned above, complete data by annotators for all modalities were not available for nine images; therefore, these were excluded from further analysis, resulting in a total subject count of 115 and a dropout rate of 7.3%.

Agreement rates
A two-way random, single score model intraclass correlation coefficient (ICC) was used to assess agreement rates between readers for the items evaluated (i.e. presence of OA, osteophytes, sclerosis, KL grade, JSN) [27] when compared to the OAI consensus. As proposed by Shrout and Fleiss, 95% confidence intervals (CIs) were calculated [27]. Standard errors of the mean were estimated for ICCs by resampling observations with 1000 bootstraps. Via the z score method, the statistical significance of the difference between the unaided and aided labelling was determined. A p value of < 0.01 was considered statistically significant.

Accuracy measures
The performance of the readers was assessed by accuracy, sensitivity and specificity. True positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) were calculated for each measure of the readers against the readings from the OAI study. In particular, the ability to detect any abnormality (KL grade > 0) or OA (KL grade > 1), JSN (> 0), sclerosis (> 0) or severe sclerosis (> 1) and the presence of osteophytes (> 0) was assessed.
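To make these accuracy measures concrete, here is a minimal sketch of how sensitivity, specificity and accuracy can be computed from TP/TN/FP/FN counts, together with the normal-approximation (Wald) intervals referenced in the next paragraph. The counts are invented for illustration and do not reproduce the study's results.

```python
import math

def rate_with_wald_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and normal-approximation (Wald) 95% CI
    for a binomial proportion such as sensitivity or specificity."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - z * se, p + z * se)

def reader_performance(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity, specificity and accuracy of one reader for one feature,
    each with its Wald confidence interval."""
    sens, sens_ci = rate_with_wald_ci(tp, tp + fn)
    spec, spec_ci = rate_with_wald_ci(tn, tn + fp)
    acc, acc_ci = rate_with_wald_ci(tp + tn, tp + tn + fp + fn)
    return {"sensitivity": (sens, sens_ci),
            "specificity": (spec, spec_ci),
            "accuracy": (acc, acc_ci)}

# Invented counts for one feature (e.g. KL grade > 1) against the OAI consensus:
print(reader_performance(tp=48, fn=7, tn=52, fp=8))
```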
Normal approximation to binomial proportion intervals was used to estimate standard errors as well as CIs for sensitivity, specificity, and accuracy.

Receiver operating characteristic curves
As the AI software used not only provides recommendations regarding the presence and severity of OA but also a confidence score on the recommendation made, a receiver operating characteristic (ROC) curve was plotted to visualize the effect of the AI software's application on reader performance with regard to changes in TP rates (TPR) and FP rates (FPR).

Agreement between readers in unaided and aided labelling
Agreement rates for different measures (i.e. KL, presence of OA, JSN, sclerosis, osteophytes) between readers were calculated separately for unaided and aided labelling. Agreement rates for senior readers improved with aided labelling for all scores assessed (Table 3, Fig. 2). In detail, agreement rates increased twofold, 1.37-fold, 1.42-fold, 1.59-fold and 3.33-fold for KL grade, JSN, sclerosis, osteophytes and OA diagnosis, respectively (Table 3). When using the agreement classification proposed by Cicchetti [28], improvements from unaided to aided labelling were observed for all measures, except for sclerosis (Table 3). In general, agreement rates for KL grade, JSN, and OA diagnosis with unaided labelling were higher between junior readers than between senior readers (Table 3, Fig. 2). Notably, when using aided labelling, ICCs for the junior and senior readers were comparable. Consequently, less pronounced improvements in ICC from unaided to aided labelling were found with junior readers (Table 3).

Senior reader performance in unaided and aided labelling
The accuracy of senior readers significantly improved with aided labelling for all measures (Fig. 3). Sensitivity only increased for OA diagnosis (i.e. KL grade > 1) while decreasing, without statistical significance, for all other scores. On the other hand, all measures showed a significant increase in specificity, indicating a decrease in overdiagnosis upon AI-aided labelling compared to the OAI ground truth (Fig. 3, Table 4).

Individual reader performance
The effect of the AI software on senior reader performance was comparable to that of our previous findings for junior readers only [19]. A reduction in FPR was observed, with no or little effect on TPR. Notably, two readers showed simultaneously increased TPR and reduced FPR regarding the feature "presence of OA" (i.e. KL > 1). For the "presence of osteophytes", increased TPR and decreased FPR were found for another reader (Fig. 4).

Discussion
The main finding of the study was the improvement in the senior reader agreement rate with aided analysis for KL grade, diagnosis of OA, JSN, osteophytes and sclerosis assessed on knee X-rays. Furthermore, the specificity and accuracy for all features mentioned increased with the AI-aided modality. Notably, the agreement and accuracy rates achieved when using aided analysis were comparable between senior readers and junior readers (from a different study). Although over 100 commercially available AI applications similar to the tools investigated herein using CNNs are currently on the market, peer-reviewed literature on the impact of these systems on clinicians is scarce [18]. Nevertheless, reliable and homogeneous evaluation of radiological images is necessary to improve the care of knee OA patients by means of timely treatment planning [29].
This applies particularly to the early stages of knee OA, in which well-established radiographic assessment scores such as the KL scale are prone to imprecision, as the incipient tissue damage characteristic of early OA is barely visible on X-rays [30,31]. However, in settings with both junior and senior readers [32], as well as senior readers alone [29], assessment differences regarding the severity of radiographic knee OA features are found. Therefore, an AI-based tool supporting readers in decision-making may aid in the standardization of image evaluation. In the current study, agreement rates for senior readers increased between 1.37-fold (for JSN) and 3.33-fold (for OA diagnosis) when utilizing AI software-aided labelling compared with unaided labelling. These observations are in line with a previous study on junior readers alone [19].

Fig. 2 (caption, partial) Agreement rates between junior readers (blue) and senior readers (red) for the unaided (lighter) and aided (darker) modalities. Error bars denote standard errors of the ICC. Stars indicate statistically significant differences between the unaided and aided modalities, with a p value of < 0.01 considered statistically significant. JSN junior was the only nonsignificant result, with a p value of 0.125. Horizontal lines denote the thresholds separating poor, fair, good and excellent agreement, as defined by Cicchetti et al. [28]. The values for junior readers were already published by Nehrer et al. [19]. OA was defined as KL > 1. KL Kellgren-Lawrence, JSN joint space narrowing, SC sclerosis, OS osteophyte, OA osteoarthritis

Comparable to previous results from our group on junior readers [19], use of AI-enhanced X-rays appears to standardize knee OA assessment among senior readers. This is of particular importance considering that initial agreement rates between senior readers and the OAI ground truth were evidently lower than those found between junior readers and the OAI ground truth, especially regarding OA diagnosis and KL grade. This discrepancy may be explained by the fact that orthopaedic specialists rely on their long-term experience when evaluating images rather than adhering to given scoring system definitions, as enforced during the truthing of the OAI. In the field of musculoskeletal radiology, comparable observations have been made by Peterlein et al. regarding developmental dysplasia of the hip assessment by ultrasonography [33], with similar performance found between medical students and paediatric orthopaedic surgeons [33].

Fig. 3 (caption) Mean differences between senior reader AI software-aided and unaided labelling in sensitivity, specificity and accuracy for KL > 0, KL > 1, JSN > 0, sclerosis OARSI grade > 0, sclerosis OARSI grade > 1 and osteophyte OARSI grade > 0. Values to the right of the vertical line at 0 are improvements by the use of AI software. Error bars signify 95% confidence intervals. KL Kellgren-Lawrence, JSN joint space narrowing, SC sclerosis, OS osteophyte

Notably, in the present study, the junior and senior readers achieved similar agreement rates with AI software-aided labelling. This implies that orthopaedic specialists may benefit to a greater extent from AI software than senior residents. One may argue that any improvement in agreement with aided labelling may be related to some kind of cognitive bias, or "anchoring effect" [34], a phenomenon first observed in psychophysics [35].
It describes the situation in which a person's decision is influenced to a considerable degree by a single, potentially irrelevant piece of information, i.e. the "anchor" [36]. As already outlined in our previous study [19], some facts contradict this assumption. On the one hand, the readers had been specifically trained in the assessment of X-ray features, and objective decision-making can be expected. On the other hand, improvements with aided labelling against the OAI consensus as the ground truth were mainly caused by an increase in specificity, reducing FPR (and thus overdiagnosis) with respect to the OAI consensus. Furthermore, the senior readers achieved better overall results regarding TPR and FPR with aided labelling compared to the AI software alone. This implies a rather subordinate role of the "anchoring effect", as the senior readers should have achieved results similar to the AI software if they had eminently relied on the provided annotations.

As the pool of readers in the current study consisted of three board-certified orthopaedic surgeons, one may argue that generalizability was impaired. To overcome this issue, a specific ICC was calculated including the readers as random effects. Consequently, the pool of readers was treated as a sample of a larger pool, enabling better generalization of the results obtained. Limitations of the study include the drop-out rate of 7.3% and the limited sample size of the 115 images ultimately analyzed. Furthermore, the number of readers involved might have biased the results obtained. Another potential source of bias was the 2-week interval chosen between the two assessments, which might have led to "memory bias" [37,38]. Nevertheless, the effect of this bias is controversial in the literature, with studies on the presence of both strong [38] and weak [37] "memory bias" in imaging studies at two time points. Moreover, the AI might have been overfitted to the OAI dataset, eventually leading to a less pronounced difference between the senior and junior readers for varying datasets due to lower performance of the AI. Additionally, it cannot be ruled out that the AI software itself sometimes classifies X-rays incorrectly, and aided image analysis would then present readers with inaccurate information. Furthermore, the discrepant finding of better performance for senior residents compared to board-certified orthopaedic surgeons can only be explained by the hypothesis that experienced readers tend to analyze images in a less structured and faster manner, relying on their long-term experience, whereas less experienced readers are more likely to adhere to presented image classification systems.

In conclusion, use of the AI-based KOALA software leads to improvement in the radiological judgement of senior orthopaedic surgeons with regard to X-ray features indicative of knee OA and KL grade, as measured by the agreement rate and overall accuracy in comparison to the ground truth. Moreover, the agreement and accuracy rates of senior readers were comparable to those of junior readers with aided analysis. Consequently, the standard of care may be improved by the additional application of AI-based software in the radiological evaluation of knee OA.

Fig. 4 (caption) Changes to the true positive rate (y-axis, TPR) and false-positive rate (x-axis, FPR) for each individual senior reader for KL > 0, KL > 1, JSN > 0, sclerosis OARSI grade > 0, sclerosis OARSI grade > 1 and osteophyte OARSI grade > 0. The black line denotes the ROC curve for the AI software within the dataset. Arrows point from the unaided to aided modalities. Arrows pointing upward and left indicate absolute improvements in detection ability. Note that even though some arrows point downward and left, the improvement in FPR was greater than the loss in TPR, representing a net increase in accuracy. KL Kellgren-Lawrence, JSN joint space narrowing, SC sclerosis, OS osteophyte

Conflict of interest
The authors declare the following potential conflicts of interest regarding the research, authorship, and/or publication of this article: Richard Ljuhar and Christoph Götz are shareholders of the ImageBiopsy Lab and declare conflicts of interest. Tiago Paixao was employed by ImageBiopsy Lab and declares a conflict of interest. The remaining coauthors have no conflicts of interest to declare.

Ethical approval
This study was approved by the local ethics committee and received no specific grant or funding.

Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Chitin Biosynthesis Inhibitors in Euschistus heros Fabr. (Hemiptera: Pentatomidae): Morphometric Alterations in Testes and Nuclei of Testicular Accessory Cells of Adults

Euschistus heros is an important pest in Brazilian agriculture, of growing importance in the general Neotropical realm. Its reproductive potential is the key factor for its characterization as a pest in major crops such as soybean and cotton. The aims of this study were to characterize morphometric parameters of the testicles and of testicular accessory cell (TAC) nuclei of adult E. heros treated with chitin biosynthesis inhibitors (CBIs). The insecticides lufenuron (Match 50 EC) and buprofezin (Applaud 250 WP) were applied individually to 4th instar nymphs, which remained under controlled conditions until the emergence of adult males. The testicles were identified and removed 72 h after emergence, fixed, photographed for anatomic analysis, and processed for morphometric analyses of the TAC nuclei. It was possible to observe that lufenuron and buprofezin decreased the testicular area. Buprofezin decreased the mean nuclear area analyzed in the TACs, and nuclear hypertrophy can indicate an activity on support and nutrition of germ line cells, suggesting a possible effect on protein synthesis. The more intense Fast green reaction in the control compared to the buprofezin treatment may indicate that total protein (histones and non-histones) was altered. The tested insecticides, with special focus on buprofezin, may affect the final stages of reproductive development of E. heros, with potential to be used in the field for the control of this pest.

Introduction
Testicular accessory cells (TACs), also known as support, follicle, nurse or cystic cells, or widely known as Sertoli cells, are present in the testicles of the majority of vertebrates and invertebrates and may vary in structure and function between the different taxonomic groups (França & Chiarini-Garcia, 2005). The Sertoli cell nomenclature in invertebrate studies presents divergence between some authors (Pudney, 1993; Griswold & McLean, 2006). Thus, in this work they will be named testicular accessory cells, as described by Guraya (1995). In insects, these cells are linked to the cysts in which the germ line cells develop (Hinsch, 1993).

The TACs have been described as highly important for performing different reproductive functions. In addition to the classical function of physical support, these cells have been described to act in nutrition, secretion of substances, endocytosis and hormonal regulation of spermatogenesis (Griswold & McLean, 2006). However, the physiology of TACs has still been little studied in insects and other arthropods (Hinsch, 1993; Guraya, 1995; Gabała, 2006; Mazzei, Longo, & Brundo, 2015).

The structure of TACs may present as much variation as their functions. The TACs generally have a columnar shape, extending from the follicular wall (or tunica propria) to the inner cysts, maintaining contact with basal cells, with other TACs and with the germ line cells. These cells normally present a single, triangular-shaped nucleus, composed mostly of euchromatin and generally located in the basal region (Guraya, 1995; França & Chiarini-Garcia, 2010).
The Neotropical brown stink bug Euschistus heros Fabricius (Hemiptera: Pentatomidae) became a major agricultural pest for the productive sector in Brazil (Fonseca, Fernandes, Justiniano, Cavada, & Silva, 2014; Soares, Cordeiro, Santos, Omoto, & Correa, 2018). Its expressive reproductive potential and proliferative index are the main causes that defined its spread in the area. These insects significantly reduce the productivity and quality of developing seeds (Panizzi, Bueno, & Silva, 2012).

It is known that chitin biosynthesis inhibitors (CBIs) are insect growth disruptor insecticides that affect the process of exocytosis of the intracellular monomer N-acetylglucosamine (GlcNAc), the basic component of the chitinous structure of the exoskeleton (Merzendorfer, 2006). Some CBIs prevent the ion channels of the vacuolar complex from exporting the GlcNAc monomers out of the cytoplasm to compose the chitin polypeptides (Matsumura, 2010). These products can be used in crop models based on integrated pest management, with characteristics of selectivity and lower toxicity to beneficial insects (Rasdi et al., 2012; Vieira et al., 2012).

The use of CBIs in the reproductive development of stink bug pests has been studied before by Agüero et al. (2014), with representative results of effects on morphology in Nezara viridula L. (Hemiptera: Pentatomidae), where the authors detected negative effects on gonad formation in insects treated with diflubenzuron. In another study, female and male reproductive organs of Dichelops melacanthus (Hemiptera: Pentatomidae) were affected by CBIs (Cremonez, Pinheiro, Falleiros, & Neves, 2017); however, studies relating the effects of insecticides on insect TACs are practically nonexistent, especially in adult E. heros treated with CBIs.

In this context, the aims of this study were to characterize the morphometric parameters of the testicles and TAC nuclei in the testicular follicles of adult E. heros, and to determine possible changes in these cells resulting from the application of the CBIs lufenuron and buprofezin.

Insect Rearing
Adults of E. heros were obtained from a colony maintained in the laboratory (Embrapa Soybean, Brazil) and taken to the State University of Londrina (UEL), Brazil. The stink bugs were reared in plastic boxes with perforated covers (V = 3.75 L), containing pods of organic common beans (Phaseolus vulgaris L.) (Fabaceae), peanuts (Arachis hypogaea L.) (Fabaceae) and soybean (Glycine max Merrill) (Fabaceae) as food ad libitum, and cotton wool moistened with distilled water for hydration.

Insecticide Application
The CBIs were applied at pre-determined sublethal doses (LD30), according to the methodology proposed by Zanuncio et al. (2005), as follows: lufenuron (Match® 50 EC) LD30 = 3.34 × 10⁻³ mL a.i. L⁻¹ and buprofezin (Applaud® 250 WP) LD30 = 1.12 × 10⁻³ g a.i. L⁻¹. The insecticide solutions were diluted to the respective LD30 with distilled water, then applied individually by topical application of 1 µL onto the N4 dorsum with the aid of a micropipette. For the control treatment, pure distilled water was used.

After the application, groups of ten N4 were maintained in polystyrene-crystal boxes (11.0 × 11.0 cm) containing food and water as described above. The insects that remained alive until adulthood were sexed, and the males were individualized for the experiment.

Testes Collection-Morphology Technique
Virgin adult males were individualized in Petri dishes (9 cm diam.)
and, 72 h after the adults' emergence, the males were anesthetized by cold (-4 °C) for 5 min and dissected in a Petri dish. The testicles were identified, removed and immediately fixed in Karnovsky solution (glutaraldehyde 2.5% + paraformaldehyde 4% in phosphate buffer solution, pH 7.2) at room temperature for 24 h and submitted to the procedures described below.

Testes Morphometry
The removed testes (n = 20 per treatment) were analyzed and photographed with a stereoscopic microscope Olympus SZ61® (Olympus), and the apparent area was bounded and morphometrically analyzed according to the methodology proposed by Schneider, Rasband, and Eliceiri (2012), using the ImageJ 1.51k software.

Morphometry of the TACs Nuclei
Five testicles of each treatment fixed in Karnovsky's solution were dehydrated in a graded ethyl alcohol series and embedded in Leica HistoResin® (Leica) following the protocol of the Insect's Laboratory/Department of Histology/UEL. Cross sections (5 µm) were obtained and stained with HE, adapted from Bastos et al. (2018). The sections were analyzed and photographed with an Axiophot® photomicroscope (Zeiss) coupled with a Moticam® 3.0 MP camera (Motic).

The nuclei of the TACs were analyzed with ImageJ, according to Baviskar (2011). The data were obtained from the nuclear area of the TACs present only in the growth zone, where one image was taken in each region of the primary spermatocytes.

Additionally, sections of control and buprofezin-treated testicles were stained with Fast green pH 2.7, analyzed and qualitatively compared in staining intensity according to the methodology of Gifford Jr. and Dengles (1966) and Beerman (2013).

Statistical Analysis
The analysis of the testicular area of the adult stink bugs followed a completely randomized design, with three treatments (i.e. lufenuron, buprofezin, distilled water). The values were subjected to Bartlett's test of variance homogeneity and to Shapiro-Wilk's analysis of normality, then to analysis of variance, and the means were compared by Tukey's test (p ≤ 0.05).

The TAC nuclei area analysis followed the same design, with three treatments (i.e. lufenuron, buprofezin, distilled water) and data observed only in the growth zone of the testicular follicles. Bartlett and Shapiro-Wilk tests were also performed. The values were subjected to a mixed linear model for repeated measures analysis (follicle as a random effect), and the treatment means were compared by Tukey's test (p ≤ 0.05). A code sketch of this pipeline is given after Table 2, at the end of this article.

The whole process was formatted and processed with the R® software (R Core Team, 2018).

Results
The testicles of E. heros were identified, and each testicle presents six follicles, numbered F1-F6 from the proximal region. It was possible to observe morphological differences among them, more apparent in the spermatozoid bundles. Smaller bundles are observed in F1, F2, F3 and F5, and a more elongated type of spermatozoid in F4 and F6 (Figure 1).

Discussion
The testes of E. heros are anatomically paired, connected to a common ejaculatory duct by the vasa deferentia, as described for other Pentatomidae insects (Araújo et al., 2011; Cremonez et al., 2017). The morphological differences between the testicular follicles of E. heros were previously observed (Souza & Itoyama, 2010; Aguiar et al., 2017), demonstrating that F1, F2 and F3 are regular-sized follicles, while F4 and F6 are slightly larger and F5 is much thinner than the others.
The differences between follicles were related to the polymorphism of sperm and cells of the germ line, also observed in other Hemiptera (Araújo et al., 2011, 2012; Barcellos, Cossolin, Dias, & Lino-Neto, 2017; Santos & Lino-Neto, 2018). In E. heros this polymorphism was analyzed using transmission electron microscopy (TEM) (Cossolin, 2015), evidencing a type I sperm present in F1, F2 and F3, a type II present in follicle F5, and a type III sperm present in follicles F4 and F6.

In addition to the direct action on the development of the target pest, lufenuron and buprofezin also affect the neuroreceptor of acetylcholinesterase (Doucet & Retnakaram, 2012). These factors, combined with the classical effect of uneven distribution of GlcNAc in the chitin structure due to failures of chitin synthase activity (Merzendorfer, 2013; Campbell, Baldwin, & Koehler, 2017), can affect the normal development and physiology of the targeted insect. A study showed that lufenuron caused high mortality in nymphs of E. heros but did not affect the adults' fecundity or their eggs' fertility (Turchen, Hunhoff, Viana, & Pereira, 2016).

In works with different insecticides, reproductive alterations were observed. Imidacloprid significantly reduced the volume of the adult testicles of Blattella germanica L. (Blattodea: Blattellidae) (Messiad, Habes, & Soltani, 2015), while males of E. heros presented higher reproductive rates and increased metabolic and locomotor activity, although the testicles did not change in size (Haddi et al., 2016). The mixture thiamethoxam + lambda-cyhalothrin affected neither the spermatogenesis nor the reproductive organ structure of E. heros over 15 generations (Aguiar et al., 2017). In this study, the morphometric alterations in testicular area suggest an action of the CBIs on reproductive development in the final stages of E. heros growth.

Larger nuclei are characteristic of cells with higher activity in protein synthesis (Alberts et al., 2014). Additionally, it is known that buprofezin has a slight action suppressing DNA synthesis (Dhadialla, Retnakaran & Smagghe, 2010). The alterations in the TAC nuclei area suggest effects of buprofezin on these cells that may affect the nutrition and structure of the germ line cells, probably impairing male reproductive performance, as observed previously in D. melacanthus (Cremonez et al., 2017).

The intense positive reaction of Fast green pH 2.7 in control testes compared to the buprofezin treatment may indicate that total protein (histones and non-histones) was reduced by this CBI, probably relating the nuclear hypertrophy to protein synthesis in the testes; future studies may be conducted to better elucidate these findings.

Buprofezin is recommended for the control of hemipteran pests of the suborders Sternorrhyncha and Auchenorrhyncha (Naranjo, Ellsworth, & Hagler, 2004; Prabhaker & Toscano, 2007). The main effects of buprofezin on E. heros suggest it is efficient to suppress populations of Heteroptera as well, mainly by affecting the reproductive aspects of the stink bug.

The CBIs lufenuron and buprofezin presented potential for population control of the Neotropical brown stink bug. This study shows a possible direct activity of buprofezin on protein synthesis and an indirect action on the germ line cells. Other studies may be conducted to elucidate the CBIs' action on E. heros reproduction and viability.
Table 1. Mean testicular area and testicular accessory cell (TAC) nuclei area of testicles of adult Euschistus heros males treated with chitin biosynthesis inhibitors applied in 4th instar nymphs under laboratory conditions

Table 2. Intensity of Fast green pH 2.7 staining reaction in follicle germ line cell cytoplasm in testicles of adult Euschistus heros males treated with chitin biosynthesis inhibitors applied in 4th instar nymphs under laboratory conditions
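As referenced in the Statistical Analysis section, the univariate part of the pipeline (Bartlett's test, Shapiro-Wilk's test, one-way ANOVA, Tukey's test) can be sketched as follows. The study used R; this Python equivalent is a minimal sketch, and the area values are invented placeholders rather than the study's measurements.

```python
# Hypothetical sketch of the testicular-area analysis described in the methods.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented testicular-area values; the study used n = 20 testes per treatment.
rng = np.random.default_rng(1)
groups = {
    "control":    rng.normal(1.00, 0.10, 20),
    "lufenuron":  rng.normal(0.85, 0.10, 20),
    "buprofezin": rng.normal(0.80, 0.10, 20),
}

# Assumption checks: homogeneity of variance and normality per treatment
print("Bartlett:", stats.bartlett(*groups.values()))
for name, vals in groups.items():
    print(f"Shapiro-Wilk ({name}):", stats.shapiro(vals))

# One-way ANOVA, then Tukey's HSD for pairwise mean comparisons (p <= 0.05)
print("ANOVA:", stats.f_oneway(*groups.values()))
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```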
β-arrestin1-E2F1-ac axis regulates physiological apoptosis and cell cycle exit in cellular models of early postnatal cerebellum

Development of the cerebellum is characterized by rapid proliferation of cerebellar granule cell precursors (GCPs) induced by paracrine stimulation of Sonic hedgehog (Shh) signaling from Purkinje cells, in the external granular layer (EGL). Then, granule cell precursors differentiate and migrate into the inner granular layer (IGL) of the cerebellum to form a terminally differentiated cell compartment. Aberrant activation of Sonic hedgehog signaling leads to granule cell precursor hyperproliferation and the onset of Sonic hedgehog medulloblastoma (MB), the most common embryonal brain tumor. β-arrestin1 (ARRB1) protein plays an important role downstream of Smoothened, a component of the Sonic hedgehog pathway. In the medulloblastoma context, β-arrestin1 is involved in a regulatory axis in association with the acetyltransferase P300, leading to the acetylated form of the transcription factor E2F1 (E2F1-ac) and redirecting its activity toward pro-apoptotic gene targets. This axis has not yet been investigated in the physiological granule cell precursor context. In this study, we demonstrate that β-arrestin1 has antiproliferative and pro-apoptotic functions in cerebellar development. β-arrestin1 silencing increases proliferation of Sonic hedgehog-treated cerebellar precursor cells while decreasing the transcription of E2F1-ac pro-apoptotic target genes, thus impairing apoptosis. Indeed, chromatin immunoprecipitation experiments show a direct interaction between β-arrestin1 and the promoter regions of the pro-apoptotic E2F1 target genes and P27, indicating the double role of β-arrestin1 in controlling apoptosis and cell cycle exit in a physiological context. Our data elucidate the role of β-arrestin1 in the early postnatal stages of cerebellar development, in those cell compartments that give rise to medulloblastoma. This series of experiments suggests that the physiological function of β-arrestin1 in neuronal progenitors is to directly control, cooperating with the E2F1 acetylated form, the transcription of pro-apoptotic genes.
KEYWORDS: arrb1, E2F1, granule cell precursors (GCPs), neuronal stem cell (NSC), medulloblastoma (MB)

1 Introduction
Medulloblastoma (MB) is the most common malignant brain tumor of childhood, arising in the cerebellum. Through large-scale omic studies conducted in recent decades, four molecular subgroups have been universally recognized, termed WNT, Sonic hedgehog (SHH), Group 3 and Group 4. The different epigenetic and transcriptional profiles, as well as the specific genetic alterations, suggest that MB subgroups arise from distinct cells-of-origin or developmental lineages (Hovestadt et al., 2020). In particular, two cell populations are known to give rise to SHH MB: granule cell precursor (GCP) populations (ATOH1+) for the SHH MB subgroups and stem/progenitor cell populations for the MYCN-driven SHH MB (Hovestadt et al., 2020).

GCPs, born in the rhombic lip (RL) of the cerebellum at embryonic day 13 in mice, migrate from the RL into its posterior dorsal region, forming the external granular layer (EGL). After birth, for up to approximately 2 weeks, GCPs in the EGL continue to proliferate in response to paracrine stimulation with Sonic hedgehog (Shh) ligand from the underlying layer of Purkinje cells. Following the Shh proliferation-induced phase, GCPs differentiate and migrate to the internal granular layer (IGL) of the cerebellum to form a terminally differentiated granule layer, unresponsive to Shh stimuli (Ruiz i Altaba et al., 2002). This evidence suggests a regulatory mechanism that controls the response to Shh and cell cycle exit. Indeed, physiological widespread apoptosis characterizes GCPs when they exit the cell cycle during postnatal development (Ahlgren and Bronner-Fraser, 1999; Charrier et al., 2001).

The second MB cell-of-origin is the cerebellar neural stem cell (NSC) residing in the subventricular zone (Yang et al., 2008; Northcott et al., 2012), with a stem cell phenotype. NSCs can be derived from the cerebellum both during development and in adulthood (Swartling et al., 2012). It is known that neuronal precursor death during differentiation is apoptotic in physiological development (Contestabile, 2002; Yeo and Gautier, 2004; Argenti et al., 2005; Allais et al., 2010) and that this process is regulated by signaling pathways rather than by the apoptotic machinery (Desagher et al., 2005). Recently, various mouse models (orthotopic, transgenic, and somatic gene transfer animals) have been used to demonstrate that stem/progenitor cells can be successfully transformed, recapitulating the molecular and phenotypic characteristics of MYCN-driven SHH MB or MYCN- or MYC-driven Group 3 MB (Hovestadt et al., 2020).
β-arrestin proteins are the major transducers of G protein-coupled receptors (GPCRs) (Crépieux et al., 2017); they act as scaffolding proteins that can be activated independently of, or in conjunction with, G proteins, in both the cytosol and the nucleus. Moreover, in response to Shh stimuli, β-arrestin1 (ARRB1) changes its subcellular localization and moves to a specialized structure required for the Shh response, the primary cilium (Kovacs et al., 2008). In cerebellar NSCs, ARRB1 is epigenetically silenced to maintain stem cell features. The re-expression of ARRB1 enhances the cell cycle inhibitor P27 while inhibiting proliferative signaling, thus resulting in stem cell differentiation and growth arrest (Po et al., 2017). Moreover, in GCPs in which ARRB1 has moved to the nucleus, it forms a complex with cofactors (P300 and CREB), increasing the transcription of p27, a differentiation marker of GCPs (Ma and Pei, 2007). Parathath and colleagues described a negative feedback mediated by Shh-stimulated ARRB1 driving cell cycle exit through enhanced transcription of P27 (Parathath et al., 2010).

E2F1 is a transcription factor implicated in the control of GCP cell fate in the postnatal cerebellum (Wang et al., 2007). E2F1 has a central role in cell cycle progression, interacting with the retinoblastoma protein (pRB), but it is now clear that its function is not limited to cell cycle regulation: it also tunes apoptosis, senescence and the DNA-damage response (Denechaud et al., 2017). Depending on the interactors it partners with, E2F1 can direct the GCPs towards cell proliferation and differentiation (RB/E2F1 complex) or towards apoptosis at the end of postnatal development of the cerebellum (Suzuki et al., 2011). Moreover, aberrant expression of E2F1 in GCPs has also been implicated in cerebellar neoplastic transformation (Suzuki et al., 2011), and its acetylation was increased when GCPs were stimulated with Shh (Miele et al., 2021). Notably, in a recent study, we reported a new regulatory axis in which ARRB1 and E2F1 are critical for MB progression. Specifically, low expression of ARRB1 promotes tumor growth by enhancing the E2F1 survival function, while high expression of ARRB1 triggers E2F1 acetylation, switching E2F1 function from pro-survival to pro-apoptotic (Miele et al., 2021). However, the physiological mechanism of action of ARRB1 and E2F1 in regulating the two SHH MB cells-of-origin remains elusive. In the present work, we identified a new crucial axis in two physiological neuronal cell models, in which ARRB1 works in partnership with acetylated E2F1 to guide physiological apoptosis and growth arrest in GCPs and NSCs.

Mice
Mice were purchased from Charles River Laboratories and maintained in the Animal Facility at Sapienza University of Rome. All procedures were performed in accordance with the Guidelines for Animal Care and Use of the National Institutes of Health, with the approval of the Ethics Committee for Animal Experimentation (Prot. N 03/2013) of Sapienza University of Rome.

RNA extraction and real-time PCR
Total RNA was purified and reverse transcribed as previously described (Spiombi et al., 2019). Quantitative RT-PCR (RT-qPCR) analysis was performed using the ViiA 7 Real-Time PCR System (Thermo Scientific), using best-coverage TaqMan gene expression assays specific for each analyzed mRNA. Each amplification was performed in triplicate, and the average of the three threshold cycles was used to calculate the amount of transcripts (Thermo Scientific).
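Before the quantification details continue below, here is a minimal sketch of relative quantification from threshold cycles (the standard 2^-ddCt scheme). The gene Ct values are illustrative only, and combining several endogenous controls by averaging their Cts is an assumption of this sketch, one common convention rather than the study's documented procedure.

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """Relative quantification (2^-ddCt) of one transcript.

    ct_*: triplicate threshold cycles; reference genes are combined
    by averaging their Ct values (one common convention).
    The calibrator (cal) is the control sample the ratio is taken against.
    """
    dct_sample = np.mean(ct_target) - np.mean(ct_refs)
    dct_calibrator = np.mean(ct_target_cal) - np.mean(ct_refs_cal)
    return 2.0 ** -(dct_sample - dct_calibrator)

# Illustrative triplicates (not real data): one target gene vs. pooled references
fold = relative_expression(
    ct_target=[24.1, 24.3, 24.2],
    ct_refs=[18.0, 18.1, 17.9, 19.2, 19.1, 19.3],
    ct_target_cal=[25.6, 25.5, 25.7],
    ct_refs_cal=[18.2, 18.1, 18.0, 19.0, 19.2, 19.1],
)
print(f"fold change vs. calibrator: {fold:.2f}")
```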
Transcript quantification was expressed in arbitrary units as the ratio of the sample quantity to the calibrator or to the mean values of control samples. All values were normalized to 4 endogenous gene controls: Gapdh, ß-Actin, ß2-microglobulin and Hprt.

Cell biology assays
Cell proliferation was evaluated by bromodeoxyuridine (BrdU) labeling assay (Roche) according to the manufacturer's instructions. The BrdU pulse was 24 h; cells were counted in triplicate and the number of BrdU-positive nuclei was annotated. Apoptosis was detected by terminal deoxynucleotidyl transferase-mediated UTP nick end labeling (TUNEL) assay with the In Situ Cell Death Detection Kit Fluorescein (Cat. No. 1684795, Roche Applied Sciences), according to the manufacturer's instructions. Images were acquired with a Carl Zeiss microscope (Axio Observer Z1) and AxioVision Digital Image Processing Software. Cells were counted in triplicate and the number of TUNEL-positive nuclei was annotated. To evaluate cell viability, GCPs were plated at a density of 5 × 10⁵ cells/well in 96-well plates and incubated with MTS solution (CellTiter 96® AQueous One Solution, Promega).

Chromatin immunoprecipitation
Chromatin immunoprecipitation (ChIP) analyses were performed on chromatin extracts according to the manufacturer's specifications of the MAGnify Chromatin Immunoprecipitation System kit (Invitrogen). Sheared chromatin was immunoprecipitated with 5 µg of the following antibodies: anti-β-arrestin1 (Clone 10, cat. 610550, BD Biosciences); normal mouse IgG, provided by the kit, was used as negative control. Eluted DNA was amplified by qPCR using EpiTect ChIP qPCR Assays (Qiagen) for the indicated genes (mouse Cdc25a, Trp73, Cdkn1b, Casp3, Casp7, Zeb1, Zeb2, Birc5, Vim and Fn1). As controls we used the Actin and Gapdh genes. Data are presented as input percentage enrichment over background. CASPASE-3 and CDC25A indexes were generated based on the count of DAB-positive cells over the total number of cells, quantified using the bioimage analysis software QuPath (Bankhead et al., 2017).

Statistical analysis
Statistical analysis was performed using Prism software version 6.0 (GraphPad, United States). Statistical differences were analysed by the Mann-Whitney U test for non-parametric values, and p-values lower than 0.05 were considered statistically significant. Results are expressed as means ± S.D.

ARRB1 controls apoptosis and cell proliferation in granule cell progenitors (GCPs)
We wanted to elucidate the function of ARRB1 in the early postnatal stages of cerebellar development, in which GCP proliferation, differentiation, and death are coordinated by Shh signaling (Wechsler-Reya and Scott, 1999). As expected, between 4 and 7 days after birth, undifferentiated GCPs are in a proliferating stage under the effect of active Hh signaling, whose activation is detectable by GLI1 protein expression levels. Subsequently, between 7 and 21 days after birth, GCPs progressively exit the cell cycle, as demonstrated by increased levels of P27 (Supplementary Figure S1), and differentiate into mature granule cells (no detectable level of GLI1) [Figure 1 and (Ferretti et al., 2008)]. We observed detectable protein levels of ARRB1 at different stages of cerebellar development (2-, 5-, 7- and 15-day-old postnatal mouse cerebella), under its physiological regulator Shh (Figure 1A; Supplementary Figure S2A).
We observed a co-expression of GLI1 (as a readout of Shh signaling activation) and ARRB1 between 5 and 7 days of cerebellum development (Figure 1A). As already described, GLI1 regulates the shuttling of ARRB1 into the nucleus but not its transcription, since ARRB1 is not a direct target of GLI1. For this reason, a concomitant increase in the two molecules is not observed. ARRB1 appeared to primarily function as a nuclear messenger for GCPs, likely providing scaffolds that regulate the localization and concentration of specific transcription factors at target gene promoters (Ma and Pei, 2007; Parathath et al., 2010). Cerebellar GCPs were isolated according to the procedures reported in the material and methods section, and we confirmed that GCPs expressed specific lineage markers such as ZIC1 and MATH1 (Supplementary Figures S2B, S3A). Consistent with previous findings (Parathath et al., 2010), exogenous Shh stimulation of cerebellar GCPs from postnatal day 4 mice significantly increased their GLI1 levels (both mRNA and protein) (Figure 1B; Supplementary Figure S2C). Interestingly, when this experiment was repeated on GCPs subjected to siRNA-mediated silencing of ARRB1 (si-Arrb1), the Shh-induced increase in proliferation, reported as a percentage of BrdU-positive cells, was even more substantial (Figure 1C), while no difference was observed in terms of cell viability (Supplementary Figure S4A). ARRB1 is known to interact with CREB and with the histone acetyltransferase P300 on the promoter of P27, enhancing its expression by acetylating histones H3 and H4 (Kang et al., 2005; Parathath et al., 2010). Our findings confirmed that the presence of ARRB1 in GCPs serves to terminate Shh-induced proliferation by increasing P27 expression (Figure 1D). These findings are consistent with previous reports of a negative feedback loop, whereby mitogenic Shh signaling causes nuclear accumulation in cerebellar GCPs of the cyclin-dependent kinase inhibitor P27, which ultimately induces their cell-cycle exit (Parathath et al., 2010). In contrast, neither Shh stimulation nor ARRB1 depletion had any effect on GCP differentiation, as reflected by β-III TUBULIN (βIIItub) (Figure 1E) and ZIC1 (Supplementary Figures S2B, S3A). Further on, considering Shh the major regulator of ARRB1 function in GCPs, we examined cell apoptosis, as it is a physiological process that plays fundamental roles in normal cerebellar development (Yeo and Gautier, 2004; Argenti et al., 2005). As shown in Figure 1F and Supplementary Figure S4B, Shh stimulation increased apoptosis of GCPs, and ARRB1 appears to be a key player in this effect, given the significantly blunted apoptotic response observed in ARRB1-depleted cells and the increased response when ARRB1 was overexpressed. Accordingly, after Shh stimulation, GCPs increased the apoptotic PARP-C expression in the nucleus, together with ARRB1, while PARP-C protein decreased after si-Arrb1 (Supplementary Figures S2D, S4C). Moreover, no modulation of the cell cycle-associated protein PCNA was observed in siRNA experiments (Supplementary Figures S2D, S4C). Cerebellar GCP apoptosis is also reportedly dependent on expression of the transcription factor E2F1 (O'Hare et al., 2000). However, in normal and neoplastic lung cells, ARRB1 binds E2F-responsive promoters of genes that promote cell proliferation and survival (Dasgupta et al., 2011). The Janus-like behavior of E2F1 is controlled by its acetylation status (Pediconi et al., 2003).
Acetylation of E2F1 "shifts its attention" from target genes that promote cell cycle progression (e.g., Cell division cycle 25a -Cdc25a, Thymidylate synthetase -Tyms, and Baculoviral IAP repeat-containing 5-Birc5) (Dasgupta et al., 2006;2011) (Pillai et al., 2015) to those that are pro-apoptotic, including Transformation-related protein 73 (Trp73), Caspase 3 (Casp3) and Caspase 7 (Casp7) (Pediconi et al., 2003;Ianari et al., 2009). Therefore, the role of ARRB1 in GCPs apoptosis might conceivably be related to its effects on E2F1 acetylation. Previous evidence (Miele et al., 2021) showed that E2F1 and ARRB1 co-immunoprecipitated in GCPs and the acetylation of E2F1 (E2F1-ac) are induced by overexpression of ARRB1. Based on these data, we demonstrated that silencing of ARRB1 had no effect on the abundance of E2F1 protein, but it appreciably diminished levels of the acetylated form and cleaved form of CASPASE 3 (CASP-3-C), one pro-apoptotic target of E2F1-ac ( Figure 1G; Supplementary Figure S2E). Collectively, these findings highlight two critical roles for ARRB1 in physiological neuronal cell models: induction of GCPs apoptosis by the acetylation of E2F1 and termination of cell cycle progression by enhancing P27 expression. Frontiers in Cell and Developmental Biology frontiersin.org 06 We observed by RT qPCR that among the E2F1-ac targets induced by Shh, ARRB1 regulated only the pro-apoptotic genes Caspase 3, Caspase 7, Trp73, as shown by both ARRB1 silencing and overexpression experiments (Figures 2A, B). On the other hand, the proliferative (Cdc25a, Birc5, and Tyms) and the EMT (Zeb1 and Fn1) E2F1-ac target genes were controlled by Shh without requiring the presence of ARRB1 (Figures 2C, D). We did not observe a significant modulation of the other EMT genes (Zeb2 and Vim) neither under Shh stimulation, nor after ARRB1 silencing ( Figure 2D). To support our mRNA data, we evaluated by IHC two of the E2F1-ac target proteins (CASPASE 3 and CDC25A) during mouse cerebellum development (from p2 to p15). As shown in Figure 2E the expression level the pro-apoptotic CASPASE 3 was expressed from p2 to p10, with a peak on p4/p7 in a context of Shh stimulation and ARRB1 expression (Supplementary Figure S2E). Instead, the proliferative CDC25A followed a significant negative trend during development, decreased from p2 to p10/p15 (Supplementary Figure S2E). Collectively, these findings demonstrated the critical role of ARRB1 in normal cerebellar development in induction of GCPs apoptosis via E2F1-ac pro-apoptotic genes. ARRB1-E2F1 complex direct regulates the expression of E2F1-ac pro-apoptotic target genes Such ARRB1 function was validated by chromatin immunoprecipitation (ChIP) experiments. In Shh treated GCPs, ARRB1 mediates the binding of E2F1 to the promoter region of Trp73, Casp3 and Casp7 highlighting the role of ARRB1 in the regulation of apoptosis ( Figure 3A); while it does not bind to the promoter of pro survival/proliferative genes as Birc5 and Cdc25a, and to epithelial mesenchymal transition genes as Zeb2, Vimentin, Fn1 and Zeb1 ( Figure 3B). We found that ARRB1 also bound to the Cdkn1b/p27 promoter, strengthening support for ARRB1's role in GCPs growth arrest ( Figure 3A). Collectively, these findings demonstrated the direct controls of ARRB1-E2F1-ac complex on pro-apoptotic targets' promoter regions. 
FIGURE 3 The ARRB1-E2F1 complex directly regulates the expression of E2F1-ac pro-apoptotic targets. (A,B) qPCR-ChIP assay of ARRB1 in GCPs stimulated or not with Shh. Immunoprecipitation with IgG was performed as a control. Eluted DNA was amplified by qPCR using primers specific for the regulatory regions of the indicated genes. Actin and Gapdh (not shown) were used as endogenous non-enriched regions. qPCR data are presented as a percentage of the ChIP input controls. Data represent means ± S.D. from at least three independent experiments; *p < 0.05; **p < 0.01; ***p < 0.001.
ARRB1 controls apoptosis via E2F1-ac targets in neural stem cells The early postnatal murine cerebellum contains multipotent NSCs [described by Lee et al. (2005)] which can also give rise to SHH MB. For this reason, we investigated the physiological roles of ARRB1 in these cells, obtained from the cerebellum of 4-day-old WT mice (Ferretti et al., 2008). NSCs growing in stem medium expressed very low levels of ARRB1 (Po et al., 2017) (Figure 4A; Supplementary Figure S2F); conversely, they were positive for the neuronal marker β-III TUBULIN (TUBB3) and the stemness markers NANOG, NESTIN and SOX2 (Supplementary Figure S3B). On the other hand, ARRB1 overexpression is linked to a "differentiated neural phenotype" (Po et al., 2017), confirmed by the expression of differentiation markers [TUBB3, S100, PARVALBUMIN (PARV) and GFAP] (Supplementary Figure S3C). As shown in Figure 4B and Supplementary Figure S2G, in NSCs E2F1 is not acetylated in the absence of ARRB1, while under differentiating conditions ARRB1 protein is expressed and E2F1 acetylation increases together with the expression of its target Trp73 (Figure 4C; Supplementary Figure S2H). In line with these observations, ARRB1 depletion reduced Trp73 transcription in differentiating NSCs but had no effect on Cdc25a transcription (Figure 4C). Consistent with the role of ARRB1 observed in GCPs, in differentiating NSCs, where ARRB1 is expressed and E2F1 is acetylated, ARRB1 induces the transcription of pro-apoptotic E2F1-ac target genes. The results in this physiological context allow us to propose a model in which ARRB1 is involved in apoptosis and cell-cycle exit in committed precursors to favor cell differentiation (Figure 5). Discussion In this study, we show the key roles of the ARRB1/E2F1-ac axis in in vitro experiments using two cellular populations that give rise to SHH MB, cerebellar GCPs and NSCs, which are useful models to mimic the normal cerebellar environment. Notably, the dysregulation of this process is a major promoter of tumor cell growth in MB (Miele et al., 2021). The developing cerebellum needs a proper and timely balance between cell proliferation, survival, differentiation and apoptosis, the latter being a hallmark feature of CNS development (Yeo and Gautier, 2004). ARRB1 is known to regulate multiple intracellular signaling pathways, many of which are involved in the "life-or-death" balance of the cell (Lefkowitz and Shenoy, 2005; Gurevich and Gurevich, 2006). ARRB1 was described as a scaffolding protein that shuttles between the cytoplasm and the nucleus, where it interacts with CREB and the acetyltransferase P300 on the promoters of its target genes (Kang et al., 2005; Parathath et al., 2010). The functional consequences of ARRB1's nuclear activity are less clear than the cytoplasmic ones, and many appear to be cell type- and/or context-specific.
ARRB1 transcriptionally regulates genes involved in cell-cycle arrest/differentiation (p27, c-fos), genes involved in proliferation/survival by recruiting E2F1 (cAbl, Bcr/Abl, Cdc25A, Tyms, and Birc5) (Dasgupta et al., 2011; Qin et al., 2014), as well as genes controlling apoptosis (Trp73, Caspase 3 and Caspase 7), the latter mediated by binding to the acetylated form of E2F1 (Miele et al., 2021). To evaluate ARRB1's role within physiological cerebellar models, we analyzed its temporal expression at different stages of cerebellum development (from 2 to 15 days), and we carried out experiments modulating its levels in two cerebellar cell models, GCPs and NSCs. In committed neuronal precursors (GCPs), we modulated β-arrestin-1 levels after Shh stimulation and carried out chromatin immunoprecipitation experiments to assess its role in this context. We found that, in GCPs, ARRB1 acts in concert with its molecular partner E2F1-ac to ensure normal cerebellar development, as summarized in Figure 5. In the present work we demonstrated that ARRB1 exerts its physiological nuclear functions at two levels: a) by activating cell-cycle exit via P27 (Figures 1-3), and b) by enhancing the acetylation of E2F1, redirecting its functions to non-proliferative ones. Indeed, ARRB1 promotes the acetylation of E2F1 under Shh stimulation and induces apoptosis via the pro-apoptotic targets of acetylated E2F1 (Trp73, Caspases 3 and 7) (Figures 2, 3). Our findings are consistent with reports of diffuse, physiological GCP apoptosis (Ahlgren and Bronner-Fraser, 1999; Charrier et al., 2001). In the other cell model analyzed, neural stem cells (NSCs), ARRB1 is epigenetically suppressed, like other developmental genes, during the expansion phase of the cerebellar pool (Burgold et al., 2008; and this report), favoring proliferation and survival. Later, when the pool of NSCs has expanded, ARRB1 expression is reactivated to terminate the proliferative phase and allow the precursors to undergo differentiation or apoptotic elimination (Wechsler-Reya and Scott, 1999; Po et al., 2017). Consistently, our results demonstrated that ectopic expression of ARRB1 in NSCs induced the expression of acetylated E2F1 and of its target Trp73 (Figure 4A). Moreover, endogenous ARRB1 expression under differentiation conditions regulated the transcription of pro-apoptotic genes such as Trp73 but not that of proliferative genes (Cdc25a) (Figures 4B, C). miR-326 also contributes to this process by blunting proliferative signals mediated by E2F1, Hedgehog, and Notch, and by promoting cell differentiation, as already reported (Ferretti et al., 2008; Kefas et al., 2009; Po et al., 2017; Miele et al., 2021). Altogether, our results contribute significantly to elucidating a molecular mechanism through which ARRB1 mediates apoptosis and cell-cycle exit in the two cells of origin of SHH-MB: cerebellar granule neuron precursors and neural stem cells. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors. Ethics statement The animal study was reviewed and approved by the Ethics Committee. FIGURE 5 (A): Committed neuronal precursors (i.e., NSCs grown in DFM, GCPs). In our previous works, we identified miR-326 as a miRNA necessary for the maturation of granule cell progenitors (GCPs) into mature granule cells (Ferretti et al., 2008).
Moreover, this miRNA is integrated into the first intron of the Arrb1 gene and shares the same regulatory regions as its host gene. miR-326 also contributes to ARRB1 functions by blunting proliferative signals mediated by E2F1, Hedgehog, and Notch, and by promoting cell differentiation (Ferretti et al., 2008; Kefas et al., 2009; Po et al., 2017; Miele et al., 2021). Committed neuronal precursors express ARRB1 and miR-326, which regulate their development at multiple levels. Shh signaling upregulates ARRB1 levels and promotes its translocation to the nucleus. There ARRB1, in complex with P300, induces acetylation of E2F1 (E2F1-ac), redirecting the transcription factor's activity from survival/proliferative gene targets towards those that promote apoptosis (Trp73, Caspases 3 and 7). Interacting with CREB and P300, ARRB1 upregulates the expression and nuclear accumulation of P27, which eventually blocks cell-cycle progression. miR-326 favors neuronal cell differentiation by inhibiting multiple survival/proliferative signals (E2F1, Hedgehog (Hh) and Notch) via direct binding to the 3′-UTRs of E2f1, Smo, Gli2, Notch1 and Notch2. (B): In NSCs, non-expression of ARRB1 and miR-326 promotes cell proliferation, survival, and stemness by favoring non-acetylated E2F1 activity and active Hedgehog (Hh) and Notch signaling. Funding This work was also supported by the Italian Ministry of Health with "Current Research funds", Ricerca Finalizzata, Grant #GR-2018-12367328 (to EM). Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. SUPPLEMENTARY FIGURE S2 Densitometric analysis. (A-H) Densitometric graphs of the western blots presented in the manuscript. Data represent means ± S.D. from at least three independent experiments. SUPPLEMENTARY FIGURE S3 Characterization of cellular models. (A) Left: CT values (by RT-qPCR) of GCP markers (Math1 and Zic1) and β-2-microglobulin (as housekeeping gene), evaluated in the silencing experiments on GCPs. Right: Western blot analysis of endogenous expression of ZIC1, a GCP marker, in GCPs that had or had not undergone siArrb1. GAPDH: loading control. (B) Representative image of immunofluorescence staining of NSCs for stemness/neuronal markers (NANOG, NESTIN, SOX2 and TUBB3). Nuclei are counterstained with Hoechst. Scale bar: 10 μm. (C) Representative image of immunofluorescence staining of differentiated NSCs for differentiation markers (TUBB3, S100, PARV and GFAP). Nuclei are counterstained with Hoechst. Scale bar: 10 μm. For western blots, densitometry values are shown below the blots and densitometric graphs are presented in Supplementary Figure S2. SUPPLEMENTARY FIGURE S4 ARRB1 overexpression increases apoptosis in GCPs. (A) Evaluation of cell viability, measured with the MTS assay, of GCPs treated for 48 h with Shh that had or had not undergone siRNA-mediated silencing of ARRB1 (siArrb1). Statistical differences (versus untreated cells, ns = not significant) were evaluated by one-way ANOVA. Data represent means ± S.D. from at least three independent experiments.
(B) Apoptosis evaluated by immunofluorescence with a double-labeling assay: TUNEL staining to detect apoptotic cells and an anti-HA antibody to detect overexpressed Arrb1 (pcDNA3 β-arrestin1-HA). Data represent means ± S.D. from at least three independent experiments; *p < 0.05; **p < 0.01; ***p < 0.001. (C) Nuclear localization of endogenous ARRB1, PARP-C and PCNA by Western blot (SP1 was used as loading control and as a marker for the purity of the nuclear fraction). For Western blots, densitometry values are shown below the blots and densitometric graphs are presented in Supplementary Figure S2.
2023-03-01T16:08:49.271Z
2023-02-27T00:00:00.000
{ "year": 2023, "sha1": "229f12ff194c5c095e6c7b54c588bca56b2029a2", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2023.990711/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ded255a25f305ddc92e1eddac23f04cbba6ba415", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238045704
pes2o/s2orc
v3-fos-license
Defining the information flows for DLT of a transport company in the mining industry according to the criteria for sustainable development. The application of information technologies leads to the improvement of the companies' production parameters in each sector according to the criteria for sustainable development. Naturally, in order to achieve efficiency, they must be tailored to the specifics of the industry, in this case the mining industry. The article proposes a methodology for the introduction of Distributed Ledger Technology (DLT) for the transport information flow at a mining company. Based on the chosen organizational structure, the participating actors and the data they share, the information channels are determined. According to the necessary rights of the participants to modify the transactions and the number of channels of the transport information flow, the Practical Byzantine Fault Tolerance consensus mechanism and so-called smart contracts have been chosen. Different DLT platforms are analyzed. Hyperledger Fabric was selected as an appropriate platform in order to ensure the continuity of the system, the asynchronous control of the various channels and the ability to include different actors. Introduction In the second half of the 20th century there was an increase in population along with global economic growth, which is associated with excessive, intensive and uncontrolled use of natural resources [1]. In order to meet the needs of present and future generations on the Earth, the Brundtland Commission published a report on sustainable development in all areas of life and human activity [2]. This means that the implementation of measures in one area must not be at the expense of deteriorating performance in another area. In addition, the measures taken should not be temporary initiatives, but long-term management decisions that have a positive effect on all areas affected. Dubinski defined the following main pillars of sustainable development in the mining sector [1]:
- technical and economic activities for continued economic growth;
- introduction of ecological measures for environmental protection and appropriate use of natural resources;
- social initiatives to improve working conditions, care for employees and personal development in the mining community.
Given the growth in the quantities of extracted minerals from ancient times to the present day, the main aim of economic and technical measures is the reasonable acquisition of natural resources. Therefore, according to the first pillar of economic growth, it is recommended to undertake measures for long-term sustainability in terms of planned production and sales volumes, which is directly related to the improvement of the technical means for extraction and processing of mineral resources. In recent years, measures have been taken worldwide to reduce harmful emissions of waste products into soil and water generated by the mining, chemical and pharmaceutical industries [3,4]. The reclamation of the excavated land masses and the improvement of working conditions, with reduced amounts of industrial accidents, dust and noise, are considered with regard to the environmental measures [5]. The reduction of industrial accidents and of occupational diseases caused by the presence of harmful substances is the main target for improving working conditions, an issue typical not only of the mining industry but also of the chemical industry and agriculture.
They have the highest share in countries with low innovation potential and in developing countries [6]. According to [7], the measures become effective with government support and this is proved by the enactment of many directives for reclamation and waste recycling [8,9] from the mining activities, reduction of carbon emissions, efficient use of energy sources, etc. At the same time, measures for staff training with new technologies, introduction of leisure activities, practices for the development of the innovative concept "Shared values", the "Got it" system for collection, evaluation and implementation of ideas from the employees, communication programs with the community and others are recommended [10,11]. An essential element for achieving the recommendations for sustainable development is the life cycle assessment (LCA) for each production process.There are two aspects with regard to the mining industry [12]. On the one hand, this is the impact assessment of the products from the activity on the environment and comparison of the indicators of different types of mining per kilogram. On the other hand, the mining industry can use LCA to assess the impact of its activities on nature in order to improve the technology. Sustainable development and information technology The introduction and evaluation of production activities according to the criteria for sustainable development can be done through new technical means for data management and processing, related to the complexity of the links among the individual sectors in each company. The implementation of an information system would not only lead to traceability and security of information flows, but would make progress in increasing the innovation potential in the mining sector. At the current stage of production, Bulgaria is the only EU member state with the lowest innovation index [10]. Therefore, process management, assessment of the state and raising the innovation index would not be possible without modern information technologies. The introduction of information technologies in industrial production is associated with positive and negative aspects. Initially, financial resources, training, infrastructural changes, and sometimes even layoffs, recruitment or retraining of staff are needed. On the other hand, remote control, improved security and accountability, traceability of the information flow, etc. could be achieved by using them. Firstly, the implementation of DLT implies security in data transmission, which will first increase trust among partners and therefore improve the working atmosphere. Secondly, it will contribute to the efficient consumption of fuels and resources, which will have an economic effect. Thirdly, the automatic sending and archiving of reports increases the accountability and traceability of all documents to the point of impossibility of theft by third parties. Naturally, each new technology is built on the basis of a previous one. For example, when upgrading databases with DLT, it was found that the information could be compared and evaluated. On the other hand, data are shared among nodes more slowly compared to the SCADA system, which is designed to manage production processes. The advantage of DLT is the security, nonmodifiability and traceability of the transmitted data, which makes it a suitable system not for managing production processes, but for sharing confidential data among commercial, production and regulatory organizations. 
With the implementation of this new information technology, different communication channels appear among subjects which exchange data of mutual interest. Therefore, to assess production support measures according to the criteria of sustainable development, DLT is suitable for sharing information flows in mining enterprises. These are electricity, production, waste, transport, security. As a new technology, whose legislation is from 2019, it is necessary to develop an algorithm for implementation, definition of information channels and definition of the attributes in the blockchain for the described sharing channels. At present, such do not exist yet. The purpose of this article is to determine the stages of the method for implementing DLT and achieving improved communication by clearly defining the communication channels and the type of transmitted information for the transport sector in a mining enterprise. Methods As a new information technology, the regulations for blockchain and DLT are respectively from 2017 and 2019, which necessitates a brief definition of their functions, advantages and disadvantages. The methods for building DLT are based on different platforms -Ethereum, Hyperledger Fabric, Corda and others. These platforms differ according to the method of data transmission, the rights of the participants to modify the data, the transmission speed, etc. Given the history of DLT, the main research and development of the above-mentioned platforms are for cryptocurrency management [13,14], trade and economic flows, as well as their protection against cyberattacks [15]. In recent years, there has been a study on the implementation of the technology in ticket sales in rail transport [16] and fleet management. With the isolation due to the COVID-19 pandemic, the real applications of the technology are increasing.The already implemented and working solutions prove the possibilities for tracking cryptocurrencies to various commercial products. Although the DLT regulation is from 2019, there are already successful commercial applications such as tracking the origin of eggs from Farmers Hen House via QR shared codes, authentication of COS sweaters via My Story™ labels by H&M and VeChain [17,18]. The most visible effect is to the transparent and unmodifiable tracking of supplies of medicines and food, encompassing the companies Deloitte, Maersk, the World Bank and the World Food Program [19]. In the industry, as a representative of the transport sector, Volkswagen has implemented DLT to track parts and origins of materials for the manufacture of batteries for electric vehicles [20], management and settings of cars in the construction of smart cities [21]. An analysis of the platforms shows that the Hyperledger Fabric is preferred in the transport sector, as can be seen from the website [22]. At a round table in Toronto, leading IT and mining experts have identified the following streams as leading directions for the implementation of DLT in the mining industry [23]: 1. Management of the financial resources in communication with banks and suppliers; 2. Communication with state institutions; 3. Human resources management; 4. Repair system when reporting accidents. The management of vehicle traffic is not explicitly mentioned, but haulage is a major activity in large and distributed mining companies and area 4 -the repair and accident system is part of it. For this reason, the article is aimed at defining the channels of the transport information flow and the shared data. 
In order to define the stages for implementing a blockchain-based DLT for transport channels for sharing, these concepts have to be defined first. Since the organizational structure of each enterprise is the framework for reporting the relationships in an organization, the first step of the proposed methodology is to consider the main organizational structures. The values for measurement and transmission to the respective actors are determined according to the structure of the organization and with regard to the selected information flow. The number of channels for the respective information flow is then determined, as not all data are transmitted to all participants in a given communication scheme. In order to achieve consensus, decentralization and equality in data transmission, it is necessary to define a consensus mechanism. The following is a design for smart contracts to set sharing rules and a platform selection. DLT and Blockchain DLT is a decentralized database containing information visible to all actors in real time. The system allows the sharing of the necessary information among the network's participants through the respective synchronization. To achieve security, the data is encrypted and changes by each participant are authenticated with their own key. On different platforms, participants' rights to modify the data may be different, but the common denominator is that once modified, the data cannot be deleted. Thus, the system ensures unmodifiability, traceability and transparency of information flows [24]. The blockchain [25], as the name suggests, is a chain of blocks (codes) connected by cryptographic algorithms, containing identical information. The management of transactions in the network is through a consensus mechanism that validates the allowed transactions when adding them to a block. This is a method of authenticating and validating a value or transaction without the need for explicit trust or reliance on a central institution, i.e. on a third party. Thus, the recommendation for economic optimization of expenditures according to the criteria for sustainable development is implemented by eliminating the need for an intermediary and giving the possibility to each participant to be both a provider and a user of data. The transmission and recording of the information among these participants in the blocks takes place after its verification as genuine through the so-called smart contracts. The choice of blockchain, smart contract and consensus mechanism depends on the parameters of the organizational structure, the rights of the participants to modify the transactions, the number of information flow channels and the type of information flow. It has been established in previous studies that the information flows in the mining industry are the supply and reporting of fuel or repair equipment, the monitoring of energy parameters by shifts and departments, the quantity and quality of the extracted or processed product, the monitoring of emissions [26]. These information flows include data from the measures and laws introduced in recent years to reduce harmful emissions, optimize production, introduce new technologies for extraction and transport. The development of information technologies has necessitated a remote or GPS system for tracking the routes of vehicles. In addition, methods are being developed to study their technical serviceability or efficiency. To carry out the above-mentioned activities, specialized information management is required [27]. 
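To make the notions of block, chain and unmodifiability described above more concrete, the following is a minimal conceptual sketch in Python. It does not correspond to the API of any particular DLT platform, and all names and fields are illustrative only.

```python
# Conceptual sketch (not a specific DLT platform's API) of how a ledger chains
# blocks with cryptographic hashes, so that earlier records cannot be modified
# without invalidating every later block. All names and fields are illustrative.
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class Block:
    index: int
    transactions: list          # validated transactions included in this block
    previous_hash: str          # link to the preceding block
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        payload = json.dumps(
            {"index": self.index, "tx": self.transactions,
             "prev": self.previous_hash, "ts": self.timestamp},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = [Block(0, [], previous_hash="0" * 64)]  # genesis block

    def append(self, transactions: list) -> None:
        prev = self.chain[-1]
        self.chain.append(Block(prev.index + 1, transactions, prev.hash()))

    def is_consistent(self) -> bool:
        # Any tampering with a stored block breaks the hash links that follow it.
        return all(self.chain[i].previous_hash == self.chain[i - 1].hash()
                   for i in range(1, len(self.chain)))
```

In a real deployment, each participant would hold a replica of such a ledger, and the consensus mechanism discussed below would decide which block of validated transactions is appended next.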
In order to improve the efficiency of vehicle management in a mining company through DLT, the transport sector will be analyzed. Due to the differences in mining companies, the options are many and specific, but the information flow in all cases depends on the organizational structure of the company, which requires a brief overview of the main structures. Organizational structure Different organizational structures are characteristic for the mining industry and they depend on the functional peculiarities of production, the history of the organization, and the geographical features. Since mining production includes various activities such as extraction, processing, reclamation, there are different types of companies -Ltd., JSC, holding, industrial group and outsourcing. All these companies are characterized by a hybrid type of organization, which combines functional and geographical type of structure. Holding The holding company is a business entity that combines the assets of various subsidiaries and performs supervisory functions. It is characteristic that it generally does not carry out specific business activities and does not actively participate in the management of the day-to-day operations of its subsidiaries. Therefore, this type is not considered in the present study. Industrial group The industrial group includes companies that can work in various fields -construction, consulting, mining, energy, environment, haulage, security, etc. The companies in the industrial group can interact among themselves, but they can also have projects with external companies and partners. Outsourcing Outsourcing is the business practice of hiring a firm outside the company to provide services or create goods that are traditionally done internally by the company's own employees and staff. It was first recognized as a business strategy in 1989. This way of transferring tasks is undertaken as a cost reduction measure. This is a method for companies to allocate resources where they are most efficient, according to the criteria for sustainable development. It helps to preserve the nature of free market economies worldwide. This way of transferring tasks allows the companies to focus on key aspects of the business, allocating less critical operations to external organizations. In the present study, the communication channels in a given outsourcing company are defined, because in the communication among companies it is possible to reduce the security in data transmission, to lose trust among the partners. Therefore, a need for protection arises when transmitting confidential data without disrupting the communication among the suppliers and the company, etc. The implementation of DLT will further facilitate the efforts of the legal teams of the companies when signing contracts in which the type and the way of sharing confidential data must be precisely mentioned. With the implementation of Internet technologies, in all companies, amorphous parts of the structure have emerged, which can lead to confusion in the management and executive staff. This further reaffirms the need for a DLT, which will clearly define the ways in which the information required is transmitted to each participant. The object of analysis is the transport information flow, as the communication channels proposed in the article can be modified according to the parameters and organizational features of a specific production process. They can also be used by smaller companies. 
Main parameters The implementation of each database, as well as of DLT, begins with defining the data to be transmitted [28]. For a transport company working in the mining sector, it is necessary to differentiate the data according to the actors involved. The actors interested in these data are the Pit, the Processing Plant, the Transport Company and the Mining Company. The Mining Company also communicates and shares data with state-owned companies, agencies and regulatory bodies in accordance with the legislation. It is obvious that there are many participants, and sharing all the information would complicate the work of each of them. Therefore, the following information flows are defined for the transport sector:
1. The amount of ore. In this information flow, the participating companies share data on the following parameters: amount of development, amount of ore transported, number of vehicle courses, fuel consumed, and stope (working face). The actors interested in these data are the Pit, the Processing Plant, the Transport Company and the Mining Company. They all share the measured values and confirm the received records. The Mining Company sends official reports to the Bowels of the Earth Agency and the concession, which only accept these data, i.e. their rights are read-only. The conceptual model for communication in this case is visualized in Fig. 1.
2. Technical condition. The technical condition of the vehicles of the transport company depends on the working hours, the number of courses of full vehicles, the number of courses of empty vehicles, and the distance traveled. In the presence of electric vehicles, the company may have implemented a system for remote diagnostics based on the temperature of certain parts or other parameters. In order to reduce carbon emissions [11], the share of electric vehicles in all sectors of production is expected to increase [29], and the information flow can be supplemented with parameters such as tire condition, battery condition, battery voltage of the individual vehicles, charging time, number of charging cycles, LCC, and management or optimization of a PV renewable source for charging [30], etc. Information flows in the management of electric vehicles are not discussed in this article. The data about the technical condition are transmitted by the transport company to the repair company and to the mining company. The conceptual model for communication in this case is visualized in Fig. 2.
3. Delivery of repair equipment and parts. Every company keeps repair parts in stock, but each repair is also related to the delivery of new ones. In this information flow, the main data are the number and type of parts in stock and the number and type of parts to be ordered. Therefore, when registering a repair event, the transport company contacts the repair company and the mining company. For its part, the repair company contacts all the suppliers. It has information about the technical condition of the fleet and can pre-order the elements necessary for maintenance. The conceptual model for communication in this case is visualized in Fig. 3.
As shown in Figure 4, the Mining Company submits data to all external organizations, which only have the right to view it. The funds for the financial servicing of the fleet and the employees are received by the Mining Company after reporting on the work performed.
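As an illustration of the first information flow, the sketch below shows one possible data model for a record on the "amount of ore" channel and the read/write rights of the actors shown in Fig. 1. The field and actor names follow the parameters listed above; the exact structure, units and naming of a real deployment would differ.

```python
# Illustrative data model for the "amount of ore" channel described above.
# Field and actor names mirror the parameters listed in the text; units and
# the overall structure are assumptions, not a specification of a real system.
from dataclasses import dataclass

@dataclass
class OreTransportRecord:
    amount_of_development: float   # volume of development works in the period
    ore_transported: float         # tonnes of ore hauled
    vehicle_courses: int           # number of courses (trips) of the vehicles
    fuel_consumed: float           # litres of fuel used
    stope: str                     # working face the ore was taken from
    reported_by: str               # actor submitting the record

# Read/write rights per actor on this channel: the operating companies share
# and confirm measurements, while the regulator receives read-only reports,
# as in the conceptual model of Fig. 1.
ORE_CHANNEL_RIGHTS = {
    "Pit":                        {"read", "write"},
    "Processing Plant":           {"read", "write"},
    "Transport Company":          {"read", "write"},
    "Mining Company":             {"read", "write"},
    "Bowels of the Earth Agency": {"read"},
}

def may_submit(actor: str) -> bool:
    """True if the actor is allowed to write records on the ore channel."""
    return "write" in ORE_CHANNEL_RIGHTS.get(actor, set())
```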
In addition, the Transport Company submits a report about its expenditures to the Mining Company, which is in contact with the government agencies (National Revenue Agency (NRA), National Social Security Institute (NSSI)), the electricity distribution company and the regulatory authorities. The only organization that can verify the information sent is the electricity distribution company. This happens when monitoring the parameters of electricity, for example if the transport company has a large number of electric vehicles, which is not reported in the scheme. If it is necessary to halt the activity of the mining company, which is a large consumer of electricity, it warns the electricity distribution company. The same process is repeated at start-up, with the electricity distribution company allowing the resumption of the mining company's activity. If it is necessary to monitor the parameters of electricity, a separate information flow scheme related to the quality of electricity must be drawn up, which is among the future tasks of our team. The study is useful for the mining industry, but the defined flows can also be applied to trace hazardous substances in the chemical and pharmaceutical industries, hospitals [4], automotive [20], engineering, agriculture, etc. The results of the study can be applied in other activities after the identification of the following flows:
- Characteristics: composition of the flow, source, purification (treatment), discharge (qualitative and quantitative composition, time);
- Supply chain: spare parts, fuel and lubricants (quantity, supplier, quality, etc.);
- Maintenance: regular vehicle checks, distance traveled, tracking of the lifetime engineering cycle;
- Document flow: tamper resistance, redundancy, non-repudiation, tracking, etc.;
- Operational data: drivers' working shifts, quantity of raw material transported, regular vehicle position checks.
Consensus mechanism The data from the defined information flows are validated when added to a block and transmitted to the network of ledgers by a consensus mechanism. It is not necessary for all participants to see all the information in this structure, which is one of the main criteria for choosing a consensus mechanism. In the event of a software issue or another problem in one of the communication channels, the system must continue to operate, without requiring high-speed data sharing. Analyzing all these features for the case under consideration, the suitable consensus mechanism is Practical Byzantine Fault Tolerance (PBFT), in which the management is centralized and carried out by the Mining Company. This mechanism allows a clear definition of the participants without excluding the addition of new ones, for example when signing a contract with a new supplier. It provides protection against Byzantine faults [31]. Although the Hyperledger Fabric (v2.2) ordering service does not yet provide a PBFT implementation (the Raft CFT protocol is used), PBFT is marked as a milestone for upcoming releases [32,33]. In addition, its advantages are that no incentives for the nodes and no hash power are required. Smart contracts The business logic of the channel is implemented by smart contracts, which are computer programs or frameworks that automatically take over tasks and responsibilities in the shared ledger. This is computer code that recreates the contractual logic of the real world.
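As an illustration, the sketch below shows the kind of contractual logic such a smart contract could encode for the "amount of ore" channel: a submitted record is committed only after basic plausibility checks and endorsement by the participating companies. This is not Hyperledger Fabric chaincode; the checks, thresholds and the set of required endorsers are assumptions made for the example.

```python
# Sketch of the contractual logic a smart contract on the ore channel might
# encode: a record is committed only if it passes simple plausibility checks
# and is endorsed by the required companies. The required endorsers and the
# checks are illustrative assumptions, not values from a real deployment.
REQUIRED_ENDORSERS = {"Pit", "Processing Plant", "Transport Company", "Mining Company"}

def validate_record(record: dict, endorsements: set) -> bool:
    checks = (
        record.get("ore_transported", -1) >= 0,
        record.get("vehicle_courses", -1) >= 0,
        record.get("fuel_consumed", -1) >= 0,
        bool(record.get("stope")),
    )
    # Commit only when the data are plausible and every required actor has
    # endorsed the record, mirroring the consensus step that precedes adding
    # a transaction to a block.
    return all(checks) and REQUIRED_ENDORSERS.issubset(endorsements)
```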
Acting at the node level under a specific regulation among the participants and the proper functioning of the principle of consensus, they validate and record the shared data. Examples of such platforms are Ethereum, Hyperledger Fabric, etc. The following table 1 provides a comparative analysis of the main platforms. Ethereum Ethereum uses the Proof-of-Work consensus algorithm, in which all participants must reach a consensus on the order of transactions. It lacks confidentiality because any user can see any type of information, which is inapplicable in this case. Corda (R3 CEV) Corda was created for financial institutions, with participants agreeing in advance on a set of rules. The Node-to-Node (N2N) consensus algorithm is used, which allows control over the access to the network records. It is suitable in case of a request of a regulator to make a detailed and comprehensive check of the transactions in the network. Achieving security and indisputability of the records is related to the requirement for all participants to be online. For the mining sector, the use of a consensus mechanism suitable for the banking sector is inappropriate. Hyperledger Fabric It is designed for corporate use and all peers maintain one ledger for the channel to which they are subscribed (channels can be more than one). However, unlike other blockchains, in Hyperledger Fabric not all nodes are the same and this arises as a result of the different roles of the representatives of the organizations in the network, which is appropriate for the considered information flow -fleet management. Hyperledger Fabric allows each network member to identify its representatives, which are configured in appropriate cryptographic materials, such as a Certificate of Identity. There is an opportunity to enter individual settings and preferences when building solutions for shared ledgers, which allows multi-channel communication of the transport company when sharing essentially different information flows. The main shortcoming which is documented is the requirement to develop a list of participants in the network and give them access through membership of a centralized institution. In this case, however, this is an advantage because it coincides with the peculiarities of the considered organizational structure of an outsourcing mining company. Therefore, for the analyzed information flow, this has already been done and Hyperledger Fabric is the appropriate platform. The constraints of the research are the usage of non-permissive DLT platform Hyperledger Fabric in the management of the information flow in a transport company in the mining industry. The implementation of DLT to the mining industry is an even more controversial issue. We were unable to find research papers on the application of DLT in the mining industry. Examining the maturity of technology, there is a rapid development of the legislative and regulatory framework, as well as the number of companies that have implemented the system. However, this involves costs, organizational changes, staff training, an increased attack area and the need for new cyber defense methods [15]. Despite Deloitte's estimates that by 2025, about 10% of global GDP will be based on blockchain structures [34], categories of standards applicable to the DLT and the blockchain have not been developed yet. These are framework standards, technology standards, platformspecific standards and industry-specific standards. 
The above-mentioned shortcomings are typical not only of DLT, but also of most information systems for data transmission. Despite these limitations in the implementation of DLT, such as the ever-changing legal framework, the high electricity costs for data transactions, and the need for staff training, the team believes that applications will increase worldwide. According to [35], a DBMS should provide:
• Data storage, retrieval and update;
• A user-accessible catalog or data dictionary describing the metadata;
• Support for transactions and concurrency;
• Facilities for recovering the database, should it become damaged;
• Support for authorization of access and update of data;
• Access support from remote locations;
• Enforcement of constraints to ensure data in the database abide by certain rules.
Thus, data confidentiality, processing logic, irreversibility, non-repudiation, and data redundancy are not part of an initial, fully fledged DBMS solution. However, they can be partially achieved with additional, explicitly added IT procedures, which are out of the scope of our research. DLT solutions implicitly add a layer that provides the described missing properties on top of the DBMS solution. Conclusions According to the criteria for sustainable development in the mining sector, fleet management through a new information technology has been proposed. The article presents an algorithm for implementing DLT in an outsourcing company for sharing data by a transport company. According to the methodology, the directions for sustainable development of the mining sector were initially clarified and the peculiarities of the organizational structure of an outsourcing mining company were analyzed. In order to build the information management, it is necessary to define the variables for data sharing for a transport company and to describe the main information flows. Conceptual models of the main information flows are proposed. The variables in the described information flows are not final; depending on the specific features of each mining company, changes in the regulatory requirements and in the activity, they can be modified or new ones can be added. Based on the participants in the communication channels, the presence of a centralized management organization, and the required level of security and data transfer speed, the Practical Byzantine Fault Tolerance consensus mechanism was chosen. It ensures continuous operation even in the presence of a software problem, as well as the ability to add new participants. A platform for regulating the contractual data-sharing policy has been chosen. Hyperledger Fabric was chosen because it is the closest to the considered organizational structure, which needs multi-channel data sharing. The suggested solution provides a fast transaction finality time; in Hyperledger Fabric, for example, it varies from 150-200 ms to 2 s. A drawback is the exposure of new attack vectors due to the increased attack surface. This platform is suitable not only for the transport sector in a mining company, but also for the transport sector as a whole. In addition, Hyperledger Fabric is suitable for other information flows in the mining industry: tracking of resources, electricity, etc. New perspectives for further research are the classification of the data passed to the DLT system and the maintenance of the DLT system. The attributes of the blocks for the defined information flows of the transport company within an outsourcing company have not yet been determined; this will be among the future tasks of the team.
2021-08-27T16:34:08.611Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b33606f19454b3778b64b528119f0dcc90101184", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/56/e3sconf_icsf2021_08003.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2d41597a50e09eabe2e7acb183e5ee4ef08a7630", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
210718171
pes2o/s2orc
v3-fos-license
Early Initiation of ARV Therapy Among TB–HIV Patients in Indonesia Prolongs Survival Rates! Background: The HIV epidemic remains a public health problem, with rising tuberculosis (TB) numbers around the world. Antiretroviral (ARV) therapy (ART) is essential to increase the survival of patients with TB–HIV coinfection. The aim of this study is to investigate the effect of the timing of ART initiation during TB treatment on the survival of patients with TB–HIV coinfection. Methods: This is a retrospective cohort study of ARV-naive patients with TB–HIV coinfection from Prof. Dr. Sulianti Saroso Infectious Disease Hospital between January 2011 and May 2014 (N = 275). The Kaplan–Meier method, bivariate analysis with the log-rank test, and multivariate analysis with Cox regression were applied in this study. Results: The one-year cumulative survival probability of the patients with TB–HIV coinfection receiving ARV was 81.5%. The death rate in patients with TB–HIV coinfection who received late ART initiation during TB treatment was 2.4 times higher [adjusted hazard ratio (aHR) = 2.4, 95% confidence interval: 1.3–4.5, p = 0.006] than in patients with early ART initiation, after adjusting for the location of Mycobacterium tuberculosis infection. Conclusion: ART initiation in the intensive phase (2–8 weeks) of anti-TB medication is essential to increase survival in the TB–HIV coinfection group. INTRODUCTION Tuberculosis (TB) remains a leading cause of death, with mortality projected to be 15 times higher among TB–HIV cases than among non-TB–HIV cases [1,2]. Almost 60% of TB cases among people living with HIV are undiagnosed and untreated. Indonesia is one of the TB–HIV high-burden countries, with 360,565 TB cases of all forms notified in 2016, including 14% with known HIV status. Indonesia is listed among the eight countries that account for around 70% of all TB deaths among people living with HIV [1,2]. To ensure that HIV programs integrate with regular TB screening and treatment, UNAIDS (the Joint United Nations Programme on HIV/AIDS), in collaboration with the World Health Organization (WHO), has recommended that countries develop programs that respond immediately to active TB cases, initiating antiretroviral (ARV) therapy (ART) within 2–4 weeks after starting TB therapy [2,3]. The Ministry of Health of the Republic of Indonesia has issued guidelines to support concomitant treatment of the two diseases. The initiation of ART is often deferred until TB therapy is almost completed due to concerns about potential side effects from the Immune Reconstitution Inflammatory Syndrome (IRIS), high pill burden, and drug interactions [4–6]. However, the timing of ART in TB patients remains a controversial issue that plays a significant role [3]. An observational study at Sanglah Hospital showed that, of 60 patients with TB–HIV coinfection, only 20 patients (33.3%) initiated ART within 2 months of starting TB therapy [6]. In fact, ARV reduces the risk of TB infection in people living with HIV by around 65% [2,7]. A trial showed a significant increase in survival with ART initiation within 2 weeks of starting TB therapy [6]. On the other hand, delay in ART initiation may result in AIDS-related illnesses and even death, as 63% of deaths occurred within the first 6 months of therapy [1,5]. Initiating TB–HIV co-treatment benefits patient care and helps achieve optimal treatment outcomes.
Therefore, the aim of this study is to examine the effect of the timing of ART initiation during TB treatment on the survival of patients with TB–HIV coinfection. ARV Therapy for TB–HIV Program The TB–HIV collaborative activities are being implemented at primary health care centers and hospitals in Indonesia. Study Design and Setting This study used a retrospective cohort design to enroll all patients aged >18 years with TB–HIV coinfection who were receiving TB treatment and were ARV naïve at Prof. Dr. Sulianti Saroso Infectious Disease Hospital between January 2011 and May 2014. We compiled the data from the Medical Record Department, the Directly Observed Treatment TB program, and the HIV–AIDS program of the Infectious Disease Hospital Prof. Sulianti Saroso, Jakarta, Indonesia. As the exposed group, we included patients who were diagnosed as HIV positive and were at least 15 years old, had first- or second-category TB, received proper anti-TB treatment, were registered in the national ARV program, and had a duration of TB–HIV treatment of more than 8 weeks. The non-exposed group consisted of patients diagnosed as HIV positive who were at least 15 years old, had first- or second-category TB, received proper anti-TB treatment, were registered in the national ARV program, and had a duration of TB–HIV treatment of 2–8 weeks. This study excluded pregnant women and patients with incomplete medical records (Figure 1). We calculated the sample size for hypothesis testing of two population proportions at a 95% confidence level with a power of 80% [8]. We estimated the minimum sample size to be 248 patients, divided into two groups: the early ART initiation during TB treatment group (124 patients) and the late ART initiation group (124 patients) (Figure 2). Statistical Methods Demographic characteristics, timing of TB and HIV diagnosis, time of TB treatment and ART initiation, number of deaths, and medical characteristics were obtained from the hospital registration book, the ARV monitoring book, and medical records. The Kaplan–Meier method, the log-rank test and the Cox regression model were applied to analyze the data statistically. The data analysis used STATA 11 (Stata Corporation, College Station, TX, USA). RESULTS Of the 275 patients who participated in this study, 131 patients (47.64%) received ART within 2–8 weeks (early ART initiation during TB treatment group) and 144 patients (52.36%) received ART more than 8 weeks after starting TB treatment (late ART initiation group). By the end of the first year of observation, 205 (74.5%) patients were still alive, 49 (17.8%) patients had died, and 21 patients (7.6%) were lost to follow-up. The 49 death cases reported during follow-up consisted of 14 patients (10.7%) in the early ART initiation group and 35 patients (24.3%) in the late ART initiation group (Table 1). Of the male patients, 92 (46.2%) were in the early ART initiation group and 107 (53.8%) in the late ART initiation group; of the female patients, 39 (51.3%) were in the early group and 37 (48.7%) in the late group. The Kaplan–Meier method estimated the cumulative proportion of patients surviving the first year at 81.5% (Figure 3), whereas the survival rates of patients in the early and late ART initiation groups were 89.1% and 74.5%, respectively. The log-rank test showed a significant difference between the two groups (p = 0.003) (see Figure 4).
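The sketch below illustrates how an analysis of this kind (Kaplan–Meier estimates, log-rank test, and Cox regression) could be reproduced. The study itself used STATA 11; this Python example with the lifelines package, including the file and column names, is purely illustrative and does not use the study's data.

```python
# Illustrative reproduction of the reported survival analysis using the Python
# lifelines package. The file name and column names are hypothetical; the study
# itself performed these analyses in STATA 11.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tb_hiv_cohort.csv")      # one row per patient (hypothetical file)
early = df[df["late_art"] == 0]            # ART started within 2-8 weeks
late = df[df["late_art"] == 1]             # ART started after more than 8 weeks

# Kaplan-Meier estimates of first-year survival per group
km_early = KaplanMeierFitter().fit(early["followup_days"], early["died"], label="early ART")
km_late = KaplanMeierFitter().fit(late["followup_days"], late["died"], label="late ART")

# Log-rank test between the two groups (cf. p = 0.003 reported above)
lr = logrank_test(early["followup_days"], late["followup_days"],
                  early["died"], late["died"])
print("log-rank p-value:", lr.p_value)

# Cox proportional hazards model adjusted for the site of M. tuberculosis infection
cox = CoxPHFitter()
cox.fit(df[["followup_days", "died", "late_art", "extrapulmonary_tb"]],
        duration_col="followup_days", event_col="died")
cox.print_summary()   # hazard ratios with 95% confidence intervals
```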
The bivariate Cox regression analysis revealed that late therapy increased the mortality rate [crude hazard ratio (cHR) = 2.5; 95% confidence interval (CI): 1.3–5.1; p = 0.001]. The patients who received late ART initiation had a 2.4 times higher risk of death [adjusted hazard ratio (aHR) = 2.4; 95% CI: 1.3–4.5; p = 0.006] compared with those in the early ART initiation group. Other covariates, for example the location of Mycobacterium tuberculosis infection, were also associated with mortality (aHR = 1.9; 95% CI: 1.0–3.4; p = 0.039) (see Table 2). DISCUSSION This study is the first to analyze survival outcomes over the first year among patients with TB–HIV coinfection receiving early versus late ART during TB treatment at Prof. Dr. Sulianti Saroso Hospital. Survival was better for the patients who received early ART initiation than for those who received late ART initiation. This result is in line with a study in which fewer deaths were reported during follow-up among patients with earlier ART initiation than in the deferred group (p = 0.02) [9]. The survival analysis of this study found that the risk of death for TB–HIV cases that received late ART was higher than for patients who received early ART. Patients who never started ART or started it more than 8 weeks into TB treatment had 1.89 times the rate of death (aHR = 1.89, 95% CI: 1.05–3.40, p = 0.03) compared with those who started ART within 8 weeks of TB treatment [9]. Havlir et al. demonstrated that ART can be safely administered early in the course of TB treatment [10]. The urgency of starting ART during the course of TB therapy depends on the immune status of the patient. Patients who started ART within 2 weeks of the start of TB treatment had reduced mortality and AIDS-related illnesses compared with those who started after more than 8 weeks (from 26.6% to 15.5%) [11]. Moreover, mortality in the CAMELIA study (median entry CD4+ lymphocyte count of 25 cells/mm3; interquartile range, 10–56), as well as death or AIDS-defining illnesses in the low CD4+ strata (<50 cells/mm3) in the Starting Antiretroviral Therapy at Three Points in Tuberculosis (SAPIT) study, were all significantly reduced among patients starting earlier rather than later ART [9,11]. However, especially for patients with CD4 counts ≥50 cells/mm3, the decision on early initiation requires clinical judgment to confirm that the patient has the capacity to manage IRIS and toxicities [7,11]. Recommending early initiation in TB–HIV-coinfected patients, together with better IRIS management practice, is important to maintain the benefits with minimal TB-IRIS risk in health care facilities [3]. In this study, the overall baseline mean CD4 cell count was 66 cells/mm3 and extrapulmonary TB was present in 60 patients (21.82%). These data indicate that the patients in this study were in an advanced HIV clinical stage. Delaying ART initiation is likely to increase the risk of new opportunistic diseases and death, especially in patients with advanced HIV disease [12]. HIV infection causes a rapid decline in immune responses, resulting in the multiplication of the mycobacterium within the granuloma and leading to the reactivation of the infection. It has also been theorized that HIV replication increases within the activated CD4+ T cells and macrophages accumulating at the site of the granuloma, likewise promoting reactivation of the infection and thus reducing the ability to control M. tuberculosis and to survive [13].
We also found that the first year's cumulative proportion of surviving patients was higher than the cumulative survival reported from other hospitals in Indonesia: Dharmais Cancer Hospital (66.4%), Fatmawati Hospital (79.4%), and the Drug Dependence Hospital (54.46%). A likely reason for these differences is that the study populations in the three hospitals were not stratified by the timing of ART initiation (early or late). There are a few limitations in this study. First, the exact dates for patients lost to follow-up were not collected (2.54%). Second, misclassification may have occurred in the grouping of adherence categories. In conclusion, TB–HIV coinfection accelerates disease progression when patients start ART late or never start it. ART initiation during the intensive phase (2–8 weeks) of anti-TB medication is essential to increase the survival of patients with TB–HIV coinfection. CONCLUSION We conclude that it is crucial to start ART in the intensive phase (2–8 weeks) of anti-TB medication to increase the survival rate among TB–HIV coinfection cases. Integrating ART into the TB program might play a significant role in TB–HIV coinfection programs in hospitals and health care services.
2020-01-20T10:45:11.564Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "755dfe386785466f3ea086f938477d0316b253c2", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/jegh.k.200102.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a343358a115941f875c73d433243c108889a5227", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248256730
pes2o/s2orc
v3-fos-license
Do We Mistake Fiction for Fact? Investigating Whether the Consumption of Fictional Crime-Related Media May Help to Explain the Criminal Profiling Illusion The disparity between the ongoing use of criminal profiling and the lack of empirical evidence for its validity is referred to as criminal profiling illusion. Associated risks for society range from misled police investigations, hindered apprehensions of the actual offender(s), and wrongful convictions to mistrust in the police. Research on potential explanations is in its infancy but assumes that people receive and adopt incorrect messages favoring the accuracy and utility of criminal profiling. One suggested mechanism through which individuals may acquire such incorrect messages is the consumption of fictional crime-related media which typically present criminal profiling as highly accurate, operationally useful, and leading to the apprehension of the offender(s). By having some relation to reality but presenting a distorted picture of criminal profiling, fictional crime-related media may blur the line between fiction and reality thereby increasing the risk for the audience to mistake fiction for fact. Adopting a cultivation approach adequate to examine media effects on one’s perception, the present study is the first to investigate whether the perception of criminal profiling may be influenced by the consumption of fictional crime-related media based on a correlation study. Although the results provide support for the assumption that misperceptions of criminal profiling are widely spread in the general population and associated with the consumption of fictional crime-related media, the found cultivation effects are small and must be interpreted cautiously. Considering that even small effects may have the potential to influence real-life decision-making, they may still be relevant and affect society at large. Introduction Criminal profiling (CP) describes the process of analyzing available information on a given crime to infer characteristics of the unknown offender(s) (Chifflet, 2015). The resulting profile typically involves information about the offender's physical characteristics (e.g., age, sex), cognitive processes (e.g., planning of the crime, motive), social status (e.g., level of education, employment status, marital status), and behavior while committing the crime (e.g., how the offender approached the victim) (Kocsis et al., 2000). The aim is to assist police investigations in identifying unknown offender(s) by narrowing down the suspect pool or suggesting new directions of investigation (Homant & Kennedy, 1998;Wilson & Soothill, 1996). Originally starting as an ad hoc practice (Chifflet, 2015), the use of CP has grown immensely during the past decades and is now commonplace worldwide (Snook, Eastwood et al., 2007). As surveys among police officers that have previously been working with CP exhibit, most consider CP to have been operationally useful (69-89%), advanced the understanding of the case (61-89%), and provided accurate predictions (74%), only the number believing that CP has helped identifying the offender (3-78%) varies considerably (Copson, 1995;Snook, Haines, et al., 2007;Trager & Brewster, 2001). 
Moreover, most police officers view CP as a valuable investigative tool (88%) and believe that profilers use sound scientific techniques (59%) (Snook, Haines, et al., 2007). Contrary to the police's positive view of CP, the scholarly literature is increasingly drawing attention to the lack of empirical support for the validity of CP, questioning its usage (Chifflet, 2015). The disparity between the ongoing use, the overall positive attitudes toward CP, and the lack of empirical evidence is also referred to as the criminal profiling illusion (CPI) (Snook et al., 2008) and has far-reaching implications for society. Using CP despite no evidence for its validity is fraught with risks such as misled police investigations, hindered apprehensions of the actual offender(s), and wrongful convictions of innocent citizens (Muller, 2000; Snook et al., 2008). Theoretical accounts to explain this discrepancy share the core idea that people receive and adopt incorrect messages about CP through mechanisms such as the reliance on anecdotes, the continuous repetition of the message that CP works, the myth of profiling experts, and reasoning errors (for a review see Snook et al., 2008). Combining several of these mechanisms, the consumption of fictional crime-related media, which typically provide a distorted portrayal of CP (Dowler et al., 2006), presents a holistic account to explain the CPI. However, to date there has been little research on media consumption as an explanation for the CPI. To address this research gap, the present study investigates whether the consumption of fictional crime-related media may influence one's perception of CP and thus may serve as an explanation for the CPI.

Why is Criminal Profiling an Illusion?
The Lack of Scientific Scrutiny. Since CP has only gradually become subject to scientific scrutiny during its globally increasing use, large parts of its literature have been published without peer review (Chifflet, 2015; Homant & Kennedy, 1998). As a result, articles on CP to a large extent use common-sense-type justifications such as anecdotal arguments, testimonials, authority, and intuition as sources of knowledge (Snook, Eastwood, et al., 2007). Unlike scientific evidence, common-sense-type justifications often rely on retrospective self-report and thus are prone to bias and inaccuracies (Chifflet, 2015; Muller, 2000). Moreover, reviews demonstrate that the theoretical assumptions underlying CP are to a large extent outdated and lack empirical support (Petherick & Ferguson, 2013; Snook et al., 2008). The basic idea behind CP is that the characteristics of an unknown offender can be inferred from their behavior during the crime; it rests upon two main assumptions: behavioral homology (offenders committing similar crimes possess similar characteristics) and behavioral consistency (offenders behave consistently across their offenses) (Turvey, 2012b). The few available studies examining these assumptions provide no or only partial empirical support (Bateman & Salfati, 2007; Bennell & Canter, 2002; Bennell & Jones, 2005; Sjöstedt et al., 2004; Woodhams & Toye, 2007). Moreover, the assumptions have been criticized for neglecting that criminal behavior is influenced by numerous different and particularly situational factors (Turvey, 2012a).

The Validity Dilemma. The lack of scientific scrutiny is closely related to the lack of validity.
Validity is generally defined as "the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of conclusions drawn from some form of assessment" and is divided into criterion (how well a test is correlated with an established criterion of comparison), construct (how well a test measures what it is supposed to measure), and content (how well a test captures a representative sample of the construct of interest) validity (American Psychological Association [APA], n.d.-a, n.d.-b, n.d.-c, n.d.-h). A fourth but less scientific form is face validity, a subjective assessment of how appropriate a test appears to be for measuring the construct of interest, irrespective of the actual empirical support (APA, n.d.-f). The validation of CP is faced with two main issues. The first refers to the need for validation. Although the need to validate CP is obvious from a scientific point of view, the law enforcement perspective may attach greater weight to the perceived utility of CP in the investigation process (Homant & Kennedy, 1998). The second problem is the lack of objective validation criteria and methods (Chifflet, 2015). The few available studies examining the validity of CP mainly focus on its face validity and thus rely on subjective assessments (Ribeiro & Soeiro, 2021). According to Chifflet (2015), the absence of objective validation criteria and methods has led to a highly fragmented validation research focusing on three criteria: the accuracy of profiles, the utility of profiles, and the skills of profilers.

Accuracy. Considering the potential risks related to flawed predictions, the need for accuracy in profiling is crucial (Muller, 2000). Accuracy evaluations are typically done by retrospectively measuring how many of the predictions fit the actual offender(s) (Chifflet, 2015). Two major problems hinder this procedure. The first problem lies in the lack of an objective criterion, leading to a large degree of subjectivity in the assessment of how well a profile fits a person (Homant & Kennedy, 1998). As a content analysis of 21 profiles by Alison, Smith, Eastman, et al. (2003) shows, large proportions of the statements made about potential offender characteristics are not verifiable (55%) or are ambiguous, referring to inner thoughts, fantasies, personal abilities, and emotional or social skills (24%). However, the ambiguity of profile statements does not seem to be recognized as such: Alison, Smith, and Morgan (2003) found support for a tendency to interpret vague or ambiguous profile statements as relatively accurate. The large amount of unverifiable and ambiguous claims renders a comprehensive accuracy analysis of profiles difficult and makes room for cognitive biases, increasing the risk of distorted evaluations. The second problem is the lack of published data based on large samples, due to the reluctance of the police to make their evaluations public (Muller, 2000; Petherick, 2009).

Utility. Due to the absence of objective measurements, the effect of CP, and thus its utility in police investigations, is unknown (Fox, 2022; Snook, Eastwood, et al., 2007). Available studies are based on consumer satisfaction surveys and are highly subjective (Chifflet, 2015). As reviewed earlier, most police officers consider CP operationally useful and a valuable investigative tool (Copson, 1995; Snook, Haines, et al., 2007; Trager & Brewster, 2001).
However, retrospective utility assessments are prone to reasoning errors such as the fundamental attribution error (e.g., investigative success is falsely attributed to a profiler's skills instead of the police officers' work), illusory correlation (e.g., falsely creating links between the resolution of a case and the profile), and confirmation bias (e.g., selective processing of information confirming already existing beliefs) (Snook et al., 2008). Consequently, it remains unknown to what extent the results of utility assessments are influenced by reasoning errors or mirror the actual utility of CP.

Skills of a Profiler. Due to the lack of a regulating body, there is neither consensus nor any formal requirement regarding who is qualified to work as a profiler (Eastwood et al., 2006; Kocsis, 2004). As a result of the reluctance of profilers to participate in research, there are only a few experimental studies examining profiler skills systematically (Chifflet, 2015; Kocsis, 2004). A first meta-analysis of four studies comparing the ability to predict offender characteristics between self-labeled profilers and a control group was done by Snook, Eastwood, et al. (2007). Although the self-labeled profilers seemed to have outperformed the control group on four of five predictive criteria (overall offender characteristics, cognitive processes, physical attributes, social status), conclusions regarding statistically significant differences are difficult to draw due to the imprecision of the estimated effect sizes (Snook, Eastwood, et al., 2007). Moreover, three of the four studies included in the meta-analysis were conducted by the same group of researchers and have been criticized for a range of methodological and conceptual shortcomings (see Bennell et al., 2006).

How May the Consumption of Fictional Crime-Related Media Contribute to the Criminal Profiling Illusion?
Given that crime is a mediated experience for most individuals, their knowledge about and perception of crime and the actors of the criminal justice system are typically derived from, and thus determined by, their representation in the media (Dowler, 2002; Surette, 2015). This may be especially true for CP, since some authors argue that only its depiction in the media has raised public awareness of its existence (Homant & Kennedy, 1998). Considering that television shows featuring crime have become entrenched among the most-watched entertainment programs (Donovan & Klahm, 2015), it is worth looking at how CP is presented in the media and how this depiction may influence the general population's perception of CP.

How Criminal Profiling is Presented in the Media. Since the early 1990s, CP has constantly been featured in both fictional and non-fictional media and has gradually evolved into an omnipresent topic in crime-related media (Herndon, 2007; Muller, 2000). As for non-fictional media, CP has been displayed in numerous biographical books written by profilers, television documentaries, magazines, and newsmagazines (for a review see Herndon, 2007). However, CP is most prominently featured in fictional crime-related television shows (CRTS) (e.g., Silence of the Lambs, Criminal Minds) (Bolton, 2019; Herndon, 2007). Although fictional, many CRTS dealing with CP have some relation to reality, for example by portraying notable real-life profilers or resting their storylines on real-life cases (Bolton, 2019; Dowler et al., 2006; Doyle, 2006).
Although CRTS are therefore often presented as providing a realistic insight into police investigations, they tend to depict a distorted picture of CP (Dowler et al., 2006). According to Herndon (2007), fictional crime-related media often sensationalize and dramatize CP for the sake of entertainment. As a content analysis by Donovan and Klahm (2015) shows, police investigation teams including profilers portrayed in CRTS such as Criminal Minds and NCIS: Naval Criminal Investigative Service reach success rates of 100% and 88%, respectively, in apprehending the offender(s) and never mistakenly suspect innocent citizens. Profilers are hence portrayed as making highly accurate predictions that are operationally useful and lead to the apprehension of the offender(s). By referring to reality while conveying a distorted picture of it, CRTS blur the line between fiction and reality (Donovan & Klahm, 2015; Dowler et al., 2006). This blur may make it difficult for the audience to differentiate between fact and fiction. According to a nationally representative survey among U.S. citizens, more than 40% believe the content provided in CRTS to be somewhat to very accurate (Dowler & Zawilski, 2007). Considering the availability and popularity of CP in CRTS (Snook et al., 2008), it is likely that individuals to a large extent derive their knowledge about CP from fictional media, increasing the risk of mistaking fiction for fact.

How Consumption of Fictional Media Could Influence Perception. A theoretical approach commonly applied to study how media consumption may influence one's perception is the cultivation theory of Gerbner and Gross (1976). Originally developed to investigate the effects of television viewing on the perception of violence and fear of crime, cultivation theory is nowadays also used to examine other crime-and-justice-related topics such as perceived police effectiveness and the willingness to become a police officer (Dowler, 2002; Gerbner et al., 1978; Morgan & Shanahan, 1997; Pollock et al., 2022). The theory focuses on television, as it has long been considered the only media type that could reach large and heterogeneous audiences (Gerbner et al., 2002).

Theoretical Assumptions. The two main underlying assumptions are that television depicts a distorted version of reality and that frequent exposure to this distorted version of reality results in its internalization (Shrum & Lee, 2012). A key mechanism by which the distorted version of reality is conveyed to the individual is the continual repetition of a relatively coherent set of messages that is shared across programs (Gerbner et al., 2002). As pointed out, such messages regarding CP refer to a high accuracy and utility (Donovan & Klahm, 2015). It is assumed that the more individuals are exposed to such coherent sets of messages portraying a distorted version of reality, the more they develop a perception of reality that is consistent with the image conveyed in television shows (Gerbner et al., 2002).

Cultivation Effects. The resulting cultivation effects are measurable and divided into first- and second-order effects, as they are assumed to have different underlying processes (Potter, 1991; Shrum et al., 2011). Both effects are traditionally studied with correlational designs using surveys that entail questions about the participants' media consumption and the respective concept of interest (Gerbner et al., 2002).
First-order effects refer to the overestimation of frequencies, such as the number of doctors, lawyers, and police officers and the prevalence of violence (Gerbner et al., 1980; Hawkins & Pingree, 1982; Shrum, 1996; Shrum et al., 1998). The overestimation occurs as a consequence of a construct's overrepresentation on television, which is transferred to its representation in memory and thus is likely to influence the frequency judgment (Shrum et al., 2011). In contrast, second-order effects refer to subjective judgments such as attitudes and beliefs and are assumed to be constructed during the viewing process as the media content is encountered (Hastie & Park, 1986; Shrum, 2002). Since second-order judgments do not rely on memory, they are considered less effortful and more reliable (Shrum et al., 2011). Examples of second-order effects are higher levels of fear of crime and anxiety linked to heavier television consumption (Bryant et al., 1981; Gerbner et al., 1978).

Previous Studies. To date, there has been little research on cultivation effects on the perception of CP. The only available studies are a non-peer-reviewed bachelor's thesis (Lutfy, 2013) and a doctoral dissertation (Bolton, 2019). In a sample of 96 undergraduate students, Lutfy (2013) found positive correlations between the overall level of television consumption and the belief that profilers can accurately predict the characteristics of a suspect, as well as between the consumption of CRTS and the beliefs that investigators can predict an offender's marital status and that CP provides credible information for an investigation. Moreover, an experimental comparison between the participants' attitudes before and after watching an episode of a fictional CRTS featuring CP yielded a post-exposure increase in the belief that investigators can predict the marital status of a suspect (Lutfy, 2013). Bolton (2019) conducted a similar experiment with 123 participants and found mixed results. After watching an episode of a fictional CRTS featuring CP, participants were more likely to agree that CP contributes useful information but less likely to agree that profilers can accurately predict the characteristics of an offender compared to the pre-test. No change was observed regarding whether CP can provide credible information for an investigation (Bolton, 2019). Research in related fields provides support for cultivation effects regarding the perception of police investigations more generally. As Donovan and Klahm (2015) showed, viewers of CRTS considered the police more successful in apprehending offenders and were less likely to believe that police misconduct leads to wrongful convictions compared to non-viewers. These results closely mirror the media portrayal of police efficacy in fictional CRTS identified in the authors' content analysis (mentioned in How Criminal Profiling is Presented in the Media) (Donovan & Klahm, 2015). Another example is the so-called CSI effect, describing the increase in expectations by crime victims and actors of the criminal justice system regarding the level of resources invested in the investigation of real-life crimes after the rise of fictional CRTS (Dowler et al., 2006). Regarding public attitudes toward the police on a more general level, there is extensive research examining both the importance of and potential drivers of public attitudes toward the police.
Public attitudes toward the police have been shown to play an important role in law-related behavior such as the willingness to support and cooperate with the police (e.g., Jackson & Gau, 2016; Mazerolle et al., 2013; Tyler, 2004; Tyler & Jackson, 2012). Besides socio-demographic characteristics, victimization experiences, perceived safety, and perceived disorder, public attitudes toward the police seem to be primarily dependent on police behavior, more specifically on whether the police are perceived to act fairly and according to values shared with the public (Jackson & Bradford, 2010; Jackson et al., 2012; Jackson & Bradford, 2019). The few available cultivation-approach-based studies examining public attitudes toward police-related concepts focus either on different types of media such as CRTS (Dowler, 2002; Dowler & Zawilski, 2007; Intravia et al., 2018) or news media (e.g., Chermak et al., 2006; Jackson et al., 2012; Rosenberger & Dierenfeldt, 2021), or on crime-related media more generally (Choi et al., 2020). Although indirect cultivation effects mediated by fear of crime, perceived incivilities, and race were found (Choi et al., 2020; Rosenberger & Dierenfeldt, 2021), there is currently no evidence for direct cultivation effects on public attitudes toward police-related concepts (Chermak et al., 2006; Dowler, 2002; Dowler & Zawilski, 2007; Intravia et al., 2018; Jackson et al., 2012). Instead, particularly studies investigating cultivation effects due to the consumption of CRTS found controls such as problems in the neighborhood, age, gender, race, education, and experience with the criminal justice system to be more important predictors (Dowler, 2002; Dowler & Zawilski, 2007; Intravia et al., 2018).

The Present Study
Following cultivation theory, the recurring exposure to the messages continuously repeated in CRTS that CP is highly accurate, operationally useful, and leads to the apprehension of offenders (Donovan & Klahm, 2015) may increase the risk that the audience falsely relies on and internalizes them, and thus mistakes fiction for fact. The few previous studies (Bolton, 2019; Lutfy, 2013) yield mixed results but provide tentative support for cultivation effects, promoting the consumption of CRTS as an explanation for the CPI, which describes the disparity between the ongoing use, the overall positive attitudes toward CP, and the lack of empirical support (Snook et al., 2008). The present study builds on and extends this research by conducting the first correlation study testing for second-order cultivation effects on the perception of CP. Adopting a cultivation approach, the present study subjects a holistic theoretical explanation for the CPI to empirical scrutiny: the internalization of incorrect information about CP due to the consumption of fictional CRTS. By targeting the general population, potential cultivation effects could help to explain the CPI and why the CPI has so far received little public attention. Research on cultivation effects on the perception of CP not only contributes to advancing knowledge about the CPI, helping to close the currently existing research gap, but is also crucial from a societal perspective. Given the availability and popularity of fictional CRTS (Snook et al., 2008), cultivation effects may affect large parts of society, leading to widespread misperceptions of CP.
Such misperceptions may influence real-life decision-making, resulting in real-life consequences for society at large such as misled investigations, hindered apprehensions of the actual offender(s), wrongful convictions of innocent citizens (Muller, 2000; Snook et al., 2008), and mistrust in the police and their methods more generally.

Aim
The overall aim is to investigate whether the consumption of fictional CRTS may serve as an explanation for the CPI. Since the CPI is characterized by both positive attitudes toward and the ongoing use of CP despite no scientific support (Snook et al., 2008), the present study has two subgoals: to examine whether the consumption of fictional CRTS affects the participants' (1) attitudes toward and (2) acceptance of CP as a tool used in police investigations. Applying cultivation theory, it is expected that a higher consumption of CRTS is associated with
• more positive attitudes toward CP, independent of demographic variables and prior knowledge about CP (Hypothesis 1);
• a higher level of acceptance of CP, independent of demographic variables and prior knowledge about CP (Hypothesis 2).
An additional exploratory analysis examines to what extent the attitudes toward and the acceptance of CP may differ between participants with different levels of CRTS consumption (e.g., no, low, medium, heavy viewers).

Study Design
The present study adopts a cross-sectional, correlational study design commonly used in cultivation research (Gerbner et al., 2002) and is based on an online survey. The variables are divided into dependent variables, referring to the perception of CP, and independent variables, more specifically control and media variables.

Sample
The sample consists of 734 participants and was drawn based on opportunity sampling. The participants were recruited online via research-, psychology-, or criminology-related groups in social networks and thus could take part regardless of their current location. The data were collected from March 22 to April 18, 2021, within the scope of the first author's degree project in Criminology (see Greiwe, 2021). The data collection was done using SoSci Survey (Leiner, 2021, Version 3.2.23), a free tool for carrying out online surveys for research purposes. The survey was specifically developed for the present study and based on self-report. To make access to the survey, and thus participation in the study, as easy as possible, the survey was distributed online via a link and could be filled in using any device with internet access, such as smartphones, tablets, and computers. The completion rate was 69.2%, and the average time to complete the survey was 4 minutes 25 seconds. To ensure that the items were understood correctly and to prevent potential technical issues, the survey was tested in a pilot study with seven participants. To determine the required sample size, an a priori power analysis was conducted using G*Power (Version 3.1.9.4). For a multiple regression analysis with one tested predictor and five covariates, α = .05, and 1 − β = .80, the power analysis yielded sample sizes of 395, 55, and 25 to detect small (f² = .02), medium (f² = .15), and large (f² = .35) effects, respectively.
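The a priori power analysis reported above can be approximated without G*Power. The following is a minimal Python sketch, assuming the standard F-test formulation for one tested predictor in a model with six predictors in total (numerator df = 1, denominator df = N − 7) and the commonly used noncentrality convention λ = f²·N; the function names and the simple linear search are illustrative, not part of the original analysis.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, f2, n_tested=1, n_total_pred=6, alpha=0.05):
    """Power of the F-test for the tested predictor(s) in a multiple regression.

    Numerator df = n_tested, denominator df = n - n_total_pred - 1,
    noncentrality parameter lambda = f2 * n.
    """
    df1 = n_tested
    df2 = n - n_total_pred - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)        # critical F under H0
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)    # P(reject H0 | true effect f2)

def required_n(f2, target_power=0.80, **kwargs):
    """Smallest sample size reaching the target power (simple linear search)."""
    n = 10  # keeps the denominator df positive for the default model
    while regression_power(n, f2, **kwargs) < target_power:
        n += 1
    return n

for f2 in (0.02, 0.15, 0.35):
    print(f2, required_n(f2))
```

Under these assumptions, the loop lands close to the reported sample sizes of 395, 55, and 25 for the small, medium, and large effects.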
To take part in the study, participants had to be 18 years of age or older and give written informed consent. No further requirements for participation were set. Consequently, anyone meeting the above-mentioned inclusion criteria could take part in the study. Incomplete questionnaires (n = 226) and participants with only basic English skills (n = 48) were removed. In total, 460 individuals were included in the analysis.

Measures
Independent Variables. The independent variables are divided into control variables, to account for potential covariates, and media variables, to capture the participants' television consumption.

Control Variables. The control variables refer to demographic characteristics such as age (in years), gender (0 = male, 1 = female, 2 = other), highest level of completed education (0 = compulsory education, 1 = high school diploma, 2 = bachelor's degree, 3 = master's degree, 4 = doctor's degree), and English level (0 = basic, 1 = intermediate, 2 = advanced) and were measured with one item each. An additional item asked for prior knowledge about and experience with CP. The item was answered on a 5-point Likert scale (0 = very unfamiliar, 1 = unfamiliar, 2 = neutral, 3 = familiar, 4 = very familiar) and has previously been used by Bolton (2019).

Media Variables. The media variables refer to television consumption in general and to the consumption of fictional CRTS. Both were measured with one item each, asking the participants to indicate how many hours, in a typical week, they spend watching any kind of television and fictional CRTS (such as Criminal Minds, CSI, Mindhunter, etc.), respectively. The items were similar to those used in previous studies on cultivation effects due to CRTS (Bolton, 2019; Donovan & Klahm, 2015; Intravia et al., 2018).

Dependent Variables. The dependent variables refer to the attitudes toward and acceptance of CP. Both concepts were measured with scales that were specifically developed for the present study based on items used in previous studies as well as recurring themes in the CP literature and CRTS. In both scales, the participants were asked to indicate on a 7-point Likert scale (0 = strongly disagree, 6 = strongly agree) to what extent they agreed with the respective statements (see all items in Supplemental Appendix A). To provide comparability between the scales, mean scores ranging from 0 to 6 were calculated for each scale, with higher values indicating more positive attitudes toward CP and greater acceptance of CP, respectively.

Attitudes. The scale capturing the attitudes toward CP consists of 11 items and is divided into three subscales. The subscale accuracy consists of three items developed based on previous studies measuring attitudes toward CP (Bolton, 2019) or investigating the accuracy of CP (Snook et al., 2008; Snook, Eastwood, et al., 2007). The items ask for the attitudes toward (1) the accuracy of predictions based on CP and whether CP contributes (2) useful and (3) reliable information about an offender. The subscale skills is made up of five items dealing with the skills of profilers to predict offender characteristics. Each item picks up a skill that is often depicted in fictional CRTS and has been tested in experimental studies (e.g., Kocsis, 2004; Kocsis et al., 2000). The five items refer to the ability to predict the offender's (1) personality, (2) thinking before, during, or after the offense, (3) physical attributes, (4) behavior while committing the crime, and (5) social status. The subscale evidence contains three items that were created based on studies examining the empirical validity of CP (Snook et al., 2008; Snook, Eastwood, et al., 2007).
The items ask the participants to indicate (1) how rigorously they think the accuracy of CP has been tested scientifically and how well they think (2) the utility of CP in police investigations and (3) the validity of CP are supported scientifically. The internal consistency was excellent for the total scale (α = .92) and ranged from acceptable to good for the subscales (accuracy: α = .80, skills: α = .83, evidence: α = .79).

Acceptance. To measure the acceptance of CP, three items were used (see Bolton, 2019). The participants were asked to indicate the extent to which they agree that (1) police investigations can benefit from CP, (2) the police should rely heavily on CP as an investigative tool, and (3) CP should be implemented by all police departments. The internal consistency was good (α = .77).

Statistical Analysis
The data were prepared and analyzed using the statistical software IBM SPSS Statistics (Version 21). The data preparation was done in two steps. The first step involved checking for outliers. Two participants were removed from the analysis of the overall television consumption due to unrealistic values (140 hours). Due to skewed distributions, both media variables were truncated: the scale for the overall television consumption ranged from 0 to 21 hours (values of 21 or more coded as 21), and the scale for the consumption of CRTS ranged from 0 to 10 hours (0.1-1 hours coded as 1, 1.1-2 hours as 2, and so on, with values of 10 or more coded as 10). The second step was creating the mean scales for the attitudes and acceptance, including reliability analyses. The data analysis was divided into univariate (relative frequencies, descriptive statistics), bivariate (Pearson's r), and multivariate analyses, the latter to test the hypotheses (multiple linear regressions) and to conduct the exploratory analysis (analyses of variance). Before the multiple linear regression analyses were run, visual inspection of a scatter plot of the unstandardized predicted values against the studentized residuals confirmed homoscedasticity and linearity of the relationship between the dependent and independent variables. Multicollinearity was ruled out (all VIFs ≤ 1.3). A histogram of the standardized residuals indicated that the residuals were normally distributed. Regarding the analyses of variance, the values for the consumption of CRTS were classified into groups of no (0 hours), low (1-2 hours), medium (3-5 hours), and heavy (6 or more hours) consumption. Levene's test indicated homoscedasticity. Q-Q plots confirmed normal distribution of the dependent variables in each group. Post-hoc t-tests were run using the Bonferroni correction.

The distribution of both the attitudes and the acceptance (Table 1) is negatively skewed, since the participants mainly used the upper range of the scales, indicating a general tendency toward positive attitudes (M = 3.95, SD = 0.9) and acceptance (M = 4.13, SD = 1.09). [Table 1 note: M = mean; SD = standard deviation; response scale: 0 = strongly disagree; 1 = disagree; 2 = somewhat disagree; 3 = neutral; 4 = somewhat agree; 5 = agree; 6 = strongly agree.] As for the attitudes, the most positive values were found for the accuracy subscale (M = 4.05, SD = 1), followed by the skills (M = 3.91, SD = 0.98) and evidence (M = 3.89, SD = 1) subscales. Whereas more than half of the participants at least somewhat agreed that predictions based on CP have high accuracy (58.5%), more than three-fourths at least somewhat agreed that profiles contribute useful (86.1%) and reliable (75.2%) information about an offender.
Moreover, more than two-thirds at least somewhat agreed that profilers can predict an offender's personality (75%), thinking (69.6%), physical attributes (78%), offense behavior (73%), and social status (69.1%). Although more participants were neutral regarding the scientific evidence than on the other subscales, still almost half of the participants at least somewhat agreed that the accuracy of CP has been rigorously tested scientifically (48.3%), and about two-thirds at least somewhat agreed that the utility of CP in police investigations (65%) and its validity (72.8%) are supported by scientific evidence. Regarding the acceptance, the majority at least somewhat agreed that the police should rely heavily on CP (55.4%) and that CP should be implemented by all police departments (66.5%). Almost all participants at least somewhat agreed that the police can benefit from CP (92.2%). All bivariate correlations are shown in Table 2. Apart from a medium positive correlation between the overall television consumption and the consumption of CRTS (r = .38) and a large positive correlation between the attitudes and the acceptance (r = .82), all other effects were small. The consumption of CRTS, the attitudes, and the acceptance are associated negatively with the educational level and positively with prior knowledge. In other words, a higher consumption of CRTS as well as more positive attitudes toward and a higher level of acceptance of CP are significantly correlated with a lower level of education and a higher level of prior knowledge. Moreover, the attitudes and the acceptance are positively associated with age and the consumption of CRTS. As an additional analysis revealed, the positive correlation between the acceptance and gender is most likely due to the overrepresentation of females and thus can be disregarded.

Multivariate Analysis
Multiple Linear Regressions. The multiple linear regressions on the attitudes and acceptance were conducted block-wise to examine the influence of the control variables (model 1), the overall television consumption (model 2), and the consumption of CRTS (model 3) separately and collectively. The coefficients of both analyses are summarized in Table 3 and reveal a similar pattern, with all models being significant and the third model showing the best, but still weak, model fit (attitudes: R² = .106, acceptance: R² = .102). [Table 3 note: IV = independent variables; AG = age; GD = gender; ED = education; KN = prior knowledge; TG = television in general; CRTS = crime-related television shows. *p < .05. **p < .001.] In both analyses, the most important predictors across all models are education and age, as indicated by the standardized coefficients. In both final models, the consumption of CRTS is the third most important predictor. The amount of additional variance explained by the consumption of CRTS is 1.5% regarding the attitudes and 2.2% regarding the acceptance. Overall, the analyses mirror the bivariate correlations, yielding significant associations between the independent variables age, education, prior knowledge, and consumption of CRTS and both dependent variables. Considering the many significant correlations, potential moderation effects were tested. Separate moderation analyses for each control variable were run but did not reveal any significant results. Thus, it could be assumed that the relationship between the consumption of CRTS and the dependent variables is not moderated by the participants' age, gender, education, prior knowledge, or level of television consumption in general.
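The block-wise procedure can be illustrated in a few lines. This is a minimal Python sketch using statsmodels rather than SPSS, assuming a pandas DataFrame df with hypothetical column names for the variables described above; it fits the three nested models and reports the R² increment contributed by each block.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names; the original analysis was run in SPSS.
blocks = [
    "attitudes ~ age + gender + education + prior_knowledge",              # model 1
    "attitudes ~ age + gender + education + prior_knowledge + tv",         # model 2
    "attitudes ~ age + gender + education + prior_knowledge + tv + crts",  # model 3
]

def blockwise_regression(df: pd.DataFrame, formulas):
    """Fit nested OLS models and report R-squared and its increment per block."""
    r2_prev = 0.0
    fit = None
    for i, formula in enumerate(formulas, start=1):
        fit = smf.ols(formula, data=df).fit()
        print(f"Model {i}: R2 = {fit.rsquared:.3f} "
              f"(increment = {fit.rsquared - r2_prev:.3f})")
        r2_prev = fit.rsquared
    return fit  # final model, e.g., for inspecting coefficients

# final = blockwise_regression(df, blocks)
```

The increment printed for model 3 corresponds to the additional variance attributed to CRTS consumption in the text (1.5% for the attitudes, 2.2% for the acceptance).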
Potential mediation effects were ruled out due to the block-wise procedure of the multiple regression analyses, which controlled for the effects of the control variables. Unlike the bivariate correlations, both final models of the multiple linear regression analyses contain a significant negative association between the overall television consumption and both dependent variables.

Exploratory Analysis. To explore whether the attitudes and acceptance may vary between different consumption levels of CRTS, the participants were divided into groups with no (0 hours), low (1-2 hours), medium (3-5 hours), and heavy (6 or more hours) consumption. The analyses of variance were conducted without and with covariates (Table 4). Without covariates, significant differences in both the attitudes and the acceptance were found, explaining 2.6% and 3.6% of their variance, respectively. Regarding the attitudes, the effect is due to heavy viewers holding significantly more positive attitudes than non-viewers (p = .008). As for the acceptance, both heavy (p = .001) and medium (p = .033) viewers showed a higher acceptance compared to participants not watching CRTS. Moreover, the acceptance was higher among heavy than low viewers (p = .039). When including the covariates, the group differences regarding the attitudes were no longer significant. Regarding the acceptance, only the group difference between non- and heavy viewers remained significant. As in the bivariate correlations and the multiple linear regressions, the attitudes and acceptance were significantly influenced by age, education, and prior knowledge.
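The exploratory grouping analysis reported above could be sketched as follows, again on the hypothetical DataFrame df from the previous sketch; it bins weekly CRTS hours into the four viewer groups and runs a one-way ANOVA with Bonferroni-corrected pairwise comparisons. The covariate-adjusted version reported in Table 4 would additionally require an ANCOVA, omitted here for brevity.

```python
import pandas as pd
from itertools import combinations
from scipy import stats

# Bin weekly CRTS hours into the four consumption groups described above.
edges = [-0.1, 0, 2, 5, float("inf")]
labels = ["no", "low", "medium", "heavy"]
df["crts_group"] = pd.cut(df["crts"], bins=edges, labels=labels)

# One-way ANOVA on acceptance across the four groups.
groups = [g["acceptance"].values
          for _, g in df.groupby("crts_group", observed=True)]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Post-hoc pairwise t-tests with a Bonferroni correction (6 pairs).
pairs = list(combinations(labels, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(df.loc[df["crts_group"] == a, "acceptance"],
                           df.loc[df["crts_group"] == b, "acceptance"])
    print(f"{a} vs {b}: Bonferroni-corrected p = {min(p * len(pairs), 1.0):.3f}")
```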
Discussion
The ongoing use of and positive attitudes toward CP despite the lack of scientific evidence for its validity, also known as the CPI (Snook et al., 2008), is fraught with numerous risks for society (Muller, 2000; Snook et al., 2008). However, to date there has been little research on potential explanations. The present study addresses this research gap and investigates whether the positive perception of CP among police officers (Copson, 1995; Snook, Haines, et al., 2007; Trager & Brewster, 2001) may also be prevalent in the general population and associated with their consumption of CRTS. The underlying rationale is based on a cultivation framework and rests on the popularity and availability of fictional CRTS (Snook et al., 2008), on the notion that individuals typically derive their crime-related knowledge from the media (Dowler, 2002), and on the finding that CRTS convey and continually repeat a coherent set of incorrect messages favoring the accuracy and utility of CP (Donovan & Klahm, 2015).

Misperceptions of Criminal Profiling Are Widely Spread
Like police officers (e.g., Copson, 1995; Snook, Haines, et al., 2007; Trager & Brewster, 2001), most participants hold positive attitudes toward CP, as manifested in the belief that profilers can predict various offender characteristics and that CP is highly accurate and provides useful and reliable information. In line with the positive attitudes, the participants seem to accept CP as a tool used in police investigations. Contrary to forensic psychologists and psychiatrists as well as police psychologists (Bartol, 1996; Torres et al., 2006), the participants did not seriously question the validity of CP. Instead, the participants' positive attitudes toward and acceptance of CP closely mirror its media portrayal as an accurate, operationally useful tool that leads to the apprehension of the offender(s) (Donovan & Klahm, 2015). The results thus support the notion that CP is widely misperceived in society, regarding both its accuracy and scientific validity as well as the skills of profilers to predict offender characteristics. Considering that such misperceptions may influence real-life decision-making processes, leading to real-life consequences for society, the high prevalence of positive attitudes toward and acceptance of CP is alarming.

The Consumption of Crime-Related Television Shows Matters
Based on the cultivation theory of Gerbner and Gross (1976), it was expected that a higher consumption of fictional CRTS is associated with more positive attitudes toward (Hypothesis 1) and a higher acceptance (Hypothesis 2) of CP as an investigative tool, independent of control variables. The multiple linear regression analyses provide support for both hypotheses and thereby underpin the underlying rationale that the recurring exposure to incorrect messages about CP may lead to their internalization. The internalization of such incorrect messages may in turn explain the widespread misperceptions of CP, resulting in a positive perception of CP despite the lack of evidence for its validity (Snook et al., 2008). In contrast to prior studies investigating cultivation effects of CRTS on police-related concepts such as police effectiveness, misconduct, and legitimacy (Dowler, 2002; Dowler & Zawilski, 2007; Intravia et al., 2018), the present study is the first to find support for direct crime-genre-specific cultivation effects. However, the amount of variance explained solely by the consumption of CRTS is small for both the attitudes (1.5%) and the acceptance (2.2%). Although small effects are a common finding in cultivation research, as indicated by a previous meta-analysis (Morgan & Shanahan, 1997), their practical relevance as an explanation for the CPI may be questionable. On the other hand, considering that cultivation effects, no matter their size, can affect real-life actions such as decisions for or against using CP in police investigations, they may have a high societal relevance. Nevertheless, control variables such as education and age explain more variance in both the attitudes and acceptance. Similar patterns were found by Dowler and Zawilski (2007) and Dowler (2002), suggesting that education and age may be more important variables for explaining attitudes toward police-related concepts than the consumption of CRTS. In the present study, the comparatively strong negative associations between education and the attitudes and the acceptance, as indicated by the standardized coefficients in the multiple regression analyses, suggest that more educated participants tend to hold less positive attitudes toward CP and to accept CP as a tool used in police investigations to a lesser degree. Following Gerbner et al. (1980), more educated individuals tend to be less imbued with the distorted version of reality displayed in television shows, also because they tend to be lighter viewers. The negative bivariate correlation found between the educational level and the consumption of CRTS supports this notion. However, the moderation analyses testing education as a potential moderator of the relationship between the consumption of CRTS and the dependent variables were not significant.
Consequently, it can be assumed that the educational level has a direct influence on the attitudes toward and the acceptance of CP.

The Level of Consumption May Matter
A common finding in cultivation research is that cultivation effects predominantly occur among heavy compared to light and medium viewers (e.g., Bryant et al., 1981; Gerbner & Gross, 1976; Gerbner et al., 1978). Generating control groups naïve to television consumption is generally considered difficult (Gerbner & Gross, 1976). Since the present study focused on crime-genre-specific effects, it was possible to establish a control group not watching CRTS on a regular basis. Consistent with previous research, the exploratory analysis tentatively suggests that cultivation effects on the perception of CP mainly occur among heavy viewers, as indicated by more positive attitudes and a higher acceptance among heavy compared to low and non-viewers, respectively. As the results only partially remain significant when controlling for covariates, they must be interpreted cautiously.

Strengths and Limitations
The present study has two fundamental strengths. The first strength is the correlational design. The few previous studies on cultivation effects on the perception of CP (Bolton, 2019; Lutfy, 2013) adopted an experimental design, which has been criticized as inadequate for studying cultivation effects due to its inability to simulate the long-term media exposure assumed to underlie cultivation (Gerbner & Gross, 1976). In contrast, correlational designs can capture associations between variables in a real setting and thus are more appropriate and commonly used in cultivation studies (e.g., Dowler, 2002; Dowler & Zawilski, 2007; Intravia et al., 2018). The second strength is the use of the a priori calculated sample size to ensure that even small cultivation effects could be detected. The limitations concern the internal and external validity of the results. The internal validity describes the extent to which a study design allows conclusions about causal relationships among the observed variables (APA, n.d.-g) and may be undermined by several problems related to the present study's correlational design. Due to the directionality problem in correlational research (APA, n.d.-d), inferences about the causality underlying the found associations cannot be made. However, based on Homant and Kennedy (1998), who state that CP has only come to the awareness of the public due to its depiction in the media, it could be argued that the consumption of CRTS must precede the perception of CP, opening the door for an interpretation toward a causal influence of the consumption of CRTS on the attitudes and acceptance. But since the present study is cross-sectional, capturing only a single point in time, temporal influences on the results cannot be ruled out, rendering causal interpretations even more difficult. Another aspect reducing the internal validity is the low level of variance explained by all variables collectively. It is possible that other important variables bearing on the attitudes and acceptance have been missed, a problem also referred to as underfitting (Eid et al., 2013). Another, more general limitation of the present study is that the collected data were based on self-report. Considering that response bias is a common finding in behavioral research (Rosenman et al., 2011), it cannot be ruled out that the collected data are influenced by, for example, the social-desirability bias.
This limitation may be less relevant for variables such as age and gender but more relevant for the estimation of the weekly media consumption as well as the self-assessed prior knowledge about CP and education. The potential response bias could reduce the validity of the instruments used for the data collection. The external validity refers to the generalizability of the results (APA, n.d.-e). In the present study, the generalizability of the results may be limited as a consequence of the techniques deployed for the data collection. As the present sample was drawn based on opportunity sampling and is characterized by an overrepresentation of women and of young and well-educated individuals, it may not be representative of the general population. Additionally, the participants were recruited online, implying that people without access to the internet, and especially to social networks, may have been underrepresented. Moreover, it is unknown whether the proportions of viewers and non-viewers of CRTS in the general population have been adequately represented. Considering that cultivation effects are typically found among heavy viewers and that heavy television consumption is associated with lower education (Gerbner & Gross, 1976), it is possible that a sample with a more representative level of education would have yielded stronger cultivation effects. Moreover, the fact that temporal influences (e.g., releases of CRTS) on the results cannot be ruled out raises the question of whether the results represent enduring or situational effects.

Conclusion and Future Research
The present study raises awareness of the CPI and its associated far-reaching implications for society but also sheds light on one of its potentially underlying mechanisms: the receipt and adoption of incorrect information on CP provided by fictional media, or more simply put, the mistaking of fiction for fact. Despite limitations reducing the internal and external validity, and thus the causal interpretation and generalizability of the results, the present study is the first to find empirical support for the assumption that misperceptions of CP are widespread in the general population and associated with the consumption of CRTS. Although the cultivation effects found are small, they may have a high societal relevance due to their potential to influence real-life decision-making affecting society at large. The present study can be seen as a starting point for a societal debate on how to act on the finding that our attitudes, and potentially our actions, are influenced by incorrect information derived from fictional media. From a scientific point of view, the present study contributes to a more comprehensive understanding of the CPI and paves the way for future research: To allow for causal interpretation and generalizability of the results, future studies should address the limitations of the present study. To achieve this, a longitudinal design with participants representative of the general population, randomly assigned to groups with different consumption levels of CRTS and a control group naïve to CRTS, is necessary. Since the variables considered in the present study can only partly explain the perception of CP, further investigation of potentially relevant variables is needed. For example, it should be examined whether the portrayal of CP in other media types, such as non-fictional television shows, may also influence one's perception of CP.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material
Supplemental material for this article is available online.
Loss of olfactory sensitivity is an early and reliable marker for COVID-19

Abstract
Detection of early and reliable symptoms is important in relation to limiting the spread of an infectious disease. For COVID-19, the most specific symptom is lost or reduced olfactory function. Anecdotal evidence suggests that olfactory dysfunction is also one of the earlier symptoms of COVID-19, but objective measures supporting this notion are currently missing. To determine whether olfactory loss is an early sign of COVID-19, we assessed available longitudinal data from a web-based interface enabling individuals to test their sense of smell by rating the intensity of selected household odors. Individuals continuously used the interface to assess their olfactory function and, at each login, recorded their symptoms and the results from any COVID-19 test in addition to odor ratings. A total of 205 COVID-19-positive individuals and 156 pseudo-randomly matched control individuals without a positive test provided longitudinal data, which enabled us to assess olfactory function in relation to the test result date. We found that odor intensity ratings started to decline in the COVID-19 group as early as 6 days prior to the test result date (±1.4 days). Symptoms such as sore throat, aches, and runny nose appear around the same point in time, however with a lower predictability of a COVID-19 diagnosis. Our results suggest that olfactory sensitivity loss is an early symptom but does not appear before other related COVID-19 symptoms. Olfactory loss is, however, more predictive of a COVID-19 diagnosis than other early symptoms.

Olfactory dysfunction is a key symptom of COVID-19, and symptom-tracking studies have demonstrated that a sudden loss of olfactory function is the most reliable symptom of the disease (Menni et al. 2020; Gerkin et al. 2021). Anecdotal evidence indicates that olfactory loss is an early symptom, appearing before other symptoms, but objective measures supporting this statement are currently missing. The key to an individual's attempt to limit the spread of any contagious disease is the monitoring of early disease symptoms. At the onset of the COVID-19 pandemic, fever and cough were reported as reliable early symptoms in non-hospitalized cases, and considerable monitoring effort was globally focused on these 2 symptoms (Hu et al. 2020; Wei et al. 2020). However, olfactory dysfunction soon emerged as a symptom of interest (Tostmann et al. 2020), and we now know that a large proportion of individuals with confirmed COVID-19 infection report either complete or partial loss of olfactory function (Hannum et al. 2020; Gerkin et al. 2021). Given that a large proportion of all individuals with COVID-19 lose either all or some olfactory function at some stage of the disease, it is not surprising that a reduced sense of smell is the symptom with the highest odds ratio in non-hospitalized cases (Menni et al. 2020; Rudberg et al. 2020; Gerkin et al. 2021). Olfactory loss at some stage of the disease is so prevalent that it can be used to monitor the increase of COVID-19 prevalence in a geographical area (Iravani et al. 2020; Pierron et al. 2020). In a non-clinical healthy population, however, the relationship between self-assessed and psychometrically assessed olfactory function is poor (Landis et al. 2003).
While most people will notice a sudden and complete loss of olfactory function, awareness of a partial olfactory loss is far lower than of a perceptual loss in other sensory modalities, such as audition and vision. To reliably estimate olfactory loss, probing olfactory function with actual odors is therefore needed. At the onset of the pandemic, an international group of chemosensory scientists provided an online tool that enabled individuals to assess their olfactory performance using 5 selected common household odors from a list of 71 suggestions (Iravani et al. 2020; Snitz et al. 2022). Although the tool is anonymous to protect user privacy, individuals can continuously monitor their odor performance over time using a login mechanism. Importantly, at each login, the user completes a COVID-19 symptom check, reporting potential symptoms such as cough and fever, as well as any formal COVID-19 testing they had undergone and the outcome of that test. Several hundred individuals used the tool to continuously assess their olfactory function and report their potential COVID-related symptoms. Some of them contracted COVID-19 during their use of the tool, thereby creating a natural longitudinal experiment and data that enable direct comparisons between the onset of olfactory dysfunction, COVID-related symptoms, and a potential COVID-19 diagnosis. To this end, we used data obtained between April 2020 and February 2021, originating mainly from the first and second Swedish waves, during which individuals were presumably infected with one of the SARS-CoV-2 virus strains prevalent in the general Swedish population at the time in question: the wild-type, the B.1.1.7 (Alpha), and, to a lesser extent, the B.1.351 (Beta) variants (Public Health Agency of Sweden 2022). Utilizing these unique longitudinal sensory data, we assessed the hypothesis that a decline in olfactory function occurs before other COVID-related symptoms are reported by participants.

Participants
A total of 5,608 unique individuals enrolled, identified themselves as residing in Sweden, and entered complete data on the web-based data registry platform smelltracker.org during the 10 months between April 2020 and February 2021. We here assess only individuals from Sweden because our ethical permit covers only Swedish residents and because the time between the COVID-19 test and the distribution of the result is uniform within Sweden. Moreover, because we were only interested in assessing individuals who provided longitudinal odor data, we excluded all individuals who completed only one session, as well as 161 individuals who rated all odors consistently above 95 on a 0-100 scale, leaving a total of 1,168 individuals. All the remaining individuals were above 18 years old, and their COVID-19 status was either confirmed with a PCR test, so-called C19+ (n = 205; 149 women; age: 43 ± 13), or not determined and labeled undetermined COVID-19 (UC19). Given that the testing date distribution of the UC19 cohort differed from that of the C19+ cohort, we pseudo-randomly selected individuals from UC19 (n = 152; 113 women; age: 45 ± 14) to comparably match the number of individuals in the 2 cohorts for a given month (Fig. 1A).
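The month-wise pseudo-random matching could be implemented along the following lines. This is a minimal pandas sketch, assuming hypothetical DataFrames c19 and uc19 with one row per individual and a test_month column; it illustrates the matching idea only and is not the authors' actual code.

```python
import pandas as pd

def match_by_month(c19: pd.DataFrame, uc19: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """For each calendar month, sample as many UC19 individuals as there
    are C19+ individuals tested in that month (or all available, if fewer)."""
    matched = []
    for month, group in c19.groupby("test_month"):
        pool = uc19[uc19["test_month"] == month]
        matched.append(pool.sample(n=min(len(group), len(pool)), random_state=seed))
    return pd.concat(matched, ignore_index=True)

# controls = match_by_month(c19, uc19)
```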
The study was approved by the Swedish Institutional Review Board (Etikprövningsnämnden); participants did not receive any form of monetary compensation for their participation, and consent was waived. All aspects of the study complied with the Declaration of Helsinki for Medical Research involving human subjects.

Procedure and data collection
All data collection was carried out via the Swedish version of the web platform, on which participants were able to create an account and provide details regarding age, sex (Woman/Man/Other), and their COVID-19 test status (i.e., not tested, tested negative, tested positive). Specifically, regarding the COVID-19 status, if the participant provided no answer or marked "not tested," we labeled them as "undetermined" (UC19). Of note, we did not include participants who marked "tested negative" in the analysis, to avoid biasing our results, given that these individuals may have been tested because they experienced symptoms that were not COVID-19-related. For repeated measurement, the web platform allows individuals to repeatedly report their COVID-19 test status as well as self-test their odor performance. Specifically, for the odor performance test, participants chose 5 household odors from a list of 71 common household items. We had participants rate 5 odors to strike a balance between increased reliability, where more assessments render more reliable data (Kern et al. 2015), and a low burden for participants to facilitate broad participation. The most frequently selected odors are illustrated in Fig. 1B. At repeated testing, the same 5 odors, freshly prepared, were used. Participants then proceeded to smell each odor and, on a separate page for each odor, rated its perceived intensity and pleasantness on visual analog scales ranging from very weak/very unpleasant to very strong/very pleasant, respectively. These scales were coded as ranging from 0 (min) to 100 (max). Participants could smell the odors as often as they liked, and no time pressure was applied. We here focus only on odor intensity ratings. Moreover, in each session, participants were asked to report any experienced COVID-19 symptoms from a list containing the following options: No symptoms, Fever, Cough, Shortness of breath or Difficulty breathing, Tiredness, Aches, Runny nose, and Sore throat.

Data reduction and statistical analysis
We analyzed the C19+ odor intensity ratings to determine the time course of the potential odor intensity impairment with respect to the COVID-19 test result date. The interval during which the odor ratings were evaluated ranged from −25 to +25 days, with the date of the reported COVID-19 test result as day 0. This interval was logarithmically segmented into 13 bins, and ratings entered during each bin were averaged. We used logarithmic bins for 2 reasons. First, the number of individuals decreased exponentially with distance from the COVID-19 test result day; using logarithmic bins prevented skewing of the results due to differences in sample size within each bin. Second, assessing olfactory intensity ratings over a long time both before and after the COVID-19 test result day naturally leads to statistical testing of multiple time points. Using logarithmic bins allowed us to limit the number of statistical tests yet focus our statistical testing power around the date of the reported COVID-19 test result (day 0).
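The exact bin edges are not given in the text, so the following Python sketch is only a plausible reconstruction of the binning step: 13 bins over −25 to +25 days, denser around day 0, with per-bin averaging of ratings.

```python
import numpy as np
import pandas as pd

# Hypothetical reconstruction: log-spaced edges mirrored around day 0,
# yielding 13 bins over the interval [-25, +25] with a central bin at 0.
pos_edges = np.unique(np.round(np.logspace(0, np.log10(25), 7)))  # 1 .. 25
edges = np.concatenate([-pos_edges[::-1], pos_edges])             # 14 edges

def bin_ratings(df: pd.DataFrame) -> pd.Series:
    """Average intensity ratings within logarithmic day-bins.

    `df` needs a `day` column (days relative to the test result date)
    and an `intensity` column (0-100 rating)."""
    out = df.copy()
    out["bin"] = pd.cut(out["day"], bins=edges, include_lowest=True)
    return out.groupby("bin", observed=True)["intensity"].mean()
```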
Naturally, the number of individuals in each bin varies depending on the availability of data for that specific bin, with the maximum number of individuals occurring in the bin that includes the test result date (i.e. equal to 205, the total number of individuals in C19+). To account for this unbalanced distribution, we used Welch's t-test, which does not assume equal variances. Moreover, we created normative baseline values of intensity within the C19+ cohort by averaging ratings 60 days before or after the test result. Because frequentist approaches are more affected by sample size and the number of tests performed, we also assessed the time-course of the intensity judgments within a Bayesian framework, in which we considered priors for the variance on 2 levels (i.e. within and across days, to account for unequal variance in a conservative manner). A half-Cauchy with a scale factor of 10 was used to explain the inter-day variance, and a normal prior with mean 12 and standard deviation 4 was used to explain the intra-day variance. Our complementary Bayesian statistical model was therefore defined, for each day, as:

Odor Intensity ~ Normal(mu, sigma)
mu = b0 + b1 × [C19 interval vs. baseline]
sigma ~ Half-Cauchy(scale = 10)
b1 ~ Normal(0, sigma_b1)
sigma_b1 ~ Normal(12, 4)

Finally, we assessed the time-course of each COVID-19 symptom as a function of days with respect to the test result date. The same data reduction was applied as described above for the time-course assessment of odor intensity impairment. However, the interval for this assessment was reduced to −10 to +10 days, with the date of the reported COVID-19 test result as day 0, to achieve comparable statistical power. To assess differences between symptoms, we first estimated the null occurrence probability of each symptom in the UC19 cohort. Next, using a two-sided binomial test, we determined the corresponding z-value for each bin within the C19+ cohort. Significant and high z-values for a given day indicate that the prevalence of that specific symptom is exclusive to COVID-19, whereas significant and low z-values denote that the symptom is not exclusive to COVID-19. Finally, we followed up on this analysis using logistic regression to assess the earliest day on which each specific COVID-19 symptom manifested in relation to the test result date, and whether that symptom was able to dissociate C19+ from UC19. For each COVID-19 symptom, including odor intensity ratings, we fitted a logistic regression and compared the sensitivity, specificity, and the balanced accuracy, defined as the average of sensitivity and specificity. Nineteen unique individuals (age 43 ± 11 years; 18 women) who were diagnosed with COVID-19 fulfilled the criteria for this analysis with enough longitudinal data. Consequently, we picked 21 random individuals (age 47 ± 14 years; 17 women) from the UC19 cohort who registered data around the same day relative to a hypothetical test result day, here determined as the median of the reported test result dates (i.e. 5 December 2020). Next, for each COVID-19 symptom, including odor intensity ratings, we used the fitted logistic regression model and determined the confusion matrix as well as the balanced accuracy for predicting C19+.
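A minimal sketch of the two-level Bayesian model above in PyMC; the variable names and synthetic data are illustrative, not the authors' code, and the scale prior is truncated at zero so that the standard deviation stays positive (a small practical deviation from the stated Normal(12, 4)).

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
x = np.repeat([0.0, 1.0], 200)          # 0 = baseline, 1 = COVID interval
y = np.clip(70 - 25 * x + rng.normal(0, 12, x.size), 0, 100)  # fake ratings

with pm.Model():
    b0 = pm.Normal("b0", mu=50.0, sigma=25.0)                  # intercept (assumed)
    sigma_b1 = pm.TruncatedNormal("sigma_b1", mu=12.0, sigma=4.0, lower=0.0)
    b1 = pm.Normal("b1", mu=0.0, sigma=sigma_b1)               # interval effect
    sigma = pm.HalfCauchy("sigma", beta=10.0)                  # residual scale
    pm.Normal("y_obs", mu=b0 + b1 * x, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000)
```

The classification follow-up (one logistic regression per predictor, then confusion matrix and balanced accuracy) can be sketched in the same spirit; the numbers below are synthetic stand-ins for the 19 C19+ and 21 UC19 individuals.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, balanced_accuracy_score

X = np.concatenate([rng.normal(45, 15, 19),      # C19+: lower intensities
                    rng.normal(65, 15, 21)]).reshape(-1, 1)
labels = np.concatenate([np.ones(19), np.zeros(21)])

clf = LogisticRegression().fit(X, labels)
pred = clf.predict(X)
print(confusion_matrix(labels, pred))            # rows: true, cols: predicted
print("balanced accuracy:", balanced_accuracy_score(labels, pred))
```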
Onset of reduced odor intensity perception might occur before a positive COVID-19 test

We first sought to know whether measures of odor intensity had decreased before the individual underwent a test for COVID-19. At the time of the study, PCR test results were returned on average within 2 days across the region (Almgren and Björk 2021). To this end, we assessed the intensity ratings for C19+ across 25 days before and after the COVID-19 test result day to determine the curve of odor intensity impairment in COVID-19 over that extended time. Specifically, ratings of the 5 odors were averaged and time-locked to the COVID-19 test result day. We found that the median of the odor intensity ratings started to decline in the C19+ group as early as 6 days (±1.4) prior to the test result date (i.e. day 0 in Fig. 2A).

Onset of odor intensity impairment aligns with the earliest COVID-19 symptoms

Odor intensity ratings declined as early as 6 days (±1.4) prior to the reported test result, thereby suggesting that a decline in odor intensity perception is an early sign of COVID-19. We therefore assessed whether odor intensity values were aligned with other COVID-19 symptoms. There was a significant negative association, r(14) = −0.95, P < 0.001, between median odor intensity ratings and the number of COVID-19 symptoms reported for C19+ over time, as determined by a Spearman rank correlation. This finding demonstrates that odor intensity impairment aligned with COVID-19 symptom progression. Next, we asked whether the onset of decline in odor intensity perception occurred earlier than other non-odor-related COVID-19 symptoms. To test this hypothesis, we first determined whether a COVID-19 symptom is significantly discernible on a specific day in the course of the disease by estimating the probability of reporting a specific symptom in the UC19 cohort. We found that the probabilities of reporting COVID-19 symptoms in the UC19 group were as follows (in descending order): Runny nose, P = 0.32; Tiredness, P = 0.31; Cough, P = 0.21; Aches, P = 0.12; Sore Throat, P = 0.11; Fever, P = 0.06; Shortness of Breath or Difficulty Breathing, P = 0.04. We considered these probabilities as our null hypothesis. Next, we assessed when, across the time-course of ratings, each COVID-19 symptom in the C19+ group significantly stood out from the baseline (i.e. the null probabilities derived from the UC19 cohort) as a function of days locked to the test result date; in other words, when each symptom might serve as an indication of COVID-19. We assessed this using two-sided binomial tests separately for each symptom (Fig. 3). We found that, in addition to olfactory intensity impairment, which started to differentiate the groups 6 (±1.4) days prior to the test result date, Sore throat (z = 4.25, P < 0.01), Aches (z = 3.30, P < 0.01), and Runny nose (z = 2.27, P < 0.03) are the earliest symptoms. It is worth mentioning that although the effect size for Runny nose is smaller than that of most of the aforementioned symptoms, it consistently stays above the significance level for 3-4 days (day −3: z = 3.12, P < 0.01). One other COVID-19 symptom that surpassed the significance level was Fever (z = 2.15, P < 0.05), but at a slightly later time point compared with the other symptoms, around −3 days with respect to the test result day (Fig. 3).
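The per-day symptom test just described can be sketched as follows: the UC19 prevalence serves as the null probability, and a two-sided binomial test (with its normal-approximation z-value) is applied to the C19+ counts for each day-bin. The counts in the usage line are illustrative.

```python
import numpy as np
from scipy.stats import binomtest

def symptom_day_test(k: int, n: int, p0: float):
    """k of n C19+ participants reported the symptom in this day-bin;
    p0 is the null prevalence estimated from the UC19 cohort."""
    z = (k - n * p0) / np.sqrt(n * p0 * (1 - p0))        # normal approximation
    p = binomtest(k, n, p0, alternative="two-sided").pvalue
    return z, p

print(symptom_day_test(14, 30, 0.11))   # e.g. Sore throat against p0 = 0.11
```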
It is worth noting that Shortness of Breath or Difficulty Breathing did appear to increase earlier than −6 days (±1.4), yet due to the low number of observations in the early sessions, we were not able to statistically test this symptom's probability for a wider range of days. Finally, we sought to determine which symptom in our data best predicted a COVID-19 diagnosis at day −6, using logistic regression models fitted to the data of each symptom across the 2 cohorts. To assess the predictive performance of our models, we calculated a confusion matrix, which displays and summarizes model performance according to the known (true label) and predicted (predicted label) outcomes. To this end, the confusion matrix for each symptom's logistic model was computed to estimate the sensitivity and specificity of that symptom. We found that odor intensity impairment has the highest balanced accuracy, 70 %, followed by Runny nose with a balanced accuracy of 69 %. Using chi-squared tests, we further found that the logistic models for odor intensity impairment, χ2 = 13.1, P < 0.01, Runny nose, χ2 = 6.61, P < 0.01, Aches, χ2 = 5.91, P < 0.02, and Tiredness, χ2 = 5.06, P < 0.03, significantly outperformed the constant null model (Fig. 4A). The logistic model for Sore throat performed marginally better, χ2 = 2.75, P < 0.10, than the constant null model. The other symptoms' logistic models (all Ps > 0.34) were not significantly different from the constant null model (Fig. 4B).

Discussion

We demonstrate here that although reduced olfactory ability is an early sign of COVID-19, it does not appear earlier in the disease progression than several other symptoms of COVID-19. However, olfactory dysfunction was the symptom that demonstrated the highest predictability of COVID-19, a finding that has been demonstrated in several other studies using subjective measures. Olfactory ratings started to clearly decline 6 days before participants indicated a positive test result. It is worth highlighting that all participants were regular data providers before their positive test result, meaning that we were in a unique position to assess odor ratings before a potential test result might bias their ratings. It is not possible to know definitively, however, at what point in time participants were infected, because test results might be communicated with different delays and participants did not provide information on how much time had elapsed between the test result and the first post-result data entry. Nonetheless, it is clear that olfactory dysfunction was not only a common COVID-19 symptom but also an early one, appearing around 4 days prior to testing for COVID-19. Although it was a more reliable symptom during the early strains of the SARS-CoV-2 virus (Rudberg et al. 2020), olfactory loss did not, however, seem to occur earlier than other common signs of COVID-19. The main pathway for the SARS-CoV-2 virus into the body is thought to be the angiotensin-converting enzyme 2 (ACE2) receptor, a receptor that is expressed throughout the human respiratory system with high density in the nasal epithelium, and especially in the supporting sustentacular cells (Fodoulian et al. 2020; Hou et al. 2020; Muus et al. 2021). It is therefore not surprising that reduced olfactory function is an early sign of COVID-19, appearing already around 6 days before participants reported their positive test. It is not possible to know exactly when, in relation to their reported positivity, participants were infected.
However, it can be assumed that all participants using the webpage were familiar with media reports of the link between olfactory loss and COVID-19, and can therefore be assumed to have had an interest in quickly performing their next olfactory assessment after receiving news of a positive COVID-19 test. Given that the average incubation time of the SARS-CoV-2 virus is reported as 5-6 days (McAloon et al. 2020), the decline in olfactory sensitivity can then be assumed to occur within the first days after infection. Because the time of testing of included participants stretches over almost a year, several strains (Variants Being Monitored [VBMs]) of SARS-CoV-2 can be assumed to have infected included participants. It is not possible to know exactly what proportions of VBMs dominated in our sample and when, but the wild-type strain, the B.1.1.7 (Alpha), and to a lesser extent, the B.1.351 (Beta) variants were the dominating VBMs in the general Swedish population at the time in question (Public Health Agency of Sweden 2022).

Fig. 3 (caption). COVID-19 symptoms compared to intensity impairment time-courses. The time-courses of the major COVID-19 symptoms (Sore throat, Aches, Runny nose, Fever, Tiredness, Shortness of breath or Difficulty Breathing, and Cough), denoted by filled red connected squares, were assessed and compared to the time-course of rated odor intensity, denoted by filled yellow connected squares. The vertical red line at 0 depicts the test result day. The green distribution, together with the green axis on the right side of the plots, shows the number of individuals for each specific day. Two dotted horizontal black lines show the significance threshold level. Red and yellow arrows show the earliest significant day for symptom and odor intensity inflections, respectively.

Whether olfactory loss is an early and reliable sign of COVID-19 infection also for the B.1.1.529 and BA.1-2 (Omicron) VBMs is, at the time of submission in May 2022, still debated. Tentative data originating from the track-and-trace program in the United Kingdom suggest, however, that fewer individuals report subjective olfactory dysfunction after infection with the Omicron variant (Vihta et al. 2022). That said, these subjective data are collected as early as 1-2 days after a positive test, and it is not yet determined whether the potentially lower numbers are due to a delay in the onset of olfactory dysfunction, or whether reports that the Omicron variant, in contrast to previous VBMs, often causes nasal discharge or congestion might affect these early results (Vihta et al. 2022). In the present study, we assessed olfactory function using intensity ratings of common household odors. In most studies of COVID-19's influence on the olfactory system, olfactory function has either not been assessed, been assessed using subjective self-reports, or been assessed with cued olfactory identification performance. While most people do notice a sudden and complete loss of olfactory function, awareness of a partial loss is far lower than awareness of a comparable perceptual loss in other sensory modalities like audition and vision (Landis et al. 2003). Cued identification performance alone is a crude measure of olfactory function: it is mostly suited to detecting anosmia given the use of strong odors, its difficulty level is partly determined by the similarity between the presented odors and the lures on the cue card, and it partially relies on cognitive and language skills (Larsson et al. 2004; Hedner et al. 2010).
Therefore, to reliably estimate olfactory loss that does not border on anosmia, it is beneficial to probe aspects of olfactory function that are linked to the individual's sensitivity. Odor intensity estimates are linked to the individual's odor detection threshold (Cain 1969). Of higher relevance here, however, is the degree of fluctuation over time; based partly on the same data, we previously estimated the test-retest reliability of the online odor intensity measure at 0.66, a value nearly identical to that of another study assessing the test-retest reliability of odor intensities (Kern et al. 2015). Moreover, this value is in line with common odor detection thresholds, for which reliability between 4 time points has been reported in the range of 0.43-0.85 (Albrecht et al. 2008). In conclusion, we demonstrate that for individuals infected by the SARS-CoV-2 virus, odor intensity ratings start to decline as early as 6 days prior to their reported test result. However, other symptoms of COVID-19, such as aches, shortness of breath, and sore throat, appear around the same point in time. These non-olfactory-related symptoms display lower predictability of a COVID-19 diagnosis. Our results demonstrate that olfactory dysfunction is an early symptom of COVID-19 but not a symptom that appears before other related COVID-19 symptoms.
Influence of Sunlight during Harvest on the Oxidation and Yellowing of Natural Mastic Resins Used as Varnishes on Artwork

The natural resin mastic, composed largely of triterpenes, is used as a varnish on artwork. This study investigates the influence of light on the autoxidation and yellowing of mastic, both during harvest and after application as a film. The nature of photoinitiation reactions is considered, as is the propagation of oxidative processes in both light and darkness. Oxidation, radical content and yellowing were studied by graphite-assisted laser desorption mass spectrometry, EPR and UV/VIS spectrometry, respectively. Exposure to sunlight during harvesting is found to strongly affect the resin. The radical content increases dramatically, and oxidation is accelerated. These differences are also observed during artificial aging under a range of conditions. Mastic that is harvested without exposure to sunlight deteriorates less quickly in all respects. This is attributed to the lack of sunlight-generated radicals and/or labile radical precursors, which are very long-lived in the viscous resin or solid film. Remarkably, radicals are found to be nearly as prevalent in dark-aged films as in those aged in light. Oxidation in the dark is also nearly as fast as with continuous light exposure. These results suggest that dark and light aging are not fundamentally different, in contrast to the conventional model.

Introduction

Many paintings are varnished with the triterpenoid natural resins dammar or mastic [1]. Initially these varnishes saturate and brighten the colors and give a smooth, glossy appearance to the painting. With time they yellow, become brittle and crack. Yellowing may change the subjective impression of a painting significantly (see Fig. 1) and is a main reason for replacement of a varnish. Because removal can damage the painting [2][3][4], various authors have studied the aging behavior of varnishes, with the goal of understanding and slowing their deterioration [1][5][6][7][8][9][10][11][12][13][14][15][16]. Although progress has been made, the problem remains unsolved. Synthetic varnishes and multilayer synthetic/natural coatings have been introduced [17][18][19][20], but these are also not entirely satisfactory, and the search continues for the optimum artwork varnish. Dammar and mastic consist mainly of triterpenoids, with lesser amounts of hydrocarbon polymers and sesquiterpenoids [1][21][22]. Deterioration of these materials proceeds via radical chain reactions (autoxidation), as summarized in Eqns. (1)-(5) [13][23][24]. Initiation via UV excitation of keto groups followed by alpha cleavage (Norrish reaction) has been proposed as a major initiation step [5][14]. The peroxyl radicals are relatively stable, thus step (3) is probably rate determining. Hydroperoxides are homolytically cleaved by heat or light (4), and the chain is branched and propagated. In addition to the reactions shown, others proceed in parallel. Alkoxy radicals RO• can react to alcohols, ethers or ketones. Addition to double bonds may compete with abstraction in step (3) [23][24]. Products like hydroperoxides also undergo non-radical reactions. As a result, many of the known aging products of dammar and mastic varnishes are not expected to form directly by reactions (1)-(5), but nevertheless result from these primary events [9][10]. Components of resins also polymerize and decay again with progressive aging [6][16][21].
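The reaction equations referenced above as Eqns. (1)-(5) did not survive extraction. The following is a reconstruction based on the standard autoxidation chain scheme, chosen to be consistent with the surrounding description (UV-driven initiation, peroxyl formation, rate-determining hydrogen abstraction in step (3), hydroperoxide branching in step (4), and termination); it should be read as a plausible restoration rather than a verbatim one:

\begin{align*}
\text{RH} &\xrightarrow{h\nu} \text{R}^\bullet && (1)\ \text{initiation}\\
\text{R}^\bullet + \text{O}_2 &\longrightarrow \text{ROO}^\bullet && (2)\\
\text{ROO}^\bullet + \text{RH} &\longrightarrow \text{ROOH} + \text{R}^\bullet && (3)\ \text{propagation}\\
\text{ROOH} &\xrightarrow{\Delta,\ h\nu} \text{RO}^\bullet + {}^\bullet\text{OH} && (4)\ \text{chain branching}\\
2\,\text{ROO}^\bullet &\longrightarrow \text{non-radical products} && (5)\ \text{termination}
\end{align*}

A toy numerical sketch of this scheme, with entirely hypothetical rate constants, illustrates the argument made later in the paper: switching off light removes only the photoinitiation step (1), while propagation (2)-(3) and branching (4) keep consuming RH once radicals and hydroperoxides exist.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants in arbitrary units (illustration only)
K2, K3, K4, KT = 1e3, 5e-2, 1e-4, 1e2

def chain(t, y, k_init):
    """y = [RH, R*, ROO*, ROOH]; k_init is the light-dependent initiation
    rate constant (zero in darkness). RO*/HO* from branching step (4) are
    lumped into immediate H-abstraction from RH."""
    RH, R, ROO, ROOH = y
    r1 = k_init * RH            # (1) photoinitiation
    r2 = K2 * R                 # (2) R* + O2 -> ROO*  (constant O2 absorbed in K2)
    r3 = K3 * ROO * RH          # (3) ROO* + RH -> ROOH + R*
    r4 = K4 * ROOH              # (4) ROOH -> RO* + *OH (branching)
    rt = KT * ROO**2            # (5) termination
    return [-r1 - r3 - 2 * r4,          # RH
            r1 - r2 + r3 + 2 * r4,      # R*
            r2 - r3 - 2 * rt,           # ROO*
            r3 - r4]                    # ROOH

sun = solve_ivp(chain, (0, 20), [1.0, 0.0, 0.0, 0.0],
                args=(1e-3,), method="LSODA")       # "harvest in sunlight"
dark = solve_ivp(chain, (0, 500), sun.y[:, -1],
                 args=(0.0,), method="LSODA")       # subsequent dark aging
print("RH left after sunlight phase:", sun.y[0, -1])
print("RH left after dark aging    :", dark.y[0, -1])  # RH keeps being consumed
```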
Radical oxidation in darkness was believed to be of minor importance in varnishes because of rapid termination and insufficient initiation rates [6]. While intuitively reasonable and true for other materials used in artwork, such as oils [25], evidence for this was marginal. One indication was the solubility of photoaged varnishes in polar solvents, versus that of thermally aged varnishes in less polar solvents [5]. The trend was believed to correlate with the degree of autoxidation [6][7], but this assumes that oxidation always leads to strongly polar products that are insoluble in apolar solvents. This is not necessarily true, because photoaging results in enhanced formation of acids compared with thermal aging [6], which strongly decreases solubility in apolar solvents. This is especially the case if UV-rich light sources such as xenon-arc lamps are used [11]. Dark autoxidation is a known phenomenon in general (e.g. drying of oils or synthetic polymers), and was recently also reported for polyterpenes [26]. Absence of light does not alter the propagation pathways (reaction steps (2) and (3)), but decreases the initiation rate (reaction step (1)) [24]. However, our prior work proved that dark oxidation takes place in resins and resin varnishes [16]. Oxidation was easily followed by graphite-assisted laser desorption/ionization mass spectrometry (GALDI-MS). Commercially available resin was found to be in an advanced stage of oxidation, although considered 'fresh' [14][16]. With this technique, analytes are desorbed from 2-µm graphite particles and detected as sodium adducts. Electron paramagnetic resonance spectroscopy (EPR) revealed that considerable amounts of radicals are present in varnishes stored in darkness. This suggests that dark oxidation also proceeds by radical chain reactions, as in light. However, an important question remained: how are the radical reactions initiated without light? The only possibility for thermal initiation seemed to be the homolytic decay of labile peroxides, but these are only produced by the autoxidative processes themselves. To explain this body of data, oxidation and deterioration of the resins was postulated to be initiated while drying in the sunlight on the trees [16]. To test this, and to find out more about the influence of sunlight during harvest on the deterioration of the resulting varnish, mastic resin was collected on the island of Chios, Greece, with and without exposure to sunlight. These samples, along with commercial mastic, were then aged under different conditions to investigate their long-term stability.

Experimental

The 'Chios' commercial mastic was obtained from A. Grogg Chemie (Bern, Switzerland). The fresh mastic resin was harvested near Pyrgi on the island of Chios, Greece. The branches of the trees were wrapped in aluminum foil to protect the resin from sunshine; see Fig. 2. The resin was collected over several days and transferred to sealed containers during early morning or evening, so as to avoid any exposure to direct sunlight. The samples for artificial aging were prepared as follows: the resin was dissolved in a commercial hydrocarbon solvent (Dottisol D40, 1:3 wt%) and filtered. Aliquots were pipetted onto glass microscope slides. Artificial aging was carried out under daylight-simulating lamps (Power Twist True Lite, 'Duro Test' 20TH12 TXC) in an oven at 60 °C. The samples were protected from direct irradiation by the glass window of the oven.
For aging without UV, an additional commercial UV filter (cut-off at 410 nm) was placed between the lamps and the samples. The graphite-assisted laser desorption/ionization experiments were performed on a home-built 2 m linear time-of-flight mass spectrometer. Resolution was improved by delayed extraction to ca. 600 at m/z 500. Ions were extracted using a 21 kV acceleration voltage and a delay time of 180 ns. Desorption was performed using the 337 nm output of a nitrogen laser (VSL-337ND-T, Laser Science Inc., Franklin, MN, USA). The samples were prepared for mass spectrometry as follows: a suspension of 2-µm graphite particles (Aldrich, Buchs, Switzerland) in methanol was allowed to dry on the sample tip. A THF solution of the resin was pipetted onto the graphite and also allowed to dry. The sample quantity was varied empirically for best signal and resolution. The analytes were detected as alkali metal adducts. To avoid spectral confusion, sodium adduct formation was enhanced by addition of a small amount of NaCl to the graphite/methanol slurry. Residual contamination of the spectrometer with diffusion pump oil led to signals at m/z 413, 469, 483 and 507 (marked with asterisks in the Figs.), which interfered with the signals of the triterpenes. These signals appeared immediately after sample insertion and grew with time.

Fig. 2 (caption). Harvest of mastic resin protected from sunlight. After cutting the bark of the mastic trees, the colorless resin oozes out within minutes. Traditionally it is left on the tree to dry for about two weeks. For this study, some branches of the trees were wrapped in aluminum foil immediately after cutting to protect the resin from direct sunlight.

The cw-EPR spectra were recorded on a Bruker spectrometer (ESP 300 E, microwave frequency 9.4 GHz). The magnetic field was determined by an NMR Gaussmeter (ER 035 M, Bruker, Fällanden, Switzerland). The concentration of radicals in the pulverized resin samples was determined by comparing the integral of the absorption line with the linear regression of a series of four standard samples with known radical concentrations. The standard was the fourth line in the spectrum of VO(acac)2 (Acros Organics, Basel, Switzerland, purity: 99 %), dissolved in water-free toluene. Due to physical differences between the calibration standard (toluene solution) and the resin samples (powder), the remaining uncertainty in radical concentration is estimated to be ±20 %. The absolute radical concentrations reported previously [16] are too large by a factor of 5 due to subsequently discovered calibration problems. Nevertheless, the relative proportions are correct, and therefore also the conclusions drawn from the data. UV/VIS spectra were collected with a Uvikon 940 spectrophotometer (a 2-beam instrument; Kontron Instruments, Watford, Herts, UK). Samples of ca. 10 mg resin were dissolved in 5 ml freshly distilled tetrahydrofuran and measured in quartz cuvettes. The varnish sample from the painting 'Damenportrait' (18th century, artist unknown) was provided by the Swiss Institute for Art Research. It was removed with isopropanol during its restoration.

Harvesting of resin with protection from sunlight irradiation resulted in a very low radical content. Irradiation for only four days led to almost twice the amount of radicals, and conventional drying in the sun for about two weeks increased the radical content by more than ten times. These results show a clear effect of sunlight irradiation during harvest on the amount of radicals.
This is a probable cause of subsequent oxidation and degradation of the resin. Radical content also seems to be strongly influenced by the surface/volume ratio, since small beads (tears) contain many more radicals than large pieces, but still far fewer than a varnish film.

Unaged Mastic Resins

The mastic beads or 'tears' that are commercially available are at least six months old. Harvesting and processing on the island of Chios, followed by packaging and transport, are responsible for this delay. Most mastic used on paintings is, however, older, since it is rarely, if ever, purchased on a 'just-in-time' basis. As a result of these factors, what a restorer applies to a painting is quite different from what came out of the tree. This is already apparent to the naked eye: the exudates on the tree are crystal-clear, while commercial mastic beads are usually distinctly yellow. To follow the process of aging from the beginning, samples of mastic were collected on Chios, both in the traditional way, involving two weeks of drying in the sun on or under the tree, and under exclusion of light with no drying period. A nice overview of mastic collection methods is given in [27]. In this study, traditionally collected samples were studied without the usual processing with sea water and soap. As seen in the mass spectra of Fig. 3, there is a dramatic difference in the composition of the truly fresh resin vs. commercial resin. Initially, it shows two dominant groups of peaks, at m/z 461-467 and m/z 477-481. These correspond to the expected main components of mastic [9][12][16]. In contrast, the commercial 'fresh' samples are highly oxidized. A typical series of oxidative peak groups begins to become apparent, at m/z 493, 509, etc. One also begins to observe degradation products at m/z values below those of the parent compounds. Equally dramatic is the difference in radical content of mastic collected or stored for short periods under different conditions, as seen in Fig. 4. A few days of sunlight during harvest increases the radical concentration significantly. In commercial mastic, large beads (tears) contain fewer radicals than small tears, presumably an effect of the surface/volume ratio. Similarly, a varnish film contains many more radicals than a resin bead. Storage in darkness decreases the radical concentration, but it soon reaches a constant level and does not decrease further.

Effect of Indoor Sunlight Aging

As seen in Fig. 5, the difference in the mass spectra between aging with and without indoor sunlight is small. Although oxidation is stronger with light, exposure to air as a thin film is clearly a more important factor in the oxidative changes that occur. The peak groups that develop on aging are spaced by 14-16 mass units, corresponding to incorporation of oxygen, or oxidation with associated unsaturation. This is consistent with the formation of multiple oxidation products from each component of the fresh resin. Identical aging behavior has been observed for pure triterpenes such as hydroxydammarenone (dipterocarpol) [15], a major component of both mastic and dammar. With an increasing number of incorporated oxygens, the probability of polymerization also increases; thus further oxidation leads to more polymerization products, while the pattern in the triterpenoid mass range stays more or less the same (Fig. 5). Polymerization products also decay again, resulting in more decomposition products at low masses [16].
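Two small numeric sketches of the analyses above, with illustrative numbers only. The first generates the expected m/z ladder of sequential oxidation products (each incorporated oxygen adds 16 u) from the dominant unaged peak groups; the second inverts the EPR spin-count calibration, a linear fit of the integrated absorption of the four VO(acac)2 standards against their known concentrations, for an unknown resin sample.

```python
import numpy as np

# Expected oxidative peak ladder from the dominant unaged groups
for parent in (465, 477):
    ladder = [parent + 16 * n for n in range(4)]     # +O increments
    print(f"m/z series from {parent}: {ladder}")     # 465, 481, 497, ...

# EPR calibration: integrated absorption of four standards vs. known
# spin concentrations (values below are invented for illustration)
conc_std = np.array([0.5, 1.0, 2.0, 4.0]) * 1e17     # spins per gram
area_std = np.array([0.9, 2.1, 3.9, 8.2])            # double integrals (a.u.)
slope, intercept = np.polyfit(conc_std, area_std, 1)

area_sample = 3.0
conc_sample = (area_sample - intercept) / slope
print(f"estimated radical content: {conc_sample:.2e} spins/g (±20 %)")
```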
Effect of Sunlight during Harvest on Aging

Two mastic resins were artificially aged: one collected under complete exclusion of light ('protected'), and the other collected in the traditional manner with sunlight exposure ('exposed'), but not processed (e.g. washed) in any other way. Artificial aging was carried out both with and without the UV component of the spectrum (wavelength <410 nm). This was compared to aging in darkness. As seen in Fig. 6, the mastic collected with exposure to sunlight is already significantly oxidized compared with the protected sample. It is not as dramatically altered as a commercial sample, but lies somewhere in between. This sample was dried in the sun and kept in darkness for only two months. The presence of new compounds that are not exuded by the tree means that initiation of decomposition processes has occurred. Since the energy-rich compounds formed via photolytic oxidation propagate the autoxidation process, it was predicted that the exposed sample would age more rapidly than the protected sample. This was found to be the case, as shown in Fig. 7 and Fig. 8. Both with and without UV exposure, the exposed sample was more heavily oxidized at all stages of artificial aging. The peak groups corresponding to sequential oxidation extend to higher masses and are more intense. In the protected sample the singly oxidized group (m/z 481, from m/z 465) remains the most intense, while it is the doubly oxidized group (m/z 507, from m/z 477) that is the maximum of the distribution for the exposed sample.

Fig. 7 (caption; beginning lost in extraction): GALDI-MS of films of 'protected' mastic after 300 h of artificial aging [...] and in darkness, respectively (middle). For comparison, the spectrum of fresh mastic resin is also shown (upper). The mass spectra of the aged films are very similar, and it is obvious that strong oxidation took place in both cases. Thus, oxidation is stronger in light, leading mainly to more polymerization and decomposition, but these processes also proceed in darkness. Exposure to light is obviously not as important as being exposed to the air as a thin film. Signals marked with asterisks are contaminants in the spectrometer.

Fig. 8 (caption). GALDI-MS of films of mastic (conventionally dried in the sun) after 300 h of artificial aging. Strong oxidation occurred under all aging conditions, and again the differences in the mass spectra are rather small. Compared to the aged films of 'protected' mastic (Fig. 7), oxidation is stronger: the peak groups corresponding to sequential oxidation extend to higher masses and are more intense. This is true even if one keeps in mind that the initial composition of the samples was different. The exposed sample contained more triterpenoids appearing at m/z 477 (mostly isomeric acids [9]), containing one oxygen atom more than the prevalent triterpenoids in the protected sample (appearing at m/z 465/467 in the mass spectrum). However, this is not due to a larger degree of oxidation in the fresh resins but to natural variation in the composition of unoxidized triterpenoids.

As found for unaged resins, and as also reported earlier [16], application as a thin film with a large surface/volume ratio is probably more important for degradation processes in varnishes than exposure to light. The mass spectra of samples aged in darkness and exposed to simulated daylight without UV show a quite similar degree of oxidation. Exposure to some UV (simulated daylight through window glass) does have some effect on the mass spectra, but not a large one.
The distribution of peak groups typically extends one increment further to the high-mass side, for both protected and exposed samples. The effects seen in the mass spectra during aging are reflected in the EPR results (see Table). The fresh protected sample had an unmeasurably low radical content; the exposed sample already contained a substantial amount. Aging increased the radical concentration in all cases, but the protected sample was less affected. Again, the presence of initiator compounds in the exposed sample appears to sensitize the resin to further autoxidation. As noted in the introduction, one of the most important aspects of aging from the viewpoint of art conservation is yellowing, a consequence of oxidation. Here too, a clear difference was found between the protected and exposed samples during artificial aging. Samples protected from sunlight during harvest yellowed significantly less, as could easily be observed with the naked eye. As shown in Fig. 9, the absorption at the blue end of the spectrum was lower for the protected samples throughout the aging process. The harvesting method was clearly more important for the yellowing behavior than the nature of the aging conditions. Also notable is the fact that aging in complete darkness is rapid, as also reported before [16]. This was found in the mass spectra, EPR, and yellowing studies. It is also known among restorers that paintings kept in darkness yellow significantly, although the color bleaches somewhat on re-exposure to light. Oxidative radical chain reactions proceeding in darkness support the hypothesis of unsaturated ketones being the cause of yellowing [16]. Although oxidation occurs at a higher rate in light, yellowing is similar whether samples are aged with UV, without UV, or in darkness (Fig. 9).

Fig. 9 (caption). Development of absorption at 300 nm during artificial aging of mastic films (aged with UV, aged in darkness, aged without UV). The resin protected from sunlight during harvest yellowed significantly less than the exposed resin, independent of aging conditions. Even after 300 h of aging, its absorption at 300 nm is still in the range of the exposed sample after only 50 h of aging. Yellowing can be substantially reduced by protection of the resin from direct sunlight during harvest.

The great similarity of all measured parameters between light and dark aging, along with the clear influence of light exposure during harvest, strongly suggests that autoxidation is a process which proceeds indefinitely after initiation. Continuous or repeated light exposure is not needed to promote it.

Naturally Aged Varnishes from Artwork

An example of a naturally aged varnish from a painting is depicted in Fig. 10. The painting was restored and its varnish replaced because of the strong yellowing. In the GALDI-MS, the same oxidation and degradation pattern is visible as in the artificially aged samples, but it is even more advanced: the signals of the initial compounds and primary aging products are smaller compared with the degradation products at m/z < 460, and the peak groups are less distinct than at the beginning of oxidation. This is a typical phenomenon of progressive aging, since new compounds with more or fewer than 30 carbon atoms (thus with masses between those of triterpenoids) can be formed by continued polymerization and degradation [16].

Conclusions

Truly fresh mastic resin was collected with and without the traditional exposure to sunlight. Both these resins were found to be much less oxidized and yellowed than commercially available mastic.
However, the protected samples contained far fewer radicals and were less oxidized than the exposed samples. Application as a varnish film significantly increased the rate of deterioration of all samples. The differences between samples observed at harvest persisted during artificial aging with light and/or heat. The samples protected from sunlight during harvest were in every respect more stable, including yellowing, one of the most important aspects for artwork conservation. Radical initiation in mastic resins appears to occur during harvesting and processing. Thereafter, autoxidation can no longer be stopped, and natural mastic varnishes deteriorate relatively quickly on paintings regardless of the care taken in storing or exhibiting them. Dark and light aging of mastic varnishes are therefore essentially the same process. Samples of varnishes removed during restoration of paintings are consistent with this and with the results of artificial aging.
Impact of Dense and Frequent Surface Observations on 1-Minute-Update Severe Rainstorm Prediction: A Simulation Study

This study aims to investigate the potential impact of surface observations with a high spatial and temporal density on a local heavy rainstorm prediction. A series of Observing System Simulation Experiments (OSSEs) are performed using the Local Ensemble Transform Kalman Filter with the Japan Meteorological Agency non-hydrostatic model at 1-km resolution and with 1-minute update cycles. For the nature run of the OSSEs, a 100-m resolution simulation is performed for the heavy rainstorm case that caused five fatalities in Kobe, Japan on July 28, 2008. Synthetic radar observation data, both reflectivity and Doppler velocity, are generated at 1-km resolution every minute from the 100-m resolution nature run within a 60-km range, simulating the phased array weather radar (PAWR) at Osaka University.

Introduction

A sudden local rainstorm in the summer season is usually associated with a cumulonimbus with active convection. To predict the hazardous severe weather caused by such a rainstorm, numerical weather prediction (NWP) and data assimilation (DA) play an essential role (e.g., Brooks and Doswell 2002; Pielke and Carbone 2002; Simmons and Sutter 2005; Theis et al. 2005). Stensrud et al. (2009, 2013) reported the National Oceanic and Atmospheric Administration (NOAA)'s "Warn-on-Forecast" effort on NWP and DA experiments using the Ensemble Kalman Filter (EnKF, Evensen 1994) for severe convective weather events. Yussouf and Stensrud (2010) performed Observing System Simulation Experiments (OSSEs) using an EnKF system to assess the impact of assimilating phased array radar data every minute at 1-km resolution and showed that it has the potential to improve 50-minute forecasts of severe convective weather. Yussouf et al. (2013) performed a real-case DA experiment at 2-km resolution with 3-minute Doppler radar data and showed promising results. Also, Schwartz et al. (2015) have been working on the Short Term Explicit Prediction Program (STEP), and developed an EnKF system at 15-km resolution and a real-time forecast system at 3-km resolution based on the EnKF. Their results showed improvements in meso-scale precipitation forecasts. More recently, Miyoshi et al. (2016a, b) reported Japan's "Big Data Assimilation (BDA)" project, which proposed orders-of-magnitude more rapidly updated and higher-resolution NWP by taking advantage of next-generation remote sensing and supercomputing technologies, such as the Phased Array Weather Radar (PAWR, Wu et al. 2013; Yoshikawa et al. 2013; Ushio et al. 2015), which has resolutions of 100 m in the radial direction, 1° in azimuth angle, and 200 levels in elevation angle, together with a 100-member EnKF at 100-m resolution conducted on the 10-petaflops K computer. They developed a prototype BDA system with super-rapid 30-second update cycles and showed encouraging results for real-case severe convective weather.
It is generally well known that surface atmospheric conditions are important for the lifecycle evolution of a cumulonimbus (e.g., Troen and Mahrt 1986; Ogura and Yoshizaki 1988; Chen and Avissar 1994; Elfatih and Jeremy 1996; Ducrocq et al. 2002). For example, convergence of a warm and moist air mass near the surface is favorable for convective initiation. Therefore, it would be important to improve the near-surface atmospheric conditions in NWP, and a number of studies have explored assimilating surface observation data to improve severe convective weather prediction. For example, Hacker and Snyder (2005) proposed an approach to improve the planetary boundary layer using a single-column model. Fujita et al. (2007) performed a real-case EnKF experiment in the summer season and demonstrated the impact of surface data on improving the analysis of mesoscale features favorable to active convection. Järvinen et al. (1999) implemented surface DA with the European Centre for Medium-Range Weather Forecasts (ECMWF) operational four-dimensional variational (4D-VAR) DA system and improved forecasts up to 96 hours. Zhang et al. (2006) and Meng and Zhang (2007, 2008) discussed the reproducibility of meso-scale cyclones in EnKF experiments with surface data. Ha and Snyder (2014) performed DA and forecast experiments using surface data from aviation routine weather report (METAR) stations and showed that surface DA leads to an improvement in rainfall amount due to the improved analysis of the planetary boundary layer. Gustafsson et al. (2018) reviewed the operational DA methods for convective-scale NWP and discussed the surface DA techniques at operational centers.

Following the success of the previous studies, the present study aims to investigate the potential impacts of assimilating dense and frequent surface observations on a sudden local rainstorm prediction within the BDA framework of Miyoshi et al. (2016a, b). Previous studies focused on rainstorms with scales of several tens of kilometers or larger. Here we explore severe weather on the order of one kilometer in scale, with a high observing density of several kilometers and a frequency of every minute. Meisei Electric developed an affordable automated weather station named "POTEKA II", which can observe wind speed and direction, temperature, pressure, relative humidity, sunshine, and rainfall amount every 30 seconds, although with relatively larger errors compared with high-cost operational automated weather stations. The idea is to deploy a large number of the low-cost instruments, and this study explores how we could use these dense and frequent surface observation data in convective-scale NWP. We installed the "POTEKA II" stations at seven Kobe city elementary schools and the RIKEN Center for Computational Science (R-CCS) (Fig. 1, blue dots) and have been receiving the data in real time since summer 2013. This study performs a series of OSSEs for the case of a disastrous heavy rainstorm that caused five fatalities in Kobe, Japan on July 28, 2008. The "POTEKA II" stations were not yet deployed in 2008, and we do not use real observation data in this study. Instead, we simulate surface station data at all Kobe city elementary schools (167 locations, Fig. 1) to investigate the potential impact of dense and frequent surface station data on heavy rainfall prediction.

This paper is organized as follows. Section 2 describes the experimental settings, and Section 3 presents the results. Finally, Section 4 provides the conclusion.
Nature run at 100-m resolution

This study used the Japan Meteorological Agency non-hydrostatic model (JMA-NHM; Saito et al. 2006, 2007), which was used as the operational mesoscale NWP model at JMA from September 2004 to February 2017. Previous studies used the JMA-NHM to investigate heavy rainfall events around Japan (e.g., Kato 2006; Seko et al. 2007, 2011).

Figure 2 shows the hourly JMA radar echo composition data on July 28, 2008, which captured strong convective cells around the northern part of Kobe city (black ellipses). A well-developed linear rainband was formed by 0100 UTC (Fig. 2a). The intense rainfall region was extended to the west (Fig. 2b) and moved southward with significant rainfall intensity (Fig. 2c). As the rainband approached Kobe, intense rainfall over 100 mm h−1 occurred, and a large amount of water flowed into River Toga, an urban river located in Kobe. Its water level rose by 1.3 m within only 10 minutes and caused five fatalities, although the river did not overflow. This event was carefully studied by Kusabiraki et al. (2011). In the northern part of the rainband, we found a significant cold pool and outflow near the surface, whereas from the southern side, warm and moist southwesterly winds flowed into the rainband. The intense low-level convergence of the low and high potential temperature air masses maintained and enhanced the rainband. Shoji et al. (2009) mentioned the difficulty of predicting the event with the operational JMA-NHM at 5-km resolution. Seko et al. (2011) successfully simulated this disastrous case in Kobe using the JMA-NHM at 5-km resolution by assimilating precipitable water vapor data from zenith delay observations of the Global Positioning System using the Local Ensemble Transform Kalman Filter (LETKF; Hunt et al. 2007). In the present study, we performed a downscale simulation of the best ensemble member of Seko et al. (2011) and generated the 100-m resolution nature run for the OSSEs using the JMA-NHM. To simulate the detailed structure of this heavy rainfall event as shown by Kusabiraki et al. (2011), the nature run was performed at 100-m resolution. Here, we take a typical multiple-domain nesting strategy, gradually refining the resolution (Fig. 3). Figure 4 shows the nested model domains from Domains 1 to 4, and Table 1 summarizes the model settings for each domain, including the initial times on July 28. The initial and boundary conditions for the outermost Domain 1 come from the ensemble member of the LETKF experiment of Seko et al. (2011). We chose the ensemble member best representing the actual observed rainband. The innermost domain (Domain 4) was used for the nature run at 100-m resolution, simulating the main features of the heavy rainfall event, including the intense linear rainband moving southward (Fig. 5a) and the north-south low-level temperature contrast along the rainband (Fig. 5b). Figure 5a shows a precipitation intensity maximum over 100 mm h−1, close to the JMA composite weather radar echoes, although the timing is slightly delayed by approximately 20 minutes, as in Seko et al. (2011). These features are consistent with the analyses by Kusabiraki et al. (2011).

OSSE

Figure 6 summarizes the general OSSE workflow. This study performs a series of LETKF experiments at reduced 1-km resolution with 40 ensemble members and assimilates PAWR data and surface observations generated from the nature run at 100-m resolution.
Here, we include the model error originating from the different model resolutions. In the OSSEs, we assimilate simulated observations, not real observation data. In fact, neither the PAWR nor the POTEKA II stations were available in 2008. The detailed workflow is described as follows.

a. Initial ensemble members

Figure 6 summarizes how the initial ensemble members of the LETKF were generated. First, a 37-hour numerical simulation at 5-km resolution for Domain 1 was performed (Fig. 6a, blue bar). Here, the initial and boundary conditions were obtained from the operational JMA Global Spectral Model (JMA-GSM) forecasts initialized at 0000 UTC July 27 at 20-km resolution. Forty ensemble members were chosen from the simulation at different times (Fig. 6a, green bar). The first member (M01) was chosen at 0000 UTC July 28 after a 24-hour spin-up, the second member (M02) at 0020 UTC, the third member (M03) at 0040 UTC, and similarly up to the 40th member (M40) at 1300 UTC. Although the forecast times are different, these 40 ensemble fields at 5-km resolution were taken as the initial conditions at 0000 UTC July 27, 2008 (Fig. 6b, leftmost vertical green bar). The 40 ensemble fields of Domain 2 at 1-km resolution were interpolated from the 5-km resolution fields and were integrated for 26.5 hours to generate the initial ensemble members of the LETKF experiments at 1-km resolution at 0230 UTC July 28 (Fig. 6b, green arrows). Here, the lateral boundary conditions were produced from the 5-km simulation (Fig. 6b, blue bar), so that there is no ensemble perturbation at the lateral boundaries. The ensemble initial conditions have relatively large errors, and we investigate the impacts of dense and frequent surface DA on the severe rainfall prediction.

b. DA cycles

From 0230 UTC July 28, the NHM-LETKF (Miyoshi and Aranami 2006; Kunii 2014) at 1-km resolution in Domain 2 was cycled every minute for 1.5 hours (from 0230 UTC to 0400 UTC), with the boundary conditions from the simulation initialized at 0000 UTC July 27 at 5-km resolution (Fig. 6b, red). The localization scale was chosen to be 2000 m in the horizontal and 1000 m in the vertical (Table 2), where the localization length corresponds to one standard deviation of the Gaussian function. The cutoff lengths are given by 2√(10/3) σ, where σ is the localization length standard deviation, i.e., 7302 m in the horizontal and 3651 m in the vertical (Table 2). The adaptive covariance inflation of Miyoshi (2011) was adopted.

c. Synthetic observation data

To perform a series of 1-km mesh OSSEs and to simulate synthetic observation data, we conducted the 100-m mesh nature run. As described above, neither the PAWR nor the POTEKA II stations were available in 2008, and real observation data do not exist. The synthetic reflectivity and radial wind PAWR data in polar coordinates were simulated from the 100-m resolution nature run by applying the observation operator of Maejima et al. (2017).
The simulated PAWR data were interpolated to Cartesian coordinates at 100-m horizontal resolution and were converted to 1-km horizontal resolution by averaging the 10-by-10 pixel values of the 100-m resolution data for each 1-km pixel. Independent, unbiased white observational noise drawn from a normal distribution was added to each datum; the error standard deviations are assumed to be 10 % for both reflectivity and radial velocity, but are fixed at 2 dBZ for reflectivity if they would fall below 2 dBZ and, similarly, fixed at 3 m s−1 for radial velocity (Table 3). Observation error bias is not considered explicitly, although the resolution difference between the nature run (100 m) and the DA experiments (1 km) might contain biased differences implicitly.

For the surface data, we first identified the locations of all Kobe city elementary schools (Fig. 1, red and blue dots) from the official website of the Kobe city government (http://www.city.kobe.lg.jp/safety/prevention/evacuation/). The 100-m resolution nature run at the lowest model level (20 m elevation) was interpolated bi-linearly in the horizontal to the locations of the elementary schools. In the vertical, the actual elevations of the surface stations were not considered, and we simply used the lowest model level data. Similar to the PAWR data, independent unbiased white random noise from a normal distribution was added to the interpolated nature run data to simulate the observation noise. Table 3 shows the observed variables and corresponding error standard deviations for the simulated observations. The observation error standard deviations are set larger than the instrumental errors by considering additional errors from various possible factors, including representativeness and the observation operators (Table 3, middle column). For reference, the measurement errors in the actual instrument specifications are also shown in the right column of Table 3.

Table 3 (excerpt). Assumed observation error standard deviations (middle column) and instrument specification errors (right column):
Horizontal wind components (u, v): 50 % (minimum 2 m s−1); ±1 m s−1
Relative humidity: 10 %; ±5 %
Temperature: 1 K; ±0.5 K
Pressure: 1 hPa; ±0.5 hPa

d. Observation scenarios

We investigate three scenarios to evaluate the impact of dense and frequent surface DA; the main series of LETKF experiments is listed in Table 4a. The control experiment (CTRL) assimilated only PAWR data. Two other experiments assimilated both PAWR data and surface data. The S8 experiment assumed the existing sites (8 points, blue dots in Fig. 1), and the S167 experiment additionally included all other Kobe city elementary schools (167 points, blue and red dots in Fig. 1). At each LETKF step, approximately 400,000 PAWR data were input for assimilation. In addition, 835 (40) surface data were available in S167 (S8), only 0.2 % (0.01 %) of the PAWR data volume. To investigate the impact of DA and the influence of the boundary conditions, an experiment without DA (NO-DA) was also performed. We also performed sensitivity experiments based on S167 to find the relative importance of each observed variable. Here, each single variable, such as horizontal winds, pressure, relative humidity, or temperature, was assimilated separately. The list of these experiments is summarized in Table 4b.
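For concreteness, a minimal sketch of the ensemble transform analysis step underlying the 1-minute LETKF cycles of Section b, in the Hunt et al. (2007) formulation. This is an illustration, not the NHM-LETKF code; in an actual LETKF the update is performed independently for each grid point, with observation error variances inflated by a distance-based localization function.

```python
import numpy as np

def etkf_update(Xb, yo, HXb, r_var, rho=1.0):
    """One ETKF/LETKF analysis step.
    Xb   : (n, m) background ensemble (n state variables, m members)
    yo   : (p,)   observations
    HXb  : (p, m) background ensemble mapped to observation space
    r_var: (p,)   observation error variances (diagonal R)
    rho  : multiplicative covariance inflation factor
    """
    n, m = Xb.shape
    xb_mean = Xb.mean(axis=1)
    Zb = Xb - xb_mean[:, None]                    # state perturbations
    yb_mean = HXb.mean(axis=1)
    Yb = HXb - yb_mean[:, None]                   # obs-space perturbations

    C = Yb.T / r_var                              # Yb^T R^{-1}
    Pa = np.linalg.inv((m - 1) / rho * np.eye(m) + C @ Yb)
    wa = Pa @ C @ (yo - yb_mean)                  # mean update weights
    evals, evecs = np.linalg.eigh((m - 1) * Pa)   # symmetric square root
    Wa = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
    return xb_mean[:, None] + Zb @ (Wa + wa[:, None])   # (n, m) analysis
```

The synthetic-observation conventions of Section c (10-by-10 block averaging to 1 km, then relative noise with absolute floors) can be sketched as follows, with a random field standing in for the nature-run reflectivity:

```python
rng = np.random.default_rng(0)

def block_average(field, factor=10):
    """Coarsen a 2-D 100-m field to 1-km by averaging factor x factor blocks."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

def perturb(truth, rel_err, floor):
    """Unbiased Gaussian noise with sd = max(rel_err * |truth|, floor)."""
    return truth + rng.normal(0.0, np.maximum(rel_err * np.abs(truth), floor))

refl_obs = perturb(block_average(rng.random((1200, 1200)) * 40.0), 0.10, 2.0)
# radial velocity would use perturb(..., 0.10, 3.0) analogously
```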
Forecast experiments

To investigate the impact of surface DA on forecasts, we perform 30-minute forecast experiments at 1-km resolution (Domain 2) with the JMA-NHM, initialized by the ensemble mean analyses at 0300 UTC (after 30 cycles), 0310 UTC (after 40 cycles), and 0320 UTC (after 50 cycles). The numerical model settings are the same as those of the nature run, but at 1-km resolution.

Verification method

To verify the analyses and forecasts, we take the difference from the nature run. Here, the nature run at 100-m resolution is reduced to 1-km resolution by averaging the 10-by-10 pixels for each 1-km pixel. This way, we obtain the differences between the experimental results and the nature run at 1-km resolution. The differences from the nature run are considered to be the errors, so that we can compute the root mean square errors (RMSEs) and the ensemble spread at 1-km resolution. For the domain average, we take the entire model domain of the nature run, of size 120 km by 120 km (Domain 4). We consider the ensemble mean fields to be the best theoretical estimate and focus on the ensemble mean fields for verification. Hereafter, the nature run refers to the averaged nature run fields at 1-km resolution unless otherwise noted.

General performance of the LETKF system

To investigate the general performance of the LETKF system, Fig. 7a shows the time series of the analysis RMSE and ensemble spread for the water vapor mixing ratio at the 2-km level of the JMA-NHM's terrain-following vertical coordinate (z* = 2 km; see Saito et al. 2006 for the definition). This measure is chosen because water vapor in the lower troposphere is strongly related to precipitation. The RMSE drops rapidly when the PAWR data are assimilated (blue full line). After 40 cycles of the LETKF (0310 UTC), the RMSE reaches an asymptotic level about half that of NO-DA (black full line). Although only radar reflectivity and Doppler velocity were assimilated, we find clear improvements in the moisture fields. The ensemble spread also shows a similar drop (blue dashed line). Figure 7b shows the time series of the ensemble spread in all four experiments. With more observations, the ensemble spread becomes smaller. Overall, the results suggest that the LETKF system performs stably with 40 ensemble members. The ensemble spread of NO-DA is nearly flat (black broken line), so that the unperturbed boundary conditions have a limited impact in this case study.

Control experiment (CTRL)

To investigate the impact of the PAWR DA at the first analysis, Fig. 8 shows the water vapor mixing ratio at z* = 2 km at 0231 UTC, after the first analysis. The nature run shows moist areas corresponding to well-developed convection from west to east near 34.8 N (Fig. 8a). NO-DA, or the first guess, underestimates the moisture (Fig. 8b), but CTRL shows rainfall and moist areas extended from 34.5 N to 35.0 N, closer to the nature run (Fig. 8c). Figure 9 shows the accumulated rainfall amount from 0310 UTC (40 cycles after the spin-up) to 0400 UTC (90 cycles). In NO-DA, the main convective line is completely missing (Fig. 9b). By contrast, CTRL shows precipitation patterns corresponding to the main convective line (Fig. 9c). Although the precipitation amount is underestimated, it captures peak values over 50 mm h−1. Even though the LETKF experiment employs the reduced-resolution model, PAWR DA was generally effective in simulating the main convective rainband.
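A sketch of the verification metrics defined above, assuming fields already coarsened to the common 1-km grid:

```python
import numpy as np

def rmse(ens_mean, nature):
    """Domain-averaged RMSE of the ensemble mean against the nature run."""
    return float(np.sqrt(np.mean((ens_mean - nature) ** 2)))

def ens_spread(ensemble):
    """Domain-averaged ensemble spread; ensemble shape (members, ny, nx)."""
    return float(np.sqrt(np.mean(ensemble.var(axis=0, ddof=1))))
```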
Impacts of surface data (S8 and S167)

We now investigate the impact of the surface observation data in addition to the PAWR data (Table 4). Figures 10 and 11 show side-by-side comparisons of the nature run, NO-DA, and the three LETKF experiments (CTRL, S8, and S167) at the lowest model level (z* = 20 m) after 45 cycles of LETKF (0315 UTC). First, we focus on the rain mixing ratio around (34.7 N, 135.3 E) (Fig. 10, black ellipses), corresponding to the disaster site. The peak value of rain mixing ratio in S167 reaches more than 5 g kg−1, similar to the nature run, although the heavy rain area is slightly reduced. This is likely related to the resolution degradation from the nature run at 100-m resolution. S8 shows an improvement in rain mixing ratio compared with CTRL, but it is considerably smaller than that of S167. NO-DA shows no rain at all.

Figure 11 shows the near-surface divergence fields. Shoji et al. (2009) and Kusabiraki et al. (2011) pointed out that a strong convergence zone extended along the leading edge of the convective line, a noteworthy feature tied to the heavy rainfall. The nature run shows the strong convergence zone clearly (Fig. 11a, black ellipse). CTRL captures some of this feature from the PAWR data (Fig. 11c), and it is improved in S8 and S167 (Figs. 11d, e). In S167, the convergence zone extends in the northwest direction compared with the other experiments (Fig. 11), better matching the nature run (Figs. 11a, e). So far, we have found that the surface data had a significant impact even though the observations are confined to the ground and the data volume is only a tiny fraction of the PAWR data. To investigate how the impact of the surface data propagates horizontally and vertically in time, we examine the evolution of the differences of equivalent potential temperature (EPT) and winds. Figure 12 shows the differences between CTRL and S167 and between CTRL and S8. At 0231 UTC, after the first analysis cycle (Fig. 12a1), the EPT differences spread widely in the horizontal, corresponding to the observation distribution (Fig. 1). The vertical impact extends up to approximately 1500 m, as limited by the vertical localization. As the LETKF cycle progresses, the area of large EPT analysis increments becomes concentrated around (34.7 N, 135.3 E) (Figs. 12a2-a4), corresponding to the improvement of rain mixing ratio (Fig. 10c). We also find a large impact on horizontal winds, so that the convergence is enhanced. The vertical signals also extend to higher levels as the LETKF cycle progresses. The area of large EPT analysis increments propagated to upper levels with intensified upward motion at approximately 135.3 E, where the intense rain occurs. S8 shows generally similar improvements, although in a narrower region and with smaller amplitudes (Fig. 12b). The narrower region after the first cycle (Fig. 12b1) corresponds to the observational sites of S8 (Fig. 1, blue). Figure 13 shows the mean-sea-level temperature at 0315 UTC (after 45 cycles of LETKF). The lowest-model-level temperature is reduced to mean sea level by assuming an atmospheric lapse rate of 6.5 K km−1. Compared with NO-DA, CTRL is closer to the nature run. Additional surface observations improve the surface temperature field further (Figs. 13c, d, e). As Shoji et al. (2009) and Kusabiraki et al. (2011) pointed out, the large temperature gradient in the north-south direction is a main feature at the leading edge of the convective line, as highlighted by the white ellipses in Fig. 13.
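For reference, the mean-sea-level reduction described above is a one-line correction; the sketch below (illustrative only, names ours) applies the assumed 6.5 K km−1 lapse rate to the lowest-model-level temperature.

```python
LAPSE_RATE = 6.5e-3  # K per meter (6.5 K/km)

def to_mean_sea_level(temp_k, elevation_m):
    """Reduce a lowest-model-level temperature to mean sea level
    assuming a constant atmospheric lapse rate."""
    return temp_k + LAPSE_RATE * elevation_m

print(to_mean_sea_level(293.15, 20.0))  # 20-m level -> +0.13 K correction
```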
The fine temperature gradient provided major improvements in the vertical structure around the main rainfall area. Figures 14 and 15 show the zonal-vertical cross-sections of EPT, zonal-vertical wind, and rain mixing ratio at 34.7 N, the center of the convective cells of our interest, as shown by the dashed lines in Fig. 13. In all cases, EPT generally decreases with height below 3 km, so that the stratification is convectively unstable (Fig. 14). However, the vertical motion is quite different, especially between the nature run and NO-DA (Figs. 14a, b). The nature run shows an intense upward motion, enhancing the convective development (Fig. 14a). In NO-DA, in contrast, the vertical motion is very weak and unfavorable for initiating convective activity even though the stratification is convectively unstable (Fig. 14b). In CTRL, the vertical motion was clearly improved relative to NO-DA (Fig. 14c) and was effective in generating a convective cell located at approximately 135.32 E. By assimilating the surface observations, EPT near the surface was increased, and S8 and S167 showed more convectively unstable stratifications than CTRL (Figs. 14c, d, e). The improvement of the surface conditions contributed to a favorable environment for convective development. In response to the EPT and upward motion, the rain mixing ratio was also intensified (Fig. 15). In S167, the peak value of rain mixing ratio was over 2 g kg−1, very similar to the nature run. In CTRL and S8, the distributions of rain mixing ratio were similar to that of S167, but the values were half or less.

Forecast experiments

It is expected that the improved initial conditions would contribute to improved subsequent forecast accuracy. Figure 16 shows the RMSE of water vapor mixing ratio at z* = 2 km, similarly to Fig. 7. The black and blue full lines are the same as those of Fig. 7, and the red and blue dashed lines denote the forecast experiments. As we have seen, S167 (red full line) is superior to CTRL, so that the analysis RMSE was reduced by approximately 0.08 g kg−1, a 10 % improvement. The improvement generally persists in the forecasts; the RMSEs of S167 are consistently lower than those of CTRL, particularly in the first 30 minutes. After 30-minute forecasts, the advantage of S167 becomes smaller. Compared with NO-DA, the forecasts are skillful for an hour, although the skill decreases rapidly. We note that all three forecasts show rapid error growth in the initial 5 to 10 minutes, after which the error growth becomes slower.

Figures 17 and 18 show side-by-side comparisons of the nature run and the forecasts initialized by the three LETKF experiments (CTRL, S8, and S167) at 0320 UTC. Figure 17 shows the rain mixing ratio at the lowest model level (z* = 20 m) at 0330, 0340, and 0350 UTC (10-, 20-, and 30-minute forecasts). Here, we focus on the convection in the black circled area in Fig. 17. As the surface observation density increases, the rain mixing ratio becomes more intense. In the 30-minute forecasts (0350 UTC), the peak value of rain mixing ratio reaches 2 g kg−1 in S167, consistent with the nature run. By contrast, it is less than 10 % of the nature run in CTRL (Figs. 17a, c, d). Figure 18 shows the vertical cross-sections of the mixing ratios of cloud water, rain, cloud ice, snow, and graupel at the center of the convection of our interest (black dashed lines in Fig. 18). S167 shows three active convective cells at 0330 UTC (A, B, and C in Fig.
18c-10). Among them, convection B developed, and its cloud top reached over 10-km altitude at 0350 UTC. The location and peak values of rain mixing ratio and upward motion are similar to those of the nature run. S8 also shows a similar convective cell, but in a narrower region and with a smaller mixing ratio (Fig. 18b-30). In CTRL, although the convection is generated, it is significantly weaker and does not develop within the 30-minute forecast period (Fig. 18a-30).

Sensitivity experiments

The results of the sensitivity experiments are summarized in Fig. 19. S167 is the best, so assimilating all surface variables provides the best results. RHs is the second best, probably because relative humidity is related to the lower atmospheric stratification and plays an essential role in developing the upward motion and convective clouds. UVs and Ts also showed better results than CTRL. As mentioned in the previous subsection, the horizontal convergence and the temperature gradient near the surface were noticeable features of this rainfall event (Figs. 12, 14). The surface wind and temperature DA contributed to creating better conditions in the lower atmosphere. Ps is better than CTRL but shows the smallest impact, probably because the local pressure field is relatively less important.

Conclusion

In this study, we performed a series of OSSEs using the NHM-LETKF at 1-km resolution with 1-minute update cycles and investigated the potential impact of dense and frequent surface observations on a disastrous rainstorm event in Kobe, Japan on July 28, 2008. A 100-m-resolution simulation using the JMA-NHM was performed for the nature run, reproducing important characteristics of the event, such as the precipitation intensity over 100 mm h−1, close to the JMA operational radar observation. From the 100-m resolution nature run, both reflectivity and Doppler velocity of the PAWR at Osaka University, as well as surface data at 167 locations covering all Kobe city elementary schools and R-CCS, were simulated as the observations for the OSSEs. In this way, the OSSEs include the model error originating from the different model resolutions.

The control run (CTRL) assimilates the PAWR data only. Although the precipitation intensity was underestimated, the RMSE decreased to about half that of NO-DA. The two experiments with additional surface observations showed that dense and frequent surface observations had positive impacts on surface temperature, moisture, and convergence, even though the number of surface observations is a tiny fraction of the PAWR data. The surface data increased EPT and enhanced the convergent flow along the convective line, strengthening the favorable atmospheric conditions for convective development. The effect propagated spatially as the DA cycles progressed, and it enhanced the convective activity with hazardous severe rainfall near River Toga, the actual disaster site. The time series of RMSE in the forecast experiments showed that the improved analysis data contributed to improved forecasts. S167 showed significantly better results than S8; namely, more surface observations had a larger positive impact.
The results suggest that surface DA with dense and frequent surface data observed by low-cost instruments can potentially be effective in improving the performance of analyses and forecasts of severe convective weather. These surface data are relatively few in number, but provide important observations of lower atmospheric conditions that are generally more difficult to observe with remote sensing instruments such as PAWR. A small number of surface in situ data and a large number of remote sensing data can thus complement each other effectively.

This study performed idealized OSSEs and showed the potential of dense and frequent surface data. To use real surface data, there are potential issues. Inexpensive instruments tend to have quality issues, such as bias and missing data, and using real surface station data is not trivial. An immediate next step would be to use the actual data from the already implemented eight POTEKA II stations. We will develop methods to handle the potential issues with real surface station data.

Fig. 1. Locations of the surface weather stations. Red and blue dots indicate the positions of all elementary schools in Kobe city and the RIKEN Center for Computational Science (167 stations). Blue dots indicate the existing observation sites (8 stations). The black ellipse indicates the location of River Toga.

Fig. 4. Nested model domains for the nature run. The red circle indicates the location of the PAWR at Osaka University, and the blue circle indicates River Toga in Kobe city. The red shaded circle shows the observation range of the PAWR. Color shading shows topography.

Fig. 6. Overview of the OSSEs. (a) The generation method of the initial states at 0000 UTC July 27, 2008. (b) Workflow of the spin-up run and 1-minute update LETKF cycles.

Fig. 7. (a) Time series of RMSE for water vapor mixing ratio [g kg−1] at z* = 2 km (solid lines) and the ensemble spread [g kg−1] (dashed lines). Black and blue lines show the NO-DA and CTRL experiments, respectively. (b) Time series of ensemble spreads of NO-DA (black line), CTRL (blue line), S8 (green line), and S167 (red line).

Fig. 8. Water vapor mixing ratio [g kg−1] at z* = 2 km for (a) nature run, (b) NO-DA, and (c) CTRL at 0231 UTC after the first analysis. Black circles indicate the observational range of the PAWR.

Fig. 10. Rain mixing ratio [g kg−1] at the lowest model level (z* = 20 m) at 0315 UTC after 45 LETKF cycles for (a) nature run, (b) NO-DA, (c) CTRL, (d) S8, and (e) S167. White and black dots indicate the locations of the 167 stations, where white dots indicate the existing 8 POTEKA II stations. Black ellipses show the disaster site.

Fig. 12. (a) Differences of equivalent potential temperature [K] and wind velocity between S167 and CTRL at 0231 UTC (after the first cycle of LETKF), 0245 UTC (15 cycles), 0300 UTC (30 cycles), and 0315 UTC (45 cycles). Lower panels show the results at the lowest model level (z* = 20 m), and upper panels show the vertical cross-sections at the black dashed lines. Vertical wind is enlarged by a factor of 3. (b) Similar to (a), but for the differences between S8 and CTRL.
Fig. 13. Mean-sea-level temperature at 0315 UTC (after 45 cycles of LETKF) for (a) nature run, (b) NO-DA, (c) CTRL, (d) S8, and (e) S167. White and black dots indicate the locations of the 167 stations, where white dots indicate the existing 8 POTEKA II stations. White ellipses show the large temperature gradient zone. Dashed lines denote the 34.7 N latitude that corresponds to the vertical cross-sections in Figs. 14 and 15.

Fig. 16. Time series of RMSE for water vapor mixing ratio [g kg−1] at z* = 2 km. Black, blue, and red lines correspond to NO-DA, CTRL, and S167, respectively. Full and broken lines show the analysis and forecast RMSE, respectively.

Table 1. Model settings for the nature run.

Table 3. The observed variables and error standard deviations.

Table 4. List of experiments. (a) The main series of experiments.
2019-04-22T13:12:57.834Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "7740e07b0945e76cb2726be8f518a0832cefd670", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/jmsj/97/1/97_2019-014/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8b7f0a4273a444d37ec794167d12b9fa2aa296ad", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
15510456
pes2o/s2orc
v3-fos-license
On a suggestion relating topological and quantum mechanical entanglements

We analyze a recent suggestion [1, 2] on a possible relation between topological and quantum mechanical entanglements. We show that a one to one correspondence does not exist, neither between topologically linked diagrams and entangled states, nor between braid operators and quantum entanglers. We also add a new dimension to the question of entangling properties of unitary operators in general.

Introduction

In a recent series of papers [1-3], it has been argued that there may be a relation between quantum mechanical entanglement and topological entanglement. This hope has been raised by some formal similarities between entanglement of quantum mechanical states, which is an algebraic concept, and linking of closed curves, which is a topological concept. Let us begin by simple definitions of these two concepts and the basic idea of a correspondence put forward in the above papers. A pure quantum state of a composite system AB (a vector |Ψ⟩ in the tensor product of two Hilbert spaces H_A ⊗ H_B) is called entangled if it cannot be written as a product of two vectors, i.e. in the form |Ψ⟩ = |ψ⟩_A ⊗ |φ⟩_B. The simplest entangled pure states occur when H_A and H_B are two dimensional with basis vectors |0⟩ and |1⟩, called a qubit in the quantum computation literature. For brevity, in the following we will not write the subscripts A and B explicitly. A general state of two qubits

|Ψ⟩ = a|0,0⟩ + b|0,1⟩ + c|1,0⟩ + d|1,1⟩    (1)

is entangled provided ad − bc ≠ 0. On the other hand, two curves can be in an unlinked position like the one shown in figure (1) or in a linked position like the one shown in figure (2). One is tempted to view the two unlinked curves as a topological representation of a disentangled quantum state and the two linked curves as a representation of an entangled state. In the same way that cutting any of the curves in figure (2) removes the topological entanglement, measuring one of the qubits of the state |Ψ⟩ in (1) in any basis (not necessarily the {|0⟩, |1⟩} basis) disentangles the quantum state. More evidence in favor of this analogy is provided by figure 3 [3], which provides an alleged topological equivalent for the so-called GHZ state [4]

|GHZ⟩ := (1/√2)(|0,0,0⟩ + |1,1,1⟩).    (2)

In this figure cutting any of the three curves leaves the other two curves in an unlinked position, in the same way that measuring any of the three subsystems in the GHZ state in the {|0⟩, |1⟩} basis leaves the other two subsystems in a disentangled state. One may be tempted to make a general correspondence between topologically linked diagrams and entangled states or vice versa. For example, while figure 3 corresponds to the GHZ state, a slight modification of the crossings of this link diagram, as shown in figure 4, may correspond to the following state:

|Ψ⟩ = (1/2)(|0,0,0⟩ + |0,1,1⟩ + |1,0,1⟩ + |1,1,0⟩).    (3)

If one measures one of the subsystems in this state in the {|0⟩, |1⟩} basis, the other two subsystems are projected onto an entangled state, in the same way that cutting out any of the component curves in figure 4 leaves the other two components in a linked position. A natural question arises as to how serious and deep such a correspondence may be. Certainly such a relation, if it exists, will be very fruitful for both fields, and it is worthwhile to explore further the possibility of its existence.
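Before proceeding, the measurement claims above admit a quick numerical check. The following sketch is our own illustration, not part of the original argument: it projects one qubit of a three-qubit state onto |0⟩ or |1⟩ and evaluates the concurrence C = 2|ad − bc| of the remaining pair.

```python
import numpy as np

def concurrence(v):
    """C = 2|ad - bc| for a two-qubit state with amplitudes (a, b, c, d)."""
    a, b, c, d = v
    return 2 * abs(a * d - b * c)

def measure_first_qubit(psi, outcome):
    """Project qubit 1 of a 3-qubit state onto |outcome> and renormalize."""
    branch = psi.reshape(2, 4)[outcome]
    return branch / np.linalg.norm(branch)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)   # (|000> + |111>)/sqrt(2)
state3 = np.zeros(8); state3[[0, 3, 5, 6]] = 0.5      # the state (3) above

for name, s in [("GHZ", ghz), ("state (3)", state3)]:
    print(name, [round(concurrence(measure_first_qubit(s, k)), 3) for k in (0, 1)])
# GHZ -> [0.0, 0.0]: both outcomes leave product states
# state (3) -> [1.0, 1.0]: both outcomes leave maximally entangled states
```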
We should stress here that we only want to analyze one particular suggestion [2, 3], regarding a possible correspondence between topological and quantum mechanical entanglement. We are not concerned here with other aspects of the relation between topology and quantum computation or quantum mechanics. These avenues of study have been followed in [5-8], where the possibility of doing fault tolerant quantum computation using topological degrees of freedom of certain systems with anyonic excitations, or the design of quantum algorithms for calculating topological invariants of knots, are analyzed. It is the aim of this paper to shed more light on these analogies and to study more closely the similarities and differences between the above types of entanglement. The overall picture that we obtain is that these analogies do not point to a deep relation between these concepts, since despite some superficial similarities, there are many serious differences which lead to the conclusion that such a correspondence cannot be taken seriously. Here we list some of these differences.

1- If we want to correspond any component of a linked knot with a state of a vector space in a tensor product space (the number of vector spaces being equal to the number of components of the link diagram), then we are faced with the obvious question of "What kind of state corresponds to a knot which is highly linked with itself?" We can imagine many topologically different one-component knots, and yet we have to correspond them all to a single state in a vector space, which necessarily has no self-entanglement. Figure (5) shows such a knot, known as the trefoil knot. One way out is to consider only linked diagrams whose individual components have no self-linking and to take into account only the linking between different components. But there is no natural way to separate the linking of a component with itself from that with others. A component may be topologically trivial by itself (when one removes all the other components), but may not be deformable continuously to a trivial knot due to the presence of other components.

2- The second problem is that quantum entanglement should not change under local unitary operations, which are equivalent to local changes of basis. Therefore, for such a correspondence to be valid, two quantum mechanical states which are related to each other by local unitary operations should correspond to topologically equivalent diagrams. Let us see if this is the case. Consider the examples given above: the two states (2) and (3) are related to each other by local unitary operations (a Hadamard transformation on each qubit maps one to the other), and yet they correspond to completely inequivalent diagrams, shown in figures 3 and 4, respectively.

3- The third problem concerns the alleged relation between "measurement" of a quantum state on one hand and "cutting a line" in a knot diagram on the other hand. This relation is very questionable. The only evidence is that in some simple cases, such as those mentioned above, it appears that measurement (reduction) of a state |Ψ⟩ which corresponds to a knot K produces a state |Ψ′⟩ which corresponds to a knot K′ obtained by cutting one of the lines of K. However this correspondence is too superficial, since the reduction of a wave function depends on what value we obtain for our observable, while cutting a line is an action with a unique and predetermined result. To see this more explicitly consider, for instance, a state like

|Ψ⟩ = (1/√3)(|0⟩ ⊗ (|0,0⟩ + |1,1⟩) + |1⟩ ⊗ |0,0⟩).

If we measure the first qubit in the computational basis {|0⟩, |1⟩} and obtain the value 0, the other two qubits are projected onto an entangled state, while if we obtain the value 1, the other two qubits are projected onto a disentangled state. Therefore one cannot identify a measurement with a simple cutting of a line in a knot diagram.
The result of the measurement also determines whether the remaining state is entangled or not. These examples provide sufficient reasons to abandon the kind of correspondence mentioned above. But the question of a possible relation remains open, and there may be an alternative and more tractable framework for studying it. It is well known that all knots and links can be obtained from closures of braids, the latter having a direct relation with operators acting on tensor product spaces. Therefore it may be possible to find a correspondence between entangling operators on the quantum mechanical side and braid operators which produce topological entanglement on the other side. It is in order to present a short review of the braid group and braid operators.

A review of the braid group

A braid on n strands (figure 6) is the equivalence class of a collection of continuous curves joining n points in a plane to n similar points on a plane on top of it. The curves should not intersect each other but can wind around each other arbitrarily. Two collections of curves which can be continuously deformed to each other are considered equivalent. There is a well-known theorem stating that each knot can be constructed from the closure of a braid (see [9] for a review). By closure of a braid we mean joining the points on the lower plane to those on the upper one by continuous lines which lie outside all the curves of the braid. The collection of all braids can be equipped with a group structure by defining the product of two braids α and β as the equivalence class of the braid obtained by inserting the braid β on top of the braid α. The unit element of this group is simply the equivalence class of paths which do not wind around each other when they go from the lower plane to the upper one. This group, called the braid group on n strands and denoted by B_n, is generated by the simple braids σ_i, i = 1, . . . , n − 1, shown in figure 7, where each σ_i intertwines, only once, the strands i and i + 1 (σ_i^{-1} intertwines the strands in the opposite direction). Such elements generate the whole braid group when supplemented with the following relations, which express topological equivalence of braids, as the reader can verify:

σ_i σ_{i+1} σ_i = σ_{i+1} σ_i σ_{i+1},    σ_i σ_j = σ_j σ_i  for |i − j| ≥ 2.

The expression of braids as elements of a group shows how to find the inverse of braids as topological objects: one reverses the order of the generators in a braid word and inverts each of them, so that, for example, the inverse of σ_1 σ_2 is σ_2^{-1} σ_1^{-1}.

One can obtain a representation of the braid group B_n for any n on the tensor product space V^{⊗n}, if one can find a solution of the following equation in V^{⊗3}, called hereafter the braid relation:

(R ⊗ I)(I ⊗ R)(R ⊗ I) = (I ⊗ R)(R ⊗ I)(I ⊗ R),

in which R : V ⊗ V → V ⊗ V is a linear operator called a braid operator and I is the identity operator. Once such a solution is found, representations of the generators of the braid group, and hence of the whole braid group, are obtained as follows:

σ_i → I^{⊗(i−1)} ⊗ R ⊗ I^{⊗(n−i−1)},

where for simplicity we have used the same notation for σ_i and its representation. Thus if we have a braid operator R, we can produce representations of all kinds of braids with all the variety of their topological entanglement. Once a representation is in hand, one can try to construct invariants of knots by defining suitable traces on the space V^{⊗n} [10]. We have now set the stage for asking the question of a possible relation between topological and quantum mechanical entanglement in an appropriate way. We can ask the following questions:

1- Does every braid operator which produces topological entanglement also necessarily produce quantum entanglement?
or, conversely,

2- Does every quantum entangler (an operator which entangles product states) necessarily produce topological entanglement? That is, is any quantum entangler related somehow to a braid operator?

We think that the answer to these questions will shed light on the question of the relation between topological and quantum mechanical entanglements. We choose to investigate these questions for two dimensional spaces, since in two dimensions we have both a classification of solutions of the braid group relation and a great deal of information about measures of quantum entanglement. In the rest of this paper we try to answer the above questions and draw our conclusions, which are mainly negative; that is, we conclude that the two types of entanglement may not be related to each other in such a direct way. This however does not exclude the possibility that quantum computation may someday be used for calculating topological invariants of knots [5-7]. The structure of this paper is as follows. In section 2 we present all the unitary solutions of the braid group relation in two dimensions (4×4 unitary braid operators R). In section 3 we collect the necessary tools for the analysis of entanglement of states and entangling properties of operators. In section 4 we use these tools to characterize the braid operators. Finally we end the paper with a discussion which encompasses a summary of our results.

All unitary braid operators in two dimensions

Let V be a vector space and let R̃ : V ⊗ V → V ⊗ V be a linear operator. The following equation, which is a relation between operators acting on V ⊗ V ⊗ V, is called the Yang-Baxter relation, first formulated in studies on integrable models [11]:

R̃_{12} R̃_{13} R̃_{23} = R̃_{23} R̃_{13} R̃_{12},

where the indices indicate on which of the three spaces the operator acts non-trivially. Any solution R̃ of the Yang-Baxter equation provides a braid operator R by the simple relation R = P R̃, where P is the permutation or SWAP operator defined as P|i, j⟩ = |j, i⟩. When the vector space V is two dimensional, the solutions of the Yang-Baxter equation have been classified up to the symmetries allowed by the equation [12-14]. From these solutions we can select those solutions of the braid group equation which are unitary. We should stress that this restriction can be relaxed and one can also consider non-unitary solutions of the braid group. The reason for our interest in unitary operators in this paper is that in quantum mechanics we want to use these operators as quantum gates. There are only two types of unitary solutions. A single one, designated as R, which in the computational basis can be written as the Bell-basis-change matrix

R = (1/√2) ×
[  1   0   0   1 ]
[  0   1  −1   0 ]
[  0   1   1   0 ]
[ −1   0   0   1 ],    (10)

and a continuous family of solutions

R′ =
[ a  0  0  0 ]
[ 0  0  b  0 ]
[ 0  c  0  0 ]
[ 0  0  0  d ],    (11)

where the complex parameters a, b, c, and d are pure phases, i.e. |a| = |b| = |c| = |d| = 1. Note that the SWAP operator (denoted by P) is a special case of the matrix R′, for which a = b = c = d = 1. For general values of its parameters, it is simply the SWAP operator times a diagonal matrix. The second of these solutions can be generalized to arbitrary dimensions, in the form R′_{ij,kl} = M_{ij} δ_{il} δ_{jk}, where |M_{ij}| = 1. We do not know of any generalizations of the other solution.
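The braid relation and the stated properties of R can be verified numerically; the sketch below (our own check, assuming the Bell-matrix form of eq. (10) written above) builds the representation of the generators σ_i as Kronecker products and tests the relation on three strands.

```python
import numpy as np

s = 1 / np.sqrt(2)
# The Bell-matrix solution (10), as written above
R = s * np.array([[1, 0, 0, 1],
                  [0, 1, -1, 0],
                  [0, 1, 1, 0],
                  [-1, 0, 0, 1]], dtype=complex)

def sigma(R, i, n, d=2):
    """Representation of the braid generator sigma_i on V^{otimes n}:
    identity everywhere except R on factors i and i+1 (1-indexed)."""
    return np.kron(np.kron(np.eye(d ** (i - 1)), R), np.eye(d ** (n - i - 1)))

s1, s2 = sigma(R, 1, 3), sigma(R, 2, 3)
print(np.allclose(s1 @ s2 @ s1, s2 @ s1 @ s2))   # True: braid relation holds
print(np.allclose(R.conj().T @ R, np.eye(4)))    # True: R is unitary
print(np.round(R @ np.array([1, 0, 0, 0]), 3))   # (|00> - |11>)/sqrt(2), a Bell state
```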
Entanglement of states and entangling properties of operators

Consider a general two-qubit state

|Ψ⟩ = α|0,0⟩ + β|0,1⟩ + γ|1,0⟩ + δ|1,1⟩.    (12)

The single parameter

C := 2|αδ − βγ|,    (13)

called the concurrence, characterizes the entanglement of this state [15,16]. For a product state |ψ⟩ ⊗ |ψ′⟩ ≡ (x|0⟩ + y|1⟩) ⊗ (x′|0⟩ + y′|1⟩), this parameter is zero, and for a maximally entangled state like one of the Bell states (|0,0⟩ ± |1,1⟩)/√2 it takes its maximum value of 1. We note that the concurrence can be written as C = |⟨Ψ*|σ_y ⊗ σ_y|Ψ⟩|, where σ_y is the second Pauli matrix and * denotes complex conjugation in the computational basis. This also shows that the concurrence is invariant under local transformations |Ψ⟩ → U ⊗ V |Ψ⟩, since U^T σ_y U = σ_y. Any other measure of entanglement, like the von Neumann entropy of the reduced density matrices ρ_A or ρ_B, defined as E_v(ρ) := −tr(ρ ln ρ), or the linear entropy, defined as E_l(ρ) := 1 − tr(ρ²), can be expressed in terms of this parameter. A simple calculation shows that the eigenvalues of the reduced density matrix for the state in (12) are

λ_± = (1 ± √(1 − C²))/2,

from which the simple expression E_l = (1/2) C² is obtained for the linear entropy. The concurrence, the linear entropy and the von Neumann entropy are increasing functions of each other; all of them vanish for a product state and take their maximum values of 1, 1/2 and 1, respectively, for maximally entangled states. One can use any of these measures for the characterization of entanglement of a pure state of two qubits.

So much for the entanglement properties of states; we now turn to the entangling properties of operators acting on the space of two qubits. The space of unitary operators acting on two qubits (the group U(4)), when viewed in terms of entangling properties, has a rich structure. Those in the subgroup U(2) ⊗ U(2) are called local operators. Elements of this subgroup cannot produce entangled states when acting on product states. The complement of this subgroup forms the set of non-local operators. In the set of non-local operators, those which can produce a maximally entangled state when acting on a suitable product state are called perfect entanglers [17]. An example in this class is the CNOT operator, defined as CNOT|i, j⟩ = |i, i + j (mod 2)⟩. Those non-local operators which do not have this property are called non-perfect entanglers. Note also that there are non-local operators which cannot produce any entanglement at all. An example is the SWAP operator P, which is incidentally a braid group operator. An important concept is the local equivalence of two operators. Let two operators U and U′ in U(4) be related as follows:

U′ = (k ⊗ l) U (k′ ⊗ l′),    (15)

where the local operators k, l, k′, l′ ∈ U(2). Two such operators should be regarded as equivalent as far as their entangling properties are concerned. One may extend this notion of equivalence to the case where the two operators are related by the SWAP operator P, that is when U′ = U P or U′ = P U, or both, since the SWAP operator does not change the entanglement of a state. However the SWAP operator is non-local, which means that it cannot be implemented by local unitary operations on the two states. Moreover, as far as topological properties are concerned, the SWAP operator is a braid operator and totally changes the topological class of a braid. For this reason we restrict ourselves to the notion of bi-local equivalence as in (15). How can we find if two such operators are equivalent? This question has been studied by many authors [17-21]. The orbits of states under bi-local [17,21] and multi-local unitaries (in the case of multi-particle states) [19,20] have been characterized by certain invariants. Here we use the invariants found in [17,21]. Let us define the matrix Q, the transformation to the 'magic' Bell-like basis, as follows:

Q = (1/√2) ×
[ 1  0   0   i ]
[ 0  i   1   0 ]
[ 0  i  −1   0 ]
[ 1  0   0  −i ].

For any matrix U ∈ U(4) define the following matrix:

m(U) := (Q† U Q)^T (Q† U Q),

where T denotes the transpose. Note that Q† U Q is nothing but the matrix expression of the operator U in the Bell basis, modulo some phases.
It is shown in [17,21] that the following quantities are invariant under bi-local unitary operations:

G1(U) = tr²(m(U)) / (16 det U),    G2(U) = (tr²(m(U)) − tr(m²(U))) / (4 det U).

Perfect entanglers

By a perfect entangler we mean an operator which can produce maximally entangled states when acting on a suitable product state. The following theorem [17] determines when a given operator U ∈ U(4) is a perfect entangler.

Theorem [17]: An operator U ∈ U(4) is a perfect entangler if and only if the convex hull of the eigenvalues of the matrix m(U) contains zero.

We remind the reader that the convex hull of N points p_1, p_2, · · · , p_N in R^n is the set {Σ_i θ_i p_i : θ_i ≥ 0, Σ_i θ_i = 1}. The above criterion divides the set of non-local operators into perfect entanglers and non-perfect entanglers. A more quantitative measure has been introduced in [22], which defines the entangling power of an operator U as the average

e_p(U) := ⟨ E(U(|ψ⟩ ⊗ |ψ′⟩)) ⟩_{|ψ⟩,|ψ′⟩},

where E is any measure of entanglement of states and the average is taken over all product states. To guarantee that the entangling powers of equivalent operators are equal, as they should be, the measure of integration is taken to be invariant under local unitary operations. Equipped with the above tools we can now repose the questions raised in the introduction and ask what the status of the braid group solutions is in the space of all operators acting on two qubits. Which of them are perfect entanglers? If so, are they both equivalent to some well-known perfect entangler like CNOT, or do they belong to different equivalence classes of perfect entanglers? In answering these questions we have found some new features of the entangling properties of operators, as we will discuss in the sequel.

Entangling properties of braid operators

In this section we want to study the entangling properties of the braid operators (10, 11). Before proceeding we note a point without any calculation. The SWAP operator is a braid operator (it is equal to R′ when a = b = c = d = 1) and yet it cannot entangle product states; in fact P(|φ⟩ ⊗ |ψ⟩) = |ψ⟩ ⊗ |φ⟩. On the other hand, the operator CNOT is not a solution of the braid group relation and yet it is a perfect entangler. In fact, when acting on product states it produces the maximally entangled Bell states:

CNOT(|x±⟩ ⊗ |0⟩) = (|0,0⟩ ± |1,1⟩)/√2,    CNOT(|x±⟩ ⊗ |1⟩) = (|0,1⟩ ± |1,0⟩)/√2,

where |x±⟩ := (|0⟩ ± |1⟩)/√2. However, by this example we do not want to rush to the conclusion that there is absolutely no relation between braid operators and quantum mechanical entangling operators. The reason is that although the operator CNOT may not be a braid operator itself, it may be locally equivalent to a braid operator via bi-local unitary operators. Therefore, to study the entangling properties of braid operators we have to extract their non-local properties, which is achieved by first calculating their invariants. For comparison we note that the invariants of CNOT turn out to be G1 = 0 and G2 = 1.

1: For the braid operator R, a direct computation of m(R) gives the invariants G1 = 0 and G2 = 1. These invariants are the same as the invariants of CNOT, and hence this braid operator is equivalent to a quantum mechanical perfect entangler. It is readily seen from (10) that when acting on the computational basis {|0,0⟩, |0,1⟩, |1,0⟩, |1,1⟩} it produces the Bell basis.

2: For the continuous family R′ we obtain, after simple calculations, m(R′) = diag(ad, bc, bc, ad). This leads to the invariants

G1 = −(1 + Δ)²/(4Δ),    G2 = −1 + 2G1,

where Δ := ad/bc. The relation G2 = −1 + 2G1 shows that none of the members of this family is equivalent to CNOT.
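To make the invariants concrete, here is a small numpy sketch (our own illustration; it assumes the magic-basis matrix Q and the formulas for G1 and G2 quoted above, which fix one common phase convention) that reproduces the quoted values for CNOT, SWAP, and the square root of SWAP.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Magic-basis transformation (one common convention)
Q = s * np.array([[1, 0, 0, 1j],
                  [0, 1j, 1, 0],
                  [0, 1j, -1, 0],
                  [1, 0, 0, -1j]])

def invariants(U):
    """Local invariants G1, G2 from m(U) = (Q^dag U Q)^T (Q^dag U Q)."""
    UB = Q.conj().T @ U @ Q
    m = UB.T @ UB
    det = np.linalg.det(U)
    G1 = np.trace(m) ** 2 / (16 * det)
    G2 = (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * det)
    return np.round(G1, 10), np.round(G2, 10)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

# Square root of SWAP via its eigendecomposition (principal branch)
w, v = np.linalg.eig(SWAP)
SQRT_SWAP = v @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(v)

print(invariants(CNOT))       # (0, 1)
print(invariants(SWAP))       # (-1, -3)
print(invariants(SQRT_SWAP))  # (-i/4, 0) on this branch; the other branch gives (i/4, 0)
```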
In fact, the members of the family R′ are not equivalent to any controlled operator U_c (such an operator acts on the second qubit as U only if the first qubit, called the control qubit, is in the state |1⟩; otherwise it acts as the unit operator). A simple calculation shows that for all such controlled operators we have

G2 = 1 + 2G1,

which means that even if the first invariant of such an operator is made equal to that of R′, their second invariants cannot be equal to each other, and thus under no condition can the braid operator R′ be locally equivalent to a controlled operator U_c. Given that none of the members of this family is equivalent to CNOT, is there any perfect entangler among them? To answer this question we note that the eigenvalues of the matrix m(R′) are ad and bc. The convex hull of these points in the complex plane is a line segment which passes through the origin only if the parameter Δ is real. Since all the parameters a, b, c and d are of unit modulus, this parameter can then only have two values, namely ±1. The value Δ = 1 should be excluded, since in that case the eigenvalues are all equal and the convex hull degenerates to a single point on the unit circle, away from the origin. Thus the braid operators R′ are perfect entanglers only if Δ = −1. Since this same parameter determines the invariants of R′, there is only one single perfect entangler in this class up to local equivalence. We take this perfect entangler, denoted R′_0, to be a representative of the family with Δ = −1; a convenient choice is a = b = c = 1, d = −1, i.e.

R′_0 =
[ 1  0  0   0 ]
[ 0  0  1   0 ]
[ 0  1  0   0 ]
[ 0  0  0  −1 ],

with invariants G1 = 0 and G2 = −1. It produces maximally entangled states when acting on an appropriate product basis; for this representative, the product basis {|x±⟩ ⊗ |x±⟩}, where |x±⟩ = (|0⟩ ± |1⟩)/√2, is mapped onto an orthonormal maximally entangled basis. Incidentally we note that the operator R′, when acting on this product basis, produces an orthonormal basis of states all with the same value of concurrence C = |1 − Δ|/2.

Up to now we have found that the two braid group families (the single one and the continuous one) each encompass a perfect entangler. This finding is certainly in favor of a relation between topological and quantum mechanical entanglements. Meanwhile we have found another maximally entangled basis which is not bi-locally equivalent to the Bell basis, in the sense that no local unitary can turn one into the other; for if they were so related, this would mean that the non-local operators R′_0 and CNOT, which generate them from product bases, were locally equivalent, which we know is not the case. We should add that all maximally entangled bases are equivalent to the Bell basis up to phases. This applies also to the above basis. However these phases can be removed only by non-local operations. We are now faced with the following question: Are there perfect entanglers which are not locally equivalent to the braid group operators? To answer this question we should search for non-local operators U whose local invariants differ from (G1 = 0, G2 = 1) and (G1 = 0, G2 = −1), but for which the convex hull of the eigenvalues of the m(U) matrix still encompasses the origin, so that they are perfect entanglers. One such matrix is the square root of the SWAP operator [17], for which we have m(√P) = diag(1, 1, −1, 1), G1 = i/4 and G2 = 0. This operator is a perfect entangler and can turn a suitable product state like |x+⟩|x−⟩ into a maximally entangled state like (1/2)(|0,0⟩ − i|0,1⟩ + i|1,0⟩ − |1,1⟩).
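These claims about R′_0 and √P are easy to check directly. The sketch below is our own illustration; it uses the representative R′_0 with a = b = c = 1, d = −1 chosen above and evaluates the concurrence of the images of the |x±⟩ product states.

```python
import numpy as np

def concurrence(v):
    a, b, c, d = v
    return 2 * abs(a * d - b * c)

# Representative R'_0: SWAP followed by a phase flip on |1,1> (a=b=c=1, d=-1)
R0 = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
               [0, 1, 0, 0], [0, 0, 0, -1]], dtype=complex)

# Square root of SWAP in the computational basis
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                      [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                      [0, 0, 0, 1]])

xp = np.array([1, 1]) / np.sqrt(2)   # |x+>
xm = np.array([1, -1]) / np.sqrt(2)  # |x->

for u in (xp, xm):
    for w in (xp, xm):
        print(round(concurrence(R0 @ np.kron(u, w)), 6))  # all 1.0: a maximally entangled basis

print(round(concurrence(SQRT_SWAP @ np.kron(xp, xm)), 6))  # 1.0, as claimed for sqrt(P)
```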
However, there is an important difference. Unlike CNOT and R′_0, √P cannot maximally entangle an orthonormal product basis. We can prove this as follows. The most general form of an orthonormal product basis is, up to phases, as follows:

|ψ_1⟩ = (a|0⟩ + b|1⟩) ⊗ (c|0⟩ + d|1⟩),
|ψ_2⟩ = (a|0⟩ + b|1⟩) ⊗ (d*|0⟩ − c*|1⟩),
|ψ_3⟩ = (b*|0⟩ − a*|1⟩) ⊗ (e|0⟩ + f|1⟩),
|ψ_4⟩ = (b*|0⟩ − a*|1⟩) ⊗ (f*|0⟩ − e*|1⟩),

where |a|² + |b|² = |c|² + |d|² = |e|² + |f|² = 1. We now act on one of these states, say the first one, by the operator √P. The resulting state is maximally entangled if its concurrence is equal to 1, and the concurrence is easily calculated to be C(√P|ψ_1⟩) = |ad − bc|². Thus, for this operator to turn these orthonormal states into maximally entangled states, the following equations should be satisfied simultaneously:

|ad − bc|² = 1,    |ac* + bd*|² = 1,    |a*e + b*f|² = 1,    |af − be|² = 1.

However, the first two equalities, when added together side by side, give (|a|² + |b|²)(|c|² + |d|²) = 2, which is impossible since the left hand side is equal to 1 due to the normalization of the states. The same is true for the second pair of equalities. Therefore the operator √P cannot maximally entangle a product basis. Note that although we have arrived at a contradiction by considering only the pair of equalities obtained from the states |ψ_1⟩ and |ψ_2⟩, it would not be true to conclude that this operator cannot maximally entangle any two orthonormal states. For we could have taken, for example, the two orthonormal product states |x+⟩|x−⟩ and |x−⟩|x+⟩ without running into any contradiction; indeed, the operator √P maximally entangles both of these states. This raises the hope that the braid operators may be the only perfect entanglers which have the important property of maximally entangling a basis. This could be substantial evidence for the existence of a relation between topological and quantum mechanical entanglement. However, we have found other classes of perfect entanglers, locally inequivalent to the braid operators, which have the above mentioned property. There exists a one-parameter family of operators U_φ, each member of which has local invariants G1 = 0, G2 = cos 4φ and maximally entangles the computational product basis {|0,0⟩, |0,1⟩, |1,0⟩, |1,1⟩}, the phases e^{±iφ} appearing in the resulting entangled basis states. Note that although the phases e^{±iφ} enter the entangled basis states as overall phases, these phases are nevertheless important when acting on linear combinations of states and cannot be removed by local operations. In view of this, we may conclude that the braid operators have no special status among perfect entanglers.

We conclude this section by calculating the entangling power of the braid operators R and R′_0. We use the linear entropy E_l(Ψ) as our measure of entanglement of a pure state |Ψ⟩, since calculation of the resulting integrals is easier. This is indeed the measure used in [22] for defining the entangling power of operators. As mentioned earlier, E_l = (1/2) C², where C is the concurrence of the state. Thus the entangling power of an operator U, denoted by e_p(U), is calculated as follows. We take a product state |ψ⟩|ψ′⟩, where |ψ⟩ = cos θ|0⟩ + e^{iφ} sin θ|1⟩ and similarly for |ψ′⟩, determine the concurrence C(U|ψ⟩|ψ′⟩) from (13), and then average E_l = (1/2)C² over the local-unitary-invariant measure on the product states. We expect the following relations to hold, and indeed they turn out to be correct:

e_p(CNOT) = e_p(R′_0) = e_p(U_φ),    e_p(√P) < e_p(CNOT).

Note that the operators CNOT, R′_0 and U_φ are not locally equivalent. The reason for their equal entangling power is that they are related by the SWAP operator. The second inequality is expected, since the operator √P, although a perfect entangler, cannot maximally entangle an orthonormal product basis. Straightforward calculations along the lines mentioned above give the explicit values of these entangling powers, confirming the above relations.
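The average defining e_p can also be estimated by simple Monte Carlo sampling of Haar-random product states. The sketch below is our own illustration, not the paper's integral; it compares CNOT with √SWAP using the linear entropy.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_qubit():
    """Haar-uniform pure qubit state (normalized complex Gaussian vector)."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def linear_entropy(v):
    a, b, c, d = v
    C = 2 * abs(a * d - b * c)
    return 0.5 * C ** 2            # E_l = C^2 / 2

def entangling_power(U, n=50_000):
    vals = [linear_entropy(U @ np.kron(haar_qubit(), haar_qubit())) for _ in range(n)]
    return float(np.mean(vals))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                      [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                      [0, 0, 0, 1]])

print(entangling_power(CNOT))       # approx. 0.222 (= 2/9)
print(entangling_power(SQRT_SWAP))  # smaller, consistent with e_p(sqrt(P)) < e_p(CNOT)
```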
Discussion

Following a suggestion by Kauffman and Lomonaco [2], we have tried to see if there is any relation between topological and quantum mechanical entanglements. We have searched for a possible relation from two different points of view. The first point of view, which is based on a possible correspondence between linked knots and entangled states, is easily refuted by various counterexamples and arguments. The second viewpoint, which is based on a correspondence between braid operators and quantum mechanical entangling operators, is more promising. In two dimensional spaces there is a complete classification of braid operators: there is a continuous family and a discrete one. We have shown that the discrete solution is a quantum mechanical perfect entangler and that the continuous family encompasses a quantum mechanical perfect entangler. Both of these operators have the important property that they can maximally entangle a full orthonormal basis of the space, a property which is shared by well-known quantum entanglers like CNOT but not by all of them. However, we have found other operators having this property and yet not locally equivalent to the braid operators, which shows that even from this point of view one cannot ascribe a very special status to the braid operators.

In our study we have come across new ideas and questions about entangled states and entanglement which are outside the scope of the title of our paper. For example, we have shown that not every perfect entangler is perfect: by this we mean that although it can maximally entangle some product states, it may fail to do the same for a product basis. Questions like "How many inequivalent classes of maximally entangled bases exist for a space V ⊗ V?" or "How many inequivalent classes of perfect entanglers exist which can maximally entangle a product basis?" have been new to us. We hope that these questions are also new and interesting for others.
2014-10-01T00:00:00.000Z
2003-07-22T00:00:00.000
{ "year": 2003, "sha1": "eb00707cb507371bb120b5211aa48c901bb7e93c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/quant-ph/0307155", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "eb00707cb507371bb120b5211aa48c901bb7e93c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
245946094
pes2o/s2orc
v3-fos-license
Installing oncofertility programs for breast cancer in limited versus optimum resource settings: Empirical data from 39 surveyed centers in Repro-Can-OPEN Study Part I & II

Purpose

As a further step to elucidate the actual diverse spectrum of oncofertility practices for breast cancer around the globe, we present and discuss comparisons of oncofertility practices for breast cancer in limited versus optimum resource settings based on data collected in the Repro-Can-OPEN Study Part I & II.

Methods

We surveyed 39 oncofertility centers, including 14 in limited resource settings from Africa, Asia & Latin America (Repro-Can-OPEN Study Part I) and 25 in optimum resource settings from the United States, Europe, Australia and Japan (Repro-Can-OPEN Study Part II). Survey questions covered the availability of fertility preservation and restoration options offered to young female patients with breast cancer as well as the degree of their utilization.

Results

In the Repro-Can-OPEN Study Part I & II, responses for breast cancer and the calculated oncofertility scores showed the following characteristics: (1) higher oncofertility scores in optimum resource settings than in limited resource settings, especially for established options; (2) frequent utilization of egg freezing, embryo freezing, ovarian tissue freezing, GnRH analogs, and fractionation of chemo- and radiotherapy; (3) promising utilization of oocyte in vitro maturation (IVM); (4) rare utilization of neoadjuvant cytoprotective pharmacotherapy, artificial ovary, and stem cell reproductive technology, as they are still in preclinical or early clinical research settings; (5) recognition that technical and ethical concerns should be considered when offering advanced and innovative oncofertility options.

Conclusions

We presented a plausible oncofertility best practice model to guide oncofertility teams in optimizing care for breast cancer patients in various resource settings.

Introduction

Breast cancer is the most common cancer impacting women of reproductive age [1]. Contemporary breast cancer treatment often requires aggressive gonadotoxic therapies that necessitate fertility preservation treatments for those who desire future fertility. Young women with breast cancer have a higher risk of carrying pathologic mutations in the BRCA1 or BRCA2 genes, adding further complexity to their oncofertility counseling [2]. According to the most recent international guidelines from the American Society of Clinical Oncology (ASCO) [3], the American Society for Reproductive Medicine (ASRM) [4], the European Society of Human Reproduction and Embryology (ESHRE) [5] and the European Society for Medical Oncology (ESMO) [6], several established, debatable, and experimental oncofertility options can be offered to young female patients with breast cancer to preserve and restore fertility. Established oncofertility options include embryo cryopreservation, oocyte cryopreservation, and, more recently, ovarian tissue cryopreservation and autotransplantation. Debatable options for fertility preservation for breast cancer patients include GnRH analogs and hormonal suppression, and fractionation of chemotherapy and radiotherapy. Experimental oncofertility options include oocyte in vitro maturation (IVM), artificial ovary, neoadjuvant cytoprotective pharmacotherapy, stem cell reproductive technology and others [3-6]. Despite recognition as official recommendations, international oncofertility guidelines face several challenges in practice.
Over the past years, the Oncofertility Consortium has studied oncofertility practices in many countries within its Oncofertility Professional Engagement Network (OPEN) [7,8]. Our previous studies identified a variety of standards and challenges in oncofertility practices worldwide [9-13]. Recently, in our Repro-Can-OPEN Study Part I & II, we proposed the installation of specific oncofertility programs for childhood, breast, and blood cancers in limited versus optimum resource settings. The main objectives of the Repro-Can-OPEN Study Part I & II were to measure empirically the availability and degree of utilization of oncofertility options provided by the surveyed centers, to identify different styles of oncofertility practice for common cancers in limited and optimum resource settings, and to suggest best practice models for oncofertility care based on the results of the survey and the existing literature [14,15].

Limited resource settings are characterized by the following criteria, especially in low- and middle-income countries (Fig. 1): shortage of reproductive care services provided to young patients with cancer, lack of experienced oncofertility teams and necessary equipment, lack of national registries for in vitro fertilization (IVF) and/or cancer treatments, lack of awareness among providers and patients, cultural and religious constraints, partial or complete legal prohibition of third-party reproduction, lack of insurance coverage for IVF and/or cancer treatments resulting in high out-of-pocket costs for patients, and lack of funding to support oncofertility programs. Even in developed countries, a state of limited resource settings can be experienced where access is limited, or in the case of sudden national disasters when most public services, including healthcare, are negatively affected, as occurred recently during the COVID-19 pandemic and its related economic shutdown. Additionally, within developed countries there may be specific regions that qualify as limited-resource settings [14].

Optimum resource settings are characterized by the following criteria, especially in high-income countries (Fig. 1): availability of reproductive care services provided to young patients with cancer, availability of experienced oncofertility teams and necessary equipment, presence of national registries for IVF and cancer treatments, awareness among providers and patients, minimal cultural or religious constraints, legally allowed third-party reproduction, insurance coverage for IVF and cancer treatments, and availability of funding to support oncofertility programs [15].

As a further step to reflect the actual diverse spectrum of oncofertility practices for breast cancer around the globe and to help provide a plausible oncofertility best practice model, this study sought to compare oncofertility practices for breast cancer in limited versus optimum resource settings according to data reported in the Repro-Can-OPEN Study Part I & II.

Methods

The Oncofertility Consortium sent the Repro-Can-OPEN Study questionnaire via email to 39 oncofertility centers in total: 14 oncofertility centers with limited resource settings from Africa, Asia & Latin America in the Repro-Can-OPEN Study Part I, and 25 oncofertility centers with optimum resource settings from the United States, Europe, Australia and Japan in the Repro-Can-OPEN Study Part II (Table 1).
The Repro-Can-OPEN Study questionnaire included questions on the availability of fertility preservation options provided to young female patients with breast cancer in their reproductive years (age < 40 yr.), and on whether these options are always, commonly, occasionally or rarely used. To analyze the collected responses, we developed a new scoring system, 'the oncofertility score' [14,15]. As previously described, the oncofertility score is a new diagnostic tool to measure the availability and degree of utilization of oncofertility options for cancer patients in a treating center, country, or group of centers or countries. Although empirical, the oncofertility score could also be used as a prognostic tool to follow up on the development of oncofertility options and strategies provided to cancer patients over time, especially in the absence of accurate national oncofertility registries. The oncofertility score is calculated as a percentile ratio between the actual and maximal points of utilization that an oncofertility option might have (Table 2 & Fig. 2). When a fertility preservation option is available and always used for cancer patients, it is given (Yes + + + +), which weighs 100 actual points (25 points per each +). When a fertility preservation option is available and commonly used for cancer patients, it is given (Yes + + +), which weighs 75 actual points. When a fertility preservation option is available but occasionally used for cancer patients, it is given (Yes + +), which weighs 50 actual points. When a fertility preservation option is available but rarely used, or only used in research settings for cancer patients, it is given (Yes +), which weighs 25 actual points. When a fertility preservation option is not available, it is given (No), which weighs 0 actual points. When the fertility preservation option is not available to cancer patients because it is still in the preclinical research stage, it is marked with (No*). The maximal points of utilization that an oncofertility option might have is 100, reached when it is available and always used for cancer patients and is given (Yes + + + +) [14,15].

In our Repro-Can-OPEN Study Part I & II, the oncofertility score was calculated as a percentile ratio between the total actual points and the total maximal points of utilization that an oncofertility option might have. The total actual points for an oncofertility option equal the sum of the actual points for this option in the surveyed centers. The total maximal points for an oncofertility option equal 100 points multiplied by the number of surveyed centers [14,15].
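To make the scoring arithmetic described above concrete, the following short Python sketch (our own illustration; the response labels simply follow the scheme above) computes an oncofertility score for one option across a group of surveyed centers.

```python
# Actual points per response label (25 points per "+"; No and No* weigh 0)
POINTS = {"No": 0, "No*": 0, "Yes +": 25, "Yes ++": 50, "Yes +++": 75, "Yes ++++": 100}

def oncofertility_score(responses):
    """Score (%) = 100 * total actual points / (100 * number of surveyed centers)."""
    total_actual = sum(POINTS[r] for r in responses)
    total_maximal = 100 * len(responses)
    return 100.0 * total_actual / total_maximal

# Example: one option surveyed in four hypothetical centers
print(oncofertility_score(["Yes ++++", "Yes ++", "No", "Yes +"]))  # 43.75
```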
Results

Based on data collected in the Repro-Can-OPEN Study Part I & II, all 39 surveyed centers responded to all questions. The oncofertility scores (%) for options provided to young female patients with breast cancer in the 14 centers with limited resource settings versus the 25 centers with optimum resource settings are summarized in Table 3 & Fig. 3.

Fig. 3. Oncofertility options and scores (%) for breast cancer in limited versus optimum resource settings: centers with limited resource settings (n = 14) and centers with optimum resource settings (n = 25).

In our Repro-Can-OPEN Study Part I & II, the responses for breast cancer and their calculated oncofertility scores showed the following characteristics: (1) higher oncofertility scores in optimum resource settings than in limited resource settings, especially for established options; (2) frequent utilization of egg freezing, embryo freezing, ovarian tissue freezing, GnRH analogs, and fractionation of chemo- and radiotherapy; (3) promising utilization of oocyte in vitro maturation (IVM); (4) rare utilization of neoadjuvant cytoprotective pharmacotherapy, artificial ovary, and stem cell reproductive technology, as they are still in preclinical or early clinical research settings; (5) recognition that proper technical and ethical concerns should be considered when offering advanced and innovative oncofertility options to patients, including ovarian tissue freezing and autotransplantation, oocyte in vitro maturation (IVM), artificial ovary technology, neoadjuvant cytoprotective pharmacotherapy and stem cell reproductive technology.

Technically, the aforementioned advanced and innovative oncofertility options are sophisticated procedures that require well-resourced oncofertility centers with expert teams of oncologists, reproductive endocrinology and infertility specialists, gynecologists, biologists, embryologists, scientists, and transplantation surgeons. Early referral of breast cancer patients to highly specialized oncofertility centers is strongly recommended. Recently, in 2019, the American Society for Reproductive Medicine Committee Opinion on fertility preservation in patients undergoing gonadotoxic therapies stated that ovarian tissue freezing and autotransplantation should be considered an established medical procedure and no longer considered experimental [4]. Afterwards, in 2020, the ESHRE guideline also considered ovarian tissue freezing and autotransplantation non-experimental, but used the term 'innovative' rather than established to reflect the evidence base [5]. However, oocyte in vitro maturation (IVM), artificial ovary technology, neoadjuvant cytoprotective pharmacotherapy and stem cell reproductive technology are still considered experimental and have limited data on efficacy, and it is essential that they are offered to patients strictly under clear ethical regulations. Obtaining ethical approval from the Institutional Review Board (IRB) or the equivalent ethics committee is essential, as is obtaining informed consent from the patients. Informed consent for experimental medical treatments and interventions should include an explanation of the procedures, benefits, risks, and alternative treatments, and information about the expected outcome and costs. Several oncofertility options are expensive and not fully covered by health insurance in many states and countries, leaving many patients under acute financial pressure at the time of a life-altering cancer diagnosis. In such complex situations, doctors and patient navigators as well as patient support and advocacy organizations can play an important role in reassuring and guiding patients [16-18].
General considerations for oncofertility care of breast cancer

Based on the responses and their calculated oncofertility scores (Table 3 & Fig. 3), we propose to design and install plausible oncofertility programs for breast cancer as an extrapolation for a best practice model (Table 4). Existing literature and international oncofertility guidelines and recommendations were also considered [3][4][5][6][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. Immediately after a breast cancer diagnosis, we recommend early referral of patients to the oncofertility team to review the cancer therapy plan and estimate the related risk of gonadotoxicity and subsequent fertility loss. The risk of anticancer therapy-induced gonadotoxicity and fertility loss depends mainly on the type and stage of the disease, the type and dose of anticancer therapy, as well as the age of the patient and her ovarian reserve at the time of treatment. If a risk of gonadotoxicity and fertility loss is detected, or even if it is unknown, a comprehensive multidisciplinary oncofertility strategy should be offered before, during and after anticancer therapy. From a practical point of view, an effective oncofertility strategy should be individualized and tailored to the patient's circumstances, and it may integrate various established, debatable, and experimental options after proper counselling and obtaining informed consent from the patient. It is recommended that a proposed oncofertility strategy include at least one cryopreservation option. After complete cure or extended remission from cancer, and when the patient decides to have biological children, a new assessment of reproductive function should be performed. If anticancer therapy has induced premature ovarian insufficiency (POI), fertility restoration may be achieved by using the cryopreserved eggs, embryos or ovarian tissue [36][37][38].

Installing oncofertility programs for female patients with breast cancer

In addition to breast cancer patients, women with BRCA mutations have several concerns that can affect their reproductive potential. A recent study showed that women with BRCA mutations not only have a lower basal ovarian reserve but also are more likely to lose it after chemotherapy. These findings highlight the importance of offering fertility preservation options to such patients [39]. Furthermore, women with BRCA mutations carry significantly higher risks of developing breast and ovarian cancers (Hereditary Breast-Ovarian Cancer Syndrome; HBOC), and they should receive appropriate oncofertility care as well. According to a recent large study, the cumulative breast cancer risk is 72% for BRCA1 and 69% for BRCA2 carriers, while the cumulative ovarian cancer risk is 44% for BRCA1 and 17% for BRCA2 carriers [40]. Unique medical challenges in oncofertility programs for breast cancer exist and include: (1) conventional ovarian stimulation prior to egg or embryo freezing results in elevated serum estradiol levels, which should be avoided in estrogen-sensitive malignancies such as breast cancer; (2) autotransplantation of frozen ovarian tissue in patients with BRCA mutations should be handled with caution due to significantly higher risks of developing ovarian cancer [41][42][43][44]. Considering these unique medical challenges as well as the responses from the 39 surveyed centers and their calculated oncofertility scores (Table 3 & Fig. 3), we suggest installing the following oncofertility programs for breast cancer as a best practice model (Table 4).
Before initiation of anticancer therapy, cryopreservation of eggs or embryos should be attempted with a random-start protocol for controlled ovarian stimulation, using letrozole or tamoxifen to avoid high estradiol levels [45,46]. Cryopreservation of ovarian tissue can be attempted, especially when controlled ovarian stimulation is not feasible. In vitro maturation and further vitrification of oocytes retrieved in vivo or ex vivo from the extracted ovarian tissue (ovarian tissue oocytes in vitro maturation; OTO-IVM) could be attempted [47][48][49]. Artificial ovary technology is still experimental and cannot be relied upon alone as an effective oncofertility option. Although experimental, oocyte IVM and artificial ovary technology aim to provide safe alternatives that avoid future ovarian tissue autotransplantation and any potential risk of reintroducing malignant cells. During anticancer therapy, GnRH analog administration before and during chemotherapy should be considered to reduce the risk of POI, but it should not be considered a stand-alone fertility preservation strategy. Fractionation of chemo- and radiotherapy could be attempted whenever deemed feasible by the oncologists. Neoadjuvant cytoprotective pharmacotherapy is still experimental and not yet clinically proven as an effective oncofertility option [50]. After anticancer therapy, fertility restoration may be achieved by frozen embryo transfer or in vitro fertilization of stored oocytes. Patients with BRCA mutations could be advised to consider preimplantation genetic testing (PGT) during in vitro fertilization to avoid transmitting the mutation [51]. Autotransplantation of frozen ovarian tissue can be offered to restore fertility, but it should be handled with caution in patients with BRCA mutations due to significantly higher risks of developing ovarian cancer. Proper ovarian tissue assessment in patients with BRCA mutations is mandatory to reduce the risk of reintroducing malignant cells with autotransplantation. As an additional safety measure, it may be an option for patients with BRCA mutations to remove the transplanted ovarian tissue as well as the remaining ovary (if any) after childbearing is complete, for example at the time of an elective caesarean section. Stem cell reproductive technology may be promising in research settings, but it is not yet clinically proven as an effective oncofertility option (Table 4). After installation of these specific oncofertility programs for breast cancer, we encourage using the 'oncofertility score' as a prognostic tool to follow up on the development of these new oncofertility programs over time. In cases where oncofertility options are rejected, contraindicated, infeasible, unsuccessful or unavailable, adoption and third-party reproduction, such as sperm, egg, and embryo donation and surrogacy, can be offered as family building alternatives [11]. Limitations of the Repro-Can-OPEN Study Part I & II included the small sample size (14 vs 25 surveyed centers with limited and optimum resource settings, respectively), making statistical significance difficult to attain, the empirical status of the data collected on the availability and degree of utilization of oncofertility options, and the lack of data on success rates of the oncofertility options due to the absence of national registries for cancer and IVF treatments in many developing countries involved in the study [14,15].
Despite challenges, many opportunities exist to improve oncofertility practice in limited resource settings and create potential for the future, including improved cancer survival rates, improved success rates of several oncofertility options, and the emergence of new promising technologies. The Oncofertility Consortium will continue to engage more stakeholders from the USA and abroad to help build a sustainable oncofertility core competency worldwide according to the Oncofertility Consortium Vision 2030 [52].

Conclusion

In our Repro-Can-OPEN Study Part I & II, the responses for breast cancer and their calculated oncofertility scores showed the following characteristics: (1) higher oncofertility scores in optimum resource settings than in limited resource settings, especially for established options, (2) frequent utilization of egg freezing, embryo freezing, ovarian tissue freezing, GnRH analogs, and fractionation of chemo- and radiotherapy, (3) promising utilization of oocyte in vitro maturation (IVM), (4) rare utilization of neoadjuvant cytoprotective pharmacotherapy, artificial ovary, and stem cells reproductive technology as they are still in preclinical or early clinical research settings, and (5) recognition that proper technical and ethical concerns should be considered when offering advanced and innovative oncofertility options. Although challenging, oncofertility teams working in limited resource settings should be encouraged and supported. Dissemination of our comparisons and recommendations will provide efficient oncofertility edification and modeling to oncofertility teams and related healthcare providers around the globe and help them offer the best care possible to their breast cancer patients.
Overview of BSDF Reconstruction Methods for Rough Surfaces

This work provides an overview of methods aimed at the reconstruction of the Bidirectional Scattering Distribution Function (BSDF) for rough surfaces. Elements with rough surfaces are permanently present in our life and widely used in modern optical devices, for example, in light guiding plates for display illuminating systems, car dashboards, or luminaires. Light scattering by a rough surface is an important component of the visual appearance of many materials including water, glass, skin, etc. The problem of rough surface visualization is complex and contains many different aspects, and accordingly there are many techniques to provide realistic rendering. In many lighting simulation and optical design tasks it is sufficient and more effective to replace the real geometry of a rough surface by surface optical characteristics expressed via the BSDF. So, accurate reconstruction of the scattering properties of rough surfaces is a significant factor in visualization tasks and the generation of photorealistic images. In some cases, the BSDF can simply be measured. However, in many cases direct BSDF measurements are impossible if, for example, it is required to define the BSDF inside the material and neither a measuring device detector nor a light source can be placed inside the material. This has resulted in the development of many approaches for BSDF reconstruction. It started at the end of the last century with the development of many analytical methods based on microfacet models of rough surfaces, such as the Phong, the Ward, and the Cook-Torrance reflection models. Nowadays many direct numerical methods of BSDF reconstruction have appeared, for example, methods based on normals and heights distributions. As a rule, these methods use ray tracing to calculate the BSDF. Sizes of microroughness can be small enough to raise the question of which optics, wave or ray, is more appropriate here. To answer this and other questions related to BSDF reconstruction, an investigation of well-known and effective reconstruction methods was conducted. This paper also presents the study results for eight real samples with different profile parameters of the rough surface. The verification is based on numerical comparison with real measured data and on visual comparison of images generated using different reconstructed BSDFs. Finally, general recommendations are presented about which methods are more appropriate for which applications.

Introduction

Rough surfaces are all around us. When we generate realistic images, the task of visualizing them arises. Fig. 1 shows examples of such visualizations created by us: frosted glass with objects visible through it, and a rough car interior panel. Fortunately, in many lighting simulation and optical design tasks it is sufficient and more effective to replace the real geometry of a rough surface by a smooth surface with certain optical characteristics. The definition of scattering properties for a smooth boundary between two media is a simple task, and the light scattering can be easily simulated using Snell's law of refraction and reflection. However, in the case of a rough boundary the definition of light scattering is more complex and can be expressed via the Bidirectional Scattering Distribution Function (BSDF). The BSDF determines the output angular light transformation (refraction and reflection) as a function of the input light conditions, i.e., the angles of the incident light.
Figure 1: Examples of rough surface visualization

In simple cases, when light scattering by the whole plate is the only important factor, direct BSDF measurements may be sufficient. The ordinary way of BSDF measurement for a rough surface is presented in Fig. 2a. The sample, one side of which is rough, is illuminated with a parallel light beam under specific incident directions; then the angular light distribution of transmitted light (BTDF - Bi-Directional Transmittance Distribution Function) and reflected light (BRDF - Bi-Directional Reflectance Distribution Function) is measured. In other words, such a BSDF measured for the whole sample works in cases when we can ignore the object thickness. Examples of such ordinary BSDF applications (Fig. 2b) may be various diffuse films, thin plates, and layers.

Figure 2: BSDF application in the simplified "one-sheet" and more "solid" models

However, in plenty of cases the direct usage of a measured BSDF is impossible. As an example we can consider a light guiding plate with a rough surface (Fig. 2c). Correct simulation of light propagation in this optical system requires the BSDF from each side of the rough surface, which includes the BSDF from the material side. The BSDF measurement from the material side is impossible or very expensive because we cannot place the light source and detector inside the material. Another problem is the significant inaccuracy of BSDF measurements for large illumination angles because of light leakage inside measured samples, shadowing of the sample illumination, and some other reasons. The mentioned problems related to BSDF measurement have resulted in the development of many approaches and methods for BSDF reconstruction. One of the main purposes of this paper is an analysis and verification of popular methods of BSDF reconstruction. The paper contains an overview of the most prominent approaches and their comparison done on the basis of real measured samples with rough surfaces.

Overview of BSDF reconstruction methods

Generally, methods of BSDF reconstruction can be divided into two main groups:
1. Analytical methods. The analytical methods are based on the theory of physics (optics) or on empirical formulas. They represent rough surface models published by Ward, Cook-Torrance, Phong, etc. The main advantage of analytical approaches is high efficiency, because analytical solutions are fast to calculate. This is important because an optimization procedure is typically used to get the parameters of the analytical functions describing the required BSDF shape. The disadvantage of these approaches is their approximate nature. They use approximate algorithms to describe complex optical effects like masking or shadowing of the incident light illuminating the rough surface (Fig. 3a, 3b) and interreflection of light on the rough profile (Fig. 3c). During BSDF reconstruction this can introduce noticeable inaccuracy for surfaces with large roughness.
2. Numerical methods. These approaches are based on simulation of light propagation through models of rough surfaces. In the given paper two main numerical approaches are considered, which are based on the distribution of microfacet normals or heights. These approaches are more correct than analytical ones from the viewpoint of optical theory but require noticeable computational resources.

The described classification is not the only one. For example, the methods for BSDF reconstruction can also be divided depending on which optics, geometrical (ray) or wave, is applied. In our work we investigated all these groups of methods.
Analytical methods of BSDF reconstruction

Most of the methods for defining light scattering (BSDF) by a rough surface are based on the "microfacet" model. In the "microfacet" model a rough surface with complex geometry is represented by a set of flat smooth surfaces (microfacets), see Fig. 4. When a boundary (microfacet) is smooth, transmission and reflection can be easily simulated using Snell's and Fresnel's laws of refraction and reflection. So it is possible to calculate the general light scattering by the rough surface knowing the general density distribution of microfacet slopes or their normals. One of the earlier attempts to model light reflection from a rough surface is described in [1]. It was restricted to the reflected light component only, but it was a basis for developing one of the more well-known microfacet models, introduced by Cook and Torrance [2]. A lot of different modifications of the microfacet model were developed at that time [3][4][5]. The next developments are related to extensions of reflection "microfacet" models with support of anisotropy, sampling with correct weights, application of the Beckmann distribution [6,7], and development of alternative sampling methods with fitted separate approximations [8]. Schlick [9] developed a simpler approximation to the Cook-Torrance model with the help of rational approximations applying the Fresnel formula, widely adopted nowadays. Ashikhmin and Shirley [10] introduced an anisotropic reflection model on the basis of the Phong microfacet distribution, including correct importance sampling. Then an energy-conserving reflection model [11] was introduced. It is derived from arbitrary microfacet distributions, though this formulation involves numerically estimating integrals without closed-form solutions. Microfacet models are widely used in computer graphics, and experimental data have appeared for verification of scattering models. For example, different models of BRDF reconstruction ("Ward", "Ward-Duer", "Blinn-Phong", "Cook-Torrance", "Lafortune et al.", "He et al.", "Ashikhmin-Shirley") are compared with real measurements in [12]. A series of developments is related to the derivation of the refraction part of scattered light [13]. There are investigations that take into account subtle effects such as shadowing-masking and multiple interreflections on elements of the rough surface, and the use of importance sampling [2,14,15]. Reflection models based on wave optics are proposed in [16]. Such a method can simulate a wider range of surface effects than microfacet models. However, wave approaches are much more expensive to calculate and, as a rule, very approximate in their support of subtle effects such as multiple reflections on a rough surface profile. Numerical simulations of transmission models are performed in [17][18][19][20],[37]. The "GGX" microfacet model was introduced in [21]. It is an improved variant of the Cook-Torrance microfacet model supporting reflection as well as refraction and shadowing-masking. The work [21] contains a numerical comparison of different analytical models and demonstrates a lot of advantages relative to other analytical methods of BSDF reconstruction for a rough surface. The "GGX" model is considered one of the most accurate, flexible and widely used analytical approaches. It supports both reflection and refraction components, masking-shadowing, and importance sampling, and shows more accurate output relative to the Cook-Torrance model [21].
So, the "GGX" model is selected for examination in our paper as representative of the analytical group of methods. Typically, an analytical model is represented with two base functions. The first function, denoted as D(m), is a microfacet distribution function. It describes the statistical distribution of surface normal m over microsurfaces. The second bi-directional function, denoted as G (i, o, m), describes what fraction of the microsurface with normal m is visible in both directions i and o (Fig. 5). Typically, the shadowing-masking function has relatively little influence on the shape of the BSDF except for near grazing angles or for very rough surfaces but is needed to maintain energy conservation. where G1 is derived from the microfacet distribution D as described in [14,15]. We used the "GGX" model with the following microfacet distribution and masking shadowing function D(m), parameter  specifies surface roughness: and omnidirectional masking-shadowing function: where  is the angle between m and n,  between v and n, and  + ( ) is the positive characteristic function (which equals one if > 0 and zero if ≤ 0). v equals either to i or o vectors (Fig. 5). Note the function is rather similar to the well-known Beckmann distribution used in the Cook-Torrance model. So the process of BSDF reconstruction consists of the definition of the parameter  -degree of surface microroughness for which generated BSDF gives more close results to measurement data. It will be considered in the next chapters in more detail. Numerical methods of BSDF reconstruction Nowadays with increasing of computer's power new approaches for BSDF reconstruction have been developed in [22,23,25,26]. Part of them is based on pure numerical methods in which a BSDF is calculated by ray tracing simulation through an explicit geometry model of rough surface. The method based on the normal density distribution of rough surfaces is proposed in [26,27]. In this method the micro-relief is simulated with the help of distribution of normals represented with an analytical function having a set of parameters defined with the help of the optimization process. The process of BSDF calculation is presented in Fig. 6. The approach is maximally natural and transparent. To calculate BSDF the flat boundary presenting a rough surface is illuminated with parallel light from both sides of the boundary. Typically, stochastic (Monte Carlo) ray tracing is used. Each time ray hits the boundary, normal is defined with a probability according to analytical function -normal density distribution. Then ray reflection, refraction is defined according to Snell's law. The transformed light is registered with detectors which finally form resultant BSDF. The main problem here is the definition of analytical function specifying normal density distribution and its parameters. In [27] it was proposed to use two analytical functions like Gauss and Cauchy ( Fig. 7): where  is an angular variable specifying angle of surface normal, 0 is zero angle specifying the position of function maximum and it should be equal to zero for most of cases for rough surfaces with roughness distribution close to normal. So, these functions are used in our work. The two main parameters  and n specify shape of function of normal density distribution and can be defined with the help of the optimization process, which is presented in the scheme in Fig. 7 and consists of several main steps: 1. 
1. The first step includes input of the measured BSDF and other sample parameters affecting light propagation, such as the refractive index and thickness.
2. An objective function for the optimization and the parameters of illumination and observation to be used in the light simulation are defined at the second step. The measured BSDF of the whole sample can be used directly as the objective function; sometimes it is recalculated to an ordinary angular intensity distribution for simplification. The detector parameters (angular and spatial resolution, distance to the measured surface) during simulation are chosen to be maximally close to the parameters of the real detectors used in measurements. As a rule, only small illumination angles close to the normal of the measured sample are used during optimization. This is done because the accuracy of measurements decreases significantly for incident angles far from the normal direction.
3. During the third step an explicit model of the sample with the rough surface is generated for the normal density distribution with some initial parameters σ and n.
4. Most modern light simulation software can simulate light propagation through a boundary between two dielectric media specified with a normal density distribution. So there is no problem to calculate the angular light distribution for the sample model defined in the previous optimization step.
5. An optimization criterion is defined as the root mean square deviation (RMSD) between the measured and simulated angular intensity distributions.
6. The optimization criterion (RMSD) calculated in the previous step is transferred to the optimizer together with the current σ, n parameters. An external optimizer from the SciPy library with the "Simplex" (Nelder-Mead) algorithm was used in our work.
7. The optimizer makes the decision to continue the optimization process (7.1 in Fig. 7) or to interrupt it (7.2 in Fig. 7) in case the optimization goal is achieved or for another reason (for example, the maximal number of optimization steps is reached). If the goal is achieved, a final model of the rough surface based on the optimized normal density distribution is generated.

In the case of simulation there is no problem to place detectors and light sources anywhere, including inside the sample material, and to calculate light scattering from both sides of the rough surface, i.e., to calculate the BSDF of the rough surface. The optimization procedure (Fig. 7) was used to reconstruct the BSDF with the "Normals" numerical method. More details are presented in [27]. The investigations show the "Normals" method is very effective from the viewpoint of calculation speed and fast convergence of the optimization procedure during BSDF reconstruction. However, it has evident drawbacks too: namely, it does not support interreflections and masking-shadowing. Another numerical approach is based on the height density distribution and is described in [28]. There is some similarity between the "Heights" and the "Normals" methods. However, an analytical function is used here for another goal: to define the 2D height distribution H(x, y).

Figure 8: Definition of the "Heights" distribution

Fig. 8 shows a regular grid of points with uniform steps along the x and y axes. Each point in the grid represents a node of the microprofile. To define the profile height in each node with (x, y) coordinates, an analytical probability function of one or several parameters can be used. In other words, the height in each node is defined with a probability according to a normal (Gauss) or some other analytical function specifying the height density distribution.
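Before detailing the height distribution functions, we illustrate the optimization loop common to both numerical methods and to the "GGX" fit. Below is a minimal sketch, assuming Python with NumPy/SciPy (the paper only states that a SciPy "Simplex" optimizer was used). The simulate() stand-in and the synthetic "measured" data are our illustrative assumptions: in the actual procedure, step 4 is a full Monte Carlo ray-tracing run over an explicit rough-surface model, and the measured curve comes from the goniophotometer.

```python
# Minimal sketch of the fitting loop (steps 1-7). As a toy forward model we
# use the GGX distribution D from formula (2); a real implementation would
# call a ray tracer inside simulate().
import numpy as np
from scipy.optimize import minimize

def ggx_D(theta_m, alpha):
    """GGX microfacet distribution, formula (2), for theta_m in [0, pi/2)."""
    alpha = abs(alpha) + 1e-9  # guard against the optimizer probing alpha <= 0
    c = np.cos(theta_m)
    t = np.tan(theta_m)
    return alpha**2 / (np.pi * c**4 * (alpha**2 + t**2) ** 2)

def simulate(alpha, thetas):
    # Stand-in for step 4: in the paper this is a Monte Carlo ray-tracing
    # simulation over an explicit rough-surface model; here it is just D.
    return ggx_D(thetas, alpha)

def rmsd(sim, meas):
    # Step 5: root mean square deviation between simulated and measured data.
    return np.sqrt(np.mean((sim - meas) ** 2))

# Step 1: "measured" angular intensity distribution (synthetic for the demo).
thetas = np.radians(np.linspace(0.0, 60.0, 121))
rng = np.random.default_rng(1)
measured = ggx_D(thetas, 0.25) * (1.0 + 0.02 * rng.standard_normal(thetas.size))

# Steps 6-7: Nelder-Mead ("Simplex") search over the roughness parameter.
result = minimize(lambda p: rmsd(simulate(p[0], thetas), measured),
                  x0=[0.1], method="Nelder-Mead")
print("fitted alpha =", result.x[0])  # expected to be close to 0.25
```

The same skeleton applies to the "Normals" and "Heights" methods; only the parameter vector and the forward model change.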
In our work two analytical functions were used for the "Heights" approach, of Gauss and Cauchy type:

$$f(z) = \exp\!\left(-\frac{(z - z_0)^2}{2\sigma^2}\right) \quad (7)$$

$$f(z) = \left(1 + \left(\frac{z - z_0}{\sigma}\right)^2\right)^{-n} \quad (8)$$

Note that formulas (7) and (8) are similar to (4) and (5) used for the "Normals" approach but use the z coordinate instead of the angular θ variable; z is defined in the range [0, Hmax] and specifies the height distribution. Both functions depend on four parameters (σ, Hmax, n and z0). This is a rather substantial number of parameters, which can complicate the convergence of the optimization. However, experiments show that in most cases σ (sigma) alone is sufficient; n (the degree) can slightly improve convergence in some cases. Hmax can be set to 1 in most cases if the step between the nodes of the profile grid is set to around the same unit value. z0 is supposed to be zero (the density of heights is symmetrical relative to Hmax). z lies in the range [0, Hmax]. So f(z) defines the distribution of the height density. According to formulas (6)-(8) the height distribution of the microprofile can be defined and used for profile geometry generation (Fig. 9). The definition of the optimal parameters specifying the height distribution is fulfilled with an optimization procedure similar to that of the "Normals" method (Fig. 7). The only difference is an explicit model of the sample where the rough surface is simulated with geometry based on the heights distribution instead of the simplified normal density distribution. At present, most modern optical software, such as SPEOS, LightTools, or Lumicept [35], allows calculating such microgeometry, so the BSDF can be calculated without problems. Note that the main advantage of the "Heights" method is support of all effects: interreflections and shadowing-masking. The process of BSDF reconstruction with this method is more complex relative to "Normals" because the number of parameters used in reconstruction is increased. In the "Heights" method the Hmax parameter defining the maximal scale of the microprofile is added to the parameters specifying the shape of the height distribution function, namely the parameters σ and n (formulas (4) and (5)).

Methods of BSDF reconstruction based on wave optics

Most of the methods described in the previous chapters use ray optics for the simulation of light propagation. However, the application of geometrical optics can be inaccurate. A rough surface is a combination of microelements whose sizes vary from large to small values, down to sizes comparable with the wavelength. Application of geometrical optics theory can then result in noticeable inaccuracy of the reconstructed BSDF. Another problem of the geometrical approach is the parasitic influence of measurement noise in the case of a measured height distribution. The main problem of wave methods is their extreme complexity. Precise wave methods cannot be applied practically due to the complexity of the micro-surface geometry, so an approximate wave solution should be used for BSDF reconstruction. One such method is described in [16], but it is related to the reflection model only. A more well-known and more usable method to reconstruct the BSDF of rough surfaces is based on the Kirchhoff approximation. The method is built on a simple FFT (Fast Fourier Transform) based procedure; a more detailed description can be found in [29][30][31][32][33][34]. The BSDF reconstruction based on the Kirchhoff approximation has been developed for both the reflection and transmission components and was examined in our work. The Kirchhoff method should be applied to surfaces containing smooth roughness (without breaks) or consisting of sufficiently large facets.
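To give a feel for the FFT-based procedure, the following is a toy one-dimensional sketch of our own, heavily simplified and not the implementation from [29-34]. It assumes scalar fields, reflection at normal incidence from a perfectly reflecting profile h(x), and ignores obliquity factors: the reflected field acquires a phase 2kh(x), and the far-field angular intensity is proportional to the squared magnitude of the Fourier transform of that phase screen.

```python
# Toy 1D Kirchhoff / phase-screen sketch (our simplification): far-field
# intensity as |FFT of exp(2i*k*h(x))|^2 for reflection at normal incidence.
import numpy as np

lam = 0.55e-6                   # wavelength, m (green light)
k = 2.0 * np.pi / lam
n_pts, dx = 4096, 0.2e-6        # sample count and spatial step, m

rng = np.random.default_rng(0)
# Smooth random profile: filtered white noise, rms height ~ 50 nm.
h = np.convolve(rng.standard_normal(n_pts), np.ones(64) / 64.0, mode="same")
h *= 50e-9 / h.std()

field = np.exp(2j * k * h)                      # phase screen on reflection
spectrum = np.fft.fftshift(np.fft.fft(field))   # far-field amplitude
intensity = np.abs(spectrum) ** 2

# Spatial frequency f maps to scattering angle via sin(theta) = lam * f.
freqs = np.fft.fftshift(np.fft.fftfreq(n_pts, d=dx))
valid = np.abs(lam * freqs) <= 1.0              # propagating orders only
theta = np.degrees(np.arcsin(lam * freqs[valid]))
print(theta[np.argmax(intensity[valid])])       # peak near 0 deg (specular)
```

Because the whole angular lobe comes from one FFT, such a procedure is much cheaper than ray tracing, which is exactly the appeal of the Kirchhoff approach within its applicability limits discussed next.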
The local condition of applicability looks like this:

$$2\pi R \cos^3\theta \gg \lambda \quad (9)$$

where λ is the wavelength (in the medium where the scattered light propagates), R is a "typical" curvature radius of the roughness, and θ is the local angle of incidence. The wave-based approach does not consider multiple reflections. This limitation can be expressed in the form:

$$\sigma / \ell \ll 1 \quad (10)$$

where ℓ is a characteristic roughness length and σ is the RMS of the height deviation from a flat surface. The method also does not take into account shadowing and masking (shadowing refers to occlusion of the illumination direction, masking to occlusion of the observation direction). This limitation can be expressed as:

$$\tan\theta \ll \ell / \sigma \quad (11)$$

where θ is an illumination/observation angle counted from the normal to a flat surface. In the wave model, scattering is calculated for an infinite periodic surface. If there is no seamless conjugation between opposite sample edges, artifact scattering by the periodic conjugation can arise. It is negligible for a large relief sample but can be quite serious for a small one. The calculations are done for non-polarized illumination.

Set of samples for verification

Before describing the samples to be used in the investigation, let us consider the profile of a rough surface, Fig. 10.

Figure 10: Parameters of microroughness

Several widely used parameters describe a rough surface profile. These parameters will be used for the description of the measured samples of rough surfaces, so we consider them briefly. Denoting by $y_i$, $i = 1 \dots n$, the measured profile deviations from the mean line, the first parameter, Ra, is the most common one: the arithmetical mean deviation of the assessed profile,

$$R_a = \frac{1}{n}\sum_{i=1}^{n} |y_i| \quad (12)$$

The next parameter is the root mean square deviation:

$$R_q = \sqrt{\frac{1}{n}\sum_{i=1}^{n} y_i^2} \quad (13)$$

The next two parameters specify the extreme values of the valleys (heights below the mean line) and ledges (heights above the mean line) over the assessed profile, Rv and Rp correspondingly:

$$R_v = \left|\min_i y_i\right|; \qquad R_p = \max_i y_i \quad (14)$$

The next parameter is the most trivial: Rz is the maximal profile height and is calculated using the parameters from (14):

$$R_z = R_p + R_v \quad (15)$$

One more well-known parameter is RzJIS, or Rz5. It is related to the Japanese industrial standard and is based on the five highest peaks and five lowest valleys over the entire sampling length (l in Fig. 10):

$$R_{zJIS} = \frac{1}{5}\left(\sum_{i=1}^{5} R_{p_i} + \sum_{i=1}^{5} R_{v_i}\right) \quad (16)$$

And the last two advanced parameters are Rsk (skewness) and Rku (kurtosis):

$$R_{sk} = \frac{1}{n R_q^3}\sum_{i=1}^{n} y_i^3; \qquad R_{ku} = \frac{1}{n R_q^4}\sum_{i=1}^{n} y_i^4 \quad (17)$$

The parameters of all eight profiles have been calculated based on the measured height distributions using formulas (12)-(17). These parameters are combined in Table 1. The entries #1-#8 in the first line are sample identifiers. Additionally, the second and third rows of Table 1 present the size of the measured fragment on the sample and the resolution of measurements, i.e., the number of measured profile points along both the x and y directions. The step between measurement points was constant. The images of the investigated profiles are presented in Fig. 12. For convenience, the profiles in Fig. 12 are placed in order of increasing root mean square deviation (Rq), from left to right and from top to bottom. The goniophotometer GP-200 [35,36] was selected for measurements because it has very advanced characteristics, such as an angular resolution of 0.6°, a very small angular step of 0.1°, and a wide range of observation directions of ±90°. The high angular resolution is very important in our investigation because some of the measured samples have very small roughness, comparable with the wavelength, so the angular transmission is expected to have a very narrow shape. The measurements of transmission are done for five angles of the incident light direction, 0°, 15°, 30°, 45° and 60°, in a single plane of light incidence.
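As a small worked example of formulas (12)-(17), the sketch below generates a synthetic height grid in the spirit of the "Heights" construction (heights drawn from a Gaussian density, a simplifying assumption of ours) and evaluates the roughness parameters on one profile row. Note that RzJIS is approximated here from the five largest peak and valley samples of a single profile rather than from five separate sampling lengths.

```python
# Sketch: synthetic height grid + roughness parameters (12)-(17).
import numpy as np

rng = np.random.default_rng(42)
grid = rng.normal(loc=0.0, scale=0.5, size=(256, 256))  # Gaussian heights

y = grid[0]               # one profile row
y = y - y.mean()          # deviations from the mean line

Ra = np.mean(np.abs(y))                       # (12) arithmetical mean deviation
Rq = np.sqrt(np.mean(y**2))                   # (13) root mean square deviation
Rv, Rp = abs(y.min()), y.max()                # (14) deepest valley, highest peak
Rz = Rp + Rv                                  # (15) maximal profile height
peaks = np.sort(y[y > 0])[-5:]                # five highest peak samples
valleys = np.sort(-y[y < 0])[-5:]             # five deepest valley samples
RzJIS = (peaks.sum() + valleys.sum()) / 5.0   # (16), simplified
Rsk = np.mean(y**3) / Rq**3                   # (17) skewness
Rku = np.mean(y**4) / Rq**4                   # (17) kurtosis

print(f"Ra={Ra:.3f} Rq={Rq:.3f} Rz={Rz:.3f} Rsk={Rsk:.3f} Rku={Rku:.3f}")
```

For a Gaussian height distribution, Rsk should come out near 0 and Rku near 3, which is a convenient sanity check of the implementation.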
The goniophotometer GP-200 outputs data in a relative form calibrated to measurements without the sample. So, for the correlation of measured and simulated data, the same calibration process is fulfilled in the simulation, as explained in [31].

Set of methods to be verified

The first two methods selected for verification are based on the measured height distribution and are fulfilled in the Lumicept lighting simulation software [35]. The software has special instruments for direct simulation of rough microgeometry on the basis of a numerical height distribution; apart from that, it has physically accurate Monte Carlo ray tracing and a BSDF generator allowing the calculation of the BSDF based on ray as well as wave optics (Kirchhoff approximation). The first method is based on ray optics. It is denoted as "Measured_Heights (ray)". In this method, explicit geometry is created as the boundary between an air medium with refractive index = 1 and a dielectric medium with refractive index = 1.49 (the refractive index of the sample material). The boundary is illuminated under different incident directions from 0° to 85° with parallel light from both sides: from the air and from the dielectric. Ray propagation through the rough surface is based on the Fresnel and Snell laws. The detectors are placed above and below the boundary of the rough surface and detect transmitted and reflected light. Then the BSDF is generated based on the calculated data. The method supports all complex effects, such as interreflections on the microrelief and the masking and shadowing explained in the second chapter. The main restriction of the method is the applicability of ray optics: it can be inaccurate for samples with small roughness (with sizes close to the wavelength). Another possible drawback of the method is also related to ray optics: the high sensitivity of the generated BSDF to the quality of the measurements. Different steps between measured nodes or noise can result in a noticeable difference in the BSDF shape. The second method is denoted as "Measured_Heights (wave)". It uses the measured sample profile, i.e., the height distribution, too. However, light propagation is computed here analytically based on the Kirchhoff approximation. The disadvantages of the approach are listed in chapter 2.3 above. It should be pointed out that the measured profiles are not used directly in this investigation. To minimize possible errors related to the quality of the height distribution measurements, the application of ray or wave optics, and other possible reasons, an optimization procedure is run for each profile. It is explained in [26] and is similar to the optimization procedure presented for the "Normals" approach in Fig. 7. The purpose of the optimization is to obtain a transmitted light distribution maximally close to the measured one, and the parameters of the optimization are the scaling and filtration of the microrelief. The scaling allows increasing/reducing the microroughness and the filtration allows reducing the measurement noise. These ways of profile modification are presented in Figure 13. The verification of the next three methods is the main goal of this work. They do not require measurements of the height distribution, which can be expensive or simply not available. The method denoted as "GGX" was selected as the best representative of the analytical approach. A utility was created to generate the BSDF based on the analytical formulas (2) and (3). To define the optimal roughness parameter (α in formulas (2) and (3)), an optimization procedure similar to the one presented in Fig. 7 was executed; its goal is described below, after a short sketch of the per-facet ray interaction.
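All ray-optics methods in this set ("Measured_Heights (ray)", "Normals", "Heights") ultimately reduce each ray-surface event to the same smooth-facet interaction. The following is a minimal sketch of our own, assuming unpolarized light and the sample's refractive index of 1.49; the function name and structure are hypothetical, not taken from Lumicept.

```python
# Minimal sketch of the per-facet interaction used by the ray-optics methods:
# unpolarized Fresnel reflectance and Snell refraction for a ray hitting a
# smooth microfacet between two dielectrics.
import numpy as np

def facet_interaction(d, m, n1=1.0, n2=1.49):
    """d: unit incident direction, m: unit facet normal opposing d.
    Returns (R, t): Fresnel reflectance and refracted unit direction
    (t is None on total internal reflection)."""
    cos_i = -np.dot(d, m)                 # cosine of the incidence angle
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)    # Snell's law: sin(theta_t)^2
    if sin2_t > 1.0:
        return 1.0, None                  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    R = 0.5 * (rs**2 + rp**2)             # unpolarized reflectance
    t = eta * d + (eta * cos_i - cos_t) * m
    return R, t / np.linalg.norm(t)

# Normal incidence: R = ((n1-n2)/(n1+n2))^2, about 0.0387 for n2 = 1.49.
print(facet_interaction(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))
```

In a Monte Carlo tracer, the ray would be reflected with probability R (direction d + 2*cos_i*m) and refracted otherwise, with the facet normal m drawn either from the measured height grid or from the fitted normal density distribution.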
The optimization goal was to obtain a light transmission distribution maximally close to the measurements. The generation of the BSDF with the two numerical approaches denoted as "Normals" and "Heights" is very similar to "GGX" and is explained in chapter 2.1 and in [31,32] in more detail. The parameters for reconstructing the normal and height distributions are defined with an optimization procedure whose objective function is maximal closeness to the measured transmission. So, finally, we have five methods to be verified: two based on the measured profile and the angular sample transmission, "Measured_Heights (ray)" and "Measured_Heights (wave)", and three methods based only on the angular sample transmission, "GGX", "Normals" and "Heights".

Visual verification of methods

Comparison of measured versus simulated light transmission through a plate with a rough surface is used for the verification of the different BSDF reconstruction methods. However, such a comparison may not be sufficient. The BSDF of a rough surface can have a complex shape, and even a small inaccuracy in its generation can result in defects visible in the image, i.e., the appearance of artifacts. This can be especially noticeable if the BSDF is attached to complex curved objects which are illuminated under grazing angles. So, it is also preferable to verify how BSDF samples are visualized under realistic conditions. A special model aimed at visualization was prepared, see Fig. 14. The scene presents a virtual model of a special measuring box, JUDGE-II by X-Rite [39]. It has surfaces close to diffuse and several luminescent tube lamps emulating daylight. Several objects, a plate, a sphere and a torus, are placed into the measuring box. The reconstructed BSDF is attached to the external surfaces of the test objects. Internal surfaces are simulated as ideally smooth and have perfect Fresnel properties. The medium of all objects has refractive index = 1.49, which corresponds to the measured samples.

Figure 14: Scheme for visualization

The scene is observed at a finite distance with a special sensor emulating the human eye or a camera. The image is generated with the help of a simple forward Monte Carlo ray tracing technique in Lumicept [35]. Although it is not the most effective tool nowadays from the viewpoint of efficiency and calculation speed, and the generated images, as a rule, contain noise, it is a more reliable and safe tool because of its simplicity.

Results

The results of the simulation are presented in two variants:
1. As graphs of the angular distribution of transmitted light intensity. A special scene was prepared to simulate this characteristic as precisely as possible; it is maximally close to the measurement scheme of the GP-200 goniophotometer [31]. The simulation was done for the normal incident direction of parallel light in one plane corresponding to the plane of light incidence (σ = 0°). All six graphs (one measured with the GP-200 plus all five reconstruction methods) are combined into a single graph picture.
2. As images generated as specified in section 3.3. The images are generated with the help of the simple forward Monte Carlo ray tracing renderer in the Lumicept simulation system [35]. The simulation is fulfilled for all five methods of BSDF reconstruction explained in section 3.2.

Figures 15 and 16 present graphs of the angular intensity distribution of transmitted light for the normal incident direction (σ = 0°). More results for other incident directions are published in [40].
The general numerical difference (error) between the measured and simulated angular intensity distributions of transmitted light is estimated as the root mean square deviation (RMSD) reduced to the maximal value of the measured intensity, in relative form (×100%):

$$\mathrm{error} = \frac{100\%}{\max_i I_i^{meas}}\,\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(I_i^{meas} - I_i^{sim}\right)^2} \quad (18)$$

where $I_i^{meas}$ is the measured intensity and $I_i^{sim}$ is the calculated one. The index i enumerates the intensity values defined for specific directions of illumination and observation. All observation directions (in the range of ±90° with a step of 0.1°) and all illumination directions (σ = 0°, 15°, 30°, 45° and 60°) are used in the calculation of the difference. The values of the "error" for all samples and all BSDF reconstruction methods are combined in Table 2. The best result (the lowest error) is highlighted (bolded) for each sample.

Conclusions

As we can see from the results in Table 2, most of the investigated methods work well. The exception is "Measured_Heights (wave)", based on the Kirchhoff approximation, where we see a noticeable error for samples #5-#8 with Rq > 1 µm. This is also clearly seen in the graphs (Fig. 16). It is quite predictable when analyzing the restrictions of the Kirchhoff-approximation-based method listed in chapter 2.3. So, the "Measured_Heights (wave)" method cannot be recommended for samples with roughness Rq > 1 µm. On the other side, analyzing the graphs for samples #1-#3 (Fig. 15), the wave optics approach gives closer results in the shape of the angular distribution of transmitted light. Moreover, the wave approach almost does not require optimization of the measured height distribution, unlike the "Measured_Heights (ray)" method based on ray optics. This can be explained by the high sensitivity of the ray approach to the quality of the microrelief measurements (noise, step between measured nodes). The analytical "GGX" method (the improved Cook-Torrance model) works reasonably for all samples. In the case of "GGX", noticeable inaccuracy appears only for samples with large roughness. So, considering its simplicity (only one parameter controls the BSDF shape) and its analytical type of calculation, the method can be recommended for modeling rough surfaces with average microroughness. A comparison of the methods based on the measured height distribution ("Measured_Heights (ray)" and "Measured_Heights (wave)") against all other methods for samples with small microroughness demonstrates that the agreement between measured and simulated intensity is better for the methods which use the measured geometry of the microroughness, especially for large illumination angles. Both numerical methods show good agreement with measurements for practically all examined samples. The numerical "Normals" method is slightly better than the numerical "Heights" method in general transmission estimation, has better convergence during optimization, and is simpler in calculation. Surprisingly, "Measured_Heights (ray)" does not work well for sample #8 (Fig. 16, Table 2). One possible reason is the too small measurement area of the height distribution, so it is simply not representative. In general, the methods based on the measured height distribution are supposed to be more precise because the real profile geometry is used during the ray transformation. In the case of the "Measured_Heights (ray)" method, the interreflection, shading, and masking effects are fully supported because it uses Monte Carlo ray tracing. This method can suffer from the restrictions of ray optics, from the inaccuracy of the height distribution measurements, or from measured fragments that are not representative.
However, these drawbacks are overcome, at least partially, with the modification of the microrelief during the optimization procedure. Thus the "Measured_Heights (ray)" method can be considered the reference ("etalon") one in the visual comparison, so all other methods are compared to it. During the visual comparison (Fig. 17 and 18) we see that the images of the flat plane with average illumination and observation inclination are similar to each other. The situation with the curved objects is more complicated. The images generated with the analytical "GGX" method are similar to the reference (etalon) image in the case of small and average microroughness. The effect of "a dark ring in the sphere" is absent for all images created with the "GGX" BSDF; however, the curved objects look darker for samples with large roughness. Likely, energy conservation is not maintained so well in the approximations of analytical methods for samples with noticeable microroughness. The numerical "Normals" approach generates quite good images for samples #1-#4 (Fig. 17), but the images for the rougher samples #5-#8 (Fig. 18) have noticeable artifacts like a bright edge ring in the sphere. The reason for the effect is evident: the approach does not support interreflection, masking, and shadowing. From the viewpoint of visual appearance the numerical "Heights" method shows the best results (closest to the etalon images) for all samples. Summarizing all simulated data, we can recommend the numerical "Heights" distribution method as the more accurate one in case a precise simulation is required and there are no measurements of the microrelief geometry. In the case of rather small roughness, the analytical "GGX" or the numerical "Normals" method can be sufficient.
Targeting the DNA Damage Response to Increase Anthracycline-Based Chemotherapy Cytotoxicity in T-Cell Lymphoma

Mature T-cell lymphomas (MTCLs) represent a heterogeneous group of aggressive non-Hodgkin lymphomas comprising different entities. Anthracycline-based regimens are considered the standard of care in the front-line treatment. However, responses to these approaches have been neither adequate nor durable, and new treatment strategies are urgently needed to improve survival. Genomic instability is a common feature of cancer cells and can be caused by aberrations in the DNA damage response (DDR) and DNA repair mechanisms. Consistently, molecules involved in the DDR are being targeted to successfully sensitize cancer cells to chemotherapy. Recent studies showed that some hematological malignancies display constitutive DNA damage and intrinsic DDR activation, but these features have not yet been investigated in MTCLs. In this study, we employed a panel of malignant T cell lines, and we report for the first time the characterization of intrinsic DNA damage and basal DDR activation in preclinical models of T-cell lymphoma. Moreover, we report the efficacy of targeting the apical kinase ATM using the inhibitor AZD0156, in combination with standard chemotherapy, to promote apoptotic cell death. These findings suggest that the DDR is an attractive pathway to be pharmacologically targeted when developing novel therapies aimed at improving MTCL patients' outcomes.

Introduction

Nodal T-cell lymphomas (TCL) are a rare and aggressive subgroup of lymphoid malignancies, accounting for 10-15% of non-Hodgkin lymphomas (NHL). According to the latest World Health Organization (WHO) classification, they comprise several entities, such as angioimmunoblastic T-cell lymphoma (AITL), anaplastic large T-cell lymphoma (ALCL), and peripheral T-cell lymphoma not otherwise specified (PTCL-NOS), which is the most common subtype, including all the TCL cases that still fail to be categorized [1,2]. Anthracycline-based chemotherapies, namely CHOP-based (cyclophosphamide, doxorubicin, vincristine and prednisone) or CHOEP-based (CHOP + etoposide) regimens, are commonly used as front-line approaches. Suboptimal results led investigators to design preclinical and clinical studies testing the addition of novel agents, monoclonal antibodies or kinase inhibitors to these regimens to improve efficacy [3,4]. In this view, we recently reported the benefits of adding the tyrosine kinase inhibitor dasatinib to CHOEP [5]. Nonetheless, with the exception of ALK+ ALCL, in real-life experience, responses and overall survival (OS) rates for patients with TCL are still very low [4]. Thus, there is a need to increase our knowledge of TCL biology to design innovative treatment strategies and improve outcomes. In recent years, several efforts have been made by us and others to elucidate the genetic features of TCLs, reporting a broad range of copy number alterations (CNAs) and structural variations (SVs) rather than point mutations [6][7][8], thus suggesting that TCLs are characterized by genomic instability. It is well established that genomic instability is a hallmark of cancer cells [9][10][11] that has recently been described also in hematological malignancies including multiple myeloma (MM), diffuse large B cell lymphoma (DLBCL), acute myeloid leukemia (AML), and chronic lymphocytic leukemia (CLL) [12][13][14][15][16][17][18], but it has never been investigated in TCLs.
A peculiarity of genomic instability is the presence of intrinsic DNA damage, usually associated with basal activation of the DNA damage response (DDR). One of the key players of the DDR is the serine/threonine kinase ataxia-telangiectasia mutated (ATM), an apical kinase and a sensor of DNA damage. In the presence of DNA lesions, ATM is one of the first molecules to be activated in order to spread the signal throughout the nucleus. The signaling cascade triggered by ATM mediates cell cycle checkpoint activation and promotes the recruitment of specialized proteins to sites of damage to allow DNA repair [19]. Because of its role as a master regulator of DDR signaling, ATM is an appealing therapeutic target for successfully sensitizing cancer cells to standard treatments [20]. Accordingly, inhibitors of ATM and other DDR signaling kinases are now being tested in clinical trials, either as monotherapy for specifically mutated tumors or in combination with chemotherapeutic agents [21,22]. AZD0156, a recently developed selective ATM inhibitor [23], was able to enhance the effects of olaparib and radiation in preclinical models of different solid tumors [24,25] and is now being tested in advanced solid tumors, alone or in combination with olaparib and irinotecan-based chemotherapy, in a phase I study (NCT02588105). Here, we report for the first time the characterization of intrinsic DNA damage and basal DDR activation in preclinical models of T-cell lymphoma. Additionally, we provide evidence that targeting the apical kinase ATM in combination with CHOEP is effective in promoting apoptotic cell death.

Malignant T Cell Lines Are Characterized by Endogenous Levels of DNA Damage and Basal Activation of DDR Signaling

We first explored whether T-cell lymphomas, like solid tumors and some hematological malignancies, are characterized by intrinsic DNA damage, a feature of genomic instability. The presence of γH2AX and 53BP1 foci, well-recognized markers of DNA damage [26,27], was evaluated by immunofluorescence in a panel of cancer cell lines of T lineage representing the heterogeneity of TCLs. We observed that all malignant T cells evaluated, with the exception of KARPAS-299 and JURKAT, display a higher percentage of nuclei with detectable γH2AX and 53BP1 foci compared with normal T lymphocytes isolated from the peripheral blood of healthy donors (62.8 ± 12.3 vs. 28 ± 2.8 for γH2AX foci; 58.8 ± 17.9 vs. 37 ± 5.7 for 53BP1 foci; mean ± SD of all malignant T cells vs. healthy T cells, respectively) (Figure 1A). In addition, the number of γH2AX foci/cell was significantly higher in all cell lines than in normal T lymphocytes (7.7 ± 1.6 vs. 1.55 ± 0.07; mean ± SD of all malignant T cells vs. healthy T cells, respectively) (Figure 1B,C). The number of 53BP1 foci/cell, although higher than in normal T lymphocytes in all TCL cell lines, was significantly superior only in two out of six cell lines (KARPAS-299 and HH) (Figure 1B,D). As expected, 24 h upon chronic treatment with the anthracycline-based chemotherapy regimen CHOEP, the presence of both γH2AX and 53BP1 foci significantly increased (Figure S1), consistent with the fact that the CHOEP components are well-established DNA damaging agents [28][29][30][31].
Figure 1: Quantification of γH2AX and 53BP1 foci in malignant T cell lines and healthy T lymphocytes (A,B). Data are expressed as the mean ± SD of independent experiments. Asterisks indicate statistically significant differences between each cell line and healthy T lymphocytes (* p < 0.05; ** p < 0.01; *** p < 0.001; ns: not significant). Representative immunofluorescence images (C,D) of the experiments described in (A,B); original microscope magnification 100×. (E) SUP-T1 cells were exposed to increasing concentrations of CHOEP (IC50, 4× and 8×, as in [5]) or to 20 µM etoposide for 3 h. After harvesting, untreated and treated cell lysates were analyzed by Western blot using the indicated antibodies.

We then assessed whether the presence of intrinsic DNA damage was associated with constitutive activation of DDR signaling pathways. To this aim, we studied the expression of activating post-translational modifications of several DDR players, including ATM, Chk2, Chk1, and KAP1, before and after a 3 h treatment with increasing concentrations of CHOEP and with 20 µM etoposide alone, which is well known to activate the DDR in human cells [30,32]. As shown in Figures 1E and S2, the TCL cell lines display basal DDR activation, which is further enhanced upon chemotherapy exposure. Basal DDR activation was, as expected, not present in healthy T lymphocytes (data not shown) [33,34]. Taken together, these data indicate that in vitro models of T-cell lymphoma show intrinsic DNA damage and constitutive activation of DDR signaling pathways, despite some degree of heterogeneity among the analyzed cell lines.

Preclinical Models of T-Cell Lymphoma Are Sensitive to the ATM Inhibitor AZD0156

As we have shown that the DDR signaling cascade is basally activated and is further triggered in response to chemotherapy, we hypothesized that targeting the DDR could block the possible attempts made by cells exposed to CHOEP to repair the induced DNA lesions, thus leading to augmented apoptosis. For this purpose, we took advantage of a recently developed inhibitor, AZD0156, which targets the apical kinase ATM [23] and is being tested in clinical trials for solid tumors. First, we exposed the cell lines to increasing concentrations of AZD0156 (range 1 nM-500 µM) for 48 h and observed a dose-dependent reduction in viable cells in all cell lines, even though the sensitivity to AZD0156 was heterogeneous (Figure 2A). The only exception was represented by HH cells, which displayed only a mild reduction in cell viability when exposed to high doses of AZD0156 (not shown) and which for this reason were excluded from subsequent experiments. We then calculated the inhibitory concentrations (ICs) for each cell line (IC50 range: 0.55-2.3 µM) (Table S1). The observed decrease in cell viability was caused by an increase in dead cells (which internalize propidium iodide, PI), thus suggesting a cytotoxic rather than a cytostatic effect of this compound on the tested cell lines (Figure 2B). This was further corroborated by cell cycle analyses upon IC20 (Figures 2C and S3) and IC50 (not shown) doses of AZD0156, with no significant changes detected between untreated and AZD0156-treated cells.
ATM Inhibition Sensitizes Malignant T Cell Lines to CHOEP Treatment

We then decided to exploit malignant T cell sensitivity to AZD0156 to potentiate the cytotoxic effects of CHOEP. Thus, we exposed malignant T cell lines to IC20 CHOEP (defined in our previously reported study [5]) and IC20 AZD0156 (Table S1), alone and in combination, and we monitored cell growth by flow cytometry. With the exception of HD-MAR-2 cells, in all cell lines the addition of AZD0156 to CHOEP significantly reduced cell proliferation compared with CHOEP treatment alone (CHOEP 81.5 ± 6.4; CHOEP + AZD0156 68.1 ± 13.6; mean ± SD of all cell lines, p < 0.05) (Figure 3A). Consistent with these findings, CHOEP treatment caused an induction of ATM-Ser1981 auto-phosphorylation, which was abrogated by the addition of AZD0156, indicating the molecular activity of the ATM inhibitor (Figure 3B). Importantly, AZD0156, used either alone or combined, as well as CHOEP [5], did not impair the cell viability of normal T lymphocytes (Figure 3C).

Figure 3: (A) Cell growth upon treatment. (B) Cells were treated, and cell lysates were subjected to Western blot using the indicated antibodies. (C) OCI-Ly12 cells and healthy T lymphocytes, isolated from the peripheral blood of two different healthy donors, were treated for 48 h with IC20 CHOEP and IC20 AZD0156 alone or in combination; viable cells were then monitored by flow cytometry. In (A) and (C), data are expressed as the percentage of untreated samples and are the mean ± SD of at least three independent experiments (* p < 0.05; ** p < 0.01; **** p < 0.0001; ns: not significant).

To rule out the possibility that the different effects observed in the cell lines treated with either AZD0156 alone or the AZD0156 + CHOEP combination could be due to the presence of mutations in ATM and/or TP53, we performed targeted sequencing analysis. As expected, all cell lines bear pathogenic TP53 alterations but no clear ATM mutations; ATM is mutated only in the JURKAT cell line, but we were unable to define a clear inactivating effect of the detected alterations (Tables S2 and S3). When monitoring γH2AX foci upon treatment with AZD0156 and CHOEP, alone and in combination, a basal level of DNA damage foci was observed in untreated samples, which markedly increased upon CHOEP treatment (Figure S4A). In AZD0156-treated cells, the number of foci was comparable with that of untreated cells, possibly because γH2AX (H2AX phospho-Ser139) is phosphorylated by ATR and DNA-PK too [26]. Of note, the combination of CHOEP and the ATM inhibitor was able to prevent CHOEP-induced γH2AX phosphorylation. These data are consistent with the fact that γH2AX is a direct target of ATM and is thus affected by ATM inhibition. As expected, when monitoring 53BP1 foci upon CHOEP and AZD0156 treatment, we observed that both CHOEP given alone and the CHOEP-AZD0156 combination cause the induction of 53BP1 foci formation irrespective of ATM inhibition (Figure S4B). We also monitored the signaling downstream of ATM in response to treatment with AZD0156 and CHOEP.
As expected, treatment with AZD0156, given either alone or in combination with CHOEP, impacted not only ATM activation, as shown in Figure 3B, but also its activity on downstream targets (Figure S5).

ATM Inhibitors AZD0156 and KU-55933 Enhance CHOEP-Induced Apoptosis

We further investigated the mechanisms responsible for the reduction in cell proliferation caused by the addition of AZD0156 to CHOEP by monitoring cell death and the cell cycle by flow cytometry. We observed that, compared with CHOEP alone, the AZD0156-CHOEP combination caused an increase in apoptotic/necrotic cell death in all cell lines (fold change relative to untreated cells: CHOEP 1.31 ± 0.28; CHOEP + AZD0156 1.75 ± 0.33; mean ± SD of all cell lines, p < 0.0001) (Figure 4A), with a partial involvement of mitochondrial membrane depolarization, an event associated with apoptosis (Figure 4B). As expected, when monitoring cell cycle perturbations, we observed a trend towards increased cell cycle arrest in the G2/M phase when AZD0156 was combined with CHOEP, compared with CHOEP alone (percentage of cells in G2/M phase: CHOEP 18.94 ± 6.76; CHOEP + AZD0156 23.64 ± 10.86; mean ± SD of all cell lines, p: ns) (Figures 4C and S6). The other cell cycle phases showed no substantial modifications compared with CHOEP (not shown). Although not statistically significant, this increase in G2/M-arrested cells, together with the induction of apoptotic cell death, explains well the reduction in cell growth we observed (Figure 3A).

To conclude, we confirmed the efficacy of ATM inhibition in potentiating CHOEP effects using another ATM inhibitor, KU-55933, widely used in in vitro studies. We tried to titrate KU-55933 in our cell lines, but over a range of concentrations between 16 nM and 16 µM we observed a reduction in cell viability of only 10-20%; cell viability was significantly reduced only at higher concentrations (100-200 µM) (data not shown). As KU-55933 has been administered to cell lines at 10 µM to inhibit ATM kinase activity in other studies [35,36], we adopted this concentration for the reported experiments. We first exposed cell lines to 10 µM KU-55933 alone and in combination with IC20 CHOEP, and observed a significant reduction in cell proliferation (Figure S7A). Apoptosis induction was confirmed by cleaved caspase 3 expression on Western blot (Figure S7B) and further confirmed by a mitochondrial membrane depolarization assay (Figure S7C). These data are in line with a previous study showing, in JURKAT cells, the pro-apoptotic activity of KU-55933 when added to etoposide [33]. Like AZD0156, KU-55933, used alone or in combination with CHOEP, did not alter the viability of healthy T lymphocytes (Figure S7D).

Figure 4 caption (partial): In (B), data are expressed as the percentage of untreated samples. In (A-C), data are the mean ± SD of at least three independent experiments. Asterisks indicate statistically significant differences (* p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001; ns: not significant).
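The γH2AX and 53BP1 readouts above depend on counting foci per nucleus; the Methods below specify that at least 100 nuclei were scored per experiment. The following sketch shows one plausible way to automate such per-nucleus counts with scikit-image; the file names, channel assignments and Otsu thresholds are assumptions for illustration, not the pipeline actually used in this study.

import numpy as np
from skimage import io, filters, measure, morphology

# Hypothetical two-channel acquisition: DAPI (nuclei) and Alexa Fluor 488 (foci)
dapi = io.imread("dapi_channel.tif")
foci = io.imread("alexa488_channel.tif")

# Segment nuclei on the DAPI channel and label them
nuc_mask = dapi > filters.threshold_otsu(dapi)
nuc_mask = morphology.remove_small_objects(nuc_mask, min_size=500)
nuc_labels = measure.label(nuc_mask)

# Detect candidate foci: bright spots inside nuclei
foci_mask = (foci > filters.threshold_otsu(foci)) & nuc_mask
foci_labels = measure.label(foci_mask)

counts = []
for nucleus in measure.regionprops(nuc_labels):
    inside = nuc_labels == nucleus.label
    n_foci = len(np.unique(foci_labels[inside])) - 1  # drop background label 0
    counts.append(max(n_foci, 0))

# The paper scores >= 100 nuclei per experiment
print(f"{len(counts)} nuclei, mean foci/nucleus = {np.mean(counts):.1f}")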
Discussion

To the best of our knowledge, this is the first study evaluating the presence of intrinsic DNA damage and basal DDR activation in preclinical models of TCL. The finding itself is not surprising, as similar evidence has already been reported in solid tumors and in some hematological malignancies, namely DLBCL, MM, AML, and CLL [12-18]. Of interest, in MM, Cottini and colleagues reported that DNA damage is caused by replicative stress, identified a subset of patients characterized by chromosomal instability and poor prognosis, and correlated these features with increased expression of the oncogene MYC [12,13]. Interestingly, higher MYC levels have been reported in MTCL patients characterized by a worse clinical course and poor response to therapy [8,37]. Thus, further investigation will be needed to clarify whether MYC overexpression is also involved in the establishment of intrinsic DNA damage in PTCLs.

Recently, we and others have reported high frequencies of CDKN2A and TP53 alterations in PTCLs [6-8]. CDKN2A encodes p16INK4a, a protein that acts as a tumor suppressor and is involved in the modulation of cell cycle progression, which explains why CDKN2A deletion is a frequent event in cancer establishment [38]. Moreover, we have shown that the genome of MTCL patients is characterized by high genomic instability associated with chromothripsis [6], further supporting the role of DNA replication stress in the pathogenesis of MTCLs. The association between genomic instability and intrinsic DNA damage observed in solid and hematological malignancies raises the question of whether targeting the DDR signaling pathway may be exploited to switch off DNA repair and pro-survival mechanisms in cancer cells exposed to the genotoxic agents used as standard chemotherapy. Such a strategy is particularly appealing in B-cell lymphomas, as during maturation and antibody production B cells undergo somatic hypermutation and V(D)J recombination, which expose cells to high levels of DNA damage. To this end, in recent years several inhibitors of DDR proteins, including ATR, DNA-PK, PARP, Chk1, and WEE1, administered alone and in combination with other agents, have been preclinically tested [39]. Consistently, ATR inhibitors have shown strong cytotoxic and in vivo antitumor activity in mantle cell lymphoma (MCL) and DLBCL, regardless of TP53, MYC, and ATM mutation status [40].

In the present study, we report encouraging preclinical data describing the benefits of ATM inhibition in promoting CHOEP-induced cell death in preclinical models of T-cell lymphoma, using two different chemical compounds, AZD0156 and KU-55933. Despite the presence of basal DDR activation and intrinsic DNA damage in all the cell lines included in this study, their response to AZD0156 and to the AZD0156-CHOEP combination was heterogeneous, further confirming the heterogeneity of the pathology. Interestingly, we observed a reduction in γH2AX foci upon combined treatment with AZD0156 and CHOEP, because this serine is directly phosphorylated by ATM. Nonetheless, 53BP1 foci formation was not impaired by the addition of AZD0156 to CHOEP in our models. Thus, the decrease in γH2AX foci is not suggestive of an absence of DNA damage, but rather of dysfunctional DDR signaling caused by ATM inhibition. Consistently, DNA damage is not efficiently repaired, leading to apoptotic cell death, as in Figure 4A,B.
The AZD0156 inhibitor is currently being tested clinically in advanced solid tumors, alone or in combination with olaparib and irinotecan. Moreover, recent preclinical studies reported the capability of AZD0156 to enhance the genotoxic effects of olaparib and radiation in preclinical models of different solid tumors [24,25]. In accordance with these data, AZD0156 enhances apoptosis induced by CHOEP treatment. Given alone, AZD0156 does not induce cell cycle perturbations in malignant T cells, as observed in solid tumors [24,25]. Nonetheless, we reported a mild but not significant modulation of G2/M arrest when the ATM inhibitor was combined with CHOEP, which could be explained by ATM involvement in the S-phase checkpoint. This finding is in agreement with what was observed when combining AZD0156 with radiation, where a clear cell cycle arrest was likewise not observed [25]. By contrast, the combination with olaparib strongly induced G2/M cell cycle arrest [24]; possibly, the effects of CHOEP are more similar to those experienced by cells exposed to radiation than to olaparib.

In recent years, first-line regimens built on a CHOP-like backbone have been studied, but none, including the one combining the histone deacetylase (HDAC) inhibitor romidepsin with CHOP (Ro-CHOP), has significantly improved the survival of patients affected by PTCL [41], supporting the concept that identifying better treatments remains a major unmet need. In this view, the data we present indicate that ATM inhibition combined with anthracycline-based programs represents a potential new therapeutic option for the treatment of TCLs. The in vitro models of T-cell lymphoma employed in this study are characterized by pathogenic TP53 mutations, suggesting that AZD0156 is effective independently of TP53 mutational status. By contrast, we did not observe putative inactivating ATM mutations. Nonetheless, given that mutations in the ATM gene have been reported in TCL patients [7,42], additional studies will be required to define whether the proposed drug combination can be active in delineated MTCL subtypes and/or whether it could be used even in the presence of genetic alterations affecting ATM. Notably, our in vitro data suggest that suboptimal concentrations of the ATM inhibitor are not detrimental to healthy T lymphocytes. Nonetheless, we cannot rule out the possibility that the combined treatment with AZD0156 and CHOEP could cause adverse effects in humans. Preliminary data from the phase I clinical trial assessing the efficacy and tolerability of AZD0156 in combination with olaparib reported minor toxicities in about 40% of the patients enrolled in the study; however, hematologic toxicities were observed when higher doses of both drugs were used [43]. Thus, phase I studies will be required to explore the safety of the AZD0156-CHOEP combination and to assess doses and durations of exposure.

Materials and Methods

[…] [44]. Cells were grown as described in [5]. Healthy T lymphocytes were obtained from the peripheral blood of healthy volunteer donors who provided informed consent; they were isolated by density gradient centrifugation and separated with the autoMACS Pro separator (Miltenyi Biotec, Bergisch Gladbach, Germany) using CD3 MicroBeads (Miltenyi Biotec, Bergisch Gladbach, Germany) following the manufacturer's instructions.

Treatments

All drugs (cyclophosphamide monohydrate, doxorubicin hydrochloride, vincristine sulphate, etoposide, prednisone, and the ATM inhibitors AZD0156 and KU-55933) were purchased from Selleck Chemicals (Houston, TX, USA).
CHOEP was prepared as described in [5]. Briefly, CHOEP 1× was composed of cyclophosphamide monohydrate 5.84 pM (C), doxorubicin hydrochloride 1.5 pM (H), vincristine sulphate 260 pM (O), etoposide 0.3 µM (E), and prednisone 1 µM (P). AZD0156 was added to cells 30 min before CHOEP. KU-55933 was used at 10 µM and was added to cells 1 h before CHOEP treatment. The half-maximal inhibitory concentrations (IC50) were determined upon 48 h of chronic exposure, as the concentration of drug able to reduce cell growth by 50%.

Immunofluorescence of γH2AX and 53BP1 Foci

Upon treatment, cells were transferred onto glass slides using a cytospin centrifuge (5 min at 500 rpm). Glass slides were dried overnight at room temperature, then fixed with 2% paraformaldehyde and permeabilized with PBS 0.2% Triton X-100. Saturation was performed in PBS 5% BSA 0.2% TWEEN 20. The primary antibodies used were γH2AX #A300-081A (Bethyl Laboratories, Montgomery, TX, USA) and 53BP1 #NB100-305 (Novus Biologicals Bio-Techne, Centennial, CO, USA); the secondary conjugated antibody used was Alexa Fluor 488 #A11034 (Thermo Fisher Scientific, Waltham, MA, USA). Finally, nuclei were counterstained with DAPI (Merck Millipore, Burlington, MA, USA). Images were acquired with a Nikon Eclipse E1000 fluorescence microscope equipped with a DSU3 CCD camera, using a 100× magnification objective, as previously described [45]. In each experiment, foci enumeration was performed on at least 100 nuclei.

Cell Viability, Cell Cycle, Cell Death, and Measurement of the Mitochondrial Transmembrane Potential

Cell viability, cell death, mitochondrial membrane potential, and cell cycle were studied by flow cytometry as previously described [5]. Briefly, the reagents used to label cells were: propidium iodide (PI, Merck Millipore, Burlington, MA, USA) for cell viability, the Annexin V-FITC Kit (Miltenyi Biotec, Bergisch Gladbach, Germany) for cell death, and the fluorescent probe tetramethylrhodamine ethyl ester (TMRE, Thermo Fisher Scientific, Waltham, MA, USA) for mitochondrial membrane potential. For the analysis of cell cycle distribution, cells were fixed in 70% ethanol and then stained with PI. All data were acquired using the MACSQuant Analyzer flow cytometer (Miltenyi Biotec, Bergisch Gladbach, Germany) and analyzed with MACSQuantify software version 2.11 (Miltenyi Biotec, Bergisch Gladbach, Germany).

Mutational Profiling of Cell Lines

Genomic DNA was extracted using the NucleoSpin Tissue kit (Macherey-Nagel GmbH & Co, Düren, Germany) and quantified using the Qubit 2.0 and the Qubit DNA HS Assay kit (Thermo Fisher Scientific, Waltham, MA, USA). DNA was run on the Oncomine Comprehensive Assay (OCA) Plus targeted panel of 1.7 Mb (Thermo Fisher Scientific, Waltham, MA, USA) with AmpliSeq-based enrichment. All library preparation was performed manually according to the manufacturer's instructions (MAN0018490). Multiplex PCR amplification was conducted using 20 ng of DNA as input. Purified libraries were quantified by real-time PCR with the Ion Library TaqMan™ Quantitation Kit (Thermo Fisher Scientific, Waltham, MA, USA). The 50 pM libraries were pooled and loaded onto Ion 550™ Chips (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions (MAN0017275) and prepared for sequencing using the Ion Chef™ System (Thermo Fisher Scientific, Waltham, MA, USA). Sequencing was performed using the Ion GeneStudio S5 Sequencer (Thermo Fisher Scientific, Waltham, MA, USA).
For the analysis, data were initially processed using Ion Torrent Suite Software™ (Thermo Fisher Scientific, Waltham, MA, USA), and variant calling was performed using the Variant Caller plugin. Variants were filtered for coverage greater than 40 reads, frequency greater than 5%, quality value greater than 30, and coverage depth greater than 500×. The resulting variants were annotated using the OpenCravat tool (available online: https://opencravat.org (accessed on 14 February 2022)) and the Ion Reporter™ Software (v. 5.18, Thermo Fisher Scientific, Waltham, MA, USA). Variants were classified using the ClinVar (available online: https://www.ncbi.nlm.nih.gov (accessed on 14 February 2022)), cBioPortal (available online: https://www.cbioportal.org (accessed on 14 February 2022)), and dbSNP databases. Variants categorized as neutral/benign and variants with a frequency >0.00001 in the population (i.e., not pathogenic) were considered single nucleotide polymorphisms (SNPs) and were thus excluded.

Statistical Analyses

Data are expressed as the mean ± standard deviation (SD) of independent experiments. GraphPad Prism 9 software (GraphPad Inc, San Diego, CA, USA) was used to generate graphs and perform statistical analyses. Specifically, a one-way ANOVA was used to assess the significance of cell viability, cell death, and TMRE assays, with Tukey's post hoc test applied. A two-way ANOVA followed by the Bonferroni post-test was used to assess the significance of cell cycle experiments. For γH2AX and 53BP1 foci enumeration, p-values were calculated with Student's t-test.
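As a concrete illustration of the one-way ANOVA with Tukey's post hoc test described above, the sketch below reproduces the same test structure in Python (the study itself used GraphPad Prism 9). The viability values are made-up placeholders, not data from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical % viable cells from three independent experiments per condition
untreated = [100, 100, 100]
choep = [82, 75, 88]
combo = [65, 70, 69]  # CHOEP + AZD0156

# Omnibus test across the three conditions
f_stat, p_val = stats.f_oneway(untreated, choep, combo)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise comparisons with Tukey's HSD
values = np.concatenate([untreated, choep, combo])
groups = ["untreated"] * 3 + ["CHOEP"] * 3 + ["CHOEP+AZD0156"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))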
2022-04-03T16:08:29.584Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "0d57ebcc25a894052f430b57445cbd1d19c07d74", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/7/3834/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4d3cd56d2ca642b15a3657336865b9dc2bf62905", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238644575
pes2o/s2orc
v3-fos-license
Understanding Drivers of Unsustainable Natural Resource Use in the Comoro Islands

The Comoros archipelago is a biodiversity hotspot by virtue of its high level of endemism. However, it suffers one of the highest rates of forest loss worldwide, mainly due to strong anthropogenic pressures. As Comorian populations depend on forest resources for subsistence, establishing relevant conservation strategies for their sustainable management requires the consideration of multiple stakeholders' perspectives toward biodiversity and habitat conservation. To better understand the relationships between humans and nature, how Comorian people use natural resources, and the relevance of a protected area for long-term biodiversity conservation, we used Q-methodology to assess local people's perceptions regarding biodiversity and conservation actions. Three discourses were identified during the analysis: "Pro-environment discourse", "Keeping things as usual" and "Social and environmental concerns". According to the results, employed respondents were favorable to long-term forest and biodiversity conservation. In contrast, unemployed respondents were in favor of more immediate benefits, while unemployed but educated respondents were in favor of both long-term forest conservation and immediate benefits from forests. This suggests that poverty and a lack of access to basic services are associated with the overharvesting of natural resources by rural people. These results suggest that biodiversity conservation in the Comoros archipelago may benefit from plans aiming at (1) developing tourism and maintaining sustainable production of crops and livestock, which could enhance the livelihoods and well-being of all social groups, (2) developing projects such as local markets that would allow villagers to sell their agricultural production, and (3) setting up awareness campaigns for tree-planting and reforestation. Reforestation would allow natural plants to re-establish and make large trees available for long-term purposes.

Introduction

Biodiversity and natural resources provide many direct as well as indirect services to human society, including playing a crucial role in sustaining people's well-being (Giannini et al., 2012). As a consequence, human populations strongly depend on natural ecosystems (Zhu et al., 2016). This is especially true for the poorest populations of developing countries, who largely rely on wild plants for building materials, natural medicines and food, and on wild animals for meat (Ryan et al., 2016). However, on a global scale, biodiversity and natural resources are being degraded at alarming rates, mainly due to anthropogenic pressures (Brook et al., 2008). Over the past two decades, scientists and numerous national and international organizations have argued for the urgent need to find alternative community-based approaches to protect and manage natural systems in developing countries (Jantz et al., 2015; A. C. King et al., 2021). Until recently, natural resource and habitat management strategies tended to rely on biological and ecological data based on species ecology, population genetics or demographics, but have often neglected the human societies that critically depend on natural ecosystems (Fritz-Vietta, 2016; Gaebel et al., 2020).
Although conservation strategies based, for instance, on collaborative governance processes and participatory protected area management have been developed in many countries, such strategies remain non-existent in other parts of the world (Arumugam et al., 2021; Ayivor et al., 2020; Ghosh-Harihar et al., 2019; Jin et al., 2021; Krueck et al., 2019; O'Brien et al., 2021; Rittelmeyer, 2020). Communities living in geographic proximity to natural resources and forests typically have traditional knowledge about, as well as emotional bonds with, these areas. Ignoring the needs and practices of local communities in habitat conservation initiatives may result in conflicts between natural resource managers and these populations if the latter feel they face restrictions in the benefits they acquire from these areas (J. A. Fisher et al., 2020; Gaebel et al., 2020). This can eventually have a negative effect both on the long-term effectiveness of biodiversity conservation and on the livelihoods of the local population (Debata et al., 2017; Fritz-Vietta, 2016; Gaebel et al., 2020; Jin et al., 2021; Sournia, 1990). Reconciling the needs of the local population with natural resource use is now seen as fundamental in developing countries to implement management plans that ensure livelihoods and well-being in parallel with biodiversity conservation objectives (Boron et al., 2016; Helm et al., 2006; Jin et al., 2021).

The Comoros (an archipelago consisting of the islands of Anjouan, Grande Comore, Mohéli and Mayotte) is a biodiversity hotspot by virtue of its high level of endemism (Myers et al., 2000). However, on the islands of the Union of Comoros (Grande Comore, Anjouan and Mohéli), natural habitats are experiencing one of the highest rates of habitat loss in the world (9.3% each year, FAO, 2010). The Union of Comoros is also one of the poorest nations in the world (Bourgoin et al., 2017). According to B. Fisher and Christopher (2007), about 72% of Comorians depend directly on forest resources for subsistence (Bourgoin et al., 2017; B. Fisher & Christopher, 2007). About 60% of Comorians live below the poverty line (living on less than $1 per day) and 49% are undernourished. Additionally, the Union of Comoros has a fast-growing population, leading to an acute need for agricultural land and wood for building (Elvidge et al., 2009). Many researchers have pointed to intensive land use as the direct cause of the very high rate of natural habitat loss observed in the archipelago (Ibouroi, Cheha, Arnal, et al., 2018; Ibouroi, Cheha, Astruc, et al., 2018). Yet this pressure on natural forests and biodiversity is altering the ecosystem services they provide for the Comorian people. Effective conservation strategies are crucially needed to ensure the long-term preservation of biodiversity and natural habitats in the Comoros.

On the three islands of the Union of Comoros, some measures have been undertaken by local, national, and international organizations with the aim of ensuring the long-term conservation of biodiversity (Granek & Brown, 2005; Ibouroi, Cheha, Astruc, et al., 2018; Ibouroi et al., 2019; Poonian et al., 2008). For instance, in 1992, Mickleburgh et al. proposed long-term monitoring of the Livingstone's flying fox population and the establishment of a captive-breeding program for the species (Mickleburgh et al., 1992). The Mohéli Marine Park was successfully created in 2001 (Granek & Brown, 2005). Some of these projects were funded by the United Nations Development Program (UNDP 1998).
In 2016, a national network of marine and terrestrial protected areas was created on the three islands of the Union of Comoros (see Ibouroi et al., 2019). However, most of these conservation strategies have been restricted to protecting Livingstone's flying fox roosts (Ibouroi, Cheha, Astruc, et al., 2018), as this is one of the most endangered species on the islands. Strategies to conserve the islands' biodiversity and habitats need to consider various contentious aspects that currently involve complex decision-making dilemmas (e.g. forest management, hunting management, representation of local communities, etc.), and solutions have not yet been clearly defined. For instance, numerous gaps remain in understanding stakeholders' perspectives regarding natural resource management and biodiversity conservation. Local people's subjectivity and viewpoints are important to identify in order to inform conservation strategies and future management practices, to avoid mistaken decisions in planning these measures, and to increase their chance of being effective (Niedziałkowski et al., 2018).

In this study, we used a Q-methodology approach to assess the relationships between stakeholders and their use of natural resources, as well as their impact on habitats in the Comoros. Specifically, we assessed (1) how stakeholders perceive benefits from natural resources, (2) their level of awareness of the impact of their practices on biodiversity, and (3) their knowledge about, perceptions of, and attitudes toward biodiversity and conservation actions. As social factors such as level of formal education, employment and geographic location can affect knowledge and determine attitudes, we assessed which factors were related to positive or negative perceptions of forest and biodiversity conservation. This information may help (1) to understand the local community's representation of biodiversity, and (2) to explore future scenarios, with the objective of proposing relevant long-term conservation actions and habitat management strategies.

Study Area

The Comoros archipelago is located in the Indian Ocean, midway between Madagascar and the eastern coast of Africa. The archipelago comprises four islands: Grande Comore, Mohéli, Anjouan (the Union of the Comoros), and Mayotte (an overseas department of France). Without Mayotte, the Comoro Islands cover 1,862 km² and represent the third smallest African nation in terms of surface area. The islands are separated from each other by distances of about 40-80 km. Since their emergence about 7 million years ago, these islands have never been connected to a continental mainland or to each other (Louette et al., 2004). Our study focused specifically on the three islands of the Union of the Comoros.

In the Union of Comoros, habitat fragmentation and loss differ between islands due to differences in habitats, ecology and human demographics (Sewall et al., 2007). For example, Anjouan has the highest human population density in the archipelago (772.13 inhabitants/km², against 180.55 and 357.78 inhabitants/km² for Mohéli and Grande Comore, respectively), which has direct consequences for natural habitat disturbance. On this island, between 1972 and 1987, more than 85% of natural habitat was converted into farmland, urban areas and secondary forests (Goodman et al., 2010).
On Grande Comore, the rate of habitat loss is also high, but in certain regions, for instance the Karthala forest, habitat fragmentation is moderate. In contrast, both habitat loss and fragmentation are relatively limited on Mohéli, probably because of the presence of a protected area (the Mohéli Marine Park) but also due to the island's low human population density (180.55 inhabitants/km²). The Mohéli Marine Park was established in 2000 with the goal of protecting 404 km² of marine habitats home to many endemic and threatened taxa, such as the dugong (Dugong dugon) and the green sea turtle (Chelonia mydas). This marine protected area represents an important source of income for local communities; several members of the community have been hired by the park as regular staff (Granek & Brown, 2005). Many tourists also come to see the endemic marine taxa and take the opportunity to discover endangered terrestrial species such as the Livingstone's flying fox (Pteropus livingstonii) and the mongoose lemur (Eulemur mongoz). This tourist activity generates direct income for some local people (for example, those who work as guides or in hotels).

Our study involved different localities on the three islands of the Comoros (Anjouan, Mohéli and Grande Comore; Figure 1). To understand how stakeholders perceive benefits from natural resources, and their knowledge, perceptions and attitudes toward biodiversity and conservation actions, some of the interview questions and Q statements centered on two native flying fox species: Livingstone's flying fox (Pteropus livingstonii) and the Seychelles fruit bat (P. seychellensis comorensis), which differ in their feeding and roosting behavior as well as in their dispersal patterns (Ibouroi, Cheha, Arnal, et al., 2018; Norberg et al., 2000). Pteropus livingstonii is confined to the remaining mountain forests on Anjouan and Mohéli and feeds on endemic forest plants, while P. s. comorensis is widely distributed over the four islands of the Comoros, feeding in both forests and cultivated areas (Ibouroi, Cheha, Astruc, et al., 2018; Trewhella et al., 2001). Both species are important ecosystem service providers, as they are pollinators and seed dispersers (Ibouroi, Cheha, Astruc, et al., 2018). Their differences in habitat use and feeding ecology mean they provide different ecosystem services. The two species have a potentially crucial impact on both forest regeneration and crop cultivation, and are thus critical for maintaining overall ecosystem dynamics (Ibouroi, Cheha, Astruc, et al., 2018).

Because of this contrasting pattern of dispersal, feeding and roosting behavior, conservation strategies and human-bat conflicts also differ between the two species. For instance, Pteropus livingstonii populations are the subject of conservation actions, some of which involve local communities. These conservation actions focus on this species not only because of its low population size but also because of the rapid forest loss in the Comoros (Ibouroi, Cheha, Astruc, et al., 2018). Regarding P. seychellensis comorensis, as the species roosts and feeds in overexploited forests, it is commonly involved in conflicts, since individuals feed in farmed areas and can damage cultivated plants. Such conflicts are believed to be the primary driver of legal and illegal persecution of this species, as is the case in many countries (Oleksy et al., 2021).
For these reasons, these species are an ideal model for investigating local Comorian perceptions, allowing discourses regarding flying foxes, biodiversity and social development to be mapped, followed by an analysis of the consequences for the long-term conservation of natural habitats.

Research Design

Q-methodology is a standard method used to reveal people's subjectivity and explore viewpoints on defined issues that are often contested (Stephenson, 1935). It specifically aims at identifying underlying patterns among stakeholders and comparing their key viewpoints, which leads to the identification of broad shared common points as well as divergences between them (Arumugam et al., 2021; Bavin et al., 2020; Watts & Stenner, 2005). The approach combines the qualitative study of attitudes with the statistical rigor of quantitative research techniques (Arumugam et al., 2021; Bavin et al., 2020; Watts & Stenner, 2005). It is increasingly applied in different types of environmental research, including environmental management and policy and the social science of conservation (Arumugam et al., 2021; Debata et al., 2017; J. A. Fisher et al., 2020; Kamal & Grodzinska-Jurczak, 2014; Niedziałkowski et al., 2018; Rittelmeyer, 2020; Walder & Kantelhardt, 2018). Q-methodology involves five main steps: (1) collecting a broad sample of statements (concourse and Q-set design); (2) selecting a representative sample of statements (reflecting the diversity of the wider concourse) to form the Q-set ('Formulating the Q-Set'); (3) selecting participants ('Identifying the P-Set'); (4) conducting the Q-sorts and interviews ('Q sorting and post-sorting interview'); and (5) analyzing the data using factorial analysis ('Analyzing the data and development of factor perceptions') (Eden et al., 2005; Kamal & Grodzinska-Jurczak, 2014). The standardized steps of Q methodology are summarized in Figure 2.

Concourse and Q-Set Design. Q-methodology was conducted in three phases: between August and October 2016, between January and April 2018, and between December 2018 and March 2019. In each field session, the three islands were visited to collect data. As a first step, we established a concourse, defined as the full opinion spectrum in relation to the topic of habitat and natural resource use, biodiversity and habitat conservation. To establish this concourse, we used semi-structured interviews with the local population during our first field session (August to October 2016) to gather information regarding forest and natural resource use, land use and biodiversity conservation. These semi-structured interviews were based on predefined interview questions (Table 1), and the people interviewed were not preselected but were asked directly to participate when encountered in villages in the course of their daily activities or during our prospection in forests. Each discussion and the recording of the collected information took about one hour, and all responses were recorded with a dictaphone. For each person interviewed, gender, age, place of residence, socio-professional activity, and level of formal education were recorded. In total, 40 people were asked to participate in the interviews, of whom 13 (1 man and 12 women) declined and 27 agreed. Of the 27 people interviewed, one respondent was under the age of 18 and was excluded from the analysis. The remaining 26 respondents were 23 men and 3 women aged between 22 and 65 (average age 41); 14 lived on the island of Anjouan, 7 on Mohéli, and 5 on Grande Comore.
From the final interview transcripts, a total of 60 statements were extracted to form the concourse.

Formulating the Q-Set. From the 60 statements selected as the concourse (see above), a final set of 33 statements (Figure 2) was selected as the Q-set using a structured filtering process, in order to reduce the whole concourse to a manageable set of statements. Statements expressing the same value or viewpoint were summarized into one overarching statement. These 33 statements (the Q-set, Figure 3), representing the diversity of the wider concourse, cover five main topics: (1) land use, (2) the livelihood activities of the local population, (3) the importance of the forest and biodiversity for the local population, (4) the importance of flying foxes for both the forest and the local community, and (5) the relevance of a protected area for long-term biodiversity conservation and natural habitat management.

Identifying the P-Set. Typically, Q methodology involves a relatively small number of respondents, varying from 26 to 46 (Zabala et al., 2018), although a few studies have used larger numbers of respondents, beyond 100 individuals (Carmenta et al., 2017; Milcu et al., 2014; Zabala et al., 2018). Although respondents involved in a Q-study have to be diverse, the sample does not have to be representative of the population, as the aim is to obtain the most diverse range of opinions, regardless of whether they are minority ones (Zabala, 2014). In order to represent a range of opinions from local people, 66 respondents (the P-set; 51 men [77%] and 15 women [23%]) who had not participated in the previous semi-structured interviews were invited to complete the sorting and post-sorting interview. In contrast to the concourse stage, Q-sort respondents were preselected according to their level of formal education, whether they were employed or not, and their geographic location. This selection was based on our knowledge of Comorian institutions and forest workers, local networks and collaborations, but also on a snowball sampling approach (i.e. the identification of stakeholders by other participants). In the field, we gathered further information on villagers working in conservation and environmental institutions/NGOs, as well as on villagers with high or low levels of education in each locality; these villagers were selected as respondents for the Q-method process.

Q Sorting and Post-Sorting Interview. In the Q-sorting and post-sorting process, a researcher presents the statements (Q-set) so that participants (P-set) can rank them according to the predefined Q-sort structure, in order to express their level of agreement or disagreement with each. Interviews were conducted face-to-face. As for the semi-structured interviews, discussion and the recording of the collected information took about one hour, and interviews were conducted in the local language. For each respondent, gender, age, place of residence, socio-professional activity, and level of formal education were also recorded. The researcher explained to all participants that the aim of the Q-sorting process was to obtain their opinions rather than to test their knowledge. Participants, including men and women as well as people from urban and non-urban regions (see Table 2), were given the Q-set and instructed to read the statements carefully.
They were asked to sort the 33 statements according to a nine-point scale of agreement/disagreement (+4, +3, +2, +1, 0, −1, −2, −3, −4) presented in a sorting grid, forcing them to rank the statements into a quasi-normal distribution (see Figure S1). Each participant was then asked to explain their most extreme scores (−4 and +4), and these comments were later used to interpret the results.

During this Q-sort process, some of the difficulties encountered were: (1) the method is time-consuming in the preparation, data collection, and analysis phases. For instance: (a) because of the high rate of poverty in the Comoros, a large number of our potential participants, especially those working in forests, declined to participate unless they were paid; (b) our semi-structured and Q-sort sampling involved only a small number of women, as they tended to decline to be interviewed, probably for reasons related to the local culture. The few women interviewed were mainly employed in NGOs or were students. No woman met in the villages agreed to be interviewed, probably because the preselection of participants from the different villages was carried out a few days before the interviews and no discussion had taken place with their husbands or legal guardians. (2) Because respondents were often selected only a few days before the Q-sorting process, some of them did not have basic knowledge of the questions and often answered haphazardly. This can impact our results, as the goal of the research is to use a set of relevant people and a sample of opinion statements to draw conclusions.

Table 1. Predefined interview questions (excerpt):
Who exploits the forests in this region?
4. Do you have any knowledge regarding the history of this forest?
5. Have you seen any recent changes?
6. What do you want this forest to be like in the future?
7. If this forest disappeared completely, would it have any implications for you?
8. What wildlife are you familiar with in this forest?
9. Do you have any relationships with these animals? What do these animals represent for you?
10. Are any of these wild animals hunted? If so, for what purpose?
11. Who hunts in this region?
12. Which hunting technique is most used in this region and is most effective?
13. Have you seen an increase or a decrease (in number) in these animals?
14. Do you know about fruit bats?
15. What type of fruit bats have you encountered in your life?
16. Where do these bats live?
17. Where do these bats feed?
18. What do you think of these bats?
19. What activities do you do to make a living?
20. What do you cultivate?
21. In which area do you prefer to cultivate?
22. What type of foods do you grow?
23. Based on your knowledge of the soil in the past, have you noticed any changes compared to before?
24. What are the difficulties you face in developing your livelihood?
25. Do you receive any assistance from the government?
26. Do you receive any assistance from an NGO?
27. What would you like to do to improve your livelihood activities?
28. Do you know about protected areas?
29. Would you agree to the creation of a protected area in this forest?
30. Which possible areas would you propose for a protected area?

Analyzing the Data and Development of Factor Perceptions. The data were analyzed using the 'qmethod' package for R (R Development Core Team, 2016; Zabala, 2014), which groups responses according to their similarity using PCA and varimax rotation (a common approach in Q methodology). Different factors were rotated and compared during the multivariate analyses.
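To make this extraction-and-rotation step concrete, the sketch below mimics in Python what the 'qmethod' R package does here: correlate participants' sorts, extract principal components, apply a varimax rotation, and then apply the p < 0.01 loading cutoff (2.58/√33 ≈ ±0.45, used in the next paragraph) to flag significant loaders, confounders and null-cases. The sorts matrix is random placeholder data, and the Python implementation is an assumption standing in for the R package actually used.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Q-sorts: 66 participants x 33 statements, scores -4..+4
sorts = rng.integers(-4, 5, size=(66, 33)).astype(float)

# Correlate participants and extract the top three principal components
corr = np.corrcoef(sorts)
eigval, eigvec = np.linalg.eigh(corr)
idx = np.argsort(eigval)[::-1][:3]
loadings = eigvec[:, idx] * np.sqrt(eigval[idx])

def varimax(L, iters=100, tol=1e-6):
    """Plain varimax rotation of a loading matrix."""
    n, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(iters):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / n))
        R = u @ vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return L @ R

rotated = varimax(loadings)
threshold = 2.58 / np.sqrt(33)  # the p < 0.01 cutoff (~0.45) from the text
n_sig = (np.abs(rotated) > threshold).sum(axis=1)
print("null-cases:", int((n_sig == 0).sum()),
      "confounders:", int((n_sig > 1).sum()))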
We chose three factors based on a combination of total explained variance, minimum correlation between factors and a reduced number of confounders (participants loading on more than one factor). These three factors were retained as different discourses because each had a minimum of two or more significantly loading participants (at the p < 0.01 level; threshold value = 2.58 × 1/√(number of statements = 33) = ±0.45).

Additionally, we analyzed the dataset with an interclass Principal Component Analysis (PCA) implemented in the ade4 R package (Thioulouse & Dray, 2007) in order to more easily identify contrasting statements between the different social groups. This method, which does not require parametric data (it is not based on any probabilistic model, but only on geometric considerations), rotates the selected PCA axes to maximize the correlation between predefined groups. In a first analysis, we tested the discrimination between (1) the group of employed people working in NGOs (EmpNGO), (2) the group of employed people not working in NGOs (Emp), and (3) the group of unemployed people with a low level of formal education (Unp). In addition, we tested the discrimination between (4) people from the three islands of the archipelago, (5) people from urban vs. non-urban regions, (6) age classes (classified as young [18 to 35 years] and old [36 to 75 years]) and (7) gender (men and women). We tested whether these predefined groups significantly differed from each other in terms of Q-sort scoring using a permutation test based on 1000 permutations. The tests were considered significant when the p-value was <0.05.

Table caption (partial): […] (top) to consensus (bottom, in bold and italic), based on z-score differences. A statement is considered distinctive when, comparing all pairs of factors, at least one factor is significantly different from the others for this statement at p < .01 (e.g. statement 11); if all the comparisons between each pair of factors are significantly different at p < .01, the statement is considered "distinctive all" (e.g. statement 29); a statement is considered consensus when none of the comparisons are significantly different at p < .01 (e.g. statement 15). If a statement is distinctive for a factor (at p < 0.01), the symbol is filled; if a statement is not distinctive for a given factor, the symbol is empty.

Semi-Structured Interview Responses

All 26 people interviewed stated that they receive benefits from forests and use natural resources in everyday life. All respondents stated that they go to the forest to work (for agriculture and cultivation and to collect wood). In answer to the question "If this forest disappears completely, would that result in changes for you? What influence does the forest have on your well-being?", most respondents highlighted that the forest is essential for fertile soil, and thus necessary for agriculture, and is also important in maintaining the water source. A large majority of the respondents stated that they know what biodiversity is and its importance for their subsistence, and most of these had a positive perception of wild animals, reporting that they are useful for their well-being. Only a minority of our respondents stated that some wild animals are harmful. Comorian attitudes toward bats were mostly positive, and only a minority reported that they did not know the usefulness of fruit bats.
Of those with a positive attitude toward and perception of fruit bats, most reported their importance (1) as seed dispersers for forest regeneration, (2) as seed dispersers for important cash crops such as cashews and mangos, (3) as pollinators, or (4) as a source of income from tourism (in the case of P. livingstonii). Some respondents mentioned that fruit bats, especially those living in villages (P. s. comorensis), cause some damage in cultivated areas.

All the interviewees had some knowledge about the primary forest and its usefulness for the local population. In answer to the question "Have you noticed any recent changes?" regarding their perception of landscape changes within the forest, a large majority reported that the forest is overharvested and decreasing in surface area. They highlighted that the decline of the forest is having an impact on their livelihoods. When asked "Who exploits the forests in this region?", they gave contrasting responses. Some respondents reported that villagers are responsible for forest loss due to the practice of intensive wood collection, while only a minority claimed that their forests are harvested by outsiders from other cities on the island. Despite these diverging views, all respondents reported the negative impact of forest misuse on their livelihoods and well-being, and stated that if forests disappear completely, human life will not be possible in their region.

Most respondents reported that rural populations are neglected and lack assistance from the government and/or NGOs, stating that this is the main cause of forest overharvesting. A large majority reported that they never benefit from any government assistance or help from NGOs, and said that the lack of agricultural equipment and technical assistance is the main factor inciting rural people to harvest the forest. They mentioned that this lack of assistance forces the rural population to be highly dependent on forests, as they have no alternative livelihood. On this point, the local population agreed that forests must be protected or even regenerated; a majority agreed with the creation of protected areas in their region, and only a minority agreed under certain conditions, notably governmental support for their livelihoods and for agricultural equipment and technology.

Q Sorting and Post-Sorting Results

Among the 66 respondents interviewed during the Q-sort process, eight individuals (five men and three women, see Table 2 for age and demographic distribution) were null-cases and did not align with any discourse, as they had low sort loadings on all factors. Among all participants, seven were confounded: six were confounders between narratives A and C, and only one was a confounder between narratives A and B (Table 2). Neither confounders nor null-cases were considered when interpreting the results. Of the 33 Q statements (see Figure 3), six (18%) were consensual for all respondents (either positive or negative) and thus did not contribute to discriminating between discourses. Altogether, the discourses explained 55% of the total variance. The three discourses were labeled according to the statements loading significantly on the corresponding factor (narrative A: "Pro-environment discourse", narrative B: "Keeping things as usual", and narrative C: "Social and environmental concerns").
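The consensus/distinguishing classification referred to above compares statement z-scores between pairs of factors at p < .01 (see the table caption in the previous section). A minimal sketch of that rule follows; the z-scores and the standard error of a difference are illustrative placeholders, not values from this study.

from scipy.stats import norm

# Hypothetical z-scores for 5 of the 33 statements on factors A, B, C
z = {
    3:  (1.8, -0.2, 1.5),
    11: (-1.0, 2.0, -0.8),
    15: (0.4, 0.3, 0.5),
    29: (1.9, -1.5, 0.2),
    31: (1.6, 1.4, 1.7),
}
se_diff = 0.40  # placeholder SE of a z-score difference between factors
crit = norm.ppf(1 - 0.01 / 2) * se_diff  # two-sided p < .01 cutoff

for s, scores in z.items():
    pairs = [(0, 1), (0, 2), (1, 2)]
    sig = [abs(scores[i] - scores[j]) > crit for i, j in pairs]
    if all(sig):
        label = "distinctive all"   # all pairwise comparisons differ
    elif any(sig):
        label = "distinctive"       # at least one pairwise comparison differs
    else:
        label = "consensus"         # no pairwise comparison differs
    print(f"statement {s}: {label}")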
The results showed a low correlation between narratives A and B (r = 0.30) and between narratives B and C (r = 0.30), indicating that they are distinct (Table S1). The correlation between narratives A and C was higher (0.68), indicating some similarities between them (Table S1).

Consensus Statements. There was consensus on the need to develop tourism activities on the islands [statement 31]; for example, all respondents agreed that "Tourism is important for Comoros development". One respondent ranked this at the extreme end of the scale of agreement (+4) and commented: "We need to develop tourism; this is part of our development program in the Mohéli Marine Park" (see Figure 3). All respondents disagreed […]

Narrative A: Pro-Environment Discourse. Narrative A (factor 1) explained 27% of the total variance (Table S1). Thirty-one of the 66 respondents loaded significantly on this narrative. These respondents were mainly employed, either in NGOs or in another sector (EmpNGO = 16 and Emp = 12, see Table 2). They agreed with the statement that people will disappear from the islands if the forest disappears [statement 5; Factor 1 score: +3]. As one respondent commented, "The forest is our life: when it disappears from the island, we cannot survive." Another participant who strongly agreed (+4) stated, "The forest is very valuable to our lives; if it disappears it will be catastrophic and will be the end of our lives." Respondents in line with this narrative agreed that it would be good to re-establish the forest as before [statement 3; Factor 1 score: +4] and to have protected areas for habitats and animals [statement 33; Factor 1 score: +4]. For example, one respondent who strongly agreed commented, "It would be good to reestablish the forest as it was before. There used to be a diversity of foods, many rivers and it was wetter." Another strongly agreeing respondent (+4) highlighted that "Dense natural forests are important; before, the forest brought more benefits than now." They disagreed with the statement "There is a need to cultivate more land" [statement 22]. Instead, they agreed that "It is important to develop new agricultural techniques" [statement 30; Factor 1 score: +3]. As one respondent commented, "We need new methods and techniques to improve lands for cultivation that will allow us to increase production." Other comments included: "We need materials and methods for agriculture that are more ecological." "Technical and material aid is important as this will allow us to improve agricultural production." Those associated with this narrative disagreed with the claim that Comorians do not eat fruit bats (Table 3). One respondent affirmed, "Some Comorians eat fruit bats, I can confirm this as I have been present in many cases."

Narrative B: Keeping Things as Usual. Narrative B (factor 2) explained 15% of the total variance (Table S1). Only 8 respondents loaded significantly on this narrative; all were unemployed with a low level of education (Unp = 8). They disagreed with the statement "It is mainly villagers who are cutting trees" [statement 1; Factor 2 score: −3]. In any case, they consider that hunting animals is too difficult and so no hunting occurs [statement 11; Factor 2 score: +4]. As one respondent who strongly disagreed (−4) commented, "It is very difficult to hunt because it requires having a gun." Another said, "Although we would like to hunt, it is very difficult and nobody hunts here."
They believe that villagers should manage forests [statement 8; Factor 2 score: +3]. One respondent who strongly agreed with this statement commented, "The forest belongs to the villagers and it is up to them to manage it." Another claimed, "Forests are for villagers living nearby and who have experience in issues related to them. It is up to them to manage and to benefit from forests." They slightly agreed that agriculture is not profitable because of low prices [statement 23; Factor 2 score: +1] and generally agreed that crops should mainly be developed in plains [statement 21; Factor 2 score: +2], but they disagreed that rice cultivation should be further developed [statement 26; Factor 2 score: −2]. They slightly agreed that it is important to preserve traditional agriculture [statement 27; Factor 2 score: +1]. They agreed that fishing brings them a lot of revenue [statement 32; Factor 2 score: +3, Table 4].

Narrative C: Social and Environmental Concerns. Narrative C (factor 3) explained 13% of the total variance. Twelve respondents loaded significantly on this factor. These respondents were mainly unemployed with a low level of education (Unp = 9) or unemployed but educated (UnpE = 3), while three were employed. They agreed with the statement that forests are declining on the island [statement 4; Factor 3 score: +3] and strongly disagreed with continuing deforestation to develop cultivated land [statement 6; Factor 3 score: −4]. As a respondent who strongly disagreed explained, "No, it is not really areas to cultivate that are lacking." They slightly disagreed that wild animals are destroying their crops [statement 16; Factor 3 score: −1]. They also disagreed that many outsiders come to their villages to hunt [statement 9; Factor 3 score: −2] and that only children and teenagers hunt in their village [statement 10; Factor 3 score: −2]. They agreed that it is prohibited to kill bats [statement 12; Factor 3 score: +2]. They strongly agreed that money from NGO or government projects never reaches farmers [statement 29; Factor 3 score: +4], and disagreed that the Comorian government often helps the local population [statement 28; Factor 3 score: −3]. As one respondent commented, "The Comoros government has never given assistance to local people. If it helped us, we would not be as poor as we are." Other comments included: "The Comoros government never helps the people; that is false." "Unfortunately, NGO money is shared by agencies and does not reach the villagers." The narrative C respondents also agreed that there are problems with robbery in cultivated areas [statement 25; Factor 3 score: +3] (Figure 3, Table 4, Table S2).

Inter-Class Principal Component Analysis

Considering the first three principal components, we found a high level of inter-group variation (53.40% of the total variation) between employed people (EmpNGO and Emp together) and unemployed people (Figure 4). Axis PC1 clearly differentiated the two groups (EmpNGO/Emp vs. Unp), and this discrimination was significant according to the permutation test (p-value = 0.01).
Together, the EmpNGO and Emp groups agreed with the following statements: "It would be good if the natural forest was reestablished as before" [statement 3], "Forests are declining on the island" [statement 4], "If the forest disappears from the island, people will also disappear" [statement 5], "Aid is […] hunt" [statement 11], "We need to cultivate more land because it produces less than before" [statement 22], and "Development project/NGO money never reaches farmers" [statement 29]. These groups were negatively correlated with the statements: "It is mainly villagers who are cutting trees" [statement 1], "Wild animals are decreasing in our area" [statement 14], and "Rice cultivation should be further developed" [statement 26].

Considering the influence of the three islands on the first three principal components, we found a high level of inter-island variation (53.73%). Axis PC1 differentiated the three islands, and this discrimination was significant according to the permutation test (p-value = 0.03). People from Grande Comore were positively correlated with the following statements: "Crops should be mainly developed in plains" [statement 21], "It is important to preserve traditional agriculture" [statement 27], and "Fishing brings a lot of revenue for us" [statement 32]. They were negatively correlated with the statements: "Agriculture and farming are the only possible livelihood activities here" [statement 19], "If wildlife disappears, our crops will decrease" [statement 17], and "Rice cultivation should be further developed" [statement 26]. People from the island of Mohéli were positively correlated with the statements: "Agriculture and farming are the only possible livelihood activities here" [statement 19], "If wildlife disappears, our crops will decrease" [statement 17], and "Rice cultivation should be further developed" [statement 26]. They were negatively correlated with the statements: "Fishing brings a lot of revenue for us" [statement 32], "It is important to preserve traditional agriculture" [statement 27], and "There are not enough people who cultivate" [statement 20]. The views of people from Anjouan were situated between those of people from Grande Comore and Mohéli (Figure 4). Considering the influence of gender, urban vs. non-urban region and age class, the discrimination tests were not significant (p-value > 0.05).

Natural Resource Use by Local People and Its Relationship to Forest Loss

According to the information collected in the interviews, Comorian people rely heavily on natural resources for sustenance. All (100%) of our respondents confirmed that they use the forest for cultivation or to collect wood, even those with fairly high socio-economic status, such as administrative, financial or human resources directors. Most of the respondents have a minimum of formal knowledge about biodiversity and forests: they stated that they know what biodiversity encompasses, and they generally have a positive attitude toward wild animals. Our Q-sort sampling involved only a small number of women (23% of all respondents, see Table 2). In rural areas of the Comoro Islands, women are heavily involved in natural resource use, as they are responsible for daily subsistence, including producing agricultural crops, food processing and marketing activities, and husbandry of small livestock. They thus have traditional knowledge about forests and natural resources (Bourgoin et al., 2017). Our interpretations take this sampling bias into account.
The Q-sort results show that, despite the diversity of viewpoints among stakeholders, all affirmed the importance of forests and biodiversity, including flying fox species. However, the findings also highlight the complex links between biodiversity, natural habitats and human needs, which include the economic benefits received from agroforestry systems. Despite their understanding of the negative impacts of degraded forests on their well-being, some rural populations have no solution for subsistence other than forests and natural areas. Comorian people know that the surface area of natural habitats is decreasing in the archipelago and are aware that if the forest disappears, no human life will be possible on the islands. Most people have accurate ideas of the mechanisms involved: for instance, they detailed that complete forest loss would generate a decrease in water resources, low agricultural yields, a lack of charcoal and wood for building, and the disappearance of other resources, such as food and medicinal plants. This indicates that Comorian people are aware of the ongoing process of degradation and its consequences, but have no alternative livelihood other than harvesting forest resources. A few respondents had negative perceptions of fruit bats (raised during the interviews but not in the Q-sort surveys). These perceptions are probably due to the fact that P. s. comorensis feeds in cultivated areas and in fruit orchards, resulting in some damage to crops. However, some respondents stated that the benefits fruit bats bring to their farms clearly outweigh the damage. Various studies examining attitudes towards biodiversity and habitat conservation in developing countries have shown similar positive perceptions of biodiversity: for instance, in Madagascar (Ratsimbazafy et al., 2012), India (Badola et al., 2012; Silori, 2007) and Uganda (Infield & Namara, 2001). In our study, positive perceptions of biodiversity were largely driven by the perceived benefits to the respondents. For example, most positive attitudes toward P. livingstonii were due to the fact that the species attracts many tourists, as it is one of the largest bats as well as one of the most threatened animals in the world, but also because the species plays a crucial role in forest regeneration and in crop cultivation. The positive attitudes toward P. s. comorensis were related to its role as a seed disperser, but also to the fact that the species represents an important source of food for many rural populations. Our results identified three main discourses, or narratives, one of which (Pro-environment discourse) supports long-term biodiversity conservation through the creation of protected areas. This narrative recognizes the consequences of forest loss and supports the development of ecological agricultural methods that allow forests to be maintained and developed. The second narrative (Keeping things as usual) is more in favor of immediate benefits from the forest and the protection of local activities and revenues, despite awareness of the importance of forests and the effects of natural habitat loss on local livelihoods. The third narrative (Social and environmental concerns) is in favor of immediate benefits from forests, but equally sees the necessity of preserving natural habitats. These respondents understand the importance of preserving forests and the negative impact of forest and biodiversity loss, but are forced by poverty to harvest natural resources.
According to our results (Table 2), the narratives (Social and environmental concerns) and (Pro-environment discourse) are highly correlated. This correlation between the two discourses is explained by the high number of confounded sorts, i.e., respondents who loaded significantly on both factors. Positive attitudes toward long-term biodiversity conservation (Pro-environment discourse) are held mainly by employed people, including NGO staff, professors, agricultural engineers and other public officials. This could be linked to the fact that their employment leads them to be less dependent on forests and natural resources. Many previous studies have shown a significant relationship between employment, formal education and perceptions of biodiversity and forest conservation (Cairns et al., 2014; B. King & Peralvo, 2010). Our results indicated that respondents with a low level of formal education, who are often unemployed, are associated with the narrative "Keeping things as usual". Being dependent on forest resources, their main concern is to protect their livelihoods rather than biodiversity, leading them to stress that only local people should manage forests and natural resources. This highlights that the lack of other means of securing the necessities of life is the main factor leading rural people to harvest natural resources. While they are aware of the broad importance of forests, for these people, protecting them is essential mainly for their subsistence or health rather than for intrinsic or ecological reasons. According to our analysis, Narrative B appears to represent an attitude associated with Grande Comore respondents (9 respondents from Grande Comore). This result must be interpreted with caution, as it may not be broadly applicable across the three islands of the Union of Comoros. Among unemployed respondents loading significantly on the different discourses, three were educated to university level, having finished their education a few years earlier, but did not have formal jobs. These unemployed but educated respondents mainly belonged to the narrative (Social and environmental concerns, see Table 2), indicating that they are aware of the necessity of preserving natural habitats but are also in favor of immediate benefits from forests because of the level of poverty on these islands. Despite their high education level and their awareness regarding the importance of forests, biodiversity and natural habitats, these people are poor, struggle to meet their day-to-day needs, and are in favor of any actions that may generate immediate benefits for their survival. This highlights that although education is crucial for understanding and awareness regarding the importance of forest and biodiversity conservation, reducing poverty and improving the livelihoods of local people on these islands is the key strategy to allow habitat and biodiversity conservation actions to be effective. These rural people claimed that aid money never reaches farmers. In the Comoros, development project budgets are often managed by people with a high level of education, and local people believe that this money is always absorbed by these agencies. As aid from development projects and NGOs is often limited, and thus insufficient to reach all rural people, this leads those who do not benefit to have a negative perception of NGOs.
Our results found that rural people from Grande Comore and Anjouan intensively collect wood to sell, resulting in a high harvesting rate in the forests of these islands compared to Mohéli's forests (Granek, 2000; Ibouroi, Cheha, Arnal, et al., 2018; Sewall et al., 2011). In contrast, respondents from Mohéli are in favor of forest and biodiversity conservation, including the development of ecological rather than traditional agriculture (the latter is preferred by respondents from Grande Comore). Respondents from Mohéli feel that wood collection should be prohibited in their region. On this island, due to the presence of the National Park of Mohéli, various nature conservation projects, and the high level of tourism linked with local biodiversity (e.g., sea turtles, Livingstone's flying fox), biodiversity represents the main source of income for the population (Granek & Brown, 2005). Our study's findings highlight the diversity of viewpoints among Comoros stakeholders depending on several social factors, including formal education level, employment, and geographic location. These results align with a number of other studies that have shown diverse local perceptions of biodiversity and of how to manage natural resources (Gall & Rodwell, 2016; Kamal & Grodzinska-Jurczak, 2014; Watkins & Cruz, 2007). Understanding the nuances in attitudes and the different weights attributed by stakeholders to each element of the dilemma may help to find unexpected areas of agreement and to advance new solutions. Conservation Recommendations Previous studies in the Comoro Islands have proposed different strategies for limiting intensive forest exploitation, including law enforcement, deployment of the national army in forests, and educational initiatives such as increasing awareness and understanding of conservation issues (Mikuš, 2009; Poonian et al., 2008; Trewhella et al., 2005). However, none of these strategies appears to be an appropriate solution for effectively reducing habitat destruction, since Comorians' exploitation of natural resources is a question of survival. As many stakeholders commented during our interviews, "We use natural resources for our survival. We will continue to exploit forests even if it costs our life." On the other hand, our results indicate that Comorians today do not lack awareness concerning the importance of natural habitats and the impact of habitat disturbance and loss on their livelihoods. Rather, it appears that the main constraint is poverty, forcing them to heavily exploit forests. In addition, employing force as a conservation measure is dangerous for villagers, forest managers and conservationists. Some Mohéli respondents affirmed that marine turtle poachers are often armed. In an assessment of Comorians' perception of the Mohéli Marine Park, a Marine Protected Area (MPA), Poonian (2008) revealed that the most important factors affecting habitat management in the protected areas of Comoros are the lack of sustainable alternative livelihoods, inequitable distribution of benefits and continuing environmental threats. Poonian (2008) suggested that, to ensure habitat conservation and the continuity of this protected area, MPA managers should adopt programs that carefully consider sustainable sources of finance for stakeholders and lower-cost alternatives that reduce poverty. Hauzer et al. (2008) highlighted that Comorians, especially from Mohéli, were aware of the importance of the protected area, but felt that their survival was of priority importance.
Hauzer et al. (2008) suggested that the best conservation strategy would be a measure that would "(1) ensure sustainability through effective financial planning and appropriate management techniques; (2) mobilize local communities to create a truly co-managed MPA; (3) ensure tangible benefits to local communities through realistic alternative livelihood options". In a study of the links between resource dependency and the attitudes of commercial fishers toward coral reef conservation in the Red Sea, Marshall et al. (2010) found a direct relationship between conservation attitudes and aspects of resource dependency. In particular, fishers with higher incomes were more likely to have a positive conservation attitude. Sewall et al. (2011) suggested that local Comorians living near forests should be compensated if agricultural land use within a reserve were restricted. One of the most important management strategies in protected areas is involving local people and habitat users in management (Nordlund et al., 2014). Freed and Granek (2014) suggested that a priority for management actions should be to include local community members and stakeholders in the decision-making and implementation process for protecting fragile reef ecosystems in the Comoros. These authors suggested that local communities would serve as the primary management actor for an effective conservation strategy (see also Freed et al., 2016). Sewall et al. (2011) also suggested that any plans for a reserve should be adopted through a formal process that includes local community engagement, as without this, conservation strategies will not be effective. Our results highlight that employment influences local perceptions and suggest that poverty and a lack of access to basic services are associated with overharvesting of natural resources by rural people, leading to forest fragmentation and a high rate of habitat loss in the archipelago. Although these rural people cultivate more land, most of them apply traditional methods, especially slash-and-burn, which leaves land unproductive after a few years and increases the need for more land. The first key recommendation we propose for the preservation of natural habitats is developing and maintaining sustainable production of crops for local human benefit. New methods and materials to develop ecological agriculture must be made available to local communities. Rocliffe et al. (2014) highlighted that the underdevelopment of legal structures supportive of local communities was one of the strongest constraints on formal local protected areas in many developing countries of the Western Indian Ocean islands, including the Comoros archipelago. Projects such as these could allow local populations to improve yield on the same surface area, thus reducing the conversion of forest into farmland. The lack of markets in which to sell cultivated products is a second factor leading to the overexploitation of natural habitats, as highlighted by many respondents. As a second key recommendation, we propose developing local market projects that could allow the creation of new jobs for local people. According to our results (semi-structured interviews), all interviewed respondents use forests and natural resources for subsistence. B. Fisher and Christopher (2007) highlighted that about 72% of Comorians depend directly on forest resources. This strong dependence on natural resources is due to the fact that many development sectors, such as tourism, are not yet developed in the Union of Comoros (Granek & Brown, 2005).
The third key recommendation we propose is to develop ecotourism projects, including the construction of bungalows in strategic villages as well as tourist sites for observing emblematic species such as the endemic flying foxes, lemurs and scops owls. Villagers and local communities could manage these infrastructures. As the lack of governmental assistance is claimed to be the main cause leading to the overharvesting of forests, the fourth key recommendation we propose is that government aid and support should be made available to rural people, which could reassure them of the government's good intentions to contribute to local development. The fifth key strategy for ensuring the preservation of Comoros forests and natural habitats over the medium and long term is to set up awareness campaigns for replanting and reforestation. As Comorian people do not lack awareness regarding the necessity of preserving forests and natural habitats but overexploit forests for their everyday needs, forest managers must ensure the improvement of the living conditions of the local population before any replanting project; otherwise, communities will exploit the forests before replanted trees grow. Replanting can play an important role in sustaining native biodiversity and makes an important contribution to its conservation (Rocliffe et al., 2014). Re-established forests involve the replacement of native natural plants but also make large trees available over the medium and long term, which is crucial for human wellbeing (Brockerhoff et al., 2008). Conclusion In conclusion, habitat loss and the vulnerability of biodiversity in the Comoros are the results of the unsustainable overexploitation of natural resources. Yet to maintain the ecological balance necessary for daily human needs and for future generations (clean water, productive agricultural land, ecosystem services from biodiversity and forests), it is vital to conserve the natural habitats on these islands. As the exploitation of natural resources by local people is a question of survival, programs to reduce poverty, for instance by developing tourism, maintaining sustainable production of crops and livestock, and setting up awareness campaigns for tree planting and reforestation projects, are necessary for the Union of Comoros. On the three islands of the Union of Comoros, a project to create national marine and terrestrial protected areas, including national parks funded by the Global Environment Facility (GEF) and implemented since 2016 by the United Nations Development Programme (UNDP), has been agreed by the Comorian government. The project is now managed by an independent institution (the National Network of Protected Areas, or Réseau National d'Aires Protégées, RNAP). Based on our interviews with local people, most rural communities agree with the creation of protected areas if they can gain direct benefits from the project and are involved in the conservation actions. Constructive engagement with local residents (such as providing employment as local guides or park rangers, for example) would contribute to supporting long-term conservation success. Despite the limited number of women participating in the Q-sort process, women should be involved in conservation projects and in decision-making regarding conservation strategies because of their knowledge of forests and natural resource use.
A dosimetric comparison of proton versus photon irradiation for paediatric glomus tumour: a case study Abstract Background: Intensity-modulated radiation therapy (IMRT) has revolutionised the way head and neck cancers can be treated. It allows for a more conformal treatment plan when compared to 3D conformal radiation therapy. In paediatric patients, however, IMRT continues to deliver higher doses than desirable. Proton beam therapy, on the other hand, has the potential to further spare organs-at-risk. Methods: A 16-year-old boy with a left-sided paraganglioma of the base of skull manifested by headaches, neck pain and tongue cramping was simulated, planned and treated with proton therapy with significant contralateral organ-at-risk sparing. Results: For this patient, dosimetric plan comparison between photon and proton plans clearly showed better sparing of contralateral organs-at-risk with protons. The contralateral parotid gland received a mean dose of 386·3 cGy with photons, whereas it received 1·3 cGy (CGE) in the proton plan. Conclusions: The dosimetric advantage of proton beam over photon beam therapy has successfully been demonstrated in this case study for a paediatric patient with a head and neck tumour. Sparing of contralateral structures is especially important in paediatric patients, who are at a greater risk of secondary malignancies due to possible long life expectancy. Introduction Paragangliomas and pheochromocytomas are rare slow-growing benign neuroendocrine tumours. 1 Pheochromocytomas arise within the adrenal glands, whereas paragangliomas are found in the extra-adrenal autonomic paraganglia. 2 Both tumours have the ability to secrete catecholamines; however, pheochromocytomas are much more likely to do so. [3][4][5] The majority of paragangliomas arising around the base of skull region are non-secreting tumours. 6 When treating these tumours with radiotherapy, the goal is to achieve local control with the least amount of toxicity to surrounding tissues. 7 This goal is even more important in paediatric populations due to the risk of secondary malignancies arising later in life. In this case study, we compared a proton plan to a photon plan for a paediatric patient. Clinical history The patient is a 16-year-old boy who presented with worsening headaches, difficulty in swallowing, neck pain, tinnitus and intermittent tongue spasms. The patient also reported pressure behind his eye without any vision changes. Magnetic resonance imaging of the head and neck region demonstrated a cystic mass near the left jugular foramen measuring 1·7 cm x 2·7 cm x 3·5 cm (AP x W x CC). The mass was noted to occupy the left superior parapharyngeal space. Computed tomography of the neck revealed cystic and necrotic characteristics. Contrast enhancement allowed for better visualisation, revealing a 3·1 cm x 2·3 cm x 4·6 cm (AP x W x CC) thick-walled lesion extending from the jugular foramen to the C1-C2 level. No metanephrines were noted in a 24-hour urine study. The multidisciplinary tumour board decision was for radiotherapy because of the location of the lesion and the high surgical risk. Both a proton and a photon arc plan were generated. Due to the patient's young age, as well as the clearly superior dosimetric profile of the proton plan, the decision was made to treat the patient with proton beams. The patient was treated with a dose of 5500 cGy (CGE) in 25 fractions, where CGE stands for cobalt Gray equivalent, and had overall good treatment tolerance.
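As a brief aside on the dose arithmetic, the sketch below restates the prescription in per-fraction and physical-dose terms. The proton relative biological effectiveness (RBE) of 1.1 is the conventionally assumed value, not one stated in this report, so the physical-dose figure is an illustrative assumption.

```python
# Cobalt Gray equivalent (CGE) arithmetic for the prescription above.
PROTON_RBE = 1.1                 # conventional proton RBE (assumed, not stated)

prescription_cge = 5500.0        # cGy (CGE), from the case report
fractions = 25

dose_per_fraction = prescription_cge / fractions   # 220 cGy (CGE) per fraction
physical_dose = prescription_cge / PROTON_RBE      # ~5000 cGy physical dose

print(f"{dose_per_fraction:.0f} cGy (CGE) per fraction, "
      f"~{physical_dose:.0f} cGy physical proton dose")
```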
The patient met with the physician team weekly to discuss ongoing side effects. The patient was noted to have grade 2 skin erythema towards the end of his treatment, along with grade 1 dysphagia. The patient also reported that headaches initially became worse during treatment but subsided towards the end. The patient was seen for follow-up at 18 months after completing treatment. The patient denied any residual dysphagia, odynophagia, hoarseness or throat soreness. MRI of the face and neck with and without contrast obtained during that visit was consistent with a significantly smaller mass measuring 1·1 cm x 1·6 cm x 2·7 cm (AP x W x CC). Discussion This young boy's tumour was located near the left jugular foramen. Dose distributions and dose-volume histograms for the photon and proton beams are shown in Figures 1 and 2, respectively. As shown in Table 1, the majority of the right-sided structures received a significantly lower dose than left-sided structures in the proton plan when compared to the photon plan. The right parotid gland received a mean dose of 1·3 cGy (CGE) and a max dose of 8·7 cGy (CGE) with our proton plan. In the photon plan, the right parotid gland received a mean dose of 386·3 cGy and a max dose of 854·7 cGy. Interestingly, the oral tongue received a higher max dose in the proton plan than in the photon plan: 2704·3 cGy (CGE) versus 2195·4 cGy. The mean dose remained lower in the proton plan than in the photon plan, at 209·6 cGy (CGE) versus 676·8 cGy, respectively. This finding may likely be attributed to the location of some portions of the oral tongue in relation to the tumour. This level of physical dose sparing allows for dose escalation within the target if necessary or desired while preserving surrounding structures. In this patient's case, given the benign nature of his condition, it was paramount to limit dose outside of our target as much as possible. A higher maximum dose was noted in the proton plan (5586·1 cGy (CGE)) than in the photon plan (5349·7 cGy) despite identical prescriptions. It is possible that this higher dose was seen either because of the direction of the beams used or because of a less steep dose fall-off, which could be attributed to the range shifter used for this plan.
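The magnitude of the sparing described above can be made concrete by recomputing relative dose reductions from the reported mean doses. The sketch below uses only values quoted in the text; the percentage reductions are derived here, not reported by the authors.

```python
# Mean organ-at-risk doses from this report: photon plan in cGy,
# proton plan in cGy (CGE).
oar_mean_dose = {
    "right (contralateral) parotid": (386.3, 1.3),
    "oral tongue": (676.8, 209.6),
}

for organ, (photon, proton) in oar_mean_dose.items():
    reduction = 100.0 * (photon - proton) / photon
    print(f"{organ}: mean dose {reduction:.1f}% lower with protons")
# right (contralateral) parotid: ~99.7% lower; oral tongue: ~69.0% lower
```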
Paragangliomas are rare tumours that are often described along with pheochromocytomas. These tumours have an estimated annual incidence of 0·8 per 100,000 person years. 8 One of the aspects that makes our case report unique is that most patients are diagnosed with head and neck paragangliomas in their 40s. 9 Studies in other malignancies such as rhabdomyosarcoma have also demonstrated that proton therapy provides adequate target coverage while still decreasing mean integral dose. Ladra et al. conducted a phase II clinical trial revealing that the mean integral dose with IMRT was 1·8 times higher than with proton therapy for head and neck cases (p < 0·01). 10 Current ongoing trials such as DAHANCA 35 (NCT04607694), investigating proton versus photon therapy in head and neck cancers, will provide the medical community with answers regarding proton use in adults. 11 The issue is that trials like DAHANCA 35 often focus on adult patients and exclude paediatric patients. Although important, this trial highlights the lack of higher-level evidence for paediatric patients, and given the rarity of paediatric cancers, it is likely that such a trial would have significant challenges in completing accrual. Ioannis et al. published a case series that included 13 adult patients with head and neck paragangliomas who were treated with either proton or photon radiation between 2004 and 2014. 12 This retrospective study had a follow-up of 52 months, which is not sufficient when monitoring for long-term complications such as secondary malignancies. This study also did not generate proton plans for its patients who were treated with photons in order to compare dose to surrounding tissues. Chowdhury et al. showed in another case series that both treatment modalities, photons and protons, were equivalent in treating head and neck paragangliomas. 13 The median age for this case series was 53, and the median follow-up was 30·9 months. Conclusion Given the rarity of paragangliomas in the paediatric population, it is unlikely that a non-inferiority clinical trial of protons versus photons in this patient population will ever come to fruition. This report serves as a dosimetry comparison for a base of skull paraganglioma in a paediatric patient. It provides the medical community with an objective comparison between two different treatment modalities and provides supporting evidence of physical dose sparing to surrounding organs-at-risk. Dosimetric superiority of protons in the skull base region is largely due to the absence of dose deposition distal to the target, or 'exit dose'. This phenomenon is explained by the distinctive Bragg peak of protons, which allows for a rapid dose fall-off beyond the target. Contralateral structures were significantly spared with the proton plan (Table 1). As previously established, proton beam therapy remains the therapy of choice for paediatric patients given their long-term survival and concerns for secondary malignancy, as well as lower doses to most if not all normal structures of interest.
The Tool for Evaluating Media Portrayals of Suicide (TEMPOS): Development and Application of a Novel Rating Scale to Reduce Suicide Contagion Research suggests that media adherence to suicide reporting recommendations in the aftermath of a highly publicized suicide event can help reduce the risk of imitative behavior, yet there exists no standardized tool for assessing adherence to these standards. The Tool for Evaluating Media Portrayals of Suicide (TEMPOS) allows media professionals, researchers, and suicide prevention experts to assess adherence to the recommendations with a user-friendly, standardized rating scale. An interdisciplinary team of raters constructed operational definitions for three levels of adherence to each of the reporting recommendations and piloted the scale on a sample of articles to assess reliability and clarify scale definitions. TEMPOS was then used to evaluate 220 news articles published during a high-risk period following the suicide deaths of two public figures. Post-hoc analyses of the results demonstrated how data produced by TEMPOS can be used to inform research and public health efforts, and inter-rater reliability analyses revealed substantial agreement across raters and criteria. A novel, wide-reaching, and practical approach to suicide prevention, TEMPOS allows researchers, suicide prevention professionals, and media professionals to study how adherence varies across contexts and can be used to guide future efforts to decrease the risk of media-induced suicide contagion. Introduction Suicide is one of the leading causes of death worldwide, with over 800,000 people dying by suicide annually, more than war, malaria, or breast cancer [1]. Media representations of suicide can influence the contagion of suicidal behavior, particularly in vulnerable populations. Rates of self-harm and suicide attempts have been increasing in recent years [2,3], and a growing body of research has established a link between self-harm and increased media use [4][5][6]. Suicide mortality tends to increase following highly publicized suicide events, a phenomenon known as the Werther Effect [7]. The association between media reporting and suicide contagion is well established as a globally reaching public health concern, documented in over 150 empirical studies and systematic reviews from around the world [8][9][10][11][12]. Newspaper coverage of suicide has been found to be significantly associated with the initiation of suicide clusters [13], and a substantial number of suicide attempt survivors report being affected by a media story about suicide [14,15]. Increases in suicide rates following a highly publicized suicide event tend to be proportionate to the volume, duration, and prominence of the coverage [16], and are greater when the majority of the coverage is sensationalistic or includes details about the suicide method [17,18]. For example, in the three months following the highly publicized suicide of American comedian Robin Williams in 2014, there were 16% more suicides than expected; moreover, the greatest increases were seen in deaths by asphyxiation (the method used by Williams) and in males over 30, suggesting an imitative effect [19]. Despite these findings, media can also play an important role in suicide prevention [18,20].
When media outlets minimize the inclusion of certain kinds of harmful information (e.g., information about how the suicide was completed), the risk of imitative behavior decreases [18,20]. For example, a campaign by suicide prevention experts in Austria to implement guidelines for reporting on railway suicides in Vienna in the 1980s led to a reduction in the volume of coverage and, in turn, an 84% reduction in suicides [21]. More recent research demonstrates that media reports that portray suicide as a preventable outcome and disseminate resources and information about suicide prevention can help decrease suicide rates, a phenomenon known as the Papageno Effect [20,22]. In order to help media professionals decrease the risk of imitative suicide and instead promote a Papageno Effect, the World Health Organization published a set of recommendations for safely reporting on suicide [23]. These recommendations, which draw upon decades of research on suicide contagion and were developed with input from leading experts in the fields of suicide prevention, journalism, and public health, are continually expanded and adapted to reflect current empirical understanding. In addition to the WHO recommendations, suicide prevention professionals around the world have created and disseminated similar sets of recommendations, such as the Recommendations for Reporting on Suicide, developed in the United States [24], the Mindset guidelines from Canada [25], and the Mindframe guidelines from Australia [26]. Despite the existence of many different sets of recommendations, their content is consistent: Media professionals are advised to avoid sensationalizing or glamorizing the person who died, to avoid including explicit details about the death or suicide method, and to include prevention resources for those who may be struggling and at risk of suicide. Adherence to such guidelines is associated with a reduction in suicide rates [27], decreased use of highly lethal suicide methods [28], and increased utilization of support resources [20]. Adherence to these guidelines is especially vital during surges of suicide-related coverage, such as when a high-profile figure dies by suicide. Famed American fashion designer Kate Spade died by suicide on 5 June 2018, leading to a spike in suicide-related news coverage in the United States. Celebrity chef and TV personality Anthony Bourdain took his own life just three days later, on 8 June 2018. Celebrity suicides that occur in such close succession are exceedingly rare but provide a valuable opportunity to study how media outlets cover suicide. However, there currently exists no standardized method for measuring adherence levels, making it difficult to compare results across studies or understand how adherence varies across contexts. Many studies examining adherence use a binary rating metric, noting the presence or absence of each recommended reporting practice [29][30][31]. Although simple to use, binary rating systems fail to account for the fact that degrees of adherence to a particular reporting recommendation may have differential impacts on the audience. For example, a newspaper article that provides graphic details about a suicide method may be significantly more harmful than an article that mentions the method in passing; however, under a binary rating system, these two articles would both be coded the same way.
In order to capture more nuance in reporting, some researchers have utilized traditional content analysis methods [13,17,32,33], while others have developed their own rating methods tailored specifically to their research aims [34,35]. Although these more complex approaches succeed in capturing more nuance, they tend to be difficult and time-consuming to execute. Consequently, they are unlikely to be widely adopted by suicide prevention programs or media professionals in need of evaluation and monitoring tools. The lack of a standardized, user-friendly rating system also poses a challenge to suicide prevention programs aiming to monitor and evaluate their progress in working with the media to increase adherence to the reporting recommendations. For example, in response to the clusters of youth suicides in 2009 and 2014, the Suicide Prevention Program of Santa Clara County in Northern California has been working with local media to improve suicide-related reporting since 2011. A 2016 study conducted by the Centers for Disease Control (CDC) in response to these clusters found that among the 246 media reports analyzed, only 17% included any sort of suicide prevention resource. On average, each media report contained 4.3 potentially harmful characteristics, compared to an average of only 0.5 protective characteristics [36]. After the study ended, however, program staff had no standardized method to assess the progress of their work with the media over time, to identify targeted areas for further improvement, or to provide quantified feedback to media partners about their adherence. Program leadership identified the need for the development of a new standardized assessment tool that would allow suicide prevention programs to track changes in media adherence over time and identify targeted areas for improvement. To address these gaps, the County of Santa Clara's Suicide Prevention Program and the Stanford Department of Psychiatry and Behavioral Sciences collaborated to develop the Tool for Evaluating Media Portrayals of Suicide (TEMPOS). The primary aim of this work is to develop a novel, user-friendly, and non-binary rating tool that can be used by members of the media (i.e., journalists, editors), suicide prevention professionals, and researchers. The second aim is to illustrate how TEMPOS may be used to monitor and evaluate media coverage. The present study describes the development of TEMPOS and its subsequent application to a dataset of 220 suicide-related news articles collected during a surge of suicide-related coverage. Through the process of applying the scale, tool characteristics and reliability were further explored. Scale Development TEMPOS was developed by an interdisciplinary team of researchers with backgrounds in psychology, psychiatry, public health, media, and community mental health. TEMPOS consists of ten criteria (Table 1), which were derived from the American version of the suicide reporting recommendations, the Recommendations for Reporting on Suicide [19]. Rather than utilizing a binary scoring system (adherence/non-adherence), TEMPOS utilizes a three-point rating scale to capture more complexity without being onerous. When using TEMPOS, a rating of 2 indicates full adherence to the guideline, a rating of 1 indicates partial adherence, and a rating of 0 indicates non-adherence. 
In order to delineate what qualifies as full, partial, or non-adherence, operational definitions were constructed for each rating level (three for each of the 10 criteria, and 30 definitions in total). Wherever possible, definitions were constructed using language drawn directly from reportingonsuicide.org (accessed on 28 January 2022), in order to maximize alignment with the recommendations. The interdisciplinary TEMPOS team followed a four-step scale development process (Figure 1). First, following initial construction of the 30 rating choices, the team discussed each operational definition and revised any wording that was deemed unclear or ambiguous. Second, the 3-point rating system was pilot-tested with a subsample of 5 suicide-related articles drawn from a larger dataset of 220. Four raters independently rated these articles in order to assess inter-rater reliability and identify any scale definitions that were too vague or difficult to apply. Third, the team reviewed the ratings and worked together to refine the scale definitions in response to common points of confusion and disagreement that arose during the pilot coding process. The fourth step in scale development intended to strengthen the validity of the scale through an external review process with experts on suicide contagion and media-influenced harm.
We invited five external reviewers, each an expert in suicide contagion and media-influenced harm, to provide feedback and critique of TEMPOS. Each external reviewer was sent a draft of the scale and asked to provide comments on the structure of the scale, as well as the wording and validity of the constructs. Based on the feedback from these external reviewers, the team revised the scale and completed one final round of test coding on a subsample of 10 articles in order to assess inter-rater reliability prior to applying the scale to the full dataset of 220 articles. Suicide News Media Dataset The suicide deaths of Kate Spade and Anthony Bourdain in early June of 2018 triggered a surge in suicide-related coverage, providing a natural opportunity to study how regional and national news outlets cover suicide. Over the course of a month, the County of Santa Clara's Suicide Prevention Program compiled a dataset of suicide-related news articles published in the United States. Articles were obtained using Google Alerts and manual searches of the keywords "suicide", "suicide prevention", "mental health", "mental illness", and "self-harm". Letters to the editor, articles from publications focused on gossip (e.g., TMZ), non-English articles, obituaries, and articles covering murder-suicides were excluded. Application of Scale To illustrate how TEMPOS may be used to monitor and evaluate media coverage, the first author applied all ten TEMPOS criteria to each of the 220 articles in the dataset. In order to assess the inter-rater reliability of the scale, each article was also independently rated by one of five secondary raters. Following the completion of the coding process, raters met to discuss and resolve discrepancies between the two sets of ratings in order to examine the inter-rater reliability of TEMPOS when applied to a large sample of media and produce a final set of ratings. In addition to rating each article for adherence to each of the ten criteria, researchers also calculated an overall TEMPOS score by dividing the total number of points scored by the total number of points possible. If any criteria were rated as "not applicable", the total number of points possible was adjusted (20 total minus 2 points for each criterion that was rated "not applicable"). For ease of interpretation, scores were converted to percentages, with 0% indicating total non-adherence to the reporting recommendations, and 100% indicating full adherence.
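A minimal sketch of the overall-score calculation just described, assuming `None` encodes a "not applicable" rating (that encoding is an illustrative choice; the scoring rule itself is as stated above):

```python
def tempos_score(ratings):
    """Overall TEMPOS score as a percentage.

    ratings: ten values in {0, 1, 2}, or None for 'not applicable'.
    """
    applicable = [r for r in ratings if r is not None]
    possible = 2 * len(applicable)  # 20 minus 2 per 'not applicable' criterion
    return 100.0 * sum(applicable) / possible

# Example: nine criteria rated, one marked not applicable.
print(f"{tempos_score([2, 2, 1, 0, 2, 1, 2, 2, 1, None]):.1f}%")  # 72.2%
```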
Characteristics of the Dataset In total, 226 articles were collected from several media outlets covering national and local Bay Area news, including broadcast networks, online magazines and newspapers, and blogs that had readership of at least 1000 people. By the time the scale was fully developed, six articles from the dataset were no longer available, leaving a total of 220 articles. As expected, there was a surge in suicide-related coverage immediately following the death of Kate Spade, and coverage peaked three days later following the death of Anthony Bourdain (Figure 2). Article characteristics are presented in Table 2. Inter-Rater Reliability Inter-rater reliability was calculated by identifying the number of agreements between the two sets of ratings for each article and calculating the overall percentage of agreement. Across all raters and criteria, pure inter-rater agreement was 81.31%. To adjust for chance agreements, we calculated Cohen's Kappa (κ) for each criterion [37]. Since the κ statistic depends on marginal values to calculate chance agreement, low prevalence of a variable can produce lower κ values. Accordingly, reporting characteristics that displayed low variability (for example, very few articles in our dataset contained content that glamorized suicide) displayed significantly lower κ values. Therefore, in addition to Cohen's κ, percentage agreement is also presented for each criterion (Table 3). The average κ value across all criteria was 0.62; a κ value between 0.6 and 0.8 is generally understood to indicate substantial agreement among raters [37].
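These agreement statistics can be reproduced with a short sketch. The two rating vectors below are invented for illustration; the formulas for percent agreement and Cohen's κ are standard, and the κ computation makes visible why low-variability criteria depress κ: chance agreement is driven by each rater's marginal rating frequencies.

```python
from collections import Counter

def percent_agreement(r1, r2):
    return 100.0 * sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement from the raters' marginal frequencies.
    p_chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

rater_a = [2, 2, 1, 0, 2, 1, 2, 2, 0, 2]   # invented example ratings
rater_b = [2, 2, 1, 1, 2, 1, 2, 0, 0, 2]
print(percent_agreement(rater_a, rater_b))       # 80.0
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.67
```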
Analysis of TEMPOS Scores We performed a series of exploratory analyses to understand overall levels of adherence, as well as how adherence levels varied between publications, across criteria, and over time. Overall TEMPOS percentage scores ranged from 5% to 100% (M = 74.7%, MDN = 75.0%, and SD = 18.2%). The distribution of overall scores was negatively skewed, as illustrated in Figure 3. Adherence levels varied significantly by criterion (Figure 4). The criterion that displayed the lowest mean levels of adherence was "suicide prevention and mental health resources" (M = 0.96, SE = 0.05), which aligns with past findings that media reports of suicide often fail to provide information and resources that could help those who may be struggling [29,30,38]. The criterion that displayed the highest mean levels of adherence was "glamorization of suicide" (M = 1.79, SE = 0.03), suggesting that very few media outlets portray suicide in a positive manner. We then examined how TEMPOS scores varied across and within publications (Figure 5). The average TEMPOS scores of each publication ranged from 4% to 96.9% (M = 76.6%, SD = 14.6%). Lastly, we examined whether overall adherence to the guidelines changed over the 1-month study period (Figure 6). An independent-samples t-test was conducted to compare TEMPOS scores on the day of Kate Spade's death and the day of Anthony Bourdain's death. Reporting on Bourdain's death (M = 79.7%, SD = 13.9%) was significantly more adherent than reporting on Spade's death (M = 62.3%, SD = 26.3%); t(23) = −2.82, p = 0.01.
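A sketch of the comparison just reported, using SciPy's independent-samples t-test. The per-article score lists below are placeholders, since the article-level data are not reproduced in the text; on the study's data, this test gave t(23) = −2.82, p = 0.01.

```python
from scipy import stats

# Placeholder day-of-death TEMPOS scores (percent); not the study's raw data.
spade_day_scores = [35.0, 40.0, 55.0, 62.5, 70.0, 75.0, 80.0, 80.0]
bourdain_day_scores = [65.0, 70.0, 75.0, 80.0, 82.5, 85.0, 90.0, 95.0]

t_stat, p_value = stats.ttest_ind(spade_day_scores, bourdain_day_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```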
Discussion This paper describes the development and application of a novel, user-friendly, non-binary rating system to assess media adherence to suicide reporting recommendations. While many studies have examined the relationship between adherence to reporting recommendations and suicide rates, disparate measurement approaches make it difficult to draw meaningful comparisons across studies. To address these gaps, the current study explains the scale development process for the Tool for Evaluating Media Portrayals of Suicide (TEMPOS), as well as its application to a dataset of 220 media reports collected during a surge of suicide-related coverage. Results demonstrate the scale's reliability, validity, and its utility as a tool for researchers, journalists, and public health professionals engaged in suicide prevention. The application of TEMPOS yielded high inter-rater agreement among coders. Consistent with prior research [32,34], rater agreement levels varied with the degree of subjective interpretation required by raters. More concrete criteria (i.e., details about the suicide method) displayed higher levels of inter-rater reliability than more abstract criteria (i.e., glamorization of suicide), which may require more subjective judgment from the raters. The application of TEMPOS to a dataset of suicide-related media articles from June 2018 demonstrated that adherence increased slightly following the death of Kate Spade, which aligns with previous findings that media reporting on Bourdain's death was more guideline-adherent [38]. At the time of these highly publicized deaths, many media outlets received public criticism for the inappropriate ways that they reported on Spade's death [38], which may have alerted media outlets to the importance of adhering to the reporting recommendations. TEMPOS provides a method for monitoring trends in reporting adherence over time, which may help shed light on the links among public suicides, media adherence levels, and suicide rates. This is a vital step in developing prevention strategies for combatting suicide contagion and protecting public health. A critical contribution from this study is an illustration of how TEMPOS can be used by a range of constituents involved in the prevention of suicide. Professionals engaged in regional or broad-reaching prevention programs can use TEMPOS to better understand how media adherence to suicide reporting recommendations varies among criteria, across and within publications, and over time. TEMPOS can also be used to identify which reporting recommendations are commonly violated and which practices have already been widely adopted. For example, our analyses revealed that very few articles in the dataset featured suicide prevention and mental health resources, which aligns with previous research [29,30,38]. Systematic ratings from TEMPOS can yield powerful datapoints that allow suicide prevention professionals to develop more targeted and efficient interventions focused on improving adherence to the most commonly ignored recommendations, or media outlets in particular need of further training. TEMPOS can also be directly used by editors and publication leadership to determine if there are specific sections or reporters within their organizations that are in need of training. Importantly, TEMPOS also makes it possible to examine the impact of such trainings by providing a standardized way of measuring adherence before and after the training is administered.
Taken together, the development, testing, and application of this novel rating scale for assessing adherence to suicide reporting recommendations provides a promising springboard for a diverse set of constituents to make strides in evidence-based suicide prevention efforts. Limitations & Future Directions Several limitations should be noted. First, the dataset may not be fully representative of all news articles published in June 2018 because the article collection process relied on publicly accessible media. In addition, social media and articles shared via social media were not included in the dataset; however, social media is among the most common sources of news media for young people ages 18-29, and 48% of adults report receiving news from social media [39]. In addition to sharing and consuming news media, many people use social media to express personal experiences with mental health and suicide, which can further contribute to the spread of suicidal behavior [40]. Consequently, adapting TEMPOS to be suitable for assessing social media content is an especially important direction for future work. One promising example is the #Chatsafe Project in Australia, which introduced a set of evidence-based guidelines aimed at helping young people communicate about suicide safely online [41]. Future work can draw upon the #Chatsafe project and other research on the relationship between social media and suicide contagion to adapt TEMPOS for use with social media content. Second, applying TEMPOS to every type of news article about suicide proved difficult. While the scale was easily applied to reports on individual suicide deaths, assessing articles that addressed suicide more broadly (e.g., articles discussing suicide trends, the general topic of suicide, or other related issues) was more challenging. Although efforts were made to expand the functionality of the tool (e.g., adding a "not applicable" option for some criteria), further work is needed to optimize the scale for application across a wide range of media types. Third, under the current scoring system, all criteria are considered equal in significance when calculating a total score. While all ten criteria are important aspects of responsible reporting, failure to adhere to certain guidelines may result in higher risk for imitative behavior than others. For example, sharing specific details of the suicide method and location may be more likely to elicit copycat suicide attempts than the use of stigmatizing language [18]. Further development of the rating scale might consider assigning weights to criteria based on specific elements of reporting that have been identified as being most harmful. Furthermore, elements such as article size, tone, and ratio of positive to negative content are potentially important factors underlying the risk of imitative behavior [20,32,34]. Future iterations of TEMPOS may choose to leverage text analysis software programs such as Linguistic Inquiry & Word Count (LIWC) [42] to automatically calculate such variables and factor them into the overall TEMPOS score. Finally, although TEMPOS offers a relatively efficient approach to measuring adherence, newsrooms often operate on extremely tight timelines. A primary aim of this work was to create a rating system that was complex enough to capture more nuance than a binary rating system, yet still accessible enough to be used by media professionals who are unfamiliar with the topic of suicide contagion. 
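As a concrete illustration of the automated text-analysis direction raised above, the sketch below flags one easily detectable TEMPOS criterion, whether an article mentions a crisis resource, using keyword matching. This is a toy heuristic and not part of TEMPOS: the pattern list is an assumption, and production use would require far more robust natural language processing.

```python
import re

# Hypothetical patterns suggesting a prevention resource is mentioned.
RESOURCE_PATTERNS = [
    r"suicide\s+prevention\s+lifeline",
    r"crisis\s+(?:line|text\s+line)",
    r"1-800-273-8255",  # National Suicide Prevention Lifeline number
]

def mentions_prevention_resource(article_text: str) -> bool:
    text = article_text.lower()
    return any(re.search(p, text) for p in RESOURCE_PATTERNS)

print(mentions_prevention_resource(
    "If you are struggling, call the Suicide Prevention Lifeline."
))  # True
```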
Ideally, media professionals would be able to use TEMPOS as a 'self-check' tool to assess the adherence levels of their content prior to publishing, and thereby reduce the risk of releasing harmful content. However, applying the scale prior to publication may not be realistic or feasible in the context of breaking news stories (e.g., the suicide of a high-profile figure). One way to further increase the practical utility of TEMPOS would be to use artificial intelligence to automate evaluation of adherence to the reporting recommendations. Recent advances in machine learning and natural language processing may present an opportunity for TEMPOS to become partially or fully automated, which would allow for a wider range of applications. TEMPOS could also be used to intervene "upstream" by increasing reporters' awareness of the reporting recommendations before scenarios in which they would need to apply them. For example, schools of journalism and publication companies could incorporate TEMPOS into their curricula, standards of practice, and professional training.

Conclusions

The application of the Tool for Evaluating Media Portrayals of Suicide (TEMPOS) has the potential to dramatically change how suicide is discussed and ultimately perceived. TEMPOS is a novel, user-friendly, and reliable tool for assessing adherence to suicide reporting recommendations that can be used by researchers, suicide prevention professionals, and media professionals alike. A key strength of TEMPOS is that it acknowledges the nuances in communication around suicide and the complexity of suicidal behaviors. In a departure from other rating scales, which typically employ binary measures of adherence, TEMPOS's three-point scale allows raters to capture more nuance in adherence to reporting recommendations. As illustrated in this study, TEMPOS makes it possible to examine how media adherence to suicide reporting recommendations varies among criteria, across and within publications, and over time. Suicide prevention professionals seeking to work with media outlets to increase adherence can use TEMPOS to develop more targeted programming, both preemptively through education and training and during more urgent periods, such as heightened-risk periods following surges of suicide-related coverage. Internally, media organizations can employ TEMPOS as an ongoing tool for self-assessment and monitoring. Ultimately, TEMPOS provides a platform for promoting widespread awareness of how reporting on suicide impacts individuals and communities, potentially leading to reduced stigma and improved visibility of a looming public health issue that is not commonly discussed.
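As a concrete, hypothetical illustration of the scoring scheme discussed in the limitations above (the current equal-weight total, the "not applicable" option, and the proposed criterion weights), here is a minimal Python sketch; the criterion names and 0-2 point values are assumptions for illustration, not the published instrument:

    # Assumed 3-point coding; TEMPOS's actual labels/values may differ.
    RATING = {"adherent": 2, "partially adherent": 1, "non-adherent": 0}

    def tempos_total(ratings, weights=None):
        """Sum ratings over applicable criteria; weights default to 1.0
        (the current equal-weight scoring system)."""
        weights = weights or {}
        return sum(weights.get(c, 1.0) * RATING[r]
                   for c, r in ratings.items() if r is not None)

    def tempos_proportion(ratings, weights=None):
        """Weighted score as a fraction of the weighted maximum over
        applicable criteria, so up-weighting a violated high-risk
        criterion visibly lowers overall adherence."""
        weights = weights or {}
        scored = {c: r for c, r in ratings.items() if r is not None}
        top = sum(weights.get(c, 1.0) * 2 for c in scored)
        return tempos_total(scored, weights) / top

    article = {
        "method details": "non-adherent",
        "prevention resources": "adherent",
        "glamorization": "partially adherent",
        "stigmatizing language": "adherent",
        "location details": None,  # "not applicable"
    }

    print(tempos_proportion(article))                           # 5/8 = 0.625
    print(tempos_proportion(article, {"method details": 2.0}))  # 5/10 = 0.5

The second call shows the effect proposed in the limitations: weighting a high-risk criterion (here, method details) makes a violation of it cost more in the overall adherence level.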
High Streptococcus pneumoniae colonization prevalence among HIV-infected Kenyan parents in the year before pneumococcal conjugate vaccine introduction

Background: Streptococcus pneumoniae is a leading cause of pneumonia, meningitis and sepsis in developing countries, particularly among children and HIV-infected persons. Pneumococcal oropharyngeal (OP) or nasopharyngeal (NP) colonization is a precursor to development of invasive disease. New conjugate vaccines hold promise for reducing colonization and disease.

Methods: Prior to introduction of 10-valent pneumococcal conjugate vaccine (PCV10), we conducted a cross-sectional survey among HIV-infected parents of children <5 years old in rural Kenya. Other parents living with an HIV-infected adult were also enrolled. After broth enrichment, NP and OP swabs were cultured for pneumococcus. Serotypes were identified by Quellung. Antimicrobial susceptibility testing was performed using broth microdilution.

Results: We enrolled 973 parents; 549 (56.4%) were HIV-infected, 153 (15.7%) were HIV-uninfected and 271 (27.9%) had unknown HIV status. Among HIV-infected parents, the median age was 32 years (range 15-74) and 374/549 (68%) were mothers. Pneumococci were isolated from 237/549 (43.2%) HIV-infected parents and 41/153 (26.8%) HIV-uninfected parents (p = 0.0003). Colonization with PCV10 serotypes was not significantly more frequent in HIV-infected (12.9%) than HIV-uninfected parents (11.8%; p = 0.70). Among HIV-infected parents, a cooking site separate from the sleeping area and a CD4 count >250 were protective (OR = 0.6; 95% CI 0.4, 0.9 and OR = 0.5; 95% CI 0.2, 0.9, respectively); other associations were not identified. Among 309 isolates tested from all parents, 255 (80.4%) were penicillin non-susceptible (MIC ≥0.12 μg/ml).

Conclusions: Prevalence of pneumococcal colonization is high among HIV-infected parents in rural Kenya. If young children are the pneumococcal reservoir for this population, PCV10 introduction may reduce vaccine-type colonization and disease among HIV-infected parents through indirect protection.

Background

Streptococcus pneumoniae (pneumococcus) is a leading cause of pneumonia, meningitis and sepsis among children and adults in developing countries [1,2]. Pneumococci colonize the upper respiratory tract, a precursor state to pneumonia and invasive pneumococcal disease (IPD). Person-to-person transmission through respiratory secretions is common, particularly within families and other groups in which people are in close contact [3-5]. Groups at highest risk for IPD after pneumococcal acquisition include young children, the elderly and immuno-compromised individuals such as those infected with Human Immunodeficiency Virus (HIV). Persons infected with HIV are approximately 25- to 50-fold more likely to develop IPD than are HIV-uninfected persons [6-8]. While contact with young children is a well-established risk factor for both pneumococcal colonization and disease among all adults [9,10], it is a particularly important determinant of pneumococcal colonization and disease among HIV-infected adults [11]. HIV-infected adults are more likely to have disease caused by serotypes associated with invasive disease in children (e.g. 6B, 9V, 14, 19F, 23F) than HIV-uninfected adults [12], and HIV-infected mothers are more likely to have disease caused by pediatric serotypes than HIV-infected fathers [13,14]. The introduction of the 7-valent pneumococcal conjugate vaccine (PCV7) among U.S.
children in 2000 led to a 94% decrease in vaccine-type IPD among all children <5 years from 1998-2003 [15]. It also led to decreases in IPD among adults, including a 25% decline in all-serotype IPD incidence among HIV-infected adults living with AIDS [7,10]. This indirect or "herd" effect in unvaccinated persons occurred because vaccinated children were less likely to be colonized with PCV7 serotypes and were therefore less likely to transmit them to unvaccinated persons [16,17]. It is unknown whether the substantial indirect vaccine effect seen among high-risk groups in the U.S. will also be seen in African countries with high HIV seroprevalence.

In January 2011, Kenya became the third African country to introduce pneumococcal conjugate vaccine into its national immunization program, and the first to utilize the 10-valent formulation (PCV10), which expanded coverage to include the PCV7 serotypes (4, 6B, 9V, 14, 18C, 19F, 23F) plus serotypes 1, 5, and 7F. To better understand the changes in pneumococcal ecology among high-risk groups before and after introduction of PCV10, we designed a study of pneumococcal colonization among HIV-infected parents of young children in a high HIV prevalence area of western Kenya. In this report, we describe the baseline prevalence of pneumococcal colonization, including serotype distribution, antibiotic resistance, and risk factors for colonization.

Study setting

This study utilized two ongoing surveillance systems in Asembo, western Kenya, to select and enroll participants: the western Kenya Health and Demographic Surveillance System (HDSS) and the Population-Based Infectious Disease Surveillance (PBIDS) program. Both were established through collaboration between the Kenya Medical Research Institute (KEMRI) and the Centers for Disease Control and Prevention (CDC) [18,19]. The Asembo area is mostly poor and is located in a rural province with one of the highest HIV prevalence rates in Kenya. HDSS collects demographic data on the population, including health status, socioeconomic status, and education. In 2007, 14.9% of adults ages 15-64 years in Asembo were HIV-infected [20]. Since 2005, residents of 33 HDSS villages within the Asembo HDSS area have also been enrolled in PBIDS, which measures morbidity in the community and at the hospital [18]. Home-based counseling and testing (HBCT) for HIV occurred throughout the Asembo area in 2008-2009; these data were linkable to HDSS data and were used to identify HIV-infected persons for this study [20]. HIV status was determined using two parallel HIV rapid tests, as previously described [20]. Participants provided written consent for any testing performed, and for linkage of HIV results to their HDSS and PBIDS records.

Cross-sectional survey

We performed a cross-sectional survey among HDSS residents who were parents of a child <5 years of age and resided within the 33 PBIDS villages and 13 additional adjacent HDSS villages in the Asembo area. We used HDSS records to identify living compounds where at least one HIV-infected parent of a child under 5 years of age resided. To maintain confidentiality regarding HIV testing status, HDSS village reporters approached each selected compound and invited all parents of children under 5 years of age (regardless of HIV status) residing there to participate in the study. Interested persons were referred to St. Elizabeth's Mission Hospital clinic for enrollment.
Upon enrollment, data on household characteristics, recent respiratory illness, smoke exposure, cooking practices, and antibiotic usage were collected. Additional demographic data and HIV indicators (HIV status, CD4 counts, use of highly active antiretroviral therapy [HAART], and attendance at an HIV clinic) were obtained through the HDSS and PBIDS databases. For these data, we attempted to obtain the most recent information reported prior to sample collection. The survey was conducted during October 29-December 23, 2009.

Laboratory methods and definitions

Polyester-tipped swabs were swept over the posterior oropharynx (OP) and tonsils, and calcium alginate swabs were inserted into the posterior nasopharynx (NP) and rotated 360°, as previously described [21]. Both NP and OP swabs were collected from each participant. Swabs were immediately placed in separate vials containing skim milk-tryptone-glucose-glycerol (STGG) transport medium and placed in a cool box as per World Health Organization consensus methods [22]. Within 8 h, specimens were vortexed and placed in a liquid nitrogen container. The next morning, specimens were transported approximately 50 km to the KEMRI/CDC laboratory and stored at −70°C.

Pneumococcal isolation was conducted at the KEMRI/CDC laboratory in Kisumu, Kenya by adding 200 μl of NP-STGG and 200 μl of OP-STGG from each individual to an enrichment broth, following methods previously described [23]. Any alpha-hemolytic colony potentially identifiable as S. pneumoniae was subjected to optochin susceptibility and bile solubility testing [24]. In cases where more than one potential pneumococcal colony type was identified per plate, representatives of each colony type were subjected to testing. Pneumococcal isolates were then transported on dry ice to the CDC laboratory in Atlanta, GA for serotyping. Serotypes were determined by latex agglutination and Quellung reaction testing. Antimicrobial susceptibility testing for commonly used antibiotics was performed at the KEMRI/CDC or CDC-Atlanta laboratories by broth microdilution (Trek Diagnostics, Cleveland, OH) according to the manufacturer's instructions. Susceptibility was determined using Clinical and Laboratory Standards Institute (CLSI) criteria for minimum inhibitory concentration (MIC) from 2012 for non-beta-lactams and from 2007 for penicillin (≥0.12 μg/ml), which we felt was most biologically relevant for carriage, where reduced susceptibility may provide a selective advantage. Intermediate and resistant isolates were designated as "non-susceptible".

We categorized pneumococcal colonization by serotypes present in either the PCV10 (serotypes 1, 4, 5, 6B, 7F, 9V, 14, 18C, 19F, 23F) or the 13-valent PCV (PCV10 serotypes plus serotypes 3, 6A and 19A) vaccine formulations. When multiple pneumococcal serotypes were identified from a specimen, participants were classified as colonized by a vaccine serotype if at least one serotype was included in the vaccine. When more than one isolate was detected, the isolate with the highest MIC was used in reporting of antimicrobial resistance.

Data management and analysis

Analyses were performed using SAS software (version 9.3; SAS Institute). We categorized participants by HIV status (HIV-infected, HIV-uninfected, or HIV-unknown) and used data on CD4 counts, history of use of HAART, and last HIV-clinic attendance, when available. We defined an 'isolate' as a pneumococcal strain of a particular serotype from a participant.
For example, if two colonies were selected from a plate and had the same serotype, this was considered one isolate; colonies of two different serotypes were considered two isolates. We calculated a serotype diversity index (SDI) by dividing the number of serotypes detected by the total number of isolates, such that the maximum diversity score would be 1.00 and the minimum would approach 0. We performed univariable and multivariable logistic regression to assess the association of various risk factors with colonization among participants, accounting for compound as a repeated measure. Interactions with antibiotic usage, age, and gender were explored. Odds ratios with 95% confidence intervals were calculated.

Ethical considerations

This study was approved by both the KEMRI and CDC ethical committees. All participants gave written informed consent.

Results

We identified 772 HIV-infected parents of children <5 years old living in 436 compounds in the HDSS database. Of these, 549 (71.0%) were enrolled in the study. The primary reason for non-enrollment was outmigration from the HDSS area. Persons enrolled did not differ significantly from those not enrolled by gender, although non-responders were slightly younger (median age 28 years, compared to 32 years among responders; p < 0.0001). An additional 424 parents (153 HIV-uninfected and 271 of unknown HIV status) living with an HIV-infected parent also participated in the study.

The age range among enrolled HIV-infected parents was 15-74 years, and 68% were mothers (Table 1). The median number of children <5 years old living in each household was 1 (range 1-4). A total of 141 (25.7%) reported a current cough, and 74 (13.5%) reported a fever within the previous 24 h. A large proportion reported antimicrobial usage on the day of swabbing (n = 169, 30.8%), the majority of which included cotrimoxazole (161 of 169, 95.2%). Among 200 parents for whom details on HIV indicators were available, 106 (53.8%) were enrolled at an HIV care and treatment clinic, 23 (11.6%) were on HAART, and 44 (22.0%) had CD4 counts <250.

On multivariable analysis, a high CD4 count was associated with a significant decrease in colonization among HIV-infected persons (Table 3). HIV-infected parents with a CD4 count ≥250 were less likely to be colonized than those with lower CD4 counts (OR = 0.5; 95% CI 0.2, 0.9). We also observed a protective effect of a cooking location separate from the sleeping location (OR = 0.6; 95% CI 0.4, 0.9). We did not observe a significant association between colonization and smoking tobacco, the number of children <5 years old in the home, the number of children attending school, the number of people living in the compound, wealth quintile, type of cooking fuel used, or self-reported fever within 24 h of sample collection. HIV-infected parents were 2.1 times more likely to be colonized than HIV-uninfected parents (95% CI 1.44, 3.03; data not shown), controlling for other factors. We did not see any other differences in colonization risk factors by HIV status. Interactions with gender or amoxicillin use (current, within 7 days, or within 30 days of sample collection) were also not observed. Although age less than 40 years was protective in univariable analyses (OR = 0.6; 95% CI 0.4, 0.9), this did not hold true in multivariable models.
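To make the isolate-level definitions above concrete, here is a minimal Python sketch (hypothetical data; the serotype sets follow the vaccine formulations listed in the Methods, and the MIC breakpoint is the carriage-relevant penicillin cutoff used in the study):

    PCV10 = {"1", "4", "5", "6B", "7F", "9V", "14", "18C", "19F", "23F"}
    PCV13 = PCV10 | {"3", "6A", "19A"}

    def serotype_diversity_index(isolates):
        """Distinct serotypes / total isolates: 1.00 = maximal diversity,
        values near 0 = low diversity (as defined in the Methods)."""
        return len(set(isolates)) / len(isolates)

    def vaccine_type_colonized(serotypes, vaccine=PCV10):
        """A participant counts as colonized by a vaccine serotype if at
        least one detected serotype is included in the formulation."""
        return any(s in vaccine for s in serotypes)

    def penicillin_nonsusceptible(mic_ug_ml):
        """Non-susceptible under the study's breakpoint: MIC >= 0.12 ug/ml."""
        return mic_ug_ml >= 0.12

    isolates = ["3", "19F", "3", "16F", "23F", "19F", "6B"]  # hypothetical
    print(serotype_diversity_index(isolates))                # 5/7 ~ 0.71
    print(vaccine_type_colonized(["3", "16F"]))              # False under PCV10
    print(vaccine_type_colonized(["3", "16F"], PCV13))       # True: serotype 3 is in PCV13
    print(penicillin_nonsusceptible(0.25))                   # True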
PCV introduction in Africa has accelerated in recent years, yet few published data document the baseline prevalence of pneumococcal disease or colonization among groups not targeted to receive vaccine, particularly adults with HIV infection. The 43.2% pneumococcal colonization rate that we observed among HIV-infected adults is similar to that found in a previous study conducted in Kenya among HIV clinic attendees (34.6%) [25], but higher than in studies conducted among HIV clinic attendees in Uganda (18.0%), mineworkers in South Africa (8.8%), and mothers of young infants in both South Africa (20.2%) and Zambia (11.4%) [26-29]. The overall colonization rate of 38.6% observed among our participants is also higher than those reported among adults with unspecified HIV status in Nigeria and the Gambia, where 26% of adults >18 years of age and 21% of mothers of 12-month-old infants were colonized, respectively [5,30]. These variations may be partially explained by the broth enrichment step used in this study, which was not done in the other studies from Africa. We used broth enrichment before plating because it has been shown to increase recovery of pneumococci from respiratory specimens [23]. In addition, all participants in our study were parents of young children who live together in the same compound, which may result in frequent transmission of pneumococci. Other unspecified differences in the populations studied in other African pneumococcal colonization studies (e.g. recent illness or tobacco smoke exposure) may also have contributed to the differences observed in colonization rates. The high colonization rate among our participants with unknown HIV status (35.8%) suggests that undocumented HIV infection may have been common among adults in this group. Two groups known to be at higher risk of HIV are those who refused HIV testing and those who were recent in-migrants to the study area [31].

In this study, HIV-infected parents were over two times more likely to be colonized with pneumococci than HIV-uninfected parents living in compounds with an HIV-infected parent, when adjusting for other risk factors. Because the mechanism of mucosal protection against pneumococcal colonization is T-cell dependent, HIV-infected persons with low CD4 counts may be at highest risk for colonization and consequently for invasive disease. CD4 counts less than 350 have been associated with increased risk of invasive disease in some studies [6]; however, the link to colonization is less well established [28,29,32]. No differences in colonization prevalence by CD4 count were previously found among HIV-infected adults in studies conducted in Kenya, Brazil, and South Africa [25,29,33]. We found HIV-infected parents with counts ≥250 to be somewhat protected from pneumococcal colonization. Although our data were limited, we did not see a protective effect of HAART usage or attendance at an HIV clinic among HIV-infected parents, which has been described before [33]. Besides HIV infection, we also observed a significant association between pneumococcal colonization and cooking location: persons who described cooking in an area separate from their sleeping area were less likely to be colonized. Exposure to indoor air pollution has been linked to adverse health outcomes, including pneumonia, in the developing world [34]; however, the relationship with pneumococcal colonization is less well established [35,36].
We did not observe an association between colonization and other established risk factors for pneumococcal carriage, including age, tobacco smoke exposure, or the number of children <5 years in the home [37,38].

We detected 41 different serotypes in this rural Kenyan population, with a greater degree of diversity observed among HIV-uninfected compared to HIV-infected participants. This finding is consistent with the greater diversity of invasive serotypes observed among HIV-uninfected compared to HIV-infected adults in South Africa [12] and will have implications for understanding changes in pneumococcal ecology and serotype replacement after vaccine introduction [39]. The most frequent colonizing serotypes observed among HIV-infected parents in our study (3, 16F, 19F, 23F) are similar to findings from other surveys conducted among HIV-infected adults in Nigeria, South Africa, and Uganda [26-28] and among HIV-infected children in Kenya and Tanzania [35,36]. HIV-infected parents were slightly more frequently colonized with PCV10-type and PCV13-type pneumococci than HIV-uninfected parents, although this difference was not statistically significant. The baseline colonization rates among these groups will be important to understanding the impact of vaccine introduction in Kenya and will complement data on invasive disease as they become available. The effect of PCV10 introduction on the prevalence of serotypes 3 and 19A will be particularly important. In our study, serotype 3 was the most frequently carried serotype, although it is not covered by PCV10. Serotype 19A, also not included in PCV10, was not detected in our study but increased in some countries after PCV7 introduction [40-43]. Still, as early data documenting the impact of PCV10 on invasive disease among Kenyan children become available [44], the potential for an indirect impact of the vaccine on pneumococcal colonization and disease among non-vaccinated groups appears promising. In the U.S., HIV-infected persons experienced a 91% drop in vaccine-type IPD after vaccine introduction [7].

Our study had several potential limitations. First, conducting the study over an 8-week period might only provide a snapshot of colonization in this population, as pneumococcal carriage is transient and has been shown to vary by season in some studies [35,45] though not others [46]. Second, much of our HIV-related data pre-dated enrollment in the carriage study by over 1 year, which may have resulted in misclassification, as HIV-uninfected participants may have become HIV-infected during the time since their negative test. In the Asembo area, HIV incidence is estimated to be approximately 1.2% per year [personal communication, KEMRI/CDC]. Similarly, data on CD4 counts may have been inaccurate, as these are also time-sensitive and can change over the course of HIV infection and with use of HAART.

Conclusions

Characterizing the direct effect of PCV among children targeted to receive vaccine, and assessing the indirect vaccination effect among high-risk groups in Kenya, will have important policy implications in Africa. Because HIV-infected persons have an impaired immune response to polysaccharide vaccine, and conjugate vaccines are not yet approved for use in adults in most countries [47,48], the indirect effects of PCV introduction in Kenya will be critical in preventing disease in this group.
Sustaining long-term funding for vaccination programs in early-adopting countries like Kenya, and encouraging other countries to adopt PCV, will require demonstration of vaccine impact. Several factors may lead to a different herd effect with introduction of PCV in Africa than in high-income countries, including a potentially lower rate of vaccine coverage, different socio-economic conditions, and a variety of additional factors that could influence the immunologic responses or protective effect among vaccinated children (e.g. a younger target age group and a higher prevalence of malaria and malnutrition) [1,49]. The baseline data presented in this report will be compared with ongoing analyses of pneumococcal carriage rates in HIV-infected parents over the three years following introduction of PCV10 in infants in Kenya. Once this investigation is completed, the findings will contribute to a growing body of data on PCV impact that will be critical for decision-making regarding sustained pneumococcal immunization in Africa.
Probing Magnetism in CePdAl under Multi-Extreme Conditions using Polarized Neutrons

We have performed polarized neutron experiments on single-crystalline CePdAl under two different conditions: at 4 K and ambient pressure in a field of 9 T applied along the c-axis, and at 40 mK under a pressure of 0.85 GPa in the same magnetic field. We observe that, in contrast to zero field, where only two Ce atoms carry magnetic moments, in field all Ce moments are significantly developed under both conditions. Thus, the magnetic field lifts the magnetic frustration caused by the geometry of the system. It also eliminates the effects of the pressure that drives CePdAl to quantum criticality and the loss of long-range magnetic order.

Introduction

Geometrically frustrated systems have attracted considerable attention in recent years due to their peculiar magnetic properties [1], leading to a manifold of different ground states. The hexagonal compound CePdAl, in which the cerium moments are located on a frustrated kagome lattice, represents a system in which ordered and fluctuating moments coexist. It forms in the ZrNiAl type of structure (see Fig. 1) [2] and orders antiferromagnetically (AF) below T_N = 2.7 K [3]. The AF structure is described by an incommensurate propagation vector q = (0.5 0 τ), τ ≈ 0.35, that is temperature dependent [4,5] down to 1.9 K. No further magnetic phase transition is detected down to the millikelvin region [5] except for a lock-in of the τ value at 1.9 K. The low-temperature specific-heat coefficient is strongly enhanced (γ = 250-270 mJ mol⁻¹ K⁻²), qualifying CePdAl as a heavy-fermion system [6]. Due to the geometrical frustration of the triangularly coordinated Ce atoms, only two of the three Ce atoms develop a stable magnetic moment. Both are oriented along the c-axis and longitudinally modulated with an amplitude of about 1.7 μ_B. The third Ce, which is frustrated, strongly fluctuates down to at least 35 mK. Application of hydrostatic pressure shifts the anomalies connected with the appearance of the AF order to lower temperatures, indicating its collapse around 1.0 GPa (see Fig. 2) [7]. On the other hand, neutron diffraction showed that the pressure destroys the magnetic order already around 0.8 GPa [8].

In this contribution we report on a polarized neutron diffraction experiment on a CePdAl single crystal under multi-extreme conditions, with the aim of checking how the Ce moment recovers from two different starting situations: from geometrical frustration at ambient pressure, where one moment is strongly suppressed, and from the pressure-induced quantum critical point, where all three moments are quenched. We have determined magnetization density maps in this material at ambient pressure (at 4 K) and under a pressure of 0.85 GPa (at 100 mK), both in a magnetic field of 9 T applied along the c-axis. Surprisingly, the spin density maps under the two thermodynamic conditions do not differ drastically, suggesting that the magnetic field restores a Fermi-liquid behaviour in CePdAl in both cases.

Experimental Details

The single crystal used in the present study (in both elastic and inelastic experiments) was prepared by the Czochralski method. It is the same one as used in our previous neutron experiments [8]. It has an irregular shape with approximate dimensions 2.5 × 2 × 4 mm³. Two kinds of diffraction experiments were performed, both at the ILL Grenoble. The unpolarized neutron diffraction intensities for the crystal structure determination were collected on the hot-neutron diffractometer D9 at 4 K, i.e. above T_N.
The crystal was glued on an aluminium holder and mounted in a closed-cycle refrigerator. The incident wavelength used in the unpolarized experiment was 0.824 Å. Polarized data, here flipping ratios, were collected at low temperatures under pressure and at ambient pressure in fields up to 9 T using the diffractometer D3 on the very same crystal. The magnetic field was applied close to the c-axis, which has previously been found to be an easy magnetization direction [7]. The incident neutron wavelength was 0.825 Å, with a polarization degree of 95%. An erbium filter was used to reduce the λ/2 contamination to 10⁻⁴. For the experiment under pressure, a 1.5 GPa clamp-type pressure cell made from CuBe was used, with Fluorinert FC770 as the pressure-transmitting medium. It is well known that such a transmitting medium freezes out at a certain temperature upon cooling and lowers the pressure at low temperatures. While very little can be done against the first effect, it is known from experience that the pressure decreases by about 0.25 GPa upon lowering the temperature. In order to achieve the desired pressure of 0.85 GPa, the cell was pressurized at room temperature to 1.10 GPa. The cell was then placed at the tip of a dilution refrigerator capable of reaching temperatures as low as 100 mK, which was in turn inserted into a superconducting vertical cryomagnet. A magnetic field of up to 9 T was applied along the c-axis.

In order to refine the structural parameters of CePdAl, to derive the magnetic structure factors, and to perform the magnetic model refinements, a suite of programs within the Cambridge Crystallography Subroutine Library [9] was used. Spin densities were determined using the software package PRIMA [10], which calculates the most probable distribution that is in agreement with the symmetry of the parent lattice, the observed magnetic structure factors, and the associated errors using the maximum entropy (MAXENT) method [11]. The resulting magnetization densities were drawn using the computer code VESTA [12].

Results

A schematic representation of the non-centrosymmetric hexagonal crystal structure of CePdAl is shown in Fig. 1. This structure consists of two types of basal planes stacked along the c-axis. Ce atoms occupy the crystallographically equivalent 3g position, which splits into inequivalent positions (denoted Ce1, Ce2 and Ce3, respectively) due to the magnetic ordering.

[Fig. 2 caption: The P-T phase diagram of a CePdAl single crystal, constructed by combining our previous neutron diffraction data [8] with literature data [7]. The lines are guides for the eye. The open point and the arrow indicate the position of the current experiment within the P-T diagram and where CePdAl was previously found to be non-magnetic, respectively.]

One layer contains the three Ce atoms in the 3g position together with one third of the Pd atoms in the 1b position, separated by a layer built up by the Al atoms together with the remaining Pd atoms (denoted as Pd(2)) in the 2c position. There are two free positional parameters: the x position of Ce and that of Al. Each Ce has four nearest neighbours within the basal plane at a distance of 372 pm, arranged in a distorted kagome lattice. Two next-nearest Ce neighbours are found along the c-axis. Although the Ce atoms occupy equivalent crystallographic positions, they are not equivalent in the magnetic state. The antiferromagnetic structure consists of ferromagnetic Ce chains, separated by fluctuating Ce ions.
There are three such equivalent directions, resulting in the existence of three magnetic domains. All ordered Ce moments are oriented along the c-axis and their magnitudes are modulated along the c-axis. Careful analysis of the unpolarized diffraction data (1075 Bragg reflections were collected within the range 0.23 ≤ sin θ/λ ≤ 1.10), with proper corrections for absorption and extinction effects, led to structural parameters that are in very good agreement with literature data [5]. The effect of extinction was found to be very weak (a few %), and the quality of the fit could not be improved by varying the occupation numbers, suggesting that our crystal has a stoichiometry very close to the ideal 1:1:1 ratio. The resulting structural parameters were consequently adopted for the analysis of both the pressure and the zero-pressure polarized data.

The classical polarized neutron diffraction experiment is based on collecting Bragg reflection intensities I⁺(Q) and I⁻(Q) for as many scattering vectors Q as possible, with the incident beam polarization respectively parallel (+) and antiparallel (−) to the direction of the applied magnetic field. The ratio between the two intensities (corrected for the relevant background), the so-called flipping ratio R(Q), allows the determination of the magnetic structure factor F_M(Q) via its interference term with the nuclear structure factor F_N(Q). Because the crystal structure of CePdAl is non-centrosymmetric, this leads to a relatively complex expression for the flipping ratios and to difficulties in deriving the F_M's directly from the measured R(Q)'s. This is only possible for a subset of reflections having real structure factors. It follows from the fact that, although the crystal structure factors can be calculated from the known F_N(Q)'s, one cannot determine both the real and imaginary parts of F_M(Q) for a particular Bragg reflection from a single measured R(Q). In addition, a further assumption needs to be applied, namely that the main contributions to the spin distribution originate from electrons centered on the Ce atoms.

Real crystal structure factors are found, for space group P-62m, for reflections that obey the relations h = −k, h = 0 or k = 0, where i = −(h + k) is the fourth hexagonal index. Among those, reflections of the (4 0 l) and (11 0 l) type have, assuming the presence of only Ce atoms, very small structure factors, suggesting that, if the majority of the magnetic contribution originates from Ce atoms, they are insensitive to magnetism; consequently, their R(Q)'s are expected to be close to unity. Indeed, none of these flipping ratios deviates from 1.00 by more than 2σ in either polarized experiment. On the other hand, reflections of the (2 0 l), (5 0 l) and (7 0 l) type have large structure factors. For such reflections we observe R(Q)'s that deviate from unity by more than 10σ, which corroborates the picture that the magnetic properties of CePdAl are governed by the Ce atoms.

Maximum entropy method

The MAXENT method is more powerful than the usual Fourier synthesis, since it does not make any a priori assumption concerning the unmeasured Fourier components. As a result, it reduces both noise and truncation effects. We have applied this method to the F_M data sets calculated from the R(Q)'s with the help of the known crystal structure parameters, assuming the space group P1, i.e. treating the Ce sites independently. The unit cell of CePdAl was divided into 64 × 64 × 32 = 131072 cells, in which the magnetization is assumed to be constant. The reconstruction was started from a small, flat magnetization distribution.
As the final result, we have obtained the most probable reconstructed three-dimensional density of the magnetic moment, i.e., the map which fits the data and for which the entropy is maximum. A common way to represent such a density is a projection onto a certain crystallographic plane. Fig. 3(a) shows such a projection obtained from data collected at 4 K, in a field of 9 T applied along the c-axis at ambient pressure. Clearly, the magnetization is found at the places of the Ce atoms. Integration in three dimensions leads to a magnetic moment of 1.8(1) μ_B at all three Ce sites.

[Fig. 4 caption: Cerium magnetic form factor multiplied by the total average Ce moment as a function of sin θ/λ, determined from data collected at ambient pressure. Only experimental points for which the structure factors are real are shown. The best fit assuming the same Ce³⁺ configuration at all three sites is drawn as a solid line.]

Model Refinement

At first we analyzed the experimental data by direct refinement of the measured R(Q)'s of Bragg reflections with real structure factors. We assumed all the magnetic moments to be centered on the Ce ions only. These were represented by the Ce³⁺ magnetic form factor f(Q), which has in general orbital (μ_L) and spin (μ_S) parts. The magnetic amplitude of elastic neutron scattering at the scattering vector Q from a magnetic ion with the moment μ is then proportional to (μ_L + μ_S) f(Q) = μ f(Q). The best fit to the data taken without pressure, using the magnetic form factor of Ce³⁺, gives an agreement factor of χ² = 4.1. This type of fit is shown in Fig. 4. The Ce moment was found to be μ = 1.63(2) μ_B. Moreover, a significant orbital part μ_L = 1.52(4) μ_B was refined, leading to a strongly reduced parameter C₂ = μ_L/μ = 0.93(5). This value suggests a strong hybridization of the Ce 4f electrons with other states in the solid.

In the next step we treated the three Ce moments as independent quantities. The best fit leads to a slightly better agreement and to Ce magnetic moments that are very similar (between 1.54 and 1.71 μ_B, with orbital parts between 1.51 and 1.78 μ_B). This suggests that in a field of 9 T the geometrical frustration is completely lifted. Finally, we also tried to refine a magnetic moment on the Pd sites; however, no significant magnetic contribution was found. Refinements using the computer code FullProf [13] yield consistent results.

The best fit to the data taken under pressure yields an agreement factor of χ² = 2.6 and an average Ce magnetic moment that is slightly larger than that found without pressure (1.67(6) μ_B/Ce). The apparent difference, however, is a smaller orbital part in the case of the pressure data (1.05(8) μ_B/Ce), leading to C₂ = 0.63(9) and suggesting that the 4f electron states are more delocalized with respect to the ambient-pressure state. Moreover, a similar plot of the data as a function of sin θ/λ reveals that the pressure data do not fall on a single curve. This can be explained by different Ce moments at the different sites or by a non-zero magnetic moment on the Pd/Al sites. The best fit assuming the former scenario leads to Ce moments that vary significantly between the sites (between 1.0 and 1.8 μ_B, with orbital contributions between 0.7 and 2.1 μ_B). Refinements using the computer code FullProf yield similar refined values, suggesting that the 4f electron states centered at the different Ce sites experience different degrees of delocalization. However, these results need to be taken with care, as the pressure data were obtained with larger error bars.
The inequivalency could thus be due to experimental errors.

Discussion and Conclusions

We have performed polarized neutron experiments on single-crystalline CePdAl under two different thermodynamic conditions. In the first, CePdAl was in the field-saturated state, which is reached from the geometrically frustrated antiferromagnetic state by applying a strong magnetic field. The data taken at ambient pressure at low temperatures, with a field of 9 T applied along the c-axis, yield (either by direct model refinement or via the MAXENT analysis) Ce magnetic moments that are more or less equal among the originally frustrated sites. The Ce magnetic moments follow the dependence expected for the Ce³⁺ state; however, the reduced parameter C₂ indicates a strong hybridization of the 4f states with other electronic states in the solid. The second thermodynamic state studied was achieved at a very low temperature of ≈100 mK under a pressure of 0.85 GPa. Under these conditions, in the absence of a magnetic field, CePdAl is close to a quantum critical point, with all the Ce moments being quenched. We observe that all the Ce moments are significantly developed when a field of 9 T is applied along the c-axis. However, first, the three sites seem to carry different moments and, second, the spin and orbital contributions seem to be very different for these sites. It is not clear at the moment whether such inequivalency is a consequence of experimental uncertainties. Nevertheless, it is clear that the magnetic field lifts the magnetic frustration caused by the geometry of the system. It also eliminates the effects of the pressure that drives CePdAl to quantum criticality and the loss of long-range magnetic order.
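For reference, the standard textbook relations underlying the flipping-ratio analysis and form-factor refinements described above can be written as follows; these hold for the idealized case of full beam polarization, moments aligned with the field, and real structure factors, and are not the exact expressions used by the authors (which are not reproduced in the text):

    \[
      R(\mathbf{Q}) = \frac{I^{+}(\mathbf{Q})}{I^{-}(\mathbf{Q})}
                    = \frac{\left(F_N + F_{M\perp}\right)^{2}}
                           {\left(F_N - F_{M\perp}\right)^{2}},
      \qquad
      \frac{F_{M\perp}}{F_N} = \frac{\sqrt{R}-1}{\sqrt{R}+1},
    \]
    % where $F_{M\perp}$ is the component of the magnetic structure factor
    % perpendicular to the scattering vector $\mathbf{Q}$.
    %
    % Dipole approximation for the Ce$^{3+}$ form factor, with
    % $C_2 = \mu_L/\mu$ as quoted in the refinements above:
    \[
      \mu f(Q) = \mu \left[\langle j_{0}(Q)\rangle
                 + C_2\,\langle j_{2}(Q)\rangle\right].
    \]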
Social support as a mediator between sleep disturbances, depressive symptoms, and health-related quality of life in patients undergoing hemodialysis

Background: The hemodialysis regimen is an inevitable and mandatory treatment for patients with end-stage renal disease (ESRD). During the dialysis journey, patients may experience maladaptation in terms of sleep disturbances, depressive symptoms, and reduced health-related quality of life (HRQOL). Psychosocial resources such as social support may have beneficial influences on health outcomes, but studies have rarely analyzed the integrated relationships among risk factors (pain, sleep disturbances, and duration since diagnosis) and various health outcomes in Taiwan. This study aimed to bridge this gap by investigating the relationships among related risk factors, social support, sleep disturbances, depressive symptoms, and HRQOL, which is composed of physical quality of life (PQOL) and mental quality of life (MQOL), in ESRD patients.

Method: A correlational design was used, and 178 patients aged 20 years or older were recruited via convenience sampling. The relationships among the risk factors, the mediators, depressive symptoms, PQOL, and MQOL were analyzed using structural equation modeling.

Results: The findings showed that more than 70% of the participants reported poor sleep quality, and 32% reported depressive symptoms. When participants had greater pain and more sleep disorders, they were more likely to be depressed. When participants had more appraisal support, they had better PQOL and fewer depressive symptoms. Overall, the structural equation model explained 31.8% of the variance in self-reported depressive symptoms, 29.4% of the variance in PQOL, and 5.7% of the variance in MQOL. Moreover, appraisal support enhanced PQOL and reduced depressive symptoms through its two mediating effects on the impact of sleep disturbances.

Conclusion: Our findings indicate that patients with ESRD who have more social support have better PQOL and MQOL and fewer depressive symptoms than those with less social support.

Introduction

End-stage renal disease (ESRD), which is highly prevalent worldwide, is a complication of the primary disease of diabetes or the cardiovascular system [1], and patients must accept permanent dialysis for the remainder of their lives if they do not accept further aggressive treatment such as a kidney transplant. Psychological problems may occur when patients with ESRD undergo long-term dialysis, and depressive symptoms have been reported to be highly prevalent in patients with ESRD [2,3]. Moreover, the loss of bodily control among patients with ESRD is accompanied by depressive symptoms and results in negative outcomes in terms of economic burden, family dysfunction, and worse health-related quality of life (HRQOL) [4-8]. In the US, approximately 20-30% of patients with ESRD have significant depressive symptoms, which can contribute to distress and sleep disturbances [9-12] and increase the risk of mortality and morbidity. In Taiwan, the prevalence of depressive symptoms in patients with ESRD is relatively high, at approximately 50-70% [13-15]. Moreover, sleep disturbances are frequently reported in patients with ESRD, which may be due to uremia-associated symptoms or to renal-related symptoms or treatments [11,16].
The literature has indicated that having sufficient social support resources may reduce emotional stress and help enhance adaptation skills in daily life among patients with spinal cord injury [17], mental health issues [18], osteoporosis [19], and breast cancer [20], but few studies have examined the effect of social support on patients with ESRD in Taiwan. Social support is multidimensional, and four aspects have often been evaluated in different fields: informational support (IS), emotional support (ES), appraisal support (AS), and tangible support (TS) [21]. This study used the modified Chinese version of the Social Support Inventory (SSI) [22], which has been found to have good reliability, to assess social support. The SSI dimensions include ES, which focuses on individual behaviors intended to support people, including love and empathy. IS refers to suggestions and information provided to help people make treatment decisions or deal with life events. TS consists of material assistance, and AS refers to affirmations of (or respect for) an individual's activities, values, or progression. Each of these categories of social support plays an important role in meeting the needs of patients with ESRD and their families.

Previous studies [23,24] in different fields have suggested that individuals who perceive high levels of social support experience better quality of life (QOL), leading to enhanced well-being [8,19]. Numerous studies have suggested that social support may have mediating or moderating effects on health. Further, research has shown that a lack of social support is related to depressive symptoms, anxiety, frustration, and social withdrawal [17,25]. Too little research has focused on determining the effects of different types of social support on depression in patients with ESRD in Taiwan, especially the integrated relationships among disease factors and the impact of social support on depression in patients with ESRD. Baron and Kenny [26] defined mediation as "the generative mechanism through which the focal independent variable is able to influence the dependent variable of interest." Mediation is used to assess hypotheses in which the main independent variable operates through a mediator to impact a dependent variable [26]. Therefore, structural equation modeling (SEM) was applied to examine the relationships between the risk factors (pain, sleep disturbances, and duration since diagnosis) and the outcomes. HRQOL was measured based on physical quality of life (PQOL) and mental quality of life (MQOL), which represent well-being. To clarify the types of social support and to show how HRQOL is associated with risk factors in patients with ESRD, the following research questions were proposed:

1. What are the relationships among the risk factors (pain, sleep disturbances, and duration since diagnosis), types of social support, and depressive symptoms/PQOL/MQOL in patients with ESRD?
2. Does type of social support play a mediating role between sleep disturbances and depressive symptoms/PQOL/MQOL in patients with ESRD?

Research design

A cross-sectional survey was conducted, and a convenience sample of 178 patients with ESRD, eligible based on the inclusion criteria below, was recruited from the hemodialysis center of a hospital from August 2015 to January 2016. Approval for the study was obtained from the Institutional Review Board of E-Da Hospital (EMRP-103-098). Participants were provided information about the study. The inclusion criteria were as follows: 1)
aged 20 years or above; 2) undergoing routine hemodialysis treatment; and 3) able to communicate in Mandarin or Taiwanese. The exclusion criteria were as follows: 1) a DSM-IV psychiatric diagnosis; 2) severe complications during dialysis; and 3) other severe diseases such as cancer. After written informed consent was collected, the investigator conducted face-to-face interviews with a structured questionnaire. The recommended sample size for path analysis using SEM approaches is between 150 and 200 subjects [27]; thus, this study had an adequate sample size. A total of 185 eligible patients with ESRD were contacted, and seven eligible participants declined to participate because of time constraints or fatigue. The response rate was 96%, and there were no missing data in the analysis.

Variables and measures

The study used several questionnaires to measure the research variables, including demographics, pain, sleep quality, social support, depression, and HRQOL. The corresponding questionnaires included a demographic and clinical characteristics information sheet, the Visual Analogue Scale (VAS) [28], the Pittsburgh Sleep Quality Index (PSQI) [29], the SSI [22], the Center for Epidemiological Studies-Depression (CES-D) scale [30], and the Short Form-36 (SF-36) Health Survey [31]. A detailed description of each questionnaire is provided below.

Demographic and clinical characteristics. The demographic and clinical characteristics information sheet was self-reported and measured specific demographic variables, including age, duration since diagnosis, number of diseases (other chronic but well-controlled diseases endorsed by participants), marital status, education, and monthly household income.

Visual analogue scale. Pain was measured with the VAS, originally developed by Freyd in 1923 [28]. The VAS is widely used in pain assessment and has demonstrated good validity and reliability [32]. The VAS consists of a 10 cm straight line with the two endpoints labeled "no pain at all = 0" and "pain as bad as it could be = 10." Individuals reported their pain level by marking the line, and the distance from zero to the marked point represents the pain score. A higher score indicates greater pain.

Pittsburgh sleep quality index. The PSQI was adopted to measure sleep quality [29]. The PSQI consists of 19 items divided into seven components evaluating participant perspectives on sleep quality over the last month: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleep medication, and daytime dysfunction. The global PSQI score ranges from 0 to 21, with higher scores indicating worse sleep quality. A PSQI score greater than 5 was deemed to indicate sleep disturbance [29]. Cronbach's α was 0.82 in Kao et al.'s study [33] and 0.78 in this study.

Social support inventory. Social support was assessed with a modified Chinese version of the SSI [22], originally developed by Barrera, Sandler, and Ramsay [34], which has been used to evaluate patients with spinal cord injuries. Cronbach's α was 0.86 in the previous study [17]. This 19-item scale applies a four-point Likert scale (1 = never to 4 = always), with total scores ranging from 19 to 76, and includes emotional support (ES, Q1-4), informational support (IS, Q8-13), tangible support (TS, Q14-19), and appraisal support (AS, Q5-7). A higher score indicates greater social support.
In the current study, Cronbach's α was 0.97 for total support and, for the subscales, 0.95 for ES, 0.95 for IS, 0.91 for AS, and 0.87 for TS.

Center for epidemiological studies depression. Depressive symptoms were evaluated with the CES-D scale [30]. The CES-D consists of 20 items, each rated on a 4-point scale ranging from 0 (= rarely or none of the time) to 3 (= most or almost all the time), and total CES-D scores range from 0 to 60. Higher scores indicate more depressive symptoms. In this research, Cronbach's α was 0.84. The depressive level is classified as follows: scores lower than 16 indicate no depression; scores ranging from 16 to 20 indicate mild depressive symptoms; scores ranging from 21 to 26 indicate moderate depressive symptoms; and scores ranging from 27 to 60 indicate severe depressive symptoms [35].

SF-36 health survey. HRQOL was measured with the SF-36 questionnaire, a generic indicator of health [31]. The SF-36 includes eight subscales relevant to the general health of the individual: physical function (PF), role physical (problems with work or other daily activities as a result of physical health; RP), bodily pain (BP), general health (GH), social functioning (SF), role-emotional (problems with work or other daily activities as a result of emotional problems; RE), vitality (VT), and mental health (MH). According to a previous study that suggested manual scoring [31], the SF-36 is divided into two components, PQOL (consisting of PF, RP, BP, and GH) and MQOL (consisting of RE, VT, SF, and MH), and it was employed in this manner for the study. Cronbach's α ranged from 0.71 to 0.85 among the subscales in this analysis.

Data analysis

SEM was adopted to determine the effects of the risk factors (pain, sleep disturbances, and duration since diagnosis) and social support on depressive symptoms/MQOL/PQOL using IBM SPSS AMOS 22.0. The goodness-of-fit index (GFI), adjusted GFI (AGFI), and root mean square error of approximation (RMSEA) were calculated to assess the goodness-of-fit of the model. A model was considered to be a good fit if χ²/df < 3 [27]. The values for GFI and AGFI should be ≥0.90, and the value for RMSEA should be ≤0.08. Additionally, the mediating effects of the four types of social support were examined using the methods described by Baron and Kenny [26] and Gogineni, Alsup, and Gillespie [36]. These authors suggested that complete mediation occurs when the effect of sleep disturbances on the outcome variables (depressive symptoms/PQOL/MQOL) is alleviated by the influence of social support as a mediating variable and becomes statistically non-significant. If the result remains significant but the correlation decreases, then a partial mediating effect has occurred.

Characteristics of the sample

A sample of 178 adults undergoing hemodialysis for ESRD participated in this study, and the mean age was 62.9 (SD = 11.5) years (Table 1). The average duration of hemodialysis was 56.8 (SD = 40.2) months, the mean pain level was 2.5 (SD = 1.3, score range 0-10), and 159 participants had at least one chronic disease other than ESRD. The majority (58%) of the participants were males, and 136 (76.4%) were married/cohabitating. Most of the participants (70.8%) reported a monthly household income of NT$25,000-75,000 per month (New Taiwan dollars; 1 US dollar = 30.5 NT).
The participants reported the following levels of education: illiterate (29, 16.3%), elementary (72, 40.4%), junior high (32, 18.1%), and college or above (12, 6.7%).

The prevalence of depressive symptoms and sleep quality

Table 3 shows that pain and sleep quality were positively correlated with depressive symptoms (r = 0.30, p < 0.01; r = 0.43, p < 0.01). The results indicated that those with higher levels of pain and sleep disturbance were more likely to be depressed. The less emotional support (r = −0.27, p < 0.01), appraisal support (r = −0.37, p < 0.01), informational support (r = −0.25, p < 0.01), and tangible support (r = −0.20, p < 0.01) the individuals perceived, the more depressive symptoms they reported. Depressive symptoms were negatively correlated with PQOL and MQOL (r = −0.63, p < 0.01; r = −0.34, p < 0.01). Moreover, when patients with ESRD had more pain, they reported worse PQOL (r = −0.40, p < 0.01) and MQOL (r = −0.21, p < 0.01). Patients with ESRD who reported sleep disturbances were more likely to have lower PQOL (r = −0.26, p < 0.01), although there was no significant correlation with MQOL.

The constructs of risk factors, social support, and HRQOL/depressive symptoms

To examine the direct and indirect effects of the risk factors and social support on HRQOL and depressive symptoms, a model was constructed to evaluate the structure shown in Fig 1. Our findings showed that our initial conceptual model exhibited a lack of fit. We then examined a structural equation model to determine the direct and indirect effects of the four types of social support (ES, AS, IS, or TS) on the health outcomes (PQOL, MQOL, and depressive symptoms) among patients with ESRD. After several modifications, the overall goodness-of-fit statistics revealed that the proposed model fit the data well, with RMSEA = 0.001, χ²/df = 0.79, GFI = 0.99, and AGFI = 0.95. Additionally, we observed partial mediating effects for the four types of social support. Fig 2 shows that this structural model consisted of independent variables (three subject characteristics and four types of social support) and three dependent variables: depressive symptoms, PQOL, and MQOL. Furthermore, the four types of social support were modeled as mediators for depressive symptoms/PQOL/MQOL. Appraisal support had a significant negative direct effect on depressive symptoms (β = −0.40, p < 0.001), and higher levels of appraisal support led to better PQOL (β = 0.37, p < 0.01). The following mediating effects were observed: 1) appraisal support mediated the relationship between sleep disturbances and depressive symptoms, and 2) appraisal support mediated the relationship between sleep disturbances and PQOL (Fig 3). Overall, the structural model explained 31.8% of the variance in self-reported depressive symptoms, 29.4% of the variance in PQOL, and 5.7% of the variance in MQOL.

Mediating effects of different types of social support

Specifically, among the four types of social support, only appraisal support played a mediating role between the risk factors and health outcomes: 1) appraisal support mediated the relationship between sleep disturbances and PQOL, and 2) appraisal support mediated the relationship between sleep disturbances and depressive symptoms (Fig 3). Fig 3A shows that appraisal support influenced the relationship between sleep disturbances and PQOL. In step 1, sleep disturbances significantly affected PQOL (β = −0.26, p < 0.001). In step 2, sleep disturbances significantly affected appraisal support (β = −0.17, p < 0.05), and appraisal support affected PQOL (β = 0.37, p < 0.001).
However, in step 3, when appraisal support entered the regression, the effect of sleep disturbances on PQOL was reduced but remained significant (β = -0.16, p < 0.05). Because only a reduction occurred, this result suggests that appraisal support partially mediated the relationship between sleep disturbances and PQOL. Appraisal support can therefore influence the relationship between sleep disturbances and PQOL; more specifically, its mediating effect can promote patients' PQOL.

Fig 3B shows that appraisal support mediated the relationship between sleep disturbances and depressive symptoms. In step 1, sleep disturbances significantly influenced depressive symptoms (β = 0.43, p < 0.001). In step 2, sleep disturbances significantly influenced appraisal support (β = -0.17, p < 0.05), and appraisal support affected depressive symptoms (β = -0.40, p < 0.001). In step 3, when appraisal support entered the regression, the influence of sleep disturbances on depressive symptoms decreased (β = 0.33, p < 0.001) but remained significant. Appraisal support can therefore change the relationship between sleep disturbances and depressive symptoms; specifically, its mediating effect can lessen patients' depressive symptoms. (An illustrative code sketch of this stepwise procedure appears below.)

Discussion
This research focused on the relationships among individual demographic and clinical characteristics, four types of social support, depressive symptoms, and HRQOL. The results showed that 32% of the participants reached the cut point for depressive symptoms. This result coincides with a previous study [17] reporting that the prevalence of depressive symptoms among patients with ESRD on long-term dialysis was 18-35% and that patients with more social support were less likely to experience depressive symptoms than those with less social support. In this study, approximately 60% of patients with ESRD reported sleep disturbances, and sleep disturbances were positively correlated with depressive symptoms and negatively correlated with PQOL; these results are consistent with a previous study [14]. Poor sleep quality has been reported to be a health issue for 20-83% of patients with ESRD [11] and is related to multiple factors. A further intervention study is needed to explore methods for decreasing sleep disturbances in patients with ESRD. One potential method is psycho-education on how to enhance social support, which could lead to decreases in sleep disturbances; validating the causal relationship between social support and sleep quality will require further investigation in our future research. Overall, the rates of depression and sleep disturbance observed here coincided with previous studies of patients with ESRD. Because geographic limitations might affect the external validity of the results, caution should be used when generalizing our findings to a larger population. Patients with ESRD showed a high percentage of depressive symptoms, possibly because they have inadequate social support to meet their needs or insufficient tangible help with daily events during long-term dialysis. Additionally, participants may have pain or care information needs that are not being satisfied, which could result in worse HRQOL. In these circumstances, if patients with ESRD have more access to social support, especially appraisal support, they may experience fewer depressive symptoms.
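To make the stepwise mediation procedure reported above more concrete, the following minimal Python sketch fits the three Baron and Kenny regressions for one risk factor, one mediator, and one outcome. It is not part of the original analysis (which used IBM SPSS AMOS), and the variable names such as sleep, appraisal_support, and pqol are hypothetical.

```python
# Illustrative Baron & Kenny mediation check with ordinary least squares.
# Assumes a pandas DataFrame `df` with standardized columns 'sleep' (sleep
# disturbances), 'appraisal_support', and 'pqol'; the column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

def baron_kenny(df: pd.DataFrame, x: str, mediator: str, y: str) -> str:
    # Step 1: the risk factor must predict the outcome.
    m1 = sm.OLS(df[y], sm.add_constant(df[[x]])).fit()
    # Step 2: the risk factor must predict the mediator.
    m2 = sm.OLS(df[mediator], sm.add_constant(df[[x]])).fit()
    # Step 3: enter the mediator and inspect whether the effect of x shrinks.
    m3 = sm.OLS(df[y], sm.add_constant(df[[x, mediator]])).fit()

    b_step1, p_step1 = m1.params[x], m1.pvalues[x]
    b_step3, p_step3 = m3.params[x], m3.pvalues[x]
    if p_step1 >= 0.05 or m2.pvalues[x] >= 0.05:
        return "preconditions for mediation not met"
    if p_step3 >= 0.05:
        return "complete mediation"
    if abs(b_step3) < abs(b_step1):
        return "partial mediation"
    return "no mediation"

# Example call mirroring the analysis of Fig 3A (requires a suitable `df`):
# print(baron_kenny(df, x="sleep", mediator="appraisal_support", y="pqol"))
```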
The literature has noted that satisfaction with social support can influence psychological outcomes [25]. As predicted, in the correlation matrix all four types of social support had a significant inverse relationship with depressive symptoms; however, after several modifications to achieve a good model fit, only appraisal support significantly influenced health outcomes. The fact that only appraisal support remained in the model after the SEM analysis is possibly due to confounding among the four components of social support, since they are mutually correlated. Thaden and Kneib [37] proposed that SEM techniques are helpful for disentangling direct covariate effects from indirect covariate effects arising from correlation with other variables.

There could be several reasons for the mediating effects of appraisal support in this study. First, appraisal support may act as a resource for patients as they cope with challenges. Second, there may be a mechanism through which greater appraisal support could enhance PQOL, that is, by affirming individual activities or values. Finally, appraisal support may affect individual adaptation to the diagnosis, long-term dialysis, sleep disturbances, and other complications. These explanations support the study findings regarding the mediating effects of appraisal support. Moreover, appraisal support may enhance patients' self-esteem, reduce their feelings of frustration, and aid them in rebuilding their sense of well-being. This study also demonstrated an inverse relationship between appraisal support and depressive symptoms, which explains why appraisal support provides protection against depression in the mediation model [38]. Through greater support, individuals can establish positive relationships that in turn mediate the effects of risk factors on health outcomes [39]; even when appraisal support is only partially perceived, the degree of impact on negative health outcomes may be reduced.

Regarding the SEM results, only appraisal support played a mediating role between sleep disturbances and depressive symptoms/PQOL, while the other three types of social support showed only correlational effects. Appraisal support may alleviate the detrimental effect of sleep disturbances through its mediating influence. Further investigation is needed into the reasons that the other types of social support did not play a mediating role in health outcomes. The finding that appraisal support was significantly associated, both directly and indirectly, with depressive symptoms is consistent with the results of previous studies [17]. Healthcare experts should implement psychosocial education to help patients with ESRD and their families obtain better access to resources and health-related information. Healthcare experts could organize support groups to improve positive self-efficacy, and they may need to provide social access and psychological support to enhance the level of mental support patients receive. Given differences in patients' physical conditions, healthcare experts should concentrate on developing self-efficacy interventions that provide appraisal support in order to mitigate patients' experience of severe depressive symptoms.

Limitations
The study had some sampling and methodological limitations. First, convenience sampling was adopted, which might constrain the applicability of the research findings to the population in southern Taiwan; thus, generalizability may be limited.
In the future, researchers can extend the research setting to different locations. Second, the subjective nature of the self-reported questionnaires, including those for social support and quality of life, is also a concern. Third, the cross-sectional nature of the current study was a limitation, and the time effects of the study variables were unclear; given this design, it was not appropriate to draw inferences regarding the longitudinal influences of the independent variables on the HRQOL of patients with ESRD. Finally, the exclusion criteria were a limitation: patients with pre-existing depression or serious medical disease were excluded, which may also have limited who could participate in this study.

Conclusion
The results revealed a high prevalence of depression and sleep disturbances in patients with ESRD in Taiwan, consistent with previously reported rates in Taiwan but higher than rates reported in the US. Social support played an important role as a mediator between sleep disturbances and depressive symptoms in patients with ESRD in this study. Our findings offer healthcare professionals a better understanding of ways to utilize social support, especially appraisal support, based on the finding that appraisal support promotes PQOL. Additionally, the findings provide general support for the hypotheses regarding the effect of social support on depressive symptoms. Further research should be carried out with nurses in renal departments to investigate their perceptions and knowledge of how to evaluate social support in patients with ESRD, which could lead to better health outcomes for these patients.
Level of Compliance in Orthokeratology

Supplemental Digital Content is available in the text.

Orthokeratology (ortho-k) is a method that uses specially designed rigid gas-permeable (RGP) contact lenses worn overnight to reshape the cornea and temporarily reduce or eliminate myopia. Ortho-k can slow myopia progression 1-3 and has other advantages, such as improving uncorrected visual acuity during the daytime. The number of people who wear ortho-k lenses has exceeded 1.5 million and continues to increase in China. 4 However, because ortho-k lenses can cause a number of complications, including visually threatening microbial keratitis (MK), 5,6 the safety of ortho-k has been a constant concern. When ortho-k was first introduced in China, ortho-k-related keratitis frequently occurred, potentially because of inappropriate lens care procedures, patient noncompliance with practitioner instructions, and persistence in lens wear despite discomfort. 7 This painful history has led to our continued focus on the safety of ortho-k. In recent years, the overall environment of ortho-k in China has significantly improved, and the incidence of MK has significantly decreased, which is mainly attributed to the training and certification of ortho-k practitioners, universal education of ortho-k wearers, and a series of regulations promulgated by China's Food and Drug Administration. 8 These regulations include standard wear and care procedures for ortho-k patients as well as standard follow-up visit procedures. However, according to previous studies, not all contact lens patients fully comply with standard wear and care or follow-up visit procedures. In addition, different types of contact lenses have different compliance rates, which vary greatly from 0% to 60%. 9-12 In a study by Morgan et al., 11 compliance rates for contact lens wearers also varied across countries. However, to the best of our knowledge, no study has examined compliance rates for ortho-k lenses in Mainland China. Because patient compliance is one of the major risk factors for contact lens-related complications, 13-15 and an increasing number of people are choosing to wear ortho-k lenses in Mainland China, we suggest that investigating the compliance of ortho-k wearers, identifying possible problems, and taking measures to improve these issues are important and will help to enhance the safety of ortho-k in clinical practice. Cheung et al. 16 demonstrated that, in addition to effectiveness, safety is another major factor that affects parents' decisions in selecting a myopia control strategy for their children, and one of the main methods by which patients learn about myopia control options is word-of-mouth. Thus, we suggest that enhancing patient compliance may also be beneficial for the healthy development of the ortho-k market and may enable ortho-k to play a greater role in myopia control.

METHODS
To determine whether a difference exists between compliance for wear and care behaviors and compliance for follow-up visits, we investigated these two aspects of compliance separately, using two different data collection methods. Wear and care information was collected with a questionnaire. Then, after receiving the patients' questionnaires to confirm their consent to participate in the study, we collected their follow-up visit information using a retrospective survey.
The questionnaire (see Appendix 1, Supplemental Digital Content 1, http://links.lww.com/ICL/A83) covered patient demographic information, the independence of lens wear and care, the reasons for missing follow-up appointments, and eight wear and care behaviors. The eight wear and care behaviors were hand-washing methods before handling lenses, lens cleaning procedures, use of expired solution, procedures for soaking lenses, the interval of lens case replacement, exposure to nonsterile solution, the interval of lens deposit removal, and removal of lenses without suction holders (Table 1). Importantly, all of these behaviors increase the risk of contact lens-related keratitis or have been identified as risk factors for contact lens-related complications in the literature. 9,11,14,17-20 Because a study by Boost and Cho 14 showed a high contamination rate of suction holders among ortho-k wearers, suction holder use was included in our survey; this behavior has not been surveyed in previous studies. Before the questionnaire was released, it was sent to four eye care practitioners (ECPs) at the Eye Hospital of Wenzhou Medical University (EHWMU), each of whom had worked in the ortho-k field for more than 5 years, for modification and to ensure that the questions and answers were reasonable. Next, 10 ortho-k patients were selected to answer the questions in person so that statements they considered ambiguous or obscure could be revised. Finally, each patient was judged to be compliant or noncompliant according to the compliance behaviors outlined in Table 1.

At the beginning of the questionnaire, we described its purpose and content in detail so that patients could voluntarily choose to participate in the survey, and patients were free to discontinue participation at any time. The survey passed ethical review and complied with the International Chamber of Commerce/European Society for Opinion and Marketing Research (ICC/ESOMAR) International Guidelines for Market Research and Social Surveys to ensure the confidentiality of data processing. The questionnaire link was then sent as a text message to adult patients and to the parents of minor patients younger than 18 years at EHWMU. Our survey was conducted between July and September 2017. The inclusion criteria comprised patients who were prescribed ortho-k lenses after January 2013 and who had worn ortho-k lenses for more than 1 year, to ensure that patients were familiar with the schedule of the ortho-k process.

The EHWMU is a tertiary eye care facility and was ranked second in the field of ophthalmology among the most influential hospitals in science and technology in China in 2017. The EHWMU was also one of the first medical institutions in Mainland China to provide ortho-k and has been doing so for more than 10 years. Its ortho-k guidelines are presented in video and written materials, which include a follow-up visit schedule and the eight compliant behaviors outlined in Table 1. In addition, professionally trained practitioners provide each patient with one-to-one guidance until the patient has completely mastered the guidelines. At each follow-up visit, the ECP reminds patients of the next follow-up visit time, and if patients do not adhere to their follow-up visit schedule, staff promptly call to remind them. Data were analyzed using SPSS Version 23.0 (IBM Inc., Armonk, NY) statistical software.
The mean ± SD was used to represent data as appropriate. The Pearson chi-square test (χ²), the Fisher exact test, or the Mann-Whitney U test was used to analyze differences between two groups. Logistic regression was used to analyze the association between compliance and age or sex. A P value less than 0.05 was considered statistically significant. (An illustrative sketch of these comparisons is provided below.)

Compliance for Wear and Care Behaviors
The full compliance rate for wear and care behaviors was 18.5%, and the compliance rate for each behavior is detailed in Table 1. Among these behaviors, compliance was worst for avoiding exposing lenses to nonsterile solution and best for avoiding the use of expired solution (Fig. 1). To analyze the relationship between compliance and wearing experience, patients were divided into three groups according to the duration of lens wear (Table 2; Group I: 1 to 2 years, Group II: 2 to 3 years, and Group III: more than 3 years of ortho-k lens wear). The Pearson chi-square test (χ²) was used for multiple comparisons, and after Bonferroni correction, a P value less than 0.0167 (0.05/3) was considered statistically significant. The results showed no difference in the level of compliance among the three groups (Table 2). According to the independence of lens wear and care, patients were divided into a self-care group, in which patients were responsible for their own wearing behaviors and lens care, and a non-self-care group, in which wearing compliance and lens care were monitored or performed by their parents. The Pearson chi-square test showed that the non-self-care group had a higher compliance rate than the self-care group (Table 3). Because the demographic information of parents was not collected, when analyzing the relationship between compliance and age or sex, we included only the self-care group to eliminate the impact of care provided by parents. Logistic regression analysis showed no correlation between compliance and age (P = 0.941) or between compliance and sex (P = 0.954). As shown in Table 4, patients were divided by sex to analyze the relationship between independence and sex (independence indicates that patients wore and cared for their lenses by themselves). Although the average age of the females was lower than that of the males, the independence rate of the females was higher than that of the males for both lens wear and lens care.

Compliance for Follow-up Visits
The full compliance rate for follow-up visits was 63.3%. After lenses were provided to patients, follow-up visits were scheduled at EHWMU at 1 day, 1 week, 1 month, 3 months, and then every 3 months thereafter (the window after 3 months was ±1 month). The follow-up visit compliance rates for each visit within 2 years were 100%, 100%, 100%, 98.8%, 95.6%, 90.9%, 90.4%, 87.9%, 87.9%, 86.2%, and 86.6% (Fig. 2). As shown in Figure 2, the compliance rate decreased with wearing experience; therefore, to determine the stage at which the compliance rate significantly decreased, we used the Pearson chi-square test to compare adjacent follow-up visits. Significant differences in follow-up visit compliance rates were found only between the third and sixth months and between the sixth and ninth months (Table 5). The percentages of and reasons for missed follow-up visits are shown in Figure 3; the most common reasons were lack of time, no symptoms, and inconvenience.

DISCUSSION
In our study, most ortho-k wearers were adolescents; this result is consistent with a previous study that found that, on average, 80% of Chinese people who wear ortho-k lenses are younger than 18 years. 21 Because ortho-k wearers in Mainland China are predominantly juveniles, we should pay more attention to the safety of ortho-k.
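As an illustration of the statistical comparisons described above, the following Python sketch performs Bonferroni-corrected pairwise chi-square comparisons of compliance across the three wear-duration groups and outlines a logistic regression of compliance on age and sex. This is an assumption of this rewrite, not the study's workflow (which used SPSS Version 23.0), and the counts and column names shown are placeholders rather than the study data.

```python
# Illustrative only: pairwise chi-square comparisons with Bonferroni correction,
# plus a commented-out logistic regression of compliance on age and sex.
# The counts below are placeholders, not the study data.
from itertools import combinations
from scipy.stats import chi2_contingency

# Hypothetical (compliant, noncompliant) counts for wear-duration Groups I-III.
groups = {"I": (20, 80), "II": (15, 75), "III": (10, 60)}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-corrected threshold: 0.05/3 ≈ 0.0167
for a, b in pairs:
    chi2, p, dof, _ = chi2_contingency([groups[a], groups[b]])
    verdict = "significant" if p < alpha else "not significant"
    print(f"Group {a} vs Group {b}: chi2 = {chi2:.2f}, p = {p:.3f} ({verdict} at {alpha:.4f})")

# Logistic regression of full compliance (1 = compliant) on age and sex, assuming a
# pandas DataFrame `df` with hypothetical columns 'compliant', 'age', and 'male':
# import statsmodels.api as sm
# result = sm.Logit(df["compliant"], sm.add_constant(df[["age", "male"]])).fit()
# print(result.summary())
```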
Therefore, investigating the compliance of ortho-k users in Mainland China is important and necessary. The full compliance rate in this study was 14.1%. In previous studies, compliance rates varied for different types of contact lenses. Cho et al. 9 found that the "good" compliance rate for ortho-k in Hong Kong was 52% (n = 38). Sapkota 12 showed that the "good" compliance rate for traditional soft lens wearers was 28.2% (n = 78). A multinational investigation by Morgan et al. 11 showed that the full compliance rate was 14.7% for daily disposable contact lens wear, 0.2% for extended wear contact lenses, and 0% for ordinary RGP contact lenses; Morgan et al. also found that compliance rates vary among regions. As shown in the above studies, different individuals, types of contact lenses, and regions exhibit various compliance rates. As Efron 22 stated, compliance is a complex issue. Many factors, such as personality traits, education, socioeconomic status, occupation, and race, are unrelated to compliance. 23 In the literature, many measures, such as intense initial education, noting the severe consequences of noncompliance, reducing the cost of goods, procedural documents, humorous videos, or signing a contract of shared responsibility, have no significant effect on the level of compliance. 24-26 Fortunately, although compliance is a complex issue that is difficult to improve, some improvements can be made: compliance can be improved by constantly reminding patients of correct procedures at aftercare visits, 9 and simpler guidelines may result in increased patient compliance. 22

The full compliance rate of our study (including wear and care behaviors and follow-up visits) was not high; this was mainly due to the poor compliance rate for wear and care behaviors, particularly the three behaviors with the worst compliance: avoiding exposing lenses to nonsterile solution, removing deposits according to the ECPs' recommendations, and adequate hand washing. After further examination of the details of these three behaviors, we found that the main reason for the low compliance rate for avoiding exposing lenses to nonsterile solution was failure to dry the hands after washing, and the main reason for the low compliance with hand washing was not a lack of hand washing but washing hands without soap. As shown above, the low compliance was not due to the absence of the behavior but to improper performance of the behavior. Therefore, we hypothesized that the cause of the low compliance rate for wear and care behaviors might be the same as that in Claydon's survey, 27 which found that most patients were not intentionally noncompliant but rather engaged in noncompliant behaviors because of misunderstandings, forgetting, and poor guidance; only a small portion of noncompliant behaviors were intentional, for reasons such as inconvenience, neglect, or denial of risk. Therefore, increased attention should be focused on the details of these behaviors during re-education. A study by Morgan et al. 11 showed that compliance decreases with age.
Although our study showed no correlation between age and compliance for wear and care behaviors, the non-self-care group showed higher compliance rates than the self-care group, indicating that the parents' compliance was better than that of the children. We speculate that this may be because the patients in our survey were mainly adolescents; very few adult patients were included, resulting in a limited age range. However, an influence of age on compliance may exist between adolescents and adults or among different age groups of adults. No correlation was found between sex and compliance for wear and care behaviors in our study, consistent with a study by Yeung et al. 28 However, other studies have shown that males exhibit lower adherence to wear and care behaviors, 11,29 although none of these studies included ortho-k lens wearers. Although sex was not correlated with compliance in our study, it is noteworthy that for both lens wear and lens care, male juveniles were less independent than female juveniles, indicating that male juveniles require more parental assistance. Therefore, ECPs should be more cautious when screening male juvenile patients; for example, if a male child attends boarding school, the ECP should consider whether he can manage the ortho-k care procedure independently. We also found no correlation between wearing experience and compliance for wear and care behaviors. This result is consistent with a survey by Yung et al. 30 ; however, Claydon and Efron 27 found a strong association between wearing experience and compliance for wear and care, and Radford et al. 31 found that compliance decreased rapidly within the first 2 years, with a slower rate of deterioration in hygiene compliance thereafter. The reason for this difference may be that we surveyed only patients who had worn lenses for 1 to 3 years and did not investigate patients who had worn lenses for less than 1 year.

Compared with the compliance for wear and care behaviors, we found some differences in the compliance for follow-up visits. First, the compliance rate for follow-up visits was much higher, which may be attributed to the prompt calls from EHWMU staff to patients who did not attend scheduled follow-up visits. Second, the main reasons for missed follow-up visits were lack of time, no symptoms, and inconvenience, whereas forgetting appointments accounted for only 8.4% of missed visits; in other words, patients in our study intentionally missed follow-up visits. This is the opposite of Claydon's 27 findings regarding the reasons for noncompliance with wear and care behaviors, in which most patients were not intentionally noncompliant. Our follow-up rate may also reflect reminders from our staff, which decreased the proportion of "forgotten" visits; this finding may show that constant reminders (Cho effects 9) are indeed effective in improving compliance. In addition, we found that follow-up visit compliance was related to wearing experience: compliance for follow-up visits declined significantly from the third month to the ninth month and began to stabilize thereafter. Therefore, ECPs should focus on compliance with follow-up visits during this period.

This study has some limitations. First, a questionnaire was used in the study.
This method depends on the subjective responses of patients and may not provide accurate results; however, it is currently the only way to obtain information regarding patient compliance, and the large sample size of our study helps to increase the objectivity of our results. Second, this was a single-center hospital study; however, as it is the first report of compliance with ortho-k guidelines in Mainland China, it may provide a reference for future multicenter studies.