Tracheobronchopathia osteochondroplastica in the setting of COVID-19

Tracheobronchopathia osteoplastica (TO) is a rare, benign disease of unknown etiology, primarily affecting the major tracheobronchial tree, characterized by irregular nodular calcifications of the cartilaginous component of the inner wall of the tracheobronchial tree while sparing the posterior wall, leading to progressive narrowing of the airway. We report the case of a 60-year-old, otherwise healthy, nonsmoking male who complained of chronic breathing discomfort and recurrent chest infections and was found to have TO according to radiographic, microlaryngoscopic, and biopsy findings. After years of stable disease, he experienced a flare-up with worsening disease progression following his infection with SARS-CoV-2.

Introduction

Tracheobronchopathia osteoplastica is an uncommon nonneoplastic disease that affects the trachea and major bronchi. The etiology remains unknown, with no genetic predisposition documented. The condition is more common in males, with the diagnosis usually made between the ages of 40 and 60, and has no association with smoking. It is characterized by the development of cartilaginous or bony submucosal nodules, mainly involving the airway cartilages and extending into the tracheobronchial lumen. The membranous posterior wall of the trachea is typically spared due to its lack of cartilage support. 1 In most cases, the disease is asymptomatic, and it is often discovered incidentally either at autopsy or during intubation, bronchoscopy, or radiographic imaging. When symptomatic, it most commonly presents as persistent or recurrent cough, exertional dyspnea, recurrent lower respiratory tract infections, and occasionally hemoptysis. 1 Because its presentation resembles that of diseases such as bronchial asthma, the diagnosis of TO is often missed or delayed, with patients being managed as cases of asthma with no improvement in their condition. Therefore, it is important to increase awareness of this condition to improve patient outcomes and guide healthcare workers in their diagnostic approach.

Case report

A 60-year-old, otherwise healthy, nonsmoking male presented to our tertiary center complaining of chronic shortness of breath on exertion and cough of many years' duration, with a history of recurrent hospital admissions due to bronchopneumonia. His condition began with mild symptoms that gradually worsened over time, leading to multiple admissions to the internal medicine department for bronchopneumonia requiring IV antibiotics and supportive treatment. The reason for his recurrent chest infections was unknown. The patient was referred to our otolaryngology department for further evaluation and to rule out an upper respiratory cause for his shortness of breath. He had no significant history of weight loss, and the patient's head and neck examinations were unremarkable. Flexible laryngoscopy revealed rigidity in all the cartilaginous parts of the larynx with thickening of the upper tracheal rings. A plain chest x-ray showed irregular narrowing primarily in the trachea and major bronchi, with some narrowing of the small bronchi and bronchioles (Figure 1). CT of his neck and chest revealed abnormal calcifications and irregular nodular thickening in the anterolateral walls of the trachea and major bronchi, sparing the posterior wall (Figure 2). Due to the severity of his symptoms, we admitted the patient for diagnostic bronchoscopy and supportive treatment.
Rigid bronchoscopy revealed a diffusely nodular, irregular inner wall of the trachea, sparing the posterior wall, with severe narrowing in a small upper segment of the trachea, leading us to perform limited balloon dilatation of the stenosed segment, with multiple punch biopsies taken from the inner wall of the trachea. Histopathology showed abnormal metaplasia of the osteocartilaginous elastic tissue of the tracheal mucosa. Postoperatively, the patient returned to his usual health status and daily activities, with improvement in his shortness of breath and cough. For 2 years after his surgery he had no deterioration in his condition, no chest infections, and only mild symptoms. Within the first few months of the COVID-19 pandemic, the patient developed an acute exacerbation of his shortness of breath, as well as severe cough, high fever, myalgia, and generalized weakness. The patient presented because of worsening of these symptoms, which were similar to those seen with COVID-19, and a COVID-19 test was positive. He was admitted to the intensive care unit for severe COVID-19 with exacerbation of TO symptoms, for supportive management and close monitoring. Forty-eight hours after admission, his condition worsened, and he was found to have COVID-19 bronchopneumonia with worsening of TO as shown on x-ray (Figure 2). After recovering from COVID-19 and being discharged, the patient noticed persistent shortness of breath and cough, with a return to his pre-balloon-dilatation state according to the patient. Soon thereafter, he developed mild stridor with worsening shortness of breath, which led him to seek care at our clinic. On reevaluation, he had no audible stridor or wheeze. However, repeat flexible laryngoscopy showed worsening tracheal patency and progression of the disease in comparison with his previous findings; the deterioration was felt to be due to the effect of the recent COVID-19 infection on the patient's airway. Follow-up flexible laryngoscopy 4 months later showed further rapid progression of the disease, with a rigid larynx and extensive calcifications, and a severely stenosed tracheal lumen, mainly in the anterolateral wall.

Clinical features and presentation

Tracheobronchopathia osteochondroplastica (TO) is a rare, benign disease of the endobronchial system with nonspecific symptoms and various treatment approaches. 2 The condition was first macroscopically described by Rokitansky in 1855, and microscopically described by Wilks in 1857. Some theories have been formulated about the pathogenesis of this condition, with Dalgaard stating that the elastic tissue undergoes metaplasia, with cartilage formation and calcium deposition; Virchow's theory was that ecchondrosis and exostosis promote calcium deposition and ossification of the tracheal rings. Aschoff-Freiburg reported changes to the tracheal elastic tissue and used the term osteoplastic tracheopathy to describe the condition, and similarly, in 1964, Secrest et al. labeled it osteoplastic tracheobronchopathy. 3 TO is a very rare disease mostly diagnosed as an incidental finding on bronchoscopy or at autopsy, with most patients diagnosed between the ages of 50 and 70 years, and patients in the 5th decade of life being the most frequently affected. 4 According to Secrest, it is estimated that only 5% of cases are diagnosed during the person's life. 3
While most patients with TO are asymptomatic, when symptomatic the common presenting symptoms are chronic cough, dyspnea, wheeze, hemoptysis, and recurrent respiratory tract infections, which often lead to the misdiagnosis of asthma. 5

Histopathological findings

Histologically, the mucosal bed may look normal, with areas of inflammation and necrosis, as well as abnormal proliferative cartilaginous or bony formations in the submucosa. Squamous metaplasia of the columnar epithelium, calcium deposits, fragments of adipocytes, and active hematopoietic medullary bone tissue are often seen. 6 In our patient, histopathological study of the intraoperative biopsies showed fragments of bony tissue lined by respiratory epithelium, as well as the presence of hematopoietic cells.

Radiological findings

Plain chest radiography may show irregularity and narrowing of the affected segments of the tracheobronchial tree, similar to our patient's x-ray findings. CT of the chest may show irregular thickening and nodularity of the tracheal cartilage, sparing the posterior (membranous) tracheal wall. 7 CT of the neck and chest done for our patient revealed evidence of irregular thickening, nodularity, and calcifications of tracheal cartilage involving the anterior and lateral walls while sparing the posterior (membranous) wall, extending down to the proximal portions of both main stem bronchi and causing significant luminal narrowing.

Diagnosis and treatment

Definitive diagnosis of the disease is confirmed through a combination of typical laryngoscopic, radiographic, bronchoscopic, and biopsy findings. The typical bronchoscopic findings are often described vividly as a cobblestone, beaded, stalactite-cave, or rock-garden appearance. 8,9 During our patient's bronchoscopy, we saw multiple hard, bony, osteoma-like subglottic masses causing airway stenosis. There is no specific treatment for TO. Recurrent infections and atelectasis are seen as complications during the disease course and are treated in conventional fashion. Intubation, if required, may be difficult because of the calcified tracheal rings. Occasionally, tracheostomy is required. Surgical treatment options should be considered when conservative measures have failed. Resection of the affected tracheal segment, anterior laryngofissure, partial laryngectomy, bronchoscopic removal of lesions, and rigid bronchoscopic dilatation have also been reported as surgical options, although it is difficult to remove bony lesions with a rigid bronchoscope. 10 In conclusion, tracheobronchopathia osteochondroplastica, while a rare condition, is likely more common than previously thought, and because its presentation resembles that of common conditions such as bronchial asthma, it is likely to be underdiagnosed or misdiagnosed. Therefore, it is important for otolaryngologists to be familiar with this condition and its presentation, and respiratory physicians and otolaryngologists alike must be vigilant when dealing with patients with similar presentations. With the advent of the COVID-19 pandemic, given the rapid progression of our patient's condition after recovery from COVID-19, and taking into account the lack of data on the relationship between COVID-19 and TO, we believe physicians should be aware of a possible relationship between the two entities. However, future studies are recommended to clarify this possible relationship.
As there are no clear guidelines for the management of TO, our plan is to follow up with our patient regularly with repeat flexible laryngoscopy and chest x-rays, together with regular pulmonary evaluation, as long as there is no deterioration in his condition. In the event of worsening of our patient's respiratory condition, our team may consider performing long lumen tracheotomy, with possible further dilation or laser ablation to improve airway patency.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
Host Galaxies of z=4 Quasars

We have undertaken a project to investigate the host galaxies and environments of a sample of quasars at z~4. In this paper, we describe deep near-infrared imaging of 34 targets using the Magellan I and Gemini North telescopes. We discuss in detail special challenges of distortion and nonlinearity that must be addressed when performing PSF subtraction with data from these telescopes and their IR cameras, especially in very good seeing. We derive black hole masses from emission-line spectroscopy, and we calculate accretion rates from our K_s-band photometry, which directly samples the rest-frame B for these objects. We introduce a new isophotal diameter technique for estimating host galaxy luminosities. We report the detection of four host galaxies on our deepest, sharpest images, and present upper limits for the others. We find that if host galaxies passively evolve such that they brighten by 2 magnitudes or more in the rest-frame B band between the present and z=4, then high-z hosts are less massive at a given black hole mass than are their low-z counterparts. We argue that the most massive hosts plateau at ≲10L*. We estimate the importance of selection effects on this survey and the subsequent limitations of our conclusions. These results are in broad agreement with recent semi-analytical models for the formation of luminous quasars and their host spheroids by mergers of gas-rich galaxies, with significant dissipation, and self-regulation of black hole growth and star formation by the burst of merger-induced quasar activity.

Introduction

In the past decade, we have begun to understand the important role that black holes play in galaxy evolution. Observations suggest that supermassive nuclear black holes are likely present in nearly all normal galaxies, and that black hole mass is correlated with host galaxy bulge mass (Kormendy & Richstone 1995; Magorrian et al. 1998) and stellar velocity dispersion (Ferrarese & Merritt 2000; Gebhardt et al. 2000; Tremaine et al. 2002; Marconi & Hunt 2003; Häring & Rix 2004). Low-redshift quasars too are consistent with these results: at low redshift, the most luminous quasars reside in massive, early-type host galaxies, and fit the black-hole-mass-spheroid relation for Eddington fractions of ∼40% (McLeod & Rieke 1995; McLeod, Rieke, & Storrie-Lombardi 1999; McLure et al. 1999; McLeod & McLeod 2001; Floyd et al. 2004; Kiuchi et al. 2009; Silverman et al. 2009). These results have implications for the evolution of luminous, high-redshift quasars. If a galaxy already had a supermassive black hole early on, then according to the local black-hole/bulge relation, it must today be one of the most massive galaxies. In the context of the ΛCDM framework for hierarchical structure growth, specific predictions can be made for the evolution of quasar hosts and their environments through cosmic time (Mo & White 2002; Kauffmann & Haehnelt 2002). If dissipationless gravitational collapse of cold dark matter were the only process at work, then one would expect the ratio of black hole mass to stellar spheroid mass, M_BH/M_*, to be roughly constant as spheroids merge and their nuclei coalesce. However, luminous quasars like the ones in current samples of quasars at z ≥ 4 are likely the product of major mergers of gas-rich disk galaxies of comparable mass (Hopkins et al. 2005; Croton 2006; Di Matteo et al. 2008; Hopkins et al. 2008; Somerville et al. 2008).
The central black holes merge, and merger-induced gas accretion results in a burst of quasar activity. Quasar radiative energy and winds eventually halt further mass accretion and clear out the cold gas, halting star formation. The merger disrupts the original gaseous disks, and the result is a low angular-momentum spheroid of stars that subsequently evolves passively. Semi-analytical models for these processes, along with numerical simulations of the collapse of cold-dark-matter halos, are successful in reproducing many observations of galaxies, galaxy clusters, and quasars. In these models, high-z quasars are expected to have less luminous hosts than their low-z counterparts (Kauffmann & Haehnelt 2000), with the ratio of black hole mass to stellar spheroid mass, M_BH/M_*, decreasing with redshift (Croton 2006; Somerville et al. 2008). By necessity, models for quasar host evolution rely on semi-empirical prescriptions for key physical processes, because the resolution of numerical simulations cannot follow all of the crucial physics from the spatial scales of galaxy clusters down to galactic and then atomic scales. Direct observations of high-redshift quasar hosts such as the study described here provide one interesting empirical check on the overall validity of the theoretical picture of hierarchical galaxy and black hole formation and evolution. Detecting the host galaxy "fuzz" is technically challenging at high redshift, however, because it appears small and faint compared to scattered light from the nucleus in the wings of the point spread function (PSF). Ideally, we would study the fuzz in the rest-frame near-IR, which would both highlight the mass-tracing stellar populations of the hosts and provide the best possible galaxy-to-nuclear light contrast (McLeod & Rieke 1995). For high-z objects, that would mean observing in the mid-IR, but there are not yet telescopes with the necessary combination of sensitivity and angular resolution to make such observations feasible. Most high-z host studies so far have therefore used near-IR imaging. At z ∼ 2-3, the first near-IR imaging of handfuls of objects using 4m-class telescopes produced detections only in the case of radio-loud (RL) quasars (Lehnert et al. 1992; Lowenthal et al. 1995; Carballo et al. 1998). Fuzz was subsequently seen around a few radio-quiet (RQ) quasars using adaptive optics (AO) on 4m telescopes (Hutchings et al. 1999; Kuhlbrodt et al. 2005), and AO on the Gemini North 8m yielded only one host out of 9 targets at z ∼ 2 (Croom et al. 2004). The well-characterized and stable PSF of NICMOS allowed successful detections of larger samples of RQ hosts in this redshift range (Kukula et al. 2001; Ridgway et al. 2001; Peng et al. 2006), which led to the first tantalizing comparisons with hierarchical models. Peng et al. (2006) suggest that at z = 2, M_BH/M_* was several times larger than it is today, and that hosts were not yet fully formed, though those results have been the subject of some debate. For example, Falomo et al. (2008) have recently used AO imaging on the VLT to detect the hosts of three luminous quasars at 2 < z < 3, which they use to help argue for passively evolving ∼5L* elliptical hosts. Moreover, Malmquist bias in high-z samples would skew detections towards bright quasars in small hosts (Lauer et al. 2007; Treu et al. 2007; Woo et al. 2008); we discuss this effect further in §7.6. To provide more leverage for probing the hierarchical models, we would like to measure host properties at z ∼ 4.
So far, host detections have been claimed for just a few objects in this range, all with unknown radio type. Peng et al. (2006) used NICMOS to measure host magnitudes and sizes for two gravitationally lensed quasars at z = 4.1 and 4.5. Hutchings has used Gemini to observe seven targets with z ∼ 5 (Hutchings 2003, 2005); unfortunately, the results are hard to interpret in light of the distortion and nonlinearity effects that we have found to be significant for the same instrument (see below). With the improvement in sensitivity provided by 6-8m class telescopes, we have begun to expand the sample of quasars imaged at high z. Our long-term program includes new multi-wavelength imaging to search for hosts and to characterize environments; new and archival visible spectroscopy to be used for virial mass estimates of the central engines from emission line properties; and modeling of the accretion disks and hot coronae using data from IR to X-ray, including new X-ray data from ongoing Chandra and XMM observing programs. In this paper, we describe a sample of 34 z ∼ 4 quasars that we have imaged in the near-IR. We present the observations and report on the search for hosts. The environments will be described in a subsequent paper (Bechtold & McLeod 2010, in preparation). We adopt the cosmology Ω_Λ = 0.7, Ω_m = 0.3, H_0 = 70 km s^−1 Mpc^−1 throughout. All magnitudes are reported in the Vega system, unless otherwise noted.

The Sample

We have observed a sample of quasars selected to have redshifts in the range 3.6 ≲ z ≲ 4.2. The sample is listed in Table 1 with names as given in the NASA/IPAC Extragalactic Database (NED). The redshift range was chosen so that the 4000 Å break falls between the observed H and K bands, so that broad-band colors give maximum leverage for estimating photometric redshifts and stellar populations. Out of the ∼300 quasars known in this interval when we began the project, we observed a randomly-chosen sub-sample of 34 objects, yielding a median ⟨z⟩ = 3.9 and spanning a range of magnitude as shown in Fig. 1. We plot our sample against the ∼1600 quasars at these redshifts listed in the most recent Sloan Digital Sky Survey (SDSS) Quasar Catalog (Schneider et al. 2007). We are observing first in the K band, which samples the rest-frame B for these objects. The median observed total nuclear K_s = 17.2 for the quasars in our sample corresponds to M_B = −26.9 (Vega magnitudes), similar to the local luminous quasar 3C273.

Radio Loudness

To characterize the radio properties of our sample, we adopt the definition of radio loudness given in Ivezic et al. (2002), who analyzed the radio properties of SDSS quasars. They define the apparent AB magnitude (Oke & Gunn 1983) at 1.4 GHz as t = −2.5 log10(F_int/3631 Jy), where F_int is the integrated 20 cm radio flux measured from a two-dimensional Gaussian fit to the radio source. The radio-to-optical flux density ratio is then defined as R_i = 0.4 (i_AB − t), where i_AB is the AB magnitude at Sloan i in the continuum. With these definitions Ivezic et al. (2002) find that radio-loud quasars have R_i ≈ 1-4 and radio-quiet quasars have R_i < 1. The radio properties for the quasars in our sample are given in Table 2. We compiled the optical magnitudes from a number of sources. When available, we adopted the values for i_AB published by members of the Sloan consortium; references are listed in Table 2. For most other objects, we used the i_AB-band photometry given on the SDSS web site (DR6). For two objects we measured photometry from archival HST images.
For these and a few other cases, the only available photometry was from the literature in other filters, which we transformed to i_AB using the zero-points given in the NICMOS web site unit converter, and assuming a quasar spectral energy distribution in the form of a power law, f_ν ∝ ν^α with α = −0.44, which is the average slope derived from the SDSS quasars by Vanden Berk et al. (2001) (see also Pentericci et al. 2003). We corrected the optical magnitudes for Galactic reddening, using the E(B−V) given by Schlegel et al. (1998) as tabulated in NED. The transformations A_r = 2.751 E(B−V) and A_i = 2.086 E(B−V) were adopted (Schneider et al. 2003). For most of the sample quasars, the most sensitive radio data come from the Faint Images of the Radio Sky at Twenty-cm survey (FIRST; Becker et al. 1995), which we accessed through NED. If the quasar did not fall in one of the FIRST fields, we used the data from the NRAO VLA Sky Survey (NVSS; Condon et al. 1998). In a few cases the most sensitive radio data had been reported in targeted searches in the literature. For objects detected in FIRST, we adopted the FIRST catalog integrated flux. For others, we derived a 2σ upper limit from the root-mean-square fluxes in the maps downloaded from NED. Of the 34 sample quasars, 16 are radio-quiet, 5 are radio-loud, 4 have no radio data, and 9 have radio data that are not deep enough to determine whether the quasar is radio-loud or radio-quiet. Going into our survey, we wanted to test whether radio-loud quasars have different host galaxy properties than the radio-quiet majority. We therefore tended to give priority to observing quasars which we knew to be radio-loud, since they are rare and we knew that it would be difficult to get a statistically large sample of them. In the end, at least 5 quasars in the sample of 34 are radio-loud, compared to approximately 1 expected to be radio-loud had we observed a sample representative of the radio properties of the bright quasar population at z ∼ 4 as a whole (Jiang et al. 2007).

Black Hole Mass Estimate

We estimated the black hole mass M_BH for the quasars in our study from emission-line spectroscopy. As described below, we measured the full-width-half-maximum (FWHM) value of the broad C IV emission line and the quasar UV continuum luminosity from new and existing spectra. We then used these to compute the black hole mass according to a virial scaling relation between the C IV line width and the continuum luminosity. Here, AB_1450 ≡ −2.5 log10 f_ν − 48.6, where f_ν is the (reddening-corrected) continuum flux in erg s^−1 cm^−2 Hz^−1 measured at an observed wavelength of λ = 1450 Å (1 + z), as in Fan et al. (2001), and DM is the (luminosity) distance modulus. The AB_1450 magnitudes were compiled from the literature or measured by us as shown in Table 3. Where available, we adopt the reddening-corrected AB_1450 tabulated by members of the Sloan consortium, who give values based on spectrophotometry of the quasars at a rest-frame wavelength of 1450 Å. For 10 of the objects, we measured the continuum fluxes ourselves from spectra obtained from the SDSS Skyserver. For the objects for which no flux-calibrated spectra were available, we used the i_AB magnitudes from Table 2 and transformed to AB_1450 assuming α = −0.44 as described in §2.1. For objects with large C IV equivalent widths that contaminate the broadband measurements, the AB_1450 derived from photometry will be systematically bright. A comparison of the spectroscopically-derived AB_1450 to the photometrically-derived one for the SDSS objects shows the former to be fainter on average by 0.4 ± 0.3 mag.
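The broadband route to AB_1450 just described amounts to a small power-law color term on top of the Galactic extinction correction. The snippet below is a minimal sketch of that bookkeeping, assuming f_ν ∝ ν^−0.44 and A_i = 2.086 E(B−V) as above; the i-band effective wavelength (~7480 Å) and the example input values are assumptions for illustration, not numbers taken from the paper's tables.

```python
# Hedged sketch of transforming an extinction-corrected i_AB magnitude to AB_1450
# (the AB magnitude at an observed wavelength of 1450*(1+z) Angstroms), assuming a
# power-law continuum f_nu ~ nu^alpha with alpha = -0.44 as in the text.
# LAMBDA_I_EFF and the example inputs are assumptions, not values from the paper.
import math

LAMBDA_I_EFF = 7480.0   # assumed SDSS i-band effective wavelength, Angstroms

def ab1450_from_i(i_ab, z, ebv=0.0, alpha=-0.44, a_i_coeff=2.086):
    """Estimate AB_1450 from i_AB for a power-law quasar continuum."""
    i_corr = i_ab - a_i_coeff * ebv                 # correct for Galactic extinction
    lam_1450_obs = 1450.0 * (1.0 + z)               # observed wavelength of rest-frame 1450 A
    # For f_nu ~ nu^alpha: m(lam1) - m(lam2) = -2.5 * alpha * log10(lam2 / lam1)
    return i_corr - 2.5 * alpha * math.log10(LAMBDA_I_EFF / lam_1450_obs)

if __name__ == "__main__":
    # Invented example: i_AB = 19.3, z = 3.9, E(B-V) = 0.03
    print(f"AB_1450 ~ {ab1450_from_i(i_ab=19.3, z=3.9, ebv=0.03):.2f}")
```

For redshifts near the sample median, the observed-frame 1450 Å point lands close to the i band, so the color term is only a few hundredths of a magnitude; the extinction correction usually dominates.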
For most objects, we measured the FWHM of the C IV emission lines given in Table 3 using spectra from the SDSS Skyserver or electronic versions of published spectra from several authors who kindly made them available. In a few cases, we digitized published spectra using "Plot Digitizer" software. We also carried out new long-slit optical spectroscopy for five targets in the sample, including one for which no other spectroscopy is published, [VH95]2125-4529. For the new observations, we used the DEIMOS spectrograph on the Keck-II telescope (Davis et al. 2003) on the nights of 2008 Oct 24 and 2004 Oct 12 and 13. The objects were observed through a 0.7 arcsec wide slit with the 1200 l/mm first order grating, resulting in a dispersion of 0.33 Å/pixel. A GG495 filter was used to block second order light. Exposures were 600-900 seconds, mostly through clouds or at twilight. We reduced the data using the DEEP2 project IDL reduction pipeline, which flatfielded, sky-subtracted, wavelength-calibrated, and extracted the spectra as described on the DEEP2 webpage, http://astro.berkeley.edu/~cooper/deep/spec2d/. Spectra are shown in Figure 2. To derive the C IV line width, we subtracted a local continuum fit, derived by fitting a linear curve through the spectra in the rest wavelength ranges 1425 Å to 1500 Å and 1760 Å to 1860 Å. We replaced absorption features with an interpolated continuum estimate, and then fit a Gaussian to the C IV emission line. The quality of the C IV line profiles for the quasars in our sample ranged from very high-signal-to-noise examples with easy-to-define continua, to barely detected lines in discovery-quality spectra. Moreover, the redshifts of our targets shift the C IV line to wavelengths with strong telluric absorption and night-sky emission features that are difficult to calibrate out completely. Some lines probably are suppressed by undetected absorption features intrinsic to the quasar. As many authors have noted, quasar emission lines are non-Gaussian, in the sense that they have "pointy" peaks. Some C IV lines in our sample were also significantly asymmetric. For these reasons, we measured the FWHM values by hand, using IRAF's splot task. In cases where part of the line profile was very noisy, we measured the half-width of the better side of the profile and doubled it. In a few of the spectra with very good signal-to-noise, there is clearly a narrow (2000-3000 km s^−1 FWHM) component and broader wings (10,000-15,000 km s^−1 FWHM). For several objects, we estimated two values of the line width, one for each component; both are listed in Table 3. For [VCV96]Q2133-4625, the C IV profile is so noisy, possibly because of an absorption trough, that it was impossible to derive a FWHM from the published spectrum. Our resulting black hole mass estimates are given in Table 3. The systematic uncertainties in these estimates of black hole mass are well-known (Wandel et al. 1999; Collin et al. 2002; Dietrich & Hamann 2004; Kaspi et al. 2005; Vestergaard & Peterson 2006; Netzer et al. 2007; Kelly & Bechtold 2007; McGill et al. 2008). The primary assumption is that the C IV emitting gas is in virial equilibrium with the central black hole mass, and is located at a radius that scales with luminosity. For the luminous quasars in our sample, this means extrapolating from the relations tested in emission-line regions studied with reverberation mapping locally (Peterson et al. 2004).
Further, the bolometric luminosity of each quasar is assumed to be a constant multiple of the λ1450 continuum luminosity, which certainly is not the case (e.g. Kelly et al. (2008)). In Figure 3 we plot the black hole masses of the quasars in our sample along with those for the ≈1600 SDSS quasars in this redshift range recently tabulated by Shen et al. (2008). For the 6 objects in common, our black hole mass estimates generally agree within ≈0.2 dex.

Accretion Rates

We combined the black hole masses with the K-band observations described below to calculate the quasar mass accretion rates. Because the observed K band samples the rest-frame B band, the K-band magnitude allows us to compute a B-band luminosity independently of spectral shape. This avoids the errors that result when one must extrapolate from optical photometry to the rest-frame B assuming a spectral index α. We apply a B-band bolometric correction factor of 10.7 (Elvis et al. 1994) and we compare the resulting bolometric luminosity to the Eddington luminosity computed from the black hole mass via L_Edd = 3.3 × 10^4 (M_BH/M_⊙) L_⊙. We have assumed that all of the rest-frame B-band light can be attributed to the nucleus, which is a reasonable approximation for such luminous objects. The resulting accretion rates as fractions of Eddington, L_bol/L_Edd, are tabulated in Table 3 (a minimal numerical sketch of this bookkeeping is given below). The median L_bol/L_Edd for the sample is 0.47 ± 1.6 (1σ), and the minimum value is 0.1. These rates are good matches to those inferred from studies of host galaxies locally; McLeod & McLeod (2001) found that the most luminous local quasars radiate at 0.1 L_Edd, and Floyd et al. (2004) deduce a median rate of 0.47 for the most luminous local quasars. The calculation of accretion rates yields a handful of quasars with super-Eddington rates. Of these, BR2212-1626 is gravitationally lensed (Warren et al. 2001) and so the continuum luminosity, which we have not corrected for gravitational magnification, is overestimated. Since both M_BH and L/L_Edd scale as L^0.5, both quantities are also overestimated. For BRJ0529-3553, we have only a discovery-quality spectrum, and the C IV line width is very uncertain. For 5 others which have L/L_Edd ≫ 1, the C IV profile has good enough signal-to-noise to detect a distinct narrow and broad component. If the FWHM of the broad component is used, very large black hole masses and sub-Eddington accretion rates are implied. Detailed modeling of the quasar spectral energy distribution and higher quality spectra of all targets would improve the estimates of black hole mass and accretion rate. We do not list the statistical errors for M_BH and accretion rate in Table 3 because these numbers are dominated by systematic uncertainties and the simplifying assumptions described above. Excluding the L/L_Edd > 1 objects, the median rate becomes 0.41 ± 0.3, consistent with the distribution plotted by Shen et al. (2008) for z > 3 SDSS quasars. As a second way to estimate black hole masses for our sample quasars, we assume that all of the quasars are radiating at 0.4 L_Edd, with bolometric luminosities determined from M_B as above. The derived values are then the minimum plausible black hole mass that the nuclei could have to be emitting at the luminosity observed. These values are listed in Table 3.

Near-IR Imaging Observations

We have obtained deep, near-IR images of 34 quasars over the period 2002 September to 2005 January at the Magellan I 6.5m and Gemini North 8m telescopes. We have observed each field in K_s, with 5 also observed in H or H_c.
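As a rough illustration of the accretion-rate bookkeeping in the Accretion Rates subsection above, the sketch below converts an absolute B magnitude to a bolometric luminosity with the 10.7 correction factor and compares it to L_Edd = 3.3 × 10^4 (M_BH/M_⊙) L_⊙. The solar B-band absolute magnitude used as a zero point (≈5.48, Vega) is an outside assumption, as are the example inputs, so the normalization may differ slightly from the paper's exact conventions.

```python
# Hedged sketch of the accretion-rate estimate: rest-frame B luminosity from the
# absolute B magnitude, a bolometric correction of 10.7 (Elvis et al. 1994), and
# L_Edd = 3.3e4 (M_BH / M_sun) L_sun, as quoted in the text.
# M_B_SUN and the example inputs are assumptions for illustration only.

M_B_SUN = 5.48  # assumed solar absolute B magnitude (Vega); not from the paper

def eddington_ratio(m_b_abs, m_bh_msun, bol_corr=10.7):
    """L_bol / L_Edd for a quasar of absolute B magnitude m_b_abs (Vega)."""
    l_b = 10.0 ** (-0.4 * (m_b_abs - M_B_SUN))   # B-band luminosity, solar units
    l_bol = bol_corr * l_b                        # bolometric luminosity
    l_edd = 3.3e4 * m_bh_msun                     # Eddington luminosity, solar units
    return l_bol / l_edd

if __name__ == "__main__":
    # Roughly the sample's median nucleus (M_B ~ -26.9) with a few-billion-solar-mass hole
    print(f"L_bol/L_Edd ~ {eddington_ratio(-26.9, 5e9):.2f}")
```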
Most of the objects (26) were observed with Magellan's PANIC (Martini et al. 2004), a 1024 × 1024 HgCdTe array with a pixel scale of 0.125″ and a field-of-view (FOV) of 128″. Before PANIC was installed, we imaged a few objects (6) with the old ClassicCam (Persson et al. 1992); the remaining objects were observed with NIRI at Gemini North. With all three instruments, we observed using a 9- or 25-point dither pattern of short exposures (10-30 sec each, repeated 1-3 times per dither position). The times were chosen to keep the quasar images in the nominal linear range, with repeats limited to ensure fair sampling of sky variation. The dithers were typically repeated for half a night, yielding up to a thousand frames per field and average total on-source times of 3 hours for PANIC and NIRI. With NIRI and PANIC, the FOV was big enough for us to use stars on the quasar frames to measure the PSF. Due to ClassicCam's smaller FOV, we had to alternate quasar dithers with dithers on a nearby star to sample the PSF. The average on-source time for ClassicCam was thus shorter, only about 2 hours, resulting in shallower images. As we describe in the sections below, the ClassicCam images turned out to be of very limited use for the host searches. However, we include them in this paper both because they remain somewhat useful for investigations of the near environments of the quasars, and to illustrate the difficulties of using out-of-field stars for PSF subtraction. We were fortunate to have excellent seeing for many of the observations, with the final, combined K_s quasar images from PANIC and NIRI having full-width half-max (FWHM) ranging from 0.32″ to 0.66″ with median value ⟨FWHM⟩ = 0.43″. ClassicCam images were worse. Photometric calibration was done for the PANIC and NIRI images using the 2MASS stars (usually 1-3) found on each combined quasar frame. We also cross-checked these values against observations of Persson faint IR standards (Persson et al. 1998), and estimate that the photometric calibration is accurate to ∼0.1 mag. For the ClassicCam images, whose FOV are too small to contain 2MASS stars, we used only the Persson et al. standards.

Data Reduction

For all three instruments, we reduced the data using standard techniques in IRAF with its add-on packages gemini/niri and panic, the latter kindly provided by Paul Martini. The data from each instrument were handled somewhat differently, with flats made from twilight exposures, flat lamp exposures, and object frames for PANIC, NIRI, and ClassicCam, respectively. Sky frames were generally made by median-filtering 9-10 dither positions after masking out sources. For the NIRI images, persistence proved to be a significant problem, which we solved to our satisfaction by including in each frame's bad pixel mask the object masks from the 2 previous frames. The PANIC and NIRI detectors are known to be nonlinear by ≈1% at 15,000 ADU and 11,000 ADU, respectively, and we kept our quasar+sky counts well below these limits. Still, for the tight tolerances in this project we needed to take some care with the linearity correction. For PANIC we used flat lamp exposures of varying lengths to determine our own second-order correction, which differed from the nominal pipeline correction by 0.5% at 15,000 ADU. The NIRI pipeline does not offer any nonlinearity correction and we lacked the data to determine our own. We discuss the residual effects of nonlinearity in §5. In the course of our analysis we detected a geometric distortion in the PANIC images.
The distortion was visible as a radial stretch in contour plots of stars taken around an image. Paul Martini gave us a second-order geometric distortion correction derived from the PANIC optical prescription, which we then implemented in the pipeline. We note that the NIRI pipeline does not offer a distortion correction, though distortion proved to be an issue there as well. We discuss the implications further in §5. The hundreds of reduced frames for each quasar were magnified by a factor of two, aligned on the quasar centroid, and combined after rejecting a handful of frames deemed bad because of bias level jumps in one quadrant or poor flattening. The deepest NIRI K s images reach a surface brightness limit of K s = 22.9mag arcsec −2 (measured as a 1σ pixelto-pixel variation). The median value is 21.7 mag arcsec −2 , typical for the PANIC images, while the ClassicCam images are more shallow. A typical PANIC image is shown in Fig. 4, where the FOV corresponds to ∼ 1.3 Mpc at z = 4, well-suited for studies of quasar environments (see Bechtold & McLeod 2009). PSF Characterization Any search for host galaxies is only as good as the characterization and removal of the nuclear point source. We followed the traditional practice of selecting "PSF stars" from each image, and using them in model fits to the quasar images. However, in the course of our analysis we discovered some subtle effects of residual distortion and nonlinearity. Because we have not seen these issues addressed in other high-z host searches, including ones also done with NIRI, we discuss them in some detail here. For a recent look at the PSF perils that host galaxy studies might encounter even with HST, see Kim et al. (2008). Geometric distortion, or What to Do When The Seeing is Too Good Beginning with PANIC, our experiments with multiple PSF stars showed that poorer fits tend to result when the PSF star is farther from the quasar. Even though we had performed a distortion correction on the PANIC images, we found that a small residual geometric distortion compromised the fits. The distortion we detected would be insignificant (and indeed not noticeable) for most projects, with camera optics generally designed to create instrumental PSFs small compared to the seeing size. In our case, however, the excellent seeing and tight tolerances required for high-z host detection made the distortion apparent. To improve the fits, we were able to effect a higher-order distortion correction by recentering each quasar's hundreds of frames on PSF stars to create "PSF frames" for each target, as suggested to us by Brian McLeod. With distorted images, the pixel scale at the edges is different than that near the center. Therefore, when shifting and combining frames from different dither positions, the shifts computed based on the objects near the center (in this case quasars) will be the wrong number of pixels to align the objects near the edges, yielding a combined image with stretched edges. The recentering technique to help correct for this works as follows. First, as described above, we align the hundreds of images to the quasar centroids, and combine them to create a "quasar frame." From this we extract a postage-stamp image of the quasar to use for fitting. We then start again with the same hundreds of images and align them this time to the centroid of a particular PSF star, and use these new shifts to combine them to make the "PSF frame" for that star. 
We then extracted a postage-stamp image of the PSF star from the latter frame for use with the fitting. An example is shown in Fig. 5. We used this technique with good success on most of the PANIC images. Examples of fits performed with and without our re-centering technique are shown in Fig. 6. The NIRI images also suffer from distortion that proved significant for this project. No distortion correction is used in the NIRI pipeline. We applied our re-centering technique and found that it did improve the NIRI fits considerably, but PSF-PSF tests (where we subtracted PSF stars from each other) showed that residual distortion remains in the K image of SDSSJ012019.99+000735.5 and the H image of BRI0241-0146. For these two images, the only PSF star is far from the quasar. The ClassicCam images provided their own set of challenges because the PSF stars were observed alternately with the quasars, and inevitable seeing variations resulted. We were able to obtain more reasonable results in most cases by rejecting frames according to the seeing, so that the resulting quasar and PSF star frames had the same FWHM. However, the ClassicCam results are never as satisfactory or robust as the Panic and NIRI results, and will be more useful for studies of the quasar's near environment than for host detection. Nonlinearity Unfortunately, we also discovered that the near-IR images exhibit a small nonlinearity even after the nominal correction has been applied. This effect was more subtle than the distortion and became apparent only under scrutiny of the ensemble of data for our many objects. Such an effect could easily have escaped our detection in a study with fewer objects. We noticed that our best fits were found for PSF stars with similar brightness to the quasar. PSF stars brighter or fainter than the quasar could leave compact central emission or, more insidiously, rings that mimicked host galaxies in the difference images. Star-minus-star experiments performed on multiple PSFs from the same image confirmed our suspicion. This is difficult to illustrate with observed stars because of the residual distortion discussed above. However, we investigated this further by simulating observations of stars of different magnitudes and fitting and subtracting them after applying different plausible linearity corrections. For example, we have used the nonlinearity curve for PANIC shown in Fig. 7 to generate the suite of stars shown in Fig. 8 with radial intensity profiles given in Fig. 9. The counts for the fainter stars were chosen to keep the detector within the range where the response is approximately linear, as is done for the observed targets. For the brightest stars, we allowed the brightness to enter the nonlinear (but not close to saturated) regime. For the PANIC response, this corresponds to ∼15000 counts. At this level, the difference between the plausible prescriptions for the linearity correction amounts to 0.5%. When we generated stars using one prescription and then "corrected" them for nonlinearity using another, the 2D residuals in star-star tests were clearly positive. In other words, the uncertainty in the linearity correction can lead to spurious detections when scaling and subtracting point sources that differ in flux. However, in the cases we tried, the spurious residuals were distinguished either by unphysically compact sizes (FWHM less than the image FWHM) as shown in Fig. 9, or else by donuts that could be mistaken for over-subtracted hosts. 
The latter did not extend past a diameter of D = 2.5 FWHM, as illustrated in Fig. 8.

Implications

Our results call into question the traditional approach of selecting "PSF stars...chosen to be as bright as possible without encountering detector saturation effects" (Hutchings 2005). We have used this approach ourselves for low-z quasars (e.g. McLeod & Rieke (1995)) so that noise in the PSF wings is scaled down during the fitting process. However, our current analysis suggests that for PANIC and NIRI at least, a more robust practice is to choose PSF stars whose brightness is similar to that of the quasar, and whose positions are as close as possible. Which criterion takes priority might depend on the instrument and the observing conditions. Distortion corrections and linearity corrections are essential, but not sufficient. The PSF star re-centering technique described above provides a higher-order distortion correction, but a possible added complication is that the accuracy of the registration can be dependent on the brightness of the stars. In addition, the characteristics of spurious residuals are dependent on the weighting process used during normalization of the PSF (e.g. normalize to the flux in the central few pixels, use the whole source for the fit, weight the fit by flux, etc.). In principle, adaptive optics (AO) observations in which images of a PSF are interleaved with those of the quasar should be free from geometrical distortion when both are observed on the same part of the array. However, our results suggest that the case is not so clear. First, the AO observations will suffer from the same nonlinearity issues described above. Second, the adaptive correction procedure can be dependent on the object's flux and the details of the profile, which can lead to an effective distortion. This underscores the desirability of observing PSF stars simultaneously with, and not just close in time to, the quasar; see however Ammons et al. (2009). We conclude that the residual effects of distortion and nonlinearity should be addressed by individual near-IR host-hunters for their particular data sets. For the present study, we evaluate the residuals based partly on our knowledge of the brightness and proximity of the PSF stars. In most cases, we adopt as a criterion that positive residuals are considered significant only if they extend beyond a diameter of D > 2.5 FWHM, i.e. a radius r > 1.25 FWHM, which for the typical frame here means r ≳ 0.55″. We explore this further with the simulations discussed below. Of course, this particular criterion might not be appropriate for data taken under different seeing conditions or with different flux levels.

PSF Fits

To begin our search for host galaxies, we modeled each quasar as a point source with shape represented by the PSF star images. We determined a two-dimensional (2D) best-fit model for each quasar using the C program imfitfits provided by Brian McLeod and described in Lehár et al. (2000). This is the same program that we used on NICMOS images of low-redshift quasars (McLeod & McLeod 2001). We used the 2× magnified images to ensure good sampling of the PSF, and extracted an 8″ × 8″ sub-image for the fitting. Imfitfits makes a model by convolving a theoretical point source with the observed PSF, and then varying any combination of the parameters defining the background level and the position and magnitude of the point source to minimize the sum of the squares of the residuals over all the pixels.
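As a concrete, highly simplified analogue of this kind of fit (not the imfitfits program itself), the sketch below shifts and scales an observed PSF image, adds a constant background, and minimizes the summed squared residuals against the quasar postage stamp; all of the inputs in the demo are synthetic.

```python
# Toy analogue of the point-source fit described above (NOT the imfitfits code):
# shift and scale an observed PSF image, add a constant background, and minimize
# the squared residuals against the quasar image. Inputs are synthetic Gaussians.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import least_squares

def fit_point_source(quasar, psf):
    """Best-fit (dx, dy, flux_scale, background) for quasar ~ bkg + scale * PSF(shifted)."""
    def residuals(p):
        dx, dy, scale, bkg = p
        model = bkg + scale * nd_shift(psf, (dy, dx), order=3, mode="nearest")
        return (quasar - model).ravel()
    guess = [0.0, 0.0, quasar.max() / max(psf.max(), 1e-12), float(np.median(quasar))]
    return least_squares(residuals, guess).x

if __name__ == "__main__":
    # Synthetic demo: a Gaussian "PSF" and a quasar that is the same PSF, shifted and scaled
    y, x = np.mgrid[0:41, 0:41]
    psf = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 2.5 ** 2))
    quasar = 3.0 * np.exp(-((x - 20.7) ** 2 + (y - 19.6) ** 2) / (2 * 2.5 ** 2)) + 0.05
    dx, dy, scale, bkg = fit_point_source(quasar, psf)
    print(f"dx={dx:.2f} dy={dy:.2f} scale={scale:.2f} bkg={bkg:.3f}")
```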
By subtracting the best-fit model from the quasar image, we can examine the result for any residual flux due to an extended component. We achieved excellent results with at least one PSF star in most cases, as shown in Fig. 10. In a few cases where other sources within the 8″ box would bias the fits of the quasar, we have simultaneously fit those other sources as either point sources or galaxies. In these cases, we subtracted only the fitted quasar component for the figures; the other sources remain for comparison. Our fitting process necessarily subtracts out any unresolved contribution from the host. A logical next step would be to perform a simultaneous fit of a point source plus a model galaxy as is commonly done with lower-redshift quasars. Unfortunately, we have found that for these data, multicomponent fits give uninterpretable results. The problem is that for our data, the likely range of scale lengths for the galaxies is small compared to the seeing disk and too little of the galaxy extends beyond the PSF. The result is that running a multicomponent fit, whether unconstrained or partially constrained (for example by holding fixed the centers, or the centers and the galaxy shape, or the centers and the nuclear flux, ...), results in the "galaxy" component being turned into a meaningless compact or even negative source to improve the fit in the quasar's core, where PSF variations are the biggest. One idea to get around this problem is to downweight or mask out the core in the fits. However, our tests on real and simulated hosts have shown that the resulting "galaxy" is sensitively dependent on the weighting scheme. (We had even seen hints of this with the much better resolved hosts in our low-z HST study (McLeod & McLeod 2001).) Thus, we have developed a different way to estimate host magnitudes and morphologies via the simulations and isophotal diameter analyses described below. As another tool for assessing host detection, we have generated one-dimensional (1D) radial brightness profiles, measured in circular annuli, of the quasar and PSF images. We present the surface brightness profiles in Fig. 11. For comparison we also generated a "fully-subtracted" profile by normalizing the PSF to the quasar within the central few pixels and subtracting. In most cases, the PSFs are excellent matches to the quasars down to the level of the sky noise. We caution that the 1D profiles need to be interpreted carefully. For example, for our ClassicCam image of q0311-5537, the profile alone (see Fig. 11) looks like those of some host detections postulated in the literature. However, the 2D fit (see Fig. 10) shows that the residual flux is due to PSF mismatch. For the cases where we do have candidate host galaxies, we can use the 1D profiles to obtain estimates of the host flux. To do this, we subtract a fractional PSF profile that leaves a just-monotonic residual (as any plausible host would not decrease in brightness towards its center), and add up the residual light by integrating. This technique is necessarily crude, but the data do not warrant more sophisticated fits.

Detection Limits

In §5 above, we concluded from our PSF-PSF tests that any residual flux from our quasar fits outside a diameter of D > 2.5 FWHM is likely significant. In this section, we explore this criterion further in two ways: through simulations and through calculations of predicted isophotal diameters for galaxies of various types.
This second technique is (as far as we know) a new and potentially very useful approach for quasar host galaxy studies.

Simulated Hosts

To probe our detection limits we generated suites of fake galaxies and added them to the magnified images of apparently point-like quasars. We selected quasars both brighter and fainter than the median for our sample, and images both at, and deeper than, the median surface brightness limit. We convolved each model galaxy with the quasar image, added the result to the quasar image itself, and reran the analyses with the PSF stars. We also duplicated some of the tests with noiseless (Moffat) PSFs having the same FWHM as the quasars. Our simulated galaxies included both exponentials (central surface brightness μ_0, scale length r_0) and deVaucouleurs profiles (surface brightness μ_eff at effective radius r_eff). We considered sizes r_0, r_eff of 0.125″, 0.25″, 0.5″, and 0.75″, corresponding to 0.88, 1.5, 3.5, and 5.2 kpc at z = 4. These values are similar to the range observed for z ∼ 4 galaxies in the Hubble Ultra-Deep Field (HUDF) (Elmegreen et al. 2007) and high-z lensed quasar hosts (Peng et al. 2006). We tested axial ratios 0.2 < b/a < 1. Visual inspection of the residuals supported the validity of our D > 2.5 FWHM criterion; detectable galaxies left residual light outside of that diameter. In terms of flux, we found that for the galaxies we tried, the hosts were cleanly visible for an observed K_s-band flux ratio F(host) ≳ (1/3) F(nucleus). These hosts leave central (negative) holes in the subtracted 2D images and have flux in clear excess of the PSF at r ≈ 1″ in the 1D profiles. For hosts with F(host) < (1/3) F(nucleus), the detectability by visual inspection depends on the size. The hardest galaxies to recover were those whose scale lengths or effective radii were < (1/3) FWHM, and also the very large r_eff = 0.75″ deVaucouleurs galaxies for which too much of the galaxy's flux is at low surface brightness.

Isophotal Diameter Analysis

Bolstered by the results from our simulation, we recast our detection criterion from Section 5 above as a detection limit in terms of galaxy isophotal diameter D_iso, here taken to mean the diameter at which the galaxy light drops below the sky noise. In other words, we assume that we can detect any hosts that have D_iso ≳ 2.5 FWHM for the surface brightness limits of our images. To explore the range of galaxies that could be detectable as hosts, we have calculated D_iso for model exponential and deVaucouleurs galaxies following the tradition of Weedman (1986) but updated for the currently favored cosmology. We start with exponential and deVaucouleurs galaxies covering a range of scale lengths similar to those in §6.1. We transport them to z = 4 by applying cosmological surface-brightness dimming and cosmological angular diameter distances. We calculate their z = 4 colors and k-corrections by redshifting and integrating a spiral galaxy spectral energy distribution template over the filter bandpasses. [We have also used bluer and redder templates, but we note that for these data, the k-correction is nearly independent of galaxy spectral shape because the observed K_s band corresponds to rest-frame B.] Finally, we combine these to calculate the isophotal diameters for the galaxies given the surface brightness limits of our images. We also compute each galaxy's observed magnitude m_Ks(obs) by integrating the galaxy flux inside the isophotal diameter.
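A minimal sketch of this isophotal bookkeeping for an exponential profile is given below. It works directly in observed quantities (cosmological dimming and the k-correction are assumed to be already folded into the observed magnitude), and all input values are illustrative rather than taken from the paper's tables.

```python
# Minimal sketch of the isophotal-diameter bookkeeping for an exponential profile:
# from a total observed magnitude and scale length, derive the central surface
# brightness, find where the profile drops below the sky-noise isophote, and apply
# the D_iso > 2.5*FWHM detectability criterion. All input values are illustrative.
import math

def exponential_isophotal(m_tot, r0, mu_sky):
    """Return (D_iso in arcsec, observed magnitude inside D_iso) for an exponential disc."""
    mu0 = m_tot + 2.5 * math.log10(2.0 * math.pi * r0 ** 2)   # central surface brightness
    if mu0 >= mu_sky:
        return 0.0, float("inf")                               # never rises above the sky
    r_iso = (mu_sky - mu0) * r0 / 1.086                        # mu(r) = mu0 + 1.086 r/r0
    x = r_iso / r0
    frac = 1.0 - (1.0 + x) * math.exp(-x)                      # flux fraction inside r_iso
    return 2.0 * r_iso, m_tot - 2.5 * math.log10(frac)

if __name__ == "__main__":
    # A bright, compact example host on a typical image (median sky limit, median seeing)
    d_iso, m_obs = exponential_isophotal(m_tot=20.0, r0=0.25, mu_sky=21.7)
    fwhm = 0.43
    print(f"D_iso = {d_iso:.2f} arcsec, m(obs) = {m_obs:.2f}, "
          f"detectable: {d_iso > 2.5 * fwhm}")
```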
Fig. 12a shows D_iso as a function of observed magnitude m_Ks(obs) for a range of galaxy types and sizes, and for the range of surface brightness limits found for the 2× magnified PANIC and NIRI images that we use for host detection. We stress that m_Ks(obs) represents only the fraction of the galaxy's light that falls above the sky noise; it is not simply the absolute magnitude adjusted by the cosmological distance modulus. On these plots, only galaxies above the 2.5 FWHM line are in principle detectable as hosts. One can see that for the galaxies considered (i) the faintest visible deVaucouleurs hosts span ∼1 mag at a given surface brightness limit; (ii) the faintest visible exponential hosts span ∼1.5 mag; and (iii) deVaucouleurs hosts must be relatively brighter to be detected because more of their light is hidden in the steeply sloped and unresolved core. In Fig. 12b, we plot these D_iso values as a function of L* assuming no evolution in the mass-to-light ratio of the stellar population. We adopt for reference a local L* galaxy of magnitude M*_B = −20.5 (Vega). Transported to z = 4 such a galaxy would have a total magnitude of m*_Ks = 23.6 (Vega, no evolution), whereas its observed magnitude would be fainter depending on the surface brightness limit. One can see from the figure how the detectability depends on the galaxy scale length. For example, on an image with the median FWHM and with the deepest limiting surface brightness (μ = 22.4 mag arcsec^−2), the intermediate-scale (r_0 = 1.5 kpc) exponentials are visible at lower luminosity than either the small- or large-scale exponentials. The smaller galaxies hide a larger fraction of their flux in the unresolved core. The larger galaxies have lower central surface brightnesses at a given luminosity, and their relatively shallower disks do not pop above the surface brightness limit until farther out in their profiles. We use these plots to estimate host detection limits for each object. We note that there are different ways that one might measure the surface-brightness limit of any given image. We have compared our calculated isophotal diameters and apparent magnitudes from this section with those measured on the images for the simulated galaxies discussed above in §6.1. We find them to be in excellent agreement when the surface brightness limit used for the isophotal diameter calculation is that given by the 1σ pixel-to-pixel sky noise. This is how we have characterized the surface brightness limits for our images in Table 1.

Results

In a typical image, we are sensitive to field galaxies as faint as m_K ∼ 23 (Vega; m_K = 24.8 AB), with the actual limits dependent upon morphology. To translate this apparent magnitude into a corresponding luminosity for a present-day galaxy with the same stellar mass, we need to account for luminosity evolution of the stellar population. A reasonable assumption for the evolution is that the galaxies undergo ∼2 mag (i.e., a factor of 6) of fading between z = 4 and now, which we infer from the K-band k-corrections measured for galaxies in the HDF-S (Saracco et al. 2006). This amount of evolution is also expected in the (rest-frame) visible mass-to-light ratio according to the stellar population synthesis models of Bruzual & Charlot (2003) for formation redshifts of z ≳ 5. If this is the case, our images yield galaxies in the fields around the quasars with stellar masses corresponding to a present-day galaxy of luminosity L*.
A detailed study of the quasar environments will be presented in Bechtold & McLeod (2009). Here we discuss the results for hosts. Host Limits For host galaxies, our detection limits are of course brighter than the limits for galaxies in the field. We inspected the radial profiles together with the two-dimensional fits to classify detections as y/?/n (likely/maybe/unlikely), with results given in Table 4. We looked for residuals that extend beyond the sizes of the circles shown in Fig. 10, and that are not likely attributable to nearby (projected on the sky) companions. We further used the 1D fits to verify that the residuals were plausibly broader than the PSF. We find four likely hosts, and note that they are seen on the images that have the best seeing, FWHM < 0.4′′, and nearly maximal depth, indicating that we are pushing the limits of detection with these images. Other objects might well have hosts lurking just beneath the noise. Three of the four likely detections are found on NIRI images. The fuzz associated with quasar q0848 in the Ks band was marginally detected in H as well. These four likely hosts are shown in Fig. 13. For all of the objects observed with PANIC or NIRI, we estimate the host detection limits by applying our D_iso ≳ 2.5 FWHM criterion using the curves in Fig. 12 and the surface brightnesses and FWHMs in Table 1. The results are summarized in Table 4, where we list for each model two possible values for the limit on the host galaxy. The "conservative" value represents the most luminous host that would be just visible; the galaxies may be luminous but conspire to evade detection, either by putting too much light in their unresolved cores or by having such a large scale length that the middle radii are below the sky. The "optimistic" value represents the least luminous host that would be just visible. Of course the stellar masses of the galaxies in Table 4 could be considerably smaller than the straight luminosities indicate. For example, if we allow for 2 mag of evolution, then the present-day equivalents would be galaxies lower in luminosity by a factor of ∼ 6. In that case, a 12 L* galaxy in the Table would represent 2 L* of stellar mass. One can see from Table 4 that for the depth and resolution of our images, the range in upper limits for each type of galaxy typically spans a factor of two. In addition, the luminosity limits for deVaucouleurs galaxies are typically double those for exponentials, reflecting the fact that the peakier spheroids can hide more light in the unresolved core. In general, the least certain limits by the isophotal diameter method will be for those images with large FWHM and/or shallow depths, because in these cases galaxy isophotal diameters are only weakly dependent on galaxy luminosity; in other words, they fall on the flat outer parts of the curves in Fig. 12b. The ClassicCam images were sufficiently insensitive that the limits are not interesting. There are also several PANIC and NIRI images whose conservative limits were so large as to be also uninteresting. Host Detections For the four likely hosts we can also estimate fluxes from the residuals after fitting. We use Fig. 12a first to identify the kind of galaxy that could yield the observed magnitude and isophotal diameter for the surface brightness limit of the image. We then use Fig. 12b to translate that into a possible intrinsic B-band luminosity for the whole galaxy.
Note that this luminosity is bigger than the luminosity one would calculate simply from applying the distance modulus to the observed magnitude; it includes contributions from the inner part of the galaxy under the PSF and from the outer part below the sky noise. Finally, we apply 2 mag of evolution to the luminosity and from it calculate the corresponding M_B that a galaxy of the same stellar mass would have today. For q0109, the residuals add up to m_K(Vega, obs) ∼ 22.7 and they extend to a diameter of approximately 2′′. We use Fig. 12a (the 22.4 mag arcsec^-2 depth is appropriate for this image) to learn that this galaxy could, for example, be a large scale-length exponential disk. Locating this curve on Fig. 12b, we find that the same D_iso gives a luminosity of ∼ 30 L* with no evolution. Allowing for 2 mag of evolution yields a galaxy with mass corresponding to a ∼ 5 L* galaxy today. For comparison, the object at 1.5′′ southeast of the quasar has m_K(Vega, obs) ∼ 22.7 and D_iso ∼ 1.1′′. If it is at the redshift of the quasar, it could represent a companion with roughly half the mass of the host at a projected separation of about 11 kpc. For the bright residuals in q0234, we measure D_iso ∼ 2.6′′, while integrating the 1D residuals gives m_K(Vega, obs) ∼ 19.9, bright enough to represent a ∼ 60 L* exponential (10 L* with evolution), again with a large scale length. For q2047, the residuals extend asymmetrically to the south and possibly represent the combined flux of the host and a companion. This object's classification as a hyperluminous infrared galaxy by Rowan-Robinson (2000) (based on sub-mm emission that is likely too strong to originate in the quasar's dust torus) would suggest that we are observing a merger. Integrating the 1D residuals gives m_K(Vega, obs) ∼ 19.7. The diameter is harder to define because of the asymmetry, but we adopt 1.6′′ as an estimate, which implies a ∼ 40 L*-mass, intermediate scale-length exponential (7 L* with evolution). Table 4 shows that three of the four detections are more luminous than the conservative detection thresholds for exponentials, and all fall above the optimistic thresholds. We overplot the estimates on Figs. 14-16. If we examine the deVaucouleurs curves in Fig. 12, we find that the residuals for q0109 and q0234 are too big for their observed magnitudes to be represented by the curves plotted; in other words, if they are spheroids, they must have r_eff ≫ 4 kpc. On the other hand, the residuals for q0848 and q2047 fall on the curves for r_eff = 4 kpc and could be spheroids with masses of 1-2 times their exponential counterparts. For the several quasars with detections listed as "?", the data do not warrant any attempt to characterize the magnitudes other than to say that if the hosts are there, they likely lie close to the limits listed in Table 4. Color of the Host Galaxy of SDSSpJ084811.52-001418.0 For one of the quasars with a detected host, SDSSpJ084811.52-001418.0, we also obtained a deep H-band image with PANIC. We carried out the PSF estimation and subtraction from the nuclear quasar image for the H band, and found a residual flux consistent with the K-band detection. The color of the host light is H-K = 0.8. This color is nominally bluer than that of a redshifted spiral galaxy spectral energy distribution, which would have H-K = 2.0. The quasar itself has H-K = 0.5, as expected for a quasar at this redshift (Chiu et al. 2007).
Thus, the host galaxy is redder than the quasar itself, but bluer than a young stellar population at z = 4. It could be that the host is experiencing a burst of star formation, as one would expect to occur for a major merger of gas-rich galaxies, and is metal-poor, so has a weak 4000 Å break. Another possible explanation, however, is that the host light is contaminated by a foreground galaxy. There is no known damped Ly-α absorber along the SDSSpJ084811.52-001418.0 sight-line (Murphy & Liske 2004), but other intervening absorption-line systems are no doubt present. Finally, we note that the uncertainty in the H-K color is very large and difficult to estimate. Deeper imaging of a larger sample, or spectroscopy of the fuzz, could distinguish among these possibilities. The Local Black-hole/Bulge Relation In this section we compare our limits with the local black-hole/bulge relation. While the local relation is given for spheroids, we include both exponential and deVaucouleurs models in our discussion to allow for the possibility that the stellar mass might be differently distributed at early cosmological times. Wherever necessary we have adjusted values to our adopted cosmology, and we have transformed host absolute magnitudes in the literature to M_B assuming galaxy rest-frame colors B − V = 0.8, B − R = 1.4, and B − H = 4, all appropriate for a spiral-like stellar population. For an older (elliptical-like) population the colors are up to 0.3 mag redder, but the uncertainties in these colors are small compared to the other uncertainties. We have computed rest-frame absolute B magnitude limits for our hosts from the luminosities in Table 4, assuming 2 magnitudes of evolution. We plot our host limits and black hole masses against the local relation in Fig. 14, together with local (z < 0.3) luminous quasars, the latter providing high-mass black hole counterparts analogous to those in our sample of high-z quasars. The results are similar if we use our 0.4 L_Edd estimates for the black hole mass. We interpret this figure as follows. The conservative case occurs where the hosts are all at their maximum allowed values, i.e., with the limit given by the right-hand bar of each pair and hiding much flux in either a very compact core or in the low surface brightness wings of a shallow profile. In this case, the exponential and deVaucouleurs distributions are both consistent with the local relation for the two magnitudes of evolution assumed; the limits thus do not provide interesting constraints on possible evolution in the relation. The optimistic case occurs when the left-hand bar in each pair gives the upper limit to the galaxy luminosity. In this case, the limits for deVaucouleurs hosts remain uninteresting. On the other hand, hosts with exponential profiles would be less massive for a given black hole mass than are the local (Tremaine) galaxies, though they might yet be consistent with local quasars. However, if the evolution correction is more than the two magnitudes assumed, our optimistic exponential upper limits would yield hosts lower in mass for a given black hole mass than local luminous quasars. The Bruzual & Charlot (2003) models suggest that evolution in excess of 2 magnitudes between z = 4 and now would be expected if, for example, the population were younger than about 400 Myr at z = 4. Fig. 15 also shows intermediate-redshift hosts from the literature, including those of Kukula et al. (2001) and 36 objects (most lensed) from Peng et al. (2006).
The points in Fig. 15 are shown with no evolution correction, which explains the (rightward) trend towards more luminous hosts for a given black hole mass as redshift increases. Peng et al. (2006) found that an evolution correction for a passively evolving population formed at high redshift pushes the intermediate-redshift hosts leftward beyond the local relation, implying smaller stellar mass relative to the black hole mass compared to local galaxies. The constraints from our high-z objects are also dependent on the amount of evolution, as discussed above. In Fig. 16, we plot our host limits against rest-frame nuclear B-band absolute magnitude, and compare them to those in the z < 0.4 quasar host galaxy compilation by McLeod & McLeod (2001). Because the observed Ks directly traces the rest-frame B for our quasars, the B-band absolute magnitude plotted here is independent of the nuclear spectral shape and so provides a complementary approach to using black hole mass as a tracer of the nuclear engine. We see from the plots that the limits for exponential hosts imply galaxies fainter than their low-z counterparts, especially using the optimistic limits (bottom bar in each pair) as a bright limit. The same is true for the optimistic deVaucouleurs limits. As in the previous discussion, any excess evolution would push the two distributions farther apart. An obvious limitation here is that there are few local quasars with luminosities as high as those of the z = 4 sample. However, one solid conclusion from Fig. 16 is that there does appear to be a maximum allowed host luminosity. For the 2 mag of evolution plotted, this maximum corresponds to M_B ∼ −24 in the conservative limit, or M_B ∼ −23 (roughly a 10 L* galaxy) in the optimistic case. Alternatively, if there are ever independent suggestions that the mass limit must be less than that corresponding to a 10 L* galaxy, then our results would imply that the evolution must be more than the 2 mag assumed. K-band Galaxy Evolution: The K-z Relation To look at our observations another way, we plot Ks magnitude versus redshift in Figure 17. The observed Ks magnitude of a given galaxy varies with redshift because of k-corrections, evolution of the galaxy's stellar population, and merging. Here we compare the quasar host galaxies of our study with observations of the Ks magnitudes of field galaxies at the same redshift. We plot observed Ks magnitudes for radio galaxies (Lacy et al. 2000; De Breuck et al. 2002; Willott et al. 2003; De Breuck et al. 2006), which define the locus of brightest galaxies at all redshifts. The locus of radio galaxies is plotted as a solid line given by Ks = 17.37 + 4.53 log10(z) − 0.31 (log10 z)^2 from Willott et al. (2003). Fainter galaxies found as Lyman dropouts (Reddy et al. 2006) or by similar optical selection (Iovino et al. 2005; McLure et al. 2006; Temporin et al. 2008), with subsequent spectroscopic redshift measurement, are shown as well. We plot Vega magnitudes for Ks, and convert from K_AB given in the literature by assuming that K_AB = K_Vega + 1.84. The sharp locus of the radio galaxies is interpreted to indicate a maximum mass for galaxies of 10^12 M_⊙, possibly the result of a fragmentation limit in cooling proto-galactic gas clouds. Also shown are the expected Ks-band evolution tracks for elliptical galaxies of various masses as computed by Rocca-Volmerange et al. (2004). The evolution curves for spirals are similar (see Rocca-Volmerange et al. 2004).
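The two relations quoted above are straightforward to evaluate; the following sketch (ours, not the authors') computes the Willott et al. (2003) radio-galaxy locus at z = 4 and applies the adopted K_AB = K_Vega + 1.84 offset.

```python
# Sketch: evaluate the relations quoted in the text, used exactly as written.
import numpy as np

def k_radio_locus(z):
    """Willott et al. (2003) fit to the radio-galaxy K-z locus (Vega mags)."""
    lz = np.log10(z)
    return 17.37 + 4.53 * lz - 0.31 * lz ** 2

def k_vega_from_ab(k_ab):
    """Convert K(AB) to K(Vega) with the offset adopted in the text."""
    return k_ab - 1.84

print(k_radio_locus(4.0))    # ~20.0 at z = 4
print(k_vega_from_ab(24.8))  # ~23.0, the field-galaxy limit quoted earlier
```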
We see that between z = 0.25 and z = 4, we expect a galaxy of a given mass to undergo approximately 5 magnitudes of evolution at Ks. Note that of the four z = 4 hosts detected here, three (SDSS010905.8+001617.1, SDSSpJ023446.58-001415.9, and PC2047+0123) are radio quiet. The other, SDSSpJ084811.52-001418.0, lacks definitive radio data. Despite having radio-quiet nuclei, two of the three most luminous hosts have magnitudes comparable to those of radio galaxies at the same redshift. If we interpret the host galaxy detections with the Rocca-Volmerange et al. (2004) model for spheroids in the K-z diagram, then we can derive the ratio of black hole mass to spheroid mass: we find a ratio of 0.016, compared to the value of (1.4 ± 0.4) × 10^-3 seen locally (Häring & Rix 2004). In other words, given the black hole masses we inferred from the emission lines, the local relation would imply more massive host galaxies than we see. This is what is predicted by theory (Croton 2006; Somerville et al. 2008). Lauer et al. (2007) have emphasized that surveys of high redshift quasars suffer from such strong Malmquist bias that drawing conclusions about the evolution of host galaxies from small samples is problematic. The problem is that we pick targets based on their quasar luminosity, then look for the host. Even if there is a correlation between host galaxy luminosity and black hole luminosity, the observed host galaxies will be systematically fainter than the mean relation derived from spheroid velocity dispersions. This is because for any plausible galaxy luminosity function, there are many more faint galaxies than bright ones. Thus the intrinsic scatter in the relation causes an excess of low-luminosity galaxies with black holes big enough to make the sample. Malmquist Bias in This Sample To look at this effect quantitatively for our sample, we carried out a Monte-Carlo simulation. We chose galaxies randomly from a Schechter galaxy luminosity function, from L/L* = 0.1 to 15, given by φ(L) dL = n* (L/L*)^α exp(−L/L*) dL/L*, with α = −0.46 and L* = 2 × 10^10 L_⊙ (Sparke & Gallagher 2007). We assume a galaxy mass-to-light ratio M/L = 5 (Cappellari et al. 2006; Tortora et al. 2009), so that the corresponding mass of an L* galaxy is M* ≈ 10^11 M_⊙. Then, we assign to each galaxy a black hole with mass chosen randomly from the Häring & Rix (2004) distribution, in which black holes have mass 0.14% ± 0.04% of the galaxy mass. We further assume that to be found as a luminous quasar and be eligible for inclusion in our sample, the object must have a black hole mass above a certain cutoff, taken to be either log10(M_BH/M_⊙) = 8.5 or 9.0. We draw millions of galaxies, and count up the fraction of host galaxies as a function of host galaxy luminosity for all objects with black hole masses above our threshold. The results are shown in Fig. 18. As expected, we see that for a given value of the black hole cutoff mass, the results will be skewed towards lower-luminosity hosts than inferred from the mean value of the black hole-bulge relation. If we take a cutoff of log10(M_BH/M_⊙) = 8.5, which is at the small end of our sample, the effect is modest; most of the hosts would be very close to the expected mean value of about 2 L*. However, if we restrict our sample to the more massive black holes, log10(M_BH/M_⊙) = 9, the effect is somewhat larger, with the bulk of the contributors in the range 4-8 L*, skewed from the expected mean value of 7 L*.
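A compact re-implementation of this Monte Carlo might look like the sketch below; the Schechter parameters, M/L = 5, and the 0.14% ± 0.04% black-hole mass fraction are the values quoted above, while the sampling scheme, random seed, and number of draws are our own assumptions rather than details taken from the paper.

```python
# Rough re-implementation sketch of the Malmquist-bias Monte Carlo described
# above. Schechter parameters, M/L = 5 and the 0.14% +/- 0.04% black-hole
# fraction follow the text; the sampling scheme and number of draws are ours.
import numpy as np

rng = np.random.default_rng(42)
L_STAR = 2e10    # L* in solar luminosities (text value)
ALPHA = -0.46    # Schechter faint-end slope (text value)

def draw_schechter(n, lmin=0.1, lmax=15.0):
    """Draw L/L* from phi(L) ~ (L/L*)^alpha exp(-L/L*) on [lmin, lmax]."""
    grid = np.linspace(lmin, lmax, 20000)
    weights = grid ** ALPHA * np.exp(-grid)
    return rng.choice(grid, size=n, p=weights / weights.sum())

n = 2_000_000
l_host = draw_schechter(n)                 # host luminosity in units of L*
m_gal = 5.0 * l_host * L_STAR              # stellar mass, assuming M/L = 5
f_bh = np.clip(rng.normal(0.0014, 0.0004, n), 0.0, None)
m_bh = f_bh * m_gal                        # black-hole mass in solar masses

for log_cut in (8.5, 9.0):
    selected = m_bh > 10 ** log_cut
    print(f"cut 10^{log_cut}: median host = {np.median(l_host[selected]):.1f} L*,"
          f" N = {selected.sum()}")
```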
However, even if we include the effects of the Malmquist bias, we could have detected such hosts for the most massive black holes in our sample, those with log10(M_BH/M_⊙) > 9.5, if the evolution were at least two magnitudes, as discussed above. Discussion When we initiated this program, we hoped to test whether or not quasars at z = 4 followed the same correlation of black-hole mass and host galaxy spheroid mass seen in spheroids locally. We knew in principle that if the very luminous nuclei in the high redshift objects were hosted by proportionately massive spheroids, they would be relatively straightforward to detect at K with the best ground-based seeing. We have taken a very conservative approach to reducing the data, to looking for host galaxy detections, and to estimating host galaxy fluxes and limits. We explored more than one way to compare our observed detections of hosts and limits on host galaxies to the low redshift spheroids. This comparison is complicated for a number of reasons, primarily the fact that we do not measure spheroid velocity dispersion or mass directly, but must infer the host galaxy properties from the emitted starlight. Nonetheless, our data indicate that the host galaxies of some z = 4 quasars in our sample are fainter than we expect from the low-redshift correlations. We note that for the mean local values M_BH/M_gal = 0.14% and M/L = 5, we expect black holes with log10(M_BH/M_⊙) = 9.5, 10, and 10.5 to have host galaxies with M_gal = 2, 7, and 22 × 10^12 M_⊙, or L/L* = 20, 70, and 220, respectively. However, such galaxies are implausibly large, and do not have present-day analogues. For example, as shown in Fig. 17, the upper mass envelope for radio galaxies corresponds to 10^12 M_⊙, or L ≈ 10 L*. This suggests that the M_BH/M_gal correlation must plateau at M_gal ≈ 10^12 M_⊙. Summary We observed 34 high redshift quasars in the near-IR to search for their host galaxies. Our conclusions are the following. 1. We found that to characterize the PSF and subtract the nuclear quasar light properly, we had to account for geometric distortions in the camera optics, and non-linearity in the detectors, beyond what is normally corrected for in standard pipeline reductions. We caution that small uncertainties in the linearity correction, and the use of PSF stars that are significantly brighter than the quasar, can lead to undersubtraction of the nuclear PSF and spurious host galaxy detections. 2. We derived black hole masses for the quasars in the sample from the profile of the C IV emission line, but noted several cases where the profile appears to include a narrow and a broad component. Low signal-to-noise spectra could easily miss the broad component, with the result that the black hole mass is underestimated. The black hole masses range from 10^8.7 to 10^10.7 M_⊙. These quasars are very luminous, and so rare as to be not represented in some models for quasar evolution (e.g., Kauffmann & Haehnelt 2000; Di Matteo et al. 2008). They are more luminous than the knee in the quasar luminosity function, and are probably peak emitters that are undergoing a major merger. 3. Accretion rates were derived from the observed K photometry, which directly samples the rest-frame B for the sample quasars. The median accretion rate of the sample corresponds to an Eddington fraction of L_bol/L_Edd = 0.41 ± 0.3, consistent with the findings for other samples such as the Sloan quasars, and for low redshift quasars whose host galaxies have been studied by a number of authors.
4. We estimated the K-band magnitudes of the host galaxies of the quasars in our sample with a new method which takes into account the isophotal diameter of the galaxies as a function of redshift, as well as the surface brightness limit of the images. We explored parameter space by considering galaxies with exponential and deVaucouleurs radial light distributions, and quantified the dependence of derived host galaxy properties on assumed galaxy properties. 5. We detected host galaxies for 4 quasars, at least three of which are radio quiet. The detections all occurred on our deepest, sharpest images. The K-band luminosities of the hosts are consistent with massive galaxies at the redshifts of the quasars. For one object with H-band data, the H-K color is bluer than expected for a star-forming galaxy at the quasar redshift, but the uncertainties in the color are large. 6. We interpreted the detected hosts and limits on host luminosity in several ways, taking into account expected evolution of the stellar populations. We find that if the hosts are already spheroids at early times, then their black-hole/bulge relation could well be consistent with that for local galaxies and luminous quasars; our limits are weak in the case of very compact or very extended spheroids. On the other hand, if the hosts are exponential disks, they likely have less stellar mass for a given black hole mass than would be inferred from the extrapolation of the local relation to high black hole masses, but they could contain as much stellar mass as the spheroidal component of local luminous quasars. Any B-band rest-frame luminosity evolution in excess of the 2 magnitudes assumed would make any evolution in the black-hole/bulge relation stronger; such would be the case for a stellar population younger than ∼ 400 Myr. 7. If we interpret the K magnitudes of the detected hosts with models for the evolution of massive spheroids, we find that the ratio of black hole mass to spheroid mass for the 4 detected hosts is approximately 0.02, compared to (1.4 ± 0.4) × 10^-3 observed in local spheroids. Several authors have pointed out that the Malmquist bias inherent in any study that looks for hosts in very luminous, rare quasars will overestimate the black hole mass to spheroid mass ratio, given the likely scatter in the correlations and the fact that faint galaxies outnumber bright ones. We made a rough estimate of the Malmquist bias in our sample through a Monte Carlo simulation. We find that our detection rate is inconsistent with what we should have seen had the z = 4 quasars followed the same relation between black-hole mass and spheroid mass. Instead, the host galaxies in the past appear fainter (and by assumption less massive) than host galaxies today. This conclusion depends on uncertain assumptions for the scatter in the black-hole mass-spheroid correlation, the mass-to-light ratios of galaxies, and the evolution of the spectral energy distribution of spheroids. However, our results are in broad agreement with semi-analytical models for the growth of black holes and merger-induced activity. We are grateful to the staffs of the Las Campanas, Gemini, and Keck Observatories, and to the referee for comments that helped to improve the presentation. This work was carried out with help from undergraduate students Francesca D'Arcangelo, Shelby Kimmel, Melissa Rice, Talia Sepersky, Rebecca Stoll, and Amanda Zangari, who were supported in part by the Keck Northeast Astronomy Consortium's NSF-REU grant AST-0353997.
University of Arizona undergraduates who worked on this project were Angela Bivens, who was supported by the NSF program Futurebound to Pima Community College, and Meri Hidalgo Hembree, who was supported by a UA/NASA Space Grant. McLeod acknowledges support from the Theresa Mall Mullarkey Associate Professorship. We are grateful to Paul Martini for providing the IRAF panic package, Brian McLeod for providing imfitfits, and Ed Olszewski for carrying out one of the Magellan runs (with thanks to the University of Michigan astronomers who gave up some nights). The Gemini observations were obtained under program GN-2003B-C-3 (PI: Bechtold). This research has made use of SAOImage DS9, developed by the Smithsonian Astrophysical Observatory. This research has also made extensive use of the NASA/IPAC Infrared Science Archive, and the NASA/IPAC Extragalactic Database (NED), both of which are operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The analysis pipeline used to reduce the DEIMOS data was developed at UC Berkeley with support from NSF grant AST-0071048. Thanks to Michael Cooper for help with the DEIMOS reductions, and Greg Wirth and Buell Jannuzi for help with DEIMOS observing.
Figure caption: The surface brightness limit in the center is K = 22 mag arcsec^-2, which is slightly better than average, but at 0.5′′ the seeing is slightly worse than average. At this redshift, the field shown is ∼ 1 Mpc across. Top left: frames have been aligned to the quasar centroid before combining, and no distortion correction has been performed; the star "c" image is stretched. Top right: the nominal distortion correction has been applied before aligning to the quasar centroid and combining, which has tightened up the star "c" image; however, it is still broader than the image of the quasar taken from the same frame. Bottom left: the distortion-corrected frames have instead been aligned to the centroid of star "c" before combining, effecting a second-order distortion correction for that star; the star "c" image now has the same FWHM and shape as the quasar did in the image aligned on the quasar centroids.
Figure caption: As in Figs. 4 and 5, after creation of "recentered" PSF frames for stars "a" and "c"; the better fit with star "a" is likely due to its relative proximity to the center of the frame.
Figure caption (referring to the stars of Fig. 7): Bottom three rows: results of correcting and subtracting these stars from each other via different schemes. In rows 2 (PSFs normalized to the central pixel before subtraction) and 3 (normalization based on a fit that minimizes residuals) no linearity correction has been performed; residuals are obvious and the details depend on the normalization. In row 4, a linearity correction that differs from the one used to generate the stars by 0.5% has been applied; the spurious residuals are only obvious with the brighter PSFs, and do not extend beyond a diameter of 2.5 FWHM.
Fig. 9. Radial profiles for the simulated stars described in §5.2 and shown in Fig. 8, without correction for linearity. For p40, whose flux puts it into the nonlinear regime, the profile is obviously different. For p10 and p1, the more subtle nonlinearity falsely suggests a host contribution in the profile, though the residuals are unphysically compact. The circles have diameter D = 2.5 FWHM.
Figure 10 caption: The obviously bad fit for q0241H resulted from telescope mirror support problems, and illustrates the effects of distortion in an extreme case. The q0120 image shows the much more subtle but typical effect of residual distortion and nonlinearity. In the case of q0234, significant residuals are obvious. Figure 10 cont'd: targets observed with PANIC; in most cases the fit to the core is excellent. Figure 10 cont'd: targets observed with ClassicCam; fits are generally poorer than those for NIRI and PANIC. As described in the text, for the small ClassicCam field of view, PSF stars were not visible on the same frames as the quasar and had to be obtained in separate observations interleaved with the quasar exposures. In addition, the ClassicCam images generally have broader PSFs and shallower depths.
Figure caption: Isophotal diameter predictions vs. observed magnitude m_Ks(Vega, obs) (a) and luminosity L/L* (b) for model galaxies at z = 4. The observed magnitudes include only the galaxy flux that is inside the isophotal diameter. The luminosity plots assume no evolution; with a more realistic 2 magnitudes of evolution the luminosity axis labels will decrease by a factor of ∼ 6. The quantity r represents r_0 and r_eff for the exponential and deVaucouleurs laws respectively. The three vertical panels in each set represent surface brightness limits appropriate for the range of our NIRI/PANIC observations. Dashed horizontal lines show the median FWHM and 2.5 FWHM. As described in §6.2, we set detection limits for the various host galaxy types according to the criterion D_iso ≳ 2.5 FWHM, with results in Table 4.
Figure caption: Open circles are radio galaxies (Lacy et al. 2000; De Breuck et al. 2002; Willott et al. 2003; De Breuck et al. 2006). X's are optically-selected galaxies (Reddy et al. 2006; McLure et al. 2006), and dots are galaxies in the VIRMOS deep galaxy survey (Iovino et al. 2005; Temporin et al. 2008). The solid line is the fit to the locus of radio galaxies, as given by Willott et al. (2003). The dotted, dashed, long-dashed and dot-dashed lines are evolutionary tracks for elliptical galaxies of mass 10^13, 10^12, 10^11, and 10^10 M_⊙, respectively, as computed by Rocca-Volmerange et al. (2004). All magnitudes are Ks in the Vega system. The detected quasar hosts have Ks magnitudes consistent with those of massive galaxies at their redshifts.
Figure caption: For the bigger cutoff black hole mass, the Malmquist bias is more severe, in that there is a larger contribution from galaxies with lower luminosities than inferred from the Magorrian relation mean.
Table notes: Photometry and spectroscopy sources are abbreviated (Fan et al. 1999, 2000, 2001; Landt et al. 2001; Péroux et al. 2001; Schneider et al. 1991, 1992, 2001; Storrie-Lombardi et al. 1996; Hewitt & Burbidge 1989; Véron-Cetty & Véron 1996; Hawkins & Veron 1996; Constantin et al. 2002; Dietrich et al. 2003; Kuhn et al. 2001; Storrie-Lombardi & Wolfe 2000; SDSS DR6; Carilli et al. 2001; Griffith & Wright 1993; Wright & Otrupcek 1990; ACS/WFPC2 archival HST images and Keck spectra measured here), with r_AB = R + 0.3. Camera plate scales are 0.115′′/pix and 0.95′′/pix for ClassicCam (C1, C2), 0.116′′/pix for NIRI, and 0.125′′/pix for PANIC. Surface brightness limits of the final images (Vega mags) are derived from the 1σ pixel-to-pixel variation; effective limits for the magnified images used with the PSF fits are ∼ 0.4 mag brighter. Radio-to-optical flux ratios are calculated from detections, or from 2σ limits for non-detections; FIRST and NVSS were accessed through NED. Line FWHMs were measured by us from the spectra; for objects with two values listed we measured both a broad and a narrow component. Quasar absolute magnitudes M_B (Vega system) were computed from the observed K-band magnitudes in Table 4. Black hole masses log10(M_BH/M_⊙) were derived from the C IV FWHM and AB_1450, and Eddington fractions L_bol/L_Edd from the mass and M_B.
2009-09-14T19:23:55.000Z
2009-09-14T00:00:00.000
{ "year": 2009, "sha1": "70fd50876dc02bb22f287390d62ce2acc4f3ef38", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0909.2630", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "70fd50876dc02bb22f287390d62ce2acc4f3ef38", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
20150553
pes2o/s2orc
v3-fos-license
25-hydroxyvitamin D status in patients with alopecia areata Introduction Alopecia areata (AA) is a T cell-mediated autoimmune disease that causes inflammation around anagen-phase hair follicles. Insufficient levels of vitamin D have been implicated in a variety of autoimmune diseases. Aim To investigate the status of serum 25-hydroxyvitamin D (25(OH)D) in patients with AA, serum 25(OH)D concentrations were compared between AA patients and healthy controls, in order to determine whether an association exists between serum 25(OH)D levels and AA. Material and methods The study, comprising 41 patients diagnosed with AA and 32 healthy controls, was conducted between October 2010 and March 2011. The serum vitamin D levels of the study group were determined by high performance liquid chromatography. Serum levels of calcium, phosphorus, alkaline phosphatase, and parathyroid hormone were also evaluated. Results The study was based on 41 patients aged between 20 and 50 (mean: 32.8 ±7.5). The control group included 32 healthy persons aged between 20 and 51 (mean: 32.7 ±7.5). Serum 25(OH)D levels in patients with AA ranged from 5.0 to 38.6 ng/ml with a mean of 8.1 ng/ml. Serum 25(OH)D levels in healthy controls ranged from 3.6 to 38.5 ng/ml with a mean of 9.8 ng/ml. There was no statistically significant difference in serum vitamin D levels between AA patients and healthy controls (p > 0.05). Conclusions Deficient serum 25(OH)D levels are present in patients with AA. However, considering the high prevalence of vitamin D deficiency in Turkey, no difference was noted between AA patients and controls. Introduction Tracheobronchopathia osteoplastica is an uncommon... Alopecia areata (AA) is a tissue-specific, T cell-mediated autoimmune disease. The exact pathogenesis of AA is not fully understood; however, recent available evidence supports an autoimmune targeting of hair follicles. The histological feature of AA is lymphocyte infiltration around and within affected hair follicles. Macrophages and Langerhans cells have also been observed around and within the hair follicles [1,2]. Alopecia areata is an autoimmune disease, and many autoimmune conditions are associated with reduced vitamin D levels, including rheumatoid arthritis, diabetes mellitus and multiple sclerosis [3]. Vitamin D is a fat-soluble steroid synthesized in the skin from 7-dehydrocholesterol (as a hormone) or ingested with food (as a vitamin). It has a role in mediating the normal function of both the innate and adaptive immune systems, and it initiates biological responses via binding to the vitamin D receptor (VDR), which is widely distributed in most tissues. Vitamin D has been implicated in processes that may trigger or exacerbate autoimmunity [4,5]. Various studies report that vitamin D levels are associated with the incidence and/or severity of some autoimmune diseases, including multiple sclerosis, lupus erythematosus, type 1 diabetes mellitus, and rheumatoid arthritis [6][7][8][9]. Also, vitamin D analogues are effective topical therapies for cutaneous autoimmune conditions including psoriasis and vitiligo [10]. Aim The aim of this study was to evaluate the status of vitamin D in patients with AA. Because of the high prevalence of vitamin D deficiency in our geographic area, we also evaluated the levels in a healthy control group and compared the two groups between October 2010 and March 2011. In order to minimize the effect of seasonal changes on vitamin D levels, the study was conducted during the fall and winter months.
Material and methods This study was carried out in the Department of Dermatology with the approval of the Hospital Ethics Committee. Patients and controls had to sign informed consent, according to the Declaration of Helsinki. Patients This study was conducted in the Department of Dermatology of the Turgut Ozal University Hospital, Ankara, Turkey from October 2010 to March 2011. Forty-one patients with AA (26 males and 15 females) and 32 healthy control subjects (18 males and 14 females) were studied. Exclusion from analysis was based upon oral vitamin D supplementation; major cardiovascular, liver, kidney or digestive disease; treatment for AA within 1 month before testing; or refusal to have laboratory testing. Patient demographics including gender, age, history of AA onset, main site of involvement, duration and progression of the disease, Fitzpatrick skin phenotype, and personal and family history of comorbid autoimmune diseases were acquired during patient interviews in the department. Assays Serum calcium, phosphorus, and ALP levels were measured with a spectrophotometric device using a commercial kit (Roche Integra 800). Vitamin D was quantified by high performance liquid chromatography (HPLC) on a UFLC-SHIMADZU system fitted with a VertiSep GES C18 HPLC column (ImmuChrom GmbH, lot number VD-130218F). Serum levels of PTH were measured by a chemiluminescence immunoassay device (Siemens Centaur XP). Statistical analysis Data analysis was performed using SPSS for Windows, version 11.5 (SPSS Inc., Chicago, Illinois, United States). Normal or non-normal distribution of continuous variables was determined by the Shapiro-Wilk test. Mean differences between groups were compared using Student's t-test; otherwise, the Mann-Whitney U or Kruskal-Wallis test was used, according to the number of independent groups, for comparisons of median values. When the p-value from the Kruskal-Wallis test was statistically significant, Conover's non-parametric multiple comparison test was used to determine which group differed from the others. Categorical data were analysed by Pearson's χ2 or Fisher's exact test (where applicable). The degree of association between the duration of symptoms and vitamin D levels was evaluated by Spearman's correlation analysis. Multiple logistic regression analysis was applied to determine the best predictor(s) for discriminating between the case and control groups. Odds ratios and 95% confidence intervals for each independent variable were also calculated. A value of p less than 0.05 was considered statistically significant.
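The test-selection logic described above (normality check, then a parametric or non-parametric comparison) can be sketched as follows; the original analyses were run in SPSS, so this SciPy version and its placeholder data are purely illustrative.

```python
# Sketch of the comparison strategy described above (Shapiro-Wilk, then
# Student's t-test or Mann-Whitney U), using SciPy rather than SPSS.
# The arrays below are placeholders, not the study data.
from scipy import stats

def compare_groups(patients, controls, alpha=0.05):
    """Compare a continuous measure between two groups, choosing the test by normality."""
    normal = (stats.shapiro(patients).pvalue > alpha
              and stats.shapiro(controls).pvalue > alpha)
    if normal:
        name, result = "Student's t-test", stats.ttest_ind(patients, controls)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(patients, controls)
    return name, result.pvalue

aa_25ohd = [5.0, 6.2, 7.1, 8.4, 9.0, 12.3, 38.6]     # placeholder values, ng/ml
ctrl_25ohd = [3.6, 7.5, 8.8, 9.9, 11.2, 15.0, 38.5]  # placeholder values, ng/ml
print(compare_groups(aa_25ohd, ctrl_25ohd))
```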
Results The study included 41 patients (26 males, 15 females) aged between 20 and 50 (mean: 32.8 ±7.5). The control group included 32 healthy people (18 males, 14 females) aged between 20 and 51 (mean: 32.7 ±7.5). There was no statistically significant difference between the patient and control groups with respect to mean age. Fifteen patients had a single patch and 26 patients had multiple patches; all lesions were on the scalp (Table 1). Among the 41 patients with AA, a family history of AA was present in 12 (29.3%) patients; Hashimoto thyroiditis was found in 4 (9.8%) patients; type I diabetes mellitus occurred in 6 (14.6%) patients, and rheumatoid arthritis was found in 2 (4.9%) patients. An autoimmune disease was present in 4 (9.8%) patients (Table 2). Serum 25(OH)D levels in patients with AA ranged from 5.0 to 38.6 ng/ml with a mean of 8.1 ng/ml. Overall, 93.8% had a vitamin D deficiency, 3.1% had a vitamin D insufficiency, and 3.1% had sufficient levels of vitamin D. Serum 25(OH)D levels in the healthy control group ranged from 3.6 to 38.5 ng/ml with a mean of 9.8 ng/ml. Overall, 85.3% had a deficiency, 9.8% had an insufficiency, and 4.9% had sufficient levels of vitamin D. There was no statistically significant difference in serum vitamin D levels between AA patients and healthy controls (p > 0.05) (Figures 1 and 2). When the proportions of men and women in the study groups were investigated, there was no statistically significant difference in 25(OH)D levels (p > 0.05). Serum 25(OH)D levels under 10 ng/ml were observed in 53.7% of patients and 59.4% of the control group; there was no statistically significant difference in the proportion with levels under 10 ng/ml between AA patients and controls (p > 0.05). There was no statistically significant difference between the patient and control groups with respect to the levels of calcium, phosphorus, ALP, and PTH (p > 0.05). Results of the logistic regression models are presented in Table 3. The logistic regression analysis showed no significant effect of age, gender, or skin type on vitamin D status. Discussion The hair follicle is a highly hormone-sensitive organ [2]. Vitamin D is a hormone that plays an important role in the regulation of calcium homeostasis, in cell growth and differentiation, and in regulation of the immune system [4,5]. Based on its biological effects, a normal 25(OH)D level is ≥ 30 ng/ml. Vitamin D deficiency is increasingly recognised worldwide; because of differences in the dietary intake of vitamin D, varying durations of exposure to sunlight, and the use of supplements, the prevalence of vitamin D deficiency shows different patterns across various populations [11]. Various studies report that vitamin D levels are associated with the incidence and/or severity of some autoimmune disorders, including type I diabetes mellitus, systemic lupus erythematosus, multiple sclerosis, and inflammatory bowel disease [6]. Recent studies have indicated that vitamin D deficiency can be a significant risk factor for the occurrence of AA [12][13][14][15]. A study by Yılmaz et al. revealed low serum 25(OH)D levels in patients with AA compared with healthy controls [12]. Mahamid et al. found a strong correlation between AA and vitamin D deficiency in a study of 23 patients [13]. In another study, Aksu et al. showed that serum 25(OH)D levels were inversely correlated with the disease severity of AA [14]. In a study based on 156 patients with AA and 148 controls, d'Ovidio et al. found that the insufficiency or deficiency of 25(OH)D was not significantly different between patients with AA and controls. However, a deficiency in 25(OH)D was present in 42.4% of patients, which was significantly higher than the 29.5% observed in healthy controls. In addition, the decreased level of 25(OH)D was not correlated with the pattern or extent of hair loss [15]. Our results do not agree with previous reports demonstrating an association between AA and vitamin D deficiency. In our study, we found that patients had a deficiency of 25(OH)D, but there was no statistically significant difference in serum vitamin D levels between AA patients and healthy controls (p > 0.05). This may be due to the universal tendency toward lower values of 25(OH)D in our geographical area.
Hekimsoy et al. found a high prevalence of vitamin D deficiency (74.9%) and insufficiency (13.8%) in a population-based sample in the Aegean region of Turkey [16]. In a study of 1010 paediatric patients in Turkey, Orun et al. found that deficiency (24.3%) and insufficiency (16.5%) of 25(OH)D were frequent in childhood, especially in the adolescent period [17]. Van der Meer et al. demonstrated that vitamin D status in the Turkish population varied widely, according to sunscreen usage, insufficient intake of vitamin D in the diet, darker skin colours, and the habit of using clothing to cover most of the body [18]. Vitamin D is recognized as the sunlight vitamin; the major source for most people is synthesis in the skin, and more than 90% of the vitamin D requirement comes from casual exposure to sunlight, while natural dietary sources of vitamin D are limited [19]. In Middle Eastern populations living in sunny climates, very low vitamin D levels have been reported, particularly in populations from Lebanon, Iran, Jordan and Turkey [20][21][22]. This may be due to common environmental factors such as latitude, seasonality, pollution, customs or cultural issues, diet, or fortified-food policies. In addition, individual sociocultural and behavioural factors such as clothing, use of sunscreens with a high sun protection factor, sunbathing habits, skin pigmentation, time spent outdoors, and insufficient playgrounds may affect serum vitamin D levels. Our study has a few limitations. The study sample (41 patients and 32 healthy individuals) was small, and blood samples were collected only once, during the late fall and winter months between October and March. It would be useful to evaluate patients at different times of the year to study seasonal variations. Multicentre studies from different geographic areas around Turkey are needed. Conclusions We found decreased serum 25(OH)D levels in patients with AA, but there was no statistically significant difference in serum vitamin D levels between AA patients and healthy controls. Further studies are needed to clarify the association between a deficiency of 25(OH)D and AA. Nevertheless, in our opinion, blood vitamin D levels should be screened in AA patients and, if deficient, oral vitamin D could be added to the AA treatment protocol.
2018-04-03T06:13:54.232Z
2017-05-29T00:00:00.000
{ "year": 2017, "sha1": "07c9e3518829e69cb6cab5fcc9a650301d91cc66", "oa_license": "CCBYNCSA", "oa_url": "https://www.termedia.pl/Journal/-7/pdf-29966-10?filename=25-hydro.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07c9e3518829e69cb6cab5fcc9a650301d91cc66", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13947072
pes2o/s2orc
v3-fos-license
Prospective Investigation of Video Game Use in Children and Subsequent Conduct Disorder and Depression Using Data from the Avon Longitudinal Study of Parents and Children There is increasing public and scientific concern regarding the long-term behavioural effects of video game use in children, but currently little consensus as to the nature of any such relationships. We investigated the relationship between video game use in children, degree of violence in games, and measures of depression and a 6-level banded measure of conduct disorder. Data from the Avon Longitudinal Study of Parents and Children were used. A 3-level measure of game use at age 8/9 years was developed, taking into account degree of violence based on game genre. Associations with conduct disorder and depression, measured at age 15, were investigated using ordinal logistic regression, adjusted for a number of potential confounders. Shoot-em-up games were associated with conduct disorder bands, and with a binary measure of conduct disorder, although the strength of evidence for these associations was weak. A sensitivity analysis comparing those who play competitive games to those who play shoot-em-ups found weak evidence supporting the hypothesis that it is violence rather than competitiveness that is associated with conduct disorder. However this analysis was underpowered, and we cannot rule out the possibility that increasing levels of competition in games may be just as likely to account for the observed associations as violent content. Overall game exposure as indicated by number of games in a household was not related to conduct disorder, nor was any association found between shoot-em-up video game use and depression. Introduction Despite frequent expressions of concern in the media, the behavioural and psychological effects of prolonged video game playing are not well understood. Although it has been claimed recently that there is a consensus among video game researchers [1], the evidence for this statement is not convincing [2,3]. Furthermore, in 2013, 230 scholars [4] wrote to the American Psychological Association expressing concerns that policy statements on media violence would be untenable, as they would be based on a literature that shows considerable inconsistency regarding the nature of suggested links between violent media and antisocial or aggressive behaviour [5][6][7][8][9][10][11][12]. One issue is the lack of consistent and robust methodologies across different studies [13]; in particular, few longitudinal studies looking at the relationship between the prolonged use of video games and adolescent social behaviour have been conducted. A recent three-year longitudinal study of game violence and aggression in children aged 10-14 years, which included measures of personality, family attachment, peer delinquency, family violence and depression, suggested that game violence was not associated with any negative behavioural outcomes, and did not predict youth aggression [14]. Others have suggested that violent game use is associated with subsequent development of aggressive behaviour [15], although the magnitude of these associations is small. Aside from differences in the range and type of potential confounding factors that are adjusted for across such studies, others have made the more general observation that it is difficult to isolate the violent content of video games from other features. 
For example, after systematically analysing games for levels of violence (and matching for factors such as level of difficulty and pace of action), Adachi and Willoughby [5] found experimentally that more competitive games resulted in more aggressive short-term behaviours. Similarly, a 2013 longitudinal study [16] of nearly 1,500 teenagers concluded that playing competitive games predicted higher levels of aggression over a period of 4 years, after adjusting for violent content. There is therefore a clear need to further assess the relative contributions of competitiveness and violent content in games, and the impact these have on negative behavioural outcomes. One theoretical framework that has been developed in an attempt to explain some of these findings is the Catalyst Model [17]. In this model, the role of internalised motivations and biological factors is emphasised, as well as the 'catalysing' effects of environmental factors. Potentially aggressive personalities develop through initial biological dispositions, and environmental stress factors (e.g., a negative family environment) may make a person more likely to engage in aggressive behaviours, and more attracted to playing violent games. Under this model, and unlike other proposed models such as the General Aggression Model [18], violent game use does not cause negative behavioural outcomes. Instead, while aggressive behaviours may be modelled on behaviours seen in games, they would manifest in other ways had the individual not been playing a video game. The same stress factors may also precipitate poor mental health outcomes more generally, given that aggression has been linked to depressive symptoms [19]. It remains unclear how adolescent social behaviour and mental health are affected by video game use. Many studies tend to look at the effects of games generally, while in reality there is a wide range of differences in game style, content and play that may confer differing, even opposite, effects on behaviour. We were therefore interested in whether the level of violent content in games is associated with negative behavioural outcomes. We investigated the association between video game use at age 8/9 years and subsequent measures of conduct disorder and depression at age 15 years, taking into account a wide range of potential confounding factors. We also attempted to assess any relationship between the degree of violence in the games played and any behavioural outcomes. Methods Participants The Avon Longitudinal Study of Parents and Children (ALSPAC) is a prospective, population-based birth cohort study that recruited 14,541 pregnant women living in Avon, UK, with expected delivery dates between 1 April 1991 and 31 December 1992. The cohort has been described in detail previously [20]. The current study is based on the 5,400 young people who completed the Development and Well-Being Assessment (DAWBA) semi-structured interview at the ALSPAC clinic at 15 years of age (4,745 responded to conduct questions and 5,369 to depression questions). Ethics Statement Ethics approval for the study was obtained from the ALSPAC Ethics and Law Committee and the Local Research Ethics Committees. Please note that the study website contains details of all the data available through a fully-searchable data dictionary (http://www.bris.ac.uk/alspac/researchers/data-access/data-dictionary/).
Responses were used to derive a 3-level measure intended to capture whether or not participants played video games and, if they did, the degree of violence in the games played: 0) does not play video games; 1) plays puzzle games; 2) plays "shoot-'em-up" games. If participants played games in more than one category, they were categorised upwards. The number of video games was also assessed: participants were asked "How many computer games do you have at home?" and could respond 0, 1-2, 3-4, 5-9 or 10 or more. A further binary measure was created of those who used only racing or sports games (competitive games), and those who reported using only shoot-em-up games (violent games). Conduct Disorder and Depression Conduct disorder and depression were assessed at age 15 via the DAWBA semi-structured interview. Conduct disorder was assessed via parental report and depression via child self-report. From participants' responses, DAWBA "bands" can be created, corresponding to ordered categorical measures of the likelihood of conduct disorder and depression. These are created using computer algorithms, and have been validated in samples of children in the United Kingdom. The bands contain 6 levels, and are based on the probability of disorder, ranging from very unlikely to probable. As well as the ordered categorical variables, the top two levels in each category can be used to create binary "disorder" variables [21]. The DAWBA has been validated in clinical and community samples, and is a useful assessment for large-scale data collection such as that occurring in cohort studies. DAWBA band scores for parent-rated conduct disorder and depression at age 7 were used for exclusion and for adjustment at baseline, in an attempt to minimise the possibility of reverse causation. Potential Confounders Potential confounders, for which there was a theoretical or empirical basis for considering that they might influence any observed association between video game use and subsequent mental health, were: a) pre-birth confounders, including family history of mental health problems (binary measures assessed via maternal questionnaire); maternal education (a 5-level categorical variable assessed via maternal questionnaire); ancestry (assessed via maternal questionnaire); religiosity (assessed as "belief in a divine power" via maternal questionnaire); family structure during pregnancy (assessed via maternal questionnaire); and offspring sex; and b) childhood confounders, including IQ at age 8 (assessed via the Wechsler Intelligence Scale for Children [22]); whether the child was a victim of bullying at age 8 (a binary measure assessed via parent and child report); and peer problems, emotional problems, and conduct problems in childhood (assessed via the Strengths and Difficulties Questionnaire [23]; analyses of conduct disorder did not adjust for conduct problems, and analyses of depression did not adjust for emotional problems). When we substituted maternal socioeconomic position (a 6-level categorical variable assessed via maternal questionnaire) for maternal education, our results were not substantially altered (results available on request). Statistical Analysis We assessed the relationship between video game use and the DAWBA ordered categorical bands of conduct disorder and depression using ordinal logistic regression, before and after adjustment for confounders.
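As a purely illustrative sketch (the published analyses were run in Stata, and the variable names and synthetic data below are placeholders rather than ALSPAC fields), the following shows how the upwardly categorised 3-level exposure and an ordinal (proportional-odds) logistic regression of a 6-level band on that exposure could be set up.

```python
# Illustrative only: the published analyses used Stata 12. This sketch shows
# (i) the "categorised upwards" 3-level exposure coding and (ii) an ordinal
# (proportional-odds) logistic regression of a 6-level band on that exposure.
# Variable names and the synthetic data are placeholders, not ALSPAC fields.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
plays_puzzle = rng.integers(0, 2, n)
plays_shootemup = rng.integers(0, 2, n)

# Highest category wins: 0 = no games, 1 = puzzle games, 2 = shoot-em-ups.
exposure = np.where(plays_shootemup == 1, 2, np.where(plays_puzzle == 1, 1, 0))

# Synthetic 6-level band (0-5) loosely increasing with exposure, for illustration.
band = np.clip(rng.poisson(0.5 + 0.3 * exposure), 0, 5)

df = pd.DataFrame({"exposure": exposure, "cd_band": band})
model = OrderedModel(df["cd_band"], df[["exposure"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)  # exponentiate the exposure coefficient to get an odds ratio
```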
We excluded participants with pre-existing conduct problems or depression measured at baseline, in order to assess incidence of the outcome, and adjusted for variation in baseline conduct disorder or depression in those who remained. We confirmed that the proportional odds/parallel regression assumption had not been violated using the Brant test [24]. The relationship between video game use and the DAWBA binary conduct disorder and depression variables was assessed using logistic regression, before and after adjustment for confounders. We examined the impact of grouped confounders on the association between video game use and conduct disorder and depression by comparing unadjusted estimates (model 1) with those adjusting for pre-birth confounders (model 2), and those further adjusted for childhood confounders (model 3). Relationships of these potential confounders with video game use were assessed via polychoric correlations (available on request). Analyses were carried out in Stata version 12 (Stata Corp LP, College Station, TX, USA). We also conducted multiple imputation of 100 datasets in order to investigate and account for potential bias introduced due to attrition. Over 50 auxiliary variables were used to make the missing-at-random assumption plausible, including measures related to pre-birth and childhood factors. The main analyses were repeated using imputed exposure and confounder data (imputed sample N = 4,745 for conduct outcomes and 5,369 for depression outcomes). These imputed datasets also accounted for pre-existing conduct problems or depression, as appropriate. Characteristics of Participants Of 14,701 children in the initial full cohort, 5,400 completed at least some of the DAWBA at age 15. From this sample, 2,453 provided video game usage data at age 8/9. The final sample available for analysis consisted of 1,815 children for whom data on the above confounders were available. Of these, a total of 26 participants met the criteria for conduct disorder case status, a proportion which increased slightly by video game exposure category (none: 1 / 0.7%; puzzle games: 11 / 1.2%; shoot-em-ups: 14 / 1.6%). A total of 22 participants met the criteria for depression case status, a proportion which decreased slightly by video game exposure category (none: 2 / 2.5%; puzzle games: 10 / 1.0%; shoot-em-ups: 8 / 0.9%). The characteristics of all participants, grouped by video game exposure category, are presented in Table 1. For the analyses investigating conduct disorder, 11 participants were excluded from the analysis as they had pre-existing conduct problems, or missing data on that question at age 8/9. For analyses investigating depression, 8 were excluded for pre-existing depression at baseline. Conduct Disorder Ordered regression of conduct disorder band (6 levels) at age 15 on genre of game played (none, puzzle, shoot-em-up) at age 8/9 years indicated weak evidence of an association in the unadjusted model (OR = 1.16, 95% CI 0.99 to 1.37, p = 0.067) which was not altered substantially in the partially adjusted (OR = 1.15, 95% CI 0.98 to 1.37, p = 0.095) or fully adjusted (OR = 1.19, 95% CI 1.00 to 1.42, p = 0.050) models (Table 2). That is, in the fully adjusted model, there was a 19% increase in the risk of being in a higher conduct disorder band per categorical change in genre of game played at age 8/9 years, although the statistical evidence for an association was weak.
When conduct disorder case status was used as the outcome, substantially larger odds ratios were seen (as would be expected when using a binary rather than categorical outcome), with an association in the unadjusted model (OR = 1.90, 95% CI 1.04 to 3.48; Table 2). There did not appear to be any association between the number of games played at age 8/9 years (which was not stratified by genre) and conduct disorder band at age 15 in the unadjusted, partially adjusted or fully adjusted models. These results are presented in Table 3. Violent versus Competitive Games A further sensitivity analysis was conducted whereby those selectively playing shoot-em-up games were compared with those selectively playing competitive racing or sport games. Despite larger effect sizes, there was only weak evidence that shoot-em-ups were associated with an increased risk of conduct disorder in comparison to competitive games (Table 4). The effect size observed increased with adjustment for potential confounders. Depression In order to test the specificity of any associations with mental health outcomes, we also looked at whether there was an association between video game use and depression. Ordered regression of genre of game played at age 8/9 years on depression band (6 levels) at age 15 indicated some evidence of a negative association in the unadjusted model (OR = 0.84, 95% CI 0.73 to 0.97, p = 0.016), but after partial adjustment results attenuated to the null (Table 2). The analyses of the multiply imputed datasets were of a smaller magnitude than the complete case findings for conduct disorder outcome, but were not markedly different. Findings for depression were very similar to the complete case (Table 5).
Table 2. Ordered regression of conduct disorder band (6 levels), conduct disorder (case status) and depression band (6 levels) at age 15 and genre of game played (3 levels) at age 8/9 years excluding those with severe pre-existing outcome symptoms (conduct N = 1,804, depression N = 2,498) and further adjusting for pre-existing outcome symptoms in those remaining.
Table 4. Ordered regression of conduct disorder band (6 levels) at age 15 and competitive versus shoot-em-up games, excluding those with pre-existing conduct problems (N = 1,042), further adjusting for pre-existing CD in those remaining. Reference group: competitive games (sports/racing). † Model adjusted for sex, family history, maternal education level, marital status during pregnancy, and religion. ‡ As above, with additional adjustment for experience of bullying, peer problems, emotional adjustment, conduct problems, and Wechsler Intelligence Scale for Children total score at age 8/9 years. doi:10.1371/journal.pone.0147732.t004
Table 5. Ordered regression of conduct disorder band (6 levels) and depression band (6 levels) at age 15 and genre of game played (3 levels) at age 8/9 years excluding those with pre-existing outcome in 100 multiply imputed datasets (conduct disorder N = 4,745, depression N = 5,369).
Discussion Our results indicate that playing video games that are more likely to include violent content (i.e., shoot-em-ups) in childhood is weakly associated with an increased risk of conduct disorder in late adolescence. There was also weak evidence that individuals who selectively play shoot-em-ups differed in risk to those who selectively play competitive games. However, the absolute risk of developing conduct disorder is small, and the modest effect sizes we observed should be interpreted in this context. Overall game exposure, as indicated by number of games in a household, was not related to conduct disorder, nor was any association found between video game use and depression. While our results are broadly in line with findings suggesting that violent game content is associated with increased aggressive tendencies, the associations we observe (and statistical evidence for these) are modest, and do not seem to be consistent with claims that the effects of playing violent video games on aggressive behaviour are of a sizeable magnitude. For example, some have claimed [25] that the magnitude of this effect is larger than the effect of exposure to smoke at work on lung cancer rates; our findings do not support such claims. Although attrition could have biased our complete case findings, analyses using multiply imputed datasets as an attempt to account for this support the complete case analyses. It is worth considering alternative explanations for our findings. One possibility is unmeasured confounding; for example, Ferguson and colleagues [19] show that adjusting for pre-existing emotional, family and social problems removes the apparent association between violent video game use and aggression. Although we adjusted for potential confounders, we cannot be certain that we fully accounted for residual confounding. Another possibility is reverse causation; Przybylski and colleagues [11] suggest that individual differences in levels of dispositional aggression may influence preference for violent game use. Early-onset versus adolescent-onset conduct disorder have been suggested as different subtypes of conduct disorder, with different aetiologies [26]. We removed those identified as having conduct disorder at baseline, and further adjusted for conduct disorder DAWBA band at age 7 in those who remained, but reverse causation remains a possible explanation for our findings. Overall, our data are consistent with the Catalyst Model. Although there is a weak association between playing shoot-em-ups at a young age and subsequent negative behaviours, the effect is small and the statistical evidence for these associations is weak. Moreover, there was only very weak evidence that individuals who selectively played shoot-em-up games (which we considered in our analysis as the more violent genre) differed in risk from those individuals who selectively played competitive games, suggesting that violent video game content alone may not be a sufficient indicator of risk for later aggressive behaviour. Furthermore, given that in the present study we do not consider the impact of media exposure aside from video games, an assumption that any association between violent game use and aggression is causal [27] is inappropriate here. It is also important to consider the nature of the questions asked in the study. Children aged 8 or 9 years old may not know what a 'shoot-em-up' game is, so may not have correctly classified games that they were playing. Participants were given exemplar games for some categories, but not all. Nevertheless, given that there was no 'first/third person shooter' category, it is reasonable to assume that any games in this genre may have been included in the 'shoot-em-up' category.
Although video games carry age classifications that should restrict access to inappropriate games, these games may nevertheless be present in the home due to the presence of older siblings, or parents that play them. However, if it is the case that children aged 8-9 are not playing extremely violent games, one possible interpretation of our results is that they may actually underestimate the association of playing shoot-em-ups with later aggressive behavioural outcomes. It would be interesting to explore this association at a later age (e.g., 15 or 16 years), when children have more potential access to other types of games that may contain more violent content. We attempted to differentiate between types of video game played, and one difference between the game categorisations is the potential amount of violent content within each genre-shoot-em-ups are more likely to contain violence than puzzle games or sports games. Nevertheless, our categorisation of violence may not fully reflect the true nature of the violence in these games. This is a general limitation of all current video game research, and is important because violent content and play styles can vary considerably between games that are broadly categorised as 'violent'. Moreover, the time point at which participants took part in our study means any games they reported playing are now over 15 years old, which should be taken into consideration when using these results in the context of the potential effects of more recent games. As such, our categories may not adequately represent levels of violence in modern games and do not take into account the amount of time played (although it is worth noting that a recent study [28] showed that the amount of time spent playing games did not have any effect on conduct problems). The ways in which video games are now being played has changed considerably since these data were collected, especially with the advent of more social methods of play (e.g., cooperative, smartphone-based apps). Therefore our data may not be representative of gaming today. Moreover, modern video games blur traditional genre boundaries in increasingly complex ways-for example, puzzle games (ostensibly a 'non-violent' category) now range from simple numerical tasks to more graphic crime scene investigations. Given the nature of the data, we were unable to assess the specific games that individuals played. Therefore, there is still a need to look at how the specific content within individual games may impact upon behaviour. Given that there are differences in the literature in outcomes depending on the tested game content, we believe that it is essential that future research pays closer attention to both the specific content within video games, and also the context in which they are played, and moves away from a generalised discussion of 'video game' use. We also attempted to determine whether factors other than violent game content may be associated with aggressive behaviours. Previous studies [5,29] have shown that when competitiveness is matched in violent and non-violent games, violent content alone is not sufficient to affect aggressive behavioural outcomes. Our analysis suggested that there was weak evidence that those individuals who selectively played shoot-em-up games differed in risk of conduct disorder compared to individuals who selectively played competitive sports and racing games (i.e., those who did not report playing shoot-em-ups). 
While we were unable to determine the relative strength of association of competitive game styles and violent content to aggressive behaviours, at the very least our results highlight the need for greater caution in studies which consider the impact of violent content in isolation from other potentially important factors. Specifically, we were unable to rule out the possibility that competitiveness, rather than violence, may underlie the associations we observe (assuming that they are indeed causal). In our study, it may well be the case that increasing levels of competition in the games that our participants played may be just as likely to account for the associations we observed. Future research should therefore explore the relationships between both competitiveness and violent content on subsequent behaviour. In conclusion, our data show a weak association between playing shoot-em-up games at an early age, and an increased risk of conduct disorder in late adolescence. Additionally, our data show weak evidence that those who selectively played shoot-em-ups differed in risk to those who selectively played competitive games; however, our analysis was not sufficiently powered to be able to answer with any certainty whether or not the level of competition present in games accounts for the associations we observed. Larger studies, ideally with richer data on video game use, are required needed to address this question. It is also important to note that most of our sample played shoot-em-up games when they were aged 8, and importantly, only very few met the criteria for conduct disorder case status at age 15. Therefore, the association between violent game use and conduct disorder in fact appears to be weak. We observed no association between violent game use and subsequent depression in adolescence, a result that supports earlier studies [30]. Further studies should more closely examine child, family and situational characteristics that may contribute to any observed behavioural problems. Given this, we caution that our results do not have any necessary policy implications.
2016-05-12T22:15:10.714Z
2016-01-28T00:00:00.000
{ "year": 2016, "sha1": "a92a69b450546ce63a180f9bb498ac959b45c8ea", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0147732&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a92a69b450546ce63a180f9bb498ac959b45c8ea", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
262200762
pes2o/s2orc
v3-fos-license
Quartz-Enhanced Photoacoustic Spectroscopy Assisted by Partial Least-Squares Regression for Multi-Gas Measurements We report on the use of quartz-enhanced photoacoustic spectroscopy (QEPAS) for multi-gas detection. Photoacoustic (PA) spectra of mixtures of water (H2O), ammonia (NH3), and methane (CH4) were measured in the mid-infrared (MIR) wavelength range using a mid-infrared (MIR) optical parametric oscillator (OPO) light source. Highly overlapping absorption spectra are a common challenge for gas spectroscopy. To mitigate this, we used a partial least-squares regression (PLS) method to estimate the mixing ratio and concentrations of the individual gasses. The concentration range explored in the analysis varies from a few parts per million (ppm) to thousands of ppm. Spectra obtained from HITRAN and experimental single-molecule reference spectra of each of the molecular species were acquired and used as training data sets. These spectra were used to generate simulated spectra of the gas mixtures (linear combinations of the reference spectra). Here, in this proof-of-concept experiment, we demonstrate that after an absolute calibration of the QEPAS cell, the PLS analyses could be used to determine concentrations of single molecular species with a relative accuracy within a few % for mixtures of H2O, NH3, and CH4 and with an absolute sensitivity of approximately 300 (±50) ppm/V, 50 (±5) ppm/V, and 5 (±2) ppm/V for water, ammonia, and methane, respectively. This demonstrates that QEPAS assisted by PLS is a powerful approach to estimate concentrations of individual gas components with considerable spectral overlap, which is a typical scenario for real-life adoptions and applications. Introduction Multi-gas detection devices that can detect a wide range of gasses and gas concentrations are highly attractive since they significantly enhance safety in various environments and can help enforce regulations with respect to allowable emission and concentrations of various gasses.There is an unmet need for accurate trustworthy gas detection devices [1][2][3].Multi-gas sensors are valuable for real-life applications, where multiple gasses may be present or where gas concentrations can vary significantly, such as in process-control applications, airborne pollutants, medical diagnostics, and in general for monitoring industrial and urban emissions of greenhouse gasses [4][5][6][7][8][9][10][11].Having a single device for detecting multiple gasses saves time during gas monitoring activities.Instead of having to use multiple detectors sequentially or wait for results from different devices, a multi-gas detection system can provide immediate and simultaneous measurements of several gasses [12,13].This allows for quicker responses to potential gas leaks or unsafe conditions, enabling faster decision-making and remedial actions. 
Optical spectroscopy is particularly useful for gas sensing, including gas concentration estimation [14][15][16].Of the many optical spectroscopic methods developed during the past century, photoacoustic spectroscopy (PAS) [17][18][19][20][21] has attracted considerable interest due to its powerful, yet simple, trace gas detection method and capability to detect multiple gasses with a single device.The PAS method is different from other optical absorptionbased methods, where the absorbed energy translates into kinetic energy, which forms an acoustic wave that can be detected with a simple pressure transducer [17,22,23].A variant of PAS is quartz-enhanced photoacoustic spectroscopy (QEPAS), which was introduced in 2002 [23] and is today an established spectroscopic technique [24][25][26][27][28][29][30].For QEPAS, the generated acoustic wave is detected using a quartz tuning fork (QTF) with a high quality factor (Q > 10 3 at atmospheric pressures) and an eigenfrequency matching the laser modulation frequency [23,31].The PAS/QEPAS technique is, however, not an absolute technique and requires calibration using certified reference gas samples.Further absolute gas concentration measurements with PAS becomes highly nontrivial in complex gas mixtures, and detailed knowledge about the chemical gas composition of the complete gas matrix is needed.This entails that absolute environmental gas concentration measurements can only be achieved upon applying a correction factor to the PAS signal, which will depend on the gas matrix.For example, the presence of water vapor in a gas sample acts as a buffer for the relaxation process and thereby enhances, or diminishes, the generated sound waves [26,[32][33][34][35][36].Furthermore, in many situations, multi-gas measurements are made difficult, if not impossible, due to strongly overlapping spectra.Therefore, most experimental PAS demonstrations have been conducted using a single gas in a simple N 2 matrix, and only recently have applications for real-life situations been addressed [13,19,26].However, it has been demonstrated that both identification and concentration estimation can be made possible using multivariate analysis (MVA), such as partial least-squares regression (PLS) together with gas matrix correction factors [13,37].PLS is a machine learning technique that is widely used for regression and classification tasks.PLS combines elements of both principal component analysis (PCA) and multiple linear regression to handle situations where there are high-dimensional datasets with multicollinearity. 
In this work, we demonstrate QEPAS measurements of "complex" gas mixtures with 0-12,000 ppm/V water vapor (H 2 O), 2-100 ppm/V methane (CH 4 ), and 50-300 ppm/V (ammonia).The measurements were all acquired in the mid-infrared (MIR) region from 2.85 µm to 3.50 µm, enabled by a home-built wavelength-tunable optical parametric oscillator (OPO).Note that our system can scan a much broader wavelength range and is thus capable of exciting more trace gasses with the same setup.To estimate the mixing ratios and concentrations, a PLS model was trained with both HITRAN spectra and experimental PAS spectra for single components and synthetic spectra of mixtures and tested on experimental acquired PAS spectra of different mixtures of gasses.The PLS algorithm was trained to estimate the gas mixture ratio.Hereby, the concentration of all gasses in the mixture can be estimated by having just one known gas concentration.The motivation for using HITRAN spectra as training data is that instead of acquiring PAS training data, which can involve measuring many calibration data of different concentrations and be time-consuming, one simply downloads spectra from the HITRAN database and trains the PLS model.Thereby, any gas mixtures (within the wavelength range available) can, in principle, be identified, and mixing ratios can be estimated. We are not the first group to use PLS with QEPAS.The validity of the method has been shown in the literature for QEPAS and PAS spectroscopy [13,[38][39][40] and was verified to establish well-fitting spectra for both two-gas mixtures and three-gas mixtures with highly overlapping spectra.However, we claim that we, for the first time, apply PLS on PAS spectra acquired in very large spectral ranges, generated by a fast ns tunable MIR-OPO, using completely unknown gas concentrations in a realistic environment.Our work should be seen as an extension of the work presented in Refs.[13,39,40].Additional differences between our work and those of Refs.[13,39,40] are direct amplitude measurements with the QEPAS by repetition rate synchronization and the use of both HITRAN and experimental PAS spectra for training data.In our work, the spectra are acquired in a much broader wavelength range; thus, more spectral lines are used in our PLS method, which provide more information about the complete gas matrix and makes the PLS analysis very sensitive to any potential pollutants that might influence the estimation.This adds to the complexity of the analysis compared to Refs.[13,38].In Refs.[13,38], 2-3 PLS components are used, while we used 50 PLS components, which was found to be optimal for this wavelength range and gas composition.We believe that using PLS methods in a narrow wavelength range and with only a few PLS components as in Refs.[13,38], one needs to have prior knowledge about the gas matrix under investigation, or the PLS method may result in large uncertainties.This is not the case in our analysis since most gasses present in the atmosphere have fundamental absorption bands in the scanning range of our MIR-OPO.Our method allows us to identify other molecular components the gas matrix contains and subsequently add all observed spectra to our PLS method.We noted from our PLS analysis our samples also contained CO 2 ; however, it was found that CO 2 did not have a noticeable effect on the amount of gas estimation or on the PAS dynamics.CO 2 was therefore removed from the PLS analysis.Finally, we applied absolute calibration of the QEPAS sensor using the PLS method, taking into 
account molecular relaxation effects and applying PLS to a "realistic" environment with completely unknown gas concentrations and with all natural gasses present together with enriched concentrations of methane and ammonia, with 2-3 orders of magnitude difference in concentrations. Experimental Setup The experimental setup is shown in Figure 1.The main parts in the setup include a homebuilt mid-infrared (MIR) nanosecond pulsed OPO, a QEPAS sensor module (ADM01, Thorlabs, Newton, NJ, USA), optical detectors for power measurement, humidity/temperature/pressure sensors, a mass-flow control system, and a lock-in amplifier and an oscilloscope for data acquisition.The MIR pump source is based on an actively Q-switched nanosecond Nd:YAG pump laser (BrightSolution, Pavia, Italy), which emits 15 ns pulses at a center wavelength of 1064 nm with a repetition rate of 12.457 kHz matching the resonant frequency of the QEPAS cell.The 1064 nm pulses are focused into a 40 mm long fan-out structured periodically poled lithium niobate crystal (PPLN) (HC Photonics) placed inside a 55 mm long linear cavity with a waist of approximately 150 µm.By translating the PPLN crystal with a step motor, MIR light from 2.85 µm to 3.55 µm with 50 mW of mean optical output power was generated.This wavelength range matches ro-vibrational lines of water (H 2 O), ammonia (NH 3 ), and methane (CH 4 ).More details on the MIR-OPO can be found in [26].The QEPAS module contains a quartz tuning fork (QTF) with an eigenfrequency of f 0 = 12, 457 Hz and a quality factor of ~5300 ± 50 at 1 atm [31].The QTF is piezo-electrically active in the mechanical mode for which the two prongs oscillate 180 degrees out of phase (asymmetrical stretching mode) [23,29].Acoustic coupling is further improved by two microresonator (MR) tubes each having a length of 12.4 mm.In-and out-coupling of the MIR light through the module happens through two BaF 2 windows with a combined transmittance of ~0.9 [18,23,[26][27][28][29]. The gas-flow control was realized using triplet mass-flow controllers (Brooks 0254, MFCs).Two MFCs were used for setting the in-flow rate of dry air/N 2 and 100 ppmV CH 4 in N 2 .Lab air and ammonia (NH 3 ) gas flow was combined with a valve-controlled inlet that enables suction of wet laboratory air and NH 3 into the QEPAS cell using a mini vacuum pump with variable flow rate.The humidity was measured with a commercial humidity sensor (Extech RH25, measurement accuracy 0.3%) and verified by fitting HITRAN spectra to the acquired PAS spectra, while the ammonia had a completely unknown concentration.The combined gas flow was led through a third MFC, which was used to monitor and log the total gas flow to the QEPAS module, thus estimating the concentrations of the mixed gasses with an uncertainty of ±5%.The overall gas flow was always kept at 10 mL/minute through MFC3.Data processing was enabled by a lock-in amplifier receiving the electrical local oscillator signal from the active laser Q-switch with an integration time of 30 ms.The lock-in amplifier demodulates the PA output signal of the transimpedance amplifier built in the QEPAS using a 1-f configuration (i.e., amplitude modulation) [12].The output from the lock-in amplifier was digitized using a fast 12-bit oscilloscope, and spectra in the 2.85-3.5 µm were obtained as a function of scanning time, as shown in Figure 2. 
Partial Least-Squares Regression Partial least squares (PLS) is a machine learning technique that is widely used for regression and classification tasks. PLS is a statistical method that finds a linear regression model by projecting the predicted variables and the observable variables to a new space [37]. PLS can be thought of as a hybrid between multiple linear regression and principal component analysis (PCA). In PLS, the predictor variables are first transformed into a set of latent variables using singular value decomposition. These latent variables are then used to predict the response variable in a multiple linear regression model. Thus, PLS is different from simple linear regression in that it projects the variables to a new space, where the variance is maximized along one direction in the variable hyperplane. This makes the method more powerful for fitting spectra than other fitting approaches. For example, PCA is not well suited for identifying gas mixtures in complex spectra where one gas component completely dominates the measured spectrum; here the PLS method performs well and can establish the actual mixture ratio with high fidelity. However, the PLS regression method is not without its limitations. It is sensitive to the scaling of the predictor variables and can be influenced by outliers in the data. It is also not suitable for cases where the predictor variables are highly correlated with the response. The general form of the PLS regression model can be written as Ŷ = XB + C, where Ŷ is the predicted response, X is the matrix of predictor variables, B is the matrix of regression coefficients, and C is the intercept term. The matrix B is obtained by performing singular value decomposition on the predictor matrix X. In this work, the PLS method is used to estimate the mixing ratio, and thus the gas concentrations, of the three gasses (H2O, NH3, and CH4), which have a very high degree of spectral overlap. The PLS method used here is an implementation from the scikit-learn community [41]. A training data set was developed using either the measured single-gas PAS spectra or HITRAN spectra as reference spectra, as shown in Figure 2. The main motivation for using HITRAN spectra is that instead of acquiring PAS training data, which can be time-consuming, one simply downloads spectra from the HITRAN database to train the PLS model. This will in principle enable the estimation of any gasses (within the wavelength range available) and mixing ratios. The PLS algorithm was trained with a data set of 5000 artificial spectra composed of the three reference spectra using random ratios of the gas mixture, with a small amount of white noise added. Thus, the algorithm was trained to estimate the gas mixture ratio of the three gasses. The PAS technique is not an absolute technique, and therefore, calibration of the acquired PAS signal is required against a known gas sample. We used the obtained lock-in voltage signal for 100 ppm/V methane in N2 to set the expected PAS voltage level for 100 ppm/V water and ammonia. The PAS data and HITRAN data can now be used to estimate the concentration of unknown mixtures of these gasses and be used to train the PLS method. These are shown in Figure 2.
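Since the PLS implementation used here is the scikit-learn one [41] and the training set consists of 5000 random linear combinations of the three reference spectra with added noise, the workflow can be sketched roughly as follows; the reference spectra, noise level, and normalization in this sketch are placeholders rather than the exact values used in this work.

    # Rough sketch of the synthetic-mixture training described above; the spectra here are
    # placeholders (in practice, measured single-gas PAS spectra or HITRAN data are used).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelength = np.linspace(2.85, 3.50, 1300)                  # probe wavelength grid, um

    def placeholder_reference(center, width):
        return np.exp(-0.5 * ((wavelength - center) / width) ** 2)

    refs = np.vstack([placeholder_reference(2.95, 0.05),        # "water"
                      placeholder_reference(3.00, 0.03),        # "ammonia"
                      placeholder_reference(3.31, 0.02)])       # "methane"

    ratios = rng.uniform(0.0, 1.0, size=(5000, 3))              # random mixing ratios
    X_train = ratios @ refs + rng.normal(0.0, 0.01, (5000, wavelength.size))
    X_train /= np.linalg.norm(X_train, axis=1, keepdims=True)   # normalise each spectrum

    pls = PLSRegression(n_components=50).fit(X_train, ratios)   # 50 components, as in this work

    # Estimate the mixing ratio of a measured (normalised) spectrum:
    mixture = 0.6 * refs[0] + 0.3 * refs[1] + 0.1 * refs[2]
    mixture /= np.linalg.norm(mixture)
    print(pls.predict(mixture.reshape(1, -1)))

Because only mixing ratios are predicted, a single absolute calibration point (here the 100 ppm/V methane reference) is enough to convert the predicted ratios into concentrations.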
Results The acquired PAS spectra for the training of the PLS are shown in Figure 2 together with absorption spectra from HITRAN. The HITRAN data were convolved with a Gaussian instrument profile of 5 cm−1, corresponding to the spectral resolution bandwidth of our QEPAS sensor. The pressure used for these simulations was 1013 mbar at a temperature of 25 °C. These parameters closely resemble the experimental conditions in our lab. Prior to the PLS analysis, the experimental data were post-processed to make them useful for training and validation, as well as to be compared with the PLS model trained with the corresponding HITRAN data. The raw spectral data were acquired as a function of time, and subsequently, the time x-axis was transformed into a wavelength x-axis using the Sellmeier equation for PPLN [42]. To achieve this, we merely measured the start and end wavelengths of the scan. The absolute wavelength of the probe light was measured using a calibrated optical spectrum analyzer (OSA205C, Thorlabs, Newton, NJ, USA) to establish the start and end of the wavelength x-axis. It was confirmed that the wavelength of the scans conforms very well with the Sellmeier equation. However, we found that some of the acquired spectra had small deviations and nonlinear behavior in the transformation from time to wavelength, which was related to heating effects of the PPLN crystal and acceleration and deceleration of the stepping motor. To make the experimental data useful and give reliable predictions using both experimental PAS data and HITRAN for training, we therefore shifted the wavelength axis at the start and end to secure maximum overlap between HITRAN and experimental PAS data. Continuous measurement of the wavelength is being implemented for future work to improve this.
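As a rough illustration of the instrument-profile convolution mentioned above (not the processing code used here; the wavenumber grid spacing and the interpretation of the 5 cm−1 width as an FWHM are assumptions), a HITRAN spectrum on a uniform grid can be smoothed as follows:

    # Smooth a HITRAN absorption spectrum with a ~5 cm^-1 Gaussian instrument profile.
    # The grid spacing (0.05 cm^-1) and treating 5 cm^-1 as the FWHM are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    wavenumber = np.arange(2850.0, 3510.0, 0.05)     # cm^-1
    absorption = np.zeros_like(wavenumber)           # placeholder for HITRAN line data

    fwhm = 5.0                                       # cm^-1
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = gaussian_filter1d(absorption, sigma / 0.05)   # sigma converted to grid points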
Performance and Calibration of the PLS Method The PAS spectra in Figure 3 show test data containing gas mixtures of water and methane with known concentrations.The ambient concentration of water in the lab air was approximately 10,400 ppm/V measured with a humidity sensor.The methane concentration with an uncertainty of ±5% was estimated using the gas flow into the QEPAS.For the data shown in Figure 3, we used flow ratios of 50/50, 25/75, and 10/90, resulting in methane concentrations of 50 (±2.5)ppm/V, 25 (±1.25)ppm/V, and 10 (±0.5) ppm/V, respectively.In Figure 3 (black traces), HITRAN data are directly fitted to the PAS data to estimate the concentrations of water and methane.The concentration of the water and methane was also estimated by the PLS algorithm trained on a linear combination of 5000 experimental training PAS spectra.The training data are shown in Figure 2a,c and superimposed with Gaussian noise to mimic experimental variations.The experimental spectra were normalized in the prediction step, which was found to significantly improve the prediction reliability of the PLS implementation.We found that both the HITRAN and PLS fitting overestimate the methane concentrations.The reason for this is that the measured PA signal is gas-matrix-dependent, meaning that PA-based trace gas sensors, while they can be extremely sensitive, can quickly become inaccurate without adequate calibration of the necessary gas matrix corrections.Most prominent, and relevant for reallife adaptation of PA sensors, is the presence of water vapor, which acts as a catalyst for the molecular relaxation process.The ro-vibrational relaxation is a result of inelastic scattering between the excited molecule and other molecules in the gas matrix.The existence of water vapor strongly mediates the relaxation process, which can result in a misleading enhancement or, in some cases, attenuation of the PA signal [26,34].This demands that absolute environmental gas concentration measurements can only be achieved upon correction for the water content. 
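A minimal sketch of how such a water-matrix correction could be applied to a raw methane estimate is shown below; the quadratic enhancement factor and the roughly 2 ppm/V ambient-methane offset are established in the next subsection, and the coefficients, sign conventions, and order of the corrections in this sketch are placeholders rather than measured calibration values.

    # Placeholder correction of a raw methane estimate for the humidity-induced PA
    # enhancement and the ambient methane in lab air. Coefficients are illustrative only.
    import numpy as np

    # Quadratic enhancement factor vs absolute humidity (ppm/V); in practice obtained by
    # fitting calibration data, e.g. with np.polyfit(humidity, enhancement, 2).
    enh_coeffs = np.array([8.0e-9, 0.0, 1.0])        # gives ~1.8 at 10,000 ppm/V humidity

    def corrected_methane(raw_ppm, humidity_ppm, ambient_ppm=2.0):
        enhancement = np.polyval(enh_coeffs, humidity_ppm)
        # The order and sign of the two corrections depend on the fitting method (see text).
        return raw_ppm / enhancement - ambient_ppm

    print(corrected_methane(raw_ppm=90.0, humidity_ppm=10_000.0))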
Water Correction Factors for Absolute Calibration of the PLS Method Figure 3 demonstrates how the presence of water molecules enhances the PA signal of methane.In the literature, the enhancement has previously been demonstrated to be both gas-and wavelength-dependent, and the factor does not necessarily seem to be a simple linear function of absolute humidity, as also demonstrated here [26].The measured enhancement factor is depicted in Figure 4a as a function of absolute humidity.The humidity was measured by a relative humidity sensor (Extech RH25, measurement accuracy 3%).Using a high-end calibrated humidity sensor, we confirmed that the measurement accuracy is indeed within 3%.We found that while acquiring the experimental PAS data, the humidity fluctuates slightly in our lab between 9000 ppm/V and 12,000 ppm/V.We applied three different methods for estimating the water enhancement factor: by fitting directly with the PAS voltage signal from the lock-in amplifier followed by normalization to the voltage level for 100 ppm/V-N 2 concentration, by fitting with HITRAN data, and by using the PLS training methods.For this, we used two models: one trained with 5000 experimental PAS spectra and a second model trained with 2500 experimental spectra + 2500 HITRAN spectra.We found that within the range of humidity used in this work, the enhancement factor shows quadratic growth, and the PA signal for methane is seen to be enhanced by more than a factor of 1.75 as a result of an absolute humidity level of approximately 10,000 ppm/V.Note that similar quadratic behavior has also been demonstrated for ammonia detection with PAS [43].The standard deviation (STD) across all four fitting methods is for a 10/90 flow ratio: 711 ppm/V for water and 2.5 ppm/V for methane, for a 25/75 flow ratio: 290 ppm/V for water and 2.6 ppm/V for methane, and for a 50/50 flow ratio: 540 ppm/V for water and 2.9 ppm/V for methane.Figures 3 and 4a strongly underline that absolute PA-based concentration measurements in humidity conditions necessarily involve a highly sensitive measurement of the absolute humidity level.Thus, a sensitive humidity measurement can either be performed directly using the PA effect or, more conveniently, by embedding humidity sensors [26,33].Besides estimating the water enhancement factor, we also estimated the natural abundance of methane in our lab air.The lab air was used for diluting the ammonia and methane gasses and to mimic a real-life environment with varying humidity.Not shown here, but from the comparison of HITRAN spectra with the PAS spectra, we found that the lab air also contains approximately 2 ppm/V of methane.This offset needs to be considered since the PLS method will underestimate the concentration while fitting directly with HITRAN (or the PAS voltage) will overestimate the concentration of methane.In Figure 4b, we applied the compensation and found that the STD for the estimation of methane across all fitting methods is indeed improved for all three flow cases.Explicitly, we found an STD for a 10/90 flow ratio of 1 ppm/V, for a 25/75 flow ratio of 1.2 ppm/V for methane, and for a 50/50 flow ratio of 2.5 ppm/V, respectively.Taking the ambient methane into account, the estimated concentrations in Figure 3a are as follows: for HITRAN, fitting 58 (±3) ppm/V, and for PLS, fitting 53 (±3) ppm/V; according to Figure 3b, for HITRAN, fitting 34 (±2) ppm/V, and for PLS, fitting 31 (±2) ppm/V; and according to Figure 3c, for HITRAN, fitting 17 (±1) ppm/V, and for PLS, fitting 13 (±1) 
ppm/V. Within the measurement uncertainties, both fitting methods are found to agree well. Thus, the estimated gas mixture ratios show good correspondence with the experimentally estimated mixtures. However, the trained PLS algorithm seems in general to give a slightly lower estimate of the concentrations compared to the direct fitting with HITRAN data. This is because the HITRAN data do not consider the background signal that the training and test data contain. This can clearly be seen using 5000 combinations of HITRAN spectra for training the PLS algorithm, where we found that the estimation becomes very inaccurate and gives concentration estimates of 8457/76 ppm/V (Figure 3a), 12,560/66 ppm/V (Figure 3b), and 15,376/50 ppm/V for water/methane (Figure 3c), respectively. Thus, to enable the direct use of HITRAN data for training the PLS algorithm and use them on experimental PAS data, one needs to baseline correct the experimental data. The baseline correction involves removing the underlying baseline signal that can obscure the true spectral features. Various mathematical algorithms, such as polynomial fitting, were employed and tested for baseline correction. The goal was to improve peak detection and enable more reliable quantification of the concentrations using the HITRAN spectra. However, in practice, we found that using the experimental PAS spectra for training always gave higher accuracy. We notice that using different trained PLS models on the same training data gives slightly different results, as seen in Figure 4. This is due to the addition of random Gaussian noise that was superimposed on the simulated spectra. For the 5 PLS models trained on experimental spectra, the water estimates had an STD of 12% in the 1000-5000 ppm/V range and 4% in the 7000-12,000 ppm/V range, respectively. For methane, the PLS concentration estimations have a relative uncertainty with an STD of 6% at the 10 ppm/V level and an STD of 2% for concentrations higher than 20 ppm/V. However, we are confident that our PLS algorithm, together with the water correction and methane compensation, gives the correct estimates of the mixing ratios between water and methane within the uncertainties of our measurements and training of the models. In the following section, we will use the calibration of the PLS algorithm to estimate the unknown mixtures of water, ammonia, and methane. Performance of the PLS Method on Unknown Mixtures of Water, Ammonia, and Methane The PAS spectra shown in Figure 5 depict the test data containing unknown mixtures of water, ammonia, and methane. As can be seen, the mixtures with the three gas components have high spectral overlap. In the following, we make a benchmark test of the relative accuracy of the PLS method compared with direct fitting with HITRAN spectra. The two main effects that can influence the performance and accuracy of the PLS method are a high concentration of water relative to the ammonia and methane concentrations and deviations in the wavelength axis compared to the training spectra. The four gas mixtures displayed in Figure 5 were chosen as extreme cases to evaluate the performance of the PLS algorithm. The PLS method was again implemented using a standard training-test approach. The training data set was built starting from the three reference spectra for water, ammonia, and methane, as shown in Figure 2.
The data set was then expanded by means of calculating a linear combination of 5000 reference spectra by superimposing Gaussian noise distributions.Based on the above calibration of the PLS method for the known concentrations we can estimate the concentrations of the unknown mixtures.We assume that the water enhancement factor of ammonia has a similar behavior as methane.Figure 5a shows data for a low concentration of water and relatively similar PAS signal strength for ammonia and methane with minimal deviation in the wavelength (less than 0.2 nm) axis compared to the training/HITRAN data.Keep in mind that the Q-branch peak of methane is 10.4 times higher than the Q-branch of ammonia (as shown in Figure 2).We found that the PLS method and HITRAN fitting agree very well within the measurement uncertainties, as shown in Figure 6.The overall estimation of the water, ammonia, and methane concentrations with the PLS method has an accuracy of 92%, 86%, and 92%, respectively, relative to the HITRAN fitting.In Figure 5b, the water content is increased to 9000 ppm/V, thereby diluting the concentration of ammonia and methane.In this case, we found that the relative accuracy of the estimation of water concentration increased to 94% compared to the HITRAN fitting.Meanwhile, the accuracy of the ammonia and methane concentrations decreased to 62% and 48%, respectively.This decrease in accuracy is due to the relatively low concentration of ammonia and methane.For methane, it is mainly because of the mismatch between the wavelength axis of the test spectra and the training spectra.Comparing Figure 5b with the training data in Figure 2c, we find that the experimental PAS spectra are shifted approximately 2 nm to the left in the 3.2 to 3.35 µm range.This can be seen from Figure 5b and is the reason that the PLS fitting for methane becomes very inaccurate and underestimates the presence of methane.Note that the experimental conditions are very similar for the experimental PAS spectra shown in Figure 3b, where the wavelength deviation is less than 0.2 nm.We apply the same three-gas-trained PLS methods to the data in Figure 3b and find concentrations of 7196 ppm/V, −4 ppm/V, and 30 ppm/V for water, ammonia, and water, respectively.Thus, we conclude that the main reason for the inaccuracies shown in Figure 5b is the deviation of the wavelength axis compared to the training/HITRAN spectra, and that in order for the PLS to estimate with high accuracy, the deviation in wavelength should be less than 1 nm.In Figure 5c,d, the deviation is less than 0.5 nm, and we continue with the investigation of how a very asymmetric mixing ratio affects the accuracy of the PLS method.In this case, the concentrations of ammonia and methane become relatively low compared to the water content.The water spectra will therefore dominate the trained PLS model due to the high level of spectral overlap with ammonia and methane.To test this behavior and make an estimation of the PLS method's lower accuracy (sensitivity) limit, we increased the amount of water and ammonia, thus diluting the content of methane at the same time.This is depicted in Figure 5c, where we find that the PLS method is making a completely wrong estimation of the concentration by estimating it to be negative (−5 ppm/V).Due to this apparent wrong estimation of the methane concentration, the PLS method predicts a 3% higher concentration of water relative to the HITRAN fitting.This fitting behavior is in general very different from the fitting behavior that we have seen 
in the previous cases in Figures 3 and 4, where the HITRAN fitting always estimated higher concentrations. This suggests that for high concentrations of water and for relatively low concentrations of ammonia and methane, the PLS algorithm uses the ammonia and methane coefficients as a parameter to make a better fit of the water spectra. This is clearly seen from the fact that the relative precision between the PLS method and HITRAN fitting becomes higher in this case. Therefore, to estimate the accuracy and sensitivity, we diluted the concentration of water and ammonia by purging with methane, as shown in Figure 5d. It can be seen that the PLS method now gives only positive coefficients for the estimation of the concentrations. However, the estimations of ammonia and methane are still underestimated relative to the HITRAN fitting, while the water is overestimated by 2%. If we apply the enhancement and correction factors found in Figure 4 for the calibration of absolute concentrations of methane, we find a methane concentration of 3 (±2) ppm/V and 5 (±2) ppm/V for the PLS method and HITRAN fitting, respectively. Thus, the estimations of the concentrations with the two methods are in good agreement within the measurement uncertainties. By applying the same water correction factor from Figure 4b to the estimated ammonia concentration, we find that the absolute concentration should be approximately 50 (±10) ppm/V and 67 (±10) ppm/V for the PLS method and HITRAN fitting, respectively. We conclude from these measurements and tests that the absolute sensitivity of the three-gas trained PLS is approximately 300 (±50) ppm/V, 50 (±5) ppm/V, and 5 (±2) ppm/V for water, ammonia, and methane, respectively. Discussion QEPAS assisted by PLS has been demonstrated previously and reported in Refs. [13,38,40]. Compared to this work, we see our work as a natural extension, and we believe that our work is important and novel. With our analysis, we address and ask the right questions for QEPAS assisted by PLS to become a practical tool for real-life adaptations. For example, we are investigating a much wider scanning range with gas concentrations varying by two orders of magnitude. Using the wider scanning range makes the PLS analysis very sensitive to any potential pollutants that might influence the estimation. This results in more than 50 PLS components being needed, compared to the 2-3 PLS components in Refs. [13,38,40]. However, this also adds to the complexity of the analysis, and we show how different parameters affect the analysis, such as small jitters in the wavelength x-axis, and demonstrate how detrimental these can be for the concentration estimation. Compared to previous work, we are testing a novel method for training and calibration of the PLS analysis and estimating the performance of the PLS method using pure HITRAN data, experimental QEPAS data, and a mixture of the two for training. The PLS algorithm was trained to estimate the gas mixture ratio. Thus, by proper normalization of the data, the concentration of all gasses in the mixture can then be estimated by having just one known gas concentration, in our case, a methane reference gas with a concentration of 100 ppm/V. The performance of the different trained PLS models is summarized in Figure 6. The error bars are a sum of the uncertainties of the gas preparation, the PAS measurement, and the PLS analysis. Three PLS models were trained, as shown by the color codes (red columns: trained with 5000 experimental PAS spectra; blue columns: 2500 experimental PAS spectra + 2500 HITRAN spectra; green columns: 5000 HITRAN spectra). Ideally, we aimed in this pilot study to apply HITRAN spectra for the training of the PLS model. However, we found that the direct use of HITRAN becomes highly inaccurate and overestimates the concentrations, as shown in Figure 6. This is because the HITRAN spectra do not consider the relaxation mechanisms, which are both gas- and wavelength-dependent, and the potential error sources such as electrical noise and background signals that arise from stray light. The relaxation mechanisms can, by proper calibration as shown in Figure 4, indeed be accounted for, as also demonstrated here. The noise sources and background signals are more tedious to compensate for and are the main reason for the failure to train the PLS algorithm with only HITRAN spectra. The background signals of the experimental PAS spectra change slightly over time. However, we tried to compensate for this by realigning the MIR laser light through the QEPAS cell during a measurement series. As an alternative method, we also investigated baseline subtraction using higher-order polynomial fitting. This resulted in a slightly higher overall uncertainty in the estimation of the concentrations, and in general, we found that the direct use of the HITRAN spectra overestimated the concentration by 10-15%. Conclusions In conclusion, the combination of QEPAS with PLS was proven to be a strong method for the estimation of gas concentrations in complex mixtures. The mixing ratios and concentrations were estimated using PLS models trained with both HITRAN and experimental PAS spectra for single components and synthetic spectra of mixtures. The training data was built starting from the single reference spectra and was enlarged by means of simulated spectra, calculated as linear combinations of the reference ones. Gaussian noise was superimposed on the simulated spectra to consider the experimental noise involved in the measurements. An absolute sensitivity, after correction for water enhancement factors and ambient methane, of approximately 500 (±50) ppm/V, 50 (±5) ppm/V, and 5 (±2) ppm/V for water, ammonia, and methane, respectively, was demonstrated. By employing PLS with PAS sensors, concentration levels of many different target gasses can be estimated simultaneously, based on their unique spectral fingerprints. We foresee that this approach holds great promise for many different applications, such as environmental monitoring and industrial safety, providing a non-invasive and efficient solution for detecting and quantifying gasses in real time, and it might therefore become a valuable tool in advancing gas detection technologies and addressing critical challenges in various fields.
Figure 1. Block diagram of the main parts of the experimental setup. Actively Q-switched 1064 nm ns pump laser. QEPAS: quartz-enhanced PAS. MIR-OPO: mid-infrared (MIR) pulsed optical parametric oscillator. MFC: mass-flow controller. MIR filter for removing the pump. DAQ: signal generator for the trigger signal for the 1064 nm pump laser and for generating the local oscillator (both at 12.457 kHz) for the lock-in amplifier. The downmixed signal is then acquired by an oscilloscope. The generated MIR wavelength is measured with an optical spectrometer.
Figure 3. Test data containing a mixture of known water and methane concentrations. The red curves are the experimental PAS data, and the black curves are the HITRAN-fitted spectra with corresponding coefficients. The blue traces are the fitted PLS method with coefficients. The model was trained on combinations of 5000 experimental PAS spectra with superimposed Gaussian noise. The concentration of methane was controlled using the MFC and the water humidity was measured by the humidity sensor: (a) 50 ppm/V (±2.5 ppm) of methane and 5000 ppm/V (±250 ppm) of water humidity; (b) 25 ppm/V (±1.25 ppm) and 7500 ppm/V (±375 ppm) of absolute humidity; (c) 10 ppm/V (±0.5 ppm) of methane and 9000 ppm/V (±450 ppm) of absolute humidity. (Axes: PAS signal from the lock-in amplifier [Volt] versus wavelength [µm].)
Figure 4. (a) Enhancement factor of the methane signal as a function of absolute humidity. Four flow settings (ratios between the methane and lab air) were used for the data: 100/0, 10/90, 25/75, and 50/50, respectively. Three different methods are used to estimate the enhancement factor, as shown with the color code. The shaded areas show the uncertainty area for the estimation of water and methane concentrations across the three different methods. The fitted blue curves are quadratic functions. (b) Same data as in (a) compensated for 2 ppm of ambient methane. Note that the uncertainty area is decreased by compensation for the ambient methane.
Figure 5. Test data for the PLS analysis of unknown concentrations of water, ammonia, and methane. The red traces are the experimental PAS spectra, and the black are the HITRAN spectra. The HITRAN spectra are fitted with the coefficients shown in black typing. The blue traces are the fitted PLS method using the coefficients shown in blue typing. The PLS model was trained on combinations of 5000 experimental PAS spectra with a superimposed Gaussian noise (0.01). (Axes: PAS signal from the lock-in amplifier [Volt] versus wavelength [µm].)
Figure 6. Summary of the different PLS models for estimating the unknown concentrations of (a) water, (b) ammonia, and (c) methane. The black columns are the fitted coefficients for the HITRAN spectra for comparison. Red columns: PLS trained on experimental PAS spectra. Blue columns: mix of HITRAN and experimental spectra. Green columns: HITRAN spectra. The error bars are given by the estimated measurement and systematic uncertainties.
2023-09-24T15:38:41.887Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "14126ca8b8d7520a2bd82667fb980194572e27e4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/18/7984/pdf?version=1695191265", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c18591728c4673692f0a68f757f548e0af8e7ab4", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
210826547
pes2o/s2orc
v3-fos-license
An Energy-Based Concept for Yielding of Multidirectional FRP Composite Structures Using a Mesoscale Lamina Damage Model Composite structures are made of multidirectional (MD) fiber-reinforced polymer (FRP) composite laminates, which fail due to multiple damages in matrix, interface, and fiber constituents at different scales. The yield point of a unidirectional FRP composite is assumed as the lamina strength limit representing the damage initiation phenomena, while yielding of MD composites in structural applications are not quantified due to the complexity of the sequence of damage evolutions in different laminas dependent on their angle and specification. This paper proposes a new method to identify the yield point of MD composite structures based on the evolution of the damage dissipation energy (DDE). Such a characteristic evolution curve is computed using a validated finite element model with a mesoscale damage-based constitutive model that accounts for different matrix and fiber failure modes in angle lamina. The yield point of composite structures is identified to correspond to a 5% increase in the initial slope of the DDE evolution curve. The yield points of three antisymmetric MD FRP composite structures under flexural loading conditions are established based on Hashin unidirectional (UD) criteria and the energy-based criterion. It is shown that the new energy concept provides a significantly larger safe limit of yield for MD composite structures compared to UD criteria, in which the accumulation of energy dissipated due to all damage modes is less than 5% of the fracture energy required for the structural rupture. Introduction The ever-increasing use of advanced polymer composites such as fiber-reinforced polymer (FRP) composites, nanocomposites, etc., as high strength-to-weight ratio and high-stiffness structural materials in advanced industrial applications [1][2][3], presents a unique design challenge with highly anisotropic material responses of the composites [4][5][6][7][8][9]. The FRP composites are essentially brittle with no plastic deformation, thus adequately described using a bilinear stress-strain curves with elastic and the early stage of the loading [10,43,45,51]. In some cases, the MD FRP composite structure was able to sustain up to 10-fold higher loading above the level corresponding to the onset of matrix and interface damages [10,43,45]. Accordingly, considering the UD lamina level yield criteria to estimate the failure of an MD composite structure normally results in the prediction of structural failure at 5%-10% maximum load capacity of the structure [2,10,43,44,52], which prevents the optimum design of light composite structures. Therefore, material and loading-related parameters should be developed to provide a consistent identification of the yield point for the MD FRP composite structures [2,5,10,[43][44][45]. In this study, an energy concept is developed based on the energy dissipated during the inelastic deformation process of lamina using a mesoscale damage model, to estimate the yield of MD FRP composite structures. The critical level of the accumulated damage dissipation energy (DDE) in FRP composite laminate is proposed as the parameter that indicates the yield of the material. The continuum damage mechanics that account for damage initiation and propagation in the material point are used to quantify the material softening process and compute the rate of DDE growth, to establish the critical DDE level. 
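As a purely numerical illustration of the 5% slope criterion stated above (the load and DDE arrays below are synthetic placeholders, not data from this study), the yield point can be located by comparing the running slope of the DDE evolution curve with its initial slope:

    # Locate the yield point as the first load step where the local slope of the DDE
    # evolution curve exceeds the initial slope by 5%. All data here are synthetic.
    import numpy as np

    load = np.linspace(0.0, 1.0, 200)                               # normalised load
    dde = 0.02 * load + 2.0 * np.clip(load - 0.6, 0.0, None) ** 2   # placeholder DDE curve

    slope = np.gradient(dde, load)
    initial_slope = slope[:10].mean()                               # slope of the early, pre-damage part

    yield_index = int(np.argmax(slope > 1.05 * initial_slope))
    print("yield at normalised load =", load[yield_index])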
The characteristic evolution of the DDE is established through FE simulations of actual tests using a validated FE model. The yield point is inferred from the DDE curve when a sudden increase in its growth rate is observed. The approach is illustrated for different types of antisymmetric MD FRP composite structures with the objective of determining the yield strength. The experiments were implemented on carbon and glass fiber-reinforced polymer (CFRP and GFRP) composite structures, such that only lamina damage could occur with negligible interface delamination; therefore, interlaminar damage was not considered [10,43]. A new approach with FE model-based configurations called single- and multi-layer models [10] was used to simulate the FRP composites manufactured using different methods, to model the mesoscale inter- and intralaminar constructions of the composites. The simulation results were validated with the structural response of the composites in the experiments. The method is recommended for determining the yield limit of any type of MD composite structure under different types of load. Damage Model of FRP Composite Lamina The response of the FRP composite laminates to applied load such that yielding of the material is achieved is predicted in this study, based on the damage mechanics approach. The damage model of the FRP composite lamina is described below. The uniaxial behavior of UD FRP composite lamina in orthogonal axes (1-2 axis, Figure 1a) for elastic-damage behavior under tension and compression is shown in Figure 1b. The four bilinear elastic softening curves represent the equivalent stress-displacement behavior of composite lamina in different failure modes of matrix cracking and crushing, and fiber breakage and buckling (following the load arrows of UD lamina, Figure 1b). In an angle lamina under global loading (x-y axis, Figure 1a), the global deformations are mapped into local deformation and used to compute the effective stress parameters. The elastic behavior of the lamina is computed following the classical theory of lamina [53,54]. The stress-displacement relationship of each damage mode is defined in the sections below. Damage Initiation The initiation of damage in the lamina for the different failure modes is estimated using Hashin's quadratic stress-based failure model [24]. The model is expressed as a quadratic function of the ratio of the effective stress to strength terms to calculate the values of damage variables for the respective failure mode. Matrix cracking and crushing and fiber fracture and buckling/kinking are each described by such a criterion (Equations (1)-(4)). In these criteria, [σ] represents the effective stresses in the lamina, and X_T, Y_T, X_C, Y_C, S_L, and S_T are the strength properties. In Equations (1)-(4), the terms d_i are the internal damage variables in the fiber and matrix phases of the lamina, under tension or compression loadings. Since no plastic deformation is observed in the FRP composite [2,10,55], the permanent deformation of the lamina is considered in the damage evolution processes.
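As a hedged illustration of such criteria, the widely used plane-stress forms of Hashin's initiation indices (the variant implemented in many FE codes, which may differ in detail from the exact Equations (1)-(4) of this work) can be evaluated as follows:

    # Common plane-stress Hashin initiation indices (damage initiates when an index >= 1).
    # These are the generic textbook/FE-code forms, shown for illustration only; they are
    # not copied from this paper's Equations (1)-(4).
    def hashin_indices(s11, s22, t12, XT, XC, YT, YC, SL, ST):
        fiber_t = (s11 / XT) ** 2 + (t12 / SL) ** 2 if s11 >= 0.0 else 0.0
        fiber_c = (s11 / XC) ** 2 if s11 < 0.0 else 0.0
        matrix_t = (s22 / YT) ** 2 + (t12 / SL) ** 2 if s22 >= 0.0 else 0.0
        matrix_c = ((s22 / (2.0 * ST)) ** 2 + ((YC / (2.0 * ST)) ** 2 - 1.0) * s22 / YC
                    + (t12 / SL) ** 2) if s22 < 0.0 else 0.0
        return fiber_t, fiber_c, matrix_t, matrix_c

    # Illustrative effective stresses and strengths (MPa) for a glass/epoxy-like lamina.
    print(hashin_indices(s11=600.0, s22=20.0, t12=30.0,
                         XT=1100.0, XC=700.0, YT=35.0, YC=140.0, SL=70.0, ST=50.0))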
Post-damage initiation: once the onset of damage is predicted in one of the modes, the properties of the material reduce in the other directions/modes and result in early damage there. Such effects are captured by updating the elastic stress tensor (the effective stresses in Equations (1)-(4)) through the internal damage variables:

σ̂_ij = σ_ij prior to any damage initiation, and σ̂_ij = D σ_ij if any of the four damage modes has initiated, (5)

where σ is the stress computed using classical lamina theory, σ̂ is the effective stress in Equations (1)-(4), and the damage operator D is used to consider the effect of early damage initiations. The hypothesis of strain equivalence is used to derive the damage operator, Equation (6) [54,56], where d_f, d_m, and d_s are the fiber, matrix, and shear internal damage variables corresponding to the lamina damage modes in Equations (1)-(4).

Damage Propagation

The evolution of damage to failure at a local material point is obtained through the softening process using an energy-based criterion [10,43,45]. In this process, the damage dissipation energy, G_DDE (Figure 2a), is employed to determine the constitutive model of the material in each failure mode, which is expressed as a stress-displacement relation. The fracture energy, G_C, is the energy that, if dissipated fully, causes the material point to fail (G_C^XT, G_C^XC, G_C^YT, and G_C^YC are the fracture energies in the different failure modes; Figure 1b). The value of the energy dissipated due to damage is obtained using the corresponding damage evolution variable, d_p, defined in terms of k_eq^0, the equivalent elastic stiffness, δ_eq^0, the equivalent displacement at the onset of damage in the respective mode (d_p = 0), and δ_eq^f, the equivalent displacement at the separation of the material point (d_p = 1). In each failure mode, the critical value of the equivalent dissipation energy, G_C, is considered as the fracture energy of the lamina. The evolutions of the damage initiation variable (d_i in Equations (1)-(4)) and the damage propagation variable d_p are shown in Figure 2b. The relation between the equivalent stress and displacement for each failure mode after the onset of damage (dotted lines in Figure 1b,c) is expressed by Equations (9)-(12) [10,43-45], for matrix tension (σ_22 ≥ 0), matrix compression (σ_22 < 0), fiber tension (σ_11 ≥ 0), and fiber compression (σ_11 < 0), respectively. In these equations, L_c is the element characteristic length, with a magnitude depending on the geometry and the element formulation; for a first-order element, L_c is considered as the length of a line across the element. The terms G_C^XT, G_C^XC, G_C^YT, and G_C^YC are the fiber and matrix fracture energy parameters of the lamina under tension and compression loadings. In Equations (9)-(12), σ_ij^o, τ_ij^o, and ε_ij^o indicate the effective stresses at the onset of damage [10,43].
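Spelled out under the strain-equivalence hypothesis and a linear softening law, the quantities described above commonly take the forms below. The diagonal structure of the damage operator and the exact expression for d_p are assumptions based on standard mesoscale damage formulations, not a verbatim reconstruction of the original Equations (6)-(8).

\begin{equation}
[D] \approx \operatorname{diag}\!\Big(\frac{1}{1-d_f},\ \frac{1}{1-d_m},\ \frac{1}{1-d_s}\Big),
\qquad
d_p = \frac{\delta_{eq}^{f}\,\big(\delta_{eq}-\delta_{eq}^{0}\big)}{\delta_{eq}\,\big(\delta_{eq}^{f}-\delta_{eq}^{0}\big)},
\qquad
\delta_{eq}^{f} = \frac{2\,G_C}{\sigma_{eq}^{0}},
\end{equation}

with the equivalent displacement obtained from the equivalent strain through the characteristic length, δ_eq = L_c ε_eq. Tying δ_eq^f to the fracture energy through L_c is what keeps the energy dissipated per unit area independent of the element size, which is why the mesh convergence study described later is carried out separately for the damage calculations.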
Damage Dissipation Energy

The energy stored in the FRP composite laminate through elastic-damage deformation, commonly called the internal energy, can be employed to describe the progressive damage process of the composite structure [10,43,45,57]. The internal energy, E_U, can be written for non-viscous composites as the work of the stresses on the strain rates (Equation (13)), where σ_c is the stress derived from the constitutive equation of a lamina. The strain rate term is decomposed into its elastic, plastic, and creep contributions (Equation (14)). Since the FRP lamina behaves as an elastic-brittle material, the plastic and creep strain rates vanish, and Equation (13) simplifies to the elastic strain energy, E_S (Equation (15)). The elastic strain is not recoverable when damage initiates in a material point. Hence, σ_c can be expressed as σ_c = (1 − d) σ_u (Equation (16)), where σ_u is the undamaged stress and d is the continuum damage parameter, which varies from zero for the undamaged state to one for the fully damaged state of the material point in the composite lamina. Substituting σ_c into Equation (15) gives the elastic strain energy (Equation (17)). It is assumed that the damage parameter remains fixed at time t until unloading. Thus, the recoverable strain energy and the energy dissipated during damage can be expressed as Equations (18) and (19). Considering the undamaged elastic energy function f_u and interchanging the integrals in Equations (18) and (19) yields Equations (20) and (21). Since, at time t, d = d_t and, at time zero, f_u = 0, the first term of the last expression in Equation (21) is zero. By defining the damage strain energy function f_c as (1 − d_t) f_u, Equations (20) and (21) can be rewritten as Equations (22) and (23). The parameter f_c can be written for a linear elastic energy function as Equation (24); substituting Equation (24) into Equations (22) and (23) gives the final expressions for the recoverable and dissipated energies. The present study employed these equations through FE simulations to establish the characteristic evolution of the DDE and to illustrate the proposed concept for determining the yield limit of MD FRP composite structures [43,45]. The yield point was identified corresponding to a 5% increase in the initial slope of the total DDE evolution curve of the composite structure under specific loading conditions.
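As a concrete illustration of this 5% rule, the short routine below locates the yield point from a computed DDE history. It is a minimal sketch assuming the DDE and deflection histories are available as arrays (for instance, exported from the FE solver); the function name, the synthetic data, and the 20% window used to estimate the initial slope are illustrative choices, not part of the original study. The paper reports DDE rates with respect to time; under a constant displacement rate, the slope with respect to deflection used here is equivalent up to a constant factor.

import numpy as np

def find_yield_point(deflection, dde, init_frac=0.2, slope_increase=0.05):
    """Locate the yield point as the first point where the local slope of the
    DDE evolution curve exceeds the initial slope by a given fraction (5%).

    deflection : 1D array of beam deflections, monotonically increasing
    dde        : 1D array of accumulated damage dissipation energy
    init_frac  : fraction of the record used to estimate the initial slope
    slope_increase : relative increase (0.05 = 5%) that flags yielding
    """
    deflection = np.asarray(deflection, dtype=float)
    dde = np.asarray(dde, dtype=float)

    # Local slope of the DDE curve (rate of energy dissipation per deflection).
    slope = np.gradient(dde, deflection)

    # Initial slope estimated over the early, essentially elastic portion.
    n0 = max(2, int(init_frac * len(deflection)))
    initial_slope = slope[:n0].mean()

    # First index where the slope exceeds the initial slope by 5%.
    threshold = initial_slope * (1.0 + slope_increase)
    idx = int(np.argmax(slope > threshold))
    if slope[idx] <= threshold:          # never exceeded: no yield detected
        return None
    return deflection[idx], dde[idx]

# Synthetic example: negligible dissipation followed by rapid growth near 13.5.
x = np.linspace(0.0, 20.0, 201)
y = 0.5 * x + 30.0 * np.clip(x - 13.5, 0.0, None) ** 2
print(find_yield_point(x, y))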
Materials and Experimental Procedures

Three different types of antisymmetric MD FRP composite laminate panels were manufactured and machined into beam specimen geometries for flexural tests. These tests were performed in accordance with the ASTM standard [58]. The antisymmetric MD composite lay-up, sample geometry, and load and boundary conditions were selected such that, under the flexural deformation, the various lamina failure modes under tension and compression were activated, while the interlaminar damage and delamination phenomena were minimized. Subsequently, microscopic fractographic analysis was used to examine the lamina and interface damage events, which indicated dominant lamina damage with negligible interface delamination [10,43,45]. The first group of specimens was fabricated from a glass fiber-reinforced polymer (GFRP) composite panel with thermoplastic resin and eight layers of UD E-glass fiber mats. The GFRP composite was prepared using the vacuum-assisted infusion molding (VAIM) process, which resulted in the formation of a laminate with no interface between the laminas [10]. The next two groups of composite samples were made of carbon fiber-reinforced polymer (CFRP) composite laminate. A panel of the CFRP composite was prepared by pre-impregnation of the UD CFRP lamina (M40J fibers and NCHM 6376 resin, Structil France) and cured in an autoclave, resulting in a composite laminate with interface laminas [10]. Details of the manufacturing process of the FRP panels were provided elsewhere [2,10,43]. Microscopic images of the longitudinal cross-sections of the FRP composite specimens are shown in Figure 3a. A schematic view of the composite beam in the test set-up and boundary conditions is provided in Figure 3b, and the lay-ups and dimensions of the beams and the load configurations are described in Table 1. Several GFRP composite specimens and the first batch of the CFRP composite samples were tested under three-point bending (3PB) conditions, while the second batch of CFRP composite samples was tested under four-point bending (4PB), as mentioned in Table 1. A continuous flexural load was applied to the specimens until a significant amount of degradation in the load-deflection response, representing the occurrence of multiple damages, was observed. The results of the tests, in terms of the monotonic reaction force versus the deflection of the composite beams, were recorded and utilized in the validation procedure of the FE models and simulation processes.
Finite Element Simulation

Since the manufacturing process of the FRP composite laminates dictates the resulting interface condition between the laminas, the FE models of these cases differ. The GFRP composite laminate, produced by the VAIM method with no apparent interface, was simulated with the single-layer model. The CFRP composite laminate, fabricated via lay-ups of prepreg laminas and autoclave curing with distinct interfaces, was modeled using the multi-layer model. The single-layer and multi-layer FE model constructions are illustrated in Figure 4, while details of these mesoscale FE models were discussed elsewhere [10,43,45]. The three-dimensional (3D) FE model geometry of the CFRP composite laminate beam specimen is shown in Figure 5. The damage model (Section 2) was applied in each lamina using the standard model definition step in the ABAQUS software [57]. Each lamina was discretized into a layer of eight-node continuum shell elements (SC8R) with reduced integration points for efficient computation [37]. The element mesh was refined with smaller elements in the central region of the specimen, where the maximum deflection and, thus, damage evolution were anticipated. The loading and support rollers were modeled as rigid bodies and discretized using rigid four-node elements (R3D4) [57]. Frictionless contact was assumed between all contacting bodies. In the multi-layer model (Figure 4b), the interfaces between adjacent laminas were modeled using a surface-to-surface tie with finite-displacement interaction of the shared node pairs, which allows relative displacement between adjacent surfaces of the laminas [10,45]. A two-step mesh convergence process was performed to eliminate the effect of element size on the FE-calculated results for the elastic and damage calculations [43,45]. A finer element mesh size than that identified as adequate in the elastic analysis is required for element-size-independent damage calculations; the resulting element size, at mesh convergence, had an edge length of 0.2 mm. The boundary conditions of the model are illustrated in Figure 3b, while the loading was identical to that used during the test. The elastic and strength properties of the GFRP and CFRP composite laminates used in the FE simulations were obtained through standard tests (ASTM-D4762 [59]) [10,43,45], while the values of the fracture energies were extracted from the properties of similar materials in the literature [60-62], as listed in Table 2. These properties were utilized rigorously in the FE simulation exercises for the various composite specimen geometries and loading cases, demonstrating comparable load-displacement results with the measured responses [2,10,43-45,52].
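Since the damage model is evaluated in ABAQUS, the whole-model energy histories needed for the DDE curve can be read back from the output database with the ABAQUS Python scripting interface. The sketch below assumes that whole-model energy history output was requested in the analysis; the job name, step name, and history-region key are placeholders that depend on the particular model.

# Run with ABAQUS Python (e.g., "abaqus python extract_energies.py").
# A sketch with placeholder names for the .odb file, step, and history region.
from odbAccess import openOdb

odb = openOdb(path='mdfrp_beam.odb', readOnly=True)   # placeholder job name
step = odb.steps['Step-1']                             # placeholder step name
region = step.historyRegions['Assembly ASSEMBLY']      # whole-model energies

dde = region.historyOutputs['ALLDMD'].data   # energy dissipated by damage vs. time
se = region.historyOutputs['ALLSE'].data     # recoverable strain energy vs. time

with open('energies.csv', 'w') as f:
    f.write('time,ALLDMD,ALLSE\n')
    for (t, d), (_, s) in zip(dde, se):
        f.write('%g,%g,%g\n' % (t, d, s))

odb.close()

The exported histories can then be fed directly to a yield-detection routine such as the one sketched earlier.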
Results and Discussion

The FE-calculated and measured flexural responses of the MD FRP composite beam specimens, expressed in terms of the load-deflection curves, were compared. A comparable response was indicative of the validity of the FE simulation procedures. Subsequently, the calculated deformation and damage responses of the composite laminates were interpreted for the respective failure mechanisms. The characteristic evolution of the DDE could then be established and inferred for the onset of yield of the composite structure.

Structural Response and Damage Evolution of GFRP Composite Beam under Three-Point Bending

The FE-calculated load-deflection response of the MD GFRP composite beam specimen (Case 1, Table 1) was compared with the measured curve, as shown in Figure 6a. The reasonably good prediction of the flexural response rendered the FE simulation valid. The GFRP composite beam showed an initial linear response up to a deflection of about 12 mm, suggesting structural elastic behavior; however, the flexural stiffness of the beam (Figure 6a) indicated a 1.5% reduction of the stiffness compared to the initial condition.
This was followed by a slight deviation with lower flexural stiffness up to the maximum load, likely attributed to damage and possible softening of the structure. The corresponding evolution of the calculated strain energy (SE) and DDE with increasing beam deflection is shown in Figure 6b. A sudden load drop was observed at the maximum load, due to the composite rupture by fiber buckling in the first lamina under compression, as shown in Figure 6c. Results show that a negligible amount of energy was dissipated prior to the beam deflection of 13.5 mm. This suggests that limited damage took place in the structure up to this critical load level. However, the rate of energy dissipation abruptly increased for larger composite beam deflection.
Although an MD GFRP composite was considered under a complex bending load, only the single damage mode of fiber failure occurred, in the first lamina (0°) under compression, while the other laminas remained elastic until the maximum load was experienced. The fiber damage initiation was predicted at 13.5 mm deflection. Based on the proposed method of specifying the yield point of the composite structure, yield began when the rate of the DDE increased to 0.914 N/(mm·s), at a 5% increase in the initial slope of the DDE evolution curve. Thus, this load (stress) and deflection level of (350 N, 13.5 mm), at which the rate of the DDE abruptly increased, was identified as the yield point of the GFRP composite laminate. Deformation beyond the yield limit to failure was dominated by the softening of the structure. The total DDE of the GFRP composite corresponding to the maximum load at failure was 234 N/mm, which was 3.4% of the total SE of the structure at 6960 N/mm (Figure 6b). The flexural stiffness of the GFRP composite was reduced by 3% and 15.3% from the initial value at the yield point and the maximum load level, respectively.

Structural Response and Damage Evolution of CFRP Composite Beam under Three-Point Bending Load

The FE-calculated load-deflection response of the MD CFRP composite beam specimen (Case 2, Table 1) was compared with the measured curve, as shown in Figure 7a. A reasonably good prediction of the flexural response was claimed; thus, the validity of the FE simulation was ensured. A linear elastic response was observed up to a displacement of about 12.5 mm. A noticeable difference between the FE-predicted and measured deformation appeared at larger deflections, with apparent minute load drops predicted along the load-displacement curve. Such load drops were artificially induced by the relative slip between the CFRP composite laminate beam specimen and the support rollers under the assumed friction-free condition [45]. As described for the GFRP composite (Case 1) above, the observed reduction in the flexural modulus, as reflected in the stiffness curve (Figure 7a), was due to the damage accumulated by the multiple modes of failure of the composite constituents.
The corresponding evolution of the calculated DDE with increasing beam deflection, as shown in Figure 7b, exhibited identical characteristics to that of the GFRP composite (Case 1). Based on the proposed method of specifying the yield point of the composite structure, yield began when the rate of the DDE abruptly increased to 2.1 N/(mm·s) (i.e., a 5% increase in the initial slope). Thus, the yield point was identified to correspond to the load and deflection levels of (150 N, 9 mm), when a sudden increase in the rate of the DDE was observed. It is worth mentioning that catastrophic fracture did not occur for this MD CFRP composite beam specimen at the maximum prescribed central displacement of 28 mm. However, extensive softening of the material by the various damage mechanisms was expected to have occurred. The FE model predicted various matrix cracking and crushing phenomena in different laminas with respect to the level of beam deflection, as shown in Figure 7c. The onset of matrix cracking was predicted first in the bottom-most lamina (−45°) of the composite beam when loaded up to 4.7 mm deflection, which represented the elastic deformation limit of the composite structure. The accumulated DDE at this displacement was 680 N/mm, which was 14% of the total strain energy of the composite structure (4825 N/mm, Figure 7b). The initial flexural stiffness of the CFRP composite beam of 17.6 N/mm was reduced to 17.44 and 10.66 N/mm (0.91% and 39.4% reductions) at the yield point and the maximum load level, respectively.

Structural Response and Damage Evolution of CFRP Composite Beam under Four-Point Bending

The FE-calculated load-deflection response of the MD CFRP composite beam specimen (Case 3, Table 1) under 4PB was compared with the measured curve, as shown in Figure 8a. Again, a reasonably good comparison was demonstrated, and a valid FE simulation procedure of the test was claimed. A similar trend of the flexural deformation of the MD FRP composite beam specimens was observed in the three different specimens considered in this study.
A larger applied total load of 600 N was recorded over a much shorter prescribed displacement of 8 mm, when compared with the 3PB case (Case 2, Table 1), which also had a different anti-symmetric lay-up. A linear elastic flexural response was measured up to a central deflection of about 4.6 mm, while the flexural stiffness curve (Figure 8a) indicated a 7.1% reduction from the initial condition. The corresponding evolution of the SE and DDE in this antisymmetric CFRP composite beam specimen with the applied displacement of the loading rollers under the 4PB load is shown in Figure 8b. Based on the proposed method of identifying the yield point of the composite, yield commenced when the rate of the DDE sharply rose to 11.1 N/(mm·s). The corresponding load and deflection levels at yield were 313 N and 3 mm, respectively. It is worth noting that the load-deflection curve remained fairly linear beyond the yield point up to about 4.7 mm. Within this small deflection range, the rapid evolution of matrix damage in all of the laminas under tensile deformation (Figure 8c) contributed only a fraction (about 10%) of the total DDE; thus, the effect on the softening of the material remained insignificant. A similar observation applied to the previous FRP composite specimens and loading conditions, as described above. The accumulated DDE at the end of the prescribed displacement was 598.4 N/mm, which was 23% of the total SE of the composite beam (2567 N/mm, Figure 8b). The flexural stiffness of the CFRP composite was reduced by 1.1% and 31% of the initial value at the yield point and the maximum load level (8-mm deflection), respectively.

Comparison of the Estimated Yield Limits Based on UD Hashin Criteria and Energy-Based Criterion

Three cases of FRP composite structures were tested under flexural loading conditions, while the FE model was used to predict the material behavior and to specify the structural yielding based on the UD criteria as well as the energy-based model proposed in this study. A summary of the results, in terms of the maximum load and deflection capacities of the structures, the yielding according to the UD criteria and the energy-based criterion, and the percentage of the yield value relative to the maximum capacity (MC) of the structure, is listed in Table 3.

Table 3. Results of the yield values (UD and energy-based criteria) of the MD composite structures under flexural loading conditions.
Results indicated that, in the condition of a single failure mode (Case 1), the UD criteria (i.e., the Hashin model) and the energy-based criterion suggest a similar range for the yielding of the MD composite structure. However, MD composite structures mostly failed due to multiple damage phenomena (Cases 2 and 3), in which case considering the UD criteria would result in the assumption of structural yielding at only 10%-20% of the maximum capacity of the structure (displacement or load). Using the energy-based criterion, the yielding limit could safely (a controlled condition in which the accumulation of all damage modes contributed to less than 5% of the energy dissipation) be increased to 30%-50% of the maximum capacity of the structure. The knowledge of this safe limit of structural yielding could be used for the optimum design of composite structures that are typically implemented under complex loading conditions.

Conclusions

A new criterion was proposed to identify the onset of yield in MD FRP composite laminate structures subjected to any type of loading condition. The new energy concept provides a significantly larger safe limit of yield for MD composite structures, which normally fail due to multiple damage phenomena, in which the accumulation of energy dissipated due to all damage modes is less than 5% of the fracture energy required for the structural rupture. The criterion is based on the damage energy dissipated through the multiple softening processes of composite laminate materials under different mesoscale failure modes. The characteristic evolution of the DDE was calculated using a validated FE model with damage-based constitutive equations. The criterion was examined for antisymmetric MD GFRP and CFRP composite laminates under three- and four-point bending test configurations. The following can be concluded:

1. The yield point of the FRP composite laminate structures could be identified by a 5% increase in the initial slope of the DDE evolution curve with respect to the applied load parameters.

2. At the yield point, the extent of damage by the various modes depended on the material, lay-ups, load, and test configurations.

3. The yield points of the MD GFRP and CFRP composite laminates (Cases 1, 2, and 3) were identified to occur upon flexural loading when the rate of the DDE reached 0.914, 2.1, and 11.1 N/(mm·s), respectively. The corresponding deflections were 13.5, 9, and 3 mm, respectively.

4. The initial flexural stiffnesses of the MD GFRP and CFRP composite structures (Cases 1, 2, and 3) were measured at 28, 17.6, and 108.26 N/mm, reduced to 27.2, 17.44, and 107.1 N/mm at the yield point, indicating 3%, 0.91%, and 1.1% reductions in the stiffness of the beams, respectively.
Therefore, an average 2% reduction in flexural stiffness could be suggested as a means for the determination of the yield point in MD FRP composite structures under three- and four-point bending loads.

5. In general, the UD criteria resulted in the assumption of structural yielding at 10%-20% of the maximum capacity of the structure (displacement or load), whereas, using the energy-based criterion, the yield limit could be safely increased to 30%-50% of the maximum capacity of the structure.

Nomenclature
Interaction between the nodes-in-contact as separation and sliding motions (mm)
N_i  Interpolation function of the interface segments
ρ_n  Interface segment curvature
D_ij  Stiffness matrix of the interface with linear coupled elastic behavior (MPa)
F_i  Force of the i-th interface node (N)
u_i  Motion of the i-th interface node (mm)
Trouble in Paradise: Expanding Applications of the Getty Thesaurus of Geographic Names® to Enhance Intellectual Discoverability of Circum-Caribbean Materials

This article examines how the Circum-Caribbean region's cultural and geographic complexities make it difficult to describe or index relevant archival materials using the mainstream authority controls used in galleries, libraries, archives, and museums (GLAMs). This difficulty stems from the fact that authority controls utilised by GLAMs are primarily created by North American or European authorities and, therefore, have Western-centric views imbued with colonialist overtones. When these systems are used to catalogue, index, or describe Circum-Caribbean-related collection materials, a tension arises: a system with a white, Euro-American perspective is applied to material reflective of a significantly multicultural place, culture, subject, and population. The rigidity of controlled vocabularies and their applications — which typically follow specific indexing methodologies — cannot accommodate the fluidity necessary to accurately denote the complex Circum-Caribbean region, especially with regard to geographic indexing. This article demonstrates the difficulties that emerge from trying to delimit and define the Caribbean region; provides an abbreviated analysis of the Circum-Caribbean's representation in the Getty Thesaurus of Geographic Names® (TGN), which mirrors the difficulties of defining and delimiting the region; and presents a case study in which the West Indian Postcard Collection at Cambridge University Library was indexed using augmented applications of the TGN. The research presented in this paper supports the theory that employing both general and specific indexing strategies creates enhanced access to Caribbean-related collection materials by enabling regional, sub-regional, and territorial/national avenues to retrieve collection materials.

Introduction

Galleries, libraries, archives, and museums (GLAMs) have historically suppressed, overlooked, and excluded non-Western-centric perspectives in collecting, describing, and indexing collection materials. Many institutions and scholars are increasingly critiquing collections and cataloguing practices, striving to find ways to decolonise 1 and reconcile them in an effort to visibilise narratives that have been historically suppressed in GLAMs (Buckley 2008; Campt 2012, 2017; Turner 2015, 2020; Bosum and Dunne 2017; White 2018). However, very little academic work addresses the importance of decolonising and reconciling with collection materials pertaining to the Circum-Caribbean despite the cataclysmic colonisation that dominated the region for centuries.

1 Following my presentation at the Museums Association of the Caribbean's 2019 conference, the discussion among Caribbean peers in the field made it clear that there is growing disagreement about what the term decolonisation means and if it is appropriate or inflicts further harm. When I use this term or its derivatives, it is from the distinct perspective that decolonial work is not about "undoing" or "erasing" the colonial past. Rather, I approach decolonial work as "the effort to interrogate and transform the institutional, structural and epistemological legacies of colonialism" and "an additive process which heightens the accuracy and completeness of our knowledge of the world" (Meghji 2021).
2 It is in this context that this paper examines the inefficiency of rigid mainstream authority controls used in GLAMs, such as controlled vocabularies and specific indexing methodologies, and demonstrates how this rigidity stunts intellectual discoverability of Caribbean-related collection materials. This paper will demonstrate that North American- and European-created authority controls used in GLAMs need to be investigated to understand how their creators' biases suppress non-Western-centric cultures during the cataloguing of collection materials and suggests a dual-indexing approach to improve discoverability of Caribbean-related materials. The first half of the article provides a brief history of modern Anglo-American information organisation and standardisation in GLAMs, introduces the traditionally binary concepts of general and specific indexing, and demonstrates the complexity of defining and delimiting the Circum-Caribbean region. The second half of the article presents an abbreviated analysis of how the Circum-Caribbean is depicted within the Getty Thesaurus of Geographic Names® (TGN) and discusses a case study conducted by the author in which the TGN was applied to a postcard collection held at Cambridge University Library to determine how different methods of application impacted intellectual discoverability. As will be further explained in this article, the research presented here measures intellectual discoverability as the number of geographic place names that are applied as indexed terms and, consequently, act as controlled access points. 3 Ultimately, this paper aims to highlight the less-than-accurate ways in which Circum-Caribbean materials are often organised in GLAMs' collection catalogues. It proposes a methodology that can be used to critique authority controls to make them more adaptable and suitable to culturally specific materials. Though this paper is focused on doing so through a Circum-Caribbean lens, the same methodology could be applied to these standardised authority controls from any inaccurately represented or underrepresented place or culture. Information Organisation and Standardisation Before analysing existing information organisation systems, it is important to understand the earliest predecessors of such systems, the objects for which they were created, and subsequent developments. For almost two centuries, efforts to systematise information organisation and archival description have been driven largely by librarians and archivists focused primarily on developing authority control systems for bibliographic standardisation of textual materials (Enser 1993). Sir Antonio Panizzi's plan to organise books in the British Library with a new catalogue from 1839 into the early 1840s is lauded as the start of the modern Anglo-American history of systematic information organisation in the tradition of descriptive and subject cataloguing (Svenonius 2000; New World Encyclopedia contributors 2016). 4 His argument for a catalogue to group "like" items and differentiate among similar ones indirectly referenced bibliographic objectives (later explicitly stated by Charles Ammi Cutter), and his ensuing efforts laid the groundwork for the birth and development of all major subsequent cataloguing systems used worldwide today (Svenonius 2000; New World Encyclopaedia contributors 2016).
However, the "book catalogue" format of Panizzi's era was onerous: it consisted of one or more large volumes in which bibliographic information about works (initially books or literary items) was entered by hand. It required empty spaces to be left where future bibliographic listings could be inserted. Significantly, Panizzi recognised the importance of extensive "See" and "See also" cross-references to demonstrate relationships between certain items in order to assist in navigation and collocation (Svenonius 2000). However, it was not until the second half of the nineteenth century that American librarian Charles Ammi Cutter revolutionised the field of bibliographic information organisation and introduced the card catalogue (Svenonius 2000). Unlike book catalogues, bibliographic information of a work was detailed on a card-usually one card per work. This was an improvement on the book catalogue as it allowed the card catalogue to grow as needed without the necessity of leaving blank spaces for newly acquired items. The card catalogue system also has a certain degree of adaptability; it is much easier to remove cards for deaccessions, and the catalogue can be reorganised if the collection is broken up into smaller departments later on, whereas a book catalogue would have to be cut into pieces to do this (and, even then, items may be listed on both sides of a page). In 1901, the Library of Congress began distributing card-catalogue copy to American libraries, a milestone towards economising bibliographic effort and "recognising the possibility of universal bibliographic control through standardization" (Svenonius 2000). This can be seen as an early instance of cooperative cataloguing. Cutter's system also included an author index and a "classed catalog"-in short, a precursor to a subject index (Svenonius 2000;New World Encyclopaedia contributors 2016). Card catalogues eventually transitioned from analogue to digital to online. Geoffrey Bowker and Susan Leigh Star (1999, 2) state that these control systems "were jolted in the twentieth century by information explosions, the computer revolution, the proliferation of new media, and the drive toward universal bibliographic control." Each catalogue type has its own set of unique challenges to achieving universal bibliographic control. According to Elaine Svenonius, the relational and syndetic structures that were clear in book and card catalogues were eroded in the eventual transition to online catalogues, causing a "steady deterioration in the integrity of bibliographic structures" and "an undermining of bibliographic objectives," resulting in less user-friendly catalogues (2000,(63)(64). More recently, GLAMs and archival associations have contributed to the development, implementation, and maintenance of so-called best practices for globally standardised archival description methods. Such contributions include the Anglo-American Cataloguing Rules (AACR, first developed in 1967), the Bureau of Canadian Archivists' Rules for Archival Description (first published in 1990), the General International Standard for Archival Description (ISAD[G], first published in 1994), and the International Federation of Library Associations and Institutions' (IFLA's) Fundamental Requirements for Bibliographic Records (FRBR, first published in 1998). 
The archival description systems that are used in cataloguing today are often complemented by more specific authority controls-for example, geographic thesauri, name authorities, or subject heading indexes such as the Library of Congress Subject Headings (established in 1898), the Canadian Subject Headings (begun in 1968; first published in 1978), and the Getty Thesaurus of Geographic Names® (begun in 1987; first published in 1997). At the core of these complementary systems are a controlled vocabulary and syntaxes that govern the many authority records within the authority control and their relationships to each other, the creation of new records, and the implementation of the control system. But their controlled vocabularies are, by nature, often exclusionary. Bowker and Star (1999) emphasise how the construction of each standard or categorisation system valorises one perspective above others, and they urge us to consider how such choices are made. Controlled vocabularies seldom meet the needs of all institutions or collections or permit the necessary flexibility to accurately describe multifaceted cultures or subjects. As a result, they often undermine intellectual discoverability and efficient collation of items. Furthermore, disregarding the needs of a collection or institution in favour of industry-prescribed standards has the potentially unintended effect of further colonising collections and oppressing certain viewpoints if the standards applied do not consider the nuances of the materials being catalogued. Cataloguing librarian Linnea Marshall (2003) explores the classic arguments of specific versus general cataloguing by recounting debates amongst librarians during the latter half of the twentieth century, carefully considering the relativity of specificity as it relates to indexing vocabularies, recall and precision, and the value of both general and specific headings in subject searching. Marshall concludes that the principle of specificity does not serve all catalogue users since it "favors not only the narrowly-focused researcher over the comprehensive researcher [who wants recall], but also it favors the narrowly-focused and knowledgeable specialist over the narrowly-focused but less knowledgeable researcher" (2003, 69). The author discusses several projects that support the idea that "assigning both specific and generic subject headings to a work would enhance the subject accessibility for the diverse approaches and research needs of different catalog users" (Marshall 2003, abstract). If regional place names are considered in place of "general headings" and territorial/national place names in place of "specific headings," employing a dual-indexing strategy by applying both general (regional) and specific (territorial/national) place names could simultaneously serve a user seeking all materials from a region and another user searching for a single territory. This article argues for this dual-indexing approach, noting, however, that the primary downfall to this strategy is that it could become more difficult for users to recall materials that deal comprehensively with the region in general, excluding territory/nation-specific materials.
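To make the dual-indexing idea concrete, a small sketch is given below: given the hierarchy path of the most specific place attached to an item, it emits both the specific term and its broader regional terms as index entries, so that a query for either the territory or the region retrieves the record. The place names, record structure, and function name are hypothetical illustrations, not actual TGN data or a prescribed workflow.

def dual_index_terms(hierarchy_path):
    """Return index terms from general (regional) to specific (territorial).

    hierarchy_path: place names ordered broad -> narrow, as read off an
    authority record's hierarchy. Keeping every level, rather than only the
    most specific place, lets one record be retrieved at regional,
    sub-regional, and territorial/national levels.
    """
    seen = []
    for name in hierarchy_path:
        if name not in seen:          # guard against repeated names in a path
            seen.append(name)
    return seen

# Hypothetical example (illustrative names, not actual TGN records):
record = {
    "title": "Harbour view, Bridgetown",
    "place_path": ["Caribbean", "Lesser Antilles", "Barbados"],
}
record["index_terms"] = dual_index_terms(record["place_path"])
print(record["index_terms"])   # ['Caribbean', 'Lesser Antilles', 'Barbados']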
Defining and Delimiting the Circum-Caribbean One of the major issues in describing and indexing Circum-Caribbean materials in GLAMs is the problematic and inconsistent use of nomenclature relating to the Circum-Caribbean in authority controls-for example, the unexplained and inconsistent use of Caribbean and West Indies in the collection catalogues of multiple institutions. A considerable amount of Caribbean scholarship engages with the difficulty of what to name Caribbean places and how to define or delimit the Caribbean area, but institutions (especially those outside the region) take these place names and their delimitations for granted. Eurocentric narratives of the Caribbean area are well known; however, works by Caribbean scholars and authors provide historical narratives that are more nuanced and better reflect the region's geographical, socio-political, cultural, and historical complexities. This section draws on primarily Caribbean authors to discuss how cataloguing Caribbean-related materials is complicated by the ambiguous origins of place names, the notion of Creolisation, and multiculturalism and multilingualism in the region. These factors all influence attempts to delimit the Caribbean area as a region and space and challenge attempts to standardise Caribbean place names. This context is crucial for indexing materials related to the region and underpins my argument for a dual-indexing approach to applying place names. Sources on the history of the Caribbean offer different accounts of the origins of regional place names. For example, John Parry, Philip Sherlock, and Anthony Maingot (1987, 6) explain that when Spanish explorer Christopher Columbus reached the islands, he mistakenly claimed he had reached the East Indies, soon dubbing the islands the West Indies once the Spaniards realised their mistake. Meanwhile, ethnic studies scholar Tony Castanha (2011, xvi) argues that the term Caribbean is derived from the name of one of the region's Indigenous populations, the Caribs. In this sense, these place names could be said to hold different-colonialist versus self-representative-connotations. Antonio Gaztambide-Géigel (2004) summarises multiple competing hypotheses about how the term Caribbean originated to demonstrate the ambiguity of the name's true origins. He states that the word Caribbean was not widely used until the United States imposed it but now it is generally accepted as the name of the region (Gaztambide-Géigel 2004, 128), a direct contradiction to Castanha and others' hypothesis that it was self-derived by the Caribs. Gaztambide-Géigel argues that the term Caribs was used by Christopher Columbus to refer to one of the region's Indigenous populations (2004, 129). The indeterminate origins of these terms pose problems for cataloguing in GLAMs which already use Caribbean and West Indies inconsistently. While critics have proposed that Caribbean should be the term used, can the term Caribbean be considered more appropriate than West Indies in an anti-colonial framework if it was also imposed by Euro-American outsider populations? Furthermore, according to Gaztambide-Géigel, geographic terms such as Lesser Antilles, West Indies, and Leeward and Windward Islands are also imprecise because although different colonising powers may have used the same terms, they were not always referring to the same islands (2004).
The uncertain origins of all these place names make it challenging to establish standardised terms for cataloguing purposes and make it difficult to determine which names (if any) are less colonial and perhaps, therefore, more reflective of the region's self-derived identity. Cataloguing Caribbean-related materials is further complicated by the concept of Creolisation, a term introduced by Edward Kamau Brathwaite in 1974 to describe when elements of different cultures are blended together to create a new culture. Creolisation is explored in depth by the canonical Martinican poet-novelist-theorist Édouard Glissant (1969 [1989]), who coined the term Antillanité, or "Caribbeanness." Both concepts remain pertinent to Caribbean studies and, I argue, should be considered and used as a framework to influence how Caribbean-related collection materials are catalogued. Using a Creole-informed approach would provide space to necessarily nuance the Caribbean place names used, but would not easily conform to the rigid metadata standards implemented across GLAMs that strive to enforce strict and simplistic categorisation, with little to no flexibility to account for more nuanced or multifaceted concepts, such as Creolisation or multilingualism. As Gaztambide-Géigel (2004) notes, one of the most prominent challenges facing Caribbean GLAM professionals within the region is how to make Caribbean-related materials accessible for a multilingual regional audience. Due to its multinational colonial past, English, Spanish, French, and Dutch are all official languages in different parts of the region. The existence of numerous official and unofficial Creolised languages like Haitian Creole, Papiamento, and patois only further complicates attempts to standardise Caribbean place names (Sanabria 2007). 5 In addition to struggling to name Caribbean places, Caribbean studies scholars also struggle to define or delimit what, exactly, constitutes the Caribbean area. Ileana Sanz discusses the "oscillating pendulum" between regionalism and nationalism within the "Circum-Caribbean" or "Wider Caribbean," which she defines as "a space which includes the insular Caribbean, together with the northern coastal states of South America, Central America, and the Caribbean coast of Mexico" (2009, 1). Despite noting that relevant literature remains sparse because Circum-Caribbean theory is still fairly new, Sanz demonstrates how challenging it is to extricate the Caribbean from other historically linked areas. For example, distinguishing Latin America from the Caribbean is difficult due to their similar geographical and historical colonial contexts, used by "visionary and . . . founding father of Latin America" Simón Bolívar to invent "his [Latin] America" (Sanz 2009, 4-5), which included island territories such as Puerto Rico, Cuba, and the Dominican Republic-places that are, geophysically, part of the Caribbean archipelago. Likewise, distinguishing the Caribbean from Marcus Garvey's "Black America" is difficult since Garvey sought to integrate all people of African origin-be they in the Caribbean, the United States, or Africa (Sanz 2009, 10). Sanz's article is situated within the discourse of space and place, topics also addressed by Latin and Caribbean studies professor Michelle Stephens (2013) and Caribbean and Africana studies professor Carole Boyce Davies (2013).
Davies suggests that "Caribbean Spaces" are ever expanding and without boundaries, a phenomenon signified by the Caribbean diaspora, "re-creations of Caribbean communities following migration" (2013, 1), while Stephens questions whether the Caribbean transcends geographical place (2013, 8). The tensions between using insular versus archipelagic and nationalist versus regionalist approaches are clearly visible in both authors' works. Overall, complex historical integrationist approaches to identity and sovereignty still exist and impact how the Caribbean is represented in institutional collection catalogues. The origins of terms like Caribbean and Windward Islands remain unclear, and the multiple languages spoken in the region further complicate attempts to catalogue Caribbean-related materials in a way that will serve their communities. Below, I will demonstrate how these complexities make it difficult to conform to Euro-American-created authority controls used by GLAMs in their collection cataloguing. Using an applied case study that focuses on the Getty Thesaurus of Geographic Names® in conjunction with a postcard collection at Cambridge University Library, I outline how a dual-indexing approach can alleviate some of these cataloguing challenges. Analysing the Getty Thesaurus of Geographic Names® I chose the Getty Thesaurus of Geographic Names® (henceforth referred to as the TGN) for analysis since it is the primary geographic authority control used in the Royal Commonwealth Society (RCS) department at Cambridge University Library, where my case study collection resides (Paul 2017, Section 6). The TGN is a "structured [resource] that can be used to improve access to information" (Getty Research Institute 2019) and contains over 2.5 million place names. Like all other Getty Vocabularies, it is constructed to enable its use in linked open data (LOD) and released as LOD to promote making "knowledge resources freely available to all." The TGN's development claims to be "focused on the historical world and places necessary for cataloguing and discovery of visual works" and cites "adding archaeological sites, lost sites, and other historical sites, particularly Pre-Columbian places and places in Asia, Middle East, Africa, and others" 6 as a core focus for its development (Getty Research Institute 2019). Primary users of the TGN include art museums, special collections, art libraries, archives, visual resource collection catalogers, bibliographic projects concerned with art, researchers in art and art history, and the information specialists dealing with the needs of these users. The TGN is available to users in a variety of formats. Users can search the thesaurus online (the primary method used for this research) and download datasets in various formats from the Getty Research Institute's website. In order to analyse the TGN's representation of the Circum-Caribbean, I visually charted all authority records pertaining to the Circum-Caribbean in the TGN to a national or territorial level using software called Creately. This allowed me to review all relevant authority records simultaneously and made it easier to analyse the thesaurus's successes and failures by seeing the relationships between authority records all at once. In this way, the authority control's representation of the region could be viewed as an incomplete puzzle, but one that is complete enough to begin envisioning what the missing pieces might look like, where they might fit, or which pieces have been misplaced.
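Because the TGN is published as linked open data, a region-wide inventory of this kind can also be approximated programmatically through the Getty's public SPARQL endpoint rather than by browsing record by record. The sketch below is illustrative only: the predicate names (gvp:prefLabelGVP, gvp:parentString) follow my reading of the Getty LOD documentation and should be verified against it, and filtering hierarchy strings on the word "Caribbean" is a deliberately crude stand-in for the careful manual charting used in this research.

import requests

# Illustrative query against the Getty vocabularies SPARQL endpoint; the
# predicates and the string filter are assumptions to check against the docs.
QUERY = """
PREFIX gvp:  <http://vocab.getty.edu/ontology#>
PREFIX xl:   <http://www.w3.org/2008/05/skos-xl#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX tgn:  <http://vocab.getty.edu/tgn/>

SELECT ?place ?label ?parents WHERE {
  ?place skos:inScheme tgn: ;
         gvp:prefLabelGVP/xl:literalForm ?label ;
         gvp:parentString ?parents .
  FILTER(CONTAINS(STR(?parents), "Caribbean"))
}
LIMIT 50
"""

resp = requests.get(
    "http://vocab.getty.edu/sparql",
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
for row in resp.json()["results"]["bindings"]:
    print(row["label"]["value"], "|", row["parents"]["value"])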
I then analysed this visual chart in an effort to determine how well the TGN represents the Circum-Caribbean, asking questions like: what names are included or excluded? What sub-regional groupings are represented? Who are the main sources providing information about this region? This methodology provides a framework to approach and analyse other authority controls from a Circum-Caribbean perspective and can be adapted to investigate the representation of any other place, culture, or concept in a chosen authority control.

TGN at Cambridge University Library
Before analysing the TGN, it is important to explain how it is integrated and used at Cambridge University Library (the UL), as this context significantly impacted my analysis. Many institutions using the TGN as an authority control in their cataloguing workflows have TGN data incorporated within their collection management system, obtained from third-party vendors. Indexing using the principle of specificity, cataloguers identify and select the most specific and relevant location of an object from the list of TGN data in the collection management system's respective field, and the hierarchical relationship(s) are automatically generated from the LOD. However, not all institutions can support this embedded functionality. At the time of this research, the UL did not have the capacity to incorporate TGN datasets into its collection management system. Instead, cataloguers manually identified and assigned index terms by referring to the library's identified authority control lists-of which the TGN is one. This manual application process is not ideal because it requires substantially more time for the cataloguer and creates ample room for human error. This research takes into consideration how this manual application at the UL can affect users. This research was carried out in early to mid-2020, prior to the UL's transition from a digital repository system called JANUS to the current platform, ArchiveSearch. The user interface represented in screenshots within this paper has thus changed, and some of the issues highlighted here may have been fixed during this system migration.

Physical and Political Approaches to Geography in the TGN
The TGN attempts to label places beyond strictly geophysical boundaries. The standardised place names, called preferred names, are classed by place type as either physical features or political entities. This categorisation of place type can be seen in all levels of the TGN: from directly below World (facet) at the upper level of the hierarchy, all the way down to localities within a specific territory or nation. If a user sees the text [view physical features], it means they are currently viewing political entities in a hierarchy, and vice versa. These two main branches can be broken down into further place types that are indicated as qualifiers in brackets following the place name-for example: (island), (island group), (nation), and (dependent state) (see Figure 1). The TGN's acknowledgment that there are different-for example, physical and political-approaches to defining or classing places is commendable. However, in so doing, some places end up with duplicate or near-duplicate index terms to reflect these different approaches, but with a near-identical or seemingly identical name. For example, a physical feature name like Jamaica (island) and a political entity name like Jamaica (nation) both refer to the island country Jamaica in the Caribbean.
In the back end of a cataloguing system, the cataloguer might index an object using both Jamaica (island) and Jamaica (nation). However, the place type-(island) or (nation)-might not be visible on the front-end user interface of the collection management system, depending on the software being used. This was the case with the RCS's online catalogue, and it is problematic for two reasons. First, it can easily confuse the user since (without a visible place type qualifier) it appears as though there are multiple identical headings for the same place in the index list-e.g., Jamaica and Jamaica. This can be difficult to navigate as a user. Second, if consistent cataloguing methods are not performed, seemingly duplicate or near-identical index terms can fracture the index's ability to perform accurate collocation-for example, if some objects are catalogued with Jamaica (island), others with Jamaica (nation), and others with both index terms. Similar issues arise when one place is indexed with near-identical index terms-for example, Bahama Islands (island group) and Bahamas (nation) (Figure 2). A general user, who is not an expert in cataloguing or indexing, will likely be confused as to why there are multiple terms that seem to denote the same place and would need to conduct a verification process on any retrieved results to ensure that they have collocated all relevant materials and/or not obtained duplicate results for their query. Therefore, while considering both physical and political concepts of geography can expand the breadth of potential research avenues of a collection's holdings-at least for the Circum-Caribbean region-it is clear that further thought and consideration is necessary to determine how near-duplicates within an authority control can be reduced to enable more consistent implementation by the cataloguer and better functionality for the user.

Polyhierarchical Relationships and Multiple Index Paths
The polyhierarchical nature of the TGN leads to duplicated names in another way, whereby the same place can be traced through several different index paths (see Figure 3). Each TGN authority record provides the term's hierarchical position (what I refer to as the "preferred hierarchy") as well as additional parents ("additional hierarchies"). However, there is no apparent explanation as to how the Getty determines the default (preferred) hierarchy. According to the Getty's application guidelines, the most specific place should be selected from the preferred hierarchy path when indexing materials. However, this raises the question: why bother having the other hierarchies if they are not recommended for use? It seems as though each institution or department using the TGN would be best served by creating a detailed cataloguing handbook to guide all staff (and/or volunteers) in applying the TGN in a consistent manner-including whether staff should only index according to the Getty's guidelines or should expand their indexing strategies to include all hierarchies and all unique terms from all hierarchies combined. Since the thesis of this paper is that a dual-indexing strategy incorporating general and specific (regional and territorial/national) subject heading approaches would serve more users, including all terms from all hierarchies is beneficial because it increases the number of controlled access points that could allow for regional, sub-regional, or territorial/national levels of result retrieval.
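To see concretely how merging hierarchies widens the pool of controlled access points, the following minimal sketch (in Python, purely illustrative and not part of any RCS or Getty workflow) collects every unique term from the three Anguilla index paths discussed in the next paragraph.

```python
# Illustrative sketch: collecting every unique place name across multiple
# TGN-style hierarchy paths as potential controlled access points.
# The three paths below mirror the Anguilla example discussed in the text.

anguilla_paths = [
    ["World (facet)", "British West Indies (general region)",
     "Anguilla (dependent state)"],
    ["World (facet)", "West Indies (archipelago)",
     "British West Indies (general region)", "Anguilla (dependent state)"],
    ["World (facet)", "North and Central America (continent)",
     "Anguilla (dependent state)"],
]

def access_points(paths):
    """Return the set of unique index terms found across the given paths."""
    terms = set()
    for path in paths:
        terms.update(path)
    return terms

single_path = access_points(anguilla_paths[2:3])   # indexing from the third path only
all_paths = access_points(anguilla_paths)          # merging every available path

print(sorted(single_path))
print(sorted(all_paths))
print("Extra terms gained:", sorted(all_paths - single_path))
```

Indexing from the third path alone yields three terms, while the merged set also exposes British West Indies (general region) and West Indies (archipelago)-precisely the regional and sub-regional access points that the example below shows would otherwise be lost.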
If a cataloguer consulted only one path (the preferred hierarchy) for a place that is referenced by multiple possible paths (additional hierarchies), they will potentially omit other index terms-unique to the additional hierarchies-that could enable additional discoverability and collocation for some users' queries at regional or sub-regional levels. For example, there are three index paths that could be used to index the Caribbean territory Anguilla. These are:
1. World (facet) > British West Indies (general region) > Anguilla (dependent state) 7
2. World (facet) > West Indies (archipelago) > British West Indies (general region) > Anguilla (dependent state)
3. World (facet) > North and Central America (continent) > Anguilla (dependent state)
If a cataloguer used only the third index path, they would omit the additional terms British West Indies (general region) and West Indies (archipelago) found in paths one and two, terms that could help a user collocate items at a broader regional or sub-regional level. Worse still, if a cataloguer simply (manually) indexed using the term Anguilla (dependent state), the omission of any other terms would only enable a researcher to gather information on a national level, excluding Anguilla-related materials in any regional or sub-regional searches. It seems unlikely that all multiple index paths can be eradicated, but there are some that seem unnecessary, such as the example illustrated below for the island nation Dominica (see Figure 3). It is unclear why both hierarchies need to exist for Dominica (island) and why Windward Islands (island group), in the additional parents hierarchy, is not simply incorporated into the hierarchical position hierarchy since both that term and Dominica (island) fall below North and Central America (continent). Thus, a proposed single path including all terms is:
World (facet) > North and Central America (continent) > Windward Islands (island group) > Dominica (nation) > Dominica (island)
Such a solution may solve some instances of multiple index paths, but not all. For example, Figure 3 shows only two of at least six possible index paths leading to Dominica. Furthermore, a simple but frustrating issue with the TGN's hierarchies is the sheer number of inconsistencies that can be found within the structure itself. For example, it is unclear why the Greater Antilles (island group) is hierarchically below the continental level North and Central America (continent) but that the same is not applied to Lesser Antilles (island group) since they are both listed under West Indies (archipelago) in an additional parent-and West Indies (archipelago) is part of North and Central America (continent) (Figure 4). Another structural inconsistency is seen for the nation Antigua and Barbuda: it is unclear why Antigua is listed as a physical feature and Barbuda is listed only as a political entity when, in reality, it is also its own distinct island (Figure 5). Should Barbuda not also be represented as a physical feature, namely Barbuda (island)? Another structural issue is the Getty's attempt to hierarchically represent places by continent. For example, Suriname is often overlooked when most people think about the Caribbean despite its similar colonial history and its membership in regional organisations such as the Caribbean Community (CARICOM). 8 Yet, there is absolutely no explicit indication of the nation's affiliation to the region in the TGN.
Further, its index path World (facet) > South America (continent) > Suriname (nation) provides no possible way to collocate Suriname-indexed items with items of other Caribbean territories. Like Suriname, Belize is often overlooked when people hear the name Caribbean, but it is another mainland country affiliated with Caribbean regional organisations. However, unlike Suriname, it has several index paths. Of those paths, at least one explicitly indicates some affiliation with the Caribbean and another one, branched under North and Central America (continent), would allow for a continental-wide collocation of Caribbean-related items since most island territories are directly or indirectly hierarchically linked to this continental heading. Perhaps most interesting is the TGN's use of a further term, Central America (general region), in one of Belize's hierarchies. This raises further uncertainty as to why the TGN has not implemented an umbrella term under North and Central America (continent) that physically encompasses, at the very least, all of the insular Caribbean. 9 By labelling Caribbean territories under North and Central America (continent) but not having an intermediary term (or listing them under Central America [general region]), the TGN implies that Caribbean territories are part of North America, which is simply not the reality. 10 The final structural inconsistency that I will mention is that British West Indies (general region) is not indexed below a continent like other political entities. Meanwhile, the Netherlands Antilles (former nation/state/empire) 11 and Antilles Françaises (general region) have been structured underneath a continental parent-although doing so poses unique challenges due to colonial histories. 12 While physically in the North and Central American area, politically the Netherlands Antilles is affiliated with the Netherlands, in Europe. Thus, the island group appears under both continental parents (see Figure 6). The same conundrum arises for overseas territories like Martinique. However, while confusing, these multiple continental headings are beneficial because they enable collocation from a geophysical or political perspective, and better reflect the complexity of some Caribbean places.

8 CARICOM (formed July 4, 1973) uses functional cooperation to achieve regional integration in the Caribbean, with four main driving pillars: economic integrity, foreign policy coordination, human and social development, and security. There are fifteen member states and five associate members.
9 The insular Caribbean is generally considered to be the islands extending from the northern coast of South America all the way up to and including the Bahamas and Turks and Caicos. If the TGN included a term such as Caribbean (general region) for this insular definition, it could also implement a term such as Circum-Caribbean (general region) that could account for politically associated mainland territories and neighbouring countries.

Ingrained North American and European Bias
An overwhelming number of Euro-American sources and contributors inform the place names and historical notes comprising the TGN, and their Western biases are inherently embedded in the system, as seen by the Eurocentric focus in historical notes on authority records and a lack of Indigenous place names included in each record's list of alternative names. Sources and contributors are included toward the bottom of each record's web page (Figure 7).
A quick look usually confirms that most contributors are North American or European, like the Cambridge World Gazetteer, Webster's New Geographical Dictionary, and UN Terminology bulletins. A cursory glance at this section for more records across the region reveals the same pattern. Additionally, Notes are decidedly biased to Eurocentric historical perspectives. Most notes indicate when territories were colonised, but very few include important national information such as dates of independence. Such significant dates should be considered unequivocally historic and given, at the bare minimum, equal importance to the colonial histories that presently dominate the Notes. Some notes falsely claim the extinction of Indigenous populations. 13 Further yet, there is a sense of contradiction that occurs between the Notes on some authority record pages, such as the one for Guyana. Despite making comments on Amerindians in Notes on other authority record pages, the Notes for Guyana simply state that "little is known" about the Amerindians. Even a cursory online search will show that much research has been conducted about these Indigenous populations. Further, the vast majority of Caribbean-related preferred names in the TGN do not offer any Indigenous names in their Names list, going directly against the TGN's claim that it is focused on "the historical world" and "adding . . . Pre-Colombian place names" to parts of the world impacted by post-Colombian colonisation (Getty Research Institute 2019). In fact, during my research, the only Caribbean authority records in the TGN I found with an Indigenous name included in their list of variant names were Jamaica (nation) and Jamaica (island), which both listed Xaimaca as an "Arawak" name meaning "land of the springs." However, even the use of the term Arawak to denote a language is vague because there are multiple distinct Arawak languages, as evidenced in the Diachronic Atlas of Comparative Linguistics. 14 These aspects demonstrate that the Getty still has a lot of work to do to make its vocabulary anti-colonial and more inclusive of the region's multiculturalism. The only way that these Euro-American biases will be overcome and balanced by more neutral and culturally relevant perspectives is if the Getty takes action to involve Caribbean institutions and experts on the region's social, political, and geographic histories and cultures.

10 At some point between September 2020 (when this original research concluded) and May 2021 (when this article was submitted), the TGN did just this, integrating a new physical feature index term Caribbean (general region). Time did not permit investigations of this newly added term and its impacts on the TGN's representation of the Caribbean before the article's publication. Further research is therefore required to determine how the addition of Caribbean (general region) has impacted the TGN's representation of the Caribbean.
11 It is also unclear why the TGN selectively applies the place type of former nation/state/empire since every territory in the Circum-Caribbean that is today independent constitutes part of a former state or empire.
12 Though not discussed previously, this challenge of trying to index places physically and politically leads to multiple index paths by way of continental hierarchy paths.
13 There are still Indigenous groups living in several Caribbean territories, including Dominica, Guyana, and St. Vincent.
Overall, the difficulty of defining and delimiting the Circum-Caribbean is reflected in the Getty TGN's attempts to organise this region into an indexable and hierarchical form. This analysis demonstrated that there are many weaknesses in how the TGN depicts the Circum-Caribbean region. The most notable weaknesses include a recommended hierarchy that often omits important place names or relationships that could enable additional discoverability and an overabundance of Euro-American sources and contributors that leads to a biased, Eurocentric perspective of this previously colonised and multicultural region. One of the redeeming qualities of the TGN is its attempt to recognise geography from both physical and political perspectives, although this brings its own set of challenges.

The Getty Thesaurus of Geographic Names® and the West Indian Postcard Collection at Cambridge University Library: A Case Study
This section summarises the methodology and findings of a case study in which I indexed the West Indian Postcard Collection (WIPC) at the RCS department at the UL using augmented applications of the TGN. 15

Background and Methodology
The level of cataloguing across different RCS collections varies widely. The WIPC had only undergone minimal cataloguing prior to my arrival. There was an eleven-page, typescript, hard-copy catalogue that listed postcards at item level with an accession number, the published caption, and sometimes a short description of a set of cards. I sought to augment the application of the TGN to improve the intellectual discoverability-determined by the number of TGN place names used as controlled access points-of this collection. My primary practical tools in carrying out this research were two spreadsheets that I created: a consistency control spreadsheet and a catalogue spreadsheet. I began by listing each unique place identified in the WIPC postcards in one column in the consistency control spreadsheet, so that each row represented a unique place. Each place name was listed only once. I then added additional columns to analyse different ways of indexing using TGN names. These "listing" columns created a space for me to do the following for each place name:
• List only the preferred names in the preferred hierarchy (see Table 1, row 1)
• List the preferred names in preferred and additional hierarchies (see Table 1, row 2)
• List the LOD names for preferred names in the preferred hierarchy (see Table 1, row 4)
• List the LOD names for preferred names in preferred and additional hierarchies (see Table 1, row 5)
Breaking down different applications of the TGN in these ways allowed me to see how many more controlled access points could be gained by incrementally incorporating preferred names in additional hierarchies and then also by including LOD names linked to preferred names. I then inserted calculation columns to the right of each of the aforementioned listing columns. The calculation columns counted the number of names in each listing cell left-adjacent to the calculation cell. Finally, two additional columns used these counting columns to calculate the rate of improvement of accessibility. One column divided the number of all preferred names in all hierarchies for a given place by the number of preferred names only found in the preferred hierarchy for that place (see Table 1, row 3).
The other column divided the number of all preferred names and their corresponding LOD names from all hierarchies for a given place by the number of preferred names only found in the preferred hierarchy for that place (see Table 1, row 6). The average of all counting columns was then calculated and inserted as a total row at the bottom of the consistency control spreadsheet. The consistency control spreadsheet served as a tool to ensure my own consistency when indexing the WIPC in the catalogue spreadsheet. Primarily using only information available on the postcard itself, I listed each object as a row in the catalogue spreadsheet, including the accession number, maker(s), published title, and postcard set title (if applicable) in corresponding columns. I then indexed each postcard in the catalogue spreadsheet by identifying the location depicted in the postcard, finding that place's row in the consistency control spreadsheet, and copying and pasting the corresponding listing and calculation columns from that row into the corresponding object's row in the catalogue spreadsheet (ensuring that calculation formulas adapted to the new spreadsheet). Thus, the collection was indexed using multiple applications of the TGN. Finally, the averages from each spreadsheet were averaged to give a final overall result (see Table 1). These are the rates discussed below. The calculations generated in the consistency control spreadsheet represent the truest average (in this project) for how much the intellectual discoverability (as defined above) of Circum-Caribbean materials could benefit from each indexing approach, since each place was listed only once in that sheet. Ideally, these same steps and calculations would be applied using a similar consistency control sheet which incorporates all Circum-Caribbean places identified in the Getty TGN (not just those places found in the WIPC).

Results and Observations
There were sixty-three unique places identified in the WIPC that could be located in the Getty TGN. These sixty-three places are represented as sixty-three rows in the consistency control spreadsheet, and the 240 postcards that were catalogued are represented as 240 rows in the catalogue spreadsheet. Occasionally, I conducted minimal additional research to determine which parish the location of a postcard was in if the city itself was not listed in the TGN. This allowed the postcard to be indexed more specifically based on the available index terms in the TGN rather than just by its territory. Since some territories are represented in the collection by multiple postcards, the calculations performed in the catalogue spreadsheet provide the specific rates of improvement of intellectual discoverability through expanded indexing specifically for the WIPC. Surprisingly, the calculated results in the catalogue spreadsheet did not differ greatly from those in the consistency control spreadsheet even though several places are represented substantially more than others in the WIPC (and therefore in the catalogue spreadsheet). What quickly became clear was that places benefit significantly from having a more extensive TGN authority record. In short, if there are no LOD names or only one name variant-for example, West Indies only has one-including the LOD really does not do much to improve the intellectual discoverability of that place.
However, a place with dozens of LOD names-for example, Bermuda and some places in Jamaica-benefits significantly in terms of intellectual discoverability by having those LOD name variants included. In the context of this project, which considers a higher number of controlled names as an aid to intellectual discoverability, this statistical result demonstrates the importance of having additional LOD name variants listed on the authority record. Furthermore, having numerous index paths (i.e., additional hierarchies) may significantly benefit the intellectual discoverability of a place because it means that there are more (regional and sub-regional) place names under which the (national/territorial) place can be found. This also means that users conducting broader research-for example, trying to identify all material pertaining to Leeward Islands rather than one specific island-could benefit from including additional names found only in the additional hierarchies. In many cases, broader place names (such as sub-regional groupings) are not included in the TGN's preferred hierarchies. Based on observations that more LOD names and more available index paths lead to increased discoverability, the places that benefited the most from a broader application of the TGN thesaurus are Montego Bay (inhabited place), which gained an additional thirty-four access points using LOD, and Bermuda (dependent state), which gained an additional thirty-three access points. It is worth noting that several other places in Jamaica (Blue Mountains, Spanish Town, Westmoreland, Port Antonio) gained twenty-nine or thirty additional access points using LOD, likely due to the fact that they have longer index paths that include more sub-regional, country, parish, and county names as well as corresponding LOD names for each of those places. Places in Barbados benefited the least from these broader indexing applications. They gained only an additional eleven to thirteen access points through LOD because Barbados and its places do not have LOD name lists as long as other places like Jamaica and, unlike many places in the Caribbean, Barbados only falls under one sub-regional name in the TGN: British West Indies (general region). Most places fall under a political or historical subset like this one as well as additional geophysical subsets like Lesser Antilles, Greater Antilles, Leeward Islands, or Windward Islands, which thereby increase index paths and consequently the number of place names, which in turn increase the number of LOD names. 16 It is interesting to note that the term West Indies (archipelago)-which would be useful for a user seeking a broader regional collocation of results-has only one LOD name listed on its authority record. If there were more names listed, a broader application of the TGN could greatly improve accessibility through this term. For example, if terms like Caribbean were linked as LOD names for West Indies, users searching in a database with enabled LOD could retrieve results labelled as West Indies or Caribbean. It is unclear why this regional labelling has not received greater attention in the TGN, especially when there are several historical terms that could be added as LOD names to further expand discoverability. 17 The data in Table 1 shows that broadening the application of the TGN can significantly improve discoverability by creating more controlled access points (names), especially when systems employ LOD frameworks.
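The rates of improvement reported in Table 1 reduce to simple counts and ratios. As a minimal sketch (the place, its hierarchies, and the LOD name counts below are hypothetical, not values taken from the spreadsheets), the calculation for a single place record might look like this:

```python
# Illustrative sketch of the access-point counting behind the rates of
# improvement; all names and LOD counts here are hypothetical placeholders.

place = {
    "preferred_hierarchy": [
        "World (facet)", "British West Indies (general region)", "Example Island (nation)",
    ],
    "additional_hierarchies": [
        ["World (facet)", "West Indies (archipelago)",
         "Leeward Islands (island group)", "Example Island (island)"],
    ],
    # hypothetical number of linked open data (LOD) name variants per term
    "lod_names": {"Example Island (nation)": 12, "Example Island (island)": 9,
                  "Leeward Islands (island group)": 3},
}

preferred = set(place["preferred_hierarchy"])
all_terms = set(preferred)
for path in place["additional_hierarchies"]:
    all_terms.update(path)

lod_total = sum(place["lod_names"].get(term, 0) for term in all_terms)

n_preferred = len(preferred)          # preferred names in the preferred hierarchy only
n_all = len(all_terms)                # preferred names across all hierarchies
n_all_plus_lod = n_all + lod_total    # all preferred names plus their LOD name variants

print(f"Improvement from additional hierarchies: {n_all / n_preferred:.3f}x")
print(f"Improvement from hierarchies plus LOD:   {n_all_plus_lod / n_preferred:.3f}x")
```

Applied to the collection-wide averages reported in the next paragraph (3.746 preferred-hierarchy names against 28.650 names once all hierarchies and their LOD variants are counted), the same division yields the just-over-seven-fold overall increase in controlled access points.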
The Getty recommends using the TGN's preferred names through the preferred hierarchy. These names and hierarchies are what the RCS department uses to index their collections since their catalogue database does not have the capacity to handle importing the TGN's LOD. The data above indicated that when all preferred names from the preferred hierarchy as well as additional hierarchies were included, the number of access points, on average, more than doubled: 2.038 times the number of access points when compared to using only the TGN's preferred names in the preferred hierarchy in the catalogue spreadsheet and 2.112 times the number of access points as calculated in the consistency control spreadsheet (Table 1, row 3). Intellectual discoverability, in terms of the number of access points, further increased when the use of the TGN's LOD names in the indexing approaches was considered. As seen in Table 1 (row 4), even including LOD names just for the preferred names in preferred hierarchies gave an additional 13.667 access points in the consistency control spreadsheet and 12.404 additional access points in the catalogue spreadsheet, an average increase of an additional thirteen controlled names between the two spreadsheets (rounded down to the nearest whole number). Finally, when the indexing approach was expanded to include all preferred names in all (preferred and additional) hierarchies and all of those names' relevant LOD names, accessibility increased yet again. On average, the number of access points increased from 3.746 (using only preferred names in the preferred hierarchy) to 28.650 (using all preferred names in all hierarchies and their corresponding LOD names) in the consistency control spreadsheet, an increase of 24.904 additional access points. In the catalogue spreadsheet, the number increased from 3.324 to 24.955, a difference of 21.631 access points. On average, then, the number of controlled access points between the spreadsheets increased by twenty-three when all preferred names and their LOD names in both preferred and additional hierarchies were included. Finally, by dividing the total number of all possible access points including preferred names in all hierarchies and their LOD names (Table 1, row 6) by the number of preferred names in preferred hierarchy (Table 1, row 1), the overall average discoverability was seen to increase just over seven-fold.

16 For each preferred name (expressed as a row in the consistency control spreadsheet) and each postcard (expressed as a row in the catalogue spreadsheet), this rate of improvement was calculated by dividing the "Total No. of Preferred Names from All Hierarchies" by the "No. of Getty TGN's Preferred Names in Preferred Hierarchy."
17 The recent addition of Caribbean (general region) likely alleviates some of these concerns. The delimiting definitions of "West Indies" and "Caribbean" would need to be carefully considered and thoroughly investigated prior to conducting further research on the TGN's addition of Caribbean (general region) in order to determine which TGN places are linked to West Indies hierarchies, which are linked to Caribbean hierarchies, and which are linked to both.

Conclusion
Authority control seeks to standardise aspects of cataloguing and indexing. Controlled vocabularies and thesauri-a key component of authority control systems-might help in standardising the terminology used and in facilitating interoperability across software and institutions.
However, they ultimately can hinder accurate indexing with regard to the Circum-Caribbean (and many other non-Euro-American regions) by leaving out valuable and nuanced access points. The controls we use to classify, categorise, and organise our collections in informational systems have become dangerously ingrained in the daily grind of processing and administering collections, to the point of becoming second nature for veteran cataloguing professionals. This project and my ongoing research seek to invoke an "infrastructural inversion" (Bowker and Star 2000, 156) in which we think more critically about what it means to organise information and how the decisions made in creating information organisation systems can influence knowledge production, perpetuate biases, and affect information retrieval. Considering how information is organised and what knowledge is created or privileged in the construction of that organisation is essential to ensuring equitable access to all collection materials. This research demonstrated that the intellectual discoverability of Circum-Caribbean materials in collections could be enhanced if a dual-indexing approach is used when indexing with the TGN. In this way, general (regional) place names and specific (territorial/national) names would both be included. As scholars such as Linnea Marshall (2003) have argued, broad and specific subject headings (or, in this case, geographical indexing) serve different sets of users, and it would be beneficial to consider how best to serve both sets of users. I have begun studying several other authority controls from a Circum-Caribbean perspective, but I have yet to find one that is nuanced enough to adapt to the peculiarities and Creolistic aspects of the Caribbean or that does a good job of representing the region's Indigenous histories and languages. In conclusion, while information standardisation has come a long way and certainly has its advantages, there is room for improvement. GLAM professionals could better serve their users by questioning the authority controls, controlled vocabularies, and cataloguing procedures they are implementing. Doing so could reveal the biases in the systems and cataloguing practices being used and could demonstrate how their use suppresses materials related to certain places and cultures. In the case of the TGN's representation of the Circum-Caribbean, a dual-indexing approach is one possible strategy to make Circum-Caribbean-related materials more visible.
Epidemiological analysis of cardiac trauma victims at a referral trauma hospital: a 5 year case series

ABSTRACT
Objective: to describe, analyze, and trace the epidemiological profile of cardiac trauma victims at a referral trauma hospital of a major urban center. Methods: a case series study to review, describe, compile and analyze the medical records of all patients sustaining traumatic cardiac injuries from January 2015 to January 2020, admitted to the referral trauma hospital of Curitiba, Brazil. Patients sustaining traumatic heart injuries were identified using the hospital's database. Patients who died prior to reaching hospital care were excluded. Results: all 22 cases were urban victims, mostly penetrating injuries (12 stab wounds, 9 gunshot wounds); 82% were male; mean age, 37.1 years. 17 cases (77%) occurred during night hours, 15 between Friday and Sunday, and 15 were admitted hemodynamically stable. Only 27% were diagnosed with FAST, the remainder requiring other imaging exams. Regarding incisions, 14 had thoracotomies, 6 median sternotomies and, in 2 cases, both. Of the injuries, 8 affected the right ventricle, 3 the right atrium, 9 the left ventricle, 1 the right coronary sulcus and 1 the anterior wall. All had cardiorrhaphy repair. 3 patients died, 17 were discharged and 2 were transferred. 17 received postoperative echocardiograms, revealing ejection fractions ranging from 55.1% to 75%. Patients spent a mean of 9.6 days in the ICU and a mean of 15.2 days of total hospital stay. The mortality rate was 14%. Conclusions: cardiac traumas predominantly occurred in adult males, due to violent causes, during night hours on weekends. The overall mortality rate found (14%), as well as total hospital stay, accords with the literature.

INTRODUCTION
Cardiac injuries are among the most severe and lethal conditions that can occur in trauma victims, being surpassed only by central nervous system injuries as the leading cause of death 1. Some of the most severe available reports have even shown mortality rates that can reach up to 95% in the pre-hospital environment, followed, in some cases, by mortality of up to 50% among the victims who survive to reach the hospital 1,2.

Cardiac traumas are classified primarily by their mechanism of injury, which may be penetrating or non-penetrating (blunt cardiac injury). Blunt cardiac injury is reported as being the most common, mainly caused by abrupt deceleration resulting from car accidents (50%), followed by pedestrian hit-and-run (35%), motorcycle accidents (9%) and falls from height (6%) 1,3, while penetrating cardiac injuries are mostly due to stab wounds (59%), gunshot injuries (26%) and other causes (15%) 4.

The initial hemodynamic state at presentation is a major factor for the outcome of these cases, as clearly evidenced by the poor survival rates of patients admitted to emergency departments already in shock, which are 35% after penetrating injuries and a mere 2% after blunt cardiac traumas 5.
Another major factor for the outcome is the diagnostic management. Hemodynamically stable patients may be screened for cardiac trauma with electrocardiography, with a sensitivity of 89%, or computed tomography, with a sensitivity of up to 100% and specificity of 96% for hemopericardium 2,3. These exams are preferred over troponin dosing, since the latter showed a sensitivity of only 23% 2. Nevertheless, the most important tool for rapid assessment of cardiac trauma injuries in the emergency department, especially for hemodynamically unstable cases, is the FAST (Focused Assessment with Sonography in Trauma) exam 4.

In addition to the variables in diagnosis, the different surgical techniques that can be employed for repair are also reported to play a major role in the outcome, although median sternotomy cardiorrhaphy is the most common approach. Special attention should also be given to the different types of injuries that can be found, noting, however, that in most cases those are found on the right ventricle 2,5,6.

Patient Selection
The selected hospital is the referral hospital for severe trauma in Curitiba, Brazil, a state capital city, with an estimated population of 1.9 million during this period. Patients sustaining traumatic heart injuries were identified using the hospital's database; therefore, patients who did not survive to reach hospital care were excluded from the study. All patients who sustained traumatic heart injuries and underwent treatment at the referral trauma hospital during the 5 year period of January 2015 to January 2020 were included.

Study Design and Data Collection
A case series descriptive study was conducted through the compilation and analysis of the relevant data and epidemiological variables of the electronic medical records and test results of the selected patients.

DISCUSSION
Trauma due to violent causes among the young population represents a significant portion of the demand for emergency surgical services, especially in large urban centers. Cardiac trauma can be classified as blunt or penetrating 1, and according to current evidence, its mortality rate, when deaths occurring before hospital arrival are included, ranges from 80% to 95% 1,2,4,5. For the patients who do reach the hospital, a predominance of this type of trauma has been shown in the young male population, with a mean age of 30.8 years 7, which roughly matches the data we found in our study, since 82% of the cases were men, although with a somewhat higher mean age of 37.1 years.

To our knowledge, no published studies specifically report data on cardiac trauma injury in relation to specific hours of the day or days of the week as a potential association; therefore, no comparisons can be made. In our report, there was a higher occurrence of cardiac trauma injuries during night hours (77%) and during weekends (68%), suggesting that these findings are relatively novel.
The most frequent blunt injury mechanism in developed countries is abrupt deceleration, such as occurs in vehicle collisions or in sporting events 3,5. Penetrating cardiac injuries carry a mortality rate of approximately 40% 2 and are caused mostly by stab wounds, which are also the most frequent mechanism in developing countries 5, but also frequently by gunshot wounds, which are often more lethal than the former 8. Since this research was conducted in Brazil, a developing country, our study has shown evidence that supports those statements. Regardless of the mechanism, however, the right ventricle was expected to be the most frequently injured, based on the literature, which reports involvement in 53% of cases, followed by the left ventricle in 32% 5. It is noteworthy that our study showed broadly similar rates of injury between the two chambers, with an even higher incidence for the left ventricle (41%), as opposed to only 36% of cases showing right ventricle injuries.

Among the most common associated injuries, a study involving 1,359 patients with thoracic trauma showed 49% of them had rib fractures, 20% had pneumothorax, 12% had pulmonary contusion and 6% had thoracic vascular injury 9. When comparing those numbers with our findings, the two were markedly discrepant. Most notably, our study showed a high incidence of abdominal and diaphragm injuries, which are barely noted in that report; on the other hand, while that study showed rib fractures playing a major role, in our series they accounted for only 13.3% of the associated injuries. Nevertheless, that study also showed thoracic vascular injury with a frequency quite close to our finding of 6.6%.

Pre-hospital care needs to be fast, and it is essential to ensure initial stabilization and transportation to the trauma center for definitive treatment 10. The major factors that translate into an increased mortality rate include total scene and transit time longer than ten minutes, the need for CPR (cardiopulmonary resuscitation), exsanguination, decreased Glasgow Coma Scale, massive hemothorax, need for resuscitative thoracotomy, hypotension (systolic BP <75mmHg) and bradycardia (HR <50bpm) 11. In the modern era, the ability to save the lives of patients suffering from penetrating cardiac injury is an important marker of quality of a trauma center 9, and achieving that requires proper diagnostic tools and a team qualified to manage the cases that survive to reach the hospital.

Among the diagnostic tools, FAST has become widely available in the last decade 12, bringing along its high sensitivity (92-100%) and specificity (99-100%) for pericardial effusions 4, which in turn allows for early identification (mean of 0.8 minutes) 13 of the presence of free fluid around the heart and lungs, as well as in the abdominal cavity 3. That said, it should be remembered that its accuracy still depends on the operator 12, and as such, some authors consider complementing FAST with a transthoracic echocardiogram. The latter might show focal myocardial contusions or movement abnormalities, as well as secondary injuries such as ventricular septal defects, thrombus, or cardiac aneurysms, while also further decreasing the risks of a false negative FAST 3.
Nevertheless, the results of our study showed an unexpectedly low use of this tool for diagnosing cardiac injuries, as it was performed in a mere 27% of the cases. Although FAST findings continue to be overridden by overall clinical assessment as the basis for the final management decision 12, there is still some divergence on which patients should undergo the exam. For hemodynamically unstable cases, some authors advocate immediate operative exploration, restricting their use of FAST to stable cases, with those yielding a positive result being followed by a subxiphoid pericardial window (SPW) or other tests for confirmation before proceeding to the definitive treatment 7. In contrast, other authors discourage the use of SPW due to concerns about possible uncontrolled hemorrhages, preferring instead to employ FAST extensively, for both stable and unstable patients, indicating surgical management for the unstable when a positive FAST is identified, and reserving SPW for when it is negative but clinical suspicion remains high 12.

Another diagnostic tool widely used for hemodynamically stable patients 1 is computed tomography, since it has 100% sensitivity and 96% specificity for the diagnosis of hemopericardium 3. It also allows surgeons to identify the trajectory of the injury and the structures affected 2, a feature not achieved by FAST. Our study has shown that CT scans were the most frequently used resource for diagnosis, being performed in 73% of the patients. Chest X-rays could help identify a pneumothorax, a hemothorax, an increased cardiac area and bone fractures, as well as locate projectiles and ballistic fragments 14, though in our hospital they were required in only 23% of the cases.

The ECG has a reported sensitivity of 89% 2 and could be considered an important component in the screening of hemodynamically stable patients, often showing sinus tachycardia 3. Troponin dosage could also be performed, with a sensitivity of 23% 2. Neither of those methods was used in any of the patients in the current study, which can be justified by the longer time they require to be performed or to yield results in comparison with the imaging exams.

The management of patients with cardiac trauma is usually surgical, rather than conservative, initially applying the concept of damage control, which involves the rapid containment of hemorrhage through cardiorrhaphy, followed by resuscitation and planned reoperation 2. In this regard, the findings of our study showed the surgical approach being chosen as a rule, meaning none of the 22 cases were treated with conservative management.
Generally, for hemodynamically unstable patients the chosen surgical incision is the left anterolateral thoracotomy, and for stable patients, the median sternotomy 14. Other options include the "claw" incision (bilateral anterolateral thoracotomy in the 5th intercostal space plus a transverse sternotomy) and the extended anterolateral thoracotomy with transection of the sternum 14. In our study, we found that at admission fifteen patients presented hemodynamically stable, while the other seven were unstable. Although, as mentioned, the choice of incision is not a rule, our results deviated somewhat from that pattern, since despite there being fifteen stable patients, only nine underwent left anterolateral thoracotomy as the sole incision and six the median sternotomy as the sole incision. The remaining seven patients underwent either both of those incision options or one of the additional options.

The expected mortality rate based on the other studies was between 9.4% 10 and 30% 13, and the mean length of total hospital stay between 5 and 12.8 days 5,6. Analyzing our findings, it can be said that these expectations were met regarding mortality rate, which was 14%, and though the mean length of total hospital stay was higher (15.2 days), the median of 11 days (range 1-75) was within the expected range. Interestingly, when those variables were analyzed by separating cases into two groups, an association between blood transfusion and decreased hospital stay was seen, although without statistical comparison this cannot be fully supported. Patients who received blood transfusions spent less time in the hospital, with a mean length of total hospital stay of 10.2 days (median 10, range 1-19), in contrast with those who did not receive blood transfusion, with a mean of 25.2 days (median 22, range 6-75). Although a more focused study on this specific finding would be required for confirmation, it already suggests that blood transfusions may have a major impact on this outcome.

A study of the echocardiograms of patients undergoing cardiorrhaphy, performed before their discharge from the ICU, indicated that only 12% presented with persistence or onset of new symptoms and abnormal findings. The others were asymptomatic and had a normal result, suggesting that the benefits of performing echocardiographic studies would be mostly reserved for patients who actually present with new symptoms or an abnormal exam after the surgical procedure 13. The findings of our study, however, might suggest the opposite, since it was noted that, under the same conditions, our postoperative echocardiograms, performed in seventeen of the twenty-two patients, revealed abnormal findings in 45%. Although this was not a specific aim of this research, finding this contradiction was an unexpected but noteworthy result.

Study Limitations
The relatively small number of cases should be viewed with caution, since it hinders statistical comparisons. Nonetheless, it should also be pointed out that even in large urban center hospitals, which are regional trauma referral centers, the annual incidence of traumatic cardiac injury is low 1,5,6. Furthermore, some of the medical records did not clearly contain all the variables initially proposed for collection; this was not an impediment to the reported findings, and the authors do not feel it changed the results in any significant manner.
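Purely as an illustration of the kind of comparison a larger series could support -- for instance, contrasting total hospital stay between transfused and non-transfused patients, as discussed above -- a simple nonparametric test might be run as sketched below. The patient-level values are hypothetical placeholders chosen only to be consistent with the reported medians and ranges; they are not data from this study.

```python
# Illustrative only: a nonparametric comparison of total hospital stay between
# transfused and non-transfused patients. The values below are hypothetical
# placeholders consistent with the reported medians and ranges, not real data.
from scipy.stats import mannwhitneyu

stay_transfused = [1, 4, 6, 8, 9, 10, 10, 10, 11, 12, 13, 15, 17, 19]  # days, n = 14
stay_not_transfused = [6, 11, 17, 21, 23, 25, 28, 75]                   # days, n = 8

stat, p_value = mannwhitneyu(stay_transfused, stay_not_transfused,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```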
CONCLUSION
Cardiac lesions occurred at a low annual incidence, predominantly in 30- to 50-year-old men, due to violent causes, mostly during night hours on weekends. These patients arrived at the hospital mostly hemodynamically stable, although the majority required blood transfusions. In addition, diagnostic management by imaging and surgical incision may vary depending on the case, while the surgical treatment approach is likely to be chosen as a rule. A slightly higher incidence of left ventricle injuries was seen, along with associated injuries and intraoperative complications. The overall mortality rate was similar to or better than that found in the literature.

Considering the high severity and the number of relevant variables that impact the outcome of cardiac trauma injuries, it is therefore necessary to describe and analyze the data from previous cardiac trauma cases at referral trauma hospitals to further the current scientific evidence and improve the basis for future intervention initiatives aiming to optimize the care for these critical patients. This research aimed to describe the data from previous cardiac trauma cases at the referral trauma hospital of a large urban center, and to analyze them based on current evidence, seeking to trace the epidemiological profile of the victims of this condition in that setting.

METHOD
This study was conducted with the approval of the Referral Hospital's Research Ethics Committee, on June 25, 2020, under the CAAE (Ethics Appreciation Presenting Certificate): 33118320.0.0000.5225. Considering that no direct contact was established with the selected patients, limiting the study to the usage of previously recorded documents (medical records, test results, archived images, etc.), and also considering that it would be unfeasible to directly contact all patients (unknown address/telephone number, deaths), the Ethics Committee waived the requirement for a free and informed consent form. Nonetheless, all authors assure their rigorous commitment to preserving the confidentiality of all patients' private information included in the study and have done so by taking several necessary precautions to ensure respect and anonymity for those involved, which most importantly included, but were not limited to, refraining from reporting identity-compromising data (individual name and surname, neighborhood of residence, exact date and hour of occurrence).
Cardiac lesions found during the operation were predominantly in the left ventricle, present in 41% of the cases, while 36% were right ventricle lesions. There were also three lesions in the right atrium, one in the right coronary sulcus and one in the anterior wall and apex. Associated lesions were described in thirteen of the twenty-two patients, comprising two rib fractures, one spinal cord injury, one left mammary artery injury, three abdominal injuries, one diaphragmatic injury, one traumatic brain injury (subarachnoid hemorrhage), four pulmonary lobe injuries, one pneumothorax and one hemopneumothorax. Blood transfusions were required by 14 (63%) of the patients, with a median of 4 U (1-12 U) of packed red blood cells as part of the resuscitation efforts. There were three cardiac arrests during the operations of three different patients, with two of them returning to normal sinus rhythm after open resuscitation under direct cardiac compressions and aortic clamping, for 18 and 20 minutes, respectively. The third case underwent the same procedure but was declared dead after 25 minutes. Regarding hospital outcome, seventeen patients were discharged, two were transferred to other hospitals and three died. Only these two transferred cases resulted in a loss of follow-up, which translates into a rate of 4.4% for this study. Of the three deaths, one occurred during the operation and two in the postoperative period, respectively on the first and tenth day of hospitalization. The two patients who died in the postoperative period had five or more penetrating injuries, meaning they sustained other injuries in addition to the cardiac trauma. The patient who died during the operation had multiple associated lesions, including diaphragmatic injury, posterior stomach lesion, small bowel injury, liver injury and a reported hematoma in the retroperitoneum. All patients who died were admitted to the hospital hemodynamically unstable. Postoperative echocardiogram was performed in seventeen patients and showed ejection fractions ranging between 55.1% and 75% (mean of 66.2%). Nine of them had evidence of mild effusion in the pericardium, with no signs of tamponade or restriction of cardiac relaxation. Only one revealed a large volume effusion and signs of cardiac tamponade. Regarding postoperative complications, one case of pleural empyema was reported. This patient had a hemopneumothorax at admission and died on the tenth day of hospitalization due to sepsis. A total of 21 patients required post-operative ICU (intensive care unit) and hospital care, with the only exception being the patient who died during the operation, as mentioned. Excluding this case, the mean length of ICU stay was 9.6 days, and the mean length of total hospital stay was 15.2 days.

Table 1. Data relation of the most relevant epidemiological, diagnostic, treatment, surgical, and outcome variables.
Values are presented as number of cases and corresponding percentage of total cases (%). FAST, Focused Assessment with Sonography in Trauma; CT, Computed Tomography; CPR, Cardiopulmonary Resuscitation; EF, Ejection Fraction.

RESULTS
A total of 22 patients presented with traumatic heart injuries at the selected trauma referral hospital in the 5 year period analyzed. After reviewing electronic medical records, no cases were excluded. A final sample of 22 traumatic heart injury cases comprised the study group. Data pertaining to the most relevant findings are shown in Table 1. All 22 cases were urban victims due to violent mechanisms, two of them in 2015, seven in 2016, five in 2017, five in 2018 and three in 2019. The penetrating mechanism accounted for the majority of injuries (95%), twelve of those by stab wounds and nine by gunshot wounds; only one patient, who was found unconscious in a public road, suffered blunt cardiac trauma as a result of physical assault. Ten of the twenty-one victims of a penetrating mechanism had the cardiac injury as an isolated injury, while the rest also presented with associated injuries. Most of the victims were men (82%), with an age range of 19 to 55 years and a mean age of 37.1 years. Women's mean age (32.5 years) was lower than that of men (37.1 years). Fifteen of the twenty-two cases occurred between Friday and Sunday. Most of the cases were also admitted during night hours (77%). FAST established the diagnosis of the cardiac injury in only one case, although it revealed pericardial fluid in the pericardial window in four other patients. CT (computed tomography) was necessary for diagnosis in sixteen cases, seven of which were angiotomography, eight chest scans and one full-body scan (skull, cervical, chest, abdomen and pelvis) for the exceptional case of the patient found unconscious in a public road. Of the incisions used, left anterolateral thoracotomy was chosen in nine cases, median sternotomy in six, and seven additional incisions: five bilateral thoracotomies, and two left anterolateral thoracotomies combined with median sternotomies.
2022-03-03T16:30:59.274Z
2022-02-18T00:00:00.000
{ "year": 2022, "sha1": "8a41040cf64f713b0b177f38b37541590c08b1db", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rcbc/a/L7JW56NcbNqqGXpLPkZzKKC/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "21de0b58cca91b2c369758e011f4252004086d5a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249856807
pes2o/s2orc
v3-fos-license
Applications of Game Theory and Advanced Machine Learning Methods for Adaptive Cyberdefense Strategies in the Digital Music Industry As the likelihood and impact of cyber-attacks continue to grow, organizations realize the need to invest in specialized methods to protect their digital data and the information they circulate or manage. Due to its broad use, game theory has evolved into a concept that can be applied practically while analyzing and modifying existing cyber protection methods to arrive at the best possible conclusions. This study presents an innovative hybrid model that combines game theory and advanced machine learning methods for adaptive cyber defense strategies. Specifically, a repetitive game methodology is implemented to analyze cyber-attacks and model behaviors and study how defenders and attackers make decisions in a competing field. Based on Bayesian inference, the proposed method can predict the next steps in the game to produce the appropriate countermeasures and implement the best cyber defense strategies that govern an organization. The suggested system, introduced to the academic literature for the first time, was successfully tested in a particular application scenario involving the digital music industry and coping with impending cyber-attacks. Introduction As the cyberspace landscape evolves rapidly, a dynamic framework emerges in which very delicate balances must be observed to make optimal decisions. The asset, information, and data values that modern organizations must manage constantly increase. New cyber-attack techniques are continually developing while existing technological defense systems age, so active defense strategies must remain resilient. Furthermore, economic changes, institutional reorganizations, consumer trends, and legal and regulatory compliance requirements all impact decision-making and the overall development of sound defense strategies for an organization or company. To maintain its competitive advantage, the organization in question will need to continuously improve its defense strategies, which will be based on ongoing knowledge and utilization of the cyber threat landscape [1]. An accurate inventory of all assets, classified by their value to the organization, is critical in assessing the severity of the risks they face and, as a result, the decisions that must be made about them. Game theory [2] can be used in cybersecurity to create tangible solutions that allow the maximum utilization of existing strategies and optimize them to create a robust and long-term security environment at the organization level [3][4][5]. Using game theory principles, cyber security professionals can implement a network of specialized controls and, as a result, reduce the risk to their valuable assets. They can also identify areas with a low level of risk, maximizing their return on investment. As a result, using specialized scenarios based on game theory, it is possible to predict the attackers' strategy at each stage of the attack cycle, assisting in developing intelligent models to improve cyber security and new intelligent systems to deter attackers. This study presents applications of game theory and advanced machine learning methods for adaptive cyber defense strategies in the digital music industry.
An innovative application of a repetitive game methodology for cyber-attack analysis is proposed; using Bayesian inference [6], the next steps in the game can be predicted so that the best cyber defense strategies can be implemented to shield an organization from cyber-attacks. Literature Review The literature on game theory for adaptive cyber defense approaches is extensive, whether at the network level, where we must deal with massive amounts of raw data, or at the strategy level. In 2015, Laszka et al. [7] developed a strategy for reducing spear-phishing assaults by targeted per-user filtering criteria. They framed the task of screening harmful e-mails, both targeted and untargeted, as a security game, defined optimum filtering techniques, and demonstrated how they might be computed in practice. They put their theoretical hypotheses to the test by comparing them to two datasets taken from the actual world. Using two different sets of real data, they demonstrated that the recommended baselines result in less harm than the nonstrategic restrictions. In addition, they found that the improvement over nonstrategic criteria was more substantial for the comprehensive information. This improvement was unaffected by an increase in the number of targeted customers, which indicated that their technique scaled effectively, both analytically and in terms of how well it performed. Schlenker et al. [8] explored the critical issue of allocating cyber alarms to a restricted number of professionals in cyber security activities. They proposed the cyber-alert allocation game to investigate this issue and demonstrated how to compute the defender's best options. They offered a unique technique for addressing implementability concerns in determining the defender's best marginal tactic to resolve this game. Finally, they presented heuristics for solving big games like the one described and an objective assessment of the suggested framework and solution methods. Nguyen et al. [9] conducted a study of deep reinforcement learning (DRL) techniques used in cyber protection. They discussed various critical topics, such as DRL-based security approaches for cyber-physical infrastructure, independent intrusion detection approaches, and multiagent DRL-based game theory simulations for cyber-attack defensive tactics. Additionally, extensive debates and potential study paths on cyber security focusing on DRL are provided. They hoped that this exhaustive assessment would provide the groundwork for and support future research into the ability of DRL to address more sophisticated digital privacy concerns. Alpcan and Basar [10], in their 2010 book about network security and game-theoretic approaches, aimed to present a theoretical foundation for making resource allocation decisions that balance available capabilities and perceived security risks in a principled manner. They focused on analytical models based on game, information, communication, optimization, decision, and control theories applied to diverse security topics. At the same time, connections between theoretical models and real-world security problems are highlighted to establish the critical feedback loop between theory and practice. Hemberg et al. [11] presented an architecture for adversarial AI called RIVALS that abstractly replicates the hostile, competitive coevolutionary mechanism found in security contexts. The purpose was to develop a system capable of proactive cyber security against dynamic automated attackers.
They reviewed its present uses and how it is used to develop defensive measures. Further work will involve expanding it to enable more cyber defense purposes, creating more effective or realistic methods, applying other Nash equilibrium-finding algorithms to other cyber security challenges with established Nash equilibria [12], and analyzing efficiency. Proposed Game Strategy Many cyber-attacks follow a pattern built on repeating tactics or procedures over time. Infrastructure vulnerability control strategies, for example, can compete with current defensive systems and change their applications over time. In this sense, attackers and defenders engage regularly, and these interactions may be depicted using repetitive games, which are a type of dynamic game [13]. The concept described here is to use a repeated game to evaluate cyber-attacks and simulate some cooperative behaviors without a clear endpoint. A repetitive game begins with a static game repeated finitely or infinitely many times. A reward is given to each player who completes a specific action in this strategic stage game, and the sum of each player's gains throughout the game constitutes their final reward. In addition, Bayesian inference [14] can predict the next steps in the game to produce the appropriate countermeasures and implement the best cyber defense strategies that govern a complex system with high uncertainty [15]. The starting point for modeling the proposed system is a static game of the following format: Γ = ⟨N, (X_i, u_i) for i ∈ N⟩, where N = {1, 2, ..., n} is the set of players, X_i is the set of player i's pure strategies, and u_i is its performance (payoff) function. Assume that this stage game is repeated T times, where T is finite or infinite. Each repetition takes place over a period; a typical period (or stage) is denoted by t, where t = 1, 2, ..., T. The interaction evolves as follows [16,17]: (1) In period 1, players simultaneously select actions, which we symbolize as x^1 = (x^1_1, ..., x^1_n), where the subscript denotes the player and the superscript denotes the stage (time period). Each action x^1_i belongs to the set X_i, i ∈ N, and each player is informed afterward of the choices of the other players. The payoff of player i in this period is u_i(x^1). (2) In period 2, players again choose actions x^2 = (x^2_1, ..., x^2_n) simultaneously, having observed the outcome of period 1. (3) This process repeats in every subsequent period, with the full history of play observable to all players. (4) If T is finite, the interaction is complete at the end of period T; otherwise, the game continues in perpetuity. In the repetitive game, each player earns a sequence of payoffs (one payoff for each period). This payoff sequence is assumed to be valued using the discounted sum of the sequence terms. The term discount expresses the assumption that a person does not value current and future returns equally. Parameter δ is the discount rate of an individual [18]. The closer δ is to 0, the less the individual values a future versus a present payoff; in other words, the smaller δ is, the less the person is interested in the future, or the more impatient he is. Conversely, the higher δ is (the closer it is to 1), the more a person values a future payoff, or the more patient he is [19][20][21]. Let h^T = (x^1, x^2, ..., x^T) be a finite terminal history. If the discount rate of player i is given by the parameter δ_i ∈ (0, 1), then the discounted sum of the payoffs of player i is V_i(h^T) = Σ_{t=1..T} δ_i^(t-1) · u_i(x^t). A numerical sketch of this discounted valuation is given below.
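To make the discounting concrete, the following minimal Python sketch (not part of the original paper; the payoff sequence and the helper names are illustrative assumptions) computes the discounted sum and the discounted average of a per-period payoff sequence for a given discount rate δ.

```python
# Minimal sketch (illustrative, not from the paper): discounted valuation of a
# per-period payoff sequence, as used to evaluate repeated-game histories.

def discounted_sum(payoffs, delta):
    """Sum of delta**(t-1) * u_t over the finite history (t starts at 1)."""
    return sum((delta ** t) * u for t, u in enumerate(payoffs))

def discounted_average(payoffs, delta):
    """Normalized valuation (1 - delta) * discounted_sum, comparable to stage payoffs."""
    return (1.0 - delta) * discounted_sum(payoffs, delta)

if __name__ == "__main__":
    # A defender that keeps receiving the cooperative stage payoff of 5
    # (one of the stage payoffs quoted later in the paper) for 50 periods.
    payoffs = [5.0] * 50
    for delta in (0.3, 0.8, 0.95):
        print(f"delta={delta}: sum={discounted_sum(payoffs, delta):7.2f}, "
              f"avg={discounted_average(payoffs, delta):5.2f}")
```

For an infinitely repeated constant payoff, the discounted average converges to the stage payoff itself, which is why the valuation switches to the discounted average when T is infinite, as described next.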
Now let h^∞ = (x^1, x^2, ...) be an infinite terminal history. In this case, the discounted sum of the player's payoffs is V_i(h^∞) = Σ_{t≥1} δ_i^(t-1) · u_i(x^t). When T is infinite, instead of the discounted sum of the payoffs we use the discounted average of the player's payoffs as the valuation function, (1 − δ_i) · V_i(h^∞). Based on this relation, player i obtains the sequence of payoffs (u^1_i, u^2_i, ...). We define a number c such that i is indifferent between the series of payoffs (u^1_i, u^2_i, ...) and the constant sequence (c, c, ...). So we take the following relation [22]: Σ_{t≥1} δ_i^(t-1) · c = V_i, consequently c / (1 − δ_i) = V_i, so that c = (1 − δ_i) · V_i. The functions V_i and (1 − δ_i) · V_i express the same preferences, since one is a positive monotonic transformation of the other. Thus, the set of all terminal histories is the set of all such sequences of action profiles [3,23], and player i evaluates the terminal history h^∞ based on the function (1 − δ_i) · Σ_{t≥1} δ_i^(t-1) · u_i(x^t), or equivalently on V_i itself. So we have the repetitive game G^∞(δ). A strategy is a rule that dictates the player's behavior in response to any and all scenarios. Consequently, in a repeated game with observable actions, a single player's strategy is a guideline for picking an action at each stage (repetition) of the stage game as a function of the previous history of play. The chosen action must be feasible; that is, it must belong to the player's available options in the stage game [24,25]. Payoff pairs corresponding to the four pairs of pure strategies are feasible (in the sense that there are strategies that generate these specific returns). It should also be noted that all convex combinations of two or more payoff pairs of pure strategies also constitute achievable payoffs (using mixed strategies) [5,26]. In general, the total possible payoffs of the stage game are given by the quadrilateral in Figure 1, which has as vertices the four pairs of payoffs corresponding to the pure strategies, described by the set {(5, 5), (0, 6), (6, 0), (1, 1)}. Application Testing In order to model the proposed system, a specialized threat scenario was implemented with an application study in the music industry. This was done because the large number of visitors, combined with the enormous amount of music content consumed on a daily basis, creates a new landscape of threats in which the pure strategies for cybersecurity need to be rearranged on an ongoing basis. Following this logic, modern cybercriminals frequently employ advanced techniques, including zero-day attacks, to launch attacks on the music streaming platforms they have targeted [24,27,28]. For the implementation of the proposed system, we consider a static game Γ = ⟨N, (X_i, u_i) for i ∈ N⟩ such that (1) (u*_1, u*_2, ..., u*_n) is the payoff vector of a Nash equilibrium of Γ (the Nash equilibrium is a solution concept for non-cooperative games with two or more participants: each player is assumed to know the equilibrium strategies of the other players, and no player stands to gain by unilaterally changing his own strategy while the other players keep their strategies constant [2]; such a set of strategies and the corresponding payoffs constitute the Nash equilibrium), and (2) the discount rate δ is sufficiently large. Then there is a subgame-perfect equilibrium of the game G^∞(δ) with average payoffs (v_1, v_2, ..., v_n).
To prove the above claim, we will assume that the vector (v_1, v_2, ..., v_n) can be achieved through a vector of pure strategies x = (x_1, x_2, ..., x_n), that is, u_i(x) = v_i, i ∈ N. It should be noted that it is not necessary to assume pure strategies; if this is not the case, the vector v is achieved through mixed strategies [5,10]. Suppose that the equilibrium payoffs (u*_1, u*_2, ..., u*_n) of Γ are derived from the strategy vector (x*_1, x*_2, ..., x*_n). We consider the following trigger (grim) strategy: each player i plays x_i in the first period and keeps playing x_i as long as the outcome x has been observed in every previous period; after any other outcome, i switches to the equilibrium action x*_i in perpetuity. We will examine the players' motivations for adopting or not adopting the above strategy. Let us look at player i when all other players adopt the trigger strategy. We must determine the optimal response of i in each period in which it has observed only the result x, and its optimal response in each period in which it has observed a result different from x. Consider first the action i will take if it observes, in a period t, that a result other than x occurred in the previous period t-1. Player i knows that, in period t, the other players will choose the actions x*_{-i} and will do so in perpetuity (because they follow the trigger strategy). Therefore, the optimal reaction of i is to choose x*_i in perpetuity. Let us now see what player i will do in those periods for which the history contains only the result x. Suppose that, in some of them, i chooses to deviate from the action x_i. The optimal deviation, which we denote by x^d_i, is the one that solves max over x'_i ∈ X_i of u_i(x'_i, x_{-i}). In particular, the payoff of i during the deviation period is u^d_i = u_i(x^d_i, x_{-i}), where it is true that u^d_i ≥ u_i(x). From the next period on, the other players will start choosing x*_{-i}, after noticing the choice x^d_i of i, and the punishment period of i will therefore begin. Given this, the optimal response of i is to select the action x*_i. Thus, from the period following the deviation to perpetuity, i will have a payoff of u*_i per period, and its average discounted return from the deviation is therefore (1 − δ_i) · u^d_i + δ_i · u*_i. If, on the contrary, i, after each history that contains only x, chooses the action x_i, its payoff will be u_i, because after such a history the other players choose x_{-i}. In each of the following periods the same result will be repeated, and i will continuously gain u_i per period, so the average payoff of i in this case is u_i. Therefore, player i will select x_i after each history that contains only x if u_i ≥ (1 − δ_i) · u^d_i + δ_i · u*_i, and so the required condition on the discount rate is δ_i ≥ (u^d_i − u_i) / (u^d_i − u*_i). The possible payoffs that exceed the Nash payoffs are shown in Figure 2. A short numerical sketch of this sustainability condition follows below.
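The following minimal Python sketch (illustrative only; the stage payoffs are taken from the quadrilateral quoted for Figure 1, and the function names are not from the paper) checks this sustainability condition for a symmetric, prisoner's-dilemma-like stage game with cooperative payoff 5, deviation payoff 6 and Nash punishment payoff 1.

```python
# Minimal sketch (illustrative, not from the paper): when does the grim-trigger
# strategy sustain cooperation in the infinitely repeated stage game?

def cooperation_sustainable(u_coop, u_dev, u_nash, delta):
    """True if the average discounted payoff from cooperating is at least the
    average discounted payoff from the best one-shot deviation followed by
    permanent Nash punishment: u_coop >= (1 - delta) * u_dev + delta * u_nash."""
    return u_coop >= (1.0 - delta) * u_dev + delta * u_nash

def critical_delta(u_coop, u_dev, u_nash):
    """Smallest discount rate for which cooperation is sustainable."""
    return (u_dev - u_coop) / (u_dev - u_nash)

if __name__ == "__main__":
    # Stage payoffs taken from the set {(5, 5), (0, 6), (6, 0), (1, 1)}:
    # mutual cooperation 5, unilateral deviation 6, Nash punishment 1.
    u_coop, u_dev, u_nash = 5.0, 6.0, 1.0
    print("critical delta:", critical_delta(u_coop, u_dev, u_nash))  # 0.2
    for delta in (0.1, 0.2, 0.5, 0.9):
        print(delta, cooperation_sustainable(u_coop, u_dev, u_nash, delta))
```

With these payoffs, any discount rate of at least 0.2 makes permanent cooperation a best response to the trigger strategy, which is the "sufficiently large δ" requirement stated above.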
Next, to determine which value can best estimate the random variable, we apply Bayesian inference to create a loss function, which measures how wrong the estimate is, that is, how far the estimate of the parameter is from its actual value. This means that the goal is for the chosen estimate to minimize the loss function. For example, in the absolute error loss function, underestimation and overestimation of the parameter are punished in the same way, depending on the deviation of the estimate. In contrast, the linear loss function penalizes underestimation with a different weight than overestimation. The evaluation of the decision rules is done through the risk function, which is defined as the average value of the loss function. If L(δ, θ) is the loss function and δ = δ(x) is the decision rule, the risk function of the decision rule δ is mathematically expressed as R(δ, θ) = ∫ L(δ(x), θ) · f(x|θ) dx. In the Bayesian approach, it is possible to weight the risk function of each decision rule by the ex-ante (prior) personal opinion expressed through u(θ). So, the Bayesian risk is defined as r(δ) = ∫ R(δ, θ) · u(θ) dθ [29][30][31]. The decision rule that minimizes the Bayesian risk is the Bayesian rule. In particular, the Bayesian risk of decision rule δ under squared error loss is equal to r(δ) = ∫ [ ∫ (δ(x) − θ)^2 · p(θ|x) dθ ] · g(x) dx. Since the marginal function g(x) is nonnegative, we are interested in minimizing the inner integral, and the above function is minimized when δ(x) = E(θ|x), the posterior mean. So, through a hypothesis check, we can determine the correctness of a hypothesis that concerns an action without dealing with estimating the possible values of the strategy. Based on this view, it is possible to predict the future values of the process, as forecasting through the ex-post (posterior) distribution is immediate and accurate. Assuming that, for a random sample of observations x = (x_1, x_2, ..., x_n), it is desirable to predict a future observation y, it is necessary to calculate the a posteriori predictive distribution π(y|x). The calculation of the posterior distribution requires the marginal (prior predictive) distribution g(x) = ∫ f(x|θ) · u(θ) dθ. Observing this equation, it is easy to see that the integral contains the product of the probability function f(x|θ) with the ex-ante distribution u(θ). Thus, based on the above procedure, the ex-post prediction distribution is defined as π(y|x) = ∫ f(y|θ) · p(θ|x) dθ. The probability function of the following observations is equal to the integral of the joint probability function f(y, θ|x); it should be noted that this time the joint distribution function is conditioned on the observed data. Reapplying Bayes' theorem, we conclude that the ex-post prediction distribution is again equal to the product of the probability f(y|θ), but this time with the ex-post distribution p(θ|x), integrated over the variable θ. The effect of the ex-post information is therefore apparent. It should be noted that, from the probability function f(y, θ|x), we arrive at this form because all a posteriori information originates from the parameter θ. To prove the validity of the methodology described above, we assume that we count the number of privilege escalation attacks made against a randomly chosen account on the music streaming platform that is the focus of the current example. Suppose that the number of such attacks over a period follows the Poisson(θ) distribution, θ ∈ Θ = (0, ∞), i.e., f(x|θ) = e^(−θ) · θ^x / x!. For n observation periods, the probability of the sample is f(x|θ) = e^(−nθ) · θ^(Σ x_i) / Π x_i!. Suppose that the ex-ante distribution of θ follows the Gamma(a, β) distribution, where a, β > 0 are known quantities, with probability density function u(θ) = β^a · θ^(a−1) · e^(−βθ) / Γ(a). Therefore, after computing the marginal distribution, the a posteriori distribution of θ is seen to be Gamma(a + Σ x_i, β + n) [32]. The ex-post distribution following the Gamma distribution was expected, since the Gamma family, to which the ex-ante distribution belongs, is conjugate to the Poisson distribution. The ex-post average value for the above Gamma distribution is (a + Σ x_i) / (β + n) [30,32,33]. Let r ∼ Poisson(θ) be a new independent observation. The forecast (posterior predictive) distribution is π(r|x) = ∫ f(r|θ) · p(θ|x) dθ [34,35], which is a negative binomial with parameters a + Σ x_i and (β + n) / (β + n + 1) [36][37][38]. A small numerical sketch of this conjugate update is given below.
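As a quick numerical illustration of the conjugate update described above (not from the paper; the prior hyperparameters and attack counts below are invented for the example), the following Python sketch updates a Gamma prior with observed daily attack counts and reports the posterior mean rate and the negative-binomial predictive probability of seeing no attack in the next period.

```python
import math

# Minimal sketch (illustrative, not from the paper): Gamma-Poisson conjugate
# update for counts of privilege-escalation attacks, and the negative-binomial
# posterior predictive for the next period's count.

def posterior(a, b, counts):
    """Gamma(a, b) prior (shape a, rate b) plus Poisson counts -> Gamma posterior."""
    return a + sum(counts), b + len(counts)

def predictive_pmf(y, a_post, b_post):
    """P(next count = y) under the negative-binomial posterior predictive."""
    log_p = (math.lgamma(y + a_post) - math.lgamma(a_post) - math.lgamma(y + 1)
             + a_post * math.log(b_post / (b_post + 1.0))
             + y * math.log(1.0 / (b_post + 1.0)))
    return math.exp(log_p)

if __name__ == "__main__":
    a, b = 2.0, 1.0                  # hypothetical prior: mean attack rate of 2 per day
    counts = [0, 3, 1, 2, 4, 1, 0]   # hypothetical observed daily attack counts
    a_post, b_post = posterior(a, b, counts)
    print("posterior mean rate:", a_post / b_post)
    print("P(no attack tomorrow):", predictive_pmf(0, a_post, b_post))
```

The posterior mean here is exactly the Bayes estimator under squared-error loss discussed above, and the predictive distribution is what the defender would use to anticipate the attacker's next move.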
Conclusions. This research offered an original and forward-thinking application of game theory and cutting-edge machine learning methods for adaptive cyber protection measures. It suggested a methodology based on game theory, which concerns the study of elements that characterize situations of competitive interdependence, with an emphasis on the decision-making process of more than one decision maker. We make the assumption that every player is informed of the equilibrium tactics that the other players will use and that no player has anything to gain by merely altering the strategy that he uses. Therefore, in order to figure out which value provides the most accurate estimate of the random variable, we use Bayesian inference to devise a loss function. This function measures how inaccurate the estimate is, or more specifically, how far off the estimate of the parameter is from its actual value. This indicates that the objective is to make the value of the loss function as low as possible for the chosen estimate. Specifically, the study of features that characterize conditions of competitive interdependence includes the opponents: a methodology based on a repeated game is used to examine cyber-attacks, model behaviors, and research how defenders and attackers make decisions in an environment where they compete. The approach developed can forecast the future moves in the game to generate the proper countermeasures and apply the most acceptable cyber defense tactics that govern a company; this ability is based on the use of Bayesian inference, which is a type of statistical inference. The suggested system was tested with great success in a particular application scenario in the digital music sector, covering how to cope with upcoming cyber-attacks, and the testing was successful on both fronts. A crucial step in further developing the proposed model is the investigation of how the method parameters can be adapted to processes of dynamic and asynchronous change of the initial parameters of the evaluators. This is one of the processes that will further the development of the proposed model. As a result, the expansion and empirical exploration of the characteristics of the method estimators in finite samples, which calls for the use of Monte Carlo simulations, is also a vital component of the proposed system's further evolution. Data Availability: The data used in this study are available from the author upon reasonable request. Conflicts of Interest: The author declares no conflicts of interest.
2022-06-20T15:03:31.371Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "e2d35f02b8ce952795359164b7ac2f2de481b7c1", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2022/2266171.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20c22d0cae61ba421496c5e18780c51fe1edb83d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
195760038
pes2o/s2orc
v3-fos-license
Indel detection from Whole Genome Sequencing data and association with lipid metabolism in pigs The selection in commercial swine breeds for meat-production efficiency has been increasing over the past decades, reducing the intramuscular fat content, which has changed the sensorial and technological properties of pork. Through processes of natural adaptation and selective breeding, the accumulation of mutations has driven the genetic divergence between pig breeds. The most common and well-studied mutations are single-nucleotide polymorphisms (SNPs). However, insertions and deletions (indels) usually represent about one fifth of the detected mutations and should also be considered for animal breeding. In the present study, three different programs (Dindel, SAMtools mpileup, and GATK) were used to detect indels from Whole Genome Sequencing data of Iberian boars and Landrace sows. A total of 1,928,746 indels were found in common with the three programs. The VEP tool predicted that 1,289 indels may have a high impact on protein sequence and function. Ten indels inside genes related with lipid metabolism were genotyped in pigs from three different backcrosses with Iberian origin, obtaining different allelic frequencies in each backcross. Genome-Wide Association Studies performed in the Longissimus dorsi muscle found an association between an indel located in the C1q and TNF related 12 (C1QTNF12) gene and the amount of eicosadienoic acid (C20:2(n-6)). Introduction Pork is one of the world's most produced meats. Selective breeding in pigs has developed in parallel to the increase and intensification of this productive sector. Over the last decades, genetic selection has notably improved meat-production efficiency in commercial pig breeds. However, this artificial selection had the unwanted drawback of reducing the sensorial and technological properties of pork. These modifications were driven by the reduction of intramuscular fat (IMF) content and by changes in fatty acid (FA) composition [1]. Commercial breeds such as Landrace possess an efficient meat production with rapid growth and a leaner carcass, but the resulting meat has lower IMF and higher polyunsaturated FA (PUFA) content compared with some indigenous pig breeds, such as the Iberian pig [2]. The Iberian breed is characterized by its higher IMF content with a great proportion of monounsaturated FAs (MUFA) [3]. In addition, MUFA have a higher oxidative stability than PUFA, improving the organoleptic properties of meat [4]. In contrast, PUFA consumption, in particular omega-3, has the beneficial role of decreasing the total cholesterol concentration, while saturated FAs (SFA) increase the risk of suffering cardiovascular diseases [5,6]. Fatty acid composition in muscle is determined by physiological conditions such as fed and fasted states [7], environmental factors such as nutrition [4,8] and genetic factors; carcass and FA composition traits in pigs show moderate to high heritability values [9][10][11][12]. The genetic divergence between breeds has been driven by the accumulation of mutations through processes of natural adaptation to the environment and selective breeding. Genetic mutations can be produced by base pair substitution, but also by insertion, inversion, fusion, duplication or deletion of DNA sequences. The development of next generation sequencing (NGS) technologies has improved the detection of these genomic variants.
Hitherto, the most well-known variants studied with this method have been the substitutions of single nucleotide polymorphisms (SNPs), which represent almost the 80% over all the detected variants [13][14][15]. In contrast, insertions and deletions (indels) have been less characterized although the genome-wide ratio of indels to SNPs has been estimated as 1 indel for every 5.3 SNPs [16]. Studies in Drosophila melanogaster and Caenorhabditis elegans have determined that indels represent between 16% and 25% of all genetic polymorphisms in these species [17,18]. In addition, studies performed in humans and chimpanzees evidenced that indels instead of SNPs were the major source of evolutionary change [19][20][21]. As it has been described over the last decades, the most frequently found indel was the 1 base pair (bp) long [22,23]. Furthermore, a major proportion of deletions than insertions was observed in the genome of 18 mammals, with the exception of the opossum [24]. A mechanism that favours the occurrence of deletions was proposed by de Jong & Rydén [25], in which the loops formed by slipped mispairing after DNA strand breakage are trimmed off. In pigs, recent studies using whole genome sequencing (WGS) have detected the 1 bp long indel as the most frequent indel, but the deletion/insertion ratios differ [26][27][28][29]. Indels can produce frameshifts in the reading frame of a gene or modify the total number of amino acids in a protein, but they can also affect gene expression levels. In pigs, indels were found to affect backfat thickness [30] and fat deposition [31] through the alteration of gene expression, underlining the importance of these variants for animal production. The objectives of this study were to identify indels from WGS data of Iberian and Landrace pigs, which were founders of an experimental cross (IBMAP) with productive records for FA composition, and to study the association between a selection of indels and meat quality traits in three different genetic backgrounds. Ethics statement The present study was performed in accordance with the regulations of the Spanish Policy for Animal Protection RD1201/05, which meets the European Union Directive 86/609 about the protection of animals used in experimentation. All experimental procedures followed national and institutional guidelines for the Good Experimental Practices and were approved by the IRTA (Institut de Recerca i Tecnologia Agroalimentàries) Ethics Committee. Animal material and phenotypic records The pigs used in this study belonged to the Iberian and Landrace breeds. The Iberian line, called Guadyerbas, is a unique black hairless line that has been genetically isolated in Spain since 1945 [32]. The Landrace line belonged to the experimental farm Nova Genètica S.A. (Lleida, Spain). WGS data of seven founders of the IBMAP experimental population [32], two Iberian boars and five Landrace sows, were used for indel detection. Analysis of indel segregation and association with meat quality traits were performed in 441 individuals of different backcrosses: 160 BC1_LD ((Iberian x Landrace) x Landrace), 143 BC1_DU ((Iberian x Duroc) x Duroc) and 138 BC1_PI ((Iberian x Pietrain) x Pietrain). All animals were reared in the experimental farm of Nova Genètica S.A. (Lleida, Spain). Population structure of these three backcrosses is depicted in S1 Fig. Animals were fed ad libitum with a cereal-based commercial diet and slaughtered at an average age of 179.8 ± 2.6 days with an average carcass weight of 72.2 kg. 
Blood from founder animals was collected in 4 ml EDTA vacutainer tubes and stored at -20˚C until analysis. Samples of diaphragm tissue were collected from backcrossed animals, snap-frozen in liquid nitrogen and stored at -80˚C until analysis. Genomic DNA was extracted from all samples by the phenol-chloroform method [33]. At the slaughterhouse, 200 g of Longissimus dorsi muscle samples were collected from the three backcrosses. The IMF composition was measured with a protocol based on gas chromatography of methyl esters as described in Pérez-Enciso et al. [32]. In total, 20 traits were analysed: 17 intramuscular FAs and 3 FA metabolism indices (Table 1). Data values were normalized applying a log 2 transformation when needed. Whole genome sequencing The whole genome of seven founders of the IBMAP population was sequenced at CNAG (National Centre for Genome Analysis, Barcelona, Spain) on an Illumina HiSeq2000 instrument (Illumina, San Diego, CA, USA). Paired-end sequencing libraries, with approximately 300 bp insert size, were generated using TruSeq DNA Sample Prep Kit (Illumina, San Diego, CA, USA). For each sample, around 40 million 100 bp-long paired-end reads were produced with an average sequencing depth of 11.7x. Whole genome sequencing files of the seven BC1_LD founders are described in Revilla et al. [34] and were deposited in the NCBI Sequence Read Archive (SRA) under accession nos. SRR5229970, SRR5229971, SRR5229972, SRR5229973, SRR5229974, SRR5229975 and SRR5229976. Sequences were trimmed based on their quality using the FastQC [35] software. Then, reads were mapped against the reference genome sequence assembly (Sscrofa10.2) using the Burrows-Wheeler Alignment (BWA) tool [36]. Duplicated reads or those which were under a Phred-based quality score of 20 were removed. Finally, alignment result files (in bam format) were prepared for indel detection. Indel detection and effects prediction Several programs allow performing indel calling from WGS bam files. Following the article of Neuman et al. [37] on the comparison of short indel detection programs, we applied the recommended pipelines on the use of these three programs: Dindel (version 1.01) [38], SAMtools mpileup (version 0.1.19) [39], and Genome Analysis Toolkit (GATK) (version 3.4-46) [40]. The Variant Effect Predictor (VEP) (version 82) [41] tool of Ensembl (http://www.ensembl. org/) was used to quickly and accurately predict the effects and consequences of indels previously found on Ensembl-annotated transcripts [41]. Furthermore, to predict the possible effect of an indel in the secondary structure of a protein, JPred4 [42] was used. Finally, ten indels were selected for indel validation and association analysis if they followed any of these two criteria: 1. those start or stop variants related with lipid metabolism 2. those indels with high or moderate severity that were found at extreme frequencies in the founder animals (IB = 1 & LD�0.2 or IB = 0 & LD�0.8). Among this subset of 127 indels, those involved in lipid metabolism were prioritized. Genome-Wide association analysis Genome-Wide Association Studies (GWAS) were performed between the measured phenotypes of IMF composition and the previously genotyped variants of the three backcrosses (38,424 SNPs and nine indels) along the pig reference genome assembly (Sscrofa11.1). 
The studies were conducted with GEMMA [44] following the mixed linear model: where y ijklm indicates the value of the phenotypic observation in the l th individual; sex (two categories), batch (fourteen categories) and backcross (three categories) are fixed effects; β is a covariate coefficient with c being carcass weight; u l is the infinitesimal genetic random effect and distributed as N(0, Kσ u ), where K is the numerator of the kinship matrix; δ l represents the allelic effect, calculated as a regression coefficient on the l th individual genotype for the m th SNP or indel (values -1, 0, +1); a m represents the additive effect associated with the m th SNP or indel; and e ijklm is the random residual term. Genomic kinship was obtained selecting the "-gk 1" option in GEMMA software [44], which calculates a centred relatedness matrix using the genotypic information of the individuals. GWAS were also performed individually for each one of the three backcrosses following the previously described model, except for the fixed effect of the backcross which was removed from the model. The multiple test correction was conducted with the p.adjust function incorporated in R (www.r-project.org) using the false discovery rate (FDR) method developed by Benjamini and Hochberg [45]. In order to consider a SNP or an indel as significant or suggestive a cut-off was set at FDR�0.05 or FDR�0.1, respectively. Genome-wide detection of indels in Iberian and Landrace animals Whole genome sequencing data of seven founders of the IBMAP population (two Iberian boars and five Landrace sows) were used for indel detection with Dindel, SAMtools mpileup and GATK software. Dindel was the program that detected the highest number of indels (3,380,221) as opposed to SAMtools mpileup and GATK (2,749,596 and 2,957,377, respectively). To reduce the rate of false positives, only indels (1,928,746) that were found in common between the three programs were considered for further analyses (Fig 1). In addition, 50,528 indels were discarded for not displaying the same genotype in at least two programs. Repetitive elements, such as microsatellites, are short insertions or deletions that can interfere with the detection and annotation of indels. Thus, to reduce the interference of repetitive elements in the next steps, 105,783 variants were discarded if they were triallelic or the alternative allele was different among individuals for the same chromosomal position. Moreover, 141,391 indels were trimmed because they were homozygous for the alternative allele in all samples and may not be segregating in our population. Hence, we only considered the final list comprising 1,631,044 indels for further analysis (S1 Table). In a preliminary study of our group, in which SNP calling was performed from WGS of these seven IBMAP founders, the number of SNPs identified after the quality filter was 4.9 million in the Iberian boars and 6 million in the Landrace sows. Therefore, the number of indels detected (1.6 million indels) was within the expected range (16-25%) of the total number of variants detected [13][14][15][16][17][18]. Nevertheless, another study in pigs reported that indels were less frequent than SNPs in a proportion of 1 to 10 [26]. The distribution of the indels found along all the Sus scrofa chromosomes (SSC) showed that sexual chromosomes (SSCX and SSCY) had lower density of indels than autosomes (Fig 2). 
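As a rough illustration of the consensus-and-filter logic described above (this is not the authors' actual pipeline; the record layout, function name and per-sample majority rule are assumptions made for the example), the following Python sketch keeps only indels called by all three programs, with a single consistent alternative allele, genotype agreement in at least two callers for every sample, and at least one sample not homozygous for the alternative allele.

```python
from collections import Counter

# Minimal sketch (illustrative): consensus filtering of indel calls from three
# callers, each assumed to yield a dict of the form
# {(chrom, pos): {"alt": str, "genotypes": {sample: "0/1", ...}}}.

def consensus_indels(dindel, mpileup, gatk):
    """Keep indels called by all three programs, with a concordant alternative
    allele, genotype agreement in at least two callers for every sample, and
    not homozygous for the alternative allele in every sample."""
    callers = [dindel, mpileup, gatk]
    kept = {}
    for site in set(dindel) & set(mpileup) & set(gatk):
        records = [c[site] for c in callers]
        if len({r["alt"] for r in records}) != 1:        # inconsistent alt allele
            continue
        samples = records[0]["genotypes"]
        consensus = {}
        for sample in samples:
            counts = Counter(r["genotypes"][sample] for r in records)
            genotype, votes = counts.most_common(1)[0]
            if votes < 2:                                # no agreement between callers
                break
            consensus[sample] = genotype
        else:
            if any(gt != "1/1" for gt in consensus.values()):   # still segregating
                kept[site] = {"alt": records[0]["alt"], "genotypes": consensus}
    return kept
```

In the study itself, consistency checks of this kind reduced the 1,928,746 calls shared by the three programs to the 1,631,044 indels carried forward, so roughly 15% of the shared calls were discarded.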
Disregarding the pseudoautosomic regions, this low density of indels in the sexual chromosomes is probably caused by the low recombination rate, only possible for the X chromosome in females, and by the appearance of hemizygous recessive lethal mutations in males. In addition, males present one copy of each heterosome, and accordingly, the density of mutations in autosomes, which have two copies of each chromosome, is higher than in heterosomes. The autosome that had the highest density of indels was SSC10, while SSC1 had the lowest (Fig 2). In accordance with the literature, indel frequencies decreased as their length increased [27,46] and thus, 1 bp long indel was the most frequent indel found (Fig 3), either insertion or deletion [22,23]. Insertions were more frequent than deletions in single bp indels, but from the 1.6 million indels, 52.9% were deletions from 1 to 54 bp and the rest were insertions (47.1%) from 1 to 32 bp. Therefore, deletions were found to be more frequent than insertions, which has been previously reported by some other studies made in pigs [26,28] and follows the mutational mechanisms described by de Jong & Rydén (1981). Consequence and severity predictions of the indels detected The effects (consequence type and severity) of the 1.6 million indels were estimated by the VEP platform and are summarized in Table 2. Since a variant may co-locate with more than one transcript, one line of output was provided for each instance of co-location and thus, there were more lines written (1,790,722) than indels entered (1,631,044). In addition, the total number of predicted effects was 1,809,798 as some indels can result in more than one effect in the same transcript (e.g., an indel could cause a frameshift along with a stop gained). Around the third part of the 1.6 million indels (33.1%) did not fall within intergenic regions (539,920 indels) and only 1,758 indels were inside a coding region (0.11%). Finally, the VEP platform classified the 1.6 million indels by their possible severity as high (1,289), moderate (561) or low (1,018) impact, and the rest of indels were considered as modifiers. Indel selection for genotyping From the total of indels with high and moderate impact (1,850), ten indels were selected to be genotyped in three different genetic backgrounds. These indels were chosen regarding their possible consequence, if they were inside genes that could be related with lipid metabolism and/or considering their frequencies in the founder animals. Table 3 summarizes the list of genes with indels selected for genotyping: 1. The aspartate beta-hydroxylase (ASPH) gene (ENSSSCG00000025087), located on SSC4, contained a predicted frameshift variant (rs691136075) with a high impact. The expression of this gene was found to be negatively correlated with insulin-stimulated sprouting in mice adipose tissue [47]. 2. The calpain 9 (CAPN9) gene (ENSSSCG00000010182) is located on SSC14 and contained a predicted inframe deletion (rs704351652). CAPN9 is a member of the calpain family and some of its members have been associated with body fat content and insulin resistance in human and mice [48,49]. This variant was found at extreme frequencies in the founder animals being the alternative allele (CAPN9:c.2013_2015delGAA) fixed in the Iberian boars. 3. The C-C motif chemokine receptor 7 (CCR7) gene (ENSSSCG00000017466) is located on SSC12 and contained a predicted frameshift variant (rs789030032). 
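A small sketch of how the length distribution and deletion/insertion balance reported above could be tallied from a set of indel records follows (again illustrative and not from the paper; the simple REF/ALT length comparison is an assumption about how the alleles are encoded).

```python
from collections import Counter

# Minimal sketch (illustrative): tally indel lengths and the deletion/insertion
# split from (ref, alt) allele pairs, as summarised in Fig 3 of the study.

def indel_summary(variants):
    """variants: iterable of (ref, alt) strings, e.g. ("A", "AT") is a 1 bp insertion."""
    lengths = Counter()
    deletions = insertions = 0
    for ref, alt in variants:
        size = len(alt) - len(ref)
        if size == 0:
            continue                      # not an indel
        lengths[abs(size)] += 1
        if size < 0:
            deletions += 1
        else:
            insertions += 1
    return lengths, deletions, insertions

if __name__ == "__main__":
    demo = [("A", "AT"), ("ACG", "A"), ("G", "GAA"), ("TTC", "T"), ("C", "CA")]
    lengths, dels, ins = indel_summary(demo)
    print("length histogram:", dict(lengths))
    print("deletion fraction:", dels / (dels + ins))
```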
CCR7 codifies for a chemokine receptor that plays a crucial role in inducing adipose tissue inflammation, insulin resistance and obesity [50,51]. The allele frequency for this indel (CCR7:c.1142dupA) in the Landrace sows was 0.5 while the two Iberian boars were homozygous for the reference allele. 4. The C-reactive protein (CRP) gene (ENSSSCG00000021186), located on SSC4, contained a frameshift variant (CRP:c.515delT). High levels of CRP has been related with overweight and obesity in human adults [52]. This variant was found fixed in the Iberian boars for the alternative allele (CRP:c.515delT) and the alleles of the Landrace sows were as the reference. 5. The C1q and TNF related 12 (C1QTNF12) gene (ENSSSCG00000003333) is located on SSC6 and contained an inframe deletion (C1QTNF12:c.557_559delCCG). This gene is also known as CTRP12 and FAM132A. C1QTNF12 functions as an adipokine that is involved in glucose metabolism and obesity in mice [53,54]. This deletion was found at extreme frequencies in the founders being the alternative allele (C1QTNF12:c.557_559delCCG) fixed in the Iberian boars. 6. The granzyme A (GZMA) gene (ENSSSCG00000016903), located on SSC16, contained an inframe insertion (rs792025734). This gene was differentially expressed in the mesenteric adipose tissue of beef cattle with distinct gain [55]. The insertion (GZMA: c.129_131dupGTT) was found with a frequency of 0.8 in the Landrace sows while the Iberian boars were homozygous for the reference allele. 7. The jumonji domain containing 1C (JMJD1C) gene (ENSSSCG00000010226) is located on SSC14 and contained an inframe deletion (JMJD1C:c.5964_5966delCAG). JMJD1C was found in a human GWAS as a candidate gene for very low-density lipoprotein particles [56]. This variation was found at extreme frequencies in the founders being the alternative allele (JMJD1C:c.5964_5966delCAG) fixed in the Iberian boars. 8. The lysosomal trafficking regulator (LYST) gene (ENSSSCG00000010151), located on SSC14, contained an inframe insertion (rs713515754). This gene has been related with hypertriglyceridemia and anomalous lipid and FA composition in the erythrocyte membranes of Chédiak-Higashi human patients [57]. This variation (LYST: c.6287_6289dupCCA) was found with a frequency of 0.8 in the Landrace sows while the Iberian boars were homozygous for the reference allele. 9. The peroxisomal biogenesis factor 19 (PEX19) gene (ENSSSCG00000023091) is located on SSC4 and contained a predicted frameshift variant (rs702520311). PEX19 is assumed to be under regulation by peroxisome proliferator-activated receptor gamma coactivator-1 alpha (PGC-1α) increasing the mitochondrial FA oxidation in human primary myotubes [58]. In addition, peroxisomes are intimately associated with lipid droplets and they are able to perform FA oxidation and lipid synthesis [59]. The frameshift variant was found to be fixed in the Iberian boars for the alternative allele (PEX19:c.98_102dupAAGTC), whereas in the Landrace sows the alternative allele was present with a frequency of 0.2. 10. The sterile alpha motif domain containing 4B (SAMD4B) gene (ENSSSCG00000016927), located on SSC16, contained a predicted frameshift variant that causes a stop gained (rs709630954). This gene was found to produce leanness and myopathy in mice due to the dysregulation of the rapamycin complex 1 (mTORC1) signalling [60]. Segregation analysis of the selected indels The ten selected indels were genotyped in 143 BC1_DU, 160 BC1_LD and 138 BC1_PI individuals. 
Table 4 shows the genotype frequencies of indels in each backcross. Allele genotyping of the CRP:c.515delT indel failed and this indel was discarded for posterior analysis. GWAS results Nine indels located within genes related with lipid metabolism and genotyped in the three experimental backcrosses were selected for the association analysis. GWAS was performed with a linear-mixed model (GEMMA software) among the genotypes of 38,424 SNPs segregating in the three backcrosses and the nine selected indels and the fatty acid composition in muscle. GWAS results in the merged dataset showed no significant association between the nine genotyped indels and the 20 FA composition traits in IMF. However, a suggestive association between the C1QTNF12:c.557_559delCCG indel and the eicosadienoic acid (C20:2(n-6)) (pvalue = 1.77×10 -5 , FDR = 5.34×10 -2 ) was identified in the BC1_PI backcross-specific GWAS (Fig 4). This association was not found in the other two backcrosses BC1_DU (pvalue = 1.65×10 -1 , FDR = 8.92×10 -1 ) and BC1_LD (p-value = 1.63×10 -1 , FDR = 9.11×10 -1 ) (S2 Fig). Eicosadienoic acid is the elongated product of linoleic acid, an essential FA that is taken from the diet [61,62] and can be desaturated into arachidonic acid which participates in multiple regulatory pathways [61,62]. The BC1_PI pigs carrying the C1QTNF12:c.557_559delCCG allele had a lower proportion of C20:2(n-6). This result was not observed in the rest of backcrosses despite the C1QTNF12 indel was segregating at similar frequencies in the three backcrosses (Table 4). We hypothesize that other mechanisms could be modulating the levels of C20:2(n-6) in the BC1_DU and BC1_LD backcrosses and masking the effect of the C1QTNF12 indel. C1QTNF12 is a gene member of the C1QTNF family which preferentially acts in adipose tissue and liver regulating glucose uptake and fatty acid metabolism [54]. C1QTNF12 can also form heterodimers with the protein encoded by the ERFE (erythroferrone) gene, another gene member of the C1QTNF family, which is mainly expressed in skeletal muscle and is able to reduce the circulating levels of free FAs without affecting adipose tissue lipolysis [63]. Therefore, alterations of the C1QTNF12/ERFE heterodimer may modify the circulation of free FAs and their accumulation in IMF. Based on the data from the Ensembl project (www.ensembl.org; release 92) using the Sscrofa11.1 assembly, the porcine C1QTNF12 gene consists of 8 exons and 7 introns (Ensembl ID: ENSSSCG00000003333). The identified indel produces an inframe deletion of three bases (CCG) in the exon 5 of C1QTNF12, which has the consequence of removing the alanine in the position 186 of the final protein. This alanine deletion was located in the C1q/TNF-like domain of C1QTNF12, a domain that is highly conserved among the C1QTNF12 gene of mammals ( Fig 5) and other vertebrate species [64], and is characteristic of the C1QTNF family. Furthermore, the alanine deletion in the position 186 was predicted to cause a new α-helix formation in the secondary structure of C1QTNF12, which could produce an impairment in the protein function (Fig 6). Nonetheless, the C1QTNF12 indel was not the most significant genetic variant on SSC6 (Fig 4 and S2 Table). Thus, further studies are required in order to analyse whether other genes or other C1QTNF12 polymorphisms may be the cause for the differences in the eicosadienoic acid abundance. In conclusion, in this study we used three different programs that increased the accuracy of indel detection. 
Nine indels of the 1.6 million indels detected in silico were validated through genotyping in three different backcrosses, showing different allelic frequencies. In addition, a suggestive association was found between the C1QTNF12:c.557_559delCCG indel and the eicosadienoic acid abundance. Thus, indels can also be used as genetic markers associated with phenotypic traits of interest. Supporting information. Supporting figure caption (fragment): ... were below the genome-wide significance and suggestive thresholds (FDR ≤ 0.05 and FDR ≤ 0.1, respectively). (TIF) S1 Table. Compressed vcf file containing the 1,631,044 indels found in common between the three programs (Dindel, SAMtools mpileup and GATK) used for predicting the consequence type and severity of the indels by the VEP platform. (RAR) S2 Table. GEMMA output for the suggestive (FDR ≤ 0.1) SNPs found in the GWAS analysis for the log2 normalization of the relative abundance of eicosadienoic acid in the Longissimus dorsi muscle of the BC1_PI population.
2019-07-02T13:47:47.132Z
2019-06-27T00:00:00.000
{ "year": 2019, "sha1": "8014bf0331d569f622f1755c440544a5d111585c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0218862", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8014bf0331d569f622f1755c440544a5d111585c", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
80408273
pes2o/s2orc
v3-fos-license
A rare case of Candida prosthetic joint infection: Diagnostic and therapeutic challenges in a resource poor country Candida infection of knee prostheses is rare but increasingly reported. This report describes a candida prosthetic joint infection in a healthy woman treated with antifungal therapy following removal of knee prosthesis in a Teaching Hospital in Sri Lanka. Introduction Prosthetic joint infections due to Candida species are extremely uncommon and most are caused by Candida albicans followed by Candida parapsilosis. 1 It is commonly caused via direct inoculation of the organism or due to transient candidaemia, while underlying host immune suppression acts as the main predisposing factor. 2 Clinical recognition is often difficult due to its indolent onset and lack of other symptoms of infection during the initial stage. Definitive diagnosis is made by isolation of the causative agent from multiple samples taken intra-operatively. Inadequacy of clinical evidence of improvement during management poses a therapeutic challenge and published case reports vary widely in outcome and therapeutic approach. 3 This report describes one patient seen at the orthopaedic unit of a Teaching Hospital in Sri Lanka with prosthetic joint infection due to Candida spp. treated with antifungal drugs following removal of prosthesis. Diagnostic and therapeutic challenges during her management which contributed towards the poor outcome are discussed. Case Report A 63-year-old healthy female underwent total left knee replacement in January 2013 followed by right knee replacement in August 2014 for severe osteoarthritis. Two weeks after the latter, she developed pain and effusion in her right knee joint for which ten days of intravenous meropenem was given, to which she responded. She had no history of diabetes mellitus or prolonged antibiotic therapy and was not immunosuppressed. One year after surgery, she presented to the hospital with fever, pain, moderate swelling and decreased range of motion in her right limb. She had an elevated erythrocyte sedimentation rate (ESR) of 60 mm/one hour and C reactive protein (CRP) of 24 mg/L levels with a normal white blood count. X-ray of the knee showed bone destruction around the prosthesis suggestive of septic loosening. Bone scan showed evidence of an active inflammatory process in the right lower femur and right upper tibia. Joint fluid and blood culture were negative. Joint fluid aspirate was turbid and full report showed raised protein of 5.5 g/dl (normal level 2.5-3 g/dl) with neutrophils 400/mm 3 , lymphocytes 110/ mm 3 and RBC 2440/ mm 3 . She was treated with intravenous meropenem and vancomycin for 7 days and discharged. Pain over her right knee joint was continuous for which she undertook native treatment intermittently. She was readmitted with unbearable pain in 2017. On admission, she was afebrile with an ESR of 92 mm/one hour and CRP of 265 mg/L with normal white blood count (total count 8800 and differential count: 58% neutrophils, 37% lymphocytes). Blood culture was negative. Complete removal of the right knee joint prosthesis was carried out and direct examinations showed yeast cells. Culture of debrided bone, joint fluid and peri-prosthetic tissue grew Candida spp. (germ tube negative) in pure growth. Histology of bone particles showed chronic granulomatous inflammation, but no yeast cells were seen. 
Disc diffusion method of antifungal susceptibility testing showed that the isolate was susceptible to amphotericin with no zone of inhibition to fluconazole and voriconazole. No minimum inhibitory concentrations of antifungals (MIC) were performed. The patient was started on intravenous amphotericin B deoxycholate 1 mg/kg once daily. A good therapeutic response with dramatic reduction of ESR and CRP were observed after a week of amphotericin B deoxycholate. However, she developed severe infusion related side effects (fever and chills). It was decided to switch to anidulafungin (100mg twice a day for one day followed by 100mg daily) from intravenous amphotericin B deoxycholate for a minimum period of 6 weeks. There was an undue delay in initiation of anidulafungin for three weeks. Meanwhile, she succumbed to the infection. The timeline of the patient's clinical course is given below. Prosthetic joint infection (PJI) due to Candida species is still a rare phenomenon. However, cases are reported increasingly due to rising number of joint arthroplasties worldwide. 1 Underlying immunocompromised status, systemic illness (diabetes mellitus), the use of antineoplastic agents, prolonged use of antibiotics and indwelling catheters are some of the risk factors for fungal PJI. 2 Despite this, nearly half the cases occur in patients without any identifiable risk factor 3 as in this patient. Discussion Pain and swelling are the main symptoms of PJI due to Candida species, although the onset of symptoms can be insidious, and development of the disease can be slow 2 as seen in this patient. Most of the reported cases do not have systemic fungal disease. Most authors recommend obtaining multiple samples, prolonged incubation of cultures, special staining and histological examination for laboratory diagnosis. The diagnosis was made through clinical and radiological findings along with isolation of Candida spp from multiple intraoperative bone and tissue specimens in this patient. The cornerstone of diagnosis remains isolation and speciation of fungus along with in vitro susceptibility testing. 3,4 The susceptibility of Candida spp. to available antifungal agents is generally predictable if the species of the infecting isolate is known. 5,6 Clinical and Laboratory Standard Institute (CLSI) and European Committee on Antimicrobial Susceptibility Testing (EUCAST) have developed susceptibility testing of yeasts against most, but not all antifungal drugs which is an essential tool to guide treatment. C. albicans is the most common cause for fungal PJI reported in the literature followed by C. parapsilosis and other non-albicans spp. 2 The majority of C. albicans infections are associated with biofilm formation on the host or on the surfaces of medical devices or prostheses leading to resistance to fluconazole. 2 Candida spp. isolated from our patient was found to be a germ tube negative but speciation was not carried out. Antifungal sensitivity testing showed sensitivity to amphotericin B with resistance to fluconazole and voriconazole. Choice of antifungal treatment was made based on results obtained and availability of antifungal agents. The long-term antifungal use with two-stage exchange arthroplasty is currently regarded as the gold standard to eradicate the infection which showed the highest success rate (85%). 
4 However, the ideal interval between implant removal and re-implantation, the usefulness of antifungal-loaded cement spacers and the type, duration of systemic antifungal treatment are still controversial. 1 Amphotericin B is the gold standard, but is highly nephrotoxic and may not be useful for long term administration. 3 Lipid formulations of amphotericin B or fluconazole would be better options. A high bio-availability, extended half-life, absence of serious side effect and high concentration in joint fluid make fluconazole a better choice though some nonalbicans candida species are resistant to fluconazole. 4,7 Echinocandins may be a good alternative due to its low toxicity, broad spectrum of activity or if the patient is intolerant to amphotericin B deoxycholate. The Infectious Diseases Society of America recommends treatment with antifungal for at least 6 weeks after removal of the arthroplasty while chronic suppression is recommended if removal of prosthesis is not an option. 6,8,9,10 Our patient was treated with amphotericin B deoxycholate following removal of the infected prosthesis to which she responded clinically. However, she did not tolerate conventional amphotericin B deoxycholate for more than a week and it was decided to switch to anidulafungin based on the available literature. 1,4,5,7,8 As it was an expensive and restricted drug, there was a considerable delay in obtaining the drug during which the patient was not on any anti-fungal treatment. She succumbed to the infection three weeks after amphotericin was stopped. Delays in diagnosis, lack of species identification with no MIC values and non-availability of newer antifungals compounded the difficulties experienced in the management of a rare cause of prosthetic joint infection in this patient. Collaboration between orthopaedic surgeons, radiologists, histopathologists and microbiologists is needed to improve outcomes in Sri Lanka. A multidisciplinary approach to determine risk and design strategies to maximize the use of available facilities would be helpful for earlier diagnosis and starting appropriate treatment. Informed consent The patient gave her informed verbal consent for the inclusion in this publication.
Prolonged Heat Acclimation and Aerobic Performance in Endurance Trained Athletes

Heat acclimation (HA) involves physiological adaptations that directly promote exercise performance in hot environments. However, for endurance athletes it is unclear if adaptations also improve aerobic capacity and performance in cool conditions, partly because previous randomized controlled trial (RCT) studies have been restricted to short intervention periods. Prolonged HA was therefore deployed in the present RCT study including 21 cyclists [38 ± 2 years, 184 ± 1 cm, 80.4 ± 1.7 kg, and maximal oxygen uptake (VO2max) of 58.1 ± 1.2 mL/min/kg; mean ± SE] allocated to either 5½ weeks of training in the heat [HEAT (n = 12)] or cool control [CON (n = 9)]. Training registration, familiarization to test procedures, determination of VO2max, blood volume and 15 km time trial (TT) performance were assessed in cool conditions (14°C) during a 2-week lead-in period, as well as immediately pre and post the intervention. Participants were instructed to maintain total training volume and complete habitual high intensity intervals in normal settings; but HEAT substituted part of cool training with 28 ± 2 sessions in the heat (1 h at 60% VO2max in 40°C; eliciting core temperatures above 39°C in all sessions), while CON completed all training in cool conditions. Acclimation for HEAT was verified by lower sweat sodium [Na+], reduced steady-state heart rate and improved submaximal exercise endurance in the heat. However, when tested in cool conditions both peak power output and VO2max remained unchanged for HEAT (pre 60.0 ± 1.5 vs. 59.8 ± 1.3 mL O2/min/kg). TT performance tested in 14°C was improved for HEAT and average power output increased from 298 ± 6 to 315 ± 6 W (P < 0.05), but a similar improvement was observed for CON (from 294 ± 11 to 311 ± 10 W). Based on the present findings, we conclude that training in the heat was not superior compared to normal (control) training for improving aerobic power or TT performance in cool conditions.
INTRODUCTION

It is well documented that natural heat acclimatization as well as laboratory-based heat acclimation (HA) improve exercise performance in hot environments (see Daanen et al., 2018 for review and Lorenzo et al., 2010; Karlsen et al., 2015a,b; Racinais et al., 2015 for specific studies). In contrast, whether HA also leads to physiological adaptations that will improve exercise performance in cool conditions remains controversial (Corbett et al., 2014; Minson and Cotter, 2016; Nybo and Lundby, 2016). On one hand, Scoon et al. (2007), Karlsen et al. (2015b), and Keiser et al. (2015) report no effect of HA (or similar performance effects as reported for a matched control group training in cool settings) on exercise endurance performance in cool conditions. On the other hand, a few randomized controlled trial (RCT) studies have reported beneficial effects (Lorenzo et al., 2010; McCleave et al., 2017; Rendell et al., 2017) on VO2max, time trial (TT) performance, and lactate threshold. Furthermore, studies that did not include a control group in the study design also report that training in the heat may benefit aerobic performance in cool settings (Hue et al., 2007; Buchheit et al., 2011, 2013; Racinais et al., 2014; Neal et al., 2016a,b). The difference in findings from the above studies may, to some extent, relate to the heterogeneity of the studies, as they differ in the participants' training status, the conditions undertaken by the control group and, in particular, the duration of the intervention period. To conclude if HA may translate into improved aerobic performance in cool conditions, it is not sufficient to merely demonstrate improved performance in untrained or recreationally active adults, as improvements in performance could relate to a standard training effect rather than environmental stress. Also, for athletes, the improvement accomplished by training in the heat would need to be superior compared to control training that includes high-intensity intervals (Levine and Stray-Gundersen, 1997). The proposed ergogenic effect of HA for subsequent performances in cool conditions has been attributed to a combination of hematological, cardiovascular, and skeletal muscle adaptations (Corbett et al., 2014). One mechanism of particular interest has been the expansion of plasma volume (and general increase of extra-cellular volume; see Patterson et al., 2004) that could translate into increased hemoglobin mass (Hbmass) and higher capacity for systemic oxygen delivery (Scoon et al., 2007); although the erythropoietic effect was not observed in the study by Patterson et al. (2004). Diverse effects on Hbmass as well as plasma and blood volume across available HA studies may relate to the relatively short intervention periods deployed in previous studies, allowing limited time for erythropoiesis to occur.
In that context, significant effects of environmental interventions, e.g., altitude exposure is typically considered to require several weeks depending on the strength of the stimuli (Rasmussen et al., 2013;Siebenmann et al., 2015). Therefore, to determine whether HA positively enhances erythropoiesis in trained subjects, HA studies with sufficient duration are warranted. In addition to central hemodynamic responses, peripheral adaptations of relevance for performance have been proposed to be enhanced by heat training (Coyle, 1999;Bassett and Howley, 2000). For example, improved gross efficiency (GE) following heat training has been observed in some studies (Shvartz et al., 1977;Sawka et al., 1983) but not in others (Karlsen et al., 2015b;Rendell et al., 2017). While enhanced exercise efficiency has been proposed to involve changes in skeletal muscle recruitment patterns in response to heating (Shvartz et al., 1977;Sawka et al., 1983;Corbett et al., 2014), the improved efficiency could merely relate to acquaintance with the experimental testing as the studies reporting this effect have not controlled for familiarization effects. Therefore, the present study was conducted to evaluate the efficacy of prolonged HA (induced by training in the heat) compared to continued normal training (in settings with low thermal stress) with focus on the potential for improving exercise performance and aerobic capacity in cool conditions. Specifically, a long-term (5 1 /2 weeks) laboratorybased heat-training period was employed to ensure adequate time for any potential erythropoietic effect to occur. We included a group of endurance trained male cyclists that, following familiarization and a controlled lead-in phase, were randomly allocated to a heat training (HEAT) or control (CON) group. It was hypothesized that long-term heat training would improve aerobic capacity, peak power output and prolonged exercise performance, as determined by a 15-km time-trial (TT). We evaluated if potential performance effects involved improved exercise efficiency, thermoregulatory factors related to sudomotor adaptations and hematological adaptations leading to plasma and blood volume expansion. The present paper is focused on the overall performance effects (TT, peak power, and aerobic capacity), while we refer to the accompanying publication by Oberholzer et al. (2019, submitted to the special issue) for details on the hematological adaptations and in-depth analyses of mechanisms involved. Participants Twenty-four well-trained, sub-elite male cyclists [38 ± 9 years, 184 ± 4 cm, 80.4 ± 8.0 kg, and maximal oxygen uptake (VO 2max ) of 58.1 ± 5.3 mL/min/kg; Mean ± SD] with at least 3 years of cycling experience were initially recruited (see Table 1 for group-specific overview of baseline descriptive data). Participants had conducted their usual off-season training (environmental temperatures <15 • C) leading up to the study and were thus assumed to be only partly heat acclimated due to training status (Armstrong and Maresh, 1991). Following pre-intervention testing, participants were block-allocated into two performance-, VO 2max -, and age-matched groups (n = 12) that subsequently were randomly designated as either the heat training (HEAT) or control (CON) group, however, due to personal reasons unrelated to the study, three subjects withdrew before commencement of the intervention, resulting in 12 and 9 participants in the HEAT and CON completing the study, respectively. Data from drop-outs were excluded from the analysis. 
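The matched allocation described above is not spelled out algorithmically. As a hedged illustration (not the authors' documented procedure), one way to form performance-, VO2max- and age-matched groups before random designation is to rank participants by a composite z-score, pair neighbours, and randomly split each pair; all names and the pairing scheme below are assumptions.

```python
import random

def allocate_matched_groups(participants, seed=1):
    """Pair participants by a ranked composite score and randomly assign one
    member of each pair to HEAT and the other to CON. `participants` is a
    list of dicts with hypothetical keys 'id', 'tt_power', 'vo2max', 'age'."""
    def zscores(values):
        mean = sum(values) / len(values)
        sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        return [(v - mean) / sd for v in values]

    z_power = zscores([p["tt_power"] for p in participants])
    z_vo2 = zscores([p["vo2max"] for p in participants])
    z_age = zscores([p["age"] for p in participants])
    composite = [zp + zv + za for zp, zv, za in zip(z_power, z_vo2, z_age)]

    # Sort by composite score so that neighbouring participants are well matched
    ranked = [p for _, p in sorted(zip(composite, participants), key=lambda t: t[0])]

    rng = random.Random(seed)
    groups = {"HEAT": [], "CON": []}
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)               # random designation within the matched pair
        groups["HEAT"].append(pair[0]["id"])
        groups["CON"].append(pair[1]["id"])
    return groups
```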
Before providing their written consent to participate, subjects were informed of potential risks and discomforts associated with the experimental procedures. The study was conducted in accordance with the Helsinki declaration and approved by the ethics committee of the Capital Region of Denmark (protocol: H-17036662).

Study Overview

An overview of the study protocol is displayed in Figure 1. Upon enrolment into the study, participants first completed one familiarization session, completing a 30 min preload followed by a 15 km TT. During the following 2 weeks, participants were monitored in a lead-in phase, with registration of weekly training volume - both total and high intensity.

[Table 1 fragment: intense training during intervention (min/week) 157 ± 90 vs. 122 ± 57; *main effect of time compared to lead-in (P < 0.05); all other comparisons not different (P > 0.05); values are mean ± SD.]

On three subsequent sessions, baseline assessments of VO2max, cycling efficiency and TT performance, as well as hematological parameters (see accompanying paper by Oberholzer et al., 2019 for details), were conducted. Participants were then allocated to their respective groups and trained in a 5½-week period (see "Intervention Period" section for more detail). Following the training period, a post-test battery identical to the pre-intervention battery was conducted. All performance testing and training was conducted at the same time of day (within 2 h) using the participants' personal bikes installed in a stationary Tacx-trainer device (Tacx Neo Smart T2800, Wassenaar, Netherlands) and associated software (Tacx Trainer software 4, Wassenaar, Netherlands). For each subject the same personal bike and Tacx-trainer were used during both pre- and post-intervention testing to circumvent any equipment differences. Pre-intervention performance testing was conducted during a 2-week period preceding the intervention (hematology within 2 days) and post-intervention testing within 6 days of the intervention's conclusion. A recovery period lasting a minimum of 24 h separated all performance tests to preclude residual fatigue confounding the results. Subjects were instructed to abstain from performing any exhaustive exercise the day leading up to a performance test and to refrain from consumption of caffeine for 12 h and alcohol for 24 h prior to testing.

Time Trial

Endurance performance was evaluated through the fastest possible completion of a simulated 15 km TT (mean slope of 0.1%, ∼600 m of uphill cycling) preceded by a 30-min preload at 60% of VO2max. The TT and preload were separated by 5 min of passive rest. Subjects had visual access to real-time information regarding heart rate (HR), distance completed/remaining, speed, cadence, and power output, but were blinded to elapsed time. The TT was conducted in temperate ambient conditions (14 ± 0.2°C, RH: 54 ± 3%; Galloway and Maughan, 1997) with airflow of ∼3 m/s directed toward the subjects' frontal surface [WBGT of ∼10°C (Reed Heat Index checker 8778, Reed Instruments, United States)]. Provision of a maximal effort was facilitated by verbal encouragement throughout the test, in addition to a prize being awarded for the best performance.

VO2max, Cycling Efficiency, Incremental Peak Power Output, and Anthropometry

Upon arrival to the laboratory and prior to performing any exercise, body mass and fat percentage were quantified on an electronic bio-impedance scale (InBody 270, InBody, Denmark).
Cycling efficiency, VO2max, and incremental peak power output (iPPO) were assessed through completion of an incremental cycling test to volitional exhaustion. Following warm-up stages consisting of 5 min at 100 W and 5 min at 175 W (80 RPM), respectively, the workload was increased by 25 W/min, terminating when the subject was incapable of maintaining a pre-defined and self-selected cadence despite strong verbal encouragement. Breath-by-breath recordings of VO2 and VCO2 were obtained throughout the test [Jaeger Oxycon Pro, Viasys Healthcare, Germany (calibrated for room humidity, flow, and O2/CO2 concentration prior to each test)] and subsequently interpolated to 5 s mean values. Values ≥4 standard deviations from the local mean were discarded. A plateau in VO2 despite increased workload and/or attainment of a respiratory exchange ratio (RER) ≥1.15 served as test validation criteria. VO2max was defined as the highest observed value over a 30-s period, and iPPO as the last completed work stage (W) plus the fraction of the last non-completed stage [iPPO = last completed work stage (W) + 25 W × t(s)/60, where t is the time completed in the final stage]; a computational sketch of these outcome measures is given below.

[Figure 1: Study overview and time course. CO, carbon monoxide rebreathing procedure.]

Intervention Period

Participants in the HEAT group underwent 60-min heat training sessions in a climatic chamber on 5 weekly occasions (28 ± 2 total sessions), while subjects in CON reported to the laboratory and trained once a week in cool conditions (∼15°C) to minimize group differences in the level of familiarization to stationary cycling, and completed all other training and habitual intervals in cool settings. Both groups performed this part of the training at a constant intensity corresponding to 60% of VO2max (204 ± 3 W). For HEAT, ambient temperature was set at 35°C the first week (3 days) and subsequently increased by one degree each week, ending at 40°C (RH: 30 ± 2%), in order to accommodate for the relative decrease in intensity as HA was induced (Daanen et al., 2018). Rectal core temperature (Tcore) was elevated to ≥38.5°C after 35 ± 8 min of training and end Tcore was 39.6 ± 0.4°C in all training sessions. Subjects were encouraged to undertake each training session without airflow for as long as subjectively tolerated but were provided with individually adjusted airflow when requested (ventilation with a floor fan of ∼1-3 m/s) to facilitate evaporation and provide some perceptual benefit to ensure that the exercise component could be completed. HR, Tcore, and sweat Na+ were quantified during the first and last weekly training session. Warm water was ingested ad libitum during training to avoid fluid consumption acting as a heat sink. Body mass was measured before and after each training session and subjects were instructed to replenish 150% of lost fluid during the following hours to re-establish euhydration. During the entirety of the intervention period, outside environmental temperature did not exceed 15°C.

Confirmation of Acclimation Status Testing

To confirm acclimation status, a subgroup of six participants from HEAT underwent a heat tolerance test (HTT) on days 1, 14, and 28. In order to avoid any partial HA, none of the participants from the CON group completed HTT testing. The HTT was conducted as a time to exhaustion (TTE) test under standardized 40°C conditions at 60% VO2max, with no access to fan or other cooling (see "Measurements" section for details).
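As referenced above, a minimal sketch of how the incremental-test outcomes (VO2max and iPPO) could be computed from breath-by-breath data follows. The 5-s interpolation, the local outlier filter and all names are illustrative assumptions, not the authors' actual processing pipeline.

```python
import numpy as np

def incremental_test_outcomes(t_s, vo2_mlmin, last_completed_stage_w, t_final_stage_s):
    """VO2max as the highest 30-s value after 5-s interpolation and removal of
    values >= 4 SD from a local mean; iPPO as last completed stage plus the
    completed fraction of the 25 W/min ramp (as described in the text)."""
    # Interpolate breath-by-breath VO2 onto a 5-s grid
    grid = np.arange(t_s[0], t_s[-1], 5.0)
    vo2_5s = np.interp(grid, t_s, vo2_mlmin)

    # Discard values >= 4 SD from the local mean (here: a +/- 30 s window)
    window = 6
    keep = np.ones(vo2_5s.size, dtype=bool)
    for i in range(vo2_5s.size):
        local = vo2_5s[max(0, i - window):i + window + 1]
        if local.std() > 0 and abs(vo2_5s[i] - local.mean()) >= 4 * local.std():
            keep[i] = False
    clean = vo2_5s[keep]

    # VO2max: highest mean over any 30-s (six 5-s samples) period
    vo2max = np.convolve(clean, np.ones(6) / 6, mode="valid").max()

    # iPPO = last completed stage (W) + 25 W * t/60, t = seconds into final stage
    ippo = last_completed_stage_w + 25.0 * t_final_stage_s / 60.0
    return vo2max, ippo
```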
Additionally, all participants in HEAT were monitored for HR, Tcore, and changes in sweat Na+ concentration at the beginning and end of all weeks of training.

Measurements

Heart Rate and Core Temperature

Heart rate was assessed using participants' personal HR monitors (Garmin edge 500/520/820/1000/1030, Garmin Ltd., United States), was provided as a continuous feedback tool during the TT, and was logged for acclimation status. Tcore was recorded by a flexible rectal probe (Ellab, Denmark) self-inserted ∼10 cm beyond the anal sphincter. Both HR and Tcore were measured during the first and the last HA training session of all weeks of training, as well as during the HTT. Values were manually logged every 10 min and at TTE.

Sweat Rate

To calculate sweat rate (adjusted for fluid consumption) and to account for the effect of body mass during the TT, body mass (towel dried, while wearing cycling shorts) was measured (InBody 270, InBody, Denmark) prior to the preload and following the TT. Sweat was obtained for Na+ content analysis (ABL 800 Flex, Radiometer, Denmark) by absorbent pads (Tegaderm +Pad, 3M, Denmark) placed on the upper back at the level of the scapulae, after thorough cleansing of the skin with demineralized water.

Training and Training Quantification

Training quantification was carried out during a 2-week lead-in phase prior to the intervention and a 2-week period during the intervention, to quantify the impact of the intervention on participants' habitual training procedures. Participants were instructed to fill out a training log containing information regarding total weekly training volume and total high-intensity training volume, the latter defined as training at HR above 80% of maximum (Karlsen et al., 2015b). Both groups were instructed to preserve their usual interval training routines alongside the intervention and to subtract the training associated with the intervention from their habitual training, to maintain total training volume as reported in the initial training log from the lead-in phase.

Cycling Efficiency

Gross efficiency was calculated as the ratio of external mechanical work (W) to energy expenditure (EE). EE was calculated from the steady-state VO2 (confirmed in each participant by visual inspection of the VO2-time curve) and corresponding RER values obtained during the last 90 s of exercise at 100 W (GE100) and 175 W (GE175), respectively, completed with a fixed cadence of 80 RPM:

GE (%) = [external mechanical work rate (W) / EE (W)] × 100

Statistical Analyses

All data are expressed as mean values with standard error unless otherwise stated. Pre-intervention group characteristics, performance results, and training logs were assessed using Student's independent t-test to confirm homogeneity between groups. The changes in pre- to post-intervention values between the heat training and control groups for TT power output and completion time, relative and absolute VO2max, iPPO, GE100, GE175, training time, and training time above 80% HRmax were assessed with a two-way mixed-measures ANOVA, with the repeated factor of time point (two levels: Pre and Post) and the independent factor of intervention (two levels: HEAT and CON). Adaptations (Tcore, HR, sweat rate, and sweat Na+) in the HEAT group during the intervention period were evaluated with a paired-samples t-test. When applicable, post hoc testing was carried out using a Holm-Sidak test. The probability of making a Type 1 error in all tests was maintained at 5%.
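A minimal sketch of the gross-efficiency calculation described under "Cycling Efficiency" above. The Garby & Astrup energetic equivalent of oxygen and the example numbers are assumptions, since the equation actually used is not stated in the text.

```python
def gross_efficiency(power_w, vo2_l_min, rer):
    """GE (%) = external mechanical work rate (W) / energy expenditure (W) x 100.
    EE is derived from steady-state VO2 and RER using an assumed energetic
    equivalent of O2 (4.94 * RER + 16.04 kJ per litre, Garby & Astrup)."""
    energy_equiv_kj_per_l = 4.94 * rer + 16.04
    ee_watt = vo2_l_min * energy_equiv_kj_per_l * 1000.0 / 60.0  # kJ/min -> J/s (W)
    return 100.0 * power_w / ee_watt

# Illustrative steady-state values for the 175 W stage (not study data)
print(round(gross_efficiency(power_w=175, vo2_l_min=3.0, rer=0.90), 1))  # ~17.1
```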
All statistical analyses were carried out using GraphPad Prism (version 7.0, GraphPad Software, La Jolla, CA, United States). Heat Training and Acclimation Effects In the HEAT group T core increased during each training session in the heat and was above 39 • C at the end of all sessions but decreased from the initial to the last training session (39.9 ± 0.4 vs. 39.4 ± 0.3 • C, P < 0.05; see the weekly progression in Table 2). Similarly, end-training HR declined during the first 2 weeks of training in the heat and was significantly lowered from the first to the last training session in the heat (169 ± 10.5 vs. 155 ± 17 BPM, P < 0.05). In addition, sweat Na + was reduced by 46 ± 14%, from 91 ± 5 mmol/L during the first week to 69 ± 8 mmol/L (P < 0.01), in the third week of training in HEAT, but did not decrease thereafter (see Table 2 Training and Training Quantification Compared to the lead-in period, there was a main effect of time (P < 0.05) for weekly training volume, resulting from a non-significant increase in both groups (HEAT: +13 ± 7%; CON: +11 ± 9%, P > 0.05). Also, the weekly training volume above 80% of maximum heart rate increased for HEAT from 102 ± 21 to 157 ± 27 min/week and for CON from 102 ± 18 to 122 ± 19 min/week (both P < 0.05 from pre to post, but not significantly different across groups). Hematological Parameters Main effects of time were detected for both BV and PV (both P < 0.05, but no time × intervention interaction). For BV with a 5.7 ± 1.5% increase from pre to post in the HEAT group (P < 0.01) and a 3.2 ± 1.6% increase for CON (P < 0.05) with no significant differences between groups (P > 0.05). From PRE to POST, PV increased by 6.5 ± 2.2% in the HEAT group (P < 0.01) and there was a 4.5 ± 2.4% increase in CON (P < 0.05), with the overall response not different across groups. As previously mentioned, we refer to the accompanying paper (Oberholzer et al., 2019) for detailed description of the hematological parameters and involved mechanisms. DISCUSSION The prolonged HA period employed in present study was associated with significant sudomotor adaptions (with the reduction in sweat [Na + ] leveling off after 2-3 weeks of training in the heat), improved exercise endurance in hot environmental settings (i.e., increased TTE at fixed submaximal workload in 40 • C), plasma volume expansion and an elevation of total blood volume. However, when transfer effects to endurance performances in cool conditions were tested post HA (in settings with low environmental heat load; i.e., below 15 • C), the heattraining group did not increase peak aerobic power, improve submaximal exercise efficiency or VO 2max , and TT performance effects were similar compared to the matched control group. We refer to the accompanying paper (Oberholzer et al., 2019) for detailed discussion of potential benefits and mechanisms involved in the hematological response observed for the HEAT group, but from the present measures of performance and aerobic power in a population of endurance-trained cyclists, it appears that the potential physiological advantage of a slightly increased Hb-mass was outweighed by the concurrent hemodilution. To characterize HA status and the gradual heat adaptation, we measured sweat [Na + ], resting and exercise T core as well as HR responses at the beginning and end of every week of training, and hematological responses were measured at the start and end of the study. 
End exercise T core and HR decreased during the intervention period, despite an increase in ambient temperature, and PV expanded by ∼7%, as expected following HA (Périard et al., 2016). Further, in the subgroup of six participants who were tested at day 1, 14, and 28 under standardized ∼40 • C conditions, TTE was increased from 38.7 min to 64.3 min. For all subject, the total sweat rate or sweat Na + measured during the TT in cool conditions did not increase (from pre to post). This apparent lack of adaptation may be explained by testing in compensable conditions where sweating is dictated by the evaporative requirements (Ravanelli et al., 2018) and due to rapid alterations in the onset and decay of sweat sodium Na + (Williams et al., 1967;Armstrong and Maresh, 1991). Taken together with the lowered resting rectal temperature, these measures provide strong evidence that the participants were successfully acclimated following the prolonged heat training period. In terms of exercise performance as well as the muscular and cardiovascular adaptations required to improve performance, the present findings are in contrast with previous investigations reporting beneficial effects of heat training for aerobic performance in cool or temperate conditions Buchheit et al., 2011;Neal et al., 2016a;Rendell et al., 2017). One explanation for this discrepancy could be the lack of a control group in these studies or failure to control for training quality in the lead-in phase as well as the during the intervention period. One study showed an increase in both VO 2max and iPPO after nine consecutive days of HA (2 h per day at 49 • C, 20% RH), while others report improved TT, iPPO or intermittent exercise performance (Buchheit et al., 2011;Neal et al., 2016a;Rendell et al., 2017). If the present study had not included a control group, our findings would be in line with some of these observations, as both the HEAT and CON group improved their TT performance, likely resulting from improved fractional VO 2max utilization (Bassett and Howley, 2000), as cycling efficiency and VO 2max remained unaffected. Supporting our findings, but with a shorter intervention period, Karlsen et al. (2015b) demonstrated that compared to a matched control group, there was no difference in TT performance, VO 2max and iPPO in trained cyclists, after 2 weeks of a heat training camp in hot dry environment (all training in heat group conducted outdoors in Qatar). Collectively, these findings suggest that many of the previous observations suggesting that a period with training in hot conditions improves exercise performance in temperate conditions relates to the physical training or additional training load, rather than the environmental heat stress per se. Supporting this notion, previous studies reporting beneficial effects of heat training for subsequent performance in temperate conditions employed untrained individuals (Nadel et al., 1974;Shvartz et al., 1977;Sawka et al., 1983Sawka et al., , 1985King et al., 1985;Young et al., 1985;Takeno et al., 2001). Of particular relevance, many of these studies reported improved exercise economy following heat training (Shvartz et al., 1977;Sawka et al., 1983;King et al., 1985;Young et al., 1985). Improvements in exercise economy are likely explained by alterations in skeletal muscle fiber type composition and enhanced muscular strength (Corbett et al., 2014). 
In the present study, which employed a sub-elite cycling population, exercise economy was unchanged by 6 weeks of training in either the control or heat training group. Likewise, Keiser et al. (2015) observed no improvements in either TT performance or VO2max in well-trained cyclists in a laboratory-based acclimation study with a cross-over design. This finding further suggests that previous improvements in exercise performance with heat training had more to do with training untrained participants, rather than the direct effect of environmental heat stress. One exception to the above studies (Lorenzo et al., 2010) reported increases in both TT performance and VO2max, compared to a control group, using a sample of endurance trained participants. Both groups performed their usual training outdoors in cool temperate conditions, while adding HA or CON training at low intensity (50% VO2max) in the laboratory; however, additional training was not controlled for in either HEAT or CON.

[Figure 3: (A) Maximal oxygen uptake, (B) incremental peak power output (W), and (C) cycling efficiency (%) at 175 W during an incremental test to exhaustion in cool/temperate conditions (∼13°C); open bars/circles show group means and individual values before the intervention, black bars/circles after the intervention; values are mean ± SE.]

In contrast to the study by Lorenzo et al. (2010), both Keiser et al. (2015) with laboratory-based acclimation and Karlsen et al. (2015b) with natural acclimation (all training in an outdoor hot environment) report no superior effect of training in the heat compared to control, and the overall effect may depend on the "quality" of the participants' habitual training (e.g., whether they regularly include high-intensity training). Since elite endurance athletes in general will optimize and include high-quality, and specifically intense, training in preparation for competitions, we find it relevant to ensure that high-intensity training is included both in the lead-in phase and during the intervention, as it will clearly influence the potential to develop or maintain VO2max in endurance trained individuals (Gormley et al., 2008). How the overall training impact or load (considering both volume and intensity) is quantified and subsequently matched across groups (in studies where superimposed environmental heat stress elevates HR for a given power output) may always be a matter of debate. In the present study we ensured that both groups maintained high-intensity intervals in cool settings, and although there was an increased weekly time with HR above 80% of maximum compared to the lead-in period, there were no significant differences across groups, neither for total training volume nor for time with HR above 80% of maximum, during the lead-in period or the intervention phase of the study. Also, HEAT gradually improved heat tolerance and raised cool TT power output during the post-testing, indicating that overload (accumulated fatigue) was not limiting their performance or physiological adaptation to training.
Considering that systemic oxygen delivery and hence VO 2max depends on both cardiac output and arterial oxygen content (Ekblom, 1986;Bassett and Howley, 2000), it is likely that beneficial effects of an increased blood volume on cardiac filling and cardiac output may be outweighed by lower arterial oxygen content per liter blood induced by the lower [Hb] associated with plasma volume expansion. It should also be considered that the total training volume and weekly volume with HR above 80% of maximum HR increased for both groups during the intervention period. Further, a perfect match between groups (when aiming at maintaining similar volume, intensity and still optimized training) is an issue in studies with superimposed environmental stress, or as intended in the present study, with part of the training substituted by training in the heat. Thus, training quality is a multifaceted matter that may not be adequately quantified by the total volume and/or relative HR intensity. Some participants in the present study indicated that the physiological strain associated with the intervention compromised their ability to uphold habitual high-intensity interval training procedures, and it is well-established that training intensity is imperative toward development and maintenance of VO 2max in endurance trained individuals (Gormley et al., 2008). However, considering that HEAT by both measures of training quality (total volume and HR above 80% of maximum) was exposed to similar training load as CON and that they in fact improved TT performance, it is unlikely to be the cause for the unchanged peak power or effects on VO 2max . McCleave et al. (2017) reported improvements in 3 km TT, but it should be mentioned that their running TT performance was identical in the heat training and control group in the tests conducted immediately following the 3 week intervention, but superior in the heat group following additional 3 weeks of return to normal training practice. Potentially, timing of the follow-up testing may be important and performance could be optimal in the post acclimatization period, when PV returns toward normal. However, that relies on the premise that total red blood cell mass remain elevated (i.e., red blood cells follows expected life time of ∼120 days; see Kurata et al., 1993) and the normalized PV results in advantageous hemoglobin up-concentration (Convertino et al., 1980). Additionally, heat training purportedly improves exercise performance through both thermal and non-thermal adaptations (Corbett et al., 2014). Performance testing in the present study was conducted in ∼10 • C wet bulb globe temperature, and therefore likely did not meet the threshold required to impose a thermal limitation (Junge et al., 2016). However, exercise in environmental conditions on the warmer side could meet the threshold, or rather range, of thermal conditions where improved thermoregulatory capacity induced by HA would be ergogenic. Also, the performance tests applied in the present study (incremental peak power test and the 15 km TT including the preload period) had duration of less than 1 h and potential effects of initiating exercise with increased PV and higher total body water during prolonged physical activities (ultra-sports) cannot be excluded. CONCLUSION Overall, we conclude that training in the heat was not superior compared to normal (control) training for improving aerobic power or TT performance in cool conditions. 
However, during a competitive season athletes may be exposed to varying environmental conditions, and although HEAT was not superior compared to CON for improving endurance in cool settings, it is noteworthy that the replacement of a substantial part of overall training volume with heat training did not compromise the effect of training toward temperate exercise performance. Implementation of heat training could therefore be advantageous as part of an integrated pre-season training preparation, but the timing of a specific heat training camp may depend on the specific competition schedule, considering that the benefit from HA may decay in terms of benefitting performance in the heat, whereas for performance in cool settings, a period with return to training without superimposed heat stress may be beneficial.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/supplementary material. Some data can be found in the accompanying article (doi: 10.3389/fphys.2019.01379).

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the ethics committee of the Capital Region of Denmark (protocol: H-17036662). The patients/participants provided their written informed consent to participate in this study.
Isomorphisms between Algebras of Semiclassical Pseudodifferential Operators Following the work of Duistermaat-Singer \cite{DS} on isomorphisms of algebras of global pseudodifferential operators, we classify isomorphisms of algebras of microlocally defined semiclassical pseudodifferential operators. Specifically, we show that any such isomorphism is given by conjugation by $A = BF$, where $B$ is a microlocally elliptic semiclassical pseudodifferential operator, and $F$ is a microlocal $h$-FIO associated to the graph of a local symplectic transformation. Introduction In the study of pseudodifferential operators on manifolds, there are two important regimes to keep in mind. The first is a global study of pseudodifferential operators defined using the local Fourier transform on the cotangent bundle. If X is a compact smooth manifold and T * X is the cotangent bundle with the local coordinates ρ = (x, ξ), we study pseudodifferential operators with principal symbol homomgeneous at infinity in the ξ variables. Let Y be another compact smooth manifold of the same dimension as X, and suppose there is an algebra isomorphism from the algebra of all pseudodifferential operators on X (filtered by order) to the same algebra on Y , and suppose that isomorphism preserves the order of the operator. Then Duistermaat-Singer [DS] have shown that this isomorphism is necessarily given by conjugation by an elliptic Fourier Integral Operator (FIO). The other setting is the semiclassical or "small-h" regime. One can study globally defined semiclassical pseudodifferential operators, but many times it is meaningful to study operators which are microlocally defined in some small set (see §2 for definitions). Then we think of the h parameter as being comparable to |ξ| −1 in the global, non-semiclassical regime. Thus the study of small h asymptotics in the microlocally defined regime should correspond to the study of high frequency asymptotics in the global regime. We therefore expect a similar result to that presented in [DS], although the techniques used in the proof will vary slightly. Let X be a smooth manifold, dim X = n ≥ 2, and assume U ⊂ T * X is an open set. Let Y be another smooth manifold, dim Y = n, and let V ⊂ T * Y . Let Ψ 0 /Ψ −∞ (U ) denote the algebra of semiclassical pseudodifferential operators defined microlocally in U filtered by the order in h, and similarly for V (see §2 for definitions). Theorem 1. Suppose is an order preserving algebra isomorphism. For every U ⋐ U open and precompact, there is a symplectomorphism To put Theorem 1 in context, we observe that every algebra homomorphism of the form (1.1) is an order preserving algebra isomorphism, according to Proposition 2.3 in §2. Automorphisms of algebras of pseudodifferential operators have also been studied in the context of the more abstract Berezin-Toeplitz quantization in [Zel]. Acknowledgements. The author would like to thank Maciej Zworski for suggesting this problem and many helpful conversations. This work was started while the author was a graduate student in the Mathematics Department at UC-Berkeley and he is very grateful for the support received while there. 
Preliminaries Let C ∞ (T * X) denote the algebra of smooth, C-valued functions on T * X, and define the global symbol classes We define the essential support of a symbol by complement: By multiplying elements of S m (T * X) by an appropriate cutoff in C ∞ c (U ), we may think of symbols as being microlocally defined in U , and define the class of symbols with essential support in U S m (U ) = a ∈ C ∞ ((0, 1] h ; C ∞ c (U )) : |∂ α a| ≤ C α h −m . We write S m = S m (U ) when there is no ambiguity. We can think of elements of S m as formal power series in h: where each a j is in C ∞ c (U ) and has derivatives of all orders bounded in h. We have the corresponding spaces of pseudodifferential operators Ψ m (U ) acting by the local formula (Weyl calculus) For A = Op w h (a) and B = Op w h (b), a ∈ S m , b ∈ S m ′ we have the composition formula (see, for example, the review in [DiSj]) with ω the standard symplectic form. Observe # preserves essential support in the so that Ψ m (U ) is the class of pseudodifferential operators with wavefront set contained in U . We denote We will need the definition of microlocal equivalence of operators. Suppose T : C ∞ (X) → C ∞ (X) and that for any seminorm · 1 on C ∞ (X) there is a second seminorm · 2 on C ∞ (X) such that for some M 0 fixed. Then we say T is semiclassically tempered. We assume for the rest of this paper that all operators satisfy this condition (see [EvZw,Chap. 10] for more on this). Let U, V ⊂ T * X be open pre-compact sets. We think of operators defined microlocally near V × U as equivalence classes of tempered operators. The equivalence relation is In the course of this paper, when we say P = Q microlocally near U × V , we mean for any A, B as above, or in any other norm by the assumed pre-compactness of U and V . Similarly, we is the algebra of bounded semiclassical pseudodifferential operators defined microlocally in U modulo this equivalence relation. It is interesting to observe that this equivalence relation has a different meaning in the high-frequency regime. There, Ψ −∞ (X) corresponds to smoothing operators, although they may not be "small" in the sense of h → 0. We have the principal symbol map We will use the following well-known semiclassical version of Egorov's theorem (see [Ch1,Ch2] or [EvZw] for a proof). Proposition 2.1. Suppose U is an open neighbourhood of (0, 0) and κ : U → U is a symplectomorphism fixing (0, 0). Then there is a unitary operator F : Observe that Proposition 2.1 implicitly identifies a neighbourhood U with its coordinate representation. To make a global statement, we use the following Lemma. is a symplectomorphism and let F j be the quantization of κ| Uj , j = 1, 2, as in Proposition 2.1. Then Proof. From [Ch2, Corollary 3.4] we can for 0 ≤ t ≤ 1 find a family of symplectomorphisms κ t : R 2n → R 2n , a Hamiltonian q t , and linear operators F j (t) : L 2 (R n ) → L 2 (R n ), j = 1, 2 satisfying: and F j (0) = id + O(h 2 ) as a pseudodifferential operator. The adjoints satisfy as a pseudodifferential operator. A calculation shows F 1 F * 2 and F * 1 F 2 are constant. The conclusion of the Lemma holds at t = 0, so it holds at t = 1 as well. The next proposition is a more global version of Proposition 2.1. Proposition 2.3. Suppose U ⊂ T * X is an open set and κ : U → κU is a symplectomorphism. Then for any U ⋐ U open and precompact, there is a linear operator F : L 2 (X) → L 2 (X), microlocally invertible in U × κ( U ) such that for all A = Op w h (a), Proof. 
The idea of the proof is to glue together operators from Proposition 2.1 with a partition of unity. Let 1 = j χ j be a partition of unity of U so that H 1 (supp χ j , C) = {0} for each j. Let U j = supp χ j ∩ U , V j = κ(U j ), and let F j be the quantization of κ| Uj as in Proposition 2.1. Set F = j F j χ w j , where χ w j = Op w h (χ j ), so that F * = k χ w k F * k . We first verify: microlocally on U since U is covered by finitely many of the U j s. Further, as above. Hence F * is an approximate left and right inverse microlocally on U ×κ( U) so F is microlocally invertible. Now for each j, chooseχ j ∈ C ∞ c (T * X) satisfyingχ j ≡ 1 on supp χ j with support in a slightly larger set so that jχ j ≤ C on U , for C > 0 fixed. We calculate for A = Op w h (a): Then from Lemma 2.2 we have and since we can cover U with finitely many of the U j , the error in (2.3) is O(h 2 ) microlocally on U and the Proposition follows. Remark. This notion of quantization of symplectic transformations is a constructive version of the more general definition due to Hor2,Mel] as an integral operator with a distribution kernel supported on the Lagrangian submanifold associated to the symplectic relation (see also [Dui] and the recent semiclassical treatment in [Ale]). Let Y be another smooth manifold of the same dimension as X, and let V ⊂ T * Y be a non-empty, pre-compact, open set. We say is an order preserving algebra isomorphism (of algebras filtered by powers of h) if The Proof of Theorem 1 We break the proof of Theorem 1 into several lemmas. Lemma 3.1. The maximal ideals of S 0 /S −1 (U ) + C are either of the form Proof. Clearly for each ρ ∈ U , M ρ is a maximal ideal. Also, M ∂U is maximal, since any ideal M satisfying M ∂U M must contain a constant, and therefore is equal to S 0 /S −1 (U ) + C. Suppose M is another maximal ideal which is not of the form (3.1) for any ρ ∈ U . Then for each point ρ ∈ U , there is a ρ ∈ M such that a ρ (ρ) = 0. Further, by multiplying by a (positive or negative) constant if necessary, we may assume for each ρ there is a neighbourhood U ρ of ρ such that a ρ | Uρ ≥ 1. Let a(x, ξ) ∈ M ∂U , and let K = ess-supp h (a) ⋐ U. As K is compact, we can cover it with finitely many of the U ρ , The following three lemmas are a semiclassical version of [DS] with a few modifications to the proofs. is an order preserving algebra isomorphism. Then there exists a diffeomorphism κ : U → V . Proof. We first "unitalize" our algebra of pseudodifferential operators by adding constant multiples of identity. That is, let by defining for C ∈ C and P ∈ Ψ 0 (U ) g(C + P ) := C + g(P ). Observeg induces an algebra isomorphism Since g 0 takes maximal ideals to maximal ideals, we can define a map κ : U → V. By applying g −1 0 , we immediately see κ is bijective. Now for p ∈S 0 (U ) and ρ ∈ U , observe For each ρ ∈ U , let (x, ξ) be local coordinates for X in a neighbourhood of ρ which does not meet ∂U . Choosing a suitable cutoff χ ρ equal to 1 near ρ, the χ ρ x j and χ ρ ξ k are approximate coordinates near ρ: χ ρ x j , χ ρ ξ k ∈ S 0 (U ) for all j, k; and similarly for χ ρ ξ j for all j. Composing with inverse coordinate functions in a neighbourhood of κ(ρ) implies κ −1 is smooth on U . The same argument applied to g −1 shows κ is smooth on V , hence a diffeomorphism. Lemma 3.4. Suppose is an order-preserving automorphism which preserves principal symbol. Then there exists B ∈ Ψ 0 ( U ), elliptic on U such that Proof. The proof will be by induction. 
We drop the dependence on U since the lemma is concerned with automorphisms. Suppose for l ≥ 1 we have for every m and every P ∈ Ψ m This induces a map β : S m /S m−1 → S m−l /S m−l−1 , which, using the Weyl composition formula (2.1), satisfies Consider the action of β on S 0 , and observe from property (i) above, for p, q ∈ S 0 , β(pq) = β(p)q + pβ(q) ∈ S −l , so β is h l times a derivation on S 0 . Replace g 1 (P ) with Bg 1 B −1 . Now for the purposes of induction, assume g 1 (P ) − P ∈ S m−l ( U ), and apply the above argument to get B l ∈ S −l so that g 1 (P ) − B −1 l P B l ∈ S m−l−1 ( U ). Then replacing g 1 (P ) with B l g 1 (P )B −1 l finishes the induction. Thus there exists B ∈ S 0 ( U ) so that Bg 1 (P )B −1 = P mod O(h ∞ ).
Predictors of Death among Patients Who Completed Tuberculosis Treatment: A Population-Based Cohort Study Background Mortality among patients who complete tuberculosis (TB) treatment is still high among vulnerable populations. The objective of the study was to identify the probability of death and its predictive factors in a cohort of successfully treated TB patients. Methods A population-based retrospective longitudinal study was performed in Barcelona, Spain. All patients who successfully completed TB treatment with culture-confirmation and available drug susceptibility testing between 1995–1997 were retrospectively followed-up until December 31, 2005 by the Barcelona TB Control Program. Socio-demographic, clinical, microbiological and treatment variables were examined. Mortality, TB Program and AIDS registries were reviewed. Kaplan-Meier and a Cox regression methods with time-dependent covariates were used for the survival analysis, calculating the hazard ratio (HR) with 95% confidence intervals (CI). Results Among the 762 included patients, the median age was 36 years, 520 (68.2%) were male, 178 (23.4%) HIV-infected, and 208 (27.3%) were alcohol abusers. Of the 134 (17.6%) injecting drug users (IDU), 123 (91.8%) were HIV-infected. A total of 30 (3.9%) recurrences and 173 deaths (22.7%) occurred (mortality rate: 3.4/100 person-years of follow-up). The predictors of death were: age between 41–60 years old (HR: 3.5; CI:2.1–5.7), age greater than 60 years (HR: 14.6; CI:8.9–24), alcohol abuse (HR: 1.7; CI:1.2–2.4) and HIV-infected IDU (HR: 7.9; CI:4.7–13.3). Conclusions The mortality rate among TB patients who completed treatment is associated with vulnerable populations such as the elderly, alcohol abusers, and HIV-infected IDU. We therefore need to fight against poverty, and promote and develop interventions and social policies directed towards these populations to improve their survival. Introduction Tuberculosis (TB) continues to be one of the leading causes of death from infectious diseases worldwide [1][2]. Its mortality especially affects low income countries with high incidence rates of human immunodeficiency virus (HIV) infection. However, in developed countries, where TB is less common and mortality has gradually declined in recent decades, TB patients also die, though not always from TB itself [3][4][5][6][7]. Various studies have reported TB mortality and associated factors during active disease [4][5][7][8][9] and have emphasized the importance of treatment adherence to reduce the probability of dying [3,[7][8][9][10][11]. However, there is little data about the long-term survival for successfully treated TB patients. It has been suggested that the treatment outcome may not reflect final patient status, in part due to pulmonary impairment after TB disease [12]. This has implications for patient care, such as more aggressive case prevention strategies and post-treatment evaluation, as well as for TB control of default and lost to follow-up cases [12][13][14]. The HIV and injecting drug users (IDU) epidemics in developed countries during the 1990's had a considerable impact on TB rates and mortality [15,16]. It is therefore of interest to determine the status of the TB patients who survived and whether they still represent a vulnerable group, despite drug abuse care programs and the use of highly active antiretroviral therapy (HAART). 
The identification of the characteristics of patients who die prematurely will help identify the most vulnerable populations and help to target them in future public health interventions. Therefore, the aim of this study was to determine the incidence of death and its predictors in a cohort of successfully treated TB patients. Ethics statement Demographic and clinical data were obtained from the epidemiological questionnaire used by TB Prevention and Control Program (TBPCP). The data was treated and analysed anonymously. The analysis was carried out retrospectively and involved data collected on a routine basis within the National Tuberculosis Program approved by the Spanish Ministry of Health. Therefore, no ethical approval nor informed consent was required. All data were treated in a strictly confidential manner following the ethical principles of the Helsinki Declaration of 1964 revised by the World Medical Organization in Edinburgh, 2000 and the Organic Law 15/1999 of Data Protection in Spain. Setting The study was conducted in Barcelona (Catalonia, Spain), which had a population of 1,508,805 inhabitants living in an area of 100.4 square km during the enrollment period. The TBPCP has been operating in the city since 1987. Study design and population This retrospective population-based cohort study included all patients detected by the TBPCP who began treatment between October 1 st , 1995 and October 31 st , 1997, with culture confirmation and drug susceptibility testing (DST) results, and who resided in the city of Barcelona. Cases that completed treatment according to the European recommended treatment outcome definition [17] were selected and followed to determine mortality rates and factors associated with death. The follow-up period continued until December 31 st , 2005, at which time all cases were classified as either survived, transferred, or dead. Definitions A ''definite TB case'' was defined using international recommendations [14]: a patient was considered to have TB if the culture was positive for M. tuberculosis complex. All patients who completed TB treatment, regardless of the availability of bacteriological confirmation, were considered cured [8,18]. TB recurrence was defined as any new clinical and/or microbiological TB diagnosis presented by a patient who completed treatment after being TB disease-free for at least one year since treatment completion [19][20]. Variables and information sources All data came from the Barcelona TBPCP epidemiological surveys preformed by public health nurses on all detected cases. The Epidemiology Service collects data on TB and AIDS cases reported by physicians and also performs active surveillance for undeclared cases (via microbiological laboratories, hospital discharges, the city mortality registry, and social services). Sociodemographic variables included age, sex, country of birth (foreign or Spain), city district residence (inner-city or other), homelessness, prison history, smoking, alcohol abuse and IDU. Consumption of over 280 g of alcohol per week for men and over 168 g for women was considered alcohol abuse. Clinical variables included presence of HIV infection and route of infection, recurrence, type of TB (pulmonary, extrapulmonary or both (mixed TB) and chest radiograph results (abnormal non-cavitary, normal, or caviatry). 
Microbiological and treatment variables included smear test results, type of resistance (none, primary or secondary), drug resistance to isoniazid and rifampin (multi-drug resistance, or MDR), previous TB treatment, and treatment under directly observed treatment (DOT). DOT was used for some patient groups and defined as a observation from a healthcare worker of a patient as they take their TB medication. Primary resistance was defined as the presence of resistance to one or more anti-TB drugs in strains obtained from patients who had never received treatment. Secondary resistance was defined as resistance to one or more anti-TB drugs in strains recovered from patients who had received previous anti-TB treatment. After a patient was included in the study, they were actively followed to verify vital status at the end of the study, as well as any recurrent TB episode during the follow up and/or date of transferout. Hospital clinical records, primary health care registries, the municipal census, the city mortality registry, and the Barcelona city drug abuse information system were reviewed to minimize lost to follow-up patients and compare duplicated information. At the end of the study, patients were considered to be lost to follow-up if no vital status or moving date was available. Statistical analysis A descriptive analysis of the cohort was preformed, which included the medians and interquartile range (IQR) for quantitative variables, because none of the quantitative variables displayed normal distribution. The x 2 test was used for categorical variables and two-sided Fisher test were used when applicable. The overall mortality rate was calculated, as well as that specific to immigrants, HIV-infected patients and IDU, expressed in terms of cases per 100 person-years of follow-up (PY) and relative risks (RR). Follow-up time to death, transfer out, or end of the study was calculated in reference to the time elapsed since the end of TB treatment. Patients who died were compared to the rest of the cohort. In the bivariate analysis, survival curves were estimated using the Kaplan-Meier curves and were compared using the Log-rank test. Possible interactions between patient clinical and demographic characteristics were considered and a forward inclusion approach was used. On a multivariate level, a Cox proportional hazards regression was preformed with time dependent covariates in relation to TB recurrence. Variables significantly correlated at the univariate level (p-value,0.10), and those of epidemiological interest. Hazard Ratios (HR) with their 95% confidence intervals (CI) were used to measure association. The proportional hazards assumption was tested by graphical methods and by goodness-offit analysis using Schoenfeld residuals. Analyses were preformed using the statistical packages: SPSS, v. 13.0 (SPSS Inc., Chicago, IL, USA) and R, v. 2.6.0 (The R Foundation for Statistical Computing). Figure 1 shows the flow-chart of cohort selection. Of the 999 (80.7%) patients with culture Mycobacterium tuberculosis growth and DST results, 762 (76.3%) cases correctly completed treatment and thus constituted the study cohort, 6 (0.6%) died from causes attributed to TB, 150 (15%) died due to other causes before finalizing treatment, 45 (4.5%) moved outside of Catalonia, Spain during treatment, and 36 (3.6%) were lost to follow-up before completing treatment. 
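The survival analysis described above was performed in SPSS and R; a minimal Python sketch of the same steps (mortality rate per 100 person-years, Kaplan-Meier curve, Cox regression) is given below. The column names are hypothetical and the lifelines-based code is an illustration, not the authors' analysis scripts.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def survival_analysis(df: pd.DataFrame):
    """Sketch of the analysis described above, for a cohort table with
    hypothetical columns: 'years_followup' (time from treatment completion to
    death, transfer or end of study), 'died' (1 = death during follow-up), and
    binary predictors such as 'age_41_60', 'age_over_60', 'alcohol_abuse',
    'hiv_idu'."""
    # Mortality rate per 100 person-years of follow-up
    rate_per_100py = 100 * df["died"].sum() / df["years_followup"].sum()

    # Kaplan-Meier estimate of survival after treatment completion
    kmf = KaplanMeierFitter()
    kmf.fit(df["years_followup"], event_observed=df["died"])

    # Cox proportional hazards model; hazard ratios with 95% CI.
    # Time-dependent covariates (e.g., TB recurrence) would instead need
    # lifelines' CoxTimeVaryingFitter with start/stop-formatted data.
    predictors = ["age_41_60", "age_over_60", "alcohol_abuse", "hiv_idu"]
    cph = CoxPHFitter()
    cph.fit(df[["years_followup", "died"] + predictors],
            duration_col="years_followup", event_col="died")
    return rate_per_100py, kmf, cph
```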
Cohort description The median age of the cohort was 36 years (IQR 28-52), with a predominance of men, Spanish patients, smokers and alcohol abusers. One in five patients lived in the inner-city district. The most frequent presentation of TB was pulmonary (PTB), followed by extra-pulmonary TB. Smear test results were positive in 419 (55%) cases. A total of 35 (4.6%) patients presented with some primary drug resistance, while 13 (1.7%) presented with some secondary drug resistance. During the follow-up, there were 30 (3.9%) cases of recurrence; 26 (86.7%) in men and 4 (13.3%) in women. Table 1 shows socio-demographic, clinical, microbiological and treatment characteristics by vital status at the end of the follow-up. The median follow-up duration among patients who completed TB treatment was 8.04 years (IQR 5.8-8.8). At the end of follow-up, 173 (22.7%) patients had died, 83 (10.9%) had moved outside of Catalonia, Spain, and a total of 506 (66.4%) were alive. The patients who died were older, male, Spanish, inner-city residents, alcohol abusers and HIV-infected IDU. Table 1 compares the patients who died during follow-up with the survivors. None of the 11 non-HIV-infected IDU died during follow-up, and they were therefore excluded from the multivariate analysis. The median duration of follow-up after TB treatment completion and prior to death was 3.7 (IQR 1.5-6.3) years. The cumulative probabilities of dying at 1, 3, 6 and 9 years of follow-up were 4.5, 11.1, 17.7, and 25.9 percent, respectively. The incidence of recurrence was 0.5/100 PY, or 530 cases per 100,000 inhabitants; 1.1/100 PY among HIV-infected patients, and 0.4/100 PY among non-HIV-infected patients. Factors associated with death The following factors were significantly associated with death at the univariate level: male sex, age over 41 years, born in Spain, residence in the inner-city, alcohol abuse, HIV-infected IDU, mixed TB, treatment under DOT, cavitary and normal chest X-ray, and previous TB treatment (Table 2 and Figures 2, 3, 4). At the multivariate level, age was associated with death, with a significant gradient and a 3.5-fold higher risk of dying after age 41 years. HIV infection combined with IDU was also significantly associated, with a 7.9-fold higher risk of dying. Alcohol abusers had a 1.7 times higher risk. TB recurrence was not found to be significantly associated with death (Table 2). Discussion Factors such as alcohol abuse, age, and being an HIV-infected IDU are associated with a higher risk of death despite completion of treatment for an episode of TB, whereas TB recurrence and non-IDU HIV infection were not. Several studies have analyzed the probability of dying during TB treatment [4][5][6][7][8][9][20], but few have studied patients who completed TB treatment to identify factors associated with the probability of dying [13,14,21,22]. Almost a quarter of the study population died during the 8-year follow-up period. The mortality rate in our study population (3,355 per 100,000 PY) was higher than that of the general population (general mortality rate: 1,147 per 100,000 inhabitants), and the mortality rate among those older than 64 years (9,285 per 100,000 PY) was also higher than that of the general population older than 64 years (4,843 per 100,000 PY) [23]. The higher mortality rate could be due to the considerable number of IDU and the high percentage of HIV-infected IDU. 
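For readers unfamiliar with person-year rates, the arithmetic behind the figures quoted above is straightforward; the sketch below shows it with the 173 observed deaths and an assumed total follow-up of roughly 5,150 person-years (a back-of-envelope figure, not a value reported in the study), which reproduces a crude rate close to the stated 3,355 per 100,000 PY.

```python
def rate_per_100k_py(events: int, person_years: float) -> float:
    """Crude incidence rate per 100,000 person-years of follow-up."""
    return events / person_years * 100_000

# 173 deaths over ~5,150 assumed person-years -> ~3,360 per 100,000 PY
print(rate_per_100k_py(173, 5_150))
```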
Similarly, literature from the USA [12,24], the Netherlands [21] and Finland [25] reports that TB patients have a higher risk of dying than the general population during TB treatment. However, it has also been suggested that the final treatment outcome may not reflect the patient's final status, perhaps in part due to pulmonary impairment after an episode of TB. This disease can produce changes and chronic lesions in the lungs, such as bronchiectasis and pulmonary fibrosis, which have been shown to be associated with decreased survival [12]. According to previous studies, the following factors were associated with death: age [5][6][7][8][23], HIV infection [5,8,9,14,24], being an IDU [8,15,19,24] and alcohol abuse [5,8,24]. It has also been suggested that an increase in the risk of death may not be due to TB, but rather to individual factors such as HIV infection, alcohol or drug abuse, and precarious living conditions [8,22,25]. These modifiable risk factors provide a possibility of improvement through prevention and/or treatment; thus, adherence to HAART among IDU patients should be ensured. Age, a non-modifiable factor, can also influence the final patient outcome. For example, older patients may experience longer diagnostic delays due to clinical manifestations that differ from those of younger patients [25][26][27][28]. Delays in diagnosing TB, probably due to associated comorbidity, as well as lower immunity in older populations, have been suggested as possible explanations for their increased mortality [28]. As expected in our setting [29], a considerable proportion of HIV-infected TB patients (23.4%) were IDU. Some studies have shown an increased mortality of up to 10 times higher in co-infected patients compared to non-HIV-infected TB patients during the pre-HAART era. The increased use of HAART, which is effective and widespread in many high-income countries but very often lacking in most rural areas of low-income countries, has led to changes in mortality among TB/HIV co-infected patients [30]. In Spain, HAART became available and free of cost in 1996. In contrast to other studies [26,31], we observed that co-infection with HIV among non-IDU is not associated with mortality. This could possibly be due to better adherence to HAART than in the IDU population. It is known that patients previously treated for TB have a higher risk of presenting MDR TB [31][32][33] and of dying [34] than new TB patients. However, in our study, previous TB treatment was not associated with mortality in the adjusted model. Abnormal non-cavitary and cavitary chest X-rays were not associated with mortality in the adjusted model. Nevertheless, cavitary chest X-ray showed a tendency toward protection against mortality at the multivariate level (p-value 0.056), probably due to the well-known association between a competent immune system and cavitary TB [35]. Type of TB was highly correlated with the chest X-ray variable and was therefore excluded from the adjusted model. Contrary to the results of studies performed in the USA, the European Union, Mexico, or Azerbaijan [7][8][9][10][11][12][13][36,37], we did not find any association between MDR TB and mortality, perhaps because our study population consisted of patients who completed TB treatment. Among the 156 excluded patients who died during treatment, only four (2.6%) had MDR-TB, 42 (26.9%) were IDU, and 59 (37.8%) were HIV-infected. Sixty-seven died in 1995-1996, when HAART was not yet widely available. 
The prevalence of MDR-TB in our city is low because of the universal use of fixed-dose combination treatment and extensive DOT among vulnerable populations. Despite existing evidence concerning the association of smoking with death and with TB [38,39], we did not find a relationship between smoking and mortality (HR: 1.1, 95% CI 0.8-1.4), perhaps because the effects of tobacco occur over a longer period of time than our follow-up period of only eight years. We also found no effect of being foreign-born on mortality, although some of the foreign-born may have returned to their home countries and the final status was not recorded for some cases. Adjusting for type of TB instead of chest X-ray (correlated variables), disseminated TB presentation (mixed TB) was not associated with a higher risk of mortality, as reported in a study from the USA [24]. As in similar settings [8], DOT was associated with a higher risk of death because it is usually implemented in the most vulnerable populations, such as HIV-infected patients, IDU, and alcohol abusers. Thus, it was considered a confounder and was not included in the multivariate analysis. High TB recurrence rates have been reported in countries with high TB incidence and limited control programs [13,22]. The recurrence rate found during follow-up in our study population is one of the lowest when compared to those reported in recent publications [22,[40][41][42][43]], which range between 0 and 14%. Possible reasons for the low recurrence rate in our study include lower TB incidence, smaller percentages of MDR TB, and/or varying inclusion criteria and definitions [40]. The existence of an effective TBPCP, which has prioritized the DOT strategy among high-risk groups to achieve compliance rates of 95% since 1995 [5,8], could have also influenced our results. Additionally, few studies have assessed the influence of TB recurrence on mortality. We found that TB recurrence among patients who completed treatment was not associated with a higher risk of mortality. This was also observed in low-incidence settings, such as the USA [24], but not in high-incidence settings such as Uzbekistan [13]. These differences could be due in part to the relapse and recurrence definitions. Above all, the lack of an association between TB recurrence and mortality in our study could be due to a high percentage (over 80%) of recurrent cases that correctly underwent treatment for their second TB episode. Sputum samples frequently cannot be obtained from patients who complete treatment. For this reason, and because the prevalence of drug resistance is low, we considered treatment completion to be a good approximation of cure [13,18]. Mortality data for HIV-infected people, alcohol abusers and drug users in the general population were not available. Therefore, performing direct comparisons and calculations among these hidden populations is not possible. Other possible limitations of this population-based study include the limited access to clinical data such as HAART treatment and CD4 cell count, the limited presence of subpopulations who are usually culture-negative, such as children, and the absence of appropriate molecular techniques to determine whether recurrence was due to reactivation of the same TB strain or to re-infection by a different strain [44]. Identification of the origin of recurrence could have further implications for public health policy on TB [13,31,[42][43][44]]. 
Though the percentage of patients lost to follow-up was small (3.6%), we compared them to the patients included in our analysis and found that the lost-to-follow-up population included a larger proportion of foreign-born and inner-city residents. These patients may therefore be under-represented, which could have skewed our final results. We conclude that HIV-infected IDU, those over 41 years old, and alcohol abusers have a poor long-term prognosis after completing TB treatment, whereas TB recurrence and MDR TB are not associated with mortality when the first episode is successfully treated. Decreasing mortality among TB patients requires new public health interventions [44] and the enhancement of existing control programs to improve both prevention and treatment. Interventions should be directed at modifiable risk factors, such as alcohol abuse, and at the treatment of HIV infection and of IDU patients. Priority must be given to promoting social policies that fight poverty and achieve a decline in mortality, as well as an improvement in the quality of life and perceived health status of our patients.
The influence of risk factors, biomarkers and care settings on SUDEP counseling Although sudden unexpected death in epilepsy (SUDEP) is the most feared epilepsy outcome, there is a dearth of SUDEP counseling provided by neurologists. This may reflect limited time, as well as the lack of guidance on the timing and structure for counseling. We evaluated records from SUDEP cases to examine frequency of inpatient and outpatient SUDEP counseling, and whether counseling practices were influenced by risk factors and biomarkers, such as post-ictal generalized EEG suppression (PGES). We found a striking lack of SUDEP counseling despite modifiable SUDEP risk factors; counseling was limited to outpatients despite many patients having inpatient visits within a year of SUDEP. PGES was inconsistently documented and was never included in counseling. There is an opportunity to greatly improve SUDEP counseling by utilizing inpatient settings and prompting algorithms incorporating risk factors and biomarkers. Introduction Sudden unexpected death in epilepsy (SUDEP) is a tragic epilepsy outcome feared by patients and families, with an incidence of 1 in 1000 per year in children and adults [1,2]. The American Academy of Neurology and American Epilepsy Society recommend that neurologists routinely counsel patients about SUDEP risk [1,3]. Adults with epilepsy and parents of children with epilepsy also want SUDEP information [4]. However, neurologists infrequently discuss SUDEP with patients due to limitations of time and knowledge, as well as anxiety and discomfort about these discussions [4][5][6][7]. Neurologists' hesitance to discuss SUDEP likely reflects lack of guidance on initiating these conversations. The timing, frequency, and approach to counseling are not well-characterized and require patient-centered communication and dynamic risk stratification. Although SUDEP mechanisms are incompletely defined, risk factors include frequent generalized tonic-clonic seizures, nocturnal seizures, and medication non-adherence [1,3]. Potential SUDEP biomarkers include interictal heart rate variability, autonomic dysregulation, and post-ictal generalized EEG suppression (PGES) [8][9][10]. PGES is electrographic activity <10 µV in all electrodes in a bipolar montage, within 30 s of seizure offset [11]. PGES is more common after convulsive seizures, particularly with a prolonged tonic phase, and may drive autonomic dysregulation and heart rate variability [8,12]. Applying risk factors and biomarkers to SUDEP counseling is poorly defined and there are few risk assessment tools. There is also no consensus on whether SUDEP conversations should be held in inpatient, outpatient or both settings. We retrospectively evaluated the records of SUDEP cases to examine frequency of inpatient and outpatient counseling, whether PGES was documented in EEG reports and whether these findings influenced counseling practices. 
Materials and methods We retrospectively identified pediatric and adult definite, probable, and near-SUDEP cases at the University of California, San Francisco, and New York University Langone Medical Center between 2009 and 2015. All patients in our study were managed by epileptologists. We reviewed electronic medical records (EMRs) of patients aged 6 months to 65 years admitted to the epilepsy monitoring unit. Deceased patients were identified through EMR review and submission to the National Death Index. For deceased patients, deaths were considered SUDEP if they had definite epilepsy (two seizures >24 h apart or a single seizure with EEG epileptiform abnormalities) and met the criteria for SUDEP [13]. Patients were excluded if the epilepsy diagnosis was uncertain, or they did not have an epilepsy monitoring unit (EMU) admission with medical records. Subject data were de-identified and stored on an encrypted password-protected computer. Subject risk was determined as minimal. The data were collected as part of a multi-site study with a central IRB from the primary site, New York University Langone Medical Center. EEG reports were reviewed to identify terms indicating concern for PGES, including "attenuation", "suppression" and "electrodecrement." EEGs were reviewed by an epileptologist to confirm they met PGES criteria [11]. For cases where there was no mention of PGES or PGES markers, the post-ictal EEGs were re-reviewed by an epileptologist. We assessed whether SUDEP counseling occurred in inpatient or outpatient settings based on chart documentation, using the search function for the terms "SUDEP" and "Sudden Unexpected Death in Epilepsy." We examined age at death, gender, race, ethnicity, primary language, and history of medication nonadherence. We qualitatively evaluated documentation to determine the rationale for SUDEP counseling. Results Ten participants (36%) had PGES following ≥1 seizure based on independent review. PGES was mentioned in the EEG reports of four patients (14%). Four additional (14%) reports mentioned "diffuse attenuation" or "diffuse suppression" confirmed to be PGES on EEG review (Fig. 1). Two reports (7%) did not comment on PGES, attenuation or suppression, but at least one seizure met EEG criteria for PGES. Twelve participants (43%) did not have seizures captured during EMU admissions. No patient with PGES had documented SUDEP discussions. The median time between last visit and SUDEP was 5 months (IQR 1.5-9). In comparing the timing of outpatient versus inpatient visits to the time of SUDEP, most (79%; n = 22) had outpatient visits closer to the time of death. Eleven (39%) patients had an inpatient or outpatient visit within 1 month of death; SUDEP counseling was not documented at any of these visits. Five patients with PGES (50%) had a visit within 1 month of SUDEP (Table 2, Supplementary File A). 
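In the study, PGES was identified visually by an epileptologist; the sketch below is only a simplified, hypothetical illustration of the amplitude criterion quoted in the Introduction (all channels <10 µV in a bipolar montage within 30 s of seizure offset). The minimum-duration parameter and the assumption of an already filtered, artifact-free signal are additional simplifications, not part of the published criterion.

```python
import numpy as np

def has_pges(eeg_uv: np.ndarray, fs: float, offset_s: float,
             window_s: float = 30.0, threshold_uv: float = 10.0,
             min_dur_s: float = 1.0) -> bool:
    """eeg_uv: channels x samples, bipolar montage, amplitudes in microvolts."""
    start = int(offset_s * fs)
    post = np.abs(eeg_uv[:, start:start + int(window_s * fs)])
    suppressed = np.all(post < threshold_uv, axis=0)  # all channels quiet at once
    # require a contiguous run of suppressed samples of at least min_dur_s
    longest, run = 0, 0
    for flag in suppressed:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return longest >= int(min_dur_s * fs)
```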
Discussion SUDEP counseling is challenging for neurologists and best practices are not well-defined [5][6][7]. Our results affirm deficient SUDEP counseling, despite modifiable SUDEP risk factors such as medication nonadherence. The limited counseling that occurred was restricted to outpatient settings. Despite many patients having visits within one month of SUDEP, the topic was not discussed at these visits. PGES was not consistently identified or documented in EEG reports and its presence did not influence prevalence of counseling. Although evidence for the role of PGES in SUDEP pathophysiology is mixed, many studies have identified it as a SUDEP risk factor and most acknowledge its association with other defined SUDEP markers [8][9][10][11][12][14][15]. Mixed reports on its role partly stem from individual differences in its identification and reporting; improved documentation (potentially facilitated by new automated analysis tools) may help resolve this ambiguity [16]. This documentation could then be incorporated into SUDEP risk algorithms in combination with other risk factors to prompt conversations in inpatient and outpatient settings. As our understanding of SUDEP biomarkers (such as PGES) advances, epileptologists should focus on the identification and documentation of these variables. The presence of SUDEP risk factors should, in turn, motivate counseling about this condition. The role of counseling in modifying SUDEP risk remains unproven, but survey data indicate that patients and families want information [4]. Counseling may also impact behaviors such as medication adherence that reduce risk. Clinical checklists can be used to encourage SUDEP discussions. A standardized SUDEP risk screener at each clinic visit or EMU admission could prompt risk reassessment and conversations between physicians and their patients [17]. One such electronic health record-based screening algorithm doubled rates of SUDEP counseling. The algorithm divided patients into "high" and "low" risk groups and re-stratified patients at each visit with different messaging based on their categorization [18]. Similar reminders could also prompt epileptologists to screen for and document SUDEP biomarkers like PGES in EEG reports. 
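To make the idea of a visit-level prompt concrete, the sketch below shows a rule-based stratifier of the kind described above. The specific fields and cut-offs are illustrative assumptions only; they are not the published algorithm [18] and are not validated risk thresholds.

```python
from dataclasses import dataclass

@dataclass
class VisitRecord:
    gtc_seizures_last_year: int      # generalized tonic-clonic seizure count
    nocturnal_seizures: bool
    medication_nonadherence: bool
    pges_documented: bool
    counseled_within_12_months: bool

def sudep_risk_group(v: VisitRecord) -> str:
    # Hypothetical rule: any established risk factor places the patient in "high"
    high = (v.gtc_seizures_last_year >= 3 or v.nocturnal_seizures
            or v.medication_nonadherence or v.pges_documented)
    return "high" if high else "low"

def prompt_counseling(v: VisitRecord) -> bool:
    # Re-stratify at every visit; prompt if high risk or counseling is overdue
    return sudep_risk_group(v) == "high" or not v.counseled_within_12_months
```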
Many participants had inpatient visits within one month of their death during which SUDEP was not discussed. SUDEP counseling should be considered during inpatient stays. EMU admissions are typically an inflection point in epilepsy care and often uncover changes in SUDEP risk through characterization of nocturnal seizure burden or identification of biomarkers. New risk identification can then serve as a natural segue for SUDEP discussions. Patients and families also desire SUDEP information shortly after a new epilepsy diagnosis, which often occurs in the hospital [3,4]. A patient-centered approach is still needed to determine the best counseling setting. Some patients feel overwhelmed by the extensive information provided at epilepsy diagnosis and may prefer information be divided between multiple visits [4]. In these cases, it may be best to defer SUDEP conversations to the outpatient setting. Individual patient assessment is critical to tailor the frequency, timing, and content of counseling for maximum efficacy. Study limitations include limited generalizability to outpatient epilepsy care, where identification of PGES is less feasible. Our study population was primarily limited to white, non-Hispanic, English-speaking patients; our results may not apply to other groups who have high rates of health care disparities and SUDEP. This may be exacerbated by disparities in EMU admissions, accentuating inequities in care. Finally, relying on chart documentation cannot capture counseling that occurred but was not recorded in the medical records, so our results may under-represent counseling rates. Conclusions We need to improve neurologist SUDEP counseling practices. Improved identification and documentation of PGES may raise clinician awareness about SUDEP risk and trigger SUDEP counseling, thereby satisfying an unmet patient need. SUDEP counseling may also be an avenue toward reducing modifiable risk factors. Each case needs an individual, patient-centered approach to determine the ideal counseling setting and timing, but inpatient counseling appears to be underutilized. Fig. 1. Example of PGES. EEG of a young adult with medically refractory generalized epilepsy. Settings: LFF of 1 Hz, HFF of 70 Hz, notch of 60 Hz, sensitivity of 7 µV/mm, and page speed of 30 mm/sec. Seizure semiology of behavioral arrest, left gaze deviation, left arm flexion and right arm extension. Electrographically, the seizure had a generalized onset with diffuse theta, followed by 8 s of background attenuation at seizure offset, meeting criteria for PGES. Table 2 Summary of SUDEP Counseling Characteristics.
Tuning of the Stretchability and Charge Transport of Bis‐Diketopyrrolopyrrole and Carbazole‐Based Thermoplastic Soft Semiconductors by Modulating Soft Segment Contents Polymer semiconductors are promising materials for stretchable, wearable, and implantable devices due to their intrinsic flexibility, facile functionalization, and solution processability at low temperatures. However, the crystalline domain of the conjugated structure responsible for high charge carrier mobility in semiconducting polymers exhibits lower stretchability than semi‐crystalline or amorphous domains. Herein, a set of thermoplastic soft semiconductors is synthesized with different ratios of diketopyrrolopyrrole–carbazole–diketopyrrolopyrrole (DPP‐Cz‐DPP)‐based hard segments and thiophene‐based aliphatic soft segments, mirroring the structure of thermoplastic elastomers. The polymers exhibit decreased glass transition temperatures with increasing soft-segment content. The polymers retain high crystallinity after copolymerization of the large‐sized DPP‐Cz‐DPP core with non‐conjugated segments, owing to the aggregation of the conjugated core, and still possess semi‐crystalline domains after annealing. The polymer films exhibit stretchability under strains of up to 60%. Organic field‐effect transistors fabricated using the stretchable polymers show a mobility range of 0.125–0.005 cm2 V−1 s−1 with different proportions of the soft segment. As the soft-segment content increases, the stretchability improves significantly while the mobilities decrease only modestly. Therefore, this study presents a design principle for the development of high‐performance stretchable semiconducting polymers. Non-conjugated spacers, such as dicarboxamide-based spacers and isophthalamide-based spacers, have been introduced into the backbone to improve the stretchability of copolymers. [33] A thermoplastic elastomer, which contains hard segments with a high melting point and soft segments that soften easily at low temperatures, has excellent stretchability and strong mechanical properties. Similar to the structure of a thermoplastic elastomer, a thermoplastic soft semiconductor is synthesized with highly crystalline hard segments and soft segments, and exhibits effective charge transport through a microstructure with semi-crystalline or amorphous domains. [19,22] Upon stretching, only the soft segments of the thermoplastic soft semiconductor are deformed, while the hard segments maintain the crystalline domains. In previous studies, diketopyrrolopyrrole-carbazole-diketopyrrolopyrrole (DPP-Cz-DPP)-based organic semiconductors exhibited high thermal stability and a highly crystalline microstructure. [51][52][53][54][55] Although the chemical structure of DPP-Cz-DPP has some torsion due to the steric hindrance between DPP and carbazole, it achieves high crystallinity and good aggregation properties, as shown by the differential scanning calorimetry (DSC) curves of DPP-Cz-DPP. [52] Therefore, the DPP-Cz-DPP moiety is suitable for use as the hard segment of a thermoplastic soft semiconductor, owing to its larger crystalline domain compared with previously reported moieties for stretchable polymers, such as diketopyrrolopyrrole- or thiophene-based small conjugated cores. Herein, we synthesized a set of thermoplastic soft semiconductors, denoted as PbDCT-based polymers, containing a DPP-Cz-DPP core and thiophene-based aliphatic soft segments via random copolymerization (Scheme 1). 
The mechanical and electrical properties of the polymers were investigated by observing the film morphology and fabricating organic field-effect transistors (OFETs). In the thermal phase transition analysis by DSC, the thermoplastic polymers exhibited decreased glass transition temperatures with increasing content of the soft segments in the polymer backbone. The PbDCT-based copolymer films with semi-crystalline domains exhibited enhanced stretchability owing to the incorporation of the soft segments. OFETs with PbDCT-based polymers showed decreased mobilities with increasing content of the soft segments. However, the mobilities decreased only modestly as the soft-segment content increased, whereas the stretchability improved steeply. Therefore, our method demonstrates an effective approach for fabricating high-performance stretchable electronics by modulating the soft segments in thermoplastic soft semiconductors, with minimal sacrifice of mobility. Materials and Characterization All starting materials and reagents were purchased from commercial sources and used without further purification unless otherwise specified. 1,12-Di(thiophen-2-yl)dodecane (T12) and Br-DPP-Cz-DPP-Br were prepared according to previously reported procedures. [50,52] Proton nuclear magnetic resonance (1H NMR) spectra were recorded on a Bruker AVANCE III 300 MHz spectrometer. The number-average molecular weight (Mn), weight-average molecular weight (Mw), and polydispersity index (PDI = Mw/Mn) of the polymers were determined via gel permeation chromatography (GPC) at 40 °C using polystyrene as the standard and chloroform as the eluent. The solution and thin film absorption spectra were measured using a Thermo Scientific Evolution 600 UV-vis spectrophotometer. Thermogravimetric analysis (TGA; TA Instruments Auto TGA Q500) and differential scanning calorimetry (TA Instruments DSC 250) were performed under a nitrogen atmosphere at a heating rate of 10 °C min−1. A conventional three-electrode cell was used for the cyclic voltammetric (CV) measurements. A platinum electrode was used as the counter electrode, a thin indium tin oxide (ITO) film was used as the working electrode, and an Ag wire was used as the reference electrode in a computer-controlled VersaSTAT3 instrument (Ametek) at room temperature. The films were prepared by spin-coating a chlorobenzene (CB) solution of the polymers on the ITO working electrode. All measurements were performed in anhydrous acetonitrile with 0.1 M tetrabutylammonium perchlorate as the conducting electrolyte. Synthesis of 1,12-Bis(5-bromothiophen-2-yl)dodecane (DiBr-T12) T12 (0.50 g, 1.5 mmol) was added to a flask containing 20 mL chloroform. A solution of N-bromosuccinimide (0.56 g, 3.15 mmol) was added slowly in three portions to the reaction mixture, which was stirred for 6 h at room temperature. The reaction mixture was poured into deionized (DI) water and extracted with dichloromethane. The organic phase was dried over anhydrous magnesium sulfate and the solvent was removed under vacuum. The residue was purified via column chromatography on silica gel using hexane as the eluent to obtain pure DiBr-T12 (0.54 g, 73%) as an orange solid. The 1H NMR spectrum is shown in Figure S1, Supporting Information. Film Characterization To characterize the surface morphology of the PbDCT-based polymers, they were dissolved in anhydrous CB at a concentration of 5 mg mL−1 and spin-coated at 2000 rpm for 1 min on SiO2 substrates. 
The coated films of PbDCT, PbDCT12_7:3, PbDCT12_5:5, PbDCT12_3:7, and PbDCT12_1:9 showed thicknesses of 4, 11, 16, 17, and 20 nm, respectively. The annealed films exhibited thicknesses of 4, 11, 13, 15, and 17 nm, respectively. Height and electrical conductivity images of the organic semiconductor films were obtained via atomic force microscopy (AFM; Park Systems, NX-10) at the Cooperative Center for Research Facilities, Yonsei University. For the conductive AFM, a bias of 5 V was applied to the polymer films. To analyze the thin-film microstructure, grazing incidence wide-angle X-ray scattering (GIWAXS) was performed at the 9A beamline of the Pohang Accelerator Laboratory. The 2D scattering patterns of the samples were obtained via X-ray diffraction at a grazing angle of 0.12°. To measure the crack-forming strain of the polymer films, the PbDCT-based polymer films were transferred to PDMS-based stretchable substrates using polyvinyl alcohol (PVA)-modified substrates. PDMS substrates were prepared by blending a precursor (Sylgard 184) and a cross-linker in a ratio of 12:1. The thickness of the PDMS substrate was approximately 1.5 mm after cross-linking. PVA was dissolved in DI water to 5 wt% and spin-coated at 2000 rpm for 1 min on a SiO2 substrate as the sacrificial layer. The PbDCT-based polymers were spin-coated at 2000 rpm for 1 min on the PVA-modified substrates. The samples were annealed at the optimized temperature for 15 min and cooled to room temperature. The polymer-coated samples were attached to stretchable PDMS substrates. Water was then added to the gap between the two substrates. After the dissolution of PVA, the polymer films remained on the PDMS substrates. Organic Field-Effect Transistor (OFET) Fabrication and Characterization OFETs composed of PbDCT-based polymers were fabricated with a top-gate/bottom-contact (TG/BC) structure. The source and drain electrodes of Au/Ni (15/5 nm thickness) were patterned on Corning Eagle 2000 glass substrates using conventional photolithography and lift-off processes. The substrates were cleaned using bath ultrasonication in DI water, acetone, and isopropyl alcohol for 10 min each. After drying in an oven, the substrates were treated with oxygen plasma for 3 min at 120 W. For the active layers, PbDCT-based polymers dissolved in CB were spin-coated at 2000 rpm for 1 min on the substrates in a nitrogen-filled glovebox. The samples were annealed at various temperatures for 20 min. For the dielectric layers, poly(methyl methacrylate) was dissolved in n-butyl acetate, spin-coated at 2000 rpm for 1 min to a thickness of 500 nm, and annealed at 80 °C for 1 h. The Al gate electrode (50 nm thickness) was deposited through shadow masks using a thermal evaporator. Electrical characterization of the OFETs was performed using a semiconductor parameter analyzer (Keithley 4200-SCS) in a nitrogen-filled glovebox. The mobilities in the saturation regime were evaluated using the standard expression I_DS = (W·C_i/2L)·µ·(V_GS − V_th)², where W and L are the channel width and length, C_i is the capacitance per unit area of the gate dielectric, and V_th is the threshold voltage. Polymer Synthesis First, a thiophene-based aliphatic monomer (DiBr-T12) was synthesized to impart stretchability to the polymer backbone through bromination of T12 with N-bromosuccinimide, as shown in Scheme 2. To determine the difference in properties according to the mixing ratio of the highly aggregated DPP-Cz-DPP and the flexible aliphatic spacer (T12) in the polymer backbone, the polymers were synthesized with different ratios of DPP-Cz-DPP (100%, 70%, 50%, 30%, 10%, and 0%). 
The synthesis procedures of all PbDCT-based polymers are summarized in Scheme 1, and the details are described in the Experimental Section. Six PbDCT-based polymers were prepared via Stille cross-coupling polymerization with different mixing ratios of the monomers under the same reaction conditions. After random copolymerization, the crude polymers were washed using Soxhlet extraction with methanol, acetone, and hexane to remove any by-products and oligomers. Finally, the chloroform fractions were recovered through precipitation in methanol, filtered, and dried under vacuum to obtain a dark purple solid. Except for the polymer without DPP-Cz-DPP, denoted as PT12T, the other five polymers exhibited sufficient solubility in organic solvents, such as chloroform, CB, and o-dichlorobenzene. The Mn and PDIs of the five polymers were determined via GPC at 40 °C using chloroform as the eluent relative to polystyrene standards. Thermal Properties The thermal stabilities of the polymers were investigated via thermogravimetric analysis; the results are summarized in Table 1 and Figure S7, Supporting Information. All polymers exhibited high thermal stability, and no decomposition was observed below 400 °C. As the amount of flexible aliphatic backbone spacer increased, the ash content at temperatures above 500 °C decreased owing to the lower thermal stability of the aliphatic segments, as shown in Figure S7, Supporting Information. The thermal phase transition behaviors of the random polymers were investigated via DSC, as shown in Figure 1. All polymers were scanned in the range of −50 to 330 °C under nitrogen at a heating rate of 10 °C min−1. Unlike the previously reported DPP-Cz-DPP-based polymer, namely PCbisDPP, [52] PbDCT exhibits broad melting (Tm) and recrystallization (Tc) temperatures during the heating and cooling scans. After incorporating the flexible aliphatic backbone spacers, Tm and Tc decreased, with more pronounced peaks. In addition, the Tm of the flexible aliphatic backbone spacers was observed below 100 °C, which implies high ductility. [22] On the other hand, PT12T, composed of a flexible aliphatic backbone, showed a higher Tm than the other random polymers, probably due to the alternating sequence in the PT12T chain. Optical and Electrochemical Properties The normalized UV-vis-NIR absorption spectra of the five PbDCT-based polymers in CB solution and as thin films are presented in Figure 2. The corresponding optoelectronic properties, including absorption peak wavelengths, absorption edge wavelengths, and optical band gaps, are summarized in Table 1. All the polymers in the solution and film states exhibited broad, dual absorptions in the range of 300-800 nm, which are attributed to the localized π-π* and inter- and intramolecular charge transfer transitions. These absorption profiles are typically observed for donor-acceptor-type conjugated polymers. As the amount of flexible aliphatic backbone spacers increased, a hypsochromic shift was observed in the solution and thin film states because of the decrease in conjugation length. In addition, vibronic features were clearly observed for PbDCT and PbDCT12_7:3, whereas those of PbDCT12_5:5 to PbDCT12_1:9, which contained higher contents of flexible aliphatic backbone spacers, decreased. This phenomenon can be ascribed to an increase in solubility owing to the flexible aliphatic backbone. 
To evaluate the electronic structures of the polymers, CV measurements were performed on the polymer films in acetonitrile with tetrabutylammonium hexafluorophosphate as the supporting electrolyte. The highest occupied molecular orbital (HOMO) level of each polymer was calculated from the onset of electrochemical oxidation. The HOMO levels of PbDCT, PbDCT12_7:3, PbDCT12_5:5, PbDCT12_3:7, and PbDCT12_1:9 were −5.08, −5.09, −5.14, −5.17, and −5.19 eV, respectively. As the proportion of flexible aliphatic backbone spacers increased, the HOMO levels of the polymers deepened owing to the increasing bandgap. Film Morphology Characterization To observe the surface morphology of the PbDCT-based polymer films via AFM, the thin films were spin-coated on SiO2 substrates and heated at 100 °C or annealed at optimized temperatures, which are close to the transition temperatures of the polymers. The optimized temperatures were 250 °C for PbDCT, 200 °C for PbDCT12_7:3, PbDCT12_5:5, and PbDCT12_3:7, and 150 °C for PbDCT12_1:9. Nanograined morphologies were observed in the polymer films, as shown in Figure 3a and Figure S8, Supporting Information. After heating at 100 °C, PbDCT, PbDCT12_7:3, PbDCT12_5:5, PbDCT12_3:7, and PbDCT12_1:9 exhibited root-mean-square surface roughness values of 0.60, 0.63, 0.52, 0.69, and 2.68 nm, respectively, which decreased to 0.57, 0.62, 0.49, 0.60, and 1.34 nm in the annealed films, respectively. The smoothest surface was achieved with the annealed PbDCT12_5:5 film by incorporating flexible aliphatic moieties. However, the roughness of the surface increased again with the addition of more aliphatic moieties. In particular, the aliphatic component formed large grains, leading to phase separation in PbDCT12_1:9. Therefore, an excessive aliphatic content in the backbone disrupts the stacking of DPP-Cz-DPP, thereby reducing the conjugation lengths. In the conductive AFM measurements at a bias of 5 V, the PbDCT film appeared bright (Figure 3b), indicating its high electrical conductivity. In addition, several bright spots exhibiting high electrical conductivity were observed on the PbDCT12_7:3 film. However, low conductivities were measured for the other films. These results suggest that the decreased electrical conductivity is due to conjugation breaking caused by the increased content of the insulating non-conjugated spacer in the backbone, which is consistent with the OFET performance discussed in a subsequent section. To investigate the structural order and molecular distance of the semi-crystalline polymers, GIWAXS measurements were performed, and two-dimensional patterns were obtained from the organic semiconductor films, as shown in Figure 3c and Figure S9, Supporting Information. The thin films were prepared similarly to the AFM samples, on Si substrates. After annealing, the PbDCT-based polymers were reorganized and exhibited lamellar stacking in the (100), (200), and (300) plane directions, except for PbDCT and PbDCT12_1:9, which exhibited nearly isotropic orientations. From the DSC curve, PbDCT showed no glass transition, suggesting that the reorganization of this polymer by annealing occurred in random directions. In contrast, the crystallinity of the PbDCT12_1:9 film decreased because of the excessive aliphatic spacers, as seen in the AFM image. 
From the one-dimensional GIWAXS profiles of the PbDCT-based polymer films in the out-of-plane direction (Qz direction), π-π stacking peaks are observed for PbDCT12_7:3, PbDCT12_5:5, and PbDCT12_3:7 after annealing (Figure S10, Supporting Information). The peak positions of the (100) and (200) planes did not differ significantly. However, the peak positions of the (300) plane varied depending on the content of the soft segments in the annealed films. For PbDCT12_3:7, a prominent (010) π-π stacking peak is observed in the in-plane direction (Figure S11, Supporting Information), which can be ascribed to the phase separation caused by excess aliphatic spacers. Crack Formation Strain The crack onset strain of the polymers was measured as a representative stretchability metric. Strain-induced morphological changes in the polymer films were observed under uniaxial tensile loading as a method of evaluating the fracture strain. To measure the fracture strain of the PbDCT-based polymers, the polymer films transferred to stretchable PDMS substrates were imaged using optical microscopy (OM) under various strains (Figure 4). The fracture strain of each polymer was determined by observing crack propagation in the polymer films under strain. The film was stretched in one direction, and a strain-related change was observed in the film morphology. PbDCT, which comprised hard segments only, was easily damaged at low strains. Similar to previously reported crystalline-domain polymers, its fracture strain was less than 10%. [21,22] In contrast, copolymers with soft segments were less damaged even under higher strain. Therefore, the thermoplastic soft semiconductors could withstand increased strain without cracks in the polymer film owing to the increased content of the soft segments. PbDCT12_1:9, with a soft segment proportion of 90%, could withstand strains of less than 70%. OFET Performance To investigate the charge carrier transport properties of PbDCT-based OFETs, we fabricated devices using a solution process. The dissolved PbDCT polymers were spin-coated on rigid substrates with pre-patterned gold source-drain electrodes for the top-gate/bottom-contact structure. The deposited films were annealed at 100, 150, 200, and 250 °C (Figure 5, Figures S12 and S13, Supporting Information). The charge carrier mobility was evaluated using the gradual channel approximation equation, where the channel width and length were 1000 and 20 µm, respectively (see Experimental Section). The threshold voltage (Vth) and the on/off ratio were calculated in the saturation regime. The performances of the OFETs are summarized in Table S1, Supporting Information. The OFETs exhibited enhanced mobility upon annealing at a temperature close to the transition temperature. The improved mobility is ascribed to effective charge transport in the microstructures oriented by the annealing treatment. The OFETs annealed at 200 and 250 °C using PbDCT12_7:3 exhibited comparable performances owing to the transition temperature of 223 °C. From the DSC data, these optimized temperatures for obtaining semi-crystalline domains decreased because of the increased aliphatic spacer content. The performance of the OFETs decreased due to conjugation breaking, resulting from the increased content of the insulating spacers (Figure 5a-e). 
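As a hedged illustration of the gradual channel approximation mentioned above, the sketch below extracts the saturation-regime mobility and threshold voltage from a transfer curve by fitting sqrt(|I_D|) against V_G; µ = 2L/(W·C_i)·slope². W = 1000 µm and L = 20 µm are taken from the text, but the gate capacitance per area is an assumed value for a ~500 nm PMMA dielectric (εr ≈ 3.6), not a number reported by the authors.

```python
import numpy as np

W = 0.1        # channel width, cm (1000 um)
L = 0.002      # channel length, cm (20 um)
C_i = 6.4e-9   # gate capacitance per area, F/cm^2 (assumed for ~500 nm PMMA)

def saturation_mobility(v_g: np.ndarray, i_d: np.ndarray) -> tuple[float, float]:
    """Fit sqrt(|I_D|) vs V_G in the saturation regime; return (mu in cm^2/Vs, V_th)."""
    root_id = np.sqrt(np.abs(i_d))
    slope, intercept = np.polyfit(v_g, root_id, 1)
    mu = 2 * L / (W * C_i) * slope**2
    v_th = -intercept / slope   # V_G at which sqrt(|I_D|) extrapolates to zero
    return mu, v_th
```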
The PbDCT-based OFETs exhibited the highest mobility of 0.125 cm2 V−1 s−1 without conjugation breaking, whereas PbDCT12_1:9 demonstrated the lowest mobility of 0.005 cm2 V−1 s−1 owing to the decreased electrical conductivity caused by the excessive insulating spacers, as shown in the conductive AFM image. The mobilities of the OFETs and the crack onset strains were plotted as a function of the DPP-Cz-DPP content, as shown in Figure 5f. The extended π-conjugation is broken by an increased content of the soft segment bearing the long aliphatic chain, and charge carrier transport is disturbed. Therefore, the stretchability is improved by the increased soft-segment content, but the charge carrier mobility is decreased, resulting in a trade-off relationship between mobility and fracture strain, as noted previously. [18,20] However, the mobilities decreased less as the DPP-Cz-DPP proportion decreased from 70% to 30%, whereas the crack onset strains rapidly increased. Therefore, devices with improved mechanical properties and suitable charge transport behavior were successfully fabricated by introducing soft segments at an optimized proportion. Conclusion This paper presents the synthesis of a series of stretchable polymer semiconductors employing a new thermoplastic design based on random copolymerization of a large-sized DPP-Cz-DPP hard segment and a thiophene-based aliphatic soft segment. The soft segment in the backbone of the copolymer affected the thermal, electrical, and mechanical properties of the polymer semiconductors. The PbDCT-based thermoplastic soft semiconductors exhibited decreased glass transition temperatures with increasing content of the soft segments in the polymer backbone. PbDCT-based copolymers with a soft segment proportion of 90% exhibited enhanced stretchability under strains of up to 60%. A trade-off relationship was observed between the field-effect mobility and fracture strain of the PbDCT-based stretchable polymers. The OFET with the PbDCT-based polymer exhibited the highest mobility of 0.125 cm2 V−1 s−1 without conjugation breaking, whereas the OFETs with an insulating spacer content of 90% in the polymer backbone demonstrated the lowest mobility of 0.005 cm2 V−1 s−1. However, the mobilities decreased only modestly with the increasing ratio of the soft segment, whereas the fracture strain increased abruptly. OFETs with improved mechanical properties and suitable charge-transport behavior can therefore be fabricated by introducing soft segments at an optimized proportion. This study thus suggests a design guideline: a conjugated core forming a large-sized crystalline domain should be introduced into the backbone of a stretchable semiconducting polymer to reduce conjugation breaking. Additionally, the study findings demonstrate that high-performance stretchable electronics can be fabricated by modulating the content of soft segments in thermoplastic soft semiconductors. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
Who uses new walking and cycling infrastructure and how? Longitudinal results from the UK iConnect study Objective. To examine how adults use new local walking and cycling routes, and what characteristics predict use. Methods. 1849 adults completed questionnaires in 2010 and 2011, before and after the construction of walking and cycling infrastructure in three UK municipalities. 1510 adults completed questionnaires in 2010 and 2012. The 2010 questionnaire measured baseline characteristics; the follow-up questionnaires captured infrastructure use. Results. 32% of participants reported using the new infrastructure in 2011, and 38% in 2012. Walking for recreation was by far the most common use. In both follow-up waves, use was independently predicted by higher baseline walking and cycling (e.g. 2012 adjusted rate ratio 2.09 (95% CI 1.55, 2.81) for >450 min/week vs. none). Moreover, there was strong specificity by mode and purpose, e.g. baseline walking for recreation specifically predicted walking for recreation on the infrastructure. Other independent predictors included living near the infrastructure, better general health and higher education or income. Conclusions. The new infrastructure was well-used by local adults, and this was sustained over two years. Thus far, however, the infrastructure may primarily have attracted existing walkers and cyclists, and may have catered more to the socio-economically advantaged. This may limit its impacts on population health and health equity. Introduction In the past two decades, promoting walking and cycling has gained increased policy attention in multiple sectors including health, transport and climate change (Chief Medical Officers of England, 2011; Department of Health and Department for Transport, 2010; THE PEP, 2009; WHO, 2002). It is increasingly recognised that creating a supportive built environment may play a crucial role in enabling the success of individual-level interventions (Giles-Corti, 2006) and in promoting enduring population behaviour change (Butland et al., 2007; Institute of Medicine and National Research Council of the National Academies, 2009; NICE, 2008). Nevertheless, several reviews have highlighted the paucity of controlled, longitudinal studies evaluating new infrastructure for walking or cycling (e.g. Krizek et al., 2009; McCormack and Shiell, 2011; NICE, 2008; Pucher et al., 2009) and many of the studies that do exist have used repeat cross-sectional rather than cohort designs (Ogilvie et al., 2007; Yang et al., 2010). These studies cannot prospectively determine the individual, household or geographic predictors of using new infrastructure. Given that inactive people derive the most benefit from additional physical activity (US Department of Health and Human Services, 1996; Woodcock et al., 2011), new infrastructure would be expected to generate greater public health gains if it attracted new walking or cycling trips rather than existing walkers and cyclists (Ogilvie et al., 2007; Yang et al., 2010), but we know of no study examining associations between use and baseline activity levels. From an equity perspective, it may also be important to examine the socio-demographic predictors of use, and so evaluate whether the infrastructure meets the needs of all groups (Marmot, 2010; NICE, 2008, 2012). In addition to identifying who uses new infrastructure, it is also useful to examine what it is used for because this may affect its health and environmental impacts. 
For example, cycling is typically a higher intensity activity than walking and so may have a greater effect upon physical fitness (Yang et al., 2010). Similarly, transport trips may confer greater environmental benefits than recreational trips, because active travel seems to substitute for motor vehicle use whereas recreational walking may involve it (Goodman et al., 2012). Finally, whereas most previous longitudinal studies included only a single follow-up wave (Ogilvie et al., 2007; Yang et al., 2010), comparing results across multiple waves may provide insights into changing patterns of use or a changing profile of users. This may be important for understanding effects beyond the immediate post-intervention period: for example, although early adopters may be those who are already physically active, social modeling may subsequently encourage use by more inactive individuals (Ogilvie et al., 2011). This paper therefore aims to examine and compare patterns of using high-quality, traffic-free walking and cycling routes over one- and two-year follow-up periods. Specifically, we examine the journey purposes for which the infrastructure was used and the modes by which it was used. We also examine the individual and household predictors of use. Intervention, study sites and sample Led by the sustainable transport charity Sustrans, the Connect2 initiative is building or improving walking and cycling routes at multiple sites across the United Kingdom (map in Supplementary material). Each Connect2 site comprises one flagship engineering project (the 'core' project) plus improvements to feeder routes (the 'greater' project). These projects are tailored to individual sites but all embody a desire to create new routes for "everyday, local journeys by foot or by bike" (Sustrans, 2010). The independent iConnect research consortium (www.iconnect.ac.uk) was established to evaluate the travel, physical activity and carbon impacts of Connect2 (Ogilvie et al., 2011). As previously described in detail, three Connect2 projects were selected for detailed study according to criteria including implementation timetable, likelihood of measurable population impact and heterogeneity of the overall mix of sites. These study sites were: Cardiff, where a traffic-free bridge was built over Cardiff Bay; Kenilworth, where a traffic-free bridge was built over a busy trunk road; and Southampton, where an informal riverside footpath was turned into a boardwalk. None of these projects had been implemented during the baseline survey in April 2010. At one-year follow-up, most feeder routes had been upgraded and the core projects had opened in Southampton and Cardiff in July 2010. At two-year follow-up, almost all feeder routes were complete and the core Kenilworth project had opened in September 2011. Fig. 1 illustrates the traffic-free bridge built in Cardiff (the 'core' project in this setting) plus the feeder routes implemented in 2010 and 2011 (the 'greater' network). The baseline survey used the edited electoral register to select 22,500 adults living within 5 km road network distance of the core Connect2 projects. In April 2010 potential participants were posted a survey pack, which 3516 individuals returned. These 3516 individuals were posted follow-up surveys in April 2011 and 2012; 1885 responded in 2011 and 1548 in 2012. 
After excluding individuals who had moved house, the one-year follow-up study population comprised 1849 participants (53% retention rate, 8% of the population originally approached) and the two-year study population comprised 1510 (43% retention, 7% of the original population). The University of Southampton Research Ethics Committee granted ethical approval (CEE200809-15). Table 1 presents the baseline characteristics examined as predictors of Connect2 use. Past-week walking and cycling for transport were measured using a seven-day recall instrument (Goodman et al., 2012; Ogilvie et al., 2012) while past-week recreational walking and cycling were measured by adapting the short form of the International Physical Activity Questionnaire (Craig et al., 2003). Most other predictors were similarly self-reported, including height and weight from which we calculated body mass index (categorised as normal/overweight/obese). The only exception was the distance from the participant's home to the nearest access point to a completed section of the greater Connect2 infrastructure (calculated separately in 2011 and 2012 to reflect ongoing upgrades: Fig. 1). This was calculated in ArcGIS 9 using the Ordnance Survey's Integrated Transport Network and Urban Path layers, which include the road network plus traffic-free or informal paths. For ease of interpretation, we reverse-coded distance from the intervention to generate a measure of proximity, i.e. treating those living within 1 km as having higher proximity than those living over 4 km away (Table 1). Awareness and use of Connect2 At follow-up, participants were given a description of their local Connect2 project and asked "Had you heard of the [Connect2 infrastructure] before completing this survey?" (yes/no) and "Do you use the [Connect2 infrastructure]?" (yes/no). Participants reporting using Connect2 were then asked whether they (a) walked or (b) cycled on Connect2 for six journey purposes (commuting for work, travel for education, travel in the course of business, shopping or personal business, travel for social or leisure activities, and recreation, health or fitness). Statistical analyses We examined the predictors of (i) Connect2 awareness and (ii) Connect2 use using Poisson regression with robust standard errors (Zou, 2004). We initially adjusted analyses only for age, sex and study site, and then proceeded to multivariable analyses. Missing data across explanatory and outcome variables ranged from 0 to 8.1% per variable, and were imputed using multiple imputation by chained equations under an assumption of missing at random. To allow for potential correlations between participants living in the same neighbourhood, robust standard errors were used clustered by Lower Super Output Area (average population 1500). Statistical analyses were conducted in 2012-2013 using Stata 11. Characteristics of study participants Comparisons with local authority and national data suggested that participants included fewer young adults than the general population (e.g. 7% in the two-year sample vs. 26% of adults locally) and were also somewhat healthier, better-educated and less likely to have children. Otherwise the study population appeared to be broadly representative in its demographic, socio-economic, travel and activity-related characteristics (see Supplementary material). Retention at follow-up did not differ with respect to proximity to the intervention or baseline levels of walking and cycling (see Supplementary material). 
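For readers unfamiliar with the "modified Poisson" approach described in the statistical analyses above (a Poisson model for a binary outcome with robust standard errors, reported as rate ratios), the sketch below shows the general idea in Python with statsmodels, including clustering by Lower Super Output Area. The published analysis was run in Stata 11; the file name and variable names ("used_connect2", "lsoa", etc.) are illustrative assumptions, and the multiple-imputation step is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("iconnect_followup.csv")  # hypothetical analysis file

model = smf.glm(
    "used_connect2 ~ age_group + sex + site + proximity + baseline_walk_cycle",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="cluster", cov_kwds={"groups": df["lsoa"]})  # robust, clustered SEs

rate_ratios = np.exp(model.params)   # exponentiated coefficients = rate ratios
conf_ints = np.exp(model.conf_int()) # 95% confidence intervals on the same scale
print(pd.concat([rate_ratios, conf_ints], axis=1))
```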
Characteristics of study participants
Comparisons with local authority and national data suggested that participants included fewer young adults than the general population (e.g. 7% in the two-year sample vs. 26% of adults locally) and were also somewhat healthier, better-educated and less likely to have children. Otherwise the study population appeared to be broadly representative in its demographic, socio-economic, travel and activity-related characteristics (see Supplementary material). Retention at follow-up did not differ with respect to proximity to the intervention or baseline levels of walking and cycling (see Supplementary material). The one- and two-year study samples had very similar characteristics (Table 1), and all findings were unchanged in sensitivity analyses restricted to those who provided data at both time points. [Table 1 note: numbers add to fewer than the total number of participants for some variables due to missing data. Chi-squared tests provided no evidence (all p > 0.16) of a difference in the distribution of characteristics between the one- and two-year samples, except that the age distribution was (unsurprisingly) slightly older at two-year follow-up.]

Awareness and use of Connect2
Awareness and use of Connect2 were fairly high at one-year follow-up, with 32% reporting using Connect2 and a further 32% having heard of it. At two-year follow-up these proportions had risen slightly to 38% and 35%. Among those taking part in both follow-up waves, the correlation between use at one and two years was 0.62, with (for example) 82% of those who used it at one year also reporting using it at two years (Table 2 and Supplementary material). Correlations for specific types of use were generally also fairly high, ranging from 0.35 to 0.76. The average number of types of Connect2 use reported by users was 1.96 at one-year follow-up and 1.97 at two-year follow-up.

In both follow-up waves, walking for recreation was by far the most commonly reported type of Connect2 use, followed by cycling for recreation, walking for transport and cycling for transport (Table 2). This predominant use of Connect2 for walking was unsurprising given that walking was much more common in general than cycling among our participants. If anything, use of Connect2 for cycling was more common than might have been expected from baseline measures of past-week cycling. For example, at baseline around five times more participants reported doing any walking in the past week than reported any cycling (83% vs. 16%), whereas at follow-up 'only' around twice as many reported walking on Connect2 as reported cycling. In contrast, the dominance of recreational use of Connect2 could not be explained in this way, as baseline levels of walking or cycling were similar across recreation and transport purposes, with 65% vs. 66% reporting any in the past week. Among those who used Connect2 for transport, the most frequently reported journey purposes were social and leisure trips, followed by shopping and personal business. Only 8% of Connect2 users (11% of users who were in employment) reported using Connect2 for work or business at one-year follow-up, and 9% (13% of those in employment) at two years.

Table 3 shows the predictors of using Connect2 for any purpose. In general, the associations at one- and two-year follow-up were very similar. Use was highest in Cardiff and lowest in Southampton (Table 3). The other strongest predictors were living closer to Connect2 and higher baseline walking and cycling. These variables both showed dose-response associations of a very similar magnitude at one and two years, and were also associated with awareness of Connect2 and with the various different modes and purposes of Connect2 use (Fig. 2). With respect to baseline walking and cycling, these associations were highly mode- and purpose-specific: when past-week walking and cycling for transport and recreation were entered as four separate variables, the baseline behaviour in question was almost always the strongest predictor and was usually the only significant predictor (e.g.
past-week walking for transport specifically predicted walking for transport on Connect2: see Supplementary material). All findings were very similar in sensitivity analyses using proximity to the core rather than to the greater Connect2 project.

Predictors of awareness and use of Connect2
Other strong, independent predictors of Connect2 use were non-student status and household bicycle access, although the latter association was attenuated somewhat after adjusting for baseline walking and cycling. Higher income and education also predicted Connect2 use at both follow-up waves in minimally-adjusted analyses, although only one of these was ever significant in adjusted analyses. Older age (>65 years), obesity and poorer health all predicted lower Connect2 use in minimally-adjusted analyses. However, these associations were generally attenuated to the null after adjusting for other characteristics, particularly baseline walking and cycling, and/or were not replicated across follow-up waves. The associations were generally similar across the three study sites, with most variables showing no consistent evidence (p < 0.1) of interaction across the two time points. The only exception was consistent, weak evidence (0.02 ≤ p ≤ 0.03 for interaction) that men were more likely to use Connect2 in Southampton but not in the other two sites (e.g. rate ratio 1.44 (95% CI 1.03, 2.02) for men vs. women in Southampton in 2012, versus point estimates of 1.03 in Cardiff and 0.97 in Kenilworth). The Supplementary material presents the predictors of using Connect2 for walking and cycling for transport and recreation, modelled as four separate outcomes. The findings were generally similar to those presented in Table 3, except that bicycle access and, to a lesser extent, higher education were more strongly associated with using Connect2 for cycling than for walking.

Sustained usage, but dominance of recreational trip purposes
The stated aim of Connect2 was to serve local populations and provide new routes for everyday journeys (Sustrans, 2010). Some success is indicated by the fact that a third of participants reported using Connect2 and a further third had heard of it, with higher awareness and use among residents living closer to the projects. The slight increase in awareness and use by two-year follow-up suggests that these findings do not simply reflect temporary publicity surrounding the Connect2 opening or a novelty effect of wanting to 'try it out' once. Yet despite Connect2's emphasis on "connecting places", we replicated previous research on American trails (Price et al., 2012, 2013) in finding that many more participants used Connect2 for recreational than for transport purposes. This did not simply reflect lower total walking and cycling for transport among participants, nor does the built environment appear to matter less for transport than for recreation in general (McCormack and Shiell, 2011; Owen et al., 2004). Instead the dominance of recreational uses may reflect the fact that these Connect2 projects did not constitute the comprehensive network-wide improvements that may be necessary to trigger substantial modal shift (NICE, 2008). In other words, although Connect2 provided all local residents with new (and apparently well-used) locations for recreation, it may not have provided most residents with practical new routes to the particular destinations they needed to reach.
This interpretation is consistent with the observation that among those who did use Connect2 for transport, many more reported making shopping and leisure trips than commuting or business trips; the former may typically afford more opportunity to choose between alternative destinations than the latter.

Broad socio-demographic appeal, but higher use in more advantaged and active groups
Connect2 seemed to have a broad demographic appeal, with relatively little variation in use by age, gender, ethnicity or household composition. [Table 2 notes: Results based on 1826 British adults from the one-year sample in 2011 (excluding 1.2% with missing data), and 1490 from the two-year sample in 2012 (excluding 1.3% with missing data). † Pearson correlation between reporting behaviour at one and two years, among those taking part at both time points (N = 1235). The Supplementary material includes the numbers of individuals underlying these percentages and correlations.] Higher education or income did, however, independently predict Connect2 use, a finding consistent with one (Brownson et al., 2000) but not all (Brownson et al., 2004; Merom et al., 2003) previous studies. This association was particularly strong for cycling, suggesting that, at least in this setting and in the short term, Connect2 may not reduce the existing or emerging socio-economic gradients in cycling in the UK (Goodman, in press; Marmot, 2010). Connect2 use was strongly predicted by higher pre-intervention levels of walking and cycling, an association which showed a marked specificity by mode and purpose. This suggests that many users may have changed where they walked or cycled without changing what they were doing. Such displacement would be consistent with previous studies reporting that most users of new off-road 'trails' had been walking or cycling prior to their construction (Burbidge and Goulias, 2009; Gordon et al., 2004). Our evaluation builds on those studies by showing the effect was stable over two years, with no suggestion that previously less active individuals formed a higher proportion of users over time. It is possible that attracting less active individuals may require larger infrastructure changes (e.g. network-wide improvements) or more time (e.g. with improved infrastructure being necessary but not sufficient, and with behaviour change being triggered by subsequent individual life events) (Christensen et al., 2012; Giles-Corti and Donovan, 2002; Jones and Ogilvie, 2012). On the other hand, even among the least active individuals the proportion using Connect2 was not trivial (e.g. 17-19% among those reporting no past-week activity at baseline), indicating some potential for such infrastructure to appeal to users of all activity levels.

Strengths and limitations
Strengths of this study include its cohort design and population-based sampling, which allowed us to address novel substantive questions such as who used the new infrastructure. Nevertheless, there are also some key limitations. One is the potential for selection bias: given the low response rate, the study population cannot be assumed to be representative. Yet although on average older than the general population, participants generally appeared fairly similar in their demographic, socio-economic and travel-related characteristics; and retention at follow-up was not predicted by proximity to the intervention or baseline physical activity, the two strongest predictors of infrastructure use. [Table 3 notes: *p < 0.05, **p < 0.01, ***p < 0.001 for heterogeneity. CI = confidence interval; km = kilometers; RR = relative risk. Minimally-adjusted analyses adjusted for site, sex and age only; multivariable analyses adjusted for all variables in the column. Intermediate multivariable models (e.g. including only demographic and socio-economic characteristics) available on request from the authors. Results based on 1849 British adults participating in 2010 and 2011, and 1510 participating in 2010 and 2012.]
A second important limitation is that, for each mode and purpose, we measured only whether each participant used Connect2, not the frequency of use. It is plausible that frequent and habitual transport journeys such as commuting form a higher proportion of Connect2 trips than the 7% of Connect2 users who reported using the infrastructure to travel to work. This would be consistent with a previous intercept survey on the traffic-free routes making up the National Cycle Network, which found a more equal balance of trips made for transport (43%) and trips made for recreation (57%) (Lawlor et al., 2003).

Conclusions
At one- and two-year follow-up, Connect2 infrastructure was well-used and therefore has the potential to encourage environmentally sustainable physical activity in local communities. Thus far, however, its users have tended to be more physically active and socio-economically advantaged residents, which may limit its impacts on overall population health and health equity. We therefore intend to examine in future analyses the extent to which these relatively high levels of infrastructure use translate into overall increases in walking, cycling and physical activity, and into overall decreases in motorised travel and associated carbon emissions. We also intend to examine which particular changes in the Connect2 routes encourage use. This will involve integrating additional quantitative and qualitative research conducted within the broader iConnect program, and will capitalize on the observed heterogeneity between study sites in intervention characteristics and in levels of use. Through close attention to mechanisms and contexts, we hope to examine not only whether environmental interventions like Connect2 'work', but also why they do or do not work, for whom and in what circumstances (Ogilvie et al., 2011).
Draft Genome Sequence of Strain R_RK_3, an Iron-Depositing Isolate of the Genus Rhodomicrobium, Isolated from a Dewatering Well of an Opencast Mine

ABSTRACT
Rhodomicrobium sp. strain R_RK_3 is an iron-depositing bacterium for which we report the draft genome. This strain was isolated from ochrous depositions of a mining well pump in Germany. The Illumina NextSeq technique was used to sequence the genome of the strain.

A 16S rRNA gene sequence comparison of strain R_RK_3 revealed a 94% similarity to Rhodomicrobium vannielii ATCC 17100 (GenBank accession no. CP002292). The search was conducted using BLASTn (1) and EZBioCloud (2). However, determination of the most abundant taxon for the genome bin based on weighted scaffold length revealed 27.0% similarity to Bradyrhizobium icense. The family Hyphomicrobiaceae contains the three phototrophic genera Rhodoplanes (3), Blastochloris (4), and Rhodomicrobium (5). Within the genus Rhodomicrobium, R. vannielii (5) and R. udaipurense (6) are currently the only recognized species. Draft genomes have been published for both of these species (7, 8). For Rhodomicrobium vannielii, the oxidation of ferrous iron has been described as a side reaction (9). The current work presents the third draft genome of a Rhodomicrobium strain, isolated from a novel habitat, i.e., the pump of a dewatering well that also showed iron-depositing activity. Strain R_RK_3 was isolated from ochrous deposits from a dewatering well pump at the Hambach opencast mine, in the Rhenish lignite-mining area. Serial dilutions of sampled ochrous material were spread on modified Leptothrix medium (10) (deionized water was replaced by sterile well water), and reddish-brown colonies were selected after incubation for 17 to 21 days at room temperature. The iron- and manganese-depositing activity of the isolate was verified by dissolving the colonies with oxalic acid (10%) as previously described by Schmidt et al. (11). Strain R_RK_3 is a small Gram-negative bacterium with a coccus shape of 2- to 4-µm length and a width of 1.0 µm. Extraction of genomic DNA was done as previously described (12). The paired-end library was prepared by following the Illumina Nextera XT DNA library prep kit protocol. Genome sequencing was done on an Illumina NextSeq 500 sequencer using the NextSeq mid output kit v2 (300 cycles), generating 74,681,576 raw reads. Demultiplexing was done with bcl2fastq v2.18.0.12, and quality filtering of raw reads was performed using Trimmomatic v0.36 (13), resulting in 48,202,264 filtered reads. Reads were then checked for ambiguous base calls and low complexity, employing the DUST algorithm (14), and filtered accordingly with an R script in Microsoft R Open v3.3.2 (15), followed by preassembly with SPAdes v3.10.0 (16) using default k-mer lengths up to 99 bp. Scaffolds ≥500 bp of this preassembly were subject to extension and second-round scaffolding with SSPACE standard v3.0 (17). The draft genome included 39 contigs with an assembly N50 of 226,044 and an L50 of 10. The shortest scaffold was 2,822 bp, and the longest scaffold was 401,427 bp. The total size of the scaffolds was 5,786,497 bp with a G+C content of 65%. Annotation resulted in 39 contigs including 5,135 coding regions for 5,205 genes, 587 signal peptides, no clustered regularly interspaced short palindromic repeat (CRISPR) unit, 4 rRNAs (16S, 23S), 48 tRNAs, 1 transfer-messenger RNA (tmRNA), and 17 miscellaneous RNAs.

Accession number(s).
This whole-genome shotgun project has been deposited at DDBJ/ENA/GenBank under the accession no. MWLK00000000. The version described in this paper is version MWLK01000000.

ACKNOWLEDGMENTS
Bioinformatic consulting was provided by omics2view.consulting GbR, Kiel, Germany. This work was supported by a grant of the "Bundesministerium für Bildung und Forschung (BMBF)," 02WT1184.
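As a side note on the assembly statistics reported above, N50 and L50 can be computed from the scaffold lengths alone. The sketch below uses an invented length list (it does not reproduce the 39-scaffold R_RK_3 assembly) purely to illustrate the definition.

```python
# Illustrative sketch: N50 is the scaffold length at which the cumulative sum of
# sorted (descending) lengths reaches half the total assembly size; L50 is the
# number of scaffolds needed to reach that point.
def n50_l50(lengths):
    """Return (N50, L50) for a list of contig/scaffold lengths in bp."""
    lengths = sorted(lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for count, length in enumerate(lengths, start=1):
        running += length
        if running >= half_total:
            return length, count
    raise ValueError("empty length list")

example_lengths = [401_427, 226_044, 150_000, 90_000, 40_000, 2_822]  # made-up values
print(n50_l50(example_lengths))
```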
Association of Substance Use with Immunological Response to Antiretroviral Therapy in HIV-Positive Patients from Southwest Ethiopia: A Prospective Observational Study Background Use of psychoactive substances by HIV-positive patients in the course of antiretroviral drug treatment has become a public health problem globally. Substance use (alcohol, nicotine, and khat) during the course of treatment results in interactions with drugs that lead to undesired treatment outcomes. This condition is understudied, and the consequences of substance use among patients on antiretroviral treatment are not well explored. Methods A prospective observational study was conducted among people on antiretroviral therapy (ART) at Jimma University Medical Center in southwest Ethiopia from April 20 to November 27, 2019. Data were collected using the World Health Organization’s alcohol, smoking, and substance involvement screening test among adults who have followed antiretroviral therapy for a minimum of 6 months. Logistic regression analysis was done to identify factors associated with immunological response. The inadequate immunological response was defined as patients who were unable to achieve or maintain a CD4 cell count of >350 cells/mm³ after the 6-months of follow-up. Results Of the 332 patients enrolled, a majority (64.2%) of the respondents were females. The mean (±SD) age of the patients was 38.5 ± 9.5 years. The proportion of participants with a high level of health risk due to alcohol use was 8.4%, while 63.8% of them were non-alcohol users with no health risk. In multivariable logistic regression analysis, moderate and high levels of health risks from alcohol use were significantly associated with increased odds of inadequate immunological response (AOR: 2.9; 95% CI, 1.1–7.4) and (AOR: 4.3; 95% CI, 1.2–14.8), respectively, but the level of health risk from khat and cigarette use showed no association with inadequate immunological response in this study. Conclusion Moderate and high levels of health risk from alcohol use were independently associated with inadequate immunological response. People living with HIV/AIDS should regularly be screened for and be educated about substance use and its potential negative impact on CD4 cell recovery. for PLWHA. An outpatient treatment service for HIV was established in 2005. Since 2017 antiretroviral therapy is initiated for all clients tested HIV positive regardless of their CD4 level (with the principle of test and treat). A total of 5518 people with HIV/AIDS were registered at the JUMC ART treatment center. Since 2019, about 3217 patients are on ART and 2953 of them are adults aged ≥18 years. Study Design A hospital-based prospective observational study was used. Study Subject The study subjects were adult PLWHA who have followed treatment for a minimum of 6 months at JUMC. Severely ill patients who required emergency medical help were excluded. Sample Size and Sampling Technique The sample size for this study was determined for both Inadequate immunological response (CD4 ≤350 cells/mm 3 ) among adult RVI patients on ART and substance use among RVI patients. To obtain the maximum sample size for our study. 
We used the maximum of the two calculated sample sizes. The first was based on the single population proportion formula, n = z²p(1−p)/d², with a 95% confidence interval and the proportion (p) taken as 27% for poor treatment outcome (CD4 ≤350 cells/mm³) among adult RVI patients on ART, as reported by Stefanie Kroeze in sub-Saharan Africa, 2018, 2 which gives the maximum sample size among prospective studies with an outcome measure similar to ours.

Where: z = standard score corresponding to a 95% CI = 1.96; p = proportion of poor treatment outcome among RVI patients on HAART = 0.27; d = margin of error/precision = 5%.

According to the smart care database of the ART clinic, the source population was 2953, which is less than 10,000, so the finite population correction formula was used: n/(1 + n/N) = 303/(1 + 303/2953) ≈ 276; then, when adjusted for 27.6 (10%) non-response, the sample size was found to be 304. Also, using the single population proportion formula with a 95% confidence interval and a proportion (p) of 32.6% for alcohol use among PLWHA, reported by Matiwos Soboka at JUMC in 2014, 29 which gives the maximum sample size among the studies conducted on substance use among RVI patients on ART, the sample size was calculated and found to be 332. Therefore, the larger of the two calculated sample sizes (332) was considered the final sample size for this study (a worked version of this calculation is sketched below). A consecutive sampling technique was used. During the participant recruitment period (April 20 to May 19, 2019), all eligible adult clients of the ART clinic were invited and enrolled consecutively until the calculated sample size was obtained.

Study Variables
Outcome/Dependent Variable
Immunological response (CD4 cell count).

Main Exposure/Independent Variable
Substance use-related factors: alcohol use health risk level, cigarette use health risk level, khat use health risk level.

Treatment-Related Factors
Baseline ART adherence, current adherence, treatment duration in months, initial regimen, current regimen, ever switched regimen, single tablet regimen, co-medications.

Exposure Variable
The level of health risk from substance use was measured using the WHO alcohol, smoking, and substance involvement screening test (ASSIST), 38 validated for use in primary care settings. 39 With the recommended scoring from 0 to 27+, scores in the mid-range on the ASSIST are likely to indicate hazardous or harmful substance use ("moderate risk") and higher scores are likely to indicate substance dependence ("high risk"). Questions particularly associated with dependent or "high risk" use are a compulsion to use (question 3), failed attempts to cut down (question 7), and injecting behaviour (question 8).

Level of health risk, alcohol use: 0-10 low, 11-26 moderate, and 27+ high risk of health and other problems from the current pattern of use.
Level of health risk, tobacco use: 0-3 low, 4-26 moderate, and 27+ high risk of health and other problems from the current pattern of use.
Level of health risk, khat use: 0-3 low, 4-26 moderate, and 27+ high risk of health and other problems from the current pattern of use.
If the patient had never used khat, alcohol, or cigarettes in his/her lifetime, he/she was considered to have no health risk for the substance never used. 38

A pre-tested structured questionnaire, adapted from a review of related literature and covering patient, clinical, and treatment-related factors, was applied to all participants. All participants voluntarily participated and signed informed consent forms before enrollment in the study.
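Returning to the sample size determination above, the calculation can be reproduced approximately with a few lines of arithmetic. This is an illustrative sketch, not the authors' original computation; small differences from the reported 276, 304, and 332 arise from rounding at intermediate steps in the original.

```python
# Single population proportion formula with finite population correction and a
# 10% non-response inflation, using the parameters stated in the text.
from math import ceil

z, d, N = 1.96, 0.05, 2953               # 95% CI, 5% precision, source population

def sample_size(p):
    n0 = (z ** 2) * p * (1 - p) / d ** 2  # infinite-population estimate
    n_fpc = n0 / (1 + n0 / N)             # finite population correction
    return n0, n_fpc, n_fpc * 1.10        # inflate by 10% for non-response

for p in (0.27, 0.326):                   # poor outcome vs. alcohol use proportions
    n0, n_fpc, n_final = sample_size(p)
    print(f"p={p}: n0={ceil(n0)}, corrected={ceil(n_fpc)}, with non-response={ceil(n_final)}")

# For p = 0.27 this gives roughly 303 -> 275 -> 303, close to the 276 and 304
# reported above; the published final figure of 332 for p = 0.326 likewise
# reflects the authors' own intermediate rounding choices.
```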
A face-to-face interview was conducted by five data collectors specifically selected from the ART clinic. Data were coded by assigning a three-digit unique identification number, but together with this, a five-digit unique ART clinic card number was also used as a supportive identification tool. Information regarding patient and substance use-related factors was obtained from an interview during the participant recruitment period (April 20 to May 19). Applying the recommended cutoff points, the score obtained from ASSIST was grouped into four predefined health risk groups no risk, low, moderate, and high. Outcome Variable Immunological Response Immunological response was measured after six months of the follow-up period. For all the patients included in the analysis, CD4 cell counts were done during the follow-up period from May 20 to November 27, 2019, and taken as the measure of immunological response. Adequate Immunological Response Patients who achieved or maintained a CD4 cell count of >350 cells/mm³ during the 6-months of follow-up. Inadequate Immunological Response Patients who were unable to achieve or maintain a CD4 cell count of >350 cells/mm³ during the 6 months of follow-up. Data on baseline ART history were collected from documented medical records. The most recently documented CD4 cell count was taken as baseline CD4 cell count for this study, baseline CD4 cell count measurements were conducted within the year of the study (mean of 3 months). The immunological response was measured every six months with BD FACSpresto TM cartridge by the Hospitals antiretroviral therapy clinic laboratory unit with a staff of BSc in laboratory science and MSc in immunology, and samples were analyzed by using BD FACSPresto™ cartridges with finger-stick samples. The fingertip is punctured with the lancet, and then the first drop of blood is wiped. The second drop of blood is added into the inlet port of the cartridge. The finger is cleaned, the cartridge is capped, and a bandage is applied to the finger. Baseline plasma HIV-1 RNA load was measured by COBAS ® AmpliPrep. Adherence Adherence = Number of doses of HAART taken ÷ Number of prescribed doses of HAART × 100% Good adherence, >95%, fair adherence, 85-95%, and poor adherence, <85% doses taken. 13,41,42 From the HIV care ART follow-up form, the documented adherence of participants from their most recently documented CD4 cell count was measured up to the date of enrolment of participants and was taken as baseline adherence. During the follow-up period, we followed the participants till their CD4 cell counts were conducted. We registered adherence at each follow-up visit. Then, for analysis purposes, we classified adherence as (Good adherence if >95% doses were taken at every follow-up visit, Poor if at least one ≤95% adherence was encountered at the follow-up visit). 43 New/recurrent opportunistic infections were considered if the patient was initiated on treatment for that opportunistic infection. Data Processing and Analysis Procedures The collected data were entered into Epi-data version 3.1 then cleaned and exported to SPSS version 21 for analysis. Descriptive statistics were used to show the distribution of frequency. The analysis was performed using bivariate and multivariate logistic regression. Independent variables with a P-value of less than 0.25 in bivariate analysis were included in backward step regression. 
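As a compact summary of the adherence and ASSIST cut-offs defined above, the following sketch encodes them as simple classification rules. It is illustrative only: the function names are not from the study's codebook, and it covers only participants who ever used a given substance (never-users were classed as having no health risk).

```python
# Classification rules stated in the text: adherence bands (>95% good, 85-95% fair,
# <85% poor) and ASSIST risk bands (alcohol 0-10/11-26/27+, tobacco and khat 0-3/4-26/27+).
def adherence_percent(doses_taken, doses_prescribed):
    return 100.0 * doses_taken / doses_prescribed

def adherence_band(pct):
    if pct > 95:
        return "good"
    if pct >= 85:
        return "fair"
    return "poor"

def assist_band(score, substance):
    low_cut = 10 if substance == "alcohol" else 3   # tobacco and khat use the 0-3 low band
    if score <= low_cut:
        return "low"
    return "moderate" if score <= 26 else "high"

print(adherence_band(adherence_percent(57, 60)))   # 95.0% -> "fair"
print(assist_band(8, "alcohol"))                   # -> "low"
print(assist_band(8, "khat"))                      # -> "moderate"
```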
The overall statistical significance of the model was reported by adjusted odds ratios (AOR) with its corresponding 95% confidence interval, a p-value <0.05 was considered statistically significant. Immunological Response Out of 332 clients, all had their CD4+ cell count measured during the observation period. From which assessment of immunological response revealed 261 (78.6%) had CD 4 cell count of >350cells/m³ and 71 (21.4%) showed CD 4 cell count ≤350 cells/mm 3 . Association of Substance Use and Other Variables with Immunological Response In bivariate analysis, CD 4 cell count ≤350 cells/mm 3 Table 4). Discussion Identifying and managing substance use is a challenge to a treatment program. Most treatment failure occurs due to the difficulty of delivering quality care. In this study, we found the extent of inadequate immunological response among PLWHA as 21.4% and an association of inadequate immunological response with moderate and high levels of health risk alcohol use. 8445 The inadequate immunological response rate in this study (21.4%) is lower than in other studies with a modest comparative difference observed: a cohort of sub-Saharan Africa (27%), 2 in Oman Sultan Qaboos University Hospital (27%), 4 a study throughout Australia (28%). 3 In our study, the inadequate immunological response was a heterogeneous group of both viral-suppressed and unsuppressed immunological non-respondents. This should have made the proportion of inadequate immunological response higher. However, our result showed a lower percentage than from studies that exclude virally unsuppressed non-respondents. These may be due to the longer treatment duration that our participants had been on ART before entry to the current study (mean of 74 months) compared to the above studies in Australia, Omani, and sub-Saharan Africa. However, this finding is not comparable with a retrospective analysis conducted in Tanzania (50.25%) with a cutoff point of 350cells/mm 3 . 5 This difference could be due to variations in socioeconomic class, the clinical status of clients, study design, and adequate patient care. Substance use and inadequate immunological response are tied together. Alcohol-mediated effects on immune function occur through a combination of behavioral effects (adherence) and its direct effect on the immune system, which results in chronic inflammation, and effect on the liver, which affects drug metabolism through enzyme induction;- 19,20,24 this may lead to increased HIV disease progression. In our study, the risk of inadequate immunological response in a moderate level of health risk alcohol users was around three times higher than those with no level of health risk. As in several previous studies, those with levels of health risk for alcohol use showed a high rate of inadequate immunological response than patients who are non-alcohol users. 19,[21][22][23] However, a prospective study from America reported that for patients with baseline CD4 of >200 cells/mm³ moderate alcohol use does not affect the rate of decline of CD4+ cells to ≤200 cells compared to abstainers but for highrisk alcohol use does have. The discordance to the findings of 2010 Baum et al could be due to the difference in the definition of the outcome measure they used, a cut point of CD4 ≤200 cells to measure the association. Also, another significant difference was in contrast to our study patients included in Baum et al were those having baseline CD4 cells of >200 cells/mm³. 
21 In this sample of people living with HIV/AIDS in Jimma, a high level of health risk for alcohol use was associated with more than four times the odds of having inadequate immunological response compared to non-drinkers (no health risk level). This finding is supported by multiple studies 19,22 and similar to a finding of a prospective study from America that reported PLWHA with frequent alcohol use had a significantly inadequate immunological response 2.91 times more likely to present a decline of CD4 to ≤200 cells/mm³. 21 This result could have occurred because of consistent alcohol use findings among patients on ART in the study area, as showed in 2014 hazardous drinking, harmful drinking, and alcohol dependence were found in a significant number of patients in the study area. 29 These findings imply that patients who present with alcohol use and CD 4 < 350 cells/mm³ need a much closer clinical follow-up to obtain a desirable immune recovery. Short-term interventions that aim to cut down alcohol use could be useful, but the high level of health risk alcohol users is more likely to be alcohol dependent and so require more extensive support. However, in this study, contrary to those who reported moderate and high health risk level alcohol use, those who reported low health risk level alcohol use did not have a significant association with inadequate immunological response compared to non-alcohol users (no health risk level). In this study, levels of health risk for cigarette use had a significant association with inadequate immunological response in univariate analysis but were not found to have a significant association in the multivariate model. This is possibly due to the small number of samples in the cigarette use group. Levels of health risk for khat use did not have an association with inadequate immunological response. Previous studies on khat mostly focus on its association with nonadherence, but it has not been previously shown that khat use is not associated with worse HIV immunological response. In our study, we found that poor current adherence was strongly associated with an inadequate immunological response where patients with poor current adherence were 6.646 times more likely to have inadequate immunological response than those with good current adherence. A retrospective follow-up study done in Tigray, Northern Ethiopia, also supported this finding, as clients with ≤95% adherence were 5.68 times more likely to have immunological failure compared to those having >95% adherence. This is mostly due to the indirect effect of poor adherence, which comes through an increase in HIV RNA replication. 12 Among the patients with inadequate immunological response, 36 (50.7%) of them are the levels of health risk for alcohol use. Ten percent of patients with levels of health risk alcohol use has poor adherence. Overall, it is likely that poor adherence has played a role in the immunological response of patients with substance use problems but that may be a partial proxy for antiretroviral therapy adherence. We have not used a model to examine the potentially mediating effect of adherence. This may not enable us to show the independent effects of adherence and level of health risk alcohol use on inadequate immunological response. Other factors that were significantly associated with inadequate immunological response, patients with baseline CD4 ≤350 cells/mm³ were more than five times more likely to have inadequate immunological response than patients who had CD4 >350 cells/mm³. 
A cohort study from Australia also supported this finding that 28% of patients with a CD4 cell count of <350 cells/μL showed an inadequate immunological response (CD4 cell <350 cells/μL) after 24 months of treatment. 3 The probable reason could be due to variation in the degree of immune suppression at treatment initiation or clinical status that patients with CD4 ≤350 cells/mm³ had. It was also observed that patients with hemoglobin <12 mg/dl were significantly associated with inadequate immunological response. Hence, the odds of inadequate immunological response were increased more than seven times among patients with hemoglobin <12 mg/dl. A study by Kowalska et al 14 also reported that the recently measured hemoglobin level was significantly associated with disease progression than baseline hemoglobin measured at the start of treatment. This implies that the recently measured hemoglobin level could be useful in identifying patients with a higher risk of short-term disease progression; this is easily done in our setup because the BD FACSPresto™ machine also measures total hemoglobin concentration on the same sample and provides all the results concurrently. Maintaining optimal hemoglobin levels helps to prevent the development of AIDS-defining illnesses. Patients with a viral load of ≥1000 copies/mL were more than nine times more likely to demonstrate inadequate immunological response compared to those <1000 copies/mL. This may be due to a shorter time interval in which the viral load was measured (within 3 months) or poor enrolment and current adherence, which a larger proportion of patients with viral load ≥1000 copies/mL had. In Gonder patients with viral load, ≥20 copies were 5 times more likely to have an immune failure. 13 A study in Ghana also supported this finding, patients with viral load ≥1000 copies/mL were 2 times more likely to have CD4 cells <350 cells/mm³. Besides, this finding is consistent with the scientific facts in that a higher HIV RNA results in more destruction of CD4 cells. Therefore, extra time may be needed with adherence support to obtain a CD4 cell count of >350cells/mm³. 6 A causal inference between substance use and inadequate immunological response cannot be established in this study and it may be difficult to generalize the finding to a larger population because of the small sample size, single study area, and short-term follow-up period used. Also, this result cannot be generalized for severely ill patients who required emergency medical help. However, this study could imply the need for further longitudinal studies in Ethiopia to strengthen these findings and their clinical significance. Social desirability bias could be a limitation as persons who use alcohol and other substances tend to under-report or deny their use when interviewed, and we used the self-report of adherence to antiretroviral therapy, which is likely to underestimate the level of non-adherence. Conclusion Around a quarter of the participants were with an inadequate immunological response. Participants with a moderate and high level of health risk for alcohol use were independently associated with inadequate immunological response. In an area where alcohol is widely consumed, defining a pattern of responsible use and developing community-based interventions are valuable healthcare priorities to be included as a comprehensive care package for people living with HIV/AIDS.
The Relationship Between Activity Daily Living Level And Quality Of Life Of Geriatric Patients In RSUP Dr. Mohammad Hoesin Palembang Background The quality of life of the elderly is influenced by several factors such as physical health, psychological health, and good social relationships. Physical health is related to daily living activities that a person does in his daily life including ambulation, eating, dressing, bathing, brushing teeth and making up. At old age, a person will experience a decrease in physical condition which will affect the value of fulfilling one's daily living activities. This study aims to analyze the relationship between the level of independence of daily living activities and the level of quality of life of geriatric patients. Methods: This study was an observational analytic study with a cross sectional design. The samples were geriatric patients in Dr. Mohammad Hoesin Palembang in September-October 2019. The sample of this study was 55 respondents. Results: From 55 Geriatric patients in Dr. Mohammad Hoesin Palembang found 60% male respondents and 40% female respondents, the 60-74 years age group as much as 74.5%, the 75-90 year age group as much as 25.5%, the 50-60 year age group as much as 26%. Chi-square test results showed a significant relationship between the level of Activity Daily Living and the level of quality of life of geriatric patients (p value = 0.000). The results of the odds ratio in geriatric patients state that the probability of improving the quality of life is 27.7 times greater in patients who have an independent daily living activity level than in patients who have a total dependent daily living activity level. In the analysis, it was also found that the level of daily living activity of geriatric patients is a protective factor for the quality of life of geriatric patients. (95% CI = 0.006-0.206). Conclusion: There is a significant relationship between the level of daily living activity and the quality of life of geriatric patients in , dr. Introduction Indonesia as a developing country is facing demographic changes. This demographic change leads to a decrease in the proportion of children under five and an increase in life expectancy for the elderly. According to several sources, this condition causes the proportion of elderly to be more than the proportion of children under five (Hidayati, 2018). Currently, worldwide the number of elderly people is estimated to be more than 629 million people and half of the world's 400 million elderly are in Asia (Data Information & Health RI, 2013). It is estimated that the number of elderly people in 2017 is 9.77%, in 2020 it is 11.34%, and in 2025 it is 13.4% of the total population of Indonesia (Ministry of Health, 2017). It is estimated that in 2000 to 2025, Indonesia will experience an increase in the elderly population by 414% or 4 times, which is one of the highest increases in the world 1 . Geriatric patients are elderly patients who are more than 60 years old and have multi-pathological characteristics, decreased physiological reserve, and are usually accompanied by functional disorders 2 . At old age, the body has decreased the body's function of several organ systems in the body. Physiological changes and the five senses experienced are generally increased blood pressure, skin that begins to wrinkle, decreased hearing, blurred vision, decreased sense of smell, and decreased physical strength 3 . 
Old age is also a process that moves slowly from individuals to withdraw from social roles or from social contexts. This situation will cause the interaction of elderly individuals to begin to decline both in terms of quality and quantity (Sudarman, 2008). With the decline in social roles, it will result in the individual experiencing disruption in fulfilling their daily needs, causing dependence on other people. Old age is also a process that moves slowly from individuals to withdraw from social roles or from social contexts. This situation will cause the interaction of elderly individuals to begin to decline both in terms of quality and quantity (Sudarman, 2008). With the decline in social roles, it will result in the individual experiencing disruption in fulfilling their daily needs, causing dependence on other people 4 . The dependency ratio for elderly people in Indonesia tends to increase. The results of the 2015 National Statistics Agency's Susenas data show that the ratio of elderly dependence in Indonesia is 13.28, meaning that every 100 people of productive age must support around 14 elderly people. The development of this dependency ratio has not experienced any significant changes since 2012 6 . An increase in the dependency ratio on the elderly will result in an increase in the burden on the family, community, and government, especially on special services such as health which will also cause a high social burden due to the increasing growth of the elderly 7 . One result of the increase in the number of elderly people is an increase in dependence which will reduce the level of independence of one's daily living activities 8 . Activity of Daily Living are activities that are usually carried out during a normal day; These activities include ambulation, eating, dressing, bathing, brushing teeth and making up with the aim of fulfilling / relating to his role as a person in the family and society. Conditions that result in a need for assistance in ADL can be acute, chronic, temporary, permanent or rehabilitative 9 . The ability of the elderly to do ADL will affect the quality of life of the elderly. Quality of life is the individual's perception of life in accordance with the cultural context and value system he adopts so that it is related to the expectations, goals, standards, and concerns of the individual 10 . There are 4 aspects of a person's quality of life, namely physical health, psychological health, social relationships, and environmental aspects 11 . The quality of life assessment instrument is broadly divided into 2 types, namely the general instrument (generic scale) which is used to generally assess functional abilities, disabilities, concerns that arise due to illness and a special instrument (specific scale) used to measure something. a particular disease, a particular population or a special function such as emotions. WHOQOL-BREF is an example of a general quality of life instrument (generic scale) which when compared with other general instruments the use of WHOQOL-BREF has been widely used for various chronic diseases and has been developed by several researchers 12 . Previous research conducted in Assam, India, showed that the relationship between daily living activities and the quality of life of the elderly was positive, that is, the better the daily living activity, the higher the quality of life of the elderly 13 . The high quality of life of the elderly is caused by good physical, psychological, environmental and social factors 14 . 
Research related to the level of independence of daily living activities and the level of quality of life has never been carried out in Palembang, especially in Dr. Mohammad Hoesin, so it is not known whether there is a significant relationship between the level of independence of daily living activities and the level of quality of life in geriatric patients in the Dr. Mohammad Hoesin Palembang. Seeing the increasing number of elderly people in Indonesia which continues to increase and the differences in economy, environment, and culture, it is necessary to do research in Indonesia, especially South Sumatra, considering that the specifically for geriatrics has just started operating. Therefore, this study will analyze the relationship between the level of daily living activity and the quality of life of geriatric patients in Dr. Mohammad Hoesin Palembang Method This type of research is an analytic study with a cross sectional design. This study studies the dynamics of the relationship between risk factors and effects, by means of an observation approach or data collection at once. Each research subject was observed only once and measurements were made of the character status or subject variables at the time of examination 15 . The purpose of this study was to determine the relationship between the level of daily living activity and the level of quality of life of geriatric patients in Dr. Mohammad Hoesin Palembang. The research took place from the time of taking the research sample data to processing the research results, from August 2019 to October 2019. The research was carried out at Dr. Mohammad Hoesin Palembang. The population in this study were all inpatients at Dr. Mohammad Hoesin Palembang for the period August -October 2019. The sampling technique used in this study was consecutive sampling; in this way, all patients who met the inclusion criteria were consecutively drawn until the minimum target sample was reached. The selected sample was adjusted according to the inclusion and exclusion criteria. If it does not meet the inclusion criteria or is included in the exclusion criteria, it will be sampled again. The inclusion criteria in this study were geriatric patients over 60 years of age who were admitted to Dr. Mohammad Hoesin Palembang. Willing to be research respondents and fill out informed consent. Able to communicate well. The exclusion criteria in this study were patients with communication or speech impairments, and patients with severe hearing loss. Patients with severe visual impairment Patients with psychological disorders. Acute patient who has decreased consciousness and shortness of breath. The dependent variable in this study was the level of quality of life for geriatric patients in Dr. Mohammad Hoesin Palembang. The independent variable in this study is the level of daily living activity of geriatric patients in Dr. Mohammad Hoesin Palembang. Processing and analysis were carried out using computer assistance through Microsoft Excel and SPSS programs. The data processing steps include checking the data that has been collected, data input using Excel, and editing the data if an error occurs when entering data. After the data is clean from errors, the data is analyzed using SPSS. Result Research on the relationship between daily living activity levels and the quality of life of geriatric patients was conducted from September 2019 to October 2019 at the , dr. Mohammad Hoesin Palembang using primary data. Data retrieval is taken gradually every day. 
The study, which used a cross-sectional research design, observed two variables, namely the level of daily living activity measured using the Katz Index and the level of quality of life measured using the WHOQOL-BREF questionnaire. Retrieval of patient data using consecutive sampling technique. The study population was inpatient geriatric patients at Dr. Mohammad Hoesin Palembang. Sampling was carried out by structured interviews using a questionnaire. Table 1 shows the sociodemographic characteristics of geriatric patients in Dr. Mohammad Hoesin Palembang. Of the total sample, the number of geriatric patients treated in the who were male was 33 respondents (60%), while the female respondents were 22 respondents (40%). Then regarding the age division, it was found that 41 respondents aged 60-74 years (74.5%) and those aged 75-90 years were 14 respondents (25.5%). Based on the last education level of the respondents who did not go to school, there were 3 respondents (5.5%), SD was 15 respondents (27.3%), SMP was 10 respondents (18.2%), and the respondents who attended SMA were the most number of respondents (49.1%). Katz Score Measurement Data Analysis In table 2, it can be seen that the number of patients who have an independent daily living activity level is 15 respondents (27.3%), partially dependent is 26 respondents (47.3%), and those who experience total dependence are 14 respondents (25.5%). ). It is known from the data, geriatric patients in the on average have a level of partial dependence independence. Bivariate Analysis Bivariate analysis was used to determine the relationship between the level of Activity Daily Living and the level of quality of life. Bivariate analysis was performed using the Chi-square test with SPSS 24.0 application. Analysis of the Relationship of Daily Living Activity and Quality of Life In table 5, data on the relationship between daily living activity level and quality of life of geriatric patients in Dr. Mohammad Hoesin Palembang. From this data, there were 11 respondents who experienced total dependence with a poor quality of life, 3 respondents who experienced moderate dependence with a poor quality of life, and 2 respondents who were independent with a poor quality of life. In addition, respondents who experienced total dependence with a good quality of life were 3 respondents, while those who experienced moderate dependence with a good quality of life were 23 respondents, and respondents who were independent with a good quality of life were 13 respondents. Based on the analysis, the value of p = 0.000 (p <α) is obtained, so the hypothesis is accepted, which means that statistically there is a significant relationship between the level of daily living activity and the level of quality of life of geriatric patients in Dr. Mohammad Hoesin Palembang. The probability of an increase in quality of life was 23.8 times greater in patients who had an independent daily living activity level than in patients who had a partially dependent daily living activity level, while the probability of increasing the quality of life was 27.7 times greater in patients who had a daily living activity level. independent compared to patients who have a level of daily living activity total dependence. In the analysis, the 95% CI results are 0.06-0.208, which means that the level of independent daily living activity is a protective factor against poor quality of life. Discussion The sample of this study was 55 people who had met the inclusion and exclusion criteria. 
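As a brief illustration of the Table 5 analysis described above, the chi-square statistic and unadjusted odds can be recomputed directly from the quoted counts. This is a sketch for orientation only; the crude odds ratios it produces will not necessarily match the figures reported from the SPSS analysis.

```python
# Illustrative recomputation from the Table 5 counts quoted above.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: total dependence, partial dependence, independent (Katz Index level)
# Columns: poor quality of life, good quality of life
table = np.array([[11,  3],
                  [ 3, 23],
                  [ 2, 13]])

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.5f}")   # p well below 0.05

# Odds of a good quality of life at each ADL level, and the crude odds ratio of
# "independent" relative to "total dependence".
odds_good = table[:, 1] / table[:, 0]
print("odds of good QoL (total, partial, independent):", odds_good.round(2))
print("crude OR, independent vs total dependence:", round(odds_good[2] / odds_good[0], 1))
```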
This research was conducted from September 2019 to October 2019 in dr. Mohammad Hoesin Palembang. Collecting data using consecutive sampling techniques; in this way, all patients who met the inclusion criteria were consecutively drawn until the minimum target sample was reached. The variables of this study were the level of daily living activity and the level of quality of life for geriatric patients in . From the data obtained, the level of daily living activity has a significant relationship (p <0.05) with the quality of life of geriatric patients in , which indicates that the level of daily living activity affects the quality of life of geriatric patients in . Activity Daily Living Based on the research results that have been described in table 3, it is known that from 55 respondents, the results of the level of independence of geriatric patients are mostly in the partial dependency category and the least total dependency category. This research is in line with previous research conducted by Pradhitya in 2014 with the percentage of elderly who are partially dependent (43%), independent (33%), and totally dependent (24%). This can be due to the characteristics of elderly respondents who are mostly aged 60-74 years (74.5%) where at this age the elderly can still do some daily activities but have started to show dependence due to a decrease in their ability to do daily activities as they get older 16 . Research conducted at the elderly, especially in geriatric patients, there are several changes in the musculoskeletal system, cardiovascular system, digestive system, respiratory system, and endocrine system which can affect the decline in physical function in the body. These changes will generally affect how individuals meet the needs of their daily activities 17 . As the age increases, the physical condition will decrease which can cause physical, psychological, and social functional disorders and disorders which in turn will lead to individual dependence on others 18 . This is also in accordance with research that there is a significant relationship between age and the level of independence in the elderly. One of the factors affecting the level of daily living activity is the level of education of the respondents which is quite good. In the results, the distribution of the highest education level of geriatric patients in was Senior High School (SMA) with 27 respondents (49.1%) 19 . Stating that education is the basic intellectual knowledge of a person, the higher the education, the greater one's ability to behave so that it will affect one's quality of life 20 . Quality of Life The distribution of the quality of life of the respondents shows an average distribution of 56,422 which indicates that the average quality of life of patients in the Dr. Mohammad Hoesin Palembang is good. This research is in accordance with research conducted by Indriyani in 2017 with a presentation of the elderly with a good quality of life of 53.7%. Table 4 shows the quality of life of the elderly in the physical domain. The physical domain of the elderly was experienced by 36 respondents (65.5%). The physical domain according to WHOQOL-BREF assesses the level of pain, medical therapy, fatigue, rest, activity, and work. This can be due to the fact that in old age, the individual will experience deterioration of the body's function which causes a person to experience dependence and will affect one's quality of life. 
Most of the respondents in this study aged 60-74 years (74.5%) have a good quality of physical life. In general, respondents aged 75-90 years will experience significant physical, psychosocial, and mental setbacks. Physical factors that function properly allow the elderly to achieve quality aging while poor physical factors will make a person lose the opportunity to actualize himself due to individual physical limitations 21 . The distribution of the psychological domain among respondents is mostly 39 respondents (70.9%) good. The assessment of the psychological dimensions in WHOQOL consists of 6 components, namely positive feelings, meaning of life, concentration, selfesteem, self-image, and negative feelings. Most of the respondents in this study had high school education. Education is also a factor that affects the psychological domain because in education it can form emotional intelligence where emotional intelligence will shape individual satisfaction and depression drive which will also affect the quality of individual life. The social domain of geriatric patients in Dr. Mohammad Hoesin Palembang is good, namely 32 respondents (58.2%) from a total of 55 respondents. The social domain in the WHOQOL-BREF questionnaire contains 3 questions about social relationships, sexual life, and support from others. In the elderly, the closest social environment is family. According to Sukitno (2011), the quality of life of the elderly is good when the family can carry out its function for the elderly as a support. Good family support, social support from a good living environment will improve the quality of life for the elderly 22 . Relationship between Activity Daily Living Level and Quality of Life The results of the analysis showed that there was a significant relationship between the level of daily living activity and the quality of life of geriatric patients in Dr. Mohammad Hoesin Palembang. The likelihood of an increase in quality of life was 23.8 times greater in patients who had an independent daily living activity level than in patients who had a partially dependent daily living activity level. Meanwhile, the possibility of an increase in quality of life was 27.7 times greater in patients who had an independent daily living activity level than in patients who had a total dependent daily living activity level. In the analysis, it was also found that the level of daily living activity of geriatric patients is a protective factor for the quality of life of geriatric patients. The quality of life is said to be good if the physical, psychological and social health are good. In this case, physical health is related to the basic daily living activities that a person does in everyday life so that Lisa who is in good physical condition will have a good level of daily living activity. Elderly with decreased physical condition allows them to depend on their surroundings in fulfilling their daily living activities and this allows them to experience a decrease in quality of life. The results of this study are consistent with research conducted on 96 elderly people, which states that the level of daily living activity can affect the quality of life of the elderly which is stated by the Spearman correlation value of 0.692 with a significance value (p-value) of 0.001 23 . Another study stated that the better the level of daily living activity of the elderly can improve the quality of life which is stated by the value of p = 0.000 with a sample size of 30 people 24 . 
This study also shows that there are elderly people who have an independent daily living activity level but a poor quality of life, and there are also elderly people who are heavily dependent but have a good quality of life. This condition shows that other factors affect the quality of life of the elderly. It has been explained that several factors influence the quality of life of the elderly, including age, physical factors, social and environmental factors, psychological factors, mental factors, and education level 25. Meanwhile, similar research was conducted on the factors that influence the quality of life of the elderly population in Lumajang Regency; that study concluded that the factors affecting the quality of life of the elderly population include physical factors, social factors, and the number of children 26. Research Limitations When conducting the interviews, the language spoken by the patient and the interviewer often differed, which made mutual understanding difficult. There were also difficulties because cognitive function in the geriatric patients was not assessed, so the interviewer could not judge whether a patient fully understood the questions. Conclusion From the research results regarding the relationship between the level of daily living activity and the level of quality of life of geriatric patients at Dr. Mohammad Hoesin Palembang in December 2019, the following conclusions can be drawn. Regarding the characteristics of the geriatric patients at Dr. Mohammad Hoesin Palembang, the largest groups were male, aged 60-74 years, and educated to senior high school level. The average daily living activity level of the geriatric patients was partial dependency, while the average quality of life of the geriatric patients was good. There is a significant relationship between the level of daily living activity and the level of quality of life of geriatric patients at Dr. Mohammad Hoesin Palembang.
2020-11-12T09:09:45.722Z
2020-10-03T00:00:00.000
{ "year": 2020, "sha1": "39b6833d1b11b15163c47ffa3e3621292cef2622", "oa_license": "CCBYNCSA", "oa_url": "https://www.jurnalkedokteranunsri.id/index.php/UnsriMedJ/article/download/226/105", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "47265f30805b363c60a4cc31a545f12e08aec162", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253514492
pes2o/s2orc
v3-fos-license
Effects of β-glucan with vitamin E supplementation on the growth performance, blood profiles, immune response, fecal microbiota, fecal score, and nutrient digestibility in weaning pigs Objective This study was conducted to evaluate effects of β-glucan with vitamin E supplementation on the growth performance, blood profiles, immune response, fecal microbiota, fecal score, and nutrient digestibility in weaning pigs. Methods A total of 200 weaning pigs with an average body weight (BW) of 7.64±0.741 kg were allotted to five treatment groups and were divided based on sex and initial BW in four replicates with ten pigs per pen in a randomized complete block design. The experimental diets included a corn-soybean meal-based basal diet with or without 0.1% or 0.2% β-glucan and 0.02% vitamin E. The pigs were fed the diets for 6 weeks. A total of 15 barrows were used to evaluate the nutrient digestibility by the total collection method. The BW and feed intake were measured at the end of each phase. Blood samples were collected at the end of each phase, and fecal samples were collected at the end of the experiment. Results The addition of β-glucan with vitamin E to weaning pig feed increased BW, average daily gain, and average daily feed intake. A significant decrease in yeast and mold and Proteobacteria and a tendency for Lactobacillus to increase compared to the control was shown when 0.1% β-glucan and 0.02% vitamin E were added. The fecal score in weaning pigs was lower in the treatments supplemented with 0.1% or 0.2% β-glucan and 0.02% vitamin E compared to the control. In addition, vitamin E was better supplied to weaning pigs by increasing the concentration of α-tocopherol in the blood of weaning pigs when 0.02% vitamin E was supplemented. However, there was no significant difference in either the immune response or nutrient digestibility. Conclusion Inclusion of 0.1% β-Glucan with 0.02% vitamin E in a weaning pig’s diet were beneficial to the growth performance of weaning pigs by improving intestinal microbiota and reducing the incidence of diarrhea. INTRODUCTION Antibiotics have been widely used in feed for weaning pigs to improve feed efficiency, promote growth, and reduce diseases. However, the European Union and Korea banned the use of antibiotics as feed additives for growth promotion in 2006 and 2011, respectively [1]. For this reason, research on various antibiotic substitutes, such as plant extracts, probiotics, and β-glucan (BG), has been actively conducted. β-Glucan is a complex carbohydrate extracted from mold, grains, and the cell walls of yeast. Glucans with β-1,3 and β-1,6 glycosidic bonds are major structural components of yeast and fungal cell walls, where these bonds play a role in disease defense and growth promotion [2]. β-Glucan can stimulate a series of pathways that activate the immune system and enhance both innate and adaptive immune responses [3]. β-Glucan has antitumor and antibacterial activities by enhancing host immune function [4]. It has a beneficial effect on the growth of weaning pigs because it induces a specific immune response and increases nonspecific immunity and tolerance to oral antigens as an immune modulator [5]. Vitamin E, which is a very important nutrient for pigs, is known to enhance immunity through antioxidant action. In particular, vitamin E (VE), which acts as an antioxidant at the cellular level, has a structural function and performs various functions related to reproduction [6]. 
In addition, VE improves the immune response to antigens by stimulating the production of lymphocytes [7]. Additionally, it is an important component of all membranes found in cells, including plasma, mitochondria, and nuclear membranes [8]. There are many previous studies on the effects of BG and VE individually on weaning pigs, but there is insufficient evidence to verify the synergistic effect of BG and VE on weaning pigs. Thus, it was hypothesized that the synergistic effects of BG and VE could improve immunity, leading to an increase in growth performance in weanling pigs. Therefore, this study was conducted to evaluate the effects of BG with VE on the growth performance, blood profiles, immune response, fecal microbiota, fecal score, and nutrient digestibility of weaning pigs. MATERIALS AND METHODS All experimental procedures involving animals were conducted in accordance with the Animal Experimental Guidelines provided by the Seoul National University Institutional Animal Care and Use Committee (SNUIACUC; SNU-200209-2) Experimental animals and housing environment A total of 200 weaning pigs ([Yorkshire×Landrace]×Duroc) with an initial body weight (BW) of 7.64±0.741 kg were allotted to one of five treatments considering sex and initial BW in four replicates with ten pigs per pen in a randomized complete block design. Pigs were randomly assigned to their respective treatments by the Experimental Animal Allotment Program (EAAP) [9]. Pigs were housed in an environmentally controlled facility. The pens had fully concrete floors (1.54×1.96 m 2 ). Feed and water were provided ad libitum through a feeder and a nipple during whole experimental periods. The temperature was kept at 30°C during the first 7 days and lowered 1°C every week. The experimental period was 6 weeks (phase I, 0 to 3 weeks; phase II, 3 to 6 weeks). Body weight and feed intake were measured at the end of each phase to calculate the average daily gain (ADG), average daily feed intake (ADFI), and gain:feed ratio (G:F ratio). In addition, feed given to all piglets was recorded each day, and feed waste in the feeder was recorded at the end of each phase. Experimental design and diet Dietary treatments included i) CON (corn-soybean meal [SBM]-based diet), ii) LB (corn-SBM-based diet + BG 0.1%), iii) LBE (corn-SBM-based diet + BG 0.1% + VE 0.02%), iv) HB (corn-SBM-based diet + BG 0.2%), and v) HBE (corn-SBM-based diet + BG 0.2% + VE 0.02%). A corn-SBM-based diet was used as feed in this experiment, and all nutrients in the experimental diet except crude protein (CP) met or exceeded the nutrient requirements of the National Research Council (NRC) [10] for weaning pigs. The CP was set to 6.25 times more than the standard total nitrogen in the requirement of NRC [10] to calculate CP requirements. In the present study, BG and VE products were provided by E&T Company (E&T CO., Ltd, Daejeon, Korea). β-Glucan consisted of (1,3)-(1,6)-β-D-glucan and mannan. Vitamin E was in the form of VE-acetate. In the case of VE, 65 IU/kg was present in the vitamin premix, and 110 IU/kg of VE was additionally supplemented to the LBE and HBE treatments. All nutrient contents in the feed were formulated equally, and the formula and chemical composition of the experimental diet are presented in Table 1 and 2. 
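Growth performance metrics of the kind described above (ADG, ADFI, and G:F ratio measured per pen and per phase) are simple arithmetic on body weight and feed records. The sketch below is a hedged illustration: the function and all input values are hypothetical, and only the standard definitions of ADG, ADFI, and G:F are taken from the Methods.

```python
# Hedged sketch of pen-level growth performance arithmetic (ADG, ADFI, G:F).
# All values are hypothetical; only the standard definitions are used.
def growth_performance(pen_bw_start, pen_bw_end, feed_offered, feed_refused,
                       n_pigs, n_days):
    """Weights in kg (pen totals); returns ADG and ADFI in kg/pig/day and G:F."""
    adg = (pen_bw_end - pen_bw_start) / (n_pigs * n_days)
    adfi = (feed_offered - feed_refused) / (n_pigs * n_days)
    return adg, adfi, adg / adfi

# Example: a hypothetical pen of 10 pigs over a 21-day phase
adg, adfi, gf = growth_performance(pen_bw_start=76.4, pen_bw_end=135.0,
                                   feed_offered=130.0, feed_refused=10.0,
                                   n_pigs=10, n_days=21)
print(f"ADG = {adg:.3f} kg/d, ADFI = {adfi:.3f} kg/d, G:F = {gf:.3f}")
```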
The CP content of phase I in the weaning pig feed was 20.56%, the lysine content was 1.35%, the methionine content was 0.39%, the cysteine content was 0.35%, the threonine content was 0.79%, the tryptophan content was 0.22%, the calcium (Ca) content was 0.80%, and the total phosphorus (P) content was 0.65%. The CP content of phase II in the weaning pig feed was 18.88%, the lysine content was 1.23%, the methionine content was 0.36%, the cysteine content was 0.32%, the threonine content was 0.73%, the tryptophan content was 0.20%, the Ca content was 0.70%, and the total P content was 0.60%. Blood profiles and immune response Blood samples were taken from the jugular vein of three pigs near the average BW in each treatment after 3 hours of fasting on the initial day and at the end of each phase to measure VE, selenium (Se), tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), and lymphocytes. Fecal microbiota Measurements of microorganisms in feces were performed at the end of the experiment. Fecal samples were collected based on the BW of the pigs in the treatments, transported to the laboratory on ice, and stored in a -80°C freezer until further analysis. One milliliter of the pretreated sample was diluted 10-fold in steps in 9 mL of sterile 0.1% peptone water, and 1 mL of sample was taken at each dilution concentration and dispensed in 3 M dry film medium to analyze aerobic bacteria, coliform, E. coli, lactic acid bacteria, yeast, and mold. Subsequent triplicate spread plating was performed on Petrifilm aerobic plate count (APC) plates, Petrifilm coliform count plates, and Petrifilm yeast and mold count (YM) plates according to the manufacturer's instructions. APC and coliform plates were incubated aerobically at 37°C for 24 h, and yeast and mold plates were incubated aerobically at 25°C for 72 h in an aerobic incubation chamber. Counts were recorded as colony forming units per gram (CFU/g). In addition, fecal sample deoxyribonucleic acid (DNA) was extracted for metagenomic analysis using the DNeasy Pow-erSoil Pro kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol for comparison with the culturomic approach. All bacteria isolated through 16S rRNA sequencing were identified and classified, and the microbial composition of fecal samples was analyzed through metagenomics using next-generation sequencing (NGS) technology. Fecal score Observations of fecal scores were made every day at 08:00 throughout the feeding trial (35 days). Data were recorded by one trained researcher for each pen. Fecal scores were given according to the condition of feces (0 = normal feces; 1 = moist feces; 2 = mild feces; 3 = watery diarrhea) [11]. Slightly wet feces on the rump area were used to designate contaminated piglets. After recording the data, we cleaned away the feces by wiping off the fecal areas or the pig's butt, preparing for a new measurement the next day. Nutrient digestibility A total of 15 crossbred barrows, averaging 12.48±0.37 kg BW, were allotted to individual metabolic crates (40×80×90 cm) in a completely randomized design with three replicates to evaluate nutrient digestibility and nitrogen retention. The total collection method was used to determine the apparent total tract digestibility of dry matter (DM), CP, crude ash, and crude fat [12]. After a five-day adaptation period, there was a five-day collection period. 
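As a small illustration of the plate-count arithmetic behind the fecal microbiota method above, the sketch below converts a Petrifilm colony count at a given 10-fold dilution into CFU/g. It assumes, for illustration only, that the undiluted slurry corresponds to 1 g of feces per mL; the actual pre-treatment ratio is not restated here.

```python
# Hedged illustration of converting Petrifilm colony counts at a 10-fold serial
# dilution into CFU/g, assuming the undiluted slurry represents 1 g of feces
# per mL (an assumption for this example only).
import math

def cfu_per_gram(colonies_counted, dilution_exponent, plated_volume_ml=1.0):
    """dilution_exponent = n for a 10^-n dilution of the original slurry."""
    return colonies_counted / plated_volume_ml * 10 ** dilution_exponent

count = cfu_per_gram(colonies_counted=143, dilution_exponent=5)
print(f"{count:.2e} CFU/g  (log10 = {math.log10(count):.2f})")
```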
To determine the first and last day of collection, 8 g of ferric oxide and chromium oxide were added to the first and last experimental diets as selection markers. During the experimental period, all pigs were fed the phase II diets twice per day, at 07:00 and 19:00, which provided three times the maintenance energy [13], and water was provided ad libitum. Collection of feces was started when ferric oxide appeared in the feces and was maintained until the appearance of chromium oxide in the feces. Urine samples were collected during the collection period in plastic containers containing 50 mL of 4 N H 2 SO 4 to prevent evaporation of nitrogen prior to nitrogen retention analysis. Fecal and urinary samples were stored at -20°C until the end of the collection period, and the feces were dried in a drying oven at 60°C for 72 h and then ground to 1 mm in a Wiley mill (CT 193 Cyclotec; FOSS, Höganäs, Sweden) for chemical analysis, including moisture, CP, crude fat, and crude ash contents, by the Association of Official Analytical Chemists (AOAC) methods [14]. Statistical analysis All obtained data were processed by Excel 2010 first, and then analyzed by one-way ANOVA procedure using Statistical Analysis System 9.4 TS1M7 (SAS Inst. Inc., Cary, NC, USA). Each pen was used as the experimental unit for growth performance and fecal score, while individual pigs were used as the experimental unit for fecal microbiota, blood profiles, and nutrient digestibility. The orthogonal polynomial contrasts were used to determine the effects of diet (BG and VE against the control), BG, VE, as well as the interaction between BG and VE. Data were presented as means and their pooled standard errors. The differences were considered as statistically significant when p<0.05, while 0.05≤p<0.10 was considered to indicate a trend in the data. Growth performance The effects of BG with VE supplementation in the weaning pig diet on growth performance are shown in Table 3. As a result, BW at week six, ADG for the entire period of the experiment, and ADFI for phase II (3 to 6 weeks) were significantly higher in all treatment groups to which BG or VE was added compared to the control group (Diet, p<0.05). In addition, the treatment groups supplemented with 0.2% BG compared to those supplemented with 0.1% BG showed significantly higher in BW at week six (BG; p<0.01), and a higher trend in ADG at phase II and the overall experimental period (BG; p = 0.09, p = 0.08). The addition of 0.02% VE significantly increased BW at week six and ADG in phase II (VE; p<0.01). The HB and HBE treatments with 0.2% BG had significantly higher ADFI than the treatments with 0.1% BG in phase II and entire experimental period (BG; p<0.05). Adding 0.02% VE showed a significantly higher G:F ratio in phase II than treatments without VE (p<0.05). Park et al [15] observed that supplementation with BG linearly increased ADG in phase I (0 to 2 weeks) and the entire period (6 weeks) and linearly decreased the feed conversion ratio in phase I (0 to 2 weeks), phase II (2 to 6 weeks), and the entire period (6 weeks) when they compared supplementation of BG by level (0%/0.1%/0.2%/0.4%) in the weaning pig diet with the treatment supplemented with 0.003% anti- biotic Tiamulin. Luo et al [16] reported that supplementing 0.01% BG in the weaning pig diet increased ADG linearly and quadratically (p<0.05) during the entire experimental period (28 days) when BG by level (0%/0.0025%/0.005%/ 0.01%/0.02%) was supplemented. 
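The total collection calculation itself is straightforward arithmetic; the following sketch shows the apparent total tract digestibility formula and the nitrogen retention definition that appears as a table footnote later in this section. All input values are hypothetical and the function names are assumptions for illustration.

```python
# Hedged sketch of apparent total tract digestibility (ATTD) by total
# collection and of nitrogen retention. Inputs are hypothetical; the
# N-retention definition follows the footnote cited later in this section.
def attd_percent(nutrient_intake_g, nutrient_in_feces_g):
    """ATTD (%) = (intake - fecal output) / intake * 100."""
    return (nutrient_intake_g - nutrient_in_feces_g) / nutrient_intake_g * 100

def n_retention_g(n_intake_g, fecal_n_g, urinary_n_g):
    """N-retention = N intake (g) - fecal N (g) - urinary N (g)."""
    return n_intake_g - fecal_n_g - urinary_n_g

print(f"DM ATTD = {attd_percent(5000.0, 950.0):.1f} %")          # hypothetical values
print(f"N retained = {n_retention_g(160.0, 28.5, 55.0):.1f} g")  # hypothetical values
```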
Pigs fed 0.005% BG had significantly higher ADG (p<0.05) during the whole experimental period (28 days) and increased ADFI (p<0.05) during 0 to 28 days and 28 to 35 days when treatments with 0.005% BG were compared with the control [17]. Pigs supplemented with 0.1% each BG from mulberry leaves and curcuma had significantly higher ADGs and G:F ratios than the control (p<0.05) in Lee et al [18] experiment during phase I (1 to 14 days). On the other hand, Zhou et al [19] reported that there was no significant difference in the growth performance of weaning pigs when 0.01% BG was fed to weaning pigs challenged with lipopolysaccharide. Most previous studies reported that the addition of BG to weaning pig feed had a positive effect on growth performance, but the exact mechanism for improvement in growth performance was not elucidated [20]. However, BG, which is used as a broad-spectrum immune enhancer, has contributed to increasing the growth performance of animals such as swine by strengthening the intestinal mucosa of piglets and improving the intestinal environment when it is supplemented in the weaning pig diet [18,21]. The ADFI of weaning pigs was higher than that of the control because the immune and health status of weaning pigs were improved considering the results of previous studies. Therefore, as a result of the present experiment, the addition of BG and VE to weaning pig feed increased BW, ADG, and ADFI. Additionally, supplementation with 0.2% BG and 0.02% VE had a positive effect on the growth performance of weaning pigs. Blood profiles and immune response The effects of BG with VE supplementation in the weaning pig diet on blood profiles and immune response are shown in Table 4. There was an increasing trend in VE concentration in the blood profiles in groups supplemented with 0.02% VE (p = 0.08). However, there was no significant difference in the blood concentrations of selenium, TNF-α, IL-6, and lymphocytes (p>0.05). Moreira and Mahan [22] reported that there was a significant increase in the average level of VE between days 7 and 35 in treatments supplemented with VE compared to the control without VE when VE was added by level (0 IU/20 IU/40 IU/60 IU) in a weaning pig diet (p<0.05). In addition, supplementation with 250 IU VE increased the concentration of VE significantly on days 42 and 68 compared to the treatment supplemented with 40 IU in the study of Rey et al [21] (p<0.01). Various factors influence the VE status of pigs before and after weaning. Neonatal piglets are born with a low α-tocopherol concentration in their tissues [6]. Diarrhea after weaning lowers serum α-tocopherol concentration and worsens VE absorption [23]. In addition, VE deficiency occurs most frequently in weaning pigs during the first few weeks after weaning, as postweaning serum VE concentrations decrease due to low feed intake and increased stress. In the current experiment, the treatments with 0.02% VE added to the weaning pig feed showed an increasing trend in VE concentration at week three compared to the treatments without VE. As in the previous studies of Rey et al [21] and Moreira and Mahan [22], the concentration of VE in the blood tended to increase with additional VE supply. This means that VE is being easily delivered into the piglets according to the additional VE supply in the weaning pig feed. It can also be expected that VE will have a positive effect on improving the antioxidant status of weaning pigs and increasing the immune response through enhanced cell protection. 
In the current experiment, the addition of 0.02% VE in the diet of weaning pigs had a positive effect on the VE concentration in weaning pigs. Fecal microbiota The effects of BG with VE supplementation in the diet of weaning pigs on fecal microbiota, including aerobic count (AC), coliform count (CC), E. coli/coliform count (EC), lactic acid bacteria count, and YM, are shown in Figures 1, 2, 3, 4, and 5. As a result, Lactobacillus was decreased in the HBE and YM treatments and was decreased in the LB and LBE treatments compared to the control, as shown in Figure 1 (Diet, p<0.05). Treatments with 0.1% BG showed significantly lower YM compared to that of the control, as shown in Figure 2 (Diet, p<0.01). Treatments with 0.02% VE also showed significantly lower YM compared to that of the control, as shown in Figure 3 (Diet, p<0.05). In addition, the number of Proteobacteria (phylum containing pathogenic microorganisms such as E. coli, Salmonella, Shigella, etc.) was significantly lower in the treatments with BG and VE than in the control, as shown in Figure 4 (Diet, p<0.05). According to Figure 5, pigs fed 0.1% BG showed an increasing trend of Lactobacillus compared to that of the control. According to a previous study by Park et al [15], no significant difference was found among treatments in Lactobacillus and Salmonella, but coliform bacteria decreased linearly in feces as the amount of BG increased in week six when dietary supplementation of BG by level (0%/0.1%/0.2%/0.4%) in the weaning pig diet was compared with the treatment with 0.003% antibiotic Tiamulin (p<0.05). Additionally, Lactobacillus and E. coli in feces in week two and five were not affected by supplementation with 0.1% each of BG from mulberry leaves and curcuma in the weaning pig feed [18]. In the present experiment, YM decreased when 0.1% BG or 0.02% VE was added to the weaning pig feed. In addition, Proteobacteria decreased compared to the control when 0.1% and 0.2% BG and 0.02% VE were added. Metzler-Zebeli et al [24] reported that supplementing BG could help the composition and metabolic activity of the microbiome in the gastric cavity, cecum, and colon. In another previous study, it was reported that BG, in a mixed form as a grain or concentrate, was easily fermented, decreased the number of intestinal bacteria, and increased the intestinal butyrate concentration of growing pigs [25]. In addition, weaning pigs fed a diet supplemented with BG for two weeks after weaning had reduced susceptibility to enterotoxigenic E. coli, a major cause of diarrhea [26]. No significant difference was found in the aerobic bacteria/E. coli/coliform bacteria. However, in the current experiment, the decrease in YM and Proteobacteria and the tendency for Lactobacillus to increase when supplementing 0.1% BG and 0.02% VE would have benefitted pig health through improvement of the intestinal environment. Fecal score The effects of BG with VE supplementation in the weaning pig diet on fecal score are shown in Table 5. As a result of the experiment, the treatment groups to which BG or VE was added had significantly lower fecal scores than those of the control at week three and six (Diet, p<0.05). Furthermore, treatments with 0.2% BG had significantly lower fecal scores than treatments with 0.1% BG (p<0.05). Treatments with 0.02% VE also significantly showed lower fecal scores than treatments without VE at week three (p<0.05). 
In general, diarrhea in weaning pigs occurs over a period of 1 to 2 weeks after weaning due to a change in feed and causes damage to the digestive system. In addition, diarrhea occurs due to a decrease in absorption capacity, which is affected by shortening the length of villi, increasing the depth of crypts, and decreasing the action of digestive enzymes. A decrease in the absorption capacity of the small intestine is associated with the growth of enterotoxic bacteria or a decrease in the fermentation of digestible nutrients in the large intestine, which causes diarrhea in weaning pigs. (Figure caption: Fecal microbiota according to the addition of β-glucan with vitamin E in weaning pig feed (relative abundance phylum; next generation sequencing analysis). Differences were declared significant at p<0.05 with * marks and highly significant differences were expressed at p<0.01 with ** marks. CON, corn-SBM based diet; LB, corn-SBM based diet+0.1% β-glucan; LBE, corn-SBM based diet+0.1% β-glucan+0.02% vitamin E; HB, corn-SBM based diet+0.2% β-glucan; HBE, corn-SBM based diet+0.2% β-glucan+0.02% vitamin E. Low β-glucan means 0.1% β-glucan and high β-glucan means 0.2% β-glucan.) According to a previous study by Park et al [15], no significant difference was found among treatments when dietary supplementation of BG by level (0%/0.1%/0.2%/0.4%) in the weaning pig diet was compared with the treatment supplemented with 0.003% of the antibiotic Tiamulin. Lee et al [18] also reported that the fecal score was not affected by supplementation with 0.1% each of BG from mulberry leaves and curcuma in weaning pig feed. On the other hand, the treatment supplemented with 0.0108% BG showed a significantly lower fecal score compared to that of the control and the treatment supplemented with 0.0054% BG when BG by level (0%/0.0054%/0.0108%) was supplemented to weaning pigs experimentally infected with a pathogenic E. coli (p<0.05) [27]. In the current experiment, the fecal score was significantly lower when 0.1% or 0.2% BG and 0.02% VE were added to weaning pig feed. Kim et al [27] reported that the fecal score after weaning was low due to the enhancement of the barrier function and immunity of weaning pigs by adding BG. In addition, supplying additional VE is important because VE absorption can be greatly reduced before and after weaning for pigs with diarrhea [23]. Therefore, it was assumed that the fecal score was low because of strengthening the gut integrity of weaning pigs, improving immunity, and reducing oxidative stress through antioxidant effects with supplementation of VE. Therefore, the results of the present experiment showed that the addition of 0.1% or 0.2% BG and 0.02% VE to weaning pig feed significantly lowered the fecal score. Nutrient digestibility The effects of BG with VE supplementation in the weaning pig diet on nutrient digestibility are shown in Table 6. Supplementing BG and VE did not affect nutrient digestibility and nitrogen retention. According to the previous study of Lee et al [18], treatments supplemented with 0.1% each of BG from mulberry leaves and curcuma showed higher digestibility of DM and energy than those of the control over two weeks. Hahn et al [4] conducted an experiment with the addition of BG by level (0%/0.01%/0.02%/0.03%/0.04%) in weaning pig feed. The digestibilities of DM, gross energy, CP, ether extract, Ca, and P increased linearly (p<0.05) as the addition level of BG increased.
(Table footnote: 3) N-retention = N intake (g) - fecal N (g) - urinary N (g).) When dietary supplementation of BG by level (0%/0.1%/0.2%/0.4%) in the weaning pig diet was compared with the treatment with 0.003% of the antibiotic Tiamulin, supplementation of BG linearly increased apparent total tract digestibility of DM and energy during 1-14 and 1-42 days as the amount of BG increased from 0.1% to 0.4% [15]. On the other hand, there was also a study in which the addition of BG in pig feed showed different results from previous studies in terms of nutrient digestibility. The addition of 0.1% BG to growing pig feed had no effect on the digestibility of DM, gross energy, CP, crude ash, or P [28]. In the present study, the addition of BG and VE to weaning pig feed did not affect nutrient digestibility. This current study did not show an increase in nutrient digestibility, as in previous studies by Hahn et al [4], Lee et al [18], and Park et al [15]. The results also differed from those of a previous study by Brennan and Cleary [29], who reported that the addition of cereal mixed-linked β-(1,3)-(1,4)-d-glucan had a negative effect on the nutrient digestibility and growth performance of pigs. Further research is needed on the effect of BG obtained from brewer's yeast used in the current experiment on nutrient digestibility in weaning pigs because the structure, chemical composition, and the effect of BG were different depending on the source type. In the current experiment, the addition of BG and VE to weaning pig feed had no effect on the nutrient digestibility of weaning pigs. CONCLUSION A significant decrease in yeast and mold count and Proteobacteria and a tendency of increased Lactobacillus compared to the control was shown when 0.1% β-glucan and 0.02% vitamin E were added. The fecal score in weaning pigs was lower in the treatments supplemented with 0.1% or 0.2% β-glucan and 0.02% vitamin E compared to the control. In addition, vitamin E was better supplied to weaning pigs by increasing the concentration of α-tocopherol in the blood of weaning pigs when 0.02% vitamin E was supplemented. Therefore, the addition of 0.1% β-glucan and 0.02% vitamin E to weaning pig feed is thought to have a positive effect on the growth performance of weaning pigs by improving the intestinal microbial composition and reducing the occurrence of diarrhea while efficiently supplying vitamin E.
2022-11-15T16:03:15.856Z
2022-11-14T00:00:00.000
{ "year": 2022, "sha1": "4cb5258b109cd4f774ce18924566814ebeaa3833", "oa_license": "CCBY", "oa_url": "https://www.animbiosci.org/upload/pdf/ab-22-0311.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c090dde9a6845620cc41cd5debc02ffade55883f", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
244684130
pes2o/s2orc
v3-fos-license
Steep Subthreshold Swing and Enhanced Illumination Stability InGaZnO Thin-Film Transistor by Plasma Oxidation on Silicon Nitride Gate Dielectric In this paper, an InGaZnO thin-film transistor (TFT) based on plasma oxidation of a silicon nitride (SiNx) gate dielectric, with small subthreshold swing (SS) and enhanced stability under negative bias illumination stress (NBIS), has been investigated in detail. The mechanism of the high-performance InGaZnO TFT with plasma-oxidized SiNx gate dielectric was also explored. The X-ray photoelectron spectroscopy (XPS) results confirmed that an oxygen-rich layer formed on the surface of the SiNx layer and that the amount of oxygen vacancy near the interface between the SiNx and InGaZnO layers was suppressed via pre-implanted oxygen on the SiNx gate dielectric before deposition of the InGaZnO channel layer. Moreover, the conductance method was employed to directly extract the density of interface traps (Dit) in the InGaZnO TFT to verify the reduction in oxygen vacancy after plasma oxidation. The proposed InGaZnO TFT with plasma oxidation exhibited a field-effect mobility of 16.46 cm2/V·s, a threshold voltage (Vth) of −0.10 V, an Ion/Ioff over 10^8, an SS of 97 mV/decade, and a Vth shift of −0.37 V after NBIS. The plasma oxidation of the SiNx gate dielectric provides a novel approach for suppressing interface traps in high-performance InGaZnO TFTs. Introduction In recent decades, InGaZnO-based oxide TFTs have been widely investigated to compete with conventional silicon-based TFTs for active matrix organic light-emitting displays (AMOLED) due to their advantages of high field-effect mobility [1], excellent uniformity for large-scale display panels [2], and high optical transparency in the visible spectrum [3]. Moreover, InGaZnO shows great potential for the application of flexible electronic devices owing to its insensitivity to intrinsically distorted metal-oxygen-metal chemical bonds [4] and its low-temperature fabrication process [5][6][7]. To investigate the further potential for advanced electronic applications such as high-refresh-rate displays and low-power devices, the field-effect mobility, stability, and SS should be critically considered. Among the strategies for boosting the performance of InGaZnO TFTs, the modification of the interface between InGaZnO and the gate dielectric is one of the effective ways [8][9][10]. The plasma treatment technique has been widely applied to tailor the surface properties of semiconductors [11,12]. Additionally, the interface between a post-deposited thin film and the former layer can also be affected by plasma treatment. In this work, plasma oxidation of the SiNx gate dielectric for an InGaZnO TFT with fairly low SS and excellent illumination stability is reported. The effects of plasma oxidation on the electrical characteristics, interface trap density, and chemical composition of InGaZnO have been investigated in detail. The proposed plasma oxidation method on the SiNx gate dielectric provides a novel approach for achieving high-performance InGaZnO TFTs. Materials and Methods The InGaZnO TFT was fabricated on a 210 nm-thick SiNx gate dielectric with heavily n-doped (As) Si as a gate electrode. The SiNx layer was deposited by low-pressure chemical vapor deposition at a pressure of 160 mTorr from NH3 and SiCl2H2 precursors with gas flows of 40 sccm and 175 sccm, respectively. The as-deposited SiNx layer was treated in the oxygen plasma for 60 s.
The oxygen plasma was generated by the capacity coupling configuration via a radio frequency power supply (Seren, R301) and matching box with a fixed power of 40 W under DC bias about 180 V at a constant pressure of 5 mTorr. The whole substrate was rotated at 5 revolutions per minute to keep the uniformity during the whole plasma oxidation process. Afterwards, a 30 nm thick InGaZnO channel layer was deposited by magnetron sputtering from an InGaZnO target (1:1:1 at%) with a power of 200 W in the same chamber without exposure to the atmosphere. Then, the source/drain electrodes were thermal evaporated Al metal via the shadow mask process with a channel length (L) of 100 µm and width (W) of 1000 µm, respectively. Finally, the InGaZnO TFT with plasma oxidation SiNx gate dielectric (hereinafter referred to as 'POG. TFT') was post-annealed at 250 °C in air for 1 h. The reference sample (hereinafter referred to as 'Ref. TFT') was with the same fabrication sequence except for plasma oxidation on SiNx dielectric. The process flow diagram and structure of InGaZnO TFTs in this work are shown in Figure 1. The electronic characteristics were evaluated using a source-meter unit (2636B, Keithley, Beaverton, OR, USA) and an LCR meter (IM3536, Hioki, Japan). The chemical status of the thin films was analyzed by XPS (Nexsa, Thermo Scientific, Waltham, MA, USA) with all the XPS data calibrated by C1s BE at 284.8 eV. Surface morphology was performed by the atomic force microscope (AFM, Dimension Icon, Bruker, Billerica, Germany).
Results The field-effect mobility in the saturation region is extracted from the following equation: μsat = (2L/(W·Cox))·(∂√Ids/∂Vgs)^2, and the SS is calculated by [13]: SS = ∂Vgs/∂(log10 Ids), where Cox, L, W, Ids, and Vgs are the gate capacitance per unit area, channel length, channel width, drain-to-source current, and gate bias voltage, respectively. All the extracted parameters are summarized in Table 1. With plasma oxidation, the field-effect mobility increased to 16.46 cm2/V·s and the Vth slightly shifted from 1.95 V to −0.10 V. The SS showed an obvious decrease from 312 mV/decade to 97 mV/decade. Generally, the value of SS is dominated by the density of trap states in the semiconductor bulk and of interface traps between the semiconductor and the gate dielectric. The roughness of the gate dielectric could directly influence the interface between the InGaZnO and the dielectric [14][15][16]. Hence, to investigate the condition of the interface, AFM topography was obtained for SiNx samples with/without plasma oxidation under the identical process conditions described before, as shown in Figure 2b,c. The value of the surface roughness decreased from 1.23 nm to 0.95 nm after plasma oxidation. Large SiNx clusters or absorbed carbon contaminants could be partly peeled off from the surface of the SiNx by the plasma bombardment, which provides a smoother surface for the following sputtering of InGaZnO. Different from using electron cyclotron resonance (ECR) remote plasma to treat the thin film [17], capacitive coupling was used to provide O2 plasma with a stronger bombardment effect to treat the SiNx insulators. These strongly bombarded O2 plasmas would treat the SiNx insulators more adequately. To further verify the impact on element composition after plasma oxidation, the element composition measured by XPS on the surface of the SiNx thin film without/with plasma oxidation is shown in Figure 2d.
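To make the extraction described at the start of this Results section concrete, the sketch below computes the saturation mobility and SS from a transfer curve. This is not the authors' code: the current-voltage data are synthetic, and Cox is an assumed value for a 210 nm SiNx dielectric; only the channel dimensions W and L follow the paper.

```python
# Illustrative extraction of saturation mobility and SS from a transfer curve,
# following the standard expressions quoted in the Results. The I-V data are
# synthetic; W and L follow the paper, while Cox and the curve are assumptions.
import numpy as np

W, L = 1000e-6, 100e-6                       # channel width/length (m), from the paper
C_ox = 8.85e-12 * 7.0 / 210e-9               # F/m^2, assuming eps_r ~ 7 for 210 nm SiNx

# Synthetic transfer curve standing in for measured data (Vds in saturation).
V_gs = np.linspace(-5.0, 10.0, 301)
V_th_true, mu_true, ss_true = -0.10, 16.0e-4, 0.10   # targets used only to build the fake data
I_sat = 0.5 * mu_true * C_ox * (W / L) * np.clip(V_gs - V_th_true, 0.0, None) ** 2
I_sub = 1e-8 * 10 ** (np.minimum(V_gs - V_th_true, 0.0) / ss_true)
I_ds = 1e-13 + I_sub + I_sat

# Saturation mobility from the slope of sqrt(Ids) vs Vgs well above threshold.
on = V_gs > 3.0
slope = np.polyfit(V_gs[on], np.sqrt(I_ds[on]), 1)[0]
mu_sat = 2 * L * slope ** 2 / (W * C_ox)              # m^2/V·s

# SS from the subthreshold region (steepest part of log10(Ids) vs Vgs).
sub = (I_ds > 1e-11) & (I_ds < 1e-9)
ss = np.mean(1.0 / np.gradient(np.log10(I_ds), V_gs)[sub])   # V/decade

print(f"mu_sat ~ {mu_sat * 1e4:.1f} cm^2/V*s, SS ~ {ss * 1e3:.0f} mV/decade")
```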
For the sample of SiNx without plasma oxidation, the oxygen atoms are mainly attributed to oxygen absorbed from the environment on the surface of the SiNx. After plasma oxidation, the O atom ratio increased from 25.0% to 38.2%, suggesting that Si-O bonding formed on the surface of the SiNx thin film. To verify this speculation, the XPS spectra of Si2p at the surface of the SiNx thin film were also measured. As shown in Figure 3, the Si2p binding energy on the surface of the SiNx is centered at 102.5 eV (with plasma oxidation). This binding energy of Si2p is between those of Si3N4 (101.7 eV) and SiO2 (103.5 eV), which indicates the formation of Si-O bonding on the surface of the SiNx thin film with plasma oxidation. Since the SiNx thin film was mounted on the anode of the plasma generator in this work, an electric field pointing toward the substrate could form at the SiNx surface. As a result, cations in the plasma such as O2+ or O+ could sustain the surface bombardment, which causes Si-N bonds to break, and oxygen atoms could partly substitute for nitrogen atoms to form an oxygen-rich layer on the SiNx surface. In addition, the carbon atom ratio was also reduced after plasma treatment. Such carbon is mainly induced by the inevitable contamination from the vacuum chamber or the transfer process. Since carbon has been reported as an electron trap in InGaZnO [18], the reduction of carbon on the SiNx surface could also increase the mobility and improve the SS of the InGaZnO TFT. Furthermore, the influence of SiNx surface plasma oxidation on the post-deposited InGaZnO layer was evaluated by the O1s XPS profile at the interface between SiNx and InGaZnO for the samples with/without plasma oxidation. Figure 4a,b shows the XPS spectra of the O1s core level in InGaZnO near the interface between SiNx and InGaZnO. The O1s peak of the XPS spectra is deconvoluted into three peaks with binding energies of about 530.3 eV, 531.3 eV, and 532.3 eV. The main peak centered at about 530 eV (OL) is related to lattice oxygen. The peaks centered at 531.3 eV (OM) and 532.3 eV (OH) are related to oxygen vacancies and -OH hydroxide oxygen [19,20], respectively.
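The three-component O1s deconvolution described above is usually performed by least-squares peak fitting. The sketch below is a hedged illustration of that procedure: the spectrum is synthetic, the peak positions follow the text (530.3, 531.3, and 532.3 eV), and the widths and areas are assumptions, with the assumed areas chosen only to echo the 1:0.18:0.12 ratio reported for the plasma-oxidized sample.

```python
# Hedged sketch of a three-Gaussian O1s deconvolution (lattice O, O vacancies,
# -OH). The spectrum is synthetic; positions follow the text, widths/areas are
# assumed for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, center, sigma):
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def o1s_model(x, a_ol, a_om, a_oh, sigma):
    return (gauss(x, a_ol, 530.3, sigma) + gauss(x, a_om, 531.3, sigma)
            + gauss(x, a_oh, 532.3, sigma))

be = np.linspace(527.0, 536.0, 400)                   # binding energy axis (eV)
true = o1s_model(be, 1.0, 0.18, 0.12, 0.55)           # assumed areas echoing the reported ratio
spectrum = true + np.random.default_rng(0).normal(0.0, 0.005, be.size)

popt, _ = curve_fit(o1s_model, be, spectrum, p0=[1.0, 0.2, 0.1, 0.5])
a_ol, a_om, a_oh, sigma = popt
print(f"OL : OM : OH = 1 : {a_om / a_ol:.2f} : {a_oh / a_ol:.2f}")
```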
The area ratio of OL:OM:OH for the samples without and with plasma oxidation is about 1:0.25:0.10 and 1:0.18:0.12, respectively. This result indicates that the amount of oxygen vacancies in a-InGaZnO near the interface between SiNx and InGaZnO is decreased after surface plasma oxidation of the SiNx layer. This phenomenon could be explained by the following mechanism. After InGaZnO layer deposition, the thermal post-anneal process could cause the oxygen atoms near the interface to diffuse from the InGaZnO layer into the SiNx layer, driven by the oxygen concentration gradient. Such a diffusion process could be partly restrained by reducing the concentration gradient via pre-implantation of oxygen atoms at the SiNx surface, leading to a reduction in oxygen vacancies in InGaZnO near the interface between SiNx and the InGaZnO layer. Since the oxygen vacancy is considered the origin of the defects in InGaZnO [21,22], it could be deduced that the density of interface traps in InGaZnO TFTs after O2 plasma treatment should be decreased. On the other hand, the reduction in oxygen vacancy only near the interface between SiNx and the InGaZnO layer would not significantly decrease the carrier concentration in the InGaZnO layer. Hence, the SS should be decreased and the mobility should be increased, which is consistent with the decrease in the extracted value of SS and the increase in mobility from the transfer curves of the TFTs in Table 1. During the NBIS test, holes and electrons are generated by the light illumination, while the oxygen vacancy could act as a hole trap to capture the photoinduced holes, which are likely to drift toward the channel/dielectric interface under negative gate bias, resulting in NBIS instability in InGaZnO TFTs [23,24]. Therefore, owing to the effective decrease in the amount of oxygen vacancies near the interface between InGaZnO and SiNx by plasma oxidation, the holes trapped by interfacial traps could also be reduced, resulting in a smaller ΔVth after NBIS.
To directly obtain the Dit between the SiNx gate dielectric and the InGaZnO layer, the conductance method [25] was employed. This small-signal steady-state method has been widely used to analyze the properties of the interface trap owing to its accuracy and sensitivity in extracting the Dit. The Dit can be calculated from the equivalent parallel conductance (Gp) divided by ω using the following equation: Gp/ω = q·Dit·ωτit/(1 + (ωτit)^2). The Gp/ω can be directly calculated from the measured equivalent parallel conductance (Gm) and measured capacitance (Cm) by the following expression: Gp/ω = ω·Gm·Cox^2/(Gm^2 + ω^2·(Cox − Cm)^2), where Cox is the capacitance per unit area. At the maximum of Gp/ω, ω is equal to 1/τit, and the Dit can be expressed in terms of the measured maximum conductance as Dit = 2·(Gp/ω)max/q. Two metal-oxide-semiconductor (MOS) capacitors were fabricated with a similar structure, differing only in the plasma oxidation of the SiNx surface. The Gp/ω as a function of frequency is shown in Figure 5d. The inset is the structure of the MOS capacitor. The extracted Dit is 3.02 × 10^12 cm−2·eV−1 and 1.45 × 10^12 cm−2·eV−1 for the Ref. MOS and POG. MOS capacitors, respectively. This result also proves that the interface traps at SiNx/InGaZnO are reduced by the SiNx surface plasma oxidation, which is consistent with the aforementioned XPS and NBIS results. Table 2 presents the performance metrics of the InGaZnO TFT in this work and other reported SiNx-related InGaZnO TFTs. Among all SiNx-based InGaZnO TFTs, the TFT from our work exhibits a combination of high Ion/Ioff and the lowest SS. Conclusions In this work, we demonstrated an a-InGaZnO TFT with a plasma-oxidized SiNx gate dielectric. With plasma oxidation of the SiNx gate dielectric, the SS and the ∆Vth under NBIS were significantly improved from 312 mV/decade to 97 mV/decade and from −4.75 V to −0.37 V, respectively. The plasma oxidation of SiNx could provide a smoother surface and form an oxygen-rich layer at the SiNx/InGaZnO interface. The XPS results indicate that the amount of oxygen vacancy near the SiNx/InGaZnO interface was effectively reduced after plasma oxidation. Furthermore, the interface trap density was extracted by the conductance method, showing a decrease from 3.02 × 10^12 cm−2·eV−1 to 1.45 × 10^12 cm−2·eV−1 after plasma oxidation. The plasma oxidation of the SiNx gate dielectric in this work provides a potential approach for suppressing interface traps in SiNx-based InGaZnO TFTs for advanced electronic applications.
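As a supplementary illustration of the conductance method summarized above, the sketch below converts measured (Gm, Cm) data into Gp/ω and extracts Dit from the peak value. It is not the authors' code: Cox, the trap density, and the time constant are assumed values used only to generate a synthetic single-time-constant response and exercise the formulas.

```python
# Hedged illustration of the conductance method: Gp/omega from measured Gm and
# Cm, and Dit from the peak value. All inputs are assumed/synthetic.
import numpy as np

q = 1.602e-19                                    # elementary charge (C)
C_ox = 2.9e-8                                    # F/cm^2, assumed for ~210 nm SiNx

def gp_over_omega(omega, G_m, C_m, C_ox):
    # Gp/omega = omega*Gm*Cox^2 / (Gm^2 + omega^2*(Cox - Cm)^2)
    return omega * G_m * C_ox ** 2 / (G_m ** 2 + omega ** 2 * (C_ox - C_m) ** 2)

# Build a synthetic single-time-constant trap response (assumed parameters).
D_it_true, tau_it = 1.5e12, 1e-4                 # cm^-2 eV^-1, s
omega = np.logspace(2, 7, 200)                   # rad/s
C_it = q * D_it_true                             # trap capacitance per area (F/cm^2)
G_p = C_it * omega ** 2 * tau_it / (1 + (omega * tau_it) ** 2)
C_p = C_it / (1 + (omega * tau_it) ** 2)         # semiconductor capacitance ignored here

# Convert the trap branch behind Cox into the externally measured Gm, Cm.
Y = 1 / (1 / (1j * omega * C_ox) + 1 / (G_p + 1j * omega * C_p))
G_m, C_m = Y.real, Y.imag / omega

gp_w = gp_over_omega(omega, G_m, C_m, C_ox)
D_it = 2 * gp_w.max() / q                        # (Gp/omega)_max = q*Dit/2 at omega = 1/tau
print(f"extracted Dit ~ {D_it:.2e} cm^-2 eV^-1 (input {D_it_true:.2e})")
```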
2021-11-28T05:27:32.346Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "ebe3392b3449e2a05440752cf3a2c53823cfae0d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0375/11/11/902/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ebe3392b3449e2a05440752cf3a2c53823cfae0d", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
235463823
pes2o/s2orc
v3-fos-license
Rates of cervical screening amongst females admitted to the psychiatric inpatient hospital in Jersey, Channel Islands Aims Patients with enduring mental health conditions are known to have higher morbidity and mortality rates than the general population. It has been identified that this is due to lifestyle risk factors, medication side effects and barriers to receiving physical health care. National screening programmes, including cervical screening, save lives; however, they depend upon patient engagement. We hypothesised that, due to the factors stated above, psychiatric inpatients are more at risk of cervical cancer and less likely to engage in cervical screening. This study aimed to assess the cervical screening history of patients discharged from the psychiatric inpatient hospital in Jersey, Channel Islands. Method Using computerised laboratory records, the cervical smear history of female patients discharged from the psychiatric inpatient hospital was analysed. Inclusion criteria were: being aged between 25–64 years and having a cervix in situ. The exclusion criterion was total hysterectomy. Cervical smear history was compared to the national guidelines of having routine smears every 3 years for women aged 25–49 and every 5 years for women aged between 50–64 years. Result In the period 1 December 2019–1 December 2020 there were 45 females discharged from the psychiatric inpatient hospital who fit the inclusion criteria. 26 (58%) were up to date with their cervical smears in accordance with national guidelines. 12 (27%) had previously had a smear but were not up to date. 19 smears were done at the GP, 13 at the sexual health clinic and 6 at the gynaecology clinic. 7 (16%) had never had a cervical smear. Of these 7 patients it was identified that one patient was in a same-sex relationship and one was a victim of sexual assault. Conclusion 58% of women discharged from the psychiatric inpatient hospital were up to date with their smears. This is lower than the 72.2% coverage rate in the general population. Although this was a small study, it highlights that engagement with cervical screening amongst psychiatric inpatients is lower than in the general population. Admission presents a crucial contact between patients and healthcare services, and this could be utilised to engage patients in physical health screening. Cervical screening history could be checked upon admission and patients not adequately screened assisted to make an appointment on discharge. Aims. To increase participation in the 2019 UK general election amongst inpatients on a high intensity rehabilitation ward, by supporting patients to both register to vote (RTV) and vote. Background. In 2000, the franchise was extended to those under section 2 or 3 as well as informal inpatients. Unfortunately, voting rates remain low: studies of the 2010 general election show voting rates amongst psychiatric inpatients to be 14%, compared to 65% for the general population. Engaging patients in the democratic process is not only just, it has been shown to be an effective avenue for rehabilitation through increasing social capital. The 2019 UK general election represents a singular opportunity for biopsychosocial rehabilitation. Method.
In the three weeks up until 26/11/19 (the deadline to RTV), visual displays and verbal information were used to notify patients of: the election; their eligibility; the need to RTV before casting a ballot; the registration deadline; and voting methods (in person, by post, by proxy). We gathered patients' intention to RTV and offered impartial, personalised support to register online or by paper, and to apply for a postal or proxy ballot if wished. Patients with no fixed abode were supported to use the ward as their declared place of residence. Of the 17 patients on the ward there were: four informal patients, 11 patients under section 3, and one patient each under a section 37 and a section 37/41, both ineligible to vote. Of the 15 eligible patients, one (6.7%) had already registered, six patients (40%) wanted to register and eight (53.3%) stated they did not want to register. Those wanting to register were supported according to individual patient preference. Of the registered seven, five (33.3%) reported voting, one (6.7%) reported not having voted and one (6.7%) declined to say. Two (13.3%) voted in person and five (33.3%) voted by postal ballot. Conclusion. Our intervention corresponded with an increase in the number of patients registering, from one patient (6.7%) to seven (46.7%), with 5-6 (33.3-40%) casting their ballot. While the causal relationship should not be overstated, the uptake of assistance supports the intervention's efficacy. Result. Good rehabilitation increases a person's social capital, empowering them to actively participate in societal life. Registering to vote is a tacit assertion of this principle. Our study shows that brief interventions that are easily incorporated into everyday care are a simple, effective and ultimately necessary tool in holistic mental health rehabilitation.
Quality improvement project: delirium awareness and training in Coventry memory services A brief training session based on the NICE guidelines and using the Confusion Assessment Method (CAM) was delivered. The survey was repeated post-training, and the difference in the level of confidence was used to measure change. The survey assessed knowledge, beliefs, practices and confidence level regarding delirium detection. Result. Pre-training: 17 clinicians took part in the survey. 59% were aware that there is a NICE delirium guideline. 12% strongly agreed, 41% agreed and 47% felt neutral about their confidence in detecting delirium. Post-training: 10 clinicians took part in the survey. 50% strongly agreed and 50% agreed that they were confident in detecting delirium. Overall, the mean difference was 2 and the p value was 0.92034. We used the Mann-Whitney test to measure the difference between pre- and post-training, which was not significant at p < 0.05. Participants felt that the training was useful and relevant to practice. Conclusion. This study showed that our clinicians have a good basic knowledge of detecting delirium. As a result of this study, we have created a 'delirium checklist' and Confusion Assessment Method (CAM) to be used during duty work. We also note that the majority of delirium cases referred to us come from the community, so the next step of the project will be educational work with community care homes. Aims. To explore and monitor the experience of hospital care provided to patients of Stoke Community Drug and Alcohol Services (CDAS) and the Edward Myers Unit (EMU; an inpatient detoxification unit). Method. The sample was collected from patients who attended face-to-face clinics at CDAS and patients living in Stoke-on-Trent who were admitted to the Edward Myers Unit. The survey pertains to four locations: Royal Stoke Hospital, A&E, Harplands Hospital (Mental Health Unit), and EMU. Patient experience survey for community drug and alcohol service users in hospitals We collected data over two months, from September to November 2020. The cohort of patients from CDAS included those attending new presentation or restart Opioid Substitution Treatment (OST) clinics and people known to the alcohol team at CDAS. We delivered a survey pertaining to the experience of hospital care in the last 12 months. This included treatment at A&E Royal Stoke Hospital, any of the wards at Royal Stoke Hospital, Harplands Hospital and the Edward Myers Unit. Result. The uptake of the survey was 53/83 (64%) at the CDAS clinic and 23/44 (52%) at the Edward Myers Unit. The sample comprised more men than women. The majority were aged 31-40 years. The most commonly used substance was alcohol. The majority of patients had been admitted to the general hospital, either to a ward or seen at A&E. Most people were very satisfied with their treatment in all four locations.
This included the management of withdrawal symptoms, pain, mental health, and discharge planning. Diverse reasons were given for the satisfaction scores. EMU had the best overall scores compared with the other units, while Harplands Hospital performed worst. The free text comments revealed that staff courtesy, respect, careful listening and easy access to care were the strongest drivers of overall patient satisfaction. Patients look for supportive relationships, involvement in treatment decisions, effective approaches to care, easy treatment access and a non-judgemental treatment environment. In some aspects, patients were dissatisfied, particularly with pain management, longer waiting times and not being treated as equals to non drug/alcohol users. Conclusion. On objective measures, patients were satisfied with the treatment received; however, some pointed out their dissatisfaction, particularly in the mental health setting. This project calls for greater attention and support for addiction service provision in emergency departments and hospital wards. Although these findings do not represent the views of all patients in SUD treatment, they give insight into the ways treatment providers, service managers and policy makers might enhance the patient experience to improve patient treatment prognosis and outcomes.
Cytoreduction and HIPEC in the treatment of “unconventional” secondary peritoneal carcinomatosis Background Peritoneal metastasis (PM) is considered a terminal and incurable disease. In the last 30 years, cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) radically changed the therapeutic approach for these patients and is regarded as the standard of care for pseudomyxoma peritonei from appendiceal cancer and peritoneal mesotheliomas. Improved survival has also been reported in treating PM from ovarian, gastric, and colorectal cancers. However, PM often seriously complicates the clinical course of patients with other primary digestive and non-digestive cancers. There is increasing literature evidence that helped to identify not only the primary tumors for which CRS and HIPEC showed a survival advantage but also the patients who may benefit form this treatment modality for the potential lethal complications. Our goal is to report our experience with cytoreduction and HIPEC in patients with PM from rare or unusual primary tumors, discussing possible “unconventional” indications, outcome, and the peculiar issues related to each tumor. Methods From a series of 253 consecutive patients with a diagnosis of peritoneal carcinomatosis and treated by CRS and HIPEC, we selected only those with secondary peritoneal carcinomatosis from rare or unusual primary tumors, excluding pseudomyxoma peritonei, peritoneal mesotheliomas, ovarian, gastric, and colorectal cancers. Complications and adverse effects were graded from 0 to 5 according to the WHO Common Toxicity Criteria for Adverse Events (CTCAE). Survival was expressed as mean and median. Results We admitted and treated by CRS and HIPEC 28 patients with secondary peritoneal carcinomatosis from rare or unusual primary tumors. Morbidity and mortality rates were in line with those reported for similar procedures. Median survival for the study group was 56 months, and 5-year overall survival reached 40.3 %, with a difference between patients with no (CC0) and minimal (CC1) residual disease (52.3 vs. 25.7), not reaching statistical significance. Ten patients are alive disease-free, and eight are alive with disease. Conclusions Cytoreduction and HIPEC should not be excluded “a priori” for the treatment of peritoneal metastases from unconventional primary tumors. This combined therapeutic approach, performed in an experienced center, is safe and can provide a survival benefit over conventional palliative treatments. Background Patients with peritoneal metastasis (PM) are typically considered as having a terminal and incurable disease and justifiably treated only by palliation with a very poor prognosis [1,2]. Although ovarian cancer is one of the most chemotherapy-sensitive solid tumors and one of the few for which the 5-year survival rate has improved, long-term survival in most women with locally advanced disease remains well below 20 % [3][4][5]. Survival for PM from non-gynecologic malignancies is even worse. The EVOCAPE 1 multicenter study reports a median survival in patients treated with standard surgical and/or chemotherapy regimens of 6.5 and 5.2 months, respectively, in patients with primary gastric and colorectal cancer [6]. Over the past two decades, a novel therapeutic approach has emerged, combining cytoreductive surgery (CRS), performed to treat all visible disease, and hyperthermic intraperitoneal chemotherapy (HIPEC) used to treat microscopic residual disease [7,8]. 
This treatment radically changed the therapeutic approach for patients with peritoneal surface malignancies and is nowadays regarded as the standard of care for pseudomyxoma peritonei from appendiceal cancer and peritoneal mesotheliomas [9,10]. In the last two decades, many studies also reported with this combined approach improved survival for the treatment of peritoneal metastases from ovarian [11][12][13], gastric [14,15], and colorectal cancers [16][17][18]. Peritoneal metastases often complicate also the clinical course of many patients with other primary digestive and non-digestive cancers [19,20]. Due to the rarity of these conditions, the design of randomized clinical trials of CRS and HIPEC in these patients is unlikely. However, PM is frequently long-term confined to the peritoneal cavity without distant metastases, and death typically occurs for intractable bowel obstruction, development of malignant ascites and mesentery retraction, that often make it impossible to perform even the limited palliative surgery like a simple ostomy. A regional approach seems therefore reasonable in selected patients. Some medical oncologists remain skeptical mostly because of the complexity of the treatment and the perceived high complication rate [21] and the need to treat the patients only in highly specialized centers [22], but despite skepticism, many are the reports of CRS and HIPEC in the treatment of PM in these patients. Expanding literature reports helped to identify not only the primary tumors for which CRS/HIPEC offers a clear survival advantage but also the patients with rare or unusual primary ("unconventional") cancers who may benefit from this treatment modality for the potential lethal complications and survival advantage [23][24][25]. Our goal is to report our single-institution experience with CRS and HIPEC in patients with PM from rare or unusual primary tumors, discussing possible indications, outcomes, and the peculiar issues related to each tumor, hoping to contribute to extend the actual knowledge on the treatment of PM by this combined treatment. Methods From the clinical records of a series of 253 consecutive patients admitted in our Institution from November 2000 to December 2013 with a diagnosis of peritoneal carcinomatosis from various primary tumors and treated by maximal cytoreduction and HIPEC, we considered for this study only the patients with a diagnosis of secondary peritoneal carcinomatosis from "unconventional" primary tumors. All patients with primary peritoneal carcinomatosis and with secondary peritoneal carcinomatosis from ovarian, gastric, colorectal, and peritoneal mucinous adenocarcinoma of the appendix (PMCA) were excluded. All patients gave informed written consent and had a clear histologic diagnosis of peritoneal carcinomatosis. We included only patients with a performance status of 0-2 [26], adequate cardiac, renal, pulmonary and bone marrow function, and resectable disease. Exclusion criteria were extraperitoneal spread, other malignancies, unresectable disease, and severe associated medical conditions. At laparotomy extent of peritoneal carcinomatosis (PC) was recorded using the peritoneal cancer index (PCI) [27]. Complete surgical cytoreduction was then carried out to resect all visible disease. At the end of the surgical procedure, HIPEC was given with the closed technique. 
Four drains were positioned and connected to a closed extraperitoneal sterile circuit in which 4 to 6 L of perfusate was circulated by a peristaltic pump at a flow rate of 500 mL/min. The circuit was heated using an external heat exchanger connected to a heating circuit (EXIPER, Euromedical Italy). HIPEC was given at a temperature of 42-43°C for 60 min using various chemotherapeutic drugs according to the primary tumor (Table 1). At the end, the abdomen was washed with 3-4 L of sterile saline solution at 37°C. The medical oncologic staff planned systemic chemotherapy when deemed necessary. Patients were followed up every 3 months with clinical evaluation and serummarker monitoring. Imaging techniques were obtained if indicated by the patient's clinical presentation. Survival was expressed as mean and median. The Kaplan-Meier method was used to construct survival curves, and log-rank test was used to assess the significance of the differences (cutoff values p < 0.05). Results A total of 28 patients with secondary peritoneal carcinomatosis from unconventional primary tumors were admitted and treated by CRS and HIPEC in our Institution. The clinical characteristics and type of primary tumor are reported in Table 1. Mean PCI was 17.1. Twenty-five patients (89 %) had an optimal cytoreduction (17 CC0 and 8 CC1) while three (10.7 %) had CC2 residual disease. Peritonectomy procedures lasted a mean of 475 min (range 300-780) including 60 min of HIPEC. All operations led to major blood loss (mean 1350 mL, range 500-3900) and required intraoperative blood (mean 4 units, range 2-8) and plasma (mean 6 units, range 2-10) transfusions. Most patients (16, 57.1 %) had an uneventful recovery. The only HIPEC-related adverse event was a grade 1 renal cisplatin toxicity reversed by medical treatment. Grade 1/2 complications developed in six (21.4 %), grade 3 in two (7.1 %), and grade 4 in four (14.2 %) patients. Of the four patients with grade 4 complications, two underwent a second operation for fistulas (one colonic and one small bowel) caused by the surgical maneuvers needed to ablate bowel implants, one for postoperative bleeding, and one for an abdominal eventration. Mean postoperative stay was 19.2 days (range 8-71). Median survival for the study group was 56 months, and 5-year overall survival reached 40.3 %, with a difference between CC0 and CC1 patients (52.3 vs. 25.7), not reaching statistical significance (Fig. 1). Ten patients are alive disease-free, and eight are alive with disease (Table 1). Management of peritoneal metastases from breast cancer Peritoneal carcinomatosis from breast cancer (BC) is rare but carries high morbidity and mortality [30][31][32], and no clear guidelines are available regarding the role of CRS with or without HIPEC for those patients [33,34]. Literature reports are sporadic and only Gusani, one patient in 2008 [1], and Glehen, two patients in 2010 [15], reported PM from BC treated by CRS and HIPEC. Our study provides previously unavailable information on the treatment of women with PM from BC showing that once the correct diagnosis has been established [30], these patients can benefit from treatment and possibly argues against previous reports describing a poor prognosis. 
In our patients, a median 18 years (range 10-30) elapsed after BC was diagnosed and peritoneal carcinomatosis developed and accords with previous reports describing breast carcinoma as one of the most slowly growing solid tumors given that metastases may appear even decades after the initial diagnosis [34,35]. Of the five patients treated, four achieved long-term survival, one surviving even for 10 years with good QOL. Although CRS and HIPEC cannot be proposed as a standard care for patients with PM from primary breast cancer, the survival observed in our small series suggests that in highly selected patients with no extra peritoneal disease and in whom surgery can achieve adequate cytoreduction this combined procedure can offer patients a promising approach for long-term survival. Management of peritoneal carcinomatosis from small bowel adenocarcinoma Management of patients with PM from small bowel adenocarcinoma is unclear with literature reports episodic, even if PM is a frequent manifestation of small bowel carcinoma [36,37]. Typically, these tumors present after a significant delay in diagnosis for the vagueness of symptoms and imaging difficulty. Prognosis is poor with survival varying from 10 to 40 months. Marchettini and Sugarbaker [38] reported a median survival of 12 months with two of the patients treated with CRS and HIPEC with prolonged survival (57 and 59 months). Chua [39] reported seven cases treated with CRS and HIPEC (mitomycin C and EPIC with 5-FU), with a median disease-free survival of 12 months, and also reported a Kaplan-Meier analysis for a combined group of 19 patients treated with CRS and HIPEC with a median survival of 29 months. Shen et al. [40] reported a median survival of 45 months after treatment with CRS and HIPEC. Another large multiinstitutional experience is reported by the French Surgical Association [41], with a median survival for patients treated by CRS and HIPEC of 32 months. In the four patients treated in our Institution, mean survival was 31.2 months, with two patients alive disease-free at 43 and 22 months and two alive with disease at 33 (pulmonary metastases) and at 27 (abdominal recurrence) months. All the series reported show better results when compared to conventional treatments. Moreover, it has to be considered that CRS and HIPEC could represent the only valid surgical option for palliation in obstructed patients, in whom a simple surgical procedure aimed at bowel decompression is often impossible due to small bowel mesentery retraction or for the treatment of associated ascites. Management of peritoneal carcinomatosis from serous papillary (type II) uterine carcinoma (UPSC) Endometrial cancer is still the most common cancer of the female reproductive tract, and its treatment is surgical, alone or in combination with brachy and/or radiotherapy. Survival rates are approximately 90 % at 5 years [42]. When compared to type I tumors, type II endometrial cancers are more likely to present or develop metastatic disease and have a less favorable diagnosis. In the presence of peritoneal metastases, the management becomes more complex and prognosis is poor, with a median survival not reaching 1 year. Bakrin [43] reported five patients with endometrial cancer treated by this combined modality, with a median survival of 19.4 months. Two patients experienced recurrent disease and died, while three patients are alive disease-free at 7, 23, and 39 months after treatment. 
Glehen [44] in a multi-institutional review of the French Surgical Association of 1290 patients with peritoneal metastases from various primary tumors reported the treatment of 17 patients with uterine adenocarcinoma and epidermoid carcinoma (4 patients), failing to give specific survival data for this specific group of patients. Delotte [42] in 2014 reported CRS and HIPEC in 13 patients with endometrial cancer. Five patients died of the disease and three are alive with disease at 14, 26, and 28 months, while four patients are alive disease-free at 1, 60, 60, and 124 months. In our Institution, we treated eight patients with a diagnosis of type II UPSC with CRS and HIPEC. In four patients, we observed recurrent disease, and two of them died of the disease at 9 and 13 months, while two are alive with disease at 19 and 26 months. Four patients are alive disease-free at 9, 14, 26, and 33 months. Treatment strategies for stage IV endometrial cancer remain controversial. Some reports highlight the histologic characteristics and extent of the disease as the main prognostic determinants, while others favor the effects of a more aggressive surgical cytoreduction. The long-term survival reported in these observational studies, higher when compared to those reported in literature with conventional treatments, seems to justify a more aggressive surgical attitude with the aim to leave the patients without residual visible disease. Management of peritoneal carcinomatosis from imatinib-resistant GISTosis Survival of patients with gastrointestinal stromal tumors (GIST) greatly improved with the clinical use of molecular-targeted therapies [45]. However, the prognosis of imatinib-resistant GIST disseminated to the peritoneum (spontaneously or during surgery) is poor. Accepted conventional treatments including palliative surgery, chemo, and/or radiotherapy are ineffective [46]. As with PM from other gastrointestinal or gynecologic epithelial tumors, a strong rationale favors aggressive locoregional treatment in these patients including peritonectomy procedures combined with HIPEC [47][48] even if its use is controversial due to the rarity of the condition and the few available published reports [49]. The results of our small series of three small bowel imatinibresistant GIST treated with CRS and HIPEC (two patients alive disease-free at 34 and 108 months, one patient died of disease at 38 months) are in line with similar reports ( Table 2) and compare favorably with historical control groups justifying an effort to optimize treatment of the primary or recurrent GISTosis. Management of peritoneal carcinomatosis from other unconventional miscellaneous tumors The optimal management of patients with rare and unusual primary tumors metastatic to the abdominal cavity is a matter of intense debate. Systemic chemotherapy for PM improved but remains limited because of poor diffusion of the drugs into the peritoneum. This is why many authors [1,34,35,[50][51][52][53][54][55] reported small observational series of patients with PM from various unconventional tumors treated by CRS and HIPEC (Table 2). This combined treatment modality has been used in peritoneal metastases from pancreatic, abdominal sarcomas, gallbladder, liver, cholangiocarcinoma, adrenal, urachal, Conclusions We can conclude that CRS and HIPEC should not be excluded "a priori" for the treatment of peritoneal metastases from rare or unusual ("unconventional") primary tumors. 
This combined multimodality therapeutic approach, performed in selected patients in an experienced peritoneal surface malignancy center, is safe and has shown to provide not only a better palliation but also a survival benefit over conventional palliative treatments. Competing interests The authors declare that they have no competing interests. Authors' contributions MC designed the study; participated in acquisition, analysis, and interpretation of data; and drafted, wrote, and revised the manuscript. PS and DB participated in the draft of the manuscript and analysis and data interpretation and critically revised the manuscript. VM participated in data acquisition and interpretation and preparation and revision of the draft of the manuscript. SS and FA participated in data acquisition and analysis and interpretation and critically revised the manuscript. DM participated in the critical revision of the draft of the manuscript. BS participated in the critical revision of the draft of the revised manuscript after the reviewers' comments. ADG participated in data interpretation and critical revision of the manuscript. All authors gave final approval of the version to be published and agreed to be accountable for all aspects of the work.
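For readers who want to reproduce the kind of survival comparison reported above (Kaplan-Meier curves for CC0 versus CC1 patients with a log-rank test and a p < 0.05 cut-off), a minimal sketch is given below. It assumes the Python lifelines package and uses invented follow-up numbers purely for illustration; the study's individual patient data are only reported in aggregate and are not available here.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and event indicators (1 = death observed),
# NOT the study data, which are only reported in aggregate in the paper.
cc0_months = [56, 62, 41, 33, 108, 27]
cc0_events = [1, 0, 1, 0, 0, 1]
cc1_months = [13, 9, 26, 19, 38]
cc1_events = [1, 1, 0, 0, 1]

# Kaplan-Meier estimate of the survival curve for the CC0 group.
kmf = KaplanMeierFitter()
kmf.fit(cc0_months, event_observed=cc0_events, label="CC0")
print("CC0 median survival (months):", kmf.median_survival_time_)

# Log-rank test for the CC0 vs. CC1 difference.
result = logrank_test(cc0_months, cc1_months,
                      event_observed_A=cc0_events, event_observed_B=cc1_events)
print("log-rank p value:", result.p_value)
```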
The Dark $Z'$ and Sterile Neutrinos Behind Current Anomalies We show how, in the $B-L$ extension of the SM (BLSM) with an Inverse Seesaw (IS) mechanism for neutrino mass generation, a light $Z'$ state with moderate couplings to SM objects, hence `dark' in its nature, can be associated, in conjunction with light sterile neutrinos, to some present day data anomalies, such as the anomalous magnetic moment of the muon as well as a possible signal indicating the existence of sterile neutrinos in neutrino beam experiments. Introduction Despite its huge successes, the Standard Model (SM) of particle physics has several drawbacks which require one to conceive some Beyond the SM (SM) physics. Its Achilles' heel is probably the leptonic sector, though, as neutrino masses are forbidden in the SM, yet, experiments have verified that neutrino flavours oscillate which in turn implies that neutrinos have finite masses. Neutrinos are strictly massless in the SM essentially due to two reasons: (i) the absence of their right-handed eigenstates; (ii) an exact global Baryon minus Lepton (B − L) number conservation. However, a modification of the SM, based on the gauge group SU (3) C ×SU (2) L ×U (1) Y ×U (1) B−L , nicknamed the B − L extension of the SM (BLSM), wherein the additional Abelian group is elevated to be a local symmetry, can account for light neutrino masses through an Inverse Seesaw (IS) mechanism [1,2]. In such a construct, the aforementioned right-handed neutrinos would acquire Majorana masses at the B − L symmetry breaking scale, but they are not allowed to do so by the discussed B − L gauge symmetry and another pair of SM gauge singlet fermions with tiny masses, of O(1 keV), must be introduced. Therefore, such a small scale can be considered as a slight breaking of the underlying gauge symmetry, hence, according to 't Hooft criteria, its dynamics becomes natural. One of these two singlet fermions couples to right-handed neutrinos and is involved in generating the light neutrino masses. The other singlet (usually called inert or sterile neutrino) is completely decoupled and interacts only through the B − L gauge boson, a Z , ensuing from the spontaneous breaking of the additional U (1) B−L group [3], so that it may account for warm Dark Matter (DM) [4] (see also Ref. [5]), the lack of a viable candidate for it being another significant flaw of the SM. This construct, BLSM-IS for short, predicts several testable signals at the Large Hadron Colider (LHC) through some of the new particles that it embeds: the Z (neutral gauge boson) associated with U (1) B−L , an extra Higgs boson (h , in fact, an additional (pseudo)scalar singlet state is introduced to break the gauge group U (1) B−L spontaneously) and heavy neutrinos (ν h , which are required to cancel the associated new gauge anomalies and are thus necessary for the consistency of the whole model). Ref. [6] reviewed the LHC potential to access the BLSM-IS, including its Supersymmetric extension [7,8,9], when the Z mass is of order TeV and such a state is relatively strongly coupled to SM states. In this paper, we aim instead at considering the case of a very light Z , of MeV scale, very mildly coupled to SM objects, specifically, whether it can be responsible, together with the aforementioned sterile neutrinos, of data anomalies that have emerged from the E821 experiment at BNL and the Muon g − 2 one at FNAL as well as the MiniBooNE (MB) collaboration also at FNAL. 
In fact, the former two hinted at statistically significant deviations from the SM predictions of the anomalous magnetic moment of the muon, (g − 2) µ for short, which could be explained by a very light Z state, while the latter one was taken as a sign of the possible existence of sterile neutrinos. The plan of the paper is as follows. In the next section, we describe the BLSM-IS. In Sect. III, we discuss both direct and indirect experimental constraints on light Z and sterile neutrino states. We then move on to present our results for (g − 2) µ . After this, we discuss our explanation for the MB excess. Finally, in the last section, we present our summary. The model To start with, in the BLSM-IS that we consider here, we assume that the SM singlet scalar χ, which spontaneously breaks U (1) B−L , has B − L charge = −1. Also, the three pairs of SM singlet fermions, S 1,2 with B − L charge = ∓2, respectively, are introduced (see tab. I, wherein l L,R refer to leptons, Q L , u R , d R identify quarks and φ is the Higgs state of the SM). Limited to the leptonic sector, the BLSM-IS Lagrangian is given by withφ = iσ 2 φ * . Using the unitary gauge parameterisation, the kinetic terms become and where g ( with θ W the weak mixing angle while −π 4 ≤ θ ≤ −π 4 such that tan 2θ = 2g g 2 + g 2 The neutral gauge boson masses are determined by fixing the values of the new parameters as In the BLSM-IS model, the Majorana neutrino Yukawa interaction induces the masses onto the SM neutrinos after U (1) B−L symmetry breaking via the Lagrangian terms with a Dirac mass m D = 1 √ 2 λ ν υ and a Majorana mass m N = 1 √ 2 λ S υ . The 9 × 9 neutrino mass matrix can be written as In order to avoid a possible large mass term mS 1 S 2 in the Lagrangian, that would spoil the IS structure, one assumes a Z 2 symmetry under which ν R , χ, S 2 and the SM particles are even while S 1 is an odd particle. The neutrino mass matrix M ν can be diagonalised by the matrix V as 1 with V a 9 × 9 matrix defined as [10] The upper 3 × 3 block are the parameters for the effective Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix with its elements given by in terms of the actual one. The off-diagonal blocks of the V matrix are defined via with θ ∼ m D m −1 N . The matrix V 6×6 diagonalises the right-handed Majorana neutrinos and S 2 . The diagonalisation of the entire neutrino mass matrix leads to the following light and heavy neutrino masses: 1 Assuming there are no complex Majorana phases and the Lagrangian parameters are real. with the latter being pair degenerate. With this structure, the light neutrinos can be of order eV, as required by flavour oscillation experiments, and, with a small µ s value, the ensuing Yukawa coupling is no longer restricted to be very small, indeed, it can be of order one. Moreover, the mixing between light and heavy neutrinos is constrained from lepton flavour violation measurements to be of order O(0.01) as discussed in [11,12,13] and references therein. The tree level coupling of the Z with charged and neutral fermions is expressed as while the ones with active and sterile (light and heavy) neutrinos are given by with θ constrained from LEP experiment to be ∼ 3 × 10 −3 [14]. Direct and Indirect constraints on light Z and sterile neutrinos In this section we discuss the direct and the indirect constraints for low mass Z and sterile neutrinos. In fig. 
1 we show the most severe constraints on the light Z mass as a function of the Z gauge coupling g (B−L) and the gauge kinetic mixing parameterg from existing low energy experiments. To recast these bounds on those applicable to our model we used the method of Ref. [15] and produced fig. 1 by using the code advertised in the same paper. The key here is that low energy experiments setting bounds on the low mass photon (a dark photon, A ) considered therein also set limits on a light dark Z , so long that one accounts for the gauge kinetic mixing and axial coupling. Recasting a dark photon search that used the final state F in constraints onto our model can be done, for each Z mass, by equating the upper limit total cross section of dark photon models to the Z one in our model as follows: with σ Z /A being the production cross section and BR(Z /A → F ) the Branching Ratio (BR) of the light gauge boson into the final state F while Z /A is the detector efficiency. Therefore, one can see that, in order to recast the aforementioned experimental limits in terms of our model parameters, we only need the ratios σ Z /σ A , BR(Z → F )/BR(A → F ) and Z / A . In the following, we are going to discuss how these ratios can be obtained for each experiment. • The BaBar detector at the PEP-II B-factory [16] has collected 53 fb −1 of e − e + collisions looking for events with a single high-energy photon and large missing (transverse) momentum or energy which is consistent with the process e − e + → γX and X → invisible, with X being a light gauge boson with spin equal to 1. Further, in [17], the BaBar experiment searched for a single high energy photon plus a dilepton final state, e − e + → γX and X →ll, with l = e, µ. In both searches no statistically significant deviations from the SM predictions have been observed and a 90% Confidence Level (CL) upper limit on the light gauge boson coupling to leptons in the mass range of 0.02 − 10.2 GeV has been set. Recasting this limit onto our model we obtain with g Z being the Z coupling to charged and neutral leptons, eqs. (1) and (15), and g X being the measured gauge boson coupling to charged and neutral leptons. • The A1 Collaboration at the Mainz Microtron (MAMI) [18] searched for the signal of a new light U (1) gauge boson in electron-positron pair production. Since no deviation from the SM value for the corresponding cross section has been observed, A1 set a limit on the light gauge boson coupling over the mass range 40 − 300 MeV. To recast this limit on our model parameters, we have again made use of eq. (17). • Electron beam dump experiments (like E141, E774 and those at KEK and Orsay) also have sensitivity to a new light gauge boson. An overview of the different electron beam dump experiments and their properties is given in [19]. For the SLAC E141 experiment [20], an upper limit is set for neutral particles with masses in the range 1 − 15 MeV following the nonobservation of any excess above the SM bremsstrahlung rate for events of the type e + N → e + N + X. From the Fermilab E774 experiment [21], an upper limit for neutral particles which decay into electron-positron pairs was set. In the electron beam dump experiment at KEK [22], no signal was observed in their search for axion-like particles. The electron beam dump experiment in Orsay [23] also found no positive signal when looking for light Higgs bosons decaying into electron-positron pairs. 
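To make the recasting procedure described above concrete, the short sketch below shows the rescaling step in code. This is an illustrative sketch rather than the implementation used to produce the exclusion plot: it assumes that the production cross sections scale with the square of the respective gauge couplings, and the numerical inputs are placeholders, not values taken from any of the experiments listed.

```python
import math

def recast_coupling_limit(eps_limit_dark_photon,
                          sigma_ratio,   # sigma_Z' / sigma_A' at equal coupling strength
                          br_ratio,      # BR(Z' -> F) / BR(A' -> F)
                          eff_ratio):    # detector efficiency ratio eps_Z' / eps_A'
    """Translate a dark-photon coupling limit into a limit on the Z' coupling.

    The published limit excludes any signal with sigma * BR * efficiency above the
    observed bound; demanding the same yield for the Z' and using sigma ~ coupling^2
    gives the simple rescaling below.
    """
    return eps_limit_dark_photon / math.sqrt(sigma_ratio * br_ratio * eff_ratio)

# Illustrative numbers only (not taken from any experiment):
g_limit = recast_coupling_limit(eps_limit_dark_photon=1e-3,
                                sigma_ratio=0.8, br_ratio=1.2, eff_ratio=1.0)
print(f"recast upper limit on the Z' coupling: {g_limit:.2e}")
```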
Combining and reinterpreting these last three experiments, one is able to exclude a light boson over the mass range 1.2 − 52 MeV. • Proton beam dump experiments, like NOMAD [24] and CHARM [25], also found no positive signal while looking for axion like-particles decaying to leptonic pairs, following which the 0.1 − 20 MeV mass range is also precluded to a dark photon or Z . • The NA64 experiment at the CERN SPS [26] found no deviation from the SM expectation while looking for dark photons in the process e − N → e − N A . Hence, a new limit has been set on the A (dark photon) mixing and the absence of invisible A decays excluded the mass range M A ≤ 100 MeV. • The DELPHI experiment at LEP2 [27] analysed single photon events in looking for extra dimension gravitons. As in [28], since the measured single-photon cross sections are in agreement with the expectations from the SM, an upper limit on the coupling and mass of the dark candidate was set, the latter being above 10 GeV. Before moving on to study the relevant experimental observables, we should mention that we have used SPheno [29,30] to generate the model spectrum as well as HiggsBounds and HiggsSignals [31,32,33,34,35] to check the constraints on the Higgs sector of it. Also, we have used FlavourKit [36] to check lepton flavour violation constraints. The muon anomalous magnetic moment The Lande g factor for muons, and its deviation from the tree level value of 2, represents one of the most precisely measured quantities in the SM. Therefore, it is also an excellent probe for new physics. Currently, there exists a long standing and statistically significant discrepancy between its measurement and the theoretically predicted value [37,38,39,40] In this section, we focus on a light Z as a means to solve the current muon anomalous magnetic moment anomaly. Following the general formula in [42], the interaction Lagrangian of a Z with muons can be rewritten as where C V and C A are the vector and axial couplings introduced in eq. (14). The Z modifies the muon magnetic moment via the one loop diagram in fig. 2. The Z contribution can be obtained as [43] with x being the Feynman parameter. For a low mass Z , its contribution to the muon anomalous magnetic moment is This viable region of model parameters is also compliant with the constraints given in [43], which included the following ones. 2. Neutrino scattering bounds: several neutrino scattering experiments results on couplings to muons and muon neutrinos, the most stringent ones of these being from Borexino [46] and CHARM-II [47]. Observation of energy loss in supernovae due to Z − interactions set constraints on the B − L model parameters [45,48,49,50]. A Z mass up to 100 MeV is constrained in the (M Z , g (B−L) ) plane [45]. Both cosmological (BBN) and astrophysical (SN1987A) limits are model dependent. For instance, the chameleon effect due to the environmental matter density and late reheating can weaken the SN1987A [51] and BBN [52] limits, respectively. In the study of the astrophysical limit, a pure Z model has been considered [45], but in our model we have an extended scalar sector. In the presence of new scalar states, the limits on Z change dramatically and can be avoided when a neutral state couples to a dark matter particle [53], as is the case in our model. A model-independent fit to all such experimental data (thus including (g − 2) µ ) reveals the following parameter values as viable. 2. An axial coupling of the Z to electrons larger than the vector one: 3. 
A large vector coupling to muons, 5 × 10 −4 < |C V µ | 0.05, and an axial coupling C Aµ that is smaller by at least a factor of a few. We now move on to study the MB anomaly and its theoretical implications. MiniBoone In this section, we will study the anomaly registered by the MB experiment, wherein the beam primarily consists of ν µ 's produced via pion decay. The relevant process leading to an electron excess is ν µ N (k) → N (k )ν 4 e + e − −, as shown in fig. 4, where ν 4,5 are sterile neutrinos, with m ν5 > m ν4 . (Previous work explaining the results in Ref. [54] using sterile neutrinos can be found in [55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71].) The process is mediated by the new Z producing a collimated e + e − pair (producing the visible light that makes up the signal) through the decay of ν 5 into ν 4 , which then makes the cross section proportional to |U ν5ν4 | 2 . By calculating the liftime of ν 4 , it is found to be 1.76 second. The corresponding decay length is 5.3 × 10 8 m which is way greater than the MB dimention of 12.2 m. Thus, ν 4 will decay away outside the detector. So that there is no additional EM deposition in the detector due to the ν 4 decay. The form factor of the coupling of the Z with nucleons N is where k and k are the initial and final nucleon momenta whereas The isoscalar form factors F 1 V (q 2 ) and F 2 V (q 2 ) for the nucleon are given by [72] where m N = 0.938 GeV, F D (q 2 ) = (1 − q 2 /0.71 GeV 2 ) −2 with a p ≈ 1.79 and a n ≈ −1.91 being coefficients related to the magnetic moments of the proton and neutron, respectively. The total differential cross section has two components, an incoherent and a coherent one, which we will both consider. The total differential cross section, for the target in MB, i.e., CH 2 , is given by The incoherent contribution from the single nucleon cross section is multiplied by the total number of the nucleons present in CH 2 , i.e., 14. However, the entire carbon nucleus contributes to the coherent process weighted by the exponential factor exp(2b(k − k) 2 ) [73], where b is a numerical parameter, which for C 12 has been found to be 25 GeV −2 [74,73]. The coherent process decreases as q 2 = (k − k) 2 increases, where q 2 is negative. The number of events is given by [75] with E h ∈ [E h , E h + ∆E h ] and where Φ ν is the incoming muon neutrino flux. Here, n is the number of nuclei in the fiducial volume of the detector. In the case of MB, the target is 818 tons of mineral oil (CH 2 ) with atomic mass 14 [54], as mentioned, so that n = 3.5174 × 10 31 . Furthermore, η = 0.2 contains all the detector related information like efficiencies, Protons-on-Target (POT), etc. The latest data set for the neutrino mode, corresponding to 18.75 × 10 20 POT, as detailed in [54,75], has been used in our fit. Finally, for these values, the calculated lifetimes of the ν 5 and Z states in their rest frame are 10 −17 s and 1.8 × 10 −12 s, respectively. The value of E ν5 is related to the visible energy, E vis = E e + + E e − , as follows (26) Furthermore, the Mandelstam variables in terms of the neutrino(lepton) energy E ν (E l ) are Then, t and E l lie in the intervals where the energy and momentum of the neutrino and lepton in the center of mass (cm) system are The threshold neutrino energy to create the charged lepton partner is given by where m l , M p and M n are the masses of the charged lepton, proton and neutron, respectively. 
The differential cross section in the laboratory frame is given by where We have then verified our analytic calculations with MadGraph [76], where the nucleon form factor in eq. (22) is implemented effectively in the Universal FeynRules Output (UFO) files [77], by fixing q 2 = M 2 Z . To measure the goodness of the fit between the BLSM-IS and the measured data, we constructed a χ 2 test function as with δ ij the covariance matrix that contains the uncorrelated intrinsic experimental statistic and systematic uncertainties in its diagonal entries. Fig. 5 we show the prediction for two BLSM-IS signals obtained by adopting two benchmark points with M Z = 20, 30 MeV and fixed g (B−L) = −10 −4 ,g = 0.2, θ = 3 × 10 −3 , m ν4 = 60 MeV and m ν5 = 110 MeV, together with the background and against the data collected by MB which appear anomalous. We find a good agreement between predictions and data up to a 5σ CL. Finally, in fig. 6, shows the result of the above fit to the measured MB data extracted from [54] over the Z mass range 2 − 130 MeV with fixed m ν4 , m ν5 and θ values while g (B−L) andg have been chosen at their maximal allowed values for the given M Z (as in fig.1). The fit shows that we can reach the 5σ CL for Z masses in the range of 15 − 25 MeV. Conclusions In summary, in this letter, we have argued that two anomalies presently stemming from noncollider experiments, specifically, in the measurement of the anomalous magnetic moment of the muon at the E821 experiment at BNL and the Muon g − 2 one at FNAL as well as in the study of appearance data in the MB short-baseline neutrino experiment at FNAL, hint at a common explanation relying on some BSM physics that might involve both a light Z and light neutrinos, all being extremely weakly coupled to the visible sector (so as to being dubbed dark and sterile, respectively). There is a BSM scenario that can incorporate these new force and matter states in a minimal formulation, thereby being notionally able to explain the aforementioned data sets without invoking an excessing number of new parameters. This is the so-called BLSM-IS, wherein the SM gauge group is supplemented by an additional, spontaneously broken U (1) B−L invariance, obtained by localising the accidental global B − L conservation of quantum numbers that appears in the SM, in combination with an IS mechanism for neutrino mass generation. The requirement of theoretical self-consistency of this BSM scenario in fact imposes the simultaneous presence of a Z state following the B − L breaking, which can be made light rather naturally, and of multiple sterile neutrinos, which are per se rather light. Herein, we have put the BLSM-IS explanations to the aforementioned data anomalies on firm quantitative grounds. In fact, solutions have been found to both anomalies simultaneously for the following ranges of BLSM-IS parameters: M Z = 15 − 25 MeV, g (B−L) ∼ −10 −4 ,g ≈ 0.2, θ = 3 × 10 −3 , m ν4 = 60 MeV and m ν5 = 110 MeV.
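As a closing illustration of the (g − 2)_μ discussion above, the snippet below numerically evaluates the standard one-loop contribution of a light neutral vector boson with a vector coupling to muons. This is a hedged sketch: it uses the textbook Feynman-parameter integral for a pure vector coupling, omits the axial piece that the analysis above also includes, and the inputs are illustrative values from the quoted parameter ranges rather than a fit result.

```python
import math
from scipy.integrate import quad

M_MU = 0.1056583745  # muon mass in GeV

def delta_a_mu_vector(c_v, m_zprime):
    """One-loop (g-2)_mu shift from a light vector boson with vector coupling c_v.

    Textbook Feynman-parameter form:
      (c_v^2 / 4 pi^2) * Integral_0^1 dx  m_mu^2 x^2 (1 - x) / (m_mu^2 x^2 + M_Z'^2 (1 - x)).
    The axial-coupling contribution is omitted in this sketch.
    """
    integrand = lambda x: (M_MU**2 * x**2 * (1 - x)) / (M_MU**2 * x**2 + m_zprime**2 * (1 - x))
    integral, _ = quad(integrand, 0.0, 1.0)
    return c_v**2 / (4 * math.pi**2) * integral

# Illustrative inputs only (within the ranges quoted above, not a fit result):
print(delta_a_mu_vector(c_v=5e-4, m_zprime=0.020))
```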
Self-Determined Reciprocal Recommender System with Strong Privacy Guarantees Recommender systems are widely used. Usually, recommender systems are based on a centralized client-server architecture. However, this approach implies drawbacks regarding the privacy of users. In this paper, we propose a distributed reciprocal recommender system with strong, self-determined privacy guarantees, i.e., local differential privacy. More precisely, users randomize their profiles locally and exchange them via a peer-to-peer network. Recommendations are then computed and ranked locally by estimating similarities between profiles. We evaluate recommendation accuracy of a job recommender system and demonstrate that our method provides acceptable utility under strong privacy requirements. INTRODUCTION Recommender systems support users in their search for relevant information, products, and services by presenting items to a user that match her preferences or needs. In a reciprocal recommender system, users are recommended to each other. These recommendations are different from one-sided product recommendations, where products are recommended to the user but not vice versa. A typical example for reciprocal recommender systems are dating platforms, where people search for a matching partner [28]. Other areas that use two-sided recommender systems are mentoring systems and job recommendations-the use case of our work. Most existing reciprocal recommender systems provide a platform that connects users to each other. The systems are based on a centralized client-server architecture. In order to receive recommendations, users disclose a significant amount of personal data to create a user profile, which is collected and stored on a server. This approach poses a serious threat to users' privacy and data control. In this paper, we propose a content-based distributed recommendation approach. We develop a distributed recommender system in which users exchange their profiles directly to calculate similarities and to receive recommendations. By following this approach, users determine whom to share the data with and therefore gain control over their data. To this end, we use Bloom filters [5] to encode user profiles and apply the randomized response technique (RRT) to provide strong privacy guarantees, i.e., differential privacy [8]. In order to exchange user profiles, we avoid a central architecture. At the same time, we believe that a synchronous peer-to-peer (P2P) network would not be practically viable. Instead, we propose to exchange data via a P2P data network, specifically the InterPlanetary File System (IPFS) [3], which offers asynchronous data access. We apply and discuss our approach in the context of a P2P job marketplace, which we use as running example. Not only since the EU's General Data Protection Regulation (GDPR), processing personal data is regulated. In particular, personal data are considered sensitive and require special protection. Hence, we believe that a job marketplace is an ideal use case as it has two sides (recruiters and candidates) and emphasizes the data protection needs. In our evaluation, we expose the impact of the privacy guarantees on utility. In particular, we show that comparable utility can be achieved using a Bloom filter to encode profiles, instead of a binary vector. Even for strong privacy guarantees ( < 10), parameters can be found which recommend more than 75 % of the top 20 candidates accurately with high precision for the top ranked candidates. 
The main contribution of our work is to provide a distributed reciprocal recommender system that is self-determined and offers strong privacy guarantees. The remainder of this paper is organized as follows: We describe our running example, the P2P job marketplace, in Section 2. In Section 3, we introduce our privacy-preserving recommendation system and provide information about the privacy guarantees. Subsequently, we describe data distribution in Section 4. In Section 5, we evaluate our system regarding the privacy-utility trade-off. We review related work in Section 6 and conclude the paper in Section 7. USE CASE: JOB MARKETPLACE Recommender systems are usually implemented as centralized platforms. This, however, makes the platform less self-determined, prone to data breaches, and implies privacy risks. To emphasize this situation, we use a job marketplace as our running example. According to the GDPR, processing Human Resource (HR) data is allowed only under restricted circumstances and with increased means of data protection. In particular, it is difficult for consent to be freely given by job candidates, meaning that it is unlikely to provide a valid basis for processing HR data. Therefore, the marketplace is a relevant example of a distributed system with self-determined data control. Example: Two-sided Job Marketplace. In recent years, recruiting and selection processes have become part of the success of an organization's future growth and the retention of employees. The marketplace is divided into two parts: job recommendation and candidate recommendation [15]. Matching jobs with candidates is based on similarities of the corresponding profiles. These profiles may include features that consist of keywords extracted from job requirements and candidate skills/preferences. That is, job candidates receive suitable job recommendations and recruiters receive candidate recommendations for a job. More specifically, the top ranked jobs or the top ranked candidates that best fit the candidate profile or job profile, respectively, are returned. The marketplace can therefore be classified as a reciprocal recommender system. On the marketplace, recruiters provide a job description including a job profile with requirements and the necessary meta-data for the job application. Candidates interested in using the marketplace can provide their profile, including their skill set or interests, or only receive job profiles to calculate possible recommendations. Since job profiles contain no personal information, we assume that they can be made publicly available. Moreover, we assume that marketplace users (candidates and recruiters) are globally distributed. The system is most suitable for global inter- or intra-company recruitment. Our approach concentrates on the recommendation process. The recruitment procedure takes place outside of the marketplace, thus it is out of scope of this work. RECOMMENDATION APPROACH In this section, we describe the profile representation and the similarity score for our privacy-preserving content-based recommendation approach. Content-based recommendations use features of items and compute similarities between items to recommend items similar to those that users have preferred in the past. For the proposed marketplace, this means we assume that candidates prefer jobs that are similar to their previous jobs. Profile Representation The job profiles as well as the candidate profiles are based on keywords extracted from job requirements and candidate skills, respectively. This set of keywords can be represented as a vector. While in principle the overall set of keywords is finite and well defined, the number of possible keywords can be very large and previously unseen keywords may appear in new jobs. Therefore, we propose to use a probabilistic data structure to represent profiles, to keep the memory footprint small, and to be able to handle unknown keywords. Hence, we represent the set of keywords in a Bloom filter [5]. Bloom filters have been introduced for approximate membership queries but can also serve as an item representation for recommender systems [20]. The data structure of a Bloom filter consists of a bit
This set of keywords can be represented as a vector. While in principle the overall set of keywords is finite and well defined, the number of possible keywords can be very large and new keywords may not appear in new jobs. Therefore, we propose to use a probabilistic data structure to represent profiles, keep the memory footprint small, and to be able to handle unknown keywords. Hence, we represent the set of keywords in a Bloom filter [5]. Bloom filters have been introduced to approximate membership queries but can also serve as item representation for recommender systems [20]. The data structure of a Bloom filter consists of a bit B j of job "data analyst": "statistics" "python" array of a fixed length using different hash functions [5]. At the beginning, the Bloom filter is empty, i.e., all bits in the array are set to 0. The hash function ℎ ( ) with ∈ {1, . . . , } takes an input and determines a position in the bit array that is accordingly set to 1. Membership of a value can be tested by repeating the process and checking all respective positions ℎ ( ) in the array. If there is at least one bit set to 0, the value is not member of the Bloom filter. If all corresponding bits are set to 1, the value may be a member of the Bloom filter, but we cannot be certain. Due to bit collisions, influenced by the parameters and , false positives can occur. For our use case, each job/candidate profile consists of a Bloom filter of the same length and with the same number of hash functions. In Figure 1, we sketch the procedure of adding the keywords to the Bloom filter. Assume for example the job profile "data analyst" has the keywords "statistics" and "python". These keywords are hashed and the corresponding bits are set to 1. Job Recommendation In order to obtain recommendations, candidates can derive personalized recommendations locally without disclosing their profile. To this end, we use the cosine similarity as metric as it is directly applicable to vectors and widely used in content-based recommender systems [22]. That is, candidates download the Bloom filter of a job offering and compare it to their own Bloom filter . They calculate the cosine similarity of the two Bloom filters, given by where the numerator is the scalar product. Since our proposed recommender system is self-determined, candidates can identify the top jobs with the highest similarity. Additionally, candidates can also set a threshold value, i.e., jobs with a similarity higher than a customizable value are recommended. Candidate Recommendation A recruiter needs candidate profiles to obtain the top candidates for her job offering. We assume that the recruiter has received candidate profiles in the representation of a Bloom filter as described in the previous section. Due to the false positive rate of a Bloom filter, there is a certain uncertainty that provides some privacy. For example in Figure 1, a hash value of "statistics" and "python" are both mapped to the same bit (5th bit). While this provides some privacy, it is unable to guarantee strong privacy [4]. For example, an adversary could still determine that a profile does not have certain keywords by inspecting the bits set to 0. Our goal is therefore to offer strong privacy guarantees for profiles, i.e., differential privacy which enables plausible deniability [8]. Differential privacy is a strong privacy definition, providing privacy guarantees to the input of a computation regardless of the amount of background knowledge of an adversary. 
Local differential privacy (LDP) satisfies differential privacy in the local setting. In other words, a data collector is not trusted and therefore candidates perturb their profile locally. A function f provides ε-differential privacy [8] if all neighboring pairs of profiles B and B′ and all O ⊆ Range(f) satisfy Pr[f(B) ∈ O] ≤ e^ε · Pr[f(B′) ∈ O]. In other words, the result of f should be similar independent of the input, i.e., it is irrelevant whether a candidate reports B or B′. LDP is guaranteed by flipping the bits of the Bloom filter based on the RRT, a method used for surveys [27]. With a probability p, a bit of the Bloom filter is flipped, i.e., a 1 changes to 0 and, vice versa, a 0 to 1. Otherwise, the bit is not changed and remains the same. In other words, each bit in the Bloom filter (0 and 1) yields plausible deniability, as it remains unclear whether a set bit is a result of randomization or truly corresponds to a certain keyword. Differential privacy is guaranteed with p = 1/(1 + e^(ε/k)). A candidate distributes the differentially private Bloom filter B_c′ to recruiters. The recruiter can then calculate the scalar product between the perturbed candidate profile B_c′ and the job profile B_j to determine the similarity between the candidate and job profiles. Due to the randomization, however, they obtain a perturbed scalar product. To obtain an unbiased cosine similarity, the scalar product is corrected using the number of 1s in the candidate's perturbed Bloom filter, ‖B_c′‖_1, following the estimator given in [1]. The cosine similarity is then calculated with the corrected scalar product and Equation (1). For more details, we refer the reader to [1]. Note that correcting the scalar product only removes noise from the similarity, not from the candidate Bloom filters, effectively preserving privacy. CONTENT DISTRIBUTION & STORAGE The recommendation is calculated locally. Therefore, candidates and recruiters need to exchange candidate and job profiles, respectively. Recruiters and candidates effectively build a P2P network. In a naive approach, the P2P network needs to be synchronous to enable direct exchange between recruiters and candidates. While we might be able to assume that recruiters are available at all times, candidates can go offline unexpectedly. We therefore propose to use the InterPlanetary File System (IPFS) as a distributed storage layer, which we illustrate in Figure 2. IPFS is a data network [6], offering distributed data storage and sharing. IPFS allows candidate and job profiles to be distributed asynchronously without introducing a central server. IPFS therefore combines the best of both worlds, and users gain more control over their data. Data is stored only on a user's device and only replicated on demand. For our use case, we face three main challenges: ensuring profile availability, job and candidate profile discovery, and access control. Profile Availability In order to ensure the data is available even after a node leaves the network, data has to be replicated by multiple peers of the network. In IPFS, data is replicated passively, by volunteers, or cache-based. Initially, data is made available by a single node only (the data source), and will be available as long as the node remains online. In addition, cache-based replication improves availability of popular content. In order to ensure asynchronous availability, however, data needs to be actively replicated. For active replication, there are two possibilities: Filecoin [21] and IPFS Cluster.
Filecoin is a blockchain-based incentive layer, where storage and retrieval of data is compensated via the filecoin token (FIL). In a job marketplace, this generally might be a feasible option: recruiters pay a fee to make the job profiles available, increasing the range of possible candidates. Candidates pay the fee to make their profiles available to increase their consideration for job offerings. At the time of writing storing data in Filecoin costs around 4.8 / for one epoch (30 ) 1 and the minimum duration for storage deals is 180 . The costs are therefore low with roughly 2.5 to store 1 (1 ≈ 72 $) 2 and negligible for our use case. Nevertheless, we believe that using Filecoin is more suitable for an inter-company marketplace than an intra-company marketplace. In an intra-company marketplace, it might be questionable, who would be willing to pay for storage. In general though, the main disadvantage is that using Filecoin introduces a blockchain and can 1 https://file.app/ (2021-06-03) 2 https://coinmarketcap.com/en/currencies/filecoin/ (2021-06-03) therefore introduce additional privacy problems. A blockchain is an immutable public ledger, and therefore reveals information about the pseudonymous participants. Instead, IPFS Cluster provides the possibility for pinset orchestration. The cluster ensures the availability of files by managing a certain degree of replication (the pinset) between the cluster peers. It actively replicates data, even when nodes leave the cluster. The cluster peers build an overlay network, separate from the IPFS overlay network, and the replication is independent from the IPFS network. Hence, recruiters build a cluster to increase the availability, ensuring candidates can acquire available job profiles. Additionally, the cluster can replicate candidate profiles, if candidates give their consent. Since recruiters have a strong interest to make job profiles constantly available, IPFS Cluster is a viable solution to offer asynchronous, yet distributed and self-determined, data sharing. Profile Discovery In order to query candidate and job profiles, marketplace users need to know the respective CIDs of profiles. Specifically, data is addressed by the root CIDs of the Merkle DAG. Since CIDs are based on the content, changing data changes the CID and CIDs are in general not human-readable. As a result, we need a mechanism to inform marketplace users of existing job and candidate profiles. To this end, we utilize the Publish/Subscribe (pubsub) architecture from libp2p to announce CIDs. In the network, peers subscribe to a topic, i.e., job or candidate profiles, and are informed of published content. That is, recruiters publish their job profile in IPFS and distribute the respective root CID in the network, where subscribed candidates receive new CIDs and use it to retrieve the profile. Similarly, candidates can publish their profile and recruiters receive the candidate profiles. The information is disseminated in the network via gossiping. While pubsub ensures that active users receive new messages, new users also want to receive information about relevant profiles. Therefore, a bootstrapping mechanism for CIDs is necessary. To this end, we maintain a list of CIDs. After creating a new profile, the list is updated and new data can be identified. Due to contentaddressing of CIDs, the list's CID changes with every update. 
In order to address this issue, we suggest using IPFS's name service, the InterPlanetary Name System (IPNS), which maps the hash of a public key to a CID. That is, changing content changes the mapping, effectively informing users about the availability of new job profiles. The maintenance of the list itself is managed by the IPFS Cluster. If candidates do not want to share their data with potentially everyone on the marketplace, they can share it deliberately using an out-of-band channel, e.g., by sending an e-mail with the CID.

Access Control

A downside of IPFS is the lack of built-in data access control. Once shared, data subjects lack control over the replication of and access to their data. Moreover, there is no dedicated way to delete data. In general, everyone can request all blocks, and curious users can observe requested and announced CIDs. Since we consider job offerings public anyway, the lack of access control is not a problem for them. However, candidate profiles contain personal data, such as names, and therefore need additional protection. We suggest encrypting candidate profiles to maintain confidentiality. The decryption key is then used as a way to grant access, by sharing it deliberately via other channels with selected individuals. Some research proposes a blockchain for managing access and sharing decryption keys [26].

EVALUATION

We perform our evaluation on a real dataset provided by the online employment website CareerBuilder. The dataset contains, inter alia, information about job postings, candidates, and their application history. For our training set we randomly select 10,000 job postings from the job dataset. The candidate profiles are generated from the application history. To this end, we selected all users with at least five applied jobs that are in the job posting dataset as well. Our test dataset for recommending candidates consists of new jobs, i.e., jobs from the training set that none of the candidates have in their application history. In total, we have 10,000 jobs, 128 candidate profiles, and 7,175 open job profiles. We assume that the requirements for a job offering are vectorized. Therefore, we extract keywords from job titles, descriptions, and requirements using TF-IDF (term frequency-inverse document frequency), a commonly used weighting scheme. Keywords that occur frequently in one job (term frequency) but rarely in other job profiles (inverse document frequency) have a higher score and are more likely to be relevant for describing a job. For each job, we use all keywords. In total, each job is on average described by 176 keywords (min 2 keywords; max 588 keywords). In the experiment, we evaluate the candidate recommendation. More specifically, for each job offering, we compute the cosine similarity for each applied job in a candidate's profile. In order to generate candidate recommendations, we rank the candidates according to the mean similarity of their applied jobs. In our evaluation, we aim at comparing the recommendations of the binary vector model with those of a Bloom filter model (bf) and a Bloom filter model with differential privacy guarantees (bf-DP). The binary vector model uses a binary representation of the keywords of each job and represents our ground truth. In the other two models, keywords are added to Bloom filters. Additionally, in bf-DP, the Bloom filter is perturbed according to a privacy parameter ε. We evaluate the utility of our recommendations by considering a scenario where the recommendation list contains the top N candidates.
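A minimal sketch of how such a top-N list can be scored; the two metrics are defined precisely in the next paragraph, and the candidate identifiers below are made up.

def precision_at_n(recommended, relevant, n):
    # Fraction of the top-n recommended candidates that are relevant.
    return sum(1 for c in recommended[:n] if c in relevant) / n

def average_precision_at_n(recommended, relevant, n):
    # Average of the precisions at each rank that holds a relevant candidate (one common variant).
    hits, precisions = 0, []
    for rank, candidate in enumerate(recommended[:n], start=1):
        if candidate in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

relevant = {"c1", "c2", "c3", "c4", "c5"}           # top 5 of the binary vector model (ground truth)
recommended = ["c1", "c7", "c2", "c3", "c9", "c5"]  # ranking produced by bf-DP
print(precision_at_n(recommended, relevant, 5))          # 0.6
print(average_precision_at_n(recommended, relevant, 5))  # (1/1 + 2/3 + 3/4) / 3 ≈ 0.81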
To score such a list, we adopt precision@N as our utility metric, since it is widely used to evaluate recommender systems [22, 28]. Precision@N is the number of relevant candidates within the top N divided by N. Typically, precision@N is evaluated along with recall@N, which is the number of relevant candidates within the top N divided by the total number of relevant candidates. Since we consider the top N candidates recommended by the binary vector model as relevant, the number of relevant candidates is also N; hence precision@N and recall@N are identical. That is, we measure the utility only in terms of precision@N. A drawback of precision is that the order of the candidates is not taken into account. Therefore, we additionally use the average precision, which is defined as the average of the precisions at each rank holding a relevant candidate, up to rank N.

Parameter Selection

To implement our recommender system, we need to specify a number of parameters. For the Bloom filters, the length and the number of hash functions must be specified. To this end, we measure the mean precision@20 of 50 randomly selected jobs over 10 runs with ε = ln 3 to show how the length and the number of hash functions affect the utility of bf-DP. In Figure 3, we plot the mean precision@20 with the Bloom filter length on the x-axis and varying numbers of hash functions. The results generally indicate that a longer Bloom filter leads to more utility and that with more hash functions the precision decreases. We obtain the highest precision with a single hash function and a length of 4096. Therefore, we set the Bloom filter length to 4096, which corresponds to the length of the binary vector.

Privacy-Utility Trade-off

We quantify the cost of privacy by comparing bf with bf-DP. Please note that bf does not satisfy the definition of differential privacy. In Figure 4, we present the results for precision@20 (Figure 4a) and average precision (Figure 4b) versus the privacy loss parameter ε for the two models. For all results, we show the mean of 50 randomly selected jobs over 10 runs; the standard deviation is used to draw the error bars. Since only bf-DP depends on ε, its utility changes with varying ε. As expected, the precision@20 as well as the average precision of bf-DP increases with higher ε. For a typical ε = ln 3 [9], the precision of bf-DP is below that of bf. However, for ε > 7, the precision@20 of bf-DP approaches the precision of bf. Even for a small ε = ln 3, the average precision is 0.75, i.e., a high proportion of relevant candidates appear early in the recommendation list.

Discussion

Our results demonstrate that Bloom filters can be used to generate recommendations with strong privacy guarantees. With ε = 4, we recommend approximately 75 % of the top 20 candidates correctly, with an average precision of 0.93. By ensuring LDP, we provide a reciprocal recommender system that is self-determined. Candidates can decide whether to provide their profile. For the sake of clarity, we assumed the same privacy guarantee ε for each candidate in our evaluation. However, since the unbiased similarity is computed independently for each user, each candidate could choose their own desired privacy guarantee. This makes our system even more self-determined. When collecting data over time, the time series can leak information. If noise is repeatedly added to the same or only slightly modified profiles, the noise component can be averaged over a period of time and thus eliminated. This becomes relevant when a candidate's profile changes or if a candidate has several similar applied jobs that are independently perturbed.
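A small numerical illustration of this averaging attack, assuming the same flip probability as above; the number of repeated, independently perturbed reports is arbitrary.

import math
import random

eps, k = math.log(3), 1
p = 1.0 / (1.0 + math.exp(eps / k))   # flip probability of the RRT, here 0.25

true_bit = 1
reports = [true_bit ^ (random.random() < p) for _ in range(200)]  # 200 independent perturbations
estimate = (sum(reports) / len(reports) - p) / (1.0 - 2.0 * p)    # unbias the empirical mean
print(round(estimate, 2))   # close to 1: the repeatedly perturbed bit is effectively disclosed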
Repeated collection of profiles therefore requires an adjustment of the RRT; otherwise, a profile can be disclosed. Future work should consider longitudinal attacks and make appropriate adjustments, such as memoization [11]. Compared to bf-DP, bf provides almost the same accuracy as the binary vector. Since a Bloom filter already provides a certain deniability due to bit collisions, we recommend using a Bloom filter instead of a binary vector. Compared to a centralized approach, the distributed approach has some disadvantages in performance and storage. Due to the splitting of files into blocks by IPFS and the construction of a Merkle DAG, a storage overhead is introduced. Furthermore, profiles are replicated on multiple devices, increasing the overall storage requirements. The usage of Bloom filters reduces the size of stored and transferred data, making the data small enough to fit into one block. Since the data fits into one block, the additional transfer time due to traversing and finding the DAG can be omitted. An advantage of P2P systems is their self-scalability. However, the small data size reduces the benefits of replication. Future work will concentrate on analyzing and minimizing possible storage and transfer overheads.

RELATED WORK

Various approaches have been proposed for privacy-preserving recommender systems. They are based on cryptographic primitives [10, 17] or data perturbation [12, 13, 16, 19, 23, 24]. These recommendations, however, are generated using collaborative filtering, which uses the preferences of many users to make recommendations. While collaborative filtering is often preferred in terms of accuracy, we use content-based filtering, as it provides user-specific classification. That is, recommendations are based on past user preferences, which suits our distributed, self-determined approach better. Since it additionally considers the content, it is particularly relevant for reciprocal recommender systems and the application domain considered in this work [14, 15]. Privacy-preserving content-based recommender systems are mostly concerned with targeted advertising [25], where privacy is achieved through anonymization or pseudonymization. However, in a reciprocal recommender system, the use of these approaches would not be feasible, since the recommendation is two-sided and the candidates must therefore also be known. Puglisi et al. [22] propose a privacy-preserving content-based recommender that injects arbitrary keywords into the user profile. In contrast to Puglisi et al., we guarantee LDP for profiles. Usually, differentially private recommender systems are designed using a centralized architecture [13, 17, 23]. We instead propose a distributed approach that offers self-determined privacy guarantees, using IPFS. In the literature, IPFS is often used in combination with blockchains [26] or to store and exchange data [2, 18]. We use it for content distribution, enabling asynchronous data access.

CONCLUSION

In this paper, we present a distributed reciprocal recommendation system. We use Bloom filters to allow local computation of recommendations with differential privacy guarantees. We show that Bloom filters as profile representations provide reasonable utility under strong privacy guarantees. Utilizing IPFS, we allow asynchronous exchange of data, increasing users' control over their data.
Ontological models and the interpretation of contextuality Studying the extent to which realism is compatible with quantum mechanics teaches us something about the quantum mechanical universe, regardless of the validity of such realistic assumptions. It has also recently been appreciated that these kinds of studies are fruitful for questions relating to quantum information and computation. Motivated by this, we extend the ontological model formalism for realistic theories to describe a set of theories emphasizing the role of measurement and preparation devices by introducing `hidden variables' to describe them. We illustrate both the ontological model formalism and our generalization of it through a series of example models taken from the literature. Our extension of the formalism allows us to quantitatively analyze the meaning contextuality (a constraint on successful realistic theories), finding that - taken at face-value - it can be realized as a natural interaction between the configurations of a system and measurement device. However, we also describe a property that we call deficiency, which follows from contextuality, but does not admit such a natural interpretation. Loosely speaking, deficiency breaks a symmetry between preparations and measurements in quantum mechanics. It is the property that the set of ontic states which a system prepared in quantum state psi may actually be in, is strictly smaller than the set of ontic states which would reveal the measurement outcome psi with certainty. I. INTRODUCTION Quantum mechanics is famously plagued by certain conceptual problems, the resolution of which drive attempts to understand the theory. These attempts have resulted in the appearance of a diverse number of interpretations of quantum mechanics -ideas about how to relate mathematical objects from the theory to some picture of (or viewpoint regarding) physical reality. Somewhat incredibly, there is still not even a consensus on precisely which features of quantum mechanics are the source of these conceptual problems. One approach that has been advocated is to simply deny the need for understanding quantum mechanics in terms of a metaphysical picture of reality at all. We will have nothing to say about such a dismissive approach in this paper. However, if, as will be assumed here, it is desirable to understand quantum mechanics in a realistic framework, then many possibilities arise. The simplest realistic approach is to simply assert that the quantum state itself is in one-to-one correspondence with reality. This, as Einstein and others have emphasized [1,2], entails accepting a view of physical reality with arguably quite undesirable features (e.g. violent nonlocality, discontinuous dynamics, ambiguous emergence of a classical ontology etc.). Our goal in this paper is to lay out and expand upon a framework and a language in which (almost) any theory attempting to correlate quantum mechanics to a picture of reality can be formulated. This framework, first introduced in [3], includes the just-mentioned possibility that the quantum state is the state of reality. However, as emphasized in [1], it also includes possibilities wherein the quantum state is supplemented by some "hidden variables". 
Regardless of whether there are such hidden variables besides quantum states, it is possible that one might be able to interpret the quantum state epistemically [4,5,6,7] -that is, in terms of probability distributions over some space (see [8,9] for explicit examples of such an epistemic construction). If a theory for the reality underpinning quantum mechanics can be formulated in the general terms we propose then we refer to it as an ontological model. Following [3], the "true states of reality" posited by the model will be called "ontic states". The terminology is chosen to emphasize that while such theories are not necessarily 'hidden variable theories', they do attempt to formulate a picture of physical reality consistent with quantum mechanics. Although one might not expect an ontological model to precisely follow the laws of classical mechanics, there are certain features, commonplace in classical physics, that one would hope could be retained -for instance, conservation laws and locality. Amazingly, Bell's theorem [10] shows that locality must be abandoned in any theory whatsoever that describes our universe [11], including, of course, any ontological model. This feat of generality rested on Bell's ability to abstract generic features possessed by all realistic models. Consolidating and extending such generality is one goal of the ontological model formalism that we build upon. Besides nonlocality, the other primary non-classical feature which any attempt at explaining quantum mechanics in a realistic frame-work must contend with is contextuality. Contextuality, first considered for quantum mechanics by Kochen and Specker [12] and then extended to deal with arbitrary theories by Spekkens [3], is much less understood and appreciated than nonlocality. Increasing the generality of the ontological model formalism also works towards a second goal of this paper; to elucidate the precise manner in which contextuality must manifest itself in all such models. In addition to the above motivations, which originate from foundational considerations, a second series of motivations for this research stem from practical issues in the field of quantum information theory. Precise formulations of a spectrum of realistic theories potentially underpinning quantum mechanics are of use to work in this field, regardless of their metaphysical consequences. Such formulations allow us to probe and elucidate those features of quantum mechanics distinguishing it from classical realistic theories -the theories upon which all of classical information theory is predicated. While the role of quantum nonlocality (and entanglement in particular) in distinguishing quantum and classical information theory has been much speculated upon, contextuality has received far less attention in this regard [3]. We believe this neglect to be a serious mistake. Furthermore, Aaronson [13] has recently discussed how one can define complexity classes in terms of the increased computational power one might expect if one were able to access individual ontic states (and obtain more information about a system than the quantum formalism itself allows). We show in Sec. III 3 how the theories considered by Aaronson can be expressed in the ontological model formalism. In particular, it then becomes clear that not all ontological models yield the computational advantages that Aaronson identifies. The paper begins in Sec. II by presenting the ontological model formalism as it can be applied to quantum systems. 
In the next section a variety of ontological models, chosen to illustrate the breadth of possibilities, are discussed. These models include the two famous examples from Bell's papers [10,14], Kochen and Specker's non-contextual model of a qubit [12], Aaronson's model [13], a model (due to Beltrametti and Bugajski) which takes the quantum state itself as real [15], and some interesting models of Aerts [16,17]. It will quickly become clear that the formalism of Sec. II needs some augmentation, particularly if we want to be able to discuss the physical reality of preparation and measurement devices themselves (as any posited realistic theory of the whole universe should). In Sec. IV we therefore undertake formulating such an extension, and find that several interesting new possible features arise which can distinguish different ontological models. We then turn to a deeper examination of how ontological models deal with the Kochen-Specker theorem. In doing so we identify a property we term deficiency, which all ontological models possess, and which forms the subject of Sec. VI. Deficiency involves the explicit breaking of the symmetry between preparations and measurements that is en- joyed by quantum mechanics (e.g. that a preparation of a quantum state can be achieved by a projective measurement onto that state of a suitable system.) We also show how deficiency elucidates the fact that measurements in an ontological model must be disturbing. II. ONTOLOGICAL MODELS AND QUANTUM MECHANICS The formalism of quantum mechanics is well known and relatively unambiguous, but opinions are varied on just what this formalism is meant to describe, i.e. how it corresponds in some sense to reality. One of the most popular views is the operational one [19]; wherein the only concern of the theory is to reproduce outcomes of various experimental procedures employed by a scientist. The question of how quantum mechanics relates to reality is then taken to be outside the theories scope. This approach to quantum theory is one of two views that are commonly held (often implicitly). The other frequently maintained position is that reality is completely described by the quantum state -so that within its domain of validity it is the end of the story. This is implicitly a realistic view, and therefore one that will be incorporated in the ontological model formalism. Unless otherwise stated, when referring to quantum mechanics we will always have in mind an operational interpretation of the theory. If it is to be experimentally verified, any such operational theory needs to be able to describe the paradigm illustrated in Fig. 1. A system S, initially interacts with a preparation device P which is configured according to some macroscopically determinable setting S P . This setting is manipulated in order to alter what the state of S will be as it leaves P. S then travels towards a measurement device 1 , M, configured according to some setting S M . M will then register some particular outcome dependent on both the state of S and the setting S M . We can (for our purposes) define operational quantum mechanics by how it determines measurement statistics within this kind of P → M scenario, Definition 1 Quantum Mechanics is a theory that describes Fig. 
1 by associating a density operator ρ SP (on a suitably chosen Hilbert space H) with a preparation procedure S P , and a positive operator valued measure (POVM) {E k } k with the measurement procedure S M , there being one 'POVM effect', E k for each of the possible measurement outcomes. The quantum prediction for the probability of the k th outcome in S M occurring conditioned on a preparation S P is then given by the Born rule, Pr (k|S P , S M ) = tr (E k ρ SP ). Of course, special cases of this formalism are that quantum mechanics associates rays |ψ ∈ H with pure state preparations and projection operators with sharp (rank one) measurements, which can be thought of as 'testing' whether or not a system is in a particular pure state. Quantum mechanics, defined in this operational way, is exceedingly successful at reproducing observed statistics, but it doesn't give us any picture of what "really" goes on inside a system when experimental procedures are performed on it. In Newtonian mechanics one deals with measurements, preparations and evolutions of a particle's position, and this position is posited to be ever-existing, simply revealed to us by measurement, so the theory is quite clear on how its predictions relate to reality. In comparison, quantum mechanics deals with transformations of state vectors and no prior relation is specified between these state vectors and reality. A realistic view of quantum mechanics adds to this picture with the aim of providing a link between the quantum mechanical formalism and an underlying reality. There is of course no unique way in which one might achieve this kind of realistic interpretation, and in fact many such constructions have been given to date, the most famous surely being Bohmian mechanics [20,21]. In Bohmian mechanics the quantum state of a particle and a specification of its position are taken to correspond directly to elements of the 'underlying reality'. Other attempts at realistic constructions can be found in [8,10,12,13,15,22,23] We provide a more detailed consideration of a representative selection of these constructions (and show how they can be expressed in the formalism we use) in Sec. III. To identify features common to these realistic constructions we would like a general language which allows us to abstract away the specific details of any one particular realistic view. We use the term ontological model to refer to a very natural, although non-exhaustive, formalism which does just this job. For the remainder of the paper we will implicitly restrict our attention to those realistic constructions expressible in this formalism 2 , referring to them as ontological models [3]. So what will 2 One of the reasons that the ontological model formalism does not exhaust all of the possible realistic ways of interpreting quantum mechanics is because it employs several assumptions about the a general ontological model look like? Any such model should pick up precisely where operational quantum mechanics leaves off, and specify just what it is that a quantum state allows us to infer about the real state of a system. The model can then be filled out by considering how each of the operations in Fig. 1 are taken to act on these hypothesized real states of the system. We would expect that acting a preparation procedure P on a system S would configure S so that it possesses some particular real state after the preparation. 
A measurement procedure M would then correspond to some kind of interaction with S -an interaction tailored to be such that M registers one or another measurement outcome dependent on the prior real state of S. An ontological model quantifies these realistic notions, by introducing a set Λ of ontic states λ to be associated with S. These constitute a complete description of whatever reality the model takes to underpin the system, so that a specification of λ is a complete description of any attributes that S might possess. The precise form taken by Λ will depend on the particular ontological model under consideration and the nature of the underlying reality that it introduces. In the simplest possible realistic interpretation, we can take quantum states to be direct and complete descriptions of reality. Then we obtain an ontological model in which the ontic state space Λ is precisely equal to the projective Hilbert space of S, i.e. Λ = PH. More generally however it might be the case that Λ = PH. Then either the quantum state is not a complete description of reality and must be supplemented by extra 'hidden' variables (PH ⊂ Λ), or the quantum state does not play a realistic role at all (PH Λ), and must simply represent our state of knowledge of the real state of S. For example, in Bohmian Mechanics, elements of Λ consist of a specification of the system's quantum state and a specification of the system's position and therefore Λ takes the form of a cartesian product Λ = PH × R 3 . So the state space Λ provides a description of the real state of the system, S. Preparation and measurement devices P and M, ultimately being physical systems, should also be describable in terms of their own set of ontic states. However, the ontological model formalism has traditionally been restricted to a realistic description of S alone, simplifying matters by treating P and M as external to the theory. In Sec. V B we show how to extend the ontological model formalism to also provide ontological treatments for these devices, allowing us to consider a wider class of models and affording an insight into the manifestation of contextuality in realistic theories. For now, however, we will restrict our attention to the traditional formulation of providing a realistic description of the system only. behavior of reality. This will become apparent when we consider extending the conventional formalism in Sec. V B. In this simplified picture (wherein we neglect ontological descriptions of P and M) how does an ontological model quantify preparations and measurements in terms of operations on the real states of the system, S? In general, performing a preparation with setting S P will result in the system S being prepared in some particular ontic state λ ∈ Λ. Simply knowing S P may, however, be insufficient information to deduce precisely which λ a system is in. Thus, in general, an ontological model will associate a probability distribution µ (λ|S P ) over Λ with preparation procedure S P . This distribution encodes our epistemological uncertainty as to the precise ontological configuration of S, and so we refer to it as an epistemic state. Note that since a system must be described by some λ ∈ Λ we will require that, Λ dλ µ(λ|S P ) = 1. (1) Associating |ψ with a probability distribution is obviously compatible with the notion of quantum states having no direct relation to the ontic states, but it is also consistent with quantum states being taken to be precisely the ontic states themselves. 
To allow for this we need only take Λ = PH and write µ(λ|ψ) = δ(λ − λ ψ ) with δ being the Dirac delta function and λ ψ the unique ontic state associated with preparation settings consistent with |ψ . Hence the view where quantum states are taken to be complete descriptions of reality can easily be expressed in the ontological model formalism. In the next section we will see an explicit example of a model that achieves this. Consider now a measurement wherein M is configured according to some setting S M . The outcome of this measurement will be determined by the ontic state λ of the system and how it interacts with M (a point which we elaborate on in Sec. V B). Now the most general possibility is that λ might only probabilistically determine a measurement outcome. Following [3], we refer to models wherein even a complete description of reality only allows one to make probabilistic predictions, as being outcome indeterministic. Conversely if the ontic state λ of S is sufficient to completely determine a measurement's outcome then we call the model outcome deterministic. To allow for both these possibilities we therefore represent the k th outcome of a measurement performed according to S M by a distribution ξ (k|λ, S M ) over Λ, telling us the probability that a given λ ∈ Λ will yield the k th outcome. We refer to such distributions as indicator functions (considered as functions of λ). In outcome deterministic models, ξ (k|λ, S M ) ∈ {0, 1} -so that the indicator functions are idempotent, i.e. we have ξ 2 (k|λ, S M ) = ξ(k|λ, S M ) for all λ. Where might the probabilities appearing in outcome indeterministic models arise from? There are two possibilities. Firstly they could occur because of our failure to take into account the precise ontological configurations of either P or M, a possibility which we address in Sec. V B. Alternatively it could be that the probabilities are an inherent property of the reality described by the model, so that even if one had complete knowledge of the configuration of the whole universe, one would be unable to make any certain statements about the system's future configuration. Since one or the other outcome of any measurement S M must occur -no matter what λ describes S -we have, k ξ(k|λ, S M ) = 1 ∀λ. ( The settings S P and S M will play a crucial role in many of our discussions. Clearly different settings S P can describe situations in which P is set -within an operational quantum mechanical description -to prepare a system according to different density operators. Similarly, different settings S M can describe cases where M is set to implement different POVM measurements. However, there are also many distinct settings of P and M consistent with a quantum mechanical description given by the same density operator or POVM. The settings will then specify different instances of some other extraneous property of P or M. We will later see that there exist quantitative extraneous properties which, although not altering the POVM implemented, must alter the indicator function used by an ontological model. Thus the quantum mechanical POVM description of a measurement can actually be thought of as being a function E(S M ) of the measurement setting of M -in that each POVM corresponds in general to a certain set of settings of M. Hence although specifying S M will uniquely fix a POVM E, knowledge of only E may be insufficient to completely determine S M . The full setting, S M , of M is referred to as the measurement context (a term we define in more detail later). 
Hence, fully specifying the measurement context may require stating not just a POVM E, but also some 'extra' information which completely determines M's setting 3 . So although we may occasionally write ξ(k|λ, E), we should really make explicit the precise setting S M by writing either ξ(k|λ, S M ) or (if we still want to make clear the POVM), ξ(k|λ, E, S M ). Similarly, a density operator ρ may be compatible with many preparation settings S P , and so although we will often write epistemic states as µ(λ|ρ), we should really express them in the form µ(λ|S M ) or µ(λ|ρ, S M ). To summarize then, for the purposes of this paper, we can define an ontological model by the following criteria, Definition 2 An ontological model posits an ontic state space Λ. The probability of the ontic state being λ, given the preparation procedure S P is denoted by a probability distribution which we refer to as an epistemic state, µ(λ|S P ). The probability of measurement outcome k occurring given that the ontic state is λ and the measurement procedure was S M is given by an indicator function, written ξ(k|λ, S M ) (with ξ 2 (k|λ, S M ) = ξ(k|λ, S M ) in outcome deterministic models). We then demand that a successful ontological model of quantum mechanics should reproduce the required statistics by satisfying, dλ ξ(k|λ, E, S M )µ(λ|ρ, S P ) = tr (ρE k ) . ( Seen from the viewpoint of an ontological model, a quantum mechanical picture of reality generally corresponds to a coarse-graining over the ontic states. By explicit construction, all ontological models will yield the same statistical predictions at this coarse-grained 'quantum level'. In many models, complete knowledge of the ontic configuration of a system would lead one to make predictions differing in some way from those of quantum theory. Serious advocates of ontological models might claim that the reason we do not see these deviations from quantum predictions is because our current experiments are still too 'coarse-grained' to be able to operate on the level of individual ontic states. Another possibility, is that ontological models might inherently exhibit a restriction such that maximal possible knowledge of a system's ontological configuration is always incomplete knowledge [4]. The ontic states describing a system would then, to some extent, be 'inherently unknowable'. Although such a restriction-of-knowledge principle has been shown to have the potential to reproduce many characteristic features of quantum mechanics [8], it is not a necessary feature of all ontological models. Even though manipulation of individual ontic states is potentially forbidden (either technologically or inherently), we will still have occasion to consider the predictions that a model would be able to make if we hypothetically were somehow able to prepare and distinguish between individual ontic states. In particular we will find it useful to refer to a kind of equivalence between models, which we define as follows, Definition 3 An ontological model O is said to be ontologically equivalent to a second modelÕ if all statistics predicted byÕ are exactly reproduced by model O, even in cases where one is able to perform preparation and measurement procedures that distinguish between individual ontic states. III. EXAMPLES OF ONTOLOGICAL MODELS The formalism that we have described so far is sufficient to describe many existing ontological models. However, there exist models which lie outside of its scope because of the way that they treat the measurement apparatus. 
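Before examining the examples, the requirements collected in Definition 2 can be restated compactly; in the notation of Sec. II, the normalization conditions (1) and (2) and the reproduction condition (3) read:

\[ \int_{\Lambda} d\lambda \, \mu(\lambda|S_P) = 1, \qquad \sum_k \xi(k|\lambda, S_M) = 1 \quad \forall \lambda \in \Lambda, \]
\[ \int_{\Lambda} d\lambda \, \xi(k|\lambda, E, S_M) \, \mu(\lambda|\rho, S_P) = \mathrm{tr}(\rho E_k). \]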
In this section we present some examples of ontological models that show both the utility and limitations of the standard ontological model formalism 4 . The limitations we encounter will act as motivations to generalize the formalism, a task we undertake in Sec. IV. The Beltrametti-Bugajski model The model of Beltrametti and Bugajski [15] is essentially a thorough rendering of what most would refer to as an orthodox interpretation of quantum mechanics 5 . The ontic state space postulated by the model is precisely the projective Hilbert space, Λ = PH, so that a system prepared in a quantum state ψ is associated with a sharp probability distribution 6 over Λ, where we are using ψ interchangeably to label the Hilbert space vector and to denote the ray spanned by this vector. λ ψ denotes the unique ontic state associated with the quantum state ψ. Thus the model posits that the different possible states of reality are simply the different possible quantum states. Quantum statistics are reproduced by assuming that the probability of obtaining an outcome k of a measurement procedure S M depends indeterministically on the system's ontic state λ as, where |λ ∈ H denotes the quantum state associated with λ ∈ Λ, and where E = {E k } k is the POVM that quantum mechanics associates with S M . It follows that, and so the quantum statistics are trivially reproduced. If we restrict consideration to a system with a two dimensional Hilbert space then Λ is isomorphic to the Bloch sphere, so that the ontic states are parameterized by Bloch vectors of unit length, which we denote by λ. The Bloch vector associated with the Hilbert space ray ψ is denoted ψ and is defined by |ψ ψ| = 1 2 1 1+ 1 2 ψ · σ where σ = (σ x , σ y , σ z ) denotes the vector of Pauli matrices and 1 1 denotes the identity operator. The epistemic states and indicator functions for this two dimensional case of the Beltrametti-Bugajski model are illustrated schematically in Fig. 2. The Kochen-Specker model We now consider a model for a two-dimensional Hilbert space due to Kochen and Specker [12]. The ontic state space Λ is taken to be the unit sphere, so that individual ontic states can be written as unit vectors, λ ∈ Λ. A quantum state ψ is then associated with the probability distribution, where ψ is the Bloch vector corresponding to the quantum state ψ and Θ is the Heaviside step function. This epistemic state assigns the value cos θ to all points an angle θ < π 2 from ψ, and the value zero to points with θ > π 2 . This is illustrated in Fig. 3. Upon implementing a measurement procedure S M associated with a projector |φ φ| a positive outcome will occur if the ontic state λ of the system lies in the hemisphere centered on φ, i.e., It can be checked that the overlaps of µ( λ|ψ) and ξ(φ| λ) then reproduce the required quantum statistics, This model is outcome deterministic, and therefore demonstrates how one can reproduce quantum statistics solely through a lack of knowledge about which ontic state a system is prepared in. Aaronson's models In a recent paper [13] Aaronson developed a formalism for describing a certain class of ontological models in terms of stochastic matrices. Aaronson then went on to consider the computational complexity of simulating models from this class. 
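For later comparison with Aaronson's construction, the epistemic states and indicator functions of the two models just discussed can be summarized as follows. These are the standard forms implied by the descriptions above; the normalization factor 1/π in the Kochen-Specker distribution is our addition. For the Beltrametti-Bugajski model (Λ = PH),

\[ \mu(\lambda|\psi) = \delta(\lambda - \lambda_{\psi}), \qquad \xi(k|\lambda, S_M) = \langle \lambda | E_k | \lambda \rangle, \qquad \int_{\Lambda} d\lambda \, \xi(k|\lambda, S_M)\, \mu(\lambda|\psi) = \langle \psi | E_k | \psi \rangle, \]

while for the Kochen-Specker model (unit Bloch vectors \(\vec{\lambda}\), \(\vec{\psi}\), \(\vec{\varphi}\)),

\[ \mu(\vec{\lambda}|\psi) = \tfrac{1}{\pi} (\vec{\psi} \cdot \vec{\lambda}) \, \Theta(\vec{\psi} \cdot \vec{\lambda}), \qquad \xi(\varphi|\vec{\lambda}) = \Theta(\vec{\varphi} \cdot \vec{\lambda}), \qquad \int d\vec{\lambda} \, \xi(\varphi|\vec{\lambda}) \, \mu(\vec{\lambda}|\psi) = \tfrac{1}{2}\left(1 + \vec{\psi} \cdot \vec{\varphi}\right) = |\langle \varphi | \psi \rangle|^2. \]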
The idea behind Aaronson's models is to replace the Hilbert space vector |ψ describing a quantum system with a vector v ψ of the amplitudes of the state when written in some preferred basis Ω The action of any unitary transformation on |ψ is then mimicked by a map S : Ω → Ω, represented by a stochastic matrix S acting on the vector v ψ . As Aaronson shows in [13], such a matrix must depend not only on the unitary transformation that it attempts to enact, but also on the particular quantum state that it is to be acted on. Thus we can write the stochastic matrix intended to reproduce the action of a unitary U on a state |ψ as S(U, ψ). The specific form of these stochastic matrices is dependent on the particular hidden variable theory from Aaronson's formalism. In order to make sure that these theories reproduce quantum mechanical predictions, the matrices must satisfy, In this scheme, any attempt to perform a measurement on |ψ in a basis B other than Ω is interpreted as a unitary evolution, U , rotating ψ into the basis Ω (represented by a relevant stochastic matrix), followed by a measurement in this preferred basis. The outcome that would occur in a measurement of basis B can then be inferred from the outcome in basis Ω by the association that U makes between elements of B and Ω. One might suspect therefore that the ontic state spaces of Aaronson's models consist of the discrete set of basis states Ω ⊂ PH, so that Λ = Ω. However the basis states Ω do not suffice to give a complete description of the ontic configuration of a system, and we in fact have, Λ = Ω × PH. A specification of the preferred basis states from Ω must be supplemented by specifying the system's quantum state. Thus the quantum states describing a system play a dual role, defining epistemic distributions over the subset of ontic states from Ω whilst also playing an ontic role themselves. The epistemic states of Aaronson's models take the form, That |ψ must also play an ontological role becomes clear from the indicator functions implied by Aaronson's models. These are determined by the elements of the model's stochastic matrices. For example, suppose that one wishes to perform a measurement in a basis B on a system in state |ψ . Then, recalling Aaronson's construction, we should rotate |ψ with the unitary U : B → Ω. The probability of obtaining an outcome |j ∈ B given that the initial ontic state from Ω was ω i is simply given by the ji th element of the stochastic matrix S(U, φ) (where we use the subscript j to denote the basis state from Ω which leads us to infer an outcome |j ∈ B). Hence the indicator function associated with outcome |j ∈ B (i.e. with the projector |j j|) is given by, Note that because we must implement a rotation U in order to perform our measurement in the preferred basis, and because the stochastic matrix associated with such a rotation necessarily depends on the quantum state |φ , then the indicator function is also dependent on the system's state as well as the basis state from Ω. Thus we see that the most complete description that the model can give of measurement outcomes requires specifying the system's quantum state, not just the particular ω i ∈ Ω. Therefore the quantum state itself must play an ontologi-cal role 7 . These choices for epistemic states and indicator functions reproduce the quantum statistics as required, Where in the last line we have used the constraint on S given in (13) and the fact that U : B → Ω. Eq. 
(15) shows that in Aaronson's models, the indicator functions are dependent on the preparation procedure S P (i.e. what quantum state a system is prepared in). However, this is not as pathological as one might suppose, since (as was also the case in the Beltrametti-Bugajski model) the whole preparation procedure S P has an ontological status. Thus the dependence of M on S P is directly mediated through the ontic states of the system. In Sec. IV, we generalize the ontological formalism in a way that can describe models in which indicator functions have a dependence on S P that cannot be explained so simply. It should also be noted that the ontic state space of Aaronson's models is that of the Beltrametti-Bugajski model supplemented with the preferred basis Ω. Clearly, access to ontic states from the Beltrametti-Bugajski model will not increase one's computational power beyond that possible with standard quantum mechanics. It is intriguing then that Aaronson is able to show in [13] that models incorporating Ω as well as the Beltrametti-Bugajski state space can offer increased computational power. Bell's first model In the paper preempting his famous theorem [14], J. Bell described a very simple and (by his own admission) artificial way of introducing 'hidden variables' so as to reproduce the predictions of quantum mechanics for a spin- 1 2 system. The model he introduced is outcome de-terministic and valid for quantum systems described by Hilbert spaces of any dimensionality. The ontic states Λ of Bell's first model can be written as the cartesian product of two subspaces, Λ = Λ ′ × Λ ′′ . The first of these subspaces is isomorphic to the projective Hilbert space of the system in question, Λ ′ = PH, whilst the second subspace is the unit interval Λ ′′ = [0, 1]. A system prepared according to a quantum state |ψ is described in the Bell model by an epistemic state that is separable over Λ ′ and Λ ′′ , The distribution over Λ ′ picks out the relevant λ ′ ψ ∈ Λ ′ corresponding to |ψ ; µ(λ ′ |ψ) = δ(λ ′ − λ ′ ψ )dλ ′ , whilst the distribution over Λ ′′ selects a λ ′′ according to a uniform probability distribution, regardless of the system's quantum state; µ(λ ′′ |ψ) = dλ ′′ . Thus the epistemic state over whole ontic state space Λ reads, Now suppose that we wished to perform an N outcome PVM measurement P , described in quantum mechanics by the projectors . Suppose furthermore that the system has been prepared in a state |ψ . The ontic state of the system will then be given by the pair (λ ′ ψ , λ ′′ ) (with λ ′′ uniformly selected from the unit interval). The model reproduces quantum statistics by partitioning the unit interval, Λ ′′ , into N subsets, such that for every i ∈ {1, . . . , N } a fraction tr(P i |ψ ψ|) of λ ′′ ∈ Λ ′′ are taken to yield a positive outcome for P i . Quantitatively then, Bell's first model associates a deterministic indicator function with the i th outcome which takes the form, Where the values x i (λ ′ ) (determining the λ ′′ over which ξ(i|λ ′ , λ ′′ , P ) has support) are given by, and, for all other values of i. This gives precisely the partitioning of the unit interval that we require. Note that we assume some ordering of PVM elements is chosen for every measurement, so that permuting the label, i, of the does not change the indicator functions associated with the projectors. 
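One standard way to write the partition just described, consistent with the construction above (with λ'_ψ the ontic state corresponding to |ψ⟩ and x_0 = 0), is:

\[ x_i(\lambda'_{\psi}) = \sum_{j=1}^{i} \mathrm{tr}\!\left(P_j |\psi\rangle\langle\psi|\right), \quad i = 1, \dots, N, \qquad \xi(i|\lambda'_{\psi}, \lambda'', P) = \begin{cases} 1 & x_{i-1}(\lambda'_{\psi}) < \lambda'' \le x_i(\lambda'_{\psi}), \\ 0 & \text{otherwise}, \end{cases} \]

so that \(\int_0^1 d\lambda'' \, \xi(i|\lambda'_{\psi}, \lambda'', P) = \mathrm{tr}(P_i |\psi\rangle\langle\psi|)\), as required.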
This model easily reproduces the quantum statistics for performing a projective measurement P φ = |φ φ| on a system prepared in state |ψ , Bell's second model Bell also published a second hidden variable theory for spin-1 2 systems, which was presented in the same paper as his famous theorem [10]. As was the case in his first model, two subsets of ontic states are employed in Bell's second model, so again we write Λ = Λ ′ × Λ ′′ . This time however, the first set of ontic states, Λ ′ , are taken as isomorphic to the set of points on the unit sphere. Thus any given λ ′ ∈ Λ ′ can be represented by a unit vector, λ ′ . However, we will very shortly see that as in the case of Aaronson's model, the indicator functions of Bell's second model are dependent on the quantum state a system is prepared in. Therefore, a complete description of the system also requires a specification of a system's quantum state. The second set of ontic states, Λ ′′ , is hence also isomorphic to the set of points on the unit sphere (since we only consider spin-1 2 systems, this is equivalent to taking Λ ′′ = PH). A spin- 1 2 system prepared with its spin oriented along a direction p is then taken to be described by a pair ( λ ′ , λ ′′ ), where λ ′′ = p, and λ ′ ∈ Λ ′ is chosen to lie, with equal probability, at some point in the hemisphere of Λ ′ defined by p. Thus a preparation with S P = p is described by an epistemic state over Λ of, Now consider performing a measurement for whether or not the system's spin lies along a direction a. Bell's second model specifies that we receive a positive outcome if the system's ontic state λ ′ ∈ Λ ′ happens to lie in the hemisphere centered on a vector a ′ . The vector a ′ is obtained by rotating the system's ontic state λ ′′ towards a through an angle π 2 (1 − λ ′′ · a). Thus the indicator function for a measurement of spin up along direction a is given by, the dependence on λ ′′ being implicit within a ′ . This model reproduces the required spin-1 2 quantum statistics as we would expect, Where (θ, φ) are polar coordinates and θ pa ′ is the angle separating the unit vectors p and a ′ . As it stands, Bell's second model can be comfortably expressed in the standard ontological model formalism. However, a slightly modified version of this simple model shows the limitation of the traditional formalism. In the above model the probabilistic nature of the quantum statistics derives from an uncertainty in the preparation of a system's ontic state (as can be seen from Eq. (23)). But it is also possible to reformulate the model so as to move this epistemic uncertainty into a lack of knowledge of how the measuring device is configured. Such a possibility cannot be conceived of within the traditional ontological model formalism, which only postulates ontic states for the system. Clearly one needs to extend the formalism to include new ontic states γ M from a new ontic state space Γ M that act as a complete physical description of M. We will show in Sec. IV how one can introduce such an extension whilst still reproducing quantum mechanical predictions. It is also worth noting that, as in Aaronson's models, the measurement devices in both of Bell's models exhibit a dependence on the preparation device's setting, but a dependence that is mediated through the system's ontic state. Aerts' model Our final example model is the strongest motivation for the extension we outline in the next section. 
Aerts has studied ontological models which are entirely incompatible with the standard ontological model formalism, since they explicitly treat the measurement device at an ontological level. We will consider the model given by Aerts in [16,17] for spin-1 2 quantum systems. This model attempts to reproduce the quantum statistics through a rule for distributing small spheres of charge on a unit sphere 8 . The preparation of a system S having spin along a direction r is represented in the model by the placement of a small sphere carrying a fixed positive charge +q at point r on the unit sphere. The charge +q is in fact arbitrary, and therefore a complete description of S is given by r alone. Thus the ontic state space Λ of the system is isomorphic to the set of points on the surface of the unit sphere and we can write the ontic state pertaining to S as λ ∈ Λ. If the preparation device P prepares a pure quantum state with Bloch vector ψ, then the epistemic state describing S will be, A measurement of the system's spin along an arbitrary direction a is represented by placing another two small spheres at positions ± a, joined by a straight rigid rod passing through the origin. These two spheres are also charged with negative charges −s and −(1−s), where s ∈ [0, 1]. The particular value of s is assumed to be unknown to the experimentalist. These two charges, joined by a rigid rod, constitute the measurement device, M, of the model. Given this arrangement Aerts specifies that the outcome of a spin measurement is determined by which of the two charged spheres at ± a exerts the greater force on +q, consequently attracting it. If the charge +q ends up moving towards the sphere at position a then an outcome of spin-up along a is declared. If however, +q ends up being attracted to the sphere at − a then 'spin-down' is announced. Now note that there is no epistemic uncertainty in the ontological configuration of the system S; if one knows S P then one also knows the ontic state (see (26)). Therefore, as in the Beltrametti-Bugajski model, the model must implement indeterministic indicator functions over Λ that directly mimic the quantum statistics that one expects when measuring the spin along a of a system prepared according to ψ, ξ( a| λ) = cos 2 θ aλ 2 . Where θ aλ is the angle between the vectors λ and a. The epistemic states and indicator functions of Aerts' model take essentially the same form as those from the Beltrametti-Bugajski model and thus one might be tempted to see the two models as equivalent. However the models differ crucially in how they treat M. The Beltrametti-Bugajski model does not specify the nature of the indeterminism appearing in its indicator functions. Aerts' model meanwhile, exhibits a specific construction for how these probabilities could arise from an epistemic uncertainty of the configuration, s, of M. It is clear from the description we have already given that Aerts' model provides more structure to the operation of M, structure that we need in order to be able to distinguish it from the Beltrametti-Bugajski model. To quantify this structure we will need to create the extension of the ontological model formalism that we also found lacking in our discussion of the Bell model. We now finally present this extension, which will also allow us to view contextuality as a restriction on interactions between the ontic configurations of S and M (or S and P in the case of preparation contextuality). In Sec. 
IV we will return to Aerts' model in detail, distinguishing between two different possible manifestations of outcome determinism which we term micro and macrodeterminism (originally alluded to in Sec. II). IV. ONTOLOGICAL TREATMENT OF MEASUREMENT AND PREPARATION DEVICES Our discussion so far has been greatly simplified by considering preparation and measurement devices as external objects not thoroughly treated by the theory, much as in an operational view of quantum mechanics. This approach of only associating an ontic state space Λ with the system S, has been the traditional approach for discussing ontological models. Let us now suppose that we provide P and M (which, after all, are also physical systems) with the same ontological treatment as S, by introducing two new sets of ontic states, γ P ∈ Λ P and γ M ∈ Λ M . The ontic states from these sets describe the complete configurations of P and M respectively. Recall that the settings S P and S M denote configurations of the devices when they are set to perform certain preparation or measurement procedures. There are many different ontological configurations of M that we could imagine being consistent with it still performing the same measurement and thus being set according to the same S M . For example, if we simply changed the color of the paint on M then its ontological configuration -being its complete description -would change, but of course the measurement it performs, and thus the setting S M describing it, would not be expected to change. Therefore we can think of settings of P and M as defining subsets of their ontic state spaces. We denote the equivalence classes of ontic states consistent with settings S P and S M (possibly implemented according to some context 9 ) byS P ⊂ Λ P andS M ⊂ Λ M . One thing worth noting about this idea of 'setting subsets' is that subsets corresponding to different settings of either P or M will necessarily be disjoint;S M ∩S ′ M = ∅ andS P ∩S ′ P = ∅ for S M = S ′ M and S P = S ′ P . This should be true since knowledge of the ontic state of a device (being a complete specification of its realistic description) allows us to completely and uniquely infer the device's setting. How do preparation and measurement procedures on a system S appear in terms of Γ P and Γ M ? Performing a measurement on S involves an interaction between the ontic states of S and M, an interaction ultimately allowing an observer to infer pre-measurement information about the ontic state of S from some macroscopic property of M. Similarly, a preparation of S corresponds to an interaction between ontological configurations of S and P. Clearly then, the occurrence of measurement and preparation procedures in an ontological model are crucially dependent on how the model relates Λ P , Λ and Λ M . In order to be clear about what assumptions we make about such relations we will begin with a very general picture -one in which the three ontic state spaces do not even individually exist -and gradually refine it by applying appropriate assumptions on how they can interact. Eventually we arrive at a formalism in which the ontological role of M (and P) within the standard formalism from Sec. II is clear. The most general possible description of P, S and M is one in which the three systems are represented by a single non-separable reality, so that we cannot even talk about individual systems P, S, M or their individual ontic state spaces. 
Then the best we can do is to speak of a single 'global' ontic state space Γ, containing ontic states ν which describe a configuration of the whole P, S, M scenario. We then have epistemic states µ(ν|S P , S M ) encoding the probability of preparing a particular ν ∈ Γ given some settings S P and S M of P and M. Similarly the indicator functions ξ(j|ν) in such a non-separable model denote the probability of obtaining some outcome j of a measurement corresponding to setting S M given a particular ν. The statistical predictions of such a model are given by; Note that we do not write ξ(j|ν) as depending on S M since here we are allowing for the more general case, where the indicator function not only depends on the set of ontic states defined by a setting S M , but potentially on individual ontic states themselves -albeit non-separable ones, ν. In such non-separable models it is hard to build any intuitive picture of reality whatsoever, with even the concepts of system, preparation and measurement devices making little sense 10 . Consequently, all existing models assume a separable picture of reality for P, S and M. This amounts to the assumption that the 'global' ontic state space of the three systems can be written as a cartesian product of ontic state spaces for each individual system, Γ = Λ P × Λ × Λ M , so that ν = (γ P , λ, γ M ). Models employing this assumption are constrained to reproduce quantum statistics according to, Where we adopt the shorthand P,S,M = dγ P dλdγ M . The model thus now employs epistemic states µ(γ P , λ, γ M |S P , S M ) and indicator functions ξ(j|γ P , λ, γ M ) which treat P, S and M as having separate ontic states. Thus we have arrived at a formalism incorporating models in which indicator functions are dependent on the settings of both the preparation and measurement devices. This formalism allows for cases where, unlike Bell's second model and those considered by Aaronson, a dependence on S P is not simply mediated through the ontic states of S. In fact Eq. (29) can describe cases of even greater generality, wherein measurement outcomes are dependent on individual ontic states of P and M, not just the sets of ontic states defined by their settings. Eq. (29) employs single joint distributions over the ontic states from all three systems, implicitly allowing for the possibility that there is a statistical dependence between the ontic states of each system. There are a few reasonable assumptions that we can make about the statistical relations that might exist between the systems. The validity of these assumptions can ultimately be called into question, but in fact our motivation for using the formalism is precisely so that we can study the ways in which such assumptions may fail to do justice to our universe. The hope is that we can pinpoint precisely which assumptions are the troublemakers. In most models, the configuration of a preparation device is taken to only indirectly affect the outcome of any measurement via its influence on the system S. Our second assumption (after separability), is therefore a statistical independence between M and P. Then not only are the ontic configurations of the two devices independent of each other, but furthermore the outcome of a measurement exhibits no direct statistical dependence on the preparation device's ontic state -any such dependence having to be mediated through S. Under this assumption, Eq. 
(29) becomes,

Pr(j|S_P, S_M) = ∫_{P,S,M} µ(γ_P, λ, γ_M|S_P, S_M) ξ(j|λ, γ_M)
              = ∫∫ dλ dγ_M µ(λ, γ_M|S_P, S_M) ξ(j|λ, γ_M), (30)

where in the second line we have marginalized over the dependence on the γ_P, which (given our most recent assumption) only appeared within the epistemic state µ(γ_P, λ, γ_M|S_P, S_M). Although we can also consider an ontological treatment of P, for brevity we will now focus our attention solely on the measurement device. To this end we can use an identity of probabilities to write µ(λ, γ_M|S_P, S_M) = µ(γ_M|S_P, S_M) µ(λ|S_P, S_M), allowing us to further simplify (30) to,

Pr(j|S_P, S_M) = ∫∫ dλ dγ_M µ(γ_M|S_M) µ(λ|S_P, S_M) ξ(j|λ, γ_M), (31)

where we have again used our assumption of statistical independence of P and M to write µ(γ_M|S_P, S_M) = µ(γ_M|S_M). Note that the epistemic state µ(λ|S_P, S_M) allows the λ ∈ Λ to depend on the setting S_M of M. This kind of dependence is a formal expression of what we will introduce in Sec. V B as 'λ-contextuality' - one of the possible ways of implementing the kind of contextuality required by the Kochen Specker theorem within the ontological model formalism. For the kind of models that we consider, we make the explicit assumption that this kind of dependence does not occur (as we justify in Sec. V B), so that which λ ∈ Λ applies to S is not dependent on the ontic state γ_M describing M. Enforcing this assumption we therefore obtain,

Pr(j|S_P, S_M) = ∫∫ dλ dγ_M µ(γ_M|S_M) µ(λ|S_P) ξ(j|λ, γ_M). (32)

This is precisely the form that we need in order to make it clear how the traditional formalism can be adapted to provide an ontological model for the measurement device as well as the system. Given knowledge of the measurement setting S_M describing M, we obtain a distribution µ(γ_M|S_M) over its ontic states. The particular ontic state describing M, along with λ ∈ Λ, then determines the outcome it produces - as is clear from the form of the indicator function ξ(j|λ, γ_M). This formalism allows us to describe ontological models such as that of Aerts, which provide a more thorough realistic treatment of M. We explicitly show how Aerts' model can be expressed according to (32) in the next section. The expression in (32) thus shows how the standard ontological model formalism would look if it were furnished with an ontological model for M.

We can return to our completely standard formalism (as introduced in Sec. II) by making one final assumption; that the measurement outcome depends only on the measurement setting of M and not on the particular ontic state γ_M. We can employ this assumption by marginalizing the indicator function over γ_M ∈ S̃_M, to give a 'coarse-grained' distribution, ξ̄,

ξ̄(j|λ, S_M) = ∫ dγ_M µ(γ_M|S_M) ξ(j|λ, γ_M). (33)

In doing this we are essentially eliminating the need for a model of M. Eq. (32) then becomes,

Pr(j|S_P, S_M) = ∫ dλ µ(λ|S_P) ξ̄(j|λ, S_M), (34)

which is precisely our original formalism, as first introduced in (3). Clearly the implicit assumptions in this standard formalism, highlighted in our above derivation, leave it unable to describe a significant class of models, including those of Aerts and the adapted version of Bell's second model. Note that although here we have focused on showing how a measurement device can be furnished with an ontological treatment, it is clear that we can provide an ontological treatment of the preparation device in an exactly analogous manner. This would lead us to introduce a set of ontic states γ_P ∈ Γ_P and an epistemic distribution, µ(γ_P|S_P), describing our knowledge of the ontic configuration of P given that it is configured according to a setting S_P.

A. Models that measure with uncertainty

Eq. (32) is exactly what we need to completely describe Aerts' model, which we found ourselves ill-equipped to deal with in Sec. III 6.
Recall that Aerts' model aims to reproduce measurements made on a spin-1/2 system, representing a measurement of spin along direction a by spheres with negative charges of magnitudes s and 1 − s lying at points ±a on the unit sphere and being connected by a rigid rod. The value of s is chosen uniformly at random from the interval [0, 1]. Further recall that a system prepared according to |ψ⟩ is measured as having spin-up (spin-down) along a if the net Coulomb force on a sphere with charge +q, located at point ψ on the unit sphere, attracts it towards the negatively charged sphere located at a (−a). The epistemic states and indicator functions of Aerts' model are as given in (26) and (27).

The ontic state space Γ_M of the measurement device naturally splits into two parts, so that γ_M = (a_M, s). The first part is isomorphic to the unit sphere, and its component a_M of γ_M is simply taken to be the vector a defining the rod's orientation. The second part is given by the unit interval, with s being the charge on one of the spheres. Now in Aerts' model, it is assumed that the value of s, although it takes some definite value, is not known by the experimenter 11. Thus there is an epistemic uncertainty with respect to the precise configuration of M. Therefore, following the formalism of this section, we introduce an epistemic state µ(γ_M|S_M) describing the configuration of the measurement device. Since the measurement setting S_M of M is given by the direction a (along which we wish to measure the system's spin) and s is taken to be drawn uniformly at random from the interval [0, 1], we have that,

µ(γ_M|S_M) = µ(a_M, s|a) = δ²(a_M − a) for s ∈ [0, 1], and zero otherwise, (35)

i.e. the rod's orientation is fixed by the setting while s is uniformly distributed. To complete the ontological description of M we need an indicator function specifying the outcome that M will produce for given ontic states of S and M (of course the production of an 'outcome' by M is actually a certain evolution of M's ontic configuration). In Aerts' model a measurement outcome is determined by the relative strengths of the Coulomb attraction F_{−a} (acting on the charge +q at ψ due to the charge −(1 − s) located at −a) and the Coulomb attraction F_a (due to the charge −s located at a). Specifically, an outcome corresponding to spin-up along a will occur if F_a > F_{−a}. Using Coulomb's law, this requirement becomes [17],

qs/|ψ − a|² > q(1 − s)/|ψ + a|², i.e. s cos²(θ_aψ/2) > (1 − s) sin²(θ_aψ/2), (36)

where we have denoted the angle separating the unit vectors a and ψ as θ_aψ (so that |ψ − a|² = 4 sin²(θ_aψ/2) and |ψ + a|² = 4 cos²(θ_aψ/2)). According to Eq. (36), independently of q, an outcome of spin-up along a requires that we have s > sin²(θ_aψ/2). Therefore the indicator function ξ(+a|γ_M, λ) (for the outcome corresponding to measuring spin-up along direction a) can be written as,

ξ(+a|γ_M, λ) = Θ(s − sin²(θ_aψ/2)), (37)

where Θ denotes the Heaviside step function and λ is the ontic state of S corresponding to the point ψ. Suppose we were to choose to coarse-grain over M's ontic configuration, effectively ignoring any information we have about its ontological model. Following (33) we obtain an indicator function of the following form,

ξ̄(+a|λ, S_M) = ∫₀¹ ds Θ(s − sin²(θ_aψ/2)) = cos²(θ_aψ/2). (38)

This is precisely the 'trivial' indicator function that we attributed to Aerts' model in Sec. III 6. This example suggests distinguishing two ways in which outcome determinism might be manifested (the micro and macrodeterminism alluded to in Sec. II). One possibility, which we term macrodeterminism, is that measurement results in such outcome deterministic models are macroscopically determined by λ and the setting S_M, being insensitive to the precise ontic state of M. It is of course alternatively possible that the outcome of a measurement might be completely determined only if we know the specific ontic state γ_M ∈ Γ_M of M as well as λ ∈ Λ. In these models, specifying S_M isn't enough, and measurement outcomes are determined by the 'microscopic' ontological configuration of M.
Thus we term this class of models microdeterministic, Definition 5 An ontological model is said to be microdeterministic if the outcome of a measurement is not completely determined by knowledge of the measurement setting S M of a device M, but is furthermore dependent on the ontic configuration γ M ∈S M of M. i.e., and, Thus a microdeterministic model allows us to determine definite outcomes for measurements so long as we know the precise ontic configuration of the measuring device. Now the interesting point to note [16,26] is that microdeterministic models appear outcome indeterministic if we coarse-grain over Γ M . That is to say that if measurement outcomes are dependent on the individual γ M ∈ Γ M but we are ignorant of the exact value of γ M , then the best we can do is assign probabilities for measurement outcomes based on our restricted knowledge. If a model is microdeterministic then although we may have ξ(j|λ, γ M , S M ) ∈ {0, 1}, the marginalized stateξ(j|λ, S M ) can, in general, only be expected to satisfy 0 ≤ξ(j|λ, S M ) ≤ 1 (see (33)). This is illustrated nicely by Aerts' model, which falls into the class of microdeterministic models. Knowledge of s is crucial in order to determine a measurement outcome, and upon marginalizing ξ(j|λ, γ M , S M ) over γ M ∈S M we obtain an indeterministic indicator function. Thus we see a mechanism by which a determinism -apparently inherent as seen from the traditional ontological model formalism -can actually arise from an epistemic uncertainty regarding the precise configuration of a measurement device. This possibility has been investigated in rigorous mathematical detail by Coecke [26,27]. V. CONTEXTUALITY So far we have developed a way of describing reality according to ontological models, but that does little to tell us what kind of reality any particular ontological model might describe. This information is expressed by the structure of its ontic state space, Λ. Remarkably, there exist arguments constraining the structure of any realistic interpretation of quantum mechanics (including ontological models) to possess certain properties, such as nonlocality (Bell's theorem [10]) and contextuality (the Kochen Specker theorem [12]). As described in the introduction, a key motivation for studying ontological models is to identify such properties. Thus a pertinent question is how known properties are manifested within the ontological model formalism, a question which we address in this section for the case of contextuality. Contextuality has been the subject of much debate (see [14] and [24] for contrasting views) and 40 years after its inception it is still not clear what its necessity can teach us about realism in quantum mechanics. After reviewing the idea of contextuality we will use our extension to the ontological model formalism (from Sec. IV) to show how it is specifically manifested within these models. We are led to conclude that contextuality, as it stands, can be implemented as a very intuitive and unsurprising dynamical constraint. But the effect of contextuality on ontological models can be more subtle, and in Sec. VI we will show how it implies a property which we call deficiency. As we discuss in Sec. VI A, deficiency prevents a natural relationship between preparations and measurements in quantum mechanics from being carried over to ontological models. We consider this to be one case in which contextuality can quantitatively be seen to give rise to unexpected behavior. A. What is Contextuality? 
Contextuality has a long history, beginning in 1967, when Kochen and Specker (KS) [12] first introduced a notion which, following [3], we refer to as traditional contextuality 12 (TC). Consider performing a projective measurement |ψ ψ| on a system. In a two dimensional Hilbert space such a projector can be uniquely implemented by a measurement procedure with outcomes corresponding to |ψ and |ψ ⊥ (where ψ|ψ ⊥ = 0). However, in a Hilbert space with dimension greater than two, there is no unique way to physically implement such a projector onto a single quantum state |ψ . In an N dimensional Hilbert space (N > 2) one implements |ψ ψ| as part of an N outcome PVM, where each outcome corresponds to one of N orthogonal basis states. Since there are a continuum of N dimensional bases containing the vector |ψ , there exist a continuum of PVM measurements that can realize the projector |ψ ψ|. KS refer to the different PVMs that contain a given rank one projector |ψ ψ| as the contexts of that projector. In any outcome deterministic and realistic view of nature (regardless of whether or not it can be formalized in terms of an ontological model), a projector P is at all times assigned a definite outcome 'value', v(P ) ∈ {0, 1}, even before it is measured. KS considered the possibility that a realistic outcome deterministic theory might have to 'change its mind' about whether a value 0 or 1 is as-sociated with a projector P dependent on which PVM is used to implement it. Such a dependence is what we refer to as traditional contextuality; Definition 6 An outcome deterministic ontological model is said to be traditionally contextual (TC) if there exists at least one projection operator, P , such that the pre-determined outcome v(P ) associated with P is dependent on which PVM is used to implement it. TC therefore tells us that specifying that a measurement device is configured to measure a projector P is not sufficient in order to uniquely identify the 'real' value assigned to the result of its measurement. Rather we must specify the whole PVM that we would set M to measure. Incredibly, KS managed to show that TC, so defined, must be possessed by all outcome deterministic realistic theories reproducing the (experimentally verified) predictions of quantum mechanics. We reproduce their ingenious proof in Appendix A, translated into the language of the ontological model formalism. In one sense, KS's proof of TC is extremely general. Associating a pre-existing value v(P ) to a projector P is a requirement of any realistic outcome deterministic theory and therefore TC is defined (and proven by KS to be necessary) for any such theory, not only those that can be expressed in the ontological model formalism. There are however, a few shortcomings of TC. Definition 6 only applies to systems described in quantum mechanics by a Hilbert space of dimension greater than or equal to 3. Furthermore, it applies only to outcome deterministic realistic theories. Yet as was emphasized by Bell [28] and discussed in Sec. II, an assumption of outcome determinism is quite distinct from one of realism. Another shortcoming of TC is that changing the PVM implementing a projector is not the only change of M's setting that quantum mechanics predicts should leave measurement outcome statistics unaltered. For example there are many different ways of convexly decomposing elements of a given POVM measurement 13 , each of which provides a different experimental arrangement in which one could physically measure the same POVM elements. 
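For instance, the qubit POVM element E = 𝟙/2 admits the two convex decompositions

E = (1/2)|0⟩⟨0| + (1/2)|1⟩⟨1| = (1/2)|+⟩⟨+| + (1/2)|−⟩⟨−|,  with |±⟩ = (|0⟩ ± |1⟩)/√2.

Each decomposition corresponds to a different experimental arrangement - toss a fair coin and then measure in either the computational or the |±⟩ basis, reporting the outcome E whenever the selected projector fires - yet both arrangements are represented by the same POVM element and therefore yield identical statistics on any preparation.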
Thus there are several reasons why TC appears a somewhat restricted notion of contextuality, and one is led to wonder whether it is possible to generalize the idea. Such a generalization was provided by Spekkens in [3]. To begin with, one can broaden the definition of a measurement context [3],

Definition 7 The possible contexts of a measurement outcome are all those measurement settings S_M which do not alter the frequency of the outcome when the measurement is performed on any particular preparation of a system S.

Different measurement procedures in quantum theory will give the same outcome statistics so long as they are all described by the same POVM element. Any different settings S_M resulting in an outcome being described in quantum mechanics by the same POVM (although perhaps written in another form) are therefore, according to our above definition, different contexts of that outcome. We have already mentioned measurement contexts associated with different PVMs realizing a given projector, and different convex decompositions of a POVM. By Definition 7 there are clearly innumerable other possible contexts. The macroscopic nature of M ensures that there are a multitude of degrees of freedom one can manipulate whilst effectively leaving the measurement operation of the device unaltered. Of course many of these contexts would be hard to formally quantify, and we restrict our consideration to those contexts that can be described in a meaningful manner. We can use Definition 7 to introduce a generalized notion of measurement contextuality for both outcome deterministic and indeterministic ontological models [3],

Definition 8 An ontological model is said to be measurement non-contextual if it only associates a single indicator function ξ(k|λ) with a given measurement outcome (POVM element), regardless of the measurement context. Conversely a model is said to be measurement contextual if the indicator function that it assigns to an outcome depends on its context, i.e. there exist S_M, S'_M such that ξ(k|λ, S_M) ≠ ξ(k|λ, S'_M) (where S_M and S'_M represent different measurement contexts that realize the same outcome).

According to this new definition, measurement contextuality is a non-equivalence of a model's mathematical representations of those measurements which quantum mechanics treats as being operationally identical. As we noted previously, one can conceive of many different measurement contexts and an ontological model could potentially exhibit measurement contextuality with respect to any of them. Therefore we must take care to specify with respect to which context we might consider measurement contextuality at any given time. As shown in [3], Kochen and Specker's TC is now seen to be a special case of this generalized measurement contextuality. Specifically, TC corresponds to 'measurement contextuality with respect to the choice of PVM' in models that exhibit outcome determinism for projective measurements. In fact, following [3], we can widen our concept of contextuality even further by adapting Definition 7 to apply to preparations. We define a preparation context as follows,

Definition 9 The possible contexts of a preparation performed by device P are all those preparation settings S_P of P which prepare a system S in states all yielding identical measurement statistics for any particular measurement performed on them.

Preparations that are described in quantum theory by the same density operator always yield the same measurement statistics. Thus different settings S_P of P described in quantum theory by the same density operator (albeit perhaps the same density operator written in a different form) are contexts of that preparation. As was the case with measurement contexts, there are many ways one could vary S_P without altering the density operator describing the preparation. For example, there are many different ways of convexly decomposing a mixed state density operator ρ.
Each of these provides a distinctly different probabilistic preparation procedure realizing ρ, but yet all result in the same statistical predictions for any measurement. Thus different convex decompositions of a density operator form different contexts of a preparation. Definition 9 puts us in a position to consider the possibility of preparation contextuality within ontological models [3],

Definition 10 An ontological model is said to be preparation non-contextual if it only associates a single epistemic state µ(λ|ρ) with a given density operator, ρ, regardless of the preparation context. Conversely a model is said to be preparation contextual if the epistemic state that it assigns to ρ depends on its context, i.e. there exist S_P, S'_P such that µ(λ|ρ, S_P) ≠ µ(λ|ρ, S'_P) (where S_P and S'_P represent different preparation contexts that realize the density operator ρ).

It should be noted that there are cases where these generalized definitions of preparation and measurement contextuality are genuinely independent of each other. The Beltrametti-Bugajski model for example exhibits preparation contextuality with respect to the convex decompositions of a mixed state, but does not exhibit measurement contextuality in the generalized sense of Definition 8. To see this, note that in the Beltrametti-Bugajski model a convex decomposition ρ = Σ_i p_i |ψ_i⟩⟨ψ_i| of a mixed state ρ into a set of pure states corresponds to an epistemic state µ(λ|ρ) = Σ_i p_i δ(λ − λ_{ψ_i}) (see Lemma 5 in Appendix B for a justification). Clearly then, different convex decompositions of ρ will give epistemic states having different supports, since the elements of the decomposition are precisely the ontic states. Hence we have preparation contextuality. Conversely, the model will never exhibit measurement contextuality since the indicator function it associates with a measurement is formed directly from that measurement's quantum mechanical statistical predictions. This clearly implies, according to Definition 7, that the Beltrametti-Bugajski indicator functions will remain unaltered under any change of context.

For a more in-depth example of contextuality, we can consider the KS model, first introduced in Sec. III 2. This exhibits both preparation and measurement contextuality. Its preparation contextuality is with respect to the different possible convex decompositions of a mixed state. To see this, consider a mixed state described by a density operator ρ which can be prepared by either of the following two convex decompositions,

ρ = cos²(π/8) |0⟩⟨0| + sin²(π/8) |1⟩⟨1| = (1/2) |π/8⟩⟨π/8| + (1/2) |−π/8⟩⟨−π/8|, (43)

where |±π/8⟩ = cos(π/8)|0⟩ ± sin(π/8)|1⟩. Denote the preparation setting that implements the first of these convex decompositions as S_P, and that which implements the second decomposition as S'_P. Lemma 5 in Appendix B shows that an ontological model is constrained to employ epistemic states for each of these settings that respect the convex structures in (43),

µ(λ|ρ, S_P) = cos²(π/8) µ(λ|0) + sin²(π/8) µ(λ|1),
µ(λ|ρ, S'_P) = (1/2) µ(λ|π/8) + (1/2) µ(λ|−π/8). (44)

Now recall that in the Kochen Specker model, the epistemic state associated with a quantum state has a support equal to the hemisphere defined by the quantum state's Bloch vector. These hemispheres are such that,

Supp(µ(λ|ρ, S_P)) = Supp(µ(λ|0)) ∪ Supp(µ(λ|1)) (45)

and,

Supp(µ(λ|ρ, S'_P)) = Supp(µ(λ|π/8)) ∪ Supp(µ(λ|−π/8)), (46)

where 0, 1, π/8 and −π/8 denote the Bloch vectors associated with the states |0⟩, |1⟩, |π/8⟩ and |−π/8⟩ respectively. Thus Supp(µ(λ|ρ, S_P)) ≠ Supp(µ(λ|ρ, S'_P)), and consequently, according to Definition 10, the Kochen Specker model is preparation contextual.
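As a quick check that the two decompositions written in (43) really do describe one and the same density operator, note that in the computational basis |±π/8⟩⟨±π/8| has diagonal entries cos²(π/8) and sin²(π/8) and off-diagonal entries ±cos(π/8)sin(π/8), so that the off-diagonal terms cancel in the equally weighted mixture and

(1/2)|π/8⟩⟨π/8| + (1/2)|−π/8⟩⟨−π/8| = cos²(π/8)|0⟩⟨0| + sin²(π/8)|1⟩⟨1|.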
More specifically, note that (45) and (46) imply that there are cases wherein the model realizes this contextuality by changing the support of an epistemic state as the preparation context changes.

Now consider measurement contextuality in the KS model. To begin with, note that since the model is for a two dimensional Hilbert space it cannot possibly exhibit TC (in fact this was Kochen and Specker's motivation for presenting this model). However, the model does display measurement contextuality with respect to convex decompositions of a POVM. Furthermore, the KS model implements this measurement contextuality by changing the support of an indicator function as the measurement context is altered. We can see this by employing precisely the same kind of construction as we used to show its preparation contextuality. Specifically, consider the POVM {E_1, E_2} where the POVM elements have a computational basis matrix representation of,

E_1 = diag(cos²(π/8), sin²(π/8)),  E_2 = diag(sin²(π/8), cos²(π/8)). (47)

In particular, consider the element E_1. Two possible ways in which we can realize this in terms of projective measurements are,

E_1 = cos²(π/8) |0⟩⟨0| + sin²(π/8) |1⟩⟨1| (48)

and

E_1 = (1/2) |π/8⟩⟨π/8| + (1/2) |−π/8⟩⟨−π/8|. (49)

Eqs. (48) and (49) correspond to two different measurement settings of M, which we denote S_M and S'_M respectively. In the KS model, the supports of indicator functions associated with projective measurements are hemispheres defined by the Bloch vector of the state onto which they project. As before, the hemispherical supports of these distributions are such that,

Supp(ξ(E_1|λ, S_M)) = Supp(ξ(0|λ)) ∪ Supp(ξ(1|λ)) (50)

and,

Supp(ξ(E_1|λ, S'_M)) = Supp(ξ(π/8|λ)) ∪ Supp(ξ(−π/8|λ)). (51)

Thus Supp(ξ(E_1|λ, S_M)) ≠ Supp(ξ(E_1|λ, S'_M)), and we see that in some cases the KS model implements measurement contextuality by having an indicator function's support depend on the measurement context.

Having seen these examples, one might wonder to what extent ontological models must exhibit these generalized kinds of contextuality. In fact Spekkens has shown in [3] that any ontological model associating deterministic indicator functions with projective measurements must exhibit measurement contextuality with respect to different convex decompositions of a POVM 14. Furthermore, any model must exhibit preparation contextuality with respect to different convex decompositions of a density operator. Thus the epistemic states and indicator functions that an ontological model associates with certain preparations and measurement outcomes must change dependent on the context that realizes them. It is worth noting that, although it was the case in the KS model, such a dependence does not necessarily require the supports of epistemic states or indicator functions to change, i.e. we may not necessarily have a change in which ontic states could have been prepared by a preparation or might produce a given measurement outcome. Instead it could be that only the non-zero probability assignments are altered for some context-independent set of ontic states. The case is, however, more clear-cut within those ontological models that are outcome deterministic. Then measurement contextuality requires that indicator functions must change their supports since deterministic indicator functions only assume values of 0 or 1. Any change in their assignments amounts to a change of support!

Although the kinds of measurement and preparation contextuality introduced in Definitions 8 and 10 are the only kinds of contextuality typically considered, there is another interesting possibility. Recall that Aaronson's indicator functions are dependent on the quantum state that a system is prepared in, and therefore are dependent on the preparation setting S_P.
These models thus introduce the possibility of a strange kind of contextuality in which the indicator function associated with a measurement is dependent on the setting used for a system's preparation. In Aaronson's model this kind of contextuality is somewhat trivial, since S P is in fact an ontic state, so it is entirely natural for the indicator functions to be dependent on the setting of P. In fact, as was seen in Sec. IV, most models implicitly assume a lack of direct statistical dependence between P and M, so this strange contextuality will only ever apply to a small subset of ontological models. B. What does contextuality mean? In Definition 6 we gave a mathematical definition of TC, and in Definitions 8 and 10 we generalized the notion to preparation and measurement contextuality. But we still lack a clear picture of exactly what it is that these ideas of contextuality really mean in our ontological model formalism -what kind of structure might they enforce on the ontic state space Λ and its dynamics? Understanding this is crucial for us to even begin to judge to what extent contextuality, like nonlocality, goes against our intuition and might fundamentally prohibit a realistic view of the quantum world. First consider TC in our ontological model formalism. It is clear that, since the definition of this property relies crucially on assigning definite values to projective measurements, it can only make sense in outcome deterministic ontological models. We therefore temporarily restrict ourselves to such cases. But even under a deterministic restriction our ontological model formalism does not explicitly talk about 'assigning outcomes' to measurements, as Definition 6 does. Rather our formalism employs indicator functions; assigning outcomes dependent on the ontic state of S. How then can we import TC into our formalism? There are two ways that TC can be manifested within an ontological model. Consider a system S, described in quantum mechanics by a three dimensional Hilbert space. Suppose that this system actually resides in an ontic state λ and that we use a device M to perform a projective measurement P 0 = |0 0| on S. Now consider two settings S M and S ′ M of M that can realize this measurement, taken to respectively correspond to the PVM contexts {P 0 , P 1 , P 2 } and {P 0 , P ′ 1 , P ′ 2 }. In an outcome deterministic ontological model, what outcome is assigned to projector P 0 (what we might refer to as v(P 0 )) is determined by whether or not λ ∈ Supp (ξ(P 0 |λ)). Hence in order for the outcome assigned to P 0 to be dependent on the PVM setting (as required by TC), our model must ensure that the inclusion of λ in Supp (ξ(P 0 |λ)) is dependent on this setting. We explicitly derive this requirement in Appendix A, where we recreate the original KS argument for TC in the ontological model language (associating a deterministic indicator function ξ(P |λ) with a projective measurement P , as opposed to a 'value assignment', v(P )). Clearly there are two ways in which the inclusion of λ in Supp(ξ(P 0 |λ)) could change; either by changing Supp(ξ(P 0 |λ)) or by changing λ. We can classify ontological models according to which of these possibilities they use to realize TC, Definition 12 An ontological model is said to be λcontextual if it realizes traditional contextuality by changing the ontic state associated with a system as the setting of M changes, S M → S ′ M . 
In ξ-contextual models we have that a change of M's setting simply changes the indicator function associated with M, thus changing how M will respond to S during the measurement process. In λ-contextual models however, a change of measurement setting can result in the ontic configuration of the system S being changed -a potentially nonlocal effect if S and M are spacelike separated. Two models that differ only in which of these approaches they use to realize TC are, according to Definition 3, ontologically equivalent. The λ-contextual versus ξ-contextual distinction is thus a purely metaphysical one; for any model implementing λcontextuality there is an entirely equivalent model that implements ξ-contextuality. Bearing this in mind, we can justify the assumption we made in Sec. IV; that µ(λ|S P , S M ) = µ(λ|S P ). Throughout the remainder of the paper we continue to assume that TC is always implemented through ξ-contextuality. Having discussed the manifestation of TC we now turn to the generalized notions of preparation and measurement contextuality given in Definitions 8 and 10. Understanding these types of contextuality requires using the ontic state spaces Γ P and Γ M for P and M that we outlined in the formalism derived in Sec. IV. To begin with, refer to Eq. (32) from that section. This shows explicitly that the measurement process within an ontological model amounts to an interaction between the ontic states of S and M. Specifically, the indicator function ξ(j|λ, γ M ) tells us whether the result of an interaction between a given λ ∈ Λ and γ M ∈S M would leave M in a configuration such that we would infer the j th measurement outcome to have occurred. Now recall that a change of context implies a change of the device setting S M and consequently a change of M's ontic state, γ M . Contextuality requires that ξ(j|λ, S M ) changes along with S M . Thus contextuality actually imposes a restriction on the interaction between S and M. In particular, the interaction -encoded within ξ(j|λ, S M ) -must change for any change of γ M that corresponds to an alteration of measurement context. The conclusion of this brief analysis, which can similarly be performed for P, is that the requirements of contextuality can be satisfied by the completely natural arrangement that the interaction of M and S be dependent on the configuration of M. A trivially simple example of how contextuality can in principle be manifested in this natural way can be found by introducing ontic states Γ P and Γ M to the KS model from Sec. III 2. In Sec. V A we saw how the KS model exhibits contextuality for convex decompositions of POVMs by having an indicator function ξ(k|λ, E) change dependent on whether E is performed using a setting S M in which either |0 0| or |1 1| is measured or a setting S ′ M in which either | π 8 π 8 | or | − π 8 − π 8 | is measured. Suppose that we adopt an ontological model for measurement devices within the KS model where γ M is given precisely by the Bloch vector associated with the projective measurement that M is configured to perform. We can then explicitly see that the two different settings of M correspond to different ontological configurations of the measurement device; γ M ∈ { 0, 1} for setting S M and γ M ∈ { π 8 , π 8 } for setting S ′ M . Thus contextuality simply amounts to the measurement outcome being dependent on the ontological condition of M. 
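One way of making the preceding example fully explicit - a minimal sketch consistent with the hemispherical supports described in Sec. V A, with Θ denoting the Heaviside step function and λ, γ_M unit Bloch vectors - is to take the device-level indicator function for the 'up' outcome of a projective measurement to be

ξ(+|λ, γ_M) = Θ(λ · γ_M),

whose support is exactly the hemisphere H(γ_M) centred on γ_M. A change of setting S_M → S'_M that alters which projective measurement M performs then changes γ_M, and with it the response of M to any given λ ∈ Λ, while changes of M that leave γ_M unaltered change nothing. Contextuality is thereby realized purely through the dependence of the S-M interaction on M's ontological configuration.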
The discussion in this section suggests contextuality to be an entirely natural requirement of realistic theories, in no way comparable to the unintuitive nature of nonlocality. This is an intuition which was also held by Bell with regards to traditional contextuality [14], "The result of an observation may reasonably depend not only on the state of the system (including hidden variables) but also on the complete disposition of the apparatus." But despite what we have seen, it is possible that contextuality may not be quite so simple to interpret. On analyzing the contextual interactions between S and M in more detail one might find stronger restrictions on their dynamics, restrictions that may seem less natural than a simple dependence of a measurement outcome on the interaction between S and M. The point we wish to emphasize, however, is that as far as we are aware, there do not exist proofs showing the necessity of such stronger constraints 15, and existing proofs hint only towards the natural kind of dependence outlined above. In the next section we use contextuality to deduce a property that must be possessed by ontological models: deficiency. Deficiency tells us that the realistic states in an ontological model are unable to respect certain operational relations between preparations and measurements from quantum mechanics. We argue in Sec. VI A that we would intuitively expect these relations to carry over to a realistic description of quantum mechanics, and thus deficiency is at least one aspect of contextuality that demonstrates restrictions stronger than one might have expected.

15 It should be noted though that there is at least one exception to this statement. Consider tailoring the measurement arrangement of M to be such that a change of its measurement context corresponds to altering parts of M that are space-like separated. Then the requirement of non-contextuality actually becomes a requirement of nonlocality [24]. However, this special case does not shed any light on the implications of contextuality in situations where it possesses an identity separate from nonlocality.

VI. DEFICIENCY

An interesting feature of the KS model is that the epistemic state associated with preparing a system according to state |ψ⟩ has a support equal to the support of the indicator function associated with performing a projective measurement |ψ⟩⟨ψ| (see Eqs. (10) and (11)). That is, Supp(µ(λ|ψ)) = Supp(ξ(ψ|λ)). This property is not possessed however, by Bell's first model. By considering how its epistemic states and indicator functions act over the subset Λ' of ontic states we can see that Supp(µ(λ|ψ)) ⊂ Supp(ξ(ψ|λ)). This lack of an equality between supports of the epistemic states and indicator functions associated with preparing or measuring the same quantum state |ψ⟩ is what we call deficiency. Bell's first model is thus deficient, whilst the KS model fails to exhibit the property. The KS model also fails to exhibit TC, which it could not possibly exhibit since it is only defined for two dimensional Hilbert spaces. Bell's first model however, being an outcome deterministic model for Hilbert spaces having dimension greater than 2, is bound by the Kochen Specker theorem to exhibit TC. One might therefore speculate at the possibility of some kind of relationship between TC and deficiency. In fact we will shortly show that any model exhibiting contextuality for projective measurements (of which TC is the only known quantified example 16) must exhibit deficiency.

16 Note however, that one can envision other (un-quantified) projective measurement contexts, such as the possibility of altering a measurement's von Neumann chain.
First however, we must take a moment to define deficiency more rigorously. In the brief introduction presented above, we referred to there being a single epistemic state µ(λ|ψ) associated with a preparation |ψ⟩, and a single indicator function ξ(ψ|λ) associated with a projective measurement |ψ⟩⟨ψ|. The discussion in Sec. V A showed that in some cases one cannot get away with only associating a single indicator function with |ψ⟩⟨ψ|. Rather, TC implies that to unambiguously specify an indicator function one will also need to specify the context of a measurement. It is also a possibility (although it has not yet been proven to be a necessity) that more than one epistemic state could be associated with a given pure state preparation |ψ⟩, depending on the setting S_P used to prepare it. Thus referring to deficiency as meaning Supp(µ(λ|ψ)) ⊂ Supp(ξ(ψ|λ)) is somewhat ambiguous. With respect to which contexts do we need this expression to hold? Accordingly, we adopt a refined idea of deficiency. An ontological model will be said to not be deficient if Supp(µ(λ|ψ, S_P)) = Supp(ξ(ψ|λ, S_M)) for all S_P, S_M, where S_P and S_M denote full specifications of the devices' settings, including their context. Note that such an equality between supports is the only possibility in a non-deficient model, since we show in Lemma 1 of Appendix B that the epistemic states and indicator functions of any model must always satisfy Supp(µ(λ|ψ, S_P)) ⊆ Supp(ξ(ψ|λ, S_M)). Thus we rigorously classify ontological models as deficient by the following criteria,

Definition 13 An ontological model is said to be deficient if there exists a pure quantum state |ψ⟩ for which we have

Supp(µ(λ|ψ, S_P)) ⊂ Supp(ξ(ψ|λ, S_M))

for some particular S_P and S_M.

In terms of the quantum formalism, deficiency states that the set of ontic states possibly describing a system prepared in a quantum state |ψ⟩ cannot be the same as those ontic states triggering a positive outcome for a measurement |ψ⟩⟨ψ|. It is quite simple to show that TC implies deficiency, so that any proof of the necessity of TC in ontological models implies the necessity of deficiency as a simple corollary.

Theorem 1 Any ontological model capable of describing systems of dimension greater than 2 must be deficient in the sense of Definition 13.

Proof. The proof of this theorem proceeds in two parts. We first present a simple argument showing that any outcome indeterministic ontological model must be deficient. Following this we complete the proof by showing that deficiency must also apply to all outcome deterministic models, so long as TC holds. First then, consider outcome indeterministic models, and suppose, for a reductio ad absurdum, that deficiency does not hold. Then, for every quantum state preparation |ψ⟩ and associated projective measurement |ψ⟩⟨ψ|, the model employs epistemic states and indicator functions satisfying,

Supp(µ(λ|ψ, S_P)) = Supp(ξ(ψ|λ, S_M)) for all S_P and S_M. (54)

Now since we expect that a system prepared in a state |ψ⟩ should always pass a projective measurement test |ψ⟩⟨ψ| then we require,

∫ dλ µ(λ|ψ) ξ(ψ|λ) = 1. (55)

However, since µ(λ|ψ) is a normalized probability distribution over Λ (see Eq. (1)) then we can only satisfy (55) by having ξ(ψ|λ) = 1 for all λ ∈ Supp(µ(λ|ψ)). But if deficiency does not hold then this would also imply that ξ(ψ|λ) = 1 for all λ ∈ Supp(ξ(ψ|λ)) - i.e.
that ξ(ψ|λ) is a deterministic indicator function, contrary to our initial assumption. Thus we conclude that if a model is outcome indeterministic then it must be deficient.

Now we turn to outcome deterministic ontological models. For another reductio ad absurdum, we again consider an ontological model that is not deficient, so that again (54) holds. Now fix a preparation setting S_P. Eq. (54) then implies that we will have,

Supp(µ(λ|ψ, S_P)) = Supp(ξ(ψ|λ, S_M)) ∀ S_M, (56)

and thus,

Supp(ξ(ψ|λ, S_M)) = Supp(ξ(ψ|λ, S'_M)) (57)

for any two measurement settings S_M ≠ S'_M. But recalling Definition 8, Eq. (57) has shown that, since deterministic indicator functions only assume the values 0 and 1,

ξ(ψ|λ, S_M) = ξ(ψ|λ, S'_M) for all λ ∈ Λ, (58)

and so,

the outcome value assigned to the projector |ψ⟩⟨ψ| cannot depend on the PVM used to implement it, i.e. the model cannot exhibit TC for |ψ⟩⟨ψ|. (59)

But we know from the Kochen Specker argument (reproduced in Appendix A) that there exists some |ψ⟩ and some settings S_M, S'_M for which TC can be proven to occur in any outcome deterministic ontological model of quantum mechanical systems having dimension greater than 2. Thus we deduce from (59) that any such outcome deterministic ontological model must be deficient.

We have trivially been able to show that any outcome indeterministic model must be deficient. But the possibility remains that outcome deterministic models of 2 dimensional quantum systems may not be deficient. This is because Theorem 1 shows that deficiency results when deterministic indicator functions are dependent on the measurement setting S_M. This occurs when a model exhibits TC, which it cannot if it describes a two dimensional system. But deficiency can also follow if deterministic indicator functions are dependent on the preparation setting of a system, S_P. As we noted in Sections III 3 and III 5, both Aaronson's model and Bell's second model - through their choice of ontic state space - exhibit a dependence of M on S_P, and indeed we can see that both these models are deficient. For example, in the case of Bell's second model, Supp(µ(λ|ψ)) = H(ψ) × {ψ} (where H(ψ) is the hemisphere of Λ' centered on ψ), so that epistemic states are restricted to have their supports over only one element ψ ∈ Λ'', determined by their preparation setting S_P. The indicator functions however, due to their dependence on the system's quantum state (i.e. S_P), have the larger support Supp(ξ(+a|λ)) = ∪_{λ''∈Λ''} H(a'(λ'')) × {λ''}, where we have written a'(λ'') to make clear the implicit dependence of a' on λ'' (see Sec. III 5). This support includes Supp(µ(λ|ψ)) as the special case λ'' = ψ, since then a'(ψ) = ψ. Thus deficiency is achieved because of ξ(a|λ)'s dependence on Λ'', i.e. on S_P.

One might also be led to think of deficiency as an implication of TC (since for outcome deterministic models it is the existence of TC that ensures deficiency). But we have also trivially managed to show that deficiency must exist in outcome indeterministic ontological models, for which TC cannot possibly be exhibited. Thus deficiency actually holds for a wider class of ontological models than TC.

A. Interpreting deficiency

We have seen that for a large class of ontological models the set of ontic states possibly describing a system prepared in a quantum state |ψ⟩ cannot be the same as those ontic states triggering a positive outcome for a measurement |ψ⟩⟨ψ|. But what implications does this deficiency have? How would a deficient ontological model behave? We now attempt, through an analogy, to describe the operational implications of deficiency, and show that it is in some sense a surprising property of ontological models.
Of course arguments like this that intend to address 'intuitiveness' or 'surprisingness' are highly subjective, but regardless of how it is read, the analogy we provide below nevertheless gives some picture of the behavior of deficient ontological models. Crucial to our definition of deficiency is that we implicitly hold there to be an association between a quantum preparation |ψ and a quantum measurement |ψ ψ|, warranting a comparison of the associated epistemic states and indicator functions. This is of course motivated by the fact that |ψ ψ| is the unique rank one measurement for which the Born rule yields an outcome probability of 1 for a system prepared according to |ψ . Understanding the role of this association in an ontological model is key to understanding deficiency. To this end, we digress into a simple analogy from classical physics. Imagine a toy system consisting of a small ball b, and suppose that a complete description of the ball is given by a specification of its position. The ball is a completely classical object, and so it will always have some definite position regardless of whether or not it is observed. Suppose that the possible preparation and measurement procedures that one can perform on b are defined by boxes fixed at definite positions in space. Preparing b 'according to a box B P ' implies that b is known to reside at some definite but unknown position within B P immediately after the time of preparation. The boxes thus represent a restriction on our ability to know the exact position at which b is placed during a preparation. Similarly suppose that the measurements that we can perform on b are restricted to being performed 'according to some box B M '. By this we mean that the outcome of such a measurement would tell us only whether or not the ball resided within that box, but not its exact position. This 'box-world' is in many ways analogous to our ontological model constructions. The position of b -being a complete description of the ball system -is analogous to our system ontic states λ ∈ Λ and the boxes B P and B M are representative of the supports of epistemic states and indicator functions over Λ. Now scientists living in box-world, perhaps through some perverse historical accident, have come to adopt a theory that they refer to as the 'box-o-centric' theory. In this theory it is asserted that there is no ball b, and no concept of real positions at all -only the abstract concept of a box 17 . Such a theory, wherein we talk only about boxes, is very strange, since, although the concept of a box exists, there is no notion of 'being contained' within a box, and furthermore (since box-o-centrics do not find position to their taste) no way in which to distinguish between boxes in terms of the positions at which they reside. We mentioned above how we associate a quantum preparation |ψ and projective measurement |ψ ψ| because of the probabilities obtained through the Born rule. Suppose that a box-o-centric scientist wanted to similarly try and associate box preparations and measurements. Now a box-o-centric advocate is unable to compare the positions of two boxes as a reference for such associations, since she does not believe in such concepts. Thus the only way a box-o-centric could identify a measurement as being a measurement of box B would be in a way analogous to how we identify |ψ ψ| as a measurement of |ψ in quantum mechanics. That is, test whether that measurement always gives a positive outcome when performed on systems prepared according to box B. 
A box satisfying this criterion would, as far as the scientist is concerned, be the best candidate box for performing a measurement of box B. Thus box-o-centrics are restricted to only compare boxes in an operational fashion. But box-o-centrics are not the only scientists in box-world; there are also box-realists, who heretically believe that positions exist. They propose that b always resides somewhere, regardless of the fact that box-world scientists are somehow condemned to only ever possess incomplete information about its position. Now these box-realists, who have no qualms with positions, would naturally hope that preparation and measurement boxes which had previously been identified with each other by box-o-centrics would actually be equal - i.e. enclose the same positions. Then whenever a measurement of box B had been performed, it really would have been telling us that b had been prepared with a position inside the box B.

We can level a similar hope at quantum mechanics; that upon introducing the idea of ontic states, a measurement |ψ⟩⟨ψ| will remain associated with a preparation |ψ⟩. By this we mean that we would like the ontological description to be such that any system yielding a positive outcome for |ψ⟩⟨ψ| will have been described by an ontic state that a preparation |ψ⟩ could have left it in, i.e. we would hope that Supp(µ(λ|ψ)) = Supp(ξ(ψ|λ)). Since |ψ⟩⟨ψ| is the best measurement that quantum mechanics provides for testing whether a system is in a state |ψ⟩, our proof of deficiency shows that there is no quantum measurement that would allow us to deduce with certainty whether the ontological configuration of a system was compatible with a preparation |ψ⟩. So deficiency tells us that the real description introduced by an ontological model cannot, as we might have hoped, maintain the operational association we make between preparations and measurements in quantum mechanics. Any such realistic theory must instead exhibit a more complicated structure.

One can also use deficiency to restrict the dynamics an ontological model implements upon measurement. For some states, there are occasions when a measurement of the projector onto that state will necessarily induce a disturbance of a system's ontic state. To help show this, denote,

D(ψ) ≡ Supp(ξ(ψ|λ)) \ Supp(µ(λ|ψ)). (60)

Deficiency shows that there exist states ψ such that D(ψ) ≠ ∅. It will also be helpful to note that for any λ ∈ D(ψ) one can always find a state |φ⟩ such that λ falls in the support of its epistemic state. Furthermore, since its epistemic state has a finite overlap with D(ψ), this state |φ⟩ will neither be equal, nor orthogonal to |ψ⟩. Now the update rule that quantum mechanics specifies when obtaining an outcome Π_ψ = |ψ⟩⟨ψ| of some PVM performed on a system in state ρ_φ = |φ⟩⟨φ| is,

ρ_φ → Π_ψ ρ_φ Π_ψ / Tr(Π_ψ ρ_φ) = |ψ⟩⟨ψ|. (61)

One might expect that the analogous update rule in a deterministic ontological model would be,

µ(λ|φ) → µ(λ|φ) ξ(ψ|λ) / ∫ dλ' µ(λ'|φ) ξ(ψ|λ'), (62)

which essentially 'projects' the system's epistemic state onto the support of ξ(ψ|λ). However, this non-disturbing update rule must fail in deficient ontological models because deficiency requires λ to be disturbed upon measurement, as we now show. Suppose that we implement the preparation of a system in some quantum state by filtering the results of a measurement on the system. For example, assuming a von Neumann collapse rule, a preparation of a system S in state |ψ⟩ can be effected by performing a PVM measurement, P, containing the rank one projector |ψ⟩⟨ψ|, on S and then post-selecting only those systems that yield the outcome corresponding to |ψ⟩⟨ψ|.
The systems that will survive this measure-and-filter procedure and be prepared in state |ψ are thus those yielding the outcome |ψ ψ| of P. Therefore any system described by an ontic state satisfying λ ∈ Supp(ξ(ψ|λ)) will successfully be prepared in state |ψ by this method. Now naturally, we require that the ontic state of any system said to be prepared in a state |ψ should satisfy λ ∈ Supp(µ(λ|ψ)). But as we noted previously, a system can be configured according to an ontic state λ ∈ D(ψ), such that the measure-and-filter procedure will prepare it in state |ψ , but yet it does not satisfy the associated requirement λ ∈ Supp(µ(λ|ψ)). Measurements in a deficient model must employ some kind of ontic state dynamics to rectify this inconsistency, and consequently Eq. (62) cannot correctly describe the measurement process in ontological models. Whilst one can give much simpler proofs to show that realistic theories must provide such a disturbance on measurement, our derivation shows explicitly how disturbance can be related to the contextual nature of a theory. VII. CONCLUSIONS We have outlined how one can increase the scope of an existing formalism for realistic theories to include models that consider measurement procedures in more detail. An often debated topic is whether or not contextuality is a truly surprising requirement of ontological models. In Sec. V B we have shown how our quantitative description of measurement devices allows contextuality (to the extent that it is normally considered) to be realized as a reasonable dynamical constraint on the interaction of a system and measurement device. However, as we stressed previously, this leaves open the possibility that these constraints might, under further investigation, take on a more pathological form. Indeed, in Sec. VI we went some way in this direction by arguing that deficiency -the fact that one cannot faithfully associate measurements with a given preparation -provides at least one aspect of contextuality which is manifestly not so reasonable. If nothing else, we see this as evidence that there is more to be said about contextuality, and that judgement of its implications should be reserved until one can quantitatively analyze its effects in more depth. Addressing problems from quantum information using a realistic approach to quantum mechanics can be a powerful tool, a fact highlighted by Aaronson's work on complexity and hidden variables. Using the ontological model formalism, we have characterized those models to which his results apply. Another key motivation for our study of ontological models is to quantify the conceptual problems of quantum mechanics relative to a realistic framework. Crucially, we see the utility of a realistic approach as being able to highlight these problems in a familiar language, regardless of whether the approach satisfactorily solves them. Essentially, our motivation is to see what properties a realistic theory must possess in order to reproduce quantum mechanics. In this paper we have tried to build on this approach by going some way to quantitatively clarifying the status of contextuality and introducing the property of deficiency. Much work remains to be done in order to fully understand the requirements of any realist theory reproducing quantum mechanics, but there has been evidence [8] to show that this approach may indeed be a fruitful one. [12] to provide a geometric impossibility argument proving the necessity of traditional contextuality. VIII. 
ACKNOWLEDGEMENTS We thank Robert Spekkens for many helpful discussions and for proof reading this paper. NH acknowledges support from Imperial College London. TR acknowledges support from EPSRC. The argument that Kochen and Specker (KS) used [12] to prove that any realistic theory must exhibit traditional contextuality (TC) involves a remarkable geometric construction which we now outline. KS did not derive their proof using the formalism we introduced in Sec. II, but instead worked in a simpler (yet more restrictive) formalism wherein they simply consider assigning a definite outcome value v(P ) ∈ {0, 1} to any projective measurement P . We now review their proof within our ontological model approach, allowing us to directly see traditional contextuality's repercussions for these theories. The argument begins by considering a specific set, , of 116 states pertaining to a quantum system S described by a three dimensional Hilbert space. KS represent these states graphically by assigning a graph vertex to each of the 116 states, and connecting vertices by an edge if they correspond to orthogonal states. The spectacular resulting graph, shown in Fig. 4, can be considered an 'orthogonality map' of the set Ψ. The elements of Ψ are taken to define a set of 116 projective measurements that one could performed on a 3 dimensional quantum system, P = {|ψ i ψ i |} 116 i=1 . An element |ψ i ψ i | ∈ P represents a test for whether a system is in the state |ψ i . An outcome deterministic ontological model introduces a set of 116 indicator functions Ξ = {ξ (i|λ)} 116 i=1 , which are distributions over the ontic state space of S. These are taken to be in one-to-one correspondence with the set P of projectors. We restrict our attention, as KS did in their original argument, to outcome deterministic ontological models 18 . Therefore, when evaluated at some fixed, but otherwise arbitrary ontic state λ ′ ∈ Λ, each element of Ξ will specify an assignment of either 0 or 1 to its corresponding projector from P. Equivalently, we can think of each element of Ξ as specifying, for each λ ′ ∈ Λ, an assignment of 0 or 1 to each vertex in Fig. 4. The task at hand for an ontological model is to perform these binary assignments to graph vertices in a way consistent with the predictions of quantum theory. KS re-word this task as a graph coloring problem by representing the assignment of 0 or 1 to each element of P as a coloring of the corresponding vertex in Fig. 4 as either red (for an assignment of value 0) or green (for an assignment of value 1). The task faced by an ontological model is then to color the vertices of Fig. 4 in a way consistent with quantum mechanics. But just what are the restrictions that quantum theory imposes on such a coloring? Having defined their coloring scheme, KS derive a set of three coloring constraints, which are imposed on any coloring of Fig. 4 by the predictions of quantum mechanics, 1. Every vertex must be colored either red or green. 2. Any three vertices forming a triangle (a triad of vertices) must be colored such that one and only one is green. 3. Any two connected vertices must be colored so that both of them are not green. (A1) The first of these constraints is the defining requirement of realism, whilst the second and third can be deduced from Lemmas 3 and 4 in Appendix B. To see how, note that since we consider a 3 dimensional quantum system, vertices forming a triad are associated with three mutually orthogonal states. Thus each triad defines a PVM measurement on S. 
Lemmas 3 and 4 together imply that one and only one of the triad of (assumed indeterministic) indicator functions associated with a PVM can assign a value of 1 for any given λ ∈ Λ. Thus one and only one vertex from a triad can be colored green. Having given a coloring scheme and a set of constraints on how the scheme must be applied, KS then employ a geometrical argument to show that coloring Fig. 4 according to conditions (A1) is impossible (see [12,24,29] for outlines of this geometrical argument). There is however, an implicit and subtle assumption required for this proof to go through. Specifically, there 18 The generalized notion of contextuality that we introduce in Sec. V A can be applied to outcome indeterministic models. is a subset of vertices in Fig. 4 which belong to more than one triad, and the geometrical part of the KS argument implicitly needs to assume that one does not alter the color assigned to such a vertex dependent on which triad it is considered to reside in. Since every triad defines a PVM then such a dependence would constitute a reliance of the outcome assigned to a projector on the particular PVM that a measurement device M employs to measure it. This is precisely TC. Hence for the KS impossibility argument to go through one must assume a realistic theory which assigns values in a traditionally non-contextual manner. Thus Fig. 4 can be colored consistently only by traditionally contextual theories. Consistently coloring Fig. 4 is a mandatory requirement of any realistic interpretation of quantum mechanics (since such a view should assign all attributes pre-existing values) and so we are forced to conclude that any realistic theory must exhibit TC. It is worth mentioning that in the traditional formulation of the KS argument there are several subtleties involved in assuming that quantum mechanical statistics can be taken to imply constraints on an realistic theories predictions for individual measurement outcomes (see [29] for more details). These philosophical subtleties still remain in our adaption of the argument. In particular, they implicitly arise in our derivation of Lemmas 3 and 4 which are crucial in deriving the coloring constraints (A1). APPENDIX B: ELEMENTARY RESULTS There are several simple relations between the epistemic states and indicator functions of any ontological model which can be seen to follow almost immediately from the definitions of these distributions. In this appendix we outline those relations which are of use to us in the main text. Whether or not we can see the inclusion in (B1) to be strict is the subject of our discussion of deficiency in Sec. IV. We can furthermore deduce two simple relations for the supports of indicator functions associated with a PVM measurement. Lemma 3 The set of indicator functions {ξ(k|λ)} k associated with elements {|k k|} k of a PVM measurement must have supports completely spanning the system's ontic state space Λ, k Supp(ξ(k|λ)) = Λ. (B4) Proof. This Lemma follows directly from (2), which encodes the quantum mechanical requirement that a PVM should always exhibit some outcome. (B4) ensures that an ontological model will predict some PVM outcome, no matter what ontic state describes a system. 
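As a simple illustration of (B4), consider the KS model of Sec. III 2 applied to a qubit PVM {|ψ⟩⟨ψ|, |ψ⊥⟩⟨ψ⊥|}: the two indicator functions are supported on the hemispheres H(ψ) and H(−ψ) centred on the antipodal Bloch vectors of |ψ⟩ and |ψ⊥⟩, and

H(ψ) ∪ H(−ψ) = Λ,

so that (up to the boundary circle, which has measure zero) every ontic state on the Bloch sphere yields one of the two outcomes, as (B4) requires.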
We can in fact see that the supports of indicator functions associated with elements of a PVM not only span Λ (as in (B4)) but, if the indicator functions are idempotent, furthermore partition it.

Lemma 4. A set of deterministic indicator functions {ξ(k|λ)}_k associated with elements {|k⟩⟨k|}_k of a PVM measurement must have mutually disjoint supports:

Supp(ξ(k|λ)) ∩ Supp(ξ(l|λ)) = ∅, (B5)

where k and l label any two distinct elements of the PVM measurement.

Proof. This result follows from the quantum mechanical prediction that we should only ever obtain one outcome in any complete PVM measurement. The disjointness of the indicator functions associated with different PVM outcomes is necessary to ensure that there is no λ ∈ Λ that would yield more than one outcome for the PVM.

There are also several quantum mechanical relations between density operators and POVM elements that must carry over to relations between epistemic states and indicator functions, as was noted in [3]. Firstly, consider different convex decompositions of a density operator.

Lemma 5. If a density operator can be prepared according to a convex decomposition ρ = Σ_i p_i |ψ_i⟩⟨ψ_i| (corresponding to configuring a preparation device P according to some setting S_P), then the epistemic state associated with ρ when prepared in this way must satisfy a similar relation:

μ(λ|ρ, S_P) = Σ_i p_i μ(λ|ψ_i). (B6)

Proof. This follows from a purely operational argument. We wish μ(λ|ρ, S_P) to give the probability of the ontic state of S being λ given that P was configured with a setting S_P. Now S_P corresponds to a convex decomposition wherein, with probability p_i, the chance of obtaining λ is given by the probability with which we would expect to find λ if the system was described by the quantum state |ψ_i⟩. But the ontological model's prediction for the latter probability is just μ(λ|ψ_i) and so the overall probability of obtaining λ is given by Σ_i p_i μ(λ|ψ_i), as stated above.

Similarly, we can show that any convex structure of POVM measurements must carry over to the ontological model description of measurements.

Lemma 6. If a POVM can be implemented by probabilistically performing one of a set of PVM measurements, so that a particular effect E from the POVM can be implemented as E = Σ_i p_i |ψ_i⟩⟨ψ_i| (corresponding to a setting S_M of a measurement device M), then the indicator function associated with E when implemented in this way must satisfy:

ξ(E|λ, S_M) = Σ_i p_i ξ(ψ_i|λ). (B7)

Proof. The proof of this Lemma follows from an operational argument similar to the proof given for Lemma 5. Performing the measurement E = Σ_i p_i |ψ_i⟩⟨ψ_i| implies an operational procedure wherein we choose a label i according to the probability distribution {p_i}_i and then implement the associated rank one measurement P_i = |ψ_i⟩⟨ψ_i| by performing the PVM {P_i, 1 − P_i}. Now one can ask how we might write the probability ξ(E|λ, S_M) for obtaining the outcome associated with E given that the ontic state of the system was λ. Had we performed a rank one projective measurement P_i then the probability of getting a positive outcome would be ξ(ψ_i|λ). Since E corresponds to implementing P_i with probability p_i, it follows from elementary probability theory that ξ(E|λ, S_M) = Σ_i p_i ξ(ψ_i|λ), which is the desired result.
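The convexity relations (B6) and (B7) can also be illustrated numerically. The sketch below uses hypothetical epistemic states and indicator functions over a four-point ontic space (all numbers invented for this example, not taken from any particular model) and checks that mixing preparations and mixing measurements acts linearly, so that the operational statistics Σ_λ μ(λ) ξ(E|λ) decompose accordingly.

import numpy as np

# Hypothetical epistemic states and indicator functions over a
# four-point ontic space; the numbers are illustrative only.
mu_psi = [np.array([0.5, 0.5, 0.0, 0.0]), np.array([0.0, 0.2, 0.3, 0.5])]
xi_phi = [np.array([1.0, 0.8, 0.1, 0.0]), np.array([0.0, 0.2, 0.9, 1.0])]
p = np.array([0.3, 0.7])   # weights of the convex decomposition of rho
q = np.array([0.6, 0.4])   # weights of the convex decomposition of E

# Relations (B6) and (B7): mixtures of preparations / measurements map to
# the corresponding mixtures of epistemic states / indicator functions.
mu_rho = sum(pi * m for pi, m in zip(p, mu_psi))
xi_E = sum(qj * x for qj, x in zip(q, xi_phi))

# The operational statistics then decompose bilinearly over the decompositions.
lhs = float(mu_rho @ xi_E)
rhs = sum(p[i] * q[j] * float(mu_psi[i] @ xi_phi[j])
          for i in range(2) for j in range(2))
assert np.isclose(lhs, rhs)
print(f"Pr(E | rho) = {lhs:.4f}")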
2007-09-26T22:20:33.000Z
2007-09-26T00:00:00.000
{ "year": 2007, "sha1": "4b6b3f3874ea5de9522b282c1155f5efd54e9f3d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4b6b3f3874ea5de9522b282c1155f5efd54e9f3d", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
244629692
pes2o/s2orc
v3-fos-license
Research Progress of Micro-RNAs in Apoptosis of Osteosarcoma Osteosarcoma is a primary malignant bone tumor with no effective treatment. Apoptosis, one of the programmed cell death, is any pathological form of cell death mediated by intracellular processes. Under the pathological state, the unregulated regulation of apoptosis can disrupt the balance between cell proliferation and death, causing osteosarcoma proliferation and metastasis. As carcinogenic or tumor suppressor factors, microRNAs (miRNAs) regulate apoptosis of osteosarcoma cells by regulating apoptosis-related signaling pathways and apoptosis-related genes. This review provides the current knowledge of miRNAs and their target genes related to the apoptosis of osteosarcoma. Introduction Osteosarcoma is a malignant tumor originating from mesenchymal tissue, accounting for 20% of primary malignant bone tumors. It is the most common primary malignant bone tumor in children and adolescents, with 70-80% of patients aged 10 to 25 years and an annual incidence rate of 1 to 3 cases per million [1]. Osteosarcoma is a solid tumor that produces osteoid, which occurs in fast-growing epiphysis (such as proximal humerus, distal femur, and proximal tibia), but rarely in the spine, pelvis, and sacrum. It is often accompanied by amputation, lung metastasis, death, and other consequences, and the cure rate of simple operation is only 15-20% [2]. At present, the clinical treatment of osteosarcoma is unsatisfactory. Although some progress has been made in multi-drug chemotherapy and surgical resection, for patients with osteosarcoma who underwent radical amputation combined with chemotherapy, the ve-year survival rate can be increased to 50%-70%, 10% of the patients still have a local recurrence [3]. Therefore, it is essential for osteosarcoma research to nd new treatment methods to further improve the patients' survival rate. MicroRNAs (miRNAs) is a kind of single-stranded RNAs, with long 19 ~ 25 nt, which does not have an open reading frame and does not encode any protein [4]. MiRNAs has been implicated in the occurrence and progression of cancer in a growing number of studies that miRNAs in malignant tumor cells present abnormal, which can positively or negatively regulate cancer progression, such as maintaining proliferative signal transduction, anti-apoptosis, inducing angiogenesis, and cancer cell invasion and metastasis [5]. Recent studies have shown that the expression pattern of miRNAs in osteosarcoma has changed. Lulla et al. showed that differential expressions of twenty-two kinds of miRNAs between osteosarcoma cells and osteoblasts [6], which can regulate the proliferation and apoptosis of osteosarcoma cells in a variety of ways, play a role in promoting or inhibiting cancer in the occurrence and development of osteosarcoma. Although the molecular regulatory mechanism of miRNAs on osteosarcoma cells has attracted widespread attention, its speci c biological effect is still not very clear (see in Table 1). The whole life process of the body is accompanied by cell death, which also occurs in the pathological process of tumor growth. Therefore, it is imperative to determine how to induce cell death in treating human osteosarcoma effectively [7,8]. Cell death can be divided into programmed cell death (PCD) and unprogrammed death according to the mode of death. 
Amongst these, PCD, an active cell death controlled by speci c genes and regulated by a unique signal transduction pathway, plays a vital role in the occurrence and development of osteosarcoma. As a polygene-controlled death process, apoptosis is the most typical and well-studied amongst the forms of PCD. The body can remove aging or abnormal cells through apoptosis so that the homeostasis of the internal environment is maintained in the normal physiological state, while in the pathological state, the disorder of apoptosis regulation will destroy the balance between cell proliferation and death, which has a destructive effect on the body and causes a series of diseases including osteosarcoma. Current studies have found that miRNAs can participate in the apoptosis process of osteosarcoma cells by regulating apoptosis-related proteins and pathways [9][10][11]. It has gradually become a hotspot to explore the mechanism of miRNAs that regulates apoptosis in osteosarcoma Based on the regulatory process of apoptosis and the biological roles of miRNAs, this review focuses on the regulation of apoptosis by miRNAs in recent years, and discuss remaining problems and likely prospects associated with curing osteosarcoma by promoting miRNAs-induced apoptosis, with a view of more comprehensively exploring the important role of miRNAs in the occurrence and development of osteosarcoma, and then provide new ideas for molecular targeted therapy of osteosarcoma. Mirnas And Apoptosis-related Proteins There are many proteins involved in the regulation of apoptosis, such as the Bcl-2 family, the Caspase family, the matrix metalloproteinase family (MMPS), and other apoptosis-related proteins. These factors can in uence the apoptosis process of osteosarcoma cells by regulating the three major apoptosis pathways: death receptor pathway, mitochondrial pathway, and endoplasmic reticulum pathway (see in Bcl-2 family B-cell lymphoma/leukemia-2 (Bcl-2) family plays a vital role in the regulation of apoptosis in the signal transduction of apoptosis pathway, which is also one of the most noticed genes amongst many genes related to apoptosis. A study related to human follicular lymphoma has been rst discovered the Bcl-2 gene [12]. Bcl-2 protein, the rst discovered protein among many Bcl-2 family members, is also the rst proved anti-apoptotic protein and plays an essential role in the cell apoptotic pathway. Chen et al. found that miRNA449a may exert its pro-apoptotic function by inhibiting the expression of Bcl-2 [13]. The CCK-8 and apoptosis assays results suggested that cell proliferation could be inhibited while apoptosis was promoted by restoring of miRNA449a expression in osteosarcoma U2OS cell lines and Saos-2 cell lines. In addition, Bcl-2 was identi ed as the target of miRNA449a by using TargetScan prediction technology and luciferase reporter gene detection technology, and the nal experimental results also showed that the expression of Bcl-2 was negatively correlated with miRNA449a. Studies have shown that both miRNA326 and miRNA143 promoted cell apoptosis by targeting the down-regulation of Bcl-2 [14,15]. Bax (Bcl-2 associated X protein), a pro-apoptotic protein closely related to the function of Bcl-2. Overexpressed Bax tends to follow increased apoptosis, on the contrary, Bcl-2 acts as an anti-apoptotic protein. Zhang et al. showed that transfection of miRNA144 analogs inhibited cell proliferation and signi cantly increased apoptosis in U2-OS cells. 
Furthermore, the detected results showed that the expression of Bax and caspase-3 was increased, while the expression of the anti-apoptotic protein Bcl-2 was decreased [16]. The interaction between Bax and Bcl-2 can be described as follows: increased Bax significantly promotes the production of Bax/Bax homodimers, while overexpressed Bcl-2 can break apart many Bax/Bax dimers to generate the more stable Bcl-2/Bax heterodimer, which eventually plays a role in inhibiting apoptosis. All of these findings demonstrate that the Bax/Bcl-2 ratio can affect cell apoptosis. The mitochondrial pathway is one of the molecular regulatory pathways of apoptosis; it is also called the Bcl-2-regulated pathway because it is regulated by the Bcl-2 family [17] and can be modulated by interactions among Bcl-2 family members. As an effector, Bax can form pores in the outer mitochondrial membrane through oligomerization, eventually leading to MOMP; pro-apoptotic proteins such as cytochrome C and Smac located in the mitochondrial intermembrane space are then released into the cytoplasm, triggering the caspase cascade response and promoting cell apoptosis.

Caspase family

The caspase family is a group of proteases that regulate cell apoptosis and are activated sequentially. Compared with normal osteoblast cell lines, miRNA221 and miRNA196a were increased in osteosarcoma cell lines. Overexpression of miRNA221 and miRNA196a inactivated caspase3. Transfection of the two inhibitors into osteosarcoma cell lines, respectively, showed a significant increase in caspase3 levels, significantly inhibited cell proliferation, migration and invasion, and arrested the cell cycle in the G0/G1 phase [31,32]. The expression of miRNA638 and miRNA190b was down-regulated in patients with osteosarcoma, and overexpressed miRNA638 and miRNA190b could up-regulate caspase3 and induce apoptosis [33,34]. MiRNA34 increases the content of caspase3 by targeting the expression of TGIF2, thus promoting the apoptosis of osteosarcoma cells [35]. These findings suggest that miRNA221, miRNA196a, miRNA638, miRNA190b, and miRNA34 can promote the apoptosis of osteosarcoma cells by acting on caspase3, a common downstream key factor of different apoptosis pathways. The role of miRNAs in regulating caspase3 is also related to the mitochondrial pathway. Related studies have shown that miRNA143 can activate caspase3 by targeting Bcl-2 and induce apoptosis of osteosarcoma cells [15]. The activation of caspase3 by miRNA302b can regulate the expression of Bcl-2/Bim, inhibit the proliferation of osteosarcoma cells, and increase apoptosis [24]. Inhibition of miRNA421 can increase the cell apoptosis rate, caspase3 activity and the Bax/Bcl-2 ratio [36]. Fan et al. showed that the overexpression of miRNA661 could activate caspase9, inhibit the growth of osteosarcoma cells, and promote apoptosis [37], indicating that miRNA661 can activate the signal transduction of the death receptor pathway and lead to the apoptosis of osteosarcoma cells by promoting the activity of caspase9, the initiating effector of apoptosis.

MMPs

The progression of osteosarcoma is closely associated with matrix metalloproteinases (MMPs) [39]. Currently, in humans, there are 23 recognized MMPs that stimulate cancer survival and spread, and they represent a target group for anticancer drugs [40]. As a member of the MMP family, MMP9 has degradation substrates that mainly include gelatin, type IV and type V collagen, elastin, vitronectin (glass adhesive protein), etc.
Because the basement membrane is mainly composed of type IV collagen, glycoprotein and proteoglycan, MMP9 is the most intensely studied and important amongst the MMP family. The overexpression of MiRNA495 can inhibit the proliferation and invasion of osteosarcoma cells and induce their apoptosis, which is related to the inhibition of the expressions of HMGN5, Cyclin B1, Bcl-2 and MMP9 [41]. In addition, both the overexpressed miRNA29b and miRNA181a can target down the expression of MMP9 and promote cell apoptosis [42,43]. MiRNA138 can reduce the invasion of osteosarcoma cells and promote apoptosis by down-regulating the expression of MMP2 and MMP9 [44]. In addition, miRNAs can also affect the apoptosis of osteosarcoma cells by acting on other MMPs. The expression of miRNA2682-3p was signi cantly decreased in osteosarcoma tissues and cell lines, and the overexpressed miRNA-2682-3p could inhibit cell proliferation and promote apoptosis by downregulating the expression levels of CCND2, MMP8 and Myd88 [45]. Transfection of miRNA192 mimics into osteosarcoma cell lines showed that overexpressed miRNA192 could down-regulate MMP11 content, reduce cell proliferation, migration and invasion, and promote apoptosis [46]. Other apoptosis-related proteins Due to regulating cell apoptosis is a complex network, the simple way to study osteosarcoma cells apoptosis often cannot well explain the tumor biological behavior, so the related factors of osteosarcoma cells apoptosis in more in-depth research is likely to reveal preliminarily in its role in the development of osteosarcoma cells apoptosis mechanism, and to provide more broad prospects in the treatment of osteosarcoma. Survivin gene belongs to the inhibitor of apoptosis (IAP) family, and its expression can regulate cell cycle, inhibit apoptosis, promote cell proliferation and angiogenesis [47]. con rmed that mirNA-493-5p could target down KLF5 and block the PI3K/Akt signaling pathway, reduce the activity, migration and invasion of osteosarcoma U20S cells, and promote the apoptosis of osteosarcoma cells. In addition to the regulation of cell growth and proliferation, SOX family can also participate in the regulation of cell apoptosis [52]. The three domains contained in Sox4 protein, the high mobility group (HMG) DNA-binding domain (DBD), and the glycine-rich domain (AA152-227) can all participate in the regulation of apoptosis [53]. Pan et al. [54] studied the function of pediatric osteosarcoma and found that the expression of miRNA188 was negatively correlated with SOX4, and restoring the expression of SOX4 could eliminate the pro-apoptotic effect of miRNA188 on OS cells. SOX5 is involved in chondrogenesis and promotes chondrocyte differentiation, and can directly bind DNA or regulate gene expression through binding with other proteins [55]. Literature has shown [56] that P53 inhibits tumorigenesis by regulating cell apoptosis, metabolic network, free radicals, and senescence. MiRNA34a can reduce FOXP1, known as a B-cell oncogene, through the p53 network [57]. By targeting FOXP2, the overexpression of miRNA139 can promote apoptosis of osteosarcoma cells [58]. Mirnas And Apoptosis-related Signal Pathway MiRNAs affect downstream related factors and thus participate in regulating the proliferation, invasion, and metastasis of tumor cells and in uencing tumor cells' biological behavior through the activation status of related signaling pathways, thus affecting tumor development and outcome regression. 
Currently, miRNAs have been found to regulate classical signaling pathways such as MAPK/ERK signaling pathway, PI3K/Akt signaling pathway, Wnt/β-catenin signaling pathway, Notch signaling pathway, and NF-кB signaling pathway, etc. In osteosarcoma, miRNAs can interact with signaling pathways to exert their oncogenic or oncogenic effects (see in Fig. 1 and Fig. 2) [59]. As a negative regulator of PI3K-dependent Akt signaling, gene of phosphates and tensin homolog deleted on chromosome 10 (PTEN) deletion can lead to PI3K/Akt hyperactivation. In a study of osteosarcoma, thicket (HNK) was found to decrease miRNA21 expression in a dose-dependent manner while upregulation of PTEN levels and PI3K/Akt signaling inhibition pathway were detected [70]. miRNA19a, miRNA21, and miRNA221 also exhibited similar effects [71][72][73]. Wnt/β-catenin signal pathway Several studies have proposed that the Wnt signaling pathway is aberrantly activated in malignant bone tumors as well as tumor bone metastases, such as multiple myeloma, Ewing's sarcoma, osteosarcoma, and bone metastases from breast or prostate cancer [74]. A growing number of studies have shown that the Wnt/β-catenin signaling pathway is closely related to osteosarcoma development [75]. wnt1, wnt4, wnt5a, wnt7a, and wnt14 were expressed in osteosarcoma cell lines. wnt/β-catenin signaling can upregulate oncogenes (e.g., c-Myc, CCND1, c-MET), leading to osteosarcoma development [76]. Osteosarcoma contains tumor stem cells, and the Wnt/β-catenin signaling pathway also has an important role in osteosarcoma cancer stem cells [77]. Discussion Osteosarcoma is a malignant bone tumor that is very common in adolescents or children, and its treatment is currently based on surgery and neoadjuvant chemotherapy. Although the current treatment protocols for osteosarcoma can improve patients' survival rate, most patients still suffer from local recurrence or pulmonary metastasis after treatment, which seriously affects the prognostic outcome of the patients involved. In recent years, with the continuous research on the molecular mechanism of osteosarcoma, studies have shown that miRNAs play a crucial role in the biological processes of proliferation, metastasis, invasion, and drug resistance of osteosarcoma. Amongst them, the regulation of miRNAs on apoptosis has become a research focus in recent years. With the continuous improvement in the study of apoptosis mechanism and the elucidation of the relationship between apoptosis and osteosarcoma occurrence, it has been found that promoting apoptosis of tumor cells is gradually becoming a new strategy for osteosarcoma treatment. Through indepth research on the regulation of apoptosis by apoptosis-related proteins and apoptosis signaling pathway, inducing apoptosis in osteosarcoma cells become a new popular osteosarcoma treatment option. In radiotherapy, which triggers DNA damage in osteosarcoma cells by radiation, p53 gene expression initiates apoptosis and removes osteosarcoma cells, a process that is also mediated by the death receptor ligand Fas/FasL pathway of the apoptosis pathway and involves the Bcl-2 gene family. Many antitumor drugs (e.g., DNA damaging agents, antimetabolites, topoisomerase inhibitors, etc.) 
can kill tumor cells by inducing apoptosis, and their mechanisms of action include (1) damaging the DNA of osteosarcoma cells and initiating the p53 gene; (2) regulating the Bcl-2/Bax ratio or inhibiting the activity of Bcl-2; and (3) regulating mitochondrial membrane permeability and promoting the release of cytochrome C. In contrast, heat therapy involves p53, Bcl-2, and Bax in addition to heat shock proteins and calcium ions. Apoptosis-related gene therapy involves introducing apoptosis-related genes into osteosarcoma cells to enhance the sensitivity of osteosarcoma cells to apoptosis or their resistance to apoptosis induction by radiotherapy. In addition, therapeutic measures targeting apoptosis-related receptor ligands, caspases family, and NF-κB are also available. Although comprehensive international therapeutic measures for osteosarcoma have improved survival rates, patients often die of complications, such as pulmonary metastases, shortly after surgery. MiRNAs can regulate their target genes by degrading miRNAs or inducing miRNAs translation silencing. Abnormal miRNAs expression can be detected in almost all human malignancies. Since miRNAs have been found to regulate the expression levels of more than 90% of protein-coding genes, indirectly in uencing the biological behavior of tumors by interfering with miRNAs expression, which is an extremely promising therapeutic approach. Mature miRNAs can play an essential role in the pathogenesis of osteosarcoma as tumor suppressors or promoters, as miRNAs expression is signi cantly associated with cell proliferation, adhesion, invasion, migration, metastasis, and apoptosis. MiRNAs are stable in circulation because they are resistant to degradation of endogenous circulating RNA enzymes thus quantitatively detected in plasma, serum and whole blood. It seems that circulating miRNAs re ect the pathological changes of miRNAs spectrum in tissues. Circulating miRNAs have great promise as diagnostic, prognostic or predictive biomarkers in the clinical treatment of patients with osteosarcoma. Besides, compared with other bone malignant tumors insensitive to radiotherapy and chemotherapy, radiotherapy and chemotherapy play a vital role in the treatment of osteosarcoma. Many miRNAs regulate the sensitivity of osteosarcoma to chemotherapy and radiotherapy, affecting the therapeutic effect of osteosarcoma. Thus, reducing the expression of pro-oncogenic miRNAs, such as miRNAs silencing, antisense blocking, and miRNAs modi cations, may be regarded as a potential therapeutic option. However, there also exist many limitations for treating osteosarcoma, for instance, the sequence error of miRNAs sequence library, inferior RNA extraction methods, the variability of detection and analysis, the diversity of biological information data analysis, and the non-standard clinical testing of miRNAs. In this paper, we review that the miRNAs family plays a role in inhibiting or promoting osteosarcoma cell death by regulating apoptosis, suggesting that miRNAs regulation of programmed cell death in osteosarcoma may be an emerging therapeutic option for osteosarcoma treatment. In recent years, exosomes have become the focus of extensive attention of scholars. Almost all cells in the human body can secrete exosomes, and exosomes can carry miRNAs and exist stably in peripheral blood and body uids. 
Since abnormal expression of the corresponding miRNAs can be detected in the early development stage of osteosarcoma, detecting the content of tumor cell-derived exosomes carrying miRNAs associated with osteosarcoma progression in early peripheral blood is expected to provide early diagnosis and disease progression monitoring for osteosarcoma. Previous studies have shown that the level of miRNA-25-3p in blood and secrete can be used as a prognostic indicator for patients with osteosarcoma. In addition, mesenchymal stem cell-derived exosomes (MSC-EXO) have attracted extensive attention due to their low immunogenicity, easy access, and storage and are considered to have good potential as targeted gene therapy vectors. Using MSC-EXO as vectors to transfect corresponding miRNAs analogs or inhibitors can regulate the apoptosis of osteosarcoma cells more precisely and stably. In summary, given the key role of miRNAs in the regulation of apoptosis in osteosarcoma cells, future therapeutic measures targeting apoptosis-related miRNAs genes are expected to play a crucial role in treating osteosarcoma. In addition, the better utilization of exosomes and miRNAs will facilitate the early diagnosis of osteosarcoma and targeted therapy as drug carriers. AUTHOR CONTRIBUTIONS All authors contributed to the manuscript and approved of the nal version. Con ict of Interest: All author declare that they have no con ict of interest.
2021-10-18T18:31:44.430Z
2021-09-28T00:00:00.000
{ "year": 2021, "sha1": "86655abff3778e5903da8786a906b245e4df7917", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-922088/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "382352b0e6ab609147fde105f1431ab91a29cf45", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Biology" ] }
245716491
pes2o/s2orc
v3-fos-license
Manipulation under local anaesthesia for idiopathic adhesive capsulitis Adhesive capsulitis which is said to be a self limiting diseases is manipulated under anaesthesia and this process has been used to speed up the recovery of the disaese. Twenty six patients with idiopathic unilateral frozen shoulder underwent suprascapular nerve block and intraarticular local anesthesia with Methyl prednisolone acetate followed by manipulation of the glenohumeral joint and this randomized prospective clinical trial was performed in bone and joints hospital Barzulla Srinagar Kashmir. Differences in range of motion and pain were assessed before manipulation and at first week; 4 th week; 6 weeks; 8 weeks and 12weeks. Passive range of motion increased significantly for forward flexion, abduction, external rotation, and internal rotation. There was a significant decrease in visual analogue pain (VAS) scores between initial and follow-up assessments. This technique is very simple, safe, cost effective and minimally invasive procedure for shortening the course of an apparently self-limiting disease improves shoulder symptoms function. Introduction Shaffer et al. showed that 50.0% of patients treated conservatively experienced either mild pain or stiffness, or both, after an average of seven years [1] . Common cause of shoulder disability occuring in the 40 to 60-year-old age group and affects 2.0 to 5.0% of the general populatio.is because of frozen shoulder [2] . Duplay is considered to be the first one who described in 1872, a painful, stiffening condition of the shoulder, which he termed "périarthrite scapulo-humérale". He suggested manipulation under anaesthesia as its treatment [3] . In 1934 Codman given the name "frozen shoulder", stating that it was characterized by insidious onset, pain near the insertion of the deltoid, inability to sleep on the affected side, painful and restricted elevation and external rotation, but normal radiological appearance [4] . Later in 1945, based upon his findings of synovial changes in the glenohumeral joint Neviaser introduced the term "adhesive capsulitis" [5] . Frozen shoulder is thought to be a self limiting disease, with complete remission occurring within two years. Etiology and the most suitable treatment of this condition is still not clear but various different modalities of treatments have been recommended and a large number of studies have demonstrated successful results. Types of treatment include supervised neglect, oral steroids, intra-articular injections, physiotherapy programmes, manipulation under anaesthesia, arthroscopic capsular release and open surgical release [6][7][8][9][10][11][12][13][14][15][16][17][18][19] . In this study we performed manipulation after local infiltration of coracohumeral ligament with local anesthetic, intraarticular injection of local anesthetic and Methyl prednisolone acetate combined with suprscapularnerve block using similar solution. Technique of manipulation was also different from the conventional techniques described. After manipulation patients performed home exercises. Range of motion improved and there was relief in pain. Materials and methods Patient selection and assessment The study was conducted in Bone and Joints Hospital Barzulla Srinagar. A total of 26 patients, who came to our Out Patient Department from June 2018 to June 2019 were selected randomly using computer generated serial numbers after taking informed consent. 
Inclusion criteria were, age above 40 years, no preceding trauma in the same shoulder, Unilateral involvement, and Contralateral normal shoulder, normal blood sugar level, normal x-ray of the shoulder. We followed the criteria used by Rizk et al. forthe diagnosis of frozen shoulder, which includes passive combined abduction less than 100 degree, external rotation of less than 50 degree and internal rotation of less than 70 degree [20] . The patients who did not meet the criteria were excluded from the study. Clinical assessment of both, normal and affected shoulders were done and Range of motion and pain were evaluated. Pain at rest and at extreme shoulder movements were evaluated using VAS. These were constructed of 10 centimeter lines anchored at one end by '0' means no pain and at the other end '10' which means severe unbearable pain with no intermediate indications. Range of motion was assessed in standing posture using Goniometer. Combined passive abduction was evaluated by measuring the angle formed by the arm and thorax after passively abducting the shoulder (Fig.1). With the arm adducted and the elbow at the side and flexed to 90degree, the angle formed by the forearm and the saggital plane of the body was measured as Passive external rotation. Passive internal rotation of the shoulder was assessed by bringing the hand behind and determining the vertebrae level that they could reach by the thumb. All the movements were in degrees. Technique All the procedure was done in the Out patient Department (OPD) in a separate room maintaining aseptic conditions as required for minor surgical procedure. Intra-articular injection Via anterior approach a mixture of 40 mg of Depot methyl prednisolone, 7 ml of 1% xylocaine and 4 ml 0.5% Bupivacaine was introduced into the glenohumeral joint using a 21G needle. Patient was put supine and the affected shoulder was prepared with povidone iodine solution. Coracoid process was palpated, the needle was inserted one centimetre inferolateral to the coracoid). The coracohumeral ligament was infiltrated with 2 ml of mixed solution. and about 10 ml was injected in the joint. Suprascapular nerve block A mixture of 40 mg of Depot methyl prednisolone, 5 ml xylocaine 1% and 4 ml 0.5% bupivacaine was injected using the technique described by Dangoisse et al. [21] A 21G ' needle was introduced through the skin 2 cm cephalad to the midpoint of the spine of the scapula. The needle was advanced parallel to the blade of the scapula until bony contact was made in the floor of the suprascapular fossa where whole of the 10 ml solution was injected. This technique has previously been demonstrated to be safe and can be used to effectively block the articular branches of the suprascapular nerve. Manipulation Manipulation was first done with the patient supine, after about 10 minutes, when the desired effect of the local anaesthetic was achieved. With the shoulder adducted and the elbow extended, the distal arm was held by the surgeon to perform passive external and internal rotation of the shoulder. Each movement was held for 10 seconds and repeated for 10 times each. Now patient was asked to clamp both the hands in front of the chest. With the help of the sound hand patient was asked to lift affected arm over the head. Patients could comfortably bring the arm over head without much pain. The limb was kept in the same position for 2 minutes. 
Now the patient was asked to put both hands behind the head and asked to gradually bring the elbows to the level of the bed to gain external rotation. In some anxious patients the surgeon needed to assist this movement by gently pushing with the index finger. Then the patient was asked to sit on the bed and repeat the same movement at least 5 times. In sitting position patient was asked to touch the scapula with the help of other hand so as to gain internal rotation. Immediate postmanipulation evaluation of Range of Motion was done. Analgesia and home exercises Indomethacin 25 mg thrice daily, Omeprazole 20 mg twice daily and Amitryptyllin10mg at bed time for 7 days were prescribed at discharge. Additional 20 tablets of Paracetamol 500mg was also given to relieve pain on SOS basis. All patients were given verbal and written instructions regarding a home exercise program. Patients were advised to continue same manipulation movements at home at least 10 repetitions three times a day. Results Total of 26 patients ranging from 40 years to 70 years (mean 58.24) were evaluated, out of which 55.3% were female and 44.7% were male. Frozen Shoulder affected in 65% of non dominant shoulder. A marked restriction of shoulder active ROM was observed in Frozen Shoulder patients before the procedure. Patients also showed a reduction (p< 0.05) inactive shoulder internal rotation, external rotation and abduction of involved shoulder compared to contralateral normal shoulders before the procedure (Table-1). After 12 week after the procedure, the score of shoulder internal rotation, external rotation and abduction active ROM in FS patients for involved extremity were increased (p< 0.05) compared with the pre-procedure level. Both pain at rest and at activity were markedly decreased (p< 0.05) ( Table-2). Discussion Neer et al. [22] observed, in a cadaver study, that release of the coracohumeral ligament increased external rotation both with the arm at the side and with it in 90 degrees of forward elevation Adhesive capsulitis is a common condition seen in the outpatient department characterized by pain and stiffness of shoulder. Though it is considered to be a self limiting disease but the course of disease is protracted and there is some limitation of movement [23,5] . Patho-physiology seems obscure but certain facts has been discovered. In frozen shoulder, the main anatomical change is the thickening of coraco-humeral ligament. The coracohumeral and superior glenohumeral ligaments are considered to be structural contents of the rotator interval capsule, but each have separate origins and insertions [24] . Several authors have recommended release of the coracohumeral ligament, to increase glenohumeral motion, when a frozen shoulder is treated with open release [22,25] . The interval capsule plays a major role in the range of certain motions, in the obligate translation, and in the allowed translation of the glenohumeral joint. The magnitude of these effects varied among shoulders, but the direction of the effect was consistent. Sectioning of the interval capsule increased the ranges of flexion, extension, adduction, and external rotation, and imbrication decreased these ranges of motion. Positions of abduction and internal rotation relaxed the interval capsule [19,26] . This ligament restrains the joint in external rotation when shoulder is adducted. In our technique we performed gentle but firm external and internal rotation movements to stretch the shoulder capsule gently. 
We also infiltrated thecoracohumeral ligament with 2 ml of local anaesthetic mixture to anesthetized the ligament at the time of manipulation. There is always pain and stiffness in the shoulder which altogether produces vicious circle leading to progressive stiffness. The pain in frozen shoulder is neither typical of inflammatory pain nor of neurogenic type which is more severe during night [27] , These suggest of it being related to Complex Regional Pain Syndrome [28] . The suprascapular nerve supplies sensory fibres to about 70%of the shoulder joint, including the superior and posterosuperior regions of the shoulder joint and capsule, and the acromioclavicular joint [29] We blocked suprascapular nerve using three different drugs with different actions. Xylocaine relieved pain immediately, Bupivacaine worked for 24 to 72 hours after that methylprednisolone worked for weeks. Literature shows addition of glucocorticoids in local anesthetic blocks transmission of nociceptive c fibers. The block prolonging effect of steroid is due to its local effect. The action of steroid has been related with the alteration of functions of potassium channel on the excitable tissue [30][31][32][33] . As the direct action of Bupivacaine cannot extend beyond a few hours or days there must be an effect of Depot methyl prednisolone on the underlying pathology, which owes in part to the patient's ability to perform an adequate exercise program. All the manipulations were active and assisted. No passive manipulations were done as passive stretching of the joint produces pain which evokes reflex contraction of antagonistic muscles. After completion of the manipulation the patients were asked to bring the affected limb over the head with the help of the other hand. All the range of movements were performed by patients themselves at home. Thus all range of motions were possible without significant pain, sometimes an audible pop could be heard as a result of breakage of adhesions. Patients were able to perform the same assisted active range of motion exercise at home regularly without pain. Study done by Ronald L. Diercks, showed that result of intensive physiotherapy involving stretching exercises up to pain threshold is worse than supervised neglect 64.0% verses 90.0%.6Most non invasive therapeutic strategies are based on stretching or rupturing the tight capsule by manipulative physical therapy with success rate for achieving good to fair results [7,28,34] . The good result of physical therapy with intra articular corticosteroid injections, with or without hydraulic distension, ranges from 44.0% to 80.0%. [35][36][37][38] more aggressive interventions, such as manipulation under anesthesia and arthroscopic or open release, are a popular form of therapy especially for resistant frozen shoulder. The published success rate for this therapy varies 69% to 97.0%. [14,[39][40][41] The study of using suprascapular nerve block for frozen shoulder showed improvement in pain and range of motion in76.0% of patients at 12 weeks [fig2]. 42In this study we used a combined approach (Intrarticularinjection of local anesthesia with corticosteroid plus coracohumeral infiltration plus Suprascapular nerve block plus gentle manipulation and active assisted range of motion exercises) to the management of FS. We have achieved significant improvements in the range of motion as well as relief of pain in our patient.
2022-01-06T16:17:23.317Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "6c782faab9d547a039131331802263cdc78795cc", "oa_license": null, "oa_url": "https://www.orthopaper.com/archives/2021/vol7issue4/PartL/7-4-135-725.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "092c29c97373d68652ac1da5acc7d0ef5f03483e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
230643917
pes2o/s2orc
v3-fos-license
Design Flood Discharge of 50 Years in Garang River Using Nakayasu Synthetic Unit Hydrograph Method

Various kinds of buildings in civil engineering require careful planning. For example, planning a water structure requires a method for calculating the design flood discharge before the dimensions of the structure can be planned, so that the structure is effective. Design flood discharge can be determined using several hydrograph methods that have been used in water-structure planning in Indonesia. One of the popular hydrograph methods is the Nakayasu Synthetic Unit Hydrograph method. In this case, the design flood discharge is computed for the Garang watershed, located in Semarang, Central Java province, using rainfall data from the past 16 years. Hydrological analysis is carried out first before determining the design flood discharge with return periods of 2, 5, 10, 25, and 50 years. The resulting design flood discharges using the Nakayasu method were, respectively, 305.522 m³/s, 390.742 m³/s, 447.783 m³/s, 520.560 m³/s, and 574.912 m³/s.

INTRODUCTION

The calculation of design flood discharge is the most important aspect of planning a water structure. Design flood discharge is the quantity needed to determine the magnitude of the peak flow in a watershed. This flood discharge is used in calculating the dimensions of water structures such as dams, groundsills, and so on. Design flood discharge can be calculated using the rational method and several hydrograph methods that have previously been used in the planning of water structures in Indonesia. A hydrograph is a diagram that illustrates the relationship between flow rate and time. A hydrograph must be adjusted through hydrological observation and analysis to capture the characteristics of a watershed. Some popular hydrograph methods include ITB, GAMA-1, SCS, ITS-1, ITS-2 and Nakayasu [1]. The method used in this study is the Nakayasu Synthetic Unit Hydrograph method. The flood discharges are calculated for return periods of 2, 5, 10, 25, and 50 years. The location chosen for the research is the Garang watershed, from the head of the river to the coordinates 7° 1' 40.444" S and 110° 24' 7.999" E, a river known for producing high flows in a short time [2]. Therefore, determining the design flood discharge is needed, especially for the benefit of the local community if construction around the river is planned. The research location is shown in Figure 1.

Before drawing a hydrograph curve, the first step is to determine its constituent components, such as rainfall intensity and base flow. Rainfall intensity is the depth of rainfall per unit time; it can be calculated using the Mononobe method, which is described as follows [3]:

where:
I = rain intensity (mm/hour)
t = rain duration (hours)
R24 = maximum rainfall in 24 hours (mm)

Base flow, which means the groundwater flow originating from rainfall that reaches the river through infiltration and percolation, can be estimated with the following formula. The Nakayasu Synthetic Unit Hydrograph method was the first hydrograph method developed in Japan [5]. This method has been applied several times to water structures in East Java. Sutapa [6] noted that, to date, the use of the Nakayasu method has given satisfactory results.
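The Mononobe relation referred to above is commonly written as I = (R24/24) · (24/t)^(2/3); since the formula itself is not reproduced in the text, the Python sketch below assumes this standard form. The input values are illustrative only, not data from the Garang study.

def mononobe_intensity(r24_mm, t_hours):
    """Rain intensity (mm/hour) from the 24-hour maximum rainfall, assuming
    the standard Mononobe formula I = (R24 / 24) * (24 / t) ** (2 / 3)."""
    return (r24_mm / 24.0) * (24.0 / t_hours) ** (2.0 / 3.0)

# Example: 120 mm daily maximum rainfall, 2-hour storm duration.
print(f"I = {mononobe_intensity(120.0, 2.0):.2f} mm/hour")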
The equation used to compute the hydrograph is as follows [7]. Meanwhile, the drawing of the Nakayasu hydrograph curve is divided into three conditions, which are described as follows, where:

Qp = peak flood discharge (m³/s)
C = runoff coefficient
A = watershed area (km²)
Ro = unit rainfall (1 mm)
Tp = time interval from the beginning of the rain until the flood peak
Tg = time of concentration
Tr = unit rainfall duration
T0.3 = the time required for the discharge to decrease to 30% of the peak discharge

METHODOLOGY

This study used several methods, including observation, that is, direct observation of the study site to determine the conditions around the area and other components needed in the study. Documentation was also carried out by collecting data such as rainfall records from the three nearest rain stations and watershed data such as topography, area, and river length. Finally, a literature study was carried out using references from journals, modules, and books that support the research. The initial steps are observation of the research location and data collection. The work then continues with data processing: regional rainfall analysis, frequency and probability analysis, goodness-of-fit (data compatibility) testing, rainfall intensity analysis, and design flood discharge analysis.

RESULT AND DISCUSSION

The rainfall data consist of 16 years of records from the nearest rain stations, namely Sumurjurang, Simongan, and Gunungpati, combined using the Thiessen polygon method. The results of the regional rainfall analysis are provided below. The distribution selection results, using the log-Pearson III distribution with rainfall return periods of 2, 5, 10, 25 and 50 years, are described as follows. The results of the data validity test, using the chi-square method with 5 class intervals, are presented in the following table. The value of the chi-square test statistic was 4, while the critical value in the table was 7.81. It can therefore be concluded that the tested distribution represents the data. The results of the rainfall intensity analysis using the Mononobe method for 24 hours are described as follows. The watershed area and the river length were obtained through analysis in ArcGIS software, and the calculated base flow results are described as follows. The design flood discharge curves obtained with the Nakayasu Synthetic Unit Hydrograph for return periods of 2, 5, 10, 25, and 50 years are presented in the following figure.

CONCLUSION

Based on the results of the discussion, it can be concluded that, for the Garang watershed and using 16 years of rainfall data, the design flood discharge can be determined with several hydrograph methods, one of which is the Nakayasu Synthetic Unit Hydrograph. The values obtained were 305.522 m³/s for the 2-year return period, 390.742 m³/s for the 5-year return period, 447.783 m³/s for the 10-year return period, 520.560 m³/s for the 25-year return period and 574.912 m³/s for the 50-year return period.
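As the Nakayasu equations are likewise not reproduced above, the sketch below uses the forms commonly quoted for the method: Tg estimated from the river length, Tp = Tg + 0.8·Tr, T0.3 = α·Tg, and Qp = C·A·Ro / (3.6·(0.3·Tp + T0.3)). All of these formulas and every input value here are assumptions for illustration, not quantities taken from the Garang analysis.

def nakayasu_peak_discharge(area_km2, river_length_km, c=1.0, r0_mm=1.0,
                            alpha=2.0, tr_hours=None):
    """Peak discharge (m^3/s) of a Nakayasu synthetic unit hydrograph,
    using the commonly quoted formulas (assumed here, not taken from the
    paper): Tg from the river length, Tp = Tg + 0.8*Tr, T0.3 = alpha*Tg,
    Qp = C*A*Ro / (3.6*(0.3*Tp + T0.3))."""
    if river_length_km < 15:
        tg = 0.21 * river_length_km ** 0.7      # hours
    else:
        tg = 0.4 + 0.058 * river_length_km      # hours
    tr = tr_hours if tr_hours is not None else 0.75 * tg
    tp = tg + 0.8 * tr
    t03 = alpha * tg
    qp = c * area_km2 * r0_mm / (3.6 * (0.3 * tp + t03))
    return qp, tp, t03

# Illustrative inputs only (not the Garang watershed values).
qp, tp, t03 = nakayasu_peak_discharge(area_km2=200.0, river_length_km=30.0)
print(f"Qp = {qp:.3f} m3/s per mm of unit rain, Tp = {tp:.2f} h, T0.3 = {t03:.2f} h")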
2020-12-17T09:07:32.153Z
2020-10-24T00:00:00.000
{ "year": 2020, "sha1": "ca46f79903a54d0ad0891b267f08f364d58d5ebf", "oa_license": "CCBY", "oa_url": "https://journal.unnes.ac.id/nju/index.php/jtsp/article/download/26847/10924", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7787535e99574e95228775700835fa3b4dbcc906", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
252693158
pes2o/s2orc
v3-fos-license
ON THE DOUBLING CONDITION IN THE INFINITE-DIMENSIONAL SETTING Abstract We present a systematic approach to the problem whether a topologically infinite-dimensional space can be made homogeneous in the Coifman–Weiss sense. The answer to the question is negative, as expected. Our leading representative of spaces with this property is $\mathbb {T}^\omega = \mathbb {T} \times \mathbb {T} \times \cdots $ with the natural product topology. Introduction Given a nonempty topological space (X, T ), its topological dimension dim(X) is the smallest number n ∈ N ∪ {0} with the property that each open cover B of X has a refinement B (that is, a second open cover with all elements being subsets of elements of the first cover) such that each point x ∈ X belongs to no more than n + 1 elements of B. If no such n exists, then we put dim(X) = ∞. The following note is devoted to explaining why a topologically infinite-dimensional space cannot be doubling.We shall refer to the doubling condition by using the notion of homogeneity in the Coifman-Weiss sense (see Definition 1.6).THEOREM 1.1.Let (X, T ) be a topological space.If dim(X) = ∞, then it is not possible to find a quasimetric ρ and a Borel measure μ for which T ρ = T and (X, ρ, μ) is homogeneous in the Coifman-Weiss sense. The same is true if the small ind(X) or large Ind(X) inductive dimension is used instead (for the definitions of the inductive topological dimensions, see for example [5]).Indeed, homogeneous spaces are metrisable (see Facts 2.2 and 2.3) and separable (see [14,Proposition 2.2]), while all dimensions are topologically invariant and ind(X) = Ind(X) = dim(X) holds for separable metric spaces (see [5,Preface]). It should be emphasised that Theorem 1.1 can be derived from general theory in just a few lines, by using several results that are already known, as a black box (see the proof in Section 2).However, the problem lies at the intersection of different fields of research, and the solution relies on analytical, geometrical and topological arguments that should be combined in the appropriate way.Therefore, we believe that it is worth making the topic more systematic by presenting a detailed approach which will be both elementary and instructive to the reader.We break down the original problem into several simpler subtasks, explain the reasons for making each reduction, and comment on possible obstacles or alternative paths along the way. Studying this kind of problem was originally motivated by a recent question by Roncal, related to analysis on the infinite-dimensional torus.Since this space can be seen as a model example of X from Theorem 1.1, we would like to look at the problem from the standpoint of this particular setting first, and only then pass to the general case. 1.1.The infinite-dimensional torus T ω .By T ω , we mean T × T × • • • , that is, the product of countably many copies of the one-dimensional torus T. One can equip T ω with the usual product topology T T ω and the normalised Haar measure dx (that is, the product of uniformly distributed probabilistic measures on T) to make it a compact Hausdorff group and a metrisable probabilistic space.Then a lot of classical analysis can be developed in the context of T ω , including harmonic analysis on which we focus here. 
Although the structure of T ω seems nice at the first glance, careful examination of specific problems in this setting often leads to negative results or counterexamples to what we know from the Euclidean case R d .To mention just a few such issues, one observes: • divergence of Fourier series of certain smooth functions [7]; • no Lebesgue differentiation theorem for natural differentiation bases [8, 10]; • unboundedness of maximal operators [10,11]; • problems with introducing a satisfactory theory of weights [11]. The instances we have chosen share one common feature.Precisely, they all originate in R d -related questions to which answers are positive in the qualitative sense for each d but also worse in the quantitative sense the bigger d is.In many cases, the key reason for this phenomenon is the behaviour of the so-called doubling condition.Indeed, although the estimate This fact usually becomes the main obstacle while trying to prove results with dimension-free bounds. From this point of view, one may expect that for T ω , the doubling condition is unlikely to hold as, loosely speaking, for each d, a piece of R d can be embedded in T ω .In this direction, the following question was asked by Roncal.QUESTION 1.2.Can one equip T ω with a quasimetric ρ and a measure μ so as to assure the doubling condition and, at the same time, keep the structure of T ω ?https://doi.org/10.1017/S0004972723000011Published online by Cambridge University Press D. Kosz [3] Several remarks regarding Question 1.2 are in order. (1) In the literature devoted to studying T ω , the most popular metric is given by with the toric distance (here x, y ∈ T are understood as elements of [0, 1)) For (T ω , ρ T ω , dx), the doubling condition fails to hold (see [6,Ch. 2.3]).(2) Bendikov in [2, Remark 5.4.6]defines a family of metrics ρ A on T ω by where A is the space of all summable sequences with strictly positive entries. It was asked in [6, Nota 2.34] whether there exists an assumption on a sequence A ∈ A under which (T ω , ρ A , dx) is a space of homogeneous type.This can be seen as a special case of Question 1.2.(3) The last part of Question 1.2 is essential, and omitting it would make the problem trivial.Indeed, in this case, the following (not insightful) answer could be given: Yes, because there exist doubling spaces of the same cardinality as T ω . For example, one could take R with the standard distance and Lebesgue measure, and equip T ω with ρ and μ transferred from R via a given bijection π : R → T ω (in other words, one chooses ρ and μ so that π is a measure-preserving isometry). We show that the answer to Question 1.2 is negative, as expected.This result has important consequences for the whole field of harmonic analysis on T ω , as it reveals that this subject goes beyond the theory of doubling spaces.In what follows, we present two theorems referring to either the geometrical or topological structure of T ω .THEOREM 1.3.Suppose that ρ is a bounded translation invariant quasimetric on T ω .Then it is not possible to find a measure μ, defined on the σ-algebra generated by ρ, for which (T ω , ρ, μ) is homogeneous in the Coifman-Weiss sense.THEOREM 1.4.Suppose that ρ is a quasimetric on T ω such that T ρ = T T ω .Then it is not possible to find a Borel measure μ for which (T ω , ρ, μ) is homogeneous in the Coifman-Weiss sense. Theorem 1.3 is independent of Theorem 1.1, while Theorem 1.4 is a special case with a simpler proof.Both results answer the question in [6, Nota 2.34] in the negative. 
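The heuristic that a piece of R^d can be embedded in T^ω for each d can be made quantitative already on the finite-dimensional torus: under the coordinatewise toric distance combined with a maximum (an assumed product metric, used purely for illustration), T^n contains 2^{nl} points that are pairwise 2^{-l}-separated, and the exponent grows with n. The Python sketch below verifies this for small n and l; it is this kind of growth that obstructs any uniform doubling constant.

from itertools import product, combinations

def toric_dist(a, b):
    """Distance on T = R/Z between a, b in [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def sup_metric(x, y):
    """An assumed product metric on T^n: the maximum of coordinatewise toric distances."""
    return max(toric_dist(a, b) for a, b in zip(x, y))

def separated_grid(n, l):
    """The regular grid with spacing 2**-l in each of the n coordinates."""
    step = 2.0 ** (-l)
    return list(product([k * step for k in range(2 ** l)], repeat=n))

n, l = 2, 3
pts = separated_grid(n, l)
assert all(sup_metric(x, y) >= 2.0 ** (-l) for x, y in combinations(pts, 2))
print(f"{len(pts)} points in T^{n} are pairwise 2^-{l} separated (2^(n*l) = {2 ** (n * l)})")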
1.2.Homogeneous spaces.Finally, we briefly recall the notion of homogeneity (see [3]).Alongside, we conduct a short discussion on quasimetrics, strongly inspired by [14].DEFINITION 1.5.A quasimetric on a nonempty set X is a mapping ρ : X × X → [0, ∞) satisfying the following conditions: If the last condition is satisfied with K = 1, then ρ is called a metric. There is a canonical way to introduce a topology on X that corresponds to a given quasimetric ρ.Namely, for each x ∈ X and r ∈ (0, ∞), we denote by This definition turns out to be good for several reasons (see the detailed comments later on): (a) it extends the standard definition used in the metric case K = 1; (b) it ensures that topology properties behave well under perturbations of ρ; (c) it always leads to a topology which is metrisable. However, one needs to be careful because, quite surprisingly, in the case where K > 1, it may happen that balls are not open or even Borel according to the properties of T ρ .DEFINITION 1.6.Given a nonempty set X, a quasimetric ρ and a Borel measure μ, we call the space (X, ρ, μ) homogeneous in the Coifman-Weiss sense if μ(B ρ ) ∈ (0, ∞) holds for all balls B ρ ⊂ X (in particular, it is assumed that balls are μ-measurable sets), and there exists a numerical constant If this last condition holds, then we say that μ is doubling with respect to ρ. Notice that in general, under the assumption that all balls are measurable, the doubling condition leads to the following trichotomy: Thus, the condition μ(B ρ ) ∈ (0, ∞) in Definition 1.6 excludes only trivial examples. 2. Proofs of Theorems 1.1, 1.3 and 1.4 2.1.Analysis: from quasimetric to metric spaces.Instead of dealing directly with the problems stated in Section 1, we opt to make some reductions in advance to get rid of several technicalities such as measurability of balls.The first reduction refers to the 'metamathematical principle' [14, Section 3], which says that many quasimetric-related questions can be boiled down to the metric case. One reason the definition of quasimetric is convenient to use is that if ρ is a quasimetric and ρ is symmetric and comparable to ρ, then ρ is a quasimetric as well.For metrics, the corresponding statement is not true.However, the strength of this flexibility sometimes turns into weakness.Indeed, the definition of topology using balls is perfectly suited to the metric case and we pay a certain cost to ensure (a).If K = 1, then the triangle inequality ensures that for an arbitrary reference point x and two points y, z lying close to each other, the distances ρ(x, y), ρ(x, z) are similar.Precisely, we have |ρ(x, y) − ρ(x, z)| ≤ ρ(y, z).Thus, if y ∈ B ρ (x, r), then also B ρ (y, r) ⊂ B ρ (x, r) for some appropriately chosen r, so that the ball B ρ (x, r) is open.This is not true in general if K > 1.To see this, take R with the standard metric ρ R (x, y) = |x − y| and modify it putting In view of the discussion above, ρR is a quasimetric.Moreover, looking at the notion of convergence, we would expect it to generate the same topology on R as the standard one.However, the exact forms of the balls B ρR (0, r) with r ∈ (1, 4) are strongly dependent on the form of E itself, while this set is arbitrary.In particular, one can choose E so that none of these balls is Borel (see [14, Example 1.1] for a simple example of quasimetric space such that all balls fail to be Borel). 
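Definition 1.5 can also be explored numerically. The sketch below takes ρ(x, y) = |x − y|^2 on R, which is a quasimetric but not a metric, and estimates the smallest admissible constant K in the relaxed triangle inequality by sampling random triples; for this ρ the optimal constant is K = 2, approached when z lies near the midpoint of x and y. The example is chosen here for illustration and is not one used in the paper.

import random

def rho(x, y):
    """Square of the Euclidean distance on R: a quasimetric, not a metric."""
    return (x - y) ** 2

# Estimate the smallest K with rho(x, y) <= K * (rho(x, z) + rho(z, y))
# over random triples; for this rho the exact optimal constant is K = 2.
random.seed(0)
worst = 0.0
for _ in range(100000):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    denom = rho(x, z) + rho(z, y)
    if denom > 0:
        worst = max(worst, rho(x, y) / denom)
print(f"estimated smallest K: {worst:.3f}  (exact value: 2)")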
Nonetheless, once we realise that instead of balls, the topology T ρ is what we should look at, things start to look more optimistic.To see this, we need the following definition.DEFINITION 2.1.Two quasimetrics on X, ρ 1 and ρ 2 , are called equivalent if there exists It is easy to verify the result below, which one can relate to (b).FACT 2.2.If ρ 1 and ρ 2 are equivalent, then T ρ 1 = T ρ 2 .Also, for a quasimetric ρ and α ∈ (0, ∞), the mapping ρ α defines a quasimetric such that T ρ α = T ρ . The next fact, which justifies (c), can be used to reduce our problems to the metric case. The proof of Fact 2.3 can be found in [13,Proposition] (see also [1]).We now explain briefly the motivation behind such a definition of ρ q .If K > 1, then ρ(x, y) > ρ(x, z) + ρ(z, y) can occur.Thus, to assure the triangle inequality, we would like to make the distance between x and y not larger than the right-hand side.The same applies to ρ(x, z), ρ(z, y) so we eventually take into account all finite chains going from x to y. However then, as the number of intermediate points goes to infinity, the corresponding expressions may go to zero (for example, if ρ(x, y) . Hence, we need to adjust our original idea and, as it turns out, penalising long chains by using q close to zero does the job perfectly.COROLLARY 2.4.Regarding Theorems 1.1, 1.3 and 1.4, it is enough to consider metrics.Indeed, by using Facts 2.2 and 2.3, one can verify that if (T ω , ρ, μ) is a quasimetric space which is homogeneous in the Coifman-Weiss sense, then (T ω , ρ q , μ) is a homogeneous metric space that enjoys the same topology.Also, if ρ is bounded and translation invariant, then so is ρ q . From now on, we can concentrate solely on metrics.However, to satisfy the reader's curiosity, we shall comment on which results have their quasimetric analogues. 2.2.Geometry: from doubling to geometrically doubling spaces.Our next goal is to show that yet another important reduction can be made.Namely, although both ρ and μ are involved in verifying whether (X, ρ, μ) is homogeneous or not, it is actually the metric that plays the more important role here. It is clear that if μ is doubling with respect to ρ and the second option in the above trichotomy occurs, then one should not be able to find arbitrarily many disjoint balls of radius r/2 centred at points y ∈ B ρ (x, 2r).Indeed, if that would be the case, then at least one of these balls, say B ρ (y 0 , r/2), should have very small measure compared to μ(B ρ (x, 4r)) (because there are many disjoint balls, each of them satisfying B ρ (y, r/2) ⊂ B ρ (x, 4r)), and the doubling condition would fail for one of the balls B ρ (y 0 , r/2), B ρ (y 0 , r), B ρ (y 0 , 2r). The discussion above motivates the following definition. DEFINITION 2.5.A quasimetric space (X, ρ) is called geometrically doubling if there exists a number N ∈ N such that every ball B ρ (x, 2r) can be covered by no more than 2 N balls of radius r.In this case, we also say that ρ is geometrically doubling. It turns out that, in some sense, failing to be geometrically doubling is the only obstacle that prevents a given space from becoming homogeneous after a suitable choice of μ. 
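The statement of Fact 2.3 and the definition of ρ_q did not come through in extraction. A plausible reconstruction, modelled on the standard metrisation argument for quasimetrics (cf. [13] and [14]) and consistent with the chain-based motivation given above, is
\[
\rho_q(x,y)\;=\;\inf\Bigl\{\sum_{i=0}^{n-1}\rho(z_i,z_{i+1})^{\,q}\;:\;
n\in\mathbb{N},\ z_0=x,\ z_n=y,\ z_1,\dots,z_{n-1}\in X\Bigr\},
\]
which satisfies the triangle inequality by construction and, for q ∈ (0,1] small enough (depending only on K), is comparable to ρ^q; hence ρ_q is a metric generating the same topology T_ρ. The exact formulation used in the paper may differ in inessential details.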
FACT 2.6.If a metric space (X, ρ, μ) is homogeneous in the Coifman-Weiss sense, then ρ is geometrically doubling.Conversely, if ρ is a geometrically doubling metric on X, then there exists a Borel measure μ such that (X, ρ, μ) is homogeneous in the Coifman-Weiss sense, provided that (X, ρ) is complete.Indeed, the first part of Fact 2.6 is a known fact mentioned by the authors in [3] (see also [9]).Precisely, if ρ is not geometrically doubling, then for each M ∈ N, there exist a ball B ρ (x, 2r) and points y 1 , . . ., y M ∈ B ρ (x, 2r) such that ρ(y i , y j ) ≥ r if i j, so that the balls B ρ (y 1 , r/2), . . ., B ρ (y M , r/2) are disjoint.Then the doubling condition cannot hold in view of the previous discussion.The reverse part is harder and its proof can also be found in [12] (see also [15]).The quasimetric analogue of Fact 2.6 is also true (in the reverse part, we additionally assume that ρ is such that all balls are Borel).Finally, in general, the completeness assumption cannot be ignored (to see this, consider Q and ρ R restricted to Q × Q, as mentioned in [14]).COROLLARY 2.7.Regarding Theorems 1.1, 1.3 and 1.4, one only needs to look for geometrically doubling metrics satisfying the desired properties. Indeed, this follows clearly by combining Corollary 2.4 and Fact 2.6.Precisely, we expect negative answers so it suffices to show that each metric ρ which is either bounded and translation invariant (Theorem 1.3) or such that T ρ coincides with the given topology (Theorems 1.1 and 1.4) cannot be geometrically doubling. To use the geometrical doubling property, we introduce the concept of r-separated sets.DEFINITION 2.8.For a nonempty quasimetric space (X, ρ), we say that a given subset E ⊂ X is r-separated, r ∈ (0, ∞), if ρ(x, y) ≥ r for all distinct x, y ∈ E. We denote by ℵ(X, ρ, r) the biggest number n ∈ N such that there exists at least one r-separated set with n elements.If arbitrarily large r-separated sets can be found, then we put ℵ(X, ρ, r) = ∞. The following lemma will be very helpful later on.LEMMA 2.9.Let (X, ρ) be a bounded metric space.If ρ is geometrically doubling with some N ∈ N, then there exists C ∈ (0, ∞) such that ℵ(X, ρ, 2 −l ) ≤ C2 Nl for all l ∈ N. PROOF.Take L ∈ Z such that sup x,y∈X ρ(x, y) < 2 L .Then for an arbitrary reference point x ∈ X, we have B ρ (x, 2 L ) = X and iterating the covering procedure, we conclude that for each l ∈ N, the space X can be covered by 2 Nl balls of radius 2 L−l , so that ℵ(X, ρ, 2 L−l+1 ) ≤ 2 Nl holds (to see this, notice that if ρ(x, y) ≥ 2r, then there is no ball of radius r containing both x and y).A suitable reparametrisation gives the statement with some C depending on L, N. A quasimetric version of Lemma 2.9 is also true, but with C2 Ml instead of C2 Nl , where C depends on K, L, N, while M depends only on K, N. We are ready to prove the first of the two T ω -related theorems. Since both n, j may be arbitrarily large, one can use Lemma 2.9 to deduce that ρ cannot be geometrically doubling.Indeed, there is no N ∈ N such that ℵ(T ω , ρ, 2 −l ) ≤ C2 Nl holds for all l ∈ N with some C ∈ (0, ∞), as otherwise one gets a contradiction by taking any n greater than N and sufficiently large j depending on N, C, C n . Thanks to Fact 2.10, we can adapt the idea behind the previous proof to the case of metrics which are not necessarily translation invariant.PROOF OF THEOREM 1.4.Suppose that ρ is such that For each n ∈ N, consider the set E n {(x 1 , . . ., x n , 0, 0, . ..) ∈ T ω : (x 1 , . . 
., x n ) ∈ [0, 1 2 ] n }, which will play the role of the cube [0, 1] n from Fact 2.10.For i ∈ {1, . . ., n}, denote 2 }, and set Using compactness again, we deduce that each f n,i is continuous.Indeed, assuming f n,i (x) ≥ f n,i (x ), we get 0 ≤ f n,i (x) − f n,i (x ) ≤ ρ(x, y * ) − ρ(x , y * ) ≤ ρ(x, x ), by taking y * ∈ E − n,i for which the value f n,i (x) is attained (again, it is important here that ρ is a metric).Moreover, f n,i (x) = 0 for x ∈ E − n,i and f n,i (x) ≥ C n for x ∈ E + n,i .Next, choose j ∈ N and take By Fact 2.10, there exists Both n, j may be arbitrarily large so one can use Lemma 2.9 to deduce that ρ cannot be geometrically doubling.Indeed, there is no N ∈ N such that ℵ(T ω , ρ, 2 −l ) ≤ C2 Nl holds for all l ∈ N with some C ∈ (0, ∞), as otherwise one gets a contradiction by taking any n greater than N and sufficiently large j depending on N, C, C n . Visualization of the 'cube' E n for n = 2. Thick dots are the elements of the set E 2,j with j = 2, while dashed lines represent the corresponding level sets of the functions f 2,1 , f 2,2 (this is an oversimplified scheme, as in general, the structure of the level sets may be much more complicated).We pick two points This time, it was crucial that only metrics, not quasimetrics, were considered in the proof. It remains to prove Theorem 1.1.To this end, let us recall the concept of the Hausdorff dimension.Given a metric space (X, ρ), for each E ⊂ X, we define is taken over the empty set).The proof of Lemma 2.9 reveals that if (X, ρ) is geometrically doubling with some N ∈ N, and x ∈ X is any reference point, then dim H (X) = lim r→∞ dim H (B ρ (x, r)) ≤ N. Similarly, the proof of Theorem 1.4 hints that [0, 1] n equipped with any metric generating the standard topology should have Hausdorff dimension at least n.The latter is a special case of the following general result. https://doi.org/10.1017/S0004972723000011Published online by Cambridge University Press D. Kosz [11] PROOF OF THEOREM 1.1.Assume that ρ is a metric for which T ρ = T and (X, ρ) is geometrically doubling.Then, dim H (X) is finite by Lemma 2.9.Also, (X, ρ) is separable because the geometrical doubling property ensures that for any M ∈ N, the whole space X can be covered by countably many balls of radius 2 −M .Thus, Fact 2.11 gives dim(X) ≤ dim H (X) < ∞ = dim(X). This contradicts the existence of ρ with the desired properties. The following remarks highlight why the problem stated in Question 1.2 was delicate.REMARK 2.12.In general, being geometrically doubling is not a topological property.Indeed, one can change ρ R to make R with its natural topology not geometrically doubling.It suffices to take ρ log(1 + ρ R ) and consider the balls B ρ (0, n) with n → ∞.REMARK 2.13.The subspace {0, 1 2 } ω ⊂ T ω with the topology inherited from T ω can be made homogeneous in the Coifman-Weiss sense. 2. 3 . Topology: from Hausdorff to topological dimension.Next we prove Theorem 1.4.Here we use the following classical result that can be seen as a special case of the Brouwer fixed-point theorem or a multidimensional variant of the Darboux theorem. FIGURE 1. Visualization of the sets E n,j , j ∈ N, for n = 2. Thick dots and small dots correspond to the sets E 2,2 and E 2,3 , respectively.Given z ∈ E 2,3 , we have 2z ∈ E 2,2 and 2z = 0 if and only if z ∈ E 2,1 .
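For reference, the displayed definition of Hausdorff measure and dimension used in the proof of Theorem 1.1 above was dropped during extraction; the standard definitions are
\[
\mathcal{H}^{s}_{\delta}(E)=\inf\Bigl\{\sum_{i}\bigl(\operatorname{diam}U_i\bigr)^{s}:
E\subset\bigcup_i U_i,\ \operatorname{diam}U_i\le\delta\Bigr\},\qquad
\mathcal{H}^{s}(E)=\lim_{\delta\to 0^{+}}\mathcal{H}^{s}_{\delta}(E),
\]
\[
\dim_{H}(E)=\inf\bigl\{s\ge 0:\mathcal{H}^{s}(E)=0\bigr\},
\]
with the usual convention that an infimum taken over the empty set equals +∞, which appears to be the convention the parenthetical remark in the text refers to.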
2022-10-05T01:16:13.937Z
2022-10-03T00:00:00.000
{ "year": 2023, "sha1": "ea38be77337cd8d0f20f89ac629b6d79b790fe08", "oa_license": "CCBYNCSA", "oa_url": "https://bird.bcamath.org/bitstream/20.500.11824/1730/1/FINAL.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "c12d42f6e4d0fbb8367a6725b3922ac545682dc2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
53315072
pes2o/s2orc
v3-fos-license
Challenges to miniaturizing cold atom technology for deployable vacuum metrology Cold atoms are excellent metrological tools; they currently realize SI time and, soon, SI pressure in the ultra-high (UHV) and extreme high vacuum (XHV) regimes. The development of primary, vacuum metrology based on cold atoms currently falls under the purview of national metrology institutes. Under the emerging paradigm of the"quantum-SI", these technologies become deployable (relatively easy-to-use sensors that integrate with other vacuum chambers), providing a primary realization of the pascal in the UHV and XHV for the end-user. Here, we discuss the challenges that this goal presents. We investigate, for two different modes of operation, the expected corrections to the ideal cold-atom vacuum gauge and estimate the associated uncertainties. Finally, we discuss the appropriate choice of sensor atom, the light Li atom rather than the heavier Rb. Introduction The emerging paradigm of the Quantum-SI focuses on building devices that obey three basic "laws": (1) the sensor must be primary, (2) the sensor must report the correct quantity or no quantity at all, and (3) the uncertainties must be quantified and fit for purpose. Cold atoms represent a useful tool in developing Quantum-SIbased devices because they can be exquisitely manipulated and controlled. Deployable cold-atom sensors have the potential to revolutionize many types of Quantum-SI based measurements such as time, inertial navigation, and magnetometry. Here, we focus on the difficulties of miniaturization of cold-atom technologies for the purposes of vacuum metrology in the ultra-high vacuum (UHV, p < 10 −6 Pa) to extreme high vacuum (XHV, p < 10 −10 Pa) regimes. A cold-atom vacuum gauge is based on the observation that the main source of atom loss from a cold-atom trap is collisions with background gas [1,2,3,4,5,6,7,8,9]. Because cold-atom traps tend to be shallow (W/k B 1 K, where W is the trap depth and k B is Boltzmann's constant) compared to room temperature, the vast majority of such collisions cause ejection of cold atoms from the trap. This random loss is wellcharacterized by an exponential decay of the trapped atom number with time. We are currently developing a laboratory-based cold-atom vacuum standard (CAVS) that will represent a primary standard for the pascal in the UHV and XHV ranges. This device will be capable of cooling and trapping different sensor atoms, including 6 Li, 7 Li, 85 Rb, and 87 Rb. The dominant background gas in vacuum chambers operating in the UHV and XHV regimes is H 2 . The determination of the loss rate coefficient for 6 Li+H 2 is, in principle, a tractable calculation, and therefore establishes the primary nature of the CAVS. Extension to other background and process gases and to other sensor atoms will be accomplished by measurement of relative gas sensitivity coefficients (ratios of loss rate coefficients) [10]. The laboratory-scale CAVS currently in development at NIST is not deployable; it is neither portable, small, nor easy to use. It currently occupies an optical table with roughly 2 m 2 of area. A large experiment is required because of the large number of components needed to laser cool and trap atoms. First, atoms can only be trapped in UHV environments, generally requiring a large vacuum chamber with ion or getter pumps. Second, the workhorse of laser cooling, the three-dimensional magneto-optical trap (3D-MOT), requires optical access from six directions along three spatial axes. 
Third, generally good magnetic field stability is required, typically obtained by using large coils that cancel local magnetic fields and gradients. Shrinking the CAVS to something deployable thus represents an impressive challenge. Despite the difficulties, mobile cold atom systems have been constructed (e.g., an atom-based accelerometer [11]), and miniaturization continues to be an active area of research (for example, a proposal to construct a fully integrated chip-scale device [12]). Presently, the most-widely-used gauge in the UHV and XHV regimes is the non-primary Bayard-Alpert ionization gauge [13,14,15], which requires 30 cm^3 and is controlled using a 2-U standard size rack-mountable controller. Thus, to make a deployable, cold-atom based gauge, we tailor our design to occupy a similar vacuum footprint ‡. Our current design for a portable CAVS (herein referred to as p-CAVS), shown in Fig. 1, is under active development. Currently, many of its individual components are being tested separately, and, as such, the final design is still in flux. At its core, it uses a micro-fabricated diffraction grating that generates the necessary spatial beams for laser cooling and trapping [16,17]. This planar MOT is a variant of previously developed nonplanar MOTs like tetrahedral [18] and pyramidal MOTs [19]. The p-CAVS can create both a magneto-optical trap and a quadrupole magnetic trap, yielding two possible modes of operation.

[Figure 1 caption: A single laser beam (large, red arrow) traveling along ẑ is diffracted into six different beams (small, red arrows) by three reflective, gold diffraction gratings whose lines form superimposed triangles and diffract light at θ_d = π/4 with respect to the normal of the grating (−ẑ). The lines of the diffraction grating are not to scale.]

In this paper, we focus on the physical principles for its operation and the associated uncertainties (Sec. 2). Secondly, we describe some of the technical design features and their motivation. These choices depend on the requirements for a deployable vacuum gauge, including how it will be used and treated in the field (Sec. 3). We conclude by motivating our choice of atomic species (Sec. 4). We include a short appendix describing the atomic physics used within this paper. Throughout the paper, we focus primarily on type-B uncertainties and assume k = 1. Type-A uncertainties are briefly discussed in Sec. 2.3.

[Table 1 (only a fragment survived extraction): dispersion coefficients C_6 for the sensor-atom states Li (2S), Li* (2P), Rb (5S), and Rb* (5P) paired with various background gases; the surviving H_2 row [20] lists 83 and 160 for the two Li states, and the remaining rows, beginning with He [21,22], are lost.]

Principle of operation and associated uncertainties

The number of cold atoms N(t) in a trap decays exponentially due to collisions with background gas molecules, i.e. N(t) = N_0 e^{-Γt}, where Γ = n⟨K⟩ is the loss rate, ⟨K⟩ = ⟨vσ⟩ is the loss rate coefficient, n is the number density of the background gas, σ(E) is the total cross section for a relative collision energy E = µv^2/2 and relative velocity v. Here, µ is the reduced mass, N_0 is the initial number of trapped cold atoms, and ⟨· · ·⟩ represents thermal averaging. In the XHV and UHV regimes, the ideal gas law is an excellent equation of state of the background gas, and thus we can relate the loss rate to the pressure through

p = n k_B T = Γ k_B T / ⟨K⟩,   (1)

where T is the temperature of the background gas. Equation 1 represents the ideal operation of the CAVS and p-CAVS. Perhaps the most crucial quantity in Eq. 1 is ⟨K⟩. We described the techniques for determining this quantity in a previous work [10]. We intend to calculate a priori the collision cross section for 6Li+H2.
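As a concrete illustration of Eq. 1, the short Python sketch below converts a measured loss rate into a pressure. The numerical value of ⟨K⟩ is an illustrative placeholder only, not the actual Li+H2 coefficient, which is exactly what the calculations and measurements described here are meant to supply.

# Ideal CAVS relation (Eq. 1): Gamma = n*<K> and p = n*kB*T, hence p = Gamma*kB*T/<K>.
kB = 1.380649e-23      # J/K, exact SI value of the Boltzmann constant
T = 295.0              # K, assumed background-gas temperature
K_avg = 5.0e-15        # m^3/s, ILLUSTRATIVE placeholder for <K> (not the real Li+H2 value)
Gamma = 0.05           # 1/s, hypothetical measured loss rate of the trapped atoms

p = Gamma * kB * T / K_avg
print(f"inferred pressure: {p:.2e} Pa")   # ~4e-8 Pa for these illustrative numbers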
For other gases, we plan to measure the ratio of loss rate coefficients to that of 6 Li+H 2 . In the present work, we will assume the uncertainty in K to be 5 %, an estimate based on the expected results of a laboratory-scale CAVS. Both theoretical scattering calculations and experimental work are ongoing. Ab initio quantum-mechanical scattering calculations are difficult, but we can estimate the cross section using semiclassical theory [23,24] for a cold, sensor atom of mass m c and a (relatively-hot) room-temperature background-gas atom or molecule of mass m h . In this theory, the isotropic, long-range attractive part of the inter-molecular potential fully determines the total elastic cross section. This part of the potential is dominated by a van der Waals interaction −C 6 /r 6 , where C 6 is the dispersion coefficient and r is the separation between the cold atom and the background gas molecule. Table 1 lists C 6 for various combinations of cold atoms (both ground S and first excited P states) and background gases as calculated using the Casimir-Polder relationship, for species A and B. Accurate dynamic polarizabilities α(ω) as a function of frequency ω exist for each alkali atoms' ground state [25]. The dynamic polarizability of the excited state has been calculated for Li (2P 3/2 ) [26] and can be inferred from transition frequencies and matrix elements for Rb (5P 3/2 ) [27]. For common background gases, we use dynamic polarizabilities found in the literature for water [28], nitrogen [29], oxygen [30], and carbon dioxide [30]. For Li, the dispersion coefficient is a factor of two smaller than Rb for the same background molecule. Coincidentally, there appears to be little to no difference in the C 6 coefficients for the 2P and 2S states of Rb. The largest correction to Eq. 1 is the lack of a one-to-one correspondence between a collision and the ejection of a cold atom from its trap [31,32]. To eject an atom, the final kinetic energy of the initially cold atom must be at least W , the depth of a trap that is equally deep in any direction. Atoms are not ejected for scattering angles θ r less than the critical angle θ c , defined by as follows from energy and momentum conservation assuming a cold atom initially at rest. The loss rate coefficient for such glancing collisions with an isotropic potential is where dσ/dΩ r is the differential cross section, where θ c (W ) is given by Eq. 6. In the semiclassical theory, the thermally-averaged result to first order in trap depth W is where ζ = 25π 13/10 [Γ(8/5)] 3 /(4 · 6 6/5 σ 0 ) = 0.3755 · · ·. We find the higher order corrections numerically by integrating where P (x) are the Legendre polynomials and η (E) is given by Eq. 3. These glancing collisions change the ideal CAVS operation (Eq. 1) to Figure 2 shows the CAVS loss rate coefficient with glancing collisions, K − K gl (W ) , for several cold atomic species and room-temperature background gases as a function of trap depth based on the numerical integration of Eq. 10. This plot has several interesting features. First, for the same background gas, Rb, with its larger van-der-Waals coefficients, has a larger loss rate coefficient than Li. Second, K for H 2 collisions is twice as large as for other gases, due primarily to its smaller mass. Third, the first order behavior, Eq. 7, is an excellent approximation until [ K − K gl (W ) ]/ K ≈ 0.9. At this point, the linear behavior starts to give way to a logarithmic dependence on W . This appears as a straight line on the log-linear scale. 
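The Casimir–Polder relationship invoked above for computing the C_6 coefficients of Table 1 did not survive extraction; in its standard form (atomic units) it is the imaginary-frequency integral
\[
C_{6}^{AB}\;=\;\frac{3}{\pi}\int_{0}^{\infty}\alpha_{A}(i\omega)\,\alpha_{B}(i\omega)\,d\omega ,
\]
evaluated with the dynamic polarizabilities α(iω) of species A and B cited in the text.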
In fact, K gl (W ) / K ≈ 0.1 defines a crossover trap depth, W c , which scales as Thus, for the same background gas, Rb, which is both more massive than Li and has larger C 6 coefficients, has a smaller W c . As shown in Fig. 2, the transition in the There are two traps that are easy to realize in the p-CAVS given our design constraints: a MOT and a quadrupole magnetic trap. Each has a different trap depth and, consequently, different fractions of glancing collisions. MOTs generally have depths ranging from 200 mK to 5 K depending on their parameters, as shown in Fig. 2, where glancing collisions reduce the losses by over one-half. Quadrupole magnetic traps have depths of the order of 100 mK or lower, determined by the atomic state. As a result, the uncertainty budgets associated with operating these two types of traps are different. The determination of Γ from atoms contained within the traps is also different. In a MOT, the measurement proceeds by loading the trap and observing the loss of atoms from the trap by continuously monitoring their fluorescence. Thus, making a single MOT yields many points on the N (t) curve. This is in contrast to operation with a quadrupole magnetic trap, which first requires loading atoms into a MOT followed by optical pumping into the magnetically-trapped atomic state. After free evolution, the atoms in the magnetic trap are recaptured into the MOT and counted by measuring the fluorescence. In this operation, a single load of the magnetic trap yields a single point on the N (t) curve. Constructing a decay curve with a reasonable signal to noise thus requires loading and measuring multiple times. Thus, this mode of operation is significantly slower than that of the MOT; however, as we shall see, it is more accurate. Fast operation of p-CAVS: magneto-optical trap Operating the MOT as a pressure sensor presents several type-B (systematic) uncertainties, some of which were anticipated in Ref. [5]. Glancing collisions are the dominant correction to the ideal CAVS operation in a MOT. Translating the loss rate of atoms from the MOT into a pressure therefore requires knowledge of its trap depth. Two trap-depth-measurement techniques have been employed: inducing two-body loss with a known, final kinetic energy with a catalyst laser [33] and comparing the background-gas induced MOT loss rates to a magnetic trap with known depth [34]. These two methods have been shown to yield identical results [34]. Given their complexity, however, it is not clear whether such measurements could be implemented in a sensor. Models of the trap depth of a MOT have been developed and find quantitative agreement with measurements of two-body collisions between cold atoms [35]. The models assume an atom with an optical cycling transition between a ground state with electronic orbital angular momentum L = 0 (S) and an excited state with L = 1 (P). (Here, we ignore effects due to spin-orbit coupling and hyperfine structure.) The nonconservative force on an atom in a MOT results from the interplay of a spatially-varying magnetic field B(r) and multiple laser beams i with the same frequency detuning ∆ with respect to the atomic transition but different wavevectors k i and circular polarizations i = ±1. The resulting force on the atom with position r and velocity v is where s i = I i /I sat is the saturation parameter of beam i with intensity I i . Here, the saturation intensity I sat and linewidth γ are properties of the atom and µ B is the Bohr magneton. 
The probability of making a transition to an excited angular momentum projection m is where ξ i is the angle between k i and B(r) and d j mm (θ) is a Wigner rotation matrix. We model the MOT trap depth for the p-CAVS using Eqs. 13-14 with the beam geometries, polarizations, and magnetic field specific for our device as shown in Fig. 1b. We use the magnetic field gradient in cylindrical coordinates r = (ρ, φ, z) with parameter dB z /dz. The magnetic field is zero at r = 0. The diffraction grating shown is positioned at z g = +5 mm and is illuminated with a = +1 polarized Gaussian beam traveling along the +ẑ direction. The beam's 1/e 2 radius is 15 mm. The diffraction grating lines are made from superimposed equilateral triangles. The triangles continue outwards until clipped by a circle with diameter 22 mm. A central, triangle-shaped through-hole, fitting an inscribed circle of radius 2.5 mm, produces a vacuum connection to the rest of the chamber. The three sides of the triangles form three grating sections that each produce two beams with angle θ d = π/4 with respect to the normal of the grating (−ẑ), one points toward the central axis of the MOT and the other outwards. Only the inward beams contribute to forming the MOT. The polarizations of these reflected beams is σ − ; their intensity profile is assumed to be the same as the incident beam, but clipped according to the area of the grating section and translated along its k i vector. The grating produces no zero-order reflection and equal ±1 diffraction orders with efficiency η = 1/3 and absorbs 1/3 of the incident intensity. The resulting ratio of the reflected beam intensity to that of the incident is η/ cos θ d , where the cosine describes the decrease in the beam's cross section. The magnetic field zero does not specify the center of the trap for a grating MOT. Unlike a standard 3D-MOT [36] where P i (m = 0) = 0 along ρ = 0, P i (m = 0) is larger than P i (m = ±1) for the beams reflected from the grating, producing a positionindependent force from these beams [37]. We find the trap center r 0 = (0, 0, z 0 ) by placing an atom at rest at r = 0, integrating the equations of motion (including the shape of the beams) and following its damped motion to the center. For alkali-metal atoms, MOTs are either overdamped or slightly underdamped. For our parameters, z 0 > 0. The temperature of the cold-atom cloud is small compared to the trap depth; therefore, the atoms are initially concentrated near the center of the trap. After a collision with a background particle, they acquire momentum q c directed at azimuthal angle φ and polar angle θ in the laboratory frame. To determine the trap depth W , we can numerically integrate the equations of motion starting from the center of the trap. For each pair of (θ, φ), the trap depth W (θ, φ) is given by the initial kinetic energy q 2 e /(2m c ), where v e = q e /m c is the escape velocity. Figure 3a shows W (θ, φ) for a Li grating MOT with ∆/γ = −1, dB z /dz = 0.5 T m −1 , and the saturation parameter s = 1 for the incident beam. We observe significant anisotropy in the trap depth, varying from 0.1 K to 0.7 K (only azimuthal angles of 0 < φ < π/3 are shown because of the three-fold symmetry of the grating MOT). This is possible because MOTs are overdamped: an atom launched from the center of the trap with q c < q e does not move chaotically through the trap, but instead quickly returns to the center §. 
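The force law (Eq. 13) and the trap-depth procedure above refer to the full three-dimensional grating geometry, which is beyond a short example. As a hedged illustration of the general idea of launching an atom from the trap centre, integrating the equations of motion, and finding the largest velocity that stays trapped, the sketch below uses only a textbook one-dimensional, two-beam MOT for a two-level atom with approximate 7Li D2 numbers; every parameter value and the chosen escape radius are illustrative assumptions, and the result should only be expected to give the right order of magnitude.

import numpy as np

# Toy 1D two-beam MOT (two-level atom): scattering force with Doppler and
# Zeeman shifts. This is NOT the 3D grating-MOT model of the text; it only
# illustrates extracting a trap depth by integrating equations of motion.
hbar = 1.0546e-34            # J s
kB = 1.381e-23               # J/K
muB = 9.274e-24              # J/T
m = 7 * 1.66e-27             # kg, 7Li mass
k = 2 * np.pi / 671e-9       # 1/m, Li D2 wavevector (approximate)
gamma = 2 * np.pi * 5.9e6    # 1/s, Li D2 linewidth (approximate)
delta = -1.0 * gamma         # laser detuning
s = 1.0                      # saturation parameter per beam
dBdz = 0.5                   # T/m, magnetic field gradient
z_max = 7.5e-3               # m, assumed escape radius (roughly the beam region)

def force(z, v):
    """Sum of the scattering forces of the two counter-propagating beams."""
    scatter = lambda d: (gamma / 2.0) * s / (1.0 + 2.0 * s + (2.0 * d / gamma) ** 2)
    d_plus = delta - k * v - muB * dBdz * z / hbar    # beam pushing towards +z
    d_minus = delta + k * v + muB * dBdz * z / hbar   # beam pushing towards -z
    return hbar * k * (scatter(d_plus) - scatter(d_minus))

def stays_trapped(v0, dt=5e-7, t_max=0.02):
    """Launch from the centre with speed v0; True if the atom never leaves."""
    z, v = 0.0, v0
    for _ in range(int(t_max / dt)):
        v += force(z, v) / m * dt
        z += v * dt
        if abs(z) > z_max:
            return False
    return True

# Scan launch velocities; the largest trapped one sets the 1D trap depth.
v_e = max((v for v in np.arange(1.0, 80.0, 1.0) if stays_trapped(v)), default=0.0)
print(f"escape velocity ~ {v_e:.0f} m/s, trap depth W/kB ~ {0.5*m*v_e**2/kB:.2f} K")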
The polar angle at which the trap depth is largest is θ = π/4, corresponding an atom moving directly into the reflected beams. The azimuthal angle that maximizes the depth is φ = π/3, where two reflected beams both apply equal force. Finally, the shallowest direction corresponds to θ = π, or into the incoming laser beam. The anisotropy of W (θ, φ) complicates the calculation of K gl (W ) . The thermally averaged loss coefficient in this case becomes where H(x) is the Heaviside step function, dΩ r = sin θ r dθ r dφ r , and θ r and φ r are the scattering angles. Realizing that the angle between the initial p h and final q c is uniquely determined by θ r , we interchange variables and find where dΩ = sin θdθdφ. We compute an angle dependent K gl (W ) using W (θ, φ) and Eq. 10 for each (θ, φ) and average over all angles. For the present work, we use the , which is accurate within the currently known MOT uncertainties (see below). We have studied the angularly-averaged trap depth W for a Li grating MOT to investigate the dependence on detuning ∆, intensity of the incident beam I, and magnetic field gradient. The results are shown in Fig. 3. As with a standard six-beam MOT, the trap depth increases with increasing s for a given |∆/γ|, shown in Fig. 3b. For small s, the large P i (m = 0) component of the reflected beams creates a complicated dependence on |∆/γ|. It also causes a sudden breakdown of the trap for magnetic field gradients < 0.1 T m −1 , shown in Fig. 3c. This "critical" magnetic field gradient is the gradient required to balance the force toward the grating from the magnetic-field sensitive m = +1 component with the force away from the grating from the magneticfield insensitive m = 0 component. The uncertainty in the pressure due to uncertainty in the MOT's trap depth is suppressed. In particular, the fractional uncertainty in the measured pressure is δp/p = δW /W log(W /W 0 )|, based on Eq. 11 and K − K gl (W ) ∝ −A log(W /W 0 ) for MOTs, where A and W 0 are constants that depend on the background gas and sensor atom. For Rb, W 0 /k B ≈ 300 K for most collisions other than H 2 ; for Li, W 0 ≈ 1000 K for collisions other than H 2 . For example, consider an uncertainty δW /W ≈ 20 % and W /k B ≈ 1 K; here, δp/p ≈ 8 % for Rb and 7 % for Li. The actual uncertainty δW is currently difficult to establish. We have tested our model against the published data in Ref. [34], and find agreement to within the experimental error bars for the smallest trap depths. Based on this comparison, we currently estimate the fractional uncertainty δW /W of the order of tens of per cent. It is our intent to further improve the accuracy and uncertainty of these models. The second correction to the measured pressure by a MOT comes from the fact that a non-negligible fraction of atoms are in the excited P state, which has different C 6 coefficients compared to the ground S state (see Tab. 1). With this correction, Eq. 11 becomes where P ex is the probability of an atom to be in the excited state. For grating MOTs, µ B |B(r 0 )|/ ∆, and Typically, s i ≈ 1 and ∆/γ ≈ −1, making P ex ≈ 25 %. The uncertainty in P ex is dominated by that of s j , which at best has δs j /s j ≈ 5 %, leading to δP ex /P ex ≈ 12 %. From our numerical results, K − K gl (W ) ∝ (C 6 ) 0.35 in the MOT regime, and K − K gl (W ) excited / K − K gl (W ) ground ≈ (C 6,P /C 6,S ) 0. 35 . We estimate an uncertainty in the ratio of 14 % based on our uncertainty in C 6 . 
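The error-propagation numbers quoted above (≈8 % for Rb and ≈7 % for Li) follow from δp/p = (δW/W)/|log(W/W0)| when the logarithm is read as base-10 (the constant A absorbs the base in any case, so this is only a reading of the convention); a two-line Python check:

import math

dW_over_W = 0.20
W = 1.0                                   # K, i.e. W/kB
for atom, W0 in [("Rb", 300.0), ("Li", 1000.0)]:
    print(atom, f"{dW_over_W / abs(math.log10(W / W0)):.1%}")   # ~8.1 % and ~6.7 %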
For a typical MOT, the fractional uncertainty in the measured pressure is relatively small: 3 % for both Li and Rb. Note that in this analysis we neglect the possibility of inelastic collisions with atoms in the excited state, which change the internal state of the cold atom. These effects will need to be further studied. Finally, another complication with using a MOT to measure pressure is the presence of light-assisted collisions between cold atoms [38,39,40,41]. With these collisions, the number of atoms in the trap N obey where K n is an n-body loss parameter that depends on the intensity and detuning of the MOT light. Figure 4 shows such a decay curve with large two-body loss measured in a standard, six-beam MOT of 7 Li atoms. The curvature observed at early times indicates the presence of two-body collisions. One can fit the data to Eq. 20 to accurately separate n-body loss from the exponential loss due to background gas collisions. No evidence of three-or higher-body loss was found in the data in Fig. 4. For these data, the MOT light is red-detuned to the F = 2 → F = 3 transition with ∆/γ = −2.0(1) and dB/dz ≈ 0.5 T/m. Each of the six Gaussian beams has an intensity of 7.4(4) mW/cm 2 with a 1/e 2 diameter of 1.42 (7) cm. Repump light is provided by the +1 sideband of an electro-optic-modulator operating at 813 MHz. Apporoximately 55 % of the power remains in the carrier (red detuned with respect to F = 2 → F = 3) and ≈ 22 % of the power is in the repump (tuned to resonance with the F = 1 → F = 2 transition). Accurate operation: Quadrupole magnetic trap Unlike MOTs, magnetic traps are conservative traps: an atom's kinetic energy must decrease by the same amount as its internal energy increases. In free space, Maxwell's equations only allow minima in |B(r)| (Earnshaw's theorem). Therefore, only states whose internal energy E increases with |B(r)|, i.e. dE/dB > 0, can be trapped. In this section, we consider the quadrupole trap generated by the MOT magnetic field given by Eq. 15. This trap has its center at r = 0 = r 0 . The energy of the internal states of 6 Li( 2 S), 7 Li( 2 S), and 85 Rb( 2 S) are shown in Table 2. Energy-maximizing magnetic fields B max , resulting trap depths W max , typical magnetic field gradients used in a magneto-optical trap dB z /dz, and resulting trap size z T = B max /(dB z /dz) for various species. Note that B max and W max are typically known to within a ppm, while dB z /dz and z T are estimates. Fig. 5. Here, we include the hyperfine and Zeeman interactions. The former gives rise to two non-degenerate states at B = 0, denoted by F = I ± 1/2, where I is the nuclear spin. For 6 Li, 7 Li, and 85 Rb, I = 1, 3/2, and 5/2 respectively. For non-zero B, the levels split according to projection m F = −F, −F + 1, · · · , F . Magnetic traps in the limit B → ∞ have infinite trap depth for states with F = I + 1/2 for these three atoms. Hence, these states are impractical for CAVS operation. Instead, we focus on the state |F = I − 1/2, m F = −(I − 1/2) , which has an energy where g = g I − g J , g I and g J are the nuclear and electronic gyromagnetic ratio respectively, and ∆ HF is the zero-field energy splitting. This state has a maximum energy at a finite B max and trap depth W max = E(B max ) − E(B = 0). Neglecting the g I m F µ B B term in Eq. 21 yields and Table 2 lists B max and W max for Li and Rb isotopes. The uncertainty B max and W max is set by the uncertainty in the atomic physics parameters, which are known to better than 1 ppm . 
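The displayed rate equation (Eq. 20) governing the MOT decay with light-assisted collisions, shown in Fig. 4 above, was lost in extraction; for the two-body case it presumably takes the usual form dN/dt = −ΓN − K2 N^2, with K2 an effective coefficient that absorbs the cloud's density distribution. Under that assumption, the sketch below fits the corresponding analytic solution to a synthetic decay curve to show how the background-gas rate Γ can be separated from the two-body loss; all numbers are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def n_two_body(t, N0, Gamma, K2):
    """Solution of dN/dt = -Gamma*N - K2*N**2 (assumed form of Eq. 20 with n = 2)."""
    e = np.exp(-Gamma * t)
    return Gamma * N0 * e / (Gamma + K2 * N0 * (1.0 - e))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 120)                    # s
true = dict(N0=1.0e6, Gamma=0.05, K2=5.0e-8)       # illustrative 'true' parameters
data = n_two_body(t, **true) * (1.0 + 0.01 * rng.standard_normal(t.size))

popt, _ = curve_fit(n_two_body, t, data, p0=(8.0e5, 0.02, 1.0e-8))
print(f"fitted Gamma = {popt[1]:.4f} 1/s (background-gas loss, two-body loss separated out)")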
Using the dB z /dz for a MOT sets the characteristic size of the magnetic trap through z T = B max /(dB z /dz). Table 2 lists both dB z /dz and z T . The size of initial cold atom does not equal z T , but is set by its temperature out of the MOT, 1 mK. One then expects from the virial theorem a cloud size z c ≈ 5 mm for Li and z c ≈ 20 mm for Rb. For 6 Li, with z c > z T , this causes some loss of atoms when transferred from Trap depths can be made arbitrarily smaller using a so-called RF knife, which applies a radiofrequency magnetic field that couples a trapped state to an untrapped state at a given magnetic field strength. In this case, the trap depth is set by the frequency of the oscillating magnetic field. the MOT to the magnetic trap. For Rb, with z c > z g , the cloud will expand into the grating, which is the closest in-vacuum component. This may require increasing the magnetic field gradient to reduce the size of the initial cold-atom cloud. The grating decreases the trap depth when z T > z g , as higher-energy atoms eventually collide with and, most likely, stick to the grating. (The classical orbits in a quadrupole trap are not closed.) The trap depth is then determined by geometry, i.e., W = |gm F µ B (dB z /dz)z g |; its fractional uncertainty is set by δz g /z g and δ(dB z /dz)/(dB z /dz). For Rb with z g = 5(1) mm and dB z /dz = 0.15(2) T m −1 , W = 1 mK and δW/W ≈ 25 %. In a magnetic trap, Eq. 8 is an excellent approximation and thus the fractional uncertainty in the glancing collision fraction is also 25 %. Glancing collisions in a magnetic trap can still lead to loss of atoms from the trap ¶. The average energy deposited by a glancing collision is Q = W/2. Moreover, the average amount of energy necessary to cause ejection is ≈ W −k B T c , where T c is the temperature of the cold atoms. Consequently, starting in the limit where k B T c W , glancing collisions only heat the gas and the loss rate is given by Γ = n( K − K gl (W ) ). As the trapped gas warms and k B T c W/2, more of the glancing collisions start contributing to the loss and Γ approaches n K . Because Γ depends on T c and time, we expect that this will cause non-exponential decay and thus may be separable in a manner similar to the n-body loss of Eq. 20. This heating through glancing collisions is a problem that we also anticipate with the laboratory-scale CAVS and are currently performing Monte-Carlo studies to understand. For the present analysis, however, we take the measured pressure with these glancing collisions to be the mean of the two limits, with a fractional uncertainty δp/p ≈ K gl (W ) /(2 K ). Majorana spin-flip losses also contribute to the loss in a quadrupole trap + . Because the trap has a location where B = 0, atoms that pass sufficiently close to the center can undergo a diabatic transition into the untrapped spin state. Reference [42] estimates the decay rate to be This estimate was found to be about a factor of 5 too small for the experimental data in Ref. [42]. For 7 Li, /m c ≈ 9 × 10 −3 mm 2 s −1 and Γ Majorana ≈ 10 −3 s −1 ; for 85 Rb, /m c ≈ 7 × 10 −4 mm 2 s −1 and Γ Majorana ≈ 10 −5 s −1 . These loss rates could be mistaken as N 2 pressures of approximately 10 −9 Pa and 10 −11 Pa, respectively. It is, however, possible that the Majorana loss is not exponential and could be separated out by fitting, much like with two body loss in a MOT. MOT (fast) Magnetic trap (slow) Effect Li Table 3. 
Estimated uncertainty in the pressure from various effects associated with the p-CAVS operating at 10 −7 Pa using a magneto-optical trap (MOT, left) and quadrupole magnetic trap (right). Note that loss rate coefficient here refers to the ground-state loss rate coefficient. Totals are quadrature sums. See text for details. Table 3 shows the estimated type-B uncertainties in a p-CAVS device. The uncertainties are roughly equal for Li and Rb. Table 3 does not include any uncertainties due to the background gas composition; the composition is assumed to be known. Additional requirements for a vacuum gauge, explored in the next section, therefore will dictate our choice of sensor atom. Summary of uncertainties While we have focused thusfar on type-B uncertainties, it is important to note there are type-A uncertainties as well. In particular, we anticipate the dominant type-A uncertainty to be statistical noise in the atom counting. The fit shown in Fig. 4 has a relative uncertainty 1 % with approximately 10 s of data. Translated into a pressure sensitivity (assuming N 2 as the background gas, W = 0, and room temperature), this corresponds to ≈ 10 −8 Pa/ √ Hz. Details of the planned device In addition to the Quantum-SI requirements of being primary and having uncertainties that are fit for purpose, a deployable vacuum gauge should satisfy the following requirements: (i) It must be able to withstand heating, in vacuum, to temperatures approaching 150 C to remove water from the surfaces and minimize outgassing of the metal components. After such a heat treatment, the predominant outgassing component will be hydrogen gas trapped within the bulk of the stainless steel, which can only be removed by heat treatment at temperatures exceeding 400 C. (ii) It must not affect the background gas pressure it is attempting to measure, or the extent to which it does must be quantified and treated as a type-B uncertainty. (iii) It must minimize its long-term impact on the vacuum chamber to which it is coupled. ¶ This is in contrast to a MOT, which recools atoms not ejected from the trap. + The laboratory-scale CAVS uses a Ioffe-Pritchard magnetic trap to suppress Majorana loss. The design shown in Fig. 1 incorporates these additional requirements, as detailed below. Sensor atom By far, the most commonly laser cooled atomic species is Rb, which offers easily accessible wavelengths for diode lasers and easy production inside vacuum chambers. As a result, much work has focused on miniaturizing Rb-based cold atom technology. On the other hand, Rb has a high saturated vapor pressure of 2 × 10 −5 Pa [43] at room temperature, which threatens to contaminate the vacuum it is attempting to measure. Second, Rb precludes baking a vacuum chamber, because its vapor pressure of 3 × 10 −1 Pa at 150 • C may cause any small, open source of Rb to be depleted during a bake. Lithium, on the other hand, has a saturated vapor pressure of 10 −17 Pa [44] at room temperature, the lowest of all the alkali-metal atoms. This limits its contamination of the vacuum chamber. At 150 • C, the saturated vapor pressure is approximately 10 −9 Pa, low enough to allow the vacuum chamber to be baked. The trap The magneto-optical trap itself is a novel design, and its features and performance will be detailed elsewhere. In short, a collimated, circular-polarized beam reflects from a nanofabricated triangular diffraction grating to produce three additional inward-going beams, the minimum needed for trapping. 
To generate the quadrupole magnetic field for the MOT, we intend to use neodymium rare-earth magnets mounted ex-vacuo. They are removable during baking, so as to not change their remnant magnetization. An aperture in the chip allows light and atoms to pass through the chip. The source is positioned behind the chip and the thermal atoms are directed toward the aperture. Light passing through the aperture can slow the atoms emerging from the source. We tailor the magnetic field profile along the vertical axis such that it starts linearly near the center of the MOT and smoothly transforms into a √ z behavior near the atomic source. This creates an integrated Zeeman slower that enhances the loading rate of the MOT. Finally, the aperture acts as a differential pumping tube, limiting the flow of gas from the source region to the trapping region of the device. Beam shaping and detection Laser light is delivered into the p-CAVS using a polarization-maintaining optical fiber with a lens for collimation and a quarter-waveplate for generating circular polarization. These components are maintained ex-vacuo and can be removed during installation to prevent breakage and baking to prevent misalignment. The light travels through a fused-silica viewport on the top of the vacuum portion of the device. Detection of the atoms can be accomplished through the same viewport, using a beamsplitting cube to separate the incoming light from the fluorescence light returning from the atoms in the MOT. An apertured photodiode (not shown) with an appropriate imaging lens will be used to detect the fluorescence. Atomic Source One problem that must be overcome with Li is building a thermal source that is UHV or XHV compatible. Heating the source to the necessary 350 • C to produce Li vapor while maintaining a low outgassing rate is a challenge. We recently demonstrated a low-outgassing alkali-metal dispenser made from 3Dprinted titanium [45]. The measured outgassing level, 5(2) × 10 −7 Pa l s −1 , would establish the low-pressure limit of the gauge. For example, an effective pumping speed * of 25 l/s between the pCAVS and the chamber to which it is attached will produce a constant pressure offset of approximately 10 −8 Pa relative to the pressure in the chamber under test. One can decrease this offset by adding pumps to the source portion of the pCAVS. As currently envisioned, the titanium dispenser will be surrounded by a non-evaporable getter pump, created by depositing a thin layer of Ti-Zr-V onto a formed piece of metal. Assuming roughly 100 cm 2 of active area, this translates to an approximate pumping speed of 100 L/s [46] with a capacity of the order of 0.1 Pa l [47]. Such a pump will reduce the pressure offset to 10 −11 Pa and have an estimated lifetime of 10 8 s, comparable to the lifetime of the dispenser. Further improvements can be made by minimizing the creation of other lithium compounds when loading the lithium into the dispenser [45]. For the p-CAVS to be accurate, the flow of alkali-metal atoms must be turned off while measuring the lifetime of the cold atoms in the trap. Otherwise, collisions between hot atoms from the source and cold, trapped atoms will cause unwanted ejections. These collisions have a loss rate coefficient that is almost an order of magnitude larger than those due to other gasses. To stop the flow of atoms, our current design incorporates a mechanical shutter. We are also considering other more speculative sources of lithium. 
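The pressure offset quoted above is simply the dispenser's outgassing rate divided by the effective pumping speed; as a quick check:

Q = 5e-7       # Pa L/s, measured outgassing rate of the titanium dispenser
S_eff = 25.0   # L/s, effective pumping speed between the p-CAVS and the test chamber
print(f"pressure offset = {Q / S_eff:.1e} Pa")   # 2e-8 Pa, i.e. of order 1e-8 Pa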
Lithium, like other alkali-metal atoms, can be desorbed from surfaces using UV light [48]. However, UV light also desorbs other, unwanted species from surfaces, such as water and oxygen [49,50,51], increasing their background gas pressures. In a recent experiment [48], we observed that the resulting increase in pressure due to unwanted gases is significantly smaller than that of our low-outgassing lithium dispenser. In addition, light-assisted desorption should be nearly instantaneous with application of the light, eliminating the need for a mechanical shutter. The combination of low outgassing and instantaneous response makes light-assisted desorption an attractive source for the p-CAVS. Finally, a source based on electrically-controlled chemical reactions, like those in a battery, may also work as a nearly instantaneous source of lithium with low outgassing [52].

* The effective pumping speed is determined by the combination of pumping speed and conductance of the components leading to the pumps.

Figure A1. (a) Schematic for a one-dimensional MOT. Two beams with opposite circular polarizations (measured along ẑ) and zero-field detuning ∆ are incident upon atoms in a magnetic field gradient (i.e., Eq. 15). The field is zero at z = 0. This gradient splits the magnetic sublevels of the upper orbital angular momentum state into three. (b) Hierarchy of splittings of a realistic alkali-metal atom. The orbital angular momentum states L = 0 (S) and L = 1 (P) used in (a) are first split into states denoted by L_J by spin-orbit interactions with total electronic angular momentum J. These levels are again split when the nuclear spin I is coupled in via the hyperfine interaction to J, creating states of total atomic angular momentum F. One typically operates the MOT on the F = I + 1/2 to F' = I + 3/2 transition (red arrow); however, because of off-resonant transitions from F = I + 1/2 to F' = I + 1/2, a "repump" laser is added (green arrow). The dashed arrows show possible decay channels from excited states to the ground state manifold by spontaneous emission.

Conclusion

Our group is currently in the process of building a portable cold-atom vacuum standard, the p-CAVS. This gauge will be based on recent advances in grating MOT technology and fit in a footprint equal to that of commonly used gauges for this vacuum regime like Bayard-Alpert ionization and extractor gauges. As part of the emerging Quantum-SI paradigm, our device is primary (traceable to the second and the kelvin) and has errors that are well-characterized and fit for purpose. There are two atom traps that we can operate with this gauge, each offering different performance but also different speed. The estimated uncertainties discussed in the previous sections are summarized in Tab. 3. We find that the pressure uncertainty from the MOT is only slightly worse than that from the magnetic trap. These estimates, however, depend on the accuracy of the semiclassical model of ⟨K⟩ and ⟨K_gl(W)⟩ and are subject to change. In a parallel effort, we are constructing a laboratory-scale standard in which we intend to measure both ⟨K⟩ and ⟨K_gl(W)⟩ to better than 5 % accuracy.

Appendix A. Atom trapping: a short introduction

Here, we provide a brief explanation of magneto-optical trapping and magnetic trapping, with a particular focus on the loading of atoms from one to the other. For a more thorough introduction, the interested reader can consult Refs. [36,53]. MOTs cool and trap atoms by a combination of the Doppler effect and spatially varying light forces.
The forces arise from light pressure: when an atom scatters a photon from a laser with wavevector k, it receives a momentum kick k. The characteristic timescale for this process is the excited state lifetime 1/γ. The typical MOT is depicted in Fig. A1a in one dimension for an atom with electronic orbital angular momentum L = 0 in the ground state and L = 1 in the excited state and projections m L of that angular momentum along this direction. First, consider an atom at some distance +z with zero velocity. With the appropriately chosen polarizations, the right-(left-) going beam couples the m L = 0 to m L = +1 (m L = 0 to m L = −1), as indicated by the colors. The Zeeman effect due to the magnetic field gradient shifts the m L = 0 to m L = +1 transition into resonance with the leftward going laser, while the rightward-going laser is shifted out of resonance with m L = 0 to m L = −1 transition. This causes the atom to scatter photons from the leftward going beam and be pushed back toward the origin. The two laser beams interchange their roles for an atom placed at −z, causing the atom to be again pushed toward the origin. Second, consider the center of the trap where the magnetic field is zero and the m L levels are degenerate. (Figure A1a depicts a stationary atom.) If the atom is moving with velocity +v (−v), the Doppler effect will shift the left (right) moving beam into resonance and the atom will scatter photons and be slowed. This is the slowing or cooling force of a MOT. This picture is further complicated by the presence of additional angular momentum states in the atom, as shown in Fig. A1b. All alkali-metal-atom MOTs operate on an electron orbital angular momentum L = 0 (S) to L = 1 (P) transition. However, the atom also has an electron spin S = 1/2, and the total electronic angular momentum is J = L + S. This results in a single ground state with J = 1/2 and two excited states with J = 1/2 and J = 3/2. The degeneracy of the two excited states is broken by spin-orbit coupling. This presents us a choice of whether to operate a MOT on the P 1/2 state (the D1 line) or P 3/2 state (the D2 line). In general, one wants the transitions driven in laser cooling to be "cycling" transitions: the excited state only decays back to the original ground state. This condition is most easily achieved on the J = 1/2 to J = 3/2 transition and, therefore, most MOTs operate on the D2 line. This picture must also include the nuclear spin, which adds to J to make a total angular momentum F = I + J. For the ground state with J = 1/2, this makes two states F = I ± 1/2 (for I > 1/2) that are split by the hyperfine interaction. For the excited J = 3/2, it creates four states. The cycling transition is once again found on the F = I + 1/2 to F = I + 3/2 transition, which can only decay back to F = I + 1/2 (see the dashed decay paths in Fig. A1b). The hyperfine splitting in the excited state, however, is not sufficiently large compared to the excited state lifetime to completely prevent transitions between F = I + 1/2 to F = I + 1/2. If an atom is driven to this excited state, it can decay by spontaneous emission into either of the F = I ± 1/2 ground states. Typically, as depicted in Fig. A1b, one must apply a second laser to "repump" the atoms out from F = I − 1/2 back to F = I + 1/2. The repump laser can also be used to transfer atoms into a magnetic trap in a simple way. By merely turning off the repump laser, all atoms will eventually find themselves in the F = I − 1/2 ground state. 
After this occurs, all lasers can be turned off and the atoms that happened to be pumped into the m F = −(I − 1/2) state are magnetically trapped. This is the simplest means to load a magnetic trap from a MOT. By re-applying both lasers, the atoms trapped in the magnetic trap can be brought back into the MOT and counted.
2018-09-14T12:51:18.000Z
2018-09-14T00:00:00.000
{ "year": 2018, "sha1": "18df8e17d2c06c5b0e567c1a921587f328918ba7", "oa_license": null, "oa_url": "https://europepmc.org/articles/pmc6459404?pdf=render", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "18df8e17d2c06c5b0e567c1a921587f328918ba7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
233924565
pes2o/s2orc
v3-fos-license
Influences of sea water on the ethylene-biosynthesis, senescence- associated gene expressions, and antioxidant characteristics of Arabidopsis plants We evaluated the physiological and antioxidant characteristics of Arabidopsis thaliana (At) plants grown in different sea water (SW) products containing trace elements, namely RO3, 300K, and 340K, at various dilutions. The synthetic water (namely 300K-Test), a mixture of the main ions of SW including 143.08 mg L Mg, 5.74 mg L Na, 170 mg L K, and 33.5 mg L Ca with equal concentrations to those in 300K SW without trace elements, was also used to culture At plants and study the influences that the major ions had on regulating ethylene production. The ethylene-biosynthesis (ACS7 and ACO2) and senescence-associated (NAP, SAG113, and WRKY6) gene expressions in SWand ionic-treated At plants in response to transcriptional signaling pathways of ethylene response mechanisms were also investigated. Our results show that down-regulation of the ACS7 gene in 300K-treated plants significantly reduced the ethylene content but remarkably increased chlorophyll, total phenol, and DPPH radical scavenging accumulations and strengthened the salt tolerance of 300K-treated plants. The expression of the ACS7 gene of At plants under 300K, Ca, Mg, and Na treatments was correlated with decreases in NAP, SAG113, and WRKY6 gene expressions. The application of Ca increased total phenol content and reduced the accumulation of superoxide, which in combination decreases plant aging brought on by ethylene. However, K treatment inhibited SGA113 gene expression, resulting in reducing ACS7 gene expression and ethylene content. The characterization and functional analysis of these genes should facilitate our understanding of ethylene response mechanisms in plants. Introduction Sea water (SW) contains abundant essential minerals (i.e. Mg 2+ , Na + , K + , and Ca 2+ ), along with minute amounts of many trace elements, and has attracted attention in accordance with a rise of the consciousness of health from the standpoint of preventive medicine (Nakagawa et al., 2000).Studies have shown that SW exerts diverse biological activity, such as regulating the immune system and antioxidant activity in rats (Jung and Joo, 2006). Thus, it has therapeutic effects on lipid metabolism and IgA production (Kang et al., 2015;Shiraishi et al., 2017) and also has been applied in the food, cosmetic, health, and medical fields (Nani et al., 2016;Higgins et al., 2019). Moreover, SW can be favorable for agriculture under certain circumstances, being used as an additional nutrient supplement at different concentrations to improve the nutritional quality of fruits and vegetables (Yudi et al., 2007;Saito et al., 2009;Yamada et al., 2015). Turhan et al. (2014) reported that low concentrations of SW are suitable for lettuce production, which can be successfully grown using SW diluted to concentrations of 2.5% and 5%.The effects of salt stress induced by SW treatments evaluated in red lettuce showed that tested plants grown with dilute SW accumulated more chlorophyll compared to those grown in NaCl solutions, thus increasing their quality and nutritional value (Sakamoto et al., 2014). The use of SW has the potential to achieve horticultural crop biofortification, meaning the endogenous nutrient fortification of food (Ding et al., 2016). Atzori et al. (2016) concluded that SW can be used in hydroponics, allowing freshwater savings and increasing certain mineral nutrient concentrations. 
Furthermore, Caparrotta et al. (2019) also showed that the use of SW treatments in hydroponic spinach cultivation has positive effects on growth parameters. Senescence is the final phase of leaf development, characterized by key processes in which resources trapped in deteriorating leaves are degraded and recycled to sustain the growth of newly formed organs. As the gaseous hormone ethylene exerts a profound effect on the progression of leaf senescence, both the optimal timing and amount of its biosynthesis are essential for controlled leaf development (Sun et al., 2017). The ethylene biosynthetic pathway in higher plants has been well documented (Yang and Hoffman, 1984). The rate-limiting step is the conversion of S-adenosylmethionine (SAM) to 1-aminocyclopropane-1-carboxylic acid (ACC) catalyzed by ACC synthase (ACS), and finally ethylene is produced through the oxidation of ACC by ACC oxidase (ACO). The regulation of these enzymes is therefore essential for controlling the rate and level of ethylene production. The application of ethylene improves plant tolerance to high salinity, largely by enhancing the expression of reactive oxygen species (ROS) scavengers (Peng et al., 2014). Several key enzymes in ethylene biosynthesis have been addressed as affected by salinity stresses, in which ACS7 is one of the major contributors to the synthesis of ethylene (Dong et al., 2011;Lyzenga et al., 2012). Moreover, ethylene production by ACO2 appears to be a key regulatory step in Arabidopsis plants (Linkies et al., 2009;Sekeli et al., 2014). NAP and WRKY proteins are leaf senescence-associated transcription factor (TF) gene families based on their DNA-binding conserved domains of 60 amino acids with an N terminus and a C2H2zinc-finger motif at the C terminus (Zhang and Gan, 2012;Kou et al., 2012;Guet al., 2019). Senescence-associated gene113 (SAG113), a gene encoding a Golgi-localized protein phosphatase 2C family protein phosphatase, mediates abscisic acid (ABA)-regulated stomatal movement and water loss specifically during leaf senescence. Previous studies showed that high accumulations of the ACS7 protein lead to precocious leaf senescence as well as greatly up-regulating the expressions of NAP, WRKY6, and SAG113 genes (Robatzek and Somssich, 2002;Sun et al., 2017).The study presents the expressions of these ethylene-biosynthesis and senescence-associated genes involving the signaling transduction pathways for delaying senescence in Arabidopsis plants treated with SW and its major ionic solutions. Previously, we reported that various dilution rates of commercial SW applied to pakchoi (Brassica rapa subsp. Chinensis) and tomato (Solanum lycopersicum var. cerasiforme) in hydroponic cultivation conditions enhanced cell viability by increasing 1,1-diphenyl-2-picrylhydrazyl (DPPH) scavenging ability and 2.3.5triphenyl tetrazolium chloride (TTC) activity and decreasing malondialdehyde (MDA) content in tested plant leaves compared with plant leaves without SW treatment (Xie et al., 2020). TTC activity was used as a quantitative method in the evaluation of cell viability, while the higher TTC activity the higher cell viability. MDA is a final decomposition product of lipid peroxidation and has been used as an index for the status of lipid peroxidation, the lower the MDA content the higher cell viability. In the present study, Arabidopsis plants grown in different concentrations of SW were evaluated for their physiological and antioxidant characteristics. 
The influences that the major ions (Mg 2+ , Na + , K + , and Ca 2+ ) contained in SW have on regulating ethylene production were determined. Salt ion analysis revealed significantly different accumulations of Mg 2+ , Na + , K + , and Ca 2+ between bent grass cultivars in response to salt stress (Krishnan and Merewitz, 2015). Understanding how ethylene profiles change will help elucidate the mechanisms governing salt-stress tolerance in plants, in which ethylene inhibits receptors, suppresses salt sensitivity conferred by ethylene receptors, and promotes ethylene-responsive salt tolerance (Cao et al., 2007).In addition, in order to test whether ethylene is involved in transcriptional regulation signaling pathways during SW and ionic treatments, expression patterns among the ethylene-biosynthesis and senescence-associated genes in SW-and ionic-treated At plants in response to ethylene production are also discussed to facilitate our understanding of ethylene response mechanisms, the physiological and molecular aspects of salt stress sensing functions, and improve plant stress tolerance, all of which are critical for plant growth and productivity. Materials and Methods Germination test and growth conditions One hundred seeds of Arabidopsis thaliana (At) L. ecotype Columbia were sterilized with 1.5% sodium hypochlorite and rinsed with distilled deionized (dd)H2O. Seeds were then germinated and grown in half strength Murashige-Skoog (MS, from Sigma-Aldrich Co., San Jose, CA, USA) in Petri dishes for two weeks after sowing in a growth chamber under 200 μmol m −2 s −1 light with a 16 h photoperiod at a temperature of 23 °C, and a relative humidity of 80% for a week. The germination rate (%) was then calculated (Chiang et al., 2014). Three commercial SW products (namely RO3, 300K, and 340K), each in three different dilution ratios, were applied to determine the optimal dilution rate (X) without influencing seed germination and seedlings by comparing with those plants without SW treatment (control). Uniformly sized three-week old seedlings were individually transferred to 3-inch (7.6 cm) plastic pots, and treated with 100 mL of each SW and complete nutrient solution (Millero et al., 2008;Caparrotta et al., 2019) once per week. Pots were randomly placed in the above-mentioned growth chamber under the same growing conditions for one week. Table 1 lists the characteristics of the three commercial SW products at various dilutions and 300K-Test synthetic water suitable for plant growth: RO3 (2,000 X, high Na + -containing SW with trace elements), 300K (540X, high Mg 2+ , K + , and Ca 2+ -containing ocean water with trace elements), 340K (6,180 X, high Mg 2+ , K + , and Na + -containing SW with trace elements), and 300K-Test (major ions with equal concentrations to that in 300K SW without trace elements) compared to those non-SW treated plants. Electrical conductivity (EC) and pH values of three SWs and 300K-Test solution were measured by an EC meter (DEC-2, Atago Co., Tokyo, Japan) and pH meter (DPH-2, Atago Co.), respectively. Their values were calculated by using the concentrations of the major constituents (Mg 2+ , Na + , K + , and Ca 2+ ) in the commercial SW (Sakamoto et al., 2014). In addition, the mixture of Mg 2+ , Na + , K + , and Ca 2+ synthetic water (same as 300K but free of trace elements, namely 300K-Test) was also used to culture At plants in order to assess the effects of these major ions on the physiological and antioxidant characteristics on the plants compared to 300K (540X) SW. 
The "300K-Test" ion synthetic water contained equal amounts and concentrations of the major ions Mg 2+ (143.08 mg L -1 ), Na + (5.74 mg L -1 ), K + (170 mg L -1 ), and Ca 2+ (33.5 mg L -1 ) as in the commercial 300K (540X) SW but free of trace elements (Sohrin et al., 1998). The pH of the 300K-Test was adjusted to 5.7, identical to the three commercial SW products. The EC value of 300K-Test was 1.082 (ms/cm), with obtained values being reported in Table 1. 4 Table 1. Constituent, pH value, and electrical conductivity (EC) value of three commercial sea water products [SW -RO3 (2,000X), 300K (540X), and 340K (6,180X)] and a mixture of Mg 2+ , Na + , K + , and Ca 2+ ions only (namely 300K-Test, free of trace elements) *The three commercial SW was obtained from LOHA Water Tech Co., Taipei, Taiwan, and processed though electrodeionization and vacuum concentration.RO3 was used as the basic level, set as 1 to calibrate the salinity ratios of 300K (0.27) and 340K (3.09). Trace elements such as zinc, manganese, vanadium, chromium, and selenium are negligible in these three commercial SW solutions. +: with trace elements; -: without trace elements. The SW was obtained from LOHA Water Tech Co., Taipei, Taiwan, and processed though electrodeionization and vacuum concentration. However, high salinity from the non-diluted SW had a harmful effect on seed germination. RO3 was then used as the basic level, set as 1 to calibrate the salinity ratios of 300K (0.27) and 340K (3.09). Afterward, three various diluted rates (X) of each SW were established according to the salinity ratio and diluted with dd water to the same salinity. For example, the three dilution rates (500X, 1,000X, and 2,000X) of RO3 were multiplied by 0.27 and 3.09 to obtain the three dilution rates (135X, 270X, and 540X) of 300K and (1,545X, 3,090X, and 6,180X) 340K, respectively (Table 1). RO3 (500X) was used in equal concentration to that of 300K (135X) and 340K (1,454X) SW for studying the influences of the main elements and trace elements in SW on Arabidopsis plants. Each different dilution rate of each SW treatment was applied to 100 seedlings or plants in the experiment in a completely randomized design. The At plants grown under 1/2 MS medium without SW and 300K-Test treatments served as controls. Following each treatment, young, fully expanded leaves from each plant were clipped, frozen in liquid nitrogen, and stored at -80 °C in an ultra-freezer until used for the analyses. Determination of total chlorophyll, phenolic, and MDA contents, DPPH scavenging capacity, enzyme activity, and in situ ROS staining The total Chl content of leaves from three-week-old potted At plants from each treatment were determined using methods described by Arnon (1949). The analysis of DPPH radical scavenging activity in At leaf extracts was determined according to Shimada et al. (1992). The DPPH scavenging capacity was calculated as the percentage of free radical-scavenging activity. The measurement of MDA content using the thiobarbituric acid (TBA)-trichloroacetic acid (TCA) method was described by Kosugi and Kikugawa (1985). Ten plants per treatment were used for all analyses. ROS were stained in situ utilizing the principle of nitroblue-tetrazolium (NBT) reduction to blue formazan by superoxide radicals. The intracellular concentration of superoxide radicals was directly proportional to the development of the intensity of blue color in the leaves and previously described in Shafi et al. (2014). 
The relative changes in ethylene-synthesis (ACS7 and ACO2) and senescence-associated (NAP, SAG113, and WRKY6) gene expressions in response to various diluted SW and their four major ionic (Mg 2+ , Na + , K + , and Ca 2+ ) treatments were monitored by real-time quantitative (q)PCR and quantification of RNA levels. To test whether NAP, SAG113, and WRKY6 gene expressions were induced by ethylene, two-week-old At plants were foliar sprayed with 1 mM ethephon aqueous solution (Sigma-Aldrich Cat.# C0143) for one week as previously described (Wen et al., 2015). Real-time qPCR was performed based on our previous study (Lin et al., 2019). To normalize the total amount of cDNA in each reaction, AtActin-8 from Arabidopsis was co-amplified as an internal control. The relative amounts of RNA were calculated as the ratio of the abundance in 300K- and major ionic-treated plants to AtActin-8 (Livak and Schmittgen, 2001). Ethylene emission measurements The rate of ethylene evolution was determined on four-week-old At plants (10 different leaves of one rosette and ten plants per treatment) with a portable ethylene gas analyzer (CI-900; CID Bio-Science, Camas, WA, USA). Pots were kept inside the airtight chamber of the instrument for 2 min and their rates of ethylene evolution (ppm) were read (Krishnan and Merewitz, 2015). TTC activity determination of At seeds The data on seed viability by the TTC activity test were also recorded up to seven days after sowing, and ten plants were used for each treatment. TTC analysis was performed based on Hussain and Reigosa (2014) using an ELISA reader (Spectrophotometer U-2900, Hitachi, Tokyo, Japan) and expressed as A485 μg per plant weight (g) per hour (h). Statistical analysis The measurements of physiological and antioxidant parameters were analysed using a paired t-test and one-way analysis of variance (ANOVA), with the least significant difference (LSD) test at p < 0.05, using the SAS program ver. 9 (SAS Institute, Cary, NC, USA). Morphology of At seedlings The At seedlings grown in 1/2 MS medium with RO3 (500X, 1,000X, and 2,000X), 300K (135X, 270X, and 540X), and 340K (1,545X, 3,090X, and 6,180X) treatments were impaired, epinastic, senescent, yellowish, and smaller in size relative to all of the control plants (photos not shown). On the other hand, no obvious differences were observed in the colors and sizes of seedlings cultivated in the highest dilution ratios (lowest concentrations) of SW treatments and controls. However, higher concentrations of SW displayed inhibitory effects on growth from salinity, and salt-stressed seedlings suffered a changed cell water relation that imposed a cost for osmotic adjustment, which generally reduced the absorption and translocation of water (Munns, 2002). In fact, salinity negatively affects plant growth and physiology through different mechanisms, such as water and osmotic stress (Garcia-Sanchez and Syvertsen, 2009; Balal et al., 2012; Gonzalez et al., 2012). It is generally known that when SW is provided to plants during cultivation, minerals contained in the seawater may stimulate growth. Islam et al. (2010) reported that eggplant variety 'Ryoma' plants grown with the application of 2% mineral-controlled sea water added to the standard nutrient solution under greenhouse conditions had a larger vegetative growth rate than the control. Furthermore, with the 2% mineral-controlled sea water treatment the plants increased fruit yield by 14% compared to the control. For Arabidopsis tolerance to salinity stress, Alet et al. 
(2012) reported that the inhibitory effects were observed when plants were grown under > 50 mM NaCl conditions. Throughout the duration of the experiment, seedlings appeared healthy and sported green and larger leaves when cultivated in the higher dilution ratios of RO3, 300K, and 340K treatments, withstanding the low osmotic pressure of the growing medium, in comparison to the above-mentioned lower dilution ratios. Therefore, RO3, 300K, 340K, and 300K-Test treatments were used for the following experiments. Moreover, these identified dilution systems could be used for the rapid monitoring and early detection of salt injury at the seedling stage. This means that hundreds of individual plants might be screened per day, providing for the large-scale discovery of individuals that exhibit tolerance to salt stress. Physiological and antioxidant characteristics of At seeds and plants No significant differences in the germination rates of At seeds were observed among all treatments and controls, with an average of 95.5% germination (Figure 1A), suggesting that diluted SW does not affect At seed germination due to its tolerance to low salinity stresses. TTC activity responded differently to diluted SW treatments (Figure 1B); the TTC activity of Arabidopsis seeds cultivated in 300K and 340K, at an average of 7.3 μg/g‧h, was significantly higher than in controls, which averaged 6 μg/g‧h, whereas seeds under RO3 treatment displayed TTC activity similar to controls. Thus, 300K and 340K SW improved cell activity in seeds compared to RO3 and controls, suggesting that SWs with different concentrations of Mg 2+ , K + , and Ca 2+ (Table 1) may influence the tissue cell viability of At plants. The MDA content of leaves grown in all SW treatments and controls did not show any significant differences (Figure 2A). Therefore, the SW concentrations used in the present experiment were unable to induce any change in MDA content -in other words, lipid peroxidation- in the leaves of At plants, suggesting the possibility of cultivation at the tested SW concentrations. Thus, SWs with different Mg 2+ , K + , and Ca 2+ concentrations (Table 1) did not promote lipid peroxidation in the leaves. The significantly higher DPPH radical scavenging effect (41.2%) and total Chl content (0.58 mg/g FW) in tested leaves were observed with 300K compared to the control (40.4% and 0.48 mg/g FW, respectively), whereas no remarkable differences in DPPH radical scavenging effects and total Chl content were detected among RO3, 340K, and controls (Figures 2B, C). These results demonstrate that only the 300K treatment could increase antioxidant content and scavenge DPPH radicals, and also suggest that the major ions contained in 300K SW may enhance the synthesis and accumulation of Chl. Magnesium is a main constituent of the Chl molecule, bound by four pyrrole groups. Consequently, the remarkably high concentration (143.08 mg L -1 ) of Mg 2+ in the 300K solution could critically influence the additional synthesis of Chl in the leaves. Furthermore, 300K-treated At plants exhibited lower accumulations of superoxide (•O 2− ) in leaves compared to other SW treatments and controls in in situ ROS staining (Figure 2D), suggesting that plants treated with diluted 300K had slightly lower ROS accumulations compared to the control. In addition, Mg 2+ , Na + , K + , Ca 2+ , and 300K also displayed lower levels of •O 2− accumulation in leaves, as evidenced by the lower intensity of the fuscous precipitate (blue color) compared to the control (Figure 2E). 
Therefore, 300K, Mg 2+ , Na + , K + , and Ca 2+ solutions were applied to tested plants for antioxidant capacity analysis. Plants offset the initial osmotic components of salt stress by adjusting the osmotic gradient, although the accumulation of Na + can lead to toxic effects in the long term (Alvarez-Aragon and Rodriguez-Navarro, 2017). Na + affects the hydration shell of other molecules, causes damage to the cell wall, disturbs the K + /Na + ratio of cells by several mechanisms, and impairs plant physiology (Julkowska and Testerink, 2015). Generally, plants that have the ability to excrete, exclude, or tolerate high levels of salt are salt-resistant, the differential expression genes of proton pumps or antioxidant capacity could play a role in causing the differential accumulation of Na + in plant leaves and roots between cultivars (Janicka- Russak et al., 2013;Pérez-López et al., 2013;Krishnan and Merewitz, 2015). In addition, the maintenance of all K + ion transporters and channels across the plasma membrane is essential for proper K + homeostasis in plants . Calcium can also improve K + transport under salt stress conditions (Maathuis, 2006). Although plants rely on a sufficient supply of Mg 2+ and other elements for normal growth and development, excessive Mg 2+ accumulation often causes toxicity to plant cells (Niu et al., 2018). Salt movement and accumulation in roots and leaves of At plants subjected to SW and ionic treatments for their contrasts in salt tolerance are worthy of further investigation. Arabidopsis plants treated with 300K and Ca 2+ cultures showed significantly higher total phenolic content (18 mg of GAE/g and 22 mg of GAE/g, respectively) compared to the control (13 mg of GAE/g) ( Figure 3E). Nevertheless, no significant differences were observed between Mg 2+ , Na + , K + , and Ca 2+ alone or combined (300K and 300K-Test) in cultures and controls. The four antioxidant activities of At leaf extracts from 300K and all individual ionic cultures were non-significantly different from controls ( Figures 3A-D), suggesting that CAT, SOD, APX, and GR did not participate in active ROS reduction irrespective of the plant 9 growth period (three weeks after sowing) when treated with 300K and those four major ions.ROS production and scavenging are interactive, maintaining relative stability in plants. Presumably, the accumulation of antioxidant system components and ROS formation are favored in salt tolerance. Increased levels of ROS in salt-stressed plants could lead to an increased capacity of the ROS scavenging system. Salt stress induces the production of ROS such as singlet oxygen (1O2), superoxide radicals (O2˙⎯), hydrogen peroxide (H2O2), and hydroxyl radicals (OH˙). These ROS are necessary for inter-and intracellular signaling, but at high concentrations they seriously disrupt normal metabolism in plants through the oxidation of membrane lipids, proteins, and nucleic acids (Hoque et al., 2007). Numerous studies have indicated that antioxidant systems are correlated with plant tolerance to salt stress, these enzymes and/or nonenzymes are required to maintain redox homeostasis, and the induction of antioxidants and osmolytes is part of an integrated strategy for salt stress defense (Lin and Pu, 2010). Pre-treating with SW and ionic solutions may influence the ability to maintain a balance between the formations and de-oxidation of ROS, leading to leaf vulnerability against oxidative stress. 
Salt stresses induce the production of ROS, which are necessary for inter-and intracellular signaling, but under stress conditions they seriously disrupt normal metabolism in plants through the oxidation of membrane lipids, proteins, and nucleic acids in the absence of protective mechanisms (Nguyen et al., 2018). In our study, total phenolic content is markedly accumulated in At leaves exposed to salinity stress, suggesting that total phenol content may be useful in screening salt-tolerant plants. Increased DPPH radical scavenging activity was observed in the extracts of At grown in cultures with 300K SW, which may be due to the contribution of phenolics accumulated in the leaves. Supplementation of 300K SW increased the salinity of the nutrient solution and subsequently might increase the Ca 2+ uptake; thus, leaves contained more total phenolic content. The phenolic compound alteration due to salinity stress is critically dependent on the salt sensitivity of the plant. In fact, salt stress creating both ionic as well as osmotic stress in plants resulting in increased polyphenol concentration indifferent tissues have been reported in a number of plants (Parida and Das, 2005). We assume that At plants were subjected to osmotic stress by the addition of Ca +2 to the cultures, and as a result, phenolics are produced and accumulate in leaf cells and function as osmolytes, and are believed to facilitate osmotic adjustments by acting as osmoprotectants. Errabii et al. (2006) reported that growth, proline and ion accumulation in sugarcane callus cultures under drought-induced osmotic stress and its subsequent relief, and a sudden osmotic up shift in the medium causes a water efflux from the cells, loss of turgor pressure, and concomitant reduced growth. It is known that hyperosmolality and various other stimuli trigger increases in cytosolic free calcium concentration. Environmental water deficiency triggers an osmotic stress signaling cascade, which induces short-term cellular responses to reduce water loss and longterm responses to remodel the transcriptional network and physiological and developmental processes (Yuan et al., 2014). The cell wall also contains phenolics, enzymes, proteins, and Ca 2+ , and osmotic stress can lead to the accumulation of ROS in the cell wall (Tenhaken, 2014). The calcium ion acts at a convergence point for integrating different signals, and may have a role in providing salt tolerance to plant cells. Some oxidant systems use Ca 2+ to stimulate oxidative bursts in leaf cells, but some do not. Manipulating Ca 2+ homeostasis by altering the concentration of Ca 2+ could be an important strategy to alter the behavior and survival of plants under salt stress (Lin et al., 2008). As a consequence, Ca 2+ may play an important role in the antioxidant system under salt stress. It is possible that both calcium and ROS could be important modulators of the cellular signaling of transduction events following salt-stress injury. Perhaps a higher level of Ca 2+ under non-stressed conditions allows for enhanced stress perception, signaling, or Ca 2+ -induced stabilization of cell structure at the onset of salt stress. The development of salt stress in leaves was more gradual or perhaps delayed by Ca +2 with 33.5 mg L -1 treatment. Determination of ACS7 and ACO2 genes expression and ethylene emission in At plants with ethephon treatment Ethylene biosynthesis may have resulted from gene activation and/or up-regulation of ethylene-induced ACS7 and ACO2 genes. 
To investigate the regulation and expression of ACS7 and ACO2 genes in At plants, a real-time qPCR analysis was performed with extracted RNA from two-week-old plants subjected to diluted SW and 300K-Test solutions for one week. Data were normalized with respect to the RNA level of AtActin8, a housekeeping gene that is consistently expressed in plants. Figures 4A and B show that RNA abundances of ACS7 andACO2 were significantly up-regulated in RO3 and 340K treatments in comparison to controls. However, RNA expressions of ACS7 were significantly lower in 300K, 300K-Test, and individual Ca 2+ , Mg 2+ , and Na + in 300K solutions than in controls ( Figures 4A and C). Moreover, the ACS7 gene was significantly and highly expressed in K + -treated plants compared to control plants. Total RNA in all tested plants was extracted from leaves of two-week-old plants subjected for one week to diluted RO3, 300K, 340K, and 300K-Test, followed by ACS7 (A) and ACO2 (B) gene expressions. Panel C is of two-week-old plants treated for one week with diluted 300K, and individual Mg 2+ , Na + , K + , and Ca 2+ ions, followed by the relative RNA expression of the ACS7 gene. Relative amounts were calculated and normalized with respect to the AtActin-8 gene. Controls were Arabidopsis thaliana plants grown in 1/2 MS medium without SW and ionic treatments. Values are the means of eight replicates with corresponding standard deviations. The relative ACS7 and ACO2 gene expressions are in comparison to control plants, and an asterisk indicates a significance level of p ≤ 0.05 Figure 5A shows that the ethylene level of three-week-old plants under 300K treatment (0.085 ppm) was significantly lower than in controls (0.105 ppm). Furthermore, ethylene levels in all tested plants in all ionic treatments (0.09 ~ 0.1 ppm) were significantly lower in controls (0.12 ppm) after exogenously applied 1μM ethephon treatment for one week, whereas ethylene levels in all plants in all ionic treatments were close to the levels of controls (0.105 ppm) without ethephon treatment ( Figure 5B).These results suggest that leaf senescence depends on the balance between ACS7-generated ethylene and ionic-dependent ethylene accumulation in Arabidopsis. Ethephon is an ethylene production inducer and is chemically converted to ethylene by oxidation. Ethylene biosynthesis occurs in all plant tissues and throughout all stages of leaf development, but endogenous ethylene levels vary according to the stage of leaf growth and development (Iqbal et al., 2017), which can be promoted or inhibited by ethylene, coupled with an increase or decrease in ACC synthesis, respectively (Ceusters and VandePoel, 2018). Plants at different developmental stages or with different genetic backgrounds express ACO at different levels, leading to regulation of ethylene production (Kim et al., 2003). Sun et al. (2017) reported that ACS7 degradation is highly regulated by senescence signals to enable optimal ethylene production at the appropriate times during At leaf development. Ethylene production and expressions of senescence-associated genes in At plants with ethephon treatment Relative RNA expressions of NAP, SAG113, and WRKY6 genes involved in the senescence response were analyzed for the possibility that ethylene is involved in the signaling pathways of NAP, SAG113, and WRKY6genes. Figure 6A shows that significant up-regulated expressions of NAP, SAG113, and WRKY6 were observed in plants treated with ethephon (> 2.2) compared to non-ethephon treatment (= 1). 
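Because the relative expression values above are normalized to AtActin-8 following Livak and Schmittgen (2001), the sketch below shows the standard 2^-ΔΔCt calculation that this reference describes. All Ct values are invented placeholders for illustration; the authors' exact data processing may differ in detail.

```python
# Minimal 2^-delta-delta-Ct calculation (Livak and Schmittgen, 2001) for
# relative gene expression normalized to a reference gene (here AtActin-8).
# The Ct values below are made-up placeholders, not data from the study.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of the target gene in treated vs. control plants."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: hypothetical ACS7 Ct values in a 300K-treated vs. a control plant
fold = relative_expression(ct_target_treated=26.5, ct_ref_treated=20.0,
                           ct_target_control=25.0, ct_ref_control=20.1)
print(f"ACS7 relative expression: {fold:.2f}")  # values < 1 indicate down-regulation
```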
Nevertheless, when those plants were subjected to 300K, Ca 2+ , Mg 2+ , and Na + treatments, all expressions of NAP, SAG113, and WRKY6 (< 0.7) were significantly lower than in control plants (= 1) ( Figures 6B-D). Moreover, RNA levels of NAP and SAG113 in plants under K + treatment were significantly higher (= 1.2) and lower (0.38), respectively, than in controls (= 1) ( Figures 6B, C). These differences in gene expression may be because of the influence that major ions in SW have on the ethylene biosynthesis of At plants. This study used a mixture of these major ions to prepare the SW used to culture At plants in the attempt to understand the influence that the major ions in SW have on regulating ethylene-produced pathways. The results suggest that the combination of the four individual ionic waters may have a synergistic effect on the regulation of ethylene biosynthesis. These findings can serve as a valuable reference for improving electric dialysis and water separation techniques based on SW. Aging factors NAP, SAG113, and WRKY6 play important roles in delaying senescence, with different signaling pathways in these ionic-treated At plants. After ethephon application and ionic treatments of At plants, the up-regulation of NAP, SAG113, and WRKY6 gene expressions and obviously decreased ethylene content delayed senescence during plant development compared to control plants. The down-regulation of these senescence-associated genes in the ionic-treated plants activated the expression of the downstream target ACS7 gene that was also down-regulated in expression in Ca 2+ -, Mg 2+ -, and Na + -treated plants, but was highly expressed in K + -treated plants compared to control plants. Consequently, these ions acted as a signal to NAP, SAG113, and WRK6 genes and activated gene products involved in ethylene acclimation and tolerance of ethylene-downregulated pathway in ionic-treated plants. Both ethylene and Ca 2+ have been documented to play important roles in plant senescence. A balanced and timely supply of Ca 2+ sources for fruit and vegetable crops during the growing season and at the postharvest stage improves the shelf life and nutritional quality of horticultural produce (Gao et al., 2019). Furthermore, Ca 2+ supply to ornamental crops extends the vase-life of flowers by delaying senescence and reducing intensified ethylene production (Aghdam et al., 2012). When the Ca 2+ concentration changes, plants use a Ca 2+ effector protein to sense this signal, and then manage external 13 stimulation by regulating the expression of the plant stress gene. The Ca 2+ signaling process is activated with the presence of a Ca 2+ sensor and their target proteins . Monitoring the expressions of plant genes at the transcriptional level is an essential step in their functional analysis. Expression patterns in ACS7 and ACO2 in response to SW and ionic treatment stress provide a molecular basis for the ethylene biosynthesis pathway in plants. Thus, down-regulation in the ACS7 gene in 300K-treated plants (Figure 4) reduced ethylene content ( Figure 5), but increased Chl, total phenol, and DPPH radical scavenging accumulations (Figures 2 and 3) and strengthened salt tolerance in 300K-treated plants. The expression profile of the ACS7 gene in control plants and with 300K, Ca 2+ , Mg 2+ , and Na + treatments ( Figure 4C) was correlated with decreases in NAP, SAG113, and WRKY6 gene expressions ( Figures 6B-D). 
These ionic-induced transcriptional activations of senescence-associated genes correspond to decreases in the products of the NAP, SAG113, and WRKY6 genes that protect cellular components against the effects of ROS accumulation as a consequence of delaying senescence. Ethylene is an important plant growth substance, and ethylene responses may interact with other physiological processes or responses. Hua and Meyerowitz (1998) reported that ethylene treatment results in leaf senescence coupled with a decrease in Chl content and up-regulation of SAG2 expression. Figure 7 presents a prediction of the signaling transduction pathways of delaying senescence in Arabidopsis plants treated with 300K DOW. The Mg 2+ , Na + , and Ca 2+ in the 300K solution decreased the expressions of NAP, SAG113, and WRKY6, leading to decreased ACS7 gene expression and reduced ethylene production, but expression of the ACO2 gene was not affected by 300K treatment. The alternative pathway of Ca 2+ in the diluted 300K was to increase total phenol content and reduce the accumulation of free radicals (i.e., superoxide), which decrease plant aging from ethylene. However, the K + in the diluted 300K inhibited SAG113 gene expression, resulting in reduced ACS7 gene expression and reduced ethylene production. Although the main ions in SW induced ethylene biosynthesis via transcriptional regulation, whether the signaling transduction pathways of delaying senescence in At plants were affected by the trace elements in the SW in this study remains unknown. The presence of NAP, SAG113, and WRKY6 transcripts in the 300K-, Ca 2+ -, Mg 2+ -, and Na + -treated plants was a rapid response to ethylene content, indicating the importance of these genes in the ROS defense system in plant cells. The 300K-treated plants exhibited stronger tolerance to ethylene due to less •O 2− accumulation. The senescence stress tolerance caused by Ca 2+ treatment led to reduced production of ethylene, consequently increasing total Chl and phenol content. The capacity of a plant to down-regulate the expression of NAP, SAG113, and WRKY6 defines that plant's tolerance to Ca 2+ stress, with enhanced transcription of these genes and subsequently increased total phenolic content and DPPH radical scavenging effects leading to reduced ethylene content and delayed senescence in plants. A high level of total phenol content and DPPH radical scavenging activity may result from the down-regulated expressions of the ACS7, NAP, SAG113, and WRKY6 genes, which could eliminate ethylene-induced •O 2− ; these genes in 300K- and Ca 2+ -treated plants were involved in •O 2− detoxification and thus helped overcome ethylene-induced stress. The 300K- and Ca 2+ -treated plants with higher total phenol levels and DPPH radical scavenging activity benefited from an increased capacity of their ROS-scavenging system. 
The ethylene-induced transcriptional activation of the NAP, SAG113, and WRKY6 genes corresponded to an increase in total phenol content and DPPH radical scavenging activity, which protected cellular components against the effects of ROS and ethylene. This will be helpful for precisely controlling ethylene production and signaling to enhance salinity tolerance and improve the agronomic traits of crops. Although ACS7, NAP, SAG113, and WRKY6 play major roles in the biosynthesis of senescence ethylene and help elucidate the underlying mechanisms, how plants coordinate their ethylene biosynthesis with leaf senescence through post-transcriptional regulation remains an area for further work. Conclusions Many lands and fields are located in coastal areas, and diluted SW can be considered a free local resource for possible use in irrigation. In the present study, we found that the increased TTC activity under 300K SW suggests enhanced tissue cell viability of At plants. The 300K SW down-regulated the ACS7 gene and significantly reduced the ethylene content, but remarkably increased Chl content, phenol content, and DPPH radical scavenging activity, and strengthened the salt tolerance. In addition, Mg 2+ , Na + , K + , Ca 2+ , and 300K treatments displayed lower levels of •O 2− accumulation in leaves. The induction of ACS7-generated ethylene for leaf senescence was countered by NAP-, SAG113-, and WRKY6-dependent Ca 2+ , Mg 2+ , and Na + accumulation; moreover, phenol content increased and NAP-, SAG113-, and WRKY6-induced leaf senescence occurred in non-senescing leaves. Nevertheless, K + treatment inhibited SAG113 gene expression, resulting in reduced ACS7 gene expression and ethylene content. The increased tolerance to salt of At plants that down-regulated the ACS7, NAP, SAG113, and WRKY6 genes suggests that these genes may be helpful in decreasing the ethylene content. The use of 300K SW caused a delay in leaf senescence, and this may be the primary cultural use for Ca 2+ . The characterization and functional analysis of these genes should facilitate our understanding of ethylene response mechanisms in plants. Authors' Contributions CMC conceived and designed the experiments; WJC, MYH, and SFP performed the experiments; YSC, HCW, and HHL analyzed and interpreted the data; KHL and CMC prepared, wrote, and reviewed the manuscript. All authors read and approved the final manuscript.
2021-05-08T00:04:33.176Z
2021-02-10T00:00:00.000
{ "year": 2021, "sha1": "cda1650f9a651518dc735e7b5fd9c4d04dc95295", "oa_license": "CCBY", "oa_url": "https://www.notulaebotanicae.ro/index.php/nbha/article/download/12205/9103", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dd200c475682429b60c8829b31a44e6e3f1713b9", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
13919368
pes2o/s2orc
v3-fos-license
Lopinavir/ritonavir significantly influences pharmacokinetic exposure of artemether/lumefantrine in HIV-infected Ugandan adults Background Treatment of HIV/malaria-coinfected patients with antiretroviral therapy (ART) and artemisinin-based combination therapy has potential for drug interactions. We investigated the pharmacokinetics of artemether, dihydroartemisinin and lumefantrine after administration of a single dose of 80/480 mg of artemether/lumefantrine to HIV-infected adults, taken with and without lopinavir/ritonavir. Methods A two-arm parallel study of 13 HIV-infected ART-naive adults and 16 HIV-infected adults stable on 400/100 mg of lopinavir/ritonavir plus two nucleoside reverse transcriptase inhibitors (ClinicalTrials.gov, NCT 00619944). Each participant received a single dose of 80/480 mg of artemether/lumefantrine under continuous cardiac function monitoring. Plasma concentrations of artemether, dihydroartemisinin and lumefantrine were measured. Results Co-administration of artemether/lumefantrine with lopinavir/ritonavir significantly reduced artemether maximum concentration (Cmax) and area under the concentration–time curve (AUC) [median (range): 112 (20–362) versus 56 (17–236) ng/mL, P = 0.03; and 264 (92–1129) versus 151 (38–606) ng · h/mL, P < 0.01]. Dihydroartemisinin Cmax and AUC were not affected [66 (10–111) versus 73 (31–224) ng/mL, P = 0.55; and 213 (68–343) versus 175 (118–262) ng · h/mL P = 0.27]. Lumefantrine Cmax and AUC increased during co-administration [2532 (1071–5957) versus 7097 (2396–9462) ng/mL, P < 0.01; and 41 119 (12 850–125 200) versus 199 678 (71 205–251 015) ng · h/mL, P < 0.01]. Conclusions Co-administration of artemether/lumefantrine with lopinavir/ritonavir significantly increases lumefantrine exposure, but decreases artemether exposure. Population pharmacokinetic and pharmacodynamic trials will be highly valuable in evaluating the clinical significance of this interaction and determining whether dosage modifications are indicated. Introduction Malaria and HIV are two infectious diseases causing significant morbidity and mortality worldwide. The two diseases have overlapping geographical distribution in sub-Saharan Africa, where over 90% of the world malaria burden and 67% of the global HIV burden occur. 1,2 Significant interactions occur between the two diseases, with HIV increasing the risks for malaria frequency and severity. 3,4 Infection with malaria stimulates immune mechanisms that activate HIV replication, causing a transient increase in HIV viral load. 5,6 Major effort has been made to ensure universal access to antiretroviral therapy (ART), with significant improvement in quality of life and survival of people living with HIV. In 2009, 1.2 million people were initiated on ART, a 30% increase in ART coverage in one year. 7 Successful treatment of infectious diseases such as HIV and malaria requires adequate drug concentrations at the target site to produce maximal efficacy with minimal toxicity. Drug pharmacokinetics might be influenced by drug -drug interactions. Antiretroviral drugs, specifically the non-nucleoside reverse transcriptase inhibitors and protease inhibitors, are potent inducers and/or inhibitors of cytochrome (CYP) enzymes and transporter proteins, with potential for drug -drug interactions when co-administered with other drugs. 8,9 The WHO recommends artemisinin-based combination therapy (ACT) for the treatment of uncomplicated malaria. 
2 The combination of artemether and lumefantrine offers excellent efficacy against susceptible and multidrug-resistant Plasmodium falciparum. Both artemether and lumefantrine are metabolized predominantly by CYP3A4. 10 Artemether is metabolized to dihydroartemisinin, predominantly by CYP3A4/5 and to a lesser extent by CYP2B6, CYP2C9, CYP2C19 and possibly CYP2A6. 8,10 -12 Dihydroartemisinin is rapidly converted into inactive metabolites primarily by glucuronidation via uridine diphosphoglucuronyltransferases (UGTs) UGT1A1, UGT1A8/9 and UGT2B7. 10,12 -14 Both artemether and dihydroartemisinin possess potent antimalarial properties, causing a rapid reduction in asexual parasite biomass, with prompt resolution of symptoms. 15,16 Lumefantrine is slowly eliminated, mainly metabolized by CYP3A4 to desbutyl-lumefantrine. 10 -12 Lumefantrine eradicates residual malaria parasites thereby preventing recrudescence. 10,13,14 Total exposure to lumefantrine predicts parasite eradication and is the principal pharmacokinetic correlate of artemether/lumefantrine treatment. 16 Lopinavir and ritonavir are inhibitors of CYP3A4, so co-administration with artemether/lumefantrine may result in increased artemether and lumefantrine plasma concentrations. Elevated lumefantrine plasma concentrations are of particular concern because of the structural similarity to halofantrine, a drug associated with cardiac arrythmias and sudden death. 17 -19 In a previous study, co-administration of lopinavir/ritonavir with artemether/lumefantrine to healthy volunteers resulted in significantly increased lumefantrine exposure, decreased dihydroartemisinin exposure and a trend towards decreased artemether exposure. 20 The aim of the present study was to investigate the pharmacokinetics of artemether, dihydroartemisinin and lumefantrine after administration of a single dose of 80/480 mg of artemether/lumefantrine to HIV-infected adults, taken with and without lopinavir/ritonavir-based ART. To avoid unknown adverse effects, we administered a single dose of artemether/ lumefantrine to HIV-infected patients without malaria and vigilantly monitored their cardiac function. Study site The study was conducted between January 2008 and June 2009 at the Infectious Diseases Institute (IDI) and the Uganda Heart Institute, Mulago Hospital, Kampala, Uganda. Study design and population This was a two-arm parallel study to assess the pharmacokinetics of a single dose of artemether/lumefantrine co-administered with and without lopinavir/ritonavir-based ART to HIV-infected patients without malaria. Patients were eligible to participate if they were older than 18 years, with no evidence of systemic illness and no indication for medications with known potential for drug interactions with the study drugs. Patients with abnormal cardiac, liver or renal function, positive blood smear for malaria, pregnant mothers and those who reported use of any herbal medication were excluded. Ethical considerations The study was approved by the Uganda National HIV/AIDS Research Committee (ARC 056) and the Uganda National Council of Science and Technology (HS 195), and was registered with ClinicalTrials.gov (NCT 00619944). Study procedures were explained to participants in their local languages. Each participant received an information leaflet to take home. All participants provided written informed consent prior to study entry. Study procedures were conducted in accordance with the principles of Good Clinical Practice. 
Study procedures Patients were screened and enrolled consecutively from the cohort of patients attending the IDI. The artemether/lumefantrine plus lopinavir/ritonavir arm consisted of HIV-positive patients stable on 400/100 mg of lopinavir/ritonavir plus two nucleoside reverse transcriptase inhibitors (NRTIs) taken twice daily for at least 1 month. The artemether/lumefantrine arm consisted of HIV-positive ART-naive patients who had not started ART and were not yet eligible for ART according to national guidelines. Patients in both arms took co-trimoxazole daily for prophylaxis against opportunistic infections. Adherence to study drugs was assessed using self-report and pill count at each clinical visit. On the evening prior to the study day, participants were reminded of their study-day appointment and were given detailed instructions to eat food; those in the lopinavir/ritonavir arm were reminded to administer their ART by 8.00 pm, and arrive at the hospital by 7.00 am in a fasting state. On the morning of the study day, patients were admitted to the Heart Institute. Blood smears for malaria parasites were performed, and patients found to have positive smears were given a standard six-dose course of artemether/lumefantrine and excluded from further study. A 12-lead electrocardiograph (ECG) monitor was attached for continuous cardiac function monitoring. An indwelling intravenous catheter was inserted following aseptic techniques, and blood samples were drawn for the determination of pre-dose concentrations of artemether, dihydroartemisinin and lumefantrine. A standardized breakfast with added fat to cater for the fat requirement for artemether/lumefantrine absorption was administered. 21 The intake of breakfast and study drugs was directly observed by study staff. All patients took a single dose of four tablets, equivalent to 80/480 mg of artemether/lumefantrine (Coartem®, Novartis Pharma AG, Basel, Switzerland; Batch number: F0660) with water immediately after breakfast. Patients in the lopinavir/ritonavir arm took 400/100 mg of lopinavir/ritonavir (Aluvia®, Abbott Laboratories, USA) plus two NRTIs with their study artemether/lumefantrine dose. The NRTI combination consisted of zidovudine plus didanosine, or tenofovir plus emtricitabine. Sampling was performed at 1, 2, 4, 6, 8, 12, 24, 48 and 72 h post-artemether/lumefantrine dosing. An aliquot of 4 mL of blood was collected per sampling time in lithium-heparin tubes. Samples were centrifuged immediately for 10 min; plasma was separated and stored immediately at −70°C until shipment on dry ice to the Clinical Pharmacology Laboratory, Mahidol-Oxford Tropical Medicine Research Unit, Mahidol University, Bangkok, Thailand for measurement of artemether, dihydroartemisinin and lumefantrine plasma concentrations. Safety assessment Medical history, physical examination, routine clinical laboratory tests, ECG and urine screens for pregnancy were performed at screening. On the study day, medical history, physical examination and blood smears for malaria parasites were performed. Standard 12-lead ECGs were recorded at screening, immediately prior to dosing, then continuously for 12 h post-dose of artemether/lumefantrine and once daily for 3 days thereafter. Participants were monitored for adverse events until 2 weeks post-sampling; the onset, duration, severity and relationship to the trial drugs (if any) were noted. 
Artemether, dihydroartemisinin and lumefantrine plasma concentration measurement Artemether and dihydroartemisinin concentrations were measured using solid-phase extraction and liquid chromatography/mass spectrometry. 22 Total-assay coefficients of variation for dihydroartemisinin and artemether during analysis were less than 5% at all quality control levels. The lower limit of quantification was 1.4 ng/mL and the limit of detection was 0.5 ng/mL for both drugs. 22 Lumefantrine concentrations were determined using a solid-phase extraction/liquid chromatographic assay with ultraviolet detection. 23 The coefficient of variation was less than 6% at all quality control levels. The lower limit of quantification was 25 ng/mL and the limit of detection was 15 ng/mL. 23 Analytical and pharmacokinetic methods Non-compartmental analysis was performed using WinNonlin Professional™ software, version 5.2 (Pharsight Corp., Mountain View, CA, USA). Pharmacokinetic parameters included the observed maximum concentration (Cmax), time to Cmax (Tmax), area under the plasma concentration-time curve from zero to the last observation (AUC0-last), area under the plasma concentration-time curve from zero extrapolated to infinity (AUC0-∞), elimination clearance (CL/F), apparent volume of distribution (V/F), elimination half-life (t1/2) and absorption lag time (Tlag). The trapezoidal rule (linear-up/log-down) was used to estimate AUC. All parameters were calculated using actual blood sampling times. Drug concentrations below the lower limit of quantification of the bioanalytical assays were treated as missing data. The median values and ranges of the pharmacokinetic parameters were recorded for the two groups. Statistical analysis Data were analysed using Stata® version 10.0 (StataCorp, College Station, TX, USA). Baseline characteristics were summarized as mean with 95% CI and compared using the independent t-test. The Wilcoxon rank-sum test was used to compare pharmacokinetic parameters between the two groups. A P value of <0.05 was considered statistically significant. Results A total of 36 participants were enrolled, of whom 29 completed the 72 h sampling. Of the seven participants who did not complete sampling, two dropped out before sampling started, one participant had only the first three samples drawn due to difficulty with cannulation and four patients had positive blood smears for malaria on the sampling visit; the latter were given the standard six-dose regimen of artemether/lumefantrine and excluded from further study. Analyses were performed on data from the 29 participants who completed sampling: 16 [9 (56%) female] in the artemether/lumefantrine plus lopinavir/ritonavir arm, and 13 [9 (69%) female] in the artemether/lumefantrine arm. All participants taking lopinavir/ritonavir-based ART had viral load below the level of detection (<400 copies/mL). Mean (95% CI) of the log of viral load was 4.5 (4.0-5.0) copies/mL among the ART-naive patients. Participants in the two study arms were comparable for all other baseline characteristics measured except haemoglobin, which was significantly higher among patients taking lopinavir/ritonavir-based ART (Table 1). All participants tolerated study drugs very well, with no adverse events reported. ECG parameters for patients in both study arms remained well within normal limits throughout the 72 h follow-up period. These data have been published elsewhere. 24 
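The non-compartmental AUC estimation described in the pharmacokinetic methods above used the linear-up/log-down trapezoidal rule. The sketch below shows one common formulation of that rule in Python; it is an illustration only (the sampling times match the study schedule, but the concentration values are invented), and WinNonlin's exact handling of missing or below-quantification values may differ.

```python
# Linear-up/log-down trapezoidal AUC: a linear trapezoid is used when the
# concentration rises (or is unchanged), a logarithmic trapezoid when it falls.
# Illustrative only; the concentration values below are invented.
import math

def auc_linear_up_log_down(times, concs):
    auc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:])):
        dt = t2 - t1
        if c2 >= c1 or c1 <= 0 or c2 <= 0:
            auc += dt * (c1 + c2) / 2.0                  # linear trapezoid
        else:
            auc += dt * (c1 - c2) / math.log(c1 / c2)    # log trapezoid on the decline
    return auc

# Hypothetical lumefantrine-like profile (ng/mL) over the 0-72 h sampling times
t = [0, 1, 2, 4, 6, 8, 12, 24, 48, 72]
c = [0, 800, 2500, 4000, 3500, 3000, 2200, 1200, 500, 200]
print(f"AUC(0-last) ≈ {auc_linear_up_log_down(t, c):.0f} ng·h/mL")
```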
Discussion We investigated the pharmacokinetics of artemether, dihydroartemisinin and lumefantrine after administration of a single dose of 80/480 mg of artemether/lumefantrine to HIV-infected adults, taken with and without lopinavir/ritonavir-based ART. Co-administration of artemether/lumefantrine with lopinavir/ritonavir significantly increased artemether clearance with a consequently significant reduction in artemether exposure. Dihydroartemisinin pharmacokinetic parameters were not affected by lopinavir/ritonavir. Lumefantrine clearance significantly decreased with a consequently significant increase in exposure. Our data for the direction of the interaction between lopinavir/ritonavir and artemether/lumefantrine show a similar trend to data from a previous study by German et al.; 20 however, differences in the magnitude of the interaction as well as the effect on dihydroartemisinin were evident between the two studies. The previous study demonstrated a trend towards decreased artemether exposure, significant reduction in dihydroartemisinin exposure and significant increase in lumefantrine exposure following standard six-dose artemether/lumefantrine administration with lopinavir/ritonavir to 13 healthy HIV-seronegative adults. 20 The differences in the results from the two studies possibly arise from differences in the study designs and population. German et al. 20 conducted a sequential cross-over study in which artemether/lumefantrine parameters were compared within the same individuals with and without lopinavir/ritonavir, while we employed a parallel study design with comparison of parameters from different individuals with and without lopinavir/ritonavir. The parallel study design was adequate for the objectives of our study, but has a limitation due to the high inter-individual variability of artemether and dihydroartemisinin. Comparison of pharmacokinetic exposures in the same individuals using the sequential design was not feasible given that lopinavir/ritonavir is used for second-line HIV treatment in our study setting. In addition, our population was composed of HIV-infected adults of African origin, unlike the HIV-uninfected healthy volunteers of primarily white origin in the study by German et al. 20 Genetic variation may cause inter-individual pharmacokinetic variability due to polymorphisms of genes encoding drug-metabolizing enzymes. 25-27 In addition, drug pharmacokinetics may differ in healthy volunteers compared with patients with disease. Further differences in the magnitude of the effects of interaction between our data and the German et al. 20 data could have arisen from the six-dose compared with the single-dose regimen of artemether/lumefantrine. We administered a single dose of artemether/lumefantrine to avoid any unknown adverse effects of co-administration of artemether/lumefantrine with lopinavir/ritonavir in HIV-infected participants. German et al. 20 administered the standard six-dose artemether/lumefantrine regimen to healthy volunteers. Artemether undergoes auto-induction of its metabolism, and artemether/dihydroartemisinin ratios after 3 days of treatment with the standard dose are lower than those seen after a single dose. 
28 In both studies lumefantrine exposure was elevated during co-administration with lopinavir/ritonavir; however, despite the elevated lumefantrine exposure, participants tolerated the study drugs very well, with all reported adverse events consistent with what had previously been reported for artemether/lumefantrine and lopinavir/ritonavir. Our data did not demonstrate evidence of cardiac conduction abnormalities. However, caution and safety monitoring of HIV/malaria-coinfected patients receiving artemether/lumefantrine with lopinavir/ritonavir is advised. It will be important to determine if these effects are additive in the standard six-dose artemether/lumefantrine regimen in HIV/malaria-coinfected patients receiving lopinavir/ritonavir. Ritonavir-boosted lopinavir influences the activity of several CYP enzymes and drug transporters such as the efflux transporter P-glycoprotein. 29 Both lopinavir and ritonavir inhibit intestinal and hepatic CYP3A4 and P-glycoprotein. 29,30 Inhibition of CYP3A4 or P-glycoprotein expression decreases biotransformation, resulting in an increase in bioavailability of co-administered substrates. 9,31 -33 Previous data demonstrated increased artemether, dihydroartemisinin and lumefantrine exposure in the presence of the CYP3A4 inhibitors ketoconazole and grapefruit juice. 34,35 Inhibitions of CYP3A4 and P-glycoprotein are likely explanations for the increased lumefantrine exposure in our study. The reduction in artemether exposure was unexpected, since CYP3A4 is suggested to be the predominant CYP enzyme in the metabolism of artemether. 10 Although artemether is predominantly metabolized via CYP3A4/5, other CYP enzymes (CYP2B6, CYP2C9, CYP2C19 and possibly CYP2A6) are involved. 10,13,14 Lopinavir/ritonavir was shown to induce CYP1A2, CYP2B6, CYP2C9, and CYP2C19. 30,36 The observed increased clearance and decreased artemether exposure is likely due to induction of these CYP enzymes by lopinavir/ritonavir. Dihydroartemisinin is converted into inactive metabolites by UGT1A1, UGT1A8/9 and UGT2B7. 10,13,14 Induction and inhibition of UGTs by xenobiotics have been described previously, and lopinavir/ritonavir was shown to inhibit UGTs 1A1, 1A3, 1A4, 1A6, 1A9 and 2B7. 37 -39 However, we found no statistical difference in the pharmacokinetic parameters of dihydroartemisinin after lopinavir/ritonavir co-administration compared with administration alone. The reason for this is unclear, but might be due to the small numbers and large inter-individual variability. Artemether and dihydroartemisinin have very short half-lives and rapidly clear parasites from circulation. 11 Both are very potent antimalarial agents, although dihydroartemisinin is more potent. 16 Lumefantrine has a much longer half-life and mainly clears residual parasites, preventing recrudescence. Higher artemether and dihydroartemisinin exposure decreases parasite clearance time, 13 but the major determinant of radical cure is lumefantrine exposure. 40 Given that HIV/malariacoinfected patients present with higher parasite counts, 3,41 which is an independent predictor of poor treatment response, 42 reduction in artemether exposure may predispose patients to develop severe malaria due to slower parasite clearance. The clinical relevance of the present findings should be interpreted with caution given that we administered a single artemether/ lumefantrine dose while a six-dose artemether/lumefantrine regimen is administered for malaria treatment. 
The reduction in artemether exposure by lopinavir/ritonavir after the single artemether/lumefantrine dose may be offset by the increase in lumefantrine exposure. Previous data revealed that lumefantrine exposure is the key determinant for malaria cure, 43 therefore the increase in lumefantrine exposure during lopinavir/ritonavir co-administration may be beneficial for malaria cure. However, rapid clearance of artemether and reduced clearance of lumefantrine may create longer periods of exposure to lumefantrine monotherapy with the risk of development of resistance. Conclusions Co-administration of a single dose of artemether/lumefantrine with lopinavir/ritonavir significantly reduced artemether exposure, with a significant increase in lumefantrine exposure. Population pharmacokinetic and pharmacodynamic trials will be highly valuable in evaluating the clinical significance of this interaction and determining whether dosage modifications are indicated.
2018-04-03T02:21:08.304Z
2012-02-08T00:00:00.000
{ "year": 2012, "sha1": "60b9a174a996575d5dee1429df6621b36d369bc3", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/jac/article-pdf/67/5/1217/3020206/dkr596.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9acf0f14da90cbe40409fee0e3a115e2d829a013", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
69878585
pes2o/s2orc
v3-fos-license
Simulative Study of Different Control Concepts of Cooling System for Machine Tools Power losses in machine tools, e.g. during the standby, idle-, and manufacturing process, are converted into heat energy. This causes the machine frame and other machine components to heat up. As a result, the Tool Centre Point (TCP) of the machine tools is moved. The accuracy of the machine is thus reduced during manufacturing. The current cooling system design of machine tools is based on a centrally fixed pump supply that provides a constant cooling volume flow for cooling all the machine tool components. This does not correspond to the individual temperature development of the components, after all, the high temperature fluctuation arises and causes the thermoelastic deformation of machine tools. The main objective of this paper is to highlight the deficit of the current concept of cooling systems and to present a simulative study on the different controls concepts of cooling systems for machine tools. The results depict that the new concepts under consideration have a large potential for better thermal behaviour and lower hydraulic performance compared to the current cooling system design. The simulation results show a stability of the components' temperature profile as well as a decreased energy consumption of the cooling system. Introduction Over the last few years, energy-saving has become a more and more important topic and the public awareness of environmental issues has increased significantly. Using environmentally-friendly and energy-efficient products, much energy and many raw materials could be saved. In recent times, the development in the industrial sector has been focused strongly on producing in a more energy-efficient way. In the field of manufacturing technology, machine tools are an essential part of a company's machine equipment. In addition to productivity, the demands on component accuracy and energy efficiency in production processes are also increasing [1]. During the production process, a part of the electrical energy is converted into heat energy and thermo-elastic deformations occur. These deformations affect the Tool Centre Point (TCP) position of the machine tools and lead to reduced accuracy. The heated components such as the rotary table, tool holders, linear guides etc. must be cooled. Therefore, fluidic systems such as cooling systems are installed to reduce the temperature fluctuations of the components. In order to reduce the thermo-elastic deformations that occur and to improve the manufacturing quality, it is necessary to minimize the heat input. Previous research projects such as [2][3][4] mainly focused on the energy requirements of the machine tool and its main drives, thus reducing the energy consumption of auxiliary units by developing more efficient components and control strategies. The thermal behaviour, effectiveness, and design of the cooling system of a machine tool has not yet been described in detail. A detailed analysis of the existing cooling system structures and their effectiveness are therefore carried out in the subproject (A04) of the research project SFB/TR96, Thermo-energy design of machine tools. At the conclusion, the aim is to achieve an even temperature distribution and an efficiency increase of the machine tools. 
The main target of this work is to show the deficit of the current cooling system structure by means of an experimental investigation carried out on a demonstration machine, and further, to present a simulative study of different control concepts of cooling systems for machine tools. The results illustrate that the new concepts being considered have great potential for improved thermal behaviour and lower required hydraulic power compared to the current cooling system design. The simulation results show a stable temperature profile of the components and a lower energy consumption of the cooling system.

The current cooling system structure and its deficit

The selected demonstration machine for the experimental investigation is a DMU80 eVo linear, used mainly for high speed cutting (HSC). For more information about the demonstration machine see [5]. The cooling system of the DMU80 cools 13 components simultaneously, as shown in Fig. 1. The main function of a cooling system is to provide the cooling medium to the components or spots of the machine in order to dissipate the heat energy and to avoid high temperature fluctuations within the machine structure. This helps to reduce the thermo-elastic deformation and ultimately increases the accuracy of the production process. Here, a central, fixed pump (1) provides the cooling medium, 45 l/min at 4.5 bar, to the motor spindle (2), all the axis drives (3, 5, 7, 8), the housing of the B and C axes (4), as well as the rails of the X, Y, and Z axes (6, 9, 10, 11, 12). The determined hydraulic power of the central pump is about 340 W. A mixture of water and 30 % Glysantin® (G48®) is used as the cooling medium. Moreover, the cooling unit (13) of the DMU80's cooling system is not integrated into the return flow; it is mounted directly to the tank as a bypass flow. Furthermore, a three-way valve (14) is placed on the return flow side. This valve is used as a diverting valve: with a defined setting, a part of the heated backflow is introduced directly to the inlet side of the pump and the remaining fluid flows back to the tank [6]. The controller of the three-way valve adjusts the flow to the tank or to the inlet side of the pump so that the temperature on the pump inlet side always stays at approximately 25 °C, with only a minimal deviation [7].

To derive a statement about the thermal behaviour of the components to be cooled within the cooling system, as well as to investigate the effectivity of the current cooling system, machine measurements of the cooling system of the DMU80 with several sensors are carried out, as shown in Fig. 1. The sensors measure the temperature, pressure, and flow rate development. The measured process considered for the experimental investigation is divided into four sub-processes: the warm-up, idle, set-up, and manufacturing processes. The results of the idle and manufacturing processes are shown in this paper. The idle process is based on ISO 203-3 [8], where typical load cycles are considered, and is carried out over a period of 830 s. The manufacturing process is an exemplary process and is carried out over a period of 880 s. Detailed information about the processes can be found in [9]. Fig. 2 and Fig. 3 show the cooling medium temperature development, exemplified by four components of the DMU80, during the idle and manufacturing processes. The effect of the ineffective cooling of the components, i.e. a supply that is not demand-oriented, can be seen in both the idle and the manufacturing process.
Over the entire idle and manufacturing processes, the cooling medium's inlet temperatures at the drive of the B axis and the spindle nut of the Z axis are higher than the outlet temperatures. The cooling medium is thus cooled while these components are warmed up. Other components, such as the motor spindle or the secondary part of the X axis, are cooled during the process.

System modelling and validation of current cooling design

By using suitable simulation models, technical processes at the development stage can easily be visualized, examined, and optimized. The aim of modelling is to represent the technical process with sufficient precision and to generate well-founded results on the real process behaviour through subsequent calculations. By evaluating different process variants and parameters, it is possible to obtain extensive knowledge about the behaviour of a technical system without cost-intensive tests. The fundamental principle of the system modelling of cooling systems is based on thermo-hydraulic node-element modelling. With its help, different physical domains can be represented in the simulation models. A brief overview of the physical domains and their relevant potential and flow quantities can be found in Table 1, as summarized in [10]. The nodes in the modelling represent the flow variable, which is also called the capacitance. The elements between the nodes represent the potential variables, which are considered as resistances. Fig. 4 illustrates the basic description of the thermo-hydraulic node-element model together with its governing relations: the thermal resistance, the throttle equation, the pressurizing equation, and the temperature equation.

The calculation of the thermal and hydraulic resistances R_th and R_hy between two nodes is based on the corresponding resistance equations. For the hydraulic and the thermodynamic domain, the laws of electrical engineering, such as Kirchhoff's circuit laws for series and parallel connections of resistances, can be used, in particular Kirchhoff's node rule. Furthermore, the heat transport through the cooling medium in the hydraulic pipe (forced convection), the heat transfer through heat conduction in the pipe, and the heat transfer at the outer surface of the pipe (free convection) are taken into account, as exemplified in Fig. 5. Detailed information about the equations used for these three heat transfer mechanisms in the simulation model can be found in [11].

For the current cooling system of the DMU80, a simulation model is developed based on the modelling method and the machine documents. The simulation model of the cooling system mainly consists of a pump, flow valves, hydraulic pipes, a cooling unit, and the components as heat sources. Each hydraulic connection is modelled by a hydraulic volume and a hydraulic resistance. As mentioned, the heat transfer between the pipes and the environment is taken into consideration. Fig. 6 shows the model structure of the cooling system implemented in the simulation. Table 2 shows the most important model parameters for the simulation model under consideration. With the aid of the measured data (temperature, pressure drops, and volume flow of each component), the heat flow of the respective components is calculated as in (5). For the modelling of the cooling system of the demonstration machine DMU80, a domain-crossing system simulation with a simulation program (SimulationX) is implemented.
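To make the node-element idea concrete, the following is a minimal sketch of a two-node thermal network with a coolant coupling, integrated with an explicit Euler step. All capacitances, resistances, coolant properties, and the heat input are illustrative assumptions and do not correspond to the DMU80 model parameters of Table 2.

```python
import numpy as np

# Minimal lumped node-element sketch: each node i has a thermal capacitance C[i] (J/K);
# nodes are coupled by thermal resistances R[(i, j)] (K/W); node 0 additionally receives
# a heat input Q_in (W) and is cooled by a coolant stream at a fixed inlet temperature.
C = np.array([8000.0, 15000.0])                 # illustrative: component node, machine-frame node
R = {(0, 1): 0.05}                              # conduction resistance component <-> frame
T = np.array([22.0, 22.0])                      # initial node temperatures in degC
T_coolant_in, m_dot, cp = 25.0, 0.05, 3600.0    # coolant inlet temp, mass flow, specific heat (assumed)
UA = 150.0                                      # convective coupling component <-> coolant (W/K), assumed
Q_in, dt = 300.0, 1.0                           # heat input to node 0 (W), time step (s)

for step in range(3600):                        # simulate one hour
    Q = np.zeros_like(T)
    Q[0] += Q_in                                # heat source at the component node
    for (i, j), r in R.items():                 # Kirchhoff-style balance over the resistive elements
        q = (T[i] - T[j]) / r
        Q[i] -= q
        Q[j] += q
    q_cool = UA * (T[0] - T_coolant_in)         # forced-convection pickup by the coolant
    Q[0] -= q_cool
    T += dt * Q / C                             # explicit Euler update of the node temperatures

T_coolant_out = T_coolant_in + q_cool / (m_dot * cp)
print(f"component {T[0]:.2f} degC, frame {T[1]:.2f} degC, coolant outlet {T_coolant_out:.2f} degC")
```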
Fig. 7, Fig. 8, and Table 3 depict a direct comparison of the simulation and the measurement for the idle and the manufacturing process, respectively. The simulation model developed shows a high accuracy of the thermal (Fig. 7, Fig. 8) and the hydraulic quantities (Table 3) of the components under consideration. In the evaluation of the hydraulic quantities it is easy to see that the cooling system is characterized as a stationary system. In contrast, the temperature profile of the components shows fluctuations over the entire process time. The simulation model is thereby validated and can thus be used for the improvement of the current structure of the cooling system as well as for the development of process- and demand-oriented control strategies.

Simulative study of different cooling system control concepts

With regard to the deficits of the current cooling system structure shown above for the idle and the manufacturing process, the goal is to develop new structures for the cooling system in order to optimize its thermal behaviour and its effectivity according to the defined goal of a uniform temperature distribution at minimal energy consumption. Fig. 9 shows three new structures of a cooling system that can be applied for a demand-oriented supply. Controlled cooling systems based on the temperature development of the components enable an appropriate flow rate supply with individually targeted temperature control at low energy consumption. The effectivity of the new structures will be evaluated first with regard to: a constant temperature behaviour at the components, a minimal pressure loss, and a minimal hydraulic power of the pumps.

Figure 8. New structures for a cooling system.

Structure 1

The first examined cooling system structure, structure 1, is a central, variable speed drive unit with proportional valves. As shown in Fig. 10, the individual components are supplied via proportional valves from the central variable speed pump, while the cooling unit stays in the two-point temperature control as bypass cooling and refers to the mixing temperature of the tank. The flow rate through a proportional valve can be calculated using equation (6), and the change of the supply pressure is described by equation (7). With respect to the control strategy in Fig. 10, it can be seen that the cooling structure under consideration has three control variables (the components' temperatures) and four control elements (three proportional valves and a variable speed pump). This makes the system, in its actual design concept, over-determined. To solve this problem, three approaches can be taken into account [7]: the definition of a constraint, the removal of a control element from the active control loop, and the definition of an additional control variable. Equations (6) and (7) show the dependence of the individual flow rates on the components. Due to these correlations of the individual cooling circuits, the system considered is a so-called Multiple-Input-Multiple-Output (MIMO) system and has cross-couplings. This means that one control element simultaneously influences several control variables.

Structure 2

The second optimization structure under consideration, shown in Fig. 11, is a decentralized, variable speed drive unit without flow control valves. In this structure, the components are also supplied individually, but with the help of variable speed pumps which are connected to a common tank and a cooling unit. This design does not require a proportional valve to distribute the cooling medium to the components.
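The proportional-valve relation used for structure 1 is referenced above as equations (6) and (7), which are not reproduced in this excerpt. The following sketch uses a generic valve sizing relation instead; the Kv value, the opening, and the relative density of the water-glycol mixture are illustrative assumptions, not the paper's parameters.

```python
import math

def valve_flow_lpm(opening: float, kv_m3h: float, dp_bar: float) -> float:
    """Generic proportional-valve relation (assumed): Q = u * Kv * sqrt(dp / rho_rel),
    with opening u in [0, 1], Kv in m^3/h at 1 bar, and the relative density of the coolant."""
    rho_rel = 1.05  # water-glycol mixture, assumed
    u = max(0.0, min(1.0, opening))
    q_m3h = u * kv_m3h * math.sqrt(max(dp_bar, 0.0) / rho_rel)
    return q_m3h * 1000.0 / 60.0  # convert m^3/h to l/min

# Example: a half-open valve with an assumed Kv of 1.2 m^3/h at 2 bar pressure drop
print(f"{valve_flow_lpm(0.5, 1.2, 2.0):.1f} l/min")
```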
Exactly as in structure 1, the system control of the cooling system compares the actual and the set temperatures of the individual components and adjusts the variable speed drive units on this basis. Thus, each pump supplies a different, demand-oriented cooling volume flow. If the temperature development of a component does not exceed a predefined threshold, the pump remains inactive. Due to the individual supply of each component with the decentralized pumps, the circuits operate independently. Therefore, this system consists of several independent Single-Input-Single-Output (SISO) control systems. Consequently, each temperature can be regulated by a PI controller. The I-term of the controller eliminates any remaining control deviation of the P-term. The system boundary conditions of structure 1 also apply to structure 2 with regard to the components' set temperatures, heat input, system inlet temperature, static operating points of the system, and the cooling unit.

Figure 11. Control strategy of structure 3.

Structure 3

The system design of structure 3 is a decentralized system. Each circuit in the system has a variable speed drive unit, a tank, and a cooling unit. Exactly as in structure 2, this structure does not need any flow control valves. The system control of the cooling system compares the actual and the set temperature of the component and adjusts the variable speed drive units on this basis. The main idea behind investigating this structure, shown in Fig. 12, consists of completely separate cycles. This makes it possible to reduce the line lengths, especially for large processing machines, by placing the respective cooling system as close as possible to the component to be cooled. Another advantage of cooling system structure 3 is the possibility of cooling the different components with different cooling media. This is not possible with cooling system structures 1 and 2 because of their common tank.

Results and evaluation

The evaluation of the results should begin with the main task, the control of the temperatures of the cooling system. As a result of the simulation models, for all three investigated system structures, the actual temperatures are kept within the limits of the set temperatures. The calculated temperature developments of the electrical cabinet, rotary table, and motor spindle are shown in Fig. 13 as a function of the heat input. The set temperatures are indicated by a dashed light blue characteristic curve, the calculated actual temperatures by a continuous orange characteristic curve, and the temperatures of the current system structure by a purple characteristic curve. A comparison of the temperature curves of the three controlled system structures and the current system highlights two differences. Firstly, by using a feedback control system, the achieved temperatures are constant in all cooling circuits, in contrast to the current cooling system. This can be traced back to the adaptation of the cooling volume flows to the respective cooling requirement by the control unit. The second difference is the ability to adjust the temperature level of the respective component by setting the set temperatures in the control circuit. This is not possible with the current cooling structure: as a result of this system behaviour, the actual temperatures increase with increasing heat input in the current cooling system.
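Referring back to the PI-based pump control described for structures 2 and 3, the following is a minimal sketch of such a controller driving a variable speed pump. The gains, limits, and set temperature are illustrative values, not those used in the paper's simulation.

```python
class PIController:
    """Discrete PI controller: the P-term reacts to the temperature error, the I-term
    removes the remaining steady-state deviation of the P-term."""
    def __init__(self, kp, ki, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.u_min, self.u_max = kp, ki, u_min, u_max
        self.integral = 0.0

    def update(self, t_set, t_actual, dt):
        error = t_actual - t_set               # positive error -> component too warm -> more flow
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral
        return min(self.u_max, max(self.u_min, u))   # normalized pump speed command

# Illustrative gains; if the component stays below its threshold the command saturates at 0
# and the decentralized pump remains inactive, as described above.
ctrl = PIController(kp=0.2, ki=0.01)
speed = ctrl.update(t_set=25.0, t_actual=27.5, dt=1.0)
print(f"pump speed command: {speed:.2f}")
```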
In addition to the circuits of the electrical cabinet and the rotary table, the dependence of the actual temperature on the thermal load (heat input) has a particularly significant effect on the circuit of the motor spindle. Here, the component temperature fluctuates between 27 and 29 °C. In contrast, constant component temperatures can be achieved in all circuits with the three controlled cooling system structures. For all controlled cooling system structures, there is a slight deviation of 0.2 °C between the actual and the set temperatures. This control deviation is due to the required control difference in the control loop. In the diagram of the volume flows, bottom right in Fig. 13, the reduced cooling volume flows of the controlled cooling system structures can be noticed in comparison to the current system structure. In the cooling circuits of the rotary table and the electrical cabinet, the volume flows are reduced from 12 l/min in the current system to 2 to 2.5 l/min and 2.5 to 5 l/min, respectively. This corresponds to an average reduction of about 80 % in the circuit of the rotary table and about 70 % in the circuit of the electrical cabinet. The cooling volume flow rate of the current system is 12.5 l/min in the main spindle circuit, and between 8 and 16 l/min in the new cooling system structures. It can be ascertained that volume flow control based on the temperature development is a means of designing the system in a more energy-efficient way based on each individual component's demand. In principle, it is not essential for the temperature behaviour whether the flow rate is set by a controlled pump in each circuit, as in structures 2 and 3, or by a controlled pump and proportional valves, as in structure 1. For simplification, the diagrams of the three system structures have therefore been summarized in Fig. 13.

(Fig. 13: temperature and volume flow profiles of the electrical cabinet (EC), rotary table, and motor spindle for the current system and for system structures 1, 2, and 3.)

The second objective of the study of different cooling system structures is to compare the required hydraulic power and the respective energy requirements of these systems with the current cooling system structure. The total hydraulic power of the pump in the current cooling system structure of two demonstration machines (DBF630 and DMU80) and of structures 1, 2, and 3 for different heat inputs is shown in Fig. 14. With the variable speed central displacement pump (structure 1), the total hydraulic power is approximately 160 W at maximal heat input. Similarly, the total hydraulic power of the variable speed displacement pumps in structure 2 and structure 3 is about 120 W and 110 W at maximal heat input, respectively. Compared to this, the hydraulic power of the fixed displacement pump (current structure) of the DBF630 amounts to 370 W (40 l/min at 5.5 bar) and that of the DMU80 to 340 W (45 l/min at 4.5 bar). Significant energy savings of 56.7 % and 53 % are thus possible with structure 1 in contrast to the current structures of the DBF630 and DMU80, respectively. Similarly, energy savings of 67 % and 64.7 % for structure 2 and of 70.5 % and 67.6 % for structure 3 can be shown in contrast to the current structures of the DBF630 (370 W) and DMU80 (340 W). The new cooling system structures under consideration are therefore significantly more energy-efficient than the current cooling system structures of the DBF630 and DMU80 with their continuous cooling volume flow.
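The quoted pump powers and savings can be cross-checked from the stated operating points with P_hyd = Q · Δp; the short script below roughly reproduces the 340 W and 370 W figures and the structure-1 savings of about 53 % and 57 %, with small differences coming from rounding of the quoted powers.

```python
def hydraulic_power_w(flow_lpm: float, pressure_bar: float) -> float:
    """P_hyd = Q * dp, with Q converted to m^3/s and dp to Pa."""
    return (flow_lpm / 1000.0 / 60.0) * (pressure_bar * 1e5)

p_dmu80 = hydraulic_power_w(45.0, 4.5)    # ~337 W, quoted as about 340 W
p_dbf630 = hydraulic_power_w(40.0, 5.5)   # ~367 W, quoted as about 370 W
p_structure1 = 160.0                      # simulated total hydraulic power at maximal heat input

for name, p_ref in [("DMU80", p_dmu80), ("DBF630", p_dbf630)]:
    saving = 100.0 * (p_ref - p_structure1) / p_ref
    print(f"{name}: {p_ref:.0f} W -> structure 1 saves about {saving:.1f} %")
```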
Summary and outlook

The experimental investigation of the cooling system of a demonstration machine has been instrumental in determining the effectiveness as well as the energy consumption of the cooling system [9,12]. It could be shown that the machine components are not cooled in a targeted manner with the current cooling structure, and that the cooling is insufficiently adjusted to the components' demands and the process requirements. Therefore, the investigation and evaluation of new cooling concepts, both simulatively (network models) and experimentally (test rig), is of great importance. The new cooling structures examined in this paper, a central, variable speed drive unit with proportional valves (structure 1), decentralized, variable speed drive units without flow control valves (structure 2), and decentralized, variable speed drive units with individual tanks and cooling units (structure 3), show high accuracy with respect to the temperature control of the components compared to the current cooling structure. Apart from this, the hydraulic pump power of the new structures is about 53 % to 70.5 % lower than that of the pumps in the current cooling structures. Further research in the project will firstly address an energetic analysis of the overall system for each structure considered, including the energy consumption of the electrical motor, the frequency converter, etc. Secondly, the new structures under consideration shall demonstrate their benefits in practice and not only in simulation. To this end, a test rig is being developed which will allow an experimentally sound statement about the structures regarding their effectiveness and efficiency.
2019-02-19T14:07:41.064Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "74201ee5c3072e47ca1c7da29c947ac99ceaebf7", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/79/matecconf_icmsc2018_08002.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d424c586b527e082a92e4c31453c6294c80c312c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237353624
pes2o/s2orc
v3-fos-license
Influence-Based Reinforcement Learning for Intrinsically-Motivated Agents

Discovering successful coordinated behaviors is a central challenge in Multi-Agent Reinforcement Learning (MARL) since it requires exploring a joint action space that grows exponentially with the number of agents. In this paper, we propose a mechanism for achieving sufficient exploration and coordination in a team of agents. Specifically, agents are rewarded for contributing to a more diversified team behavior by employing proper intrinsic motivation functions. To learn meaningful coordination protocols, we structure agents' interactions by introducing a novel framework, where at each timestep, an agent simulates counterfactual rollouts of its policy and, through a sequence of computations, assesses the gap between other agents' current behaviors and their targets. Actions that minimize the gap are considered highly influential and are rewarded. We evaluate our approach on a set of challenging tasks with sparse rewards and partial observability that require learning complex cooperative strategies under a proper exploration scheme, such as the StarCraft Multi-Agent Challenge. Our methods show significantly improved performance over different baselines across all tasks.

INTRODUCTION

Deep Reinforcement Learning (DRL) has been applied to solve various challenging problems, where an agent typically learns to maximize the expected sum of extrinsic rewards gathered as a result of its actions performed in the environment (Sutton et al., 1998). Multi-Agent Reinforcement Learning (MARL) refers to the task of training a set of agents to maximize collective and/or individual rewards while existing in the same environment and interacting with each other. Recent works have shown that agents with coordinated behaviors learn remarkably faster (Roy et al., 2019), since coordination helps the discovery of effective policies in cooperative tasks. Nevertheless, achieving coordination among agents still remains a central challenge in MARL (Jaques et al., 2019). Prominent works often resort to a popular learning paradigm called Centralized Training with Decentralized Execution (CTDE) (Lowe et al., 2017), where each agent is evaluated using a centralized critic and has access to extra information about the policies of other learning agents during training. At the time of execution, policies' actions are restricted to local information only (i.e. their own observations). To that end, we propose a novel approach that aims at promoting coordination in cooperative tasks by augmenting the main return-maximization objective of CTDE MARL with an additional multi-agent objective that acts as a policy regularizer; we refer to the latter objective as the influence function. To build intuition, a chosen agent, which we call the "influencer", assesses the progress that other agents are making given its current policy and consequently learns behaviors that will result in an improved performance of its teammates. Concretely, we formulate the influence of an influencer π as an estimation of the dissimilarity between other agents' behaviors and their targets given the current behavior of π. The influencer is encouraged to learn behaviors that are expected to minimize that dissimilarity. We also propose two approaches to estimate the influence and empirically show that they yield unbiased estimates of the true value.
To that end, agents acting upon the proposed coordination paradigm learn to efficiently exploit the observed joint action space using the available information. However, since the joint space grows exponentially with the number of agents, it is highly unlikely that agents will have access to sufficient information to learn optimal behaviors for the task at hand; this problem arises in many scenarios such as sparse-reward environments, and thus a proper exploration scheme is often required. However, many existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques (Liu et al., 2021; Rashid et al., 2018; Yang et al., 2018). Moreover, independent exploration has proved to be inefficient in cooperative settings (Roy et al., 2019). Recently, this challenge was addressed through Intrinsic Motivation (IM) (Jaques et al., 2019; Du et al., 2019; Zhou et al., 2020b). Many approaches employ IM to encourage exploration of the state space (Han et al., 2020; Burda et al., 2019) or the state-action space (Fayad & Ibrahim, 2021) by identifying novel configurations and rewarding an agent for visiting them. We provide an extension of these ideas to multi-agent settings and further build a connection between reward shaping and coordinated behavior learning, where we choose an agent to act as an influencer (i.e. to regularize its standard objective using the influence function) while the other agents learn to maximize the expected sum of both extrinsic and intrinsic rewards. To sum up, our main contributions are threefold: 1) developing an influence function to promote learning coordinated behaviors and improve team performance; 2) extending exploration via random network distillation to multi-agent settings by crafting a "novelty" function that rewards under-explored behaviors; 3) formulating a novel intrinsic incentive to promote learning diverse team behaviors to help uncover complex behaviors in a collaborative way. We demonstrate the effectiveness of our methods on a comprehensive set of challenging tasks which include, but are not limited to, the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) and the Multi-Agent Particle Environments (MAPE) (Mordatch & Abbeel, 2018; Lowe et al., 2017). Empirical results show a significant improvement over a wide variety of state-of-the-art MARL approaches. We also conduct insightful ablation studies to understand the relative importance of each component of the approach individually.

BACKGROUND

2.1 MARKOV GAMES

Markov games, also called stochastic games (Littman, 1994), are the foundation for much of the research in multi-agent reinforcement learning. Markov games are a superset of Markov decision processes (MDPs) and matrix games, including both multiple agents and multiple states. Formally, a Markov game consists of a tuple $\langle N, S, A, T, R \rangle$, where $N$ is a finite set of agents with $|N| = n \geq 2$; $S$ is a set of states, where the initial states are determined by a distribution $\rho : S \to [0, 1]$; $A = \prod_{k=1}^{n} A_k$ is the set of joint actions; and $T : S \times A \times S \to [0, 1]$ is the transition probability function. In a Markov game, each agent independently chooses actions and receives rewards. Conventionally, an agent $k$ aims to maximize its own total expected return $R^{(k)} = \sum_{t=0}^{T} \gamma^{t} r^{(k)}_{t}$, where $\gamma$ is a discount factor and $T$ is the time horizon.

MULTI-AGENT DEEP DETERMINISTIC POLICY GRADIENT

MADDPG (Lowe et al., 2017) is a multi-agent extension of the DDPG algorithm (Lillicrap et al., 2015).
It adopts the CTDE paradigm, where each agent $i$ possesses its own deterministic policy $\mu^{(i)}$ for action selection and critic $Q^{(i)}$ for state-action value estimation, respectively parameterized by $\theta^{(i)}$ and $\phi^{(i)}$. All parametric models are trained off-policy from previous transitions $\zeta_t := (o_t, a_t, r_t, o_{t+1})$ of all $N$ agents, uniformly sampled from a replay buffer $\mathcal{D}$. Each centralized critic is trained to estimate the expected return for a particular agent $i$ by minimizing a Q-learning loss, and each policy is updated to maximize the expected discounted return of the corresponding agent $i$. Notice that while optimizing an agent's policy, all agents' observation-action pairs are taken into consideration. By that, the value functions of all agents are trained in a centralized, stationary environment, despite happening in a multi-agent setting. Moreover, this procedure allows for the learning of coordinated strategies, yet it needs to be augmented with efficient exploration methods that reward novel action configurations which may lead to the discovery of higher-return behaviors.

BASIC INFLUENCE

Intuitively, one can define coordination in a team of agents as the behavior of each individual agent being informed by other agents. Furthermore, agents' behaviors can be inter-affected either directly, through communication for example, or indirectly, through task-specific shared goals and/or rewards or the dynamics of the environment. We hypothesize that when agents learn in a cooperative setting, they tend to affect each other's exploitation processes; we confirm this hypothesis throughout the paper and build on it to formalize a general method to foster influential interactions and learn meaningful coordination protocols. Specifically, we introduce a novel framework to assess the influence that an agent π, at timestep $t$, has on a set of agents upon taking an action $a^{(\pi)}_t$ in a global state $s_t$. More concretely, consider $n$ agents, namely $\pi, \mu_1, \mu_2, \ldots, \mu_{n-1}$. Define $\boldsymbol{\mu} = [\pi, \mu_1, \ldots, \mu_{n-1}]^{T}$ as the joint policy. We use this notation throughout the rest of the paper. Essentially, the agent π, which we call the "influencer", asks a retrospective question: "How much are agents $\{\mu_k\}_{k=1}^{n-1}$ (i.e. the "influencees") expected to get closer to their target returns after π executes an action $a^{(\pi)}_t$ in a global state $s_t$?" Meaning that state-action pairs that lead agents $\{\mu_k\}_{k=1}^{n-1}$ closer to their target returns are considered highly influential and are rewarded. The goal of this section is to show how π can learn effective policies that drive teammates' behaviors towards their targets by estimating its influence.

INFLUENCE WITH SINGLE ESTIMATOR

Formally, we quantify the influence $F_\pi$ of agent π on $\{\mu_i\}_{i=1}^{n-1}$ by initializing a network $Q_{cen} : S \times A \to \mathbb{R}^{n-1}$ with parameters $\phi_{cen}$; the $i$-th output $Q_{cen}(\,\cdot\,; \phi_{cen})^{(i)}$ estimates the updated q-value of agent $\mu_i$ after frequent visits of π to $(s_t, a^{(\pi)}_t)$, and is trained by minimizing a temporal-difference loss in which $Q^{(i)}_{target}$ is the target critic of agent $\mu_i$ and $\mathcal{D}$ is a buffer containing all agents' experiences, with the exception that agent π's experience is fixed to $(o_\pi(s_t), a^{(\pi)}_t)$. After obtaining an estimate of what the q-values would be after counterfactual rollouts of π starting from $(o_\pi(s_t), a^{(\pi)}_t)$, we can now compute the influence $F_\pi$ as the expected gap between these estimates and the agents' target values, where $B$ is a buffer storing all agents' transitions.
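A minimal sketch of how such an influence estimate could be evaluated over a batch of transitions follows. The paper's exact loss and influence expressions are not reproduced in this excerpt, so the callable signatures, the use of an absolute gap, and the averaging over agents and samples are assumptions made for illustration.

```python
import torch

def influence_estimate(batch, influencee_critics, target_critics, gamma=0.95):
    """Sketch: for every influencee mu_i, compare a critic estimate (reflecting counterfactual
    rollouts of pi) against the ordinary bootstrapped target, and average the gap over the batch.
    `batch` is assumed to hold joint tensors (obs, acts, rews, next_obs, next_acts), and each
    critic is assumed to be a callable taking (obs, acts) and returning a (batch, 1) tensor."""
    obs, acts, rews, next_obs, next_acts = batch
    gaps = []
    for i, (q_i_hat, q_i_target) in enumerate(zip(influencee_critics, target_critics)):
        with torch.no_grad():                              # targets do not propagate through pi
            y_i = rews[:, i:i + 1] + gamma * q_i_target(next_obs, next_acts)
        q_i = q_i_hat(obs, acts)
        gaps.append((q_i - y_i).abs().mean())
    # smaller F_pi means the influencees are, on average, closer to their target values
    return torch.stack(gaps).mean()
```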
By minimizing $F_\pi$ as a regularizer of its return objective $J_\pi$, π adjusts its action selection so that $\{\mu_k\}_{k=1}^{n-1}$ can reach their goals faster and more efficiently, thus achieving sufficient coordination. Note that the second term in the expectation (i.e. the target vector $y$) is set to be non-differentiable with respect to π's parameters and thus does not propagate through its network.

INFLUENCE WITH MULTIPLE INDIVIDUAL ESTIMATORS

As seen earlier, agent π estimates the gap between each agent's value and its target value by employing a single network. Another desirable approach is to use multiple estimators, where each estimator, namely $Q^{(i)}_{clone}$, individually calculates a fairly good approximation of what the q-value of $\mu_i$ would be after counterfactual rollouts of π starting from $(o_\pi(s_t), a^{(\pi)}_t)$. To reduce computational costs and arrive at better estimates, each estimator's network is initialized with the parameters of the corresponding critic network at each episode (i.e. $Q^{(i)}_{clone} \leftarrow Q^{(i)}$). The training is carried out similarly to that of the single-estimator setting, and the influence function is expressed analogously using the individual cloned estimators.

Which approach yields better estimates of the true value of the influence? The influence of an agent π on a team of agents $T$ was defined as a measure of the improvement in the performance of $T$ given the current behavior of π. However, measuring the influence using function approximators might result in inaccurate estimates. To resolve this concern, we plot the influence estimates of the two prior approaches over time while they learn on the Cooperative Navigation MAPE task (Mordatch & Abbeel, 2018), where the number of agents is $N = 6$. In Figure (2), we graph the average influence estimates over 40000 episodes and compare them to the true value. The latter can be expressed as the average distance between the true q-value and the true q-target of each agent given the current behavior of one pre-labeled agent (i.e. the influencer). This distance is then averaged over 1000 episodes following the current policies of the agents and is reported every 5000 episodes. The plots show a relatively small bias of both methods during learning. However, as Figure (2) suggests, measuring influence using multiple individual estimators yields more accurate values after enough training, which substantiates its superiority over the shared-network approach. Note that, although it confirms our prior hypothesis, this experiment does not reflect the importance of employing the influence on the final performance of the agents, as we will discuss in Section (4).

INTRINSIC MOTIVATION FOR DIVERSIFIED TEAM BEHAVIOR

In this section, we introduce a framework for achieving cooperative exploration by ensuring that agents are consistently tilted towards visiting under-explored state-action configurations; we start by providing a simple demonstration which shows that the number of environment steps required for all agents to randomly traverse all possible action configurations increases at least exponentially with the number of agents.

Proposition 1. Consider an $L$-action setting of $n$ agents. In expectation, the number of steps $T$ needed to visit all $L^n$ action configurations at least once without coordinated exploration grows at least exponentially with the number of agents. More concretely, $\mathbb{E}[T] = \Omega(nL^n)$.

Proof. See Appendix A.
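A toy Monte Carlo illustration of the scaling behind Proposition 1: with uniform, uncoordinated exploration, covering all $L^n$ joint action configurations behaves like a coupon-collector problem, so the required number of steps blows up with the number of agents. The choice $L = 3$ and the trial counts are illustrative, not taken from the paper.

```python
import itertools
import random

def steps_to_cover_all(n_agents, n_actions, rng):
    """Count environment steps until every joint action configuration has been tried
    when each agent picks actions uniformly at random (no coordination)."""
    unseen = set(itertools.product(range(n_actions), repeat=n_agents))
    steps = 0
    while unseen:
        joint = tuple(rng.randrange(n_actions) for _ in range(n_agents))
        unseen.discard(joint)
        steps += 1
    return steps

rng = random.Random(0)
for n in (1, 2, 3, 4):
    trials = [steps_to_cover_all(n, 3, rng) for _ in range(200)]
    print(f"n={n}: mean steps ~ {sum(trials) / len(trials):.0f}  (3^n = {3 ** n} configurations)")
```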
To mitigate this issue, we assign agent π a prediction error $\psi$ as an intrinsic reward (Eq. (7)) to facilitate recognizing and learning novel behaviors, where $\phi$ is an autoencoder network regularly trained on data generated by the policy π and $\lambda_\pi$ is a hyper-parameter that balances the extrinsic and intrinsic reward terms. The expression of $\psi$ stems from the observation that when an autoencoder is trained on data from a particular distribution, it will be good at reconstructing data from that distribution, while it will perform poorly if the data is from a different distribution (Fayad & Ibrahim, 2021). Thus, by employing $\psi$ as an intrinsic bonus, π rewards state observations and actions that do not belong to the data generated by it. In practice, $\phi$ is designed to be a relatively large network, since we want it to be slightly overfitted to the training data so that it will not accidentally generalize to behaviors that we may deem novel. Nevertheless, assigning each agent a $\psi$ is not sufficient, as it makes the case equivalent to independent exploration approaches. Thus, we propose a coordinated exploration method that takes into account other agents' behaviors, encouraging agents to diversify the team behavior while maintaining good performance. Specifically, we assign agents $\{\mu_i\}_{i=1}^{n-1}$ an intrinsic penalty defined in Eq. (8). This reward term aims at teaching the agents to recognize previous behaviors and synchronously select novel configurations. To build intuition, consider a case where $N = 2$. Whenever $(\pi, \mu)$ select an action tuple in the neighborhood of a frequently-visited tuple $(a_1, a_2)$ in a global state $s$, $\psi$ will be relatively small and the penalty, $r^{int}_{\mu}$, will be large. Conversely, if $(\pi, \mu)$ encounter a novel tuple, say $(a_1', a_2')$, in $s$, the small penalty of $\mu$ (Eq. (8)) together with a large reward for π (Eq. (7)) will drive both agents to further explore this encounter. Overall, Fig. (2) shows how this framework can be augmented with the basic influence introduced earlier to reinforce the learning and discovery of coordinated behaviors.

EMPIRICAL EVALUATION & ANALYSIS

The goals of our experiments are to: a) verify the performance of our method on a comprehensive set of multi-agent challenges (SMAC, MAPE, sparse-reward settings, and continuous control environments); b) perform ablations to examine which particular components of the proposed framework are important for good performance.

STARCRAFT MULTI-AGENT CHALLENGE

StarCraft provides a rich set of heterogeneous units, each with diverse actions, allowing for extremely complex cooperative behaviors among agents. We thus evaluate our method on several StarCraft micromanagement tasks from the SMAC benchmark (Samvelyan et al., 2019), where a group of mixed-type units controlled by decentralized agents needs to cooperate to defeat another group of mixed-type enemy units controlled by built-in heuristic rules with the "difficult" setting; the battles can be either symmetric (same units in both groups) or asymmetric. Each agent observes its own status and, within its field of view, it also observes other units' statistics such as health, location, and unit type (partial observability); agents can only attack enemies within their shooting range. A shared reward is received on battle victory as well as for damaging or killing enemy units. Each battle has step limits set by SMAC and may end early.
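The reconstruction-error bonus described above can be sketched as follows. The autoencoder shape loosely follows the narrow-bottleneck idea, $\lambda_\pi$ is an illustrative value, and the inverse-error penalty for the influencees is only one possible realization of the behaviour attributed to Eq. (8), whose exact form is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

# Autoencoder phi trained only on data generated by pi's own policy; observation-action
# vectors it reconstructs poorly are treated as novel.
in_dim, lambda_pi = 10, 0.1
phi = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 3), nn.ReLU(),
                    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, in_dim))
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)

def psi(x):
    """Novelty score: reconstruction error of the observation-action vector x."""
    with torch.no_grad():
        return ((phi(x) - x) ** 2).mean(dim=-1)

def intrinsic_rewards(x):
    nov = psi(x)
    r_pi = lambda_pi * nov                          # bonus for the influencer on unfamiliar data
    r_mu = -lambda_pi / (nov + 1e-6)                # assumed penalty shape: large when behaviour is familiar
    return r_pi, r_mu

def train_phi(batch_x):
    loss = ((phi(batch_x) - batch_x) ** 2).mean()   # regular autoencoder training on pi's data
    opt.zero_grad(); loss.backward(); opt.step()
```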
We consider 4 battle maps grouped into Easy (2s3z), Hard (5m_vs_6m, 3s_vs_5z), and Super Hard (corridor) against 6 baseline methods using their open-source implementations based on PyMARL (Samvelyan et al., 2019): COMA , IQL (Tan, 1993), VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018), LIIR (Individual Intrinsic Rewards) (Du et al., 2019), and LICA (Implicit Credit Assignment) (Zhou et al., 2020b). The corridor map, in which 6 Zealots face 24 enemy Zerglings, requires agents to make effective use of the terrain features and block enemy attacks from different directions. A properly coordinated exploration scheme applied to this map would help the agents discover a suitable unit positioning quickly and improve performance, while 2s3z requires agents to learn "focus fire" and interception. For the asymmetric 5m_vs_6m, basic agent coordination alone such as "focus firing" no longer suffices (Du et al., 2019) and consistent success requires extended exploration to uncover complex cooperative strategies such as pulling back units with low health during combat. The 3s_vs_5z scenario features three allied Stalkers against five enemy Zealots. Since Zealots counter Stalkers, the only winning strategy for the allied units is to kite the enemy around the map and kill them one after another, causing the failure of independent learning algorithms to learn good policies in this task. For all these scenarios, our method consistently shows the best performance with significant learning speed. Detailed results are reported in Figure (3) as they present the median win rate of the methods during the training across 12 random runs. SPARSE-REWARD SETTINGS We test on two additional tasks to show the effectiveness our method on sparse-reward settings and compare it to famous influence-based coordinated exploration algorithms (Table (1 Sparse Push-Box: A 15 × 15 room is populated with 2 agents and 1 box. Agents need to push the box to the wall in 300 environment steps to get a reward of 1000. Moreover, both can observe the coordinates of their teammate and the location of the box. However, the box is so heavy that only when two agents push it in the same direction at the same time can it be moved a grid. Agents need to coordinate their positions and actions for multiple steps to earn a reward. Sparse Secret Room: A 25 × 25 grid is divided into three small rooms on the right and one large room on the left where 2 agents are initially spawned. There is one door between each small room and the large room. A switch in the large room controls all three doors. A switch also exists in each small room which only controls the room's door. The agents need to navigate to one of the three small rooms, i.e. the target room, to receive positive reward. The task is considered solved if both agents are in the target room. The state vector contains (x, y) locations of all agents and binary variables to indicate if doors are open. MULTI-AGENT PARTICLE ENVIRONMENTS To understand how the proposed method helps agents achieve cooperative behavior in nonstationary settings, we conduct experiments on the grounded communication environment 3 proposed in (Mordatch & Abbeel, 2018;Lowe et al., 2017). Each task consists of multiple agents (N ≥ 2) and L landmarks in a two-dimensional world with continuous space and discrete time. Both agent and landmark entities inhabit a physical location in space and posses descriptive physical characteristics, such as color and shape type. 
For that purpose, we adapt the DDPG algorithm as a learning framework and train with 10 random seeds. Results of the following tasks are reported in Tables (2, 3, 4). Cooperative Navigation: In this environment, N agents must collaborate to reach a set of N landmarks with known positions. Agents are rewarded based on how far any agent is from each landmark, meaning that the agents learn to spread with each agent covering one landmark. The agents, which occupy a significant physical space, are aware of their relative positions to each other and are further penalized when colliding with each other. Cooperative Communication: Here, a stationary speaker must guide a listener in an environment consisting of three landmarks of differing colors. At each episode, one landmark of a particular color is set as a goal for the listener to be reached, however, only the speaker can observe which landmark the listener must navigate to. Moreover, The speaker can produce a communication output at each time step which is observed by the listener. The latter must navigate the environment to reach the correct landmark. Agents are collectively rewarded at the end of an episode based on the listener's distance from the correct landmark. Physical Deception: This environment consists of N agents and N landmarks, with one landmark as the target of all agents. The agents are rewarded based on the distance of the closest agent to the target landmark, making it sufficient for only one agent to reach it. An adversary agent also tries to reach the target landmark, while the agents are penalized as it gets closer to the target. The adversary, however, does not know which landmark is the target and must deduce it from the agents' behavior. For that reason, agents must cooperate to trick the adversary by learning to cover all the landmarks. This task shows that our algorithm is applicable not only to cooperative interactions but to mixed environments as well. Table 4: Results on the physical deception task, with N = 2 cooperative agents/landmarks. Success (succ %) for agents (AG) and adversaries (ADV) is if they are within a small distance from the target landmark. CONTINUOUS ENVIRONMENTS To confirm the scalability of our algorithm to large continuous settings, we measure the performance of our algorithm on a suite of PyBullet (Tan et al., 2018) continuous control tasks, interfaced through OpenAI Gym (Brockman et al., 2016). Gym environments, however, are mainly single-agent settings, thus to evaluate our approach, we reframe the problem by introducing an additional learning agent that acts as an auxiliary agent. Crucially, both agents work collaboratively in order to find a region of the solution space where an agent accumulates higher rewards. We use TD3 (Fujimoto et al., 2018) as our learning model and test it against state-of-the-art algorithms in 5 gym environments. Our algorithm outperforms all baselines across all different environments (e.g. our method attains 131% return of SAC final performance on Humanoid-v3). For detailed results, see Appendix B. Figure 4: Ablations for different components of our framework on 2s3z scenario. ABLATIONS We further investigate the significance of each component along with a symmetric extension of the proposed framework. 
Specifically, we consider the three cases: 1) No F: where the influence function does not contribute to the update rule to any of the policies; 2) No IM: where a randomly selected agent maximizes both the expected sum of extrinsic rewards along with the influence function, and other agents' policies are learned using the DDPG; 3) Symmetric: where all agents simultaneously play the role of an influencer and influencee: they learn to maximize an augmented reward function (extrinsic and intrinsic) along with the influence function. Results of the experiments conducted on the 2s3z SMAC scenario show that, in the absence of the intrinsic rewards (No IM), the agents experience a slightly decreased overall performance when compared to the significant decline induced by detaching the influence function (No F). In Figure (4), we observe that the agents following the Symmetric approach learn faster, and achieve a significantly higher median win rate. This approach, however, doubles the computational costs which restricts its applicability in larger settings. RELATED WORK We discuss recently developed methods for exploration in RL using intrinsic motivation, coordination in multi-agent RL, and influence-based coordinated exploration methods subsequently. Intrinsic motivation (IM) has been increasingly used both in single-agent RL and multi-agent RL. A core idea of IM is to encourage the agent to take new actions or visit new states, thus exploring the environment and obtaining more diverse behaviors. One common approach is to approximate state or state-action visitation frequency and add a reward bonus to states the agent rarely covers (Tang et al., 2017;Bellemare et al., 2016;Martin et al., 2017). A more related IM approach is to evaluate state visitation novelty (Klissarov et al., 2019;Han et al., 2020;Burda et al., 2019) or state-action visitation novelty (Fayad & Ibrahim, 2021). Inspired by the latter, we provided a natural extension for this approach to the MARL settings by the learning of a "novelty" function. Other works make use of single-agent IM to construct their multi-agent intrinsic reward (Du et al., 2019;Iqbal & Sha, 2019). Each agent in (Du et al., 2019) learns a distinct intrinsic reward so that the agents are stimulated differently, even when the environment only feedbacks a team reward. This reward helps distinguish the contributions of the agents when the environment only returns a collective reward. In (Iqbal & Sha, 2019), each agent has a novelty function that assesses how novel an observation is to it, based on its past experience. Their multi-agent intrinsic reward is defined based on how novel all agents consider an agent's observation. A recent work (Liu et al., 2021) assigns agents a common goal while exploring. The goal is selected from multiple projected state spaces via a normalized entropy-based technique. Then, agents are trained to reach this goal in a coordinated manner. Many works studied the cooperative settings in MARL; a straightforward approach is to use independent learning agents (fully decentralized learning). This approach, however, is shown to perform inadequately both with Q-learning (Matignon et al., 2012) and with policy gradient (Lowe et al., 2017). Therefore, we considered the CTDE paradigm, where each agent's policy takes its individual observation as many real life applications dictate, while the centralized critic permit for sharing of information during training. 
Policy gradient methods have been commonly used along with the CTDE paradigm in MARL, either by implementing a single centralized critic for all agents , or one centralized critic for each agent (Lowe et al., 2017). Adopting the latter, we enable agents with different reward functions to learn in competitive and mixed scenarios as well. Some other works encouraged cooperative interactions between agents by sharing useful information (Yang et al., 2020;Hostallero et al., 2020). In Hostallero et al. (2020), each agent broadcasts a signal that represents an assessment of the effect of the joint actions that all agents take on its expected reward. Different from our approach, this signal encourages agents to behave as is expected of them and does not benefit exploration. As for Yang et al. (2020), each agent learns an incentive function that rewards other agents based on their actions. Each agent's function aims to alter other agents' behavior to maximize its extrinsic rewards. To accomplish this, each agent requires access to every other agent's policy, incentive function, and return making this approach difficult to scale and execute. Additionally, Roy et al. (2019) proposed two policy regularizers approaches to promote coordination in a team of agent, one of which assumes that an agent must be able to predict the behavior of its teammates in order to coordinate with them, while the other supposes that coordinated agents collectively recognize different situations and synchronously switch to different sub-policies to react to them. Similarly to our work, (Jaques et al., 2019) proposed a similar idea of rewarding an agent for having a casual influence on other agents' actions. Their method showed interesting results in terms of learning coordinated behavior. However, this casual influence is designed to reward policies for influencing other policies' actions without considering the "quality" of this influence. Barton et al. (2018) propose causal influence as a way to measure coordination between agents, specifically using Convergence Cross Mapping (CCM) to analyze the degree of dependence between two agents' policies. Our method also draws inspiration from the work of (Wang et al., 2019), as they define an influence-based intrinsic exploration bonus by the expected difference between the action-value function of one agent and its counterfactual action-value function without considering the state and action of the other agent. CONCLUSIONS & FUTURE WORK We introduced a novel multi-agent RL algorithm for achieving coordination through assessing the influence an agent has on other agents' behaviors. Additionally, we proposed to learn an intrinsic reward for each agent to promote coordinated team exploration. We tested our algorithm on a wide variety of tasks with many challenges, such as partial observability, sparse rewards, and large spaces; these tasks include, but not limited to, SMAC, MAPE, as well as OpenAI gym continuous environments. Our methods achieved noticeable improvement over prominent algorithms on all tasks. One promising extension of our algorithm is to use Graph Attention Networks (Veličković et al., 2017;Zhou et al., 2020a) to learn the importance of the influencer in determining the influencees' policies and to establish a message-passing architecture in networked systems. The investigation of the effectiveness of these methods is left for future works. REPRODUCIBILITY STATEMENT We have provided an illustration of the proposed algorithm in Fig. 
(2), along with implementation details and hyperparameter selection in Appendix C. Furthermore, code is submitted with the Supplementary Material and each algorithm is evaluated at least 10 times using random seeds on all environments. We compare our method to the original twin delayed deep deterministic policy gradient (TD3) (Fujimoto et al., 2018); soft actor-critic (SAC) (Haarnoja et al., 2018); proximal policy optimization (PPO), a stable and efficient on-policy policy gradient algorithm; deep deterministic policy gradient (DDPG); trust region policy optimization (TRPO) (Schulman et al., 2015); Tsallis actor-critic (TAC) (Chen & Peng, 2019), a recent off-policy algorithm for learning maximum entropy policies, where we use the authors' implementation; and Actor-Critic using Kronecker-Factored Trust Region (ACKTR), as implemented in OpenAI's baselines repository. Each task is run for at least 1 million time steps and the average return over 15 episodes is reported every 5000 time steps. To enable reproducibility, each experiment is conducted on 10 random seeds of the Gym simulator and of the network initialization. Results of the best-performing agent of the two across the different methods are reported in Figure (5).

C TRAINING DETAILS

C.1 GENERAL CONFIGURATIONS

We use a buffer size of $10^6$ entries and a batch size of 1024. We collect 100 transitions by interacting with the environment for each learning update. For all tasks in our hyper-parameter searches, we train the agents for 15,000 episodes of 100 steps and then re-train the best configuration for each algorithm-environment pair for twice as long (30,000 episodes) to ensure full convergence for the final evaluation. We use a discount factor $\gamma$ of 0.95, an influence importance temperature $\beta$ of 0.1, and a gradient clipping threshold of 0.5 in all experiments unless otherwise specified. Each cloned critic is updated 4 times per step.

C.2 SPARSE PUSH BOX AND SPARSE SECRET ROOM, MAPE, & GYM

We use the Adam optimizer (Kingma & Ba, 2014) to perform parameter updates. All models (actors, critics and proxy critics) are parametrized by feedforward networks containing two hidden layers of 128 units, except for the autoencoder network, where we use 7 hidden layers with dimensions (128, 64, 12, 3, 12, 64, 128), respectively. All models' parameters are initialized using the Glorot initialization method (Glorot & Bengio, 2010), while the autoencoder's parameters are initialized using the Kaiming method (He et al., 2015). We employ the Rectified Linear Unit (ReLU) as the activation function and apply layer normalization (Ba et al., 2016) on the pre-activation units to stabilize the learning.

C.3 SMAC

The architecture of all agent networks is a DRQN (Hausknecht & Stone, 2015) with a recurrent layer comprised of a GRU with a 64-dimensional hidden state, with a fully-connected layer before and after. All neural networks are trained using RMSprop ($\alpha$ = 0.99, with no weight decay or momentum) with a learning rate of $5 \times 10^{-4}$.
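A minimal sketch of the SMAC agent network described above (fully-connected layer, 64-dimensional GRU, fully-connected head) together with the stated RMSprop settings; the observation and action dimensions, and the batch size in the example call, are illustrative.

```python
import torch
import torch.nn as nn

class DRQNAgent(nn.Module):
    """Sketch of the described agent network: fc layer -> GRU (64-dim hidden state) -> fc head."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.fc_in = nn.Linear(obs_dim, hidden)
        self.gru = nn.GRUCell(hidden, hidden)
        self.fc_out = nn.Linear(hidden, n_actions)

    def forward(self, obs, h):
        x = torch.relu(self.fc_in(obs))
        h = self.gru(x, h)           # carry the recurrent hidden state across time steps
        return self.fc_out(h), h

# One forward step for a batch of 4 agents' observations (dimensions are illustrative).
agent = DRQNAgent(obs_dim=30, n_actions=11)
h = torch.zeros(4, 64)
q_values, h = agent(torch.randn(4, 30), h)
optimizer = torch.optim.RMSprop(agent.parameters(), lr=5e-4, alpha=0.99,
                                momentum=0.0, weight_decay=0.0)
```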
2021-08-31T01:16:00.941Z
2021-08-28T00:00:00.000
{ "year": 2021, "sha1": "bc18f1ec5209f8ebacb91eff42927a33664bc1a2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bc18f1ec5209f8ebacb91eff42927a33664bc1a2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256045748
pes2o/s2orc
v3-fos-license
Reactor rate modulation oscillation analysis with two detectors in Double Chooz

E-mail: navas@lal.in2p3.fr, pau.novella@ific.uv.es

Abstract: A θ13 oscillation analysis based on the observed antineutrino rates at the Double Chooz far and near detectors for different reactor power conditions is presented. This approach provides a so far unique simultaneous determination of θ13 and the total background rates without relying on any assumptions on the specific background contributions. The analysis comprises 865 days of data collected in both detectors with at least one reactor in operation. The oscillation results are enhanced by the use of 24.06 days (12.74 days) of reactor-off data in the far (near) detector. The analysis considers the $\overline{\nu}_e$ interactions up to a visible energy of 8.5 MeV, using the events at higher energies to build a cosmogenic background model considering fast-neutron interactions and $^9$Li decays. The background-model-independent determination of the mixing angle yields sin²(2θ13) = 0.094 ± 0.017, with the best-fit total background rates being fully consistent with the cosmogenic background model. A second oscillation analysis is also performed, constraining the total background rates to the cosmogenic background estimates. While the central value is not significantly modified, owing to the consistency between the reactor-off data and the background estimates, the addition of the background model reduces the uncertainty on θ13 to 0.015. Along with the oscillation results, the normalization of the antineutrino rate is measured with a precision of 0.86%, reducing the 1.43% uncertainty associated with the expectation.

Introduction

In the last few decades, neutrinos have been proven to be massive particles in several oscillation experiments [1]. The oscillations among the three active neutrino species are now well established, connecting the mass eigenstates (ν1, ν2, ν3) with the flavor eigenstates (νe, νμ, ντ).
The 3-flavor neutrino oscillations are described by means of three mixing angles (θ 12 , θ 23 , θ 13 ), two independent mass square differences (∆m 2 21 , ∆m 2 31 ), and one phase responsible for the CP -violation in the leptonic sector (δ CP ). After the observation of the dominant oscillations in the so-called solar [2,3] and atmospheric sectors [4,5], respectively driven by (θ 12 , ∆m 2 21 ) and (θ 23 , ∆m 2 31 ), reactor neutrino experiments have recently observed the oscillation induced by the last mixing angle, θ 13 . Double Chooz, Daya Bay and RENO have provided precise measurements of θ 13 [6][7][8][9], relying on the observation of the disappearance of electron antineutrinos (ν e ) generated in nuclear reactors at typical flight distances of 1-2 km. The θ 13 value offered by reactor experiments is used as an external constraint in current and future accelerator-based experiments aiming at the measurement of δ CP (see for instance [10]). As a consequence, reactor neutrino experiments play a major role in the search for the leptonic CP-violation. Double Chooz and other reactor experiments detect the electron antineutrinos via thē ν e p → e + n interaction, usually referred to as inverse beta decay (IBD). The time and spatial coincidence of the prompt positron and the delayed neutron capture signals yields a large signal-to-background ratio. However, accidental and correlated events induced by fast neutrons and cosmogenic radio-nuclides can mimic the characteristic IBD signature, becoming non-negligible backgrounds. The oscillation analyses presented in [6][7][8][9] are based on background models built assuming a number of background sources. The rate and energy spectrum of each background contribution is estimated from the data collected during reactor-on periods, and incorporated to the total background expectation. Accounting for these background models, θ 13 is derived from the observed energy-dependent deficit ofν e with respect to a MC-based null-oscillation expectation or the unoscillated flux measured JHEP01(2021)190 at a near detector. As a consequence, the oscillation analyses are background-modeldependent and the uncertainty on the background expectations may have a non-negligible impact on the uncertainty of θ 13 . In this paper, an alternative background-model-independent oscillation analysis is presented: the Reactor Rate Modulation (RRM). In this approach, the comparison of the observed rate ofν e candidates with respect to the expectedν e rate in absence of oscillations is performed for different reactor operation conditions ranging from zero to full thermal power. This allows for a simultaneous determination of both θ 13 and the total background rate, without making any consideration about the individual background sources. This technique is particularly competitive in the Double Chooz experiment, as it collects data from only two nuclear cores. In addition, a background-model-dependent result on θ 13 is also obtained following the same RRM procedure. In this case, the precision on θ 13 is improved by incorporating to the analysis a background model based on [9], providing also a consistency test for the model itself. This paper extrapolates the RRM analysis described in [11], which uses only the far detector of the Double Chooz experiment, to a multi-detector setup considering both the near and far detectors. 
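For orientation, the reactor ν̄e disappearance measured by these experiments is commonly written with the two-flavor short-baseline survival probability below; this expression is not spelled out in the text above, but it is the standard approximation consistent with the disappearance coefficient sin²(Δm²L/4Eν̄e) introduced later for the RRM analysis.

```latex
% Standard short-baseline survival probability for reactor antineutrinos,
% with baseline L and antineutrino energy E (two-flavor approximation).
P\left(\overline{\nu}_e \to \overline{\nu}_e\right) \simeq
  1 - \sin^2\!\left(2\theta_{13}\right)\,
      \sin^2\!\left(\frac{\Delta m^2_{31}\,L}{4E}\right)
```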
The oscillation analysis also incorporates for the first time the reactor-off data collected by both detectors during 2017, which offer a constraint to the BG rate and serves as independent validation of the BG model. Beyond the neutrino oscillation, the analysis also yields a measurement of the observed rate of IBD interactions. The value is found to be fully consistent with the antineutrino flux normalization provided by Bugey-4 [12], reducing its associated uncertainty. This paper is organized as follows. Section 2 describes the selection of theν e candidates and the corresponding expected backgrounds, while section 3 describes the reactor-on and reactor-off data samples used for the oscillation analysis. Section 4 follows with the definition of the RRM approach and the systematic uncertainties involved. Finally, the oscillation analysis results are presented in section 5 with and without the background model constraint, and section 6 concludes with an overview. 2ν e selection and expected backgrounds The setup of the Double Chooz experiment consists of two identical detectors measuring theν e flux generated at the two reactors (B1 and B2, with a thermal power of 4.25 GW each) of the Chooz B nuclear power plant, operated by Électricité de France. The average distances between the far (FD) and near (ND) detectors and the reactor cores are L ∼ 400 m and L ∼ 1050 m, respectively. Both detectors are identical and yield effectively identical responses after calibration, thus leading to a major reduction of the correlated systematic uncertainties in the oscillation analyses. The detectors consist of a set of concentric cylinders and an outer muon veto on the top. The innermost volume (neutrino target or NT) contains 10 m 3 of Gd-loaded (0.1%) liquid scintillator inside a transparent acrylic vessel. This volume is surrounded by another acrylic vessel filled with 23 m 3 of Gd-unloaded scintillator (gamma-catcher or GC). This second volume was originally meant to fully contain the energy deposition of gamma rays from the neutron capture on Gd and the positron annihilation in the target region. The GC is in turn contained within a third volume JHEP01(2021)190 (buffer) made of stainless steel and filled with non-scintillating mineral oil. The surface of the buffer is covered with an array of 390 low background 10-inch PMTs. The NT, GC and buffer tank define the inner detector (ID). The ID is surrounded by the inner muon veto (IV), a 50 cm thick liquid scintillator volume equipped with 78 8-inch PMTs. Finally, the upper part of the detectors is covered by an outer muon veto (OV), made of plastic scintillator strips grouped in different modules. While the ID is meant to detect the IBD interactions and to allow for the event vertex and energy reconstruction, the IV and OV are devoted to the suppression and rejection of backgrounds. The IBD candidates selection in the RRM analysis follows the lines described in [9]. The selection relies on the twofold-coincidence signature of the IBD process, providing a prompt trigger (e + ) and a delayed trigger (neutron capture). The energy of the prompt signal (visible energy) is directly related to the energy of the interactingν e : Eν e ≈ E e + + 0.78 MeV. While the Double Chooz detectors were originally designed to exploit the large neutron capture cross-section in Gd and the characteristic de-excitation gammas (∼8 MeV), the so-called Total Neutron Capture (TnC) selection approach accounts for neutron captures in H (see [13,14]), C and Gd. 
This implies that IBD detection volume considers both the NT and the GC, boosting the statistical sample of theν e candidates by almost a factor of 3. After a series of muon-induced background vetoes based on the ID, IV and OV data (see [9] for details), a time and spatial correlation between the prompt and delayed signals is required. The time difference between the signals is comprised between 0.5 µs and 800 µs, while the distance between the vertexes is imposed to be below 120 cm. In addition, an artificial neural network (ANN) has been developed relying on the promptdelayed correlation to set a cut reducing the rate of random coincidences, especially in the n-H events (see [14] for details). The energy windows considered for the prompt and delayed signals are 1.0-8.5 MeV and 1.3-10.0 MeV, respectively. Unlike in the IBD selection presented in [9], where the prompt energy signal is extended to 20 MeV to better constraint the background shapes, in this analysis it is restricted to be below 8.5 MeV (> 99.96% of the reactorν e ). As discussed below, the IBD candidates above this energy are used to infer a background expectation. The physical events mimicking the IBD signature have been discussed in [9]. In Double Chooz, given the small overburden of the detectors (depths of ∼100 m and ∼30 m for the FD and ND, respectively), the muon-induced cosmogenic backgrounds dominate. These correspond mostly to fast-neutrons and unstable isotopes produced upon 12 C spallation (mainly 9 Li, as no indication of 8 He is reported in [15]). While the fast-neutron twofold signature is due to a proton recoil on H followed by the n-capture, 9 Li undergoes a βn decay. Other cosmogenic backgrounds (like µ decay at rest or 12 B) are estimated to be negligible. Apart from the correlated backgrounds, random coincidences of natural radioactivity events and neutron captures (hereafter, accidental background) also become a non-negligible contamination in the IBD candidates samples. However, the estimation of the accidental background contribution relying on the rate of single events is very precise (<0.5% in the FD and 0.1% in the ND). The reactor-on-based background model adopted in the current analysis is built with the contributions of fast-neutrons, 9 Li and accidental events. The only difference with respect to the model in [9] is the range of the visible energy Fast-neutron 1.09 ± 0.03 8.89 ± 0.18 9 Li isotope 2.30 ± 0.30 14.09 ± 1.62 Table 1. Background expectation in the 1.0-8.5 MeV window. The accidental, fast-neutron and 9 Li decay contributions to the background model are derived from reactor-on IBD candidates. and the estimation of the 9 Li contribution from the candidates observed in the 8.5 MeV to 12.0 MeV energy window. The fast-neutron rate and energy spectrum is measured from events tagged by the IV up to 20 MeV. Subtracting the fast-neutron contribution in the 8.5-12.0 MeV range, as shown in figure 1, offers a direct measurement of the number of 9 Li decays (theν e contribution is 0.030 ± 0.009%). Given that the spectral shape of the 9 Li prompt signal is well known [15], the fraction of the spectrum below (above) 8.5 MeV is computed to be 89.3 ± 0.5% (10.7 ± 0.5%). This number allows to extrapolate the total number of 9 Li decays in the 1.0-8.5 MeV energy window considered for the RRM oscillation analysis. The expected rates for the fast-neutron, 9 Li and accidental backgrounds are summarized in table 1. 
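As an illustration of the extrapolation just described, the sketch below rescales a fast-neutron-subtracted 9Li count observed in the 8.5-12.0 MeV window by the quoted spectral fractions (89.3% below and 10.7% above 8.5 MeV) to obtain a rate in the 1.0-8.5 MeV analysis window. The function name and the input counts are invented placeholders, not the published Double Chooz numbers; only the arithmetic mirrors the text.

```python
# Hedged sketch of the 9Li extrapolation: counts observed between 8.5 and 12.0 MeV,
# after subtracting the IV-tagged fast-neutron contribution, are rescaled by the known
# spectral fractions to estimate the 9Li rate in the 1.0-8.5 MeV oscillation window.

def li9_rate_below_8p5(n_high, fast_neutron_high, livetime_days,
                       frac_below=0.893, frac_above=0.107):
    """Estimated 9Li rate (events/day) in the 1.0-8.5 MeV window."""
    li9_high = n_high - fast_neutron_high         # 9Li counts in the 8.5-12.0 MeV window
    li9_low = li9_high * frac_below / frac_above  # rescale to the region below 8.5 MeV
    return li9_low / livetime_days

# Illustrative placeholder inputs (not the paper's values):
print(li9_rate_below_8p5(n_high=450, fast_neutron_high=120, livetime_days=384))
```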
The background estimates are quoted separately for the first phase of the experiment (single-detector, hereafter SD), operating only the FD, and the second phase (multi-detector, hereafter MD) with both detectors running. The increase in the FD accidental rate between both periods is due to the increase of the light noise background described in [16]. This noise has been suppressed in the ND covering the PMT bases with a radioupure polyester film, yielding a reduction in the accidental rate with respect to the FD. The increase in the FD accidental rate uncertainty is due to the smaller statistical sample of random coincidences used to estimate this background in the MD phase. JHEP01(2021)190 3 Reactor-on and reactor-off data samples The Double Chooz data have been taken under different reactor operating conditions. In particular, the totalν e flux changes significantly during the reactor refuelling periods, when only one of the cores is in operation. In addition, the flux depends on the cores fuel composition, thus evolving in time. As done in [9], the oscillation analysis presented in this work comprises the data taken between April 2011 and January 2013 (481 days), when only the FD was available, and between January 2015 and April 2016 (384 days), when the FD and the ND were simultaneously collecting data. According to the selection described in section 2, the number ofν e candidates (actualν e plus background contributions) in the single-detector period is 47351, while in the multi-detector period is respectively 42054 in the FD and 206981 in the ND. The time evolution of the candidates rate for the MD data samples is shown in figure 2, where the days with one (1-Off) and two (2-On) reactors in operation are clearly visible. A prediction of the unoscillatedν e flux during the reactoron periods has been carried out as described in previous Double Chooz publications [6]. The reactor flux model is adopted from [17,18]. While the 235 U, 239 Pu, and 241 Pu isotopes contributions are derived from the Institut Laue-Langevin (ILL) reactor data (see for instance [19]), the contribution from 238 U is predicted from [20]. The time evolution of the fission fractions are accounted for using dedicated Chooz reactor simulations. As done in past Double Chooz publications where the ND was not available, Bugey4 [12] data have been used as a virtual near detector to define the absolute flux normalization in SD. Although this is not required in oscillation analyses with MD data, the associated flux simulation still accounts for the Bugey4-based normalization in order to keep the consistency between the SD and MD flux expectations. This choice does not impact the oscillation analysis precision as the Bugey-4 uncertainty (1.4%) is fully correlated between the FD and the ND, and therefore suppressed to a negligible level in a multi-detector analysis. Among the θ 13 reactor-based oscillation experiments, Double Chooz is unique in obtaining reactor-off data (2-Off) when the two cores of the Chooz site are brought down for refuelling or maintenance. Since the Daya Bay and RENO experiments are exposed to thē ν e fluxes from 6 different cores, they have been so far unable to collect data when all of them are off. Double Chooz has taken reactor-off data samples during both the SD (two samples in 2011 and 2012) and MD (two samples in 2017) periods. The corresponding livetimes and number of IBD candidates in each period and detector are listed in table 2. 
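The unoscillated ν̄e prediction outlined above can be summarized, per reactor core, by the textbook rate expression below. The formula is not written out in this text and is shown only for orientation: the thermal power P_th, the baseline L and the Bugey4-anchored mean cross section per fission ⟨σf⟩ appear in the analysis, while the number of target protons Np, the detection efficiency ε and the mean energy release per fission ⟨Ef⟩ are generic symbols added here for completeness.

```latex
% Standard expression for the expected IBD interaction rate from one reactor core
% (not quoted explicitly in the text above; shown here only as the conventional form).
R_{\nu} \simeq \varepsilon \, \frac{N_p}{4\pi L^2} \,
  \frac{P_{\mathrm{th}}}{\langle E_f \rangle} \, \langle \sigma_f \rangle
```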
In order to reduce the cosmogenic backgrounds, a time veto of 1.25 µs is applied after each detected muon. Thus, the corresponding livetimes differ in the near and far detectors during the MD period due to the different overburdens and muon rates in the experimental sites. Applying the selection cuts to the reactor-off data provides an inclusive sample of the different backgrounds, regardless of their origin, with a contribution from residual antineutrinos. After the nuclear reactors are turned off, β decays from fission products keep taking place generating a residual flux ofν e which vanishes with time. Since the reactoroff periods are short in time, the contribution from the residualν e flux is small but not negligible. However, the amount of residualν e can be estimated either by means of Monte Carlo simulations, or by performing a relative comparison of the rates observed at different Table 2. Reactor-off data samples during SD and MD. The last row shows the total background expectation between 1.0 and 8.5 MeV according to the model described in section 2, without considering the residualν e . baselines. Thereby once the residual neutrinos are estimated, the reactor-off data allow for a direct measurement of the total background remaining in theν e candidates samples. The SD reactor-off data has been used in [21] to estimate the total background rate in previous Double Chooz IBD selection procedures, as well as to confront it with the corresponding background models. In this analysis, the data have been reprocessed with the current IBD selection, yielding the rate of candidates quoted in table 2. In order to estimate the residual neutrino contribution, the Monte-Carlo approach described in [21] has been adopted. A dedicated simulation has been performed with FISPACT [22], an evolution code predicting the isotope inventory in the reactor cores. The neutrino spectrum is then computed using the BESTIOLE [18] database. The expected rate of residualν e in SD reactor-off period is found to be 0.58 ± 0.18 day −1 . Once subtracted to the observed rate of events, the measured inclusive background rate is 7.38 ± 1.07 day −1 , in good agreement with the expectation from the background model defined in section 2. The larger reactor-off statistical sample in the MD period, especially in the ND, allows for detailed comparisons with the background model adopted in this work and described in [9]. The energy spectra of the reactor-off IBD candidates in the FD and ND are shown in figure 3, superimposed to the background model. The expectation reproduces the data at high energies, but deviates below ∼3.0 MeV. This discrepancy corresponds to the presence of the residual neutrinos, whose spectrum is known to vanish above 3 MeV due to the involved parent isotopes. According to the different geometrical acceptance between the detectors (L 2 FD /L 2 ND ∼ 7), the contribution of the residualν e in the ND is significantly larger than in the FD. The observed reactor-off candidates above 8.5 MeV have been used to perform 9 Li estimations following the analysis described in section 2, being the only difference that no IBD interactions are expected. The obtained 9 Li estimates up to 8.5 MeV are (0.52 ± 1.49) day −1 in the FD and (9.33 ± 5.28) day −1 in the ND, consistently with the reactor-on expectations quoted in table 1. 
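A minimal numerical illustration of the reactor-off background extraction described above: the simulated residual ν̄e rate is subtracted from the observed candidate rate, and the uncorrelated uncertainties are combined in quadrature. The input numbers are placeholders, not the published livetimes or counts; only the procedure mirrors the text.

```python
# Hedged sketch: measured inclusive background rate = observed reactor-off candidate rate
# minus the predicted residual antineutrino rate, with errors added in quadrature.
import math

def inclusive_background(n_obs, livetime_days, r_residual, sig_residual):
    r_obs = n_obs / livetime_days
    sig_obs = math.sqrt(n_obs) / livetime_days   # Poisson uncertainty on the counts
    rate = r_obs - r_residual
    sigma = math.sqrt(sig_obs**2 + sig_residual**2)
    return rate, sigma

# Placeholder inputs:
print(inclusive_background(n_obs=60, livetime_days=7.5, r_residual=0.58, sig_residual=0.18))
```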
Reactor rate modulation analysis The measurement of the mixing angle θ 13 in reactor experiments relies on the comparison of the observed prompt energy spectrum with the expectation in absence of oscillation. Both the deficit in the observed number of candidates and the distortion of the energy spectrum are accounted for in a rate plus shape (R+S) analysis, as done in [9]. The null-oscillation expectation comes either from a reactor flux simulation or from the measurement of the flux at a short baseline (near detector), where the oscillation effect is small or still negligible. Besides theν e flux, a R+S background model in each detector is also considered, assuming a certain number of background sources. Thus, the θ 13 determination becomes backgroundmodel-dependent and the associated uncertainty contains a non-negligible contribution from the background model. JHEP01(2021)190 As an alternative approach, the RRM analysis offers a background-model-independent measurement of θ 13 . A rate-only (RO) determination of the mixing angle is implemented by comparing the observed rate of candidates (R obs ) with the expected one (R exp ) for different reactor thermal power (P th ) conditions. As the signal-to-background ratio varies depending on P th (theν e flux increases with the thermal power while the background rate remains constant), the RRM analysis provides a simultaneous estimation of θ 13 and the total (inclusive) background rate, independently of the number of background sources. Relying on rate-only information, these estimations are not affected by the energy distortion (with respect to the predicted spectrum) first reported in [6] and further discussed in [23]. The simple experimental setup of Double Chooz, collecting 2-On, 1-Off and 2-Off data, boosts the precision of the RRM results by offering a powerful lever arm to constrain the total background. The current analysis implements for the first time a multi-detector RRM technique, using reactor-on and reactor-off data collected with both the FD and the ND. As in R+S analyses, the systematic uncertainties are highly suppressed by means of the relative comparison of the observed flux at different detectors. Beyond the error suppression, the comparison of the measurements at both detectors also provides a determination of the residual neutrino rate (R rν ) in the MD reactor-off period. This ensures a precise determination of the total background rate which does not rely on Monte-Carlo simulations for the R rν estimation. The RRM technique exploits the correlation of the expected and observed rates, which follows a linear model parametrized by sin 2 (2θ 13 ) and the total background rate, B d : where the subindex d stands for either the FD or the ND, R ν is the expected rate of antineutrinos in absence of oscillation, and η osc is the average disappearance coefficient, sin 2 (∆m 2 L/4Eν e ) , which differs between reactor-on and reactor-off (R ν = R rν ) data due to the differentν e energy spectrum. This relation between R obs d and R exp d is evaluated for data taken at various reactor conditions, grouping the IBD candidates and the expected ν e in bins of the total baseline-adjusted thermal power (P * th = Nr i P i th /L 2 i , where N r = 2 is the number of cores). A fit of these data points to the model expressed in eq. (4.1) yields the determination of sin 2 (2θ 13 ) and B d . As the accidental background rate is known with a precision below 1%, the fit is performed with accidental-subtracted samples. 
As a consequence, hereafter B d refers to all background sources but the accidental one (in short, cosmogenic background). The η osc coefficient is computed, for each P * th bin, by means of simulations as the integration of the normalized antineutrino energy spectrum multiplied by the oscillation effect driven by ∆m 2 ( [24]) and the distance L between the reactor cores and the detectors. The average η osc FD (η osc ND ) value in the MD reactor-on period is computed to be 0.55 (0.11). The FD coefficient obtained for the SD reactor-on period is slightly larger (0.56) since the relative contributions ofν e from the B1 and B2 cores differ. According to the residual neutrino spectrum discussed in section 3, the average η osc FD (η osc ND ) in the reactor-off period is 0.86 (0.21). The systematic uncertainties related to the IBD detection and the reactorν e flux have been described in detail in [9]. The same values apply to R exp d , although only the normal- JHEP01(2021)190 ization uncertainties need to be accounted for. These can be divided in three groups: 1) detection efficiency (σ ), 2) reactor-onν e flux prediction (σ ν ), and 3) reactor-offν e flux prediction (σ rν ). In turn, these errors can be decomposed into their correlated and uncorrelated contributions among the detectors and the reactor cores. The correlated detection efficiency error in the FD and the ND is 0.25%, while the uncorrelated uncertainties are 0.39% (σ FD ) and 0.22% (σ ND ), respectively. Since the far detector has the same performance during the SD and MD periods, σ FD is fully correlated in the two data samples. Concerning the reactor-on flux uncertainty, only the thermal power (0.47%) and fractional fission rates (0.78%) are considered to be fully uncorrelated among reactors, in a conservative approach. This implies that the total correlated reactor error is 1.41% (fully dominated by the Bugey4 normalization), while the uncorrelated is 0.91% (σ ν ) for both B1 and B2 cores. As discussed in [11], the uncertainty on P th depends on the thermal power itself. The P th error in each P * th bin is computed according to the same procedure. However, as more of 90% of the reactor-on data are taken at full reactor power (either with 1 or 2 reactors being in operation), the dependence of σ ν with P th is negligible. In the MD period, σ ν is fully correlated among the two detectors, while σ ν is conservatively treated as fully uncorrelated between the SD and the MD data. The total correlated normalization error in the expected reactor-on IBD flux φ in the FD and the ND (σ φ ) yields 1.43%, considering the correlated detection and reactor uncertainties. Finally, the uncertainty on the residual ν e in reactor-off periods is treated differently in the SD and MD samples. For the SD sample, σ rν is set to 30% as estimated in [21]. For the MD samples, no error is considered as the IBD rate normalization of R rν is treated as a free parameter in the oscillation fit. θ 13 and background measurements The fit of the observed rates for each P * th bin in each detector is based on a standard χ 2 minimization. Apart from the free parameters sin 2 (2θ 13 ), B d and R rν (only for MD), a set of nuisance parametersᾱ are introduced in order to account for the different flux and detection uncertainties. The χ 2 function consists of reactor-on and reactor-off terms, which in turn are divided into individual terms for the FD (SD), FD (MD) and ND detector data samples. 
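The displayed equation referred to as eq. (4.1) and the inline definition of the baseline-adjusted thermal power did not survive text extraction above. Based on the surrounding description (a linear relation between observed and null-oscillation rates, parametrized by sin²(2θ13), the background rate Bd and the average disappearance coefficient for detector d), they presumably take the form below; the exact published notation may differ.

```latex
% Reconstruction of the RRM linear model (eq. 4.1) and of the baseline-adjusted
% thermal power, inferred from the surrounding prose rather than copied verbatim.
R^{\mathrm{obs}}_{d} \;=\; B_{d} \;+\;
  \left(1 - \sin^2(2\theta_{13})\,\langle \eta^{\mathrm{osc}}_{d}\rangle\right) R^{\nu}_{d},
\qquad
P^{*}_{\mathrm{th}} \;=\; \sum_{i=1}^{N_r} \frac{P^{i}_{\mathrm{th}}}{L_i^{2}}
```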
In addition, penalty or pull terms are added to constrain each one of the nuisance parameters to the uncertainties quoted in section 4. Assuming Gaussian-distributed errors, the reactor-on χ 2 for each bin in the detector sample d is defined as: where σ stat d is the statistical error and α φ , α ν d,r , and α d are the nuisance parameters accounting for the uncertainties σ φ , σ ν and σ d , respectively. As σ ν is treated as fully uncorrelated between SD and MD, specific α ν d parameters are used in both periods: α ν FD(SD),r = α ν FD(MD),r = α ν ND,r ≡ α ν r . The weights w d,r account for the relative fraction of antineutrinos generated in the reactor r and detected in the detector d. These values are computed JHEP01(2021)190 according to the Monte Carlo simulation considering the reactor powers and the baselines. Due to the low statistics in the reactor-off periods, specially in the SD one, the uncertainty in the sample of selected events is considered to be Poisson-distributed. The reactor-off χ 2 is then defined as binned Poisson likelihood following a χ 2 distribution: where N obs is the number of observed IBD candidates, C is the number of cosmogenic background events (C = B × T , being T the reactor-off live time), N exp is the expected number of antineutrinos (N exp = R exp × T ), and α rν d is the parameter accounting for the error on the residualν e expectation. While in the SD reactor-off data this parameter is constrained by σ rν , in the MD reactor-off it is left free, but correlated between the FD and the ND according to the ratio of reactor-averaged baselines L d : α rν ND = L 2 FD /L 2 ND × α rν FD . Finally, The last term of the χ 2 incorporates Gaussian pulls for theᾱ parameters according to their associated uncertainties: The number of reactor-on bins considered for each data sample (N b ) has been set according to the available statistics. As done in [11], the SD data is divided in 6 P * th bins, while the MD data in 4 bins for both detectors. The overall χ 2 function used for the fit can then be expressed as: The results of the (sin 2 (2θ 13 ), B FD , B ND ) fit are shown in figure 4, in terms of the observed versus the expected rate, and of the 68.4%, 95.5% and 99.7% confidence level (C.L.) regions. The RRM yields best-fit values of sin 2 (2θ 13 ) = 0.094 ± 0.017, B FD = 3.75 ± 0.39 day −1 and B ND = 27.1 +1.4 −2.1 day −1 , with χ 2 /dof = 11.0/14. The backgroundindependent determination of θ 13 is consistent with all previous results form Double Chooz, being the precision competitive with the one achieved by the R+S analysis in [9]. The values of the total cosmogenic backgrounds in the FD and ND are also consistent with the sum of the 9 Li and fast-neutron background expectations quoted in table 1, respectively, with similar associated errors. The best-fit for the residual neutrinos is brought to the physical limit of 0, with R rν FD < 0.64 day −1 and R rν ND < 4.2 day −1 at 90% C.L. The precision on θ 13 can be improved by introducing the constraint of the cosmogenic background estimates into the fit. The background constraint is added to the χ 2 function as two extra Gaussian priors: where B exp and σ B stand for the central value and uncertainty of the background expectations. These are built considering the fast-neutron determination from reactor-on data (table 1) and the combination of the 9 Li determinations with reactor-on and reactoroff data: B exp FD = 3.33 ± 0.29 day −1 and B exp ND = 22.57 ± 1.55 day −1 . 
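To make the structure of the rate-only fit concrete, the toy sketch below fits the straight-line model R_obs = B + (1 − sin²(2θ13)·η_osc)·R_ν over a handful of thermal-power bins with a plain Gaussian χ². It is a deliberately simplified stand-in for the analysis described above: pull terms, the Poisson reactor-off term and inter-detector correlations are omitted, and all numbers are invented pseudo-data.

```python
# Toy, single-detector, pull-free version of an RRM-style rate-only fit (illustrative only).
import numpy as np
from scipy.optimize import minimize

def chi2(params, r_nu, r_obs, sigma, eta):
    s13, bkg = params                                  # sin^2(2theta13) and background rate
    model = bkg + (1.0 - s13 * eta) * r_nu
    return np.sum(((r_obs - model) / sigma) ** 2)

eta   = 0.55                                           # FD-like average disappearance coefficient
r_nu  = np.array([0.0, 20.0, 40.0, 60.0, 80.0])        # expected unoscillated rates per bin (day^-1)
rng   = np.random.default_rng(1)
r_obs = 3.5 + (1 - 0.09 * eta) * r_nu + rng.normal(0, 1.0, r_nu.size)  # pseudo-data
sigma = np.full(r_nu.size, 1.0)

fit = minimize(chi2, x0=[0.1, 3.0], args=(r_nu, r_obs, sigma, eta), method="Nelder-Mead")
print(fit.x)   # best-fit (sin^2(2theta13), background rate in day^-1)
```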
The results of the corresponding sin 2 (2θ 13 ) fit are presented in figure 5. The fit yields a best-fit value of sin 2 (2θ 13 ) = 0.095 ± 0.015, with χ 2 /dof = 13.5/16. As expected due to the consistency between the background estimates and the reactor-off data, the central value is not significantly modified with respect to the background-independent θ 13 result. However, the combination of the background model and the reactor-off information allows for a more precise determination of the residual neutrinos, yielding now R rν FD = 0.48 ± 0.28 day Table 3. Cosmogenic background expectations and best-fit values in the FD and the ND. For comparison purposes, the last row shows the best-fit values obtained in the R+S analysis [9], restricted to the 1.0-8.5 MeV energy window. R rν ND = 3.18 ± 1.85 day −1 . According to these non-vanishing values, the best-fit parameters of B FD and B ND (3.37 ± 0.24 day −1 and 23.49 ± 1.40 day −1 , respectively) are slightly reduced with respect to the background-model-independent fit results. Although fully consistent, the best fit of B ND is found to be ∼7% higher than the one obtained in [9] (21.86 ± 1.33 day −1 in the 1.0-8.5 MeV energy range). This difference is mostly driven by the background constraint and the use of the new MD reactor-off data, not considered in the R+S analysis yet. In table 3, a summary of the background expectations and the best-fit values is presented. Due to the anti-correlation between sin 2 (2θ 13 ) and the background in the ND (visible in bottom plot of figure 4), the larger best-fit value of B ND obtained in the current RRM analysis pulls down the central value of θ 13 with respect to the R+S result (sin 2 (2θ 13 ) = 0.105 ± 0.014). Finally, in order to provide a measurement of the IBD rate normalization, the parameter α φ can be left free in the fit by removing the corresponding pull term in the χ 2 . Since the correlated detection efficiency is known with a negligible uncertainty (0.25%), this parameter provides effectively the relative normalization with respect to the central value of the reactor-on flux simulation. This central value is defined by the mean cross- JHEP01(2021)190 section per fission (see [9] for details), σ f , measured by Bugey-4. Once corrected for the specific averaged fuel compositions of the Double Chooz reactor cores, it is computed to be (5.75±0.08)×10 −43 cm 2 /fission. The best-fit value of α φ yields 0.04±0.86%, thus being fully consistent with the expectation but reducing the 1.43% uncertainty on the IBD rate normalization σ φ . According to this result, the best-fit of sin 2 (2θ 13 ) is not significantly modified. Summary and conclusions The simple experimental setup of Double Chooz, consisting of only two detectors and two reactors, allows for a simultaneous determination of θ 13 and the total background rates. The RRM analysis relies on the rate of observedν e interactions in data samples collected at different total reactor powers. The comparison of such rates with the null-oscillation Monte Carlo expectations provides a background-independent measurement of sin 2 (2θ 13 ), as well as inclusive background rates in the far and near detectors which do no depend of any a priori assumptions on the individual background sources. 
This approach is intrinsically different from the usual R+S θ 13 oscillation analyses implemented in reactor experiments, which are based on background models considering a number of background sources and estimating the corresponding rates and energy spectra from reactor-on data. In this work, a multi-detector RRM analysis is implemented for the first time. As in R+S analyses, the relative comparison of the rates observed at different baselines leads to a major reduction of the involved systematic uncertainties. In particular, the correlated detection and reactorν e flux errors cancel out, while the uncorrelated flux uncertainty is significantly suppressed. Apart from boosting the precision in the θ 13 measurement, the multi-detector measurement of the IBD interactions allows for a determination of the observed IBD rate normalization. The current oscillation analysis also uses for the first time reactor-off data samples for both the FD and ND, offering a powerful handle to constrain the backgrounds. Among the θ 13 reactor experiments, Double Chooz is the only one with available reactor-off samples, thus offering a unique cross-check of the background models. The RRM oscillation fit relies on the minimization of a χ 2 function consisting of reactor-on and reactor-off terms, as well as penalty terms constraining the nuisance parameters accounting for the systematic uncertainties to their estimated values. The errors considered in the fit are those impacting the expected IBD rates, namely, the detection efficiency and the reactor flux normalization uncertainties. The (sin 2 (2θ 13 ), B FD , B ND ) fit yields a background-independent value of θ 13 which is consistent with previous Double Chooz results: sin 2 (2θ 13 ) = 0.094 ± 0.017. The precision achieved by the RRM analysis is competitive with that one obtained in the R+S fit presented in [9] (sin 2 (2θ 13 ) = 0.105 ± 0.014), relying on a reactor-on background model. The best fit values of the total cosmogenic background rates in the FD and ND, B FD = 3.75 ± 0.39 day −1 and B ND = 27.1 +1.4 −2.1 day −1 , are also consistent with the background estimates. Thus, these expectations can be added to the RRM in order to improve the precision of the oscillation results: sin 2 (2θ 13 ) = 0.095 ± 0.015. The limited reduction on the error is due to the dominant role of the detection and reactor flux systematic uncertainties. The compatibility of the background-model-dependent R+S and the RRM results, as well as the consistency JHEP01(2021)190 between the reactor-off data and the background models, confirms the robustness of the Double Chooz oscillation analyses. Beyond the θ 13 result, the RRM fit is used to measure the observedν e rate normalization. The best-fit value yields a 0.04 ± 0.86% deviation with respect to the flux normalization predicted by Bugey-4, thus being fully consistent.
2023-01-21T14:59:59.273Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "4c181f58bf87ab119a4cf3b6c03559d0fbbf85b4", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP01(2021)190.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "4c181f58bf87ab119a4cf3b6c03559d0fbbf85b4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
150467170
pes2o/s2orc
v3-fos-license
Implementation of free inquiry learning model to establish 21st century skills Free inquiry learning model through field trip activity on Invertebrate Zoology and Macro algae courses for prospective biology teacher’s need to be done, as efforts to establish 21st-century skills to compete in globalization era. The skill in the 21st century includes three domains of competence: cognitive, interpersonal, and intrapersonal. The research purpose to describe cognitive, intrapersonal and interpersonal competencies for a prospective biology teacher. This research uses a quantitative approach with survey method for biology students. Data collection through observation and filling questionnaire. Data were analyzed using descriptive analyse. The results showed that cognitive competence includes cognitive process and strategies, knowledge and creativity with a range of values 80-98, while Likert analysis results on intrapersonal and interpersonal competence showed positive results. The conclusion that free inquiry learning model through field trip activity can establish 21st-century competence needed for a prospective biology teacher. Introduction Natural Science is one of the sciences studied in college natural science studies the phenomena of nature systematically, based on experimental results and observations. Natural science is science that deals with natural phenomena and material, which is systematic and generally accepted as a collection of observations and experiments [1]. The study of natural science is a way of finding out about nature systematically, so that science is not only a mastery of a collection of knowledge, facts, concepts or principles, but also a good discovery process through investigation, experimentation, observation, and so on. Students' scientific attitudes in science learning can be developed through discussion, experimentation, observation, simulation, or project activities in the field [2]. Scientific attitude can be defined as, "open mindedness, a desire for accurate knowledge, confidence in procedures for seeking knowledge and the expectation that the solution of the problem will come through the use of verified knowledge" [3]. Science learning on Invertebrate Zoology and Macro algae can be a program for students to study the natural environment. The learning process should focus on engaging immediate experience, such as a field trip for the development of competence to explore and understand nature scientifically. By inviting students to interact directly and understand nature through the process to find something, and to do something, it helps students gain a deeper understanding. Science learning should be conducted in scientific inquiry to cultivate the ability to think, work, be scientific and communicate them as an important aspect of life skills [4]. Biology as a part of science subjects requires direct experience in learning activities. Appropriate learning model is free inquiry with field trip method. Some research results show that inquiry and field trips have a positive impact in the learning process. Research states that inquiry can improve the skills of the science process, understand the concept and scientific attitude of students [5]. Meanwhile, research to determine the influence of field trips on students' creative thinking skills and practice on art education, suggest trip to nature and industrial locations can assist students in developing creativity and practice in education [6]. 
Based on some research results and observation results, the researcher conducted research by applying guided inquiry learning model through field trip activities to establish the 21st century competencies needed for students as Biology teacher candidates. One of the challenges of becoming a teacher today is the need for 21st century skills. They are needed to be further developed through research of cognitive competence (consist of cognitive, knowledge and creativity processes), intrapersonal competence (consist of intellectual openness, work ethic) and interpersonal competence (consist of teamwork, collaboration and leadership) [7]. Method This research uses quantitative approach of survey method. The survey was conducted on the beach of Santolo, Pamengpeuk Garut, West Java. Subject in this research are the student in Syekh Nurjati Institute of Cirebon3 rd semester who took the courses of Invertebrate Zoology and Macroalgae. The study was conducted for 4 weeks. Students are divided into groups by number per group of 6 students, observe of marine biota and collect specimens on Santolo beach. The results of the collection of specimens are brought to the laboratory for the identification process. Identification process by identifying each morphological character and comparing with related reference. The results of identification are written in the form of a report journal and a collection of preserved specimens of marine biota and presented the results. Data collection to measure the 21 st century skills of cognitive competence (include cognitive processes, knowledge and creativity,) is a journal of preserved reports and collections, while intrapersonal competencies (include intellectual openness, work ethic and selfevalution) and interpersonal competencies (include teamwork, collaboration and environmental awareness) through questionnaires and analyzed using Likert scale. The results were analyzed descriptive analyzed on all 21 st century competencies. Table 1 shows the result of the research of 21 st century skill formation on cognitive competence shows student of Biology teacher training program has creativity and innovation by presenting the result of field trip in the form of atlas and specimen of marine biota as media, which can facilitate in studying it. In 21 st century skill the ability to find and organize information quickly and efficiently a critical skill. The free inquiry model in the learning process can shape and enhance creativity [8]. This ability is required by a prospective teacher in providing creative and innovative teaching. Furthermore, on the knowledge indicator and knowledge process of students are based on the report journal of marine biota research results. In the cognitive process obtained lower results, shown in the ability of critical thinking, analysis and interpretation in the manufacture of journal reports in the form of identification and classification of invertebrates and macro algae does not cover the whole aspect. Nevertheless, the process can support increased cognitive competence (content skill) because prospective teachers must complete data analysis of various invertebrate and macro algae specimens from many sources of information with ICT capabilities. Thus they gain greater insight and information about the scholarship. Furthermore, in their lectures presented the results of their research journals as a learning process for dissemination of information and knowledge. 
Thus the implementation of free inquiry through field trips can establish cognitive competence in critical thinking, creativity, ICT literacy and communication, which form the core of 21st century skills [9]. 21st century skills must be mastered by a prospective teacher, so that they have the knowledge and skills that can be integrated into classroom teaching to meet the learning objectives and challenges of the century. Furthermore, they are expected to become agents of change in the implementation of 21st century knowledge and skills in the curriculum and its subjects. The implementation of 21st century knowledge and skills in science learning includes digital literacy, inventive thinking, productivity and effective communication [10]. Intrapersonal competencies Descriptive analysis of the data using a Likert scale showed a positive response, indicating that the implementation of the free inquiry learning model can establish intrapersonal character. Ordered from highest to lowest, the intrapersonal characters are work ethic, intellectual openness and self-evaluation (Figure 1). Work ethic includes integrity, being able to direct and position oneself according to conditions (self-direction), achievement orientation and professionalism. A good work ethic develops into a conscientious character, which was seen in the enthusiastic attitude of the students during the collection and identification of marine biota. Intellectual openness includes curiosity, intellectual interest, continuous learning, personal and social responsibility, and appreciation for diversity. Intellectual openness was shown by the students' openness during and after learning, and by their innovative and creative attitude in preparing well-preserved specimens of marine biota. In addition, intrapersonal competencies that can be formed through free inquiry learning include the students' capacity for self-regulation and self-evaluation, although the results indicate a neutral attitude of the students toward self-regulation, because some students still lack emotional stability. Interpersonal competencies The results indicate that the free inquiry learning model can positively form interpersonal skills in prospective biology teachers. These interpersonal skills include teamwork, collaboration skills and environmental awareness (Figure 2). Teamwork and collaboration skills in the free inquiry learning model are the main factors in the attitude of agreeableness. This was seen during and after learning: the students were warm toward their group mates, divided the tasks among the individuals in the group and helped each other, so that the teamwork went well. Teamwork and collaboration can improve interpersonal skills so that students are expected to succeed in a multicultural world, which is a challenge of the 21st century. In addition, during and after learning through the field trip the students showed a better awareness of the environment, so that they were not only able to identify marine biota but also to utilize and conserve them well, bringing out conservation skills in the prospective biology teachers. Inquiry-based learning engages learners in how to think and in understanding concepts more deeply through a search-and-find process [11], so that students gain a good conceptual understanding that integrates the material with its interrelationship to the surrounding environment through ideas and abstractions about how to approach problems, reason and make complex plans [12].
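As a concrete illustration of the descriptive Likert-scale analysis behind Figures 1 and 2, a short sketch is given below. The indicator names and scores are invented placeholders rather than the study's questionnaire data, and the cut-offs used to label a mean response as negative, neutral or positive are an assumption for display purposes only.

```python
# Hedged sketch of a descriptive Likert-scale summary (placeholder data only).
import pandas as pd

responses = pd.DataFrame({
    "work_ethic":            [5, 4, 4, 5, 3, 4],
    "intellectual_openness": [4, 4, 3, 4, 4, 3],
    "teamwork":              [5, 5, 4, 4, 5, 4],
})  # one row per student, 1-5 Likert scores per indicator

summary = responses.agg(["mean", "std"]).T
summary["interpretation"] = pd.cut(summary["mean"], bins=[0, 2.5, 3.5, 5],
                                   labels=["negative", "neutral", "positive"])
print(summary)
```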
Conclusion Based on the results of the research, it can be concluded that the free inquiry learning model implemented through field trip activity can establish the 21st century competencies, namely the cognitive, intrapersonal and interpersonal competencies needed by prospective biology teachers. Within the cognitive competence, the creativity of the biology teacher candidates is better developed than their knowledge and cognitive processes. Nevertheless, the cognitive processes can help improve the knowledge skills of the prospective teachers, mainly through searching for reference resources with ICT skills. In the intrapersonal competence, the work ethic of the biology teacher candidates, shown through their integrity during the field trip activities, is more prominent than their intellectual openness and self-evaluation. Interpersonal competence is shown best in the environmental awareness aspect, compared with the communication and teamwork aspects. Thus, the implementation of free inquiry can form the 21st century skills of biology teacher candidates
2019-05-13T13:05:26.008Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "0b988b6598aa83446b7489296592f4f72df4cbd5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1157/2/022118", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "18c3500123fa9d5c40316193f969314971007610", "s2fieldsofstudy": [ "Biology", "Education" ], "extfieldsofstudy": [ "Physics", "Sociology" ] }
2673976
pes2o/s2orc
v3-fos-license
SspA up-regulates gene expression of the LEE pathogenicity island by decreasing H-NS levels in enterohemorrhagic Escherichia coli Background Enterohemorrhagic Escherichia coli (EHEC) colonizes the intestinal epithelium and causes attaching and effacing (A/E) lesions. Expression of virulence genes, particularly those from the locus of the enterocyte effacement (LEE) pathogenicity island is required for the formation of a type three secretion system, which induces A/E lesion formation. Like other horizontally acquired genetic elements, expression of the LEE is negatively regulated by H-NS. In the non-pathogenic Escherichia coli K-12 strain the stringent starvation protein A (SspA) inhibits accumulation of H-NS, and thereby allows de-repression of the H-NS regulon during the stationary phase of growth. However, the effect of SspA on the expression of H-NS-controlled virulence genes in EHEC is unknown. Results Here we assess the effect of SspA on virulence gene expression in EHEC. We show that transcription of virulence genes including those of the LEE is decreased in an sspA mutant, rendering the mutant strain defective in forming A/E lesions. A surface exposed pocket of SspA is functionally important for the regulation of the LEE and for the A/E phenotype. Increased expression of ler alleviates LEE expression in an sspA mutant, suggesting that the level of Ler in the mutant is insufficient to counteract H-NS-mediated repression. We demonstrate that the H-NS level is two-fold higher in an sspA mutant compared to wild type, and that the defects of the sspA mutant are suppressed by an hns null mutation, indicating that hns is epistatic to sspA in regulating H-NS repressed virulence genes. Conclusions SspA positively regulates the expression of EHEC virulence factors by restricting the intracellular level of H-NS. Since SspA is conserved in many bacterial pathogens containing horizontally acquired pathogenicity islands controlled by H-NS, our study suggests a common mechanism whereby SspA potentially regulates the expression of virulence genes in these pathogens. Background Enterohemorrhagic Escherichia coli (EHEC) O157:H7 is an emerging food-and waterborne-enteric pathogen causing diarrhea, hemorrhagic colitis and the potentially fatal complication hemolytic uremic syndrome in humans [1,2]. EHEC colonization of enterocytes of the large bowel is characterized by an intestinal attaching and effacing (A/E) histopathology, which is manifested by a localized degeneration of brush border microvilli and an intimate attachment of bacteria to actin-rich pedestal-like structures formed on the apical membrane directly beneath adherent bacteria [3]. The A/E lesion is due to the activity of a type III secretion system (T3SS) mainly encoded by the 35-45 kb locus of enterocyte effacement pathogenicity island (hereafter named LEE), which is conserved in some EHEC isolates and other A/E pathogens such as enteropathogenic Escherichia coli (EPEC), atypical EPEC, rabbit EPEC, Escherichia albertii and Citrobacter rodentium [4][5][6][7]. The LEE pathogenicity island comprises at least 41 genes that mainly are located in five major operons (LEE1-5). The LEE encodes a TTSS, translocator proteins, secreted effectors, regulators, an intimin (adhesin) and a translocated intimin receptor. The LEE-encoded regulators Ler, Mpc, GrlR and GrlA are required for proper transcriptional regulation of both LEE-and non-LEE-encoded virulence genes in response to environmental cues [8][9][10][11][12]. 
The LEE was acquired by horizontal gene transfer [13] and is regulated by both generic E. coliand pathogenspecific transcription factors. Consequently, the regulation of the LEE reflects characteristics of such genetic elements (For review see [11,14]). Silencing of xenogeneic DNA in bacterial pathogens under conditions unfavorable for infection is important to ensure bacterial fitness [15]. H-NS, which is an abundant pleiotropic negative modulator of genes involved in environmental adaptation and virulence [16][17][18][19][20], is a major silencing factor of horizontally acquired genes [21,22]. H-NS silences genes in the H-NS regulon by various mechanisms. Binding of H-NS to regulatory regions of these genes prevents RNA polymerase from accessing and escaping from promoter DNA, which represents two different mechanisms used by H-NS to silence gene expression (see [23][24][25] and references therein). H-NS is also a major transcriptional modulator of the LEE pathogenicity island, where it negatively affects the expression of LEE1-5, map and grlRA [26][27][28][29][30][31]. Further, H-NS binds to regulatory sequences upstream of virulence-associated genes located outside of the LEE including those encoding the long polar fimbriae (lpf) required for intestine cell adherence and enterohemolysin (ehx) [32,33]. The expression of EHEC virulence genes including those encoded by the LEE is derepressed from the H-NS-mediated transcriptional silencing under physiological conditions that EHEC encounters during infection. Also, LEE expression is growth phase-dependent with maximum expression in early stationary phase [34]. H-NS-mediated silencing of transcription is overcome by the action of DNA-binding H-NS paralogues such as the LEE1-encoded global transcriptional regulator Ler (For review see [35]). Ler promotes the expression of many H-NS-repressed virulence genes including those of LEE1-5, grlRA and non-LEE-encoded virulence genes such as lpf and the virulence plasmid pO157-encoded mucinase stcE [26,28,31,[36][37][38][39]. Thus, Ler antagonizes H-NS in the regulation of many virulence genes, which belong to both the H-NS and Ler (H-NS/Ler) regulons. The E. coli stringent starvation protein A (SspA) is a RNA polymerase-associated protein [40] that is required for transcriptional activation of bacteriophage P1 late genes and is important for survival of E. coli K-12 during nutrient depletion and prolonged stationary phase [41][42][43]. Importantly, SspA down-regulates the cellular H-NS level during stationary phase, and thereby derepress the H-NS regulon including genes for stationary phase induced acid tolerance in E. coli K-12 [44]. A conserved surface-exposed pocket of SspA is important for its activity as a triple alanine substitution P84A/H85A/ P86A in surface pocket residues abolishes SspA activity [45]. SspA is highly conserved among Gram-negative pathogens [44], which suggests a role of SspA in bacterial pathogenesis. Indeed, SspA orthologs affect the virulence of Yersinia enterocolitica, Neisseria gonorrhoeae, Vibrio cholerae, Francisella tularensis and Francisella novicida [46][47][48][49][50][51]. Since E. coli K-12 SspA is conserved in EHEC where H-NS negatively modulates virulence gene expression, we asked the question of whether SspA-mediated regulation of H-NS affects EHEC virulence gene expression. Here we study the effect of SspA on the expression of LEE-and non-LEE-encoded virulence genes and its effect on H-NS accumulation in EHEC. 
Our results show that in an sspA mutant elevated levels of H-NS repress the expression of virulence genes encoding the T3SS system rendering the cells incapable of forming A/E lesions. Thus, our data indicate that SspA positively regulates stationary phase-induced expression of H-NS-controlled virulence genes in EHEC by restricting the H-NS level. SspA positively affects transcription of EHEC virulence genes To evaluate the effect of sspA on virulence gene expression in EHEC during the stationary phase we constructed an in-frame deletion of sspA in the E. coli O157:H7 strain EDL933 ATCC 700927 [52] and measured transcription of LEE-(LEE1-5, grlRA and map) and non-LEE-encoded (stcE encoded by pO157) genes ( Figure 1). Wild type and sspA mutant strains were grown in LB medium to stationary phase with similar growth rates (data not shown). Total RNA was isolated and transcript abundance was measured by primer extension analyses using labeled DNA oligos specific to each transcript of interest and ompA, which served as internal control for total RNA levels. Results revealed that transcript levels of LEE1-5, grlRA, map and stcE were reduced by up to 8-fold in the sspA mutant compared to wild type ( Figure 1A-H, lanes 1 and 2). The expression of these genes was restored when the sspA mutant was supplied with wild type sspA in trans from pQEsspA [43] (Figure 1A-H, lane 3). However, the expression of ler and other virulence genes tested (grlRA, espZ, sepL and stcE) remained repressed when the sspA mutant strain was supplied with mutant sspA from pQEsspA84-86 [45], which expresses SspA containing the tri-ple alanine substitution in the surface-exposed pocket ( Figure 1I and data not shown). These results indicate that SspA positively affects stationary phase-induced expression of both LEE-and non-LEE-encoded virulence genes in EHEC. Moreover, the mode of action of SspA is likely similar in E. coli K-12 and EHEC as the surface-exposed pocket of SspA also is required for SspA to affect the expression of EHEC virulence genes. SspA activates virulence gene expression by reducing the H-NS level Reduced virulence gene expression during the stationary phase could also be due to an increased level of H-NS in the EHEC sspA mutant as observed for H-NS-regulated genes in the E. coli K-12 sspA mutant [44]. We measured the levels of H-NS in stationary phase cells of wild type and sspA mutant EHEC strains by western analysis (Figure 3). Indeed, the H-NS level was two-fold higher in the sspA mutant than in the wild type, whereas the level of Fis as a control was not increased in the mutant compared to wild type. These results indicate that SspA activates the expression of EHEC virulence genes by decreasing accumulation of H-NS. Notably, such relative small change in H-NS levels was previously demonstrated to drastically affect the expression of the H-NS regulon involved in stationary phase-induced acid tolerance of E. coli K-12 [44]. Genetic analysis further indicated that hns mainly is epistatic to sspA in regulating H-NS-repressed virulence genes in EHEC ( Figure 4). We deleted hns in EHEC wild type and sspA mutant strains as described in Methods. The EHEC hns mutant derivatives had a mucoid phenotype and a longer generation time (g) than wild type (g, WT~2 7, g, hns~3 6 min and g, hns,sspA~4 5 min). Therefore, at least two independent clones of each hns mutant derivative were used in each experiment to ensure reproducible results. 
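The fold changes reported in this section (transcript levels relative to the ompA control in the primer-extension experiments, and H-NS levels relative to the Fis loading control on the western blots) all rest on the same normalization step. The sketch below shows that step with invented band intensities; it is not the quantification pipeline used in the paper.

```python
# Hedged sketch: ratio of signals normalized to an internal control (e.g. ompA for
# primer extension, Fis for western blots); intensities are arbitrary placeholders.

def normalized_ratio(target_a, control_a, target_b, control_b):
    """(target/control) in sample A divided by (target/control) in sample B."""
    return (target_a / control_a) / (target_b / control_b)

# e.g. wild-type vs sspA-mutant transcript signal, placeholder numbers (~8-fold):
print(normalized_ratio(target_a=8000, control_a=1000, target_b=1050, control_b=1050))
```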
The expression of LEE1-5, grlRA, map and stcE was between 4 and 26-fold higher in an isogenic hns null mutant than in wild type ( Figure 4A-H, compare lane 3 with 1), which is consistent with the fact that there is enough H-NS in stationary phase wild type cells (Figure 3) to partially repress those virulence genes. Although the effect of hns on cell growth will be complex, an uncontrolled expression of the LEE genes and the T3SS is likely to be detrimental to the fitness of the cell [15]. Moreover, the expression level of EHEC virulence genes in the hns sspA double mutant was within the range of the level observed for the hns single mutant ( Figure 4A-H, compare lane 4 with 3). Thus, our data strongly indicate that SspA is located upstream of H-NS in the regulatory cascade controlling the virulence gene expression in EHEC. However, SspA might also directly activate virulence gene expression in addition to controlling H-NS levels. SspA is required for cell adherence and A/E lesion formation Since the expression of LEE-encoded genes involved in A/E lesion formation was decreased in a sspA mutant and increased in a hns sspA double mutant (Figures 1 and 4), we predicted that SspA affects lesion formation in a H-NS-dependent manner. To address this, we infected HEp-2 cells with wild type, sspA, hns and hns sspA mutant derivatives of EDL933, and determined the ability of these strains to form A/E lesions in vitro. To this end we used the qualitative fluorescent actin staining (FAS) assay [53], where actin filaments are stained with FITC-phalloidin to detect A/E lesions that are visualized as condensed actin directly beneath adherent bacteria. Whereas infection with wild type EHEC was associated with the appearance of microcolonies of adherent bacteria and A/E lesion formation on 70% of the HEp-2 cells (Figure 5A), the sspA mutant was unable to adhere and form A/E lesions ( Figure 5B) as determined from examination of more than 50 HEp-2 cells. The A/E lesion phenotype of the sspA mutant was restored when complementing with sspA in trans from pQEsspA ( Figure 5C), whereas mutant sspA supplied from pQEsspA84-86 ( Figure 5D) did not complement pedestal formation of the sspA mutant, verifying that the surfaceexposed pocket is functionally important for SspA to affect virulence of EHEC. Consistent with the finding that SspA regulates LEE expression through H-NS, the sspA mutant restored the ability to form A/E lesions in the absence of hns in the hns sspA background as in the hns single mutant ( Figure 5E-F). However, the hns sspA double mutant seemed to form A/E lesions to a higher degree than the hns single mutant, which indicates that SspA also affects the expression of virulence genes involved in A/E lesion formation independently of the H-NS-mediated regulation. Moreover, the finding that the cell adherence ability of the sspA mutant was restored when deleting hns indicates that a factor negatively regulated by H-NS is required for the adherence of EHEC to epithelial cells. The long polar fimbria, LpfA, which is part of the H-NS/Ler regulon and is required for cell adherence of EHEC [32,54,55], might represent such a factor. Altogether, the cell adherence and A/E lesion phenotypes of the sspA mutant are consistent with the finding that SspA positively regulates the expression of genes encoding the T3SS including those of the LEE by negatively affecting H-NS levels. 
The correlation between the effects of sspA on the transcription of H-NS/Ler-regulated virulence genes and on A/E lesion formation upon infection of HEp-2 cells supports the conclusion that SspA upregulates the expression of LEE and other virulence genes by reducing the accumulation of H-NS in the cell. A reduced cellular H-NS level mediated by SspA will derepress the H-NS regulon and thereby allow the expression of transcriptional activators such as Ler and GrlA. These two activators then form a positive transcriptional regulatory loop partially by preventing H-NS-mediated repression [28]. Accumulation of Ler will in turn antagonize H-NS function and with that enhance the expression of virulence genes controlled by Ler [26]. At present, the molecular mechanism behind SspA-mediated regulation of the H-NS level during stationary phase and in infection to facilitate virulence gene expression in EHEC is unknown. Also, it remains to be determined whether SspA directly affects transcription of virulence genes as is the case for SspA in Francisella tularensis, where SspA along with two other transcription factors and ppGpp activates transcription to link the nutritional status to virulence gene expression [56,57]. We observed that SspA positively affects additional H-NS-controlled virulence traits of EHEC such as stationary phase-induced acid tolerance (data not shown), which enables survival of the pathogen during passage through the low pH environment of the human gastrointestinal tract, and thereby contributes to a low infectious dose [58,59]. Also, sspA positively affects EHEC motility (data not shown), which could influence virulence as motility enables the pathogen to penetrate the intestinal mucus layer during colonization of host cells. This further supports an important role of sspA in EHEC virulence. Further experiments studying wild type and sspA mutant derivatives of the A/E pathogen Citrobacter rodentium in a mouse model could help determine whether sspA is required for virulence in vivo. Conclusions We established an important role of SspA in the regulation of LEE-and non-LEE-encoded virulence factors of a T3SS, which is important for A/E lesion formation by EHEC. SspA downregulates H-NS levels allowing the expression of EHEC virulence genes, which are part of the H-NS/Ler regulon. Virulence genes in many bacteria are horizontally acquired genetic elements and subject to repression by H-NS. Thus, our study indicates that SspA potentially plays an important role in the pathogenicity of many bacterial pathogens in general. Standard procedures Standard DNA techniques, agar plates and liquid media were used as described [60]. Restriction endonucleases, T4 DNA polynucleotide kinase-and ligase (New England Biolabs) and the Expand High Fidelity PCR System (Roche Applied Sciences) were used according to manufacturer's instructions. DNA sequencing was performed by the National Cancer Institute DNA Sequencing MiniCore facility. Bacteria were grown at 37°C in LB or DMEM (Invitrogen #11885) media supplemented with ampicillin (100μg/ml), chloramphenicol (25 μg/ml) or kanamycin (25 μg/ml) as needed. HEp-2 cells (ATTC # CCL-23) were cultured in DMEM supplemented with 10% fetal bovine serum (FBS), 100 U/ml penicillin and 100 μg/ml streptomycin at 37°C in 5% CO 2 . Strain and plasmid constructions Oligonucleotides used in this study are listed in Table 1. Gene deletions were constructed in EHEC O157:H7 EDL933 strain ATCC 700927 (Perna et al. 
2001) by Lambda Red-mediated recombination using linear DNA fragments as described [61]. An in-frame deletion of sspA was created as previously described [44] resulting in strain DJ6010 (ATCC 700927 ΔsspA). The DNA fragment used for making the sspA deletion was amplified by PCR from pKD13 with primers PKD13sspAUS2 and PKD13sspADS. An hns deletion mutant derivative of strain ATCC 700927 was made by inserting a chloramphenicol resistance-encoding cat cassette, which was PCR amplified from pKD3 [61] using primers Δhns92-1 and Δhns92-2, 276 nt from the hns translation initiation codon (strain DJ6011). An sspA hns double mutant (DJ6012) was constructed by introducing the Δhns::cat deletion into strain DJ6010. All gene deletion constructs were verified by PCR amplification using primer sets sspABUS/sspABDS and hnsUS2/hnsDS2. In addition, Western blot analysis using polyclonal antibodies specific to the respective proteins confirmed the sspA and hns mutant strains. Plasmid pACYCler (pDJ610) contains a~800 bp DNA fragment encoding ler expressed from its two native promoters cloned into the HindIII/ BamHI sites of pACYC184. The DNA fragment was PCR amplified from EDL933 genomic DNA using oligos lerUS2/lerDS2. RNA isolation DMEM is known to enhance the expression of the T3SS, which was detrimental to growth of the hns mutant EHEC derivatives (data not shown) that already exhibit increased T3SS expression in the absence of H-NS-mediated repression. Therefore, virulence gene expression was monitored in cells grown in LB, where a mid-level expression of the T3SS occurs. Overnight cultures of wild type and mutant derivatives of EDL933 ATCC 700927 were diluted 1:1000 in LB, supplemented with antibiotics if necessary, and grown aerobically at 37°C to an optical density at 600 nm (OD 600 ) of~3.0 (stationary phase). Samples of the cultures corresponding to~7.5 × 10 9 cells were collected and RNA was stabilized immediately by addition of RNAprotect bacteria reagent according to manufacturer's protocol (QIAGEN). Total RNA was purified using MasterPure™ total RNA purification kit as recommended by the manufacturer (Epicentre). Contaminating DNA in the RNA preparations was removed by DNaseI treatment. Isolated RNA was quantified based on measurements of absorption at 260 nm. The quality of RNA was evaluated by determining the ratio of absorption at 260 nm and 280 nm, which was within the preferred range of 1.7 to 2.1, and by agarose gel electrophoresis. Western analysis Total protein was prepared from cultures grown in LB at 37°C to OD 600~3 .0. Samples containing equal amounts of total protein equivalent to 0.03 OD 600 units of cell culture were prepared and analyzed essentially as previously described [44]. Polyclonal antibodies against H-NS or Fis were used to detect the respective proteins. The western blots were developed using ECL plus reagents (GE Healthcare) and quantified with a FluorChem imaging system (Alpha Innotech). The western analysis was carried out at least twice, and similar results were obtained. Assay for the presence of A/E lesions on HEp-2 cells The ability of EHEC EDL933 (ATCC 700927) wild type and its mutant derivatives to adhere and form A/E lesions on HEp-2 cell monolayers was evaluated using the fluorescent actin staining assay as described [53]. Bacterial cells were grown without aeration for 16-18 h at 37°C in tryptic soy broth that was supplemented with antibiotics if needed. 
Prior to infection cells were diluted 1:5 in infection medium (DMEM supplemented with 2% FBS and 0.5% mannose) and incubated at 37°C 5% CO 2 for 2 h. About 2 × 10 6 bacteria (M.O.I.~10) in 100 μl were added to semi-confluent HEp-2 cell monolayers grown on glass coverslips in a 6-well plate (Multiwell™ Falcon #353046). After infection for 4-5 h, monolayers were fixed with 4% formamide in PBS, washed three times with PBS, permeabilized with 0.1% Triton X-100 in PBS, and then stained with Alexa Fluor 488 phalloidin (Invitrogen). Coverslips were mounted on slides using Prolong Gold antifade reagent (Invitrogen) and the edges of the coverslip were sealed with cytoseal-60 (Richard-Allan Scientific). The samples were visualized using a Zeiss Axiophot II microscope equipped with a 40X objective, epifluorescence filters and a 1.25 optovar (Carl Zeiss MicroImaging Inc.). Images were captured with a charge-coupled device camera (Micromax) using IPL lab software. For each bacterial strain the assay was carried out independently at least three times and at least 50 HEp-2 cells were visually examined.
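The infection conditions above also fix the approximate number of host cells per well: an inoculum of about 2 × 10⁶ bacteria at an M.O.I. of ~10 implies on the order of 2 × 10⁵ HEp-2 cells in the semi-confluent monolayer. A minimal sketch of that arithmetic follows; the cell count is an inferred illustration, not a number reported in the study:

```python
# Illustrative M.O.I. arithmetic for the HEp-2 infection described above.

def multiplicity_of_infection(bacteria_added, host_cells):
    """M.O.I. = bacteria added per host cell."""
    return bacteria_added / host_cells

bacteria_added = 2e6        # ~2 x 10^6 bacteria in 100 ul (from the protocol)
hep2_cells_per_well = 2e5   # hypothetical semi-confluent monolayer size
print(multiplicity_of_infection(bacteria_added, hep2_cells_per_well))  # -> 10.0
```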
2016-05-08T07:04:43.219Z
2012-10-11T00:00:00.000
{ "year": 2012, "sha1": "7964c83a5e9558723434d7af0e14cc057eb3ebaf", "oa_license": "CCBY", "oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-12-231", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7964c83a5e9558723434d7af0e14cc057eb3ebaf", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
269951735
pes2o/s2orc
v3-fos-license
‘We did everything by phone’: a qualitative study of mothers' experience of smartphone-aided screening of cerebral palsy in Kathmandu, Nepal Background International guidelines recommend early intervention to all children at risk of cerebral palsy, but targeted screening programs are often lacking in low- and middle-income settings with the highest burden of disease. Smartphone applications have the potential to improve access to early diagnostics by empowering parents to film their children at home followed by centralized evaluation of videos with General Movements Assessment. We explored mothers’ perceptions about participating in a smartphone aided cerebral palsy screening program in Kathmandu, Nepal. Methods This is an explorative qualitative study that used focus group discussions (n = 2) and individual interviews (n = 4) with mothers of term-born infants surviving birth asphyxia or neonatal seizures. Parents used the NeuroMotion™ smartphone app to film their children at home and the videos were analysed using Precthl’s General Movements Assessment. Sekhon et al.’s framework on the acceptability of health care interventions guided the design of the group discussions and interviews, and the deductive qualitative content analysis. Results Mothers were interested in engaging with the programme and expressed hope it would benefit their children. Most felt using the app was intuitive. They were, however, unclear about the way the analysis was performed. Support from the research team was often needed to overcome an initial lack of self-confidence in using the technology, and to reduce anxiety related to the follow-up. The intervention was overall perceived as recommendable but should be supplemented by a face-to-face consultation. Conclusion Smartphone aided remote screening of cerebral palsy is acceptable in a lower middle-income population but requires additional technical support. Supplementary Information The online version contains supplementary material available at 10.1186/s12887-024-04829-5. Background Cerebral palsy (CP) is a disorder of movement, tone and posture resulting from nonprogressive damage to the developing brain [1].It is the most common form of motor disability in childhood and is often associated with other developmental problems [1].International guidelines recommend early intervention for children at high risk of CP with the aim of improving functional outcomes by goal oriented motor training and enhancing parental capacity for attachment [2]. The majority of children affected by CP live in low-and middle-income countries (LMICs), [3] where access to early diagnostics is limited [4].General Movements Assessment (GMA) is a free non-invasive tool for identifying infants at high risk of CP, [5] but requirement for training has limited its use in LMICs [6].Studies conducted mainly in high-income countries show that GMA can predict CP with around 90% sensitivity and specificity at 3 months' age and thereby enable targeted early intervention [7,8]. Telehealth has the potential to improve access to diagnostics across the globe [9].In high income countries, smartphone applications have already empowered parents to contribute to the follow-up of their children using remote GMA [10][11][12].Remote GMA could overcome the access barrier to early CP diagnosis and engaging parents in the follow-up can spare limited healthcare resources in LMICs [13]. 
We conducted a pilot study testing the feasibility of smartphone-aided remote GMA for identifying children at high risk of CP in Kathmandu, Nepal.The NeuroMotion ™ app developed by Linköping University, Sweden, [10] was translated into Nepali and provided to parents of infants at risk of CP due to birth asphyxia or neonatal seizures [14]. Acceptability is one of the key areas of assessing digital health interventions for health systems strengthening according to the World Health Organization Guidelines [15].Davis's Technology Acceptance Model postulates that acceptability is critical in predicting real-world usage of technological interventions [16].We therefore aimed to explore mothers' perceptions of participating in smartphone-aided remote developmental follow-up of their infants in Nepal.The results of the study will guide the overall feasibility assessment and potential future scale-up of the screening program.Pragmatic philosophy guided the inquiry as we wanted to learn how to modify the intervention based on the study findings. Methods Standards for Reporting Qualitative Research [17] and the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines were used for writing the report [18]. Design This is an explorative qualitative study using focus group discussions (FGD) and individual in-depth interviews (IDI) of mothers participating in the smartphone aided GMA follow-up of their children. Theoretical framework The Technology Acceptance Model by Davis suggests that actual system usage is determined by the overall attitude of the user towards a given system [16].This attitude stems from two cognitive responses to the design features of the intervention: perceived ease of use and usefulness [16].Sekhon et al. give seven constructs of acceptability that can be used to further elaborate the Technological Acceptance Model (Fig. 1) [16,19].They define acceptability as 'a multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention' [19].They also suggest that acceptability should be examined prospectively, concurrently, and retrospectively in relation to the intervention use. Study setting Nepal is a multi-ethnic country situated between China and India along the Himalayan Mountain range.Over the last two decades, the nation has experienced economic growth with the World Bank classification changing from low to lower-middle income status in 2020.In 2022, 80% of women and 92% of men owned a mobile phone [20].Several previous studies have explored the use of telemedicine to bridge the access barriers to healthcare services caused by the geographical and staffing challenges in the country [21]. This study took place in Kathmandu Valley, which consists of three intergrown cities one of which is the capital of the country.Kathmandu is the economic powerhouse of Nepal with the highest levels of education and access to communication technologies [20].Institutional deliveries have become the norm with above 80% of urban women delivering in a health facility in 2022 [20]. 
The epidemiology of CP in Nepal is poorly known as most available studies are facility based and only one population-based provincial register exists [22].In comparison to high-income countries, a larger proportion of children with CP are born term after birth asphyxia or suffer from post-natal complications [23].Care for children affected by CP is provided by non-governmental organizations working together with the Ministry of Health utilizing the World Health Organization Community Based Rehabilitation model [24]. Follow-up of the children and the remote GMA assessment are described in a separate paper [14].In short, recruitment of children took place at Paropakar Maternity and Women's Hospital (PMWH), the largest public tertiary maternity hospital in Kathmandu.Parents to 31 infants surviving birth asphyxia or neonatal seizures were instructed to install the NeuroMotion ™ app into their smartphones.At three months' age, the parents received a notification from the app requesting that they send a film of their child's spontaneous movements, which were analysed by the research team using Prechtl's qualitative GMA with results reported back to parents within 2 weeks of filming.Children with absent age-typical fidgety movements indicating a high risk of CP were referred for paediatrician's evaluation at the PMWH and recommended early intervention at a non-governmental rehabilitation organization Self-help Group for Cerebral Palsy. Study population A purposive sample of 12 mothers participating in the smartphone aided follow-up of their child were recruited for two FGDs with four participants each and four IDIs.Research assistants involved in the recruitment of the children and their follow-up helped to select information rich cases, who based on their previous experience with the families, were thought to have varied opinions about the program. The study participants comprised of mothers from diverse socioeconomic backgrounds, representing a range of educational levels from some years of primary school to completion of a university degree.Various castes and ethnicities, including both relatively advantaged and disadvantaged groups, were included in the study.All participating families had access to a smartphone, although not all mothers owned one personally.Specific demographic details, such as age, place of residence, occupation, parity, and ethnicity, were not reported to maintain participant confidentiality.Additionally, participants' prior knowledge about CP was not probed to avoid potential bias in responses.A previous ethnographic study conducted in Nepal suggests that the condition is generally recognized and explained by theories ranging from the biomedical to ayurvedic and to spiritual [25]. 
We chose to have FGDs with parents whose children had successfully been filmed and judged to have normal fidgety movements to elicit a variety of opinions about the follow-up program, as the topic was not considered too sensitive for a group discussion. IDIs were chosen when the child had absent fidgety movements indicating a high risk of CP (n = 2) and when filming was not completed successfully (n = 2) to ensure the privacy of the participants. Potential participants were approached by telephone by a research assistant after the results of the GMA analysis had been provided. Children who were not successfully filmed during the study were offered an additional neurological assessment at one year of age, and their mothers' willingness to participate was assessed at the same time. No parent refused to join, but one mother in each of the scheduled FGDs did not show up to the appointment. Reasons for drop-outs were not asked about. Informed consent was collected both in writing and orally for audio recording of the interviews and the subsequent data analysis and publication.

Data collection

Semi-structured interview frameworks were developed separately for the FGDs and IDIs based on Sekhon et al.'s acceptability framework (Fig. 1) [19]. The guide was first piloted in five interviews (one single mother, two single fathers and two couples) with parents who had not yet left the delivery hospital, focusing on the parents' expectations about the follow-up. We found that it was difficult for the participants to discuss the acceptability of the program prospectively. These interviews were short, approximately 15 min each, and superficial, and were therefore only used for adjusting the interview guides. Minor edits to the guide were also permitted thereafter to ensure that all the relevant topics were covered (Supplementary material: Interview Guide). Also based on the pilot experience, only mothers were invited to join the IDIs and FGDs. A private location was selected for the group discussions and interviews at the PMWH. During the sessions, a female relative of the mothers took care of the infants. During one of the two FGDs the babies and the relatives were present in the room, while in the other FGD and all the IDIs they waited outside. Both the group discussions and individual interviews started with an open question about the current health of the children participating in the follow-up before moving on to discuss the mothers' attitudes and experiences before, during and after the use of the app for filming their child. The FGDs lasted 40 to 60 min each and all mothers participated actively. The IDIs were 20 to 30 minutes each. Data analysis was performed concurrently with data collection, and the exact number of interviews was determined by the point of data redundancy, which was reached after the first IDI conducted with a mother whose infant had not been successfully filmed. No repeat interviews were done.
The FGDs and IDIs were conducted by a Nepalese female researcher with a background in nursing and child development and previous experience in qualitative research (PB).The interviews were observed by two Nepalese female public health students.One student was present at each session with responsibility for note keeping and both transcribed the audio recorded data verbatim and translated it from Nepali to English.Proof reading and finalization of the transcripts were undertaken by the interviewer.The finalized transcripts were not returned to the interviewees for review.Additional field notes were gathered throughout the data collection period and informal discussions with the research assistants in charge of patient recruitment and follow-up were held to better understand some of the answers given by the mothers during the interviews.All data were stored on a password protected hard drive. Data analysis Qualitative content analysis presented by Patton was used to analyse the data [26].A deductive approach building on seven constructs of acceptability defined by Sekhon et al. guided the data analysis (Fig. 1) [19]. One researcher (AK) coded the data.Manual data analysis was conducted, and findings were discussed in a multi-professional team consisting of both Nepali (PB) and European (AK and AA) researchers.Transcripts and notes were read repeatedly, and comments were made to margins to identify meaning bearing units.These were tagged with a defining code.Codes were grouped into categories consisting of similar content [26].Sekhon et al. 's seven constructs of acceptability were used as primary themes, but the codes and categories were then examined and refined until the categories became mutually exclusive.Sub-categories were used to aid in reaching convergence [26]. Constructs 'Burden' and 'Opportunity Costs' included in Sekhon et al. 's framework were merged as it was not possible to sufficiently differentiate the respondents' answers between these themes.An example of a partial coding tree for two of the categories is provided in the Supplementary Figure to exemplify the analytical process. Reflexivity The author team comes from academic and clinical background from Nepal and Northern Europe differing from that of many mothers interviewed, who sometimes only had elementary level education.Our training is based on the Western biomedical concept of explaining illness and disability whereas a plurality of interpretations are encountered in Nepal [25].Parents of children with disability commonly seek care from providers of different epistemological backgrounds in Nepal [27].The goal of the study was not to explore what disability means to participating families but to find out how they perceive the follow-up method.As such, cultural differences are likely to be less problematic.The fact that part of the research team comes from outside the caste system dictating social roles in Nepal could also be seen as beneficial for the interpretation of findings.Gender of the interviewers and interviewees were matched. 
Ethics Participating in the FGDs and IDIs might have caused anxiety in mothers who had been through a difficult delivery and thereafter been told that their child might develop disability.At the same time, it allowed parents to voice their view of the ethicality of the whole screening program.We reserved time for mothers' questions about the follow-up after each meeting and tried to ensure further referral of any medical needs of the children involved in the study.Parents who requested physical examination of their baby were provided the opportunity to meet a paediatrician from PMWH at the hospital's out-patient department. Results The findings of the analysis are presented in Table 1.Sekhon's acceptability framework formed the overarching themes under which two to four categories for each theme were placed.All interviews were conducted after the parents had used the app, but we considered each analytical category in relation to time from app use. Curious mothers Initially positive The initial feelings of the participants towards the follow-up had mostly been positive.Many expressed that they were happy that such a program was offered for their children and were satisfied with the information received during the consent taking.Some were fascinated by the idea of remote follow-up as a novel intervention that had not been offered by the hospital before.IDI Absent GMA 1: 'At that time, I completely agreed with her (the research assistant).When she said that the baby will be followed-up, I really like that idea.' Nervous Many mothers had also been nervous about joining the program.They all had gone through some form of neonatal complications with related anxiety.Some had previous experience of similar deliveries when no remote follow-up was in place and thus expressed initial scepticism towards the study.They voiced their concern for misuse of videos and had on few occasions been advised by family members or other parents in the ward against joining the study.This scepticism might have been partly explained by the fact that not all mothers were present during the initial consent taking and only received information about their child's enrolment later from their husbands.IDI Absent GMA 1: 'There are so many so many things we hear these days.People say these videos are for research, but they use it for some other purposes.' Anxiety during follow-up Worry and anxiety were present during the whole course of the follow-up.Mothers restlessly waited for time of filming to come.At least for some, participating in the follow-up itself increased anxiety.Others, however, denied additional worry when asked directly. IDI Failed Film 2: 'As we were continuously followed-up via phone, and were shown greater concern, I was really stressed.' Relieved, distressed or disappointed For those mothers whose children were determined to have normal fidgety movements at 3 months' age, worry soon dissipated.These parents were happy that they had joined the study and relieved to learn that their child was doing well despite the early life complications.FGD 2, respondent 2: 'Now, you have informed me that my baby is fine and does not have any abnormality, I feel delighted.' However, in cases where the assessment was abnormal, anxiety continued until the time of the interview.Mothers who had for some reason failed to upload a film for evaluation were disappointed and felt let down by the research team. IDI Failed Film 2: 'Till now, I have that (worry) in my heart.' 
Benefit to the child The primary motivation for participating in the study was perceived general benefit to the child.Mothers were hoping to gain knowledge about their infants' health and development.They also expressed hope that through enrolment in the study would be helped in case difficulties arose within these domains.The research assistants in charge of patient recruitment and follow-up were sometimes seen as a type of a counsellor who would help to navigate the health care system in case treatment was needed. FGD2, Respondent 1: 'We all joined because we were very hopeful that you will guide us in a proper direction to either treat the baby or to take care of the baby.' Direct contact with doctors Many mothers viewed the program as an opportunity for direct access to doctors in the delivery hospital.Joining the study was perceived to facilitate contact with the paediatricians who knew about the initial challenges of their children and had access to the GMA results.Understanding the separation between the research study and routine care was not always easy for the parents. IDI Absent GMA 1: 'I thought I will need to treat my baby, so why not to join this program which would benefit me to easily contact the doctors present here.' Certainty of the face-to-face meeting The effectiveness of the remote video assessment was often perceived as less complete than a physical evaluation.A physical appointment enables two-way communication that was lacking from the remote check-up, which led some mothers to voice their preference for a traditional visit to a doctor.In particularly the mothers whose children had absent fidgety movements and those who had not been able to film according to the plan appreciated the opportunity for a confirming physical evaluation. IDI Absent GMA 2: 'We can talk normally face to face and it will be certain.' A recommendable intervention Having completed the pilot study, a few mothers spontaneously voiced their recommendation for the remote follow-up to be offered to other children and in other hospitals too.No participant openly opposed the idea of continuing remote assessment despite facing some challenges during the study. FGD2, Respondent 3: 'We hope now other babies also gets (SIC) benefit from it.' Dedicated research team A major factor for the overall acceptability of the followup was the interaction between research assistants and the participants.The fact that the research assistants showed interest in the children and the parents was greatly appreciated.They also went the extra mile trying to ensure that everyone would have the opportunity to be filmed by sometimes agreeing to meet those struggling with the app either at home or at the delivery hospital.The support experienced by most participants was in stark contrast to the disappointment felt by those who for some reason had failed to film successfully. FGD2, Respondent 2: 'As I got the opportunity to have a face-to-face talk with you, this has made me more confident regarding my baby's condition' . Had the mothers been initially sceptical about joining the study, many expressed that the opportunity to ask questions during follow-up helped to reduce the worry.Sometimes, however, family dynamics hindered this communication with only fathers having access to the mobile phone and mothers receiving information only through their husbands. FGD2, Respondent 2: 'I wanted to talk with you directly but my husband thinks I will worry and take a lot of tension.So, he would never give my number to you.' 
Slow remote assessment Many, but not all, parents expressed that the waiting time of up to two weeks between filming and receiving results was excruciating.This was the period of peak anxiety and a major drawback of remote assessment in comparison to direct face-to-face examination by a physician.Mothers recalled feeling afraid of disability or disease in their child and some had made plans for consulting doctors in case the report was abnormal.A minority of parents were seemingly more relaxed reporting that the waiting time did not feel very long and that feelings of curiosity rather than anxiety dominated the period.FGD2, Respondent 2:'I would repeatedly ask my husband if there was any message from the hospital.My husband was frustrated and would ask me how many time do you repeat and ask the same question again and again.' FGD2, Respondent 3: 'I got to know about the result after 2-3 days of sending the video.I should consider it a very reasonable time.' Unmet expectations There were also some unmet expectations such as a wish to learn more about the analytical process involved in the GMA or a request for a formal written report after completing the child's assessment. IDI Failed Film 2: 'I wanted to know why this happened to my baby, what happened to my baby during operation?' . Limited understanding of the intervention Short video Many mothers who had participated in the follow-up seemed to have vague understanding about the nature of the remote assessment done to their child.A commonly occurring concern was 'How would you analyse such a small video of only 2 to 3 min?'This was particularly prominent if the child had not been at their perceived best during the period of filming.Many parents seemed to think that the doctors would be interested in the gross movements of the child rather than the fidgety movements occurring continuously during awake state.Some would even engage in stimulating their child to elicit gross motor movements which, although understandable, was counter to the instructions for GMA.FGD2, Respondent 1: 'He did not had (SIC) much movements.Once I would (turn) off the video then, the baby would move his hands and feet.So I was really tensed (SIC).' Trust in doctors Other mothers, however, expressed their trust in doctors being able to assess even short videos and seeing things that parents themselves might not have noticed.It was further pointed out that the child might not be at one's best during a physical visit either and with remote assessment several videos could be sent. FGD2, Respondent 3: 'I consoled myself thinking, videos will be assessed by doctors, so they will understand the 3 min video very well. ' A couple of participants had for some reason understood that the videos could only be uploaded during office hours, which limited their ability to participate. A mother knows best? Many mothers expressed a certain gut feeling about their child's good health despite initial worry after the delivery.Some issues arose when the parents' perception about their child differed from the assessment and led the parents to question the findings.The potential fallacy of trusting one's gut feeling more than experts' evaluation was, however, also perceived as a reason for continuation of the follow-up program. IDI Failed Film 1: 'Because of this lack of awareness among parents, this program is indispensable not only in this hospital but also in other hospitals as well.' 
Self-efficacy Easy and intuitive app Using the NeuroMotion ™ app was generally perceived as easy by most mothers.They expressed that following the instructions was simple and that uploading videos went smoothly.The process of learning how to use a new intervention caused excitement and the ability to participate in the child's follow-up was a positive experience for the family. IDI Absent GMA 1: 'This app is easily understood.Nowadays most of the women are educated, so yes, most of them could follow the instructions.' Lack of self-confidence While the NeuroMotion ™ app is designed to automatically send a notification to the parents' phone at the time of filming, most of the mothers needed additional confirmation call from the research assistant to engage in filming.Her role in empowering parents in filming was crucial for the overall success of the program.Some mothers received help from their relatives or the research team to complete the filming.Lack of clear confirmation from the app that a film was successfully uploaded was a cause of complaint from participants. FGD1, Respondent 1: 'There was a message and I thought maybe it was time to send the video.But the sister (research assistant) called after some days, so I was sure to film the baby.' Technical hinderances Sometimes technical glitches reduced the self-efficacy of parents and caused frustration.A few mothers did not receive the intended notifications and on some occasions the videos filmed did not automatically upload as intended.Help from the research assistant was needed in such cases.IDI Absent GMA 2: '…but I did not receive any notification.The sister (research assistant) had mentioned that notification would appear, but I did not received (SIC) notification during the first film.' Busy life The interviewed mothers were in two camps regarding how parent-friendly the follow-up program felt.A common view was that the remote assessment saves time in comparison to queuing for a physical assessment in the hospital, although particularly those living close to the hospital also voiced their preference for the hypothetical physical check-up.Answering the follow-up phone calls and engaging in filming was also a burden but was generally considered acceptable and a sign of engagement from the part of the research assistants. FGD2, Respondent 1: 'We did everything by phone, it was very helpful and convenient.' Easier with a familiar app A few parents questioned the trouble of learning the use of a new application instead of using something they already were familiar with like Facebook, and would have preferred sending videos via these channels instead. FGD1, Respondent 3:'You have brought this rule (of using the NeuroMotion ™ app), but it would be easy to send the video from Imo (a type of a messaging app) or (Facebook) messenger only.' Discussion This qualitative study exploring mothers' perceptions of smartphone-aided remote screening for CP in Nepal found that while the NeuroMotion ™ app was often easy to use and the idea of remote follow-up was well received, support is needed to enable successful filming and reducing parental stress. 
The Technology Acceptance Model suggests that the use of information systems is predicted by the interplay of design features and users' attitudes towards the intervention [16].Only half of the children recruited into this follow-up study were successfully filmed despite the overall positive affective attitude and acceptable ethicality [14].The relatively low success rate might be explained by the fact that some of the design features of the app like automatic notifications were not as functional in Nepal as in our previous experience in Sweden [10]. Minimizing technical glitches, and improving parental self-confidence and knowledge about the screening intervention by providing hands-on training to both parents at the hospital followed by clearer feedback about the timing of filming and success or failure of upload are some of the suggestions made by the interviewed mothers that could improve the outcome of filming in the future.It is necessary to allocate resources for counselling the parents involved in the follow-up to reduce their anxiety.Our findings are partly contradictory to a recent Danish study, which found that smartphone-aided remote GMA increased parents' sense of control during developmental follow-up [28].Some participants in Nepal also questioned the need to learn a separate app, and a recent study from Australia found that even fairly simple instructions can help parents to correctly film their child at home using a smartphone camera and a messaging app [13].A small pilot from India suggests that this can also be done in middle-income settings [29]. Intensive efforts are on-going to automatize GMA with the help of machine learning, [30] which could lead to a revolution in access to early CP diagnosis.Larger studies are, however, needed to confirm the validity of the method in LMIC settings [31].Our findings suggest that scaling up GMA screening with the help of remote assessment from home requires considerable resources in LMICs and alternative means of filming such as engaging community health workers in the task could be explored.The period between sending videos and receiving the results should be minimized to reduce parental stress, and a possibility for direct face-to-face consultation should be provided for all parents.Similar desire for faceto-face assessment was also voiced by mothers participating in neurodevelopmental follow-up of their infants in England [32].An opportunity to ask questions about their child's health and development improves parental trust in the screening results, and provides an opportunity to screen for other adverse neurodevelopmental outcomes than CP for which GMA has lower sensitivity for [33].Lastly, suitable early intervention for children identified to be at increased risk of CP should be available [2]. Limitations The acceptability framework by Sekhon et al. [19] that guided the development of our interview guide and analysis has an individualistic background and unclear transferability to Nepal, where communal decision making is common.Our findings highlight the importance of informing both parents about the follow-up already during recruitment and allowing both a chance to ask questions during the follow-up to avoid deepening the gender digital divide [34]. 
We interviewed mothers who had not succeeded in filming their child, but were not able to reach those who refused to enrol in the study or were lost during the follow-up. The findings therefore represent the opinions of parents who were sufficiently interested in the intervention to at least try to film their child at home. The overall transferability of the results was further limited by the pilot nature of the study and the enrolment of only parents residing within the Kathmandu Valley. The individual interviews were short and, due to drop-outs, the focus group discussions were conducted with only three participants of variable backgrounds, which might have reduced the depth and variety of opinions voiced. Social desirability bias might also have led participants to express a favourable opinion of the intervention, as the interviewer came from the same organization that conducted the follow-up. Based on our pilot interviews, we only recruited mothers as the primary caregivers, and the findings might have been different had fathers also participated.

Conclusion

Remote screening of CP using the NeuroMotion™ app is acceptable to mothers in Nepal, but successful follow-up requires additional support to reduce parental anxiety and to improve their self-efficacy. The study was limited by the small amount of data, and the findings might not be transferable beyond the urban lower middle-income setting.

Fig. 1 Theoretical framework predicting the use of a technological intervention. Adapted from the Technology Acceptance Model by Davis [15] and Acceptability of Healthcare Interventions by Sekhon et al. [18]

Table 1 Acceptability of smartphone-aided screening of cerebral palsy in Kathmandu, Nepal
2024-05-23T13:15:12.150Z
2024-05-22T00:00:00.000
{ "year": 2024, "sha1": "85a8c37fd850279439ed1edff8837278fa63743d", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/counter/pdf/10.1186/s12887-024-04829-5", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "14542d84d74a2dc34f1b1b0be4a057ff4c43faed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55989002
pes2o/s2orc
v3-fos-license
Spring Flood Forecasting Based on the WRF-TSRM Mode The snowmelt process is becoming more complex in the context of global warming, and the current existing studies are not effective in using the short-term prediction model to drive the distributed hydrological model to predict snowmelt floods. In this study, we selected the Juntanghu Watershed in Hutubi County of China on the north slope of the Tianshan Mountains as the study area with which to verify the snowmelt flood prediction accuracy of the coupling model. The weather research and forecasting (WRF) model was used to drive a double-layer distributed snowmelt runoff model called the Tianshan Snowmelt Runoff Model (TSRM), which is based on multi-year field snowmelt observations. Moreover, the data from NASA’s moderate resolution imaging spectroradiometer (MODIS) was employed to validate the snow water equivalent during the snow-melting period. Results show that, based on the analysis of the flow lines in 2009 and 2010, the WRF-driven TSRM has an overall 80% of qualification ratios (QRs), with determination coefficients of 0.85 and 0.82 for the two years, respectively, which demonstrates the high accuracy of the model. However, due to the influence of the ablation of frozen soils, the forecasted flood peak is overestimated. This problem can be solved by an improvement to the modeled frozen soil layers. The conclusion reached in this study suggests that the WRF-driven TSRM can be used to forecast short-term snowmelt floods on the north slope of the Tianshan Mountains, which can effectively improve the local capacity for the forecasting and early warning of snowmelt floods. INTRODUCTION Flooding is one of the most frequent and devastating natural disasters in the World, which has threatened the survival and development of China for thousands of years.Over the past 50 years, China has made great strides towards water conservancy, flood control, and disaster mitigation.However, due to the abnormal variation of the climate and the impact of large-scale human activities on the environment, flooding in China remains a very serious problem and flood-related events can still occur.China has the most snow in countries located at the middle and low latitudes, with the amount of winter snow equivalent to 740×10 8 m 3 of water.The arid areas in northwest China are extremely short of surface water resources, as they cover about 25 % of China's land area while only having 3.3 % of the surface water.However, they are the most snow-rich areas, with a large amount of seasonal snow.Among the three main snow areas in China, two-fifths are concentrated in northwest China.In these areas, alpine seasonal snow is one of the main sources of rivers, and plays an extremely important role in the rational use of water resources. 
Numerous studies show that the snow in Xinjiang is more unique, accounting for about one-third of the snow water resource in China [1][2].Furthermore, 50 to 80 % of the river runoff in Xinjiang is from seasonal snow.There is a very close relationship between seasonal snow and the productivity of both agriculture and animal husbandry in Xinjiang.A thick snow covering not only can lead to wheat being free from frost damage, but can also provide favorable conditions for spring agricultural water.However, when a thick snow covering is followed by continuous warm weather or rain in the Spring, fast snow melting will lead to Spring floods in some areas, which might destroy farmland, hinder traffic, and threaten people's lives and property.An effective way to solve problems related to the ecological hydrology and environment in the water basin is by simulating and forecasting the process of snowmelt flood formation using a distributed hydrological model [3][4][5].Furthermore, one of the most effective ways to predict the snowmelt runoff is to establish a distributed snowmelt runoff model [7][8][9].Many studies have found that using a distributed snowmelt model to predict snowmelt runoff in real time is important, especially for the Tianshan Mountains.However, it has not been carried out effectively for the above-mentioned types of studies. In order to specifically set up a snow warning platform for the northern slopes of the Tianshan Mountains, and meet the requirement to consider the proposed mesoscale prediction system based on a distributed driving snowmelt model, in this study we propose the use of the Weather Research and Forecasting (WRF) driven Tianshan Snowmelt Runoff Model (TSRM), which provides further validation of prediction precision. STATE OF THE ART Many scholars have developed hydrological models that can simulate the snowmelt process, such as the variable infiltration capacity (VIC) and soil and water assessment tool (SWAT), but these models do not provide effective solutions for problems related to snow melting, since they are unable to cope with such a large scale or suffer from a lack of local data.However, Pei et al. [10] proposed the creation of a distributed snowmelt runoff model using 3S (Remote Sensing-RS, Geographical information System-GIS and Global Positioning System-GPS) technology in 2007.In addition to the fact that the structure of hydrological models need to be improved, the lack of meteorological observations in mountainous areas is a further limiting factor in accurately forecasting Spring snowmelt flooding.The scarcity of data from mountainous regions, the particularity of the hydrological processes, and the complexity of the related mechanisms increase the difficulty of studying various kinds of surface processes in mountainous areas during snow-melting periods.The atmosphere-hydrology coupled model can be used to resolve this problem, and its use for simulating and forecasting snowmelt floods has attracted a great deal of global research attention.The development of the atmosphere-hydrology coupled model can therefore help to improve the prediction accuracy of both the atmospheric and hydrological models, and can maximize the effective forecast period of snowmelt floods.For example, Kenneth et al. 
[11] coupled the fifth-generation Penn State/National Center for Atmospheric Research mesoscale model (MM5) and the distributed hydrology soil vegetation model (DHSVM), and carried out a snowmelt flood forecasting experiment in the Snoqualmie River Basin in western Washington, USA.They concluded that updating the weather forecast field in real time can effectively reduce the runoff output error.Furthermore, Evans et al. [12] coupled four regional climate models with the same hydrological model to predict the alpine snowmelt runoff in the Verzasca Valley in Switzerland, thereby providing an important scientific reference for an atmosphere-hydrology coupled model's performance on predicting snowmelt runoff in alpine-cold areas.Zhao et al. [13] used a WRF-driven DHSVM to predict the snowmelt runoff process of the Juntanghu Watershed on the north slope of the Tianshan Mountains for 24 h and obtained several interesting results.Finally, Wu et al. [14] combined WRF and Micromet dynamical and statistical downscaling to drive the snow-melting model and simulate the Spring snow-making process in the Kayiertesi Watershed, demonstrating that combining WRF and distributed hydrological models is an effective way to forecast high-resolution snowmelt floods in mountainous areas. Although many studies have forecasted snowmelt floods in various alpine-cold areas, any research that has used WRF to drive a localized snowmelt runoff model based on field observations has not been conducted effectively.Based on field observations from the North Slope of the Tianshan Mountains, the study reported on in this study will establish a better localized forecasting system by utilizing a WRF-driven double-layer distributed snowmelt runoff model to predict the Spring snowmelt floods in this area.This will provide a production basis for, and better protection of, the "oasis economy" of Xinjiang [15,16]. Water resources are scarce and valuable, and good water resource management can lead to their better development.However, due to its complexity and uncertainty, improving water resource management has become a challenge, particularly in arid and cold areas.Hydrological models are very important due to their great significance in better utilizing current hydrological theories for improving or creating new management strategies.Although hydrological models have been widely used in regional water resource simulation, several difficulties are still manifested when they are applied in practical applications.For example, the simulation of ice and snow resources that are covered by alpine mountains is still problematic for simulating water resources globally. Snowmelt models have yet to be built for many specific research areas (e.g., the north slope of the Tianshan Mountains).In addition, considering the larger topography of this area, the degree of temporal and spatial differentiation is extremely large.Therefore, we need to use the WRF model to provide accurate weather forecasts for the watershed, thus driving the snowmelt model, so that we can get more accurate snowmelt flood results.Owing to practical limitations, traditional mountain meteorological observation stations are very scarce in such areas, which makes it difficult to calibrate the snowmelt model using the data from traditional observation stations.The above limitations have prompted us to study the combined WRF-TSRM mode. 
The rest of this study is organized as follows. In Section 3, the methodology and input data are described. In Section 4, the TSRM is forced by WRF and several validating analyses are carried out, from which a comparison of the numerical forecasts and the results of the analysis is obtained. Conclusions are given in Section 5.

Figure 1 The location of the study area

METHODOLOGY

3.1 Study Area

The study area (43°43′N-44°06′N, 86°10′E-86°40′E) is the Juntanghu Watershed in Hutubi County, Xinjiang. Meteorological observation stations are scarce in this area [17]. The Juntanghu River is a small river in the western part of the Tianshan Mountains, which originates from the north slope of the mountains. According to a statistical analysis using geographical information system (GIS) tools, the elevation of the river basin's source is about 3400 m, and the elevation of most of the river basin is 1000-1500 m. The river network converges at Nazha'er in the lower mountains, flows into the plain across the front-range hills in the western part of Hutubi County, and feeds into the Hongshan Reservoir at the mountain pass. The river is about 45.20 km long from its source to the Hongshan Reservoir. The catchment area above the Hongshan Reservoir is 833.57 km², and the average elevation of the watershed is 1503 m. The average slope is 62.5 ‰ above the confluence of the two main tributaries and 52.6 ‰ below it. The average annual runoff of the river is 3.89×10⁸ m³. From mid-September there is snow in the alpine area, the amount of which reaches its maximum in January as the temperature decreases. In February, the temperature begins to rise and the snow begins to melt, with the melting process speeding up in March. The runoff of the Juntanghu River is not uniformly distributed over the year, and it reaches its maximum in the spring. From March to June, heavy rain in the watershed and the snowmelt water combine and the flooding process is rapid, resulting in snowmelt floods occurring almost every year and causing significant harm to people's lives and property and to the ecological environment [1]. The watershed is fully developed and its features are typical. The study area is shown in Fig. 1.

Tianshan Snowmelt Runoff Model (TSRM)

The double-layer snowmelt model divides the snow cover into two layers according to the variations in energy and water (see Fig. 2). It can be approximated that when the snow depth h exceeds 0.2 m, the snow cover can be treated as two layers, whereas when the snow depth h is less than 0.2 m, it can be treated as a single layer. The upper snow layer absorbs energy through inputs such as precipitation, turbulence, and solar shortwave radiation, while the energy of the lower layer comes mainly from the soil heat flux.
When considering the double snow layers, the energy balance of the upper snow layer can be calculated as

Q_net = Q_ln + Q_sn + Q_p + Q_con + Q_s + Q_le, (1)

where Q_net is the net energy flux going into the upper layer per unit time, Q_ln the net longwave radiation flux, Q_sn the net shortwave radiation flux, Q_p the heat input from precipitation, Q_con the heat flux from the lower layer, Q_s the sensible heat flux, and Q_le the latent heat flux. The units of the above parameters are (J/m²). The energy balance of the lower snow layer can be calculated as

ΔQ = Q_con + Q_ground, (2)

where ΔQ is the net energy flux going into the lower layer per unit time, Q_con the heat flux from the upper layer (J/m²), and Q_ground the heat flux from the soil layer (J/m²). The water balances of the two layers are calculated as

ΔW_Surface = P − E − Surface_flow, (3)
ΔW_Bottom = Surface_flow − INF − Bottom_flow, (4)

where ΔW_Surface is the variation of snowmelt water in the upper layer, ΔW_Bottom the variation of snowmelt water in the lower layer, E the evaporation of the snow, Surface_flow the water infiltration of the upper snow layer into the lower layer after melting, P the water input through precipitation, INF the infiltration into the soil layer, and Bottom_flow the outflow of the lower snow layer.

The energy balance of the single-layer snow cover is calculated as

Q_net = Q_ln + Q_sn + Q_p + Q_ground + Q_s + Q_le, (5)

where the meaning of each term is the same as in (1) and (2).

Net Radiation on Snow Surface

The net shortwave radiation flux Q_sn in WRF is mainly controlled by the albedo:

Q_sn = (1 − A_sfc) Q_s, (6)

where A_sfc is the surface albedo and Q_s the shortwave radiation received by the surface. When the snow depth is more than 0.1 m, the snow layer completely blocks the solar radiation and A_sfc is equal to the snow albedo (A_s); however, when the snow depth is less than 0.1 m, sunlight is transmitted through the snow layer, and the snow cover and the soil both have an impact on the surface albedo. Consequently,

A_sfc = r A_s + (1 − r) A_bg,

where r is a weighting factor and A_bg is the albedo of bare land.

The snow age, solar radiation, and average daily temperature all have direct impacts on the snow-layer albedo. Kendo [18] proposed an exponential equation based on snow age, in which α(0) is the albedo of the snow layer and α_min is its minimum albedo. Winther [19] proposed the equation A_s = 0.90 − 0.92×10⁻⁴ T_acc − 0.0042 Q_s according to the solar radiation and cumulative daily temperature, in which T_acc is the maximum accumulated temperature on a snowfall day.

Q_ln is estimated through formula (7) as the difference between the incident and outgoing longwave radiation, Q_ln = L↓ − L↑, where L↑ is the outward longwave radiation from the snow surface (a function of the snow surface temperature T_s, in K), L↓ the incident longwave radiation from the atmosphere (a function of the air temperature T_a, in K, the cloud-type empirical coefficient a_c, and the cloudiness C, which is in the range between 0 and 1), ε_s = 0.95 the emissivity of the snow cover, and σ = 5.67×10⁻⁸ W·m⁻²·K⁻⁴ the Stefan-Boltzmann constant. The value of a_c is as follows: stratus, a_c = 0.24; stratocumulus, a_c = 0.22; cumulus, a_c = 0.20; altostratus, a_c = 0.20; altocumulus, a_c = 0.17; stratocirrus, a_c = 0.08; cirrus, a_c = 0.04.
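To make the flux bookkeeping above concrete, the following minimal sketch assembles the mixed snow/bare-ground albedo, the net shortwave term of Eq. (6) and the upper-layer balance of Eq. (1). The helper names, the linear weighting of the albedo with snow depth, and all numerical values are illustrative assumptions, not code or parameters from the TSRM:

```python
# Minimal sketch of the surface albedo mixing and the upper-layer energy balance.
# Flux values are illustrative; units are J/m^2 per time step.

def surface_albedo(snow_depth_m, albedo_snow, albedo_bare, full_cover_depth=0.1):
    """Blend snow and bare-ground albedo; snow deeper than ~0.1 m fully masks
    the ground, shallower snow lets the ground contribute (linear weighting assumed)."""
    r = min(snow_depth_m / full_cover_depth, 1.0)
    return r * albedo_snow + (1.0 - r) * albedo_bare

def net_shortwave(q_s, a_sfc):
    """Eq. (6): Q_sn = (1 - A_sfc) * Q_s."""
    return (1.0 - a_sfc) * q_s

def upper_layer_net_energy(q_ln, q_sn, q_p, q_con, q_s_sensible, q_le_latent):
    """Eq. (1): sum of the flux terms entering the upper snow layer."""
    return q_ln + q_sn + q_p + q_con + q_s_sensible + q_le_latent

a_sfc = surface_albedo(snow_depth_m=0.05, albedo_snow=0.80, albedo_bare=0.20)
q_sn = net_shortwave(q_s=1.2e6, a_sfc=a_sfc)
q_net = upper_layer_net_energy(q_ln=-2.0e5, q_sn=q_sn, q_p=1.0e4,
                               q_con=5.0e3, q_s_sensible=3.0e4, q_le_latent=-1.0e4)
print(round(a_sfc, 2), round(q_sn), round(q_net))
```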
Heat Input from Precipitation
Q_p is estimated through formula (8) from the amounts and temperatures of the rain and snow reaching the snow surface, where ρ_w is the density of the water (kg/m³), c_w the specific heat of the water [J/(kg·K)], c_s the specific heat of the ice [J/(kg·K)], T_a the air temperature (K), P_s the snow water equivalent, and P_r the amount of rainfall. Whether the precipitation falls as snow or rain is decided by the air temperature, with 272.15 K taken as the critical temperature for snowfall.

Sensible and Latent Heat Flux
The latent heat flux refers to the heat transfer caused by evaporation and condensation, and the sensible heat flux refers to the heat conduction between the air and the snow. The latent heat flux and the sensible heat flux are calculated following [20, 21]. These formulas involve the latent heat, the dry air constant R_d = 287 J·kg⁻¹·K⁻¹, the vapor pressures of the air and of the snow surface, e(T_a) and e(T_s), the aerodynamic resistance r, the elevation x at which the data are obtained, and the wind speed μ, which is treated as a time series characterized by unpredictability and variability [22, 23]. Here the snow surface is regarded as saturated, so the vapor pressure of the snow surface can be taken as the saturation vapor pressure. The vapor pressure of the air is the product of the saturation vapor pressure at the air temperature and the relative humidity (RH):

e(T_a) = RH × e_svp(T_a),

where e_svp is the saturation vapor pressure, a function of temperature that can be calculated by the Teten equation.

Heat Flux from Lower Layer
In this study, the temperature gradient is used to calculate the heat transfer between the snow layers:

Q_con = q × s, (16)

where q = −k_eff (dT/dz) is the heat flux density of the snow (W·m⁻²), s the area of the snow-cover grid, and k_eff the coefficient of the snow's heat conduction, which takes different expressions depending on whether the relative density of the snow layer is less than 0.156. The temperature gradient of the snow cover (K·m⁻¹) is (dT/dz) = (T_bottom − T_surface)/Δh, where Δh is the difference in elevation between the snow layers.

Heat Flux from Soil Layer
Compared to the solar radiation, latent heat, and sensible heat, the heat flux from the soil layer has less impact on the process of snow melting. Its value is generally between 0 and 10 W·m⁻² [24]. In this study, the relationship between the soil layer and the snow layer is expressed through the temperature gradient of the soil layer, dT_g/dz, and the coefficient of the soil layer's heat conduction, k_g, where T_s is the temperature of the under-layer snow and T_g the temperature of the under-layer soil (at a depth of z_2) [25].

Calculation Flow of Snow Melting
Under the condition of double-layer snow, the melting of the surface layer can affect both the lower layer and the entire snow cover. Therefore, the snow-melting process of the upper layer is estimated first, during which there is only infiltration without lateral flow of the snowmelt water. The snow-melting process of the lower layer is then estimated. Finally, the total outflow of the two layers is taken as that of the entire snow cover. The process is shown in Fig. 3.

Infiltration into Soil Layer
The melting of the frozen soil generally occurs later than the melting of the snow. While the frozen soil is melting, the infiltration of the snowmelt water is minor, with excess infiltration as the main runoff pattern. As the air temperature rises and the melted layer of the frozen soil thickens, the rate of snow melting increases, the snowmelt infiltration also increases, and the runoff becomes a combination of excess storage and excess infiltration. The relationship between the snow and the frozen soil is represented by an empirical infiltration equation [25], in which S_0 is the moisture of the soil layer (m³/m³), S_1 the initial moisture of the soil layer (m³/m³), C = 1.1 the infiltration coefficient, T_soil the temperature of the soil layer (K), and t the time interval (h).
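As a small illustration of the humidity and inter-layer conduction terms above, the sketch below computes the saturation vapour pressure with the Teten equation, the air vapour pressure from relative humidity, and Q_con from the two-layer temperature gradient. The Teten coefficients shown (0.6108 kPa, 17.27, 237.3) are the commonly quoted values for a water surface and are an assumption here, since the paper does not reproduce the constants it used; the function names are likewise illustrative.

```python
import math


def e_svp_teten(temp_k):
    """Saturation vapour pressure (kPa) from the Teten equation.
    The coefficients are commonly used values and are assumed here."""
    t_c = temp_k - 273.15
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))


def air_vapour_pressure(temp_k, rh):
    """Vapour pressure of the air: relative humidity times the saturation value."""
    return rh * e_svp_teten(temp_k)


def q_con(k_eff, t_bottom, t_surface, delta_h, grid_area):
    """Heat exchange between the two snow layers, Q_con = q * s, with
    q = -k_eff * dT/dz and dT/dz = (T_bottom - T_surface) / delta_h."""
    q = -k_eff * (t_bottom - t_surface) / delta_h
    return q * grid_area
```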
Calculation of Confluence
The confluence process is calculated in a gridded form and is divided into slopes and rivers along the flow path. The river network is extracted using the digital elevation model (DEM) of the study area, and the slopes and rivers are determined by a threshold on the catchment grid, with a field investigation carried out to validate the data [26]. The corresponding time interval of the confluence is then calculated. The river grid is numbered, and the flow-direction matrix is employed to calculate the confluence path and traverse the confluence grids, so the confluence time can be estimated.

Input Data
3.4.1 DEM Data
The DEM data used in this study are the 30 m resolution data processed from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Map (ASTER GDEM) version 1 (V1). Pretreatment is carried out to reduce the error of flow directions in flat areas and the error of no outflow in low-lying lands. According to the actual situation in the study area, superposition of a height increment is used to fill the low-lying lands and reduce their impact. ArcGIS (Esri, Redlands, CA, USA) is used to derive the flow directions, flow accumulation, river networks, and sub-basins from the pretreated DEM data [27].

Land-Use Data
Land use can change the drainage conditions of watersheds [28]. In this study, the land-use and land-cover change (LUCC) data used are the land-cover products over China (250 m × 250 m resolution) provided by the Cold and Arid Regions Science Data Center at Lanzhou.

Meteorological Data
The WRF weather forecast data used in this study are derived from the China Meteorological Assimilation Driving Dataset (CMADS) official website (http://www.cmads.org/nr.jsp) [29]. Since the latest version of the data source has only been updated through 2010, the simulation and validation period of this study extends through 2010 only (2010 included); the 2016 datasets will be updated soon.

Soil Data
The Harmonized World Soil Database (HWSD) is resampled by ArcGIS and interpolated to a spatial resolution of 30 m, which meets the model's accuracy requirements [30].

Snow Feature Extraction
The model requires corresponding snow information, including the snow area, snow depth, and snow density; the first two are interpreted and retrieved from MODIS, while the snow density is measured by field observation.

Snow Area
The snow cover index method and the images from the fourth and sixth bands of MODIS are used to calculate the normalized difference snow index (NDSI) [24]:

NDSI = (R_4 − R_6)/(R_4 + R_6),

where R_4 and R_6 are the albedos of the fourth and sixth bands, respectively.

Snow Depth
The snow depth is retrieved through the relationship between the snow depth and its impact factors, as shown by Li's equation [31]:

SD = A_1 X_1 + A_2 X_2 + … + A_n X_n,

where SD is the snow depth; X_1, X_2, …, X_n are the impact factors; and A_1, A_2, …, A_n are the regression coefficients. The study area is covered by grass and a small amount of forest; a model that combines the above relationship has been derived for each cover type, including the grassland.
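A minimal sketch of the two snow retrievals just described follows: the NDSI from MODIS band-4 and band-6 reflectances and a snow-depth estimate as a linear combination of impact factors. The 0.4 snow-classification threshold is a commonly used MODIS value and an assumption here, as are the function names and the idea of user-supplied coefficients; the paper's fitted coefficients for the grassland and forest covers are not reproduced in the text.

```python
import numpy as np


def ndsi(r_band4, r_band6):
    """Normalized difference snow index, NDSI = (R4 - R6) / (R4 + R6)."""
    r4 = np.asarray(r_band4, dtype=float)
    r6 = np.asarray(r_band6, dtype=float)
    return (r4 - r6) / (r4 + r6)


def snow_mask(r_band4, r_band6, threshold=0.4):
    """Boolean snow-cover mask; the 0.4 threshold is an assumed, commonly
    used value, not one stated in the paper."""
    return ndsi(r_band4, r_band6) > threshold


def snow_depth(impact_factors, coefficients):
    """Snow depth as a linear combination of impact factors,
    SD = A1*X1 + ... + An*Xn, with regression coefficients supplied by the user."""
    x = np.asarray(impact_factors, dtype=float)
    a = np.asarray(coefficients, dtype=float)
    return float(np.dot(a, x))
```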
RESULTS AND DISCUSSION
4.1 Comparison of Numerical Forecasts
The WRF data are compared to the CMA/NMC T639L60 simulated data (http://www.weather.gov.cn/publish/nwp/t639/n-h/mslp.html) and the observational data from weather stations. The main parameters involved are the solar shortwave radiation, air temperature, soil moisture, soil temperature, relative humidity, and precipitation.

The error of the data is verified by the root-mean-square error (RMSE), mean absolute error (MAE), extreme values, means, and regression analysis. The RMSE and MAE are calculated as

RMSE = sqrt((1/n) Σ (y_sim,i − y_obs,i)²), MAE = (1/n) Σ |y_sim,i − y_obs,i|,

where y_sim,i and y_obs,i are the simulated and observed values, respectively, and n is the number of samples.

Comparison of Shortwave Radiation
In order to analyze the scientific quality and consistency of the results, the data recorded during the period 01:00-12:00 every day between March 1 and 30, 2010 were selected to compare the shortwave radiation. A comparison of the shortwave radiation obtained from the WRF simulated values and the observed values is shown in Fig. 4. It can be seen from Fig. 4 that the WRF simulated value was higher on March 18 and 19, which was mainly due to snowfall in the experimental area. In the simulation period, the resolution of the elevation data used in the WRF was 1000 m, which can also generate corresponding errors. It can be seen from Tab. 1 that the differences between the WRF and observed values regarding the means and extreme values are within 50 W·m⁻², the MAE is within 95 W·m⁻², and the overall error of the shortwave radiation is 2.512 %.

A regression analysis of the observed values and the WRF simulated values was conducted using Statistical Analysis System (SAS) software (Tab. 2), resulting in the linear equation y = 0.95491x + 26.71162. The correlation between the equation and the variables was verified by the F test with a confidence level of 0.0001, which shows that the snowmelt model can meet the required level of prediction accuracy.

Comparison of Air Temperature
The observed temperature 2 m above the ground in the experimental field was compared with the variable T2 in the WRF. The results show that the systematic error of the WRF is approximately 3 °C. The corrected simulated values of the WRF and the observed values are shown in Fig. 5. It can be seen from Tab. 3 that the accumulated error of the simulation is 394.095 °C and the mean error is about 0.5 °C. The MAE of the simulated values is 3.471 °C, that of the corrected values is 1.334 °C, and the average correction is 0.456 °C. These results show that the difference between the corrected simulated data and the observed data is not large. For the regression analysis of the WRF simulated values and the observed values of air temperature (Tab. 4), the goodness of fit, R², was 0.9409, and the correlation between the equation and the variables was verified by the F test with a confidence level of 0.0001.

Comparison of Relative Humidity
The relative humidity of the air contributes greatly to the evapotranspiration of the snow-melting process. A comparison of the relative humidity obtained from the WRF simulated values and the observed values is shown in Fig. 6. It can be seen from Tab. 5 that the maximum error is small, the mean error is 2.3 %, and the RMSE is 0.27 %. A regression analysis of the observed values and the WRF simulated values was conducted using SAS software (Tab. 6), resulting in the linear equation y = 0.79963x + 0.1834, with a goodness of fit, R², of 0.8134, which meets the required accuracy.
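The error metrics and regression comparisons used throughout this section can be expressed compactly; the sketch below is illustrative only (the function and variable names are not the authors'), and assumes paired simulated and observed series of equal length.

```python
import numpy as np


def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))


def mae(sim, obs):
    """Mean absolute error between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.mean(np.abs(sim - obs)))


def linear_fit(obs, sim):
    """Least-squares line sim ~ slope * obs + intercept, plus R^2,
    mirroring the regression comparisons reported in Tabs. 2, 4 and 6."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    slope, intercept = np.polyfit(obs, sim, 1)
    pred = slope * obs + intercept
    ss_res = np.sum((sim - pred) ** 2)
    ss_tot = np.sum((sim - np.mean(sim)) ** 2)
    return float(slope), float(intercept), float(1.0 - ss_res / ss_tot)
```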
Comparison of Soil Temperature
The soil temperature also contributes greatly to the speed of the snow-melting process. A comparison of the soil temperature obtained from the WRF simulated values and the observed values is shown in Fig. 7. It can be seen from Tab. 7 that the maximum error is within 0.3 °C, the MAE is 0.18 %, and the RMSE can be regarded as zero.

Comparison of Soil Moisture
It can be seen from Fig. 8 that the observed values were not significantly different from the WRF simulated values before the melting of the seasonal frozen soils. The observed values are higher than the simulated values after March 24, which can be interpreted as the impact of snowmelt-water infiltration on the soil moisture during the warming process. Because of the large diurnal temperature difference, infiltration of snowmelt water during the day increases the soil moisture, while freezing of the water at night decreases it; this is why there are diurnal fluctuations in the curve. After March 27 the trend became gentler owing to the decreasing temperature and reduced snow melting. After that, the soil moisture continued to increase and the diurnal fluctuations appeared again. A comparison of the soil moisture obtained from the WRF simulated values and the observed values is shown in Fig. 8. It can be seen from Tab. 9 that the maximum error is within 0.1, the mean error is 0.05, the minimum error is within 0.07, and the RMSE is 6.5 %.

Analysis of Runoff Model
This study not only verifies the accuracy of the outflow of the watershed, but also compares the variation of the snow water equivalent in the study area with the corresponding MODIS remote sensing data.

Simulation of Snowmelt Runoff
The snow-melting process of this model runs at an hourly resolution, which increases the difficulty of observation. Therefore, the observed data during the flood peak are used in the experiment, and the observed data during other periods are averaged. However, due to the influence of the ablation of the frozen soils, the forecasted flood peak is overestimated. This problem will be addressed by an improvement to the modeled frozen soil layers.
Figure 2 Structure of the double-layer snowmelt model
Figure 3 Calculation flowchart of the snow-melting process
Figure 4 Comparison of shortwave radiation obtained from WRF simulated values and observed values
Figure 5 Corrected WRF simulated values and observed values of air temperature
Figure 6 WRF simulated values and observed values of relative humidity
Figure 7 WRF simulated values and observed values of soil temperature
Figure 8 WRF simulated values and observed values of soil moisture
Figure 9 Spatial distributions of snow water equivalent in the study area from MODIS data and simulation in 2009
Figure 10 Spatial distributions of snow water equivalent in the study area from MODIS data and simulation in 2010
Figure 11 Simulated and observed processes of snowmelt runoff in 2009

4.7.1 Spatial Distribution of Simulated Snowmelt
The snow water equivalent of typical snow-melting periods in 2009 and 2010 was simulated and validated. The validation was conducted using the mean observed snow density for the same periods, and the simulated snow water equivalent was compared with the MODIS data (Figs. 9 and 10). Comparisons of the snowmelt runoff obtained from the simulated and observed values for 2009 and 2010 are shown in Figs. 11 and 12, respectively. It can be observed from Figs. 11 and 12 that the simulated snowmelt runoff process line corresponds well with the observation data. Figs. 13 and 14 show that the qualification ratios (QRs) of the forecasted snowmelt runoff in 2009 and 2010 were 87 % and 90.85 %, respectively, and the distortion ratio is less than 20 % for the daily snowmelt runoff simulation.
Figure 12 Simulated and observed processes of snowmelt runoff in 2010
Figure 13 Accuracy of forecasted snowmelt runoff in 2009
Figure 14 Accuracy of forecasted snowmelt runoff in 2010

5 CONCLUSIONS
In order to predict spring snowmelt floods on the northern slope of the Tianshan Mountains, a two-layer distributed snowmelt runoff model (TSRM) based on the energy and mass balance has been established for the Juntanghu Watershed on the north slope of the Tianshan Mountains. A simulation was conducted by coupling the WRF with the TSRM. The conclusions of this study are as follows:
(1) The WRF-TSRM mode can provide an effective forecast and early warning for snowmelt flood runoff on the north slope of the Tianshan Mountains. However, the WRF can only provide data with a resolution of 1 × 1 km², which is larger than the hydrological unit on the grid scale. The data-matching problem could be the key to improving the simulation.
(2) During verification, it was found that the simulation tends to overestimate the runoff during the later period of the flood peak. This is because the ablation state of the frozen soil layer below the snow layer differs. It is recommended that the role of the ablation of the frozen soil in the runoff become an important area for future research. In the future, more modules describing the mechanisms of the frozen-soil freeze-thaw process will be embedded in the two-layer distributed snowmelt runoff model.
(3) The impact of snow cover and seasonal frozen soil is also important. The threshold of the water-conservation capacity of the snow cover changes with the snow-layer structure, and the infiltration capacity of the snowmelt also changes significantly under the influence of the seasonal frozen soil, both of which increase the uncertainty of the forecast. Therefore, it will be necessary to conduct an uncertainty analysis to further improve the model.
The WRF-TSRM mode proposed in this study fully combines the advantages of the atmospheric and land-surface models, which provides important scientific and technological support for snowmelt flood forecasting on the northern slope of the Tianshan Mountains. However, because the WRF data only covered 2010, we did not validate and improve the WRF-TSRM mode using more observational data. We will make more improvements after collecting more data.

Table 1 Statistical analysis of WRF simulated values and observed values of shortwave radiation
Table 2 Regression results of WRF simulated values and observed values of shortwave radiation
Table 3 Statistical analysis of WRF simulated values, corrected values, and observed values of air temperature
Table 4 Regression analysis of WRF simulated values and observed values of air temperature
Table 5 Statistical analysis of WRF simulated values and observed values of relative humidity
Table 6 Regression analysis of WRF simulated values and observed values of relative humidity
Table 7 Statistical analysis of WRF simulated values and observed values of soil temperature
Table 8 Regression analysis of WRF simulated values and observed values of soil temperature (goodness of fit, R², of 0.9853, which meets the required accuracy)
Table 9 Statistical analysis of WRF simulated values and observed values of soil moisture (goodness of fit, R², of 0.8853, which meets the required accuracy)
Table 10 Regression analysis of WRF simulated values and observed values of soil moisture
2018-12-13T09:55:42.408Z
2018-02-10T00:00:00.000
{ "year": 2018, "sha1": "2a7b34eaf7a3c4af3b387a6caf3309de6145d700", "oa_license": "CCBY", "oa_url": "https://hrcak.srce.hr/file/285630", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2a7b34eaf7a3c4af3b387a6caf3309de6145d700", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
13323510
pes2o/s2orc
v3-fos-license
Prognostic Impact of DNA-Image-Cytometry in Neuroendocrine (Carcinoid) Tumours Establishing prognosis proves particularly difficult with neuroendocrine tumours (NETs) as a benign looking histology can be associated with a malignant behaviour. In order to identify prognostic factors we examined 44 gastrointestinal and pulmonary, paraffin‐embedded NETs histologically and immunohistochemically. DNA‐image‐cytometry was used to examine 40 of these. We found that poor differentiation (corresponding to a Soga and Tazawa type D) and infiltrative growth correlated with a poorer prognosis. Moreover, parameters determined by diagnostic DNA cytometry like the 5c‐exceeding rate, the 2c‐deviation index, DNA‐grade of malignancy, DNA‐entropy and the type of DNA histogram were found to be of prognostic relevance. Morphometric parameters like the form factor and the mean nuclear area were relevant for survival, tumour recurrence and metastasis. However, in the multivariate analysis the only independent risk factor was the histological differentiation. The 5c‐exceeding rate is a good objective risk factor, which can be used particularly in cases in which only a fine needle biopsie is available. Direct comparison of the histology and the 5c‐exceeding rate in the multivariate analysis suggests that the 5c‐exceeding rate taken as sole prognostic factor might be of higher prognostic relevance than the histology but larger studies are needed to confirm this. Introduction Neuroendocrine tumours (NETs) are rare lesions and occur with an incidence of 2.0/100,000 for men and 2.4/100,000 for women according to a recent Swedish study [16]. The most important problem concerning these tumours is that lesions with the same histological appearance can behave completely different. Therefore, despite new classification concepts and some progress concerning the knowledge of the molecular base of NETs it is impossible to predict their behaviour with certainty [20]. This is particularly true when only biopsy material is available which does not allow an evaluation of the invasive edge of the lesion. In the literature there are many studies evaluating prognostic factors of NETs. Localization, local invasiveness, the size of the lesion as well as the mitotic count have been regarded as the most decisive factors [20]. However, there are reports on cases with large mesen-terial lymph node deposits originating from ileal NETs of a few millimetres in size [27]. In the present study we asked whether diagnostic DNA image cytometry is better at predicting the biological behaviour of NETs than morphological parameters. Material In this retrospective study we examined the paraffinembedded NETs of 44 patients which had been removed endoscopically or by open surgery. To obtain follow-up data we reviewed the hospital notes, sent questionnaires concerning clinical presentation and progress to the referring doctors, and we contacted the registrars' office concerning survival data. The follow-up varied between 0 to 108 months. One patient was lost to follow-up as he had moved away. The questionnaires were answered in 28 cases. Two patients had been treated with chemotherapy and two with chemotherapy and interferon for secondary tumours. Morphology and immunohistochemistry We reviewed all the routinely performed HE stained sections to confirm the diagnosis and classify the tumours according to Soga and Tazawa. They classified NETs (carcinoids) depending on their growth pattern into five different types. 
Tumours of type A have solid, nodular growth pattern, type B a trabecular, and type C a tubular or acinar growth pattern. Tumours of type D are "atypical" with many mitoses, a solid growth pattern, and may have necroses. They represent the highly differentiated neuroendocrine carcinoma of the current classification concept. Type E stands for any tumour which shows two or more of the above histologies [26]. Goblet cell carcinoids formed another group. We also assessed whether the tumour grew invasively or had set metastases. For the immunohistochemistry we used cuts of 5-8 µm from each tumour. The slides were deparaffinated and rehydrated. Then the endogenous peroxidase was inhibited with a mixture of 1% H 2 O 2 and methanol and rinsed in PBS. The slides were incubated for 60 minutes with the primary antibody at room temperature. All the primary antibodies apart from chromogranin (Histoprime) were from DAKO. The cuts were then rinsed three times in PBS and incubated with the secondary antibody anti-rabbit or antimouse IgG (both from Dianova) in a dilution of 1 : 40 for 45 minutes. Once more the slides were rinsed three times with PBS and then incubated for 5 minutes with DAB. The reaction was stopped under running tap water. A counter stain with haemalum was performed and the slides were mounted. DNA-image-cytometry DNA-cytometry was performed on cytospins prepared from nuclear suspensions after enzymatic cell separation using pepsine as described earlier [4]. Due to lack of material in four cases it was only performed on 40 samples. For preparation we cut around the tumour outlines in the paraffin block and prepared the nuclear suspension from two to three 70 µm sections. Small tumours were cut out from the blocks completely. The suspension was applied on slides covered with poly-L-lysine and these were air-dried. Afterwards the slides were stained in a modified Varistain 24-3 staining machine. Initially they were fixed according to Böhm [7], then hydrolysed in 4 N HCl for 55 minutes at 27.5 • C. This was followed by staining according to Feulgen during 60 minutes using pararosanitine as dye [4]. Measurement was performed using the DNA-cytometer CM-1 (Hund, Wetzlar). This was developed at RWTH Aachen in cooperation with the company ABOS (Munich, Germany). It consists of a microscope Hund 500 LL, equipped with an interference filter with a 570 ± 10 nm half-value width and a device that keeps the voltage at the halogen lamp constant. An objective with 40× magnification was used. The microscope is connected via a calibrated camera and a monitor with a personal computer for image analysis. Nuclei to be measured were marked with a cursor on the monitor. Integrated optical densities (IOD's) were measured for each nucleus after automatic segmentation of its contour using a watershed algorithm. Nuclei were thus detected using their local background for IOD measurements. A software-based glare error correction was performed. In each slide 20 granulocytes were measured for internal calibration. The mean IOD of the granulocytes was defined as the 2c DNA content of a normal diploid nucleus. We did not use a correction factor. The CV-values of the IOD's of the reference cells were below 5% [15]. In each slide 200 tumour cells were measured. 
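From the measured integrated optical densities, DNA-cytometric indices such as those listed in the next paragraph can be derived. The sketch below normalizes each nucleus to c units using the mean IOD of the internal granulocyte reference (defined as 2c) and then computes the 5c exceeding rate and the 2c deviation index. The exact formulas used by the authors (after Böcking) are not reproduced in the text, so the definitions shown are the commonly cited ones and should be read as assumptions, as are the function names.

```python
def ploidy_values(tumour_iods, reference_iods):
    """Convert integrated optical densities (IODs) to DNA content in c units,
    calibrating the mean IOD of the internal granulocyte reference to 2c."""
    ref_mean = sum(reference_iods) / len(reference_iods)
    return [2.0 * iod / ref_mean for iod in tumour_iods]


def five_c_exceeding_rate(c_values):
    """Percentage of measured nuclei whose DNA content exceeds 5c (5cER)."""
    exceeding = sum(1 for c in c_values if c > 5.0)
    return 100.0 * exceeding / len(c_values)


def two_c_deviation_index(c_values):
    """Mean squared deviation of the DNA content from the normal 2c value
    (2cDI), following the commonly used definition."""
    return sum((c - 2.0) ** 2 for c in c_values) / len(c_values)
```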
We calculated the position of the first and second DNA-stemlines, the 5c exceeding events (5cEE) and 5c exceeding rate (5cER), the 2c deviation index (2cDI), the DNA-entropy, the DNA-malignancy grade, the mean DNA-content of the tumour cells, the mean form factor, the mean nuclear area, the standard deviation of the DNA-ploidy, the coefficient of variation of the DNA-ploidy and the DNA-stemline interpretation according to Böcking [4-6,28].

Statistical analysis
The statistical analysis was performed using Systat Student version 1.0, except for the Cox regression, which was performed using SPSS software version 10. P values below 0.05 were considered statistically significant. All patients were included in the statistical analysis. When only a local, endoscopic resection of the tumour was performed, we considered the lymph node status to be unknown. Correlation between two parameters was measured with Pearson's Chi-Square, while significance of the Kaplan-Meyer curves was measured by Tarone-Ware. The thresholds for the different DNA variables were derived from Kaplan-Meyer statistics with balanced groups.

Study population
Of the 44 patients, 20 (45.5%) were men and 24 (54.5%) were women; 65.9% of the patients survived. One patient was lost to follow-up via the registry as he moved to a different country. Of the questionnaires that had been sent to the general practitioners, 28 were answered. In 13.8% of the cases the patients suffered from a secondary tumour. In eight cases death was due to metastatic spread of the NET or of secondary tumours. In five cases the cause of death was not known, and one patient died of a post-operative cardio-vascular shock. The mean age was 54.9 years, with a range from 19 to 79 years. An age of 55 years or above was associated with a shorter survival time (p < 0.002) and a higher incidence of local (p = 0.012) and distant metastases (p < 0.03) and tumour recurrences (p < 0.006). Eight of our patients with NETs of the appendix were in the younger age group. To ensure that the difference in prognosis was not due to the higher incidence of appendiceal NETs in the younger group, we checked whether these differences persisted once these patients were excluded. Even then there remained a difference in survival, metastases and recurrences depending on age, although only the difference in survival remained statistically significant (p < 0.02).

Localization and therapy
The distribution of the tumours throughout the pulmonary and gastro-intestinal tract is shown in Table 1. NETs of the appendix have a good prognosis as long as the tumour does not extend to the coecum. In the latter case the frequency of local recurrences (p = 0.077) and metastases (lymph node, p = 0.013; distant, p < 0.04) increases compared to localized appendiceal NETs and to NETs of other localizations. There was a tendency towards longer survival of the patients with NETs solely affecting the appendix compared to patients with tumours in other localizations, but the difference was not statistically significant (p < 0.066). If the tumours were located in the jejunum or ileum, the chances of survival were worse than in other localizations (p < 0.045). The patients who had a palliative operation had a poorer prognosis than patients with a curative or endoscopic operation (p < 0.00065). The frequency of secondary metastases in the lymph nodes (p < 0.003) and in other organs (p < 0.002) was higher with palliative operations than with radical operations and endoscopic resection.
Size and histology The size of the tumours had been reported in only 24 cases. The smallest tumour was 0.3 cm in diameter, the largest 9 cm and the average was 3.6 cm. Unlike other authors we could not find a correlation between tumour size and survival, metastasis or recurrences but this is probably due to our small numbers. Most of the tumours, i.e. 26, were of the mixed type E (65.9%), seven of which had parts that were poorly differentiated (type D). Nine were of the type AC and 6 of the type AB. Twelve tumours (27.3%) were of type A, two of type B, one of type C. We also had three goblet cell carcinoids, one of which consisted purely out of goblet cells and the others had parts, which corresponded to type B or type AB, respectively. We found that NETs which were poorly differentiated or of the goblet cell type were more frequently associated with lymph node metastases (p = 0.001), distant metastases (p = 0.0002) and a higher lethality (p < 0.00001) than other types (Fig. 1). Infiltrative growth of the tumours was also associated with more frequent distant metastases (p < 0.04) and higher lethality (p = 0.006). Immunohistochemistry NSE immunoreactivity was found in 41 of 44 cases (93.2%) and chromogranin immunoreactivity in 31 of 44 cases (70%). There was no evidence that the expression of these antigenes was associated with a different prognosis. In case of the peptide hormones a positive reaction was found for pancreatic polypeptide and somatostatin in 25%, serotonin in 40.9%, glucagon in 13.6% and for gastrin in 6.8% of the carcinoids. Tumours of the pancreas, papilla vateri, appendix and rectum frequently produce several hormones. The survival of patients with tumours which reacted to pancreatic polypeptide seemed to be somewhat better (p < 0.05) but was not reflected in a significantly lower frequency of local tumour recurrence or lower incidence of metastases. In half of our patients a positive reaction to S-100 was found and in 55% we found sustentacular cells in the tumour tissue. Metastasis into other organs occurred less frequently in patients with S-100 positive tumours (p = 0.002) or tumours that contained sustentacular cells (p < 0.02). DNA-cytometry The histograms showed three different patterns of DNA-distribution (Fig. 2). The first showed a single DNA-stemline near 2c and only a few values at 4c. The second revealed a smaller stemline around 4c besides that at 2c, the latter containing most of the cells and the third pattern showed a second stemline at 4c comprising more cells than the first one (Fig. 2). Whereas pattern 1 corresponds to the euploid-diploid-type, pattern 2 is identical to the euploid-polyploid-type, and pattern 3 is the aneuploid-peritetraploid DNA-histogramtype as defined by the 4th ESCAP-consensus-report [15]. 43.2% of our cases revealed just one stemline around 2c. The DNA-stemlines were found at 1.97c on average. Two stemlines, the first of which was larger than the second, we found in 43.2%. The least num-ber of cases (11.4%) showed the third pattern of DNAdistribution with two stemlines, the second of which was larger than the first. The NETs in the three different groups showed a significantly different behaviour concerning the frequency of recurrences, metastases and survival (Table 2, Fig. 3). Mean DNA-ploidy, the position of the DNA-stemline and the coefficient of variation of the DNA-stemline did not correlate with prognosis. 
On the other hand, a 2cDI greater than 0.4 (p < 0.003), a 5cER of 0.5 or greater (p < 0.005), a DNA-malignancy grade of 0.26 or greater (p < 0.003) and a DNA-entropy greater than 3.55 (p < 0.004) were associated with a higher mortality (Fig. 3). The higher mortality associated with these different DNA parameters was also reflected in a higher frequency of metastasis and local recurrence (Table 2). A 5cER > 1.5, corresponding to more than three 5cEE, was related to a more marked difference in mortality, and all of the tumours in this group metastasised to the lymph nodes (p = 0.002) and into other organs (p = 0.001). Compared to the 5cER > 0.5, this test was therefore more specific but less sensitive (Table 2).

Morphometry
A higher mortality was found for the cases in which the mean nuclear area exceeded 38.6 µm² (p < 0.03) and for those in which the form factor was 1.27 or above (p < 0.04) (data not shown). There was no correlation between the morphometric parameters and the frequency of tumour recurrence or of metastases.

Cox model
All the parameters which were relevant for the prognosis in the univariate analysis were entered into the Cox model and tested for their independence. Unfortunately, the analysis had to be stopped after the sixth step due to the limited number of cases. At that stage all six parameters entered were still independent and were associated with quite high relative risks (Table 3).

Discussion
Prediction of the biological behaviour of neuroendocrine tumours (NETs) is still a difficult matter. The aim of the present study was to evaluate DNA cytometry as a prognostic tool in NETs of different localizations, and to compare its value with other prognostic factors known from the literature. We identified patient age, localization of the tumour, structure of the invasive edge, histological type, invasion of the coecal base in appendiceal NETs, S-100 reactivity, the presence of sustentacular cells, and DNA cytometry as prognostically important parameters. The most important finding of our study was the highly significant and multivariately independent prognostic importance of DNA cytometry. A 2cDI > 0.4, a DNA-malignancy grade of 0.26 or greater, a DNA-entropy > 3.55 and a 5cER of 0.5 or greater were of prognostic relevance. Similar to the study of Stipa et al., we found that a 5cER > 1.5% was the most specific but also the least sensitive parameter [29]. The 5cER of 0.5 or greater also correlated with the incidence of metastases and tumour recurrences. The 5cER has been found to be a good marker of malignancy in studies examining other tumours [13,24]. In the literature there are contradictory data on the importance of DNA cytometry in NETs. The largest study is by Padberg et al., who found a significant association between survival and tumour size, localization, TNM-stage, and DNA-cytometric classification as type I/II or type III/IV according to Auer [23]. Two investigations using flow cytometry revealed a correlation between DNA-aneuploidy and staging, tumour size, depth of invasion and also mortality in colorectal carcinoid tumours [8,31]. Blöndal et al. examined 15 NETs of the lung by flow cytometry and found that the benign tumours tend to be diploid [3]. Other studies found a relationship between the percentage of tumour cells lying above 2.5c and the survival time in patients with metastasized intestinal NETs [11,22]. Alanen et al., who examined NETs of the pancreas, found that the DNA-index was >1.8 in the group of patients that died after 6 years of follow-up [2]. The group of Stipa et al.
examined gastrinomas and insulinomas of the pancreas and found a correlation between metastatic spread, a 5cER > 1% and a DNA-stemline-ploidy > 2.5c. The ploidy was the more sensitive but also less specific prognostic factor [30]. However, there are other studies which did not show any correlation between DNA-cytometric parameters and prognosis [10,12,17]. In the study of Fitzgerald et al. 95 tumours were examined and no connection between DNA-cytometry and prognosis was found. They did not differentiate though between non-diploid and frankly aneuploid tumours which might blur the differences. These very different results are probably at least in part due to the fact that the definitions of DNA-aneuploidy varied from one study to another. The studies which established a correlation between DNA-parameters and prognosis tended to be image-cytometric studies. This might be an indication that the non-selective measurement of flow cytometric studies is inadequate to de-tect nuclei with minor DNA aberrations. That nearly all NETs have chromosomal imbalances has recently been shown by CGH analysis. Gains as well as losses were found [30]. The difficulty in assessing prognostic factors in a relatively rare disease like NETs is that prospective studies are difficult. When past cases are reviewed the degree and thoroughness of the followup tend to vary greatly. The quality of the information regarding lymph node and distant metastases as well as local recurrencies will therefore vary markedly from case to case and tend to obscure correlations of DNAmeasurements and outcomes. The classification into three different histogram types correlated well with prognosis. Patients with NETs of DNA histogram type III developed distant metastases in 80% and 50% suffered from local recurrence. In contrast, patients with type I NETs developed metastases in less than 6.5% and had no local recurrences. 5cER > 1.5 seemed the most useful tool for the identification of metastatic carcinoids. All the tumours in that group had developed metastases in the lymph nodes and in other organs. It was the first parameter which was taken into the Coxmodel. This demonstrates that in our study it is the parameter which taken on its own predicts the survival best. The different relative risks attributed to the different risk factors in the Cox-model were quite significant though they also showed a wide variation. As the multivariate analysis had to be stopped after the sixth step a full evaluation was not possible. Taken together, our data indicate that DNA cytometry is a useful objective tool to predict the prognosis of NETs. Other important factors were the histological differentiation, invasiveness and the localization of the tumour. DNA-cytometry can also be performed on smears of pre-operatively obtained (endo-) sonographically or CT-controlled fine-needle aspiration biopsies. Therefore not only a non-invasive specific diagnosis of NETs is possible (eventually by assistance of immunocytochemistry) but also the valid assessment of prognosis (occurrence of metastases, survival). Thus, in older patients or those who are not operable a tumourresection can be avoided if the DNA-pattern gives evidence of a favourable prognosis.
2017-10-05T00:00:52.761Z
2004-07-14T00:00:00.000
{ "year": 2004, "sha1": "921740f3cbbbb3cdffdaf4e7b32544041186a757", "oa_license": null, "oa_url": "https://doi.org/10.1155/2004/195478", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7282b2fd68219ea4150ebf9337f0f896c468efb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255962403
pes2o/s2orc
v3-fos-license
Breadth and magnitude of antigen-specific antibody responses in the control of plasma viremia in simian immunodeficiency virus infected macaques Increasing evidence suggests an unexpected potential for non-neutralizing antibodies to prevent HIV infection. Consequently, identification of functional linear B-cell epitopes for HIV are important for developing preventative and therapeutic strategies. We therefore explored the role of antigen-specific immune responses in controlling plasma viremia in SIV infected rhesus macaques. Thirteen rhesus macaques were inoculated either intravaginally or intrarectally with SIVMAC251. Peripheral blood CD4+ T-cells were quantified. Plasma was examined for viremia, antigen specific IgG, IgA and IgM binding responses and neutralizing antibodies. Regions containing binding epitopes for antigen-specific IgG, IgM and IgA responses were determined, and the minimum size of linear Envelope epitope responsible for binding antibodies was identified. The presence of neutralizing antibodies did not correlate the outcome of the disease. In a few SIV-infected macaques, antigen-specific IgG and IgM responses in plasma correlated with decreased plasma viremia. Early induction and the breadth of antigen-specific IgG responses were found to be significantly correlated with the control of plasma viral load. Immunoglobulin classes share similar functional linear B-cell epitopes. SIV-specific linear envelope B-cell epitopes were found to be 12 amino-acids in length. Early induction of combination of peptide-specific IgG responses were found to be responsible for the control of plasma viral load and indicative of disease outcome in SIV-infected rhesus macaques and might be important for the development of therapeutic strategies for control or prevention of HIV/AIDS. Background HIV-1 infection is associated with polyclonal B-cell activation, hypergammaglobulinemia, the presence of immature/ transitional CD10+ or exhausted CD27 negative B-cells in blood [1,2], exhaustion of tissue-like memory (CD20(hi)/ CD27(−)/CD21(lo)) B-cells [3], loss of total B-cell populations [4,5], and nonspecific switching from IgM to IgG, IgA and IgE responses. Our recent data demonstrated defective memory (CD21 + CD27+) B-cell proliferation in selective tissues in simian immunodeficiency virus (SIV)infected macaques [6,7]. Therefore, the maintenance of normal and effective humoral immune responses may be the key to the prevention and control of HIV/SIV infection. Recent reports emphasize that HIV-specific antibodies (Abs), instead of T-cell responses, may correlate better with protection in seronegative partners of HIV-1 infected individuals [8]. Moreover, other emerging studies demonstrate a correlation between anti-HIV antibodies and protection from infection, although these protective Abs are not strictly neutralizing in vitro [9]. Furthermore, lymphocytic choriomeningitis virus infection in the mouse model has also shown that non-neutralizing Abs elicited early in infection are capable of binding to the virus and limiting it's spread [10]. B-cell epitopes are described either as conformational (discontinuous, assembled) epitopes where multiple discontinuous amino-acids (aa) segments are folded to produce a unique conformational epitope complementary to the antibody, termed a contact epitope [11][12][13][14], or linear epitopes (continuous, sequential) where epitopes do not incorporate protein folding and can be represented by linear peptide sequence [15]. 
The continuous maturation of the immune response following SIV infection emphasizes the need to study the generation of SIV-specific Ab responses, antigen-antibody binding efficacy, and their potential importance in regulating disease progression. The present study was designed to determine the importance of total immunoglobulin, antigen-specific immunoglobulin responses against whole viral lysate (WVL), peptides corresponding to Env, Gag, Nef, and Tat and neutralizing antibodies (NAbs) in controlling plasma viral load (pVL) in SIV MAC 251 infected rhesus macaques (RMs). Regions containing binding epitopes for antigen-specific IgG, IgM and IgA responses during different stages of SIV infection were determined, and the minimum size of linear Env epitope responsible for binding Abs was identified. Our findings suggest that conformational IgG and IgM responses as well as breadth of different peptide-specific functional IgG responses are indicative of disease outcome. The presence of NAbs against neutralization-sensitive and resistant pseudovirus did not predict the outcome of the disease. Animals All animals in this study were housed at the Tulane National Primate Research Center (TNPRC) in accordance with the standards incorporated in the Guide for the Care and Use of Laboratory Animals [16]. All of TNPRC animal housing meets the Laboratory and Animal Biosafety Level 2+ requirements recommended for hepatitis, AIDS, and other viral agents related studies in the CDC/NIH publication "Biosafety in Microbiological and Biomedical Laboratories". The Tulane Institutional Animal Care and Use Committee (IACUC) of the TNPRC approved all animal procedures related to this manuscript. The TNPRC is fully accredited by the Association for the Assessment and Accreditation of Laboratory Animal Care (Animal Welfare Assurance A-4499-01). Virus inoculation, sample collections from animals were performed under the direction of veterinarians. Every effort was made to avoid unnecessary discomfort and pain to animals. At the TNPRC, animal discomfort or pain was alleviated by appropriate use of anesthetic medications. Rhesus macaques were sedated with ketamine (10 mg/ kg body weight) whenever they were removed from their home cage for routine blood collection, physical examinations or other minor procedures. Animals were euthanized humanely using the standard method of euthanasia for nonhuman primates, where animals were euthanized using Telazol and buprenorphine followed by a lethal intravenous dose of sodium pentobarbital. This method was consistent with the recommendation of the American Veterinary Medical Association Guidelines. Animal and tissue sampling Thirteen adult Indian RMs (Macaca mulatta), initially negative for SIV, HIV-2, type D retrovirus and simian T-cell leukemia virus 1 infection were inoculated with 300-500 TCID 50 of SIV MAC 251-CX intravaginally (Ivag) or intrarectally (IR) ( Table 1). Two out of 13 RMs were positive for Mamu-A*01 and majority were negative for Mamu-B*17 alleles. Mamu-A*01 and Mamu-B*17 alleles were found to be linked with lower viral set points and slow disease progression in SIV infected RMs [17][18][19][20]. However, the independent presence of Mamu-B*17 allele does not predict the outcome of SIV infection [21]. Intravaginally inoculated female macaques were treated with depot medroxyprogesterone acetate (30 mg intramuscularly) 28d prior to SIV exposure [22]. Sodium heparin and EDTA peripheral blood (PB) were collected at sequential time points for analysis described below. 
Plasma viral load quantification
Plasma viral RNA was quantified either by a bDNA signal amplification assay for the IR macaques (Siemens Diagnostics, USA) [22,23] or by a quantitative RT-PCR assay for the Ivag macaques (Wisconsin National Primate Research Center Virology Core laboratory) [24,25]. The lower limits of RNA detection for the bDNA and RT-PCR assays were 125 and 60 SIV-RNA copies/ml of plasma, respectively.

Neutralization assay
Neutralizing antibody (NAb) activity was measured in 96-well culture plates by using Tat-regulated luciferase (Luc) reporter gene expression to quantify reductions in virus infection in TZM-bl cells against the neutralization-sensitive SIVMAC251.6 and the neutralization-resistant SIVMAC251.30 pseudoviruses, using MLV-pseudotyped virus as a negative control for non-specific virus inhibition, as described previously [28]. TZM-bl cells were obtained from the NIH AIDS Research and Reference Reagent Program, as contributed by John Kappes and Xiaoyun Wu. Heat-inactivated plasma samples were diluted over a range of 1:20 to 1:43740 in cell culture medium and pre-incubated with virus (~150,000 relative light unit equivalents) for 1 h at 37 °C before addition of cells. Following a 48 h incubation, cells were lysed and Luc activity was determined using a microtiter plate luminometer and BriteLite Plus Reagent (Perkin Elmer). Neutralization titers are the sample dilution at which relative luminescence units (RLU) were reduced by 50% compared to RLU in virus control wells, after subtraction of background RLU in cell control wells.

Measurement of antigen-specific and total immunoglobulin responses
Antigen-specific IgG, IgA and IgM were detected in plasma using ELISA as previously described [5,29,30]. Purified SIVMAC251 WVL was used as the coating antigen (5 μg/ml, Applied Biosystems) to quantify antibody responses against conformational/discontinuous epitopes. Total IgG, IgM and IgA concentrations were measured by ELISA as previously described [5,7], where plates were coated with either anti-monkey IgG Fc/7s (Accurate Chemicals), IgM Fc (Accurate Chemicals) or IgA Fc (Alpha Diagnostic International) Abs. All samples were assayed in duplicate with appropriate positive and negative controls. For quantification of WVL-specific and total immunoglobulin responses, rhesus IgG, IgA (NIH Nonhuman Primate Reagent Resource) and IgM (Fitzgerald Industries International) standards were used. Nonlinear regression using a sigmoidal dose-response variable-slope model was used to interpolate concentrations from the standard curve. For WVL-specific Ig responses, positive values had to exceed the mean + 2SD of all animals' pre-infection readings for a specific antigen at an absorbance of 490 nm. To increase stringency and to account for variation between animals, the absorbance value for each individual animal also had to exceed two times the value for that animal prior to infection to be classified as positive.
[Table 1 fragment: animal T153, 9.5, F, 500 TCID50, +, -, -; necropsy note: amyloidosis in the small intestine and enterocolitis; lesions in lung typical of viral pneumonia with presence of SIV giant cells. Footnotes: a F and M denote female and male, respectively; b TCID50: tissue culture infectivity dose at 50%; ND: not done; "+" and "-" denote positive and negative results, respectively, for the respective column.]
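The two-part positivity rule for the WVL-specific ELISA described above translates directly into a simple check. The sketch below is illustrative only; the function and variable names are not from the paper, and it assumes that pre-infection OD readings are available for all animals and for the animal being scored.

```python
import statistics


def is_positive(od, pre_infection_all, pre_infection_same_animal):
    """Positivity call for an antigen-specific ELISA reading.

    The OD must exceed the mean + 2 SD of all animals' pre-infection readings
    for that antigen, and it must also exceed twice the pre-infection reading
    of the same animal, as described in the Methods."""
    cutoff_population = (statistics.mean(pre_infection_all)
                         + 2 * statistics.stdev(pre_infection_all))
    cutoff_individual = 2 * pre_infection_same_animal
    return od > cutoff_population and od > cutoff_individual
```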
Measurement of peptide-specific immunoglobulin responses SIV-Env (catalog-6883), SIV-Gag (catalog-6204), SIV-Nef (catalog-8762), and SIV-Tat (catalog-6207) 15-mers with 11-aa overlap peptides (NIH AIDS Research and Reference Reagent Program) were used as coating antigen. Plates were coated with peptide pools (PPs, 4-10 peptides per pool, 5 μg/ml of each peptide in 0.1 M sodium carbonate monohydrate, pH 9.6) to determine antibody responses generated against linear/continuous epitopes by ELISA as described earlier [5,29,30]. For all peptide-specific ELISA values, the positive responses were determined as specified above for WVL-specific responses. Cumulative OD values were calculated by the summation of OD/490 values for each positive peptide pool, after subtraction of the pre-infection value for each peptide pool in a protein. Statistical analysis Statistical significance for immunoglobulin quantitation data was determined using a one-way ANOVA. The Bonferroni method was used as a post hoc multiple comparison test for all means. Statistical differences between groups were tested using a two-tailed unpaired t-test. Pearson coefficient of determination analysis was performed to calculate correlation between pVLs and immunoglobulin responses and between pVLs and breadth of antigen-specific antibody responses using SAS software (version 9 in a windows environment). For all analyses, P values <0.05 were considered significant. Plasma viral load and peripheral CD4+ count demonstrate the progression of SIV infection in SIV-infected macaques Male to female/female to male and male to male sexual transmission via intravaginal and intrarectal route respectively are responsible for 75-85% of the HIV transmission [31]. In this study, we have used two groups of animals inoculated via Ivag (represents male to female transmission) or IR (represents male to male transmission) route to determine whether the inoculation route had any effect in regulating antigen-specific antibody and neutralizing antibody responses. We were also interested to determine whether the inoculation route had any impact on the breadth of immune responses. In all SIV-infected RMs, peak plasma viral replication (log 10 6.3-7.8 RNA copies/ml of plasma) was detected between 14 and 21 day post infection (dpi, Fig. 1a). Only one animal GN91 was able to control pVL to approximately 1 × 10 3 RNA copies/ml of plasma following the initial peak of viremia ( Fig. 1), whereas the remaining animals had plasma SIV-RNA greater than 10,000 copies/ml of plasma. Several of these animals (CL86, DE50, FK88, AE14, and P205) progressed to AIDS rapidly and were euthanized accordingly (Table 1). There were no difference in the mean viral set points (measured 25 to 95dpi) between Ivag and IR infected RMs. Following SIV MAC 251 infection, absolute CD4 count in peripheral blood decreased rapidly in the majority of the animals except GN91, in which CD4+ T-cell count decreased after 150dpi (Fig. 1a). Animal N107 had a low peripheral CD4+ T-cell count from the beginning compared to other macaques and by 18dpi the CD4+ T-cell count was below 200 cells/μl of blood and remained low throughout this study. The absolute CD4+ T-cell counts do not completely reflect the status of pVL as AE14 and BG21 had similar patterns of CD4+ T-cell count in peripheral blood, however the plasma viral load in AE14 remained 2.2 log higher than the viral load in BG21 from 68dpi onwards. 
Neutralizing antibody titers did not predict the disease outcome or the progression of disease All thirteen RMs were selected to perform NAb assays at several different time points after SIV infection ( Table 2). Neutralization of the highly neutralizationsensitive Tier 1A virus, SIVmac251.6 was detectable in all animals by 25-27 days post-infection and rose to very high titers over the time course of the samples received. Out of a total of 13 RMs, neutralization of SIV-mac251.30, which exhibits an insensitive Tier 3 neutralization phenotype that more closely approximates the infecting virus, was detected at 42 and/or 95dpi in 4 RMs and at 257dpi in 1 RM. A few of the titers were fairly high in RMs AE14, AP09, CL87 and P205. However, AE14 and P205 progressed to AIDS rapidly despite the presence of NAb titers against neutralizationresistant SIVmac251.30 pseudovirus (Table 2). We did not observe any difference in Nab titers between Ivag and IR infected RMs. In a few SIV-infected macaques, antigen-specific IgG and IgM responses in plasma correlated with decreased plasma viremia Total and WVL-specific IgG, IgM and IgA responses were measured in both IVag and IR inoculated SIV infected macaques ( Fig. 2 and Additional file 1: Table S1). Total IgG concentration remained largely consistent (IVag: 2989-9367 μg/ml; IR: 1533-4028 μg/ml) while SIV-specific IgG responses increased significantly during the course of SIV infection in both IVag and IR infected macaques (geometric mean values 1084 μg/ml at 257 days post infection (dpi) and 334 μg/ml at 272dpi in IVag and IR infection respectively; Fig. 2). Increased Table S1). However, SIV infected macaques showed an early peak in SIV-specific IgM concentrations in acute infection (Ivag: geometric mean value 1.1 μg/ml at 27dpi; IR: mean value 0.7 μg/ml at 25dpi; Fig. 2) that decreased at later time points. SIV-specific IgM concentrations were lower in animals infected via the IR route, however, the trend between the two inoculation routes remained same. Total IgA responses varied nonsignificantly throughout infection (IVag: 632-1388 μg/ ml; IR: 797-1093 μg/ml; Additional file 1: Table S1). However, SIV-specific IgA responses were detected in 5 out of 6 Ivag infected RMs and the level peaked at 73dpi. AE14 from IR infected RMs did not generate WVL-specific IgA responses and the overall IgA titer in rest of the IR infected animals was lower than Ivag infected RMs (Fig. 2). Pearson coefficient of determination analysis was performed to determine correlation between pVLs and immunoglobulin responses (20dpi to 257dpi and 25dpi to 272dpi for Ivag and IR RMs respectively, Fig. 1b) as reported earlier [32]. Total Ab responses in four (BC35, DE50, AE14 and AP09) out of thirteen macaques were negatively correlated with pVLs suggesting that total IgG, IgM or IgA had minimal role in controlling pVL (Fig. 1b). Similarly WVL-specific Ab responses in four (BC35, CL87, GN91 and AP09) out of thirteen RMs were negatively correlated with pVLs suggested that WVL-specific IgG or IgM also had minimal role in controlling pVL (Fig. 1b). Combination of different peptide-specific IgG responses found to correlate with the control of plasma viral load Similar to WVL-specific immunoglobulin responses, peptide-specific immunoglobulin responses measured by cumulative OD values were also correlated with pVLs for all RMs to determine if any of the peptide-specific responses may represent a key predictor for the control of plasma viremia. 
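To make this analysis concrete, the sketch below computes a cumulative OD value for one antigen (summing background-subtracted OD490 values over the positive peptide pools, as described in the Methods) and correlates it with log10 plasma viral load using Pearson's r. The use of scipy.stats.pearsonr and of log-transformed viral loads is an assumption for illustration; the authors performed the correlation analysis in SAS, and all names here are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr


def cumulative_od(pool_od, pool_od_preinfection, positive_pools):
    """Cumulative OD for one antigen at one time point: sum of background-
    subtracted OD490 values over the peptide pools called positive."""
    return sum(pool_od[p] - pool_od_preinfection[p] for p in positive_pools)


def correlate_with_viral_load(antibody_measure, plasma_viral_loads):
    """Pearson correlation between an antibody measure and log10 plasma viral
    load across sampling time points for one animal; returns (r, p-value)."""
    log_vl = np.log10(np.asarray(plasma_viral_loads, dtype=float))
    r, p_value = pearsonr(np.asarray(antibody_measure, dtype=float), log_vl)
    return r, p_value
```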
Cumulative IgG responses against Env, Gag, Nef and Tat linear peptides show statistically significant negative correlations in eight, four, three and one RM respectively (Fig. 1b). RM GN91 maintained pVL to approximately 10 3 RNA copies/ml, where Env, Gag, Nef and Tat specific IgG responses significantly upregulated, which might be playing a key role in controlling pVL. Significant negative correlations between Env-specific IgG responses and pVL in three disease progressing RMs (CL86, DE50 and FK88, Table 1) suggested that single antigen-specific responses were not adequate to control pVL and/or the disease progression and a combination of different peptide-specific IgG responses might be important in controlling pVL. Despite several significant negative correlations detected between antigen-specific IgM and IgA responses and pVLs, these independent responses did not predict the disease outcome (Fig. 1b). The breadth of IgG responses (IgG responses against antigens like Env, Gag, Nef and/or Tat) were also correlated with pVLs in all animals (Fig. 3). One (GN91) out of total 13 RMs had significant negative correlation between pVL and the number of antigen responses (r = −0.64, p-value = 0.035; Fig. 3b), which suggested that breadth of antibody responses had an impact in controlling pVL. Infact, three (Env, Gag and Nef ) antigen-specific IgG responses in GN91 were detected as early as 55dpi and by 126dpi IgG responses were prevalently detected against all four antigens (Env, Gag, Nef and Tat) and remained high throughout the time point of this study (Fig. 3a). No other animals had such early and increased breadth of IgG responses detected in this study. The breadth of IgG responses was evident more in Ivag compared to IR infected RMs where all Ivag animals had two or more antigen-specific IgG responses detected (Fig. 3a). AE14 RM from IR group had shown no detectable antigen-specific responses upto 236dpi. Another two RMs (N107 and P205) from IR group had shown selective Envspecific IgG responses throughout the study period. No significant correlation was detected between breadth of IgM or IgA responses and pVLs in any of these animals. (See figure on previous page.) Fig. 1 a Plasma viral load and absolute CD4+ T-lymphocyte counts over the course of 257 days for intravaginally (Ivag, n = 6, upper rows) and 272 days for intrarectally (IR, n = 7, bottom rows) SIV MAC 251 inoculated macaques were shown. One macaques, GN91 was able to maintain plasma viral load to approximately 1 × 10 3 RNA copies/ml of plasma following the initial peak of infection. The remaining macaques retained a high viral load throughout infection amongst which CL86, DE50, FK88, AE14 and P205 progressed to AIDS rapidly and were euthanized accordingly. Note GN91 was able to preserve peripheral CD4+ T-cell population after SIV infection for upto 150dpi. b Pearson coefficient of determination analysis between plasma viral load and either total immunoglobulins or antigen specific immunoglobulins were shown for all macaques following SIV MAC 251 inoculation for all time points. No significant correlation between plasma viral load and total or antigen-specific immunoglobulin responses were detected in two macaques (AP64 and P205) for all time specified above. Additionally, no significant correlations were found for WVL-specific IgA or peptide-specific IgM for Env and Nef. No Tat or Nef-specific IgA responses were detected. 
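The breadth measure used above is simply the number of antigens (Env, Gag, Nef, and/or Tat) with a positive IgG response at a given time point, which is then correlated with plasma viral load per animal. The sketch below restates that calculation; the positivity calls and viral loads are invented for illustration and are not the study's data.

Python code (illustrative sketch):
import numpy as np
from scipy import stats

ANTIGENS = ("Env", "Gag", "Nef", "Tat")

def breadth(positive_calls):
    """positive_calls maps antigen name -> True/False for one animal at one time point."""
    return sum(bool(positive_calls.get(a, False)) for a in ANTIGENS)

# Hypothetical trajectory resembling an animal that broadens its response over time.
calls_per_timepoint = [
    {"Env": True, "Gag": True, "Nef": True},               # e.g., 55 dpi
    {"Env": True, "Gag": True, "Nef": True},
    {"Env": True, "Gag": True, "Nef": True, "Tat": True},  # all four antigens detected
    {"Env": True, "Gag": True, "Nef": True, "Tat": True},
    {"Env": True, "Gag": True, "Nef": True, "Tat": True},
]
breadths = np.array([breadth(c) for c in calls_per_timepoint])
log10_pvl = np.array([5.4, 5.0, 4.2, 3.6, 3.1])            # matching hypothetical viral loads
r, p = stats.pearsonr(log10_pvl, breadths)
print(f"breadth vs. pVL: r = {r:.2f}, p = {p:.3f}")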
R and P denote Pearson R and probability values respectively for each row when correlated with plasma viral load. NS denotes non-significant P-value (p > 0.05). ND denotes a correlation cannot be calculated. Significant correlation values were shown in bold numbers Immunoglobulin classes share similar functional linear Bcell epitopes Individual 15-mer peptides (11-aa overlap) for SIV-Env, Gag and Nef antigens were used to identify functional linear B-cell epitopes using antigen specific ELISA protocol mentioned above (Figs. 4 and 5). In general, peptide-specific IgG responses were detected against multiple regions of all proteins and increased following SIV infection in both Ivag and IR inoculated macaques (Figs. 4b and 5b, f ). By 73dpi all Ivag infected RMs responded to both SIV-Gag and Env PPs with 100% responding to Env PP16 and 19 and Gag PP11 (Figs. 4b and 5b). Conversely, some macaques infected via the IR route did not respond to any PPs until 272dpi, when 100% of animals responded to Env PP16 (Figs. 4b and 5b). Peptide-specific IgM responses in Ivag infected macaques appeared during early infection and diminished overtime (Figs. 4c and 5c, g, i). Peptide-specific IgA responses were limited and inconsistent compared to peptide-specific IgG responses in Ivag and IR infected macaques and there were no detectable Nef or Tatspecific IgA responses (Figs. 4d and 5d). SIV infection Overall, the majority of the three immunoglobulin classes bound the same regions of Env peptides (e.g., first 31-aa in PP16, Fig. 4a). High Ab binding was also observed for Env-PP13, V1-V3 regions of gp120, and gp41. During SIV infection, the majority of the Env-specific IgG and IgA responses were generated against gp41, however, this was not statistically significant for either route of inoculation when compared to the binding with gp120 (Fig. 4b, p > 0.05). V2-V3 of gp120 played an important role in inducing antigen-specific IgM responses. Gag-specific (p15 and p27) immunoglobulin responses were limited, and most Ab responses occurred in the p2, p1 and highly conserved p8 and p6 regions Asterisks indicate statistically significant differences (p < 0.05) when compared to 13dpi (Fig. 5a). PP11 had important B-cell epitopes that bound to IgG, IgA and IgM Abs (Fig. 5a). The Gagspecific IgM responses were similar to that of IgG and IgA, where a moderately strong affinity for the Gag-PP7 region was detected during acute infection, however this response diminished in chronic infection (Figs. 5c & d). In the IR inoculated SIV-infected RMs, Nef-specific IgG responses were limited to BG21 and appeared late during infection (95dpi). However, Nef-specific IgG Fig. 3 a Plasma viral load and number of antigens responsible for IgG responses were plotted for each macaque over the course of 257 days for intravaginally and 272 days for intrarectally SIV MAC 251 infected macaques. Each bar represents breadth (the number of antigens (Env, Gag, Tat and/or Nef) positive in any combination for IgG responses) at that specific time point. Absence of any bar represents no antigen-specific IgG responses detected for that time point. b Pearson coefficient of determination analysis between plasma viral load and breadth of IgG responses were shown for all macaques following SIV MAC 251 inoculation for all time points. Note, GN91 had significant correlation values that were shown in bold numbers. Animal AE14 had no detectable IgG responses upto 236dpi against any SIV antigen and therefore no value of correlation was determined. 
R and P denote Pearson R and probability values respectively for each animal when correlated with plasma viral load. P value <0.05 was considered significant responses were detected early at 55dpi only in GN91, a RM from Ivag inoculated SIV-infected group. Moreover, Ivag inoculated SIV-infected RMs that survived upto 257dpi exhibited Nef-specific IgG responses at late chronic phase of infection (Fig. 5f). SIV-Nef epitopes varied between IgG and IgM and were scattered throughout the Nef protein (Fig. 5e). Tat-specific responses appeared as early as 73dpi in Ivag infected macaques compared to IR infected macaques where the responses were detected at 236dpi. GN91 was one of the several animals with strong responses to this antigen; however, it showed no differences in the early generation of these responses when compared to other animals. Functional linear Envelope B-cell epitopes are 12 amino acid in length SIV-Envelope epitope mapping was performed with plasma collected from 4 chronically Ivag SIV-infected macaques (BC35, CL86, CL87 and GN91, more than 150dpi) using the antigen-specific IgG ELISA protocol as described above. Five to 12-aa peptides from a 15mer SIV-Env peptide (sequence: KTVLPVTIMSGLVFH in V3 region) were synthesized commercially (GenScript, USA) and used as coating antigen at the concentration of 5 μg/ml to determine the exact length of a linear Bcell epitope. ELISA cutoff values were determined from SIV naïve macaques as described above. Positive IgG responses were detected when peptides with 12aa or Nef (e) proteins, indicating regions containing potential linear epitopes responsible for peptide-specific IgG, IgM and/or IgA responses were shown. Bracketed regions identify the amino acid sequences used to construct 15mer 10 peptide pools (PP1-13 for Gag and PP1-7 for Nef proteins). Colors refer to the percentage of animals exhibiting peptide-specific immunoglobulins for that region during infection with SIV MAC 251 as shown in the key. The Gag amino acid sequence (a) was divided into 6 proteins (p15, p27, p2, p8, p1 and p6) with two important regions, p8 (nucleocapsid region) and p6, shown in shaded area. The percentage of macaques with SIV-Gag peptide-specific IgG (b), IgM (c) and IgA (d) responses, SIV-Nef peptide-specific IgG (f) and IgM (g) and SIV-Tat peptide-specific IgG (h) and IgM (i) responses were shown for SIV infected macaques inoculated either intravaginally (n = 6) or intrarectally (n = 7). Responses to individual peptide pools (PP) were shown for each time point (days post infection; dpi). Colors refer to the percentage of animals exhibiting peptide-specific immunoglobulin responses for that region during infection with SIV MAC 251 as shown in the key. Note that Nef-and Tat-specific IgA responses were not detected greater in length were tested, which suggests that the minimum size of the functional linear epitope for eliciting antibodies was 12aa in length (Table 3). Individual 15-mer SIV-Env antigen was also used to determine the length of B-cell epitope responsible for binding responses (Fig. 4a). Discussion Identification of broadly neutralizing antibodies (bNAbs) directed at the Env protein provides a possible solution to generate antibody based vaccine strategies. However, all HIV vaccines assessed to date were unable to generate bNAbs. The contribution of bNAbs to the control of [33]. Moreover, SIV MAC 251 and SIV MAC 239 were shown to be relatively resistant to antibody-mediated neutralization by both autologous and MAbs treatment [34,35]. 
In our earlier study with either SHIV immunized or naïve SIV MAC 251 challenged macaques, none of the macaques were able to generate significant levels of neutralizing Abs against either pathogenic SIV MAC 251 or laboratory-adapted SIV MAC 251 [23]. In this study, the presence of NAbs did not predict the outcome of the disease and moreover the occurrence of those antibodies might be too late to control the infection at that point. GN91 was able to control pVL to approximately 1 x 10 3 RNA copies/ml of plasma and had significantly upregulated WVL-specific IgG and IgM responses compared to CL87 suggesting that binding Ab responses might be playing an important role in clearing pVL. Our overall data implies that combination of different peptide-specific IgG responses are predictive of the control of plasma viremia compared to the peptide-specific IgM and IgA responses. Our findings corroborate prior studies showing that while gp41 and gp120 show high levels of binding to IgG, there appears to be no correlation with control of early viral load, and subsequent Env-specific IgG responses had little impact on disease progression [36][37][38] when present without additional responses. Gag has been shown to have little antiviral function [39], however, it appears to be a strong target for antigen-specific immunoglobulin responses. The majority of immunoglobulins bind around the conserved NC, p8, and p6 regions of the Gag protein, which have several important functions in viral pathogenesis, such as viral replication, encapsulating the viral genome, aiding in reverse transcription, viral genome packaging, and viral budding [40]. While Gag is not a target for bNAbs, B-cell recognition of these highly important and conserved regions of the Gag protein may be indicative of a potential target for vaccine design. Limited data are available on the role of Nef-specific Abs in HIV disease progression. Absence of anti-Nef Abs was found to be associated with symptomatic HIV infection [41]. In our study, we identified increased and early Nef-specific IgG responses in GN91, which controlled pVL much earlier than other disease progressing RMs. Our findings, along with the recent single domain antibody study for the inhibition of Nef protein in a mouse model [42], indicate that Nef needs to be considered as an important target for a novel therapeutic approach in the prevention and control of HIV. Several studies demonstrated that Tat-specific antibodies are more common in individuals who are successfully controlling the disease, suggesting that Tat-specific Abs have a beneficial role in preventing disease progression [43][44][45][46][47][48]. In this study, we detected the presence of Tat-specific IgG in several animals, but as we have shown, it appears to be part of a large overall antigen-specific response that plays role in controlling pVL. Early induction of IgG responses against important targets and maintenance of those responses at higher magnitude might be crucial in reducing plasma viral loads and a possible predictor of disease outcome. We also asked whether linear epitopes for antigenspecific immunoglobulins identified in this study corresponded to bNAb targets. The b12 bNAb recognizes almost exactly the same region as CD4, where these conserved regions of the CD4bs [49] fall within Env-PP10 and PP11 and induced very limited peptide-specific IgM responses (Fig. 4). However, the CD4bs is highly conformational, whereas only linear epitopes were examined in this study. 
Similarly, three series of bNAbs, specifically PG9, PG16 [50], and CH01-CH04 [51] target conserved conformational regions within the variable loops (V1-V3) of gp120. While these interactions have been shown to be highly conformational, the analogous V1-V3 regions in SIV-MAC 239 were shown to be strong targets for peptidespecific IgG, IgA and IgM responses in SIV infected macaques (Fig. 4). Two bNAbs, 2F5 and 4E10, target the highly conserved MPER region of gp41 [52]. Linear epitopes for 4E10 and 2F5 on HIV-1 strain HXB2 represent NWFNIT and ELDKWA peptide sequences respectively [52], and these regions correspond to PP17 region in SIV-MAC 239 (665-678aa). Seventy percent of SIV-infected macaques generated antigen-specific IgG responses to this region, and several macaques also generated antigenspecific IgA responses to this region. Therefore, Envspecific immunoglobulins bind to several linear epitopes of recently identified bNAbs, with the exception of the CD4bs. In future studies, it will be important to define the plasma viral sequence data in SIV infected animals and determine if the presence of escape variant(s) have any correlation with the antigen-specific immunoglobulin responses. In summary, quantitative and qualitative immunoglobulin responses were detected in SIV-infected macaques, where increased IgG responses were measured specifically against the gp41 region followed by variable regions V1-V3 of gp120, and NC and p6 region of Gag protein. Antigen-specific IgM and IgA responses were more limited but targeted towards similar regions of the Env and Gag proteins. Several regions in Env protein strongly bind to immunoglobulins and are important epitopes for bNAbs. Early induction and increased breadth of antigen-specific IgG responses might be crucial to the control of plasma viral load and predictive of disease progression in SIV/HIV infection. Conclusions Our data strongly suggest that an early induction and increased breadth of peptide-specific IgG responses are indicative of disease progression. While Gag-specific responses may be argued to be an indirect indicator of HIV disease progression, recent identification of Abs against Nef and Tat proteins suggests that it may be possible to prevent and control HIV/AIDS by targeting the function of Tat and Nef proteins. Moreover, the presence of NAbs did not predict the outcome of the disease in our study where the generation of those antibodies might be too late to control the SIV infection. The experimental design does not address the role of T-cell responses and other innate immune responses in regulating plasma viral load in these SIV-infected animals. Additional file Additional file 1: Table S1. Quantification of total IgG, IgM and IgA following SIV MAC 251 infection in rhesus macaques (PDF 2059 kb)
2023-01-18T15:01:34.011Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "a0b004f1642b02e5b0049e298f7554e27306195b", "oa_license": "CCBY", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-016-0652-x", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "a0b004f1642b02e5b0049e298f7554e27306195b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
246078938
pes2o/s2orc
v3-fos-license
How does the Untire app alleviate cancer‐related fatigue? A longitudinal mediation analysis Abstract Objective A waiting‐list randomized controlled trial supported the effectiveness of the multimodal Untire app in reducing cancer‐related fatigue (CRF) in cancer patients and survivors. However, little is known about the causal mechanisms of different app components through which the intervention effect was achieved. We aim to examine whether specifically targeted factors (i.e., fatigue catastrophizing, depression, mindfulness, sleep, and physical activity) mediated the intervention effects of the Untire app on fatigue outcomes. Methods Seven hundred ninety‐nine persons with CRF were randomized (2:1) into intervention (n = 519) and waiting‐list control (n = 280) groups. Self‐report data on the primary outcome fatigue severity and interference and the abovementioned potential mediators were collected at baseline and 12 weeks. Participants who completed the 12‐week assessment were included in the analyses (intervention = 159; control = 176). We performed longitudinal multi‐categorical multiple mediation analysis using PROCESS macro to examine whether the potential mediators explained the overall intervention effects. Results Improvements in fatigue catastrophizing (bootstrap 95% CI (−0.110; −0.011)), depression (bootstrap 95% CI (−0.082; −0.004)), and mindfulness (bootstrap 95% CI (−0.082; −0.002)), significantly mediated the intervention effect on fatigue severity, whereas sleep quality (bootstrap 95% CI (−0.081; 0.009)), sleep disturbance (bootstrap 95% CI (−0.038; 0.029)), and physical activity (bootstrap 95% CI (−0.068; 0.000)) did not. Similar associations were found for fatigue interference. Conclusions Untire app access reduces fatigue severity and interference mainly by decreasing fatigue catastrophizing, depression, and by increasing mindfulness. Supporting these psychological mechanisms is crucial for reducing fatigue among cancer patients and survivors. | BACKGROUND Since cancer-related fatigue (CRF) is the number one side effect of cancer and cancer treatment, 1 much research has been targeted at determining effective interventions to ameliorate CRF symptoms. 2 In a recent waiting-list randomized controlled trial (RCT), 3 we found evidence for the effectiveness of the Untire self-management app in reducing CRF in cancer patients and survivors. Cancer-related fatigue is a profound, complex, multifactorial problem, with many etiologies involving physiological, biochemical, and psychological systems. 4 Therefore, the Untire app's content has a multimodal design, combining as a self-management toolbox various components hypothesized to attenuate fatigue. However, little is known about the causal mechanisms of these different app components through which the intervention effect was achieved. Effective CRF interventions include physical exercise and targeted psychological and mind-body treatments, whereas pharmacological interventions could not be advised due to limited effectiveness and side effects. 5 Evidence-based CRF interventions can address fatigue reduction through different mechanisms. 4 Exercise interventions aiming to reduce CRF in cancer patients assume that a lack of physical activity and physical breakdown during cancer (treatment) can worsen fatigue. 6 It is presumed that with exercise interventions, physical activity, strength, and fitness can be increased, 7 resulting in a reduction in CRF. 
Reduced CRF, in turn, allows patients to become more capable of carrying out the activities of daily living and thereby helping further improve the overall quality of life (QoL). 8 Cognitive-behavioral therapy (CBT) aimed at reducing CRF is based on the assumption that several fatigue-related cognitions (e.g., catastrophizing thoughts, depression 2 ) and behaviors (e.g., difficulties with goal setting, setting boundaries, poor sleep hygiene) are related to the persistence of fatigue. 9 Targeting emotions, behaviors, and cognitive processes with CBT is assumed to result in less dysfunctional thoughts about fatigue, contributing to fatigue reduction. 10 Mindfulness-based cognitive therapy (MBCT) can reduce CRF by fostering deep relaxation and sleep quality via breathing exercises, autogenic training (i.e., teaching the body to respond to own verbal commands fostering relaxation), as well as body scans (i.e., paying attention to parts of the body and bodily sensations in a gradual sequence from head to toes). 11,12 Despite the variety in research on all kinds of CRF interventions, only about 13% percent of studies to date investigated multimodal CRF interventions. 2 Although many potential mechanisms of CRF interventions can be assumed, to our knowledge, only a few studies have evaluated mechanisms that mediate treatment effects of CRF interventions. One RCT performing mediation analysis demonstrated that a physical activity intervention with yoga exercises could reduce cancer fatigue via reduced sleep disturbance. 13 A recent study on pooled data from three RCTs that tested the efficacy of CBT on CRF suggested causal mediation via increased (cognitive) self-efficacy and decreased fatigue catastrophizing, focusing on symptoms, perceived problems with activity, and depressive symptoms. 10 Another RCT on CBT showed that cancer patients experienced a significant increase in fatigue-related self-efficacy, with greater self-efficacy associated with decreased fatigue severity. Contrary, however, there was no evidence that changes in physical activity, exercise capacity, perceived physical activity, fatigue catastrophizing, or emotional functioning mediated CBT's effect on fatigue. 14 The Untire self-management app has a multimodal set-up that combines most of the abovementioned evidence-based treatment approaches to reduce cancer patients' and survivors' disabling fatigue. 15 The self-management app is based on face-to-face therapy for CRF, including energy conservation, activity management, restful sleep, mindfulness-based stress reduction (MBSR), psychosocial support, CBT, and physical activity exercises, 15,16 and is in line with the guidelines of the National Comprehensive Cancer Network. 17 | PURPOSE OF THE RESEARCH In the Untire app RCT, the primary analyses showed that the multimodal Untire app could effectively reduce fatigue symptoms (i.e., fatigue severity and interference). As a next step, we explore the potential mediating role of psychological factors (i.e., fatigue catastrophizing, depression, mindfulness, sleep quality, sleep disruption) and physical activity in lowering fatigue. | Ethical approval The Medical Ethical Committee of the University Medical Center Groningen (UMCG), the Netherlands, approved the study procedures. We received either ethical approval or a waiver from authorized institutions in the four English-speaking countries targeted (i.e., Australia, Canada, United Kingdom, and the United States). 18 Further details are provided in the protocol 15 and outcome paper. 
3 | Study design and setting The Untire app study is a large-scale international waiting-list RCT that examined the effectiveness of the Untire intervention; study enrollment took place until October 2018. We recruited potential participants online via paid social media advertisements on Facebook and Instagram. 19 We targeted patients and survivors aged ≥18 years who experienced fatigue at moderate/severe levels (i.e., an average composite score of ≥3 on items 1, 2, and 3 of the Fatigue Symptom Inventory [FSI] 20,21 ) and owned a smartphone. Exclusion criteria were a diagnosis of, and receiving treatment for, a severe mental disorder (i.e., major depression, psychotic disorder, anxiety disorder, or addiction), as these persons may need more intensive treatment than offered by the app. 3 Also excluded were persons diagnosed with chronic fatigue syndrome, myalgic encephalomyelitis, or fibromyalgia, as the app focuses on improving physical activity, which is not recommended and could potentially be harmful to these patients. 22 After having completed the baseline questionnaire, 799 participants were randomized (2:1) into intervention (n = 519) and waiting-list control (n = 280) groups. Participants in the intervention group received access to the Untire app at the start of the trial, whereas the control group was treated as usual (TAU). This means that control group participants could participate in other programs to reduce CRF. However, little interference from TAU was expected, since the usual treatment options for CRF symptoms are limited and rarely discussed by care professionals. 23 After 12 weeks, the control participants also received the intervention (i.e., 1 year of free app access). Participants received e-mail invitations and reminders at 4, 8, 12, and 24 weeks to complete the questionnaires. | Untire app The Untire app intervention is based on evidence-based methods for patients with CRF in clinical practice and comprises four modules: My themes, My exercises, Physical activity, and Tips. It is hypothesized that the app addresses dysfunctional thoughts and sleep hygiene via CBT and psycho-education (My themes). Further, it is hypothesized that the app reduces stress and improves sleep via MBSR (My exercises), helps to improve physical fitness through exercise instructions and activity management (Physical Activity), and empowers via positive psychology (Tips). 11,12,24 Additionally, through Quick scans, participants get weekly insight into their fatigue levels, burden, happiness, satisfaction, and energy leaks over time. Participants in the intervention group were instructed to use the app at least once a week. | Dependent variables The primary outcome is the change in fatigue severity and fatigue interference from baseline to 12 weeks, assessed with the self-report questionnaire FSI. 20 Fatigue severity was assessed by calculating the average of three severity items (Cronbach's α = 0.71) and fatigue interference by calculating the average of seven interference items (Cronbach's α = 0.91). All items were completed on an 11-point Likert scale ranging from 0 (not at all fatigued/no interference) to 10 (as fatigued as I could be/extreme interference). A higher score indicates higher fatigue severity or interference.
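To make the scoring just described concrete, the sketch below computes the two FSI outcome scores and the moderate/severe-fatigue screening rule. It is not the authors' code; the assumption that the three eligibility items coincide with the three severity items, and the example responses, are for illustration only.

Python code (illustrative sketch):
from statistics import mean

def fsi_scores(severity_items, interference_items):
    """Fatigue severity = mean of the 3 severity items; interference = mean of the 7
    interference items; every item is rated on a 0-10 scale."""
    assert len(severity_items) == 3 and len(interference_items) == 7
    assert all(0 <= x <= 10 for x in list(severity_items) + list(interference_items))
    return mean(severity_items), mean(interference_items)

responses_severity = [7, 6, 8]                  # hypothetical answers to FSI items 1-3
responses_interference = [5, 6, 4, 7, 6, 5, 6]  # hypothetical answers to the 7 interference items
severity, interference = fsi_scores(responses_severity, responses_interference)
eligible = mean(responses_severity) >= 3        # screening rule used for inclusion in the trial
print(severity, interference, eligible)         # -> 7, 5.57..., True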
| Potential mediators The following potential mediating factors were assessed: fatigue catastrophizing, depression, mindfulness, sleep quality, sleep disturbance, and physical activity. The 13-item Fatigue Catastrophizing Scale was adapted from the Pain Catastrophizing Scale. 25 | Statistical analysis We described sample characteristics (i.e., gender, age), treatment outcomes (i.e., fatigue severity and fatigue interference), and potential mediator variables (i.e., fatigue catastrophizing, depression, mindfulness, sleep quality, sleep disturbance, and physical activity) at baseline and 12 weeks, for those who completed the 12-week assessments (i.e., T12-completers). To investigate the mediating effects of the potential mediator variables on fatigue outcomes after 12 weeks of Untire app access, we used the PROCESS macro, an observed-variable ordinary least squares regression-based modeling tool. 29 This method can handle longitudinal mediation analysis, including multiple mediators in a single model. Specifically, we tested a parallel multiple mediator model 29 that estimates the direct and indirect effects of condition X (intervention vs. control) on Y (fatigue outcomes) through the six proposed mediators measured at 12 weeks for each mediation path while accounting for other mediation paths, as well as for covariates (baseline scores for all mediators and the fatigue outcome). The analysis of covariance model has been supported by prior research comparing different methods for estimating mediated effects. 30 For the longitudinal mediation analyses, the following steps were taken: (1) we tested intervention effects (i.e., X = intervention vs. control) on potential mediators over 12 weeks (path a); (2) we tested associations between the potential mediators and fatigue severity and fatigue interference over 12 weeks (path b); and (3) we tested whether the potential mediators mediated the relationship between the intervention effect (X) and fatigue outcomes (path ab). Paths a and b make up the mediating pathway over 12 weeks, with the mediating effect usually described in the literature as the product of coefficients (ab). 31 The c′-path is the direct (unmediated) 12-week intervention effect on fatigue outcomes in intervention versus control groups. The multiple mediation analysis provides the mediating effect's coefficients with bias-corrected bootstrapped confidence intervals (CIs), which is currently an optimal way of performing mediation analysis. 32 Five thousand bias-corrected bootstrapped samples and 95% CIs were used in the present analysis. Complete-case data were used for this analysis because the bias-corrected CIs can only be generated with complete data. | Study sample In total, 799 participants completed the primary outcome at baseline, of which 335 completed the 12-week fatigue outcome assessment and were therefore included in the analysis. Table 1 shows that most participants were female (92%), middle-aged (57.4 ± 9.5 years), and moderately to severely fatigued (severity: 6.5 ± 1.4; interference: 5.7 ± 2.0). Means and standard deviations for the potential mediators are also reported in Table 1. Baseline characteristics (including the potential mediator variables) of the 335 T12-completers did not differ significantly from the baseline characteristics of the 799 participants who completed the baseline assessment. | Test of the intervention effect (X) on the mediator over 12 weeks (ΔM) - a-path The self-management intervention significantly improved outcomes of fatigue catastrophizing, depression, mindfulness, sleep disturbance, and physical activity (Table 2).
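Before the remaining results, the snippet below gives a schematic, single-mediator illustration of the product-of-coefficients (ab) bootstrap that PROCESS runs for each mediator in the parallel model described in the statistical-analysis section above. It is not the authors' SPSS/SAS code: it uses percentile rather than bias-corrected intervals for brevity, and all data and variable names are simulated assumptions.

Python code (illustrative sketch):
import numpy as np

rng = np.random.default_rng(0)
n = 335                                                      # size of the complete-case sample
x = rng.integers(0, 2, n)                                    # 0 = control, 1 = Untire app access
m0 = rng.normal(0, 1, n)                                     # baseline mediator (covariate)
m12 = 0.5 * m0 - 0.4 * x + rng.normal(0, 1, n)               # 12-week mediator (e.g., catastrophizing)
y0 = rng.normal(0, 1, n)                                     # baseline fatigue (covariate)
y12 = 0.5 * y0 + 0.6 * m12 - 0.1 * x + rng.normal(0, 1, n)   # 12-week fatigue outcome

def indirect_effect(idx):
    # a-path: condition -> 12-week mediator, adjusting for the baseline mediator
    Xa = np.column_stack([np.ones(n), x, m0])[idx]
    a = np.linalg.lstsq(Xa, m12[idx], rcond=None)[0][1]
    # b-path: 12-week mediator -> 12-week outcome, adjusting for condition and baseline scores
    Xb = np.column_stack([np.ones(n), x, m12, m0, y0])[idx]
    b = np.linalg.lstsq(Xb, y12[idx], rcond=None)[0][2]
    return a * b                                             # product of coefficients (ab)

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: ({lo:.3f}, {hi:.3f})")       # CI excluding 0 suggests mediation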
Sleep quality did not differ significantly for both univariate regression models of fatigue severity and fatigue interference (p > 0.05), suggesting that the intervention T A B L E 1 Baseline characteristics and 12-week outcomes of study participants who completed the 12-week fatigue assessment in the intervention or control group | Test of the effect of the mediator (Δ M ) on the treatment outcome (Δ Y ) over 12 weeks-b-path The potential mediators from step 1 were significantly related to fatigue severity (p < 0.05, Table 2), except sleep disturbance (p > 0.05, Table 2). Sleep quality was significantly associated with a reduction in fatigue severity (p < 0.05), although it was not significantly associated with a reduction in fatigue interference (p > 0.05). | Test of the mediating effect over 12 weekspath ab Testing the mediation effects revealed significant associations for the following mediators: fatigue catastrophizing, depression, mindfulness (p < 0.05, Table 3), whereas sleep quality, sleep disturbance, and physical activity did not mediate the intervention effect on fatigue outcomes (p > 0.05, Table 3 previous RCTs showed that the effect of CBT promoting physical activity for patients with CRF were not mediated by a change in physical activity or physical fitness, 35 even when physical activity was measured using actigraphy. 36 On the contrary, another RCT identified graded exercise therapy (GET) as an essential component of CBT for CRF, resulting in a larger reduction in fatigue than the other components, mediating CRF by an increased level of perceived activity. 37 However, this does not mean that Get alone would be sufficient to manage severe CRF since it was provided as a component of CBT. Another RCT demonstrated that CBT with GET mediated CRF via self-efficacy only, but not via physical activity, exercise capacity, perceived physical activity, fatigue catastrophizing, or emotional functioning. 14 Self-efficacy is assumed to play a role in fatigue reduction but has unfortunately not been assessed in our study. We also learned that Untire app access could not evoke improvements in sleep quality over 12 weeks, although we learned that sleep quality was still significantly associated with fatigue reduction, suggesting a theoretical base. A limitation is that we used self-reported, single-item questions to assess physical activity and sleep. Future studies could use more extensive questionnaires or objective measures (e.g., wearables), and perhaps such assessments would have painted a different picture. Also, it could be the case that some themes within the app can be further improved (e.g., sleep and physical activities). | Clinical implications Our research has shown that a multimodal self-management app can be an effective and advisable toolbox to support fatigued cancer patients and survivors via different mediating pathways. We do not know to what extent each part of the app was used since we offered the Untire app as a 'black box' with multiple components. Even though many cancer patients and survivors might prefer a single module within the app, it also seems likely that a combination accumulated to the intervention effect of fatigue reduction. Moreover, we believe that a multimodal intervention has further clinical advantages over single-component interventions because people can prioritize and choose the components they want to work on. | Study limitations One advantage is that we simultaneously examined several potential mediators in one model. 
While we could elucidate which mediators contributed to the intervention effects, unfortunately, we cannot make claims about the dose-response relationship of engaging in specific in-app modalities concerning changes in mediators and fatigue outcomes. In addition, our analysis allowed taking baseline and 12-week assessments into account, meaning that we could gain insights about potential mediators over time but could not zoom into the dynamics of mediators and outcomes within the 12 weeks. It seems plausible that the working mechanism of the self-management app is not a one-way causal effect. Another limitation might be the higher dropout rates in the intervention group. Although sample characteristics do not differ significantly between completers and non-completers of the 12-week assessment, we know that the sample is relatively selective concerning gender, and type of cancer. Although we do not expect differences between the working mechanisms in men and women, we cannot say for sure because of the few men in the T A B L E 3 Testing the mediation effect of the six pre-specified longitudinal-mediators using the bootstrap approach, ANCOVA-model -975 trial. Our sample's high number of women might be due to our Facebook Ads recruitment strategy. We targeted cancer patients and survivors, of which mainly women responded, and we then created look-alike-audiences with similar characteristics. Research shows that men are generally less likely to perform health-protective behaviors than women. 38 In order to increase the use of such apps among male cancer patients and survivors, recruitment via social media might be adapted. In our study, Facebook Ads with images of male participants were slightly associated with improved uptake among men. A recent study demonstrated that using male-specific ads can result in a significantly higher proportion of men completing a survey than gender-neutral ads (38% male vs. 25% gender-neutral). 39 In order to stimulate the use of these apps among men, it could also be helpful to tailor information in the app specifically to men, informing users that this information is selected for them. | Future research This study represents an essential step in mHealth research directed at CRF. More research is required to identify factors and their interplay on the causal pathway of CRF. We now investigated the multimodal intervention as a whole, but future studies could further match the time spent in specific modules to outcomes of these modules. Dose-response analysis could gain further insights into how interventions work and lead to more effective interventions in the future. Besides, future research could focus on app content personalization, which would support patients' use of the app's most beneficial and personally relevant components. | CONCLUSIONS This study showed that access to the multimodal Untire app appears to reduce fatigue severity and interference by reducing fatigue catastrophizing, depression and by increasing mindfulness. Supporting these psychological mechanisms is crucial for supporting patients with fatigue due to cancer and cancer treatment.
2022-01-22T06:18:49.293Z
2022-01-20T00:00:00.000
{ "year": 2022, "sha1": "2a1e47c2dd50e466de82eda365aa6f50d0281025", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pon.5886", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "a5230fbc30e634b57f1c8642a3c91ff74cba2e23", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256548565
pes2o/s2orc
v3-fos-license
Diversity of ribosomes at the level of rRNA variation associated with human health and disease Summary With hundreds of copies of ribosomal DNA (rDNA) it is unknown whether they possess sequence variations that ultimately form different types of ribosomes. Here, we developed an algorithm for variant-calling between paralog genes (termed RGA) and compared rDNA variations with rRNA variations from long-read sequencing of translating ribosomes (RIBO-RT). Our analyses identified dozens of highly abundant rRNA variants, largely indels, that are incorporated into translationally active ribosomes and assemble into distinct ribosome subtypes encoded on different chromosomes. We developed an in-situ rRNA sequencing method (SWITCH-seq) revealing that variants are co-expressed within individual cells and found that they possess different structures. Lastly, we observed tissue-specific rRNA-subtype expression and linked specific rRNA variants to cancer. This study therefore reveals the variation landscape of translating ribosomes within human cells.
Figure S2: Reference Gap Alignment (RGA) steps. The four steps in the RGA method are illustrated with matching numbers to the steps in the main text.
Figure S25: Cancer-specific rRNA variants relative-abundances (5 of 6 figures in this series). Each figure in the series shows scatter plots (panels A-K, or A-B for the final figure) of regional rRNA variant relative abundances for tissue and TCGA cancer biopsy samples; the top most abundant rRNA regional variant (Table S17 for region to regional variant conversion) is presented per ES/non-ES region, the x-axis is the same for all panels, and the full cancer-type names for the x-axis abbreviations are listed at the bottom of Figure S21.
Experimental Procedures Reference index for rDNA extraction For calling variants we map against three rDNA references : 18S and 28S extraction from hESC, GIAB and T2T, 1KGP rDNA calling -hESC and GIAB: Reads were extracted by mapping HiFi fastq long reads (Table S1, S19 for HiFi sample list) to the "Reference index for rDNA extraction" using minimap2(3) with "-N 20 -ax map-ont" parameters and processes with samtools(4). T2T rDNA calling: We used complete 219 rDNA copies with 18S and 28S annotation by T2T.hESC rRNA calling: The RT with 3' primers of 18S and 28S results in 18S and 28S rRNA reads. For both rDNA and rRNA, we keep reads that we consider full or near full length as followed: 1. We split long reads to consecutive non-overlapping 50bp short reads and map the short reads to "Reference index for rDNA extraction" using bowtie2 using default parameters(5). 2. To call a long read 18S, we demand a long read to have at least 18 short reads (which is the equivalent of ~900bp) to map to the 18S gene.For 28S calling, we demand a read to have at least 50 short reads (which is the equivalent of ~2500bp) to map to the 28S gene. 3. We keep 18S reads in the length range 1,500-2,100 bp and 28S reads in the length range 4,500-5,500 bp. 4. For calling deletion variants, in order to avoid calling variants where the RT stopped, we only considered reads that do not have deletions at the beginning at position 56 in the 18S and position 25 in the 28S and only report deletion atlas variants after these positions. 1. We classified sequences as either 18S or 28S followed by Needleman-Wunsch global sequence alignment 17 of each sequence to one RNA45S5 reference (either 18S or 28S based on read classification) 18 . 2. We created a reference sequence that aligns with all other sequences that we call a gap-aligned reference.This gap-aligned reference has the same sequence as the reference, but at each nucleotide position, we extended a gap at the size of the maximal gap found by the global sequence alignment to all sequences.Importantly, this gap-aligned reference allows straightforward comparison among all sequences without requiring computationally expensive all-by-all pairwise sequence alignments. 3. We aligned all H7-hESC sequences to the gap-aligned reference using the previous global alignment with additional extended gaps at reference positions. 4. Lastly, we extracted all variants at a given position across all aligned sequences.Notably, we have benchmarked the gap-pemalty opening and extension which can affect the indel number (Table S4).Since benchmarked parameters yielded similar total number of indels, we use the default Needleman-Wunsch parameters of high penalty of gap opening and low penalty of gap extention. Nucleotide variant calling in long reads We ran our four step RGA, algorithm on the reads that pass the criteria in "18S and, 28S extraction from 1KGP, GIAB, hESC, and T2T" The output of this alignment are exact alignment of all reads against the 18S/28S reference.With this we extract all sequence variants of the 18S and 28S both in rRNA and rDNA. 
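The classification and length filters above reduce to a simple decision rule once the 50-bp chunk mapping has been done; in the real pipeline that mapping is performed with bowtie2, so the hit counts passed to the sketch below are stand-ins, and the helper names are illustrative rather than the authors' code.

Python code (illustrative sketch):
def split_into_chunks(seq, size=50):
    """Consecutive non-overlapping 50-bp chunks; a short trailing fragment is dropped."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

def classify_read(read_len, hits_18s, hits_28s):
    """Thresholds from the text: >= 18 chunk hits (~900 bp) call 18S, >= 50 chunk hits
    (~2,500 bp) call 28S, then keep only full/near-full-length reads."""
    if hits_18s >= 18 and 1500 <= read_len <= 2100:
        return "18S"
    if hits_28s >= 50 and 4500 <= read_len <= 5500:
        return "28S"
    return "discard"

print(len(split_into_chunks("A" * 1870)))                       # 37 chunks that would be mapped with bowtie2
print(classify_read(read_len=1870, hits_18s=30, hits_28s=0))    # -> "18S"
print(classify_read(read_len=5050, hits_18s=0, hits_28s=90))    # -> "28S"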
For 1KGP dataset, we report rDNA variants that were found in at least 5 reads and detected in 3 samples in Table S5 (out of 30 1KGP samples with HiFi reads).For GIAB 2 trio families, we report variants found in in at least 5 reads and detected in 2 samples Table S6-S7 (For HiFi and ONT datasets).Since the Chinese Han father sample was not sequenced in ONT, we did not include this sample in the HiFi dataset and the total number of samples were 5: Ashkenazi mother, father, son, and Chinese Han mother and son. Atlas variant calling We ran "Nucleotide, helix and ES variant calling" on both the hESC rDNA and the rRNA data and call a found variant an atlas variant if the variant is present in abundance greater than the HiFi read accuracy and are found in both rDNA and rRNA. HiFi accuracy is expected to be greater than 99.9%. For the hESC we obtained the following from "18S and, 28S extraction from hESC, GIAB and T2T" step: From the hESC rDNA we have obtained 762 complete 18S 386 complete 28S. From polysomal rRNAs we have obtained 128,801 complete 18S and 28,853 complete 28S. We randomly subsampled rDNA 18S and 28S reads to 386 and 28,853 reads for both 18S and 28S rRNA. Then, given HiFi accuracy, per position we expect < 1 error per rDNA base pair and <= 14 errors per rRNA position.We therefore call atlas variants that satisfy both: A. Nucleotide variant abundance in rDNA is equal or greater than 2 B. Nucleotide variant abundance in rRNA is equal or greater than 14 After finding nucleotide atlas variants, to call for atlas helix and ES variants, we use the helix/ES annotation (Table S13,14) to aggregate nucleotide atlas variants at a given region and consider variants if they are found in both rDNA and rRNA. In-cell DMS probing for long-read sequencing Approximately 2 x 10^7 of H7-hESC were used for in-cell DMS probing.Cells were washed with pre-warmed DPBS (Gibco, 14040133) prior to dissociation with Accutase (Gibco, A1110501) for ~5 mins at 37 °C.Then, cells were neutralized with mTeSR1 (StemCell Technologies, 85850) supplemented with 1µM thiazovivin (Tocris, 3845) and pelleted down by centrifuging at 200 x g at room temperature for 3 mins.Cells were then resuspended in 2,800 µL of pre-warmed mTeSR1+Tv. 
Then, 800 µL of pre-warmed 1 M bicine (Fisher Scientific, ICN10100580) buffer (pH 8.3 at 37 °C) was added, followed by 400 µL of 16% DMS (Sigma Aldrich, D186309) diluted in 100% ethanol (Gold Shield Distributors, 0412804-PINT).DMS labelling was done by incubating the mixture at 37 °C for 5 mins, prior to be quenched by adding 2,000 µL of ice-cold BME (Sigma Aldrich, M3148).Cells were pelleted down by centrifuging at 200 x g at 4 °C for 3 mins, and then lysed by resuspending them in 8 mL of cold TRIzolTM reagent (Invitrogen, 15596026).Solution was left at room temperature for 5 mins prior to adding 600 µL chloroform (Fisher Scientific, C298-500).The tube was then shaken vigorously for 15 seconds or so and left at room temperature for 3 mins.The sample was then centrifuged at 21,000 x g for 15 mins at 4 °C.A total of 4,440 µL aqueous phase was extracted and 4,440 µL of 100% ethanol was added before subjecting them into further cleanup and DNAse digest using Zymo RNA Clean and Concentrator Kit-5 as elaborated in "Polysome RNA extraction".50 µg of total RNA was used across ten 100 µL reactions.RT was done as described in "rRNA reverse transcription" section, with a few modifications.After RT, 0.4x beads by volume were used to size select cDNA.Ten reactions were then pooled together, and its cDNA concentration measured. For the second-strand synthesis, each reaction was done with a maximum of 500 ng of cDNA. Afterwards, PacBio IsoSeq library was constructed as per described in "PacBio SMRT sequencing library preparation" section. rRNA subtype DMS reactivity and structure calling 28S sequenced reads from "In-cell DMS probing for long-read sequencing" were binned intro rRNA subtypes as followed: 1. We ran RGA method on the DMS reads 2. We bin DMS reads to rRNA subtype groups based on the hESC subtypes nucleotide positions at 60, 2188, 3513, 4913 3. Per DMS read, at each nucleotide we mark sequence variants that are not in the atlas as modified.4. For every rRNA subtype group, we calculate the rRNA subtype group reactivity as the average modification per nucleotide position across binned DMS reads. 5. Next we use 90% winsorization to set the DMS reactivity values from 0 to 1. 6. Lastly, we use the Biers Matlab package with RNAstructure and Varna for plotting (6)(7). 28S full length sequence comparisons and visualization in PCoA In the T2T genome we discovered that although there are only 62 reported rDNA variants, the high frequency rDNA variants in H7-hESC also appeared in high frequency in the T2T rDNA (Figure S9A, R=0.8 Pearson correlation).In the GIAB samples, like in the H7-hESC, we found hundreds of variants with high agreement between their frequencies and H7-hESC frequencies (Figure S9B-D).We analyzed the linkage of the same positions in the T2T and GIAB, and found as found in the H7-hESC that es7l positions have low linkage, and es15l, es27l and es39l have relatively higher linkage within each region (Figure S10). We compare the pairwise-distances between all hESC/GIAB 28S separately using 6-mer word base comparison with Alfree tools (8).Pairwise distances are then visualized by plotting the first two PCos of the and the Bray-Curtis PCoA (Figure 3D).Each 28S is colored by the haplotype of that 28S as defined by the 60, 3513 and 4913 positions. 
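The pairwise 28S comparison just described (6-mer word frequencies, Bray-Curtis dissimilarity, PCoA) can be re-expressed compactly. The paper used the Alfree tool for the word-based distances, so the NumPy/SciPy sketch below, including its classical-MDS implementation of PCoA and its toy sequences, is an illustration rather than a reproduction of the original analysis.

Python code (illustrative sketch):
from itertools import product
import numpy as np
from scipy.spatial.distance import braycurtis

KMERS = ["".join(p) for p in product("ACGT", repeat=6)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_vector(seq, k=6):
    """Count 6-mer occurrences in a sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        j = INDEX.get(seq[i:i + k])
        if j is not None:
            v[j] += 1
    return v

def pcoa(dist, n_axes=2):
    """Classical metric multidimensional scaling of a square distance matrix."""
    n = dist.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    gower = -0.5 * centering @ (dist ** 2) @ centering   # double-centred Gower matrix
    vals, vecs = np.linalg.eigh(gower)
    order = np.argsort(vals)[::-1][:n_axes]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

seqs = ["ACGTACGTACGTGGTTAACC", "ACGTACGTACGAGGTTAACC", "TTTTCCCCGGGGAAAATTCC"]  # toy stand-ins, not 28S rRNA
vectors = np.array([kmer_vector(s) for s in seqs])
distances = np.array([[braycurtis(a, b) for b in vectors] for a in vectors])
print(pcoa(distances))   # first two principal coordinates, one row per sequence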
Atlas relative abundance calling for short-read datasets Short read RNA-seq are mRNA targeted however we found that about 2% of reads mapped to rRNA in the GTEx and TCGA.For comparing relative abundance across samples, we rarefaction samples of GTEx dataset to 500,000 rRNA mapped reads and TCGA to 250,000 rRNA mapped reads and throw samples with less than 100,000 rRNA mapped reads. For short-reads, we use Kallisto( 9) tool for region relative abundance estimation in the following way: Once made for all GTEx/TCGA samples.We create one Kallisto index( 9 Here we chose our ES/non-ES atlas reference, as ES/non-ES regions are longer than helices (Table S14) but using the helix expanded reference atlas would give the same results (Extended data 3-4 with "expand_100bp" in the name of the file). We quantify all-region abundances of a sample (in the example below named FASTQ-FILE) using Kallisto(9) with default parameters. Linux command line: > kallisto quant -i atlas_rRNA FASTQ-FILE -o OUTPUT_DIRECTORY Then, to compare expression of a given ES/non-ES regional variant, we normalize read count by the region length and normalize to one every ES/non-ES region independently. Python code: equal volumes of each corresponding fraction was pooled in a separate tube to a total volume of 900 µL.To 900 µL of fractionated sample, 900 µL of acid phenol chloroform was added and heated at 65 ℃ for 5 minutes.The samples were then centrifuged at 21000g for 10 minutes at room temperature, and the aqueous phase transferred to a new tube.The aqueous phase was mixed with an equal volume of 100% ethanol and the RNA purified using the Zymo RNA Clean and Concentrator Kit-5 following manufacturer's instructions.DNase treatment was performed using TURBO DNase (1 µL of 2 U/µL per 50 µL reaction) at 37 ℃ for 30 minutes, and the RNA purified using the Zymo RNA Clean and Concentrator Kit-5 following manufacturer's instructions.400 µL of cold cytoplasmic lysis buffer was added to each cell pellet.Cells were mixed and lysed by repeated vortexing for 30 s, followed by cooling down on ice for 30 s, repeated for 3 times in total. Cells were then incubated on ice for 30 minutes, vortexing for approximately 10 seconds every 10 minutes for complete lysis.Afterwards, cellular debris, organelles, and microsomes were removed with four serial centrifugations at 800 x g twice, 8,000 x g, and 21,300 x g for 5 mins each at 4 °C. RNA amount of the clarified cytoplasmic lysate was measured using nanodrop.Approximately 0.8-1 mg of RNA was set aside for sucrose gradient fractionation. The cytoplasmic lysate was then layered on top of the sucrose gradient.The tubes were then loaded into SW 41 Ti rotor and centrifuged at 40,000 rpm for 2.5 hours at 4 °C.Afterwards, the gradient was then fractionated into 16 2 mL tubes every 30 seconds of ~700 µL solution each using a fractionation system (Brandel BR-188).The A260 trace was used as a reference to determine where the free ribonucleoproteins, free subunits, monosomes, and polysomes were.100 µL of 10% SDS (Invitrogen AM9820) and 200 µL of 1.5 M sodium acetate (Invitrogen AM9740) were added into each fraction. 
RNA was extracted from each fraction by adding 500 µL of acid-phenol:chloroform, pH 4.5 with IAA (125:24:1) (Invitrogen AM9722).The fractions were then incubated at 65 °C, 500 rpm thermomixer for 5 minutes.The RNA-containing aqueous phase, ~700 µL, was separated from the organic phase by centrifugation at 21,300 x g for 15 minutes at 4 °C.Further cleanup and trace DNA removal were done as described in the section "Whole-cell RNA extraction". PacBio SMRT sequencing library preparation Multiplexed library was made as described in "Iso-Seq™ Express Template Preparation for Sequel® and Sequel II Systems".cDNA amplification was skipped to prevent possible amplification bias against highly structured or repetitive sequences.In brief: equal amounts of cDNA from each barcode was pooled for a total of ~200 ng prior to DNA damage repair step.After pooling, cDNA was concentrated with 1x volume of SPRIselect beads and eluted in 48 µL of nuclease-free water.DNA damage repair, end-repair / A-tailing, overhang adapter ligation, and the final library cleanup were performed according to the protocol mentioned above, substituting the ProNex beads with the washed SPRIselect beads. HeLa cell culture The Human HeLa cell line was sourced from ATCC(CCL-2), and subsequent culturing was performed in DMEM supplemented with 10% FBS at 37°C and 5% CO 2 . Micro cover glasses (Electron Microscope Sciences, 72226-01) underwent a pretreatment step with Gel Slick (Lonza, 50640) at room temperature for 15 mins and were then air-dried.HeLa cells were cultured in treated 12-well plates, and after rinsing with 1× PBS (Thermo Fisher Scientific, 10010049), they were fixed with 1 mL of 1.6% PFA (Electron Microscope Sciences, 15710-S) in PBS buffer at room temperature for 15 mins.Following fixation, the cells underwent permeabilization by treatment with 1 mL of pre-chilled (-20°C) methanol (Sigma-Aldrich, 34860-1L-R) and incubation at -20°C for an hour.Thereafter, HeLa cells were transferred from the -20°C fridge to room temperature for 5 mins, and then washed twice with PBSTR (0.1% Tween-20 (Calbiochem, 655206), 0.1 U/μL RNaseOUT (Thermo Fisher Scientific, 10777019) in PBS) for 5 mins each. For the reverse transcription (RT) process, primers were prepared by dissolving them at a concentration of 250 μM in ultrapure RNase-free water (Thermo Fisher Scientific, 10977023), followed by pooling.All probes were manufactured by Integrated DNA Technologies (IDT).The probe mixture was subjected to heating at 90°C for 5 mins, followed by cooling to room temperature.The samples were then treated with 300 μL of template switching mixture, which included 1× template switching buffer (New England Biolabs, M0466L), 250 µM dNTP (Invitrogen 100004893), 40 µM 5-(3-aminoallyl)-dUTP (Invitrogen AM8439), 2.5 µM RT primer, 0.4 U/µL RNaseOUT, 3.3 µM template switching oligo, and 1× template switching RT enzyme mix.This mixture was incubated at 4°C for 15 mins, followed by an overnight placement in a 42°C humidified oven with gentle shaking. 
The following day, the samples underwent three washes with 500 µL PBST (0.1% Tween-20 in PBS) for 5 mins each.To cross-link cDNA molecules containing aminoallyl-dUTP, the specimens were incubated with 5 mM BS(PEG) 9 (Thermo Fisher Scientific, 21582) in PBST for 1 hour at room temperature, followed by a wash with PBST at room temperature for 5 mins.The cross-linking reaction was quenched by treating the samples with 0.1 M Glycine (Sigma-Aldrich, 50046-250G) in PBST at room temperature for 30 mins.To degrade residual RNA and generate single-stranded cDNA, the specimens were incubated for 2 hours at 37°C with an RNA digestion mixture, composed of 0.25 U/μL RNase H (New England Biolabs, M0297L), 1 mg/mL RNase A (Thermo Fisher Scientific, EN0531), and 10 U/μL RNase T1 (Thermo Fisher Scientific, EN0541) in 1× RNaseH buffer.The samples were then washed twice with PBST for 5 mins each.After the final PBST wash, the samples were incubated with 300 µL of splint ligation mixture containing 0.2 mg/mL BSA (New England Biolabs, B9000S), 2.5 µM splint ligation primer, and 0.1 U/µL T4 DNA ligase (Thermo Fisher Scientific, EL0011) in 1× T4 DNA ligase buffer at room temperature for 4 hours with gentle shaking.Subsequently, they were washed three times with 500 µL PBST for 5 mins each. To create nanoballs of cDNA (amplicons) containing multiple copies of the original cDNA sequence, each cDNA circle undergoes linear amplification through rolling-circle amplification (RCA).This is achieved by immersing the cDNA in a 300 µL RCA mixture consisting of 0.2 U/μL Phi29 DNA polymerase (Thermo Fisher Scientific, EP0094), 250 μM dNTP, 40 μM 5-(3-aminoallyl)-dUTP, and 0.2 mg/mL BSA in 1× Phi29 buffer at 30°C for 4 hours with gentle shaking.Following RCA, the samples were subjected to two washes with PBST.Subsequently, they were incubated with 20 mM methacrylic acid N-hydroxysuccinimide ester (Sigma-Aldrich, 730300-1G) in 100 mM sodium bicarbonate buffer at room temperature for one hour, followed by two additional washes with PBST for 5 mins each.The samples then experience a 10-minute incubation in 500 μL monomer buffer containing 4% acrylamide (Bio-Rad, 161-0140) and 0.2% bis-acrylamide (Bio-Rad, 161-0142) in 2× SSC (Sigma-Aldrich, S6639) at 4°C.Following the aspiration of the buffer, a 35 μL polymerization mixture, made of 0.2% ammonium persulfate (Sigma-Aldrich, A3678) and 0.2% tetramethylethylenediamine (Sigma-Aldrich, T9281) dissolved in monomer buffer, is placed at the core of the sample and is promptly covered with a Gel Slick-coated coverslip.The polymerization is then carried out inside an N 2 enclosure for 90 mins at room temperature.Afterward, the sample is washed three times with PBST, each time for 5 mins. 
Several iterative sequencing experiments were conducted to decode the rRNA identity. For each iteration, the sample initially underwent treatment with a stripping buffer containing 60% formamide (Calbiochem, 655206) and 0.1% Triton-X-100 (Sigma-Aldrich, 93443) at room temperature twice for 10 mins each, followed by a triple wash in PBST, each lasting 5 mins. Then the samples were incubated with a 300 μL sequencing mixture containing 0.2 U/μL T4 DNA ligase, 0.2 mg/mL BSA, 10 μM reading probe, and 5 μM fluorescent decoding oligos in 1× T4 DNA ligase buffer for at least 3 hours at room temperature. Post-incubation, the samples were washed three times with a washing and imaging buffer made of 10% formamide in 2× SSC buffer, each wash lasting 10 mins. Following the washing steps, the samples were immersed in the washing and imaging buffer for imaging. DAPI (Sigma-Aldrich, D9542) was dissolved in the wash and imaging buffer, and nuclei staining was performed following the manufacturer's instructions for 20 mins. Images were captured using a Leica TCS SP8 confocal microscope equipped with a 40× oil immersion objective (NA 1.3) and an acquisition voxel size of 142 nm × 142 nm × 500 nm.
Figure S4: 3D structure of the 40S and 60S with variants
Figure S5: Pipeline overview and 18S and 28S variant distributions
Figure S11: In-cell DMS with long-read sequencing shows accessibility differences in es7l of different 28S subtypes
Figure S12: In-cell DMS with long-read sequencing shows accessibility differences in es7l of different 28S subtypes
Figure S13: In-cell DMS with long-read sequencing shows that es15l of different 28S subtypes have different RNA 2D structure
Figure S14: In-cell DMS with long-read sequencing shows that es27l of different 28S subtypes have different RNA 2D structure
Figure S15: In-cell DMS with long-read sequencing shows that es27l of different 28S subtypes have different RNA 2D structure
1. Hg38 Un_GL000220v1 positions: 105,423-118,723
2. A consensus from mapped Hg38 regions to Un_GL000220v1: 105,423-118,723
3. A consensus rDNA from T2T v1.0 assembly of CHM13 created by multiple sequence alignment using Clustal Omega (1,2)
A Kallisto index was built combining the 18S and 28S variants in our ES/non-ES atlas with the expanded reference (Extended data 5-6 with "expand_100bp" in the name of the file) using Kallisto default parameters. Linux command line:
> cat Extended_Data5.atlas.ES.18S.expand_100bp.fa > extended_atlas.fa
> cat Extended_Data6.atlas.ES.28S.expand_100bp.fa >> extended_atlas.fa
> kallisto index -i atlas_rRNA extended_atlas.fa
This expanded reference version of the atlas is the same ES/non-ES atlas with an additional flanking 100 bp on both ends (3' and 5') of the relevant region with the reference sequence. With these expansions, short reads that map to the 5' or 3' end of a region are mapped to the variants (as opposed to unmapped without expansions).
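The commands above stop at index construction. A minimal sketch of inspecting downstream abundances is given below; it assumes the short reads were quantified with "kallisto quant -i atlas_rRNA -o quant_out reads_R1.fastq.gz reads_R2.fastq.gz" (the read file names are hypothetical) and relies on kallisto's standard abundance.tsv output.

import csv
from collections import defaultdict

# Sum estimated counts per atlas target from kallisto's abundance.tsv
# (tab-separated columns: target_id, length, eff_length, est_counts, tpm).
counts = defaultdict(float)
with open("quant_out/abundance.tsv") as handle:
    for row in csv.DictReader(handle, delimiter="\t"):
        counts[row["target_id"]] += float(row["est_counts"])

# Report each 18S/28S variant with its estimated counts and relative usage.
total = sum(counts.values()) or 1.0
for target, est in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{target}\t{est:.1f}\t{est / total:.3f}")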
Left atrial deformation and risk of transient ischemic attack and stroke in patients with paroxysmal atrial fibrillation Left atrial (LA) remodeling is closely related to the occurrence of cerebral stroke; however, the relationship between early-stage impaired deformability of the left atrium and stroke/transient ischemic attack (TIA) remains unclear. The aim of this study was to evaluate the changes in LA deformability and to assess its relationship with stroke/TIA events using speckle tracking echocardiography. A total of 365 patients with paroxysmal atrial fibrillation (non-stroke/TIA [n = 318]; stroke/TIA [n = 47]) underwent comprehensive echocardiography with speckle tracking imaging to calculate mean LA longitudinal strain and strain rate values from apical 4-chamber, 2-chamber, and 3-chamber views. The stroke/TIA group was older, had a greater proportion of males, and had lower LA strain rate during left ventricular early diastole (SRE), and the difference was statistically significant (P < .05). On univariate linear regression analysis, the following clinical and conventional echocardiographic parameters showed a significant linear correlation (P < .001) with SRE: E/A ratio; LA volume index (VI); body mass index; mean E/e′; left ventricular ejection fraction; age; and hypertension. Multiple linear regression analysis revealed a linear dependence between SRE and E/A ratio, LA VI, and body mass index. The regression equation was y = −1.430–0.394X1 + 0.012X2 + 0.019X3 (P < .001) (y, SRE; X1, E/A ratio; X2, LA VI; X3, body mass index). In multivariate logistic regression analyses, SRE and sex ratio were independent risk factors for stroke/TIA (SRE, odds ratio 2.945 [95% confidence interval 1.092–7.943]; P = .033; sex, odds ratio 0.462 [95% confidence interval 0.230–0.930]; P = .031). Among patients with paroxysmal atrial fibrillation, SRE reflected impaired deformability of the left atrium in the early stages and was associated with the risk of stroke/TIA. Introduction Stroke is a serious complication in patients with atrial fibrillation (AF) because AF causes an elevation in left atrial (LA) pressure. Moreover, long-term increase in the LA pressure causes progressive LA remodeling. [1] Many studies have demonstrated a strong correlation between changes in LA function and LA remodeling. [2] LA remodeling can lead to impaired atrial contractility, prolonging the blood flow through the left atrium and causing atrial wall fibrosis as well as mechanical and electrical conduction abnormalities. These changes ultimately lead to LA enlargement, blood stasis, and thrombogenesis. [3,4] Studies have shown LA longitudinal strain is a more effective marker of LA remodeling and function compared to the conventional ultrasound echocardiography parameters such as LA volume index (VI); moreover, the former allows for a more accurate evaluation of LA reservoir, conduit, and pump function. Furthermore, it is a convenient, noninvasive examination method which shows better repeatability in the hands of experienced echocardiographic operators and, thus, may have valuable prospects for other applications. [5,6] Medicine The present study was designed to measure conventional LA echocardiography and LA strain and strain rate parameters in patients with paroxysmal AF, and then to analyze the differences between stroke/transient ischemic attack (TIA) patients and non-stroke/TIA patients. 
The objective was to explore the relationship between impaired deformability of the left atrium and stroke/TIA events and to assess its predictive value for stroke/ TIA events. Study population Patients who underwent pre-procedural transthoracic echocardiography (TTE) and transesophageal echocardiography prior to radiofrequency catheter ablation for symptomatic drug-refractory paroxysmal AF at the Beijing Anzhen Hospital Atrial Fibrillation Center, Capital Medical University (Beijing, China) between January 2017 and June 2020 were enrolled. Patients with persistent or permanent AF, those with AF with rheumatic valvular disease, history of valve repair or replacement, echocardiography confirmed left ventricular (LV) ejection fraction (EF) <50%, previous AF ablation, any mitral valve disease (including mild-degree disease), were excluded. All patients provided written informed consent to participate in the present study. Patients were divided into 2 groups according to history of stroke/TIA to evaluate difference(s) in echocardiographic data, especially with respect to LA strain and strain rate, and to determine the best predictive parameters. Ischemic stroke was confirmed by sudden-onset focal neurological deficit(s) and magnetic resonance imaging or computed tomography findings. TIA was defined as sudden-onset focal neurological signs or symptoms that resolved within 24 hours. The study was approved by the Ethics Committee of the Beijing Anzhen Hospital, Capital Medical University. Conventional TTE All subjects underwent comprehensive TTE study using a cardiovascular ultrasound system (Vivid 7 or Vivid E9, GE Medical Systems, Horten, Norway), equipped with a S51 phased-array sector probe (2.5-3.5 MHz). Standard M-mode, 2-dimensional, and Doppler images were acquired in the parasternal and apical views. LV EF was calculated using the biplane Simpson rule. LV diastolic function was assessed according to mean E/eʹ, E/A, and LA VI. Tissue Doppler imaging was performed to measure myocardial velocities. A pulsed sample volume was placed at both the septal corner and lateral wall of the mitral annulus; early diastolic (eʹ) and late diastolic (aʹ) myocardial velocities were recorded. The mean E/eʹ ratio was then calculated. All measurements were calculated and recorded as the average value of three cardiac cycles according to the American Society of Echocardiography guidelines. [7,8] Speckle tracking echocardiography LA velocity vector imaging data were collected from each patient. The standard gathering sections were apical 4-chamber view, apical 2-chamber view, and apical 3-cavity view; each section was set to acquire 3 cardiac cycles, with the frame rate adjusted to >60 frames/s. The analysis was performed offline using workstation software (EchoPAC PC version 201; GE Medical Systems, Milwaukee, WI, United States). LA strain was set to 0 at the beginning of the P wave (i.e., P-triggered analysis). After manual tracing of the endocardial LA borders, an epicardial surface tracing was automatically generated by the system creating a region of interest, which was automatically determined, and speckles were traced frame by frame. The region of interest was divided into 6 segments and the averages of the values and curves for strain and strain rate in the longitudinal direction for each segment were generated automatically. The LA roof segments in each view were excluded in this study due to the discontinuity of the LA wall at the connection of the pulmonary veins. 
Segments with suboptimal images were rejected by the software and excluded from analysis. By calculating the average LA strain and strain rate in the 4-chamber, 2-chamber, and 3-chamber views, the mean longitudinal LA strain and strain rate were obtained. The LA strain versus time curves were generated; the positive peak strain during LV systole (SS) is LA SS and negative peak strain during LV diastole (SD) is LA SD; the distance between the 2 peaks is the global longitudinal strain of LA (Fig. 1). For the LA strain rate versus time curves, the positive peak in the middle during LV systole indicates LA strain rate during LV systole, the negative peak on the right side indicates LA strain rate during LV early diastole (SRE), and another negative peak on the left side indicates LA strain rate during LV late diastole (Fig. 2). Inter-and intra-observer variability To assess inter-and intra-observer variability, LA strain and strain rate were measured in 15 randomly selected patients. First, measurements were repeated twice by a single observer. Subsequently 2 experienced observers, who were blinded to one another's measurements and to the study time-point, obtained ventricular demand (i.e., velocity vector imaging) data from the same patients. Statistical analysis Continuous data are expressed as mean ± standard deviation. Categorical data are summarized as frequencies and percentages. Continuous variables were compared using the Student t test (for comparisons between 2 groups) and analysis of variance (for multi-group comparisons). Categorical variables were compared using the chi-squared test. Univariate and multivariate logistic regression analyses were performed to determine risk factors for stroke/TIA in patients with paroxysmal AF. Statistical analyses were performed using SPSS version 23.0 (IBM Corporation, Armonk, NY). Two-tailed P values <.05 were considered indicative of statistical significance. Study population A total of 385 patients were enrolled in our research. Among them, LA strain was confirmed in 365 patients (222 [61%] male; mean [± standard deviation] age, 58.9 ± 10.5 years) using high-quality speckle tracking imaging. Forty-seven patients had a history of ischemic stroke or TIA while 318 patients had no such history (Fig. 3). Basic clinical and echocardiographic data of all patients according to history of stroke or TIA are summarized in Table 1. There were significant differences between the 2 groups with respect to age, sex, and SRE (P < .05). Compared with the nonstroke/TIA group, the stroke/TIA group was significantly older, had a greater proportion of males, and had lower SRE. There were no significant between-group differences with respect to other clinical descriptive and conventional 2-dimensional echocardiographic parameters. Association of LA strain rate with clinical features, cardiac structure deformability, and function change Univariate linear regression analysis was performed for SRE and all risk factors listed in Table 1. Each of the following clinical and conventional echocardiographic parameters showed a significant linear correlation with SRE (P < .001): E/A ratio; LA VI; body mass index; mean E/eʹ; LVEF; age; and hypertension. After parameter optimization, multiple linear regression analysis revealed a linear dependence between SRE and multiple parameters including E/A ratio, LA VI, and body mass index. 
The regression equation was y = −3.487-0.83X1 + 0.024X2 + 0.05X3 + 0.064X4 + 0.028X5 (y, SRE; X1, E/A ratio; X2, LA VI; X3, body mass index; X4, mean E/eʹ; X5, age). The results of the regression analysis are summarized in Table 2. Predictive value of conventional ultrasound and strain parameters for stroke events LA VI, mean LA systolic strain, and mean LA SRE were highly sensitive predictors of stroke/TIA events, while LV end-diastolic diameter and mean E/eʹ demonstrated good specificity. Overall, however, the area under the curves were not good indicators in this study (Table 3). This was likely attributable to the relatively small sample size of the stroke group and, although the inclusion criteria were strictly adhered to, there was a high likelihood of bias. Value of various parameters as risk factors for stroke/TIA A summary of the logistic regression analyses for clinical and echocardiographic parameters associated with stroke/TIA is presented in Table 4. Univariate logistic regression analyses revealed that stroke/TIA was associated with older age (P < .05), female sex (P < .05), and greater LA SRE (absolute value was near 0; P < .001). Multivariate logistic regression analyses revealed that LA SRE and sex ratio were independent risk factors for stroke/TIA (SRE, odds ratio 2.945 [95% confidence interval 1.092-7.943]; P = .033; sex, odds ratio 0.462 [95% confidence interval 0.230-0.930]; P = .031). Discussion In this study, we analyzed changes in LA strain and strain rate in patients with paroxysmal AF using a speckle tracking technique and explored its utility in predicting the risk of stroke/TIA. Our main findings include the following: changes in LA SRE occurred earlier in patients with paroxysmal AF complicated with stroke/ TIA events; LA SRE and clinical and conventional ultrasound echocardiography parameters were significantly related; and LA SRE was an independent risk factor for stroke among patients with paroxysmal AF. LA reservoir, conduit, and pump function, and LA deformability LA function in the entire cardiac cycle is divided into 3 phases: storing of blood in the LV systolic phase; most blood in the left atrium flows into the LV in the early diastolic phase; and LA contraction pumps the remaining blood into the LV in the late LV diastolic phase. [5,9] Accordingly, 3 different peaks in the LA strain rate curve were recorded: 1 positive in the LV systolic phase and 2 negative in the LV early and late diastolic phases, respectively, corresponding to LA reservoir, conduit, and pump function. [10,11] It has been established that, with regard to strain, strain rate can theoretically reflect damage to LA deformability at an early stage due to a combination of time change, which has a higher diagnostic value. The L 0 of the left atrium in the longitudinal axis should be determined first when analyzing LA strain rate; similar to LV strain and strain rate analysis, we believe that the left atrium is in the L 0 state at the beginning of LA contraction, when electrocardiogram shows the P-wave phase. [12] Compared with the non-stroke group, LA VI was greater in the stroke/TIA group, although the difference was not statistically significant. However, there were significant differences between the 2 groups with respect to LA SRE (P < .05), which indicated that there were no obvious morphological changes in the LA. Furthermore, LA deformability decreased in the early stages and LA longitudinal strain was not significantly altered. 
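The three strain-rate peaks described above (one positive systolic peak and two negative diastolic peaks) lend themselves to a simple numerical illustration. The sketch below is not the EchoPAC algorithm used in the study; it merely differentiates a sampled mean LA strain curve and picks the extrema within user-supplied timing windows (the strain trace and window boundaries are assumed inputs).

import numpy as np

def la_strain_rate_peaks(time_s, strain_pct, systole, early_diastole, late_diastole):
    # time_s: sample times (s); strain_pct: mean LA longitudinal strain (%);
    # each window is a (start, end) tuple in seconds within one cardiac cycle.
    strain_rate = np.gradient(strain_pct, time_s)  # numerical d(strain)/dt in %/s

    def peak(window, func):
        lo, hi = window
        mask = (time_s >= lo) & (time_s <= hi)
        return func(strain_rate[mask])

    srs = peak(systole, np.max)          # positive peak during LV systole
    sre = peak(early_diastole, np.min)   # negative peak during LV early diastole (SRE)
    sra = peak(late_diastole, np.min)    # negative peak during LV late diastole
    return srs, sre, sra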
Association between LA SRE and conventional clinical echocardiographic parameters Previous studies have demonstrated a significant correlation between LA strain and conventional clinical echocardiographic parameters. [3,13,14] Our results are consistent with this, as LA SRE demonstrated a significant linear correlation with age, body mass index, hypertension, LVEF, mean E/eʹ, LA VI, and E/A ratio (P < .001), with an especially strong correlation with age. LA SRE not only reflected LA remodeling, but more importantly, reflected early impairment of LA deformability. As such, LA SRE is a potential marker of changes in LA function in the early stages. [15][16][17][18][19] [20] Increased LA pressure and volume load would lead to the gradual fibrosis of the LA wall, leading to LA reservoir and conduit dysfunction. However, before enlargement of the left atrium, the LA deformability capacity is likely to be compromised. Thus, an ideal ultrasound parameter for accurate evaluation of LA deformability would be valuable and could facilitate earlier prediction of the above-mentioned cardiovascular events. Previous studies have demonstrated the feasibility of use of 2-dimensional echocardiographic speckle tracking technology for evaluating LA remodeling and deformability. [3,6,[21][22][23][24] Smallsample studies have identified LA longitudinal strain as a risk factor for stroke/TIA among patients with AF. [5] In our study, LA longitudinal strain rate showed changes at an early stage compared to LA strain and was found to be an early marker of damage to LA deformability. Indeed, in this study, LA SRE was an independent risk factor (P < .05) for stroke in patients with paroxysmal AF according to multivariate logistic regression analysis-more specifically, the smaller the absolute value of SRE, the greater was the risk of stroke/TIA. However, for a more definitive characterization of the relationship between the index and stroke/TIA, studies with longer-term follow-up combined with clinical and conventional echocardiographic parameters are needed. [25] Clinical applications Changes in LA strain rate reflect the capacity of LA deformability, which precedes LA remodeling, [26,27] and is more accurate than LA strain. Therefore, it is a more suitable parameter for screening of patients at an early stage. Due to its noninvasiveness and easy operation, the speckle tracking technique is becoming increasingly popular, and we anticipate that it will prove to be a reliable parameter in subsequent studies. Limitations Some limitations of this study should be acknowledged while interpreting the results. First, this was a cross-sectional study, and patients were not followed up to evaluate the effectiveness of the risk factors. Second, we only included patients with paroxysmal AF. Future studies should also include patients with non-AF with or without stroke/TIA, and then compare changes in LA strain rate. Third, all patients in this study had preserved LV EF. Conclusions LA SRE reflected changes in the LA deformability in the early stages. It was an independent risk factor for stroke/TIA among patients with paroxysmal AF. Further prospective studies, including more controls and longer-term follow-up, are required to evaluate the clinical utility of changes in LA strain rate.
Jet Cutting Technique for the Production of Chitosan Aerogel Microparticles Loaded with Vancomycin
Biopolymer-based aerogels can be obtained by supercritical drying of wet gels and endowed with outstanding properties for biomedical applications. Namely, polysaccharide-based aerogels in the form of microparticles are of special interest for wound treatment and can also be loaded with bioactive agents to improve the healing process. However, the production of the precursor gel may be limited by the viscosity of the polysaccharide initial solution. The jet cutting technique is regarded as a suitable processing technique to overcome this problem. In this work, the technological combination of jet cutting and supercritical drying of gels was assessed to produce chitosan aerogel microparticles loaded with vancomycin HCl (antimicrobial agent) for wound healing purposes. The resulting aerogel formulation was evaluated in terms of morphology, textural properties, drug loading, and release profile. Aerogels were also tested for wound application in terms of exudate sorption capacity, antimicrobial activity, hemocompatibility, and cytocompatibility. Overall, the microparticles had excellent textural properties, absorbed high amounts of exudate, and controlled the release of vancomycin HCl, providing sustained antimicrobial activity.
Introduction
Aerogels are nanostructured, lightweight materials with open, high porosities and large surface areas that currently find applications in many industrial sectors due to their thermal, optical, electrical, or mechanical properties [1,2]. The outstanding textural properties of the aerogels have also attracted attention from other fields such as biomedical and environmental sciences [3][4][5][6]. Biomedical applications of aerogels include the encapsulation of bioactive agents with solubility or stability limitations, and their use as synthetic scaffolds for tissue engineering and wound dressing materials for chronic wounds [7][8][9][10][11][12][13][14][15][16]. In the latter case, the large surface area of the aerogels confers them a high capacity to absorb wound exudates.
Figure 1. Schematic representation of the jet cutting process. The chitosan solution was pressed out of the nozzle as a fluid jet and cut into cylinders by the cutting disc. The cylinders acquired the spherical shape of droplets before falling into the gelation bath due to surface tension.
Production of Chitosan Aerogel Microparticles
Chitosan gel particles were produced using a JetCutter Type S equipment (GeniaLab GmbH, Braunschweig, Germany).
Table 1 summarizes the combinations of parameters used in this study. Briefly, the jetted chitosan solution (250 mL of 2 wt.% chitosan in 1% v/v acetic acid aqueous solution) was extruded through a nozzle assisted by compressed air (P = 2 bar). A protective piece of stainless steel was placed around the cutting disc to collect fluid loss. Different nozzle diameters (350, 400, and 500 µm), number of wires (40 and 120) of the cutting disc, and cutting disc rates (1000 to 6000 rpm) were used to test the feasibility of particle production and the morphology and particle size distribution (PSD) of the resulting particles. The angle of the jet was in all cases perpendicular to the cutting disc. A gelation bath consisting of 2 L of alkaline medium was placed below the jet cutter to form and collect the gel microparticles. Preliminary tests were carried out in aqueous media (0.2 M NaOH) to avoid the use of high volumes of ethanol and thus reducing the amount of organic solvents used in the study. After a preliminary screening of the parameters for the processing of the chitosan solution, the gelation of vancomycin-loaded chitosan particles was performed in 2 L of EtOH containing 26 mL of 25% aqueous NH 3 . The loading of vancomycin HCl into the particles was performed by addition of the drug to the initial chitosan solution (10 wt.% with respect to chitosan). The microparticles were left in the gelation bath for 1 h for ageing and then the solvent was replaced with absolute EtOH twice. Gel particles were placed in filter paper and dried for 3.5 h with supercritical CO 2 in a 250 mL high pressure autoclave (120 bar, 40 • C, 15 g/min). Morphology and Textural Properties During the initial screening of the jet cutting process, gel particles processed using nozzle diameters of 400 and 500 µm and gelified for 1 h in aqueous 0.2 M NaOH were examined by optical microscopy (VisiScope TL384H, VWR International GmbH, Darmstadt, Germany) to qualitatively monitor properties such as sphericity and homogeneity. After supercritical drying, the resulting unloaded and vancomycin-loaded aerogels were studied by scanning electron microscopy (SEM) at 3 kV (FESEM ULTRA PLUS, Zeiss, Oberkochen, Germany). Prior to SEM-imaging, aerogels were sputtered-coated (Q150 T/S/E/ES, Quorum Technologies, Lewes, UK) with a 10 nm layer of iridium to improve the contrast. The PSD and sphericity of the aerogels were determined by dynamic image analysis (CamSizer XT, Retsch, Haan, Germany). All data for the PSD were obtained based on the x area , i.e., the particle diameter obtained from the area of particle projection. Sphericity was given as a value between 0 and 1, with 1 being a perfect sphere. Nitrogen adsorption-desorption measurements (ASAP 2000 Micromeritics Inc, Norcross, GA, USA) were used for the determination of the textural properties of the aerogel particles loaded with vancomycin HCl and gelified in a NH 3 /EtOH medium. Specific surface area (a BET ) was calculated using the Brunauer-Emmet-Teller (BET) method, whereas the Barrett-Joyner-Halenda (BJH) method was applied for the determination of the pore size distribution, specific pore volume (V p,BJH ), and mean pore diameter (d p,BJH ). Overall porosity (ε) was determined using Equation (1): where ρ bulk is the bulk density determined from the weight of particles of a known volume, and ρ skel is the skeletal density determined by helium pycnometry (MPY-2, Quantachrome, Delray Beach, FL, USA). 
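The body of Equation (1) is not reproduced above; a standard definition consistent with the variables named in the text (an assumed reconstruction, not a quotation of the original) is:
ε (%) = (1 − ρ_bulk / ρ_skel) × 100
With the values reported later for the selected batch (ρ_bulk = 0.060 g/cm3, ε = 95.6%), this form implies a skeletal density of roughly 1.4 g/cm3, which is plausible for chitosan. For scale, the jetted feed of 250 mL of 2 wt.% chitosan corresponds to about 5 g of polymer (assuming a solution density close to 1 g/mL), so the 10 wt.% vancomycin loading amounts to roughly 0.5 g of drug per batch.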
Fluid Sorption Capacity Test Approximately 40 mg of aerogel microparticles were placed in 6-well plate inserts of known weight and immersed in Falcon tubes containing 50 mL of PBS (Phosphate Buffered Saline) pH 7.4 solution. At specific times (1, 2, 4, 8, and 24 h), the inserts were removed from the solution and weight gain was determined. The experimental test was carried out in triplicate. The PBS sorption capacity was calculated using Equation (2): where w 0 and w t are the weight of the particles before and after the immersion in PBS during a certain time t, respectively. Vancomycin Entrapment Yield and Release Tests Vancomycin-loaded chitosan aerogel particles (50 mg) were placed in glass vials containing 5 mL of 0.1 M HCl. After 4 h, the particles were dissolved and the concentration of vancomycin in the medium was determined by UV/Vis spectrophotometry at a wavelength of 280 nm (Genesys10uv, Thermo Spectronic, New York City, NY, USA). The absorbance of dissolved unloaded chitosan aerogels was also determined to remove the influence of the polymer in the UV-measurements. The concentration of vancomycin HCl was calculated using a calibration curve in 0.1 M HCl validated in the 20-300 µg/mL range (R 2 > 0.9995). The entrapment yield of vancomycin into the aerogels was determined using Equation (3): where w p is the amount of vancomycin HCl present in the particles and w t is the total amount of vancomycin added to the initial chitosan solution. Vancomycin release tests were carried out in Franz cells consisting of a donor chamber and a receptor chamber separated by a 0.45 µm cellulose nitrate membrane filter (Whatman, Maidstone, UK). The receptor chamber was filled with 6 mL of PBS (pH 7.4) and ca. 40 mg of particles were added to the donor chamber. Surface available for drug diffusion was 1 cm 2 . The release tests were performed in triplicate in an orbital shaker (VWR ® Incubating Mini Shaker, VWR, Chester, PA, USA) at 37 • C and 400 rpm. At preset times, aliquots of 1 mL were taken from the receptor chamber and the withdrawn volume was replaced with fresh PBS. The concentration of vancomycin HCl was determined by UV-Vis spectrophotometry (8453, Agilent, Santa Clara, CA, USA) using a calibration curve in PBS validated in the 25-200 µg/mL range (R 2 > 0.9997). Experiments were carried out in triplicate and results were expressed as µg of vancomycin released per mg of loaded aerogel particles. Antimicrobial Tests Antibacterial activity of the aerogel microparticles was tested against S. aureus (ATCC 25923). Exponential bacterial culture (10 6 CFUs/mL) was prepared in a simulated body fluid (SBF, pH 7.4). The bacterial suspension (200 mL) and 7 mg of chitosan aerogel particles (with and without vancomycin) were incubated at 37 • C and 150 rpm for 6, 24, and 48 h. After incubation, the planktonic population was quantified by the colony-forming units (CFUs) method. A solution of vancomycin HCl (1.85 mg/mL) and free bacterial suspension acted as positive and negative controls, respectively. Three independent experiments were performed in triplicate. Results were expressed as the logarithmic concentration of planktonic bacteria. Hemolytic Activity Test The hemolytic activity of the vancomycin-loaded aerogel microparticles was tested using human blood (Galician Transfusion Center, Spain) obtained in accordance with the rules of the Declaration of Helsinki. 
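The bodies of Equations (2) and (3) referenced above are likewise not reproduced in the extracted text; definitions consistent with the variables as described (assumed reconstructions) would be:
PBS sorption capacity = (w_t − w_0) / w_0
Entrapment yield (%) = (w_p / w_t) × 100
The first expresses uptake as grams of PBS per gram of dry aerogel, matching the later statement that the particles absorbed up to nine times their weight. Note that w_t denotes the particle weight at time t in Equation (2) but the total drug added in Equation (3), following the text's own symbols.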
A sample of fresh human whole blood was diluted to 33% (v/v) in 0.9% (w/v) NaCl and 1 mL of the diluted blood was poured in Eppendorf tubes containing 5 mg of vancomycin-loaded chitosan aerogel microparticles, 100 µL of 4% (v/v) Triton X-100 (positive control) or 100 µL of 0.9% (w/v) NaCl (negative control). Samples were incubated for 60 min at 37 • C and 100 rpm in an orbital shaker and then centrifuged at 10,000 g for 10 min (Sigma 2-16P, Sigma Laboratory Centrifuges, Germany). Then, 150 µL of the supernatant were transferred to a 96-well plate and the absorbance of the hemoglobin was measured at 540 nm (FLUOStar Optima, BMG Labtech, Germany). The percentage of hemolysis of the aerogels was determined using Equation (4): where Abs s is the absorbance of the samples containing the aerogels, Abs n is the absorbance of the negative control (0% of hemolysis), and Abs p is the absorbance of the positive control (Triton X-100, 100% of hemolysis). Tests were carried out in triplicate. Cytotoxicity Test The compatibility of vancomycin-loaded aerogel microparticles was tested against BALB/3T3 mouse fibroblasts. Cells were seeded in 24-well plates (12,350 cells per well) in DMEM supplemented with 10% FBS, penicillin 100 U/mL, and streptomycin 100 µg/mL and incubated overnight at 37 • C in a humidified atmosphere with 5% CO 2 . Then, four replicates of 5 mg of particles were sterilized using UV radiation (30 min, 254 nm), placed in cell culture inserts (Thermo Fisher Scientific, Waltham, MA, USA), and immersed in the wells. Cells cultured without particles were the positive control. After 24 and 48 h of incubation, the inserts with the particles were collected and 50 µL of WST-1 reactive were added to each well. After 4 h of incubation, plates were shaken thoroughly for 1 min and 100 µL from each well were transferred to a 96-well plate in triplicate. The absorbance was measured at 450 nm in a plate reader (EnSpire, PerkinElmer, Madrid, Spain) and cytocompatibility was determined using Equation (5): where Abs s and Abs c are the absorbance of the wells cultured with and without (control) the aerogels, respectively. Jet Cutting of Chitosan Gels and Morphology and Textural Properties of the Resulting Aerogel Particles The processability of a viscous chitosan solution with the jet cutter was studied producing gel microparticles under different conditions, using an aqueous basic solution (0.2 M NaOH) as the gelation bath. Chitosan gelation took place immediately after contact of the droplet of chitosan solution with the surface of the gelation bath. The change from an acidic to alcaline medium caused the deprotonation of the amino groups of the chitosan, and thus its gelation by a precipitation mechanism. In general, smaller particle sizes were obtained when using smaller nozzle diameters and a higher number of wires in the cutting disc that cut the fluid jet at a higher frequency. However, the use of the smallest nozzle diameter (350 µm) in this study led to frequent events of clogging. Nozzle diameters of 400-500 µm showed good processability and particles were produced at different cutting disc rates (2000, 4000, and 6000 rpm). The 120-wired cutting disc resulted in high fluid losses since it was not able to split the fluid jet into cylinders and the solution remained attached to the wires until deviated to the collector of fluid loss instead of the gelation bath. The chitosan solution successfully reached the gelation bath when the 40-wired cutting disc was used. 
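Similarly, plausible reconstructions of Equations (4) and (5) described above, consistent with the stated controls (assumptions on my part, not the original typography), are:
Hemolysis (%) = (Abs_s − Abs_n) / (Abs_p − Abs_n) × 100
Cell viability (%) = (Abs_s / Abs_c) × 100
The hemolysis value is normalised between the saline (0%) and Triton X-100 (100%) controls, which also explains how a slightly negative figure such as the −7.7% reported later can occur.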
In accordance with the literature [34], higher nozzle diameters led to larger particle sizes (Table 2 and Figure 2a,b) since the mass flow of the chitosan solution was higher, but similar PSDs were observed using nozzle diameters of 400 or 500 µm. Regarding the cutting disc velocity, aerogels processed at 2000 rpm had larger diameters and broader PSD than those processed at 4000 and 6000 rpm. In general, a higher cutting disc velocity results in smaller particle sizes, but this trend was not herein observed at 4000 and 6000 rpm. This could be explained with the values of sphericity (Table 2) and SEM images of the particles (Figure 3). When using the projected area as the parameter to estimate particle size, if the particle is not spherical the value may be biased by its orientation. Thus, a flattened particle may have the same projected area as a larger spherical particle. Particles processed at 6000 rpm were flatter, probably because of their lower weight and subsequent deformation upon contact with the surface of the gelation medium [35]. The aerogels obtained from chitosan gels processed by jet cutting with a nozzle diameter of 500 µm and a cutting disc velocity of 4000 rpm were lightweight (ρ bulk = 0.060 ± 0.002 g/cm 3 ) and highly porous (ε = 95.6% ± 0.2%), and presented excellent textural properties (a BET = 188.0 ± 9.4 m 2 /g). Porosity and bulk density values were consistent with previous reports on chitosan aerogels gelified using a similar precipitation method, but the specific surface area was slightly lower [36,37], probably due to a higher degree of shrinkage of the gel during gelation in ethanol medium [38]. Chitosan aerogels with higher specific surface areas have also been described, but involved the use of chemical crosslinkers that could leave toxic residues in the gels, raising regulatory concerns [39].
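The orientation bias discussed above follows from how dynamic image analysis reports size and shape. Using the usual definitions for a particle projection of area A and perimeter P (assumed here; the CamSizer documentation is not quoted in the text):
x_area = √(4A / π)    SPHT = 4πA / P²
A flattened particle lying on its broad face keeps a large projected area, and hence a large x_area, while its perimeter grows relative to a circle of equal area, lowering the sphericity below 1, which is consistent with the flatter particles reported at 6000 rpm.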
Fluid Sorption Capacity
A good moisture balance is required at the wound site for adequate wound epithelization and closure. However, wound exudates are environments rich in inflammatory cytokines and chemokines and can also be a suitable medium for bacterial proliferation [40,41]. Thus, it is important that materials used in wound dressings are able to absorb the exudates, maintaining good conditions for the healing process. The exudate sorption capacity of the aerogel microparticles was determined by a gravimetric method (Figure 4). Due to their high porosity and large surface area, aerogels were able to absorb up to nine times their weight in PBS after 24 h. Unlike chemical crosslinking, where bonds between the polymer fibers are permanent and may lead to rigid structures with limited water sorption capability [42], the physical precipitation of chitosan allowed for a certain degree of swelling in the polymer network, so the microparticles could retain high amounts of water within their structure.
Drug Loading and Release
Vancomycin HCl is a highly hydrosoluble drug (solubility > 100 mg/mL), but it is poorly soluble in ethanol [43]. Accordingly, chitosan gelation was performed in ethanol with NH 3 to mitigate drug migration through diffusion to the gelation bath, which would result in low drug-loading efficiencies.
The entrapment efficiency for vancomycin in the chitosan aerogel microparticles was 24.6 ± 0.3%, with a final loading of 22.4 ± 0.3 µg of vancomycin/mg of aerogel particles. The obtained drug loss can be explained by drainage of the water containing the drug from the chitosan solution when dropped in the ethanol of the gelation bath. In any case, the loading was still high compared to other drug-loaded aerogel formulations prepared in aqueous medium (≈ 12%) [36]. In the release studies, aerogel microparticles formed a layer on the membrane of the donor compartment of the Franz cells, simulating their application in the wound. Microparticles only released 50% of the drug payload after 4 h (Figure 5) and complete release was observed after 24 h (the release profile reached a plateau that was maintained after 48 h). The microparticles provided concentrations above the MIC (2 µg/mL) for susceptible bacteria already at a short time period, as confirmed in the antimicrobial activity tests (cf. Section 3.4).
Antimicrobial Tests
The antimicrobial activity of the vancomycin-loaded aerogel microspheres was tested in an SBF medium against S. aureus (Figure 6), since it is the most common Gram-positive bacteria in chronic wounds [44]. The aerogel microparticles loaded with vancomycin showed a fast antimicrobial effect, being able to completely inhibit the bacterial growth after 6 h of incubation. Bacterial growth inhibition of the vancomycin-loaded aerogels was maintained during the evaluated time (48 h). On the other hand, non-loaded aerogels did not inhibit the bacterial growth.
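A rough mass balance (illustrative only, ignoring the replacement of sampled aliquots and assuming complete mixing in the receptor chamber) puts the loading, release, and MIC figures above side by side:
40 mg of particles × 22.4 µg/mg ≈ 0.9 mg of vancomycin in the donor chamber; 50% released by 4 h ≈ 0.45 mg into 6 mL of PBS ≈ 75 µg/mL, i.e., well above the 2 µg/mL MIC quoted for susceptible S. aureus.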
The antimicrobial effect of the drug-loaded aerogels indicated that the aerogels preserved the active form of vancomycin HCl and released it at an adequate rate. The use of a polymeric matrix that releases the drug instead of the direct use of the drug powder allows for a more precise adjustment of the dosage, avoiding toxic effects [45]. Although many studies have evaluated the antimicrobial capacity of chitosan [46], it has been reported that chitosan only presents antimicrobial activity when dissolved in acidic media [47], probably due to the protonated free amino groups that interfere with the bacterial membrane.
Hemocompatibility
Biomaterials to be applied directly to the wound need to be compatible with red blood cells, so they do not interfere with the hemostatic activity. A good hemostatic response to the aerogel formulation is crucial, since the first stage of the wound healing process is intended to reduce blood loss and to start the formation of a provisional wound matrix [48].
In later stages of cell proliferation and repair, a process of formation of new blood vessels (angiogenesis) also takes place. The determination of the hemoglobin released from red blood cells from diluted human blood samples after incubation with the material is a simple method to evaluate hemocompatibility. Results showed that the vancomycin-loaded chitosan aerogel microparticles were compatible with the red blood cells compared to the negative control (saline solution), and even the hemolytic activity was lower (−7.7%). According to ISO 10993-4, materials with hemolysis values lower than 5% can be safely used.
Cytocompatibility
Vancomycin HCl is frequently applied by intrawound administration to prevent post-surgical infection, but it may have a cytotoxic effect at certain concentrations [49]. Therefore, a fibroblast cell line was used to test cell viability after incubation with the vancomycin-loaded chitosan aerogel microparticles (Figure 7). Fibroblasts are the functional cells of the dermis and are responsible for the production of the extracellular matrix, mainly composed of collagen and elastin [50]. During the proliferative stage of the wound healing process, fibroblasts migrate to the wound site and participate in the granulation process by deposition of collagen fibers that will constitute the scar tissue [51]. Overall, the aerogels presented good biocompatibility, with values higher than 80% (after 24 and 48 h).
Conclusions
The use of the jet cutting technology in combination with the supercritical fluid-assisted drying technique represents an excellent strategy for the processing of aerogels from highly viscous precursor solutions. Chitosan aerogels were successfully produced in the form of spherical microparticles through this combined technology, and were shown to be suitable drug carriers for wound healing applications. Rotation speed and number of wires of the cutting disc, along with nozzle diameter, were the key parameters for the jet cutting process to obtain spherical and unimodal aerogel particles in the 700-900 µm range. The processing approach presented is compatible with the loading of drugs in the aerogel structure without the involvement of additional steps. The use of ethanol instead of aqueous baths for chitosan gelation turned out to be an attractive strategy for vancomycin loading since the drug entrapment yield in the resulting aerogel particles was significantly improved. The in vitro drug release from the chitosan aerogels provided local concentrations of vancomycin able to inhibit the microbial growth of S. aureus bacteria in less than 6 h after treatment.
High fluid sorption capacity, hemocompatibility, and cytocompatibility with fibroblasts of the chitosan aerogel formulation were suitable for the intended biomedical application. This aerogel-based formulation can meet the requirements to prevent infections for those cases of treatment of chronic wounds shortly after debridement. Vancomycin-loaded aerogel particles can be directly applied at the wound site or included as a component of a multi-layered dressing.
Assessing organ at risk position variation and its impact on delivered dose in kidney SABR
Delivered organ at risk (OAR) dose may vary from the planned dose due to interfraction and intrafraction motion during kidney SABR treatment. Cases of bowel stricture requiring surgery post SABR treatment were reported in our institution. This study aims to provide strategies to reduce dose deposited to OARs during SABR treatment and mitigate risk of gastrointestinal toxicity. Small bowel (SB), large bowel (LB) and stomach (STO) were delineated on the last cone beam CT (CBCT) acquired before any dose had been delivered (PRE CBCT) and on the first CBCT acquired after any dose had been delivered (MID CBCT). OAR interfraction and intrafraction motion were estimated from the shortest distance between OAR and the internal target volume (ITV). Adaptive radiation therapy (ART) was used if dose limits were exceeded by projecting the planned dose on the anatomy of the day. In 36 patients, OARs were segmented on 76 PRE CBCTs and 30 MID CBCTs. Interfraction motion was larger than intrafraction motion in STO (p-value = 0.04) but was similar in SB (p-value = 0.8) and LB (p-value = 0.2). LB was inside the planned 100% isodose in all PRE CBCTs and MID CBCTs in the three patients that suffered from bowel stricture. SB D0.03cc was exceeded in 8 fractions (4 patients). LB D1.5cc was exceeded in 4 fractions (2 patients). Doses to OARs were lowered and limits were all met with ART on the anatomy of the day. Interfraction motion was responsible for OAR overdosage. Dose limits were respected by using ART with the anatomy of the day.
Introduction
Stereotactic ablative body radiotherapy (SABR) is a novel treatment for renal cell cancer (RCC) in medically inoperable patients, resulting in excellent local control and low toxicities [1,2]. However, the large dose received by organs at risk (OARs) surrounding the target might result in undesirable toxicity post SABR treatment [3][4][5][6]. Patients will therefore benefit from any effort to minimise dose deposited to OARs. Small bowel (SB), large bowel (LB) and stomach (STO) are OARs in the context of kidney SABR treatment. These organs are subject to daily positional and shape changes, which may result in variations between planned and delivered dose [7,8]. Bowel and stomach motion includes peristalsis and respiratory-induced motion, in addition to shape and size changes due to filling [9]. To account for motion, the use of an isotropic margin expansion of the OAR contour to create a planning organ at risk volume ('PRV' technique) is recommended [8]. In the case of bowels, a structure container including the entire intestinal cavity ('bowel bag' technique) may be used [10,11]. Moreover, adaptive radiation therapy (ART) may be a solution to account for OAR interfraction motion [12,13]. Intrafraction motion and its impact on the dose distribution in kidney SABR have been studied in the literature [14][15][16][17][18]. In particular, kidney motion under free breathing was reported to be less than 10 mm in amplitude [15]. However, intrafraction motion may lead to significant dose differences with respect to the static planned dose [17]. Abdominal OAR interfraction motion was reported in liver SBRT [19]. Dose limits to OARs were exceeded in the stomach, heart, and oesophagus, mostly due to motion in the supero-inferior direction.
However, the probability of small bowel and large bowel overdosage is expected to be higher in kidney SABR treatment compared with liver SBRT treatment. We observed three cases of bowel stricture requiring surgery post kidney SABR treatment at our institution. The intent of this study is to investigate dosimetric factors that could have led to this toxicity. To do so, we aim to estimate intrafraction and interfraction variations of small bowel, large bowel and stomach and their impact on the delivered dose as determined from cone-beam CTs (CBCT) in treatment position. We further aim to suggest solutions that may lower dose received to OARs in kidney SABR treatment. Materials and methods Toxicities of the whole kidney SABR cohort treated at our institution were not available at the time of this study. Rather, we selected for inclusion a contemporaneous cohort of kidney SABR patients to those we observed with bowel stricture. This was a pragmatic sample size based on availability of imaging data due to software upgrades. This included all 36 consecutive patients treated from January 2018 to February 2021 for whom online image guidance registration dicom files were available. Lesion sizes smaller or equal to 4 cm were prescribed 26 Gy in a single fraction (SF) and lesion sizes greater than 4 cm were prescribed 42 Gy in three fractions (MF) [20]. Renal metastasis were prescribed 20 Gy in a SF. Each patient was simulated with a four-dimensional CT scan (4DCT) in free breathing on a Brilliance Big Bore 16-slice CT scanner (Philips Healthcare, Andover, MA, USA). Images were sorted into 10 phase-based bins with a bellows system for the respiratory trace (Philips Healthcare). The pixel spacing was either 1.17 mm or 1.37 mm. The slice thickness was 2 mm with the exception of two patients where 1 mm was used. The planning CT was the average intensity projection (AIP) of the 10 phase images in 30/36 patients, the AIP of the exhale phase images (typically phase 40% to phase 70%) in 5/36 patients who were treated with respiratory gating and a 3D exhale breath hold acquisition in 1/36 patient. Patients were immobilized at simulation and during treatment with the BodyFix vacuum drape (Elekta, Stockholm, Sweden). The tumour was delineated on the planning CT to generate an internal target volume (ITV). The ITV covered the residual motion in the gating window as estimated on the 4DCT in respiratory gating cases and the estimated variation between repeat breath holds in the breath hold case. A planning target volume (PTV) was generated through a 5 mm isotropic expansion of the ITV. Dose distribution was optimized and calculated to the PTV by using the Eclipse treatment planning system (Varian Medical Systems, Palo Alto) with Photon Optimization algorithm (v15.6 or v15.1) for optimization and AcurosXB algorithm (v15.6 or v15.1) reporting dose to medium for dose calculation. The dose calculation grid was 2.5 mm or 1.25 mm in plane, and 2 mm in the supero-inferior direction. All patients were planned with volumetric modulated arc therapy (VMAT) with the exception of one patient planned with 3D conformal radiation therapy (3DCRT) and one patient with intensity-modulated radiation therapy (IMRT). According to our protocol, all plans underwent patient-specific QA pre-treatment that generally included a 4DCT review, treatment plan review and 3D calculation and delivered log file based pre-treatment QA with a 2%/2 mm Gamma passing rate. 
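For context on the two prescription schedules above, the standard linear-quadratic biologically effective dose, BED = n·d·(1 + d/(α/β)), gives broadly comparable tumour BEDs if an α/β of 10 Gy is assumed (an illustrative assumption, not a value taken from the study):
BED_10 (26 Gy × 1) = 26 × (1 + 26/10) = 93.6 Gy
BED_10 (3 × 14 Gy) = 42 × (1 + 14/10) = 100.8 Gy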
Interfraction motion may be due to daily variation in organ shape or size, weight loss during the course of treatment, radiation damage, or change in tumour size [24]. Interfraction positional change was measured from the CBCT acquired for setup at time of treatment; the CBCT acquired immediately prior to treatment was used (PRE CBCT). Intrafractional variation may be caused by respiratory motion, peristalsis, or cardiac motion [24]. Intrafraction motion was measured by using the first CBCT acquired after some dose had been delivered to the patient (MID CBCT). All CBCTs were acquired with either 125 kVp or 140 kVp with 2 mm slice thickness, with the exception of 2 CBCTs with 1 mm slice thickness. The pixel spacing was either 0.91 mm or 0.51 mm. SB, LB, and STO were retrospectively delineated in treatment position on all PRE CBCTs and MID CBCTs. Bowels were segmented by contouring each bowel loop independently from each other ('bowel loop technique'). In the case of streak artefacts due to bowel gas motion during CBCT acquisition, organ edges were approximated. CBCT quality was classified qualitatively as 'excellent', 'good', and 'approximate' depending on how well bowel edges could be determined visually. The registration used to match the tumour on the CBCT to the planning CT, performed by the radiation oncologist at time of treatment, was applied, and the OAR contours on the CBCT were copied to the planning CT. Dose metrics were then extracted for each OAR based on their position on the planning CT, PRE CBCT, and MID CBCT. The location of OARs was quantified through the determination of the shortest distance between the ITV and the OAR, denoted dist(ITV, OAR). To do so, the ITV contour was successively expanded with a 1 mm isotropic margin. The overlap between the expanded ITV and the OAR was determined after each expansion. The shortest distance between the two structures was defined as the first expansion distance at which the overlap between the two structures returned a non-null structure. Interfraction motion was quantified by calculating the difference between the shortest distance on the PRE CBCT and on the planning CT, Δdist(PRE, CT) = dist_PRE(ITV, OAR) − dist_CT(ITV, OAR). An OAR closer to the ITV on the PRE CBCT than on the planning CT had Δdist(PRE, CT) < 0. A similar quantity was defined to quantify intrafraction motion, Δdist(MID, PRE) = dist_MID(ITV, OAR) − dist_PRE(ITV, OAR). The mean and standard deviation of the magnitude of the interfractional and intrafractional variations, Δdist(PRE, CT) and Δdist(MID, PRE), were reported. To test whether a variation in Δdist(PRE, CT) or Δdist(MID, PRE) leads to a variation in the planned dose per fraction to the OAR, the Pearson correlation coefficient (r) was calculated between Δdist and the difference between the near-to-maximum planned dose per fraction of this OAR on the PRE CBCT and on the planning CT, ΔD0.03cc(PRE, CT) = D0.03cc_PRE − D0.03cc_CT, or on the MID CBCT and on the PRE CBCT, ΔD0.03cc(MID, PRE) = D0.03cc_MID − D0.03cc_PRE. In the case where a dose limit was exceeded in a given fraction by using structures on the PRE CBCT, a new treatment plan was generated to investigate if dose to organs could have been lowered while preserving adequate target coverage. To do so, dose optimization and calculation were first performed by using the contours determined on the PRE CBCT with the objectives used in the original treatment plan. 
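To make the distance metric defined above concrete, the following minimal sketch (not the authors' ESAPI implementation) computes dist(ITV, OAR) on binary voxel masks by iterative expansion, together with the interfraction and intrafraction differences Δdist; representing the structures as boolean arrays with isotropic 1 mm voxels is an assumption of the sketch.

```python
import numpy as np
from scipy import ndimage

def shortest_distance_mm(itv_mask: np.ndarray, oar_mask: np.ndarray,
                         step_mm: float = 1.0, max_mm: float = 100.0) -> float:
    """dist(ITV, OAR): expand the ITV in 1 mm steps until the expanded
    volume first overlaps the OAR (returns 0 if they already overlap)."""
    if np.any(itv_mask & oar_mask):
        return 0.0
    structure = ndimage.generate_binary_structure(3, 1)  # one-voxel expansion per step
    expanded = itv_mask.copy()
    distance = 0.0
    while distance < max_mm:
        expanded = ndimage.binary_dilation(expanded, structure=structure)
        distance += step_mm
        if np.any(expanded & oar_mask):
            return distance
    return float("inf")

def motion_metrics(dist_ct: float, dist_pre: float, dist_mid: float) -> dict:
    """Interfraction and intrafraction variations as defined above."""
    return {"delta_dist_pre_ct": dist_pre - dist_ct,   # < 0: OAR closer to ITV at treatment
            "delta_dist_mid_pre": dist_mid - dist_pre}
```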
If constraints were still not met, a knowledge-based planning (KBP) model (RapidPlan v15, Varian Medical Systems, Palo Alto) was used to further improve the plan (the KBP model was not available at the time of original treatment planning). Metric extraction, determination of the shortest distance between the ITV and OAR, and dose optimization and calculation were performed by using the Eclipse Scripting Application Programming Interface (ESAPI v16.1, Varian Medical Systems, Palo Alto). Statistical significance of the difference in the medians was determined with a Wilcoxon signed rank test for equal sample sizes and a Wilcoxon rank sum test otherwise, by using the SciPy v1.5.2 module. The null hypothesis was rejected at the 5% significance level (p-value < 0.05). Results The characteristics of the 36 patients that received kidney SABR considered in this study are described in Table 1. In this cohort, 34 lesions were primary renal cell cancer, 1 lesion was a renal metastasis, and 1 lesion was a renal bed. A total of 76 PRE CBCTs were acquired. MID CBCTs were acquired in 12 patients in the SF cohort and in 9 patients (18 Fx) in the MF cohort, for a total of 30 MID CBCTs. The time difference between MID CBCT and PRE CBCT ranged from 5.5 min to 22.8 min (mean ± standard deviation = 9.6 ± 3.8 min). Optimisation was performed on bowel loops in 20 plans (56%), PRV in 13 plans (36%) and bowel bag in 3 plans (8%). The quality was judged 'excellent'/'good'/'approximate' in 25%/41%/34% of all PRE CBCTs and 7%/23%/70% of all MID CBCTs. An illustration of the dose distribution and OAR locations on the planning CT and PRE CBCT is shown in Fig. 1 (Patient 25). Adaptive therapy Patients 22, 35, and 36, all in the MF patient cohort, suffered from bowel stricture. Planned and estimated delivered dose volume histograms of all patients are available in Additional file 2. In the context of adaptive therapy, only estimated doses on PRE CBCT were considered. All dose limits to OARs were respected in the SF patient cohort. Dose limits were exceeded in 5 patients in the MF patient cohort. Dose metrics PTV D99%, SB D0.03cc, LB D1.5cc, and LB D0.03cc of the MF cohort are shown in Fig. 3. Dose metric SB D0.03cc was not respected in 8 Fx (4 patients) and LB D1.5cc was not respected in 4 Fx (2 patients). The metric LB D0.03cc was larger than 100% of the prescription dose in 13 Fx (6 patients). Dosimetric conflicts could have been solved with ART in all cases, as shown in Fig. 4. In Patient 35, the KBP model was used in the three fractions due to challenging small bowel and large bowel locations. Target coverage was compromised with ART in this patient (ART PTV D99% = 20.0 Gy/32.0 Gy/18.9 Gy, compared with the planned PTV D99%). In the remaining four patients (Patients 18, 22, 27, and 28), target coverage was reasonable with optimisation using the original objectives on the PRE CBCT contours (ART PTV D99% ranged from −5% to 10% relative to planned PTV D99% in 7 Fx), and doses to OARs were all reduced (ART SB D0.03cc ranged from −47% to −8% relative to PRE CBCT SB D0.03cc in 6 Fx, and ART LB D1.5cc = −22% and −15% relative to PRE CBCT LB D1.5cc in 2 Fx). Discussion Dose received by SB, LB and STO during kidney SABR treatment was estimated from the planned dose to the projected structures delineated on CBCTs acquired before and during treatment. This study was motivated by reported cases of bowel stricture post SABR treatment at our institution. 
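The near-to-maximum metrics quoted here (D0.03cc, D1.5cc) were extracted with the treatment planning system; purely as an illustration of how such a metric is conventionally defined on a voxel grid (this is not the study's implementation), a short sketch follows.

```python
import numpy as np

def dose_to_hottest_volume(dose_gy: np.ndarray, roi_mask: np.ndarray,
                           volume_cc: float, voxel_volume_cc: float) -> float:
    """Minimum dose received by the hottest `volume_cc` of the ROI,
    e.g. D0.03cc or D1.5cc. `dose_gy` and `roi_mask` share the same shape."""
    doses = np.sort(dose_gy[roi_mask])[::-1]                   # descending
    n_voxels = max(1, int(np.ceil(volume_cc / voxel_volume_cc)))
    n_voxels = min(n_voxels, doses.size)
    return float(doses[n_voxels - 1])

# Usage (illustrative): sb_d003 = dose_to_hottest_volume(dose, sb_mask, 0.03, voxel_cc)
#                       lb_d15  = dose_to_hottest_volume(dose, lb_mask, 1.5,  voxel_cc)
```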
It is important to note that these three cases out of the 36 cases analysed in this study do not represent a crude rate of this toxicity. Rather, we selected this cohort to estimate delivered bowel dose in a series of contemporaneously treated patients to those with bowel stricture. Interfraction and intrafraction motion of SB, LB and STO were estimated from the shortest distance from these OARs to the ITV. Interfraction motion was larger than intrafraction motion in STO, while both motions were similar in SB and LB. A similar conclusion was reached in an earlier investigation of abdominal OAR motion using MRI, except that interfraction motion was also larger than intrafraction motion in LB [25]. A correlation was observed between the near-to-maximum planned dose and the interfraction and intrafraction motion; as the OAR to ITV distance decreased, the near-to-maximum dose to the OAR increased. This correlation was not observed in a previous study in liver SBRT [19], which used the dose metric OAR D0.3cc and the Euclidean distance between centres of mass. The difference may be explained by the use of the shortest distance between the tumour and the OAR in the current study rather than the distance between their centres of mass, as the latter can be influenced by OAR motion far from the ITV. All dose limits were respected in the SF patient cohort. However, dose limit SB D0.03cc was exceeded in 4 patients (8 Fx) and LB D1.5cc was exceeded in 2 patients (4 Fx) in the MF cohort. These results are consistent with earlier findings where interfraction motion was responsible for abdominal OAR overdosage [19]. LB was inside the planned 100% isodose in all PRE CBCTs and MID CBCTs acquired in the patients that suffered from bowel stricture (Patients 22, 35, and 36). An example is shown in Fig. 5. In these patients, treatment planning was challenging as the LB overlapped with the target. A PRV was used on the LB in all three patients, and the target coverage was compromised to meet the LB constraint applied to the PRV. Interfraction LB motion however resulted in portions of the LB moving into the 100% isodose line at time of treatment, indicating that the PRV margin was not sufficient to spare the LB. ART would have successfully reduced dose to OARs by using the anatomy of the day as optimization structures. However, online ART may significantly increase the total treatment time due to the additional planning time. Another solution would be to not deliver dose at a given fraction and postpone treatment if OAR locations differ from the original plan. The main limitation in this study comes from CBCT quality. Reduced quality was mainly due to the presence of bowel gas generating streak artefacts, which can obscure the edges of organs. As a result, some contours were only an approximation of the actual location of OARs. Improvement of CBCT quality to reduce bowel gas artefacts may be achieved with machine learning [26]. Furthermore, this problem may be avoided by magnetic resonance guided radiation therapy (MRgRT) [27,28]. Moreover, due to the inferior CBCT quality and Hounsfield unit inaccuracy, dose calculation was not performed on the CBCT, but the dose as calculated on the planning CT was assumed for all CBCT dose assessments. This approximation may be further improved through the use of improved on-treatment CBCT with reduced artefacts and improved HU accuracy [29,30]. 
A further limitation is the unequal sampling frequency of interfraction (76 potential measurements) and intrafraction motion (30 potential measurements). The small number involved in the determination of the intrafraction motion may have affected the statistical significance of the results. Finally, the intrafractional variation was measured by using only two points in time. This method can therefore capture positional drift but cannot provide a complete picture of respiratory motion or peristalsis, as higher temporal sampling would be needed. Conclusions Interfraction motion was responsible for the overdosage of the large bowel in three kidney SABR patients that experienced bowel stricture post SABR treatment. Dose limits could have been respected by using adaptive radiation therapy with the anatomy of the day.
2023-01-18T15:29:57.898Z
2022-06-27T00:00:00.000
{ "year": 2022, "sha1": "5521492aea615f4d406589926ec6d8d01aa998bf", "oa_license": "CCBY", "oa_url": "https://ro-journal.biomedcentral.com/counter/pdf/10.1186/s13014-022-02041-2", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "5521492aea615f4d406589926ec6d8d01aa998bf", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
264135542
pes2o/s2orc
v3-fos-license
Clinical and Radiological Parameters to Discriminate Tuberculous Peritonitis and Peritoneal Carcinomatosis It is challenging to differentiate between tuberculous peritonitis and peritoneal carcinomatosis due to their insidious nature and intersecting symptoms. Computed tomography (CT) is the modality of choice in evaluating diffuse peritoneal disease. We conducted an ambispective analysis of patients suspected as having tuberculous peritonitis or peritoneal tuberculosis between Jan 2020 to Dec 2021. The study aimed to identify the clinical and radiological features differentiating the two entities. We included 44 cases of tuberculous peritonitis and 45 cases of peritoneal carcinomatosis, with a median age of 31.5 (23.5–40) and 52 (46–61) years, respectively (p ≤ 0.001). Fever, past history of tuberculosis, and loss of weight were significantly associated with tuberculous peritonitis (p ≤ 0.001, p = 0.038 and p = 0.001). Pain in the abdomen and history of malignancy were significantly associated with peritoneal carcinomatosis (p = 0.038 and p ≤ 0.001). Ascites was the most common radiological finding. Loculated ascites, splenomegaly and conglomeration of lymph nodes predicted tuberculous peritonitis significantly (p ≤ 0.001, p = 0.010, p = 0.038). Focal liver lesion(s) and nodular omental involvement were significantly associated with peritoneal carcinomatosis (p = 0.011, p = 0.029). The use of clinical features in conjunction with radiological findings provide better diagnostic yields because of overlapping imaging findings. Introduction Tuberculosis (TB) is an ancestral and stubbornly prevalent global infection, affecting a quarter of the world's population [1].TB affects nearly 10 million people and leads to death in more than a million people annually, which might be just the tip of the iceberg [2].It remains a global problem with the increasing use of biologics, HIV infection, emigration, and an aging population [3].TB primarily affects the lungs but nearly 15% of immunocompetent patients and up to 50% of immunocompromised patients can develop extrapulmonary clinical manifestations [4].Abdominal TB accounts for approximately 15% of extrapulmonary tuberculosis (EPTB) cases with a greater incidence in high TB-burden countries [5].Abdominal TB can involve all organs but is predominantly limited to the peritoneum, intestine, solid viscera, and lymph nodes.Tuberculous peritonitis (TBP) is the most common clinical manifestation of abdominal TB, involving the peritoneum, mesentery, and omentum [6,7].Despite the availability of various tools for the diagnosis of abdominal TB, the current approach involves using a combination of clinical, radiologic, endoscopic, microbiologic, histologic, and molecular techniques [8].This is primarily related to the low sensitivity of microbiological tools for diagnosis.TBP may present in a variable manner with features like fever, abdominal distension, pain, loss of weight, and ascites.It is pertinent to note that the diagnosis of TBP is often difficult as microbiological positivity in ascitic fluid is uncommon.The diagnosis often rests on the ascitic adenosine deaminase with a level of >39 U/L being considered fairly sensitive and specific for TBP.Molecular tests, like the Xpert MTB/RIF, have low sensitivity for the diagnosis of TBP [9].Therefore, it is of the utmost importance to distinguish TBP from peritoneal carcinomatosis, which it mimics closely. 
Peritoneal carcinomatosis (PC) is an invasion of the serous membrane lining the abdominal cavity by malignant cells.PC can be a result of metastasis from intra-abdominal or extra-abdominal malignancies.Abdominal sources of metastasis commonly include the ovaries, large bowel, stomach, and pancreaticobiliary; extra-abdominal sources commonly include the breast, lung, thyroid, and lymphoma [10].Diagnosis of PC symbolized a poor prognosis due to the advanced stage of malignancy and limited treatment modalities available in the past.Recent advances in the last few decades, like cytoreductive surgery (CRS) which targets macroscopic disease, and hyperthermic intraperitoneal chemotherapy (HIPEC) which targets microscopic residual disease, have provided promising outcomes [11].These techniques are aggressive and associated with high morbidity.Early and accurate diagnosis, as well as the timely initiation of therapy, is crucial for optimal efficacy. One of the close mimics of PC is TBP.The diagnosis of TBP may be delayed due to the insidious and intersecting clinical presentation of abdominal pain, distension, intestinal obstruction, and fever [7,8,12].However, when no clear suggestion of primary cancer in the ovary or any other organ is found from imaging, the differentiation between TBP and PC can be challenging.This is especially the case with overlapping imaging findings of diffuse infiltration of the peritoneum, omentum, or mesentery [13].Furthermore, with no specific laboratory test to separate the two entities, the clear separation between the two often requires laparoscopy or histopathological confirmation. Computed tomography with an intravenous contrast is frequently the imaging modality of choice and is used in the non-invasive evaluation of peritoneal involvement, due to its feasibility and availability.Cross-sectional imaging provides adequate visualizations of all pockets of the peritoneal cavity and subdiaphragmatic spaces, which may not be easy with diagnostic laparoscopy.A few studies have tried to differentiate TBP from PC using computed tomography, however significant overlap between the two persists [14][15][16].The information provided by conventional radiological imaging is limited to the morphological anatomy, whereas molecular imaging provides additional information on physiological and pathological processes [17].Imaging modalities like diffusion-weighted magnetic resonance imaging and fluorodeoxyglucose positron emission tomography (FDG PET) have improved the diagnostic performance compared to CT in differentiating PC from TBP [18].Newer modalities like dual time point imaging and 68 Ga-FAPI-04 (fibroblast activation protein-specific inhibitor) PET/CT imaging have greatly improved the specificity for diagnosing malignant over inflammatory lesions [19,20].The use of dual and delayed time point imaging is based on the differential changes in avidity related to the different levels of glucose-6-phosphatase in benign and malignant lesions.The FDG uptake increases with time in malignant lesions.Similarly, FAPI PET has a role in detecting malignancies within a fibroblast dominant microenvironment including PC, but its response in TBP is not known.However, the availability, cost, and required expertise limit the utility in resourceconstrained economies.Image-guided peritoneal biopsies have provided better diagnostic yields, while laparoscopic biopsies are invasive as well as expensive.Therefore, we planned an observational study describing the clinical features and computed 
tomographic findings in TBP and PC to identify the features which could help in differentiating these two entities. Study Design and Patients We conducted an ambispective analysis of patients suspected as having tuberculous peritonitis or peritoneal carcinomatosis between January 2020 to December 2021 at a tertiary care center in North India.All consecutive patients presenting with ascites who were suspected to have peritoneal carcinomatosis or tuberculous peritonitis were considered for inclusion.We excluded patients with features of other etiological causes of ascites, e.g., those with chronic liver disease, chronic kidney disease, and heart failure.The study was approved by the institutional ethics committee vide letter number INT/IEC/2020/SPL-679 dated 30 May 2020.Some of the patients were included as part of an ongoing randomized trial that focused on the role of rolling over prior to paracentesis to improve the cytological yield [21].Other patients were identified from our medical records.All patients provided written informed consent prior to inclusion and additional consent prior to any invasive procedure which was deemed as clinically relevant for the evaluation.Guidelines related to ethical human research including the Declaration of Helsinki and the Indian Council of Medical Research were followed. Work Up and Follow-Up The clinical features at the time of presentation included abdominal distension, abdominal pain, intestinal obstruction, a lump in the abdomen, and fever.History of loss of weight and past history of TB or malignancy were recorded.The patients underwent relevant evaluations guided by the radiological findings.Those with ascites underwent diagnostic paracentesis (blinded or radiology-guided).The samples were sent for cytological analysis for malignant cells, cytology (differential cell count), ascitic fluid Xpert MTB/RIF, ascitic adenosine deaminase, and the serum albumin ascites gradient.Cytological analysis for suspected peritoneal carcinomatosis was done thrice in patients where the initial evaluation did not yield a diagnosis.Those with an unclear diagnosis and having omental and/or peritoneal thickening were considered for an ultrasound-guided fine needle aspiration/biopsy for evaluation of the cause of the disease. Definitions The final clinical diagnosis was based on the gold standard as defined below.Peritoneal carcinomatosis (PC) was diagnosed on the basis of positive cytological findings on examination of the peritoneal fluid and/or fine needle aspiration from peritoneal/mesenteric/omental lesion and/or positive histopathological findings from surgical specimens [22].The diagnosis of tuberculous peritonitis (TBP) was on the basis of positive cytological findings (caseating granuloma or granuloma) on fine needle aspiration from omentum/peritoneal lesions, elevated ascitic fluid adenosine deaminase (ADA) i.e., >39 U/L, microbiological positivity (culture or Xpert MTB/RIF) in ascitic fluid or response to antitubercular therapy (ATT) as evidenced by the disappearance of ascites within two months of initiation of ATT [8].Some patients with an equivocal diagnosis (e.g., ADA > 30 but <39 U/L) were started on ATT and were considered to have TBP only in cases where objective evidence of a response to antitubercular therapy was documented in the form of resolution of ascites. 
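As a hypothetical encoding (not the authors' software) of the study definitions above, purely to make the classification criteria explicit, the diagnostic rules can be written as a small function; the parameter names are illustrative.

```python
def classify_case(cytology_or_histology_malignant: bool,
                  granuloma_on_fna: bool,
                  ascitic_ada_u_per_l: float,
                  ascitic_xpert_or_culture_positive: bool,
                  ascites_resolved_within_2_months_of_att: bool) -> str:
    """Return 'PC', 'TBP', or 'indeterminate' following the study definitions."""
    if cytology_or_histology_malignant:
        return "PC"
    if (granuloma_on_fna
            or ascitic_ada_u_per_l > 39
            or ascitic_xpert_or_culture_positive
            or ascites_resolved_within_2_months_of_att):
        return "TBP"
    return "indeterminate"

# Example: an equivocal ADA (30-39 U/L) counts as TBP only with a documented ATT response.
print(classify_case(False, False, 34.0, False, True))  # -> 'TBP'
```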
CT Techniques and Analysis All CT scans were performed using multidetector row CT scanners.The scans were performed following an intravenous injection of 80-100 mL of non-ionic iodinated contrast agent.The scans of the entire abdomen and pelvis were acquired in the portal venous phase (70-90 s from the start of contrast injection). All computed tomography (CT) scans were reviewed by a gastrointestinal radiologist with 10 years of post-training experience and a nuclear imaging expert with 10 years' experience (RK).The experts were aware of the research query, i.e., radiological discrimination of TBP and PC, but were not unaware of the clinical findings, ascitic workup, investigations or the diagnosis.A predesigned format was provided for the documentation of findings in all cases.Any discrepancies were sorted by discussion between the imaging experts. Radiological features noted in all patients included the presence and density of ascites, and loculated ascites; lymphadenopathy; peritoneal, mesenteric and omental involvement; and bowel involvement.Additionally, the liver, spleen, pleural effusion, and adnexal involvement (in females) were noted. We graded the ascites as mild, moderate and severe [23].The attenuation of ascites was defined as high or low if the attenuation value was >10 or <10 HU, respectively.If lymphadenopathy was present, the site, presence of conglomeration, necrosis, and calcification were recorded.The presence of peritoneal thickening and peritoneal enhancement were noted.Peritoneal thickening was further categorized as smooth or nodular.The omental involvement was reported as smudged (hazy), nodular, or cake-like (soft tissue/sheet like mass).The presence and site of bowel thickening and dilatation were reported.Additionally, the clumping of bowel loops and presence of encapsulating membranes were assessed.Visceral organs like the liver and spleen were assessed for enlargement.Any focal lesions were also recorded.The attenuation of the liver relative to the spleen was also reported as high or low.The contour of these organs was assessed for scalloping caused by ascites.Mesenteric changes were assessed as the presence of any stranding or nodules.Lymphadenopathy was recorded as present or absent.Basic peritoneal anatomy, peritoneal thickening and omental involvement have been described elegantly through images previously; readers are guided to glance through for better understanding of metastatic patterns in cases of malignancy [24]. Figures 1-4 show typical radiological findings in patients of peritoneal carcinomatosis and peritoneal tuberculosis. Statistical Analysis All analyses were performed using Statistical Package for Social Sciences (SPSS) version 23.0.Continuous variables were summarized using the mean and standard deviation.Categorical variables were summarized as frequency and percentage.The Chi-square test was used to analyze the relationship between two categorical variables.Two-sided pvalves were reported and a p-value < 0.05 was considered statistically significant.The study population was grouped into two groups, i.e., TBP and PC.Continuous variables were compared using the Mann-Whitney U test, while categorical variables were compared using the Chi-square test.A regression analysis was performed to identify the independent predictors of peritoneal carcinomatosis. 
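The analyses above were run in SPSS; purely as an illustration of the same analysis plan, equivalent tests are available in Python (SciPy/statsmodels). The counts and values below are invented for the example and are not the study data.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Chi-square test: categorical finding (rows) vs. group TBP/PC (columns)
table = np.array([[30, 10],    # finding present  (illustrative counts)
                  [14, 35]])   # finding absent
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Mann-Whitney U test: continuous variable (e.g. age) compared between the groups
age_tbp = np.array([31, 28, 40, 58])          # illustrative values
age_pc  = np.array([52, 61, 44, 47])
u_stat, p_mwu = stats.mannwhitneyu(age_tbp, age_pc, alternative="two-sided")

# Logistic regression: independent predictors of peritoneal carcinomatosis (y = 1)
age = np.array([31, 28, 40, 58, 52, 61, 44, 47], dtype=float)
y   = np.array([0,  0,  0,  0,  1,  1,  1,  1])
model = sm.Logit(y, sm.add_constant(age)).fit(disp=0)
print(p_chi, p_mwu, model.params)
```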
Patients Around 110 patients were assessed for inclusion, but some were excluded for various reasons (4: clinical details not available, 17: diagnosis unclear or unproven). Therefore, a total of 89 patients were included in the study. There were 44 cases of tubercular peritonitis and 45 cases of peritoneal carcinomatosis. The mean age of the study group was 42.11 ± 16.39 years, and there were 35 (38.5%) males. Abdominal distension and loss of weight were the predominant complaints, present in 73 (82%) and 70 (78.6%) patients, respectively. Overall, 58 (65.2%), 37 (41.6%), 18 (20.2%) and 11 (12.3%) patients had a history of pain in the abdomen, fever, lump in the abdomen and intestinal obstruction, respectively. Clinical Differences The median age of the study group was 31.5 years (IQR, 23.5-40) in TBP and 52 years (IQR, 46-61) in PC (Table 1). The median age in TBP was significantly lower as compared to PC (p-value < 0.001). The number of male patients was 19 (43.2%) in TBP and 16 (35.5%) in PC. The most common symptom in both TBP and PC patients was abdominal distension. Mesenteric involvement was noted in 35 (79.5%) and 29 (64.4%) patients with TBP and PC, respectively (Table 3). Mesenteric stranding or nodularity was not significantly associated with either condition (p = 0.095 and p = 0.390). Table 3 shows the differences in both groups as per CT findings of the peritoneal structures. The sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy of significant radiological variables are depicted in Table 4. 
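Table 4 reports sensitivity, specificity, predictive values and accuracy for the significant radiological signs; as a minimal reminder of how these figures are conventionally derived from a 2x2 table, a short sketch follows (the counts used are illustrative, not the study data).

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-accuracy measures from true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Example: a CT sign used to call TBP, evaluated against the final diagnosis
print(diagnostic_metrics(tp=30, fp=8, fn=14, tn=37))
```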
Discussion It is challenging to recognize differentiation between TBP and PC early because of its insidious nature and intersecting symptoms.However, it is obligatory to provide an early diagnosis to avoid morbidity and provide definitive therapy.Contrast-enhanced CT is the imaging modality of choice for differentiation between TBP and PC, due to its widespread availability [25].There are no characteristic or pathognomonic imaging findings for either PC or TBP, but clinical findings and the demographic origin of the patient, in conjunction with imaging findings may be indicative of a probable diagnosis.This study revealed overlapping findings between the two, but suggested clinical findings in conjunction with imaging features may be suggestive of the leading diagnosis.Clinical findings like young age, fever, and prior history of TB point towards TBP as the leading diagnosis.Similarly, radiological findings like loculated ascites, conglomerated lymph nodes, splenomegaly, and bowel involvement favored TBP; whereas nodular omental involvement and focal liver lesions favored PC. Conceptually, symmetry or asymmetry of mesenteric and omental involvement is governed by the route of pathogenic spread in TBP and PC.In TBP, the hematogenous route of spread results in a uniform distribution, and in PC, asymmetry is the result of metastatic implants through ascitic fluid movement, which are dictated by the anatomical features of the abdominal compartment, the negative pressure of subdiaphragmatic spaces, intestinal peristalsis, and gravity [26,27].Multiple studies have compared the radiological findings in the past and inconsistent results have suggested considerable overlap.Contrast-enhanced CT can elucidate various characteristics like ascites, pattern of peritoneal, mesenteric, omental involvement, and lymph node and visceral organ changes. The diagnostic sensitivity and specificity of CT for diagnosing TBP have been reported to be around 65% and 90%, respectively [14,15].Similarly for PC, the diagnostic sensitivity and specificity have been reported to be around 75% and 90% [14,15].The landmark paper by Ha et al. first reported on the utility of abdominal CT in the discrimination of TBP and PC [14].They reported mesenteric changes like thickening and macro nodules (>5 mm); and omental changes like a smudged appearance and presence of an omental line to be significantly associated with TBP.Irregular omental infiltration was significantly associated with PC, but omental caking was noted to be statistically similar between the two.They suggested a model which included mesenteric macro nodules, presence of an omental line, irregular infiltrated omentum, and splenic abnormalities for predicting the underlying diagnosis.In another recent study from South Asia, the presence of peritoneal macro nodules, smooth omentum, calcified lymph nodes, splenomegaly, and high-density ascites suggested the diagnosis of TBP, while PC was suggested by the presence of omental irregularity [15].This study suggested that CT may be better in patients over 40 years for diagnostic discrimination because of the variability of imaging findings in young people.Another work by Charoensak et al. suggested that ascites, loculated ascites, smooth peritoneal thickening, a smudged omentum, mesenteric abnormalities, and smaller lymph nodes were suggestive of TBP.On the other hand, irregular peritoneal thickening, peritoneal nodules, omental caking or nodularity were more suggestive of PC [28].In a smaller study by Rodríguez E et al. 
nodular implants and irregular peritoneal thickening were suggestive of PC [29].In another study by Kang et al. which primarily focused on the value of acidic adenosine deaminase, the authors reported while smooth peritoneal involvement and mesenteric involvement were more suggestive of PTB, irregular or nodular thickening of the peritoneum suggested an underlying diagnosis of PC [30].However, the most common CT findings noted with TBP are ascites, smooth peritoneal thickening, mesenteric involvement, and omental thickening.This study confirmed the results of previous studies where ascites have been found in the majority of patients with TBP and PC [29].A systematic review which included 6 studies with 656 patients, where 262 patients had TBP and 394 patients were diagnosed with PC was conducted.Among 17 features that were studied for diagnostic accuracy, smooth peritoneal thickening showed the highest diagnostic accuracy with a specificity of 84% and sensitivity of 60% with AUC of 0.83 for the diagnosis of TBP [31].The omental involvement pattern (nodular, smudged or cake-like), which has been commonly studied in most of the studies, was found to have low diagnostic yield.Contrary to this, in a study by Ramanan et al., which purely focused on the omental rim sign, the authors suggested that this sign on contrast-enhanced CT depicting a uniformly thick enhancing peripheral outline of the omentum in the venous phase was specific and sensitive for TB peritonitis [16] However, when the "omental line" finding by Ha et al. and the "omental rim" sign described by Ramanan et al. were combined, the pooled specificity and sensitivity of this finding for discriminating PTB from PC was 96% and 67%, respectively [31].Similarly, lymph node necrosis and calcifications, and mesenteric macro nodules were fairly specific for TB, but showed poor sensitivity.Although ascites have been commonly found in both conditions, the density and presence of loculation yielded a poor diagnostic accuracy because of poor sensitivity and specificity [31].A summary of these findings suggests that computed tomography may have some role in suggesting the underlying diagnosis, but is not diagnostic in most cases. 
Clinical features can often guide clinicians in making a provisional diagnosis in the case of diffuse peritoneal disease.Patients with TBP are younger and have more inflammatory loads manifesting as a fever as compared to PC.This study confirmed the findings of previous studies where patients with TBP were younger and presented more commonly with a fever [32,33].History of TB in the past or family history have been noted in 5-20% of the patients with active TB [5].Approximately 10% of the patients in this study had a past history of TB, confirming similar findings from previous studies.Symptoms like abdominal distension, pain in abdomen, and loss of weight were present in the majority of our patients, similar to previously reported large studies; caution should be exercised in diagnosis based on these findings due to their non-specificity [15].In a study that included multiple centers which evaluated the clinical characteristics and CT findings to discriminate between TBP and PC, they found that younger age, presence of fever, and night sweats were the clinical features suggestive of TB.On CT, the presence of an omental rim, and calcified or enhancing lymph nodes suggested TB; whereas omental caking, irregular peritoneal thickening or the presence of nodules, visceral scalloping, and a larger amount of ascites suggested PC.The authors suggested that a model including these parameters had an AUC of 0.914 in the discrimination of the two diseases [33]. The present study had a few limitations.Firstly, it was a single-center study, with relatively small numbers in both of the groups.However, the study also has strengths including a good follow-up rate with a clear diagnosis and blinding of the radiologist to the underlying diagnosis or clinical information.Further, the results were reported by two radiologists and the final opinion was based on consensus. Conclusions Most of the findings analyzed from CT overlap in both diseases.While loculated ascites, conglomerated lymph nodes, bowel dilatation, the presence of an encapsulating membrane around the bowel, and splenomegaly suggested TBP; focal liver lesions, hepatic attenuation, and omental nodularity suggested PC.None of these findings were specific for a particular diagnosis.The use of a combination of clinical features and radiological findings may suggest the underlying diagnosis but none of these features is specific to the underlying condition.Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and was approved by the Institutional ethics committee of the Postgraduate Institute of Medical Education and Research INT/IEC/2020/SPL-679 dated 30 May 2020. Table 1 . Baseline characteristics of tuberculous peritonitis and peritoneal carcinomatosis patients. Table 2 . Radiological features of TBP and PC. Table 3 . Radiological involvement of mesentery, peritoneum, and omentum in TBP and PC. Table 4 . Sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy of significant radiological variables.
2023-10-16T15:04:56.213Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "3932075b3ef0b481a525d625ef771e00670e33d5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/13/20/3206/pdf?version=1697211946", "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "a00c112b3e9aebd540727206d10e4e3e808ab60e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6006911
pes2o/s2orc
v3-fos-license
Hemomediastinum due to spontaneous rupture of a mediastinal bronchial artery aneurysm – A rare cause of thoracic pain Hemomediastinum is a rare pathological event. Multiple underlying causes and contributory factors can be identified, such as trauma, malignancy, iatrogenic, bleeding disorder or mediastinal organ hemorrhage. Also, a mediastinal bronchial artery aneurysm may be the source of a hemomediastinum. Hemoptysis is an important directive symptom, however occasionally, patients only present with thoracic pain or symptoms related to extrinsic compression of the airways or esophagus. Using contrast-enhanced computed tomography (CT) of the chest, hemomediastinum can be adequately diagnosed, and the involved vascular structures can be revealed. In case of a (ruptured) bronchial artery aneurysm, transcatheter embolization provides a minimally invasive procedure and is treatment of first choice. In this case report, a 76-year-old female is presented with spontaneous rupture of a mediastinal bronchial artery aneurysm resulting in hemomediastinum causing thoracic pain. Superselective embolization of the left bronchial artery was successfully performed. Introduction Spontaneous hemomediastinum is rarely observed in clinical practice and is a potentially life-threatening condition. Underlying causes have been categorized into three groups. First, spontaneous hemomediastinum may occur secondary to bleeding disorders such as hemophilia, or secondary to anticoagulant treatment. Secondly, mediastinal tumors (e.g. thymomas, teratomas), organs or blood vessels may be involved. Thirdly, one can distinguish spontaneous idiopathic hemomediastinum, which can particularly appear after sudden increase in intrathoracic pressure, (e.g. during coughing, sneezing or vomiting, or sudden sustained hypertension) [1,2]. In case of a primary problem of (large) blood vessels, the most common cause is aortic (aneurysm) dissection. Rupturing of a mediastinal bronchial artery aneurysm is a rather unusual cause of spontaneous hemomediastinum. Bronchial artery aneurysms are detected in less than 1% of all patients who undergo selective bronchial angiography [5]. Case A 76-year-old female patient with past medical history including acute rheumatic fever (childhood), osteoporosis and total abdominal hysterectomy presented herself at the emergency room because of acute thoracic pain. Apart from the pain, which radiated to both jaws and upper back, the patient had no other complaints. She was not using any medication. Physical examination was normal, with a blood pressure of 155/75 mmHg (equal in right and left arm), a regular pulse of 70 beats per minute and a temperature of 36.3 C. The electrocardiography was normal, and no abnormalities were observed upon chest radiography and echocardiogram. Laboratory blood testing showed normal kidney and liver function, normal coagulation, a normal blood count and negative cardiac enzymes. The patient was discharged from the hospital with a diagnosis of atypical thoracic pain. Two weeks later she attended the emergency room again with similar severe thoracic pain. She also mentioned complaints of heart burn and dysphagia. A contrast-enhanced chest CT was performed upon suspicion of a pulmonary embolism. The CT ruled out a pulmonary embolism, but did reveal a large mass in the posterior mediastinum ( Fig. 1) with an axial diameter of 5.5 cm and cranio-caudal diameter of 7.4 cm, that extended to the subcarinal level (Fig. 2). 
This mass showed contrast extravasation suggesting active bleeding. Angiography was performed, demonstrating a large (pseudo)aneurysm of the left bronchial artery (Fig. 3). Superselective embolization using coils was successfully carried out (Fig. 4) in a coaxial way, using a 5F Cobra catheter and a microcatheter. Fibered coils were placed distally and proximally to the pseudoaneurysm in order to avoid backflow and consecutive recurrence. The patient recovered quickly and was discharged two days after the procedure. Six weeks later, upon evaluation at the outpatient clinic, she was free of complaints and chest CT showed that the mediastinal hematoma had completely resolved (Fig. 5). Discussion Aneurysms and pseudoaneurysms of the pulmonary vasculature are rare and more often affect the pulmonary arteries than the bronchial arteries or the pulmonary veins [5]. An aneurysm typically involves all 3 layers of the vessel wall, whereas a pseudoaneurysm represents a contained rupture in which not all layers of the affected wall are involved. Bronchial arteries are normally <1.5 mm in diameter at their origin and decrease to 0.5 mm as they enter the broncho-pulmonary segment. A bronchial artery diameter exceeding 2 mm is generally considered pathological and associated with an increased risk of severe clinical complications [3]. Bronchial artery aneurysms may be mediastinal or intrapulmonary in location and are associated with different medical conditions: congenital (sequestration, pulmonary agenesis), arteriovenous malformation, vasculitis (Behçet disease, Hughes-Stovin syndrome), bronchiectasis, infectious disease (tuberculosis, atypical mycobacteria, aspergillosis, histoplasmosis), sarcoidosis, silicosis, post-traumatic, hereditary hemorrhagic telangiectasia (Osler-Weber-Rendu disease) or idiopathic [5]. In many of the aforementioned diseases, pulmonary circulation is reduced at the level of the pulmonary arterioles because of hypoxic vasoconstriction, thrombosis and vasculitis, inducing a compensatory enlargement of the bronchial arteries [4]. The clinical presentation of a bronchial artery aneurysm depends on its size and location, but also on the presence of concomitant disease. An intrapulmonary bronchial artery aneurysm commonly manifests with hemoptysis, which can range from blood-streaking of sputum to massive hemoptysis that is potentially life-threatening. Patients with a (ruptured) mediastinal bronchial artery aneurysm more frequently present with chest pain and with symptoms related to extrinsic compression of adjacent structures such as the airways (shortness of breath), the esophagus (dysphagia) or the vena cava (superior vena cava syndrome) [1,2,5,6]. Sporadically, a hemothorax is found. In order to adequately diagnose a hemomediastinum, performing a chest CT with contrast material application is the designated approach. Consecutive angiography may then be the next best step towards treatment. Obviously, a ruptured bronchial artery aneurysm requires immediate treatment, but an asymptomatic bronchial artery aneurysm should generally also be treated, as rupture can be dangerous. Surgical extirpation can be done through (video-assisted) thoracotomy and reliably eliminates the lesion, but is invasive and not feasible in every patient. In our opinion, transcatheter embolization is the treatment of first choice. Superselective catheterization with a microcatheter is usually performed using coils for safe embolization. 
Careful evaluation of potential spinal cord branches should be carried out prior to embolization to avoid severe complications (e.g. spinal cord ischemia, which is extremely unlikely to happen with this type of embolization). Furthermore, coil embolization should be performed by placing coils proximal and distal to the pseudoaneurysm in order to avoid recurrence. After the procedure, patients may experience chest pain (prevalence 24–91%) or dysphagia (prevalence 0.7–18%); both are likely related to an ischemic event caused by embolization and are usually transient. Subintimal dissection of the bronchial artery can occur (prevalence 1–6.3%), but is usually asymptomatic. High success rates have been reported for bronchial artery embolization, but recurrence after successful embolization can occur, probably due to collateral vessels, incomplete embolization, and arterial re-canalization, making re-intervention necessary [1,4,6]. Conclusion A hemomediastinum is a rare pathological event with several possible underlying causes, including a ruptured bronchial artery aneurysm. Bronchial artery aneurysms present with various symptoms ranging from massive hemoptysis to subtle chest pain. First choice treatment consists of transcatheter embolization.
2017-08-16T05:50:08.774Z
2014-03-14T00:00:00.000
{ "year": 2014, "sha1": "69a9fb03b5cff6e32a0c138369413330d7085dab", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.rmcr.2013.12.012", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "69a9fb03b5cff6e32a0c138369413330d7085dab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125730804
pes2o/s2orc
v3-fos-license
Kinematic-wave model of viscous fingers with mixing layer A new one-dimensional kinematic model of viscous finger growth is proposed. This model is based on the assumption of the formation of an intermediate layer. The shear instability of the flow develops in this layer due to intensive mixing of liquids of different viscosities. The thickness of the intermediate layer between the outer ones is determined by the velocity difference in the layers. The results of the calculation of the growth rate of the fingers in the framework of this model are in good agreement with calculations carried out on the basis of more general two-dimensional equations, as well as with the known Koval model. Introduction It is well known that a displacement process involving two fluids is often unstable if the displacing fluid has lower viscosity than the displaced one. The resulting instability developing at the interface between the two fluids is often called viscous fingering [1,2]. A number of works devoted to understanding various aspects of this form of instability are available in the literature [3,4]. It should be noted that numerical modelling of the displacement process requires large amounts of computer time at high Peclet numbers. Moreover, it is difficult to reproduce the detailed fingering pattern. As shown in [5,6], it is simpler to describe the concentration of solute averaged across the fingers. In such cases, the mixing zone is an important feature for determining the extent of mixing. Although considerable work has been presented in this field, the spreading and growth of the mixing zone remains an important open question. There are several empirical models for describing the evolution of the mixing zone in unstable, miscible displacements. An empirical model has been suggested by Koval [7] to give a basis for computation of miscible displacement. This model suffers from its empiricism: the principal parameters involved have little or only indirect physical meaning. However, an experimental study of the mixing zone growth in miscible viscous fingering at high Peclet numbers [8] shows that the trend is consistent with Koval's model [7]. Further development of averaged models of finger formation is presented in [9,10]. All these models are based on the assumption that pressure is constant in the transverse direction of the main flow, as well as on empirical information about the distribution of the displacing and displaced fluids in the region of intensive viscous fingering; this information depends on the viscosity ratio and is related to the thickness of the mixing zone. The velocity of propagation and the thickness of the viscous finger in the framework of the kinematic-wave model coincide with the corresponding calculations on the basis of the 2D equations [10]. This makes it possible to predict the parameters of viscous fingers while avoiding time-consuming calculations. We also show that the proposed 1D model is in good agreement with the known Koval model [7]. Equations of motion A Newtonian weakly-compressible flow in a Hele-Shaw cell (the area between two parallel plates separated by a small gap of constant thickness b in the z direction) is described by equations (1). Here p is the pressure, ρ is the density, and µ is the dynamic viscosity. The second derivatives of the velocity vector v = (u, v, w) with respect to the variables x and y are negligible compared to the derivatives in the vertical direction z. They are assumed to be zero in the suggested model. 
We assume that the velocity field can be represented in a prescribed form across the gap, and that the functions p and ρ do not depend on z. Averaging equations (1) through the gap we obtain equations (2) (primes are omitted). Here and below µ denotes the viscosity of the fluid divided by the medium permeability b²/12 (further in the text we will call it simply "viscosity"); the coefficient β is equal to 6/5. We are interested in describing the process of displacement of a fluid with viscosity µ_1 by a less viscous fluid with viscosity µ_2. We utilize for this description equations (2) with variable viscosity depending on the concentration function c. This function is scaled such that it is equal to unity in the displaced fluid (µ = µ_2) and zero in the displacing one (µ = µ_1). We should add the transport equation (3) for the concentration c to the governing equations; it has divergent form (the diffusion terms are assumed to be negligible). Following [11] we use a monotonic relationship between the viscosity and the concentration in the form µ(c) = µ_1^(1−c) µ_2^c. In order to close model (2), (3) we should specify the equation of state p = p(ρ). A weak compressibility of the medium given by the equation of state p = p(ρ) provides hyperbolicity of the model. If the condition u² + v² ≪ p′(ρ) holds, then the results depend on the choice of p = p(ρ) insignificantly. Therefore, for the numerical simulation of the 2D flows we assume the equation of state (4), where the constants ρ_0 and c_0 denote the characteristic density and speed of sound in the medium. System (2)-(4) was recently proposed in [10] for modelling viscous fingers taking into account the inertia of the fluid, which may be important at high finger velocities. It was also pointed out that for the process of unidirectional displacement the pressure variation in the transverse direction is small (p_y = 0). Layered flow We study an incompressible Hele-Shaw flow in a two-dimensional rectangular domain. The size of the domain is L × H. We assume ρ = 1 without loss of generality. We replace the second momentum equation in (2) by p_y = 0, which corresponds to transverse flow equilibrium. We consider the class of viscosity-stratified flows; in this case equations (2) and (3) take the form of system (5) (see [10,12] for the details). Here h_i is the depth of the i-th liquid layer of viscosity µ_i and velocity u_i(t, x), and Q = const is the total flow rate through the cell. In deriving equations (5) the kinematic condition at the layer interfaces is utilized. For the average velocity U = Q/H of fluid injection, one can use the following simplified version of governing equations (5). The simplification is based on the replacement of the momentum equations by the corresponding Darcy laws (6): p_x = −µ_i u_i. The remaining equations of system (5) do not vary. In what follows, we assume that H = 1 and Q = 1. Two-layer flow It is interesting to note that in the case of two-layer flow system (5), where the momentum equations are replaced by Darcy laws (6), is reduced to the so-called naïve Koval model [9]: ∂h_1/∂t + ∂/∂x [M h_1 / (1 + (M − 1) h_1)] = 0, which is equation (7). Here M = µ_2/µ_1 is the viscosity ratio. Indeed, taking into account the relations µ_1 u_1 = µ_2 u_2 (equal pressure gradient in both layers), h_1 + h_2 = 1 and u_1 h_1 + u_2 h_2 = Q = 1, we can represent the equation h_1t + (u_1 h_1)_x = 0 as equality (7). If M > 1 we construct the solution of (7) in the form of a simple wave (a centred rarefaction). The previous formulae give the solution of equation (7) with discontinuous initial data. Let us note that for M < 1 the solution of the same Cauchy problem is given by a shock wave moving with the average flow velocity D = U = 1. In this case the Saffman-Taylor instability does not develop. 
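Combining the Darcy laws (6) with h_1 + h_2 = 1 and u_1 h_1 + u_2 h_2 = 1 gives the flux of the displacing fluid as u_1 h_1 = M h_1/(1 + (M − 1) h_1), so the simple-wave solution of (7) can be evaluated explicitly. The sketch below is a reconstruction from these relations (not the authors' code) and assumes M > 1.

```python
import numpy as np

M = 4.0  # viscosity ratio mu_2/mu_1 (the value used in the comparison below)

def flux(h):
    """Flux u_1*h_1 of the displacing fluid implied by the Darcy laws (6)."""
    return M * h / (1.0 + (M - 1.0) * h)

def simple_wave(xi):
    """Self-similar solution h_1(xi), xi = (x - x_0)/t, of the naive Koval model (7), M > 1."""
    xi = np.asarray(xi, dtype=float)
    h = (np.sqrt(M / xi) - 1.0) / (M - 1.0)   # from flux'(h) = xi inside the fan
    h = np.where(xi <= 1.0 / M, 1.0, h)       # undisturbed displacing fluid behind
    h = np.where(xi >= M, 0.0, h)             # undisturbed displaced fluid ahead
    return np.clip(h, 0.0, 1.0)

xi = np.linspace(0.05, 5.0, 200)
h1 = simple_wave(xi)
# The finger tip moves with speed flux'(0) = M and the rear with flux'(1) = 1/M,
# so the mixing zone of the naive model grows between x_0 + t/M and x_0 + M*t.
```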
It is known that the growth rate of the viscous fingers calculated with the help of the naïve Koval model is significantly higher than that observed experimentally. Koval [7] postulates empirically that in equation (7) the viscosity ratio M should be replaced by an effective ratio given by formula (8). It should be noted that formula (8) is used in practice [8] but is not justified theoretically. Below we propose a way to correct model (7) by using an intermediate mixing layer. 3. Three-layer flow with mixing We use the notation (µ_1, u_1, h_1), (μ, v, η) and (µ_2, u_2, h_2) for the fluid viscosities, velocities and depths in the displacing, intermediate and displaced layers respectively (µ_1 < μ < µ_2). The cell height H and the flow rate Q are assumed to be unity as before. Then within the kinematic model (p_x = −µ_i u_i) we have relations (9). Following [13,14] we consider the equilibrium between generation and dissipation of energy of the small-scale motions in a shear flow. In this case the thickness of the intermediate layer can be expressed in the form (10), where δ is an empirical parameter. Here we do not present the derivation of relation (10). Therefore, this formula can be regarded as an additional hypothesis reducing the equations of finger evolution in the development of the Saffman-Taylor instability to the Hopf equation for the averaged thickness of a finger. Let us define the middle-line z = h_1 + η/2 and formulate the law of mass conservation for the liquid layer of depth z: z_t + (ψ(z))_x = 0, which is equation (11), where ψ = u_1 h_1 + vη/2. Using this notation and formulas (9) we express the fluid velocity v in the mixing layer by means of the variables η and z: v = (a_1 η + a_2(z))^(−1). Substitution of η(z) into the previous formula expresses the velocities v and u = μv/µ_1 as functions of the middle-depth z. As a result we derive relation (12) for the flux ψ in the layer of depth z. Note that it is necessary to require the fulfilment of inequalities (13); the last two inequalities are a consequence of h_1 ≥ 0 and h_1 + η ≤ 1. To determine the depth z = z(t, x) we use conservation law (11) with the function ψ(z) given by formula (12). Let us construct a solution of equation (11) with piecewise constant initial data z(0, x) = 1 for x < x_0 and z(0, x) = 0 for x > x_0. This formulation means that the domain x < x_0 is filled by the liquid of viscosity µ_1 (h_1 = 1, η = h_2 = 0) whereas the liquid of viscosity µ_2 (h_2 = 1, h_1 = η = 0) fills the domain x > x_0. Since the fluid of viscosity µ_1 moves in the positive direction of the x-axis with the average velocity U = 1, then if µ_1 < µ_2 the Saffman-Taylor instability develops at the interface. Due to constraints (13) the function ψ(z) is not defined on the whole interval [0, 1]. Typical behavior of the function ψ(z) is given in Figure 1. We assume that a function ψ̃(z) is the extension of ψ(z) to the entire interval [0, 1]. Note that the function ψ(z) can be extended in various ways. If the plot of the constructed extension ψ̃(z) lies between the convex hull of the plot of ψ(z) (Figure 1, line 1) and the diagonal of the unit square connecting the points (0, 0) and (1, 1) (Figure 1, line 4), then the solution of the problem does not depend on the choice of extension [15]. Let us construct the tangents to the plot of ψ(z) from the origin and from the point (1, 1) (z_1 and z_2 denote the abscissae of the points of tangency). Thus, the function ψ̃(z) is given by formula (12) on the interval [z_1, z_2]. On the intervals [0, z_1) and (z_2, 1] it is represented by the tangent lines. 
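The explicit form of ψ(z) in (12) depends on coefficients that are not reproduced here, so the following sketch uses a generic S-shaped stand-in flux (a Buckley-Leverett-type function, an assumption of the example) purely to show how the tangency points z_1, z_2 and the extension ψ̃ can be computed numerically.

```python
import numpy as np
from scipy.optimize import brentq

a = 2.0  # shape parameter of the stand-in flux (illustrative only)

def psi(z):
    """Stand-in, S-shaped flux on [0, 1] with psi(0) = 0 and psi(1) = 1."""
    return z**2 / (z**2 + a * (1.0 - z)**2)

def dpsi(z):
    d = z**2 + a * (1.0 - z)**2
    return 2.0 * a * z * (1.0 - z) / d**2

# z1: tangency point of the tangent drawn from the origin (0, 0)
z1 = brentq(lambda z: psi(z) - z * dpsi(z), 1e-6, 1.0 - 1e-6)
# z2: tangency point of the tangent drawn from the point (1, 1)
z2 = brentq(lambda z: (psi(z) - 1.0) - (z - 1.0) * dpsi(z), 1e-6, 1.0 - 1e-3)

def psi_tilde(z):
    """Extension of psi: tangent lines on [0, z1) and (z2, 1], psi itself in between."""
    z = np.asarray(z, dtype=float)
    lower = dpsi(z1) * z                    # tangent line through (0, 0)
    upper = 1.0 + dpsi(z2) * (z - 1.0)      # tangent line through (1, 1)
    return np.where(z < z1, lower, np.where(z > z2, upper, psi(z)))

print(z1, z2, psi_tilde(np.linspace(0.0, 1.0, 5)))
```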
Using the function ψ̃(z) we construct a self-similar solution of equation (11) with discontinuous initial data. This solution is represented by a simple wave; this wave is joined to the strong discontinuities ξ_1 = ψ′(z_1) and ξ_2 = ψ′(z_2) (see [15] for details). A comparison with the Koval model Let us verify kinematic-wave model (11) by comparing it with the well-established Koval model (7). The effective viscosity ratio M_e given by formula (8) is used instead of M. We choose the following parameters: µ_1 = 2, µ_2 = 8. This means that the viscosity ratio M = 4 and M_e = 1.417 according to (8). The self-similar solution of the Koval model for these parameters is shown in Figure 2 (solid curve). The corresponding solution of the scalar conservation law (11) (with δ = 1.8) is presented in Figure 2 by the dashed curve. As we can see, within the framework of these models the growth rates of the viscous fingers are similar. Although the parameter δ is a function of M, the values of δ(M) vary weakly for M ∈ (1, 8). Moreover, the proposed model (11) describes the behavior of the finger near its tip better than the Koval model. It should be noted that another kinematic-wave model has been recently obtained in [10] by taking into account the friction between the fluid layers of different viscosities. This effect is important when the finger becomes thin in the vicinity of the tip. A comparison with the 2D model (2)-(4) We compare the results of numerical simulations of viscous finger formation based on the two-dimensional hyperbolic system of equations (2)-(4) with the results obtained by using the kinematic-wave model (11). The calculations are performed in a Hele-Shaw cell with sizes 20 × 2 in the coordinate system moving with the average flow velocity U = 1 with respect to the x-axis. The viscosities of the fluids are equal to µ_1 = 2 and µ_2 = 8. At the initial moment of time the interface x = x_0 = 10 is perturbed according to the formula x = x_0 + 4Δ_1 (exp(−10(y − 1)²) − 1/2), where Δ_1 is the resolution of the uniform grid in x. For the discretization with respect to x and y we use 400 and 50 nodes, respectively. On the left and right boundaries of the computational domain reflection conditions are imposed. We assume that in the framework of kinematic model (11) the variable c is equal to 1/2 in the mixing layer. So the viscosity in this layer is given by the formula μ = √(µ_1 µ_2), by virtue of the fact that in the two-dimensional model (2)-(4) the viscosity is µ(c) = µ_1^(1−c) µ_2^c. The calculations are carried out at β = 1, ρ_0 = 1 and c_0 = 75. In this case the change in density is not more than 0.15%. At the same time the condition p_y = 0 is fulfilled with high accuracy. This means that the proposed one-dimensional model is suitable for describing such a flow. The given perturbation of the interface leads to the formation of a single finger. It is symmetric with respect to the line y = 1. The results of the concentration c calculations using model (2)-(4) at times t = 10 and t = 20 are shown in Figure 3. The self-similar solutions of the kinematic-wave model (11) with δ = 1.8 are shown in Figure 3 at t = 10 and t = 20 by the solid (the middle-line 1 ± z) and dashed (the outer 1 ± h_1 and inner 1 ± (h_1 + η) boundaries of the finger) curves. The self-similar variable ξ is replaced by ξ + U, with U = 1; this corresponds to a transition into the moving coordinate system. As we can see, the velocity of viscous finger propagation and its transverse diffusion are in good agreement with the calculations provided by equations (2)-(4). 
Conclusion The main result of this paper is the construction of a new kinematic-wave model of viscous finger growth (11). The model is based on the assumption that an intermediate mixing layer forms between two layers of liquid with different viscosities and velocities. Relation (10) expresses the thickness of the mixing layer in terms of the fluid velocities in the outer layers. This formula is not derived in the paper and can be regarded as an additional hypothesis. It allows the evolution of the viscous fingers to be described by a Hopf-like equation. The proposed model (11) is verified by comparison with the well-tested Koval model (7), (8). The results are also confirmed by numerical simulation based on the 2D equations (2)-(4). Figure 3 shows that the velocity of viscous finger propagation is the same for both models. Thus, the suggested 1D model predicts the propagation velocity of the viscous fingers to a sufficient degree of accuracy and does not require large processing power. The model also provides additional information concerning the transverse diffusion of the viscous finger.
2019-04-22T13:08:10.816Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "fb798c55b342aa175f867e3440e06f8299dbe288", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/894/1/012107", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e76a74d6c1ebeb8a6014834b2ded593e3b7fc112", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
265797107
pes2o/s2orc
v3-fos-license
Prevalence and factors associated with malaria, typhoid, and co-infection among febrile children aged six months to twelve years at kampala international university teaching hospital in western Uganda Background Paediatric febrile illnesses pose diagnostic challenges in low-income countries. Western Uganda is endemic for both malaria and typhoid but the true prevalence of each individual disease, their co-infections and associated factors are poorly quantified. Objective To determine the prevalence of malaria, typhoid, their co-infection, and associated factors amongst febrile children attending the paediatrics and child health department of Kampala International University Teaching Hospital (KIU-TH) in Western Uganda. Methods Cross-sectional study used a survey questionnaire covering demographics, clinical and behavioural variables. We obtained blood for peripheral films for malaria and cultures for typhoid respectively; from 108 consecutively consented participants. Ethical approval was obtained from KIU-TH research and ethics committee (No. UG-REC-023/201,834). Multivariate regression analysis was performed using Stata 14.0 (StataCorp. 2015) at 95% confidence interval, regarding p < 0.05 as statistically significant. Results Majority of participants were males 62% (n = 67), cared for by their mothers 86.1% (n = 93). The prevalence of malaria was 25% (n = 27). The prevalence of typhoid was 3.7% (n = 4), whereas the prevalence of malaria-typhoid co-infection was 2.8% (n = 3). Using treated water from protected public taps was associated with low malaria-typhoid co-infection [p = 0.04; aOR = 0.05, 95%CI [0.003–0.87], whereas drinking unboiled water from open wells increased the risk for the co-infection [p = 0.037, cOR = 17, 95%CI (1.19–243.25)]. Conclusions The prevalence of blood culture confirmed malaria-typhoid co-infection in children was lower than previously reported in serological studies. These findings emphasize the need to use gold standard diagnostic investigations in epidemiological studies. Educational campaigns should focus on the use of safe water, hygienic hand washing, and proper waste disposal; and should target mothers who mainly take care of these children. Introduction Febrile illnesses are still a global health challenge in the developing countries [1].Malaria and typhoid fever are a major cause of febrile illness, responsible for 619,000 and 216,000 global deaths annually, respectively [2].These deaths tend to double when there is dual infection [3].Children below 15 years in sub-Saharan Africa are at risk of these two infections due to possibility of common source spread from school settings [4]. Current reports show increasing global trends of malaria during 2022 [2], and the disease burden is highest amongst low-income countries in the tropics which have seasonal variations and contaminated ground water sources [1].The burden of malaria can be compounded with typhoid-salmonella co-infection at the interface of dry and wet seasons, linking the two disease entities [5].Besides, the social circumstances of both diseases can be driven by malnutrition, HIV, poverty and poor sanitation [6] which are of public health concern in Uganda. 
Uganda is endemic for both malaria and typhoid [7].Therefore, clinicians in resource-constrained settings should anticipate this co-infection in children due to their overlapping clinical features [8], which makes it difficult to diagnose them accurately [9].Diagnostic challenges prompt clinicians to treat the co-infection without laboratory confirmation, risking drug resistance [1,10].On the one hand, failure to prescribe the relevant medications, timely poses a risk of diagnosing typhoid fever only after complications such as bowel perforation have occurred [7].This has both legal implication and impacts on treatment outcome. To-date, there are no lifelong protective vaccines for both malaria and typhoid due to high mutation rates [11] and inability to stimulate an immature immune system in children [12].These challenges counteract the global target of reducing morbidity and mortality due to malaria by 90%, between 2016 and 2030 [2].The world health organization (WHO) recommends 4 doses of "RTS,S″ malaria vaccine as part of prevention tools in children from 5 months of age, following studies that demonstrated a 30% reduction in malaria-related mortality after vaccine use in P. falciparum highly endemic African region [13], however, the protective effect of the vaccine wears off after 3 years [14]. According to WHO, the criterion standard for diagnosis of malaria is a blood slide whereas typhoid fever requires culture isolation of the organism, which is widely considered 100% specific [15].Culture of the bone marrow aspirate is the most sensitive at 90% for typhoid salmonella [16], but extremely painful, which may outweigh the benefits in paediatric population.It has been shown that multiple blood cultures (>3), yield sensitivities of 73-97%, particularly larger volume (10-30 ml) [15].Despite though, it is not routine in Uganda to obtain the mandatory three blood samples in the paediatric population and such results are often not timely available to guide prescriptions due to a backlog of samples amidst scarce human and infrastructural resources [17]. Thus due to lack of standard diagnostic tools [10], any fever in children in Uganda is primarily treated as malaria [18,19]; only to think of other causes when there is no improvement on anti-malarial drugs [7,20].What is often available to diagnose malaria and typhoid infections in Ugandan context are rapid kits that have concerns of reduced specificity [15].Besides, late presentation of children with fever and possible exposure to an anti-malarial or antibiotic prior to hospital visit, could result in missing such late infections even on blood smears and cultures [19,21].This has posed threat for irrational drug prescriptions and antibiotic resistance in our tertiary hospital settings [19]. Although there are existing nation-wide interventions and published data to aid curbing malaria in Uganda [22], malaria-typhoid co-infection as a single disease entity is being overlooked in the paediatric population.Knowledge of the extent of this burden and factors associated with this co-infection are key to high index of suspicion, primary prevention, early detection, and proper integrated case management.The main objectives of the present study therefore were to determine the prevalence of: (i) malaria; (ii) typhoid; (iii) malaria-typhoid co-infection and (iv) associated factors; amongst febrile children attending the paediatrics department of Kampala International University Teaching Hospital (KIU-TH) in Western Uganda. 
Study design This was a cross-sectional descriptive and analytical study conducted between March-November 2019. Study participants and settings The study involved children aged between 6 months to twelve years who presented with fever at the department of paediatrics and childcare of Kampala International University Teaching Hospital (KIU-TH).All eligible children with fever at the paediatric department of KIU-TH including outpatients, in-patients, and emergency wards; were consecutively recruited until the desired sample size was realised.This was intended to generate a sample size large enough to relate the findings to the population. The study site is the main teaching hospital for Kampala International University Schools of Medicine and Allied Health, located in Ishaka Municipality, Bushenyi District of Western Uganda.The hospital is a 700-bed capacity, providing emergency, out and in-patient specialised paediatrics and child health care.According to the Uganda Bureau of Statistics [23], the hospital provides diagnostic and therapeutic services to over 16,646 catchment population.This malaria endemic region has two rainy seasons, beginning March to May, and September to November, during which malaria and diarrhoeal infections peak. According to Uganda Bureau of Statistics [23], the population of children between six months to 12 years in Bushenyi district is about 45.9%; of which 7.2% do not attend school; 88.1% attend primary school, while the illiteracy rate is reported to be 12.1%.Reportedly, over 95.8% of the district's population own at least one mosquito net; only 16.1% have access to piped water whereas 6.8% use bore holes.In addition, up to 0.6% of the districts' population do not have access to any toilet facility and practice open defecation while only 23.1% practice proper solid waste disposal and 95.7% are not living in descent dwellings.Findings from a study on spatio-temporal distribution of typhoid showed that the highest disease burden was recorded in central, followed by western and south-western Uganda, and least in eastern and northern parts [24]. Sample size calculation Being across sectional study where the proportion (P) was the parameter of interest, and using non random sampling, the sample size was calculated using modified Daniel's formula [25]. Objective 1: The prevalence of malaria in children in Bushenyi District in Western Uganda had been reported to be 3.5% [26] and therefore P = 0.035.Assuming a statistical power of 80% at 95% CI, the resulting sample was 106. Objective 2: Based on the study done at KIU-TH in Western Uganda, the prevalence of typhoid fever in children was reported to be 2.76% [27].Substituting 0.0276 for P and assuming a statistical power of 80% at 95% CI, the resulting sample was eighty-four. Objective 3: Based on the Tanzania study the prevalence of malaria-typhoid co-infection was reported to be 3.5% [20].Substituting 0.035 for P, and assuming a statistical power of 80% at 95% CI, the resulting sample was 106. Therefore, a minimum sample size of 106 was considered adequate to address all the study objectives.Detailed sample size calculations are available as supplementary file 1. Inclusion criteria All children aged between 6 months and 12 years with fever were recruited into the study. 
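The detailed calculations are deferred to a supplementary file, so the following sketch only reconstructs the arithmetic behind the quoted figures. It uses Daniel's single-proportion formula n = Z² p(1 − p)/d² with Z = 1.96; the absolute precision d is not stated in the text, and d = 0.035 is an assumption chosen because it reproduces the reported sample sizes of 106, 84 and 106.

```python
def daniel_sample_size(p, d=0.035, z=1.96):
    """Daniel's single-proportion formula n = z^2 * p * (1 - p) / d^2, rounded to the
    nearest integer (which matches the figures quoted in the text). The absolute
    precision d is not stated in the paper; d = 0.035 is an assumed value."""
    return round(z ** 2 * p * (1.0 - p) / d ** 2)

if __name__ == "__main__":
    print("Objective 1 (malaria,      p = 0.035):  n =", daniel_sample_size(0.035))   # 106
    print("Objective 2 (typhoid,      p = 0.0276): n =", daniel_sample_size(0.0276))  # 84
    print("Objective 3 (co-infection, p = 0.035):  n =", daniel_sample_size(0.035))   # 106
```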
Exclusion criteria Children whose parents or legally authorised representatives declined consent during study period were excluded.Patients with a history of antibiotic and/or anti-malarial treatment within 2 weeks prior to admission, and those on malaria prophylaxis or long-term antibiotics were excluded from the study to minimise false negative results. Study procedure Malaria cases were stratified as uncomplicated or severe based on clinical symptoms and number of malaria parasites as observed under a microscope [28].This stratification was for the purposes of proper case management by the attending clinicians.Blood samples for typhoid salmonella culture were collected from eligible participants with a positive blood slide for malaria. Recruitment of study participants was conducted at the paediatrics and child health (emergency, outpatient, and inpatient) units of KIU-TH, after emergency resuscitation (if deemed necessary by the attending clinician).Every respondent or legally authorised representative was explained to the purpose of the study to endorse an informed consent document with a signature or thumb print.A pretested coded check list of parameters of interest specially designed for this purpose was then administered by the investigators.A complete history of associated symptoms such as nausea, loss of appetite, headache, abdominal and join pain, physical examination and relevant laboratory investigations was conducted and findings of interest were recorded on the data tool.In general, patients at paediatric department are received and triaged by the medical team on duty.The first contact clinician is a general doctor who then consults a paediatric resident, paediatrician, or infectious disease specialist when there is need.The team routinely carries out several ward rounds in a day to review laboratory results and determine if there is need to amend the initial treatment decisions.The recruitment process and flow of participants is summarised in (Fig. 1). Laboratory procedures All laboratory analyses were conducted at the microbiology laboratory of KIU-TH.Patients were sent at the laboratory reception where they were assigned a unique laboratory number after registration, followed by blood sample collection.Caretakers to participants or legally authorised representatives were asked to give written informed consent for both specimen collection and subsequently to answer a brief questionnaire in their local language for the illiterate. Collection of samples for malaria blood slide The ring finger was cleaned using an anti-septic solution (chlorhexidine) and allowed to dry, and then pricked with a sterile lancet.The first drop of blood was cleaned with a dried cotton wool and finger was squeezed to allow a drop of blood to flow on the centre of a clean, dry, grease free glass slide.A clean glass rod was used to spread the blood in a circular motion to make a thick blood film such that the back of the watch can be read.The prepared thick blood film was allowed to air dry in accordance with [29]. 
Collection of blood sample for culture of salmonella The skin at a chosen site for venipuncture was cleaned using an antiseptic solution.The area was allowed to dry prior to venipuncture.A non-touch technique was used to draw three mls of venous blood that was transferred into brain heart infusion broth after disinfection of the rubber septum using an antiseptic solution.The culture bottle was labelled with the participant code number and then taken to the laboratory immediately.Following arrival at the laboratory, each specimen was registered in the appropriate record book and incubated at 37ᵒC for 7days in accordance with [30].Samples collected in the night also underwent a similar process since the laboratory is easily accessible and within the hospital.The specimen was prepared as follows. Thick blood smears staining The dried thick blood smears were prepared in accordance with a method described by Ref. [31].The "plus subsystem" was used to quantify the malaria parasites in accordance with centre for disease control criteria [32].This was intended to guide clinical management. Blood culture and gram staining for morphology After 7 days of incubation, blood samples with growth were sub-cultured on Salmonella-Shigella agar (SSA) under class II biosafety cabinet and incubated at 37ᵒC for 18-24 h.Cultures were re-incubated after first 24 h without growth for up to 72 h before reporting no growth.Cultures with growth were observed for colony characteristics.Colonies were picked with the help of sterile wire loop and smears were made by emulsifying the colony with a drop of normal saline on a clean dried slide.Gram-staining was done to observe the morphology features under a microscope and Salmonella colonies were identified in accordance with De et al. [33]. Quality control All slides and gram stains were interpreted by two independent laboratory technologist who were blinded of the patient's history.In case of disagreement, a professor of medical microbiology and parasitology (EA) was consulted, and his decision was considered final.Each of the slides were compared with a standard positive malaria blood slide already available in the hospital laboratory.Each suspected Salmonella isolate from research participant was compared with a standard Salmonella organism.All the positive samples of isolated Salmonella were taken for external quality control as blind duplicate samples at the nearby Mbarara Regional Referral Hospital for validation. Data collection methods and study variables We collected data using investigator administered pre-tested questionnaire designed in English and local language (Runyankole).We obtained data on independent variables including fever, abdominal pain, vomiting, and loss of consciousness.The data tool also captured information on social circumstances which based on previous literature [34,35]; were presumed to have an influence on disease transmission including: socio-demographic factors (age, maternal level of education, school going status of the child); behavioural factors (source of drinking water, hand washing practices, definitive human waste disposal); and awareness of preventive measures for the two infections.This method had been validated to be effective in similar study settings [28]. 
Validity and reliability of data collection instrument The pre-test study was conducted at Lugazi Health Centre IV.We used content validity index in which five participants who were not part of the sample population, were given the questionnaire.A measure of inter-participant agreement was determined.A Cronbach's co-efficient alpha of more than 0.8 was considered to imply that the items on the questionnaire were reproducible and consistent. Data analysis Data was entered into Microsoft Excel (version 2010) and exported to Stata software version 14.1 (StataCorp.2015.Stata Statistical Software: Release 14. College Station, TX: StataCorp LP) for cleaning and analysis.The participants' socio-demographic, behavioural and clinical characteristics are summarised using frequencies and percentages in tables.The mean and standard deviation were used for continuous variables that were normally distributed otherwise the median and inter-quartile range were used.We used the modified Poisson regression (with robust standard errors) model to determine factors associated with malaria-typhoid co-infection.Factors with medical plausibility and those with p < 0.2 at bivariate analysis were considered for multivariate analysis.At multivariate analysis, confounding and effect modification (interaction) were assessed at cut-off of 15%.The factors with p < 0.05, in the final model were statistically significant.The measures of association are reported as odds ratios (OR), with corresponding 95% CI and p-values. Results By the end of the study period, a total of 159 participants were seen at the peadiatric and child health department and 108 were eligible for inclusion and analyses (Fig. 1).All patients with uncomplicated malaria were treated with artemether-lumefantrine combination therapy (ACT) for 3 days whereas those with complicated malaria were treated with intravenous artesunate 3 mg/kg (weight <20 kg) and 2.4 mg/kg (weight >20 kg) at 0, 12, 24 h then every 24 h until they could tolerate oral ACT in accordance with WHO guidelines [36].Further, patients confirmed to have typhoid received intravenous ceftriaxone 50 mg/kg/day (maximum 2 g) for 7 days in accordance with a local hospital protocol. Behavioural characteristics of febrile children attending peadiatric department at KIU-TH Over 61.0% (n = 66) of the children and or their guardians seldom washed their hands before eating food whereas over 48.0%(n = 52) of them frequently wash their hands without soap.Additionally, only 48.2% (n = 52) of the children could wash their hand after use of toilet/latrine.Hygienically, about 2.0% practiced open defecation, while 82.4% (n = 89) of the participants used latrine/toilets (Table 2). Factors associated with malaria-typhoid co-infection among febrile children attending paediatric department at KIU-TH Before adjustment, children who were being taken care of by their mothers were 96.0% less likely to have malaria-typhoid coinfection compared to those being taken care of by their fathers [p = 0.028; cOR = 0.04, 95% CI (0.003-0.71)], however this association did not remain valid upon adjustment for confounding [p = 0.33; aOR = 0.14, 95%CI (0.003-7.33)]. 
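The crude odds ratios and 95% confidence intervals reported in this section can be reproduced from a 2×2 table with the standard Wald construction on the log-odds scale. The sketch below is generic: the cell counts in the usage example are hypothetical placeholders and not the study's actual data, and the Haldane 0.5 correction for empty cells is a common convention rather than something stated in the paper.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed with outcome,   b = exposed without outcome,
         c = unexposed with outcome, d = unexposed without outcome.
    A Haldane 0.5 correction is applied if any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower, upper = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lower, upper

if __name__ == "__main__":
    # Hypothetical counts for illustration only (not the study's table).
    print(odds_ratio_ci(a=2, b=10, c=1, d=95))
```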
Children who reported taking unboiled drinking water from open wells were 17 times more likely to get malaria-typhoid coinfection [p = 0.037, cOR = 17, 95%CI (1.19-243.25)].Children whose source of water was public taps were 97.0% less likely to have malaria-typhoid co-infection compared to those who used open wells [p = 0.015, cOR = 0.03, 95%CI [0.02-0.51].This association remained statistically significant even after adjustment for confounding [p = 0.04; aOR = 0.05, 95%CI (0.003-0.87)].There was no statistically significant association between malaria-typhoid co-infection and gender, level of education, type of accommodation and school going status (Table 7). Prevalence of malaria amongst febrile children attending paediatric department of KIU-TH The first objective of the study was to determine the prevalence of malaria amongst febrile children attending the paediatric department of KIU-TH which was found to be 25.0%.This prevalence is lower than 36.5% reported in an Ethiopian study [35], though higher than the 3.5% [38] and 12.0% [22] previously reported in Western and South Western Uganda respectively.The current figure is also higher than the Ugandan National average of 19.0% [39].This discrepancy could be arising from differences in inclusion criteria, our study having recruited only those who were febrile, including those above 5 years.There has been also concerns that regular reports from Uganda Health Management and Information System (HMIS) suffer inaccuracies; including underreporting of fevers, since only episodes covered by the national public health system are captured, amidst lack of laboratory confirmation [22].In addition, our relatively higher prevalence could also be related to seasonality, having conducted the study in two rainy and one dry season as opposed to the national average that is based on a single calendar year [39].However our findings show a significant reduction in malaria prevalence from a previously reported Ugandan National average of 42.0% in 2009 [40].This reduction could depict successful National malaria control programmes such as indoor residual spray and distribution of free insecticide treated mosquito nets for vector control.The most affected age group in our study population was 7-9 years.In a similar study in Northern Uganda, this age group was the most affected at 61.8% [41].This could be due to increased outdoor activity in this age group, but whether mostly the exposure is at school or home deserves further investigation. Prevalence of typhoid amongst febrile children attending paediatric department of KIU-TH The second objective of the study was to determine the prevalence of typhoid amongst febrile children attending the paediatric department of KIU-TH, based on blood cultures.The study established the prevalence at 3.7% and all affected participants were aged 7-12 years.This blood culture based prevalence is comparable to 2.8% [27] previously reported at KIU-TH and to 2.3% reported at Mulago National Referral Hospital [42].Also our low prevalence compares well to 0.5%-5.0%reported by Birhanie et al. [35] and Habte et al. [43] respectively in Ethiopia. 
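The prevalence figures being compared here are point estimates from n = 108 children; a quick way to attach sampling uncertainty is a Wilson score interval. This is an illustrative addition, not something reported in the paper, using the counts 27/108, 4/108 and 3/108 implied by the stated prevalences.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)

if __name__ == "__main__":
    for label, k in [("malaria", 27), ("typhoid", 4), ("co-infection", 3)]:
        p, lower, upper = wilson_ci(k, 108)
        print(f"{label:13s}: {p:5.1%}  (95% CI {lower:5.1%} - {upper:5.1%})")
```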
Previously in Uganda, typhoid infection based on blood cultures has been largely studied only during outbreaks rather than routine in the paediatric population.Ranges between 2.6% and 22.6% were reported among adults during Kasese outbreak in Western Uganda [44].In a retrospective study amongst all febrile patients attending clinics in Bushenyi district, the overall prevalence was reported to be 36.6%,affecting mainly 10-29 year olds of low income class [45], however this was a Widal Agglutination serological based study with sensitivity and specificity concerns, amidst data quality constraints of retrospective studies.Serological tests as opposed to blood cultures have been found to give higher prevalence rates of typhoid fever, resulting from false positive results in Nigeria [46], India [47] and Pakistan [48].In an Ethiopian study, typhoid fever was prevalent in 19.0% based on serological Widal test as opposed to 0.5% based on blood culture [35], emphasising the need for extending laboratories with capacity to do blood cultures for proper diagnosis.In conformity with our study findings, a blood culture study in Cameroon found typhoid fever prevalence of 2.5% amongst febrile patients.Other studies in low and middle income countries have found typhoid prevalence lowest amongst children below 4 years and above 15 years [46,49] and highest amongst school going age group of 5-10 years [48].The higher Typhoid burden in the later age group could be due to common source infections from public boreholes in our primary schools setting, alongside poor hand hygienic practices. Prevalence of malaria-typhoid co-infection amongst febrile children attending paediatric department at KIU-TH The third objective of the study was to determine the prevalence of malaria-typhoid co-infection amongst febrile children attending the paediatric department of KIU-TH.The prevalence of malaria-typhoid co-infection based on microscopy and blood culture respectively in our study population was found to be at 2.8%.Our co-infection rates are comparable to 3.5% reported in a Tanzanian study among children below 15 years [20] and to 2.5% in an Indian study [47].Contrary to the findings of Birhanie et al. [35], all cases in the present study were between 7 and 12 years as opposed to 2-5 years.However, the prevalence of this co-infection in our study is lower than 6.5% reported in an Ethiopian study of blood cultures [35], although the later involved a general population; including participants above 12 years.The co-infection rates in our study are far lower that what has been reported previously in serological studies.In their serological study in Western Uganda, Agwu et al. 
[50] reported a co-infection rate of 20.9% which was comparable to 18.3% in a Nigerian study that used serological tests in a general population [46].The inclusion of general population should control for confounding from comorbidities such as HIV/AIDs that have been shown to be associated with higher rates of both malaria and typhoid [50], otherwise the resulting high prevalence of malaria-typhoid co-infection could be overestimated.In a similar study in Sierra Leon, there was no association between having malaria and typhoid fever, but presence of fever was more associated with Salmonella Typhi compared to Plasmodium parasites [51].A consensus on age specific-blood culture-based reporting of this co-infection amongst researchers, could thus address variability of findings in the future studies.Research on malaria-typhoid co-existence is critical due to a compelling body of evidence to suggest that inherent immunological responses such as hemolysis caused by acute malaria infection is an independent risk for superimposed typhoid infection which would otherwise be "silent" to a level detectable by blood culture [34,35,52]. Factors associated with malaria-typhoid co-infection amongst febrile children attending paediatric department at KIU-TH The fourth objective of the study intended to determine the factors associated with Malaria-Typhoid co-infections amongst febrile children.Upon adjusting for confounders, we found that the most crucial factor influencing this co-infection was the source of water.Using treated water from protected public taps was associated with low malaria-typhoid co-infection (p = 0.04) whereas drinking unboiled water from open public wells increased the risk for the co-infection (p = 0.037).Capturing such history could be made a routine element of screening children presenting with febrile illnesses in our settings. Other Ugandan authors have pinned contaminated water and food as main driving factors for typhoid infection [44], although their main focus had been on adult population.In a similar Ethiopian study, using non treated water from open sources such as springs and wells was associated with blood culture confirmed typhoid fever, especially amongst rural dwellers [43].Contrary to findings of Khan et al. [48] in Pakistan, we found no significant association between the child's school going and malaria-typhoid co-infection in the present study despite the fact that all cases were within 7-12 years; a typical school going age group.Other studies attribute malaria-typhoid co-infection in school going age to increased outdoor activity as well as poor hand washing habits in absence of parental supervision [35].In our study, over 61.0% of the children seldom washed their hands before handling food, whereas 48.0% of them did so but without soap.These statistics coupled with open defecation and failure to wash hands after visiting toilets demonstrated in the present study, warranty urgent behavioural change campaigns. 
Study strengths and limitations This study boosts of several strengths.First, the investigations used in the present study i.e., blood culture and blood side microscopy are considered gold standard in definitive diagnosis of typhoid and malaria respectively.Secondly, being a cross-sectional study, the level and quality of completeness of data could easily be controlled.Moreover, the data tool used was not only specifically designed for this study but also validated for reliability.Lastly, all positive cases of Salmonella were externally cross-examined and confirmed by an external reference laboratory. However, there were some limitations in this study.First, although blood slide and blood culture are considered gold standard for diagnosing malaria and typhoid respectively, in exceptional cases, malaria parasites may not be captured in peripheral blood smears even in presence of severe infection due to sequestration of parasitized cells in deep capillary beds.Secondly, the reported isolated case of typhoid fever in the present study was included based on presence malaria pigment in circulating neutrophils and monocytes despite having no malaria parasites in setting of suspected malarial infection.In addition, the sensitivity of blood cultures for typhoid salmonella is intrinsically moderate at only 85.0-90.0%[16,53].Lastly, the prevalence which was used in our sample size calculation was just an estimate for the proportionate split based on a single previous study, assuming normal approximation.As such, a maximum variation of assuming 0.5 proportion could have yielded a more "conservative" sample size, higher statistical power, and more precise confidence interval estimates of the true population [54].These factors together with our consecutive recruitment could limit the generalisability of the findings. Conclusions and implication of results The prevalence of malaria was high in our study population compared to the national average [39].The prevalence of blood culture confirmed typhoid fever and malaria-typhoid co-infection were not as high as previously reported based on serological studies [50].The co-infection was clustered mainly among children aged 7-12 years.Although the prevalence of co-infection was low to conclusively discern on the risk factors, using treated water from protected public taps seemed protective whereas consuming unboiled water from open public wells was a statistical risk.These findings justify the routine standardised testing for either infection amongst febrile children, to avoid irrational antibiotic prescriptions [19] and complications related to late diagnosis.Health worker and community driven educational campaigns should focus on use of safe water, hygienic hand washing practices and proper waste disposal, and should target mothers who mainly take care of these children. human participants and in accordance with the Declaration of Helsinki [56].Ethical approval was obtained from the School of Medicine Research and Ethics Committee of Kampala International University, Western Campus (No. UG-REC-023/201,834). 
Informed consent was sought from all participants and/or their legally authorised representatives, who endorsed the consent form with their signatures or thumb prints after having been made to understand the risks and benefits of the study. All participants were free to withdraw their consent at any stage of the study. Withdrawal of consent by any patient did not affect the quality of treatment or impinge on their entitlements. All laboratory results were immediately availed to the guardians and attending clinicians to guide treatment. Table 1 Socio-demographic characteristics of febrile children attending the paediatric department at KIU-TH. Table 2 Behavioural characteristics of febrile children attending the paediatric department at KIU-TH. Table 3 Source of water and media awareness about malaria and typhoid amongst febrile children attending the paediatric department at KIU-TH. Table 4 Prevalence of malaria, typhoid and their co-infection among febrile children attending the paediatric department at KIU-TH. Table 6 Clinical characteristics of febrile children attending the paediatric department of KIU-TH. Table 7 Bivariate and multivariate analysis of factors associated with malaria-typhoid co-infection among febrile children attending the paediatric department at KIU-TH.
2023-08-31T15:09:08.466Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "8b947b242725a9e563208da663ac66507f131442", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S2405844023067968/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f9d9b0b0373b4b652e768fadab6db7d1cf8f0372", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
17014912
pes2o/s2orc
v3-fos-license
Objective OBJECTIVE To introduce a fuzzy linguistic model for evaluating the risk of neonatal death. METHODS The study is based on the fuzziness of the variables newborn birth weight and gestational age at delivery. The inference used was Mamdani's method. Neonatologists were interviewed to estimate the risk of neonatal death under certain conditions and to allow comparing their opinions and the model values. RESULTS The results were compared with experts' opinions and the Fuzzy model was able to capture the expert knowledge with a strong correlation (r=0.96). CONCLUSIONS The linguistic model was able to estimate the risk of neonatal death when compared to experts' performance. INTRODUCTION In bioscience there are several levels of uncertainty, vagueness, and imprecision, particularly in the medical and epidemiological areas, where the best and most useful description of disease entities often comprise linguistic terms that are inevitably vague. The theory of Fuzzy Logic has been developed to deal with the concept of partial truth values, ranging from completely true to completely false, and has become a powerful tool for dealing with imprecision and uncertainty aiming at tractability, robustness and low-cost solutions for real-world problems. These features and the ability to deal with linguistic terms could explain the increasing number of works applying Fuzzy Logic in biomedicine problems. 12,14In fact, the theory of Fuzzy Sets has become an important mathematical approach in diagnosis systems, 2 treatment of medical images 4 and, more recently, in epidemiology 6,9,15 and public health. 7e capability of working with linguistic variables, easiness of understanding, low computational cost, and its ability to incorporate to the systems the human expert experience, are attributes that make this approach an extremely interesting option to elaborate medical models.The basic concepts of the fuzzy sets theory is presented in the next section. Neonatal mortality is defined as the death that occurs up to 28 days of life and it is a very important population health indicator.This indicator provides information on social welfare, and ethical and political aspects of a population under certain conditions.Among the main causes of neonatal mortality, low birth weight (LBW) preterm newborn (PT) are the most important.There is a classification for preterm and low birth weight children.Those whose are bornweighing less than 2,500 g are considered low birth weight, and among them, those who are born weighing less than 1,500 g are considered very low birth weight.Correspondingly, children who are born before having completed 37 weeks of gestation are considered preterm, and extreme pre-term are those born before having completed 32 weeks of gestation. 1e incidences of LBW and PT in Brazil are around 10%. 
3,5 The estimate of the risk of neonatal death can provide important information to pediatricians, especially to neonatal intensive care physicians, with respect to the attention a newborn requires.Nevertheless, a possible source of confusion could be the Boolean classification for PT and LBW described above, because let's say an infant born weighing 2,600 g might not receive the necessary attention because this infant is not considered LBW.The same could happen with an infant born at 38 weeks of gestation.Low birth weight, extreme low birth weight, preterm and extreme pre-term newborns are the main risk factors to neonatal mortality.Neonatal mortality in the state of São Paulo, the most industrialized Brazilian state, in 2000 was 11.45/1000 livebirths. 13 t is evident that the care provided to a newborn infant could differ depending on the hospital and its location (whether they are in more developed or more populous areas, rural or urban zone, etc.).It is common in fairly small hospitals the pediatrician is not there at the time of birth, and other professionals are in charge of evaluating the newborn. To estimate the risk of neonatal deaths, it has been applied a Logistic Regression Model using dichotomous independent variables such as Yes or No, Present or Absent. 8As opposed to Logistic Regression, Fuzzy Logic allows assigning, for instance, a newborn with birth weight of 1,350 g to a fuzzy subset VLBW with 0.63 membership degree and to a LBW fuzzy subset with 0.25 membership degree, bringing in the inherent uncertainties of this record.In fact, a newborn weighing 1,490 g at birth and another weighing 1,510 g at birth, who are classically categorized as LBW and IBW respectively, do not show significant differences on biological, anatomical and physiological aspects.In the fuzzy approach each element may be compatible with several categories, with different membership degrees.The advantage of the fuzzy theory is to consider an even and more realistic classification of the children relating to the two variables assumed. Considering the scenario discussed above, the development of a simple, low cost program, which is able to evaluate more appropriately the risk of neonatal death, could become an important tool.Thus, it is presented in the study a theoretical fuzzy linguistic model to estimate the risk of neonatal death based on birth weight and gestational age. Fuzzy sets theory The theory of fuzzy sets was introduced by Lotfi A. Zadeh 17 from the University of California, Berkeley, in the 1960's as a means to model the uncertainty within natural language and introduced vagueness concept.Among the various paradigmatic changes in science and mathematics in the last century, one such change concerns to the concept of uncertainty.According to the traditional view, science should strive for certainty in all its manifestation (precision, specificity, sharpness, consistency, etc.); hence, uncertainty (imprecision, non-specificity, vagueness, inconsistency, etc.) is regarded as unscientific.According to the alternative view, uncertainty is considered essential to science. Zadeh's key notion was graded membership, according to which a set could have members who fit into it partly.So, if one assumes X is a set serving as the universe of discourse, a fuzzy subset A of X is associated with a function which is generally called membership function.The idea is that for each x, m A (x) indicates the degree to which x is a member of the fuzzy set A. 
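A minimal illustration of graded membership for the birth-weight variable is sketched below. The breakpoints and the triangular/shoulder shapes are assumptions for illustration only; the paper's actual membership functions appear in its Figure 1a and are not reproduced in the text, so the exact degrees obtained for a 1,350 g newborn differ from the 0.63 and 0.25 quoted above. The point is that such a newborn belongs partly to VLBW and partly to LBW, and the degrees need not sum to one.

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shoulder_left(x, b, c):
    """Full membership up to b, decreasing linearly to zero at c."""
    return 1.0 if x <= b else max(0.0, (c - x) / (c - b))

def shoulder_right(x, a, b):
    """Zero below a, increasing linearly to full membership at b."""
    return 1.0 if x >= b else max(0.0, (x - a) / (b - a))

# Hypothetical fuzzy sets for birth weight in grams (illustrative breakpoints only).
birth_weight_sets = {
    "VLBW": lambda w: shoulder_left(w, 1000, 1800),
    "LBW":  lambda w: tri(w, 1200, 2000, 2600),
    "IBW":  lambda w: tri(w, 2400, 2800, 3400),
    "NBW":  lambda w: shoulder_right(w, 3000, 3600),
}

if __name__ == "__main__":
    w = 1350
    print({name: round(mu(w), 2) for name, mu in birth_weight_sets.items()})
    # The 1350 g newborn is partly VLBW and partly LBW; the degrees need not sum to 1.
```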
This membership degree indicates the compatibility degree of the assertion "x is A". The classic set theoretical operations can thus be extended to fuzzy sets, which have membership grades that are in the interval [0,1].So, if one assumes that A and B are two fuzzy subsets of X, their standard union, intersection, and complement are also fuzzy sets given by: and where A is the negation of A (not A).Union, intersection and complement defined above are fuzzy operators that one can use to combine fuzzy variables to form fuzzy expressions, as aggregating fuzzy rules.Sometimes, a fuzzy set could represent linguistic concepts, such as very small, small, high, and so on, as interpreted in a particular context, resulting in the named linguistic variable.It is characterized by its name tag, a set of fuzzy values (linguistic terms) and the membership functions of these labels.Consider, for example, the linguistic variable named Fever with a set of linguistic terms representing absent fever, moderated fever and intense fever.So, Fever is a concept that could be translated by fuzzy sets, which membership functions express quantitatively the notions of fever absent, fever moderated and intense fever.The ability to operate with linguistic variable is one of the most important characteristics of fuzzy sets theory and its successful applications. A fuzzy linguistic model is a rule-based system that uses fuzzy sets theory to treat the phenomena.Its basic structure includes four main components: • a fuzzyfier, which translates crisp (classical numbers) inputs into fuzzy values; • an inference engine that applies a fuzzy reasoning mechanism to obtain a fuzzy output (in the case of Mamdani inference); • a knowledge base, which contains both a set of fuzzy rules and a set of membership function representing the fuzzy sets of the linguistic variable; and • a defuzzifier, which translates the fuzzy output into a crisp value.The decision process is performed by the inference engine using the rules contained in the rule base.This fuzzy rules define the connection between fuzzy input and output. A fuzzy rule has a form: If antecedent then consequent, where antecedent is a fuzzy expression composed by one or more fuzzy sets connected by fuzzy operators, and consequent is an expression that assigns fuzzy values to the output variables.The inference process evaluates all rules in the rule base and combines the weighted consequents of all relevant rules into a single output fuzzy set (Mamdani's model).In many applications of the fuzzy theory it is necessary to produce crisp value as the result of an approximate reasoning process.The fuzzy output set may then be replaced by a "crisp" output value obtained by a process called defuzzification.There are many methods to defuzzify a fuzzy output, but in all of them the crisp value found reflects the best representation of fuzzy set defuzzified. 16e brief outline above describes fuzzy set theory and the approximate reasoning process in its simplest and most commonly used form.There are a variety of other approaches at different levels, perhaps most notably in the choice of the aggregation operators, and in the definition of the inference operation.To the reader who wishes to learn more about fuzzy logic theory it is recommended the book by Yen and Langari. 
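The displayed definitions of the standard set operations were lost in extraction. In Zadeh's standard form they read μ_{A∪B}(x) = max(μ_A(x), μ_B(x)), μ_{A∩B}(x) = min(μ_A(x), μ_B(x)) and μ_{Ā}(x) = 1 − μ_A(x); the short sketch below assumes these are the operators intended.

```python
def fuzzy_union(mu_a, mu_b):
    """Standard (Zadeh) union: pointwise maximum of membership degrees."""
    return lambda x: max(mu_a(x), mu_b(x))

def fuzzy_intersection(mu_a, mu_b):
    """Standard intersection: pointwise minimum of membership degrees."""
    return lambda x: min(mu_a(x), mu_b(x))

def fuzzy_complement(mu_a):
    """Standard complement: one minus the membership degree."""
    return lambda x: 1.0 - mu_a(x)

if __name__ == "__main__":
    # Two illustrative membership functions on the interval [0, 10].
    low = lambda x: max(0.0, min(1.0, (5.0 - x) / 5.0))
    high = lambda x: max(0.0, min(1.0, (x - 5.0) / 5.0))
    x = 3.5
    print("low(x) =", low(x), " high(x) =", high(x))
    print("union       :", fuzzy_union(low, high)(x))
    print("intersection:", fuzzy_intersection(low, high)(x))
    print("not low     :", fuzzy_complement(low)(x))
```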
16 METHODS A linguistic fuzzy model is consists of a set of fuzzy rules and an inference method.The most common inference method is the Minimum of Mamdani, whose output is a fuzzy set.In general, Mamdani's fuzzy models are completely based on experts experience.If one is interested in a crisp output it is possible to find it with a defuzzification method, like a Center of Area. 14,15e fuzzy linguistic model to evaluate a risk of neonatal death has two antecedents: birth weight and Note that, by combining all possible inputs it is possible to build 12 rules, but it was considered relevant only 10 rules, since there are situations that in fact cannot occur.For instance, it is impossible for a very pre-term newborn to have a normal birth weight or insufficient birth weight.Normally an infant in this situation is born at low or very low birth weight.Although this is mathematically possible, it was subtracted from the rule bases, reducing the number of the rules. The procedure of the fuzzy linguistic model, given two of the above inputs for any child, consists of calculating the membership degree of these values in all fuzzy sets of birth weight and gestational age.Next, the risk of neonatal death is determined by inference of the fuzzy rule set, using Mamdani's inference and defuzzification of the fuzzy output.The system was run in a Matlab software. RESULTS The fuzzy sets related to the linguistic variables birth weight and gestational age are presented in Figures 1a and 1b, respectively. It is important to note that this membership function represents the degree of compatibility of some input to all categories rather the probability of this input be classified in any category.In fact, the membership degree represents the possibility that the input belongs to the set.Figure 2 shows the membership functions of the output variable risk of neonatal death. The model was run using several values of the input variables and Figure 3 presents the results of the mapping of the system. It can be noted in this graph that the risk of neonatal death decreases monotonically when birth weight or gestational age increases, as expected.The inconsistent region in this figure corresponds to the excluded In order to validate the model, the cases presented in Table were evaluated by four other experts and applied to the model for the comparison of results.The Spearman correlation coefficient between the model results and the experts' opinions ranged from 0.91 to 0.97.Considering the average of experts' opinions and model results it was found a Spearman correlation coefficient equal to 0.96.Table presents the risk values of neonatal death provided by the average of experts' opinions and the model.Figure 4 shows the correlation between these values. As it can be noted from the Figure 4, the fuzzy model based in only two input variables was sufficiently robust to determine the risk of neonatal death when compared to experts' performance. DISCUSSION Neonatal mortality is a main component of childhood mortality. 13A means of to identifying newborns with high risk to neonatal mortality can offer information to physicians who attend these newborns for them to take actions and prevent devastating outcomes.In the existing literature there are not references of studies approaching this issue in a fuzzy sets theory context. 
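A compact sketch of the Mamdani machinery described in the Methods (minimum for AND, clipping of the consequent, maximum aggregation, centre-of-area defuzzification), reduced to two illustrative rules. All numeric breakpoints and the shapes of the risk sets are assumptions; the paper's actual membership functions (its Figures 1 and 2) and its full ten-rule base are not reproduced in the text.

```python
import numpy as np

def tri(x, a, b, c):
    """Vectorised triangular membership with feet a, c and peak b (a < b < c)."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_risk(weight_g, ga_weeks):
    """Two-rule Mamdani sketch: min for AND, clipping of consequents, max aggregation,
    centre-of-area defuzzification over the risk universe [0, 100] (percent)."""
    risk = np.linspace(0.0, 100.0, 1001)
    # Illustrative output sets (assumed breakpoints): low risk and high risk.
    low_risk = tri(risk, 0.0, 10.0, 40.0)
    high_risk = tri(risk, 50.0, 90.0, 100.0)
    # Rule A: IF weight is VLBW AND gestational age is VPT THEN risk is HR.
    w_a = min(tri(weight_g, 500.0, 1000.0, 1800.0), tri(ga_weeks, 22.0, 28.0, 33.0))
    # Rule B: IF weight is NBW AND gestational age is T THEN risk is LR.
    w_b = min(tri(weight_g, 2800.0, 3500.0, 4500.0), tri(ga_weeks, 36.0, 40.0, 43.0))
    aggregated = np.maximum(np.minimum(w_a, high_risk), np.minimum(w_b, low_risk))
    if aggregated.sum() == 0.0:
        return float("nan")  # no rule fired
    return float((risk * aggregated).sum() / aggregated.sum())  # centre of area

if __name__ == "__main__":
    print(f"risk(900 g, 27 wk)  = {mamdani_risk(900, 27):.1f} %")
    print(f"risk(3400 g, 39 wk) = {mamdani_risk(3400, 39):.1f} %")
```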
In this study it was proposed a fuzzy linguistic model to evaluate the risk of neonatal death based on birth weight and gestational age.In the fuzzy approach, one element can fit into two or more classes with different membership degree and it is important to mention that the sum of membership degree does not make 1, in a clear opposition to probability theory.The fuzzy approach also considered inherent uncertainties of the classification process, such as in the classification of a newborn with 2,495 g and another one with 2,505 g, who are classically classified as LBW and IBW respectively.In this fuzzy approach these newborns simultaneously fit into LBW and IBW with some membership.Furthermore, in logistic regression there is a need of a considerable number of records to establish an association between the outcome, neonatal death, and determinant variables, such as birth weight and gestational age.In fuzzy model, as presented here, there is not necessary. The model provided good results when compared with the mean values obtained from several experts.The advantage of the risk estimator presented is that the model values do not change with time, which it is not true for experts' opinions.In fact, the experts could provide different values for death risk under the same conditions, depending on their positive or negative feelings.It is common to get from experts different answers for the same question in a week time.In this sense, the model presented here could offer a standardization of the classification process. In addition, this model avoids the variability in the analysis of newborn conditions provided by different health professionals, which could yield inequalities in the treatment.Besides, the fuzzy model is very simple and implies in low computational expenses, making it possible an easy and inexpensive implementation, features that have an important role in developing and poor countries.In cities where there are no experts available, the model can help understanding and evaluating the risk of neonatal death based only on information regarding the gestational age and birth weight.This is available even in very modest conditions. As expected, the agreement between the model and experts is improved in extreme situations, since there are less uncertainties in these cases.For instance, when birth weight and gestational age are optimal or when birth weight and gestational age are very critical there are few doubts about the expected outcome.On the other hand, when birth weight and gestational age are in intermediate (doubtful ones), experts provide conflicting opinions as a result of their feelings and personal experiences.However, despite these divergences, the correlation is still very strong with a p<0.0001 significance level. Expecting that the model could be improved with the introduction of new variables, such as Apgar score, previous report of stillbirth and unsuccessful pregnancy is natural and should be encouraged.However, it is important to bear in mind that the number of fuzzy rules grows exponentially and this can impair the model performance.Besides, the inclusion of new variables does not guarantee the improvement and robustness of the model. The application of fuzzy sets theory in biomedicine and, particularly, in pediatrics, is a new area of research.Nevertheless, this approach has provided promising results in several medical applications, proposing a paradigmatic shift of the healthy sciences. 
10,11 The fuzzy model proposed in this paper represents a modest contribution to this changing scenario, since the results show that fuzzy sets theory can be a powerful tool, in addition to those already existing, for estimating neonatal mortality and other important health indicators. Figure 1 - Fuzzy sets of the linguistic variables birth weight (a) and gestational age (b). Figure 2 - Fuzzy sets of the output variable risk of neonatal death. Figure 3 - Surface obtained by mapping the fuzzy linguistic model to evaluate the risk of neonatal death. Figure 4 - Correlation between the fuzzy model and the experts' values of the risk of neonatal death. The base rules consisted of the following ones: 1. IF weight is VLBW AND gestational age is VPT THEN Risk is HR. 2. IF weight is LBW AND gestational age is VPT THEN Risk is HR. 3. IF weight is VLBW AND gestational age is PT THEN Risk is HR. 4. IF weight is LBW AND gestational age is PT THEN Risk is LHR. 5. IF weight is IBW AND gestational age is PT THEN Risk is LR. 6. IF weight is NBW AND gestational age is PT THEN Risk is LR. 7. IF weight is VLBW AND gestational age is T THEN Risk is LHR. 8. IF weight is LBW AND gestational age is T THEN Risk is LR. Table - Some hypothetical situations with birth weight (in grams) and gestational age (in weeks) and the experts' (mean) and model-estimated risks (in percent).
2014-10-01T00:00:00.000Z
2002-12-01T00:00:00.000
{ "year": 2002, "sha1": "c78f16b4592eab02a9ddc521e26fb07b1dcb04e9", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rsp/a/TVQF3xTLVKsjSVtqrF79F5f/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "c78f16b4592eab02a9ddc521e26fb07b1dcb04e9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
68223787
pes2o/s2orc
v3-fos-license
Data-oriented Wireless Transmission for Effective QoS Provision in Future Wireless Systems Future wireless systems need to support diverse big data and Internet of Things (IoT) applications with dramatically different quality of service requirements. Novel designs across the protocol stack are required to achieve effective and efficient service provision. In this article, we introduce a novel data oriented approach for the design and analysis of advanced wireless transmission technologies. Unlike conventional channel-oriented approach, we propose to optimally design transmission strategies for individual data transmission sessions, considering both the quality of service requirement and operating environment. The resulting design can effectively satisfy the stringent performance and efficiency requirements of all applications. Introduction Data is becoming one of the most essential resources of modern society. The timely processing, delivery, and analysis of data will bring huge social and economic benefit. As such, data are being generated and collected at an accelerating rate. In particular, big data applications, such as video surveillance, augmented reality (AR)/virtual reality (VR) gaming and medical imaging, generate data of large sizes. The evergrowing Internet of Things (IoT) devices typically transmit and receive small data packets in a sporadic fashion. The data from different applications also have dramatically different quality of service (QoS) requirements. Certain IoT applications require extremely high reliability and low latency. For example, factory automation applications require a packet loss rate of less than 10 -9 with an end-to-end delay smaller than one millisecond. Other IoT systems involve a huge amount of nodes with seriously limited energy resources. These nodes, usually powered by non-chargeable and non-replaceable battery, are expected to function for over 10 years. Future wireless systems must effectively and efficiently support such machine-type communications (MTC) with diverse QoS requirements. The development of digital wireless communications over past three decades has been centred on mobile broadband (MBB) service provision. With the deployment of transmission technologies, including channel adaptive transmission, multiple-input-multiple-output (MIMO) transmission and multi-carrier/orthogonal frequency division multiplexing (OFDM) transmission, current generation of cellular and wireless LAN systems can now effectively support the mass offering of MBB services [1,2]. The MBB service can be further enhanced with the application of massive MIMO technology [3] over millimeter Wave (mmWave) frequency range [4] in the emerging fifth generation (5G) cellular systems. Meanwhile, existing technological solutions can not readily satisfy the stringent requirements of future IoT applications in terms of ultra-high reliability, low latency, and very high energy efficiency. In particular, the latency of current forth generation (4G) network is in the range of 30-100 ms with the packet transmission reliability of 0.99 [5], which fall way short of the ultra-high-reliability low-latency communications (URLLC) required by mission-critical IoT applications. Several approaches to reduce the latency have been proposed in 5G systems, including virtual network slicing to create private connection for delay reduction over backbone networks [6] and new packet/frame structure with variable numerology to minimize scheduling latency [7]. 
We need also new technological breakthroughs in physical transmission schemes to effectively satisfy the stringent requirements of critical MTC. Many IoT applications involve a large number of MTC devices for sensing, metering, and monitoring purposes. These devices will sporadically exchange short packets with less stringent reliability and latency requirements. Meanwhile, the major design challenges for these IoT applications include massive connectivity, wide coverage, low cost, and high energy efficiency. Conventional transmission technologies were designed targeting long data transmission sessions, as required by MBB services, and become highly inefficient in supporting massive MTC [8]. Several possible solutions for scalable support of sporadic short packet transmission are proposed, including minimum signalling/overhead with random access [9], grant-free non-orthogonal access control, and increased base station complexity [10]. Meanwhile, designing highly energy-efficient transmission schemes for massive MTC devices still poses as one of the most fundamental challenge. To effectively satisfy the stringent performance and efficiency requirements of diverse IoT applications, in this article, we introduce a novel data oriented approach for the design and analysis of advanced wireless transmission technologies for Data-oriented Wireless Transmission for Effective QoS Provision in Future Wireless Systems Hong-Chuan Yang, University of Victoria Mohamed-Slim Alouini, King Abdullah University of Science and Technology future wireless systems. Unlike conventional channel-oriented approach, we propose to optimally design the transmission strategies for individual data transmission sessions, considering both the QoS requirements of the data and the prevailing operating environment. Data-oriented Versus Channel-oriented Conventional wireless transmission technologies were developed while targeting average channel quality, usually characterized by ergodic capacity or average error rate. The general design goal is to enhance and/or approach the effective average data of wireless channels. Typically, the same transmission scheme is applied to all transmission sessions over a wireless link. Following this general channel oriented approach, several transmission technologies [11], such as channel adaptive transmission, multicarrier transmission, multiple antenna transmission, etc, were developed. Generally, these technologies help improve the average quality of the channel, which usually translates to better average QoS experienced by the data. Meanwhile, such channel-oriented approach ignores the specifics of individual data transmission sessions. While the average channel quality indicators can accurately reflect the QoS experienced by long transmission sessions, as in MBB services, they fail to characterize the service quality of short transmission sessions, as illustrated in Figure 1. Note that the QoS experienced by short data transmission sessions varies dramatically with the prevailing channel condition. Such variation will have detrimental effects on the QoS provision for IoT traffics. The channel oriented approach may either cannot deliver the required level of reliability and result in insufficient design for critical MTC or consume too much resource/energy and lead to inefficient implementation for massive MTC. For example, consider the transmission of a short packet of 1 kB over a wireless channel with average data rate of 10 Mbps and average bit error rate of 10 -4 . 
Can we claim that the packet will be delivered successfully over this channel within 1 millisecond with 99.999999% certainty? (At the average rate of 10 Mbps, the roughly 8,000-bit packet takes about 0.8 ms on average, leaving almost no margin, and the average quantities say nothing about the tail of the delay distribution.) Even if we improve the channel quality to an average data rate of 100 Mbps and an average bit error rate of 10^-6, we will not have a definite answer. The average-channel-quality-based characterization is insufficient to determine whether a critical MTC application can be supported. To effectively support IoT applications in future wireless systems, we suggest a novel data-oriented approach to the design of wireless transmission technologies. We propose to design and apply transmission strategies from the data's perspective, instead of targeting the average channel quality. In particular, the transmission strategies are optimally designed for individual data transmission sessions with consideration of both the QoS requirement and the operating environment. For a given data packet from a mission-critical MTC application, a transmission strategy that minimizes the latency while satisfying the reliability requirement should be applied. Meanwhile, strategies that minimize energy consumption under a certain delay requirement may be used for massive MTC packets. With such a design philosophy, different transmission strategies may apply over the same channel for different traffic types. The rationale of such a data-oriented approach is that optimizing the transmission strategy for individual data sessions will more effectively satisfy the reliability and efficiency requirements, which will in turn enhance the performance of the overall transmission system. A similar design philosophy has been applied in the congestion control of data centres. There are generally two traffic types in data centres: long-lived throughput-sensitive elephant flows and short-lived delay-sensitive mice flows. The elephant flows cause persistent congestion, whereas mice flows create transient congestion. As such, different congestion control policies should apply depending upon the cause of the congestion [12]. While sharing a similar philosophy, here we propose to apply different physical-layer transmission technologies for data with different QoS requirements. The design and analysis of wireless transmission technologies with the data-oriented approach typically involves three generic steps. We first need to define suitable performance metrics to quantify the QoS experienced by an individual data transmission session. Conventional channel-oriented metrics such as average error rate and ergodic capacity may apply to MBB service, whereas new metrics should be developed for critical MTC and massive MTC. Then, we need to establish the performance limits from the individual data transmission session perspective and use them as guidelines for transmission strategy design and optimization. Given the randomly varying nature of the operating environment, these performance limits should be characterized in a statistical sense. Finally, we can design and optimize practical transmission strategies to approach the established performance limits. Various optimization tools and machine learning algorithms can be applied to develop the most favourable transmission strategies for the given operating environment and implementation constraints. To further illustrate the proposed data-oriented design approach, we present two data-oriented performance metrics, targeting critical MTC and massive MTC, respectively, and use them to establish the performance limits for short data transmission.
This analysis leads to new insights into wireless transmission system design. Data-oriented Analysis for Critical MTC The general design goal of 5G networks for mission-critical IoT applications is to achieve ultra-reliable low-latency transmission. The reliability of digital transmission over wireless channels can be improved with error control coding, retransmission, and diversity combining techniques. Under a stringent latency requirement, only coding schemes with short block length may be feasible. Similarly, the latency requirement will limit the number of retransmission attempts, if any. While diversity techniques stand out as the most desirable solution for achieving URLLC, the conventional analysis and design of diversity combining schemes target average performance metrics, such as the average error rate. To effectively satisfy the requirements of URLLC, we need a performance characterization that jointly considers reliability and latency requirements. To develop a data-oriented performance limit for critical MTC, we raise the following fundamental but not fully answered question: Given a certain amount of data, what is the minimum time duration required to successfully transmit it to the destination? The answer to this question will establish the relationship between the best achievable reliability and the corresponding latency requirement and provide important design guidelines for URLLC. Accordingly, we define a data-oriented metric, minimum transmission time (MTT), as the minimum time duration required to transmit a certain amount of data over wireless channels. Let H denote the amount of data to be transmitted. In the information-theoretic sense, H represents the amount of information contained in the data. The MTT will be a function of H, denoted by T_min(H). For a given H value, MTT will vary with the channel bandwidth, the channel realization, and the adopted transmission strategy. As such, MTT should be characterized in a statistical sense. More specifically, we can define the delay outage rate (DOR) as the probability that the MTT for a certain amount of data is greater than a threshold duration, denoted by T_th, i.e., DOR = Pr[T_min(H) > T_th]. T_th can be related to the latency requirement of the data to be transmitted. As such, DOR serves as a statistical measure of the QoS experienced by an individual data transmission session. For example, we can determine if a factory automation application can be supported by evaluating the DOR at T_th = 1 millisecond and comparing it with 10^-9. As an application of the new data-oriented performance limit DOR, we can compare two classical adaptive transmission schemes for fading channels under the channel state information available at the transmitter (CSIT) scenario, namely optimal rate adaptation (ORA) and optimal power and rate adaptation (OPRA). It has been shown that wireless transmission over a fading channel with ORA can achieve the ergodic capacity. It has also been established that OPRA transmission can further enhance the capacity of a fading wireless channel with water-filling power allocation [11]. Figure 2 compares the DOR performance of the ORA and OPRA transmission strategies for small data transmission, where the transmission completes within a channel coherence time, over a slow Rayleigh fading channel. In particular, we plot the DOR of both strategies as a function of the delay threshold T_th for different amounts of data. We can see that for both H values, there is a mixed behavior between the DOR performance of ORA and OPRA.
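For readers who wish to experiment with the metric, DOR is straightforward to estimate numerically. The minimal Monte-Carlo sketch below evaluates DOR = Pr[T_min(H) > T_th] for ORA-style transmission over a slow Rayleigh fading channel, assuming capacity-achieving coding and that the whole packet is delivered within one coherence time; the bandwidth, average SNR, and deadline values are illustrative placeholders, and this is not the analytical method behind Figs. 2-3.

```python
import numpy as np

def dor_ora_rayleigh(H_bits, bandwidth_hz, avg_snr_linear, t_th_s,
                     n_trials=1_000_000, seed=0):
    """Monte-Carlo estimate of the delay outage rate (DOR) for ORA-like
    transmission of H_bits over a slow Rayleigh fading channel.

    Assumes the packet is sent within one coherence time at the instantaneous
    Shannon rate B*log2(1+gamma), with gamma exponentially distributed
    (Rayleigh fading) around avg_snr_linear.
    """
    rng = np.random.default_rng(seed)
    gamma = rng.exponential(avg_snr_linear, size=n_trials)  # instantaneous SNR draws
    rate = bandwidth_hz * np.log2(1.0 + gamma)              # instantaneous rate [bit/s]
    t_min = H_bits / rate                                   # minimum transmission time per draw
    return np.mean(t_min > t_th_s)                          # DOR = Pr[T_min(H) > T_th]

# Illustrative numbers: 1 kB packet, 1 MHz bandwidth, 10 dB average SNR, 1 ms deadline
print(dor_ora_rayleigh(H_bits=8_000, bandwidth_hz=1e6,
                       avg_snr_linear=10.0, t_th_s=1e-3))
```

A DOR curve versus the deadline can be obtained by simply sweeping t_th_s over a range of values.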
Specifically, when the delay threshold is small, OPRA leads to a smaller DOR than ORA. When the threshold duration becomes larger, the DOR with ORA transmission improves and becomes much smaller than that with OPRA. In fact, the DOR of OPRA converges to a fixed value when the delay threshold becomes very large, which is equal to the probability of no transmission with OPRA. Figure 3 illustrates the effect of the average received signal-to-noise ratio (SNR) on the DOR performance. We can see that when the average SNR is small, ORA always achieves a smaller DOR than OPRA, which suspends transmission with higher probability. When the average SNR increases, the DOR performance of OPRA improves, but is still worse than that of ORA when the delay threshold is large. Note that from the conventional ergodic capacity perspective, OPRA always outperforms ORA, especially in the low-SNR regime. We observe from the DOR analysis, however, that OPRA is not always the better strategy from the perspective of an individual data transmission session. OPRA is preferred over ORA when the delay requirement is very stringent or the channel quality is favourable. Data-oriented Analysis for Massive MTC A typical design goal for massive MTC applications is to achieve the highest possible energy efficiency while satisfying a certain QoS requirement. Existing energy efficiency metrics fail to take into account the reliability requirement of data transmission. To establish suitable data-oriented energy efficiency limits for individual data transmission sessions, we pose the following fundamental question: What is the minimum amount of energy required to reliably transmit a given amount of data to its destination? The answer to this question will provide valuable design guidelines for the energy-efficient transmission of big and small data. Accordingly, we define a data-oriented energy utilization metric, namely minimum energy consumption (MEC), as the minimum amount of energy required to successfully transmit a certain amount of data over a wireless channel. The MEC will be a function of the data amount H, denoted by E_min(H). For a given H value, MEC will vary with the transmission power, the channel bandwidth, the channel realization, and the adopted transmission strategy. Similar to the previous section, we can define the energy outage rate (EOR) as the probability that the MEC for a certain amount of data is greater than a threshold energy amount. In particular, EOR is mathematically defined as EOR = Pr[E_min(H) > E_th], where E_th denotes the energy threshold. Equivalently, EOR can be calculated as the probability that the per-bit energy consumption is greater than a threshold value E_th/H. As such, EOR serves as a statistical characterization of the energy efficiency experienced by an individual data transmission session. To illustrate further, we study the EOR performance of continuous power adaptation (CPA) over a point-to-point wireless channel that introduces slow flat fading. With CPA, the transmitter adapts the transmission power to the channel condition while maintaining a constant received SNR, denoted by γ_c, under the peak transmission power constraint P_max (also known as truncated channel inversion [11]). Figure 4 illustrates the EOR performance of CPA over slow Rayleigh fading channels. We can see that maintaining a higher target received SNR with CPA leads to a larger EOR. This can be explained by noting that a higher γ_c implies a larger transmission power during transmission on average.
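A similar numerical sketch can illustrate EOR for CPA. The snippet below draws slow Rayleigh fading realizations, holds the received SNR at the target γ_c whenever the peak power suffices, and counts how often the resulting transmission energy exceeds E_th. Energy spent while waiting for a favourable channel is ignored, and all parameter values are illustrative placeholders; this is only a rough companion to Fig. 4, not the analysis used to produce it.

```python
import numpy as np

def eor_cpa_rayleigh(H_bits, bandwidth_hz, avg_peak_snr, gamma_c, p_max_w,
                     e_th_j, n_trials=1_000_000, seed=0):
    """Monte-Carlo estimate of the energy outage rate (EOR) for continuous
    power adaptation (truncated channel inversion) over slow Rayleigh fading.

    gamma_peak is the received SNR that peak power p_max_w would achieve on a
    given channel realization (exponential with mean avg_peak_snr). The
    transmitter waits whenever gamma_peak < gamma_c and otherwise inverts the
    channel to hold the received SNR at gamma_c.
    """
    rng = np.random.default_rng(seed)
    gamma_peak = rng.exponential(avg_peak_snr, size=n_trials)
    tx = gamma_peak >= gamma_c                    # realizations over which transmission occurs
    if not np.any(tx):
        return 1.0                                # target SNR never reachable within the power budget
    p_tx = p_max_w * gamma_c / gamma_peak[tx]     # power needed to hold the target SNR
    t_tx = H_bits / (bandwidth_hz * np.log2(1.0 + gamma_c))  # constant-rate transmission time
    e_min = p_tx * t_tx                           # energy of the eventual transmission
    return np.mean(e_min > e_th_j)                # EOR = Pr[E_min(H) > E_th]

# Illustrative numbers: 1 kB packet, 1 MHz bandwidth, 10 dB average peak SNR, 3 dB target SNR
print(eor_cpa_rayleigh(H_bits=8_000, bandwidth_hz=1e6, avg_peak_snr=10.0,
                       gamma_c=2.0, p_max_w=0.1, e_th_j=5e-5))
```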
We also observe from Figure 4 that a larger peak transmission power results in a larger EOR, especially when the energy threshold is large. With CPA, a larger P_max will lead to a larger probability of transmission for the same target SNR. We can conclude that from the individual data transmission session perspective, lower power and smaller transmission rate lead to higher energy efficiency. Figure 5 plots the cumulative distribution function (CDF) of the waiting time before data transmission with CPA over slow Rayleigh fading channels. Note that the delivery time is simply equal to the sum of the waiting time and the transmission time, the latter being equal to H/(B log2(1 + γ_c)). We again examine the effect of the peak transmission power and the target received SNR during transmission. We can see that maintaining a higher target SNR with CPA results in a longer waiting time, as intuitively expected. We also observe from Figure 5 that a larger peak transmission power helps reduce the waiting time. With CPA, a larger P_max will lead to a larger probability of transmission for the same target SNR, whereas a smaller P_max will ensure that the system transmits only over more favourable channel conditions and as such reduces the energy consumption. We conclude that different γ_c/P_max values lead to different trade-offs between energy efficiency and transmission delay. Concluding Remarks In this article, we presented a new perspective on wireless transmission technology design, particularly targeting effective QoS provision in future wireless systems. The data-oriented approach brings interesting new insights to wireless communications. Through the newly proposed data-oriented performance limits, we observe that while OPRA always outperforms ORA from the ergodic capacity perspective, OPRA is not always the preferred transmission scheme from the individual data transmission session perspective, especially for critical MTC. We also note that there is a trade-off between energy efficiency and delay with the CPA strategy. In this article, we illustrated the main idea of the data-oriented approach through the definition of performance metrics and the characterization of performance limits, assuming ideal rate- and power-adaptive transmission schemes with perfect CSI at the transmitter. There are many directions in which to further explore the data-oriented approach for wireless system design. In particular, the practical limited-CSI and even no-CSI at the transmitter scenarios are of significant practical interest. Adaptive modulation and coding (AMC) and automatic repeat request (ARQ) are two popular transmission strategies that exploit limited feedback from the receiver. An initial investigation of the transmission time of a large amount of data with discrete rate adaptation over fading channels has been recently reported [12]. With the established performance limits, we can carry out transmission strategy design and optimization from the data perspective. The general design goal is to arrive at the best transmission strategy for the data to be transmitted over the prevailing channel condition. For example, sophisticated coding and diversity schemes should be invoked for URLLC transmission, whereas non-coherent, energy-efficient modulation schemes should be adopted for massive MTC transmission. Given the generally complex channel and interference conditions, conventional optimization solutions based on analytical performance results may not be feasible. Off-line deep learning combined with light-weight on-line reinforcement learning may engender favourable solutions.
The Sonora Substellar Atmosphere Models. II. Cholla: A Grid of Cloud-free, Solar Metallicity Models in Chemical Disequilibrium for the JWST Era Exoplanet and brown dwarf atmospheres commonly show signs of disequilibrium chemistry. In the James Webb Space Telescope era, high resolution spectra of directly imaged exoplanets will allow the characterization of their atmospheres in more detail, and allow systematic tests for the presence of chemical species that deviate from thermochemical equilibrium in these atmospheres. Constraining the presence of disequilibrium chemistry in these atmospheres as a function of parameters such as their effective temperature and surface gravity will allow us to place better constraints on the physics governing these atmospheres. This paper is part of a series of works presenting the Sonora grid of atmosphere models (Marley et al. 2021, Morley et al. in prep.). In this paper we present a grid of cloud-free, solar metallicity atmospheres for brown dwarfs and wide-separation giant planets with key molecular species such as CH4, H2O, CO and NH3 in disequilibrium. Our grid covers atmospheres with Teff~[500 K, 1300 K], log g~[3.0, 5.5] (cgs) and an eddy diffusion parameter of log Kzz=2, 4 and 7 (cgs). We study the effect of different parameters within the grid on the temperature and composition profiles of our atmospheres. We discuss their effect on the near-infrared colors of our model atmospheres and the detectability of CH4, H2O, CO and NH3 using the JWST. We compare our models against existing MKO and Spitzer observations of brown dwarfs and verify the importance of disequilibrium chemistry for T dwarf atmospheres. Finally, we discuss how our models can help constrain the vertical structure and chemical composition of these atmospheres. INTRODUCTION With thousands of exoplanets and brown dwarfs detected to date, "substellar science" has turned its focus from the mere detection to the characterization of these objects. The characterization of the atmospheres of exoplanets and brown dwarfs is of prime interest as it holds key information on the composition and evolution of the atmosphere and, indirectly, of the protoplanetary disk from which it formed. The characterization of exoplanet and brown dwarf atmospheres is done either by detailed comparison of observations to grids of self-consistent forward models (e.g., Marley et al. 2012; Allard et al. 2012; Allard 2014), or by MCMC-driven retrievals on these spectra (e.g., Line et al. 2015; Burningham et al. 2017; Kitzmann et al. 2019). Models of atmospheres with equilibrium chemistry can provide a good fit to a large number of atmosphere spectra (e.g., Buenzli et al. 2015; Yang et al. 2016; Kreidberg et al. 2014). However, a number of exoplanet and brown dwarf spectra suggest that their atmospheres are in chemical disequilibrium, with species such as CH4, CO and NH3 being enhanced (CO) or depleted (CH4 and NH3) in comparison to their equilibrium abundance profiles (e.g., Saumon et al. 2006; Moses et al. 2011; Barman et al. 2015). Hotter atmospheres tend to be governed by chemical equilibrium, while cooler atmospheres can be in chemical equilibrium in deeper, hotter layers and in disequilibrium in the higher, cooler, visible layers due to quenching (see the review by Madhusudhan et al. 2016). Chemical disequilibrium due to quenching was first suggested in the atmosphere of Jupiter (Prinn & Barshay 1977) and later in brown dwarfs (Fegley & Lodders 1996).
Chemical disequilibrium has since been observed in the atmospheres of a number of exoplanets and is shown to be ubiquitous for cooler brown dwarfs (e.g., Noll et al. 1997; Geballe et al. 2001a; Burgasser et al. 2006; Saumon et al. 2006; Leggett et al. 2007b; Moses et al. 2011; Miles et al. 2020). Quenching happens in an atmosphere when transport processes, like convection or eddy diffusion, transport molecules higher up into cooler atmospheric layers where the chemical reaction times are slower. Because the rate of transport of the molecules is faster than the chemical reaction rates, the bulk composition of the atmosphere will still be representative of the deeper, hotter layers, such that the abundances of the higher, cooler layers will be out of equilibrium given the local pressure and temperature conditions. There are two particular atmospheric pairs of quenched species that involve important molecules in exoplanet and brown dwarf atmospheres: CO-CH4 and N2-NH3. An overview of the chemical reactions and intricacies of each cycle can be found, for example, in Zahnle & Marley (2014) and Madhusudhan et al. (2016, and references therein). Previous studies have shown the importance of quenching and the resulting chemical disequilibrium for a number of observations. A previous work that was similar in scope to our own is that of Hubeny & Burrows (2007). They studied the effect of disequilibrium chemistry on the spectra of L and T dwarfs as a function of gravity, eddy diffusion coefficient, and the speed of the CO/CH4 reaction and found that all these parameters influence the magnitude of the departure from chemical equilibrium in their model atmospheres. Following up on those studies, Zahnle & Marley (2014) performed a theoretical study of CH4/CO and NH3/N2 chemistry and highlighted the importance that surface gravity should play in the quenching of exoplanet and brown dwarf atmospheres. Zahnle & Marley (2014) also examined the influence of the atmospheric scale height and atmospheric structure on quenching. A number of papers have modeled quenched atmospheres to study the effect of quenching on atmosphere spectra and/or fit observations of brown dwarf or imaged exoplanet spectra. Most of these, however, calculate the spectra of exoplanet atmospheres using atmospheric temperature-pressure and composition profiles that are calculated independently of each other (i.e., the change of the composition profile does not inform and, potentially, alter the temperature-pressure profile of the atmosphere, which in turn may affect the composition profile). Models that use a self-consistent scheme to study the effect of quenching on atmospheres are still rare. Hubeny & Burrows (2007) showed that not calculating the atmospheric profiles in a self-consistent way could result in errors of up to ∼100 K at a given pressure. Recently, Phillips et al. (2020) presented an update of the 1D radiative-convective model ATMO that calculates the atmospheric profiles of an atmosphere in a self-consistent way. Phillips et al. (2020) applied their updated code to cool T and Y dwarfs, showed that quenching affects the atmospheric spectra, and proposed that the 3.5-5.5 µm window can be used to constrain the eddy diffusion of these cool atmospheres. Even in the case of hot Jupiters, Drummond et al. (2016) showed that for strong chemical disequilibrium cases, not using a self-consistent scheme can also lead to errors of up to ∼100 K at a given pressure.
HST and ground-based observations of exoplanet and brown dwarf atmospheres have allowed us to get a first glimpse of their atmospheric composition. Miles et al. (2020), e.g., presented low resolution ground-based observations of seven late T to Y brown dwarfs and showed how their observations can constrain the log K_zz of these atmospheres when compared with theoretical models. In the JWST era the number of exoplanets with high quality spectra will increase significantly. JWST observations, in addition to observations with the forthcoming Extremely Large Telescopes (ELTs) on the ground, will allow the community to study atmospheric compositions in more detail and to test for the existence of disequilibrium chemistry in atmospheres as a function of atmospheric properties, such as effective temperature, surface gravity, metallicity, insolation, etc., and will allow us to place significantly better constraints on exoplanet and brown dwarf atmospheric physics. The long-wavelength coverage of JWST observations will allow for the first time the simultaneous characterization of multiple pressure layers in these atmospheres and enable us to constrain changes in atmospheric chemistry and cloud composition with pressure. However, the accuracy of our retrievals will depend on the accuracy of our models. To prepare for this era, a grid of model spectra and composition profiles is needed that can be used for the characterization of exoplanet and brown dwarf atmospheres. Here, we expand on the work of Zahnle & Marley (2014) to study the effect of quenching on model atmospheres, via the calculation of self-consistent temperature-pressure and composition profiles. This paper is part of a series of studies that present the Sonora grid of atmosphere models. Marley et al. (2021) presented the first part of Sonora: a grid of cloud-free atmosphere models and spectra, named Sonora Bobcat, spanning a set of metallicities ([M/H] = -0.5 to +0.5) and C/O ratios (from 0.5 to 1.5 times solar). Morley et al. (in preparation) will present the extension of the Sonora grid to cloudy atmospheres. In this paper we present the extension of the Sonora grid to cloud-free, solar metallicity atmospheres in chemical disequilibrium. We have adapted our well-tested radiative-convective equilibrium atmospheric structure code, previously described in, e.g., Marley et al. (2002); Fortney et al. (2005, 2008); Marley et al. (2012); Morley et al. (2012), to model atmospheres in chemical disequilibrium due to quenching. As the follow-on to our first-generation grid, Sonora Bobcat (Marley et al., submitted), we name this grid Sonora Cholla. In Cholla we allow the following important and interconnected species to be in disequilibrium in our model atmospheres: CO-CH4-H2O-CO2 and NH3-N2-HCN. To define the disequilibrium volume mixing ratios of our atmospheres we followed the treatment of Zahnle & Marley (2014) (for more details see Sect. 2). We present models for atmospheres with effective temperatures ranging from 500 K to 1300 K, log g = [3.0, 5.5] (cgs) and various values of the eddy diffusion coefficient. The extension of the grid to cloudy atmospheres and different metallicities will be part of future work. Our grid models and spectra are provided open-access to the community as an extension of the Sonora grid (Marley et al. 2018). This paper is organized as follows: In §2 we discuss the updates to our code and its validation. In §3 we present results from our grid of quenched atmospheres.
We then discuss the effect of quenching on the temperature-pressure profiles (§3.1) and composition profiles (§3.2) of our model atmospheres, followed by its effect on the detection of CH4, CO, H2O and NH3 by JWST (§3.3) and on the near-infrared colors of our model atmospheres (§3.4). Finally, in §4 we discuss the importance of quenching in exoplanet and brown dwarf atmospheres and how JWST will improve our understanding of chemistry changes in atmospheres, and we present our conclusions. THE CODE We used the atmospheric structure code of M. S. Marley and collaborators, as described in a variety of works (Marley et al. 1996, 2002; Fortney et al. 2005, 2008; Morley et al. 2012; Marley et al. 2021), which follows an iterative scheme to calculate self-consistently the temperature-pressure profile, composition profiles, and emission spectra of a model atmosphere. The code uses the correlated-k method to calculate the wavelength-dependent gaseous absorption of the atmosphere (e.g., Goody et al. 1989). In its original version, the k-absorption tables the code uses are 'premixed', i.e., the mixing ratio of different species is specified for a given (pressure, temperature) point. Modeling an atmosphere in chemical disequilibrium with such a scheme requires the re-calculation of k-absorption tables for every quenched 'premixed' atmosphere. Adaptations of the code We adapted the radiative-transfer code to follow the random overlap method with resorting-rebinning of k-coefficients as described in Amundsen et al. (2017). The advantage of this method is that it allows the calculation of the k-absorption coefficients of a variable mix of species, with the abundance of every species (and thus its influence on the total absorption of a layer) being determined at run time. Changes in the temperature-pressure and composition profiles of an atmosphere directly inform changes in the absorption of an atmospheric layer, and allow the calculation of the properties of a quenched atmosphere in a self-consistent scheme. This in turn requires iteration for the atmosphere to converge. We then allowed the following species to be in chemical disequilibrium due to quenching in our model atmospheres: CH4, CO, NH3, H2O, CO2, HCN and N2. Quenching of these species was calculated following Zahnle & Marley (2014). At any given point in the iterative scheme, i.e., for every temperature-pressure profile (hereafter TP profile), the code calculates the quenching level for every species (CO, CH4, H2O, CO2, NH3, N2, HCN) following Zahnle & Marley (2014), and quenches the composition profile of each species accordingly. In particular, for every set of species we calculate, for each layer of our model atmosphere, the mixing timescale t_mix, which depends on the eddy diffusion coefficient K_zz, and the chemical reaction timescale of the species, t_species, which depends on the pressure and temperature of the layer (see Zahnle & Marley 2014). The deepest layer for which t_mix < t_species is set as the quenching level, since above it the chemical reactions are slower than the mixing and the latter takes over in the atmosphere. The composition of each species for every layer at pressures lower than the quenching level is kept constant at the value of the quench level. The quenched profiles are then used in the radiative transfer scheme for the next iteration.
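To make the quenching step concrete, the following minimal sketch applies the scheme just described to a single species' equilibrium profile. The mixing timescale is approximated as t_mix = H^2/K_zz (a common choice), and the chemical timescale is written in a generic Arrhenius-like form whose coefficients (a_chem, e_chem) are placeholders rather than the actual rate expressions of Zahnle & Marley (2014); this is illustrative only and is not the Sonora Cholla implementation.

```python
import numpy as np

def quench_profile(pressure_bar, temperature_k, x_eq, scale_height_cm, kzz_cgs,
                   a_chem=1.0e-6, e_chem=40000.0):
    """Apply a simple quenching scheme to an equilibrium mixing-ratio profile.

    pressure_bar, temperature_k, x_eq and scale_height_cm are layer arrays
    ordered from the top of the atmosphere downward. The mixing timescale is
    approximated as t_mix = H^2 / Kzz, and the chemical timescale by a generic
    Arrhenius-like form t_chem = a_chem / p * exp(e_chem / T); a_chem and
    e_chem are placeholders, not the Zahnle & Marley (2014) coefficients.
    """
    t_mix = scale_height_cm**2 / kzz_cgs
    t_chem = a_chem / pressure_bar * np.exp(e_chem / temperature_k)

    # Quench level: deepest layer where mixing is still faster than chemistry.
    fast_mixing = np.where(t_mix < t_chem)[0]
    if fast_mixing.size == 0:
        return x_eq.copy()          # chemistry wins everywhere: no quenching
    i_quench = fast_mixing[-1]      # deepest such layer (arrays are top-down)

    # Above the quench level the abundance is frozen at its quench-level value.
    x_quenched = x_eq.copy()
    x_quenched[:i_quench] = x_eq[i_quench]
    return x_quenched
```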
In the current version of the code we have used a simple profile for the eddy diffusion coefficient K_zz, keeping it constant at the noted model value for all levels. We have set the code up in a modular way such that in future iterations we will be able to use K_zz profiles that vary with pressure (see Sect. 4). Finally, in its original version the code performs the radiative-transfer calculations using 196 wavelength bins, chosen appropriately to take into account the major spectral features in an atmosphere. As we discuss in Sect. 2.2.1, we increased the number of bins to 661 to improve the accuracy of our code when mixing molecular opacities. We note that members of our group have previously explored the effect of disequilibrium chemistry in the atmospheres of brown dwarfs and exoplanets (e.g., Saumon et al. 2007; Burningham et al. 2011; Moses et al. 2016). For instance, Saumon et al. (2007) and Burningham et al. (2011) explored the effect of disequilibrium chemistry in the atmospheres of Gliese 570D, 2MASS J1217-0311, 2MASS J0415-0935 and Ross 458C. These papers used a simpler, but not fully self-consistent, method described in Saumon et al. (2006) to calculate the quenched volume mixing ratios of an atmosphere. Self-consistent radiative-convective atmosphere models were calculated assuming equilibrium chemistry, and then non-equilibrium chemical abundances were explored later, varying the strength of vertical mixing. While this method proved extremely useful for exploring the phase space of disequilibrium chemistry and often provided good fits to observations, the calculation of the quenched volume mixing ratios is not performed in a self-consistent way in the radiative transfer scheme. That is, the altered chemical abundances did not feed back into changes in the atmospheric radiative-convective equilibrium TP profile. Here, we updated our code to calculate the properties of an atmosphere with disequilibrium chemistry in a fully self-consistent way. Finally, we note that various teams have explored different ways to define the quenching levels in an atmosphere and their effect on atmospheric chemistry (e.g., Visscher 2012; Drummond et al. 2016; Tsai et al. 2018). While using a full chemical kinetics network might be more accurate for 3D general circulation models of irradiated exoplanets (e.g., Tsai et al. 2018), Zahnle & Marley (2014) showed that for the atmospheres of interest in this paper the approximation we use is faster than a chemical network and remains accurate. In particular, Zahnle & Marley (2014) compared the quenching levels retrieved from a full kinetics model, which uses the entire network of chemical reactions possible in these atmospheres, against the Arrhenius-like timescale t_species we adopt in this paper. Zahnle & Marley (2014) showed that this approximation is valid for self-luminous atmospheres, but cautioned that it might not work for strongly irradiated exoplanets. In this paper we focus on self-luminous atmospheres and we adopted the Zahnle & Marley (2014) approximation. Our code is modular and allows for the future implementation of other quenching schemes, to model highly irradiated atmospheres, for example. Validating the code We validated the code against its well-tested premixed version for atmospheres with different compositions and at different effective temperatures and gravities. We tested the effect of mixing the k-coefficients at run time on the TP profile versus the 'premixed' k-coefficients.
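For reference, the core resort-and-rebin operation used when combining the k-distributions of two gases at run time can be sketched as follows. This is a minimal illustration of the random overlap method with resorting-rebinning (Amundsen et al. 2017), with simplified rebinning and no performance optimizations; it is not the implementation used in our code, and the quadrature grids are assumed rather than taken from our opacity tables.

```python
import numpy as np

def mix_k_random_overlap(k1, w1, x1, k2, w2, x2):
    """Combine the k-distributions of two gases via random overlap with
    resorting and rebinning (minimal sketch).

    k1, k2 : k-coefficients of each gas on its g-point grid
    w1, w2 : corresponding quadrature weights (each summing to 1)
    x1, x2 : volume mixing ratios of the two gases
    Returns mixed k-coefficients rebinned onto the g-grid of gas 1.
    """
    # Random overlap: every pair of g-points contributes, with weight w1*w2.
    k_pair = (x1 * k1[:, None] + x2 * k2[None, :]).ravel()
    w_pair = (w1[:, None] * w2[None, :]).ravel()

    # Resort the pairwise opacities in increasing order.
    order = np.argsort(k_pair)
    k_sorted, w_sorted = k_pair[order], w_pair[order]
    g_sorted = np.cumsum(w_sorted)

    # Rebin back onto the cumulative g-grid of gas 1 by weighted averaging.
    g_edges = np.concatenate(([0.0], np.cumsum(w1)))
    k_mixed = np.empty_like(k1, dtype=float)
    for i in range(len(k1)):
        in_bin = (g_sorted > g_edges[i]) & (g_sorted <= g_edges[i + 1])
        if in_bin.any():
            k_mixed[i] = np.average(k_sorted[in_bin], weights=w_sorted[in_bin])
        else:  # empty bin (rare): fall back to the nearest resorted point
            idx = min(np.searchsorted(g_sorted, 0.5 * (g_edges[i] + g_edges[i + 1])),
                      len(k_sorted) - 1)
            k_mixed[i] = k_sorted[idx]
    return k_mixed
```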
We calculated 'premixed' k-coefficients where we kept the abundance of key species at a constant value above their quenching level. We then ran the premixed version of the code with these k-coefficients and compared the resulting TP and composition profiles against those of the new code. In Fig. 1 we show the TP profile and the composition profiles of CH4 and H2O for a quenched atmosphere with T_eff = 1000 K and g = 1000 m s^-2. The premixed quenched atmosphere profiles are plotted with the red, dashed lines, and our new quenched atmosphere profiles, run with the same chemistry, with the blue, solid lines. The profiles match, with relative errors of ≲10^-4. Our quenched models were found to fit the well-tested premixed models, with the colder atmospheres having a relatively larger error (≲10^-2 at 400 K) than the hotter atmospheres (≲10^-4 at 1000 K). The corresponding errors in the produced spectra were also dependent on the temperature of the model, with the hotter quenched atmosphere models showing a better match to the premixed models (relative error ≲5% at 1600 K) than the colder quenched models (relative error ≲10% at 400 K; see also Fig. 2). Finally, we compared the T_eff of our mixed k-coefficient model atmospheres with the T_eff of the corresponding premixed model atmospheres across the full parameter space covered by our models. The relative error in the T_eff on which our models converged was ≲0.001% across the parameter space. For example, the absolute error of T_eff for our T_eff = 650 K models was 0.2 K for all log g, and for the T_eff = 1250 K models it was 0.1 K. Thus, the introduction of disequilibrium chemistry in a self-consistent scheme does not affect the effective temperature of our model atmospheres. We also tested the accuracy of the resorting-rebinning of k-coefficients method. Indeed, we found that using our code's "native" resolution of 196 bins resulted in large discrepancies in the converged TP and composition profiles and spectra of our new code versus those of the well-tested premixed code. Increasing the number of bins progressively improved the accuracy (see Fig. 2). Increasing the number of wavelength bins, though, increases the time the model atmosphere code needs to converge to a solution. We ran a number of test cases at different resolutions ranging from 196 to 890 windows and compared the increase in accuracy of the converged TP and composition profiles and spectra with the increase in code running time. Following this procedure we chose to run our grid at a resolution of 661 windows, with the higher density of points typically placed where the H2O and CH4 opacities were largest. As can be seen in Fig. 2, the agreement with the premixed version depends on the temperature of the model atmosphere, with the colder quenched atmospheres (≲600 K) showing a larger deviation from the premixed quenched atmospheres (top and middle panels) for the same chemistry. Validating our code against previously published results We also compared our output against published results from other groups. In Fig. 3 (top panel) we show spectra between 3 µm and 16 µm for our model atmospheres with T_eff = 900 K, log g = 5.5, for the equilibrium chemistry case (black line) and for the disequilibrium chemistry cases with log K_zz = 2 (green line), 4 (red line) and 7 (blue line). For comparison, we also show post-processed disequilibrium spectra (following the same method as in Geballe et al. 2009) for the log K_zz = 4 (dark red dashed-dotted line) and 7 (light blue dashed-dotted line) cases.
The post-processed spectra are obtained from disequilibrium chemistry computed with the TP structure of the atmosphere in equilibrium. The spectra were binned down to a resolution of 200 for plot clarity. The middle panel is a zoom-in on the 3 µm to 6.5 µm region, where we overplotted the 900 K, log g = 5.5 and log K_zz = 4 'fast' model of Hubeny & Burrows (2007) (gray, dashed line) from www.astro.princeton.edu/~burrows/non/non.html. These figures are comparable to Figs. 12-13 of Hubeny & Burrows (2007). Finally, the bottom panel shows the volume mixing ratios of our model atmospheres with log g = 5.5 and T_eff = 800 K, 1000 K and 1200 K, for the equilibrium chemistry models (dashed lines) and the disequilibrium chemistry models with log K_zz = 4 (in cm^2/s; solid lines). This figure is comparable to Fig. 2 of Hubeny & Burrows (2007). Our model atmospheres have comparable volume mixing ratio patterns to the model atmospheres of Hubeny & Burrows (2007). However, slight changes in the TP profiles during our iterative calculation of the TP and composition profiles resulted in a slightly lower CO content than the equilibrium models even deeper than the CO quenching level, unlike the Hubeny & Burrows (2007) models. The CO absorption in the 4.7 µm window (middle panel) increased with increasing log K_zz in a comparable way to Hubeny & Burrows (2007). The different opacities used by Hubeny & Burrows (2007) also cause differences in the spectra. Finally, in agreement with these authors, and as first pointed out and explained for Gl 570D, we found that the NH3 absorption at wavelengths ≳10 µm is relatively insensitive to our log K_zz over the range of values investigated. Finally, comparing our model disequilibrium spectra with the post-processed disequilibrium spectra, we note an overall agreement between 3 µm and 16 µm. The most prominent difference is in the ∼10.5 µm NH3 feature, where our absorption is ∼30% deeper than for the post-processed spectra. In Fig. 4 (bottom panel) we show our disequilibrium model spectrum (blue lines) and the corresponding post-processed spectrum (cyan dashed line) for a T_eff = 800 K, log g = 5.0 model. In this case our ∼10.5 µm NH3 feature is in agreement with the post-processed model. Both our models are cooler than the equilibrium models (and thus the post-processed models) by 30 K to 120 K throughout the atmosphere. In the pressure range probed by the NH3 feature, the change in the temperature and NH3 content of our atmosphere for the (900 K, 5.5) model is smaller than the change for the (800 K, 5.0) model. This results in a deeper NH3 feature for our (900 K, 5.5) model, which is closer to the equilibrium feature than the post-processed model.
Using models of atmospheres with disequilibrium chemistry, Saumon et al. (2006) obtained a good fit to the spectrum of Gl570D, including the 10.5 µm NH3 feature. Saumon et al. (2006) tested different possible sources for the NH3 depletion in the atmosphere of Gl570D and concluded that it must be due to disequilibrium chemistry. Hubeny & Burrows (2007) also showed that an 800 K, log g = 5.0, log K_zz = 4 model gave a good fit to the 6 µm - 12 µm window observations. Line et al. (2015) performed a retrieval on the Gl570D spectra and retrieved a T_eff of 714 (+20/-23) K and log g = 4.76 (+0.27/-0.28). We fit the spectrum of Gl570D using models at 700 K, 750 K, 800 K and 850 K and log g = 4.5, 4.75, 5.00 and 5.25. In Fig. 4 we show our model spectra at (T_eff, log g) = (800 K, 5.0) as representative of the best-fit cases of Saumon et al. (2006) and Hubeny & Burrows (2007). The top panel shows the NIR Gl570D observations (black line), as well as the (800 K, 5.0) equilibrium model (red line) and our log K_zz = 4 model (blue line), while the bottom panel shows the mid-IR Gl570D observations. We also show a post-processed disequilibrium spectrum for the (800 K, 5.0) model atmosphere with log K_zz = 4 (cyan, dashed line) for comparison. Figure 3. Top panel: 3 µm - 16 µm spectra of our 900 K, log g = 5.5 model atmospheres with equilibrium chemistry (black line) and with disequilibrium chemistry with log K_zz = 2 (green line), 4 (red line) and 7 (blue line). We also show the corresponding post-processed disequilibrium spectra for log K_zz = 4 (dark red dashed-dotted line) and 7 (light blue dashed-dotted line) for reference. The spectra were binned down to a resolution of 200 for plot clarity. Middle panel: same as the top panel, but zoomed in to wavelengths between 3 and 6 µm. Here, we overplotted the corresponding Hubeny & Burrows (2007) 'fast' model from www.astro.princeton.edu/~burrows/non/non.html (gray dashed line). Bottom panel: volume mixing ratios for CH4 (green lines), H2O (black lines), NH3 (red lines) and CO (blue lines) for our disequilibrium chemistry models with log K_zz = 4 (solid lines) and equilibrium chemistry models (dashed lines) for atmospheres with log g = 5.5 and T_eff = 800 K, 1000 K and 1200 K. Note that the colder a model is, the lower its minimum temperature (so the further to the left its line extends). We binned down our model spectra to a variable resolution comparable to the observations for plot clarity. In agreement with Saumon et al. (2006) and Hubeny & Burrows (2007), our (800 K, 5.0) disequilibrium models gave the best fit to the observations of Gl570D across the 1.1 µm - 14 µm spectrum (smaller χ2 by a factor of ∼3), with the exception of the 2.0 µm - 2.2 µm window, where our disequilibrium models were underluminous in comparison to the observations (χ2 increased by a factor of ∼2.2). The (700 K, 5.0) models, which are representative of the best-fit case of Line et al. (2015) (not shown here), had a poor fit in the 10.0 µm - 12 µm window, where NH3 dominates, and a worse overall χ2 fit (larger by a factor of 1.8) than the 800 K model. The underluminosity of our models in the K band and a mismatch at wavelengths shortward of 1.1 µm are due to inaccuracies in our alkali opacities database and, potentially, our CIA and other opacities in the K band. Addressing these issues is part of ongoing work.
The post-processed spectrum provides a good fit to the observations of Gl570D, and is slightly underluminous in comparison to the observations in the 2.0 µm - 2.2 µm window, like our self-consistently calculated spectrum. However, the TP profile of the post-processed spectrum is 15 K to 100 K hotter than our self-consistent TP profile in the pressure range probed by the NIR Gl570D spectrum. This could lead to erroneous conclusions about the atmospheric structure when we characterize an atmosphere with post-processed spectra. In Fig. 5 we plot the CH4 and NH3 volume mixing ratios of our quenched atmosphere models (solid lines) against the corresponding equilibrium ratios (dashed-dotted lines) and the retrieved ratios of Line et al. (2015) (shaded areas), for our best-fit disequilibrium chemistry model at 800 K (top panel) and the 700 K model (bottom panel) that is representative of the best-fit model of Line et al. (2015). For both models the CH4 content of our model atmospheres was within the range retrieved by Line et al. (2015). For NH3, however, our best-fit model has a lower volume mixing ratio than what Line et al. (2015) retrieved. The NH3 volume mixing ratio for the (700 K, 5.0) disequilibrium model was within the range retrieved by Line et al. (2015). We note, however, that the retrieval of Line et al. (2015) took into account only the 1.1-2.3 µm spectral range, while our best-fit model was also driven by the fit to the strong NH3 feature in the 10.0-12 µm window. This suggests that for retrievals in the JWST era the inclusion of the longer wavelength observations will be important to better constrain the NH3 content of an atmosphere. Figure 4. Observed spectrum (black solid line) with error bars (gray lines) of Gl570D and disequilibrium models for an atmosphere with T_eff = 800 K, log g = 5, representative of the best-fit models of Saumon et al. (2006) and Hubeny & Burrows (2007). The models are in equilibrium (red, solid line) or disequilibrium with log K_zz = 4 (blue, solid line). We also show a post-processed disequilibrium spectrum with log K_zz = 4 (cyan, dashed line) for comparison. Our model spectra were binned down to a variable resolution of 3,000 to 200, comparable to the observations, for plot clarity. The error bars are overplotted in both spectra, but the error is too small to be visible in the NH3 feature. To examine the effects of quenching on our model atmospheres as a function of temperature (T_eff), gravity (log g) and eddy diffusion parameter (K_zz), we have calculated a grid of models with T_eff from 500 K to 1300 K (with steps of 50 K), log g ranging from 3.0 to 5.5 (with steps of 0.25; cgs) and log K_zz = 2, 4 and 7 (cgs). For a number of models we also ran a case of log K_zz = 10. Our model atmospheres are cloud-free and have solar metallicity. The extension of the grid to cloudy atmospheres and atmospheres of different metallicities will be part of future work. For every model we created an output file for the Sonora grid with the TP and composition profiles for the following species: H2, He, CH4, CO, CO2, NH3, N2, H2O, TiO, VO, FeH, HCN, H, Na, K, PH3 and H2S. We then created high resolution emission spectra for these models using the radiative transfer code described in Morley et al. (2015). In Sect. 3.1 we study the effect of quenching on the TP profiles of the atmospheres, in Sect. 3.2 we study the effect of quenching on the composition profiles and in Sects.
3.3 and 3.4 we study the effect of quenching on the spectra and colors of our model atmospheres. TP profiles of quenched atmospheres In Figs. 6 and 7 we show the TP profiles of quenched model atmospheres with log g = 5.0 and T_eff of 650 K (red lines), 950 K (blue lines) and 1250 K (gray lines), together with their corresponding equilibrium profiles (dashed lines); and in Fig. 8 we show the TP profiles for atmospheres with T_eff = 1000 K, log K_zz = 4 (top panel) and 7 (bottom panel) and log g ranging from 3.0 (green lines) to 5.5 (gray lines). For a constant gravity of log g = 5.0 (Fig. 7), the colder a model atmosphere was (i.e., the lower its T_eff), the colder its upper atmosphere became with respect to the equilibrium model, both for log K_zz = 4 and 7. For the log K_zz = 7 models, the upper atmosphere (P ≲ 0.1 bar) cooled by 3.7%-20.9% for the T_eff = 1250 K models and by 3.9%-24.8% for the T_eff = 650 K models. The deeper atmosphere cooled down as well, but to a smaller degree. For a constant temperature of T_eff = 1000 K (Fig. 8), the higher the surface gravity of the atmosphere, the smaller δT was at all pressures, for both log K_zz = 4 and 7. This is due to the TP profile of the atmosphere shifting from deeper to lower pressures in the atmosphere and moving nearly vertically with respect to the CO/CH4 equilibrium lines (see Fig. 2). To complete our picture of the effect of T_eff and gravity on δT, in Fig. 9 we show the absolute δT (= T_eq − T_deq) for all our grid models at 0.5 bar (top panels), 7 bar (middle panels) and 20.0 bar (bottom panels), for log K_zz = 7. The pressures were chosen as representative of the upper, mid and deeper atmosphere. For log K_zz = 7 our model atmospheres were cooler than the equilibrium models. For log K_zz = 2 (not shown here) our quenched model atmospheres were cooler than the equilibrium chemistry ones across most pressure layers, except for the higher-T_eff, low-g models, where at higher pressures the atmospheres heated up. Composition profiles of quenched atmospheres In Fig. 10 (top panel) we show the volume mixing ratio profile of H2O for the quenched and equilibrium model atmospheres for a model atmosphere with T_eff = 800 K, log g = 5.0 and log K_zz = 2 (black, solid line), 4 (purple, dashed line), 7 (red, dashed-dotted line) and 10 (green, dotted line). We also show (middle panel) the volume mixing ratio of H2O for model atmospheres with log g = 5.0, log K_zz = 4 and T_eff of 700 K (red lines), 1000 K (blue lines) and 1300 K (gray lines); and (bottom panel) the volume mixing ratio of H2O for model atmospheres with log K_zz = 4, T_eff = 1000 K and log g of 3.0 (green lines), 3.5 (red lines), 4.5 (blue lines) and 5.5 (gray lines). Figure 9. Relative δT as a function of log g and T_eff for models with log K_zz = 7 at 0.5 bar (top panels), 7 bar (middle panels) and 20 bar (bottom panels). As expected, with increasing log K_zz our model atmospheres depart further from equilibrium chemistry, with the H2O volume mixing ratio reduced, in relative terms, by 2% for log K_zz = 2 up to 17% for log K_zz = 10. For a constant log g, the departure from equilibrium chemistry depends on the temperature of our model atmospheres (Fig. 10, middle panel).
In the upper atmosphere, for log K zz =4 the H 2 O volume mixing ratio of our atmosphere was reduced by 2% at 700 K to 38.8% at 1300 K. Finally, as expected (see also Zahnle & Marley 2014), for a constant temperature the departure from equilibrium chemistry depends strongly on the gravity of our model atmosphere (bottom panel of Fig. 10 and Fig. 11). The smaller log g is, the larger the depletion of H 2 O higher up in the atmosphere is. Due to quenching happening higher up in the atmosphere, the H 2 O profile of the atmosphere in deeper layers coincides with the equilibrium chemistry model profile. On the other hand, the larger log g is the larger the depletion of H 2 O deeper in the atmosphere is. As an indication of the changes in the volume mixing ratio of other species with log g and log K zz , Fig. 11 shows the volume mixing ratio at 0.1 bar for H 2 O (blue markers), CH 4 (red markers) and CO (green markers). Our model atmospheres have T eff = 800 K, log g= 5.0 (squares), 4.25 (circles) and 3.75 (diamonds) and different log K zz values. The volume mixing ratio of all three species changes with log K zz for both gravities. Similar to H 2 O (bottom panel of Fig. 10), the smaller log g is, the larger the change in CH 4 and CO is with log K zz . As noted in Zahnle & Marley (2014) log g affects the log K zz for which CO dominates the atmosphere. Finally, Figs. 12 and 13 show the relative changes in the H 2 O content of our model atmospheres as a function of T eff and log g, and for various log K zz and pressures. These figures are indicative of the trends in the relative change of different species' volume mixing ratios, and do not intend to show a complete picture of the grid. Detection of CH4, CO and NH3 in quenched atmospheres In Fig. 14 we plot spectra of our model atmospheres for log g=3.5 and 5.25, T eff of 750 K (top two spectra) or 1100 K (bottom two spectra) and log K zz =4 (red line) or 7 (blue line). We also plot the corresponding spectra of model atmospheres with equilibrium chemistry (green line). For plotting clarity we shifted our spectra by arbitrary amounts and binned our spectra down to a resolution of 500. Quenching of CH 4 , CO, H 2 O and NH 3 affected the composition profiles of these species in our model atmospheres and thus the detectability of these species in the atmosphere. For log g=5.25 and T eff 1100 K the changes in the composition of our model atmospheres in reference to the equilibrium model were small for CH 4 , H 2 O and NH 3 resulting in non-detectable changes in the model spectra in the major absorption bands of these species. Notably, quenching affected the strong CO band around 4.7 µm resulting in a large difference (δF∼40%-50% for log K zz =4 or 7 respectively at 1100 K, to δF∼50%-60% at 750 K) for all atmospheres. In this Section we test the detectability of CH 4 , CO, H 2 O and NH 3 for our model atmospheres with JWST. In Fig. 15 (top panel) we plot our model atmosphere spectra for atmospheres with T eff ranging from 650 K to 1300 K and log g=5.5. Red lines are used for the log K zz =4 models, blue lines for the log K zz =7 models and green lines for the equilibrium chemistry models. The spectra are plotted as they would be observed with JWST using NIRSPEC (covering the 0.97 µm -5.14 µm range). Note that these spectra also cover the wavelength range observed with the NIRISS Single-Object Slitless Spectroscopy (NIRISS SOSS; covering the 0.6 µm -2.8 µm range), but at a higher resolution, so we omit showing the NIRISS SOSS spectra here. 
We also plot the same models but for log g = 4.5 (middle panel) and 3.5 (bottom panel). For high- and intermediate-gravity atmospheres (log g = 5.5 and 4.5), JWST NIRISS-SOSS could detect departures from equilibrium in the H2O and CH4 content of an atmosphere for the cooler atmospheres (≲600 K); a small change in the NH3 content might be detectable around ∼1.6 µm, but no changes were observable for CO. This is due to the fact that the single CO absorption band in the NIRISS-SOSS wavelength range overlaps with absorption bands of other major species like CH4 and H2O. For the low gravities (log g = 3.5), JWST NIRISS-SOSS could more easily separate the different log K_zz cases in intermediate to cooler atmospheres (T_eff ≲ 900 K). The inclusion of the 4-5 µm window with NIRSpec allowed the detection of deviations from equilibrium in the H2O, CH4 and possibly NH3 content of our model atmospheres, as NIRISS-SOSS did, but also allowed the detection of variations in the CO content of our model atmospheres around the 4.7 µm CO absorption band, especially for the cooler atmospheres (T_eff ≲ 900 K). Finally, a small change was potentially observable around 4.14 µm due to changes in the PH3 content of the colder atmospheres. We note, though, that the overlap with an H2O and a CO absorption band may hinder this detection at low resolutions. The PH3 feature could be more easily detected for higher metallicity objects (Visscher et al. 2006; Miles et al. 2020). Exploring the effect of disequilibrium chemistry on atmospheres with non-solar metallicities will be part of future work. Finally, we note that the use of MIRI spectroscopy would allow the detection of NH3 in disequilibrium through changes in the 10.5 µm NH3 feature. In particular, our 500 K and 550 K models at a medium resolution (not shown here) showed a decrease in the flux in the 10.5 µm NH3 absorption feature by ∼40% and ∼30%, respectively, in comparison to the equilibrium flux, allowing the detection of NH3 in disequilibrium for these cooler model atmospheres. Colors of quenched atmospheres Quenching changed the composition of our model atmospheres and thus their spectra in comparison to the equilibrium models (see also Sect. 3.3). In this section we study the effect of quenching on the colors of our model atmospheres. In particular, we study the effect of quenching on the colors that JWST NIRCam would observe for our model atmospheres. We used NIRCam's F115W filter (J band), F162M (H), F210M (K) and F356W (L). In Fig. 16 we plot the resulting colors of our equilibrium and disequilibrium model atmospheres. Quenching clearly affected the colors of our model atmospheres. In H−K and J−K all disequilibrium atmospheres are bluer than the equivalent equilibrium chemistry models for all log K_zz. In J−H the temperature and gravity of the atmosphere influenced the color change with respect to the equilibrium models. The colder atmospheres (≲700 K) turned redder for all models and all log K_zz. At intermediate temperatures (700 K ≲ T_eff ≲ 1050 K) atmospheres shifted bluer for log K_zz = 2 and 4, while they turned redder for log K_zz = 7. Finally, the hotter (≳1100 K) atmospheres turned redder for log K_zz = 7, while for log K_zz = 2 and 4 they turned bluer, or had approximately the same color as the equilibrium models. To explain the color differences between our equilibrium and disequilibrium chemistry models we focus on the log g = 5.25 models. For the hotter models (≳1,000 K) most pressure layers were depleted in H2O, CH4 and NH3.
Some layers appeared to have an overabundance of these species due to the disequilibrium atmosphere following a different TP profile than the equilibrium one. For example, at 1300 K the disequilibrium model had more H2O than the equilibrium model deeper in the atmosphere (≳10 bar) and in a narrow pressure layer around ∼0.5 and ∼2 bar, but it was depleted in H2O in all other layers. CH4 was depleted at all pressures ≲1 bar and was overabundant in deeper layers. NH3 was overabundant at all pressures. On average, the J and H bands probed the same pressure layers in our 1300 K model atmospheres. The H band (at high resolution), though, probed a wider range of pressures across the band, varying from ∼1 bar at the edges down to ∼15 bar in the center of the band (Fig. 17). The J band, on the other hand, probed the ∼15-20 bar region throughout the band, with the exception of some narrow lines where lower pressures were probed and the wings of the band where pressures around ∼3.5 bar were probed. The pressures probed by both the J and H bands were depleted in H2O and had slightly overabundant NH3, while CH4 was slightly overabundant at the pressures probed by the J band and depleted at those probed by the H band. This resulted in the J band being dimmer for the disequilibrium model than the equilibrium one (δF ∼14%), while the H band had comparable brightness for the disequilibrium and equilibrium models. This resulted in redder J-H colors for the disequilibrium model. Figure 15. Spectra of a selection of our model atmospheres as they would be observed by JWST with NIRSpec at high resolution (G140H + G235H + G395H). Top panel: atmospheres with log g = 5.5 and T_eff ranging from 650 K to 1300 K. Green lines are atmospheres with equilibrium chemistry, while red and blue lines are quenched atmospheres with log K_zz = 4 or 7, respectively. Middle and lower panels: same as the top panel but for log g = 4.5 and 3.5, respectively. The disequilibrium model K band got dimmer than the equilibrium model K band (δF ∼33%), resulting in overall bluer J-K colors than the equilibrium chemistry models. The reason for the dimmer K band was that its longer wavelengths are dominated by a CH4 and an NH3 band, and its shorter ones by an H2O band. The K band probed pressures between ∼0.5 and ∼4.5 bar, where NH3 was overabundant by ∼50%-150% (relative variation). At these pressures H2O was slightly overabundant as well, while CH4 was slightly underabundant. This resulted in a dimmer K band for λ ≲ 2.15 µm. The longer wavelength part of the K band was slightly dimmer than or comparable to the equilibrium model one, due to including an NH3 and a CH4 window. Finally, errors in our CIA opacity database could also affect our K band and result in further dimming, as for Gl570D (see Sect. 2.3). Overall this resulted in a dimmer K band (relative change of ∼34% vs ∼13% for the J band for the log K_zz = 7 model) and bluer J-K and H-K colors. For intermediate models at ∼900 K and log g = 5.25, H2O and CH4 were depleted at most pressures except in the deeper atmosphere (deeper than ∼10 bar for log K_zz = 4 to ∼30 bar for log K_zz = 7). Finally, NH3 was depleted at all pressures ≲10 bar. The pressure ranges probed by the J, H and K bands in our 900 K model differ from those at 1300 K. The J band for the 900 K model probed pressures around 50 bar (see Fig.
17), which resulted in the J band being dimmer for the disequilibrium than the equilibrium models (relative δF 18%), since the pressure range probed covers areas where H 2 O is overabundant by a few percent (∼4%; all percentages are relative variations). The H band probed pressures around 15 bar. In the pressures probed by the H band, CH 4 was depleted by ∼20-30% and NH 3 was overabundant by ∼10%-30% (for log K zz = 4 and 7), which resulted in an overall dimmer H band (average relative δF ∼ 20%) and a slightly bluer J-H color for our disequilibrium atmosphere than the equilibrium atmosphere. Finally, the K band of our disequilibrium model became dimmer than the equilibrium one (average relative δF ∼ 46%) due to an overabundance of NH 3 and CH 4 in the pressures probed by the band. This resulted in bluer J-K colors for the disequilibrium atmospheres. Finally, for the even colder models at ∼650 K, the pressures probed by the J, H and K bands are 25 bar, which are overabundant in CH 4 , NH 3 and H 2 O, resulting in a dimmer J band (relative δF ∼ 48%), a less dim H band (relative δF ∼ 35%) and a dimmer K band (rel-ative δF ∼ 81%) resulting in redder J-H colors and bluer J-K and H-K colors. In Fig. 18 we plot our M J vs J-H and J-K CMD against the observations of Dupuy & Liu (2012) (top and middle panel), and the Spitzer IRAC M Ch1 vs Ch1-Ch2 against the ensemble of T and Y dwarfs presented in Kirkpatrick et al. (2019). Note that for this plot we used the MKO and Spitzer IRAC filters on our model spectra, to match the data from Dupuy & Liu (2012) and Kirkpatrick et al. (2019). We also note that we don't intend this plot as a characterization effort for any of the Dupuy & Liu (2012) or the Kirkpatrick et al. (2019) targets since both our disequilibrium and equilibrium models are cloudfree, solar metallicity models while a number of these targets are expected to be (at least partially) cloudy and could potentially have non-solar metallicities. Finally, to keep our plot consistent with the data we only plot models with T eff 1000 K for the Spitzer IRAC dataset. The disequilibrium chemistry models turned the J-H colors of our atmospheres redder than the equilibrium chemistry models for the later T-type atmospheres. This is in agreement with observations. However, comparing our models with the Dupuy & Liu (2012) observations in the T-dwarf regime it can be seen that our models are still bluer than the data for intermediate and low gravities (3.5 and 4.25 here). All our model atmospheres are cloud-free. The introduction of clouds and hazes in an atmosphere is known to turn the colors of atmospheres redder (Morley et al. 2012). The color discrepancy between our model atmospheres and observations necessitates extending the disequilibrium chemistry model grid to cloudy atmospheres. This will be part of a future paper. Based on their J-H and H-K (not shown here) colors a number of the observed early T dwarfs reside in both the equilibrium and disequilibrium chemistry space. When J-H, J-K and H-K is taken into account, our disequilibrium chemistry models fit better the mid T and later dwarfs than the equilibrium models. Additionally, the disequilibrium chemistry models give a better fit to the Spitzer 3.6 µm and 4.5 µm (Ch1 and Ch12; bottom panel of Fig. 18) for most of the ensemble of T dwarfs presented in Kirkpatrick et al. (2019). For the latest T ( T8.5) and Y dwarfs disequilibrium chemistry alone cannot provide a good match to the observed Spitzer colors. 
However, clouds are expected to play a crucial role in these atmospheres (Morley et al. 2012) and affect their colors. In a future paper we will extend the disequilibrium chemistry model grid to cloudy atmospheres and revisit this plot. Our findings support the importance of disequilibrium chemistry for T type dwarfs, which was already suggested by Saumon et al. (2006Saumon et al. ( , 2007 based on ob-servations of Gl570D, 2MASSJ04151954-0935066 and 2MASSJ12171110-0311131. A number of the T dwarfs in our sample like 2MASSJ11145133-2618235 (Leggett et al. 2007a), ULASJ141623.94+134836.3 (Burgasser et al. 2010), 2MASSJ09393548-2448279 (Burgasser et al. 2008) and 2MASSJ12373919+6526148 (Liebert & Burgasser 2007) have already been suggested to be in disequilibrium. For example, Burgasser et al. (2008) showed that the spectrum of 2M0939 (T8) was best fit with a log K zz =4 model. ULAS J1416+1348 (T7.5) was also found to be best-fit by a log K zz =4 model by Burgasser et al. (2010). On the other hand, 2M1237 (T7) and 2M1114 (T7.5) were observed to have faint K-bands which Liebert & Burgasser (2007) and Leggett et al. (2007a) attributed to a subsolar metallicity ([m/H]∼-0.3 for 2M1114) or high gravity. These authors did not explore the possibility of disequilibrium chemistry for these targets. Long wavelength coverage spectra that help retrieve the abundances of multiple species would allow us to disentangle the effect of metallicity and disequilibrium chemistry for these targets. DISCUSSION AND CONCLUSIONS JWST will enable the imaged exoplanet and brown dwarf community to study in more detail a range of exoplanet and brown dwarf atmospheres and perform comparative studies of their properties as a function of atmospheric properties (T eff , log g, metallicity etc). The longwavelength-coverage, high quality spectra of exoplanet and brown dwarf atmospheres that JWST will acquire will allow us to probe simultaneously a wider range of pressures than ever before and constrain chemistry and cloud changes in atmospheres as a function of pressure. JWST will also provide us with time-resolved observations of imaged exoplanets and cooler brown dwarfs of comparable quality to what HST does for L/T transition brown dwarfs today (Kostov & Apai 2013). This will allow us to constrain the time-variability of chemistry and clouds in these atmospheres. However, to do that accurately we will need models that properly account for vertical mixing in the atmosphere. The departure of an atmosphere from equilibrium chemistry, i.e., how strong its vertical mixing is, depends on the atmosphere's properties and the eddy diffusion coefficient (log K zz in this paper) of an atmosphere. The latter is also an important parameter for constraining the cloud formation in the atmosphere (e.g., Marley et al. 2013), thus observational constrains of log K zz are of high importance to the community. Recently, e.g., Miles et al. (2020) presented low resolution groundbased observations of seven late T to Y brown dwarfs and compared their observations against models of at-mospheres with disequilibrium chemistry to constrain the log K zz of these atmospheres. Miles et al. (2020) showed that log K zz spans a range of values in these atmospheres from 4 to 8.5, and discussed how comparing these values against the maximum log K zz predicted from theory can help constrain the existence of detached convective zones in warmer atmospheres. 
In the coming decade JWST will allow for the first time to constrain changes in log K zz as a function of pressure in atmospheres and potential trends with atmospheric properties such as T eff and log g. Such observations will allow us to constrain in unprecedented detail the vertical structure of atmospheres, including the potential existence of detached convective zones predicted by theory (see, e.g., Marley & Robinson 2015). The accuracy of our results though, will depend on the accuracy of our models. For this reason sets of models that use a selfconsistent scheme to study the effect of quenching on atmospheres are necessary for the JWST era. In this paper we presented an extension of the atmospheric structure code of M. S. Marley and collaborators (e.g., Marley et al. 2021 to calculate the TP and composition profiles of atmospheres in disequilibrium, in a selfconsistent way (see Sect.2). We validated the new code against its well tested equilibrium chemistry version, as well as previously published results (Sect. 2.2), and tested the fit of our new non-equilibrium models against observations of Gl570D (Sect. 2.3), which is considered the archetypal cloud-free brown dwarf atmosphere with signs of disequilibrium chemistry. A number of differences between our models and the observed spectra were noted that are related to inaccuracies in our alkali opacity database for wavelengths shortward of 1.1 µm and, potentially, our CIA opacity in the K-band. Addressing these issues is part of ongoing work. The extension of our code presented here, opens up the possibility to model more complex atmospheres with variable log K zz profiles in the future. Flasar & Gierasch (1978) and later Wang et al. (2014), e.g., predicted that Jupiter and Saturn will show latitudinal variation of log K zz due to the planet's rotation. Wang et al. (2014), e.g., showed that the log K zz (P ) profiles in Jupiter should change with latitude, with the value of log K zz changing by more than an order of magnitude for a given temperature layer between 0 • and 80 • latitude. The changes in Saturn were about two orders of magnitude. Similar variable profiles with pressure and latitude could exist in imaged atmospheres and they would affect the chemical profiles and cloud formation in these atmospheres. Additionally, the existence of detached convective zones in some atmospheres would locally change the log K zz profile, making the use of a Figure 16. Color Magnitude Diagrams (CMDs) for three gravities (log g = 3.5 (solid lines), 4.25 (dashed-dotted lines) and 5.25 (dotted lines)) of our equilibrium atmospheres (red-orange triangles) and disequilibrium atmospheres with log Kzz=7 (blue squares). Radii computed from our equilibrium model evolution (Marley et al. 2021). Our model atmospheres range from 500K to 1000K with a step of 50K, and 1000K to 1300K with a step of 100K. Our J, H and K correspond to JWST F115W, F162M, and F210M respectively. Overplotted with gray lines are the equilibrium tracks from the Sonora Bobcat models Marley et al. (2021). The gravities of the overplotted Bobcat models range from log g = 3.0 to 5.5 with a step of 0.25. simple log K zz profile as used in this paper inaccurate for the characterization of the three-dimensional structure of these atmospheres. Thus, updated models with variable log K zz may be needed in the future. Here shown are the pressures probed at different wavelengths for our log Kzz=4 models with T eff = 1300 K (red line), 850 K (black line) and 650 K (blue line) and log g = 5.25. 
Also shown is the equilibrium 1300 K model (brown, dashed line) for comparison. The J, H and K band passes are also shown for convenience. The abundance profiles of the major absorbents in the pressures probed by the different bands differ between our equilibrium and disequilibrium model atmospheres. These differences lead to different amounts of absorption and thus colors for the disequilibrium and the equilibrium atmospheres. In the JWST era the Direct Imaging community will have access to high resolution observations of imaged atmospheres in the near and mid infrared. In Sect. 3.3 we showed that using NIRISS and NIRSpec observations we should be able to distinguish between (at least the cloud-free) cooler atmospheres with disequilibrium or equilibrium chemistry. In particular, both NIRISS and NIRSpec should allow the detection of disequilibrium in H 2 O, CH 4 and NH 3 , while the longer wavelength observations of NIRSpec should also allow the detection of CO in disequilibrium. In Sect. 2.3 when we compared our best-fit model volume mixing ratio of NH 3 against the retrieved ratio of Line et al. (2015), we showed that omitting the 10 µm -12 µm observations could have affected their best-fit model. This suggests that in the JWST era MIRI MRS observations will also be important for accurate NH 3 retrievals. NH 3 has been detected in disequilibrium in some cooler T dwarfs (Canty et al. 2015) and Y dwarfs (Cushing et al. 2011). Overall our coolest disequilibrium model at 500 K, for log g= 5.0 was depleted in NH 3 (relative difference of -50%) as expected. This was detectable in the major NH 3 feature around 10.5 µm, which showed a lack of NH 3 in the quenched model atmosphere. However, in the deeper pressures probed in the 1.0-1.3 µm and 1.5-1.6 µm windows, which include NH 3 absorption windows, the cooling in the TP profile of our at- Fig. 16. Note that for this plot we used the MKO (top and middle panel) and Spitzer IRAC filters (bottom panel) on our model spectra, to match the observations. For the IRAC data we also plot the average error bar (δm) in the photometry for reference. mosphere (at those pressures) resulted in an overabundance of NH 3 which should be detectable for some atmospheres. We are currently extending our grid to cooler atmospheres (down to 200 K), in the realm of Y-dwarfs, where Cushing et al. (2011) reported a possible detection of NH 3 in the atmosphere of WISEP-J1738 (350 K -400 K) in the 1.5-1.6 µm window. Fig. 19 shows an example of how NH 3 excess in our cooler atmospheres changed the H-band in a comparable way to the tentative detection on WISEP J1738+2732 by Cushing et al. 2011. A similar dependence on gravity appeared in our detection of CO, with the lower gravity models showing detectable disequilibrium CO absorption around 4.7 µm for the cooler atmospheres. Finally, for most of our model atmospheres the detection of CH 4 is easier at higher gravities than at lower gravities, (see Fig. 15) in agreement with observations that suggest that low gravity planetary mass objects are depleted in CH 4 in comparison to their brown dwarf analogues (that have a higher surface gravity). In this paper we showed that, at least for cloud-free atmospheres, disequilibrium chemistry results in redder J-H colors for our model atmospheres (Sect. 3.4). For smaller values of log K zz only the cooler quenched atmospheres ( 700 K) turned red at all gravities. At log K zz = 7 disequilibrium chemistry resulted in redder colors even for atmospheres as hot as 1200 K ( Fig. 16 and Sect. 
3.4). A number of brown dwarfs and planetary mass objects have been detected that are redder in J-H than their standard field counterparts, like WISEP J004701.06+680352.1 (W0047; Gizis et al. 2012), 2MASS J12073346-3932539, the planets of the HR8799 system (2M1207b Oppenheimer et al. 2013;Currie et al. 2011;Skemer et al. 2014) and others. Most of these atmospheres are expected to be cloudy. For example, both W0047 and 2M1207b show rotational variability which is related to cloud patchiness (Lew et al. 2016;Zhou et al. 2016), and planetary mass companion observations are best fit by cloudy models (e.g., Skemer et al. 2014). Clouds will affect the colors of the atmo-sphere, but a number of these atmospheres are also CH 4 depleted (Biller & Bonnefoy 2018) hinting to disequilibrium chemistry in their atmospheres (see, e.g., Barman et al. 2015). Our cloud-free model atmospheres at the temperatures and gravities that are representative of the planets of the HR8799 system (900 K-1100 K and log g 4) and for log K zz = 7 (Barman et al. 2015) became redder in J-H with δ[J-H] = 0.12 to 0.26. Observations of the HR8799 system planets found a color difference from field brown dwarfs that is comparable or larger than what our cloud-free atmospheres showed. Clouds and disequilibrium chemistry are expected to interplay in the atmosphere and affect its color. Clouds are expected to turn atmospheres redder and deplete some of the available chemical species changing the mixing ratio of species in the atmosphere. This hints to the importance of extending the Sonora grid to cloudy atmospheres with disequilibrium chemistry, which will be part of future work. Finally, we compared the Color Magnitude Diagrams (CMD) of our disequilibrium and equilibrium models against MKO observations of brown dwarfs by Dupuy & Liu (2012) and Spitzer data from Kirkpatrick et al. (2019) (Sect. 3.4). We noted that our disequilibrium chemistry models give a better fit to photometry of mid to late T type brown dwarfs, supporting the importance of disequilibrium chemistry for T type dwarfs (Saumon et al. , 2007. In particular, only disequilibrium models could fit the Spitzer colors of Kirkpatrick et al. (2019). The equilibrium models have more CH 4 and less CO than the observed atmospheres so they appear redder than the latter. On the other hand, disequilibrium chemistry increases the CO content of the atmosphere and reduces its CH 4 content (Fig. 15), which results to bluer colors in agreement with the observations. A number of these target atmospheres have already been suggested to be in disequilibrium. Previous fitting of spectra for some of these targets suggested that a subsolar metallicity or high gravity, may be responsible for their different colors, but the authors did not explore the possibility of disequilibrium chemistry for these targets. Refitting these observations with equilibrium and disequilibrium chemistry models at different metallicities would be of interest. Extending the Sonora grid to atmospheres of different metallicities with disequilibrium chemistry will be part of future work. Following the Sonora bobcat models, our models will be archived in Zenodo under https://zenodo.org/ record/4450269. ter high-performance computing resources made available for conducting the research reported in this paper (https://arcc.ist.ucf.edu). Part of this work was performed under the auspices of the U.S. Department of Energy under Contract No. 89233218CNA000001. 
JJF, MSM, CVM, RL, and RSF acknowledge the support of NASA Exoplanets Research Program grant 80NSSC19K0446.
2021-10-25T01:16:33.619Z
2021-10-22T00:00:00.000
{ "year": 2021, "sha1": "72a3011a916c4540679bc9a4b070add524b2b14f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "72a3011a916c4540679bc9a4b070add524b2b14f", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
24107063
pes2o/s2orc
v3-fos-license
Variation in sulfur and selenium accumulation is controlled by naturally occurring isoforms of the key sulfur assimilation enzyme ADENOSINE 5'-PHOSPHOSULFATE REDUCTASE2 across the Arabidopsis species range. Natural variation allows the investigation of both the fundamental functions of genes and their role in local adaptation. As one of the essential macronutrients, sulfur is vital for plant growth and development and also for crop yield and quality. Selenium and sulfur are assimilated by the same process, and although plants do not require selenium, plant-based selenium is an important source of this essential element for animals. Here, we report the use of linkage mapping in synthetic F2 populations and complementation to investigate the genetic architecture of variation in total leaf sulfur and selenium concentrations in a diverse set of Arabidopsis (Arabidopsis thaliana) accessions. We identify in accessions collected from Sweden and the Czech Republic two variants of the enzyme ADENOSINE 5'-PHOSPHOSULFATE REDUCTASE2 (APR2) with strongly diminished catalytic capacity. APR2 is a key enzyme in both sulfate and selenate reduction, and its reduced activity in the loss-of-function allele apr2-1 and the two Arabidopsis accessions Hodonín and Shahdara leads to a lowering of sulfur flux from sulfate into the reduced sulfur compounds, cysteine and glutathione, and into proteins, concomitant with an increase in the accumulation of sulfate in leaves. We conclude from our observation, and the previously identified weak allele of APR2 from the Shahdara accession collected in Tadjikistan, that the catalytic capacity of APR2 varies by 4 orders of magnitude across the Arabidopsis species range, driving significant differences in sulfur and selenium metabolism. The selective benefit, if any, of this large variation remains to be explored. Sulfur is one of the essential mineral nutrients for all organisms and is required for the biosynthesis of sulfur-containing amino acids, lipids, vitamins, and secondary metabolites, the catalytic and regulatory activity of enzymes, and the stabilization of protein structures. However, animals can only utilize sulfur that has been incorporated into biomolecules primarily originating from plants. Sufficient sulfur fertilization is also important to maximize yields of various crops, including rapeseed (Brassica napus) and wheat (Triticum aestivum; Bloem et al., 2004;Dubousset et al., 2010;Steinfurth et al., 2012). Interestingly, sulfur fertilization can also be important for food quality, with well-fertilized wheat crops producing flour with improved bread-making qualities (Shahsavani and Gholami, 2008). Over the last 50 years, the levels of sulfur in soils have been impacted significantly by the release of sulfur dioxide into the atmosphere as a result of the combustion of fossil fuels (Menz and Seip, 2004). In the atmosphere, sulfur dioxide is oxidized to sulfuric acid and thereafter deposited on soils as acid rain. Regulations controlling the release of sulfur dioxide from power stations introduced in the 1970s have caused significant declines in sulfate deposition due to this acid rain (Menz and Seip, 2004). These reductions in sulfate deposition are significant enough to have increased the need for sulfur fertilization of agricultural crops such as rapeseed and wheat (Dubousset et al., 2010;Steinfurth et al., 2012). 
The essentiality of sulfur for plants and the relatively recent major fluctuations in soil sulfate deposition make the study of natural variation in sulfur homeostasis by plants attractive. Such studies not only have the potential to identify new molecular mechanisms involved in sulfur homeostasis but also could provide a platform for probing at the genetic level interactions between natural plant populations and a changing soil environment. The sulfur analog selenium is also widely distributed in soil and is incorporated into biomolecules in plants in place of sulfur via sulfur assimilatory processes. Although selenium is not required by plants, it is essential for animals, and plant-based selenium is the primary source of this important nutrient for humans. In animals, selenium plays a vital role in numerous different selenoproteins involved in redox reactions, selenium storage, and hormone biosynthesis (Underwood, 1981;Rayman, 2012;Roman et al., 2014). In these proteins, selenium is incorporated as seleno-Cys. Unlike plants where selenium is incorporated into biomolecules nonspecifically as a sulfur analog, seleno-Cys in animals is biosynthesized by the action of a specific seleno-Cys synthase that converts Ser-tRNA to seleno-Cys-tRNA (Roman et al., 2014). To help prevent the negative health effects of selenium deficiency in the diet, fortification of various foods with selenium is already practiced, and biofortification of crops such as wheat, through the addition of selenium to fertilizers, is being proposed (Lyons et al., 2003). Moreover, there is a growing body of studies suggesting that supraoptimal dietary levels of selenium in the form of seleno-Met and seleno-methylseleno-Cys may be helpful in preventing certain cancers, although the efficacy of these supplements is still debated (Rayman, 2012;Steinfurth et al., 2012). Therefore, the identification of genes that control variation in the uptake and metabolism of sulfur and selenium in plants is not only an essential task for understanding the molecular mechanisms of plant nutrition but also important for crop yield, food quality, and human health. Plants mainly take up sulfur from the soil in the form of sulfate. After uptake of sulfate via the sulfate transporters SULTR1;1 and SULTR1;2 in the root (Yoshimoto et al., 2007;Barberon et al., 2008), sulfate needs to be reduced to sulfide before it can be incorporated into Cys, the first sulfur-containing compound in the assimilatory process (for review, see Takahashi et al., 2011). Sulfate is first activated through adenylation by ATP SULFUR-YLASE (ATPS) to form adenosine 59-phosphosulfate (APS) in both plastids and the cytosol (Rotte and Leustek, 2000). After activation, sulfate as APS is reduced to sulfite by APS REDUCTASE (APR) using reduced glutathione (GSH) as the electron donor (Gutierrez-Marcos et al., 1996;Setya et al., 1996). Alternatively, APS can be phosphorylated by APS kinase to form 39-phosphoadenosine 59-phosphosulfate (PAPS), which acts as the sulfate donor for sulfotransferases to incorporate sulfate directly into saccharides and secondary metabolites such as glucosinolates . Sulfite produced by APR is further reduced to sulfide by sulfite reductase (Khan et al., 2010). In the final step, sulfide is combined with O-acetyl-Ser to form Cys, catalyzed by O-acetyl-Ser (thiol)lyase (Wirtz and Hell, 2006). In plants, selenium is taken up as selenate via sulfate transporters (Shibagaki et al., 2002). 
After uptake, it is thought that selenate is reduced to selenite via the action of ATP sulfurylase and APS reductase (Shaw and Anderson, 1972;Pilon-Smits et al., 1999;Sors et al., 2005aSors et al., , 2005b. Selenite is most likely nonenzymatically reduced to selenide and combined with O-acetyl-Ser to form seleno-Cys catalyzed by O-acetyl-Ser(thiol)lyase (Ng and Anderson, 1978). Sulfate uptake and reduction represent the two control points for sulfur homeostasis (Kopriva et al., 2009). Based on sequence similarity, there are 14 genes annotated as sulfate transporters in the Arabidopsis (Arabidopsis thaliana) genome, and at least six of them have evidence supporting their functions (Kopriva et al., 2009;Takahashi, 2010). Among these, SULTR1;1 and SULTR1;2 encode high-affinity sulfate transporters in the root responsible for sulfate uptake from the soil solution (Rouached et al., 2008;Takahashi, 2010). Genes encoding both ATPS and APR also form gene families in Arabidopsis, with ATPS being encoded by four genes and APR by three (Kopriva et al., 2009). ATPS enzymes localize to both the cytosol and plastids, whereas APR enzymes localize solely to the plastids (Lunn et al., 1990;Rotte and Leustek, 2000). ATPS1 and APR2 are responsible for the majority of ATP sulfurylase and APS reductase activity of seedlings. In a quantitative trait locus (QTL) analysis of sulfate content in Arabidopsis leaves using recombinant inbred lines generated by crossing Bayreuth (Bay-0) and Shahdara (Sha), APR2 was identified as a locus controlling variation in sulfate accumulation between the Bay-0 and Sha accessions (Loudet et al., 2007). The causal quantitative trait nucleotide in the Sha allele of APR2 results in a substitution of Ala-399 to Glu-399. The substitution localizes in the thioredoxin active site, which reduces the affinity of APR2 for GSH and results in a loss of 99.8% of enzyme activity. The loss-of-function Sha APR2 allele leads to a significant decrease in sulfate reduction and a concomitant increase in leaf sulfate accumulation. In the same analysis, a second QTL for leaf sulfate accumulation was described (Loudet et al., 2007). This second QTL was recently established to be driven by an ATPS1 expression-level polymorphism between Bay-0 and Sha, with Bay-0 showing reduced expression of ATPS1 compared with Sha (Koprivova et al., 2013). Both Loudet et al. (2007) and Koprivova et al. (2013) focused their studies on the genetic architecture of natural variation for leaf sulfate using a single recombinant population created by crossing the Bay-0 and Sha accessions. This recombinant inbred population has proved a powerful tool for the identification of novel alleles controlling several different traits (Loudet et al., 2008;Jiménez-Gómez et al., 2010;Jasinski et al., 2012;Pineau et al., 2012;Anwer et al., 2014). However, because this set of recombinant inbred lines is composed of genotypes from only two accessions, its value is limited when trying to understand allelic diversity across the Arabidopsis species as a whole. To address this limitation, we performed experiments using a set of 349 Arabidopsis accessions collected from across the species range. These 349 accessions were selected from a worldwide collection of 5,810 accessions sampled to minimize redundancy and close family relatedness (Baxter et al., 2010;Platt et al., 2010). 
To allow us to further probe Arabidopsis specieswide allelic diversity, we also utilized the complete genome sequences of 855 Arabidopsis accessions from the 1,001 Genomes Project (http://signal.salk.edu/atg1001/ index.php). Since sulfate uptake and accumulation is only one step in the complex process of sulfur assimilation in plants, uncovering the genetic architecture of sulfate accumulation, as performed by Loudet et al. (2007) and Koprivova et al. (2013), could potentially overlook genetic variation in other important aspects of sulfur metabolism. To enable us to identify natural genetic variation in sulfur homeostasis beyond just sulfate accumulation, we chose to quantify total leaf sulfur. This potentially allowed us to capture variation in the accumulation of both sulfate and other important sulfur-containing metabolites such as GSH and glucosinolates. Furthermore, to test genetically the long-held assumption that selenium in plants is metabolized as a sulfur analog, we also phenotyped the same set of plants for total leaf selenium. Through a combination of high-throughput elemental analysis, linkage mapping, genetic and transgenic complementation, reciprocal grafting, and protein haplotype analysis, we have established that the natural variation in both leaf sulfur and selenium content in Arabidopsis is controlled by several rare APR2 variants across the whole of the Arabidopsis species. Natural Variation in Leaf Sulfur and Selenium Content in a Worldwide Sample of Arabidopsis Accessions To examine the natural variation in total leaf sulfur and selenium content in Arabidopsis, we employed a population of 349 accessions collected across the species range. These accessions were grown in an artificial soil in a controlled environment with short days to limit flowering. After 5 weeks of growth, leaves were harvested and analyzed for multiple elements by inductively coupled plasma mass spectrometry (ICP-MS). We observed large variations in both total leaf sulfur and selenium concentrations across the 349 accessions analyzed, with total leaf sulfur ranging from 3,000 to 19,000 mg kg 21 dry weight ( Fig. 1A) and total leaf selenium ranging from 6 to 26 mg kg 21 dry weight (Fig. 1B). Genome-wide association (GWA) mapping is a powerful method for testing the phenotypic effect of multiple alleles across the genome and is successful for the identification of relatively strong-effect alleles at relatively high frequency in the population. For example, this approach has been used successfully in Arabidopsis to identify variation at the sodium transporter High-affinity Potassium Transporter1 and the zinc transport Heavy Metal Adenosine Triphosphatase3 as controlling a significant proportion of the population-wide diversity in the leaf accumulation capacity of sodium and cadmium, respectively (Baxter et al., 2010;Chao et al., 2012). Surprisingly, no peaks of single-nucleotide polymorphisms (SNPs) significantly associated with variation in the concentration of total leaf sulfur or selenium were identified using GWA mapping. Indeed, only a single SNP (chromosome 5 position 10832256), located in AT5G28820, a gene of no previously known function, exceeded our experimentwide significance threshold (P , 2.3 3 10 27 ) for total leaf sulfur (Supplemental Fig. S1), and no SNP with significant associations were identified for total leaf selenium accumulation (Supplemental Fig. S2). 
However, by calculating broad sense heritability, which allows an estimation of the proportion of the phenotypic variance accounted for by genetics, we found that most of the variation in total leaf sulfur and selenium in the set of 349 Arabidopsis accessions can be accounted for by genotype (heritability = 0.75 for sulfur and 0.68 for selenium). Similar levels of heritability have been observed previously for sulfur accumulation in leaves in a smaller population of 96 Arabidopsis accessions . The high heritability of these traits and the inability of GWA to identify SNPs responsible for variation in total leaf sulfur and selenium concentration are the expectations for traits controlled by many alleles of small effect or the segregation of very rare alleles of large effect. Identification of the Causal Locus for Variation in Total Leaf Sulfur and Selenium of Arabidopsis Compared with GWA mapping, linkage mapping in a synthetic F2 population is more powerful for the identification of weak or rare alleles responsible for natural variation due to the high representation of alleles in a biparental cross. However, to take advantage of this power, it is important to select parents that contain the genetic variation of interest. To achieve this, we selected the accession with the highest total leaf sulfur and selenium content, an accession collected in the Czech Republic near Hodonín (Hod). This accession was crossed with the Columbia-0 (Col-0) accession, which has total leaf sulfur and selenium concentrations close to the population median ( Fig. 1). Leaves of F1 hybrid plants were analyzed and found to have the same total leaf sulfur ( Fig. 2A) and selenium (Fig. 2C) concentrations as the Col-0 parent, indicating that the locus (loci) controlling these two phenotypes are dominant in Col-0. To further confirm this, we grew 328 F2 individuals from this cross and analyzed leaves for total sulfur and selenium. We observed a total leaf sulfur content in F2 plants consistent with a single recessive locus driving elevated total leaf sulfur in Hod, with 72 plants showing a total leaf sulfur phenotype similar to Hod, 196 similar to Col-0, and 60 plants showing a phenotype intermediate between the two parents ( Fig. 2B). A similar distribution was observed for total leaf selenium in the same population (Fig. 2D). The correlation coefficient between the total leaf concentrations of sulfur and selenium in this F2 population was high (r 2 = 0.78; Fig. 3), suggesting that the two phenotypes are controlled by the same locus. We also observed a correlation (r 2 = 0.51) between the concentrations of total leaf sulfur and selenium in the 349 Arabidopsis accessions from which Hod was chosen. To map the causal locus driving both elevated total leaf sulfur and selenium in Hod, we performed extreme array mapping (XAM), in which we combined bulk segregant analysis with SNP microarray genotyping (Becker et al., 2011). From the 328 phenotyped Col-0 3 Hod F2 plants, we pooled separately 57 individuals with extreme high total leaf sulfur concentration and 61 individuals with extreme low total leaf sulfur concentration. Genomic DNA was extracted from the two tissue pools, labeled, and hybridized separately to the Affymetrix SNP tiling array Atsnptile 1. The allele frequency differences between the two pools for all the polymorphic probes were assessed (Becker et al., 2011). 
Based on these allele frequency differences, we mapped the causal locus for high total leaf sulfur to the long arm of chromosome 1, with the depletion in Col-0 genotypes peaking at 23 Mb (Fig. 4A). The single strong XAM peak further supports the prediction of a single recessive locus controlling the Hod high total leaf sulfur and selenium phenotype. To narrow the mapping interval of the high total leaf sulfur and selenium locus, we genotyped 328 F2 individuals from the same population used for the XAM with six cleaved-amplified polymorphic sequence (CAPS) or simple sequence length polymorphism PCR-based markers spanning the identified 20-to 24-Mb interval on chromosome 1 (Fig. 4B). Based on the total leaf sulfur and selenium phenotype of the F2 individuals or their F3 progeny, we identified 27 recombination events between the causal locus and the tested markers. Among these 27 recombinants, the number of crossover events between the causal locus and the markers CF20M, CF22FMA, CF225HK, CF23M, CF235HK, and CF24MA were 19, nine, two, zero, five, and eight, respectively. Based on this result, the mapping interval for the causal polymorphism was narrowed to a 1-Mb interval between markers CF225HK and CF235HK (Fig. 4B). To further narrow this mapping interval, five more polymorphic markers between the markers CF225HK and CF235HK were developed and used to genotype an enlarged mapping population of 1,084 F2 individuals. Eighteen individuals with crossovers between the two markers were identified (Fig. 4C). We genotyped all of these recombinants at five newly developed markers polymorphic between Col-0 and Hod and phenotyped all plants in the F2 and/or F3 generation. Based on the association of genotype and phenotype, the causal polymorphism was mapped to a 101-kb interval between markers CF229954 and CF23055 (Fig. 4C). This interval contains 30 genes, including the APR2 gene encoding APS reductase (Fig. 4D). As APR2 is known to have a key role in sulfur assimilation (Kopriva et al., 2009) and has been shown to play an important role in regulating sulfate content in Arabidopsis (Loudet et al., 2007), we chose it as a strong candidate for the causal gene of the high total leaf sulfur and selenium phenotype of Hod. APR2 Is the Causal Locus Underlying Natural Variation in Total Leaf Sulfur and Selenium in the Hod Accession To test for false negatives in the GWA analysis, SNPs linked to APR2 were reanalyzed. Seven SNPs within the APR2 gene were tested, and one exhibited a nominal P value of 0.0012 for total leaf selenium, but no significant variation in total leaf sulfur was mapped to the APR2 locus. Selecting a 100-kb window surrounding APR2 identified 247 previously genotyped SNPs. This increased SNP number still did not detect any associations between genotypes linked to APR2 and either total leaf sulfur or selenium accumulation that exceeded the now approximately 1,000-fold reduced multiple test correction of P , 2.202 3 10 24 as compared with the genome-wide analysis (Supplemental Table S1). The mapping of the trait to this location in the line cross, but failure to find an association, is consistent with the presence of a rare allele of APR2 with a strong effect on total leaf sulfur and selenium accumulation in the Hod accession. To further examine if APR2 is the causal locus, we phenotyped three independent transfer DNA (T-DNA) Figure 2. High leaf sulfur and selenium in the Arabidopsis Hod accession is recessive. 
A and C, Total leaf sulfur (A) and selenium (C) concentrations in Hod, Col-0, and their F1 progeny. Data represent means 6 SE (n = 12 independent plants per genotype). B and D, Frequency distribution of total leaf sulfur (B) and selenium (D) concentrations in Col-0, Hod, and F2 progeny from a Col-0 3 Hod cross. Letters above bars indicate significant groups using a one-way ANOVA Tukey's honestly significant difference test using a 95% confidence interval. Data are accessible using the digital object identifier 10.4231/ T96Q1V5R (see http://dx.doi.org/). DW, Dry weight. insertional alleles of APR2 with T-DNA inserts in the fourth exon (apr2-1; GABI_108G02) and 169 bp (apr2-2; SALK_119683) and 116 bp (apr2-3; SALK_035546) upstream of the APR2 translational start site. Analysis of total leaf sulfur and selenium of the T-DNA insertional alleles revealed that apr2-1 phenocopied Hod (Fig. 5A), while the apr2-2 and apr2-3 alleles with T-DNA insertions in the APR2 promoter showed a similar but weaker phenotype compared with Hod (Fig. 5, A and C). These results support the hypothesis that polymorphism(s) at APR2 are responsible for the high total leaf sulfur and selenium phenotype in Hod. To validate this hypothesis, genetic complementation was performed by crossing the three apr2 alleles with Hod. The F1 hybrid plants from these crosses all exhibited the same high total leaf sulfur and selenium concentration as the apr2 parents, establishing that an unknown natural polymorphism(s) in APR2 leads to the high total leaf sulfur ( To reinforce this conclusion, transgenic complementation was performed by introducing an APR2 genomic fragment from Col-0 (including a 1.5-kb promoter region, the gene body, and an 896-bp downstream sequence) into Hod. Transgenic lines in the T2 generation were grown, and total leaf sulfur and selenium were analyzed. T2 plants were also tested for GUS activity as a reporter of the transformation vector by histochemical staining to confirm that an individual T2 plant contained the transgenic fragment. Individuals without the transgenic fragment were removed from further analysis. Using this approach, all seven independent Hod lines transformed with APR2 Col-0 showed significantly reduced total leaf sulfur and selenium, indistinguishable from Col-0 (Fig. 5, B and D). The successful complementation of the elevated total leaf sulfur and selenium phenotype of Hod by a genomic DNA fragment containing APR2 Col-0 establishes beyond doubt that APR2 is the causal locus driving natural variation in both total leaf sulfur and selenium between Col-0 and Hod. The High Total Leaf Sulfur Phenotype in Hod Is Driven Primarily by the Shoot It has been reported that APR2 is expressed both in shoot and root, and this was confirmed here using quantitative real-time (qRT)-PCR (Fig. 6). Expression of APR2 in apr2-1 is completely lost and partially lost in apr2-2 and apr2-3, respectively, confirming apr2-1 as a loss-of-function allele, as determined previously (Loudet et al., 2007), whereas apr2-2 and apr2-3 are only partial loss-of-function alleles. Interestingly, even though A, DNA microarray-based bulk segregant analysis of the leaf sulfur concentration trait using phenotyped F2 progeny from the cross Hod 3 Col-0. Lines represent allele frequency differences between high and low total leaf sulfur pools of F2 plants at SNPs that are polymorphic between Hod and Col-0. Solid lines indicate sense strand probes, and dashed lines indicate antisense strand probes. 
B, PCR-based genotyping of 328 F2 plants (Col-0 3 Hod) narrowed the causal gene to between CAPS makers CF225HK and CF235HK. C, Fine-mapping using 1,084 F2 plants (Col-0 3 Hod) narrowed the causal gene to a 101-kb region between CAPS makers CF22945 and CF23055. Numbers under the horizontal lines in B and C represent the number of recombinants between the indicated marker and the causal gene. D, Candidate causal genes in the mapped region and the gene structure of APR2 (bottom). Gray bars indicate exons, and black lines indicate introns. APR2 Hod is a loss-of-function allele relative to APR2 Col-0 (Fig. 5, A and C), the expression of APR2 in Hod is essentially the same as in Col-0. This suggests that the loss of function of APR2 Hod is not due to an expression-level polymorphism and is more likely caused by a polymorphism in the APR2 gene body. This is similar to the weak allele of APR2 identified previously in the Sha accession (Loudet et al., 2007). Given that APR2 is expressed in both root and shoot, to determine which tissue controls the APR2-dependent variation in total leaf sulfur between Hod and Col-0, it was necessary to perform a reciprocal grafting experiment (Fig. 7). This experiment established that elevated leaf sulfur in Hod is primarily driven by the shoot. Total leaf sulfur in selfgrafted and nongrafted Hod plants was found to be indistinguishable from grafted plants with a Hod shoot and Col-0 root (Fig. 7). However, grafted plants with a Col-0 shoot and Hod root were indistinguishable from nongrafted Col-0 and only slightly higher than selfgrafted Col-0 plants. An Amino Acid Substitution in Hod Leads to Loss of Function of APR2 Consistent with our genetic analysis establishing that APR2 Hod is hypofunctional compared with APR2 Col-0 , we found that the APS reductase activity in Hod is reduced to the level of the apr2-1 null allele (Fig. 8A). Furthermore, APR2 Hod is unable to restore the APR activity of apr2-1 when both are present in F1 Hod 3 apr2-1 hybrid plants (Fig. 8A). This is in contrast to the APR2 Col-0 allele, which is able to fully restore the APR activity of APR2 Hod in hybrid Hod 3 Col-0 F1 plants (Fig. 8A). These observations are fully consistent with APR2 Hod being a recessive loss-of-function allele of APR2. In order to identify the polymorphism(s) causing loss of function of APR2 Hod , we sequenced the APR2 Hod genomic region covering the gene body, promoter, and 39 terminus. After assembling the sequenced fragments, 116 polymorphic sites between Hod and Col-0 were observed, of which 61 are localized in the promoter, five in the 59-untranslated region, 12 in Figure 5. Complementation of the high total leaf sulfur and selenium of the Arabidopsis Hod accession. A and C, Total leaf sulfur (A) and selenium (C) of Col-0, Hod, apr2-1, and their F1 progeny. B and D, Total leaf sulfur (B) and selenium (D) concentrations of Hod transformed with Col-0 APR2 genomic DNA. Data represent means 6 SE (n = 12 independent plants in A and C and n = 4-12 plants in B and D). Letters indicate significant groups using a one-way ANOVA Tukey's honestly significant difference test using a 95% confidence interval. All data are accessible using the digital object identifier 10.4231/T9BG2KW2 (see http://dx.doi.org/). DW, Dry weight. introns, and 26 in exons. Of those in the promoter, 47 are SNPs and 15 are short insertions/deletions (Supplemental Table S2). 
Although insertions/deletions in the promoter can drive expression-level polymorphisms, which can form the basis of variation in gene function (Rus et al., 2006;Baxter et al., 2008), this was ruled out as a cause of the loss of function of APR2 Hod because the expression of APR2 in Hod is very similar to APR2 expression in Col-0 (Fig. 6). Therefore, we focused on the protein-coding portion of APR2 Hod . Of the polymorphic sites in the APR2 exons between APR2 Hod and APR2 Col-0 , 10 are nonsynonymous substitutions resulting in amino acid changes (Supplemental Table S2), raising the possibility that these changes may impair the enzymatic activity of the APR2 Hod protein. To examine this possibility, we purified both recombinant APR2 Hod and APR2 Col-0 proteins, measured their APR activity in vitro, and derived various kinetic constants for the enzymes for both the APS and GSH substrates. This analysis clearly established that the APR2 Hod enzyme has extremely impaired catalytic capacity compared with the APR2 Col-0 enzyme for both substrates (Table I), confirming that one or more amino acid substitutions are responsible for the loss of function of APR2 Hod . We also assayed the activity of purified recombinant APR2 Bay-0 and APR2 Sha proteins as controls, since kinetic data have previously been published on these enzymes and to allow comparison with the kinetic parameters of the various APR2 variants (Loudet et al., 2007). The APR2 protein haplotype in Hod is very similar to the haplotype of the Catania-1 (Ct-1) accession reported previously (Loudet et al., 2007), with the exception of a Gly-216 amino acid residue in the central region of the APR2 Col-0 protein that is changed to an Arg in the APR2 Hod enzyme. We observed that the concentration of total leaf sulfur and selenium in Ct-1 is similar to that in Col-0 and much lower than in Hod, supporting the conclusion that this Gly/ Arg polymorphism is causal for the loss of activity of APR2 Hod . This Gly-216 is part of the conserved Arg loop that interacts with the APS substrate (Chartron et al., 2006;Bhave et al., 2012). Therefore, it is possible that this Gly/Arg polymorphism in the APR2 Hod protein might alter the binding of APS to the enzyme reducing its activity. However, we observed that the K m values for APS of APR2 Hod and APR2 Col-0 are very similar (Table I) and therefore conclude that Gly-216 is perhaps more likely to be involved in the APR2 catalytic mechanism. The Frequency of Weak Alleles of APR2 in the Global Arabidopsis Population The previously identified weak allele of APR2 in the Sha accession was unique to Sha within a small but diverse collection of 32 accessions (Loudet et al., 2007), suggesting that it is relatively rare. To perform a more in-depth analysis of the frequency of APR2 alleles, including the Sha allele and the newly identified Hod allele, we examined the APR2 protein haplotype in 855 Arabidopsis accessions that have had their whole genomes sequenced as part of the 1,001 Genomes Project (http://signal.salk.edu/atg1001/index.php). Neither the Sha nor the Hod APR2 allele was found to be present in any other accession, establishing that these two hypofunctional alleles are very rare in the global Arabidopsis population. Finding two extremely rare independent hypofunctional APR2 protein haplotypes inspired us to search for additional rare alleles. 
We selected LAC-5, Tammisari-2 (Tamm-2), Oystese-0 (Oy-0), and Glueckingen (Gu-1) as candidates for containing hypofunctional APR2 alleles, as these are the four accessions with the highest total leaf sulfur concentrations from the 349 accessions originally screened (excluding Hod). Furthermore, we analyzed APR2 sequences of the 855 resequenced Arabidopsis accessions to look for . Reciprocal grafting establishes that shoots are primarily responsible for the high total leaf sulfur concentration in the Arabidopsis Hod accession. DW, Dry weight; NG, nongrafted plants; SG, self-grafted plants; Col-0/Hod, Col-0 shoot grafted onto a Hod root; Hod/Col-0, Hod shoot grafted onto a Col-0 root. Data represent means 6 SE (n = 11-14). Letters above each bar indicate statistically significant groups using a one-way ANOVA with groupings by Tukey's honestly significant difference test using a 95% confidence interval. nonsynonymous substitutions in the APR2 coding region. Eight accessions with amino acid changes in conserved regions of APR2 were selected, making a total of 12 additional accessions for which APS reductase activity was measured in cell-free extracts (Fig. 8B). Compared with Col-0, five of the tested accessions were found to have significantly reduced APS reductase activity, whereas one accession had significant increased activity (Fig. 8B). Accessions with significantly reduced APR activity are Lovvik-1 (Lov-1), Lov-5, Faberget-Lower-1 (Fal-1), Tfa-08, and Stepnoje-1 (ICE61). Among them, the first four accessions are all from western Sweden and share the same APR2 protein haplotype (Table II). This western Swedish haplotype has an amino acid change of Phe-265 to Ser-265 in the a8-helix (Chartron et al., 2006;Stevenson et al., 2013) of the PAPS reductase domain of the APR2 enzyme (Table II), and the total APR activity is reduced to similar levels as in Figure 8. APR activity (A and B), total leaf sulfur concentration (C), and leaf sulfate concentration (D) in different natural accessions, apr2 alleles, and F1 hybrids. Data represent means 6 SE (n = 3). Letters above each bar indicate statistically significant groups using a one-way ANOVA with groupings by Tukey's honestly significant difference test using a 95% confidence interval. DW, Dry weight; FW, fresh weight. Hod (Fig. 8B). The accession ICE61 represents a singleton haplotype in the 855 resequenced accessions, which has an amino acid change of Ala-155 to Val-155 in the a3-helix (Stevenson et al., 2013) of the PAPS reductase domain (Table II). The APS reductase in accession ICE61 is reduced to 70% of that in Col-0 (Fig. 8B). Analysis of total leaf sulfur contents in these accessions with alternative APR2 alleles from across the Arabidopsis species revealed a link between the strong reduction of APR activity and high total leaf sulfur content in many accessions (Fig. 8C). However, APR activity is not always correlated with total sulfur content in the leaves, as the reduction of APR activity in ICE61 did not affect total leaf sulfur accumulation and the increased activity in Qartaba (Qar-8a) was accompanied by high total leaf sulfur content. Similarly, the mechanism underlying high total leaf sulfur accumulation in LAC-5, Tamm-2, Oy-0, and Gu-1 is independent from APR, as the high total leaf sulfur phenotype was not linked to a decrease in APR enzyme activity. Since APR2 was previously shown to be responsible for variation in sulfate levels (Loudet at al., 2007), we also determined the foliar contents of this sulfur-containing metabolite. 
Interestingly, there was a better correlation between APR and leaf sulfate levels, as all accessions with APR activity lower than Col-0 accumulated sulfate (Fig. 8D). There is a good correlation between high leaf sulfate accumulation and high total leaf sulfur content, suggesting that sulfate is the major sulfur pool in Arabidopsis leaves. In some accessions, however, these two traits are uncoupled (e.g. Fal-1 and Tfa-08; Fig. 8, C and D), pointing to variation in the relative size of the leaf sulfate pool. To further confirm that the western Swedish APR2 variant is hypofunctional, we expressed an APR2 Lov-5 complementary DNA (cDNA) in bacteria as a representative of the western Swedish APR2 protein haplotype. Recombinant APR2 Lov-5 enzyme was purified and assayed for APR activity. This analysis established that the APS reductase activity of APR2 Lov-5 is very low compared with that of APR2 Col-0 (Table I) and suggests that the amino acid change Phe-265/Ser-265 (Table II) impairs the activity of APR2 . Surprisingly, this Phe-265/Ser-265 polymorphism in APR2 (and in the other western Swedish accessions Lov-1, Tfa-1, and Fal-1) is in the a8-helix of the PAPS reductase domain, a region that does not have any predicted catalytic function (Chartron et al., 2006). However, this residue is conserved as a Phe or Tyr across all other 855 accessions tested as well as APS and PAPS reductases from other plant and bacterial species, suggesting that it may play an important role, perhaps in maintaining the proper tertiary or quaternary structure of APR2. Indeed, this amino acid substitution significantly affects the binding of both substrates, APS and GSH, pointing to a more global effect on the enzyme structure. Loss of APR2 Activity Leads to Reduced Sulfur Assimilation and Increased Accumulation of Sulfate To understand why the reduced function of APR2 results in high total leaf sulfur, we measured the flux of sulfur from sulfate into the reduced sulfur compounds Cys and GSH, and into proteins, in Arabidopsis lines with hypofunctional APR2 alleles. We measured this flux as 35 S assimilated into reduced sulfur-containing compounds as a percentage of the total [ 35 S]sulfate taken up by the plant. In these same lines, we also measured the sulfate content of the leaves. Accessions and mutants with weak alleles of APR2, including Hod, Sha, apr2-1, apr2-2, and apr2-3, have reduced flux of sulfur through the sulfate assimilation pathway compared with Col-0, which possesses a strong APR2 allele (Fig. 9A). Based on these observations, we hypothesized that sulfate should accumulate in those accessions with a weak APR2 allele, where sulfate assimilation is reduced. Measurement of leaf sulfate confirmed this hypothesis. Hod, apr2-1, and apr2-2, all with weak alleles of APR2, have increased accumulation of sulfate in leaves compared with Col-0, which possesses a strong allele of APR2 (Fig. 9B). In the F1 hybrid plants from a Hod 3 apr2-1 cross, the APR2 Hod allele is unable to complement the elevated sulfate accumulation of apr2-1, whereas the APR Col-0 allele is able to complement (Fig. 9B). These results confirm that elevated leaf sulfate in Hod is driven by the reduced function of APR2 Hod . Taken together, the sulfur flux and sulfate accumulation data strongly support our conclusion that, in the case of Hod, the reduced function of APR2 activity limits the plant's ability to reduce sulfate, causing sulfate to accumulate in leaf tissue. 
However, this enhanced sulfate accumulation in Hod does not appear to be driven by constitutively elevated sulfate uptake, as the rate of sulfate uptake in Hod is indistinguishable from that in Col-0 (Supplemental Fig. S3) when measured as total accumulation of 35 S after exposure of the plants to [ 35 S] sulfate. In contrast, although Sha has a weak allele of APR2 and reduced flux through the sulfate assimilation pathway, it does not show higher sulfate concentrations compared with Col-0. This suggests that further levels of regulation on sulfate accumulation exist besides APR2, such that high sulfate in Sha, due to weak APR2 activity, can be suppressed. DISCUSSION The wide variation in total leaf sulfur and selenium and its high heritability in the population of 349 accessions we studied suggest that this population is a good resource for investigating the genetic architecture of phenotypic variation in these traits. However, even though we observed high heritability for total leaf sulfur and selenium accumulation, genome-wide and targeted association mapping failed to identify the genetic basis of this heritable variation. This missing heritability is not unusual in a genome-wide association study (GWAS; Brachi et al., 2011), with human height being a classic example. Human height has a heritability of approximately 80% (Visscher 2008). However, a GWAS on 90,000 individuals identified allelic variation that accounted for less than 10% of this heritable variation in height (McEvoy and Visscher, 2009). It was concluded that heritable variation in height is controlled by numerous small-effect alleles. Such an explanation also could cause the failure of our GWAS to identify genetic variation that explains the heritable variation we observe in total leaf sulfur and selenium concentrations. However, the much smaller population size used in our study compared with the human height study means that we cannot exclude the possibility of nondetection of the phenotypic effects of large-effect rare alleles. Allelic heterogeneity, in which multiple alleles of the same gene are associated with a phenotype, as is the case for variation in leaf sodium governed by High-affinity Potassium Transporter1 (Segura et al., 2012), also could diminish the sensitivity of GWA mapping using singlelocus tests. However, the use of a multilocus test, as used here, did not improve our detection of loci linked to variation in leaf sulfur or selenium. Investigating the genetic basis of high total leaf sulfur and selenium in accessions in the extreme tail of the phenotypic distribution using synthetic F2 populations, we were able to overcome some of these limitations of association mapping. Creating a synthetic F2 population between Col-0 (with average total leaf sulfur and selenium) and Hod (with high total leaf sulfur and selenium concentrations), and combining its analysis with available sequence data, allowed us to detect three different large-effect rare alleles of APR2. These alleles were contained in the Hod accession from the Czech Republic, four accessions from western Sweden, and ICE61 from southwestern Russia. These alleles were most likely not detected using association mapping due to their low frequency in the phenotyped population of 349 accessions studied. The large-effect nature and low frequency of these new APR2 alleles in the Arabidopsis species-wide collection used in this study are consistent with them being of recent adaptive significance (Brachi et al., 2012;Rockman, 2012). 
Figure 9. Sulfur flux in whole seedlings (A) and leaf sulfate content (B) in natural accessions, apr2 alleles, and F1 hybrids. Data represent means ± SE (n = 3). Letters above each bar indicate statistically significant groups using a one-way ANOVA with groupings by Tukey's honestly significant difference test using a 95% confidence interval. Sulfur flux was quantified as the incorporation of [35S]sulfate into thiols and proteins as a percentage of the total [35S]sulfate taken up by the seedlings. FW, Fresh weight.

Because of the species-wide nature of the collection of accessions used, locally adaptive alleles would be sampled at a low frequency in the population. Furthermore, large-effect new mutations are theorized to play an important role in bouts of rapid evolution when a genotype suddenly finds itself in an environment to which it is badly adapted (Rockman, 2012). This raises the question, to what mismatched environment might the founders of the accessions containing the reduced-function alleles of APR2 have been exposed? One interesting possibility is that these reduced-function APR2 alleles are adaptive to soils with elevated sulfate concentrations, where the capacity to limit the energy-demanding reduction of sulfate may have a fitness advantage. In support of this notion, we note that, at least for the European accessions possessing these weak alleles of APR2, they were collected from central Europe and Scandinavia, regions known to have experienced high levels of sulfate deposition due to acid rain in the 1950s, 1960s, and 1970s (Menz and Seip, 2004). Alternatively, the impairment of sulfate reduction rate might be an adaptation to a lower growth rate in adverse conditions to prevent the accumulation of reduced sulfur compounds (e.g. during nitrogen limitation; Koprivova et al., 2000). The identification of four individuals with the same weak allele of APR2 in a localized area in western Sweden close to the town of Härnösand is also consistent with this allele being locally adaptive. Although these observations are intriguing, more work is needed to establish if these weak alleles of APR2 are adaptive, and if so, to what. The availability of these newly identified natural alleles of APR2 that encode enzymes varying across a broad range of catalytic capacity provides a unique opportunity to better understand the relationship between structure and function in this important enzyme. This potential is clearly highlighted by the unpredicted behavior of the enzymes encoded by the APR2 Hod and Swedish alleles. The APR2 Hod allele encodes an enzyme with a polymorphism that, based on our current knowledge, should be in the APS-binding loop. This polymorphism reduces the activity of the APR2 Hod enzyme; however, it does not affect the enzyme's binding affinity for APS. The Swedish APR2 allele contains a polymorphism that produces a nonconservative amino acid change within the PAPS reductase domain of the enzyme. This change reduces the catalytic efficiency of the enzyme by over 1,000-fold, even though, based on our current understanding, this polymorphism should be in a region of secondary structure that has no catalytic activity. Overall, our data support the model that reduced-function or loss-of-function alleles of APR2 drive the elevated accumulation of total leaf sulfur (primarily as sulfate) by reducing the flux of sulfur from inorganic sulfate into reduced organic forms (e.g. Cys and GSH).
This limitation in the assimilation of sulfur results in the accumulation of the pathway substrate, sulfate. This is consistent with previous studies showing a major role of APR in the control of flux through the sulfate assimilation pathway (Vauclare et al., 2002;Scheerer et al., 2010). It is also possible that the reduction in sulfate assimilation rate may lead to the signaling of sulfur deficiency, giving rise to further elevated uptake and accumulation of sulfate. However, we do not observe an increased sulfate uptake rate in Hod. Additionally, we observe APR2 mRNA levels in Hod to be similar to those in Col-0 when growing in sulfate-replete conditions. Since the level of APR2 mRNA is normally strongly induced by sulfur starvation (Gutierrez-Marcos et al., 1996), this suggests that Hod is not experiencing sulfur deficiency. This indicates that sulfur deficiency signaling in Hod may not play a key role in its elevated sulfate accumulation. Intriguingly, the reciprocal grafting experiments established that reduced APR2 function solely in the leaves is sufficient to achieve the elevated total leaf sulfur concentrations observed in the Hod accession. Even though APR2 is known to be active in both roots and leaves, the activity in the leaves is known to be much higher (Koprivova et al., 2000). The fact that sulfur accumulates only in plants with a loss of APR2 activity in the leaves, therefore, is consistent with our proposed role of the loss of function of APR2 in high total leaf sulfur in Hod. It is clear from this and previous studies (Loudet et al., 2007;Koprivova et al., 2013) that limitation of the catalytic activity of the ATP sulfurylase or APS reductase, enzymes that work sequentially to reduce sulfate to sulfite, is a significant source of natural variation in sulfate accumulation in Arabidopsis. Furthermore, for those accessions with high APR activity and high leaf sulfate, we would propose that they have a reduced-function allele of ATPS1 (Koprivova et al., 2013). However, it is also evident that variation in leaf sulfate accumulation cannot always be explained in this way. The four western Swedish accessions (Lov-1, Lov-5, Fal-1, and Tfa-08) share the same weak allele of APR2, and all have similarly low APR activity. However, Tfa-08 accumulates significantly more leaf sulfate than the other three Swedish accessions, suggesting that these accessions contain other loci that limit sulfate accumulation. Making crosses between Tfa-08 and these other lower sulfate-accumulating accessions with weak alleles of APR2 could provide a way to discover new genes and alleles involved in sulfur homeostasis in plants. Unlike animals, plants do not have specific mechanisms for incorporating selenium into the amino acids seleno-Cys and seleno-Met. Rather, selenium in plants is thought to be metabolized nonspecifically via the sulfate assimilation pathway (for review, see Sors et al., 2005b). Heterologous and ectopic expression studies in plants of genes encoding ATP sulfurylase and APS reductase singly and together support this conclusion (Pilon-Smits et al., 1999;Sors et al., 2005a), although the role of these genes in tolerance to elevated selenate is less clear (Hatzfeld et al., 1998;Pilon-Smits et al., 1999;Sors et al., 2005a). 
Our observation that elevated total leaf selenium in Arabidopsis in Hod is caused by a reduction in function of APR2 provides independent, nonheterologous, genetic evidence supporting these earlier reports that APR activity is required for selenium assimilation in plants. Uncovering this new role for natural genetic variation in APR2 opens up a new avenue for the exploitation of natural variation in sulfur metabolism for the manipulation of dietary selenium concentrations in crop plants.

Plant Materials and Growth Conditions

Arabidopsis (Arabidopsis thaliana) plants used for elemental analysis by ICP-MS were grown in a controlled common garden with a photoperiod of 10 h of light (90 µmol m⁻² s⁻¹)/14 h of dark, humidity of 60%, and temperature ranging from 19°C to 22°C. Detailed information for planting and management was the same as described previously (Lahner et al., 2003). Plants used for the measurement of sulfate content, APR activity, and sulfur flux were grown for 2 weeks on vertical plates with Murashige and Skoog medium without Suc supplemented with 0.8% agarose. The plates were placed in a controlled-environment room at 20°C under a 16-h-light/8-h-dark cycle. Plants for expression analysis of APR2 were grown in axenic conditions as described previously. Briefly, sterilized seeds were sown on one-half-strength Murashige and Skoog medium (Sigma-Aldrich) solidified with 1.2% agar containing Suc. Plates were placed at 4°C for 3 d to achieve optimum germination and then transferred into a growth chamber with a photocycle of 16 h of light (90 µmol m⁻² s⁻¹)/8 h of dark and temperature of 22°C for another 3 weeks.

Elemental Analysis

Total leaf elemental concentrations were determined as described previously (Lahner et al., 2003). In brief, one to two leaves were harvested from 5-week-old plants with a sharp knife and clean plastic tweezers. Leaves were rinsed three times with ultrapure water (18 MΩ) and placed at the bottom of Pyrex digestion tubes. Samples were subsequently dried at 92°C for 20 h. Dried samples together with analytical blanks and standard reference material (National Institute of Standards and Technology, Standard Reference Material 1574) were digested with 0.7 mL of concentrated nitric acid (OmniTrace; VWR Scientific Products) with gallium added as an internal standard. After completing digestion at 118°C, all tubes were diluted to 6 mL with 18-MΩ water and analyzed by ICP-MS (Elan DRCe; PerkinElmer) for lithium, boron, sodium, magnesium, phosphorus, potassium, calcium, manganese, iron, cobalt, nickel, copper, zinc, arsenic, selenium, molybdenum, and cadmium. The raw data were normalized using a previously described method (Lahner et al., 2003). Unnormalized data have been deposited on the iHUB site (previously known as PiiMS; Baxter et al., 2007) for viewing and download at http://www.ionomicshub.org.

Genome-Wide and Targeted Association Mapping

GWA analyses were performed using the MLMM package (Segura et al., 2012) and SNP data from the tiling array data from call method 75 (Atwell et al., 2010) comprising just over 214,000 SNPs. Phenotypic data consisting of measurements of sulfur and selenium accumulation were obtained from the iHUB site (http://www.ionomicshub.org) as data set doi: 10.4231/T9H41PBV (Nordborg 360 HapMap) and originate from 331 of the 360 accessions selected previously to represent the species-wide diversity of Arabidopsis (Baxter et al., 2010; Platt et al., 2010).
Leaf tissue was analyzed for sulfur and selenium accumulation by ICP-MS analysis (Lahner et al., 2003). MLMM optimal models were achieved at the first iteration, and inclusion of the top SNP from this first iteration reduced the P values of the remaining SNPs and also reduced the fit of those at APR2. Thus, controlling for the genetic effects of the top-associated SNP did not improve the fit of SNP segregation to the phenotypic data. The optimal outputs, therefore, are similar to what would be obtained by ordinary mixedmodel association mapping. For targeted, candidate gene, association analysis, SNP data were restricted to consider only the SNPs within APR2 (The Arabidopsis Information Resource 10 build; http://www.arabidopsis.org/servlets/ TairObject?type=locus&name=At1g62180) or to data within 100 kb of DNA centered on the APR2 gene (chromosome 1 centered at 22,976,566 bp in The Arabidopsis Information Resource 10). Seven SNPs were found in the APR2 gene and 245 were identified within 100 kb of DNA. Map-Based Cloning The F2 mapping population was derived from an outcross of Arabidopsis accessions Hod and Col-0. Rough mapping was performed using SNP tiling array-based XAM as described previously (Becker et al., 2011;Chao et al., 2012). In brief, 328 F2 individuals were phenotyped by ICP-MS and ranked by leaf sulfur concentrations. The 57 individuals with the highest leaf sulfur concentration and the 61 individuals with the lowest leaf sulfur concentration were pooled separately. Genomic DNA from each pool was extracted using a DNeasy plant mini kit (Qiagen) and labeled using the BioPrime DNA labeling system (Invitrogen Life Technologies). The labeled DNA from each pool was separately hybridized to the Affymetrix SNP tiling array Atsnptile 1. The signal intensities for all probes were extracted and spatially corrected using R scripts written by Borevitz et al. (2003). The original CEL files containing raw signal intensity data used in this study have been submitted to the Gene Expression Omnibus under accession number GSE61922. Polymorphic SNPs between the two parents were used for linkage mapping. Based on signal intensity differences among four probes for each SNP, antisense and sense probes for two alleles, the allele frequency differences between the two pools were calculated for each SNP. The whole process was performed using R scripts available at http://ars.usda.gov/mwa/bsasnp. The XAM generated a rough mapping position for the causal gene, so the candidate region was further narrowed using PCR-based markers. Multiple PCR-based CAPS and simple sequence length polymorphism markers were developed (Supplemental Table S2). All 328 F2 individuals used for XAM were genotyped at these markers, and recombinants between the markers were selected for further analysis. The F2 recombinants with high leaf sulfur were used directly to narrow the mapping interval, while the recombinants with low leaf sulfur content were used for determination of the causal region in the F3 generation. Fine-mapping was performed in a similar manner using an enlarged F2 population of 1,084 individuals. Sequencing of Candidate Genes and Screening of APR2 Variants The genomic region of Hod APR2 was sequenced using overlapping fragments amplified by PCR as described previously . First, overlapping DNA fragments (0.7-1.2 kb) covering the APR2 genomic region, including the promoter, gene body, and 39 terminus, were amplified using KOD Hot Start DNA polymerase (Toyobo) using Hod genomic DNA as the template. 
Specific primers for the reactions were designed using Overlapping Primersets (http://pcrsuite.cse.ucsc.edu/Overlapping_Primers.html) and are listed in Supplemental Table S3. Each fragment was sequenced in two directions using the same primers for amplification. The sequences were assembled using Seq-Man Lasergene software (DNASTAR; http://www.dnastar.com), and the assembled contigs was aligned using the Col-0 APR2 sequence as a reference to identify polymorphisms between Hod and Col-0. The DNA sequences of APR2 variants were obtained from the database containing the genome sequences of 855 Arabidopsis accessions (http://signal. salk.edu/atg1001/3.0/gebrowser.php.). First, the predicted protein sequence of APR2 in each of the 855 accessions was extracted. Sequences were copied into a Microsoft Word document. Each sequence was separated by a paragraph marker, and each amino acid in the same sequence was separated with a tab using the Replace function. These data were copied into Microsoft Excel. In the Excel file, each row represented one APR2 amino acid sequence of an accession, each cell represented one amino acid of the protein, and each column represented a position of the amino acid. Positions at which at least one amino acid varied were identified using the Filter function. Candidate hypofunctional APR2 variants were further screened for based on the presence of variant amino acids with conserved domain. Plant Transformation In order to complement the high leaf sulfur phenotype of Hod, a binary vector for the expression of the Col-0 APR2 fragment was constructed. First, a 4.4-kb DNA fragment containing the APR2 promoter, gene body, and 39 terminus was amplified from Col-0 genomic DNA using KOD Hot Start DNA polymerase (Toyobo) using specific primers listed in Supplemental Table S3. This fragment was cloned into the pCR-XL-TOPO vector (Invitrogen Life Technologies) for DNA sequencing, followed by recombination into the pCAMBIA1301 binary vector using SalI and BamHI restriction enzymes. This vector was transformed into Agrobacterium tumefaciens strain GV3101. Transgenic transformation of Arabidopsis was achieved using this transgenic A. tumefaciens strain following the floral dip method (Clough and Bent, 1998). Positive transgenic lines were identified using one-half-strength Murashige and Skoog medium solidified with 0.7% agar containing 50 mg mL 21 hygromycin and 1% Suc. APS Reductase Activity APS reductase activity was determined as the production of [ 35 S]sulfite and assayed as acid volatile radioactivity formed in the presence of [ 35 S]APS and GSH as the reductant (Loudet et al., 2007). The protein concentrations were determined using a Bio-Rad protein kit with bovine serum albumin as a standard. Recombinant APR2 proteins were prepared and analyzed as described by Loudet et al. (2007). In short, the APR2 coding regions were amplified from RNA isolated from the corresponding accessions, cloned into pET14b plasmids, and their identity was verified by sequencing. The proteins were purified at 4°C from triplicate 100-mL cultures of Escherichia coli BL21(DE3) grown overnight at 30°C using Ni 2+ affinity chromatography. The APR reaction mixture contained 5 mg of recombinant APR2 from Sha, Hod, and Lov-5 or 4 ng of enzyme from Col-0 and Bay-0, 50 mM Tris-HCl, pH 9, 500 mM MgSO 4 , and varying concentrations of [ 35 S]APS (specific activity, 50 Bq nmol 21 ; 1-150 mM with 10 mM GSH) and GSH (1-100 mM with 75 mM APS) in a volume of 250 mL and was incubated for 30 min at 37°C. 
For the calculation of kinetic parameters, the data were linearized according to Lineweaver and Burk.

Leaf Sulfate Quantification and Analysis of Sulfate Uptake and Assimilation

Leaf sulfate contents were determined by ion-exchange HPLC as described by Koprivova et al. (2013). Approximately 50 mg of plant tissue was homogenized and extracted in 1 mL of sterile water for 1 h at 4°C. The extracts were heated for 15 min at 95°C and centrifuged for 15 min at 13,000 rpm. Twenty microliters of the extracts was analyzed on an IC-PAK Anion HR 4.6 × 75-mm column (Waters) using a lithium borate/gluconate eluent at 1 mL min⁻¹ in isocratic mode and a Waters 432 conductivity detector. Sulfate uptake and sulfur flux through sulfate assimilation were measured as incorporation of 35S from [35S]sulfate into thiols and proteins essentially as described by Mugford et al. (2011). The seedlings were transferred onto 24-well plates containing 1 mL of Murashige and Skoog nutrient solution adjusted to a sulfate concentration of 0.2 mM and supplemented with 5.6 µCi of [35S]sulfate (Hartmann Analytic) to a specific activity of 1,860 Bq nmol⁻¹ sulfate and incubated in light for 4 h. After incubation, the seedlings were washed three times with 2 mL of nonradioactive nutrient solution, carefully blotted with paper tissue, weighed, transferred into 1.5-mL tubes, and frozen in liquid nitrogen. Total 35S accumulation in seedlings was used to calculate the sulfate uptake rate. The quantification of total 35S in plant material and in different thiols and proteins was performed exactly as described by Mugford et al. (2011).

Reciprocal Grafting Experiment

The grafting of Arabidopsis plants was performed as described previously. On day 7 after grafting, graft unions were examined with the stereoscope. Healthy grafted plants without any adventitious root were transferred to potting mix for a further 4 weeks of growth in a controlled environment as described above. Graft unions were again examined after leaf samples were harvested for ICP-MS analysis, and grafted plants with adventitious roots or without a clear graft union were removed from the final analysis.

Expression Analysis of APR2

Total RNA was extracted using the TRIzol Plus RNA Purification kit (Invitrogen Life Technologies) according to the manufacturer's protocol. First-strand cDNA was synthesized using the SuperScript VILO cDNA Synthesis Kit (Invitrogen Life Technologies). First-strand cDNA was used as a template for subsequent qRT-PCR. Primers for APR2 quantification were designed using Primer Express Software version 3.0 (Applied Biosystems) such that the reverse primer spanned the third exon-exon junction. qRT-PCR was performed using a real-time PCR system (ABI StepOnePlus; Applied Biosystems) with SYBR Green PCR Master Mix (Applied Biosystems). The qRT-PCR results were analyzed as described previously (Livak and Schmittgen, 2001).

Statistical Analyses

To calculate the broad-sense heritability, the environmental variance (V_E) was first estimated based on replicates of each reference and tested accession, as V_E = (1/n) Σ_{i=1}^{n} V_i, where V_i is the phenotypic variance within the individuals of the i-th accession and n is the number of accessions tested. The genotypic variance was calculated as V_G = V_P − V_E, where V_P is the phenotypic variance of the whole GWAS population. The broad-sense heritability was then calculated as H² = V_G/V_P. We used normalized data across trays for the above estimates.
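To make the above calculation concrete, the following is a minimal Python sketch of the broad-sense heritability estimate. The data layout (a mapping from accession name to replicate measurements), the function name, and the toy values are illustrative assumptions made here, not part of the original analysis pipeline.

import numpy as np

def broad_sense_heritability(phenotypes):
    """H^2 = V_G / V_P estimated from replicated accession measurements.

    phenotypes: dict mapping accession name -> replicate values
    (e.g. normalized leaf sulfur concentrations).
    """
    # V_E: mean of the within-accession (replicate) variances.
    v_e = np.mean([np.var(vals, ddof=1)
                   for vals in phenotypes.values() if len(vals) > 1])
    # V_P: variance over all individual measurements in the population.
    all_values = np.concatenate([np.asarray(v, dtype=float)
                                 for v in phenotypes.values()])
    v_p = np.var(all_values, ddof=1)
    # V_G = V_P - V_E and H^2 = V_G / V_P.
    return (v_p - v_e) / v_p

# Toy example with three hypothetical accessions and three replicates each.
data = {
    "Col-0": [10.1, 9.8, 10.3],
    "Hod":   [14.9, 15.4, 15.1],
    "Sha":   [11.0, 11.4, 10.7],
}
print(f"H^2 = {broad_sense_heritability(data):.2f}")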
ANOVA was performed in Microsoft Excel using the Anova: Single Factor function of the Data Analysis Tools. Student's t test was also performed in Microsoft Excel using the TTEST function assuming equal variances.

Supplemental Data

The following materials are available in the online version of this article.

Supplemental Figure S1. Genome-wide association analysis of leaf sulfur concentration across 349 Arabidopsis accessions.
Supplemental Figure S2. Genome-wide association analysis of leaf selenium concentration across 349 Arabidopsis accessions.
Supplemental Figure S3. Sulfur uptake in natural accessions and apr2 alleles.
Supplemental Table S1. Associations of APR2-linked genotypes with leaf sulfur and selenium.
Supplemental Table S3. Information on primers used in this study.
2018-04-03T05:41:12.386Z
2014-09-22T00:00:00.000
{ "year": 2014, "sha1": "8638af560bda7a5e842614f5f1e505b6e20b7a40", "oa_license": "CCBY", "oa_url": "http://www.plantphysiol.org/content/plantphysiol/166/3/1593.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "74df2a88ec8c75262cc721e638182b014312fed6", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
124693246
pes2o/s2orc
v3-fos-license
Impact of artifact correction methods on R-R interbeat signals to quantifying heart rate variability ( HRV ) according to linear and nonlinear methods RINCON SOLER, A. I. Impact of artifact correction methods on R-R interbeat signals to quantifying heart rate variability (HRV) according to linear and nonlinear methods.. 2016. 50 f. Dissertation (M.Sc. Postgraduate program in Physics applied to Medicine and Biology) Faculty of Philosophy, Sciences and Literature, University of São Paulo, Ribeirão Preto SP, 2016. In the analysis of heart rate variability (HRV) are used temporal series that contains the distances between successive heartbeats in order to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal analysis, which can be a ected by di erent types of artifacts leading to incorrect interpretations in the analysis of the HRV signals. Classic approach to deal with these artifacts implies the use of correction methods, some of them based on interpolation, substitution or statistical techniques. However, there are few studies that shows the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of some linear and non-linear correction methods on HRV signals with induced artefacts by quanti cation of its linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats measured using the technique of telemetry were used to generate real heart rate variability signals without any error. In these series were simulated missing points (beats) in di erent quantities in order to emulate a real experimental situation as accurately as possible. In order to compare recovering e ciency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction xi xii method was known through the results obtained after the measurement of the mean value of the series (AVNN), standard deviation (SDNN), root mean square error of the di erences between successive heartbeats (RMSSD), Lomb's periodogram (LSP), Detrended Fluctuation Analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that, at low levels of missing points the performance of all correction techniques are very similar with very close values for each HRV parameter. However, at higher levels of losses only the NPI method allows to obtain HRV parameters with low error values and low quantity of signi cant di erences in comparison to the values calculated for the same signals without the presence of missing points. Key-words: 1. Heart Rate Variability. 2. Artifact correction. 3. Linear and nonlinear methods. 4. Biomedical signal processing. support, and for always be with me and give your smile in the hardest moments. Thank for always be you. I wish to express my huge gratitude to my tutor and friend Prof. Luiz Otavio Murta, PhD, who gave me the opportunity to start this academic stage in Brazil and who was always willing to discuss and clarify any doubts that arose during this process. Thank you also for sharing your knowledge and experience with humility and joy. I am very grateful to Luiz Eduardo V. Silva, PhD, for to be always available to talk and discuss about the wonderful world of the Biomedical Signal Processing. 
Thank you for guiding me and giving me important advice throughout this time so that I could always obtain the best results. A heartfelt thanks to my colleagues and friends of the research group CSIM for sharing every day with me, for the excellent talks, and for always making me feel comfortable even away from home. I wish to thank my friend Eduard A. Hincapié, M.Sc., who was always available to discuss ideas about physics or simply available to talk.

Introduction

The electrocardiogram (ECG) is a measure of the electrical activity of the heart that can be obtained from electrodes placed on the skin, which allows the description of the depolarization processes of the atria and ventricles [1]. From ECG records can be extracted measurements between successive heartbeats known as RR intervals, which are widely used to assess heart rate variability (HRV). HRV and its changes have been associated with the pathophysiology of some cardiac failures and the potential risk of heart disease associated with obesity, epilepsy, diabetes, hypertension, and sudden death [2,3,4]. Most problems related to HRV signals concern spurious interbeat intervals, which lead to misinterpreted results. For human HRV signals, the major problem is related to ectopic beats, atrial fibrillation, sinus tachycardia, sinus bradycardia, ventricular tachycardia, and some others [5,6]. However, in the experimental field (rats and mice), it is very common to find poor-quality ECG signals, related to animal movements, poorly fastened electrodes, power source noise, and other influences, resulting in HRV signals with a great number of missing beats. In order to solve the problems related to the presence of artifacts in HRV signals, different correcting methods have been proposed. Some of the most common artifact correction methods used in R-R time series involve deletion and interpolation of the problematic segments [7,8,5]. However, some other methods have been proposed for pre-processing these time series and eliminating artifact interference.
Some of this methods correspond to: Comparison and merging [9], predictive autocorrelation method [10], non-linear predictive interpolation [7], exclusion of R-R interval segments with divergent duration [5], impulse rejection [11], integral 1 -Introduction pulse frequency model (IPFM) [11], sliding window average lter [12] and threshold ltering using wavelet [12,13]. • To establish the maximum missing beats that might be present in a signal in order to produce reliable analysis. • To nd the better correcting method that works for dierent combinations of length and rate missing points. • To determine the importance of correcting methods on analysis methods (linear and nonlinear). Finally, this master thesis is organized as follows: In chapter 2 a background of heart rate variability (HRV) and artifacts that aect it are presented. Heart rate variability (HRV) is dened as the inter-beat variability between successive heart beats in a determined time interval. This variability is mediated directly by the polarization and depolarization process of the sinus node (SN), which at the same time is regulated by the interaction of the sympathetic and parasympathetic branches of the autonomic nervous system (ANS). An increase in the parasympathetic activity implies a heart rate (HR) diminution mediated by liberation of acetylcholine; while, an increase of the HR is a direct consequence of an increase in the sympathetic activity that in this case is mediated through norepinephrine liberation on the heart beat regulatory mechanisms [2,14,4,5]. Then, it can be established that the dynamical balance between sympathetic and parasympathetic activity has strong inuence on HR causing oscillations around its average value, in other words the HRV phenomenon. In this order of ideas, HRV is used as noninvasive method to evaluate the sympathetic and parasympathetic functions of the ANS and the cardiovascular regulation [15,16,17]. In this eld of study many pathologies, not necessarily from cardiovascular origin, and physiologic factors that directly disturbs the regulation of the ANS. These situations cause constant and abrupt changes in the HR and the HRV. In this way, HRV can be used to study many types of diseases as: Myocardial Infarction [5,16,7], sudden cardiac death [7,17,5], ventricular arrhythmias [5,18,19], congestive Figure 2 [27,28], pattern recognition [29] and Wavelet transform [12,30]. Although the accuracy of these methods, there is no standard methodology for the R-peak detection phase, and this part of the processing is always left to the investigator's choice. With all R peaks detected the next step is to calculate the time dierence between two consecutive marks (R peaks) in order to generate a time series of RR intervals. After calculate these dierences for the entire signal the obtained results is a discrete time series knowing as RR tachogram or Heart Rate Variability signal. It is important to note that this series of variability lacks uniformity in the distance between the points due to temporary dierences between successive heartbeats, a feature that reects the interaction of the sympathetic and parasympathetic system on heart activity. to the average value of all RR intervals (AVRR), the standard deviation of all RR intervals (SDNN), the square root of the mean squared dierences of the successive RR intervals (rMSSD), the percentage of dierences between adjacent N-N intervals that are by more than 50ms (pNN-50). 
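As an illustration of the time-domain indices just listed, the sketch below computes them from a plain array of RR intervals. It is a minimal example and assumes the intervals are given in milliseconds; the function name and the synthetic tachogram are illustrative, and the 50-ms pNN50 threshold follows the definition above. This is not the implementation used in this work.

import numpy as np

def time_domain_indices(rr_ms, nn_threshold_ms=50.0):
    """AVRR, SDNN, rMSSD and pNN50 from an array of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)                          # successive differences
    return {
        "AVRR":  rr.mean(),                      # mean RR interval
        "SDNN":  rr.std(ddof=1),                 # overall variability
        "rMSSD": np.sqrt(np.mean(diffs ** 2)),   # beat-to-beat variability
        "pNN50": 100.0 * np.mean(np.abs(diffs) > nn_threshold_ms),
    }

# Short synthetic human tachogram (ms), for illustration only.
print(time_domain_indices([812, 790, 805, 830, 760, 795, 810, 825, 840, 815]))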
It is important to state that previous studies have shown that these time parameters are highly correlated with high frequency variations in heart rate (HR) [5,31]. HRV signals exhibit an oscillatory behavior in which components of high and low frequency as a result of cardiovascular modulations performed by the sympathetic and parasympathetic nervous systems are mixed. Thus, methods of analysis in the frequency domain are used in order to quantify this type of information from the estimate of the power spectrum as a function of the frequencies contained in the signal [5,32]. The calculation of the power spectrum or the power spectral density (PSD) of the HRV signal can be performed using parametric and nonparametric methods. Parametric methods usually estimate power spectrum through autoregressive models (AR) applied to the signal, while nonparametric methods using algorithms based on Fourier transform: FFT and periodograms. However, these methods require that the input signal will be evenly sampled, it means that all samples will be equally spaced in time. Then, in order to fulll this requirement, it is necessary to perform a process of resampling on the RR series before make the spectral estimations. For HRV series it is recommended to make a cubic spline interpolation over the data using 4 Hz as a value for the re-sampling frequency (f r−s ) [5,32,33]. To avoid the resampling process over the HRV signals and obtain an PSD estimate directly from the unevenly data, the algorithm proposed by Lomb in 1976 [34] and modied by Scargle in 1982 [35] can be used. This method estimates the PSD performing a normalization of sine or cosine functions independently in each sample of the input signal, allowing to retain all frequency characteristics and avoiding the induction of errors due to the interpolation process. Once the (PSD) is estimated, it is possible to quantify reliable information in the frequency domain integrating the spectrum in frequency bands previously dened. Four frequency bands, directly related to some physiological phenomena, have been considered as standard values in the frequency analysis of the HRV [5,36,37]. The values of these frequency bands correspond to: • VLF → Power in the very-low frequency range: 0.003-0.04 Hz for humans and 0.00-0.20 Hz for rats. • LF → Power in the low frequency range: 0.04-0. 15 Hz for humans and 0.20-0.75 Hz for rats. • HF → Power in the high frequency range: 0.15-0.40 Hz for humans and 0.75-3.00 Hz for rats. • LF/HF → The ratio of the power in the low frequency range to that in the high frequency range. Database Real ECG signals with a length greater that ninety minutes were analyzed in order to generate an inter-beat interval (RR) time series data base. These signals were recorded with a sampling frequency of 1.000 Hz in continuous mode from three dierent groups of animals (Healthy rats, hypertensive rats and heart failure rats) using a telemetry measurement system (PowerLab system model ML870) associtaded directly to LabChart Pro Software version 8.0 from ADInstruments. In this process a transmitter device is inserted surgically into the animal and the record signal is done remotely. These procedures of measurement, register and ltering were performed using LabChart Pro Software version 8.0. Next step involves the QRS complexes detection, and more specically all R peaks on each ECG signal in order to quantify the RR distances and generated the RR time series. 
This process was carried out using the processing modules incorporated in the LabChart Pro Software. After processing all ECG signals and detect the R peaks the next step was extract the inter-beat interval distances, also known as RR intervals, and generate a set of time series that will be used further in our analysis of artifact correction and quantication of the Heart Rate Variability (HRV) parameters [6,17,33]. Using the RR module from LabChat Pro software this process could be done for each ECG signal analyzed in the previous stage. The same procedure was carried over thirty-seven (37) dierent signals. Finally, all RR time series generated with this procedure previously described above were visually inspected with the aim to nd segments without missing points with a length, of at least, ten-thousand (10.000) points. As a result, it was obtained sixteen series from the three dierent group of subjects, which were labelled from Rat-01 until Rat-16. Correction methods In this study, we have selected the ve most common artefact correction methods found in the literature about HRV analysis. These methods usually have been implemented to deal with problems like ectopic beats, noise and non-uniform sampling of the RR time series. However, it is well known that deletion, linear interpolation and cubic spline interpolation methods have been used in some investigations to correct missing beats in articial RR time series [4,16,19]. In this work, these methods will evaluate the accuracy of this techniques working on the missing points case on real signals. A brief description of each correction method is presented to follow. Deletion (DEL) Deletion method simply removes the missing RR intervals in the time series and replaces each removed point(s) shifting the following RR intervals to the place of the deleted ones. After that, the corrected time series are shorter than the original. Figure corresponds to a schematic representation of the deletion method performance when the signal presents some missing points. B Number of missing RR intervals to delete. Linear Interpolation (LI) Interpolation means to compute points or values between ones that are known using the surrounding data 1 . In this order of ideas, Linear interpolation is the simplest method of interpolate values using straight line segments. Formally, given two known points located at (x 0 , y 0 ) and (x 1 , y 1 ), and knowing that the linear interpolation is a straight line between these points, we can nd for any x value in the range (x 0 , y 0 ) their respective and unknown pair y using the equation 3.1. These process can be better understood looking the gure 3.3. Cubic Spline Interpolation (CI) The idea of a spline is basically joint two consecutive elements in a data series using a specic mathematical function. This function is used on each interval between all data points when the number of elements in a series is greater than two. The simplest spline is obtained connecting the data with a straight line as can be seen in section 3.2.2. The next simplest type of function is quadratic, and so on. In this order of ideas, the cubic spline interpolation is a method to joint/determine points in a data series using dierent cubic functions on each interval between data points. In general, this cubic functions correspond to third-order polynomials with the stipulation that the curve obtained be continuous and smooth. 
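The deletion and linear interpolation methods described above can be sketched in a few lines. In this illustrative example, missing beats are assumed to be marked as NaN in the RR array; this marking convention and the function names are assumptions made for the example, not the implementation used in this work.

import numpy as np

def correct_deletion(rr):
    """DEL: drop the missing intervals and close the gap."""
    rr = np.asarray(rr, dtype=float)
    return rr[~np.isnan(rr)]

def correct_linear(rr):
    """LI: replace missing intervals by straight-line interpolation
    between the nearest valid neighbours (as in equation 3.1)."""
    rr = np.asarray(rr, dtype=float).copy()
    idx = np.arange(rr.size)
    missing = np.isnan(rr)
    rr[missing] = np.interp(idx[missing], idx[~missing], rr[~missing])
    return rr

# Example series (ms, rat-like values) with three consecutive missing beats.
rr = [180.0, 182.0, np.nan, np.nan, np.nan, 181.0, 179.0]
print(correct_deletion(rr))
print(correct_linear(rr))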
The procedure for a cubic spline interpolation is to t a piece-wise function of the form: where S i is a third degree polynomial dened by equation 3.3, for i = 1, 2, · · · , n − 1. In this process the rst and second derivatives of these n − 1 equations are fundamental, and they are calculated as: It is very important that all previous calculations must meet the following properties: 1. The piecewise function S(x) will interpolate all data pints. To follow, it is presented a schematic representation of an cubic spline interpolation process over a generic curve. Moving Average Window (MAW) The Moving Average Window is an algorithm that calculates the unweighted mean of the last n samples in order to predict the next point in a data series. The parameter n is often called the window size, because the algorithm can be thought of as a window that slides over the data points. Equation shows how to implement the procedure explained before, where y[i] is the predicted point based on the x[j] previous samples. As an alternative, the group of points from the input signal can be chosen symmetrically around the predicted point as can be seen in the following example for a window size of n = 6 To achieve this it is just necessary to change the limits of the summation in equation 3.6. In this case the the window length needs to be suciently wide in order to have sucient points around predicted points and obtain a better estimate. It is important to know that symmetrical averaging requires that n be an odd number. In this research it has been used a symmetrical MAW with length width n = 8, it means that the average value for each point is calculated using four points at left and four at right. It is important to understand that this technique will average every point in the signal and not only the missing segments as occur with interpolation methods. modied Moving Average Window (mMAW) The modied Moving Average Window is a correction technique based on the moving average window method describes in 3.2.4, that uses a symmetrical moving average window only in the segments where the RR time series presents missing points. In this case the meaning objective is reduce at maximum the processing over the entire data series, and oer an alternative method to correct RR time series based on a well-known technique. Nonlinear Predictive Interpolation (NPI) The Nonlinear Predictive Interpolation (NPI) method is an algorithm designed by N. Lippman on 1994 [7], in order to solve the problem of ectopic beats present the analysis of an RR time series. It is able to perform corrections for single or sequences of ectopic beats with any length. In this research, the NPI method has been modied and used to correct the missing beats problem in inter-beat interval (RR) time series. To perform the NPI method over RR time series is necessary to follow this steps: 1. Scan forward RR time series until the rst missing point (RR interval) is found. 2. Dene the segment length to be replaced, beginning with the rst RR interval until the next RR intervals are found. The total amount of RR intervals to be replaced are called beats to ll (B), and becomes an input for the next steps. A sequence of M RR intervals immediately before and N RR intervals immediately following the missing segment are used to dene an (M + N ) ini element test array. The M and N values could be dierent between them and they must to be specied as input parameters for the NPI algorithm. In gure 3.5 can be seen an example of how the element test array is conformed. 
4. All available RR intervals are scanned, searching for segments of length [M + B + N] without any missing RR interval. The M and N RR intervals in these sequences are used to construct M + N element comparison arrays.
5. All element comparison arrays of length M + N found in the previous step are compared with the (M + N)_ini element test array using a Cartesian distance metric, and the closest matching array is stored.
6. From the closest matching comparison array determined above, the B RR intervals are extracted and used to replace the missing segment found in the first step.
7. The previous procedure is repeated in order to find more missing segments or until the end of the time series is reached.

Figure 3.5 legend: M, number of RR intervals before the missing segment; N, number of RR intervals after the missing segment; B, number of missing RR intervals to replace; [M+B+N], candidate segment taken from the sequence.

Linear methods, in the time and frequency domains, are widely used to quantify HRV; however, they have problems dealing with factors such as non-stationarity and different types of noise. In order to solve such problems, it is necessary to use nonlinear methods to achieve a comprehensive approach in the analysis of HRV.

AVRR

This parameter corresponds to the mean value of all RR intervals in a data series. The mean is a parameter of the distribution of a random variable, defined as a weighted average over its distribution. The AVRR is calculated as AVRR = (1/N) Σ_{i=1}^{N} RR_i (equation 3.9), where N is the total number of RR intervals in the time series.

SDRR

The standard deviation of RR intervals is a measure of the variability or dispersion of a data set. This is a global index that correlates strongly with the total power of the time series, often r > 0.9 [48]. SDRR is calculated as SDRR = sqrt[(1/(N−1)) Σ_{i=1}^{N} (RR_i − RR_mean)²], where RR_mean is the arithmetic mean of the values RR_i, defined by equation 3.9.

RMSSD

RMSSD is the root mean square of the differences between successive RR intervals, calculated as RMSSD = sqrt[(1/(N−1)) Σ_{i=1}^{N−1} (RR_{i+1} − RR_i)²] (equation 3.11), where N is the total number of RR intervals in the series. This parameter plays an important role in heart rate variability analysis and has been used in previous investigations as a significant indicator of both atrial fibrillation (AF) and sudden unexplained death in epilepsy (SUDEP) [31,48].

Lomb-Scargle Periodogram (LSP)

This method belongs to the frequency-domain set of techniques used to analyze heart rate variability (HRV). Introduced in 1976 by Lomb [34] and modified in 1982 by Scargle [35], the method is used to estimate the power spectral density (PSD) of an unevenly sampled signal. The Lomb method has an advantage over traditional methods based on the fast Fourier transform (FFT), because no re-sampling process is needed in order to create an evenly sampled representation, and in the evaluation of the power spectrum the data are weighted on a point-by-point basis instead of on a time-interval basis [33,49,50]. Below, a short mathematical description of Lomb's method is presented; specific details can be consulted in references [32,34,35,49,50]. The procedure consists in fitting a time series x(n), unevenly sampled at times t_n, by a weighted pair of cosine and sine waves, where each function is weighted by coefficients a and b, respectively. This fitting procedure must be performed over the N samples of x(n) obtained at times t_n and repeated for each frequency f. Equation 3.12 represents the fitting function used in this approach, where coefficients a and b must be determined at some point of the fitting procedure; a brief numerical sketch using an existing implementation is shown below, after which the derivation of these coefficients continues.
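As a practical aside, a hedged sketch of how such a periodogram can be computed with an existing implementation is given here. It assumes SciPy's lombscargle routine (which expects angular frequencies), a synthetic rat-like tachogram, and the rat frequency bands defined earlier; the function name, frequency grid, and toy data are illustrative and do not reproduce the processing used in this work.

import numpy as np
from scipy.signal import lombscargle

def lomb_band_powers(beat_times_s, rr_s, bands_hz):
    """Lomb-Scargle PSD of an unevenly sampled RR series plus band powers."""
    rr = rr_s - np.mean(rr_s)                  # remove the mean RR value
    freqs_hz = np.linspace(0.01, 3.0, 2000)    # covers VLF-HF for the rat
    # SciPy's lombscargle works with angular frequencies (rad/s).
    psd = lombscargle(beat_times_s, rr, 2.0 * np.pi * freqs_hz)
    powers = {}
    for name, (lo, hi) in bands_hz.items():
        mask = (freqs_hz >= lo) & (freqs_hz < hi)
        powers[name] = np.trapz(psd[mask], freqs_hz[mask])
    powers["LF/HF"] = powers["LF"] / powers["HF"]
    return powers

# Rat bands as defined earlier (Hz) and a synthetic ~6 Hz tachogram.
rat_bands = {"VLF": (0.01, 0.20), "LF": (0.20, 0.75), "HF": (0.75, 3.00)}
rr = 0.16 + 0.005 * np.random.default_rng(0).standard_normal(3000)
beat_times = np.cumsum(rr)
print(lomb_band_powers(beat_times, rr, rat_bands))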
Now, we t P to signal x using minimization of the squared dierence between them over all n samples (equation 3.13) and repeating this procedure for each frequency (f ) data. In order to nd the minimum error in the minimization process, the next step sets to zero the partial derivatives of equation (3.13) respect to coecients a and b, that is: and After evaluation of equations 3.14 and 3.15, and using some algebra over results; we obtain the following representations: At this points, it is introduced the special feature of Lomb's algorithm: For each frequency f, every sample located at times t n is shifted by an amount τ . Then, in equations 3.16 and 3.17 t n becomes t n − τ . In order to avoid errors introduced by the sine-cosine cross-terms, it can be chosen an optimal time shift value (τ ) that makes them zero. This value is set as: The value of variable τ is used in equations 3.16 and 3.17 in order to determine the form of a and b coecients for each frequency. Making the respective algebra in those equations, it is found that: Next step involves computation of the sum of squares of the sinusoidal signal form equation 3.12, this is P 2 (a, b, f, t − n), in order to obtain a representation that be proportional with the power spectrum S of x(n) as a function of f : As the shift term τ was introduced in the mathematical formulation of S, all cross-terms are set to zero following the previous denitions. Now, the expressions for the coecients a and b are replaced on equation 3.21 obtaining as result an expression for the power spectrum. This expression has the following form: In some cases the power spectrum expression from equation 3.22 is divided by 2 in order to get a power spectrum representation similar to the representaion obtained by the Fourier transform, or by 2σ 2 (σ 2 → variance of x(n)) in order to obtain the standard normalized power spectrum and determine the statistical signicance of the every peak in the spectral representation. Last representation corresponds to the modication of Lomb's algorithm made by Scargle in 1982 [35] and used by other authors to estimate the power spectral components in the heart rate variability (HRV) [32,33,47,49]. In the case of HRV it is well known that power spectrum covers a wide range To calculate DFA for a given a time series x(t) with t = 1, 2, · · · , N , the following steps should be performed: 2. The integrated time series (y(k)) is divided intro boxes of equal length n as shown in gure 3.6. On each box is calculated the local trend by tting a regression line y n (k) in this data segment. 4. This procedure is repeated for several dierent scales (all possible box sizes of length n) in order to provide a relationship between F (n) and the box size n. -Materials and Methods Typically, uctuations increases when the box size n increases. Now, if log(F (n)) increases linearly as a function of log(n), the time series follows a scaling law. Under such conditions the uctuation can be characterized by a scaling exponent α, the slope of the line relating log(F (n)) vs log(n). Dierent values of α from specic types of time series as presented in table 3.1. Scaling Exponent Description of the Signal 0 < α < 0.5 Small value followed more likely by a larger value and vice versa α = 0.5 Completely uncorrelated time series, that is, whitenoise 0.5 < α < 1.0 Small value followed more likely by a small value and large value followed more likely by a large value (correlated). log(F (n)) vs log(n) cannot be tted by only one scaling coecient (α). 
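The DFA steps above translate almost directly into code. The following is a minimal sketch assuming first-order (linear) detrending in each box and a simple list of box sizes; the function name and the white-noise test signal are illustrative, and a single scaling exponent is fitted, so the crossover case with two exponents is not handled.

import numpy as np

def dfa_alpha(x, box_sizes):
    """DFA: integrate, detrend in boxes, and fit log F(n) versus log n."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # step 1: integrated series
    fluctuations = []
    for n in box_sizes:
        n_boxes = len(y) // n
        resid = 0.0
        for b in range(n_boxes):                 # steps 2-3: local linear fits
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            resid += np.mean((seg - trend) ** 2)
        fluctuations.append(np.sqrt(resid / n_boxes))
    # step 4: alpha is the slope of log F(n) against log n.
    alpha = np.polyfit(np.log(box_sizes), np.log(fluctuations), 1)[0]
    return alpha, np.asarray(fluctuations)

# White noise should give alpha close to 0.5.
rng = np.random.default_rng(0)
alpha, _ = dfa_alpha(rng.standard_normal(5000), [4, 8, 16, 32, 64, 128])
print(f"alpha = {alpha:.2f}")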
This kind of situations conducts to a well known crossover phenomena, usually attributed to changes in the correlation properties of the time seris at dierent time or space scales, though it can also be a result of nonstationarities in the time series [8,53]. In this phenomenon the angular coecient of the tted line is altered from a given value n, from which it is necessary to make a new t of the values log(F (n)) vs log(n). After that, two scaling exponents are obtained in order to quantify the short-range (α 1 ) and long-range(α 2 ) correlations of the series. In practice, the algorithm to calculate the MSE follow tow steps: 1. Given a time series x i , i = 1, · · · , N , a coarse-graining process is applied over it. In this procedure, multiple coarse-grained time series (y (τ ) j ) are constructed by averaging the data points with non-overlapping windows of increasing length τ . Each element of the coarse-grained series, y To better comprehension of the coarse-graining process see gure 3.8. 2. Then, SampEn is calculated for each coarse-grained time series and plotted as a function of the scale factor τ . Symbolic Dynamics (SymDyn) Physiological time series usually shows complex structures which cannot be quantied or interpreted by linear methods due to the limited information about the underline dynamic system, whereas the nonlinear approach suers from the curse of dimensionality [14,60]. In order to solve such problems the Symbolic Dynamics (SymDyn) method was proposed. The method attempts to characterize the original time series in a simple and coarser symbolic notation capable to retain the essential dynamic characteristics of the original time series. During this process there is loss of information contained in the time series, however, dynamical features are retained by the new coarse-grained representation [45,46,61]. The conversion of a time series into a set of symbols begins dividing the original signal into two or more levels, depending on how many symbols you want to use. There are several methods to make this process of conversion but the most used protocol involves the use of the signal average or the standard deviation. Figure 3.9 shows an example of symbolic quantication for an articial signal. As can be seen in the gure 3.9 the levels are selected as A, B, C and D; nonetheless, numbers or any other representations can be used in order to dene a set of levels to quantify the signals. The next step involves the conversion of a time series into a symbol string and grouping these symbols in words of length L. A new word is always formed by stepping forward one step in the symbol string. An example of this process can be seen in gure 3.9, where the string formed by Statistical Tests In order to compare the results obtained after performing some experimental procedure the eld of statistics oers some useful techniques, that are based on the relationship between the data. In this order of ideas, sometimes it is better to use an independent test over the data set or use a paired test to know is our null hypotheses is rejected or not. As previously stated, there are many statistical methods to compare the properties of two or more groups; However, due to the nature of our data and the way in what the experiment was carried out, to follow the two comparison test used will be described. Paired t-test A paired t-test is normally used to compared tow groups or populations means, in where the observations of one population are paired with the observations of the other population. 
Examples of this type of situations are: students diagnostic test results before and after a particular module or course, or, a comparison of the eciency of some treatment applied on the same subjects for a period of time [63,64]. Let x and y datasets measured on the same test subjects before and after application of some treatment. Then, in order to perform a paired t-test to know if the the null hypothesis that the true mean dierence is zero, it means that there is not dierence induced by the treatment, the procedure is as follows: 1. Calculate the dierence (d i = y i − x i ) between the two observations on each pair, making sure you distinguish between positive and negative dierences. 2. Calculate the mean dierence, d. . Under the null hypothesis, this statistic follows a t-distribution with n − 1 degrees of freedom. 5. Use tables of the t-distribution to compare your value for T to the t n−1 distribution. This will give the p-value for the paired t-test. The Wilcoxon Signed Rank Sum Test The Wilcoxon signed rank sum test is a nonparametric alternative to the two sample paired t-test, and it is known to be part of the family of distribution free test. This method is used to test the null hypotheses that the median of a distribution is equal to some value, normally zero. It can be used: (a) in place of a one sample t-test, (b) in place of a paired t-test or (c) for ordered categorical data where a numerical scale is inappropriate but where it is possible to rank the observations [63,64]. To carried out the Wilcoxon signed rank sum test in the case of paired data, the correct procedure is: 1. State the null hypotheses, in this case that the median dierence (M ), is equal to zero. On our data, these statistical tests were applied according to the result obtained after applied a Shapiro Wilk normality tests over each corrected and uncorrected signal. If the normality was preserved after each correction procedure, then a paired t-test was used; but, if the normality was losing after each correction, then the Wilcoxon signed rank sum test was choice. Then, any of the statistical procedures were used following the steps described before with the statistical hypotheses that the median of the dierence of the paired data were zero, against that it was not zero. These procedure was carried out the fteen HRV indexes Missing points quantication The identication and quantication of all missing points was made by visual inspection of the 37 RR time series generated form the ECG recordings. The total length of each signal was used in this inspections. The results shows that maximum quantity of missing points was around 5% of the total signal length and the most representative types of losses can be dened as: one non-consecutive random beat, three consecutive random beats and ten consecutive random beats. This analysis also allow us to determine that these three set of losses are distributed as: • 1 non-consecutive beat = 79% • 3 consecutive beats = 11% • 10 consecutive beats = 10% Notice that the sum of these loss percentages must be equal at maximum quantity of losses determined before. Knowing that the maximum percentage of losses corresponds to 5% of the total signal length, it was selected another level of losses in order to know the correction methods performance in at least two dierent situations. xed to 2.5% of points in total signal length. 
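The statistical decision rule described above (a Shapiro-Wilk normality check followed by either a paired t-test or a Wilcoxon signed rank test) can be sketched with SciPy as follows. The function name, the significance threshold, and the toy SDNN values are assumptions made for this example.

import numpy as np
from scipy import stats

def compare_paired(reference, corrected, alpha=0.05):
    """Paired comparison of an HRV index before and after correction."""
    reference = np.asarray(reference, dtype=float)
    corrected = np.asarray(corrected, dtype=float)
    # Shapiro-Wilk normality check on both groups, as described above.
    _, p_ref = stats.shapiro(reference)
    _, p_cor = stats.shapiro(corrected)
    if p_ref > alpha and p_cor > alpha:
        result = stats.ttest_rel(reference, corrected)
        test = "paired t-test"
    else:
        result = stats.wilcoxon(reference, corrected)
        test = "Wilcoxon signed rank"
    return test, result.statistic, result.pvalue

# Toy SDNN values (ms) for the same eight signals, before/after correction.
sdnn_reference = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2, 6.1]
sdnn_corrected = [5.0, 4.7, 5.8, 5.6, 4.8, 5.5, 5.1, 6.0]
print(compare_paired(sdnn_reference, sdnn_corrected))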
Correction stage Signals with missing points were processed, each one, with every correction method described in chapter 3, and this corrected version was used as inputs of the HRV analysis stage. Time parameters calculated after application of each correction technique. From these results it can be observed a good corrections performed by, almost, all methods with nearly values in comparison with control group. At the same time we note that MAW method has appreciable dierences for SDRR and RMSDD parameters but maintain a very good correlation on AVRR in comparison with the other techniques. These dierences may be attributed to the process of smoothing carried out when the moving average is applied over the data, producing a drastically variance reduction. Frequency domain parameters after the application of correction methods. Nonlinear Domain Parameters Nonlinear parameters for control group and corrected time series were calculated using Detrended Fluctuation Analysis (DFA), MultiScale Entropy (MSE) and Symbolic Dynamics (SymDyn) methods, all described previously in chapter (3). In DFA analysis the short (α 1 ) and long term (α 2 ) indices were calculated with a xed crossover point equal to n = 10. This value is result of previous analysis in which more than 10 signals were analyzed by an minimization error algorithm specically design to nd the better crossover point. On the other hand, MSE measures were performed using as input parameters a tolerance factor (r) value of 0.15 the standard deviation of the time series, and a maximum number of scales (τ ) equal to 20. Finally, symbolic dynamic analysis was carried out using 6 levels (ζ = 6)to quantify the time series, words of length L = 3 and non-overlapped windows with 300 points. On each window was performed the symbolic analysis and the median of the patterns were used to report the behavior of the analyzed data. These input parameters for DFA, MSE and SymDyn were kept for all time series analyzed in this work. Additionally it was found that these types of losses represent, respectively, 79%, 11% and 10% of the total amount of losses initially found (5%). † Using two dierent levels of loss was possible to determine that the moving average window (MAW) method is not suitable to make corrections in inter-beat interval (RR) time series. However, a slightly modied version of it gives very close results in comparison to those found using deletion and classical methods based on interpolation. † Correction procedures for the low level of losses using DEL, LI, CI and mMAW The procedure to calculate the SampEn could be describe as follows: Assume a time series of length N , X = x 1 , x 2 , · · · , x N . Now, dene a template vector of length m of the form U m (i) = x i , x i+1 , x i+1 , · · · , x i+m−1 and a distant function d[U m (i), U m (j)], (i = j), that could be the Chebyshev distance function. The next step implies to count the number of vector pairs in template vectors of length m and m + 1 having a d[U m (i), U m (j)] ≤ r, and denoted by B and A respectively. We dene then the sample entropy as: where, • A is the number of template vector pairs having d[U m+1 (i), U m+1 (j)] < r of length m + 1. • B is the number of template vector pairs having d[U m (i), U m (j)] < r of length m. From the denition is easy to see that A will always have a value smaller or equal to B. Therefore, SampEn(m, r, τ ) will be always either be zero or positive value. In the case of HRV the value of m is set to be 2 and the value of r to be 0.15 × std. 
Here std corresponds to the standard deviation of the whole dataset. An example taken from [65], using a simulated series and presented in figure B.1, illustrates the procedure for calculating the sample entropy (SampEn) for the case in which the pattern length, m, is 2 and the similarity criterion, r, is 20. The resulting SampEn reflects the probability that sequences that match each other for the first two data points will also match for the next point. A minimal code sketch of this calculation is given below.
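The SampEn procedure above is straightforward to implement. The sketch below is an illustrative implementation under the stated settings (Chebyshev distance, self-matches excluded, m = 2 and r = 0.15 × std); it is not the code used in this thesis, and the function name is a placeholder.

```python
# Minimal illustrative sketch of SampEn(m, r) for an RR series
# (not the thesis' original implementation).
import numpy as np

def sample_entropy(series, m=2, r_factor=0.15):
    x = np.asarray(series, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)                   # tolerance: 0.15 times the std, as above

    def matches(length):
        # All template vectors of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to every template.
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1     # "- 1" removes the self-match
        return count

    b = matches(m)                             # pairs matching at length m
    a = matches(m + 1)                         # pairs still matching at length m + 1
    if a == 0 or b == 0:
        return np.inf                          # SampEn is undefined without matches
    return -np.log(a / b)
```

For an RR series rr, sample_entropy(rr) returns the quantity that MSE then evaluates on each coarse-grained version of the signal.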
2019-04-21T13:13:00.266Z
2016-03-10T00:00:00.000
{ "year": 2016, "sha1": "54f0c99a3662b6b01ca2c53bc64d943751c8f40b", "oa_license": "CCBYNCSA", "oa_url": "http://www.teses.usp.br/teses/disponiveis/59/59135/tde-02052016-130306/publico/Anderson_Ivan_Rincon_Soler_Dissertacao_MSc_2016_Corrected.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "529bf4af132457b1980494b8be12f7ba181cf62f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
250642203
pes2o/s2orc
v3-fos-license
Identification of human-dependent routes of pathogen’s transmission in a tertiary care hospital Objectives The purpose of the study was to validate the risk of patients' exposure to pathogenic flora carried on hands of students, visitors, and patients themselves, analyzing its density and genera and to compare them with the microflora of healthcare workers (HCWs). Patients and methods Between May and June 2018, five groups of participants were included. Each group consisted of eight individuals. Palmar skin imprints were obtained from dominant hands of doctors, nurses, students, visitors, and patients in orthopedics ward. Imprints were incubated at 37°C under aerobic conditions, and colony-forming units (CFU) on each plate were counted after 24, 48, and 72 h. Microorganisms were identified. Results Hands of doctors were colonized more often by Gram - positive non-spore-forming rods bacteria than hands of nurses (p<0.05). A higher number of Staphylococcus epidermidis CFUs was observed on doctors’ than on nurses’ hands (p<0.05), whereas Staphylococcus hominis was isolated from doctor’s and patients’ imprints, but was not from nurses’ and students’ imprints (p<0.05). Micrococcus luteus colonized patients’ hands more often than students’ (p<0.05), visitors’ hands than doctors’ (p<0.05), students’ than nurses’ (p<0.05), visitors’ than nurses’ (p<0.05) and patients’ hands (p<0.05). Staphylococcus aureus (S. aureus) was isolated only from one doctor and one nurse (203 and 10 CFUs/25 cm2 ). Imprints taken from the hands of patients, students and visitors were S. aureus-free. No methicillin-resistant S. aureus (MRSA), vancomycin-resistant enterococci, nor expanded spectrum betalactamase-positive or carbapenemase-positive rods were isolated. The number of Gram-negative rods was the highest on visitors' hands, significantly differing from the number on patient’s, doctor’s, nurse’s, and student’s hands. Spore-forming rods from genus of Bacillus were isolated from representatives of all tested groups. Bacillus cereus occurred more commonly on visitors’ hands than doctors’ hands (p<0.05). Conclusion Patients, students, and visitors may play the causal role in the spread of pathogenic bacteria, particularly spore-forming rods. Our study results confirm the effectiveness of educational activities, that is the hospital's hand hygiene program among HCWs, patients, and visitors. Hand hygiene procedures should be reviewed to put much more effort into reducing the impact of all studied groups on the transmission of infectious diseases. of effective methods of pathogen eradication enforces us to focus on prevention and control. The hand hygiene (HH) program is crucial to control the spread of infection. Hospital staff is obliged to practice World Health Organization's (WHO) five indications for HH to reduce the risk of HAI, which is before touching a patient, before performing clean/aseptic procedure, after body fluid exposure risk, after touching a patient, and after touching patient's surroundings. [4] Visitors to hospitalized patients should be also considered as a pathway of microbial spread as they connect the microbiological environment of the hospital with their own home and work environment. Patients themselves may undertake risky social behaviors, e.g., direct interpersonal contacts and exchange of their properties, thereby opening routes to the transmission of infection. 
Teaching hospitals are exposed to pathogens that may be transmitted by students and young doctors having courses or internships at departments, possessing the capability to exchange microbial flora between them. As a result, they may serve as transmitters of HAIs, as well. All these groups should be treated as a potential source of HAI infection, and their meaning should be updated, as their representatives have the possibility of exposing hospitalized patients to opportunistic or pathogenic microorganisms transmitted on the skin, shoes, or clothes. [5] In the present study, we hypothesized that pathogenic bacteria were not present on the palm skin of students, patients and visitors and their resident and transient skin microflora did not differ from that of doctors and nurses, as the result within undertaken action according to hospital's HH program for healthcare workers (HCWs), patients and visitors. We, therefore, aimed to validate the risk of patients' exposure to pathogenic bacteria carriage of students, visitors, and patients themselves, analyzing its density and genera and to compare them with the microflora of HCWs. PATIENTS AND METHODS This observational study was conducted at the Department of Orthopaedic Surgery and Traumatology, Medical University of Warsaw, Poland, located at the humid, continental climate zone. The samples were taken in May and June 2018 during late springtime. Palmar skin imprints were obtained from dominant hands using Count-Tact ® plates (25 cm 2 ) (bioMerieux, Marcy l'Etoile, France) from eight randomly selected participants among doctors, nurses, medical students, patients and visiting them relatives. Imprints were taken at the midday (±10 min) from previously non-informed participants. The palm of the dominant hand was chosen, as it is predisposed to become contaminated in consequence of frequent contacts with the environment, domestic and professional, including working equipment, furniture, and items of everyday use. It also serves for interpersonal contacts. Moreover, it is the palm of the dominant hand that usually keeps toilet paper. Thus, its skin perfectly reflects the owner's microbial environment. Orthopedic surgeons participated in the study on the non-operating (ambulatory) day to exclude the influence of hand washing and disinfection before the imprint's collection. Wound dressing nurses were excluded from the study, as they disinfected their hands more frequent than other nurses, even a few times per hour. The students taking practical classes at the ward took part in the study. Patients were hospitalized for at least three days, before the samples were taken. They were in the postoperative period. Patients after trauma and interventions to the dominant upper extremities were excluded from the study. Visitors were those visiting their relatives at the department at the midday. Palms were pressed for 10 sec on Count-Tact ® plates with a force of 5,0 N. Imprints were incubated at 37°C under aerobic conditions, and colony forming units (CFU) on each plate were counted after 24, 48, and 72 h. Identification of isolates was determined by the Vitek MS Matrix-Assisted Laser Desorption Ionization -Time of Flight Mass Spectrometry (Vitek, bioMerieux, Marcy l'Etoile, France), according to the manufacturer's instructions. Gram staining technique was applied to differentiate microorganisms and to determine their morphology. 
Bacterial resistance was evaluated according to the European Committee on Antimicrobial Susceptibility Testing (EUCAST) guidelines. [6] The occurrence of resistance mechanisms in isolates was investigated (methicillin-resistant Staphylococcus aureus [MRSA] isolates, glycopeptide resistance for Enterococcus spp., expanded spectrum beta-lactamase [ESBL], metallo-beta-carbapenemase (MBL), serine carbapenemase (KPC), and screening for carbapenemase oxa production for Gram-negative rods) according to local guidelines. Statistical Analysis Statistical analysis was performed using the Statistica version 13.3 software (StatSoft, Kraków, Poland). Data were presented as the number of CFU per plate (25 cm²) with mean and standard deviation (SD) values. The Kruskal-Wallis test was used to compare the differences in species diversity between the studied groups. A p value of <0.05 was considered statistically significant. RESULTS A detailed description of the occurrence of individual bacterial species in the given study groups is presented in Table I, and the number of CFU per plate (25 cm²) of selected microorganisms is presented in Table II. No MRSA isolates, glycopeptide-resistant Enterococcus spp., ESBL-positive, metallo-beta-carbapenemase-positive, serine carbapenemase-positive, or carbapenemase oxa-positive Gram-negative rods were isolated. The majority of isolates constituted Gram-positive cocci. Their concentration on doctors' hands significantly exceeded that on nurses', students', and visitors' hands. Coagulase-negative Staphylococci formed the majority of all isolates. Staphylococcus aureus (S. aureus) was isolated only from one doctor and one nurse (203 and 10 CFUs/25 cm²). Imprints taken from the hands of patients, students, and visitors were S. aureus-free. Enterococcus faecalis (E. faecalis), belonging to the fecal flora, was detected on the hands of only one nurse and one patient. The S. aureus isolates were not MRSA, and the E. faecalis isolates were not vancomycin-resistant enterococci (VRE). Gram-negative rods were isolated from the palms of three doctors, two patients, two nurses, two visitors, and five students. The number of Gram-negative rods was the highest on visitors' hands, significantly differing from the number on patients', doctors', nurses', and students' hands (Table II). Spore-forming rods from the genus Bacillus were isolated from the hands of six doctors, five patients, six nurses, six visitors, and seven students. Hands of doctors were colonized more often by Gram-positive non-spore-forming rod bacteria than hands of nurses (p<0.05). Coagulase-negative Staphylococci constituted the most numerous group among the isolates. However, there were differences in species diversity between the studied groups. A higher number of Staphylococcus epidermidis (S. epidermidis) CFUs was observed on doctors' than on nurses' hands (p<0.05), whereas Staphylococcus hominis (S. hominis) was isolated from doctors' imprints but not from nurses' and students' hands (p<0.05). A higher number of coagulase-negative Staphylococci CFUs was observed on imprints taken from patients' hands than from nurses' hands (p<0.05). S. hominis occurred more frequently on patients' palms than on nurses' palms (p<0.05). Micrococcus luteus (M. luteus) colonized patients' hands more often than students' (p<0.05), visitors' hands than doctors' (p<0.05), students' than nurses' (p<0.05), visitors' than nurses' (p<0.05) and patients' hands (p<0.05).
Bacillus cereus (B. cereus) was more commonly found on visitors' hands than doctors' hands (p<0.05). The comparison between the numbers of isolates from the studied groups is presented in Table III. DISCUSSION Medical personnel are believed to be among the most important vectors of pathogen transmission. This involves students, patients, and visitors who are in the hospital on a regular or occasional basis. To validate the risk of patients' exposure to pathogenic bacteria, we analyzed the quantitative composition of the aerobic microbiota of the dominant hands of HCWs, as well as of medical students, patients, and visitors in the orthopedic ward. Imprints collected at midday reflected normal skin flora but also environmental and occupational flora. The highest risk of hand contamination occurs in the most crowded areas that have the highest bacterial concentrations: shops, working places, public transportation, toilets, waiting rooms, etc. We chose midday for taking samples to let the effects of morning hand washing, which controls the skin microbiome, wear off. Intervals of at least 4 to 5 hours should minimize its influence on the hand's microbiome, pointing to the flora residing on hands during the daytime. In our study, representatives of normal human skin flora were most frequently obtained. Staphylococcus aureus was isolated from two members of the medical staff. Gram-negative and Gram-positive spore-forming rods most frequently colonized students' hands, which indicates that students may serve as a vector of HAIs. Despite the emphasis on HH in medical staff, the control of HH in students, patients, and visitors is poor. Their potential in the spread of HAI should not be ignored. In this study, doctors and patients had the highest concentration of microorganisms on their palms, twice as high as in the other groups. The presence of pathogenic flora transmitted through interpersonal contacts from foci of infection, as well as a lack of proper HH, may have an impact on HAI transmission. The hand's skin microbiome is more variable and less stable than any other within the same organism, as it consists of resident and transient microorganisms. [7] Human skin is regularly colonized by microorganisms, aerobic and anaerobic, at concentrations ranging from more than 10⁶ on the scalp, 5×10⁵ in the axilla, and 4×10⁴ on the abdomen to 10⁴ on the forearm (CFUs/cm²). The fingertip microbiome consists of up to 300 CFUs/cm². In HCWs, total bacterial counts should be reduced due to repeated HH procedures, but close contact with patients has an influence on the hands' microbiome. Regularly used medical equipment is not germ-free, either. Both facts may explain why medical professionals' fingertips contain 10⁴ to 10⁶ CFU/cm² of microorganisms. [8] The transient and resident skin flora varies among individuals but is stable for a particular individual. [9] An average of more than 150 species may be found on the palm. Normal microflora protects the host from the invasion of pathogenic strains; individuals possessing lower microbial diversity are more likely to harbor pathogenic microorganisms, such as MRSA, Enterococcus spp. and Candida albicans. Analyzing the diversity of the hand's skin microbiota, bacteria are the most prevalent microorganisms (>80% relative abundance), whereas viruses and fungi are present at less than 5% each. [10] The composition of the microorganisms that make up the skin microbiota is related to interactions between species. [11,12] Resident flora, when transmitted into sterile body cavities, the eyes, or non-intact skin, may cause an infection.
The transient microbiota is more amenable to removal by routine HH than the resident one. Healthcare workers regularly acquire transient microorganisms as a consequence of direct contact with patients and the environment. [9] In our study, S. aureus accounted for 5% of the cultured microflora in doctors and 0.2% in nurses. Our results show that patients, medical students, and visitors carry resident and transient skin microflora. In addition, students can transfer pathogens between patients, departments, and hospitals. [13] The WHO's five indications for HH and the transmission-based precautions dedicated to HCWs should be properly followed by medical staff, patients, visitors, and students, [14] particularly nowadays, during the novel coronavirus disease 2019 (COVID-19) pandemic. [15] Messages enabling patients to understand and learn can be included in information leaflets and in posters displayed at the facility entrance and in waiting areas. [4] Unrestricted access to soap, water, sinks, and alcohol-based hand rub (AHR) stations should be available to all, so that handwashing followed by AHR is performed at least before entering the patient's room and immediately after leaving it. [16] Unfortunately, HH is widely ignored by medical staff, enforcing the installation of control devices, including visual, acoustic, or electronic reminders. [17] Visitor HH is an evidence-based strategy to reduce pathogen transmission. [18] Social pressure highly influences visitors' compliance with HH rules. [19] Visitors in company are more likely to use AHR than those who are alone; HH compliance falls from 44% at the main hospital entrance to 4.1% at the departments, and to only 2.7% in the patient's room. [20] Furthermore, visitors are 5.28 times more prone to use AHR when dispensers are located in the middle of the lobby and demonstrably labelled with landmarks and barriers. [19] AHR use is 1.35 times more likely in the afternoon than in the morning, and more likely among younger people than the elderly. To increase AHR usage, dispensers should be installed in exposed/public, not private, areas. According to Birnbach et al., [8] 64% of visitors disobeyed the WHO's five indications for HH. Moreover, 42.8% of those who disobeyed them reported that they had obeyed HH. A total of 26% of visitors who disobey HH rules carry Gram-negative rods and MRSA. Visitors obeying HH rules did not carry pathogenic flora on their hands, and their normal hand microbial load was 0.9 CFU per cm², while it was 89.3 CFU per cm² among visitors disobeying the rules. [8] The AHRs applied for HH alter the human and environmental microbiota composition. [21] Although this is a well-known fact, we are reminded of this truth only when we have a patient infected with Clostridioides difficile (C. difficile), in an individual case or during an outbreak. In everyday practice, HCWs pay no attention to the change of microflora, e.g., contamination of hands with spores or spore-forming rods. The effectiveness of AHR solutions including ethanol and isopropanol, chlorhexidine, povidone-iodine, and octenidine dihydrochloride is not permanent. [22] Pathogens may become tolerant to them. According to Pidot et al., [23] Enterococcus faecium isolates became 10 times more tolerant to alcohol over five years of its use, becoming resistant even to standard 70% isopropanol due to mutations in genes responsible for carbohydrate uptake and metabolism. Silver nanoparticle-based gel hand wash is very promising but still requires validation.
[24] The novelty of our study was to determine the quantitative microflora composition of the hands of five groups interacting with each other in the orthopedic ward. We concentrated on aerobic bacteria. S. aureus was found in one doctor and one nurse, while Enterobacter cloacae (E. cloacae) was found on the hand of a student and Acinetobacter baumannii (A. baumannii) on the hand of a patient; these species accounted for only 5%, 0.2%, 0.1%, and 0.3% of the total CFU cultured in each group, respectively. E. cloacae is a component of the human fecal microflora, and A. baumannii occurs naturally in water and soil, as well as in a contaminated hospital environment or on colonized patients/staff. The presence of Gram-negative rods in the skin microflora is a result of HH neglect. In the study of Tang et al., [25] the following results were obtained: A. baumannii constituted 15% of the bacteria on HCWs' hands, Pseudomonas spp. 9%, and E. cloacae 9%. The study by Domínguez-Navarrete et al. [26] revealed the presence of pathogenic bacteria on the hands of preclinical medicine students. A total of 60.6% of students were carriers of S. aureus, 3% of Pseudomonas aeruginosa, 3% of Enterobacter, and 18.1% of Candida spp. [27] Ssemogerere et al. [27] The differences we found concerned the quantity or quality of microorganisms that belong to the normal microflora of the hand skin (mostly coagulase-negative Staphylococci) and the probable environmental contaminants of the genus Bacillus. Our results confirmed the research hypotheses. We found no significant differences in the pathogenic transient flora, which was isolated only in single cases, which may prove the effectiveness of educational activities, that is, the hospital's HH program among HCWs, patients, and visitors. The sample size of eight subjects per group is small and not representative of the entire multidisciplinary university hospital staff, which is a limitation of the study. The results are preliminary but can give a microbiological insight into HH compliance. While we were planning this experiment, we did not expect an abundance of spore-forming rods, which do not belong to the normal skin flora. Currently, it is relevant to investigate C. difficile hand contamination and the role of HCWs in asymptomatic patients. Alcohol in AHR lacks activity against bacterial spores, [28] but it is effective in killing the vegetative cell (non-spore form) of C. difficile, which may be present in higher numbers than the spores. [4] In conclusion, as given in the medical literature, patients, students, and visitors may play a causal role in the spread of pathogenic bacteria, particularly spore-forming rods, as their hands may carry pathogenic microflora. The results of our study demonstrate general compliance with the hospital's HH program for HCWs, patients, and visitors, introduced years ago. Furthermore, HH is one of the topics covered during the microbiology course, as well as during other courses, including clinical ones. Nevertheless, continuous training in HH for staff, as well as for patients, visitors, and students, and particularly the monitoring of compliance, should be maintained.
2022-07-20T06:17:37.230Z
2022-07-06T00:00:00.000
{ "year": 2022, "sha1": "f503f50d2865d6ec0c57d0c368e180f26ebe2e2c", "oa_license": "CCBYNC", "oa_url": "https://jointdrs.org/full-text-pdf/1364", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ee1b523f05b8f51b8d2c90fe606f1b3b8b76f3f2", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
32847693
pes2o/s2orc
v3-fos-license
Role of depression in secondary prevention of Chinese coronary heart disease patients receiving percutaneous coronary intervention Introduction Coronary heart disease (CHD) patients who have undergone percutaneous coronary intervention (PCI) have higher rates of depression than the general population. However, few researchers have assessed the impact of depression on the secondary prevention of CHD in China. Objective The main purpose of this investigation was to explore the relationship between depression and secondary prevention of CHD in Chinese patients after PCI. Methods This descriptive, cross-sectional one-site study recruited both elective and emergency PCI patients one year after discharge. Data from 1934 patients were collected in the clinic using questionnaires and medical history records between August 2013 and September 2015. Depression was evaluated by the 9-item Patient Health Questionnaire. Secondary prevention of CHD was compared between depression and non-depression groups. Results We found that depression affected secondary prevention of CHD in the following aspects: lipid levels, blood glucose levels, smoking status, physical activity, BMI, and rates of medication use. Conclusions Depressive patients with CHD are at increased risk of not achieving the lifestyle and risk factor control goals recommended in the 2006 AHA guidelines. Screening should focus on patients after PCI because treating depression can improve outcomes by improving secondary prevention of CHD. Introduction Coronary heart disease (CHD) is the leading cause of morbidity and mortality globally, and when combined with stroke, it accounted for 17.5 million deaths in 2012 [1]. One of the most commonly used treatments for CHD is percutaneous coronary intervention (PCI), which has become more commonly used than coronary artery bypass graft [2], as PCI is a safe, efficient and less costly revascularization procedure [3]. Approximately 400,000 CHD patients undergo PCI in China every year. There are some procedure-related psychological reactions after PCI. Notably, depression is significantly correlated with adverse cardiac events in CHD [4]. The prevalence of depression in patients with acute myocardial infarction has been reported to be approximately 20% [5].
A previous investigation indicated that patients who are depressed after suffering from an acute coronary syndrome have poor cardiac outcomes and an increased risk of mortality after accounting for risk factors [6]. Some research has indicated that depression is correlated with a significant increase in the risk of negative health outcomes in patients experiencing coronary revascularization, independent of traditional risk factors [6] [7] [8]. CHD patients who have depression are at an increased risk for recurring cardiac events after PCI [9]. Depression typically comprises manifestations such as a sense of depressed emotion, a loss of affection or enjoyment in activities, sleep disorders, fatigue, and diminished concentration. Depression can significantly decrease engagement in lifestyle modifications that are essential to halting the progression of CHD. Secondary prevention programs are known to be essential to decreasing the burden of progression in CHD. Furthermore, the main modifiable risk factors affecting the development and progression of CHD are smoking, hypercholesterolemia, overweight and obesity, physical inactivity, hypertension, and diabetes [10], all of which may be affected by depression. Therefore, the objective of this investigation is to explore this relationship between depression and secondary prevention of CHD in patients who have undergone PCI. We chose one year after PCI as the timeframe of our investigation because we wanted to determine the prevalence of depression and the implementation of secondary prevention methods after one year of recovery. Study design The present investigation was a descriptive and cross-sectional survey using a structured questionnaire and medical history including laboratory tests of CHD patients. The participants were recruited consecutively from a coronary follow-up clinic between August 2013 and September 2015 at their one-year follow-up after PCI. The reason for the one-year time point is because in our hospital, as in many hospitals in China, post-PCI patients are asked to return to the clinic for a comprehensive examination 1 year after the procedure. This follow-up visit is an easy way to obtain patients' data and clinical condition. We therefore chose this time point to attain the cross-sectional information in this study. After obtaining informed consent, patients' eligibility was confirmed by analyzing their medical records for the inclusion and exclusion criteria. Participants were included if they (1) were 18 to 70 years old; (2) had a diagnosis of coronary heart disease; (3) underwent PCI one year ago; (4) accepted participation in this study; and (5) were able to speak, read, and write Chinese. Participants were excluded if they had (1) a terminal illness, (2) abnormal renal and liver function, (3) a limb deficiency, or (4) a language comprehension disorder. Instruments and measurements Two investigators (Feng C. and Ji T.) interviewed the patients to gather sociodemographics such as age, gender, type of PCI, education, cigarette smoking status, body mass index, hypertension history, diabetes mellitus, and self-management abilities (physical activity). 
Data from physical examinations [including height, weight, and systolic and diastolic blood pressure (SBP and DBP, respectively)] and biochemical testing [total cholesterol (TCHO), total triglycerides (TG), low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, fasting blood glucose (FBG), and glycated hemoglobin A1c (HbA1c) levels] were obtained from all participants. These medical data were obtained from the patients' medical charts in our medical records system. Participants' depression status was calculated via the 9-item Patient Health Questionnaire (PHQ-9), which was administered in a private room using structured questionnaires. The PHQ-9 consists of nine items, each of which assesses the existence of 1 of the 9 DSM-IV criteria for a depressive episode in the past two weeks. Each question in the PHQ-9 is answered using a 4-point scale ranging from 0 (never) to 3 (nearly every day), for a total score ranging from 0 to 27; higher scores indicate a higher likelihood of major depressive disorder. The PHQ-9 questionnaire is a one-page survey and can be completed independently. The PHQ-9 questionnaire was first translated into Chinese by a bilingual psychiatrist. The answers were reviewed by 2 independent research coordinators for accuracy. Validation of the PHQ-9 in the Chinese sample The translated version was back-translated and modified until the back-translated version was comparable with the original English version. Some patients were invited to review the Chinese version and to provide feedback. Some modifications were made before the final version of the PHQ-9 was completed. The reliability of the Chinese version of the PHQ-9 was tested. The internal consistency value, obtained by using the Cronbach α coefficient, was 0.81 (95% CI, 0.80-0.83). To assess test-retest reliability, 265 patients completed the PHQ-9 a second time within 2 weeks. The intraclass correlation coefficient for test-retest reliability of the total scores was 0.86 (95% CI, 0.83-0.91; F = 7.73, df = 264, P<0.01), demonstrating limited variability between the two-week time points. Measurement of physical examination and biochemical variables After the participants had rested for 10 min, blood pressure (BP) was obtained three times with a desktop mercury column sphygmomanometer with participants in a seated position. The time interval between each measurement was 2 minutes. The average of the BP values was calculated and used for analysis. Blood samples were drawn from each patient after they had fasted for at least 12 h and rested overnight. FBG levels were obtained using oxygen electrodes; TCHO levels were measured using the cholesterol oxidase method; TG levels were measured using the enzymatic method; and HDL-C and LDL-C levels were directly measured using the clearance method. HbA1c level was measured using high-performance liquid chromatography. Definitions and outcomes In the PHQ-9, compared with a lower score, a higher score indicates more severe depression. As indicated previously, a score of 10 is the ideal cutoff for detecting the presence of major depression in Chinese patients [11]. We therefore used the cutoff value of ≥10 for major depression.
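As a concrete illustration of the scoring rule just described, the following is a minimal sketch; it is not part of the study's analysis pipeline, and the function name is a placeholder.

```python
# Minimal sketch (not part of the original study): PHQ-9 total score and the
# >=10 cutoff for major depression used in this paper.
def phq9_score(item_responses):
    """item_responses: nine integers, each from 0 (never) to 3 (nearly every day)."""
    if len(item_responses) != 9 or any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("PHQ-9 requires nine responses scored 0-3")
    total = sum(item_responses)          # total ranges from 0 to 27
    return total, total >= 10            # True -> classified as major depression
```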
The goals of secondary prevention of CHD include the following: 1) complete non-smoking: never smoked or stopped smoking for at least 3 months; 2) ≥30 minutes of moderate-intensity aerobic activity per day on ≥5 days per week: patients self-reported their physical activity mode and duration; 3) weight management resulting in BMI >18.5 kg/m² and <25.0 kg/m²; 4) BP <140/90 mmHg; 5) FBG <6.11 mmol/L in DM patients; and 6) LDL-C <2.6 mmol/L. Ethical considerations This investigation was approved by the ethics committee of Shanghai Changhai Hospital before subject enrollment, and it adhered to the principles of the Declaration of Helsinki (as revised in Brazil 2013). All participants in this research read and signed an informed consent form. Statistical analyses For the statistical analyses of the data in this study, the Statistical Package for the Social Sciences (SPSS) version 22 (IBM Corp, Armonk, New York) was used. Differences between continuous variables were evaluated using t-tests, and the χ² test was used for categorical variables. Logistic regression analyses were used to evaluate the associations between depression and secondary prevention in CHD patients after PCI by calculating adjusted odds ratios (ORs) and 95% confidence intervals (CIs). Adjusted factors included type of PCI, education, and amount of smoking (cigarettes per day). Missing data were not imputed. The significance level was set at .05. All demographic and clinical data, with the exception of age, are reported as frequencies and percentages; age is reported as the mean and standard deviation. Descriptive statistics, means ± standard deviations, or percentages were used to describe the participant profiles. Population characteristics Of the 1934 patients enrolled, 30 (1.5%) subjects were excluded due to incomplete or missing data; these patients were missing at random. A total of 756 patients were female, the mean age ± standard deviation (SD) was 55.64 ±10.6 years (range, 31-76 years), and BMI was 27.3 ±2.3. In this study, 52.32% of patients had hypertension, and approximately 31.08% of patients had DM. Nearly a quarter of patients received emergency PCI. Most of the patients had attained less than a college education. The non-smoking rate was nearly 70%. As evaluated by the PHQ-9, the average depression score was 8.54±3.41. Using the cutoff score of ≥10, 267 (13.8%) of the PCI patients were determined to be depressed. Sociodemographic and clinical characteristics are shown in Table 1. The influence of depression on secondary prevention: Bivariate analysis Patients were divided by depression status into two groups. Comparisons are shown in Table 2. FBG, HbA1c, TCHO, TG, LDL-C, and HDL-C were significantly different between the depression and non-depression groups, while BMI, SBP, and DBP were not. For rates of control of risk factors, smoking quit rate, physical activity, BMI, lipids, and glucose levels were statistically significantly different between groups, while BP was not. Medication use is also shown in Table 2. The usage rates of angiotensin converting enzyme inhibitors (ACEIs) / angiotensin receptor blockers (ARBs), β-blockers, and lipid-lowering drugs were significantly lower in the depression group. The influence of depression on secondary prevention: Multivariate analysis Considering the significant differences in factors such as type of PCI, education, and amount of smoking (cigarettes per day) at baseline, we conducted logistic regression analyses to control for the baseline impact.
We defined depression state as the dependent variable (depression: 0; non-depression: 1), while type of PCI, education, amount of smoking (cigarettes per day), smoking quit rate, physical activity, BMI, lipids, glucose, ACEI/ARBs, β-blockers, and lipid-lowering drugs were independent variables. Table 3 shows that smoking quit rate, physical activity, BMI, lipids, glucose, lipid-lowering drugs, and education were factors strongly related to depression after adjusting for type of PCI, amount of smoking, ACEI/ARB use, and β-blocker use (P<0.05). Discussion Depression is frequently encountered as a response to an acute coronary episode or a correlated procedure such as PCI; in our study, the PCI population sample had a rate of depression of 13.8%, which is much higher than that of the normal population (3.6%) [12]. It is worth noting that the prevalence of depression in the CHD patients in our study differs from that of previous studies (34.6% to 45.8%) [13]. First, the cutoff point (≥10) may mainly account for this difference, as some researchers use 5 as the cutoff score in the PHQ-9. Second, one year may be enough time for the patients to recover from depression after PCI. Third, there may be some patients who cannot or choose not to answer the questionnaire honestly. As several studies have previously illustrated, in China, people with depression typically do not want to admit to being depressed out of fear of being labeled insane, which may also contribute to the difference in prevalence from previous studies. To the best of our knowledge, our study is the first to identify the relationship between the secondary prevention of CHD in patients after PCI and depression. In our study, there was a substantial number of patients who did not implement the lifestyle modifications and risk factor control objectives recommended in the 2006 AHA guidelines [14]. Patients with CHD who have depression are at an increased risk of not achieving the lifestyle and risk factor control goals outlined for secondary prevention. In our study, after controlling for differences in the baseline data, we found that depressed patients reported lower medication compliance (lipid-lowering drugs), lower smoking quit rates, poor control of BMI, higher lipid and glucose levels, and lower levels of physical activity. These aspects are all vital to halting the progression of coronary heart disease. Physicians should pay more attention to and be alert for depressive symptoms according to our findings, since depression status also increases the risk of cardiovascular disease, possibly due to its potential biological mechanisms involving pro-inflammatory cytokines, worsening endothelial function, and coagulation factors [15]. There are several possible suggestions to tackle this issue. First, we suggest that physicians should pay more attention to patients with depression and encourage them to seek help from a mental health professional. After the interview, we informed the patients in our study who were considered to have depression, but they demonstrated a low understanding of the basic facts of depression, and most of them were not interested in speaking about depression. This phenomenon has been reported before and is one of the reasons why most patients with mental disorders do not seek professional consultations and are therefore left to cope on their own [16].
One possible interpretation of this finding is that fear of stigmatization might prompt Chinese individuals to be much more likely to deny being depressed. Second, most cardiologists renounce their responsibility in confirming that depression status is evaluated. Currently, a large proportion of cardiologists in China do not believe that they should evaluate their patients for depression and think that it is the duty of a nurse or the family physician. Although nurses may provide psychoeducation to help reduce depression, doctors should join in the management of cardiovascular risk factors by diagnosing and treating depression. For depressed patients who have undergone PCI, the management strategies are similar to those of other depressed patients and include exercise programs, cognitive behavioral therapy (CBT), general support, and antidepressant medication. Exercise programs seem to be very effective at reducing depression [17]. General support from family members and friends is also essential to depressed patients. Therefore, it may be valuable to include significant others in the education of the patients. In addition to psychological consultations and rehabilitation, antidepressant medication is also effective in the treatment of depression. Treatment for depression among cardiac patients is effective for improving depressive symptoms and may improve cardiac mortality [18] [19]. However, there are issues concerning potential risks of medication. One of the major potential hazards of antidepressant medications is their impact on prolonging cardiac myocyte action potentials. This is especially the case for tricyclic antidepressants, and thus when prescribing medication for depression, physicians should consider and monitor for this side effect. It is worth noting that selective serotonin reuptake inhibitors (SSRIs) have been found to have a protective cardiovascular effect. They may be suggested for use rather than tricyclics. There are some limitations in our research. First, pre-PCI depression data are not available, and this is a major limitation. Second, the 1-year study time point should also be taken into consideration, because it is quite a long time after the PCI. Third, there are no other data points more proximal to the procedure to assess the effect of PCI. In addition, this is a single-site study, which is always a limitation to the generalizability of findings. Finally, in the area of secondary prevention, dietary control was omitted because it cannot be clearly demonstrated by most of the participants. Depression is common in CHD patients and is correlated with higher mortality and morbidity rates. In our study, we found that depression has a strong impact on several approaches to secondary prevention in CHD patients. Although more investigations are needed to clearly and consistently establish the cardiovascular impact, depression itself affects quality of life. More importantly, it reduces adherence to medical and lifestyle strategies. There is significant research supporting the initiation of exercise programs, general support, and antidepressant medications to reduce depression in the CHD population. Supporting information S1 Table.
2018-04-03T02:20:58.541Z
2017-12-21T00:00:00.000
{ "year": 2017, "sha1": "e187dcff1fad6004f90ff0bc4e743fed29c89523", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0187016&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e187dcff1fad6004f90ff0bc4e743fed29c89523", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
226289982
pes2o/s2orc
v3-fos-license
BOHR COMPACTIFICATIONS OF GROUPS AND RINGS Abstract We introduce and study model-theoretic connected components of rings as an analogue of model-theoretic connected components of definable groups. We develop their basic theory and use them to describe both the definable and classical Bohr compactifications of rings. We then use model-theoretic connected components to explicitly calculate Bohr compactifications of some classical matrix groups, such as the discrete Heisenberg group ${\mathrm {UT}}_3({\mathbb {Z}})$, the continuous Heisenberg group ${\mathrm {UT}}_3({\mathbb {R}})$, and, more generally, groups of upper unitriangular and invertible upper triangular matrices over unital rings. Introduction The motivation for this research was the study of the model-theoretic connected components of some matrix groups over unital rings in order to describe the classical Bohr compactifications of these matrix groups through the use of model theory. Bohr compactifications of topological groups play an important role in topological dynamics and harmonic analysis, and they have some applications to differential equations. They allow one to reduce many problems in the theory of almost periodic functions on topological groups to the corresponding problems about functions on compact groups. For example, see [Pan90, Pan96]. The model-theoretic connected components of a definable group G (see Section 1 for definitions) are among the fundamental objects used to study G as a first-order structure. They are of particular significance in definable topological dynamics, a generalization of classical topological dynamics. In [GPP14, KP17], the authors introduce and study the notion of the definable Bohr compactification of a group G definable in a first-order structure. This compactification is described in terms of one of the model-theoretic connected components of G. The classical Bohr compactification of a discrete group G is a special case, and arises when G is considered with the full set-theoretic structure (i.e. when every subset of G is 0-definable). Also, the classical Bohr compactification of a topological group was described in [GPP14, KP19] in terms of a suitably defined model-theoretic connected component.
The calculation of model-theoretic connected components of matrix groups over a unital ring naturally led us to the development of the analogous notions of (model-theoretic) connected components of rings.These components were not studied so far and are interesting in their own right.In this paper, our first objective is to give precise definitions of various components of a ring (see Definition 2.1 and the discussion following it), and prove some fundamental results about them such as Proposition 2.5, Proposition 2.6, or Corollary 2.7.In particular, we show in Proposition 2.6 that, as opposed to the group case, the appropriately defined 0-and 00-connected components of a unital ring always coincide.We also relate these components to the model-theoretic connected components of the additive group of the ring (see Corollaries 2.17, 2.22, the examples in Subsection 2.3, and Proposition 2.26).In Subsection 2.4, we observe that ring components can be used to describe the [definable] Bohr compactification of a discrete ring.In Subsection 2.5, we introduce a notion of a model-theoretic component for a topological ring and use it to describe the Bohr compactification of such a ring.Besides elementary algebraic and model-theoretic tools, also certain consequences of Pontryagin duality are involved in some arguments in the above part.All the facts around Pontryagin duality which we need in this paper are discussed in the preliminaries. Our original objective was to use model-theoretic connected components to explicitly compute both the definable and classical Bohr compactifications of some matrix groups.We focus on the groups UT n pRq and T n pRq, where R is a unital ring.We obtain a general description of the Bohr compactifications of these groups (see Propositions 3.2 and 3.4).In the case of some classical rings, e.g. when R is a field, or the ring of integers, or the ring of polynomials in several variables over an infinite field, we get more precise descriptions, which in particular applies to the discrete Heisenberg group UT 3 pZq (see general Corollary 3.5 and its applications in Subsection 3.4).We also adapt our approach to the groups UT n pRq and T n pRq treated as topological groups (for R being a topological ring), obtaining descriptions of their classical Bohr compactifications, which in particular applies to the continuous Heisenberg group UT 3 pRq (see Propositions 3.25, 3.27, and Example 3.28). Our method of computing classical Bohr compactifications of the above matrix groups via model-theoretic connected components is novel, and, up to our knowledge, the descriptions of the Bohr compactifications which we obtained have not been known so far. As an example, let us state here our descriptions of the classical Bohr compactifications of both the discrete and continuous Heisenberg group. The Bohr compactification of the discrete Heisenberg group UT 3 pZq is -, where Z Bohr is the Bohr compactification of the discrete group pZ, `q, Ẑ is the profinite completion of Z, and the product of two matrices from this set is defined as follows: ¨1 a b where π : Z Bohr Ñ Ẑ is a unique continuous group epimorphism compatible with the maps from Z, provided by universality of Z Bohr .More precisely, the Bohr compactification of UT 3 pZq is the homomorphism from UT 3 pZq to the above group of matrices which is defined coordinatewise by the natural maps Z Ñ Z Bohr and Z Ñ Ẑ. 
The Bohr compactification of the topological group UT 3 pRq is ¨1 R Bohr 0 0 1 R Bohr 0 0 1 --R Bohr ˆRBohr , where R Bohr is the Bohr compactification of the topological group pR, `q.More precisely, the Bohr compactification of the topological group UT 3 pRq is the continuous homomorphism from UT 3 pRq to R Bohr ˆRBohr defined coordinatewise by the (Bohr compactification) map R Ñ R Bohr . Preliminaries In this paper, we use standard model-theoretic notations.We consider groups and rings as objects definable in some first-order structure M , and often assume the groups and rings themselves to be first-order structures in some language L expanding the language of groups and rings, respectively.We always consider a structure M together with a fixed κ-saturated and strongly κ-homogeneous elementary extension M ą M , where κ ą |M | `|L| is a strong limit cardinal.For a definable set X Ď M , we denote by X its interpretation in M .We call a set A Ď M small if |A| ă κ.If pG, ¨, . ..q is a definable group, we say that a subgroup H ď Ḡ has bounded index if | Ḡ{H| ă κ.For rings, by the index of a subring we mean its index as a subgroup of the additive group of the ring. Groups and rings will be often equipped with topology compatible with their operations.We make a blanket assumption that all topological groups, topological rings, and topological spaces in general that we consider in this paper are always Hausdorff (unless stated otherwise).We note however that most of the presented theory can be repeated, possibly with minor modifications, by requiring only compact spaces to be Hausdorff. We will say that M is equipped with the full (set-theoretic) structure if all subsets of M n , n P N, are 0-definable.The language of such a structure will be denoted by L set,M . If G is a definable group in M , and A Ă M a small set, recall the following well-known subgroups of Ḡ, so-called model-theoretic connected components: ‚ Ḡ0 A , the intersection of all A-definable subgroups of Ḡ with finite index, ‚ Ḡ00 A , the smallest A-type-definable subgroup of Ḡ with bounded index, ‚ Ḡ000 A , the smallest A-invariant subgroup of Ḡ with bounded index.We refer to [Gis11,BCG13] for the properties of the connected components which we are going to use and explain below.Clearly Ḡ000 A ď Ḡ00 A ď Ḡ0 A .Sometimes (e.g. in theories with NIP), the group Ḡ0 A does not depend on the choice of A, in which case we say that Ḡ0 " Ḡ0 H exists, and similarly for the other components.Each component is a normal subgroup of Ḡ. The quotients Ḡ{ Ḡ000 A , Ḡ{ Ḡ00 A , Ḡ{ Ḡ0 A can be equipped with the logic topology (where a set is closed if and only if its preimage under the quotient map is type-definable over a small set of parameters), making them respectively a quasi-compact (i.e.not necessarily Hausdorff), a compact, and a profinite topological group.The same holds for the quotient of Ḡ by any normal subgroup of bounded index which is A-invariant, A-type-definable, or an intersection of some A-definable subgroups of Ḡ, respectively. 
Let G be a topological group.A compactification of G is a compact topological group K together with a continuous homomorphism φ : G Ñ K with dense image.The Bohr compactification of G is a compactification φ : G Ñ K satisfying the following universal property: if φ 1 : G Ñ K 1 is a compactification of G, then φ 1 " f ˝φ for a unique continuous homomorphism f : K Ñ K 1 .The Bohr compactification of G always exists and is unique up to isomorphism.We will denote this object by G Bohr .It is a classical notion in topological dynamics and harmonic analysis.It can be naturally extended to the category of topological rings, and other topological-algebraic objects, as done in [Hol64,HK99]. The work in [GPP14] and [KP17] developed a model-theoretic version of Bohr compactifications.Let us briefly explain this setting.Suppose X is a definable set and C a compact topological space.Recall that a function f : X Ñ C is said to be definable if for each pair of disjoint, closed subsets C 1 , C 2 Ď C, there are definable, disjoint subsets U 1 , U 2 Ď X such that f ´1rC i s Ď U i for i " 1, 2. For a definable group G, we call its compactification φ : G Ñ K definable if φ is definable.The results of [GPP14] show that a group G definable in a model M has the universal definable (called the definable Bohr ) compactification, which is just the quotient Ḡ{ Ḡ00 M or rather the quotient homomorphism G Ñ Ḡ{ Ḡ00 M ; we will denote it by G dBohr .In the full (set-theoretic) setting L set,M , G dBohr " G Bohr (for G treated as a discrete group), and the last result specializes to the following corollary. M is the (classical) Bohr compactification of the discrete group G.In this way, definable compactifications can be viewed as a generalization of classical ones.If G is a locally compact abelian group, harmonic analysis provides a description of G Bohr in terms of Pontryagin duality.Recall that the group Hom c pG, S 1 q of all continuous homomorphisms from G into the circle group S 1 " R{Z " r´1 2 , 1 2 q can be endowed with the compact-open topology, making it a locally compact abelian group.This object is called the Pontryagin dual of G, which we denote by p G: p G " Hom c pG, S 1 q. Fact 1.2 ([Kat04], Chapter VII, Section 5).Let G be a locally compact abelian topological group.Then its Bohr compactification where p G disc denotes p G considered with the discrete topology.Moreover, still assuming that G is a locally compact abelian group, the map b is injective, that is for every g P Gzteu, there is Recall that a profinite group is an inverse limit of finite groups.The next fact is [RZ10, Theorem 2.9.6](b). Fact 1.4.The Pontryagin dual of a profinite abelian group is a discrete, torsion abelian group.Conversely, the Pontryagin dual of a discrete, torsion abelian group is a profinite abelian group. From the last two facts, we get: Corollary 1.5. A discrete abelian group G is torsion if and only if p G is profinite. We will need the following fact (e.g.see [Dik18,Theorem 3.3.14]) in our analysis of modeltheoretic connected components.We give a short proof based on Pontryagin duality. Fact 1.6.A discrete abelian group G is of finite exponent if and only if G Bohr is profinite. Proof.(Ñ) Assume that G is of finite exponent.Then p G is also of finite exponent (as if g n " e for all g P G, then f n pgq " f pgq n " f pg n q " 0 for any f P p G and g P G), so p G is a torsion abelian group.Therefore, by Fact 1.2 and Corollary 1.5, G Bohr -Hom ´p G disc , S 1 ¯is profinite. 
(Ð) Assume that G Bohr is profinite.Then p G is torsion (again by Fact 1.2 and Corollary 1.5).Suppose for a contradiction that G is not of finite exponent.Then, by [BCG13, Lemma 4.9], there is a homomorphism from G to S 1 with dense image.Such a homomorphism is an element of p G of infinite order, a contradiction. Remark.Alternatively, the implication (Ð) can be obtained using the Baire category theorem and Fact 1.3 in place of [BCG13, Lemma 4.9].First, observe that every torsion, compact abelian group K has finite exponent.Indeed, by the Baire category theorem, for some n P N ą0 the closed subgroup Krns :" tk : k n " eu of K is clopen and so of finite index; since K is torsion and abelian, this implies that K has finite exponent.Hence, since in our case K :" p G is torsion (by Fact 1.2 and Corollary 1.5) and compact (by [RZ10, Proposition 2.9.1(b)]), it has finite exponent.Therefore, since G -p p G, we conclude that G has finite exponent, too.A is profinite for any small set of parameters A Ă Ḡ. Proof.We may assume that A Ď G. Let Ḡ be a monster model for both the full language and original language.Then Ḡ{ Ḡ00 A is a topological quotient of Ḡ{ Ḡ00 H , where the former quotient is computed in the original language and the latter one in the full language.Since Ḡ{ Ḡ00 H is a profinite group, so is Ḡ{ Ḡ00 A . Another important consequence of Pontryagin duality is the following fact (see [RZ10, Proposition 5.1.2]).Recall that under our assumptions, topological spaces are considered to be Hausdorff. Fact 1.9. A topological, unital ring is profinite if and only if it is compact. Throughout the rest of the paper, R " pR, `, ¨, 0, 1, . ..q is a (not necessarily commutative) unital ring, possibly with an additional structure, R ą R is a κ-saturated and strongly κhomogeneous elementary extension of R, and A Ă R a small set of parameters.More generally, one can consider a ring R which is 0-definable in a structure M .We assume that R is unital in order to proceed more smoothly in some proofs, apply Fact 1.9, or talk about the groups of invertible upper triangular matrices, but many definitions and observations work without this assumption, which will be mentioned in some places.However, an important consequence of Fact 1.9, namely the equality of the components in Proposition 2.6(iv), requires unitality (see the discussion after Question 2.8). Whenever we consider an ideal I of a ring R, we will specify whether we mean a left, right, or two-sided ideal; except the cases where the (unital) ring R is commutative.The following inclusions are obvious: In fact, we prove in Proposition 2.6(i)-(iii) that the components of the top row of the diagram coincide with the respective components of the bottom row.That is, there is no need to distinguish between the ring components and ideal components, which justifies item (v) of Definition 2.1.We moreover prove in Proposition 2.6(iv) that the components R0 A,ring and R00 A,ring also coincide in any (unital) ring.This means that among the defined components there are only at most two distinct ones, and we leave as a question whether they coincide (see Question 2.8).We will keep distinguishing the components from the diagram until after Proposition 2.6 is proven. The following example shows that (similarly to the group 00-component) the component R00 R,ideal can be thought of as a generalization of the kernel of the standard part map in the sense that it coincides with this kernel in a certain class of compact rings. 
Example 2.2.If R is a compact topological ring with a basis of neighborhoods of 0 consisting of definable sets, and all definable subsets of R have the Baire property, then R00 R,ideal " kerpstq, where st : R Ñ R is the "standard part" map, and R{ R00 R,ideal -R.In particular, this applies to the ring Z p of p-adic integers in the (pure) language of rings. Proof.Let µ be the two-sided ideal of R consisting of the infinitesimals, that is the intersection of the Ū 's with U ranging over all definable neighborhoods of 0. It is well-known that compactness of R yields a well-defined group (in fact, also ring) homomorphism st : R Ñ R defined by stprq :" r 1 for a unique r 1 P R with r ´r1 P µ; moreover, kerpstq " µ and R{ kerpstq -R which is of bounded size.Therefore, R00 R,ideal Ď kerpstq.It remains to show that kerpstq Ď R00 R,ring .Write R00 R,ring as the intersection of some R-definable sets Pi , i P I, such that for every i there is j with Pj ´P j Ď Pi .Then each P i (computed in R) is generic (that is some finitely many additive translates of P i cover R), and so, by compactness of R, each P i is non-meager.Since each P i has also the Baire property, we conclude from Pettis theorem (see [Kec95,Theorem 9.9]) that each P i ´Pi is a neighborhood of 0. By the choice of the P i 's, for every i there is j such that P j ´Pj Ď P i , so we conclude that each P i is a definable neighborhood of 0. Hence, kerpstq " µ Ď R00 R,ring .Thus, we get the induced (abstract) isomorphism from R{ R00 R,ideal to R. To see that it is a homeomorphism, it is enough to show that it is continuous (as both rings are compact).For this we need to check that st ´1rF s is type-definable for any closed subset F of R. Note that F " Ş rPRzF U c r for some choice of definable neighborhood U r of r (which exists by assumption).So it is enough to check that st ´1rF s " µ `ŞrPRzF Ū c r which is clearly type-definable, where Ūr is the interpretation of U r in R.This is left for the reader. The fact that the assumptions are satisfied for the ring Z p follows from quantifier elimination in Q p in Macintyre's language and the definability of Z p in Q p (see [Bél12]). Consider the action of the monoid p R, ¨q on R by left multiplication.For any r P R, the map f r : R Ñ R given by f r pxq :" r ¨x is an endomorphism of the additive group p R, `q.For X Ď R, define its setwise stabilizer as Stab RpX q :" r P R : r ¨X Ď X ( . For any X, Write r 2 " r 1 `a for some a P Stab RpGq, and consider any r 1 P f ´1 r 1 rGs X G. Then r 1 r 1 P G and ar 1 P G, so r 2 r 1 " r 1 r 1 `ar 1 P G, and therefore r 1 P f ´1 r 2 rGs.This proves the claim.As f ´1 r rGs X G depends only on the Stab RpGq-coset of r, JpGq can be written as the intersection of a small number of type-definable sets over the same small set of parameters (namely A together with a fixed set of representatives of the Stab RpGq-cosets), so JpGq is type-definable.Since JpGq is A-invariant, it is in fact A-type-definable.Since the subgroups f ´1 r rGs, r P R, of the additive group of R have uniformly bounded index, an intersection of a small number of such subgroups is also a subgroup of bounded index.Hence, JpGq has bounded index. (ii) follows by a similar argument. A key point in what follows is the trivial observation below that the assumption that the index of the stabilizer is bounded is always satisfied when G is a bounded index subring of R. 
A standard observation about the connected components of groups is that each component has only boundedly many conjugates, so it must contain their intersection.In Lemma 2.3, we instead used the assumption on the index of the stabilizer.Interestingly, the assumption that the index of the left [or right] stabilizer is bounded is sufficient to find a two-sided ideal instead of just one-sided one, as proved in the proposition below. Proposition 2.5.Let G ď p R, `q be a subgroup with bounded index such that either Proof.(i) Let I l and I r be the smallest A-type-definable left and right, respectively, ideals in R with bounded index.By Remark 2.4 applied to S :" I l , we see that Stab 1 RpI l q has bounded index.Thus, by Lemma 2.3, I l contains I r .In the same way, I r contains I l .Hence, I l " I r " R00 A,ideal is a two-sided ideal.Now, suppose that Stab RpGq has bounded index (the case when Stab 1 RpGq has bounded index is similar).Then, by Lemma 2.3, G contains I l , so we are done by the conclusion of first paragraph of this proof. (ii) The argument is again similar. We are now able to prove that some of the connected components introduced in Definition 2.1 are actually equal. Proposition 2.6. (i) R000 A,ring " R000 A,ideal , (ii) R00 A,ring " R00 A,ideal , (iii) R0 A,ring " R0 A,ideal , (iv) R0 A,ring " R00 A,ring .Proof.Items (i) and (ii) follow from Remark 2.4 and Proposition 2.5 applied to G " S :" R000 A,ring and G " S :" R00 A,ring , respectively.We prove (iii) and (iv).Since the quotient ring R{ R00 A,ideal is compact, it is profinite by Fact 1.9, so there is a basis of neighborhoods of 0 that consists of clopen two-sided ideals.Let π : R Ñ R{ R00 A,ideal be the quotient map.We have R00 A,ideal " ( . Consider a clopen two-sided ideal I of R{ R00 A,ideal .Both J :" π ´1rI s and its complement are type-definable, hence definable.Also, J has finite index.Since R00 A,ideal ď J , the orbit of J under Autp R{Aq is bounded and so finite by definability of J .Thus, Ş f PAutp R{Aq f rJ s is Adefinable with finite index.This shows that R00 A,ideal is an intersection of A-definable two-sided ideals with finite index, and therefore R00 A,ideal " R0 A,ideal .In particular, R00 A,ring Ď R0 A,ring Ď R0 A,ideal " R00 A,ideal " R00 A,ring (where the last equality holds by (ii)), and so we get (iii) and (iv). We now adopt the notation from item (v) of Definition 2.1; that is, we write R000 A for R000 A,ideal p" R000 A,ring q, R00 A for R00 A,ideal p" R00 A,ring q, and R0 A for R0 A,ideal p" R0 A,ring q.Proposition 2.6 establishes that R00 A " R0 A regardless of the first-order structure of R.This is in stark contrast to the case of groups.The key difference is that due to Pontryagin duality, every (unital) compact topological ring is necessarily profinite, hence totally disconnected (which forces R{ R00 A and R{ R0 A to be the same object).The analogous statement is not true for groups.In particular, by Corollary 1.7, given an abelian group G with infinite exponent considered with the full structure, the (compact) quotient Ḡ{ Ḡ00 G is not profinite; it follows that Ḡ00 G ‰ Ḡ0 G .A concrete instance of this case is pZ, `q, discussed in more detail in Example 2.18.Another counterexample is the circle group S :" S 1 pRq defined in an o-minimal expansion of R. We have S0 H " S, but S00 H ‰ S0 H as it consists of the infinitesimal elements of S. Regarding the components R00 A and R000 A , let us write explicitly what we have observed in the first paragraph of the proof of Proposition 2.5. Corollary 2.7. 
(i) R00 A is the smallest left and the smallest right A-type-definable ideal of R with bounded index. (ii) R000 A is the smallest left and the smallest right A-invariant ideal of R with bounded index.Question 2.8.Is R000 A " R00 A p" R0 A q? Equivalently, is R{ R000 A always profinite?This question is strongly related to some problems concerning our computation of the typedefinable connected component of unitriangular groups, which will be discussed in Section 3.2 after Question 3.6.In particular, see Lemma 3.8 for equivalent statements. We conclude this subsection with a discussion on what happens if we drop the assumption that R is unital.First, observe that this assumption is not needed in Lemma 2.3, Remark 2.4 and Proposition 2.5 (working with JpGq X G in place of JpGq in the proof).However, the assumption that R is unital was used in the proofs of Proposition 2.6 (iii) and (iv).Nevertheless, it turns out that (iii) holds also for non-unital rings, which is explained below, whereas (iv) fails in general: to see it, start from any abelian group pR, `, . . .q for which p R, `q00 A ‰ p R, `q0 A and turn it into a (non-unital) ring with the trivial multiplication.Then the above additive group components coincide with the respective ring components, so R00 A " p R, `q00 A ‰ p R, `q0 A " R0 A .The proofs of Lemma 2.3, Remark 2.4 and Proposition 2.5 can be easily adapted to yield the following lemma. Lemma 2.9.Let R be any (not necessarily unital) ring.Let G ď p R, `q be an A-definable subgroup with finite index such that Stab RpGq [respectively Stab 1 RpGq] has finite index.Then: RpSq are both of finite index; (iii) G contains the intersection of all A-definable left ideals of finite index and also the intersection of all A-definable right ideals of finite index, and these two intersections coincide and form a two-sided ideal. Proof.(i) In the proof of Lemma 2.3(i), it is enough to work with JpGq X G and observe that all f ´1 r rGs X G are definable and of finite index and there are only finitely many of them.(ii) follows as in Remark 2.4.(iii) We modify the proof of Proposition 2.5(i).Consider any A-definable left ideal I of finite index.By (ii), Stab 1 RpI q has finite index.Thus, by (i), I contains an A-definable right ideal of finite index.Symmetrically, we have the same statements for switched roles of "left" and "right".This implies that the intersection of all A-definable left ideals of finite index coincides with the intersection of all A-definable right ideals of finite index, and so it is a two-sided ideal.Moreover, by (i) this two-sided ideal is contained in G. Proposition 2.10. For an arbitrary (not necessarily unital) ring R, R0 A,ring " R0 A,ideal coincides with the intersection of all A-definable left [right] ideals of finite index. Proof.By Lemma 2.9, the intersection of all A-definable left [right] ideals of finite index is a two-sided ideal I. Since I is type-definable, R{I is a compact topological ring (with the logic topology).It is also profinite as a group, as I is an intersection of definable finite index subgroups. Claim.If a topological ring is profinite as a group, then it is profinite as a ring.In particular, R{I is profinite as a ring.Proof of Claim.Let S be a topological ring which is profinite as a group.Then S has a basis of neighborhoods of 0 consisting of clopen subgroups, and we need to show that it has a basis of neighborhoods of 0 consisting of clopen two-sided ideals.So take a clopen subgroup V Ď S. 
For each x P S, there are open neighborhoods U x Q 0 and By compactness, there are finitely many x 0 , x 1 , . . ., x n´1 such that S " Clearly, U is an open neighborhood of 0 and SU S `U Ď V .Let H be the group generated by SU S `U .Then H is a two-sided ideal.Since SU S `U is open, H is open (therefore clopen), and H Ď V because V is a group.This suffices. Hence, as in the proof of Proposition 2.6, we get that I is an intersection of A-definable two-sided ideals of finite index.Thus, R0 A,ideal Ď I, but the opposite inclusion is immediate from the definition of I, so we have equality.Hence, by Lemma 2.9 (ii) and (iii), we easily get R0 A,ideal Ď R0 A,ring , while the opposite inclusion is obvious.2.2.Characterization of the ring components.We now give a characterization of the ring components in terms of subgroups of the additive group.For convenience, the following result is stated in two parts, even though the components R00 A and R0 A are equal.Proposition 2.11.(i) R00 A is the intersection of all A-type-definable subgroups G of p R, `q with bounded index such that rp R, `q : Stab RpGqs is bounded. (ii) R0 A is the intersection of all A-definable subgroups G of p R, `q with finite index such that rp R, `q : Stab RpGqs is finite. Proof.If G is a subgroup of p R, `q with bounded index such that Stab RpGq has bounded index, then, by Proposition 2.5(i), R00 A Ď G, and so: Conversely, we have the following lemma. Lemma 2.12.Let G be an A-type-definable subgroup of p R, `q with bounded index.The following conditions are equivalent: RpGq in items (ii) and (iii). Likewise the lemma below for A-definable groups. Lemma 2.13.Let G be an A-definable subgroup of p R, `q with finite index.The following are equivalent: RpGq has bounded index.If G is A-definable, then the same holds for "bounded" replaced by "finite". 2.3.Ring components vs. additive group components.Our goal is to compare the connected components of R to the connected components of the additive group p R, `q.We start with an immediate observation. It is natural to ask under which conditions R00 A is equal to one of the group components.Namely, Question 2.16.When R00 A " p R, `q00 A ?When R00 A " p R, `q0 A ?Our objective is now to find a characterization of when p R, `q0 A " p R, `q00 A .This equality means exactly that the group quotient R{p R, `q00 A is profinite (this equivalence is well-known and can be justified by an argument as in the proof of Proposition 2.6).Below is an immediate consequence of Corollary 1.7 for additive groups of rings. Corollary 2.17.Suppose that R is considered with the full structure. (i) If pR, `q has infinite exponent, then p R, `q{p R, `q00 R is not profinite, and so R is profinite, and so p R, `q00 R " p R, `q0 R .A fundamental example of a ring whose additive group has infinite exponent is the ring of integers.Regardless of the structure on Z, every subgroup of p Z, `q with finite index is of the form n Z for some n ‰ 0, and so it is 0-definable.Hence, for any structure on Z, p Z, `q0 " Ş n‰0 n Z exists and is an ideal, so it coincides with Z0 (which therefore exists).Example 2.18.Consider Z with the full structure.Since Z has infinite exponent, the above comment and Corollary 2.17 imply that Z00 Z " Z0 " p Z, `q0 ‰ p Z, `q00 Z .Using more explicit arguments, in [BCG13, Example 4.5] the same conclusion was obtained working with the pure ring structure pZ, `, ¨q. 
The core argument behind Corollary 1.7 relies on harmonic analysis and the description of the Bohr compactification which it provides.On the other hand, both this corollary as well as the corollaries which we derive from it are stated in algebraic and model-theoretic terms.This leads to a question whether they can be proved by means of model-theory, e.g.: Question 2.19.Can one prove Corollary 2.17 without referring to Pontryagin duality? We have already seen that R00 A " R0 A may be strictly bigger than p R, `q00 A .Now, we give examples where R00 A is strictly bigger than p R, `q0 A .Example 2.20.We are going to find an infinite field K and a 0-definable proper subgroup H ă R " p K, `q with finite index.In a field of characteristic p ą 0, such a subgroup always exist, and we can add a predicate for it.But we also give an example for a pure field structure. Let p be prime and n P N ą0 .Consider the finite field F p n in the language of rings.The 0definable function f : F p n Ñ F p n given by f pxq " x p ´x is a homomorphism of pF p n , `q whose kernel is the prime field F p Ď F p n .Hence, the image of f is a subgroup of pF p n , `q with index p, and this is also true in the ultraproduct K :" ś nPN F p n {U for a non-principal U .Then K is infinite and has the desired subgroup H. Then p R, `q0 A ď H Ĺ R00 A " K. Remark 2.21.In a field K of characteristic 0, the group pK, `q is divisible and has no subgroups of finite index, so p K, `q0 " K exists and coincides with K00 . Lemmas 2.12 and 2.13 give us the following straightforward criteria for when the typedefinable connected component of R differs from the connected components of p R, `q. Corollary 2.22. (i) R00 A ‰ p R, `q00 A if and only if there exists an A-type-definable G ď p R, `q with bounded index such that rp R, `q : Stab RpGqs is unbounded. (ii) R0 A ‰ p R, `q0 A if and only if there exists an A-definable G ď p R, `q with finite index such that rp R, `q : Stab RpGqs is infinite. Observe that if A Ď R, then on the right-hand side of the second criterion the ring R can be replaced by R. Now, we give an application of the second criterion.The example Z 2 rXs was suggested to us by Światos law Gal. Example 2.23.(1) Let R :" Z 2 rXs be equipped with the full structure.We will show that it satisfies the right hand side of Corollary 2.22(ii) for any A Ď R, so R0 Then h is an epimorphism of groups and G :" ker h is a subgroup of R of index 2.We will check that f P Stab R pGq iff f is constant, which directly implies that rp R, `q : Stab Rp Ḡqs is infinite. Clearly 0, 1 P Stab R pGq.Now, take f P Z 2 rXs with degpf q " k ą 0. Fix some natural n ą 1 such that 2 n ´k ą 2 n´1 .Let g :" X 2 n ´k.Then g P G, but hpf ¨gq " a 2 n " 1, so f ¨g R G. (2) The above example generalizes to any R :" Kr Xs equipped with the full structure, where K is a field of characteristic p ą 0 and X " pX i q iăλ is a (possibly infinite) tuple of variables.Namely, let h : R Ñ Z p be given by h ´ÿ aīX ī¯: where π : pK, `q Ñ pZ p , `q is any group homomorphism which is the identity on Z p , and a 2 k is aī for the tuple ī with 2 k on the first position and 0 elsewhere.As in (1), G :" kerphq has finite index in pR, `q, whereas Stab R pGq has infinite index, because each polynomial in KrX 0 szZ p is not in Stab R pGq.So R0 A ‰ p R, `q0 A for any A Ď R. Example 2.23(1) implies that for R :" ZrXs we also have R0 A ‰ p R, `q0 A , but in order to see this, we need to make a few general remarks which may be useful in other situations, too. 
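Before turning to those general remarks, Example 2.23(1) can be spot-checked by machine. In the sketch below (hypothetical Python, not part of the paper; polynomials over $\mathbb{F}_2$ are coded as sets of exponents, and the formula for $h$, summing the coefficients sitting at exponents $1, 2, 4, 8, \dots$, is our reading of the example), $h$ is additive, so $G = \ker h$ has index $2$ in $(\mathbb{Z}_2[X],+)$, while the witness $g = X^{2^3-3} = X^5$ shows that $f = X^3 + X + 1$ does not stabilize $G$:

import random

# F_2[X] polynomials are represented as frozensets of exponents carrying coefficient 1.
def add(f, g):
    return f ^ g                        # addition of polynomials over F_2

def mul(f, g):
    out = set()
    for i in f:
        for j in g:
            out ^= {i + j}              # coefficients add mod 2
    return frozenset(out)

def h(f):
    """h(sum_i a_i X^i) := sum of the coefficients at the exponents 1, 2, 4, 8, ... (mod 2)."""
    return sum(1 for i in f if i >= 1 and i & (i - 1) == 0) % 2

# h is a group homomorphism onto Z/2, so G := ker(h) has index 2; spot-check additivity.
random.seed(0)
rand_poly = lambda: frozenset(e for e in range(12) if random.random() < 0.5)
for _ in range(200):
    p, q = rand_poly(), rand_poly()
    assert h(add(p, q)) == (h(p) + h(q)) % 2

# Following Example 2.23(1): f has degree k = 3; pick n with 2**n - k > 2**(n-1), here n = 3.
f = frozenset({0, 1, 3})                # f = X^3 + X + 1
g = frozenset({2 ** 3 - 3})             # g = X^5
assert h(g) == 0                        # g lies in G, since 5 is not a power of two
assert h(mul(f, g)) == 1                # but f*g = X^8 + X^6 + X^5 is outside G
print("f = X^3 + X + 1 moves the element g = X^5 of G outside of G, so f is not in Stab(G)")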
Remark 2.24.Suppose R, S are rings A-definable in some structure M and f : S Ñ R is an A-definable epimorphism.Then f r SÅ s " R Å and f rp S, `qÅ s " p R, `qÅ , where ˚P t0, 00, 000u. Proof.This follows easily from the fact that for any group epimorphism h : G Ñ H and subgroups K ď G and L ď H, we have rH : f rKss ď rG : Ks and rG : f ´1rLss ď rH : Ls. Notice that whenever R is a ring definable in a structure M , then each of the components RÅ and p R, `qÅ (where ˚P t0, 00, 000u and A Ď R) computed with respect to the language L set,R coincides with the one with respect to the language L set,M . Example 2.25.For S :" Zr Xs ( X a tuple of variables of an arbitrary length) equipped with the full structure and any A Ď S we have S0 A ‰ p S, `q0 A .In order to see this, let R :" Z 2 rXs and take an epimorphism f : S Ñ R. Let M consist of two sorts S and R and equip it with the full structure.By the comment preceding this example, we can compute our components with respect to L set,M in place of L set,R .Since f is 0-definable in M , the conclusion follows from Example 2.23(1) and Remark 2.24. In Example 2.18, the left hand side of the criterion in 2.22(i) holds, so the right hand side holds as well.But can one see directly that the RHS of (i) holds in this example?Also, the left hand of the criterion in 2.22(ii) fails in this example, so the right hand side fails as well, but this is trivially seen directly, as each subgroup of finite index of pZ, `q is an ideal. Below we show a positive result for the case of a group component which does not depend on the parameters (which is for example always the case under NIP). Proof.By Corollary 2.7 and Remark 2.15, to prove (i), it is sufficient to show that if p R, `q0 exists, then it is a left ideal.For any r P R, the set f ´1 r rp R, `q0 s is an intersection of definable subgroups of p R, `q of finite index, so p R, `q0 Ď f ´1 r rp R, `q0 s.In (ii), the proof of p R, `q00 " R00 is similar; then the remaining equality follows from Remark 2.15. In (iii), the proof of p R, `q000 " R000 is again similar.Since p R, `q000 exists, so does p R, `q00 .As p R, `q is abelian, by [KP19, Theorem 0.5], we have p R, `q000 " p R, `q00 .The remaining equalities follow from (ii). 2.4.Definable compactifications of rings.We now turn our attention to the notion of definable compactifications of rings.Let us recall the notion of definable compactification. (1) For a definable X Ď R and a compact topological space C a function f : X Ñ C is said to be definable if for each pair of disjoint, closed subsets C 1 , C 2 Ď C, there are definable, disjoint subsets U 1 , U 2 Ď X such that f ´1rC i s Ď U i for i " 1, 2. (2) A definable compactification of a ring R " pR, `, ¨, . ..q is a compact topological ring C together with a definable ring homomorphism φ : R Ñ C with dense image.(3) The definable Bohr compactification of R is a unique up to isomorphism definable compactification φ : R Ñ C which satisfies the following universal property: if As in the context of groups, if a ring R is considered in the full set-theoretic language L set,R , then a definable [Bohr] compactification is the same thing as a classical [resp.Bohr] compactification of R considered with the discrete topology. An essential result of [GPP14] shows the existence and uniqueness of the definable Bohr compactification of a definable group by means of its connected components.We state an analogous result for rings. 
Proposition 2.28.The definable Bohr compactification of R is R{ R00 R with the natural map Proof.This is proven similarly to Proposition 3.4 of [GPP14], as the argument about lifting a group homomorphism also works for ring homomorphisms. The above definition and proposition are valid also for non-unital rings.Let us note that in contrast with groups, by Fact 1.9, the definable Bohr compactification of a unital ring R coincides with the universal definable, profinite compactification of R. 2.5.Model-theoretic connected components of topological rings and the classical Bohr compactification.In [GPP14] and in Section 2 of [KP19], the classical Bohr compactification of a topological group G was described as Ḡ{ Ḡ00 top , where G is equipped with a structure in which all open sets are 0-definable (e.g. with the full structure), where Ḡ00 top can be described as the smallest 0-type-definable, bounded index subgroup of Ḡ containing the infinitesimals.In fact, several equivalent definitions of Ḡ00 top are given in Section 2 of [KP19].Here, we want to present an analog for topological rings, describing their Bohr compactifications (which coincide with the universal profinite compactifications for unital rings) in terms of a suitable component, where the Bohr compactification of a topological ring R is, of course, defined as the unique universal (ring) compactification of R. Let R be a ring 0-definable in a structure M so that all open subsets of R are 0-definable (e.g.M " R is equipped with the full structure).Let µ be the ring of infinitesimal elements in R. Then Rµ Ď µ, but µ is not necessarily a left ideal of R. M , an M -type-definable subring of bounded index.As before, for any r P R, let f r : R Ñ R be given by f r pxq :" r ¨x.Let JpGq :" Ş rP R f ´1 r rGs (intersected additionally with G, if one wishes to drop our general assumption that R is unital); it is the largest left ideal of R contained in G.By Remak 2.4 and the proof of Lemma 2.3, we can find a small set S (e.g. a set of representatives of cosets of G in R which contains 1) such that JpGq " Ş rPS f ´1 r rGs and so JpGq is M -type-definable.We will prove that JpGq " G, which shows that G is a left ideal.Then the right version of this argument shows that G is also a right ideal, so we will be done. We need to show that G Ď JpGq.Let Gpxq be the partial type defining G, and Jpxqthe partial type defining JpGq.Both types are with parameters from M .Take any formula ϕpxq P Jpxq.It is enough to show that G Ď ϕp Rq.By compactness, we can find ψpxq P Gpxq and s 0 , . . ., s n´1 P S such that f ´1 s 0 rψp Rqs X ¨¨¨X f ´1 s n´1 rψp Rqss Ď ϕp Rq.So we can find r 0 , . . ., r n´1 P R such that f ´1 r 0 rψp Rqs X ¨¨¨X f ´1 r n´1 rψp Rqs Ď ϕp Rq.Since for every r P R, G Ď f ´1 r rGs, we have G Ď f ´1 r 0 rψp Rqs X ¨¨¨X f ´1 r n´1 rψp Rqs Ď ϕp Rq.Definition 2.30.R00 top :" µ `R 00 M . Since R00 M is the smallest M -type-definable two-sided ideal [ring] of bounded index, we get the following corollary. Corollary 2.31. R00 top is the smallest M -type-definable, bounded index two-sided ideal containing µ and also the smallest M -type-definable, bounded index ring containing µ. Proposition 2.32.The quotient map π : R Ñ R{ R00 top is the Bohr compactification of the topological ring R. 
Proof.The proof is a straightforward adaptation of the proof of [KP19, Fact 2.4(ii)], so we will skip it.Let us only remark that, using the notation from the proof of [KP19, Fact 2.4(ii)], since kerpf ˚q is a bounded index two-sided ideal which is an intersection of some sets of the form Ū for U open in R, we see that it is a 0-type-definable, bounded index two-sided ideal containing µ, and so R00 top Ď kerpf ˚q by Corollary 2.31. Classical and definable Bohr compactifications of some matrix groups Our aim in this section is to describe the definable (in particular classical, taking the full structure) Bohr compactifications of some classical discrete groups.We focus on the groups UT n pRq and T n pRq of (respectively) upper unitriangular and invertible upper triangular matrices over a (unital) ring R and describe their type-definable connected components in order to compute their definable Bohr compactifications.This is done in Subsection 3.2.In Subsection 3.4, we apply these general considerations to some classical rings R (such as Z or Kr Xs), computing explicitly the definable Bohr compactifications of UT n pRq and T n pRq for those rings.In the last subsection, we apply our approach to the topological groups UT n pRq and T n pRq for R being a topological (unital) ring, computing their classical Bohr compactifications. In this section, we often write matrices where some of the coefficients are replaced with sets of coefficients to denote the set of matrices in which the coefficients can be (independently) chosen from the sets that replace them.Similarly, we replace submatrices with sets of submatrices. 3.1.Some linear algebra over rings.First, we analyze the structure of the group UT n pRq for a unital ring R. The following belongs to standard linear algebra.A matrix B P UT n`1 pRq can be written as ˆA v 0 1 ˙for some A P UT n pRq and v P R n .The map ψ : UT n`1 pRq Ñ UT n pRq given by sending B to its upper-left n ˆn submatrix A is a group epimorphism.Its kernel consists of all matrices of the form ˆI v 0 1 ˙, v P R n , and is naturally isomorphic to pR, `qn . The short exact sequence splits via the map s : UT n pRq Ñ UT n`1 pRq which sends A to ˆA 0 0 1 ˙.Hence, UT n`1 pRq becomes a semidirect product UT n pRq˙φ pR, `qn .With a direct calculation, we verify that the action φ : UT n pRq Ñ AutppR, `qn q is just the standard action of UT n pRq on the R-module R n : ˆA 0 0 1 Thus, the group operation in UT n pRq ˙φ pR, `qn is just pA, vq ¨pA 1 , v 1 q " pAA 1 , v `Av 1 q.We now perform a similar analysis for T n pRq.First, consider the following variant of the semidirect product of groups.Suppose that K, H and N are groups, and that there are: a left action φ 1 : K Ñ AutpN q and a right action φ 2 : H Ñ AutpN q.For k P K, h P H, n P N , write kn and nh in place of φ 1 pkqpnq and φ 2 phqpnq, respectively.The set K ˆH ˆN can be equipped with the following operation: It is easy to see that this is a group operation if and only if both actions commute, that is if kpnhq " pknqh for all k P K, h P H, n P N .In that case, we will denote such a group as pK, Hq ˙φ2 φ 1 N .The groups K ˆH and N are naturally embedded in pK, Hq ˙φ2 φ 1 N as K ˆH ˆt1u, and t1u ˆt1u ˆN respectively.The subgroup N ď pK, Hq ˙φ2 φ 1 N is normal.The action of K ˆH on N by conjugation is as follows: pk, h, 1q ¨p1, 1, nq ¨pk, h, 1q ´1 " pk, h, 1q ¨p1, 1, nq ¨pk ´1, h ´1, 1q " p1, 1, knh ´1q. 
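Before this construction is applied to $\mathrm{T}_{n+1}(R)$ below, the block computation for $\mathrm{UT}_{n+1}(R)$ above is easy to sanity-check numerically. A minimal sketch (hypothetical Python/NumPy, specialised to $R = \mathbb{Z}$ and $n = 3$) confirming that multiplication of the block matrices realises the semidirect-product operation $(A,v)\cdot(A',v') = (AA', v + Av')$:

import numpy as np

rng = np.random.default_rng(0)
n = 3  # we embed UT_3(Z) and (Z^3, +) into UT_4(Z)

def random_unitriangular(n):
    """A random matrix in UT_n(Z): ones on the diagonal, zeros below."""
    return np.triu(rng.integers(-5, 6, size=(n, n)), k=1) + np.eye(n, dtype=int)

def embed(A, v):
    """The block matrix [[A, v], [0, 1]] of UT_{n+1}(Z)."""
    B = np.eye(n + 1, dtype=int)
    B[:n, :n] = A
    B[:n, n] = v
    return B

for _ in range(100):
    A, A2 = random_unitriangular(n), random_unitriangular(n)
    v, v2 = rng.integers(-5, 6, size=n), rng.integers(-5, 6, size=n)
    # matrix product in UT_{n+1}(Z) ...
    left = embed(A, v) @ embed(A2, v2)
    # ... agrees with the semidirect-product formula (A, v)(A', v') = (AA', v + A v').
    right = embed(A @ A2, v + A @ v2)
    assert np.array_equal(left, right)
print("UT_{n+1}(Z) = UT_n(Z) x| (Z^n, +): formula verified on 100 random samples")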
Note that if either of the actions φ 1 , φ 2 is trivial, then pK, Hq ˙φ2 φ 1 N is just a semidirect product of K ˆH and N .Now, consider a matrix B P T n`1 pRq.It can be written as ˆA v 0 r ˙for some A P T n pRq, v P R n , and r P R ˚.We consider a product of two matrices represented this way: From the calculation above, it follows that T n`1 pRq is isomorphic to the group pT n pRq, R ˚q˙φ 2 φ 1 pR n , `q with A P T n pRq acting on R n by v Þ Ñ Av, and R ˚acting on R n by v Þ Ñ vr.Hence, the conjugate of v P R n by pA, rq P T n pRq ˆR˚i s pAvqr ´1 " Apvr ´1q. Discrete triangular groups. Recall that R is a unital ring, and A Ă R is a small set of parameters.Our first goal is to describe UTp Rq 00 A , the A-type-definable connected component of UT n p Rq, along with the quotient UTp Rq{ UTp Rq 00 A .In particular, for A :" R, we get a description of the definable Bohr compactification of UT n pRq; working in L set,R , this compactification coincides the classical Bohr compactification of the discrete group UT n pRq.A natural candidate for the component is UT n p R00 A q.However, we will see that in general it may happen that UTp Rq 00 A ň UT n p R00 A q. Define a sequence I i,A p Rq, i P N ą0 , of A-type-definable subgroups of p R, `q as follows: A , ‚ for i ą 0 let I i`1,A p Rq be the smallest A-type-definable subgroup of p R, `q containing the set R ¨Ii,A p Rq. Moreover, if for some i P N ą0 the group I i,A p Rq is a two-sided ideal (or just left ideal) of R, then I j,A p Rq " R00 A for all j ě i.Conversely, if I j,A p Rq is constant for j ě i, then I i,A p Rq " R00 A is an ideal.Indeed, since I i,A p Rq " I i`1,A p Rq, it is a bounded index, A-type-definable left ideal contained in R00 A , and so it coincides with R00 A by Corollary 2.7.When R and A are fixed, we will omit the parameters and write I i to denote I i,A p Rq. Proposition 3.1. UT n p Rq 00 A " While the groups I i need not be (two-sided) ideals, if i, j ă k, then for any coset a `Ii P p R, `q{I i and b `Ij P p R, `q{I j we have pa `Ii qpb `Ij q Ď ab `Ik ; that is, the coset ab `Ik is well-defined.Consequently, if S " ř s v s w s where each w s and each v s is a coset of I is and I js , respectively, then S can be unambiguously considered as an element of p R, `q{I k for any k such that i s , j s ă k for all s.In the result below, the group operation on the set of matrices is defined using this identification. , where B :" p R, `q andis a topological group isomorphism, with the right hand side equipped with the product topology induced from the logic topologies on the quotients B{I .The quotient B{I 1 is exactly the definable Bohr compactification of pR, `q.More precisely, the definable Bohr compactification of UT n pRq is the homomomorphism from UT n pRq to the above group of matrices given coordinatewise as the quotients by the appropriate I i 's. 
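To fix ideas, here is the shape of the matrix group appearing in Proposition 3.2 in the case $n = 4$, written out under our reading of the statement (so this display is only an illustration): the $(i,j)$ entry, for $i < j$, ranges over the quotient $B/I_{j-i}$, and the group operation is ordinary matrix multiplication via the identification $(a + I_i)(b + I_j) \subseteq ab + I_k$ for $i, j < k$ discussed above.
$$
\mathrm{UT}_4(\bar{R})/\mathrm{UT}_4(\bar{R})^{00}_{R} \;\cong\;
\begin{pmatrix}
1 & B/I_1 & B/I_2 & B/I_3\\
0 & 1 & B/I_1 & B/I_2\\
0 & 0 & 1 & B/I_1\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad B = (\bar{R},+).
$$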
To state the analogous results for the group T n p Rq, we need to define another non-decreasing sequence I 1 i,A p Rq, i P N ą0 , of A-type-definable subgroups of p R, `q as follows: ‚ I 1 1,A p Rq is the smallest A-type-definable subgroup of p R, `q which contains p R, `q00 A and which is closed under multiplication by R˚f rom both left and right.‚ for i ą 0 let I 1 i`1,A p Rq be the smallest A-type-definable subgroup of p R, `q that contains the set R ¨I1 i,A p Rq ¨R ˚and that is closed under multiplication by R˚f rom both left and right.By definition and induction, we have I i,A p Rq Ď I 1 i,A p Rq Ď R00 A for all R, A, i.Hence, if I j,A p Rq is constant for j ě i, then I 1 i,A p Rq " R00 A .Also, as before, if I 1 j,A p Rq is constant for j ě i, then I 1 i,A p Rq " R00 A .Again, when R and A are fixed, we write I 1 i to denote I 1 i,A p Rq. Proposition 3.3. T n p Rq 00 A " ¨p R˚, ¨q00 The group operation in the result below uses the identifications analogous to those discussed before Proposition 3.2: Proposition 3.4.The definable Bohr compactification of the (discrete) group T n pRq is , where P :" p R˚, ¨q{p R˚, ¨q00 R is the definable Bohr compactification of pR ˚, ¨q, B :" p R, `q, andis a topological group isomorphism, with the right hand side equipped with the product topology induced from the logic topologies on the quotients B{I 1 i . More precisely, the definable Bohr compactification of T n pRq is the homomomorphism from T n pRq to the above group of matrices given coordinatewise as the quotients by p R˚, ¨q00 R or by the appropriate I 1 i 's.We will prove Propositions 3.1-3.4later in this subsection.From now on, when we compute Bohr compactications, we will be describing them only as compact groups, skipping the information about the actual homomorphisms from the groups in question to these compact groups, since these homomorphisms are always as in the last parts of Propositions 3.2 and 3.4. The descriptions of the definable Bohr compactifications of UT n pRq and T n pRq given by Propositions 3.2 and 3.4 can be significantly improved under the following condition on the ring R: I i,R p Rq " R00 R for all i ě 2. p:q The condition asserts exactly that the sequence I i,R p Rq stabilizes after (at most) two steps.Assuming p:q, each quotient p R, `q{I i,R p Rq and p R, `q{I 1 i,R p Rq for i ě 2 is the ring definable Bohr compactification R dBohr of R. Hence, by Propositions 3.2 and 3.4, we get: Corollary 3.5.Assume R satisfies p:q.Then the definable Bohr compactification of the group considered with the product topology. In the next subsection, we will consider several classes of rings, each time showing that they satisfy p:q.This motivates the following: Question 3.6.Does p:q hold for every ring R? Condition p:q is strongly related to Question 2.8, which is explained in the next two lemmas. On the other hand, J is an R-invariant left ideal which contains p R, `q00 R and so has bounded index, hence it must contain R000 R by Corollary 2.7.Lemma 3.8.Let J be the subgroup of p R, `q generated by R ¨p R, `q00 R .The following conditions are equivalent. (i) J is type-definable. (ii) J is generated by R ¨p R, `q00 R in finitely many steps.(iii) R000 R " R00 R .If the above equivalent conditions hold, then p:q holds for R. 
A positive answer to Question 2.8 is the assertion that condition (iii) of Lemma 3.8 holds, yielding p:q and the descriptions of the definable Bohr compactifications of UT n pRq and T n pRq given by Corollary 3.5.So Question 2.8 can be restated in the following enriched form.Question 3.9.Do the equivalent conditions from Lemma 3.8 hold for every unital ring?If yes, is there a bound on the number of steps which are needed to generate a group by R¨p R, `q00 R which works for all rings R? We expect a positive answer to this question (so also to Question 3.6).This will be dealt with in a forthcoming paper of the third author and Tomasz Rzepecki.In the next subsection, we will give a positive answer in several concrete examples. Let us only argue here that in order to answer Questions 2.8 and 3.6 for commutative, unital rings R in the full language L set,R , we can restrict the context to polynomial rings over Z in possibly infinitely many variables.This essentially follows from the fact that for each commutative, unital ring there is a ring of polynomials Zr Xs (where X is a tuple of possibly infinitely many variables) and an epimorphism f : Zr Xs Ñ R. Indeed, let us work in the two-sorted structure M with sorts R and Zr Xs in the language L set,M .Then all the relevant "components" associated with R computed in L set,R coincide with the ones computed in L set,M , and similarly for the ring Zr Xs; hence, we can work in L set,M .Put P :" Zr Xs, and let P be the interpretation of P in the monster model M .Finally, since f is a 0-definable ring epimorphism, by Remark 2.24, we have f r P 000 H s " R000 H , f r P 00 H s " R00 H , and we easily check that f rI i,H p P qs " I i,H p Rq for all i. The same holds for non-commutative rings, using free rings in non-commuting variables in place of polynomial rings. We now show a number of lemmas needed in the proofs of Propositions 3.1-3.4.We will be using notations and observations from Subsection 3.1.Lemma 3.10.Let K, H, and N be 0-definable groups and G :" pK, Hq˙φ 2 φ 1 N with 0-definable actions φ 1 , φ 2 .Then Ḡ00 A " p K00 A ˆH 00 A q ˙φ2 φ 1 N 1 , where N 1 is the smallest A-type-definable, bounded index subgroup of N invariant under the actions of both K and H on N . Proof.First observe that a subgroup N 0 ď N is invariant under the actions of K and H if and only if it is invariant under conjugation by elements of K ˆH.The group Ḡ00 A X N is a bounded index, A-type-definable subgroup of N invariant under the action of K ˆH by conjugation, so it contains N 1 .The group Ḡ00 A X p K ˆHq is an A-type-definable subgroup of K ˆH of bounded index, so it contains p K ˆHq 00 A " K00 A ˆH 00 A .Thus, p K00 A ˆH 00 A q ˙φ2 φ 1 N 1 Ď Ḡ00 A .Since N 1 is invariant under both group actions, p K00 A ˆH 00 A q ˙φ2 φ 1 N 1 is a group.It is A-type-definable and with bounded index, so we get Ḡ00 A " p K00 A ˆH 00 A q ˙φ2 φ 1 N 1 .Corollary 3.11.Let H and N be 0-definable groups and G :" H ˙φ N with a 0-definable action φ.Then Ḡ00 A " H00 A ˙φ N 1 , where N 1 is the smallest A-type-definable, bounded index subgroup of N invariant under the action of H on N .Lemma 3.12.(i) Let G :" UT n pRq ˙φ pR, `qn , where φpAqpvq :" Av.Then the smallest bounded index, invariant under the action of UT n p Rq, and A-type-definable subgroup N 1 of p R, `qn is equal to I n ˆIn´1 ˆ. . .ˆI1 . 
(ii) Let G :" pT n pRq ˆpR ˚, ¨qq ˙φ2 φ 1 pR, `qn , where φ 1 pAqpvq :" Av and φ 2 prqpvq :" vr.Then the smallest bounded index, invariant under the actions of both T n p Rq and R˚, and A-type-definable subgroup N 1 of p R, `qn is equal to Proof.For k ď n, let S k be the image of the embedding of p R, `qk into p R, `qn by the map pv k , . . ., v 1 q Þ Ñ p0, . . ., 0, v k , . . ., v 1 q.We freely identify S k with p R, `qk .First we show (i).We prove the following by induction on k: For k " 1, simply observe that N 1 X S 1 is an A-type-definable subgroup of p R, `q of bounded index so it must contain I 1 .Now, suppose the statement holds for some k ą 0. Let C " tc P R : pc, 0, 0, . . ., 0q P N 1 X S k`1 u. We have N 1 X S k`1 Ě C ˆIk ˆ. . .ˆI1 .We conclude the induction by showing that C Ě I k`1 .Let x P I k be arbitrary and take v :" p0, x, 0, . . ., 0q P N 1 X S k`1 .For any r P R there is an A P UT n p Rq such that φpAqpvq " prx, x, 0, . . ., 0q P S k`1 .As N 1 is invariant under UT n p Rq, we have φpAqpvq P N 1 and also φpAqpvq ´v " prx, 0, 0, . . ., 0q P N 1 X S k`1 .This shows that C contains the set R ¨Ik .Therefore, since C is an A-type-definable subgroup of p R, `q of bounded index, it contains I k`1 .We have that N 1 " N 1 X S n Ě I n ˆ. . .ˆI1 .As I n ˆ. . .ˆI1 is A-type-definable with bounded index, it remains to show that it is invariant under the action of UT n p Rq. Take v " pv n , v n´1 , . . ., v 1 q P Rn .For a unitriangular matrix A, Av is of the form pv n `v1 n , v n´1 v1 n´1 , . . ., v 2 `v1 2 , v 1 q, where each v 1 i is an R-linear combination of tv j : j ă iu.So v P I n ˆ. ..ˆI 1 implies Av P I n ˆ. . .ˆI1 , since I i Ě I i `RI i´1 `. . .`RI 1 . We now prove (ii).First, let r, r 1 P R˚a nd let I denote the n ˆn identity matrix.Since N 1 is closed under the actions φ 1 , φ 2 , we have rN 1 r 1 " prIqN 1 r 1 Ď N 1 , so N 1 is closed under multiplication by R˚f rom both left and right.Now, similarly to (i), we prove by induction on k that For k " 1, we again observe that N 1 XS 1 is an A-type-definable subgroup of p R, `q of bounded index.It is closed under multiplication by R˚f rom both sides, so it must contain I 1 1 .Now, suppose the statement holds for some k ą 0. Define C as in the proof of item (i).Then 1 and we need to show C Ě I 1 k`1 .Let x P I 1 k be arbitrary and take v :" p0, x, 0, . . ., 0q P N 1 X S k`1 .For any r P R and r 1 P Rt here is an A P T n p Rq such that Avr 1 " prxr 1 , xr 1 , 0, . . ., 0q P S k`1 .We have Avr 1 P N 1 and also Avr 1 ´vr 1 " prxr 1 , 0, 0, . . ., 0q P N 1 X S k`1 .This shows that C contains the set R¨I 1 k ¨R ˚.Since C is an A-type-definable subgroup of p R, `q of bounded index, closed under multiplication by R˚f rom both left and right, it contains I 1 k`1 .We now have that N 1 is A-type-definable with bounded index, and clearly invariant under multiplication by R˚f rom the right, it remains to show that it is invariant under the action of T n p Rq. Take v " pv n , v n´1 , . . ., v 1 q P Rn .For a triangular matrix A, Av is of the form pr n v n `v1 n , r n´1 v n´1 `v1 n´1 , . . ., r 2 v 2 `v1 2 , r 1 v 1 q, where for each i, v 1 i is an R-linear combination of tv j : j ă iu and r i P R˚.So We are now ready to prove the previously stated results. Proof of Proposition 3.1.By Corollary 3.11 and Lemma 3.12(i), we have UT n`1 p Rq 00 A -UT n p Rq 00 A ˙pI n ˆ. . .ˆI1 q -¨UT n p Rq 00 A I n . . . 
where the isomorphisms are the obvious ones so that the first and the last group are in fact equal.Hence, the result follows by induction on n. Proof of Proposition 3.2.Write H for the group of matrices on the right hand side of the formula in Proposition 3.2.Let F : UT n p Rq Ñ H be the map sending a matrix ra ij s P UT n p Rq to the matrix rb ij s P H defined by As the group operation in H is the ordinary matrix multiplication after the identification of the product of cosets a `Ii , b `Ij with ab `Ik for all a, b P R and i, j ă k, the map F is a group homomorphism.It is clearly onto, and Proposition 3.1 implies that kerpF q " UT n p Rq 00 A .Hence, UT n p Rq{ UT n p Rq 00 A -H as an abstract group.By compactness of the logic topologies, in order to see that this isomorphism is a homeomorphism, it is enough to check that it is continuous.But this is clear, as the preimage by F of a subbasic closed set S in the product topology on H (i.e. S consists of all matrices in H whose fixed pi, jq-th entry belongs to a fixed closed subset of B{I j´i ) is type-definable. The proofs of Propositions 3.3-3.4are similar to the two previous ones. Connected components of abelian groups via characters. In this subsection, we give a description in terms of characters of the type-definable connected component of any abelian group.It will be needed to get descriptions of some Bohr compactifications in the next subsection, and may prove to be useful in future studies. Let G be an abelian group definable in a structure M .Recall that HompG, S 1 q is the group of all homomorphisms from G to the compact group S 1 " R{Z " r´1 2 , 1 2 q.By Hom def pG, S 1 q we denote the subgroup consisting of all definable homomorphisms in the sense explained before Fact 1.1.Note that if all subsets of G are definable, then Hom def pG, S 1 q " HompG, S 1 q. The next fact follows from the proofs of Lemma 3.2 and Proposition 3.4 of [GPP14]. Fact 3.13.Each χ P Hom def pG, S 1 q extends uniquely to an M -definable homomorphism χ P Homp Ḡ, S 1 q, where M -definable means that the preimages of all closed subsets of S 1 are Mtype-definable subsets of Ḡ. Lemma 3.2 of [GPP14] provides the following construction of χ from the fact above.Let g P Ḡ and let ppxq :" tppg{M q.For a formula φpxq P p, let cl pχrφpGqsq denote the closure of χrφpGqs in S 1 .The set Ş φPp cl pχrφpGqsq is shown to be a singleton in S 1 , and χpgq is defined to be the unique element of this singleton. Proposition 3.14.Suppose that G is an abelian group 0-definable in M .Then p Ḡ, `q00 M " Proof.The second equality is obvious.So we focus on the first equality.(Ď) Observe that for every χ P Hom def pG, S 1 q, kerp χq " č  is an M -type-definable subgroup of Ḡ of bounded index.Hence, kerp χq contains p Ḡ, `q00 M .(Ě) Take a P Ḡzp Ḡ, `q00 M .Let i : Ḡ Ñ Ḡ{p Ḡ, `q00 M be the (M -definable) quotient map.Since ipaq is not the neutral element and Ḡ{p Ḡ, `q00 M is a compact abelian group, the second part of Fact 1.2 yields ϕ P Hom c p Ḡ{p Ḡ, `q00 M , S 1 q with ϕpipaqq ‰ 0. Then χ 1 :" ϕ ˝i : Ḡ Ñ S 1 is a character which is definable over M .Hence, by Fact 3.13, χ :" χ 1 | G : G Ñ S 1 is a definable character with χ " χ 1 .We get that χpaq ‰ 0, so a R χ´1 "`´1 m , 1 m ˘‰ for any m P N such that 1 m ă | χpaq|.Remark 3.15.Let G be any group equipped with the full structure, and χ : G Ñ S 1 a (0definable) character.Let m ą 1.Take the 0-definable set D :" χ ´1 "`´1 m , 1 m ˘‰ and write D for its interpretation in Ḡ. 
Then: The right hand side of the inclusion is 0-type-definable.If the inclusion fails, then there is a 0-definable subset P of G such that Dz P is non-empty and disjoint from χ´1 "" ´1 m , 1 m ‰‰ .But then we can find r P DzP .Since r P G and r P Dz P , we have that χprq " χprq is not in , a contradiction with the definition of D and the fact that r P D. (ii) If this fails, then there is r in χ´1 "`´1 m , 1 m ˘‰ X Dc .Hence, by the definition of χ, we get that χprq is in the closure of χrD c s Ď p´1 m , 1 m q c , so χprq R p´1 m , 1 m q, a contradiction.By Proposition 3.14 and Remark 3.15, we get Corollary 3.16.Let G be an abelian group equipped with the full structure.Then 3.4.Triangular groups over some classical rings.We apply Propositions 3.2 and 3.4 (more precisely, Corollary 3.5) to compute definable (so also classical by equipping the ring of coefficients with the full structure) Bohr compactifications of UT n pRq and T n pRq for the following classical rings R: fields, Z, Kr Xs or even KrGs (where K is a field and G is a group or semigroup). For each of the above classes of rings, we first consider the group UT n pRq.We show that the set R ¨p R, `q00 R generates a group in finitely many steps, whence condition (ii) of Lemma 3.8 is satisfied.This shows that p:q holds for each of the considered rings, so we can apply Corollary 3.5 to compute the definable Bohr compactification of UT n pRq.In fact, in these examples, the set R ¨p R, `q00 R generates R00 R in one step, i.e.R ¨p R, `q00 R " R00 R .(On the other hand, one can show that the case of ZrXs equipped with the full structure requires exactly two steps, which will be shown in the aforementioned forthcoming paper of the third author with Tomasz Rzepecki).For each R, after dealing with the compactification of UT n pRq, we follow with the computation of the compactification of T n pRq. We begin with the case of an infinite field R " K.For any A, K ¨p K, `q00 A " K and so for all i ě 2 we have I i,A p Kq " K, the only non-trivial ideal of K. Corollary 3.5 gives us that the We now work with R :" Z. Proof.Since p Z, `q0 " Z 0 is an ideal, we clearly have Z ¨pZ , `q00 Z Ď p Z, `q0 , so it is remains to prove p Z, `q0 Ď Z ¨pZ , `q00 Z .The group p Z, `q00 Z is the intersection of a downward directed by inclusion family tP i p Zqu iPI of 0-definable sets.For every i P I we can find n i P P i pZqzt0u.Then n i ¨Z Ď P i p Zq ¨Z " Z ¨Pi p Zq. Thus, p Z, `q0 " Ş nPN ą0 n ¨Z Ď Z ¨Pi p Zq.By compactness, we conclude that p Z, `q0 Ď Z ¨č iPI P i p Zq " Z ¨pZ , `q00 Z . This lemma implies that p:q holds for R " Z.The quotient p Z, `q{I 1,Z p Zq is the definable Bohr compactification pZ, `qdBohr of pZ, `q, whereas p Z, `q{I 2,Z p Zq " p Z, `q{p Z, `q0 is Ẑ, i.e. the profinite completion of Z. So, by Corollary 3.5, we get that the definable Bohr . We are now interested in the rings of polynomials R :" Kr Xs, where K is an arbitrary infinite field and X is a (possibly infinite) tuple of variables.We show that R ¨p R, `q00 R " R " R0 R (i.e.again R ¨p R, `q00 R generates R0 R in a single step).In fact, we will work more generally with any ring R containing an infinite subfield K, covering also rings of the form KrGs, where G is a group or semigroup. Recall the notion of a thick set from [Gis11, Definition 3.1]. Definition 3.19.A subset D of a group is said to be thick if it is symmetric and there is a natural number n ą 0 such that for any elements g 0 , . . ., g n´1 there exist i ă j ă n with g ´1 i g j P D. 
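Definition 3.19 can be tested exhaustively in a finite group, which may help to see the role of the parameter $n$. A small brute-force sketch (hypothetical Python; a finite stand-in, since thickness is of interest mainly in infinite groups):

from itertools import product

def is_thick(elements, op, inv, D, n):
    """Definition 3.19 for a finite group and a fixed witness n: D is symmetric and among
    any n elements g_0, ..., g_{n-1} some pair i < j satisfies g_i^{-1} g_j in D."""
    if not all(inv(d) in D for d in D):
        return False
    return all(any(op(inv(t[i]), t[j]) in D for i in range(n) for j in range(i + 1, n))
               for t in product(elements, repeat=n))

# In (Z/12, +), the index-3 subgroup D = {0, 3, 6, 9} is thick with witness n = 4:
# among any four elements two are congruent mod 3, so their difference lies in D.
N = 12
G = range(N)
add = lambda x, y: (x + y) % N
neg = lambda x: (-x) % N
D = {0, 3, 6, 9}
assert is_thick(G, add, neg, D, n=4)
assert not is_thick(G, add, neg, D, n=3)   # e.g. 0, 1, 2 pairwise avoid D
print("D = 3Z/12Z is thick in Z/12 with witness n = 4")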
By compactness, it is clear that for any A-definable group G, each definable superset D of Ḡ00 A contains a definable superset of Ḡ00 A that is thick in Ḡ, namely D X D´1 .Hence Ḡ00 A is the intersection of some directed family of A-definable thick subsets of Ḡ. Note that for an arbitrary (unital) ring R, R¨p R, `q00 A Ď R00 A " R0 A .Hence, by compactness, we get Lemma 3.20.Let R be any ring. (i) R¨p R, `q00 A " R0 A if and only if for every A-definable superset P of p R, `q00 A there is an A-definable two-sided (or just left or right) ideal P 1 of R of finite index with P 1 Ď RP .(ii) R ¨p R, `q00 A " R if and only if for every P as in (i), R " RP .(iii) R˚¨p R, `q00 A " R if and only if for every P as in (i), R " R˚P . Proposition 3.21.Let R be any ring containing an infinite field K (e.g.R " Kr Xs), equipped with any structure.Then R ¨p R, `q00 A " R " R0 A .Proof.We need to check that the right hand side of item (ii) from Lemma 3.20 is satisfied.For this, take any A-definable symmetric subset P of R containing p R, `q00 A .Then P is thick.So P X K is thick in K, hence there is a non-zero d P P X K (as K is infinite).Since d is invertible, Rd " R. Thus, we have proved that RP " R, so we are done. As a corollary, we extend the description of the definable Bohr compactification of UT n pKq from the beginning of the subsection to UT n pRq for any R containing an infinite field K.By the last proposition, I i,R p Rq " R for all i ą 1, so Corollary 3.5 yields the following description of UT n pRq dBohr : ¨1 pR, `qdBohr 0 . . .We now turn to the group T n pRq for R containing an infinite field K. First, we need to show the following strengthening of Proposition 3.21 whose proof is less elementary, as it uses Proposition 3.14.Proposition 3.22.Let R be any ring containing an infinite field K (e.g.R " Kr Xs), equipped with any structure.Then R˚¨p R, `q00 A " R " R0 A .Proof.Without loss of generality, we can assume that A " R (enlarging R and A if necessary). We need to check that the right hand side of item (iii) from Lemma 3.20 is satisfied.For this, take any symmetric R-definable set D containing p R, `q00 A .Note that D is thick in R and that its realization D in R is also thick in R. We need to show that R " R ˚D. Choose a basis tb i u iPI for R treated as a linear space over K. Claim.For every finite J Ď I there is a thick subset D J of K such that ř jPJ D J b j Ď D. Proof of Claim.By Proposition 3.14 and compactness, there are finitely many R-definable characters χ 0 , . . ., χ k´1 : pR, `q Ñ S 1 and m P N ą0 such that χ ´1 0 rr´1 m , 1 m ssX¨¨¨Xχ ´1 k´1 rr´1 m , 1 m ss Ď D. Let χ ij : pK, `q Ñ S 1 be the character defined as the composition χ i ˝ej , where e j : pK, `q Ñ R is given by e j paq :" ab j .Put n :" |J|.Then, for D ij :" χ ´1 ij rr´1 mn , 1 mn ss (where i " 0, . . ., k ´1 and j P J) we have Since each D ij is thick (as the preimage of a thick set by a homomorphism), the set D J defined as Ş iăk,jPJ D ij is also thick (see [Gis10, Lemma 1.2]).By the last displayed formula, we have Claim.For every n P ω and thick subset D n of K we have K ¨Dˆn n " K ˆn, where X ˆn denotes the n-fold Cartesian power, and ¨coordinatewise multiplication. Proof of Claim.We need to show that for every a 0 , . . ., a n´1 P K there exists a P K such that pa 0 , . . 
$\ldots, a_{n-1}) \in a \cdot D_n^{\,n}$. Since $0 \in D_n$, we can assume that all the $a_i$'s are non-zero. Then the last statement is equivalent to the condition $a_0^{-1}D_n \cap \cdots \cap a_{n-1}^{-1}D_n \neq \{0\}$. Now, since $D_n$ is thick and each $a_i^{-1}\cdot$ is an automorphism of $(K,+)$, each $a_i^{-1}D_n$ is thick, so the intersection of all of them is also thick by [Gis10, Lemma 1.2], and so contains a non-zero element, because $K$ is infinite. Now take any $x \in R$. Then there are a finite $J \subseteq I$ and $k_j \in K$ for $j \in J$ such that $x = \sum_{j \in J} k_j b_j$. By the first claim, there is a thick subset $D_J$ of $K$ such that $\ldots \cong \big((R^{*},\cdot)^{\mathrm{dBohr}}\big)^{n}$.

3.5. Topological triangular groups. We describe how our approach can be adapted to compute the classical Bohr compactification of $UT_n(R)$ and $T_n(R)$ treated as topological groups with the product topology induced from the topology on $R$, where we assume that $R$ is a (unital) topological ring.

In order to do that, we first need to recall how to present model-theoretically the Bohr compactification of a topological group. So let $G$ be a topological group 0-definable in a first-order structure $M$ in such a way that all open subsets of $G$ are 0-definable (e.g. we can work in $L_{\mathrm{set},M}$). Following [KP19, Definition 2.3], we define $\bar G^{00}_{\mathrm{top}}$ to be the smallest bounded index subgroup of $\bar G$ which is an intersection of some sets of the form $\bar U$ for $U$ open in $G$. Let $\mu$ denote the intersection of the $\bar U$'s for $U$ ranging over all open neighborhoods of the neutral element of $G$; $\mu$ is the group of infinitesimal elements of $\bar G$. Proposition 2.1 of [GPP14] or Fact 2.4 of [KP19] says that $\bar G^{00}_{\mathrm{top}}$ is a normal subgroup of $\bar G$, and the quotient mapping $\pi\colon G \to \bar G/\bar G^{00}_{\mathrm{top}}$ is the Bohr compactification of $G$ (treated as a topological group). Proposition 2.5 of [KP19] describes $\bar G^{00}_{\mathrm{top}}$ as the smallest $M$-type-definable [or 0-type-definable], bounded index subgroup of $\bar G$ which contains $\mu$. We will be using this description rather than the original definition.

We need the following variant of Lemma 3.10.

Lemma 3.23. (i) Let $I_1, \ldots, I_n$ be topological groups 0-definable in $M$. Equip $G := I_n \times \cdots \times I_1$ with the product topology, and assume that all open subsets of $G$ are 0-definable in $M$. Then $\mu = \mu_{I_n} \times \cdots \times \mu_{I_1}$ (where $\mu_{I_i}$ is the group of infinitesimals in $I_i$), and $\bar G^{00}_{\mathrm{top}} = (\bar I_n)^{00}_{\mathrm{top}} \times \cdots \times (\bar I_1)^{00}_{\mathrm{top}}$.
(ii) Let $K$, $H$, and $N$ be topological groups 0-definable in $M$, and let $\varphi_1$ and $\varphi_2$ be continuous, 0-definable, respectively left and right actions by automorphisms of $K$ on $N$ and of $H$ on $N$. Equip $G := (K,H) \rtimes^{\varphi_2}_{\varphi_1} N$ with the product topology, and assume that all open subsets of $G$ are 0-definable in $M$. Then $\mu = (\mu_K \times \mu_H) \rtimes^{\varphi_2}_{\varphi_1} \mu_N$ (where $\mu_K$, $\mu_H$, and $\mu_N$ are the groups of infinitesimals in $K$, $H$, and $N$, respectively), and $\bar G^{00}_{\mathrm{top}} = (\bar K^{00}_{\mathrm{top}} \times \bar H^{00}_{\mathrm{top}}) \rtimes^{\varphi_2}_{\varphi_1} N_1$, where $N_1$ is the smallest 0-type-definable, bounded index subgroup of $\bar N$ containing $\mu_N$ (equivalently, containing $\bar N^{00}_{\mathrm{top}}$), and invariant under the actions of both $K$ and $H$.
(iii) Let $H$ and $N$ be topological groups 0-definable in $M$, and let $\varphi$ be a continuous, 0-definable left action of $H$ on $N$ by automorphisms. Equip $G := H \rtimes_{\varphi} N$ with the product topology, and assume that all open subsets of $G$ are 0-definable in $M$. Then $\mu = \mu_H \rtimes_{\varphi} \mu_N$ (where $\mu_H$ and $\mu_N$ are the groups of infinitesimals in $H$ and $N$, respectively), and $\bar G^{00}_{\mathrm{top}} = \bar H^{00}_{\mathrm{top}} \rtimes_{\varphi} N_1$, where $N_1$ is the smallest 0-type-definable, bounded index subgroup of $\bar N$ containing $\mu_N$ (equivalently, containing $\bar N^{00}_{\mathrm{top}}$), and invariant under the action of $H$.

Proof. (i) follows easily from the definitions of infinitesimals and product topology, and the aforementioned characterization of $\bar G^{00}_{\mathrm{top}}$ in terms of $\mu$.
(ii) Note that $G$ is a topological group. Observe that all open subsets of $K$, of $H$, and of $N$ are 0-definable in $M$, so the objects $\mu_K$, $\mu_H$, $\mu_N$, $\bar K^{00}_{\mathrm{top}}$, $\bar H^{00}_{\mathrm{top}}$, and $\bar N^{00}_{\mathrm{top}}$ are defined. As in (i), the equality $\mu = (\mu_K \times \mu_H) \rtimes^{\varphi_2}_{\varphi_1} \mu_N$ is clear from the definitions. Having this, the equality $\bar G^{00}_{\mathrm{top}} = (\bar K^{00}_{\mathrm{top}} \times \bar H^{00}_{\mathrm{top}}) \rtimes^{\varphi_2}_{\varphi_1} N_1$ follows as in Lemma 3.10, using the aforementioned characterization of $\bar G^{00}_{\mathrm{top}}$ in terms of $\mu$.
(iii) follows from (ii).

Let now $R$ be a (unital) topological ring. We work in $L_{\mathrm{set},R}$. Define a sequence $I_i(\bar R)$, $i>0$, of 0-type-definable subgroups of $(\bar R,+)$ as follows: $I_1(\bar R) := (\bar R,+)^{00}_{\mathrm{top}}$, and for $i>0$, $I_{i+1}(\bar R)$ is the smallest 0-type-definable subgroup of $(\bar R,+)$ containing the set $\bar R \cdot I_i(\bar R)$. By Corollary 2.31, we have $(\bar R,+)^{00}_{\mathrm{top}} = I_1(\bar R) \le I_2(\bar R) \le \cdots \le I_i(\bar R) \le \cdots \le \bar R^{00}_{\mathrm{top}}$, and all the comments right after the definition of $I_{i,A}$ in Subsection 3.2 have their obvious counterparts. In particular, $I_j$ is constant for $j \ge i$ if and only if $I_i = \bar R^{00}_{\mathrm{top}}$. Using Lemma 3.23, one can easily check that the proof of Lemma 3.12(i) adapts to the present context, so we get the following variant of Proposition 3.1. Keeping in mind the identifications as in the discrete case described right after Proposition 3.1, the proof of the next result is the same as for Proposition 3.2 (using Proposition 3.24 in place of 3.1), where $B := (\bar R,+)$ and the topology on the right-hand side is the product topology induced from the logic topologies on the quotients $B/I_i$. The quotient $B/I_1$ is exactly the Bohr compactification of the topological group $(R,+)$.

In order to state similar results for $T_n(\bar R)$, we need to and do assume that the group of units $(R^{*},\cdot)$ is topological with the topology induced from $R$. As in the discrete case, to state the results for the group $T_n(\bar R)$, we need to define another non-decreasing sequence $I'_i(\bar R)$, $i \in \mathbb{N}_{>0}$, of 0-type-definable subgroups of $(\bar R,+)$ as follows: $I'_1(\bar R)$ is the smallest 0-type-definable subgroup of $(\bar R,+)$ which contains $(\bar R,+)^{00}_{\mathrm{top}}$ and which is closed under multiplication by $\bar R^{*}$ from both left and right; for $i>0$, $I'_{i+1}(\bar R)$ is the smallest 0-type-definable subgroup of $(\bar R,+)$ that contains the set $\bar R \cdot I'_i(\bar R) \cdot \bar R^{*}$ and that is closed under multiplication by $\bar R^{*}$ from both left and right. And again, the comments right after the definition of $I'_{i,A}$ in Subsection 3.2 have their obvious counterparts. In particular, $I_i(\bar R) \subseteq I'_i(\bar R) \subseteq \bar R^{00}_{\mathrm{top}}$ for all $i$. Using Lemma 3.23, one can easily check that the proof of Lemma 3.12(ii) adapts to the present context, so we get the following variants of Propositions 3.3 and 3.4, where $P := (\bar R^{*},\cdot)/(\bar R^{*},\cdot)^{00}_{\mathrm{top}}$ is the Bohr compactification of the topological group $(R^{*},\cdot)$, $B := (\bar R,+)$, and $\ldots$ is a topological group isomorphism, with the right-hand side equipped with the product topology induced from the logic topologies on the quotients $B/I'_i$. We expect that the following variant of $(:)$ is true for every topological ring:

$(::)$ $\quad \ldots^{00}_{\mathrm{top}}$ for all $i \ge 2$.

Using Lemma 2.29, one can show that $(::)$ would follow from a positive answer to Question 3.9. This will be discussed in the forthcoming paper. For now, notice that if $R$ satisfies $(::)$, then our formulas for the Bohr compactifications of the topological groups $UT_n(R)$ and $T_n(R)$ obtained in Propositions 3.25 and 3.27 simplify in the same manner as in Corollary 3.5, but with each definable Bohr compactification replaced by the (topological) Bohr compactification.

Example 3.28. Let $R = K$ be a topological field (e.g. $\mathbb{R}$). Then $\bar K \cdot (\bar K,+)^{00}_{\mathrm{top}} = \bar K$, and so for all $i>1$, $I_i(\bar K) = \bar K$. Let $Q := (\bar K,+)/(\bar K,+)^{00}_{\mathrm{top}}$, i.e. the Bohr compactification of the topological group $(K,+)$. By Proposition 3.25, the Bohr compactification of the topological group $UT_n(K)$ is $UT_n(\bar K)/UT_n(\bar K)^{00}_{\mathrm{top}} \cong Q^{n-1}$. Let $P := (\bar K^{*},\cdot)/(\bar K^{*},\cdot)^{00}_{\mathrm{top}}$, i.e. the Bohr compactification of the topological group $(K^{*},\cdot)$. Since $\bar K^{*}\cdot(\bar K,+)^{00}_{\mathrm{top}} = \bar K$, and so for all $i \ge 1$, $I'_i(\bar K) = \bar K$, by Proposition 3.27, the Bohr compactification of the topological group $T_n(K)$ is $\ldots$

Remark 2.4. Let $S$ be a subring of $R$. Then $S \subseteq \mathrm{Stab}_R(S)$ and $S \subseteq \mathrm{Stab}'_R(S)$. Thus, if $S$ has bounded index, so do $\mathrm{Stab}_R(S)$ and $\mathrm{Stab}'_R(S)$.

2. Model-theoretic connected components of rings. 2.1. General theory. We define the following model-theoretic connected components of $R$ in a way analogous to the model-theoretic connected components of groups. $\ldots$ in $G$. We claim that $J(G)$ has the desired properties. It is clearly $A$-invariant. We may write $J(G)$ as $\bigcap \ldots$ is a subring, then $\mathrm{Stab}_R(S)$ is the largest subring of $R$ in which $S$ is a left ideal; it is known as the left idealizer of $S$ in $R$ [Goo76, p. 121]. We similarly consider the action of $(R,\cdot)$ on $R$ by right multiplication and denote the setwise stabilizer of $X$ under this action by $\mathrm{Stab}'_R(X) := \{r \in R : X \cdot r \subseteq X\}$. $\ldots$ $\mathrm{Stab}_R(G)$, (iii) $\mathrm{Stab}_R(G)$ has finite index. Also, we can replace $\mathrm{Stab}_R(G)$ with $\mathrm{Stab}'_R(G)$ in items (ii) and (iii). $\ldots$, and if $G$ is $A$-definable, then $\mathrm{Stab}_R(G)$ is also $A$-definable, and so $\mathrm{Stab}_R(G)$ is of bounded index if and only if it is of finite index.

Corollary 2.14. Let $G \le (\bar R,+)$ be $A$-type-definable with bounded index. Then $\mathrm{Stab}_R(G)$ has bounded index if and only if $\mathrm{Stab}'_A \ldots$

Proposition 3.2. The definable Bohr compactification of the (discrete) group $UT_n(R)$ is
$UT_n(\bar R)/UT_n(\bar R)^{00}_{R} \cong \begin{pmatrix} 1 & B/I_1 & B/I_2 & \cdots & B/I_{n-2} & B/I_{n-1}\\ & 1 & B/I_1 & \cdots & B/I_{n-3} & B/I_{n-2}\\ & & \ddots & \ddots & & \vdots\\ & & & & 1 & B/I_1\\ & & & & & 1 \end{pmatrix}$

$\ldots$ definable Bohr compactification $UT_n(K)^{\mathrm{dBohr}}$ of $UT_n(K)$ is $\ldots$ We similarly note that for any $A$ we have $I'_{i,A}(\bar K) = \bar K$ for all $i \ge 1$, so, by Corollary 3.5, the definable Bohr compactification $T_n(K)^{\mathrm{dBohr}}$ of $T_n(K)$ is $\ldots$ compactification $UT_n(\mathbb{Z})^{\mathrm{dBohr}}$ of $UT_n(\mathbb{Z})$ is $\ldots$ One easily gets a description of the (topological) connected component of $UT_n(\mathbb{Z})^{\mathrm{dBohr}}$. For a topological group $G$, we will denote its topological connected component as $G^{t}$ in order to avoid confusion with its model-theoretic components.

Corollary 3.18. $\big(UT_n(\mathbb{Z})^{\mathrm{dBohr}}\big)^{t} \cong \Big(\big((\mathbb{Z},+)^{\mathrm{dBohr}}\big)^{t}\Big)^{n-1}$.

Moving to $T_n(\mathbb{Z})$, observe that $\mathbb{Z}^{*} = \bar{\mathbb{Z}}^{*} = \{1,-1\}$, and hence $I'_i = I_i$ for all $i$. Then, by Proposition 3.4 or Corollary 3.5, the definable Bohr compactification $T_n(\mathbb{Z})^{\mathrm{dBohr}}$ of $T_n(\mathbb{Z})$ is $\ldots$

Hence, we have $x \in \sum_{j\in J} K b_j = K \sum_{j\in J} D_J b_j \subseteq KD$, where the equality $\sum_{j\in J} K b_j = K \sum_{j\in J} D_J b_j$ is provided by the second claim. Thus, we have shown that $R \subseteq KD \subseteq R^{*}D$, so $R = R^{*}D$.

Proposition 3.25. Let $R$ be a (unital) topological ring. Then the Bohr compactification of the topological group $UT_n(R)$ equals
$UT_n(\bar R)/UT_n(\bar R)^{00}_{\mathrm{top}} \cong \begin{pmatrix} 1 & B/I_1 & B/I_2 & \cdots & B/I_{n-2} & B/I_{n-1}\\ & 1 & B/I_1 & \cdots & B/I_{n-3} & B/I_{n-2}\\ & & \ddots & \ddots & & \vdots\\ & & & & 1 & B/I_1\\ & & & & & 1 \end{pmatrix}$

The group operation in the result below uses the identifications analogous to those discussed before Proposition 3.2:

Proposition 3.27. The Bohr compactification of the topological group $T_n(R)$ is $T_n(\bar R)/T_n(\bar R)^{00}_{\mathrm{top}} \cong \ldots$
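To make the matrix description above concrete, the following display spells out the $n = 3$ case. It is an illustration based on the reconstruction of Proposition 3.25 given here (entries on the $k$-th superdiagonal taken modulo $I_k$, with $B = (\bar R,+)$), not a formula quoted from the source.

```latex
% Illustrative n = 3 case of the matrix description (assumption: the k-th
% superdiagonal carries B/I_k), together with the topological-field special case.
\[
UT_3(\bar R)/UT_3(\bar R)^{00}_{\mathrm{top}} \;\cong\;
\begin{pmatrix}
1 & B/I_1 & B/I_2 \\
0 & 1     & B/I_1 \\
0 & 0     & 1
\end{pmatrix},
\qquad
UT_3(\bar K)/UT_3(\bar K)^{00}_{\mathrm{top}} \;\cong\; Q^{2}
\ \text{ for a topological field } K,
\]
\[
\text{since } I_i(\bar K)=\bar K \text{ for } i>1 \text{ kills every superdiagonal beyond the first.}
\]
```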
Analogies between hadron-in-jet and dihadron fragmentation

We describe the formal analogies in the description of the inclusive production in hard processes of hadron pairs (based on dihadron fragmentation functions) and of a single hadron inside a jet (based on hadron-in-jet fragmentation functions). Since several observables involving dihadron fragmentation functions have been proposed in the past, we are able to suggest new interesting observables involving hadron-in-jet fragmentation functions, in lepton-hadron deep-inelastic scattering and hadronic collisions.

I. INTRODUCTION

Investigation of the partonic structure of hadrons is based on the crucial method of factorization, which makes it possible to split the cross section of a given process into a perturbatively calculable hard cross section (describing the underlying elementary process at the partonic level) and one or more nonperturbative functions (describing the distribution of partons inside hadrons and/or their fragmentation into detected hadronic final states). Although factorization has been established for many hard processes in the collinear framework, where transverse momenta of all partons are integrated, this is not the case for transverse-momentum dependent partonic functions (TMDs). For certain processes involving two hadrons in the initial state and with observed hadronic final states, e.g., inclusive production of hadrons in hadronic collisions $A + B \to C + D + X$, TMD factorization can be explicitly broken because the strongly interacting particles are entangled by a complicated color flow [1,2]. Because of this, it is not possible to describe these processes in terms of the TMDs that appear in other processes, like the inclusive production of a hadron $C$ in deep-inelastic scattering (semi-inclusive DIS, or SIDIS, denoted as $\ell + A \to \ell + C + X$) [3,4], the inclusive production of two hadrons $C, D$ in electron-positron annihilations ($e^+ + e^- \to C + D + X$) [5], or the Drell-Yan process [6]. Even neglecting factorization-breaking contributions, the TMDs involved in hadron-hadron collisions would be different from the ones in the other processes, an effect that has been referred to as generalized universality [7-10]. The most familiar example where this problem occurs is the study of the Collins effect [11]. The so-called "Collins function" can be used as an analyzer of the transverse polarization of the fragmenting quark. It can appear in SIDIS in combination with the chiral-odd TMD parton distribution function (TMD PDF) $h_1$, called "transversity", and in the $e^+ + e^- \to C + D + X$ process [12-14]. However, because of TMD factorization breaking, it is not possible to rigorously study the Collins function in hadronic collisions. An alternative option to the Collins effect is represented by the inclusive production of two hadrons coming from the fragmentation of a single parton.
In this case, the analyzer of the transverse polarization of the fragmenting quark is represented by the transverse component of the relative momentum of the hadron pair [15]. The advantage is that this correlation survives the integration over parton transverse momenta and can be analyzed in the collinear framework. Hence, in the SIDIS process + A ↑ → + (C 1 C 2 ) + X the transversity h 1 can be extracted as a collinear PDF through the chiral-odd partner H 1 , the dihadron fragmentation function (DiFF) that describes the fragmentation of a transversely polarized quark into the hadron pair [16][17][18] and is also called Interference Fragmentation Function (IFF) [16,17]. As for the Collins function, the H 1 can be independently extracted from azimuthal asymmetries in the production of two opposite dihadron pairs in e + e − annihilations, in the collinear framework [19][20][21][22]. This last remark makes the crucial difference. First of all, it allows to cross-check the universality of both h 1 and H 1 in hadronic collisions of the type A + B ↑ → (C 1 C 2 ) + X [23]. Secondly, it makes it possible to extract the chiral-odd PDF h 1 from a global fit of SIDIS, e + e − and hadronic collision data in the same theoretically rigorous way as it is usually done for the other unpolarized f 1 and helicity g 1 PDFs [24]. Another intriguing option is represented by the inclusive production of a hadron inside a jet. In fact, for a collision process like A + B → (Jet C) + X the cross section can be factorized in a hybrid form [25]: it involves collinear PDFs in the initial collision, but the final state is represented by a new function, the jet TMDFF (jTMDFF) 1 , that depends on the jet kinematics. The jTMDFF can be matched onto the same TMD FF of hadron C which appears in SIDIS and e + e − cross sections in the TMD framework [26]. It is then possible to access TMD FFs even for that class of processes where factorization in the TMD framework is not available. When one of the two colliding hadrons is transversely factorized, say B ↑ , the fragmentation of the transversely polarized quark is described by the polarized jTMDFF H ⊥ 1 that can be matched onto the Collins function H ⊥ 1 [27]: this "Collins-in-jet" effect makes it possible to check the universality of the Collins function and gives an alternative option to access the transversity h 1 in a rigorously factorized framework. The hybrid factorization for the hadron-in-jet inclusive production has been shown to work also for the SIDIS cross section [28][29][30]. Hence, it comes natural to consider the formal similarities between the inclusive production of dihadrons and of hadrons inside a jet, i.e. between DiFFs and jTMDFFs. In this way, we are able to transfer the knowledge acquired on one mechanism to the other one, and suggest new channels to investigate the partonic structure of hadrons. The paper is organized as follows. In Sec. II, we recall the formalism for describing the inclusive dihadron production in unpolarized proton-proton collisions. In Sec. III, we illustrate the formulae for the inclusive hadron-in-jet production in the same process. In Sec. IV, we generalize the formalism to the case of collisions with one transversely polarized hadron. In Sec. V, by comparing the cross sections for the two mechanisms we establish a general set of correspondence rules. In Sec. 
VI, we use these rules to extend the study of two processes: a) the inclusive production of two backto-back hadrons-in-jet in unpolarized proton-proton collisions, which could give access to jets initiated by linearly polarized gluons; b) the inclusive production of a hadron-in-jet in SIDIS up to subleading twist, which could give access to the chiral-odd PDF e(x) related to the nucleon scalar charge. Finally, in Sec. VII we conclude and give some future perspectives. Collinear kinematics for the fragmentation of a quark with 3-momentum p (and transverse polarization s ⊥ ) into a pair of hadrons with total 3-momentum P = P1 + P2 pointing along p, namely with P ⊥ = 0. Theẑ axis is along the same direction of p and P . II. FRAGMENTATION INTO A PAIR OF HADRONS We consider the fragmentation of an unpolarized quark q, with 4-momentum p and mass m, into two unpolarized hadrons inside the same jet, with 4-momenta P 1 , P 2 and masses M 1 , M 2 , respectively. We define the total 4momentum P = P 1 + P 2 and relative 4-momentum R = (P 1 − P 2 )/2 of the pair, where P 2 = M 2 hh is its invariant mass. We choose theẑ axis along the direction of the jet axis. At leading order in the strong coupling constant (LO), we identify the jet axis with the direction of the 3-momentum p. We choose the so-called "collinear" kinematics where the 3-momentum P is pointing along p. The transverse components of R with respect toẑ is denoted by R ⊥ , with Fig. 1). 2 The hadron pair is inclusively produced from a hard process in deep-inelastic regime. When specifying the kinematics on the light cone, the dominant components P − 1 , P − 2 , p − can be used to define the following invariants [18,21,32,33] which represent the fraction of the fragmenting quark momentum carried by the hadron pair and how this fraction is split inside the pair, respectively. The fragmentation is described starting from the quark-quark correlator [18,32] ∆(p, P, R) = X dx (2π) 4 e ip·x 0|ψ(x)|X, P, R X, P, R|ψ(0)|0 , where ψ is the quark field operator and the sum runs over all possible final states |X, P, R containing a hadron pair with total and relative momenta P, R, respectively. At leading twist, the fragmentation of an unpolarized quark into two unpolarized hadrons can be parametrized in terms of a single DiFF according to [32] where ∆(z hh , ζ, R 2 ⊥ ) = z hh 32 dp + dp ⊥ ∆(p, P, R) In fact, the full dependence of the correlator in Eq. (2) is reduced to the one in Eq. (4) by considering that [34] -in Eq.(4) we integrate over the light-cone suppressed variable p + and over p ⊥ with the condition p − = P − /z hh ; -our choice of frame and kinematics implies no dependence on P ⊥ ; -the following kinematical relations hold [18]: It is useful to recall also that [18] from which we deduce that in general DiFFs depend only on the relative angle between p ⊥ and R ⊥ . A. Cross section for dihadron production in proton-proton collisions If the hadron pair is inclusively produced from the collision of two unpolarized protons with momenta P A and P B , we can identify the reaction plane as the plane formed by P A and P . The azimuthal orientation around P of the plane formed by P 1 and P 2 with respect to the reaction plane is described by the azimuthal angle φ R (see Fig 2 and Ref. [36] for a formal definition). The transverse component of P with respect to P A is denoted by P T . Its modulus represents the hard scale of the process, namely we assume that |P T | M hh , M 1 , M 2 . 
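As a minimal numerical sketch of the pair variables introduced at the beginning of this section (total momentum P, relative momentum R, invariant mass M_hh, and the transverse component R_perp with respect to the z axis), one may write the following. The four-momenta and the mostly-minus metric convention are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def minkowski_sq(p):
    # (+,-,-,-) metric convention; p = (E, px, py, pz)
    return p[0]**2 - np.dot(p[1:], p[1:])

# Hypothetical hadron four-momenta in GeV, with the z axis along the pair total momentum
P1 = np.array([1.20,  0.05,  0.02, 1.18])
P2 = np.array([0.80, -0.05, -0.02, 0.78])

P = P1 + P2                         # total momentum of the pair
R = 0.5 * (P1 - P2)                 # relative momentum of the pair
M_hh = np.sqrt(minkowski_sq(P))     # invariant mass of the pair, P^2 = M_hh^2
R_perp = R[1:3]                     # transverse component of R with respect to the z axis

print(M_hh, R_perp)
```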
For simplicity, in the following the dependence of DiFFs on |P T | is understood. At leading order in 1/|P T |, the differential cross section for the process A + B → (C 1 C 2 ) + X reads (see App. A and Eq. (15) of Ref. [36]) where f a 1 and f b 1 are the usual parton distribution functions (PDFs) in the proton for partons a, b with fractional momenta x A , x B , respectively, and η is the pseudorapidity of the hadron pair with respect to P A : The elementary cross section dσ describes the scattering of partons a and b into partons c (with momentum P/z hhC ) and d, which is not detected. The partonic Mandelstam variablesŝ,t,û are related to the external ones bŷ The δ function in Eq. (7) expresses the momentum conservation in the partonic scattering, and it can be rewritten as [36]ŝ In Eq. (7), the sum runs upon all possible combinations of parton flavors. The elementary cross sections dσ ab→cd for the independent combinations are listed in the Appendix of Ref. [36]. III. HADRON-IN-JET FRAGMENTATION We now consider the distribution of a hadron with 4-momentum P h and mass M h inside a jet with radius r, initiated by a unpolarized quark q with 4-momentum p and mass m. Following Ref. [26], we denote by j ⊥ the transverse momentum of the hadron inside the jet (see Fig. 3). The latter is defined with respect to the standard jet axis (rather than using a recoil-free algorithm) because only in this case a direct connection to the TMD FF can be made [26]. As in the dihadron case, theẑ axis is chosen along the standard jet axis and at LO it is identified with the direction of p. When the jet is produced in a hard process in deep-inelastic kinematical regime, the large light-cone components of quark, jet, and hadron vectors are denoted by p − , J − , and P − h , respectively. They are used to define the following invariants which represent the fraction of the fragmenting quark momentum carried by the jet and the fraction of jet momentum carried by the hadron inside the jet, respectively. The J − is related to the transverse momentum of the reconstructed jet in the hard process, whose size is denoted as |P T | and represents the hard scale of the process itself. The fragmentation is described starting from the quark-quark correlator where, as before, ψ is the quark field operator and the sum runs over all possible final states |X, J, P h containing a hadron P h inside a jet J. At leading twist, the object describing the observed hadron inside the produced jet can be parametrized in terms of a jTMDFF according to [26] where N c is the number of quark colors, |P T |r is the typical momentum scale of the jet [26], and Depending on the relative size of |j ⊥ |, |P T |r and the QCD nonperturbative scale Λ QCD , the jTMDFF of Eq. (14) can be expressed in different factorized forms. Here, we are interested in the kinematical region Λ QCD |j ⊥ | |P T |r where collinear radiation within the jet and soft radiation of order |j ⊥ | are relevant, while harder radiation is allowed only outside the jet and it does not affect the distribution of the hadron transverse momentum |j ⊥ |. In this regime, a factorized form for D 1 is given in Ref. [26] in terms of a hard matching function (related to the hard out-of-jet radiation) and a convolution of a usual TMD FF and a soft function (accounting for the soft radiation inside the jet). 
It is obtained by initially evolving the TMD FF in the usual Collins-Soper-Sterman (CSS) scheme up to the jet scale |P T |r, then matching to the calculable hard function describing the out-of-jet radiation, and finally evolving to the hard scale by using the standard time-like DGLAP equations. All calculations in Ref. [26] are performed at NLO. At LO, the direction of the quark momentum p coincides with the standard jet axis and its transverse component is equal to the transverse momentum of the reconstructed jet in the hard process, |p T | ≈ |P T |. In this approximation, the jTMDFF D q 1 for the fragmentation of a quark q into a hadron inside the jet reduces to where D q 1 is the standard single-hadron TMD FF that can be isolated also in e + e − annihilations or in semi-inclusive deep-inelastic scattering. A. Cross section for hadron-in-jet fragmentation in proton-proton collisions We consider the same situation as in Sec. II A, namely the collision of two unpolarized protons with momenta P A and P B . The final state is now described by the inclusive production of a jet where a hadron is identified inside it with transverse momentum j ⊥ with respect to the standard jet axis. Following Ref. [27], the factorization theorem for the process A + B → (Jet C) + X can be written as wherez J is given as in Eq. (11) and H U ab→cd describes the elementary hard process a + b → c + d from which the parton c initiates the reconstructed jet. As detailed in App. B, the H U ab→cd of Ref. [27] can be reconnected to the dσ ab→cd of Eq. (7) by The cross section of Eq. (17) can then be cast in the form where we used Eq. (10) adapted to the case of hadron-in-jet fragmentation, i.e., by replacing z hhC with z JC for the fragmenting parton c and using the pseudorapidity of the jet with respect to P A . IV. FRAGMENTATION OF TRANSVERSELY POLARIZED QUARKS We extend our study to the case of a fragmenting quark with transverse polarization s ⊥ . We first consider the fragmentation into a pair of unpolarized hadrons (see Fig. 1). In the kinematic conditions described in Sec. II, the leading-twist correlator of Eq. (4) can be expanded as [32] where H 1 describes the probability density for a transversely polarized quark to fragment into a pair of unpolarized hadrons with total momentum collinear with the quark momentum. The H 1 can be extracted by the following projection where σ µν = i[γ µ , γ ν ]/2 and its spatial index i points in the direction of s ⊥ . Similarly, if the transversely polarized quark fragments into a hadron inside a jet in the kinematical conditions described in Sec. III (see Fig. 3), we can project out the "Collins-in-jet" function H ⊥ 1 from the correlator in Eq. (15) as where again the spatial index i of σ µν points in the direction of s ⊥ . In the following section, for the two fragmentation scenarios we analyze the contributions that arise in the cross section for proton-proton collisions when one of the two protons is transversely polarized. A. Transversely polarized proton-proton collisions For the process A + B ↑ → (C 1 C 2 ) + X depicted in Fig. 2, the polarized part of the cross section reads (see App. A and Eq. (16) of Ref. [36]) where S BT is the transverse polarization of the colliding proton with orientation φ S B with respect to the reaction plane, and h b 1 is the transversity distribution for the transversely polarized parton b with fractional momentum x B . 
The elementary cross sections d∆σ ab ↑ →c ↑ d describe the scattering of parton a and b with transfer of the transverse polarization of the latter to parton c while summing on the undetected fragments from parton d. All the possible independent flavor combinations are listed in the Appendix of Ref. [36]. The corresponding process A + B ↑ → (Jet C) + X is displayed in Fig. 4. A hadron with 3-momentum P h is inclusively produced inside a jet with standard axisĴ from the collisions of a proton with 3-momentum P A and a transversely polarized proton with 3-momentum P B and polarization S BT . The azimuthal angles φ S B and φ h describe the orientation of S BT and of the plane formed by P h andĴ , respectively, with respect to the reaction plane formed by P A andĴ . The polarized part of the cross section reads [27] where H Collins ab ↑ →c ↑ d is the cross section for the transfer of transverse polarization in the elementary hard process a + b ↑ → c ↑ + d, and H ⊥ c 1 is the polarized jTMDFF describing the hadron inside the jet produced by the transversely polarized fragmenting parton c. By extending the relation (18) to the polarized case involving H Collins ab ↑ →c ↑ d of Ref. [27] and d∆σ ab ↑ →c ↑ d of Ref. [36] (and exchangingt ↔û to account for the fact that the transversely polarized parton in Ref. [27] is a ↑ while in Ref. [36] is b ↑ ), we finally get CORRESPONDENCE BETWEEN DIHADRON AND HADRON-IN-JET FRAGMENTATION We are now in the position to compare the cross sections for the A+B (↑) → (C 1 C 2 )+X and A+B (↑) → (Jet C)+X processes. We deduce that: -from Eq. (12), the combination z J z h = P − h /p − describes the fraction of fragmenting quark momentum carried by the hadron inside the jet; hence, it can be mapped onto z hh of Eq. (1); -by comparing the same two equations, we can map z h onto ζz hh = z 1 − z 2 , the relative fractional momentum carried by the hadron pair; thus, for both hadronic final states (dihadron and hadron inside jet) the light-cone kinematics can be described by a pair of invariants and we can establish a correspondence between these pairs, namely (z hh , ζ) ↔ (z J , z h ); -along the same line, we can map the transverse momentum j ⊥ of the hadron inside the jet with respect to the standard jet axis onto the transverse component R ⊥ of the hadron pair relative momentum with respect to the direction of the pair total momentum, which in collinear kinematics coincides with the standard jet axis; obviously, the same mapping holds for their azimuthal angles, i.e., φ h ↔ φ R ; -by directly comparing Eqs. (7) with (19) and Eqs. (23) with (25), the jTMDFF can be mapped onto the corresponding DiFF according to 4π As a final remark, DiFFs have been extracted so far only through a LO analysis of inclusive dihadron production in e + e − annihilation [21], in combined e + e − and SIDIS processes [22], and in a global fit of e + e − , SIDIS and hadronhadron collision data [24]. The formalism of jTMDFFs is available instead up to NLO, but only in the unpolarized case [26,27]. However, NLO corrections separately affect the elementary hard cross section for a 2 → 2 partonic process [38] and the OPE-expanded expression of D 1 [26]. Hence, we can argue that the same structure of the leading-twist cross sections for inclusive dihadron production in Eqs. (7), (23) holds also at NLO. The correspondence expressed in Eqs. (26) and (27) represents the main result of this paper. 
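For quick reference, the mappings listed above can be collected in a single display (an editorial summary of the rules just stated, not a formula quoted from the source):

```latex
\[
(z_{hh},\ \zeta)\ \longleftrightarrow\ (z_J,\ z_h),\qquad
\vec R_\perp\ \longleftrightarrow\ \vec j_\perp,\qquad
\phi_R\ \longleftrightarrow\ \phi_h ,
\]
% with the dihadron fragmentation functions mapped onto the corresponding
% hadron-in-jet functions as in Eqs. (26)-(27) of the text.
```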
It has been derived by considering the case of proton-proton collisions but it can be extended to all hard processes where collinear factorization holds, like inclusive dihadron production in e + e − annihilations [19][20][21] and SIDIS. In the following, we outline some interesting applications involving proton-proton collisions and the SIDIS process. VI. OPPORTUNITIES WITH HADRON-IN-JET FRAGMENTATION In this section, we mention two possible applications of the above correspondence where results known for dihadron inclusive production can be formally translated into the cross section for hadron-in-jet fragmentation, opening up new channels for investigating the partonic structure of hadrons. inclusive production of two back-to-back hadron pairs with total momenta PC = PC1 + PC2 and PD = PD1 + PD2, and backto-back projections PCT and PDT on the transverse plane (φC = φD + π), respectively; the planes containing the momenta of each pair form the azimuthal angles φR C and φR D with the reaction plane containing PA and PC . Right panel: inclusive production of two back-to-back jets with axisĴC ,ĴD, and back-to-back projected momenta PCT and PDT on the transverse plane (φC = φD + π), respectively; in each jet a hadron is detected with 3-momentum P h C (P h D ) and transverse component j C⊥ (j D⊥ ) with respect to the jet axisĴC (ĴD); the planes containingĴC , P h C andĴD, P h D form the azimuthal angles φj C , φj D with the reaction plane containing PA andĴC . A. Inclusive production of two dihadrons and back-to-back hadron-in-jet's in proton-proton collisions For the process A + B → (C 1 C 2 ) C + (D 1 D 2 ) D + X depicted in the left panel of Fig. 5, after summing over the polarizations of initial hadrons the leading-twist cross section reads (see App. A and Eqs. (20)(21)(22) in Ref. [36]) where the momenta and the angles of the second hadron pair are defined in complete analogy with the first pair by replacing the labels c, C with d, D. The delta functions describe in the elementary process the conservation of energy and of momentum both along the longitudinal direction of theẑ axis (identified with P A , see Fig. 5) and in the transverse plane. In Eq. (28), the elementary cross sections d∆σ ab→c ↑ d ↑ involve only quarks for the final partons c, d, while d∆σ ab→g ↑ g ↑ contain only final gluons linearly polarized in the transverse plane. Hence, the H g 1 function describes the fragmentation of such linearly polarized gluons into pairs of unpolarized hadrons. 5 For both cases of final polarized quarks and gluons, all nonvanishing combinations are listed in the Appendix of Ref. [36]. Therefore, by disentangling specific asymmetries in the azimuthal orientation of the planes containing the momenta of the two dihadrons one can access the DiFFs H q 1 and H g 1 for the fragmentation of transversely polarized quarks and linearly polarized gluons, respectively, without considering any polarization in the initial hadronic collision [36]. Because of the correspondence described in Sec. V, it is interesting to explore the same possibility for the process A + B → (Jet C) + (Jet D) + X, as depicted in the right panel of Fig. 5. Using the same rules of correspondence as in the single hadron-pair production, we get where φ j C and φ j D are the azimuthal angles with respect to the reaction plane of the transverse momenta j C⊥ , j D⊥ of hadrons inside the jets with radius r C and r D , respectively. 
All other variables are defined in complete analogy with the single hadron-in-jet case, identifying each corresponding jet by using the labels c, C and d, D. From Eq. (29), we deduce that the cos(φ j C − φ j D ) asymmetry in the azimuthal distribution of the two hadrons inside the two back-to-back jets is generated by two "Collins-in-jet" effects, one per each jet. This asymmetry allows to isolate back-to-back jets produced by the fragmentation of back-to-back transversely polarized quarks, giving an alternative option to access the Collins function of each hadron inside the corresponding jet. Similarly and even more interestingly, extracting the cos(2φ j C − 2φ j D ) Fourier component in the azimuthal distribution allows to isolate back-to-back jets produced by the fragmentation of back-to-back linearly polarized gluons, giving access to a new class of fragmentation functions: the H ⊥ g 1 describe the inclusive production of hadrons inside jets by the fragmentation of linearly polarized gluons, where the hadron transverse momentum j ⊥ with respect to the jet axis becomes the spin analyzer of the gluon linear polarization in the transverse plane. . The final state can be either a pair of unpolarized hadrons with momenta P1 and P2 with azimuthal orientation φR and total momentum P = P1 + P2 (left panel), or a hadron with momentum P h and azimuthal orientation φ h inside a jet with standard axisĴ (right panel). In both cases, the parallel kinematics is considered where P andĴ are along the momentum transfer q = − , which identifies theẑ axis. B. Semi-inclusive deep-inelastic scattering up to subleading twist The left panel of Fig. 6 describes the kinematics for the + A → + (C 1 C 2 ) + X process, namely for the inclusive production of a hadron pair with momenta P 1 and P 2 by the scattering of a lepton with 4-momentum off a hadronic target with momentum P A , mass M and polarization S, leading to a final lepton with 4-momentum . The azimuthal orientations of the transverse polarization S T and of the hadron pair plane (represented by R ⊥ = (P 1⊥ − P 2⊥ )/2) are given by φ S and φ R , respectively, and they are all measured with respect to the scattering plane identified by and . The hard scale of the process is given by The collinear kinematics is realized by integrating over the transverse components of the hadron-pair total 3-momentum P = P 1 + P 2 or, equivalently, by taking P collinear withq, which identifies theẑ axis. The expressions of the LO cross section for various combinations of polarization of lepton probe and proton target are listed in Eqs. (44)(45)(46)(47)(48)(49) of Ref. [32] up to subleading twist. A slightly different notation was employed in Ref. [39], where the cross section was described in terms of structure functions, based on the analgous expression in Ref. [40]. Here, we limit ourselves to reproducing the terms that are more interesting for our discussion: where α is the fine structure constant, x = Q 2 /2P A · q ≈ k + /P + A is the fraction of target momentum carried by a parton with 4-momentum k and fractional charge e q , y = P A · q/P A · ≈ (E − E )/E is the fraction of beam energy transferred to the hadronic system, λ is the beam helicity, S L is the target longitudinal polarization, ε is the ratio of longitudinal and transverse photon flux, and The structure functions of interest can be written in terms of PDFs and DiFFs in the following way Equation (33) represents the standard way to address in a collinear framework the chiral-odd transversity PDF h 1 (x). 
The integral of h 1 (x) is the tensor charge, which might represent a possible portal to new physics beyond the Standard Model [41] since it is relevant for explorations of new possible CP-violating couplings [42] or effects induced by tensor operators not included in the Standard Model Lagrangian [43]. The tensor charge can be computed in lattice QCD with very high precision [44]. Future facilities will have a large impact on the current uncertainty on the tensor charge extracted from phenomenological studies [45][46][47]. Equation (34) is particularly interesting because it contains the contribution of the twist-3 chiral-odd PDF e(x), which contains crucial information on quark-gluon-quark correlations (see, e.g., [48]). The integral of e(x) is the scalar charge of the nucleon and is related to the the so-called σ term, which plays an important role in understanding the emergence of nucleon mass from chiral symmetry breaking [49] and its decomposition in terms of contributions from quarks and gluons [50][51][52][53]. The nucleon scalar charge can be important also for the search of physics beyond the Standard Model, since it probes scalar interactions and can be relevant for dark matter searches (see, e.g., Refs. [43,54]). The nucleon scalar charge and the σ term have been computed in lattice QCD (for a review, see Ref. [55] and references therein). The e(x) has been studied in several non-perturbative models of hadron structure [56][57][58][59][60][61][62][63][64]. It can be extracted in the TMD framework by considering the beam spin asymmetry that isolates the dσ LU cross section for inclusive single-hadron production, where the chiral-odd partner is represented by the Collins function H ⊥ 1 [65][66][67]. However, this observable contains other three contributions [68][69][70]. Moreover, each term is represented by an intricate convolution upon transverse momenta. Therefore, it may be more convenient to work in the collinear framework and isolate the e(x) through the simple product with its chiral-odd partner represented by the DiFF H 1 , as shown in Eq. (34). In order to reach this goal, we need to deal with the second contribution in Eq. (34), which depends on the unknown twist-3 DiFFG . Calculations in the spectator model show thatG turns out to be small, and possibly with opposite sign to H 1 [71]. The extraction of e(x) from CLAS and CLAS12 data projected onto the x dependence was performed assuming that the M hh dependence ofG is the same of H 1 but rescaled by a constant factor [72]. A possible strategy to overcome this problem could be to study the ratio dσ LU /dσ U L [33]. In fact, if the term proportional toG would be negligible, using the flavor symmetries of H 1 [21-24, 35, 73-76] the ratio should not exhibit any dependence on (z hh , M hh ), since the latter should cancel out between numerator and denominator. On the contrary, any observed dependence would hint at a non-negligible contribution from twist-3 DiFF, making the extraction of e(x) more challenging. In this perspective, collecting more information on these observables from other channels should give more insight. Hence, it might be useful to consider the + A → + (Jet C) + X process depicted in the right panel of Fig. 6, namely the SIDIS on a (polarized) hadron target where a hadron with momentum P h is inclusively produced inside a jet with transverse momentum j ⊥ with respect to the jet axisĴ taken parallel to theẑ =q axis. The cross section of this process has the same structure of Eq. (30). 
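Since the discussion above is phrased in terms of the DIS invariants x and y defined earlier in this section, a small sketch of how they are obtained from the lepton and target momenta may help. The metric convention and the numerical four-momenta below are assumptions chosen only for illustration.

```python
import numpy as np

def dot4(a, b):
    # (+,-,-,-) metric convention
    return a[0]*b[0] - np.dot(a[1:], b[1:])

def dis_invariants(l_in, l_out, P_A):
    """Bjorken x and inelasticity y as defined in the text:
    x = Q^2 / (2 P_A.q), y = (P_A.q) / (P_A.l)."""
    q = l_in - l_out
    Q2 = -dot4(q, q)
    x = Q2 / (2.0 * dot4(P_A, q))
    y = dot4(P_A, q) / dot4(P_A, l_in)
    return x, y, Q2

# Illustrative (nearly massless lepton, fixed-target-like) four-momenta in GeV
l_in  = np.array([27.6, 0.0, 0.0, 27.6])
l_out = np.array([21.0, 2.0, 0.0, 20.9])
P_A   = np.array([0.938, 0.0, 0.0, 0.0])
print(dis_invariants(l_in, l_out, P_A))
```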
By using the correspondence of Sec. V, the structure functions in Eqs. (33) where Qr is the typical scale of the jet with radius r. Equation (36) indicates a new way to address the collinear transversity PDF h 1 (x), namely through the "Collins-in-jet" effect in the SIDIS process. Equations (37), (38) show that in the same framework of the SIDIS "Collins-in-jet" effect one can address also the twist-3 collinear PDFs e(x) and h L (x), provided that the remaining contribution given byG ⊥ is small. TheG ⊥ is a new twist-3 jTMDFF that corresponds to the above twist-3 DiFFG . As in the dihadron case, by using the current knowledge on the "Collinsin-jet" effect one could predict the dependence of the ratio dσ LU /dσ U L on the kinematic variables of the final state. The analysis of any possible deviation of data from these predictions would indicate if the contribution of the twist-3 G ⊥ would be or would not be negligible. VII. CONCLUSIONS Transverse-Momentum-Dependent (TMD) factorization gives the possibility of measuring many interesting signals and access many intriguing features of the structure of hadrons. However, one of its shortcomings is that it cannot be applied to hadronic collisions with observed hadronic final states, like, e.g., the process A + B → C + D + X. Two alternative mechanisms have been proposed to recover part of the versatility of TMDs while preserving the applicability to hadronic processes: the inclusive production of dihadrons, or of a hadron inside jet. The inclusive production of dihadrons, namely of two hadrons originating from the fragmentation of the same parton, can be usefully studied in the collinear framework, where transverse momenta of all partons are integrated; it involves universal collinear Dihadron Fragmentation Functions (DiFFs). The inclusive production of a hadron inside jet, namely the inclusive production of a jet with a detected substructure, can be studied in a hybrid factorization approach involving collinear partonic functions in the initial state and TMD hadron-in-jet Fragmentation Functions (jTMDFFs) in the final state. In this paper, we have explored similarities between the two formalisms of dihadron and hadron-in-jet production, and we have established a set of correspondence rules between DiFFs and jTMDFFs. We have used this correspondence to transfer to the jTMDFF case some interesting results obtained with DiFFs, in particular for inclusive production of two back-to-back dihadrons in unpolarized proton-proton collisions, and for inclusive production of a dihadron in semi-inclusive deep-inelastic scattering. In unpolarized proton-proton collisions with the inclusive production of two back-to-back jets where one hadron is detected inside each jet, the cross section contains specific modulations that can distinguish if the hadron is detected inside a jet generated by a quark or by a gluon. Moreover, one of the two modulations is sensitive to a polarized jTMDFF directly linked to the TMD fragmentation function of a linearly polarized gluon. In semi-inclusive deep-inelastic scattering, the cross section for unpolarized lepton probe and transversely polarized proton target offers a new channel to extract the chiral-odd transversity collinear parton distribution function h 1 (x), which is connected to the puzzling proton tensor charge. 
The cross section for longitudinally polarized lepton and unpolarized proton contains a term proportional to the chiral-odd subleading-twist collinear parton distribution function e(x), which is connected to the well known nucleon σ term and to the physics of QCD chiral symmetry breaking. The above examples illustrate how useful the formal comparison between DiFFs and jTMDFFs can be. The inclusive production of dihadrons has already been measured in hadronic colliders [77][78][79], e + e − colliders [80] and fixed-target experiments [81][82][83][84]. The inclusive production of hadrons-in-jet has been measured only in hadronic colliders [85,86]. Both channels will be (abundantly) available at the future Electron-Ion Collider [46,47]. Therefore, we think it is worth to explore the above mentioned possibilities and to push further the analysis of the consequences of the correspondence rules set in this paper. Appendix A: Cross section for inclusive dihadron production In Ref. [36], Eq. (15) shows the unpolarized cross section for the process A + B → (C 1 C 2 ) + X. After integrating over the azimuthal orientation φ S B of the polarization of hadron B, it reads where η, |P T |, M 2 hh and φ R are defined in Sec. II A, and θ C is the polar angle between P and the direction of the back-to-back emission of the two hadrons in their center-of-mass (c.m.) frame (see Fig. 3 of Ref. [18]). It turns out that ζ = a + b cos θ C , with a, b functions of only the invariant mass M hh [18]. Therefore, the Jacobian of the transformation is dζ = 2|R|/M hh d cos θ C with [36] Using the kinematic relations in Eq. (5) and the obvious definition R ⊥ = (|R ⊥ | cos φ R , |R ⊥ | sin φ R ), we can compute the Jacobian of the transformation dM 2 hh dφ R = dR ⊥ 8/(1 − ζ 2 ). The cross section in Eq. (A1) can be conveniently rewritten as where takes into account the above Jacobians. In a similar way, Eq. (16) of Ref. [36] describes the polarized cross section for the process A + B ↑ → (C 1 C 2 ) + X: where S BT is the transverse polarization of the colliding proton with orientation φ S B with respect to the reaction plane, and h b 1 is the transversity distribution for the transversely polarized parton b with fractional momentum x B . The elementary cross sections d∆σ ab ↑ →c ↑ d describe the annihilation of parton a and b with transfer of the transverse polarization of the latter to parton c while summing on the undetected fragments from parton d. All the possible independent flavor combinations are listed in the Appendix of Ref. [36]. By applying the same transformation of variables from d cos θ C dM 2 hh dφ R to dζdR ⊥ , we get We generalize the above formulae to the case of the inclusive production of two dihadrons. In Ref. [36], from Eqs. (20)(21)(22) the unpolarized cross section for the process A + B → (C 1 C 2 ) C + (D 1 D 2 ) D + X reads (after integrating on the polarizations of initial hadrons) where the momenta and the angles of the second hadron pair are defined in complete analogy with the first pair by replacing the labels c, C with d, D. The additional delta functions are due to momentum conservation in the elementary ab → cd process both in the longitudinal direction of theẑ axis, identified with P A , and in the transverse plane. In collinear kinematics, the conservation in the transverse plane is trivially P CT /z hhC = −P DT /z hhD . 
This implies that the above cross section is integrated in the azimuthal angles of P CT and P DT with the condition φ C = φ D + π and that the moduli are constrained by [8] δ The conservation along theẑ axis in the c.m. frame of the annihilation reads x A P Az −x B P Bz = P Cz /z hhC +P Dz /z hhD . Using the previous delta function, after some manipulation it can be rewritten as Finally, the third delta function is the analogue of Eq. (10). Because of Eq. (A10), it can be rewritten aŝ sδ(ŝ +t +û) = z hhC δ(z hhC −z hhC ) ,z hhC = |P CT | √ s In Eq. (A8), the elementary cross sections d∆σ ab→c ↑ d ↑ involve only quarks for the final partons c, d, while d∆σ ab→g ↑ g ↑ contain only final gluons linearly polarized in the transverse plane. Hence, the H g 1 function describes the fragmentation of such linearly polarized gluons into pairs of unpolarized hadrons. For both cases of final polarized quarks and gluons, all nonvanishing combinations are listed in the Appendix of Ref. [36]. By introducing the same transformation of variables used for the inclusive production of a single hadron pair, the cross section of Eq. (A8) can be rewritten as
Cytomegalovirus Retinitis in a Patient Taking Upadacitinib: A Case Report

Upadacitinib is a relatively new drug used to treat autoimmune diseases. However, patients treated with upadacitinib may develop infections. We report a case of cytomegalovirus (CMV) retinitis that developed during upadacitinib administration. A 79-year-old woman presented with progressively decreasing vision in both eyes. Her decimal best-corrected visual acuity (BCVA) was 0.2 in the right and 0.01 in the left eye. The patient had been taking upadacitinib for one year. Fundus examination revealed vitreous opacities and extensive white retinal lesions with hemorrhage in both eyes. CMV was detected in the anterior aqueous humor, vitreous humor, and blood samples. We diagnosed her with panuveitis and CMV retinitis, performed a vitrectomy in both eyes, and administered intravenous ganciclovir and steroids. After treatment, her BCVA improved to 0.6 in the right and 0.1 in the left eye. Ophthalmologists and physicians should be aware of CMV infections in patients being treated with upadacitinib.

Introduction

Upadacitinib is used for the treatment of autoimmune diseases such as rheumatoid arthritis [1]. Patients taking this drug sometimes develop side effects, including infection [1,2]; however, ocular infections are rare. Human cytomegalovirus (CMV), also known as human herpesvirus 5, is a common infection [3]. CMV infection is usually acquired during childhood and remains asymptomatic throughout life in healthy people; more than 60% of adults have CMV-specific IgG antibodies [3]. CMV can reactivate in immunosuppressed patients, resulting in severe organ damage: it causes pneumonia, gastrointestinal disease, hepatitis, and retinitis [3,4]. CMV retinitis, characterized by retinal white lesions with hemorrhage, may result in permanent vision loss [4]. Early diagnosis of CMV retinitis is therefore important for patients' quality of life. We describe the case of a patient who developed CMV retinitis while taking upadacitinib for rheumatoid arthritis.
Case Presentation A 79-year-old woman presented with progressively decreasing vision in both eyes.She noticed vision loss and visited her local eye clinic four months ago.The patient was diagnosed with idiopathic uveitis and treated with betamethasone eye drops and subtenon injections of triamcinolone acetonide.However, her vision worsened, and she was referred to our hospital.Her medical history included rheumatoid arthritis, Sjögren syndrome, interstitial pneumonia, asthma, and type 2 diabetes mellitus on insulin.She was diagnosed with rheumatoid arthritis 22 years ago, treated with a combination of oral prednisone and methotrexate, and received intravenous infliximab every eight weeks.She was diagnosed with mild interstitial pneumonia 11 years ago and underwent follow-up chest computed tomography scans and blood examinations.Moreover, she was diagnosed with Sjögren syndrome eight years ago.Her dry eye and dry mouth symptoms were mild.She was infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) two years ago and was also diagnosed with post-coronavirus interstitial pneumonia.The use of methotrexate and infliximab was discontinued to prevent exacerbation of interstitial pneumonia.However, she complained of swelling and pain at the metacarpophalangeal joints and proximal interphalangeal joints in both hands.Therefore, oral upadacitinib (15 mg/day) was started one year ago.The swelling and pain in her fingers improved, and she had no signs of any new infection or respiratory symptoms.Moreover, she underwent cataract surgery in both eyes at the initial eye clinic one year ago.Her postoperative bestcorrected visual acuity (BCVA) was 1.0 in both eyes. At our initial examination, her BCVA was 0.2 in the right and 0.01 in the left eye.The intraocular pressure was 11 mmHg in the right and 12 mmHg in the left eye.Slit-lamp examination revealed corneal edema, Descemet folds, and several pigmented keratic precipitates (KPs) in the left eye (Figure 1).Therefore, the left corneal translucency was poor.The cell gradings of the anterior chamber were 3+ in both eyes [5].Fundus examination revealed vitreous opacities and extensive white retinal lesions with hemorrhage in both eyes (Figure 2). FIGURE 2: Fundus photographs at the initial visit. A. Right eye.White retinal lesions with hemorrhage were observed on the superior nasal and inferior temporal areas.The photograph was slightly obscured because of vitreous opacity. B. Left eye.White retinal lesions extending from the macula area to the temporal side were observed.The photograph was slightly obscured due to vitreous opacity. 
We collected anterior aqueous humor for polymerase chain reaction (PCR) examination.The CMV antigenemia assay method (C7-HRP) at the initial blood examination was positive (223 positive cells/50,000 leukocytes) and confirmed active CMV infection.Her body temperature was within normal range.The patient had no systemic symptoms other than finger stiffness.The patient was diagnosed with panuveitis and CMV retinitis.We hospitalized her, discontinued upadacitinib therapy, and started treatment with intravitreal ganciclovir (280 mg/day) and prednisone (60 mg/day).We measured her blood glucose four times a day and adjusted the units of insulin.Moxifloxacin, betamethasone, nepafenac, and ganciclovir eye drops were administered.We performed a vitrectomy combined with posterior hyaloid removal and silicone oil tamponade in the right eye on day two.We collected vitreous humor during the surgery.We also administered intravitreal injections of foscarnet in the right (day one) and left eye (days one and eight). Both PCR examinations of anterior aqueous and vitreous humors confirmed CMV.Inflammation gradually decreased, and the retinal lesion showed a tendency to regress.As C7-HRP levels improved (three positive cells/50,000 leukocytes) on day 14, we switched to oral valganciclovir (900 mg/day) on day 16.We gradually decreased the prednisone dose and changed it to oral administration.She was infected with SARS-CoV-2 on day 16 and had a slight fever and mild cough.However, her oxygen saturation (SpO2) was within normal range and her chest X-ray showed no changes.The patient was isolated until SARS-CoV-2 negative confirmation and was treated with oral molnupiravir for five days.After the left cornea became clear, we performed a vitrectomy combined with posterior hyaloid removal and silicone oil tamponade in the left eye on day 30.We reduced the amount of valganciclovir (450 mg) after two negative CMV confirmations on blood examinations (on days 24 and 30) and prednisone (20 mg/day).The patient was discharged on day 39 with an improved BCVA, 0.6 in the right, and 0.1 in the left eye (Figure 3). FIGURE 3: Fundus photographs at the discharge. A. Right eye.After silicone oil tamponade.The white retinal lesions were improved compared to the initial examination. B. Left eye.After silicone oil tamponade.The white retinal lesions were improved compared to the initial examination. We obtained written informed consent from the patient for the publication of this case report, which does not contain any personal identifying information. 
Discussion

We describe a case of CMV retinitis during oral treatment with upadacitinib. Upadacitinib is a Janus kinase (JAK) inhibitor [1,6]. JAKs are a group of intracellular enzymes involved in the signaling of inflammatory cytokines, such as interferons and interleukins [1]. JAK has four isoforms (JAK1, JAK2, JAK3, and tyrosine kinase 2), and upadacitinib is a selective JAK1 inhibitor [6]. Upadacitinib was approved in the United States in 2019 as a new drug to treat moderate-to-severe rheumatoid arthritis, an autoimmune disease characterized by inflammation and bone destruction [7,8]. Besides rheumatoid arthritis, upadacitinib is currently used to treat several other autoimmune diseases [8-10]. However, upadacitinib sometimes causes side effects, including neutropenia, hepatic disorder, venous thrombosis, and infections [1,8]. A phase 3 study of upadacitinib reported that 24 weeks of upadacitinib administration (15 mg/day) caused candidiasis (1.3%) and herpes zoster (1.3%) [2]. Few reports are available regarding infection with CMV, which belongs to the same herpesvirus family. A clinical trial documented a patient who experienced a CMV infection, and whether this individual developed retinitis remains uncertain [8]. Although a few reports of CMV retinitis were noted in patients taking tofacitinib (a selective JAK1, JAK2, and JAK3 inhibitor) [11,12], no similar case reports involving upadacitinib were found in the PubMed database. CMV retinitis is an opportunistic infection often observed in patients with acquired immunodeficiency syndrome [4]. CMV retinitis is also observed in patients undergoing hematopoietic stem cell transplantation and in those with blood disorders [4,13]. A recent study reported that CMV antigenemia was found in patients with autoimmune diseases, including systemic lupus erythematosus, Sjögren syndrome, and rheumatoid arthritis [14]. Prolonged immunosuppression during treatment of these autoimmune diseases may reactivate latent CMV infection, resulting in organ damage. CMV retinitis can also occur with topical administration of steroids [15,16]. Although our patient also received topical steroids, her initial blood examination revealed an active systemic CMV infection, implying that long-term oral upadacitinib administration might have contributed to the systemic infection.

Anti-CMV drugs (intravenous ganciclovir or oral valganciclovir), intravitreal injections of anti-CMV drugs, and vitrectomies are usually performed to treat CMV retinitis [4]. In some cases, steroids can also be administered [17]. In this case report, the patient had several systemic autoimmune diseases (such as severe rheumatoid arthritis and interstitial pneumonia). She also presented with panuveitis and severe inflammation in both eyes. Therefore, the patient received a combination of systemic steroids and anti-CMV drugs for treatment, resulting in the resolution of inflammation and improved visual acuity.

Conclusions

We report the development of CMV retinitis in a patient on upadacitinib. Ophthalmologists and physicians should exercise caution regarding CMV retinitis in patients being treated with upadacitinib. Their close collaboration is crucial for effective patient monitoring and management.

FIGURE 1: Photographs of the anterior segment of the eyes at the initial visit. A. Right eye. Fine keratic precipitates (KPs) were observed. The cornea remained transparent. B. Left eye. Corneal edema and Descemet's folds were present. Large pigmented KPs were observed.
Gurka vs Slaughter equations to estimate the fat percentage in children with cerebral palsy from all subtypes and levels of the Gross Motor Function Classification System Background Body composition assessment in children with cerebral palsy (CP) is a challenge, specially the fat percentage. There are different methods that can be used to estimate the fat percentage in this population, such as anthropometric equations, but there is still a need to determine which is the best and most accurate. The purpose of the study was to determine the method that best estimates the fat percentage in children from all CP subtypes and levels of the Gross Motor Function Classification System (GMFCS). Methods Analytical cross-sectional study in which 108 children with CP diagnosed by a pediatric neurologist were included with any type of dysfunction and from all levels of the GFMCS. Slaughter equation, Gurka equation and Bioelectrical impedance analysis (BIA) as reference method, were used. Groups were stratified by sex, CP subtypes, GMFCS level and Tanner stage. Median differences, Kruskal–Wallis, Mann–Whitney U test, Spearman's correlation coefficients and simple regressions were used, also multivariate models were performed. Results The Slaughter equation differed from the other methods in the total population and when it was compared by sex, CP subtypes, gross motor function and Tanner stage. The Gurka equation showed significant differences by sex and gross motor function. Gurka equation correlated positively and significantly with BIA to estimate the fat percentage in all the CP subtypes and levels of the GMFCS. Tricipital skinfold (TSF), arm fat area (AFA) and weight for age index (W/A) showed the highest variability with respect to fat percentage. Conclusion Gurka equation is more appropriate and accurate than Slaughter equation to estimate the fat percentage in children with CP from all subtypes and levels of the GMFCS. have protein-energy malnutrition [7], which can lead to changes in body composition; for example, decreased lean mass (LM), fat mass (FM), and fat percentage, increase in total body water (TBW), and changes in bone mineral density (BMD) [8][9][10]. There are different methods of measuring and estimating body composition in children with CP; however, there is no consensus that specifies which method is the best of them [11,12]. Fat percentage is the most clinically applicable measurement for assessing nutritional status [13]. In boys, a low fat percentage is considered to be ≤ 10%, normal is from 11 to 25% and high is > 25%, while in girls, ≤ 15% is considered low, normal is from 16 to 30% and high is > 30% [14]. Dual X-ray absorptiometry (DXA) is a reliable method for accurately estimating FM, LM, fat percentage and bone mass (BM) in children [15][16][17] and adults [18]; however, it is expensive, requires specialized equipment, and emits low levels of radiation [8,[19][20][21]. Doubly labeled water (DLW) has been used to assess fat mass in children with CP, and similar to DXA, it is a method that requires specialized equipment, is timeconsuming to perform and is commonly used for investigation purposes and in research centers [8,21]. Two methods that are less expensive and relatively easy to perform, and that can provide valid and reliable measurements for body composition assessment and fat percentage specifically are bioelectrical impedance analysis (BIA) and anthropometry (with two and four skinfolds) [9,19]. 
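For reference, the body-fat cut-offs quoted in the introduction above (≤ 10%, 11-25% and > 25% in boys; ≤ 15%, 16-30% and > 30% in girls) can be written as a simple classification rule. The short sketch below is only an illustration of those published ranges; the function name and example values are ours, not part of the study protocol.

```python
def classify_fat_percentage(fat_pct, sex):
    """Classify body fat percentage using the cut-offs quoted in the text:
    boys: <=10 low, 11-25 normal, >25 high; girls: <=15 low, 16-30 normal, >30 high."""
    low, high = (10, 25) if sex == "male" else (15, 30)
    if fat_pct <= low:
        return "low"
    if fat_pct <= high:
        return "normal"
    return "high"

print(classify_fat_percentage(12.0, "male"))    # normal
print(classify_fat_percentage(32.5, "female"))  # high
```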
BIA is a fast, noninvasive, portable method that is increasingly used in clinical settings and in studies in children [4,11,19,20]. From a clinical and practical point of view, anthropometry is the most widely used method for estimating body composition since it requires only a skinfold caliper, previously standardized personnel, and the cooperation of the patient [9,22]. However, there is concern that these methods may not be reliable in children with CP. On the one hand, the hydroelectrolytic status of these children can influence the results obtained by BIA [19]. On the other hand, the known alterations in body composition in children with CP, such as the increase in total body water and the decrease in LM, FM and BMD, indicate that the Slaughter equation underestimates the fat percentage [8]. Although this equation is commonly used in clinical and research settings to estimate body fat percentage in healthy populations, it has been analyzed in children with neurological damage [23]. Gurka et al. [8] published a correction factor that improves the validity of the Slaughter's equations, and some authors recommend the use of the correction factor for children with CP [21]. ESPGHAN guidelines highlight the importance of the routine use of skinfold thickness measurements with the calculation of fat percentages using this equation [12]. Therefore, the purpose of this study is to determine the anthropometric method that best estimates the fat percentage in children with CP from all subtypes and levels of the Gross Motor Function Classification System (GMFCS). Materials and methods In an analytical cross-sectional design study, 108 participants (53 girls and 55 boys) aged 24 months to 16 years 9 months (7 y 9 m ± 4 y 3 m) were included. The participants had CP with any subtypes, and their gross motor function scale level was diagnosed and classified by a pediatric neurologist who attended the pediatric outpatient clinic of the Hospital Civil de Guadalajara "Dr. Juan I. Menchaca". The participants were fed orally or by gastrostomy feeding tubes according to oral tolerance and severity of malnutrition. The sample size was obtained from a previous study on the nutritional status of children with CP and estimated with a confidence level of 95% (α = 0.05) [24]. Patients with diagnoses unrelated to CP (Down syndrome, autism, degenerative disorders), those receiving medications that could alter body composition (steroids, thyroxine, antiretroviral drugs) and/or those with CP of postnatal origin (traumatic injuries, accidents, tumors, other injuries) were excluded. Anthropometry All measurements were obtained by two previously standardized observers. Weight was obtained with a clean diaper and as little clothing as possible; and a SECA scale (Model 700 Hamburg Germany) with a precision of 50 g was used. The child was weighed in the arms of a family member or an observer, and then only the adult was weighed. The result was obtained by calculating the difference between the weight obtained with the family member and the child, minus the weight of the family member alone. Height (cm) was obtained using leg length (LL) as an alternative height measurement with the following equation: [(3.26 × LL) + 30.8]. The measurement of leg length was conducted using a tape measure (Seca, 206, Hamburg, Germany), according to the technique of Stevenson [25] (1995), and the final value was obtained by averaging the two measurements. 
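As a worked example of the alternative height estimate just described, the equation height = (3.26 × LL) + 30.8 can be applied directly to the averaged leg-length measurement. The sketch below is illustrative only (the function name and sample values are ours):

```python
def estimated_height_cm(leg_length_measurements_cm):
    """Estimate height from leg length (LL) with the equation used in the
    text: height (cm) = 3.26 * LL + 30.8, where LL is the mean of the
    repeated leg-length measurements."""
    ll = sum(leg_length_measurements_cm) / len(leg_length_measurements_cm)
    return 3.26 * ll + 30.8

# Example: two leg-length readings of 34.0 and 34.4 cm
print(round(estimated_height_cm([34.0, 34.4]), 1))  # ~142.3 cm
```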
Mid-upper arm circumference (MUAC) was measured with a flexible tape (Seca, 206, Hamburg, Germany) at the midpoint of arm length from acromion to olecranon in the left arm. The tricipital skinfold (TSF) was measured with a Lange caliper (Cambridge, Maryland) on the postero-medial part of the left arm at the same point as the MUAC. With the separation of the skin and adipose tissue and the placement of the forceps in the fold one centimeter above the marked point, three measurements were taken per observer and the average was obtained. The subscapular skinfold (SSF) was located at the lower angle of the left scapula, where a mark was made; the skin and adipose tissue were taken one centimeter below and diagonally to the mark; the tips of the caliper were placed and, after taking the measurement three times, the average was obtained.

Bioelectric impedance analysis (BIA)
Bioelectrical impedance was performed with Quadscan 4000 equipment (Body Stat Limited, England). After a previous fast of three hours, two pediatric electrodes were placed on the back of the hand (at the level of the wrist and metacarpus) and two on the back of the foot (metatarsal and ankle). Metallic objects that could interfere with the measurement were removed, and the procedure was performed at 50 Ohms. The measurement was obtained with the child in the dorsal supine position and as relaxed as possible. The estimation of fat percentage was obtained.

Ethical considerations
The study did not put the study subjects at risk, and it adhered to the guidelines of the Declaration of Helsinki and the principles of beneficence, nonmaleficence, justice, and autonomy of decision. The Ethics Committee of the Hospital Civil de Guadalajara "Dr. Juan I. Menchaca" approved the protocol under number 1344/14.

Statistical analysis
For descriptive statistics, medians and standard deviations were used for quantitative variables, and frequencies and percentages were used for qualitative variables. For analytical statistics, median differences, Kruskal-Wallis and Mann-Whitney U tests were performed. Spearman's rank correlation coefficient was used. Simple regressions were used to predict anthropometric variables in which a significant correlation was found with the different methods to estimate the fat percentage. Multivariate models were performed. Statistical analysis was performed using SPSS software version 21, and a p value < 0.05 was considered significant.

Results
Table 2 shows the general characteristics of the participants. The percentage of fat estimated with the Gurka equation and with BIA was similar, while this percentage was lower with the Slaughter equation. There was a significant difference when comparing the fat percentages of the total population with the three methods (p < 0.001). When comparing the fat percentage in all subjects between the Slaughter and Gurka equations and between the Slaughter equation and BIA, there was a significant difference (p < 0.001), while the fat percentage was similar between the Gurka equation and BIA. When comparing the fat percentage by sex, there was a significant difference with the Gurka equation (p < 0.001). Most of the subjects presented spastic CP (73%) and were grouped into levels 4 and 5 according to the GMFCS. Regarding pubertal development, 72% belonged to levels I and II on the Tanner scale (Table 3).
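Before turning to the method comparisons in Tables 4–8, a minimal sketch of the nonparametric workflow described under Statistical analysis may be useful. It is illustrative only (the arrays and variable names are invented) and uses SciPy rather than SPSS:

```python
import numpy as np
from scipy import stats

# Fat percentage estimated for the same children by the three methods
# (illustrative values only; the real data come from the study sample).
fat_slaughter = np.array([8.1, 9.4, 10.2, 12.0, 7.5])
fat_gurka     = np.array([14.3, 15.1, 16.8, 18.0, 13.9])
fat_bia       = np.array([14.0, 15.6, 16.1, 18.4, 13.2])

# Kruskal-Wallis across the three methods, then a pairwise Mann-Whitney U
h, p_kw = stats.kruskal(fat_slaughter, fat_gurka, fat_bia)
u, p_mw = stats.mannwhitneyu(fat_gurka, fat_bia, alternative="two-sided")

# Spearman correlation and a simple linear regression of Gurka vs. BIA
rho, p_rho = stats.spearmanr(fat_gurka, fat_bia)
slope, intercept, r, p_reg, se = stats.linregress(fat_bia, fat_gurka)

print(f"Kruskal-Wallis p={p_kw:.3f}, Mann-Whitney p={p_mw:.3f}")
print(f"Spearman rho={rho:.2f}, R^2={r**2:.2f}")
```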
Table 4 shows the fat percentage estimated with anthropometry (Slaughter equation and Gurka equation) [8,26] and BIA in the total population with spastic CP and with other types of dysfunction. Slaughter equation underestimated the fat percentage in both groups when it was compared to BIA, while Gurka equation was closer to BIA. The analysis of the fat percentage with each method and the GMFCS levels are presented in Table 5. There was a significant difference when comparing the fat percentages obtained by the three methods in levels 1-3 and levels 4-5. When comparing the percentage of fat Table 6 shows the correlation coefficients between Gurka vs. BIA and Slaughter vs. BIA. It is shown that the fat percentage estimated by Gurka correlated positively and moderately with BIA in all the CP subtypes and in all levels of the GMFCS. Slaugther equation correlated positively with BIA only in the spastic group and in the GMFCS levels 4-5. When comparing the fat percentage among the three groups by Tanner stages, both the Gurka equation and BIA indicate that the group with the lowest fat percentage is that of stages IV-V. When performing post hoc tests to compare the fat percentage between the Tanner stages for each method with the total population, there was a significant difference between stages I-II vs. IV-V with the Gurka equation (p = 0.038), between stage III vs. IV-V with the Gurka equation (p = 0.038), between stages I-II vs. III with BIA (p = 0.016) and between stages I-II vs. IV-V with BIA (p = 0.007). When comparing the fat percentage between boys and girls, it was significantly higher in girls in stages I-II (p = 0.001) and in stage III (p = 0.003) according to the Gurka equation (Table 7). Consistent with the criteria of Lohman et al. (1987) [28] with the Slaughter equation, girls and boys had a low fat percentage both in the total population and when separating the population by Tanner stages. While with the Gurka equation, the fat percentage was found to be adequate in boys and girls. With BIA, it was found to be high in boys in stages I and II and adequate in stages III, IV and V; in girls, it was adequate in all stages. Table 8 shows that 70% of the variability of the fat percentage estimated by the Gurka equation is explained by the TSF, 28% by the MUAC and 67% by the arm fat area. Multivariate models were performed with the same variables; those that entered the model were TSF and MUAC, and the coefficient of determination remained similar to that shown in the table. Discussion In the present study, it was shown that the estimation of body fat percentage differs among the Slaughter equation, the Gurka equation and BIA, but between Gurka and BIA there are no significant differences. The body composition of children with CP is altered, mainly due to lack of mobility [29]. It has been argued that skinfolds are not accurate in children with CP because when they are reduced in the extremities, they do not necessarily reflect low fat stores since fat in this population tends to accumulate centrally [12,15]. Despite this, ESPGHAN [30] recommends that fat estimation from skinfolds should be performed routinely as a component of nutritional assessment in children with CP. [13], wide limits of agreement (mean difference of -9.6% to -7.1%) were found with the Slaughter equation. It seems that the Gurka equation is more sensitive and accurate for estimating the percentage of fat in boys and girls, since it was the only method that differed between sexes. 
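Because the two anthropometric equations are central to this comparison, a structural sketch may help. The Slaughter coefficients below are the commonly cited values for a triceps-plus-subscapular sum ≤ 35 mm and should be verified against the original publication; the Gurka adjustment is shown only schematically, with the +5.1 term for GMFCS levels IV–V taken from the discussion that follows and all other terms left as placeholders to be filled in from Gurka et al. [8]:

```python
def slaughter_fat_pct(tsf_mm: float, ssf_mm: float, sex: str) -> float:
    """Slaughter-type estimate from triceps (TSF) + subscapular (SSF)
    skinfolds, valid for skinfold sums <= 35 mm. Coefficients are the
    commonly cited ones (intercepts vary with maturation and race in the
    original paper); check them against Slaughter et al. before use."""
    s = tsf_mm + ssf_mm
    if sex == "male":
        return 1.21 * s - 0.008 * s**2 - 1.7
    return 1.33 * s - 0.013 * s**2 - 2.5

def gurka_fat_pct(tsf_mm, ssf_mm, sex, gmfcs_level, extra_terms=0.0):
    """Schematic Gurka-style estimate: the Slaughter value plus a
    correction. Only the +5.1 adjustment for GMFCS IV-V is taken from this
    paper's discussion; the sex/race/Tanner terms must be supplied via
    `extra_terms` using the published coefficients (placeholders here)."""
    correction = 5.1 if gmfcs_level >= 4 else 0.0
    return slaughter_fat_pct(tsf_mm, ssf_mm, sex) + correction + extra_terms

# Example: boy, TSF 9 mm, SSF 7 mm, GMFCS level 5
print(round(gurka_fat_pct(9.0, 7.0, "male", 5), 1))
```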
The estimated fat percentage was higher in the spastic group than in the group with other CP subtypes (ataxic, dyskinetic, hypotonic and mixed). It is important to mention that the Slaughter equation underestimated these findings both in the spastic group and in the group with other CP subtypes with respect to the other two methods. In several studies, the Gurka equation has been shown to adequately estimate the fat percentage in the population with CP [2]. This equation is derived from the Slaughter equation and takes into account a correction factor based on gender, race, pubertal status, and gross motor function [8]. In two studies [2,28], the fat percentage estimated by this equation was found to have an excellent correlation with that estimated by DXA (r = 0.883 and CCC = 0.86, respectively). However, a study by Rieken et al. (2010) [22] showed that the Gurka equation is not accurate in estimating the fat percentage in children with CP. This might be because, their population consisted only of children with gross motor function levels 4 and 5, and in both our study and that of Finbranten et al. (2015) [2] and Oeffinger (2013) [31], the populations were more heterogeneous. BIA is an appropriate method to estimate the fat percentage in children with CP [30] and has been used in some studies [19,31,33]. However, BIA has its own limitations, such as the state of hydration, which is very heterogeneous, varies with age and sex and can give a false result depending on the state of dehydration, which is common in children with CP [13]; in addition, it has a very good correlation with DXA, with differences of less than 2% [31]. The percentage of fat in the most affected group (gross motor function levels 4 and 5) was higher with BIA and the Gurka equation than with the Slaughter equation, and it was higher in this group than in the less affected group (levels 1-3). This finding is very similar to that reported by Finbranten (2015) [2], where an altered body composition was reported according to the level of gross motor function. Likewise, the results are similar to those reported by Whitney et al. (2019) [34], where a higher fat percentage, fat mass and fat mass index was observed in the nonambulatory population than in the ambulatory population with CP. These findings are also consistent with the notion that stunted children tend to gain fat and become overweight or obese later in life [35]. In our study, this altered body composition was demonstrated through the Gurka equation, which was the only method that showed significant differences between levels 1-3 vs. 4-5. For this reason, the Gurka equation is a more reliable method to estimate the fat percentage in the most affected levels of the CP severity [34]. Slaughter equation correlated positively and moderately with BIA in the spastic group and in the more affected children; it showed a stronger correlation than Gurka equation in the same groups. This finding could be related to the fact that Gurka equation includes an additional correction for the most affected levels (+ 5.1), and 87.9% of our sample belonged to levels 4-5. Besides, Slaughter equation only includes the tricipital and subscapular skinfolds which have a good correlation with the fat percentage. This anthropometric equation should be used carefully to estimate the fat percentage only in the more affected groups because it underestimates the fat percentage even more in the most affected levels [8]. In the study by Rieken et al. 
(2010) [9], the Gurka equation was shown to be more accurate in estimating the fat percentage in the less affected levels of gross motor function (1-2) and less accurate in the most affected (3)(4)(5), perhaps because the size of the population in that study was very small [13]. To our knowledge, this is the first study to analyze body fat percentage based on biological maturity, that included children from all CP subtypes and levels of the GMFCS. It was shown that there was a significant difference in the fat percentage between girls and boys only with the Gurka equation in stages I, II and III. In stages IV and V, there was no significant difference between the sexes, perhaps because the sample size was smaller and could be explained by a type II error. Similar to the results shown in Table 2, we can infer that the Gurka equation is more sensitive than the other two methods for estimating the fat percentage between sexes. It was shown that TSF, AFA, the W/A ratio, MUAC and BMI are the best predictors of fat percentage in children with CP and that they explain most of the variance in fat percentage estimated by the Gurka equation. The variance of the fat percentage estimated by the Gurka equation that was explained by the BMI was exactly the same as that found by Kuperminc et al. (2010) [21] and that regarding the MUAC it was similar; also, the correlation coefficients of the TSF and the AFA were higher than those of the BMI and MUAC. One of the strengths of our study was the sample size, which included subjects with both spastic CP and other CP subtypes. In addition, we included subjects of all levels of gross motor function. One of the limitations was that we did not compare the fat percentage with DXA as gold standard because it was not available, for this reason we estimated the fat percentage by BIA. Another limitation was that the sample size of the stages IV and V of the Tanner classification was small. Another limitation was that the sample size of the group IV-V of the GMFCS was bigger than the group I-III, and finally we had a small sample size of some CP subtypes such as ataxic, dyskinetic, hypotonic and mixed. Conclusions The fat percentage estimated by Gurka equation is similar to that estimated by BIA and it is more appropriate and accurate than Slaughter equation for estimating the fat percentage in children with CP from all subtypes and levels of the GMFCS. The fat percentage can be estimated using the Gurka equation. Longitudinal studies with a larger sample size are required to compare the fat percentage among different age groups.
Physical structure of the protoplanetary nebula CRL618. I. Optical long-slit spectroscopy and imaging In this paper (paper I) we present optical long-slit spectroscopy and imaging of the protoplanetary nebula CRL618. The optical lobes of CRL618 consist of shock-excited gas, which emits many recombination and forbidden lines, and dust, which scatters light from the innermost regions. From the analysis of the scattered Halpha emission, we derive a nebular inclination of i=24+-6 deg. The spectrum of the innermost part of the east lobe (visible as a bright, compact nebulosity close to the star in the Halpha HST image) is remarkably different from that of the shocked lobes but similar to that of the inner HII region, suggesting that this region represents the outermost parts of the latter. We find a non-linear radial variation of the gas velocity along the lobes. The largest projected LSR velocities (~80 km/s) are measured at the tips of the lobes, where the direct images show the presence of compact bow-shaped structures. The velocity of the shocks in CRL618 is in the range ~75-200 km/s, as derived from diagnostic line ratios and line profiles. We report a brightening (weakening) of [OIII]5007AA ([OI]6300AA) over the last ~10 years that may indicate a recent increase in the speed of the exciting shocks. From the analysis of the spatial variation of the nebular extinction, we find a large density contrast between the material inside the lobes and beyond them: the optical lobes seem to be `cavities' excavated in the AGB envelope by interaction with a more tenuous post-AGB wind. The electron density, with a mean value n_e~5E3-1E4 cm-3, shows significant fluctuations but no systematic decrease along the lobes, in agreement with most line emission arising in a thin shell of shocked material (the lobe walls) rather than in the post-AGB wind filling the interior of the lobes. (...) introduction The physical mechanisms responsible for the onset of bipolarity and polar acceleration in planetary nebulae (PNe) are already active in the first stages of the evolution beyond the Asymptotic Giant Branch (AGB), i.e. in proto-Planetary Nebulae (PPNe, also called post-AGB objects). Therefore, PPNe and young PNe hold the key for understanding the complex and fast (∼ 10 3 yr) nebular evolution from the AGB towards the PN phase. Such evolution is believed (by an increasing number of astronomers) to be governed by the interaction between fast, collimated winds or jets, ejected in the late-AGB or early post-AGB phase, and the spherical and slowly expanding circumstellar envelope (CSE) resulted from the star mass-loss process during the AGB (see Sahai & Trauger 1998;Kastner, Soker, and Rappaport 2000). CRL 618 (= RAFGL 618 = IRAS 04395+3601 = Westbrook Nebula) is a well studied PPN which has very recently started its post-AGB journey (only ∼ 200 yr ago, e.g. Kwok & Bignell 1984) and is rapidly evolving to-wards the PN stage. Most of the circumstellar matter in CRL 618 is still in the form of molecular gas. This component, with a total mass of ≈ 1.5M ⊙ , consists of: (1) a spherical and extended ( > ∼ 20 ′′ ) envelope expanding at 17.5 km s −1 (Knapp & Morris 1985;Hajian, Phillips, & Terzian 1996;Phillips et al. 1992); and (2) an inner, compact bipolar outflow moving away from the star at velocities > ∼ 200 km s −1 (Cernicharo et al. 1989;Gammie et al. 1989;Meixner et al. 1998;Neri et al. 1992). 
The outer, and slowly expanding component is interpreted as the result from the mass-loss process of the central star during the AGB, which took place at a rate ofṀ = few× 10 −5 -10 −4 M ⊙ yr −1 . The fast bipolar outflow, with a mass of ∼10 −2 M ⊙ , is believed to be the result of the interaction between a fast, collimated post-AGB wind and the spherical AGB CSE (see references above). The high-excitation nebula (atomic and ionized gas) is composed of: (1) a compact H II region, visible through radio-continuum emission, elongated in the E-W direction with an angular size of 0. ′′ 2-0. ′′ 4 that is increasing with 1 time (Kwok & Feldman 1981;Martín-Pintado et al. 1993); and (2) multiple lobes with shock-excited gas which produces recombination and forbidden line emission in the optical (e.g. Goodrich 1991;Trammell, Dinerstein, and Goodrich 1993;Kelly, Latter, and Rieke 1992;Trammell 2000). From previous spectroscopic data, the inner H II region and the lobes are known to be expanding with velocities of ∼ 20 km s −1 and up to ∼ 120 km s −1 , respectively Carsenty and Solf 1982). The analysis of different optical line ratios indicates that a relatively large range of temperatures (≈ 10,000 to 25,000 K) and densities (≈ 1600 to 8000 cm −3 ) are present in the lobes (Kelly, Latter, and Rieke 1992). The optical spectrum of CRL 618 also shows a weak, red continuum which is the stellar light reflected by the nebular dust. From spectropolarimetric observations it is known that a fraction of the flux of the Balmer lines is also scattered light originally arising from the inner, compact H II region (Schmidt and Cohen 1981;Trammell, Dinerstein, and Goodrich 1993). The polarization of the forbidden lines is negligible, indicating that they are almost entirely produced by the shock-excited gas in the lobes with a small or insignificant contribution by scattered photons from the H II region (the high density in this region, ∼10 6 cm −3 , produces collisional de-excitation of most forbidden lines). The central star of CRL 618 has been classified as B0 based on the shape of the dereddened visual continuum (Schmidt and Cohen 1981) and on the weak [O III] line emission from the inner, compact H II region (Kaler 1978). The luminosity of CRL 618, obtained by integrating the observed IRAS fluxes, is L = 1.22×10 4 L ⊙ [D/kpc] 2 (Goodrich 1991). Based on the L values predicted by evolutionary models (∼ 10 4 L ⊙ ) and the typical scale height of PN (∼ 120 pc) these authors calculate a distance to the source of 0.9 kpc, which we also assume in this paper. In this paper (paper I of a series of two) we report optical imaging and long-slit spectroscopy of CRL 618. Observational techniques and data reduction are described in Sect. 2. In Sections 3 and 4 we present our observations, including a brief description of the nebular morphology and main characteristics of the optical spectrum. In Sect. 5 we analyze the different emission line components and study the physical properties of the different nebular regions probed by them. In Sects. 6 and 7 we derive the mean nebular inclination and describe the kinematics of CRL 618, respectively. The spatial distribution along the nebular axis of the extinction and the electron density are presented, respectively, in Sects. 8 and 9. In the latter, we also estimate the atomic and ionized mass in different regions. Finally, we discuss our results and give a possible scenario to explain the formation and future evolution of CRL 618 in Sect. 10. 
The main conclusions of this work are summarized in Sect. 11. 2. observations and data reduction 2.1. Optical imaging Ground-based We have obtained narrow-band images of the Hα line and the adjacent continuum emission in the PPN CRL 618 (Fig. 1a,b). Observations were carried out on November 8 th and 9 th , 2001 with the Palomar Observatory 60-inch telescope. The detector was the CCD camera #13, which has 2048×2048 pixels of 24µm in size and provides a plate scale of 0. ′′ 378 pixel −1 . The narrow-band filters used for imaging the Hα line and adjacent continuum are centered at 6563 and 6616Å, respectively, and have a full width at half maximum (FWHM) of 20Å. Weather conditions during the observations were good, although non-photometric, and the seeing ranged between 0. ′′ 9 and 2 ′′ . The final spatial resolution of the Hα (continuum) combined images is 1. ′′ 9 (1. ′′ 1). Total exposure times are 2700 s for the Hα image and 6300 s for the continuum. Data reduction, including bias subtraction, flat field correction, removal of cosmic rays and bad pixels, and regis-tering+combination of individual images, was performed following standard procedures within IRAF 1 . Astrometry of our images was done cross-correlating the coordinates of 376 and 187 field stars (for Hα and the continuum, respectively) in the CCD and those in the USNO-A2.0 catalog. This procedure yields a plate scale of 0. ′′ 378±0. ′′ 009 pixel −1 . The coordinates of the origin of our spatial scale in our images are R.A.= 04 h 42 m 53. s 55 and Dec.= 36 • 06 ′ 54. ′′ 5 (J2000). The errors of the absolute positions derived are < ∼ 0. ′′ 4 (considering the standard deviation of our best calibration solution, 0. ′′ 15, and the absolute position error of the USNO stars, about 0. ′′ 25). We have obtained a pure Hα line emission image by subtracting the continuum emission to our original Hα+continuum image. The continuum image was first smoothed to match the lower effective spatial resolution of the Hα image and then the flux of several field stars was measured in both images to derive a scaling factor that includes the differences of the system 'filter+CCD' response between the two images as well as the different exposure times and sky transparency when the observations were performed. Flux calibration was performed using the USNO-A2.0 R magnitudes of the field stars. The error in the flux calibration is dominated by the absolute flux error in the USNO-A2.0 catalog (∼55%). The flux calibration of the Hα and continuum images is consistent with our calibrated long-slit spectra within a factor < ∼ 2. Hubble Space Telescope As a complement to our data, we have used the high angular resolution Hα image of CRL 618 from the Hubble Space Telescope (HST ) archive (GO 6761,PI: S. Trammell). This image was obtained with the Wide Field Planetary Camera 2 (WFPC2) and the narrow-band filter F656N, around the Hα line. The WFPC2 has a 36 ′′ ×36 ′′ field of view and a plate scale of 0. ′′ 0455 pixel −1 . The image (Fig. 1c) is pipeline-reduced. Optical long-slit spectroscopy We obtained optical long-slit spectra of CRL 618 on November 12 th and 13 th 2000, using the Intermediate Dispersion Spectrograph (IDS) of the 2.5m Isaac New-ton Telescope of the Roque de los Muchachos Observatory (La Palma, Spain). The detector was the EEV10 CCD, with squared pixels of 13.5µm lateral size. Only a clear and unvignetted region of 700×2600 pixels was used (the 2600 pixels were along the spectral axis). 
The CCD was mounted on the 500mm camera, leading to a spatial scale of 0. ′′ 19 pixel −1 . The R1200Y and R900V gratings were employed providing spectra in the (6233-6806)Å and (4558-5331)Å wavelength ranges with dispersions of 16.4Åmm −1 (0.22Åpixel −1 ) and 22.1Åmm −1 (0.29Åpixel −1 ), respectively. A total of three slit positions were observed at position angles (PA) of 93 • (along the nebular symmetry axis) and 3 • (through the west lobe of the nebula). The slit positions are shown in Fig. 2 superimposed on the Hα groundbased image of CRL 618. At PA=93 • two spectra were obtained: one with the slit placed on the northernmost pair of lobes (hereafter, slit N93) and the other one with the slit displaced ∼ 1 ′′ towards the South (partially covering the southernmost lobes; slit S93). At position N93, spectra with two gratings, R1200Y and R900V, were obtained with exposures times of 2000 s and 2700 s, respectively. For the other two slit positions (S93 and PA=3 • ) only the R1200Y was used, and exposure times were 2000 s and 1300 s respectively. The slit was 1 ′′ wide and long enough to cover the whole nebula (and a significant region of the sky). The data were reduced following standard procedures for long-slit spectroscopy within the IRAF package, including bias subtraction, flat-fielding and illumination corrections, sky subtraction, and cosmic-ray rejection. We used CuNe and CuAr lamps to perform the wavelength calibration. The spectral resolution achieved (FWHM of the calibration lamp lines) is ∼ 50 km s −1 , at Hα, and ∼ 80 km s −1 , at Hβ. Flux calibration was done using sensitivity functions derived from two spectrophotometric standards, HR1544 and HD217086, observed at different airmasses, and taking into account the atmospheric extinction curve at La Palma. The geometric distortion of the long-slit spectra (∼ 0.06%) was corrected also using one of the flux calibration stars. The effective spatial resolution obtained ranges between ∼ 1 ′′ and 1. ′′ 6. morphology of the optical nebula In Fig. 1 we show direct images of the PPN CRL 618 in the light of the 6616Å continuum (a) and Hα line emission (b and c) obtained from the ground and with HST . The Hα images show the shocked, optical lobes of CRL 618 consist of several components with a very rich structure at a scale of ∼ 0. ′′ 2 (see Fig. 1c). The east and west lobes are both composed of several, collimated jet-like structures emanating from the center of the nebula. Bright, bowshaped ripple-like features are frequent along the lobes. In the Hα images of CRL 618 we have labeled A' the small, bright region of the east lobe closer to the nebular center (Fig. 1c). This region is very bright in the Hα image but very weak, e.g., in the WFPC2/HST [O I]λ6300Å and [S II]λλ6716,6731Å images (see Trammell 2000;Sahai et al. 2002, in preparation). As we will see in the following sections, the spectrum of region A' is in fact substantially different from that of the shock-excited lobes. This leads us to think that region A' has a different nature, which we discuss in § 5.2. A weak Hα emission halo surrounds the lobes of CRL 618 (Fig. 1b,c). In the HST Hα image the halo is observed very close to the boundaries of the emission lobes and is particularly intense and extended in the east lobe: note the clumps of diffuse emission ahead of this lobe (i.e. just beyond the tip of the lobe). The diffuse halo, like region A', is absent in the [O I] and [S II] HST images (see references above). 
In our ground-based Hα image (with higher signal-to-noise ratio than that obtained with the HST ) the size at a 3σ level of the halo is ∼ 35 ′′ ×22 ′′ , with the major axis roughly oriented at PA=3 • and centered at the position where the maximum Hα emission is observed. The continuum image of CRL 618 traces the distribution of the dust particles in the nebula (which reflect the stellar light) as well as any field star. The central star of CRL 618 is not directly visible, which indicates that the stellar light is strongly obscured along the line of sight. The continuum brightness distribution is in general similar to that of Hα, suggesting the presence of dust in the lobes. the optical spectrum of crl 618 The optical spectrum of CRL 618 is composed of recombination and forbidden line emission superimposed on a faint, red continuum. Long-slit spectra of emission lines In Fig. 2 we show long-slit spectra for the most intense lines in the ∼ [6230-6805]Å range for slits N93, S93, and PA3. The weakest lines detected in this range, [S III]λ6312.1Å and He IIλ6678Å, for slit N93 are shown in Fig. 3. These spectra have been smoothed with a flattopped rectangular kernel of dimension 3×3 pixels: the resulting degradation in the spatial and spectral resolution is less than ∼ 4% from the nominal value (see §2.2) since the smoothing window was smaller than the local seeing and FWHM of the calibration lines. The origin of the spatial scale in the spectra (and in the images to the left and in Fig. 1) coincides with the point of maximum extinction (measured as the highest Hα/Hβ line ratio; § 8). We will refer to this point also as to nebular center. The LSR systemic velocity of the source, V sys =-21.5±0.5 km s −1 (derived from molecular line emission observations, e.g., Cernicharo et al. 1989) is indicated by a vertical line on each spectrum. We find remarkable differences between the profile of the Hα line (and also Hβ, see below) and those of most forbidden lines for each slit position. These differences are specially noticeable in the bright east lobe. A number of spectral features 2 (labeled A, B, and C in Fig. 4) are seen to be superimposed on an underlying Hα profile very similar to that of [O I]λ6300Å (and most forbidden lines). (Carsenty & Solf, 1982, noted also the presence of two different components in the Hα profile, see § 5.1). Feature A is the intense Hα emission component observed in the innermost region (closest to the central star) of the east lobe. The maximum emission from this feature occurs at offset ∼1. ′′ 4. For slit N93 such maximum is slightly red-shifted (∼ 3.5 km s −1 ) with respect to V sys whereas for S93, feature A is blue-shifted by ∼ 10 km s −1 . From comparison with the Hα images of CRL 618 we determine that feature A originates in the inner region A' (Fig. 1). There is also some blue-shifted [N II]λλ6548,6584Å, [S III]λ6312Å and He IIλ6678Å emission arising in this region, which we refer to as emission feature "a" by analogy to feature A in the Hα profile (Figs. 3 and 4). Feature B is the broad, red-shifted wing around offset ∼ −2. ′′ 5 and +3. ′′ 5, in our N93 and S93 spectra. This feature is prominent in regions where the scattered continuum reaches maximum intensity (see Fig. 5 and the continuum image in Fig. 1). Feature B is also present in the PA3 Hα spectra. Feature C is the slightly redshifted emission component observed close to the systemic velocity from offsets +6 to +12 ′′ in our N93 and S93 spectra. 
We note that feature C extends beyond the region where forbidden line emission is observed (i.e., beyond the shocked lobes). From comparison with the Hα direct images of the nebula we conclude that feature C originates in the weak 'halo' which surrounds the bright lobes ( Fig. 1). This halo is visible towards the east up to offsets +20 ′′ in our deep, Hα ground-based image. Feature C is also present along slit PA3, which crosses the lobes and the halo at the outermost parts of the east lobe. We have labeled "C?" the weak, red-shifted emission beyond the west lobe in slits N93 and S93 (Fig. 4). This tentative feature (also marginally present in the Hβ profile) has a redshift slightly larger than that of feature C. For slit S93, the maxima in the Hα and forbidden (e.g. [O I]) line emission are observed at different spatial and spectral positions: the Hα emission peak is red-shifted and located at offset ∼ 3. ′′ 5, whereas the maximum emission from most forbidden lines is clearly blue-shifted and located at offset ∼ 4. ′′ 2. The reason for this difference is that the Hα emission peak is most likely the result of the superposition of features C, B, and maybe A on top of the lobe emission and not only emission produced locally in the east lobe. In Fig. 5 we present the long-slit spectra obtained along the nebular axis (N93) in the wavelength range [4558-5331]Å. The Hβ profile is similar to that of Hα, except that feature A is less prominent for the former very likely due to a larger extinction towards the nebular center. Features B and C are clearly detected in the Hβ profile up to (projected) velocities of ∼ 290 km s −1 (LSR) and around V sys , respectively. The profile of [O III]λ5007Å is similar to that of Hβ: note the maximum emission slightly red-shifted at the position where the continuum is detected in our spectra (coincident with feature B in Hα) and the weak emission closer to the nebular center (coincident with feature A). The profile of the [N I] doublet (partially resolved) is different from Hβ and [O III] but similar to most forbidden lines in the red. We have detected a number of weak emission lines in this wavelength range, most of them from Fe ions. The [Fe III] lines, similarly to the [S III]λ6312 line, are observed in the region where the scattered continuum reaches its maxi-mum intensity and in the inner region where feature A arises, with no trace of emission from the outer parts of the shocked lobes. The [Fe II] lines are remarkably intense at the tips of the lobes and show no sign of Hα-analog features, i.e. they show no emission from region A' or the region of maximal continuum emission. At 4650Å, we also detect a faint and broad emission feature, which is only visible in the region where the continuum is observed for the east lobe. We identify this feature with the Wolf-Rayet bump at 4650Å (see below). Line identification and flux measurements We list in Table 1 all the lines that we have detected in the optical spectra of CRL 618 together with their air wavelengths and fluxes (undereddened). The fluxes have been estimated from slit N93 (for other slits only the lines in the red wavelength range were observed), integrating the emission of the pixels above the 3σ level along the slit and in the spectral direction. Most of the lines in Table 1 were already detected and identified in this source from previous optical spectroscopy (see references in § 1), however we find some new features in this work. 
The most relevant of the new identifications is the Wolf-Rayet bump (WR-bump) at ∼4650 Å (Fig. 5). This bump is a blend of high-excitation lines such as N III λ4634-41 Å and C III λ4647-51 Å (e.g. Tylenda, Acker, & Stenholm 1993). The C IV λ4658.3 Å line, usually considered part of the WR-bump, is listed separately in Table 1 together with the blended [Fe III] λ4658.1 Å line, because it is spectrally resolved from the rest of the bump in our spectra. The WR-bump is detected in the east lobe, only in the region where the scattered continuum (and feature B) is observed. The one-dimensional blue spectrum for this region is shown in Fig. 5, where the presence of He II λ4685.55 Å (also reported for the first time in this work) and other high-excitation (Fe?) lines can be noticed. Emission around 5007 Å was detected previously with a low signal-to-noise ratio and attributed to a blend of [Fe II] and [O III] lines (Schmidt and Cohen 1981; Kelly, Latter, and Rieke 1992). We have also detected the [O III] λ4959 line in our deep spectra with a total flux ∼1/3 of that of [O III] λ5007 (Table 1), as theoretically expected regardless of the nebular conditions. Accordingly, the flux measured at 5007 Å must be mostly due to the [O III] λ5007 Å line. We have compared the absolute fluxes measured in the present work with the fluxes measured by Kelly, Latter, and Rieke (1992). For most of the lines we find good agreement, our fluxes being on average ∼2.2 and ∼1.6 times smaller (for the east and west lobes, respectively) than those measured by Kelly et al., as expected from the different slit widths used in both observations (they use a 2.″5-wide slit, covering the nebula almost totally in the equatorial direction and totally along the axis). There is, however, a remarkable difference between our fluxes for the [O III] λλ4959,5007 Å and [O I] λλ6300,6363 Å lines and those measured by Kelly et al. For the [O III] transitions, these authors found a total flux for the east (west) lobe a factor ∼0.8 (1) smaller than ours, without correcting for the different slit width. Considering that our slit is a factor ∼2.5 narrower than theirs and the size of the nebula (2.″5-3″ in the direction perpendicular to the slit), we must conclude that the intensity of the [O III] lines is now a factor 2-3 larger than 9 years ago. Trammell, Dinerstein, and Goodrich (1993) also found upper limits for the flux of the [O III] lines that are much smaller than our measurements in spite of their broader slit (2″). We also derive an [O III]/Hβ line ratio (less dependent on the slit width) a factor ∼2 larger than that obtained by Kelly et al.

Continuum
A relatively weak continuum is clearly visible in the blue (R900V) long-slit spectra along the nebular axis around offset 3″ (on the bright east lobe, see Fig. 5, top). This position corresponds to the relative maximum in the continuum brightness distribution seen in the direct image (Fig. 1). We also detect continuum emission at the same position in the red spectra (R1200Y) for slits N93 and S93, and also at the spatial origin for slit PA3, but with a smaller signal-to-noise ratio. The red continuum is only visible in substantially smoothed spectra (not shown here). The average surface brightness of the red (blue) continuum, measured from our long-slit spectra in a 1″×1″ region around offset 3″, is ∼8×10⁻¹⁷ (∼5×10⁻¹⁷) erg s⁻¹ cm⁻² Å⁻¹ arcsec⁻². The continuum brightness was measured in a spectral window of 80 Å in the vicinity of Hα and Hβ, respectively.
The intensity and spatial distribution of the continuum obtained from our long-slit spectra are consistent with the continuum image ( Fig. 1). Our measurements of the red continuum flux are in reasonably agreement with previous estimates by Kelly, Latter, and Rieke (1992). Nevertheless, the continuum at blue wavelengths in our spectra is only a factor ∼ 1/1.6 weaker than the red, whereas from the data given by the previous authors we measure a blue-to-red continuum ratio of ∼ 1/2.4. Note that although the absolute flux calibration can be uncertain up to a factor ∼ 50%, the relative flux value is more accurate. This difference is consistent with a blue brightening of a factor ∼ 1.5 (∼ 0.44 mag) in the last 10 years (assuming that the red continuum flux has not varied). Gottlieb and Liller (1976) reported an increase of ∼ 2 mag in B-band, from measurements between 1922 and 1975. These authors derived a B-band brightening rate of 0.06 mag/yr, which is roughly consistent with our results. These authors attributed the increase of the blue continuum flux to the expansion of the circumstellar envelope of CRL 618 and the subsequent decrease of the extinction. Scattered and local emission components We find remarkable differences between the long-slit profiles of the recombination and forbidden emission lines in CRL 618. Figs. 2 and 4 clearly show the presence of three 'extra' emission features (A, B, and C) in the Hα spectrum compared to the profile of most forbidden lines. Some of these features are also present in certain forbidden lines ( § 4.1). Here we analyze the origin of these features. Feature A, the intense Hα emission observed close to the nebular center in slits N93 and S93, arises in the inner region A' (Fig. 1). This region is very bright in the HST Hα image but much weaker in the light of [O I] and [S II] (Trammell 2000). This behavior is consistent with the absence of feature A in our spectra for these forbidden transitions. For slit S93, feature A is blue-shifted with respect to V sys by ∼ 10 km s −1 . The counterpart to this feature observed in other recombination and forbidden lines is also blue-shifted: for [N II]λ6584Å, feature a is blue-shifted by ∼ 25 and 33 km s −1 at position N93 and S93, respectively; for Hβ, [O III]λ5007Å, and [S III]λ 6312Å, observed only at position N93, the blue-shift is ∼8-12 km s −1 . The measured Doppler blue-shift suggests that feature A is locally produced within the east lobe, which is approaching towards us. For slit N93, feature A lies at V sys , showing no apparent shift towards the blue. We believe that in this case the spectral position of the Hα feature A could be altered (slightly red-shifted) because of the presence of feature B at almost the same spatial position, which is relatively intense and is clearly red-shifted in the east lobe. The blue-shift of feature A could also be intrinsically smaller for position N93 than for S93, as suggested by the smaller blue velocities of its counterpart (feature a) in the other lines. Features B and C are most likely the scattered component of the emission in the Balmer lines for this source ( § 1). Previous spectropolarimetric observations indicate that ∼ 40% of the total Hα flux is polarized (with a line polarization of about 15%; Trammell, Dinerstein, and Goodrich 1993), i.e. it is light scattered by nebular dust. The polarization of the forbidden lines is, however, much lower and this would explain the absence of these features in these cases. 
Features B and C are red-shifted, although they originate in the approaching lobe (note that the forbidden line emission from the east lobe is blue-shifted), which clearly points to their scattered nature: the light scattered by dust moving away from the central source should be red-shifted for both the approaching and receding lobe (see e.g. Schwarz et al. 1997). Carsenty and Solf (1982) also found some red-shifted Hα emission (probably our feature B) arising in the approaching bright lobe which they attribute to scattered emission following the same argument. The previous interpretation is consistent with the similar brightness distribution of these features and the continuum (Figs. 1 and 2), as expected if they share the same scattered nature. Feature B is clearly most prominent in the region of maximum continuum emission for all the slits positions. Moreover, features B and C are much more intense for the east lobe than for the west one, where the scattered continuum is much weaker and the fraction of polarized light smaller (0.5 versus 0.2, for the east and west lobe respectively; Schmidt and Cohen 1981). The different profiles and spatial distributions of fea-tures B and C in our long-slit spectra suggests that they are produced by dust in different nebular regions. The low red-shift of feature C (∼ 6-8 km s −1 ) and its location (beyond the forbidden-line emitting east lobe, in the weak Hα halo) points to dust not located inside the lobe but beyond it, in the unshocked, slowly expanding AGB CSE. Since the AGB CSE surrounds the lobes, we can expect feature C to be present (with more or less intensity) all along the slits. This is consistent with the presence of the diffuse halo surrounding the lobes in the Hα images, and not only in front of them (see below). The tentative feature "C?", observed at the tip of the west lobe, could be the counterpart to feature C in the receding lobe. The much larger red-shift of feature B (up to LSR velocities of ∼ 240 km s −1 ) and its location (within the lobes) is more consistent with dust inside the shocked-lobes which is flowing outwards rapidly (presumably at the same speed as the gas). We identify this feature with the red-shifted Hα emission detected by Carsenty and Solf (1982) in the east lobe, which they also attribute to scattered emission produced by fast outflowing dust. At the position where feature B is observed, some contribution by feature C is also expected (see above), however, the spectral resolution achieved in these observations is not sufficient to separate both features. The line emission that is being reflected by the dust, visible as the halo (in the direct images) and as features B and C (in our long-slit spectra), is unlikely to originate in the shocked-lobes themselves. In fact, neither the halo, nor features B and C are detected in most forbidden lines, and in particular, in the [O I] line. However, the [O I] and Hα emission locally produced in the east lobe have comparable intensities ( Fig. 4 and HST images in Trammell 2000), which should result in almost equally bright scattered components for both lines if the reflected photons were originally produced in the lobes. Features B and C must then result from the reflection of light produced in the nebular center (in the central H II region or/and in region A'), where the emission of most forbidden lines observed by us is intrinsically much less intense. 
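Since it is these scattered red-shifts that are used below (Sects. 6 and 7) to fix the nebular inclination and to deproject the outflow velocities, a short numerical sketch is given here. It assumes the simple relation sin i = Δv_C / V_exp for the halo-scattered feature C, which is our reading of the geometry (not stated explicitly in the text) chosen because it reproduces the ∼27° and ∼20° quoted in Sect. 6, together with the usual deprojection v = v_proj / sin i:

```python
import math

V_EXP_AGB = 17.5   # km/s, expansion velocity of the AGB envelope

def inclination_from_scattered_redshift(dv_kms: float) -> float:
    """Nebular inclination (deg, from the plane of the sky), assuming
    sin(i) = red-shift of scattered feature C / AGB expansion velocity."""
    return math.degrees(math.asin(dv_kms / V_EXP_AGB))

def deprojected_velocity(v_proj_kms: float, incl_deg: float) -> float:
    """Intrinsic outflow velocity from a projected (radial) velocity."""
    return v_proj_kms / math.sin(math.radians(incl_deg))

# Feature C red-shifts of 8 and 6 km/s (slits N93 and S93, Sect. 6)
print(inclination_from_scattered_redshift(8.0))   # ~27 deg
print(inclination_from_scattered_redshift(6.0))   # ~20 deg

# Projected velocity of ~80 km/s at the lobe tips with i ~ 24 deg (Sect. 7)
print(deprojected_velocity(80.0, 24.0))           # ~200 km/s
```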
The different nebular regions The presence or absence of the different spectral features and, in general, the profiles of the different lines are diagnostic of the physical properties (such as temperature, ionization, and electron density) of different nebular regions in CRL 618, from the highly obscured H II region to the shocked lobes. The inner H II region This region is indirectly probed by lines with B-or C-like features (scattered light), namely Hα, Hβ, [O III], He I, He II, [Fe III], [S III], and the WR-bump. The spectrum of the central H II region is thus most easily seen in the region where the scattered continuum is observed, at offset ∼ 3 ′′ in the east lobe. The central H II region (previously studied through its radio-continuum emission, see § 1) is characterized by high densities ( > ∼ 10 6 cm −3 ) and high temperatures (13,000-15,000 K) that provide the required conditions to produce most of the (high-excitation) lines above (e.g., the ionization potential of the [Fe III], [S III], He II, and WR-bump are 16.16, 23.33, 24.59 and > ∼ 30 eV, respectively). The large critical densities for the observed [Fe III] and [S III] transitions, n c ∼ 10 8 − 10 9 cm −3 , explain why these forbidden lines are not de-excited by collisions in the central region of CRL 618. We note also, that the relatively large critical density of the [O III]λ5007 transition, few×10 5 cm −3 , would explain the weak [O III]λ5007 emission from the inner H II region, which is indirectly observed at the position of maximal continuum and feature B. Unfortunately spectropolarimetry of the lines listed above presumably arising from the H II region has not been obtained previously, so we do not know their polarization degree (nor the fraction of scattered flux) and we cannot test the previous scenario. In any case it seems that these high excitation lines are unlikely to be produced in the shocked lobes, where the spectrum is dominated by low excitation lines. Region A' This region is also very likely a relatively dense region, considering that only forbidden lines with high critical densities (n c > ∼ few× 10 4 cm −3 ) are observed therein, namely: The spectrum of this region (the innermost part of the east lobe, where features A and a are observed) is remarkably different from that of the shocked lobes but very similar to that of the central H II region. In fact, most lines are common to both regions except for the absence of the WR-bump (which requires higher excitation conditions) in region A' and the undetected [N II] emission (with a low critical density, n c ∼ 2×10 4 cm −3 ) from the denser H II region. This fact suggests that region A' is more likely to be ionized by the UV stellar radiation rather than by shocks. In this sense, region A' represents the outermost (less dense) parts of the H II region surrounding the star. The observed relative line intensities are not consistent with any of the different shock-excitation models by Hartigan, Raymond, and Hartmann (1987) (hereafter HRH87). The very weak emission of the [O I] lines from region A' ([O I]/Hα < 0.05) implies that this region is fully ionized (e.g. Hartigan, Morse, & Raymond 1994). The observed [O I] transitions have relatively high critical densities (n c ∼2×10 6 cm −3 ) and, therefore, significant collisional de-excitation of the involved levels is not expected. Therefore, the weakness of the [O I] transitions is because most oxygen in region A' is ionized. 
Considering the similar ionization potentials of oxygen and hydrogen, we conclude that the ionization fraction (X=n e /n total ) of region A', like that of the inner H II region (where [O I] is not detected) is ∼ 1. The shocked lobes This region is probed by lines without any additional features (A, B, or C). The most intense of these lines ([O I], [S II], and [N I]) are known to originate entirely from the gas in the lobes since for these lines, no polarization has been measured (Trammell, Dinerstein, and Goodrich 1993) and no red-shifted components are found in the approaching lobe. Although no polarimetry of the rest of the lines within this class has been made, we conclude that they must have a similar origin given their similar profiles. The relative intensities of the lines observed at the lobes (dereddened using the average extinction for each lobe, see §8 below) are consistent with shock-excited emission. In particular, the flux ratios observed by us lie in between the predicted values for bow-shock excitation models 8 and 9 by HRH87 with shock velocities between 50 and 100 km s −1 and preshock density 300 cm −3 . These range of velocities for the shocks is mainly constrained by the detected, although weak, [O III]λ5007Å emission and implies shock speeds slighly larger (but still comparable) to previous estimates following similar procedures (V shock =20-80 km s −1 ; Riera, Phillips, & Mampaso 1990;Trammell, Dinerstein, and Goodrich 1993). The higher shock velocity derived by us is basically due to the higher [O III] flux obtained in our observations. [N II] and [Fe II] lines show the largest intensity contrast between the tips and the innermost regions of the lobes amongst forbidden transitions. [Fe II] lines are known to be good tracers of astrophysical shocks (in supernovae, Herbig Haro objects, PNe, etc): [Fe II] lines are very sensitive to the high densities and temperatures in the shocked gas; moreover, fast shocks are able to extract substantial amounts of Fe from dust grains (e.g. Welch et al. 1999;Reipurth et al. 2000). Therefore, the intensity enhancement of the [Fe II] (and [N II]) lines is very likely related to the presence of intense shocks in these regions. In addition, the strongest [Fe II] line emission is expected in regions where the compression and heating of the gas is largest, for example, at the head of a bow shock, where the velocity component normal to the shock is greatest. The presence of bow-shocks at the tips of the lobes of CRL 618, where the maximum [Fe II] emission is observed, is suggested by bright, curved features in the direct HST images of the nebula (Fig. 1). Finally, the presence of intense [Fe II] lines in the shock-heated and compressed gas in the lobes of CRL 618 points to dissociative (V s >30 km s −1 ) J-shocks rather than C-shocks (Reipurth et al. 2000), consistent with the relatively large shock velocity derived from diagnostic line ratios and the kinematics of the lobes (see Sect. 7). The ionization fraction in the shocked lobes is significantly lower than in region A' as suggested by the much larger [O I]λ6300/Hα ratio, ∼ 0.5-0.9 (excluding scattered light), in the former. We discuss in detail the ionization in the lobes in Sect. 9.2. The scattered-light halo A diffuse halo surrounding the bright optical lobes of CRL 618 is visible in the Hα direct images of the nebula (Figs. 1b,c). The scattered feature C in the Hα spectra arises in this halo, and it is particularly noticeable in the region just beyond the tip of the east lobe. 
Accordingly, the Hα emission from this halo is very likely light (originally arising in the H II region) that escapes preferentially in the direction of the lobes and is scattered by the innermost parts of the AGB CSE. (The more distant regions of the AGB CSE are visible in molecular line emission, § 1.) the inclination of the nebula The presence of scattered features (C and B) in the Hα spectrum of CRL 618 offers a valuable opportunity to derive the inclination of the nebula: by comparison of the red-shift of the scattered emission with the intrinsic velocity of the dust (whenever these quantities can be determined). Features B and C are produced, respectively, by: (i) fast-moving dust mixed with the atomic gas inside or in the walls of the lobes; and (ii) dust beyond the lobes, in the slowly expanding, extended envelope which surrounds the optical lobes (see § 5.1 and Fig. 7). The limited spectral resolution in our spectra does not allow us to separately measure the red-shift of features B and C when both components are simultaneously present (all along the east lobe up to offsets ∼ 8 ′′ ). However, at the tip of the east lobe, only feature C is observed in the Hα spectra, and the red-shift of this feature can be accurately determined. By comparison of this red-shift with the intrinsic expansion velocity of the dust in the AGB CSE producing feature C, V exp ∼ 17.5 km s −1 LSR ( § 1), we can readily derive the nebular inclination. The red-shift of feature C is apparent in Fig. 8 (left), where we show its spectral profile derived from slits N93 and S93 integrating the long-slit spectra from offsets 8. ′′ 5 to 11. ′′ 5. The red-shift of feature C for both slit positions has been obtained by fitting a Gaussian function to the core of the feature, which is roughly symmetric. For N93 and S93, the red-shift with respect to V sys (given by the Gaussian center) is 8±1.5 and 6±1.5 km s −1 respectively. These values yield an inclination (with respect to the plane of the sky) of ∼ 27 • ±6 • (from slit N93) and 20 • ±5 • (from slit S93) for the dust reflecting the Hα photons ahead of the east lobe. As we have shown, feature C beyond the east lobe has its counterpart in the weak halo seen in the Hα images of the nebula at the same position (Fig. 1). The orientation of this halo and the pure emission lobe in the plane of the sky is roughly the same. This is most likely due to light from the core escaping preferentially along the lobes of CRL 618. Hence we can expect the previous values of the inclination to be a relatively good representation of the mean nebular inclination (for an overall axial symmetry). The different inclinations (∆i=7 • ) derived for feature C in slits S93 and N93 could indicate different orientations of the different lobe components along the line-of-sight, a likely possibility given their different orientations in the plane of the sky (∆PA∼ 15 • ). We can also derive i by measuring the velocity of feature B (V B ) with respect to the local emission component (V local , obtained from the forbidden lines) at the same position. V =V B -V local is a direct measurement of the intrinsic expansion velocity of the lobes, therefore . From our spectra, we can derive only a lower limit to the red-shift of feature B because the Hα profile has also important contributions from features A and C at the position where feature B is observed (offset ∼3 ′′ ). 
This leads to a combined Hα profile with a smaller red-shift than that of feature B alone and, therefore, only a lower limit to the intrinsic expansion velocity of the lobes. The lower limit for V translates into an upper limit for i. In the right panel of Fig. 8, we show Hα and [O I]λ6300Å 1D-spectra at the position where the scattered continuum and feature B reach maximum intensity in the east lobe for slit N93. The velocities of the Hα and [O I] lines measured (with uncertainties of ±1.5 km s −1 ) at the peak are V B =5 km s −1 , V local =−55 km s −1 , which yields V > 60 km s −1 (at this position) and i <34 • . If, instead of using the line peaks for the velocity measurements, we use the centroids of the full width at half maximum (V B =5 km s −1 , V local =−63 km s −1 , V > 68 km s −1 ), or the centroids of the full width at a 3σ level (V B =−6 km s −1 , V local =−67 km s −1 , V > 73 km s −1 ), we get values of i < 38 • and i < 39 • , respectively. The mean value of, and upper limit to, the inclination derived above are smaller than the inclination previously obtained by Carsenty and Solf (1982) (hereafter CS82). These authors obtain i=45 • , based on the analysis of the Hα scattered component produced by the fast dust within the lobes. We believe this difference can be due to several reasons: First, the value given by CS82 for the intrinsic speed of the outflow, 80 km s −1 is really a lower limit to V because of the presence of multiple components in the Hα profile (as explained above). Second, the value for the radial velocity difference between the east and west lobes (which is equal to 2V sin i) measured by CS82 from the forbidden line emission (e.g. [N II]λλ6548,6583, [S II]λλ6716,6731), 114 km s −1 , is most likely an overestimate. This is because the radial velocity difference between the lobes is not constant along their length (see Figs. 2 and 4 and Table 3). In particular, the 114 km s −1 velocity difference occurs only at the tips of the lobes (at offsets ±6 ′′ ). But in the region where the most intense Hα scattered emission is observed (offset ∼3 ′′ ), coincident with the position where CS82 measure the red-shift of the Hα scattered component, we observe a smaller radial velocity difference, ∼ 90 km s −1 . Thus, if we use a larger value of the outflow speed (>80 km s −1 ) and a smaller radial velocity difference (∼ 90 km s −1 ) in the CS82 method, we get an upper limit to the inclination of < 34 • , which is consistent with our previous estimates. Finally, a low mean value of the inclination is in better agreement with the similar average extinction deduced for the east and west lobe (see § 8). The shocked lobes In order to study the kinematics in the lobes of CRL 618 we have chosen the most intense forbidden lines, which originate in the lobes and have profiles uncontaminated by scattered light components. Since the [N II] lines have also some contribution from reflected light (Trammell, Dinerstein, and Goodrich, 1993, found that the polarization of these lines is ∼ 6%), we have based most of our analysis on the intense, unpolarized [O I] and [S II] lines. The [O I] and [S II] profiles along the slit N93 for the west and east lobe are approximately point-symmetric with respect to the spatial origin and systemic velocity (Fig. 2), suggesting that they have probably a similar kinematical structure. This result is not in principle expected from the different morphology of the east and west lobes. The projected radial velocity gradient is not constant in either lobe. 
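Before turning to the lobe kinematics in detail, the inclination estimates derived above can be cross-checked with a minimal numerical sketch. It assumes that the red-shift of feature C with respect to V sys equals V exp sin i, and that for feature B one has sin i = (V sys − V local )/V with V = V B − V local ; both relations are our reading of the geometry described in this section and are meant as an illustration, not as the exact expressions used above.

```python
import math

# Feature C (dust in the slow AGB envelope ahead of the east lobe):
# red-shift ~ V_exp * sin(i), with V_exp ~ 17.5 km/s (assumed relation).
v_exp_agb = 17.5                              # km/s
for slit, dv in [("N93", 8.0), ("S93", 6.0)]:
    i_deg = math.degrees(math.asin(dv / v_exp_agb))
    print(f"slit {slit}: red-shift {dv} km/s -> i ~ {i_deg:.0f} deg")
# -> ~27 deg (N93) and ~20 deg (S93), as quoted above.

# Feature B (fast dust inside the east lobe): V = V_B - V_local is a lower
# limit to the lobe expansion velocity; sin(i) = (V_sys - V_local)/V then
# gives an upper limit to the inclination (assumed relation).
v_sys = -21.0                                 # km/s, LSR systemic velocity
for label, v_b, v_local in [("line peaks", 5.0, -55.0),
                            ("FWHM centroids", 5.0, -63.0)]:
    v = v_b - v_local
    i_max = math.degrees(math.asin((v_sys - v_local) / v))
    print(f"{label}: V > {v:.0f} km/s -> i < {i_max:.1f} deg")
# -> upper limits close to the i < 34 and i < 38 deg quoted above
#    (differences at the ~1 deg level reflect rounding of the input velocities).
```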
In the innermost parts of the lobes the projected velocity decreases with the distance from the nebular center, this behavior being more pronounced in the east lobe (Table 3). In the outer parts of the lobes, the velocity increases with the distance, reaching projected velocities (with respect to V sys ) up to 70 and 80 km s −1 at the tip of the east and west lobe, respectively. In these regions, the emission arises in the compact, bow-shaped emitting knots at ±6 ′′ . Considering the low inclination for the nebula derived in § 6, i ∼24 • , the lobes of CRL 618 are expanding with velocities up to ∼ 200 km s −1 (at the tips). The changing radial velocity gradient observed in our low angular-resolution (ground-based) spectra may result from superposition of multiple kinematical components as suggested by the complex morphology and small scale (∼0. ′′ 2) structure seen in the HST images. Of course it is also possible that there are acceleration and deceleration processes operating in the lobes. The FWHM of the forbidden lines along the lobes also increases outwards, from 60-75 km s −1 closer to the nebula center, to ∼ 150 km s −1 at the lobe tips (Table 3). The large line widths measured at the tips of the lobes (Fig. 9) are consistent with the bow-shaped features being regions accelerated by bow-shocks: the different orientations of the velocity vectors in a bow-shock structure produce a large velocity dispersion (see e.g. HRH87). The total width (at zero intensity level) of the line does not strongly vary along the lobes but rather is fairly constant with a mean value of ∼ 200 km s −1 . The spectral profile of any low-excitation emission line from a bow-shock can be used to estimate the shock velocity, V s . The shock velocity equals the full width (at zero intensity) of the line for radiating bow-shocks, independent of orientation angle, pre-shock density, bow-shock shape, and pre-shock ionization stage (HRH87). We derive V s =200-230 km s −1 for the bow-shocks seen at the tips of the east and west lobes. This value agrees well with the value of V s inferred directly from the projected velocities of these regions assuming an inclination of 24 • . This result therefore supports the low inclination of the nebula we have derived ( § 6). In addition, the centrally-peaked profiles in Fig. 9 are consistent with the predictions for bow-shock models by HRH87 with i <25 • (for larger inclinations the model profiles show two well separated peaks). The shock velocity derived from the observed line ratios and their comparison with predictions of bow-shock excitation models by HRH87 (V s < 90 km s −1 , § 5.2.3) is smaller than that directly derived from the line profile. Similar differences are also present for many Herbig-Haro objects observed and modeled by these authors suggesting that the existing models (necessarily simplistic) cannot accurately reproduce the complex excitation and spatiokinematical structure of the shocked material. Moreover, the preshock density in the HRH87 models, 300 cm −3 , is very likely lower than the actual value (see § 10). The shock velocity obtained from the line profile is probably less affected by errors in the model, since it arises only from geometrical considerations, and may better represent the actual velocity of the shock. Only if the line emission from the shocked post-AGB wind itself (which is moving faster than the forward shocks) was significant compared to that from the shocked AGB ambient material (unlikely to be the case in CRL 618; C-F. 
Lee and Sahai, in preparation), then the line FWZI would be an upper limit to V s . V s derived from the line ratios depends on poorly known parameters like, e.g., the presence and strength of magnetic fields. Models with higher magnetic fields require substantially larger shock velocities to reproduce the same line ratios (Hartigan, Morse, & Raymond 1994). For example, for a pre-shock density > ∼ 10 5 cm −3 ( § 10), magnetic fields of ∼ 30µG are required to reproduce the dereddened [N II]λ6583/[O I]λ 6300 ratio (∼ 0.5, see Table 1 and § 8) for a planar shock with speed V s = 90 km s −1 . To produce the same ratio when V s = 200 km s −1 , the required magnetic field is much larger (>3000µG). Finally, from a comparison of the spectra obtained for slits N93, S93, and PA3, we infer some differences in the kinematics of the different lobe components (note that each lobe of CRL 618 consists of at least two well separated features, Fig. 2). For example, the south sub-component of the east lobe (better probed by slit S93) is expanding 5-20 km s −1 faster than the north one. Similarly, the northern sub-feature of the west lobe (better seen in slit N93) is 5-20 km s −1 faster than the southern. These differences could be due to different absolute ejection velocities or inclinations of the lobe components. We note also that, in general, N93 and S93 provide similar FWHM for the west lobe but for the east lobe, the largest FWHM are measured from slit S93, where the line FWHM reaches up to ∼ 180 km s −1 . The larger widths in the southeast lobe sub-component are unlikely to be related to projection effects and must be due to intrinsic differences in the kinematics. Region A' and the inner H II region From the analysis of the Hα profile we can also obtain valuable information on the kinematics in regions of the CRL 618 nebula other than the shocked lobes. The presence of the blue-shifted, intense Hα emission component in the east lobe close to the nebula center (feature A, Fig. 2) indicate ionized gas moving away at moderate velocity (15-30 km s −1 ) from the central star. We have measured the velocity of this feature from the [N II] lines, which are less contaminated by scattered light (note that the blue-shift of feature A in the Hα spectra is masked by superposition with the intense, red-shifted feature B and, probably also, C). The kinematics of the H II region cannot be so straightforwardly obtained from our data, since the emission from this region is seen only after being reflected by the dust (through features B and C) and therefore the resulting line profile is affected by the distribution and kinematics of the dust. The FWHM of feature C, observed at the tip of the lobes (Fig. 8), and deconvolved with the spectral resolution of our data is ∼ 60 km s −1 , suggesting an expansion velocity for the gas in the H II region of ∼ 30 km s −1 . This figure has to be considered as an upper limit, since the reflecting dust could show a range of inclinations resulting from the opening angle of the lobes (up to 15 • according to the lobe aperture in the plane of the sky). Our result is consistent with the low expansion velocity of the H II region measured from the profile of H radio recombination lines (20 km s −1 ; Martín-Pintado et al. 1988). The weak wings of feature B in the Hα and Hβ lines extend up to LSR velocity ∼ 240 km s −1 (at a level of 5σ). 
Such extended red-wings are most likely the result of the large velocities of the reflecting grains within the lobe (up to ∼ 200 km s −1 ) and also partially to the lobe opening angle, however, the presence of rapidly outflowing material in the H II region cannot be ruled out. extinction We have estimated the relative variation of the extinction along the lobes of CRL 618 by comparing the Hα and Hβ spatial profiles along the axis (from slit position N93). The two long-slit spectra have been aligned in the direction of the slit using the west lobe brightest knot (at the tip of the lobe) as our spatial reference. We estimate that errors in the alignment are < ∼ 1 pixel ( < ∼ 0. ′′ 2). The uncertainty in the positions of the slits used for obtaining the Hα and Hβ spectra in the direction perpendicular to the slit is a few pixels and therefore, smaller than the seeing (1. ′′ 6, § 2.2). The spatial distribution for both transitions and the adjacent continuum was obtained integrating our long-slit spectra in the spectral direction. In order to derive the Balmer decrement, the continuum emission was first subtracted from the Hβ spatial profile but not from the much brighter Hα emission (in this case, the continuum contribution is negligible, < ∼ 1%). The spatial distribution of the extinction derived from the Hα/Hβ ratio is shown in Fig. 10 (top panel). In this figure, the optical depth at 4861.3Å, τ 4861 , has been calculated assuming the Galactic reddening curve by Howarth (1983) and an intrinsic Balmer decrement of 3. This value of the Balmer decrement allows easy comparison of our results with previous estimates of the extinction, which usually assume Hα/Hβ=3. Moreover, the previous value is expected for high-velocity shocks like those in CRL 618 ( § 7), for which the intrinsic Balmer ratio approaches the recombination value (collisional excitation increases the intrinsic Balmer ratio only for low-velocity shocks, < 70 km s −1 ; Hartigan, Morse, & Raymond 1994). The conversion from τ 4861 to extinction in magnitudes in the V band, τ 4861 =0.93A V , has been derived using the extinction law parametrization by Cardelli, Clayton, & Mathis (1989) and assuming the ratio of total-to-selective absorption, R V , equal to 3.1. The error-bars in Fig. 10 are statistical, and do not account for systematic errors (e.g. arising from an incorrect value of the intrinsic Balmer decrement, absolute flux calibration or small misalignments between the Hα and Hβ spectra). The Hα/Hβ ratio clearly varies along the nebular axis (Fig. 10). This variation is most likely not due to a change of the intrinsic Balmer ratio, since we do not expect the shock characteristics (e.g. shock speed and pre-shock density) to vary sufficiently along the nebular axis. Neither the velocity of the shocks nor the electron density show systematic variations along the lobes (see § 7 and § 9). Accordingly, the observed variation of the Hα/Hβ ratio must be related to extinction by nebular dust. The nebular extinction varies along the axis of CRL 618 as follows: the maximum extinction is produced close to the nebular center and then gradually decreases towards the outermost parts of the lobes. The extinction in the innermost nebular regions is particularly uncertain and is very likely underestimated by a large factor. In fact, from the HST images we can see that the emission from the 2 ′′ -region around the nebular center is well below the noise, suggesting a very high obscuration there (see Fig. 1). 
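A minimal sketch of the Balmer-decrement conversion used here is given below. The intrinsic decrement of 3 and the relation τ 4861 = 0.93 A V are those adopted above; the differential extinction A(6563 Å)/A(4861 Å) ≈ 0.70 is an assumed approximate value for a Cardelli et al. (1989), R V = 3.1 curve, and the input ratio is purely illustrative.

```python
import math

INTRINSIC_DECREMENT = 3.0        # adopted intrinsic Halpha/Hbeta ratio
A_HA_OVER_A_HB = 0.70            # assumed A(6563)/A(4861), Cardelli-like curve

def balmer_to_extinction(observed_ratio):
    """Return (tau_4861, A_V) from an uncorrected Halpha/Hbeta flux ratio."""
    tau_4861 = math.log(observed_ratio / INTRINSIC_DECREMENT) / (1.0 - A_HA_OVER_A_HB)
    a_v = tau_4861 / 0.93        # conversion adopted in the text
    return tau_4861, a_v

# Hypothetical ratio, for illustration only (not a measured value):
tau, a_v = balmer_to_extinction(10.0)
print(f"Halpha/Hbeta = 10  ->  tau_4861 ~ {tau:.1f},  A_V ~ {a_v:.1f} mag")
# -> tau_4861 ~ 4.0, A_V ~ 4.3 mag, comparable to the values discussed in this section.
```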
We deduce that the extinction towards the central star of CRL 618 is A V > 10 magnitudes, given the upper limit to the stellar continuum level estimated from the R1200Y spectra, < ∼ 2.5× 10 −16 erg s −1 cm −2Å−1 , and the expected flux from a 30,000 K black body, 2.5× 10 −12 erg s −1 cm −2Å−1 . Our low-resolution, groundbased Hα and Hβ spectra, used to calculate the extinction, show some weak emission from the innermost nebular region. However, this is a result of the poor spatial resolution in these spectra, which spreads out some light from nearby nebular regions in the lobes (where the extinction is smaller) into the nebular center (with, very likely, much larger extinction). The result is an underestimate of the extinction in these inner regions as deduced from the observed Balmer decrement. Beyond the central region, the extinction decreases with the distance from the nebular center reaching a relative minimum around offsets ∼ ±6. ′′ 5. At offsets larger than ±7 ′′ (beyond the tips of the lobes) the extinction increases again. We note that the extinction curve along the nebula is quite symmetric, i.e. the extinction is similar for the east and west lobe: the weak west lobe appears on average only about 15% more extinguished than the bright east lobe. The difference is even smaller at the tip of the lobes. This result indicates that the lower Hα surface brightness for the west lobe compared to the east (by a factor ∼ 5) is intrinsic and not due (at least totally) to a much larger extinction of the receding lobe. In fact, most of the Hα emission from the east lobe arises from region A', which does not have a counterpart in the west lobe. The similar extinction for the east and west lobe is consistent with the similar brightness of both lobes for most forbidden lines (e.g. [O I], [S II], etc), for which the east lobe is only a factor ∼ 1.5 brighter than the west lobe. Finally, the similar extinctions derived for the east and west lobe suggest a low inclination of the nebular axis with respect to the plane of the sky (assuming that most of the extinction is produced by the AGB CSE surrounding the lobes, § 1), consistent with the value we derive from the analysis of the scattered Hα emission (i∼24 • , § 6). Our results on the extinction in CRL 618 are in good agreement with previous estimates using the Balmer decrement method: Kelly, Latter, and Rieke (1992) derive an average extinction in the nebula of A V =4.5±0.2, which is within the nebular extinction range derived by us (Fig. 10). From the analysis of [S II]λλ6716,6731 and [S II]λλ4069,4076 lines, these authors also found that both the east and west lobe have similar extinction values, A V =2.1±0.2 mag and A V =2.9±0.3 mag, respectively. Effect of the scattering on A V : radial extinction beyond the lobes The estimate of the absolute extinction along the line of sight from the Balmer decrement in an object like CRL 618 is problematic mainly because of the contribution of scattered light to the total flux of the H I recombination lines, as already noticed by, e.g., Trammell, Dinerstein, and Goodrich (1993); Kelly, Latter, and Rieke (1992). 
The effects of such contribution on the value of the observed Hα/Hβ ratio are difficult to quantify: on the one hand, the scattering will produce an 'artificial' decrease of the observed Hα/Hβ ratio due to the highest reflection efficiency for blue photons; on the other hand, the extinction of the light from the innermost regions, where it originates, to the regions where it is scattered by the dust (radial extinction) will produce reddening which may or may not compensate the previous artificial increase of the Hα/Hβ ratio. The presence of nebular dust is expected to affect not only the absolute value of the extinction but also its relative variation from one region to another in the nebula. In regions where both local and scattered Hα and Hβ emission are present, the optical depth derived from the Balmer decrement, τ , is not only the line-of-sight or tangential optical thickness, τ t (from the lobes to the observer), as desired, but a complex function of τ t , the radial optical depth, τ r (from the star up to the lobes where the light is being reflected), and the relative intensity of the Balmer emission locally produced within the lobes and in the inner H II region. All these quantities will vary along the nebula in a different way and they will combine to yield the extinction profile in Fig. 10. At the tip of the CRL 618 east lobe (and tentatively in the tip of the west lobe) only Hα emission from feature C (and "C?") is observed, i.e. the Balmer emission is dominated by the scattered component (Fig. 4). In a region like this, where the local emission is negligible with respect to the scattered component, τ only depends on τ r and τ t in a relatively simple way: where τ sc is the scattering optical thickness of the small region at the tip of the east lobe that produces feature C (region C'; see Fig. 7) and the optical depths τ r and τ t include both scattering and absorption. This equation is valid if the scattering and aborption optical depths in region C' are small (≪ 1), as expected beyond the tips of the lobes. In this case, τ t is mostly due to foreground dust (not directly illuminated by the central source) between region C' and the Earth. Using this equation, we analyze the spatial variation of the extinction in Fig. 10 to study the structure of the lobes. At the tips of, and beyond, the lobes (offsets > ∼ 7 ′′ ), the extinction increases with the distance from the nebular center (Fig. 10). This increase, which is not expected a priori given the progressive decrease of τ with distance for inner regions, can be explained if τ ≈ τ r in this region, i.e., if the radial extinction is larger than the tangential component. In fact, the tangential optical depth will decrease (not increase) with the distance from the central star in the plane of the sky, p, for any reasonable density law for the outer circumstellar envelope (e.g. τ t ∝ p −1 for a density law ρ = ρ 0 (r 0 /r) 2 ). However, the radial extinction is expected to increase with p, as observed, for any reasonable density law within the lobes. We have computed the radial extinction as a function of the observed spatial offset, p=r cosi, for two different density laws within the lobes of CRL 618: (i) ρ = ρ 0 (r 0 /r) 2 , which leads to τ r ∝ (1/p 0 − 1/p); and (ii) ρ = constant, which yields τ r ∝ p−p 0 . 
The increasing extinction beyond offset +7 ′′ , can be reproduced for both laws (note that, in fact, for p ∼ p 0 , the radial extinction τ r in case i can be approximated by a straight line, which corresponds to case ii) but always requires values of p 0 > ∼ 7 ′′ . (We will not discuss case ii further because ρ ∝ r −2 best represents the density in most AGB circumstellar envelopes.) A small p 0 (≪7 ′′ ) results in the extinction reaching a constant value (τ r ∝ 1/p 0 ) very quickly (Fig. 10), which is inconsistent with the data. The large p 0 obtained implies that there is a substantial component of the radial extinction which is produced by dust outside of the lobes (i.e., in the AGB CSE). We can only set an upper limit to the component of the radial extinction produced by dust inside the lobes (presumably filled by the post-AGB wind), based on the minimum value of τ 4861 (about 2) obtained at the tips of the lobes (offsets of ±6. ′′ 5 ′′ ). Both the above components (referred to as τ AGB and τ pAGB ) and their sum are shown in Fig.10. The density contrast between the AGB and post-AGB wind Assuming that the gas filling the lobes is the post-AGB (pAGB) wind, we now derive ρ pAGB /ρ AGB by comparing τ AGB and τ pAGB . For a stationary outflow the mass loss rate per solid angle isṀ Ω ≡ dṀ /dΩ=r 2 ρV , and the corresponding radial optical thickness is: where r is the distance from the star, r 0 is the inner radius, ρ is the gas mass density, V is the wind expansion velocity, and κ depends on the grain properties and the gas-to-dust mass ratio. Combining Eq. 2 and the expression forṀ Ω above, we derive that: From our fitting to the radial optical depths ( § 8.1), which gives for the AGB CSE inner radius, r 0,AGB > ∼ 7. ′′ 5, and adopting for the post-AGB wind inner radius, r 0,pAGB ∼0. ′′ 06 (from the circumstellar dust model by Knapp, Sandell, & Robson 1993), we obtain: where the lower limit arises from the upper limit to τ pAGB . The density ratio above depends on the poorly known inner radius of the post-AGB wind as well as on the unknown but expected differences in the dust-to-gas ratio and dust grain properties in the AGB CSE and in the fast post-AGB wind. Moreover, the intrinsic ρ AGB /ρ pAGB ratio is likely to be lower because the region beyond the lobe tips is overdense compared to the original, unshocked, AGB CSE (since it contains material swept-up by the post-AGB wind). The AGB-to-pAGB density ratio would also be significantly lower if the gas-to-dust ratio in the post-AGB wind was much higher than that in the AGB CSE, perhaps as a result of inefficient grain formation in the post-AGB wind emanating from the vicinity of a 30,000 K star. Finally, as argued earlier, for offsets <7 ′′ the radial and tangential optical depth cannot be separately quantified from τ 4861 . The smooth decrease of τ 4861 with distance from the central star may suggest that the derived extinction is dominated by the tangential component (as shown above, the radial component would increase or remain constant with the distance from the star). This is, in particular, most likely true for the west lobe, for which the fraction of scattered light in the Hα and Hβ emission lines is small. The similarity between the extinction profile for the east and west lobe as well as the expected lower extinction derived for the east (approaching) lobe, again suggest that τ 4861 is probably dominated by the tangential component. 
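As an aside, the steady-wind relations behind the density contrast estimated above can be summarized as follows; this is a hedged restatement assuming a constant expansion velocity, an $r^{-2}$ density law and a dust opacity per unit gas mass $\kappa$, and its normalization is not meant to reproduce Eq. (2) exactly:
\[
\dot M_\Omega \equiv \frac{d\dot M}{d\Omega}=r^{2}\rho V,
\qquad
\tau_r=\int_{r_0}^{\infty}\kappa\,\rho\,dr=\frac{\kappa\,\dot M_\Omega}{V\,r_0},
\]
so that, comparing both winds at a common radius,
\[
\frac{\rho_{\rm AGB}}{\rho_{\rm pAGB}}
=\frac{(\dot M_\Omega/V)_{\rm AGB}}{(\dot M_\Omega/V)_{\rm pAGB}}
=\frac{\tau_{\rm AGB}}{\tau_{\rm pAGB}}\;
 \frac{r_{0,{\rm AGB}}}{r_{0,{\rm pAGB}}}\;
 \frac{\kappa_{\rm pAGB}}{\kappa_{\rm AGB}},
\]
which is the type of relation used above to bound the AGB-to-post-AGB density contrast.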
However, the derived extinction, which may still be affected by the scattering, does not allow us to perform a detailed modelling of τ t to study the outer AGB CSE. 9. density and nebular mass 9.1. The electron density distribution We have used the [S II]λ6716/λ6731 doublet ratio to estimate the electron density, n e , along the shocked lobes of CRL 618. The spatial profiles along the nebula axis (slit N93) for the two doublet lines and their ratio are plotted in Fig. 10 (middle panel). We only show the ratios wherever the line intensities are over 3σ. There are no alignment errors to contend with since the two lines of the doublet were observed simultaneously. Within the observational errors, the [S II]λ6716/λ6731 ratio does not present a systematic variation along the east or west lobes, with a mean values of 0.55 and 0.50, respectively. The same average values are obtained for slit S93. Considering the [S II]λ6716/λ6731 ratio versus electron temperature and density contour plots by Cantó et al. (1980) and an average electron temperature T e = 10,000 K ( § 1), we derive densities for the east (west) lobe of ∼ [5-8]×10 3 (∼ 8×10 3 -10 4 ) cm −3 . These figures are close to the critical densities for the [S II]λλ6716,6731Å transitions, however, the good agreement between our results and previous estimates using different diagnostic lines (Kelly, Latter, and Rieke 1992), suggest that collisional de-excitation of the [S II] lines is not critical, at least, for most regions in the lobes. There are some non-systematic variations of the [S II] doublet ratio within the lobes which seem to be larger than their corresponding error-bars. These variations could suggest density inhomogeneities along the nebula (from ∼ 2,000 cm −3 to ∼ 2×10 4 cm −3 ). Atomic and ionized mass and ionization fraction We have estimated the total mass of atomic and ionized gas, M H and M H + , in the lobes of CRL 618 using the mean electron densities derived above and the total energy radiated by [O I](6300+6363) and Hα, respectively. The [O I] (Hα) intensity is proportional to the product of the electron density and the H (H + ) number density, assuming that the transitions are optically thin and that the electron temperature does not strongly vary within the emitting region (Gurzadyan 1997;Osterbrock 1989). Considering a mean electron temperature in the shocked lobes of 10,000 K, relative abundances of He/H = 0.1 and O/H= 3.3×10 −4 (appropriate for a C-rich star like CRL 618, Lambert et al. 1986), and the required atomic parameters (see e.g. Mendoza 1983;Gurzadyan 1997) we derive: where L is the dereddened luminosity of the line and D the distance to the source. Note that Eq. (5) is valid for electron densities smaller than the critical density of [O I], n c ∼ 10 6 cm −3 , which is the case for CRL 618. On the other hand, this equation only provides a lower limit of the total atomic mass (M H , which includes the He contribution), since we are assuming that most of the oxygen is neutral and in the ground state. In deriving Eq. (6) we have assumed the classic radiative recombination case b for T e ∼ 10,000 K (i.e. z 3 =n 3 /n e n H + =0.25×10 −20 cm 3 , where n 3 is the population of the level n = 3 of H; Gurzadyan 1997). We have separately estimated the masses of the east and west lobes using Eqs. (5) and (6). 
For the east (west) lobe we have used an average electron density of 6.5×10 3 cm −3 (9×10 3 cm −3 ), an average extinction of A V ∼ 2.7 mag (A V ∼ 3.1 mag), and the fluxes in Table 1 multiplied by: (i) a factor 2.5, which converts the flux within a 1 ′′ -wide slit to total nebular flux (estimated from the direct HST and ground-based images, Fig.1); and (ii) a factor 0.5 and 0.8, for the east and west lobe respectively, that is the fraction of the total Hα luminosity locally produced in the lobes (not scattered emission, see § 5.1 and Schmidt and Cohen 1981). We note that point (ii) applies only to the Hα luminosity and not to [O I] emission since the latter is not contaminated by scattered light. The masses derived for a distance to the source of 900 pc are given in Table 4. In this table M Hα H + for the east lobe is the mass of ionized shocked gas plus the mass of ionized gas in region A' (Fig. 1), since the latter contributes significantly to the total Hα luminosity. In contrast, M [OI] H only accounts for the mass of atomic shocked gas, since region A' is not detected in our [O I] long-slit spectra. We have estimated the mass of ionized gas in region A' from its Hα flux, which is ∼ 20% of the total flux (scattered + local emission) of the east lobe. The electron density and extinction in this region is ∼ 10 5 cm −3 and A V > 6, respectively (Sects. 5.2 and 8). Using Eq. (6) we derive The ionized mass in a spherical volume of gas with radius 0. ′′ 5, n e ∼ 10 5 cm −3 , and ionization fraction ∼1 (reasonable values for region A') is ∼ 10 −4 M ⊙ , also in agreement with our previous estimate. Subtracting the ionized mass of region A' from the total mass of the east lobe in Table 4 (column 2), the mass of ionized, shocked gas is (3-9)×10 −5 M ⊙ . Also, for an east lobe-to-west lobe ionized mass ratio similar to the corresponding atomic mass ratio, which is 1.9 (col. 1 in Table 4), M H + for the east lobe would be 4×10 −5 M ⊙ , within the previous mass range. Note that a similar ionization fraction for the east and west lobe is suggested by the similar [O I]/Hα ratio in both lobes (without including the scattered component). We have also estimated the mass of ionized material in CRL 618 geometrically, using the mean electron densities and the volume of the shocked lobes directly measured from the HST images, assuming a cylindrical geometry for the lobe components. We have separately considered the case of hollow lobes (with 0. ′′ 1-thick walls) and lobes uniformly filled with gas. Assuming that the majority of the electrons in the nebula come from H + , i.e. n e ≈ n H + , the ionized mass of the west and east lobe (excluding region A') for hollow (full) cylinders is 3.5×10 −5 M ⊙ (1.1×10 −4 M ⊙ ) and 4.2×10 −5 M ⊙ (1.3×10 −4 M ⊙ ) respectively. These values are consistent with those previously obtained from the Hα luminosity. The hollow-cylinder geometry is a better approximation to the structure of the lobes of CRL 618 than the filled-cylinder geometry: for the latter, the derived masses systematically exceed those obtained from the Hα luminosity (Table 4). The high intensity of the [O I] lines relative to Hα and, in general, the observed ratios for the rest of the lines suggest a low ionization fraction, X=n e /n H ∼ 0.03-0.1, in the lobes, by comparison with the predictions of planar shock models by Hartigan, Morse, & Raymond (1994). 
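The order of magnitude of these ionized masses can be illustrated with a generic case-B recombination-line estimate; this is not Eq. (6) itself, the effective recombination coefficient below is an assumed textbook value, and the input flux is purely hypothetical.

```python
import math

M_P = 1.67e-24            # g, proton mass
H_NU_HALPHA = 3.03e-12    # erg, Halpha photon energy
ALPHA_EFF = 1.2e-13       # cm^3 s^-1, assumed case-B effective coefficient (T_e ~ 1e4 K)
M_SUN = 1.99e33           # g
PC = 3.086e18             # cm

def ionized_mass(flux_halpha, distance_pc, n_e):
    """Ionized mass (Msun) from a dereddened Halpha flux (erg s^-1 cm^-2)."""
    lum = 4.0 * math.pi * (distance_pc * PC) ** 2 * flux_halpha
    return M_P * lum / (n_e * ALPHA_EFF * H_NU_HALPHA) / M_SUN

# Hypothetical dereddened flux of 1e-12 erg s^-1 cm^-2 at D = 900 pc, n_e = 6.5e3 cm^-3:
print(f"M(H+) ~ {ionized_mass(1.0e-12, 900.0, 6.5e3):.1e} Msun")
# -> a few 1e-5 Msun, i.e. the order of magnitude found for the shocked lobes.
```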
These values are also consistent with the upper limits to X derived from the ionized and atomic masses calculated above (X≈1.4M H + /M H ), X<0.8 and X<0.6 for the west and east lobe, respectively. An ionization fraction of only a few percent seems to be inconsistent with the high velocity of the shocks in CRL 618 (∼ 200 km s −1 ) deduced from the line width: full ionization is expected for planar shocks with V s > ∼ 80 km s −1 (Hartigan, Morse, & Raymond 1994). The fact that planar shocks are not appropriate for CRL 618 could partially (but probably not totally) resolve this problem. Fast bow-shocks produce a range of ionization fractions along the bow-shock, with a large X value at the head of the shock but smaller at the wings, where the normal component of the shock velocity is substantially smaller. The result is a smaller mean ionization fraction than that predicted by planar shocks moving at the same velocity, which produces equally high X values all along the front. It is widely accepted that the actual nebular geometry and kinematics of CRL 618 are the result of the interaction between the circumstellar material mostly resulting from the AGB mass-loss process with fast stellar winds ejected more recently, probably in the post-AGB phase ( § 1). Under this scenario the optical shock-excited, rapidly expanding lobes are made of circumstellar material that is undergoing the passage of shock fronts produced by the wind interaction. Our results support this scenario and add more details on how this process is taking place. In our opinion, the morphology of the nebula, consisting of several multiplydirected narrow lobes, suggest that the post-AGB wind is collimated rather than spherical. The presence of lobe components directed along different axes indicates multiple ejection events, that could have happened with different orientations and velocities, and/or at different epochs. The bright ripple-like structure found along the lobes could be due to instabilities in the flow or bow-shocks resulting from the episodic interaction between fast collimated jets and the slowly expanding envelope ejected during the AGB phase (see a more detailed analysis of and discussion on HST imaging in different filters in Sahai et al. 2002, in preparation). The complex spatio-kinematic structure of the lobes leads to kinematical ages ranging from ∼ 100 yr in the innermost regions of the lobes to ∼ 400 yr at the tips, without correcting for projection effects. This age difference could be real, in which case the innermost regions would have been more recently shocked, or it could be due to different inclinations of, and/or different decelerations suffered by, the lobe sub-components. Adopting a mean inclination of 24 • , the age of the optical nebula is < ∼ 180 yr. The electron density does not vary systematically along the shocked lobes of CRL 618 ( § 9.1). Since the ionization fraction is not expected to vary drastically along the lobes (note that V s , given by the line FWZI's, and the [O I]/Hα line ratio have similar values all along the nebula), a roughly constant n e suggests that the total density is constant as well. Such a density distribution is most likely inconsistent with most of the [S II] emission (and, probably, most forbidden lines) arising from the lobe interior, which is presumably filled with the fast post-AGB wind whose density is expected to decrease with the distance (C-F. Lee and Sahai, in preparation). 
We think that most of the emission arises from shocked gas in thin-walled lobes, a conclusion supported by the limb-brightened appearance of the lobes in the HST images (Fig. 1) and the better agreement between the geometrical and Hα masses of ionized gas derived in § 9.2. The spatial distribution of the radial optical thickness beyond the tips of the lobes suggests a high contrast between the density of dust in the gas filling the lobes and that in front of them, in the AGB CSE. This is consistent with the optical lobes of CRL 618 being cavities excavated in the AGB envelope by the impact of the post-AGB wind (but see § 8.2 for caveats). The forward shocks accelerating and shaping the lobes of CRL 618 are very likely in a radiative regime because the cooling time of the gas immediately behind the shock front in the lobes, t cool , is significantly smaller than the dynamical time scale of the nebula ( < ∼ 180 yr). Immediately after the shock, the gas temperature is expected to rise to ∼ 10 5 -10 6 K for a shock speed of ∼ 200 km s −1 (e.g. Dalgarno & McCray 1972). The time required by the gas to cool down to the present temperature of the shocked optically emitting lobes (of the order of 10 4 K) is t cool ∝ V 3 s /n 0 Λ(T ), where V s is the shock velocity, n 0 is the pre-shock number density of nuclei, and Λ(T ) is the cooling rate (Blondin, Fryxell, & Konigl 1990). For a shock velocity of V s < ∼ 200 km s −1 , Λ(T )=few×10 −22 erg cm −3 s −1 is appropriate (Dalgarno & McCray 1972) and t cool (yr)≈570×V 3 s,7 /n 0 , where V s and n 0 are given in 10 7 cm s −1 and cm −3 units, respectively. The density of the molecular AGB envelope at a radial distance of ∼7 ′′ , i.e. ahead of the shocked lobes, provides a lower limit to the density of the pre-shock material since the first interactions between the post-AGB wind and the AGB CSE presumably took place closer to the central star. For a mass-loss rateṀ = 5×10 −5 M ⊙ yr −1 and expansion velocity 17.5 km s −1 ( § 1), n H2 = 6×10 4 cm −3 (n 0 ∼2n H2 , assuming atomic pre-shock gas). For the numbers above, we derive t cool <0.04 yr. Even accounting for an uncertainty in the preshock density of one order of magnitude, the cooling time is much smaller than the dynamical time scale. Thin lobe walls, where most of the shocked material seems to be located, and high electron densities, up to ∼ 10 4 cm −3 , are both expected in the case of radiative shocks: the low thermal pressure after the shock passage leads to the collapse of the shocked material in a very small and dense region after the shock (see, e.g., Frank 1999, for a review). The large length-to-width ratio of the lobes, ∼ 7, is also in better agreement with the shocks being radiative, since for the adiabatic case the thermal pressure of the shocked gas is very high, leading to inflated bubbleshaped lobes. The nature of the reverse shocks, propagating in the post-AGB wind itself towards the nebula center, is uncertain because it depends on the unknown physical properties of the post-AGB wind. For ρ AGB /ρ pAGB ∼ 600 ( § 8.2) and a reverse shock velocity of the order of 100 km s −1 , the cooling time of the shocked post-AGB gas would be < ∼ 12 yr, and therefore the reverse shocks would also be in a radiative regime. In the absence of a presently active energy input, the dense gas in the optical lobes of CRL 618 should be at a temperature significantly less than 10 4 K. 
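As a minimal numerical check of the time-scale arguments above (using the representative values quoted in the text; the hydrogen recombination coefficient is an assumed standard total value at 10^4 K):

```python
# Post-shock cooling time, t_cool(yr) ~ 570 * V_s,7^3 / n_0 (expression given above),
# for V_s ~ 200 km/s and a pre-shock density n_0 ~ 2 n_H2 ~ 1.2e5 cm^-3.
v_s7 = 2.0                                    # shock speed in units of 1e7 cm/s
n_0 = 1.2e5                                   # cm^-3
t_cool = 570.0 * v_s7 ** 3 / n_0
print(f"t_cool ~ {t_cool:.2f} yr")            # ~0.04 yr << dynamical age (~180 yr)

# Recombination time of the shocked, ionized gas for n_e ~ 1e4 cm^-3 and an
# assumed total recombination coefficient alpha_H(1e4 K) ~ 4.2e-13 cm^3 s^-1.
SEC_PER_YR = 3.156e7
t_rec = 1.0 / (1.0e4 * 4.2e-13) / SEC_PER_YR
print(f"t_rec ~ {t_rec:.0f} yr")              # of order 10 yr
```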
The high electron densities in the lobes of CRL 618 would have led to a very quick recombination of the presently shocked and ionized gas: the recombination time scale is about 10 yr for a total hydrogen recombination coefficient α H (10 4 K) = 4.18×10 −13 cm −3 s −1 (see, e.g., Kwok 2000, and references therein). Also, the time for the shocked gas to cool down one order of magnitude (to ∼ 1000 K) is < ∼ 2 yr, using a cooling rate Λ > ∼ 10 −25 erg cm −3 s −1 , appropriate for T e ∼ 10 4 K and ionization fraction > ∼ 0.01 (Dalgarno & McCray 1972). Such a short cooling time implies that, in the absence of a continuous ionization/heating source, the gas in the lobes of CRL 618 should have become atomic and cool in a time scale much smaller than the age of nebula, < ∼ 180 yr. Even if we have overestimated its kinematical age, CRL 618 is at least about 30 years old, since it was observed with similar characteristics in 1973 by, e.g., Westbrook et al. (1975). The most likely energy source for maintaining a high temperature (>1000 K) in the lobes is an ongoing interaction between the post-AGB wind and the circumstellar material (note that the central star is not hot enough to ionize and and heat the lobes). This implies that the fast wind, which is ultimately responsible for the shaping and evolution of its circumstellar envelope, is currently active in CRL 618. This fast wind is not necessarily stationary, but may rather be pulsed (episodic) with a period smaller but comparable to the recombination time scale of the shocked gas (∼ 10 yr). In this case, the shocked material (initially fully ionized) would have time to partially recombine between pulses, which might be a contributing factor to maintaining a low ionization fraction in the lobes of CRL 618 in spite of the high velocity of the shocks. On the other hand, to keep the temperature of the shocked gas above T e > 10 3 K, the pulse period should be > 2 yr. The idea of a pulsating fast wind is consistent with the presence of the multiple bright, bow-shaped and ripple-like structures seen along the lobes of CRL 618 (Fig. 1) Future circumstellar evolution of CRL 618 The shocked optical lobes of CRL 618 could evolve into a fast, bipolar molecular outflow if the time scale of molecular reformation in the shocked gas is smaller than the time required by the expanding ionization front (produced by the central star) to reach and fully ionize the whole nebula. The present radiative, forward shocks will continue propagating through the molecular cocoon (traced by CO), accelerating and shaping more and more material of the AGB CSE. We are observing the beginning of this process, when the shocked material is still hot (and, therefore, primarily visible in the optical), and its mass is still low compared to that of the unshocked envelope. The hot shocked material will cool down, allowing molecule reformation, and would be observed as an extended, bipolar outflow moving at high velocity. This is what presumably already happened to a part of the shocked material: the fast molecular bipolar outflow observed in the innermost parts of CRL 618 ( § 1, discussed in detail in Sánchez Contreras et al. 2002 -paper II, in preparation). If the shocks have sufficient energy to accelerate most of the AGB envelope, the mass of the future fast, bipolar outflow of CRL 618 could be a significant fraction of the total mass of the AGB CSE, ∼ 1.5 M ⊙ . 
The interaction of this fast and massive molecular jet with the interstellar medium or the low density, outermost parts of the CRL 618 CSE could also lead to new shocks that would be observed, in their first stages, through optical line emission surrounding the molecular bipolar outflow. In our opinion, the possible future evolution of CRL 618 described above is what already occurred in another well studied protoplanetary nebula, OH 231.8+4.2. This object shows a very collimated and fast molecular outflow which was presumably shaped and accelerated by radiative shocks resulting from a two-wind interaction process ∼ 1000 yr ago (Sánchez Contreras et al 2000; Alcolea et al. 2001); this outflow should have been in the past a very intense emitter of recombination and forbidden lines at short wavelengths, as the optical lobes in CRL 618. Surrounding the collimated molecular outflow of OH 231.8+4.2, there is a wide bow-shaped nebula of shock-excited warm gas which is observed in the visible (Bujarrabal et al. 2002;Sánchez Contreras et al 2000). This component is believed to result from the hydrodynamical interaction between the bipolar CO outflow and the tenuous circumstellar material, the shocks generated in this interaction being more likely in an adiabatic regime. The central star of OH 231.8+4.2 is still very cool (∼ 2500 K) and therefore is not able to ionize the circumstellar material. The evolution of the optical lobes of CRL 618 into a fast molecular outflow (like in OH 231.8+4.2) in the future requires that once the wind interaction stops, the time required by the ionization front, produced by the hot central star, to reach the lobes (t ion ) be larger than the molecular reformation time scale (t mol ). In the relatively dense envelopes of PPNe, the time scale for molecule formation is < ∼ 1000 yr, as deduced from the widespread existence of fast (V > ∼ 100 km s −1 ) molecular outflows with kinematical ages < ∼ 10 3 yr (see, e.g., Bujarrabal et al. 2001): note that molecules are totally destroyed by shocks at speeds > ∼ 30 km s −1 (Hollenbach & McKee 1989) and therefore molecules moving at such high velocities have most likely been reformed in the post-shock gas. The kinematical age of the compact molecular outflow of CRL 618 itself, moving at > ∼ 100 km s −1 , is only of the order of 10 2 yr ( § 1), so, in general, we can expect molecule reformation to occur in t mol ∼ 10 2 -10 3 yr. This value is also in reasonable agreement with theoretical estimates of t mol in dense, n H ∼ 10 6 cm −3 , post-shock gas (Hollenbach & McKee 1989). The time required by the ionization front to reach the lobes, t ion , depends on the relative expansion velocity of the front and the lobes. The expansion velocity of the ionization front in CRL 618, estimated from radio continuum mapping with high-angular resolution in different epochs, has decreased with time: from 50-74(D/0.9 kpc) km s −1 (Kwok & Feldman 1981) to 26(D/0.9 kpc) km s −1 (Spergel, Giuliani, & Knapp 1983). Expanding at this rate, the ionization front will never reach the lobes, which are moving much faster (up to 200 km s −1 ), and the molecules will reform in the shocked gas sooner or later. If the shocked lobes are decelerated by interaction with the surrounding circumstellar gas and/or the velocity of the ionization front increases (due, e.g., to the evolution of the central star, which will become hotter and hotter, and the dilution of the circumstellar gas) then the UV stellar photons will be able to fully ionize the lobes of CRL 618. 
The ionization time-scale depends on several unknown physical parameters of the central star, such as its mass, temperature, and mass-loss rate, and on their evolution with time. The shocked gas would be observed as a molecular outflow during t ion − t mol whenever t ion > t mol ; otherwise CRL 618 will never develop a massive bipolar molecular outflow (like, e.g., OH 231.8+4.2).

Conclusions

We have obtained ground-based, long-slit spectra in the ranges [6233-6806] Å and [4558-5331] Å for three different (1′′-wide) slit positions in the PPN CRL 618. Based on the analysis of these spectra and direct (Hα and continuum) images, we find the following results:

Fig. 1.- Direct imaging of the PPN CRL 618. a) Narrow-band, ground-based image of the continuum emission at 6616 Å. Contours are 3σ, 5σ, 10σ, 15σ, and from 20σ to 120σ by 20σ, with σ = 7×10 −19 erg s −1 cm −2 Å −1 arcsec −2 ; b) Narrow-band, ground-based image of the Hα emission line (continuum subtracted). Contours are 5σ, 10σ, 20σ, 40σ, 150σ, and from 300σ to 2300σ by 500σ, with σ = 6.5×10 −17 erg s −1 cm −2 arcsec −2 ; c) Narrow-band WFPC2/HST image in the light of Hα+continuum. The continuum contribution in this image is less than ∼ 3% in any region.

Fig. 2.- Long-slit spectra of selected emission lines in CRL 618 observed with grating R1200Y for slits N93 (top), S93 (middle), and PA3 (bottom). The HST and ground-based Hα images of the nebula are shown in the left-most panels (top and bottom, respectively). The slits used for spectroscopy are superimposed on the ground-based image. Contours in the spectra are 3σ × 1.8^(i−1) from i = 1 to 15, with σ = 10 −16 erg s −1 cm −2 Å −1 arcsec −2 for N93 and σ = 1.4×10 −16 erg s −1 cm −2 Å −1 arcsec −2 for S93 and PA3. The LSR velocity scale (in km s −1 ) is shown over the N93 Hα spectra. The systemic velocity (∼ −21 km s −1 ) is indicated by the vertical line.

Fig. 5 (caption, partial).- In the top panel, we plot the total spectrum with the spatial and spectral resolutions degraded by a factor of 70 and by 50%, respectively, with respect to the nominal values, in order to show the continuum and the weakest emission lines. In the lower panel, long-slit spectra for selected lines are shown. As in Fig. 2, these spectra have been smoothed within a 3×3 pixel box with no significant degradation (less than 4 percent) of the nominal spatial and spectral resolution (see § 2.2).

Caption fragment (cf. Table 2).- Filled triangles (squares) are used for the east (west) lobe, and open circles for average values in the whole nebula.

Fig. 7 (caption, partial).- The surrounding AGB envelope, which is radially expanding at velocity V exp, is represented by the grey circle. The light arising in the H II region escapes preferentially through the lobes, illuminating a compact region beyond the east lobe tip (region C') that scatters light into the line of sight, leading to feature C. The light from the H II region is also scattered by rapidly outflowing dust inside or in the walls of the lobes, leading to feature B. The different optical depth components (τr, τt, and τsc; see § 8.1) are indicated. We do not intend to accurately reproduce the actual shape or multiple-lobe character of CRL 618 or the relative dimensions of the different nebular components.
Harder's conjecture I Let $f$ be a primitive form with respect to $SL_2(Z)$. Then we propose a conjecture on the congruence between the Klingen-Eisenstein lift of the Duke-Imamoglu-Ikeda lift of $f$ and a certain lift of a vector valued Hecke eigenform with respect to $Sp_2(Z)$. This conjecture implies Harder's conjecture. We prove the above conjecture in some cases. k = ( n k, . . . , k)) for Sp n (Z), respectively. (For the definition of modular forms, see Section 2). Let f (z) = ∞ m=1 a(m, f ) exp(2π √ −1mz) be a primitive form in S 2k+j−2 (SL 2 (Z)), and suppose that a 'big prime' p divides the algebraic part of L(k + j, f ). Then, Harder [17] conjectured that there exists a Hecke eigenform F in S (k+j,k) (Sp 2 (Z)) such that λ F (T (p)) ≡ a(p, f ) + p k−2 + p j+k−1 (mod p ) for any prime number p, where λ F (T (p)) is the eigenvalue of the Hecke operator T (p) on F , and p is a prime ideal of Q(f ) · Q(F ) lying above p. One of main difficulties in treating this congruence arises from the fact that this is not concerning the congruence between Hecke eigenvalues of two Hecke eigenforms of the same weight. Indeed, the right-hand side of the above is not the Hecke eigenvalue of a Hecke eigenform if j > 0. Several attempts have been made to overcome this obstacle. Ibukiyama [21], [23] proposed a half-integral weight version of Harder's conjecture given as congruences of Hecke eigenforms and related it to the original Harder's conjecture through his conjectural Shimura type correspondence for vector valued Siegel modular forms of degree two (and this Shimura type conjecture was now proved by H. Ishimoto [34]). This explains the Harder conjecture for odd k ( [26]) and the proved example of congruence in [23] means the Harder conjecture for (k, j) = (5,18). In [5], Bergström and Dummigan, among other things, reformulated Harder's conjecture as congruence between a certain induced representation of π f and a cuspidal automorphic representation of GSp (2). In [12], Chenevier and Lannes gave several congruences between theta series of even unimodular lattices, and using Arthur's endoscopic classification and Galois representation theoretic method, they, among other things, proved Harder's conjecture for (k, j) = (10,4). In this paper we consider a conjecture concerning the congruence between two liftings to higher degree of Hecke eigenforms (of integral weight) of degree two. More precisely, for the f above, let I n (f ) be the Duke-Imamoglu-Ikeda lift of f to the space of cusp forms of weight j 2 + k + n 2 − 1 for Sp n (Z) with n even. For a sequence k = n j 2 + k + n 2 − 1, . . . , j 2 + k + n 2 − 1, n j 2 + 3n 2 + 1, . . . , j 2 + 3n 2 + 1 with k ≥ n + 2, let [I n (f )] k be the Klingen-Eisenstein lift of I n (f ) to M k (Sp 2n (Z)). Then, we propose the following conjecture: for any integral Hecke operator T . Therefore, to prove the above conjecture, it suffices to show that G is a lift of type A (I) . Here we expect that G can be taken as A (I) and indeed we will see that in the cases (k, j) = (10,4), (14,4) and (4,24) using the dimension formula due to Taibi https://otaibi.perso.math.cnrs.fr/dimtrace and the numerical tables of Hecke eigenvalues due to Poor-Ryan-Yuen [50] and Ibukiyama-Katsurada-Poor-Yuen [30]. As a result, we prove Conjecture 4.5 and so Harder's conjecture in those cases. This paper is organized as follows. In Section 2, we give a brief summary of Siegel modular forms, especially about their Q-structures or Z-structures. 
In Section 3, after giving a summary of several L-values, we state Harder's conjecture. In Section 4, we introduce several lifts, and among other things define the lift of A (I) -type of a vector valued modular form in S (k+j,k) (Sp 2 (Z)), and propose a conjecture and explain how this conjecture implies Harder's conjecture. In Section 5, we consider the pullback formula of the Siegel Eisenstein series with differential operators. In Section 6, we consider the congruence for vector valued Klingen-Eisenstein series, which is a generalization of [39], and explain how the assumption that p divides the algebraic part of L(k+j, f ) for f ∈ S 2k+j−2 (SL 2 (Z)) gives the congruence between [I n (f )] k and another Hecke eigenform in M k (Sp 2n (Z)). In Section 7, we give a formula for the Fourier coefficients of the Klingen-Eisenstein series, from which we can confirm some assumption in our main results. In Section 9, we state our main results, which confirm our conjecture, and so Harder's. In a subsequent paper, we will prove Conjecture 4.5 and so Harder's in more general setting, that is, in the case k is even and j ≡ 0 mod 4, that is, we will prove these conjectures without using the dimension formula or the computation of Hecke eigenvalues of Siegel modular forms (cf. [4]). Acknowledgments. The authors thank S. Böcherer, G. Chenevier, N. Dummigan, G. Harder, T. Ikeda, N. Kozima and S. Sugiyama for valuable comments. They also thank the referee for a careful and intelligent reading of their paper and for the numerous helpful suggestions to improve the exposition. Notation. Let R be a commutative ring. We denote by R × the unit group of R. We denote by M mn (R) the set of m × n-matrices with entries in R. In particular put M n (R) = M nn (R). Put GL m (R) = {A ∈ M m (R) | det A ∈ R × }, where det A denotes the determinant of a square matrix A. For an m×n-matrix X and an m×m-matrix A, we write A[X] = t XAX, where t X denotes the transpose of X. Let Sym n (R) denote the set of symmetric matrices of degree n with entries in R. Furthermore, if R is an integral domain of characteristic different from 2, let H n (R) denote the set of half-integral matrices of degree n over R, that is, H n (R) is the subset of symmetric matrices of degree n with entries in the field of fractions of R whose (i, j)-component belongs to R or 1 2 R according as i = j or not. We say that an element A of M n (R) is non-degenerate if det A = 0. For a subset S of M n (R) we denote by S nd the subset of S consisting of non-degenerate matrices. If S is a subset of Sym n (R) with R the field of real numbers, we denote by S >0 (resp. S ≥0 ) the subset of S consisting of positive definite (resp. semi-positive definite) matrices. The group GL n (R) acts on the set Sym n (R) by GL n (R) × Sym n (R) (g, A) −→ A[g] ∈ Sym n (R). Let G be a subgroup of GL n (R). For a G-stable subset B of Sym n (R) we denote by B/G the set of equivalence classes of B under the action of G. We sometimes use the same symbol B/G to denote a complete set of representatives of B/G. We abbreviate B/GL n (R) as B/∼ if there is no fear of confusion. Let R be a subring of R. Then two symmetric matrices A and A with entries in R are said to be equivalent over R with each other and write A ∼ R A if there is an element X of GL n (R ) such that A = A[X]. We also write A ∼ A if there is no fear of confusion. 
For square matrices X and Y we write For an integer D ∈ Z such that D ≡ 0 or ≡ 1 mod 4, let d D be the discriminant of Q ( √ D), We call an integer D a fundamental discriminant if it is the discriminant of some quadratic extension of Q or 1. For a fundamental discriminant D, let D * be the character corresponding to Q( √ D)/Q. Here we make the convention that D * = 1 if D = 1. For an integer D such that D ≡ 0 or ≡ 1 mod 4, we define D * = d D * . We put e(x) = exp(2π √ −1x) for x ∈ C, and for a prime number p we denote by e p ( * ) the continuous additive character of Q p such that e p (x) = e(x) for x ∈ Z[p −1 ]. Let K be an algebraic number field, and O = O K the ring of integers in K. For a prime ideal p we denote by K p and O p the p-adic completion of K and O, respectively, and put O (p) = O p ∩K. For a prime ideal number p of O, we denote by ord p ( * ) the additive valuation of K p normalized so that ord p ( ) = 1 for a prime element of K p . Moreover for any element a, b ∈ O p we write b ≡ a (mod p) if ord p (a − b) > 0. Siegel modular forms We denote by H n the Siegel upper half space of degree n, i.e., For any ring R and any natural integer n, we define the group GSp n (R) by GSp n (R) = {g ∈ M 2n (R) | gJ n t g = ν(g)J n with some ν(g) ∈ R × }, where J n = 0n −1n 1n 0n . We call ν(g) the symplectic similitude of g. We also define the symplectic group of degree n over R by Sp n (R) = {g ∈ GSp n (R) | ν(g) = 1}. We put Γ (n) = Sp n (Z) for the sake of simplicity. Now we define vector valued Siegel modular forms of Γ (n) . Let (ρ, V ) be a polynomial representation of GL n (C) on a finite dimensional complex vector space V . We fix a Hermitian inner product * , * on V such that ρ(g)v, w = v, ρ( tḡ )w for g ∈ GL n (C), v, w ∈ V. For a positive integer N , we define the principal congruence subgroup Γ (n) (N ) of Γ (n) of level N by is said to be a congruence subgroup if Γ contains Γ (n) (N ) with some N . By definition, Γ (n) (N ) is a congruence subgroup. Another example of congruence subgroup is the group Γ Let Γ be a congruence subgroup of Γ (n) . We say that F is a holomorphic Siegel modular form of weight ρ with respect to Γ if F is holomorphic on H and F | ρ [γ] = F for any γ ∈ Γ (with the extra condition of holomorphy at all the cusps if n = 1). We denote by M ρ (Γ ) the space of modular forms of weight ρ with respect to Γ , and by S ρ (Γ ) its subspace consisting of cusp forms. A modular form F ∈ M ρ (Γ ) has the following Fourier expansion Let λ = (k 1 , k 2 , . . .) be a finite or an infinite sequence of non-negative integers such that k i ≥ k i+1 for all i and k m = 0 for some m. We call this a dominant integral weight (or the Young diagram). We call the biggest integer m such that k m = 0 a depth of λ and write it by depth(λ). It is well known that the set of dominant integral weights λ with depth(λ) ≤ n corresponds bijectively to the set of irreducible polynomial representations of the GL n (C). We denote this representation by (ρ n,λ , V n,λ ). We also denote it by (ρ k , V k ) with k = (k 1 , . . . , k n ) and call it the irreducible polynomial representation of GL n (C) of highest weight k. We then set M k (Γ ) = M ρ k (Γ ) and S k (Γ ) = S ρ k (Γ ). We say F is a modular form of weight k if it is a modular form of weight ρ k . If k = ( n k, . . . , k), we simply write M k (Γ ) = M k (Γ ) and S k (Γ ) = S k (Γ ). We note that M (k+j,k) (Γ (2) ) = M det k ⊗Sym j (Γ (2) ) and S (k+j,k) (Γ (2) where Sym j is the j-th symmetric tensor representation of GL 2 (C). 
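For completeness, the slash action written as $F|_{\rho}[\gamma]=F$ above has the usual shape (a standard restatement; conventions on the automorphy factor may differ slightly from those used here):
\[
\bigl(F|_{\rho}[\gamma]\bigr)(Z)\;=\;\rho(CZ+D)^{-1}\,F\bigl((AZ+B)(CZ+D)^{-1}\bigr),
\qquad
\gamma=\begin{pmatrix} A & B\\ C & D \end{pmatrix}\in \mathrm{Sp}_n(\mathbb{R}),\quad Z\in\mathbb{H}_n .
\]
In particular, for $\rho=\det^{k}$ this reduces to the familiar scalar-valued action $(F|_{k}[\gamma])(Z)=\det(CZ+D)^{-k}F\bigl((AZ+B)(CZ+D)^{-1}\bigr)$.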
In general, for the k = (k 1 , . . . , k n ) above, we write k = (k 1 − k n , . . . , k n−1 − k n , 0). Then, we have ρ k ∼ = det kn ⊗ρ k with (ρ k , V k ) an irreducible polynomial representation of highest weight k . Here we understand that (ρ k , V k ) is the trivial representation on C if k 1 = · · · = k n−1 = k n . Moreover, we may regard an element F ∈ M k (Γ ) as a V k -valued holomorphic function on H such that F | det kn ⊗ρ k [γ] = F for any γ ∈ Γ (with the extra condition of holomorphy at all the cusps if n = 1). For a representation (ρ, V ) of GL n (C), we denote by F(H n , V ) the set of Fourier series F (Z) on H n with values in V of the following form: For F (Z) ∈ F(H n , V ) and a positive integer r ≤ n we define Φ(F )( We make the convention that F(H 0 , V ) = V and Φ n 0 (F ) = a(O n , F ). Then, Φ(F ) belongs to F(H r , V ). For a representation (ρ, V ) of GL n (C), we denote by F(H n , V ) = F(H n , (ρ, V )) the subset of F(H n , V ) consisting of elements F (Z) such that the following condition is satisfied: Now let = (l 1 , . . . , l n ) be a dominant integral weight of length n of depth m. Then we realize the representation space V in terms of bideterminants (cf. [31]). Let U = (u ij ) be an m × n matrix of variables. For a positive integer a ≤ m let SI n,a denote the set of strictly increasing sequences of positive integers not greater than n of length a. For each J = (j 1 , . . . , j a ) ∈ SI n,a we define U J as Then we say that a polynomial P (U ) in U is a bideterminant of weight if P (U ) is of the following form: Let BD be the set of all bideterminants of weight . Here we make the convention that BD = {1} if = (0, . . . , 0). For a commutative ring R and an R-algebra S let S[U ] denote the R-module of all S-linear combinations of P (U ) for P (U ) ∈ BD . Then we can define an action of GL n (C) on C[U ] as and we can take the C-vector space C[U ] as a representation space V of ρ under this action. Let m ≤ n − 1 be a non-negative integer and U = (u ij ) be an m × n matrix of variables. Let k = (k 1 , . . . , k n ) with k 1 ≥ · · · ≥ k m > k m+1 = · · · = k n and k = (k 1 − k m+1 , . . . , k m − k m+1 , n−m 0, . . . , 0). Here we make the convention that k = (k 1 , . . . , k 1 ) and k = (0, . . . , 0) if m = 0. Then under this notation and convention, M k (Γ (n) ) can be regarded as a C-sub-vector space of Hol(H n )[U ] k , where Hol(H n ) denotes the ring of holomorphic functions on H n . We sometimes write F (Z)(U ) for F (Z) ∈ M k (Γ (n) ). Moreover, the Fourier expansion of F (Z) ∈ M k (Γ (n) ) can be expressed as Let r be an integer such that m ≤ r ≤ n and let l = (k 1 , . . . , k r−1 , k r ) and l = ( . For the m × n matrix U , let U (r) = (u ij ) 1≤i≤m,1≤j≤r and put W = C[U (r) ] l . Then we can define a representation (τ , W ) of GL r (C). The representations (ρ k , V k ) and (τ , W ) satisfy the following conditions: Suppose that F (Z) belongs to F(H n , V k ). Then, by (K0), ≥0 . This implies that Φ n r (F ) belongs to F(H r , W ). We easily see that Φ n r (F ) belongs to F(H r , W ), and therefore Φ n r sends F(H n , V k ) to F(H r , W ). It is easily seen that it induces a mapping from M ρ (Γ (n) ) to M τ (Γ (r) ), where ρ = det kn ⊗ρ k and τ = det kn ⊗τ . Let ∆ n,r be the subgroup of Γ (n) defined by For F ∈ S τ (Γ (r) ) the Klingen-Eisenstein series [F ] ρ τ (Z, s) of F associated to ρ is defined by det Im(Z) det Im(pr n r (Z)) s F (pr n r (Z))| ρ γ. Here pr n r (Z) = Z 1 for Z = . Suppose that k n is even and 2Re(s) + k n > n + r + 1. 
Then, [F ] ρ τ (Z, s) converges absolutely and uniformly on H n . This is proved by [40] in the scalar valued case, and can be proved similarly in general case. If [F ] k (Z, s) can be continued holomorphically in the neighborhood of 0 as a function of s, we put is holomorphic as a function of Z, it belongs to M k (Γ (n) ), and we say that it is the Klingen-Eisenstein lift of F to M k (Γ (n) ). In particular, if k n > n + r + 1, then [F ] ρ τ (Z, s) is holomorphic at s = 0 as a function of s, and [F ] ρ τ (Z, 0) belongs to M k (Γ (n) ), and Φ ρ τ ([F ] ρ τ ) = F . We note that [F ] ρ τ (Z) is not necessarily a holomorphic as a function of Z if k n ≤ n + r + 1. We define E n,k (Z, s) as and call it the Siegel-Eisenstein series of weight k with respect to Γ (n) . In particular, if k = ( n k, . . . , k) with k even we write E n,k (Z, s) for E n,k (Z, s). If k > 0, then E n,k (Z, s) can be continued meromorphically to the whole s-plane as a function of s. Let k = ( m k + l, . . . , k + l, n−m k, . . . , k) such that k, l ≥ 0, and put ρ = det k ⊗ρ k and τ = det k ⊗ρ l with k = ( m l, . . . , l, 0, . . . , 0) and l = ( m l, . . . , l). Then, for F ∈ S τ (Γ (m) ) we can define the Klingen-Eisenstein series [F ] ρ k τ (Z, s) of F associated to ρ k if k is even and 2Re(s) + k > n + m + 1. We note that C[U (m) ] l is a subspace of C[U ] k spanned by (det U (m) ) l , and hence we have a natural isomorphism ). We state the holomorphy of the Klingen-Eisenstein series. Proposition 2.1. Let k be an even integer. . Then, (ρ |GL n (Q), V ) is a representation of GL n (Q), and V ⊗ C = V . We consider a Z structure of V . To do this, we fix a basis S = S = {P } of Z[U ] . We note here that the bideterminants are not linearly independent over Z and even over C in general, so the set BD is not necessarily a basis of Z[U ] . Let R be a subring of C. Since the set S is also linearly independent over C, an element a of R[U ] is uniquely written as a = P ∈S a P P with a P ∈ R. Let K be a number field, and O the ring of integers in K. For a prime ideal p of O and a = a(U ) = P ∈S a P P ∈ K[U ] with a P ∈ K, define ord p (a) = min P ∈S ord p (a P ). We say that p divides a if ord p (a) > 0. (1) The definition of ord p ( * ) does not depend on the choice of a basis of L. We note that p does not divide a = a(U ) if p does not divide a(U 0 ) for some element U 0 of M m,n (O). (2) There is no canonical choice of a basis of V . But several standard choices are known. One of them is a basis associated with the semi-standard Young tableaux (cf. [16]). We note that that it is also a basis of L. This can be proved by a careful analysis of the proof of [16, (4.5a)] combined with [16, (4.6a)]. To make our formulation smooth, we sometimes regard a modular form of scalar weight k for Γ (n) as a function with values in the one-dimensional vector space spanned by det U l with a non-negative integer l ≤ k, where U is an n × n matrix of variables. Let U 1 and U 2 be m × n 1 and m × n 2 matrices, respectively, of variables and for a commutative ring R and an R-algebra S let Here we make the convention that P j (U i ) ∈ (det U i ) k 1 −l C if n i = m and k 1 = · · · = k m as stated above. Then, as a representation space We note that Here we make the convention that P τ i (U 2 ) = (det U i ) k 1 −l if n i = m and k 1 = · · · = k m . Therefore, M is a lattice of W and has also a Q-structure and Z-structure and we can define ord p (a ⊗ b) for a ⊗ b ∈ W K . 
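A toy example of the valuation just defined (an added illustration): take $K=\mathbb{Q}$, $\mathfrak p=(p)$ and a basis $\mathcal S=\{P_1,P_2\}$. If $a=p\,P_1+p^2P_2$ then $\mathrm{ord}_{\mathfrak p}(a)=1$, so $\mathfrak p$ divides $a$; if instead $a=P_1+p\,P_2$ then $\mathrm{ord}_{\mathfrak p}(a)=0$, and indeed, if $P_1(U_0)$ is a $p$-unit for some $U_0\in M_{m,n}(\mathcal O)$, then $a(U_0)=P_1(U_0)+p\,P_2(U_0)$ is a $p$-unit as well, consistently with the remark above.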
If dim C V 1 = 1, then we identify V 1 , V 1 and L 1 with C, Q and Z, respectively, and for a, b ∈ V 1 and w ∈ V 2 , we write a ⊗ b and a ⊗ w as ab and aw, respectively through the identifications The tensor product M k 1 (Γ (n 1 ) ) ⊗ M k 2 (Γ (n 2 ) ) is regarded as a C-subspace of (Hol(H n 1 ) ⊗ Hol(H n 2 ))[U 1 , U 2 ] k 1 ,k 2 . Harder's conjecture In this section we review several arithmetical properties of Hecke eigenvalues and L values of modular forms, then state the original Harder's conjecture in [17]. In the later section, we will treat a generalized version of the conjecture. Let L n = L(Γ (n) , GSp + n (Q) ∩ M 2n (Z)) be the Hecke algebra over Z associated to the Hecke pair (Γ (n) , GSp + n (Q) ∩ M 2n (Z)) and for a subring R of C put L n (R) = L n ⊗ Z R. For an element T = Γ (n) gΓ (n) ∈ L n (C), let be the coset decomposition. Then, for a modular form F ∈ M k (Γ (n) ) we define F |T as This defines an action of the Hecke algebra L n (C) on M k . The operator F → F |T with T ∈ L n (C) is called the Hecke operator. We say that F is a Hecke eigenform if F is a common eigenfunction of all Hecke operators T ∈ L n (C). Then we have We call λ F (T ) the Hecke eigenvalue of T with respect to F . For a Hecke eigenform F in M k (Γ (n) ), we denote by Q(F ) the field generated over Q by all the Hecke eigenvalues λ F (T ) with T ∈ L n (Q) and call it the Hecke field of F . For two Hecke eigenforms F and G we sometimes write Q(F, G) = Q(F )Q(G). We say that an element T ∈ L n (Q) is integral with respect to M k (Γ (n) ) if F |T ∈ M k (Γ (n) )(Z) for any F ∈ M k (Γ (n) )(Z). We denote by L (k) n the subset of L n (Q) consisting of all integral elements with respect to M k (Γ (n) ). The following proposition can be proved in the same manner as Proposition 4.2 of [36]. Furthermore, for i = 1, . . . , n and a prime number p put As is well known, L n (Q) is generated over Q by T (p), T i (p 2 ) (i = 1, . . . , n), and [p −1 ] n for all p. We note that T n (p 2 ) = [p] n . We note that L n is generated over Z by T (p) and T i (p 2 ) (i = 1, . . . , n) for all p. Let F be a Hecke eigenform in M k (Γ (n) ). As is well known, Q(F ) is a totally real algebraic number field of finite degree. Now, first we consider the integrality of the eigenvalues of Hecke operators. For an algebraic number field K, let O K denote the ring of integers in K. The following assertion can be proved in the same manner as in [48]. (See also [36].) Proposition 3.2. Let k n ≥ n + 1. Let F be a Hecke eigenform in S k (Γ (n) ). Then λ F (T ) belongs to O Q(F ) for any T ∈ L (k) n . Let L n,p = L(Γ (n) , GSp n (Q) + ∩ GL 2n (Z[p −1 ])) be the Hecke algebra associated with the pair (Γ (n) , GSp n (Q) + ∩ GL 2n (Z[p −1 ])). L n,p can be considered as a subalgebra of L n , and is generated over Q by T (p) and T i (p 2 ) (i = 1, 2, . . . , n), and [p −1 ] n . We now review the Satake p-parameters of L n,p ; let P n = Q[X ± 0 , X ± 1 , . . . , X ± n ] be the ring of Laurent polynomials in X 0 , X 1 , . . . , X n over Q. Let W n be the group of Q-automorphisms of P n generated by all permutations in variables X 1 , . . . , X n and by the automorphisms τ 1 , . . . , τ n defined by Moreover, a groupW n isomorphic to W n acts on the set T n = (C × ) n+1 in a way similar to the above. Then there exists a Q-algebra isomorphism Φ n,p , called the Satake isomorphism, from L n,p to the W n -invariant subring P Wn n of P n . Then for a Q-algebra homomorphism λ from L n,p to C, there exists an element (α 0 (p, λ), α 1 (p, λ), . . . 
, α n (p, λ)) of T n satisfying λ(Φ −1 n,p (F (X 0 , X 1 , . . . , X n ))) = F (α 0 (p, λ), α 1 (p, λ), . . . , α n (p, λ)) for F ∈ P Wn n . The equivalence class of (α 0 (p, λ), α 1 (p, λ), . . . , α n (p, λ)) under the action of W n is uniquely determined by λ. We call this the Satake parameters of L n,p determined by λ. Now let F be a Hecke eigenform in M k (Γ (n) ). Then for each prime number p, F defines a Q-algebra homomorphism λ F,p from L n,p to C in a usual way, and we denote by α 0 (p), α 1 (p), . . . , α n (p) the Satake parameters of L n,p determined by F. For later purpose, we consider special elements in L n,p ; the polynomials r n (X 1 , . . . , X n ) = n i=1 (X i + X −1 i ) and ρ n (X 0 , X 1 , . . . , X n ) = X 2 0 X 1 X 2 · · · X n r n (X 1 , . . . , X n ) are elements of P Wn n , and thus we can define elements Φ −1 n,p (r n (X 1 , . . . , X n )) and Φ −1 n,p (ρ n (X 0 , X 1 , . . . , X n )) of L n,p , which are denoted by r n,1 (p) and ρ n,1 , respectively. (1) We have ρ n,1 (p) = p n(n+1)/2 [p] n · r n,1 (p), and in particular (2) Let k = (k 1 , . . . , k n ) with k 1 ≥ · · · ≥ k n ≥ 0. Suppose that k n ≥ (n + 1)/2. Then r n,1 (p) := p k 1 −1 r n,1 (p) belongs to L (k) n . Proof. The assertion (1) can easily be checked remarking that Φ n,p (p n(n+1)/2 [p] n ) = X 2 0 X 1 · · · X n (cf. [1,Lemma 3.3.34]). We will prove the assertion (2). Put M n (Z) nd and L ∈ GL n (Z)DGL n (Z), we define the set B(L, M ) as where ν = ν(M ). Then # B(L, M )/L does not depend on the choice of L, and is uniquely determined by M , which will be denoted by α(M ), and in particular put β(D a,b ) = α(p 2 D * a,b ⊥D a,b ). Then, by [1, p.160], as an element of L n,∞ , ρ n,1 (p) is expressed as have Let k = (k 1 − k n , . . . , k n−1 − k n , 0) and put m = depth(k ). Let F (Z) be an element of M k (Γ (n) )(Z). Then we have We note that a(T, F )(U ) is expressed as a Z-linear combination of polynomials of the following form: . Therefore, to prove the assertion (2), it suffices to show that for any L ∈ Λ n D n−1,i Λ n with i = 0, 1. We may suppose L = D n−1,i with i = 0, 1. First write D n−1,1 = p d 1 ⊥ · · · ⊥p dn with d 1 = · · · = d n−1 = 1 and d n = 2. Then we have where {J a+j } 1≤i≤m,1≤j≤l i −l i+1 ,1≤a≤i is a set of integers such that 1 ≤ J a+j ≤ n and J a+j = J a +j if a + j = a + j . Then we have i a=1 d J a+j ≤ i + 1 for any i. Hence we have Hence (I) holds for any L ∈ Λ n D n−1,1 Λ n . Similarly, we have P (U D −1 n−1,0 ) = p −γ k,n P (U ) with γ k,n an integer such that By assumption, we have k 1 + k n ≥ n + 1, and hence (I) holds for any L ∈ Λ n D n−1,0 Λ n . This proves the assertion (2). We write Γ C (s) = 2(2π) −s Γ(s) and write Γ R (s) = π −s/2 Γ(s/2) as usual. Let be a primitive form in S k (SL 2 (Z)), that is let f be a Hecke eigenform whose first coefficient is 1. For a prime number p let β 1,p (f ) and β 2,p (f ) be complex numbers such that β 1,p (f ) + β 2,p (f ) = a(p, f ) and β 1,p (f )β 2,p (f ) = p k−1 . Then for a Dirichlet character χ we define Hecke's L function L(s, f ) twisted by χ as We write L(s, f, χ) = L(s, f ) if χ is the principal character. Let {f 1 , . . . , f d } be a basis of S k (Γ (1) ) consisting of primitive forms. Let K be an algebraic number field containing Q(f 1 ) · · · Q(f d ), and O the ring of integers in K. Let f be a primitive form in S k (SL 2 (Z)). 
Then Shimura [52] showed that there exist two complex numbers c ± (f ), uniquely determined up to Q(f ) × multiple such that the following property holds: We note that the above value belongs to K(χ). For short, we write We sometimes write c s(l,χ) (f ) = c s(l) (f ) and L(l, f, χ; c s(l,χ) (f )) = L(l, f ; c s(l) (f )) if χ is the principal character. We note that the value L(l, f, χ; c s (f )) depends on the choice of c s (f ), but if (χη)(−1) = (−1) l+m , then s := s(l, χ) = s(m, η) and, the ratio L(l, f, χ; c s (f )) L(m, f, η; c s (f ) does not depend on c s (f ), which will be denoted by . For two positive integers is the primitive character associated with χ 1 χ 2 (cf. [51]). We denote this value by L(l 1 , l 2 ; f ; χ 1 , χ 2 ). In particular, we put if χ 1 and χ 2 are the principal characters. This value does not depend upon the choice of c ± (f ). Let f be a primitive form in S k (SL 2 (Z)). Let f 1 , . . . , f d be a basis of S k (SL 2 (Z)) consisting of primitive forms with f 1 = f and let D f be the ideal of Q(f ) generated by all . For a prime ideal p of an algebraic number field, let p p be the prime number such that (p p ) = Z ∩ p. The following proposition is due to [38], Theorem 5.4. Proposition 3.4. Let f be a primitive form in S k (SL 2 (Z)). Let χ 1 and χ 2 be primitive characters with conductors N 1 and N 2 , respectively, and let l 1 , l 2 be positive integers such For two primitive forms f 1 ∈ S k 1 (SL 2 (Z)) and f 2 ∈ S k 2 (SL 2 (Z)) we define the Rankin- Let F be a Hecke eigenform in M (k 1 ,...,kn) (Sp n (Z)) with respect to L n , and for a prime number p we take the p-Satake parameters α 0 (p), α 1 (p), . . . , α n (p) of F so that We define the polynomial L p (X, F, Sp) by and the spinor L function L(s, F, Sp) by We note that L(s, f, Sp) is Hecke's L function L(s, f ) if f is a primitive form. In this case we write L p (s, f ) for L p (s, f, Sp). We also define the polynomial L p (X, F, St) by and the standard L-function L(s, F, St) by For a Hecke eigenform F ∈ S k (Γ (r) ) put Remark 3.5. We note that for a positive integer m ≤ k − r Proposition 3.6. Let F be a Hecke eigenform in S k (Γ (r) ). We define n 0 = 3 if r ≥ 5 with r ≡ 1 mod 4 and n 0 = 1 otherwise. Let m be a positive integer n 0 ≤ m ≤ k − r such that m ≡ r (mod 2). Then, a(A, F )a(B, F )L(m, F, St) belongs to Q(F ) for any A, B ∈ H r (Z) >0 . Proof. We note that the value a(A, F )a(B, F )L(m, F, St) remains unchanged if we replace by F by γF with any γ ∈ C × . By the multiplicity one theorem for Hecke eigenforms (cf. Theorem A.2 (3) and Remark A.2 (2)), we can take some non-zero complex number γ such that γF ∈ S k (Γ (r) )(Q(F )). For this γ, we see L(m, γF, St) ∈ Q(F ) by [48], Appendix A. This proves the assertion. Let R be a commutative ring, and a an ideal of R. For two polynomials for any 1 ≤ i ≤ m. Now we will state Harder's conjecture. Conjecture 3.7. ( [17]) Let k and j be non-negative integers such that j is even and k ≥ 3. Let f = a(n, f )e(nz) ∈ S 2k+j−2 (SL 2 (Z)) be a primitive form, and suppose that a "large" prime p of Q(f ) divides L(k + j, f ; c s(k+j) ). Then, there exists a Hecke eigenform F ∈ S (k+j,k) (Γ (2) ), and a prime ideal p | p in (any field containing) Q(f )Q(F ) such that, for all primes p Remark 3.8. (1) The original version of Harder's conjecture did not mention what "largeness" of p means. (2) To formulate Harder's conjecture we must choose the periods c s (f ) in an appropriate way. The original version of Harder's conjecture did not specify them. 
After that, Harder suggested assuming another type of divisibility condition instead of the divisibility of L(k + j, f ; c s (f )) in his conjecture (cf. [18]). However, it does not seem so easy to confirm such a condition. (3) The original version of Harder's conjecture, which states only the last congruence on λ F (T (p)), is naturally included in the above Euler factor version since we have The above congruence is trivial in the case k is even and j = 0. Indeed, for the Saito-Kurokawa lift F of f , we have so we have equality, not only congruence. To avoid the ambiguity in (1) and (2) of Remark 3.8, we propose the following conjecture, which we also call Harder's conjecture. Conjecture 3.9. Let k and j be non-negative integers such that j is even and k ≥ 3, j ≥ 4. Let f be as that in Conjecture 3.7. Suppose that a prime ideal p of Q(f ) satisfies p p > 2k + j − 2 and that p divides , where k j = k + j/2 or k + j/2 + 1 according as j ≡ 0 (mod 4) or j ≡ 2 (mod 4). Then the same assertion as Conjecture 3.7 holds. Remark 3.10. There is no ambiguity in the assumptions of Conjecture 3.9. Moreover, since we can compute L(k + j, f ) L(k j , f ) rigorously, we can easily check the assumption on p. 4. An enhanced version of Harder's conjecture Conjectures 3.7 and 3.9 are not concerning the congruence between the Hecke eigenvalues of two Hecke eigenforms in the same space, and this is one of the reasons that it is not easy to confirm it. To treat the conjecture more accessibly, we reformulate it in the case k is even. (For odd k, see [21], [23].) To do so, first, we review several results, on the Galois representations attached to automorphic forms, and on liftings. Let R be a locally compact topological ring, and M a free R-module of finite rank. For a profinite group G, let ρ : G −→ Aut R (M ) be a continuous representation of G. When we fix a basis of M with rank R M = n, we write ρ : G −→ GL n (R). The following result is due to Deligne [13] in the case n = 1, and due to Weissauer [56] in the case n = 2. Theorem 4.1. Let F be a Hecke eigenform in S kn (Γ (n) ) with n ≤ 2, where k n = k or (k + j, k) according as n = 1 or n = 2. Let K be a number field containing Q(F ) and p be a prime ideal of K. Then there exists a semi-simple Galois representation ρ F = ρ F,p : (1) Let k = (k 1 , . . . , k n ) ∈ Z n with k 1 ≥ · · · ≥ k n > n, and G be a Hecke eigenform in S k (Γ (n) ). Let k ≥ 4 and j > 0 and d > 0. Assume that (a) k ≡ n (mod 2), j ≡ 0 (mod 2); (b) k > 2d + 1 and j > 2d − 1; Then, for any Hecke eigenform F ∈ S (k+j,k) (Γ (2) ) there exists a Hecke eigenform A Here we make the convention that L(s, G, St) = ζ(s) if n = 0. (2) Let k and n be positive even integers such that k > n > 2. Let f be a primitive form in S 2k−n (SL 2 (Z)) and G be a Hecke eigenform in S (k,k−n+2) (Γ (2) ). Then, there exists a Hecke eigenform A Theorem 4.3. Let G be a Hecke eigenform in S k (Γ (n) ) for a fixed k = (k 1 , . . . , k n ) ∈ Z n with k 1 ≥ · · · ≥ k n > n. For positive integers k and d with k > d, we assume one of the following conditions: Then, for any Hecke eigenform f ∈ S 2k (SL 2 (Z)), there exists a Hecke eigenform Here we make the convention that L(s, G, (1) and Theorem 4.3 may be proved similarly. But, for readers' convenience we will give their proofs in Appendix A. Theorem 4.2 was conjectured by Ibukiyama [22] in special cases with numerical examples. We say that the lifts in (1) and (2) are the lifts of types A (I) and A (II) , respectively. 
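For orientation, and as an addition to the text, we recall the shape in which Harder's congruence and the Galois-theoretic input are usually quoted; the normalizations below are the standard ones and should be compared with the precise statements of Conjecture 3.7 and Theorem 4.1. Harder's congruence is commonly stated as
$$\lambda_F(T(p))\;\equiv\;a(p,f)+p^{k-2}+p^{j+k-1}\pmod{\mathfrak P}\qquad\text{for all primes }p,$$
which for $k$ even and $j=0$ becomes an equality for the Saito--Kurokawa lift of $f$, as noted in Remark 3.8 (3). On the Galois side, the representation $\rho_{f,\mathfrak p}$ of Theorem 4.1 attached to a primitive form $f\in S_k(\mathrm{SL}_2(\mathbb Z))$ is unramified outside $p_{\mathfrak p}$ and satisfies
$$\det\bigl(1-\rho_{f,\mathfrak p}(\mathrm{Frob}_p)X\bigr)=1-a(p,f)X+p^{k-1}X^{2}\qquad(p\neq p_{\mathfrak p}),$$
and the degree four representation $\rho_{F,\mathfrak p}$ attached to $F\in S_{(k+j,k)}(\Gamma^{(2)})$ is characterized analogously by the spinor Hecke polynomial $L_p(X,F,\mathrm{Sp})$.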
If n = 0 and because k and k are determined by F and d. Theorem 4.3 was conjectured by Miyawaki [47] with numerical examples. We also note that M k n,d,k (f, G) was constructed by Ikeda [33] in the case (2) under the assumption k 1 = · · · = k n and the non-vanishing condition. In particular in the case n = 0, it was constructed by Ikeda [32], and we write it as I 2d (f ). The following proposition is more or less well known. Proposition 4.4. Let k = (k 1 , . . . , k m , . . . , k n ) and l = (k 1 , . . . , k m ) with k 1 ≥ · · · ≥ k m ≥ · · · ≥ k n . Let F be a Hecke eigenform in S l (Γ (m) ), and suppose that Proof. The assertion is well known in the case k 1 = · · · = k m = · · · = k n (cf. [1,Exercise 4.3.24], and a general case can also be proved by the same argument. Let F and G be Hecke eigenforms in M k (Γ (n) ) and p a prime ideal of Q(F ). We say that F is Hecke congruent to G modulo p if there is a prime ideal p of Q(F ) · Q(G) lying above p such that λ G (T ) ≡ λ F (T ) (mod p) for any T ∈ L (k) n . We denote this property by G ≡ ev F (mod p). Conjecture 4.5. Let k, j and n be positive integers. Suppose that there exists a Hecke eigenform F ∈ S (k+j,k) (Γ (2) ) such that Let O be the ring of integers in an algebraic number field K, and P a maximal ideal of O. Let A P be the Grothendieck group of finite dimensional Galois representations of Gal(Q/Q) with coefficients O P /P unramified outside P. Let S be the set of isomorphism classes of irreducible representations in A P. Write an element H of A P as Lemma 4.7. Let P be as above. Let H be an element of A P . Suppose that (χ i +1)H = 0 with i = 1, 2, whereχ is the mod P representation of the cyclotomic character χ : Proof. The assertion for i = 1 has been proved in Proposition 10.4.6 of Chenevier and Lannes [12], and the other assertion can also be proved by using the same argument as there. Theorem 4.8. Let the notation be as in Conjecture 4.5. Proof. Let f be a primitive form in Conjecture 3.9, and suppose that a prime ideal p of Q(f ) satisfies the assumptions in Conjecture 3.9. Then, by Conjecture 4.5, there exists a Hecke eigenform F in S (k+j,k) (Γ (2) ) satisfying the conditions in Conjecture 4.5. Let K = Q(f )·Q(F ) and O the ring of integers in K. Take a prime ideal P of O lying above p. Then it suffices to show that for any prime number p. By Proposition 4.4, we have Hence Then, by (1) of Theorem 4.2, we have and T i ∈ L k n , respectively. Therefore, for any prime number p = p p , they belong to O P , and by the assumption, we have for any prime number p = p p . Let ρ F : Gal(Q/Q) −→ GL 4 (K P ) and ρ f : Gal(Q/Q) −→ GL 2 (K P ) be the Galois representation attached to the spin L functions of F and f , respectively. For ρ = ρ F , ρ f letρ be the mod P representation of ρ. Then, by (D p ), in the Grothendieck ring A P , according as n = 2 or 4. Define an element H of A P as Then we have ||H|| ≤ 8. Let n = 2. Then, we have Since we have 2k + j − 2 ≡ 2 mod 4, by assumption we have p P = p p ≥ 2k + j − 2 ≥ 18. Hence, by Lemma 4.7, we have H = 0. Let n = 4. Then, Since we have p P = p p > 2k + j − 2 ≥ 20, using Lemma 4.7 repeatedly, we also have H = 0. Hence, in A P we haveρ This implies that the congruence relation (C p ) holds for any prime number p = p p . Let p = p p . Then we have Since the Hecke operator r 2n,1 (p) = p k+j/2+n/2−2 r 2n,1 (p) belongs to L k 2n by Proposition 3.3, we have . 
Moreover, all the coefficients of X m with m ≥ 2 of the both polynomials L p (X, F, Sp) and L(X, f )(1 − p k−2 X)(1 − p j+k−1 X) are congruent to 0 mod P. This proves the assertion. (1) Our conjecture is stronger than Conjecture 3.7 in the case k is even. (2) The above conjecture tells nothing about the case k is odd. However, we can propose a similar conjecture in the case k is odd. Pullback formula 5.1. Differential operators with automorphic property. In this section, we explain some explicit differential operators that are used in the pullback formula. 5.1.1. Setting. Now for an integer n ≥ 2, fix a partition (n 1 , n 2 ) with n = n 1 + n 2 with n i ≥ 1. Let λ be a dominant integral weight with depth(λ) ≤ min(n 1 , n 2 ). For i = 1, 2, let (ρ n i ,λ , V n i ,λ ) be the representation of GL n i (C) defined in Section 1. Put V λ,n 1 ,n 2 = V n 1 ,λ ⊗V n 2 ,λ . We regard H n 1 ×H n 2 as a subset of H n by the diagonal embedding. We consider V λ,n 1 ,n 2 -valued differential operators D on scalar valued functions of H n , satisfying Condition 1 below on automorphy: We fix λ, n 1 , n 2 as above. For variables We regard Sp n 1 (R) × Sp n 2 (R) as a subgroup of Sp n (R) by For Z = (z ij ) ∈ H n , we denote by ∂ Z the following n × n symmetric matrix of partial derivations For a V λ,n 1 ,n 2 valued polynomial P (T ) in components of n × n symmetric matrix T , we put Condition 1. We fix k and λ. Let D = P (∂ Z ) as above. For any holomorphic function F on H n and any (g 1 , g 2 ) ∈ Sp n 1 (R) × Sp n 2 (R), the operator D satisfies where Res means the restriction of a function on H n to H n 1 × H n 2 . ). This condition on D is, roughly speaking, the condition that, if F is a Siegel modular form of degree n of weight k, then Res(D(F )) is a Siegel modular form of weight det k ρ n i ,λ for each variable Z i for i = 1, 2. Here, if 2k ≥ n, the condition that ρ 1 and ρ 2 correspond to the same λ is a necessary and sufficient condition for the existence of D ( [20]). A characterization for P is given in [20]. We review it since we need it later. For an m × 2k matrix X = (x iν ) of variables and for any (i, j) with 1 ≤ i, j ≤ m, we put We say that a polynomial P (X) in 20]). We assume that 2k ≥ n. Notation and assumptions being as above, the operator D = P (∂ Z ) satisfies Condition 1 if and only if the V λ,n 1 ,n 2 valued polynomial P satisfies the following two conditions. (1) For i = 1, 2, let X i be an n i × 2k matrix of variables. Then the polynomial is pluri-harmonic for each X 1 , X 2 , that is, ∆ ij (X 1 ) P = ∆ ij (X 2 ) P = 0, regarding that the variables in X 1 and in X 2 are independent. (2) For any A 1 ∈ GL n 1 (C) and A 2 ∈ GL n 2 (C), we put Then we have Besides, for any fixed k with 2k ≥ n = n 1 + n 2 and λ, the polynomial P (T ) satisfying (1) and (2) exists and is unique up to constant. There are a lot of results concerning explicit description of P , notably in [25], [27]. But still we need more explicit formula for our purpose and we will explain it in the next subsection. 5.1.2. Explicit formula. In this section, we consider some special type of λ. We assume that λ = (l, . . . , l, 0, . . . , 0) with depth m. Put λ 0 = (l, . . . , l). Then first we explain some general way to construct V λ,n 1 ,n 2 polynomial P (T ) satisfying Condition 1 from a scalar valued polynomial P 0 (S) satisfying Condition 1 for ρ m,λ 0 ⊗ ρ m,λ 0 . Here T is an n × n symmetric matrix and S is an 2m × 2m symmetric matrix. 
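For reference, and as a small addition, we note the standard shape of the operators entering the pluri-harmonicity condition in the characterization from [20] recalled above (this is presumably the definition intended there): for an $m\times 2k$ matrix $X=(x_{i\nu})$ of variables one puts
$$\Delta_{ij}(X)=\sum_{\nu=1}^{2k}\frac{\partial^{2}}{\partial x_{i\nu}\,\partial x_{j\nu}}\qquad(1\le i,j\le m),$$
and a polynomial $P(X)$ is pluri-harmonic when $\Delta_{ij}(X)P=0$ for all $i,j$.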
Then for the case m ≤ 2 and any l, we give a completely explicit description of P (T ). (The case m = 1 has been already given in [20] and a new point is the case m = 2.) Here we note that ρ m,λ 0 = det l and det k ρ m,λ 0 = det k+l . For any positive integers k, l, we consider Condition 1 for n = 2m, (n 1 , n 2 ) = (m, m) and from weight k to weight det k+l ⊗ det k+l . If we denote by P k,k+l (S) a non-zero polynomial satisfying Condition 1 for this case, this is a scalar valued polynomial in components of an 2m × 2m symmetric matrix S. We assume that we know P k,k+l (S), and then we consider how to give more general case starting from this P k,k+l . Proof. For A 1 ∈ GL n 1 (C) and A 2 ∈ GL n 2 (C), we put The fact that Q(T ) is in the representation space of ρ n 1 ,λ ⊗ ρ n 2 ,λ for the action U → U A 1 and V → V A 2 is concretely proved by using a structure theorem on the shape of P k,k+l (S) in [29, Proposition 3.1], but we will later give a more abstract proof in the lemma below. So here we prove the rest. We write where T 11 is an n 1 × n 1 , T 12 is an n 1 × n 2 , and T 22 is an n 2 × n 2 matrix. Then So surely the action of A to T gives the action of A on U , V given by U A 1 and V A 2 . This means that Q(AT t A) = ρ n 1 ,λ (A 1 ) ⊗ ρ n 2 ,λ (A 2 )Q(T ). Finally we see the pluri-harmonicity. Let X and Y be n 1 × 2k and n 2 × 2k matrices, respectively. We put and we must show that Q is pluri-harmonic for each X and Y . As before, for m×2k matrices X 1 and X 2 , we write Then we have So for any l, t with 1 ≤ t, l ≤ n 1 , we have The last expression is 0 by the pluri-harmonicity of P k,k+l . In the same way, we can show that Q(X, Y ) is pluri-harmonic also for Y . Proof. We regard B = (b ij ) 1≤i,j≤m as a matrix of variables and define a matrix of operators by We consider where S m is the permutation group on m letters. By Cayley type identity ( [11]), we have where (x) m = x(x + 1) · · · (x + m − 1) is the ascending Pochhammer symbol, so by the assumption Repeating this process, we have On the other hand, writing U = (u iν ) and BU = (v iν ). we have v iν = m p=1 b ip u pν and Here if we fix ν 1 , . . . , ν m , then we have If ν i = ν j for some i = j, then of course this is 0 and if the cardinality |I| of I = {ν 1 , . . . , ν m } is m, then this is U I up to sign. By taking B to be scalar, we see that Q(U ) is a homogeneous polynomial of the total degree ml. so the ml-th derivatives of Q(U ) is a constant. So we see that is a linear combination of l products of m × m minors of U . Since this is equal to (l) m (l − 1) m−1 · · · (1) m Q(U ), we see that Q(U ) is a linear combination of l products of minors of degree m of U . If we write for m × m matrices S ij , then by definition, for B i ∈ GL m (C), we have So applying Lemma 5.3, we see that Q(T ) is in the representation space of ρ n 1 ,λ ⊗ ρ n 2 ,λ . Now we apply this for concrete cases. For m = 1, the polynomial P k,k+l is essentially the (homogeneous) Gegenbauer polynomial of degree l. Based on these facts and Lemma 5.2, this case gives differential operators for n = n 1 + n 2 and λ with depth 1, that is, the case λ = (l, 0, . . . , 0). The corresponding representation is the symmetric tensor representation Sym(l) of degree l so D is from weight k to det k Sym(l)⊗det k Sym(l). An explicit generating function of such operators for general n = n 1 + n 2 has been already given in [20, p. 113-114]. Here we give the depth 2 case with λ = (l, l, 0 . . . , 0). 
This means m = 2 and an explicit generating function of P k,k+l (S) for l ≥ 0 is given in [20, p. 114] For an indeterminate t, we put Then for each l, we define a polynomial Q l (T, U, V ) = Q l,n 1 ,n 2 (T, U, V ) by the following generating function. 1 Here Q l is a non-zero polynomial. For Z ∈ H n , we put (2) D l = D l,n 1 ,n 2 = Q l,n 1 ,n 2 (∂ Z , U, V ). Then D l is a differential operator satisfying Condition 1 for k and λ = (l, l, 0, . . . , 0), where the representation space is realized by bideterminants as we explained. When 2k ≥ n, such differential operator D l is unique up to constant. Actually, the generating series is easily expanded by a well-known formula, and more explicitly we have the following formula, Lemma 5.5. The polynomials f i being the same as above, we put for i = 1, 2, 3. Then we have 5.2. Weak pullback formula. Let n 1 , n 2 be positive integers such that n 1 ≤ n 2 . Let λ be a dominant integral weight such that depth(λ) ≤ n 1 . We consider a differential operator D λ = D λ,n 1 ,n 2 on H n 1 +n 2 satisfying Condition 1 for k and det k ρ n 1 ,λ ⊗det k ρ n 2 ,λ . For an integer r such that depth(λ) ≤ r, we put ρ r = det k ρ r,λ . For a Hecke eigenform f ∈ S ρr (Γ (r) ) we define D(s, f ) as So if we take a(T ) to be real, (which is possible), we just have θf = f . The next theorem is (a pullback formula) essentially due to Kozima [45]. Then for any Hecke eigenform f ∈ S ρn 1 (Γ (n 1 ) ) we have where c(s, ρ n 1 ) is a function of s depending on ρ n 1 but not on n 2 . Remark 5.7. This type of formula has been proved in the case k > n 1 + n 2 + 1 and s = 0 in [44] in more general setting, and it can also be generalized in the case s = 0 using the same method as in [44] (cf. [45]). Kozima [45] gave an abstract pullback formula for general λ assuming that P in Condition 1 is realized in his special way. The existence of P satisfying Condition 1 itself has been known in [20]. For further development on realization of P and exact pullback formula, see [28]. Now we prove Proposition 2.1 (2), that is, we prove the following statement: Let k = ( m k + l, . . . , k + l, n−m k, . . . , k) such that l ≥ 0 and k > 3m/2 + 1 and let f be a Hecke eigenform in S k+l (Γ (m) ). Then [f ] k (Z, s) can be continued meromorphically to the whole splane as a function of s, and holomorphic at s = 0. Moreover suppose that k > (n+m+3)/2. Proof. Suppose that l > 0. Let λ = ( m l, . . . , l, 0, . . . , 0). Then for any n 2 ≥ m we have In particular, We claim that c(s, ρ m ) is a meromorphic function of s, and holomorphic and non-zero at s = 0. We note that D λ,m,m coincides with the differential operator where Ω k+l,l (s) = (−1) . (There is a minor misprint in [10]. On page 1339, line 9, "2 1+n(n+1)/2−2ns " should be "2 1−2l+n(n+3)/2−2ns ".) Therefore c(s, ρ m ) coincides with Ω k+l,l (s) up to constant multiple. Hence, c(s, ρ m ) is a meromorphic function of s, and holomorphic and non-zero at s = 0. We have We note that the constant c r in the pullback formula depends on two things. One is a definition of the differential operator, and the other is a definition of the Petersson inner product (P) in Section 2. First we fix a definition of the inner product. For a while, we fix a dominant integral weight λ = (l 1 , . . . , l m , 0, . . . , 0) of depth m. For an integer r such that m ≤ r, let U be an m × r matrix of variables and k r = (l 1 , . . . , l m , r−m 0, . . . , 0), and we take V r,λ = C[U ] k r as the representation space of ρ r,λ as stated before. 
Here we make the convention that C[U ] k r = C if m = 0. We fix an inner product v, w of V r,λ such that ρ r,λ (g)v, w = v, ρ r,λ ( t g)w as in (H) in Section 2. This relation is valid also for the representation ρ r = det k ρ r,λ , so we often use the same inner product for these. Now we must fix an inner product * , * of V r,λ explicitly. Since we have V m,λ = C[U ] k m , an element of V r,λ is a polynomial in the components of an m × r matrix U where the action of ρ r,λ is induced by U → U A for any A ∈ GL r (C): (ρ r,λ (A)Q)(U ) = Q(U A). For any B ∈ GL r (C), we obviously have Here RHS means to substitute the argument U in Q(U ) by U B and LHS means to replace coefficients of Q(U B) by complex conjugates. We put For two homogeneous polynomials P (U ) and Q(U ) of the same degree, we define P, Q 0 = P ∂ ∂U Q(U ). Then we have Indeed, if we put V = U B, then by the chain rule we have so if we put A = t B −1 , then we have So (3) is proved. Of course such an inner product is determined only up to constant, and there is no canonical choice, but we must fix something. When ρ r,λ is scalar valued representation det l , if we define an inner product by P, Q 0 /(l) r (l − 1) r · · · (1) r then by the Cayley type identity [11], this just means to take a product of scalars, so (f, g) = Γ (r) \Hr f (Z)g(Z) det(Im(Z)) l−r−1 dZ. Then we have a weak type of the pullback formula. Let k and l be non-negative integers. For the dominant integral λ = (l, l, 0, . . . , 0) of depth m and integers n 1 , n 2 such that 2 ≤ n 1 ≤ n 2 , let ρ n 1 = det k ρ n 1 ,λ and ρ n 2 = det k ρ n 2 ,λ be the representations of GL n 1 (C) and GL n 2 (C), respectively, as above. We note that m = 0 or 2 according as l = 0 or l > 0. Moreover, let D l,n 1 ,n 2 be the differential operator corresponding to the polynomial Q l,n 1 ,n 2 in Lemma 5.5. On the other hand, let • D l 2,k be the differential operator in [10, (1.14)]. Then by [10, Theorem 3.1] we have By page 71 in [37], we have Hence we have c(0, ρ 2 ) = d k,l c 2 , and by a simple computation we prove the assertion. Next suppose that l = 0. Then the assertion can be proved using the same argument as above. Remark 5.9. (1) The second part of the assertion can be also proved by the fact that the differential operator is realized uniformly in Lemma 5.5 and its operation on the automorphy factor is essentially the same as the case when U = V = 1 2 . (2) If k > n 1 + n 2 + 1, then [f ] ρr (W, V ) are holomorphic modular forms for any Hecke eigenform f in S ρr (Γ (r) ), and we can get an explicit pullback formula (cf. Theorems B.1 and B.13). However, if k ≤ n 1 + n 2 + 1, it does not necessarily hold. This is why we say that the formula in the above theorem is a weak type of pullback formula. We note that it is sufficient for proving our main results in Section 8. Congruence for Klingen-Eisenstein lifts To explain why Conjecture 4.5 is reasonable, we consider congruence for Klingen-Eisenstein series, which is a generalization of [39]. For λ = (k − l, k − l, 0, 0) with k ≥ l and 2 ≤ m ≤ 4, let (ρ m,λ , V m,λ ) be the representation of GL m (C) defined in the previous section, and put ρ m = det l ⊗ρ m,λ and k m = (k − l, k − l, m−2 0, . . . , 0) and k m = (k, k, m−2 l, . . . , l). Let U and V be 2 × n 1 and 2 × n 2 matrices of variables, respectively. 
Then we recall that V n 1 ,λ = C[U ] k n 1 , V n 2 ,λ = C[V ] k n 2 and that every element F of M ρn 1 (Γ (n 1 ) ) ⊗ M ρn 2 (Γ (n 2 ) ) is expressed as For a subring R of C, we denote by (M ρn 1 (Γ (n 1 ) ) ⊗ M ρn 2 (Γ (n 2 ) ))(R) the submodule of M ρn 1 (Γ (n 1 ) ) ⊗ M ρn 2 (Γ (n 2 ) ) consisting of all F 's such that We also note that every element F of M ρn 1 (Γ (n 1 ) ) ⊗ V n 2 ,λ is expressed as For positive integers n and l, put We define E n,l as E n,l (Z) = Z(n, l)E n,l (Z) and we set Moreover, for positive integers m, l and a Hecke eigenform F ∈ S k (Γ (2) ) put We also use the same symbol C m,l (f ) to denote the value Z(m, l) Z(4, l) L(l − 2, f, St) for a Hecke eigenform f ∈ S ρ 2 (Γ (2) ). As sated before, we have the following isomorphism: where U is 2 × 2 matrix of variables. Then we note that C m,l ( F ) = C m,l (F ) for a Hecke eigenform F ∈ S k (Γ (2) ). Now, for our later purpose, we rewrite a special case of Theorem 5.8 as follows. Proposition 6.1. Let n 1 , n 2 be integers such that 2 ≤ n 1 ≤ n 2 ≤ 4 and let k, l be even positive integers such that k ≥ l. Then we have where γ 2 is a certain rational number which is p-unit for any prime number p > 2k, and G j (Z 2 )(V ) is an element of M ρn 2 (Γ (n 2 ) ). We write E(Z 1 , Z 2 ) as (k,l,n 1 ,n 2 ),N (Z 1 )e(tr(N Z 2 )). ( * ) Then g (n 1 ) N = g (n 1 ) (k,l,n 1 ,n 2 ),N belongs to M ρn 1 (Γ (n 1 ) ) ⊗ V n 2 ,λ . To consider congruence between Klingen-Eisenstein lift and another modular form of the same weight, we rewrite the above proposition as follows: Corollary 6.2. Under the same notation and the assumption as above, let N ∈ H n 2 (Z) >0 . Then, g (n 1 ) Observe that the first term in the right-hand side of the above is invariant if we multiply f 2,j by an element of C × . To see the Fourier expansion of E n,k (Z), we review the polynomial F p (B, X) attached to the local Siegel series b p (B, s) for an element B of H n (Z p ) (cf. [35]). We define χ p (a) for a ∈ Q × p as follows: For an element B ∈ H n (Z p ) nd with n even, we define ξ p (B) by For a non-degenerate half-integral matrix B of size n over Z p define a polynomial γ p (B, X) in X by if n is odd. Then it is well known that there exists a unique polynomial F p (B, X) in X over Z with constant term 1 such that (e.g. [35]). For B ∈ H n (Z) >0 with n even, let d B be the discriminant of Q( (−1) n/2 det B)/Q, and χ B = ( d B * ) the Kronecker character corresponding to Q( (−1) n/2 det B)/Q. We note that we have χ B (p) = ξ p (B) for any prime p. We define a polynomial F * p (T, X) for any T ∈ H n (Z p ) which is not-necessarily nondegenerate as follows: For an element T ∈ H n (Z p ) of rank m ≥ 1, there exists an element T ∈ H m (Z p ) nd such that T ∼ ZpT ⊥O n−m . We note that F p (T , X) does not depend on the choice ofT . Then we put F * p (T, X) = F p (T , X). For an element T ∈ H n (Z) ≥0 of rank m ≥ 1, there exists an elementT ∈ H m (Z) >0 such that T ∼ ZT ⊥O n−m . Then χT does not depend on the choice of T . We write χ * T = χT if m is even. Proposition 6.3. Let k ∈ 2Z. Assume that k ≥ (n + 1)/2 and that neither k = (n + 2)/2 ≡ 2 mod 4 nor k = (n + 3)/2 ≡ 2 mod 4. Then for T ∈ H n (Z) ≥0 of rank m, we have Here we make the convention F * To consider the integrality of a(T, E n,k ), we provide the following lemma. Proof. The assertion has been proved in [32,Lemma 15] in the case m is even, and the assertion for odd case can also be proved in the same manner. Proposition 6.5. Let the notation and the assumption be as in Proposition 6.3. 
Then, we have E n,k belongs to M k (Γ (n) )(Q). In particular, for any prime number p > 2k, E n,k belongs to M k (Γ (n) )(Z (p) ). Proof. For T 1 ∈ H n 1 and T 2 ∈ H n 2 , put where Q k−l,n 1 ,n 2 is the polynomial in Section 5.1.2. Then we have Hence the assertion follows from Proposition 6.5. Proof. g Hence the assertion directly follows from the above proposition. Proposition 6.8. Let the notation and the assumptions be as in Theorem 5.8, and let 2 ≤ m ≤ 4. Then for any Hecke eigenform f in S ρ 2 (Γ (2) )(Q(f )), [f ] ρm ρ 2 ∈ M ρm (Γ (m) )(Q(f )). Proof. The assertion in the case k = l has been proved by Mizumoto [48], and the other case can also be proved by using the same method. Proposition 6.9. Let the notation and the assumption be as in Proposition 6.1. Let f be a Hecke eigenform in S k (Γ (2) ). Then, for any N ∈ H n 2 (Z) >0 and N 1 ∈ H 2 (Z) >0 , the value Proof. The value in question remains unchanged if we replace f by cf with c ∈ C × . Moreover we can take c ∈ C × so that cf ∈ S k 2 (Γ (2) )(Q(f )). Thus the assertion follows from Proposition 6.8 remarking that a(N 1 , The following lemma can be proved by a careful analysis of the proof of [36, Lemma 5.1]. Lemma 6.10. Let F 1 , . . . , F d be Hecke eigenforms in M ρn 1 (Γ (n 1 ) ) linearly independent over C. Let K be the composite field Q(F 1 ) · · · Q(F d ), O the ring of integers in K and p a prime ideal of K. Let G(Z, U, V ) ∈ (M ρn 1 (Γ (n 1 ) ) ⊗ V n 1 ,λ )(O (p) ) and assume the following conditions (1) G is expressed as Theorem 6.11. Let k and l be positive even integers such that k ≥ l ≥ 6 and put k = (k, k, l, l) and M k (Γ (4) ) = M ρ 4 (Γ (4) ). Let F ∈ S k (Γ (2) ) be a Hecke eigenform, and p a prime ideal of Q(F ). Suppose that p divides |a(A 1 , F )| 2 L(l − 2, F, St) and does not divide ρ 2 as stated in Section 1. Then there exists a Hecke eigenform G ∈ M k (Γ (4) Proof. The assertion in the case k = l has been proved in [39] in more general setting and the other case can also be proved using the same argument as in its proof. But for the sake of convenience, we here give an outline of the proof. Suppose that k > l. Take a basis {F j } 1≤j≤d of M k (Γ (4) ) such that F 1 = [F ] k . Then, by Corollary 6.2, for any A ∈ H 4 (Z) >0 we have We note that the reduced denominator of Z(8, l) Z(4, l) is not divisible by p by the theorem of v. Staut-Clausen. Hence we have Hence the assertion follows from Lemma 6.10. (3) p divides neither D f nor ζ (3−2k), where D f is the ideal of Q(f ) defined in Proposition 3.4. Then for any N ∈ H 2 (Z) >0 such that p d N , we have Here, d N is the discriminant of Q( √ − det N )/Q as defined before. The next theorem clarifies what we need to look at to try to prove Conjecture 4.5. Theorem 6.13. Let k and l positive even integers such that 6 ≤ l ≤ k, and put k = (k, k, l, l). Let f be a primitive form in S 2k−2 (SL 2 (Z)) and p a prime ideal of Q(f ) such that Then there exists a Hecke eigenform G in M k (Γ (4) ) such that G is not a constant multiple of [I 2 (f )] k and G ≡ ev [I 2 (f )] k (mod p). Proof. The assertion follows from Theorem 6.11 and Proposition 6.12. Fourier coefficients of Klingen-Eisenstein lift Let k = (k, k, l, l) with k, l positive even integers such that k ≥ l. To confirm the condition (4) in Theorem 6.13, we give a formula for computing L(l − 2, F, St)a(T, F )a(N, [F ] k ) for a Hecke eigenform F in S k (Γ (2) ), T ∈ H 2 (Z) >0 and N ∈ H 4 (Z) >0 . For T ∈ H 2 (Z) >0 and N ∈ H 4 (Z) >0 , let k,l,2,4 (T, N )(U, V ) be as in (E) and put g N = g (2) (k,l,2,4),N . 
Recall that U and V are 2 × 2 and 2 × 4 matrices, respectively, of variables. We note that k,l,2,4 (T, N )(U, V ) can be expressed as Now, for a positive integer m, let T (m) be the element of L 2 defined in Section 3. For a positive integer m = p 1 · · · p r with p i a prime number, we define the Hecke operator T (m) = T (p 1 ) · · · T (p r ). We make the convention that T (1) = T (1). We note that T (m) = T (m) if p 1 , . . . , p r are distinct, but in general it is not. For each m ∈ Z >0 and N ∈ H 4 (Z) >0 , write g N |T (m) (W ) as with k,k (m, T, N ) ∈ C[V ] (k−l,k−l,0,0) . Let M k,l = M k (Γ (2) ) or S k (Γ (2) ) according as k = l or not, and let {F j } d j=1 be a basis of M k,l consisting of Hecke eigenforms. Furthermore write Then the following proposition is a consequence of applying T (m) to the formula in Corollary 6.2 with n 1 = 2 and n 2 = 4. Thus the assertion holds. The following lemma will be used in the next section. Let U and V be the matrices of variables stated above. Moreover, put Proof. Let Q k−l,2,4 be the polynomial in Section 5.1.2, Then, by Lemma 5.5, we have Thus by (E) in the proof of Proposition 6.6, Lemma 5.5, and Proposition 6.3, we prove the assertion. We have an explicit formula for F p (T, X) for any nondegenerate half-integral matrix T over Z p (cf. [35]), and an algorithm for computing it (cf. Chul-Hee Lee https://github.com/chlee-0/computeGK). Therefore, by using Proposition 7.2 and Theorem 7.5 we can compute the Fourier coefficients of the Klingen-Eisenstein series in question. Hence by Corollary 7.3, p does not divide C 8, 16 (I 2 (f ))a(N 1 , I 2 (f ))a(N, [I 2 (f )] k ). The prime ideal q does not divide D f . Hence, by Theorem 6.13, there exists a Hecke eigenform F in M 16 (Γ (4) ) such that To show that F is a lift of type A (I) , we classify the Hecke eigenforms in M 16 (Γ (4) ) following [50] and [30]. Moreover, we have for any 1 ≤ i ≤ 13 such that i = 7. Hence F coincides with h 7 up to constant multiple. Hence we have the following theorem. Theorem 8.7. There exists a Hecke eigenform G in S (28,4) (Γ (2) ) such that A (I) then the eigenvalues of the semisimple conjugacy class where the right-hand side is a product of the finite parts of the Godement-Jacquet L-functions. (2) Conversely, for any formal sum ψ = k i=1 π i [d i ] satisfying (a)-(f ) above, there exists a Hecke eigenform G such that ψ = ψ G . (3) For two Hecke eigenforms G 1 , G 2 in S k (Sp n (Z)), the following are equivalent: • G 1 and G 2 are constant multiples of one another; • L(s, G 1 , St) = L(s, G 2 , St); Here, a formal sum means that it is an equivalence class defined so that if t = t and if there exists a permutation σ ∈ S t such that π i ∼ = π σ(i) and d i = d σ(i) for any 1 ≤ i ≤ t. Using [2], the same proof is available even when k 1 ≥ · · · ≥ k n ≥ n + 1. (3) By condition (d), π i is an irreducible regular algebraic cuspidal self-dual automorphic representation of PGL n i (A Q ) ([12, Definition 8.2.7]). As explained in [12,Theorem 8.2.17], thanks to numerous mathematicians, one can prove that such a representation satisfies the Ramanujan conjecture. Namely, for any p, all the eigenvalues of the Satake parameter of π i,p have absolute value 1. In particular, a Hecke eigenform G in S k (Sp n (Z)) satisfies the Ramanujan conjecture if and only if its A-parameter is of the form ψ G = t i=1 π i [1]. In this case we call G tempered. 
(4) By the purity lemma of Clozel [12,Proposition 8.2.13], the Langlands parameter of π i,∞ is completely determined by the eigenvalues of the infinitesimal character c i,∞ . In particular, one can compute the Rankin-Selberg root number ε(π i × π j ) explicitly in terms of the eigenvalues of c i,∞ and c j,∞ by where w i (resp. w j ) runs over the positive eigenvalues of c i,∞ (resp. c j,∞ ). Note that w i and w j are in (1/2)Z and that 2w i ≡ d i − 1 (mod 2) for any (positive) eigenvalue . This is easily checked when i, j = i 0 . (6) Theorem A.1 is an existence theorem. To construct a modular form G from a parameter ψ is a different problem. (7) If we were not to assume that k n > n, the statement of theorem would be much more difficult. At least when the scalar weight case, i.e., when k 1 = · · · = k n = k with k ≤ n, a similar theorem, in particular a multiplicity one theorem, would follow from a result of Moeglin-Renard [49]. To obtain several lifting theorems, we need the following proposition which comes from accidental isomorphisms. (1) Suppose that k > 0 is even. For any primitive form f in S k (SL 2 (Z)), there exists an irreducible unitary cuspidal automorphic self-dual representation π f of PGL 2 (A Q ) such that and such that the eigenvalues of the infinitesimal character of π f,∞ are ±(k − 1)/2. (2) Suppose that j > 0 is even and that k ≥ 4. For any Hecke eigenform F in S (k+j,k) (Sp 2 (Z)), there exists an irreducible unitary cuspidal automorphic self-dual representation π F of PGL 4 (A Q ) such that and such that the eigenvalues of the infinitesimal character of π F,∞ are ±(j + 2k − 3)/2, ±(j + 1)/2. (3) Suppose that k 1 ≥ k 2 > 0 are even. For any primitive forms f 1 ∈ S k 1 (SL 2 (Z)) and f 2 ∈ S k 2 (SL 2 (Z)), there exists an irreducible unitary cuspidal automorphic self-dual representation π f 1 ,f 2 of PGL 4 (A Q ) such that and such that the eigenvalues of the infinitesimal character of π f 1 ,f 2 ,∞ are ±(k 1 + k 2 − 2)/2 and ±(k 1 − k 2 )/2. Proof of Theorem 4.2 (1). Let G be a Hecke eigenform in S k (Sp n (Z)), and ψ G = t i=1 π i [d i ] be its A-parameter. Here we make the convention that ψ G = 1 GL 1 (A Q ) [1] if n = 0. Let F ∈ S (k+j,k) (Sp 2 (Z)) be a Hecke eigenform with k ≥ 4 and j > 0 even, and π F be the irreducible unitary cuspidal automorphic self-dual representation of PGL 4 (A Q ) associated by F by Proposition A.3 (2). It suffices to show that under the assumptions of Theorem 4.2 (1), the parameter ψ = ψ G π F [2d] satisfies the conditions (a)-(f) of Theorem A.1 (1) with respect to k = (k 1 , . . . , k n+4d ) ∈ Z n+4d defined in Theorem 4.2 (1). The conditions (a), (b), (c) and (e) are obvious. The condition (d) follows from the definition of k . To check the sign condition (f), we will compute ε(π i ×π F ). By Remark A.2 (5), we have ε(π i ×π F ) = 1 if d i is even. When d i is odd, any positive eigenvalue w i of c i,∞ belongs to {k 1 −1, . . . , k n −n} so that j+1 < 2w i < j+2k−3. Hence when d i is odd and i = i 0 so that n i ≡ 0 (mod 4), we have Since the cardinality of the set K i for ψ is the same as the one for ψ G , we obtain the sign condition for π i . Also, since n i 0 ≡ 2n + 1 (mod 4) and j ≡ 0 (mod 2). Hence the sign condition for π F is equivalent to k ≡ n (mod 2). The proofs of Theorem 4.2 (2) and Theorem 4.3 are similar. Let G and f be as in the statements, ψ G = t i=1 π i [d i ] be the A-parameter of G. and π f be the irreducible unitary cuspidal automorphic self-dual representation of PGL 2 (A Q ) associated to f . 
We have to check the conditions (a)-(f) in Theorem A.1, with 2d = n − 2, for the corresponding parameter in the proof of Theorem 4.2 (2). Only the condition (f) is non-trivial.

Appendix B. An explicit pullback formula

Suppose that k > n 1 + n 2 + 1. Then for any r ≤ min(n 1 , n 2 ) and any Hecke eigenform f ∈ S ρr (Γ (r) ), the Klingen-Eisenstein series [f ] ρr (W, V ) become holomorphic modular forms, and we obtain more explicit results. The proof of the following theorem is independent of Böcherer's argument. The proof here is by brute force, but we still believe that this way of calculation can be useful in some cases. For a more conceptual description of a completely general exact pullback formula, see [28]. Theorem B.1. Notation being as in Theorem 5.6, suppose that l > 0. Then we have where c r is a certain constant depending on D λ,n 1 ,n 2 . This has already been proved by Kozima [44] in a more general setting. However, he does not give explicit values of c r in general. In [44] he gave one strategy for calculating the constant c r in the formula. This calculation is difficult to execute in general, but in the next subsection we explain how to carry it out when D λ = Q l (∂ Z , U, V ), where Q l is defined by (2) before Lemma 5.5. Though this calculation is not necessary for proving our main results, it is interesting in its own right, and may have an application to computing exact standard L-values for f ∈ S ρr (Γ (r) ). B.0.1. Calculation of the constant. To calculate the constant c r , we follow Kozima's formulation in [44]. We fix g = A B C D ∈ Sp n 1 +n 2 (R) and put δ = det(CZ + D). To obtain c r for the differential operator D we need, the key point is to calculate D(δ −k ) explicitly. For Z ∈ H n 1 +n 2 , we write For an m × n 1 matrix U and an m × n 2 matrix V of variables, define U as before and we write We put ∆ = (CZ + D) −1 C and It is well known and easy to see that this is a symmetric matrix. We write the blocks of ∆ as where the ∆ (ij) are m × m matrices. We consider a differential operator D = P (∂ Z ) which satisfies Condition 1. The following proposition is the same as Proposition 4.1 on p. 241 of [44], except that the realization of the representation is slightly different. Proposition B.2. There exists a polynomial Q(X) in the components of an m × m matrix X such that D(δ −k ) = δ −k Q(∆ (12) ). The key point is that the polynomial does not contain components of ∆ (11) and ∆ (22) . Since the realization of our polynomials and Kozima's are different, we explain the relation. We fix λ = (l, . . . , l, 0, . . . , 0) with depth(λ) = m. We denote by u i and v j the i-th row vector of U and the j-th row vector of V , respectively, and write U = (u i ), V = (v j ). Our polynomial is a polynomial in the components of T , U and V . We write it as P (T, U, V ) to emphasize its dependence on U and V . We define the polarization of P with respect to the rows u i , v j in the usual way, as follows. We prepare ml row vectors ξ iν (1 ≤ i ≤ m, 1 ≤ ν ≤ l) and another ml row vectors η jµ (1 ≤ j ≤ m, 1 ≤ µ ≤ l) of variables. We write ξ = (ξ iν ) and η = (η jµ ) for short. We define P * from P as follows: we replace u i and v j by u i = c i1 ξ i1 + · · · + c il ξ il and v j = d j1 η j1 + · · · + d jl η jl , respectively, in P (T, U, V ), and take the coefficients of the appropriate monomials in the c iν and d jµ . By definition, the polynomial P * is then a multilinear polynomial in ξ iν T 11 t ξ jµ , ξ iν T 12 t η jµ , η iν T 22 t η jµ and it is homogeneous in the sense of Kozima. Since the polarization P → P * commutes with ∆ ij (X) and ∆ ij (Y ), the polynomial P * is also pluri-harmonic.
The action of A 1 ∈ GL(n 1 , C) and A 2 ∈ GL(n 2 , C) is the same as P since we have u i A 1 = l ν=1 c iν x iν A 1 and v j A 2 = l µ=1 d jµ η jµ A 2 . So if we use P * instead of P , our formulation becomes Kozima's formulation. So we can use Kozima's Proposition 4.1 in [44], and the interpretation in our case is given in Proposition B.2 above. Our next problem is to obtain Q in Proposition B.2. For any row vector x, y of length n, we write The following formulas are given in Kozima [44]. (See also [24] for a precise proof.) For any row vectors u 1 , u 2 , u 3 , u 4 of length n and any functions f , g of Z, we have By iterate use of these formulas, we can calculate the action of P (U∂ Z t U) for any polynomial P . But actual calculation is a bit complicated. For our case, we have a following result. The rest of this section is devoted to prove this theorem. A simple theoretical proof of Theorem B.3 is given in [28], but here we give an original proof prepared for the present paper. This proof consists of complicated combinatorial brute force calculations, and we believe such alternative proof is not useless. To make it readable, we first explain rough idea of calculation, and then we give actual calculation. Define F i (T, U, V ) as in Lemma 5.5. We also define where T ij are defined by (1) in section 5. Then of course we have F 2 = F 4 F 5 . We write Here F 1 , F 4 , F 5 are differential operators of order 2 and F 3 of order 4. We put Now our strategy of calculation is as follows. (1) For any a, b, c, we see easily that F a is written as a product of δ −k and a polynomial in ∆ ij by virtue of the formula (D1), (D2), (D3). But in fact, more strongly, we can show that it is a product of δ −k and a polynomial in C 1 , C 2 , C 3 that is a weighted homogeneous polynomial of total degree a + 2b + 2c if we put deg(C 1 ) = 1, deg(C 2 ) = 2, deg(C 3 ) = 2. (2) Here we can show that C 1 , C 2 and C 3 are algebraically independent for generic g and ∆ U , so by virtue of Lemma B.2, we need only the coefficient of C a+2b+2c . So we calculate these coefficients for all (a, b, c) and sum them up according to the explicit definition of D l . Now we execute these calculations. Lemma B.4. For any non-negative integer, we have Proof. The operator F 3 is a differential operator of order 4. There are many ways to prove Lemma B.4. One way is to use computer directly. Actually, by (D1), (D2), (D3), it is clear that where P (r, ∆ ij ) is a polynomial in r of degree 4 whose coefficients are polynomials in ∆ ij that do not depend on r. So the calculation for r = 0, . . . , 4 is enough and executing these we obtain the above formula. An alternative way is to specialize ∆ to the case n 1 = n 2 = 2 and U = 1 2 , V = 1 2 , C = 1 4 , D = 0 4 . Then we have ∆ U = Z −1 for Z ∈ H 4 . Then under this specialization on ∆ U and F 3 , we have The second equality is nothing but the Cayley type identity for symmetric matrices ( [11]). Since the calculation is formally the same, we get Lemma B.4, Next we consider F 1 and F 2 = F 4 F 5 . Since F 1 , F 4 , F 5 are differential operators of order 2, the operation of these on products of functions can be calculated if we have several fundamental operations on factors. To explain this, we assume that F is a differential operator of homogeneous order 2 and define a bracket {A, B} F by We have {B, A} F = {A, B} F . 
For the operator ∂ 1 ∂ 2 where ∂ 1 and ∂ 2 are differential operators of the first order, we have so for general F of order 2 and functions A, B, C, we have So for example, the operation of F on a product A 1 · · · A ν of functions A i can be calculated if we have F(A i ) and {A i , A j } F . More generally, for δ −k , any functions A, B, C, and nonnegative integers p, q, r and for a differential operator F of second order, we can give the following general formula by repeating the above consideration. Next we consider F 1 and F 2 . We give fundamental formulas below. . Lemma B.6. For a generic g, U , V such that ∆ ij are algebraically independent, three variables C 1 , C 2 and C 3 are algebraically independent. Proof. Assume that p,q,r C(p, q, r)C p 1 C q 2 C q 3 = 0 for some constants C(p, q, r) where the degree of C 1 is the smallest among such relationsIf we put ∆ 14 = ∆ 24 = 0, then we have C 1 = 0, and C 3 = ∆ 2 13 ∆ 22 ∆ 44 + · · · . Then, since C 2 does not contain any ∆ 13 , this means that C(0, q, r) = 0 for any q, r. So we may assume that C 1 p≥1,q,r C(p, q, r)C p−1 Since the polynomial ring in ∆ ij is UFD, we have p≥1,q,r C(p, q, r)C p−1 1 C q 2 C r 3 = 0. This contradicts to the assumption. Lemma B.7. For any integers a, b, c, p, q, r ≥ 0, there exists a polynomial P (C 1 , ). Here we have F 2 = F 5 F 4 . By the fundamental formulas, we see that for i, j with 1 ≤ i < j ≤ 3, any of F 4 (δ −k ), F 4 (C i ), {δ −k , C i } 4 , {C i , C j } 4 are δ −k C 4 times a polynomial in C 1 , C 2 , C 3 , so by Lemma B.5, we see that for some polynomial P 1 (x, y, z) in three variables. We have Here by the same reason as before, the first term is equal to We can show inductively that the same is true for F b 2 and F a 1 F b 2 , so we prove Lemma. By Lemma B.2, we need only the power of C 1 part in the polynomial in C 1 , C 2 , C 3 , so we will study that. We denote by C 3 = C 3 C[C 1 , C 2 , C 3 ] and C 23 = (C 2 , C 3 )C[C 1 , C 2 , C 3 ] the ideals of C[C 1 , C 2 , C 3 ] generated by C 3 , and by C 2 and C 3 , respectively. We have the following result. Proposition B.8. (i) If c ≥ 1, then we have (ii) When c = 0 in the above, we have To prove Proposition B.8, we prepare several lemmas. Multiplying x b−1,p to this, we have e 3 x b,p where we put Since we easily see c 1 + c 2 + c 3 = 1, we prove (ii). Proof of Proposition B.8. The assertion (i) is clear from Lemma B.4 and Lemma B.9 (i), B.10 (i). The assertion (ii) is obvious by Lemma B.9 (ii) and B.10 (iii). So Proposition B.8 is proved. In order to prove Theorem B.3, we fix a non-negative integer l. In order to give D l (δ −k ), we must sum up each contribution of F a 1 F b 2 F c 3 (δ −k ) such that a + 2b + 2c = l. By Proposition B.2 and Proposition B.8 (i), the term with c ≥ 1 does not contribute to the final sum. So we assume c = 0 and a + 2b = l. We put By Proposition B.8, we see that this is the contribution from F a 1 F b 2 (δ −k ) times the coefficient of F a 1 F b 2 in the definition of D l , noting that (k) 2b (k + 2b) a = (k) l . What we want to calculate is q 0 = a+2b=l q a,b . Denote by [l/2] the maximum integer with does not exceed l/2. To calculate Q inductively, for any integer b such that 0 ≤ b ≤ [l/2], we put Lemma B.11. The notation being as above, for any b with 0 ≤ b ≤ [l/2], we have Proof. We prove this by induction from b = [l/2] to b = 0. First we show this for b = [l/2]. By definition, we have q b = q l−2b,b , so the problem is if the coefficient of RHS of (9) is 1. So the claim holds also for b − 1. B.0.2. 
Explicit pullback formula. Based on the results in the last subsection, we can write down the pullback formula for the differential operator D l in general for any n = n 1 + n 2 with 2 ≤ min(n 1 , n 2 ) and λ = (l, l, 0, . . . , 0). We use Kozima's formula in [44] p.247. We define Q(X) as in Proposition B.2 and Theorem 3.2. Then by Theorem B.3, we have Q(∆ (12) ) = q 0 C 2l 1 . For a n × n symmetric matrix, we write a block decomposition as where T 11 is n 1 × n 1 and T 22 is n 2 × n 2 . we define a polynomial Q(T ) in t ij by Q(T ) = Q(U T 12 t V ). Then we have Q((CZ + D) −1 C) = Q(∆ (12) ). For any r ≤ min(n 1 , n 2 ), we define n 1 × n 2 matrix by 1 r 0 0 0 , and by abuse of language, we denote this also by 1 r . As in Kozima, we are allowed to write Q(T ) for T 12 = 1 r as Q * 1 r * * , not specifying * , since this does not depend on * part by definition. Now we put Then for λ = (l, l, 0, . . . , 0), we have Q * 1 r * * = q 0 × R l r . We consider two isomorphic realizations of the representation det k ρ r,λ , one is on the space generated by bideterminants in u ij with i = 1, 2, j ≤ r and the other is on the space generated by bideterminants in v ij with i = 1, 2, j ≤ r. We denote the former representation space by V r * and the latter by V * r . We identify these representation spaces of GL r (C) on U variables and V variables by mapping u ij to v ij . For v * ∈ V * r , we denote by v * the corresponding element in V * r . Now we define S r = {S ∈ M r (C) | S = t S, 1 r − SS > 0}, where * > 0 means that the matrix is positive definite. We define a linear map from V r * to V r * by where ρ r = det k ρ r,λ and dS = h≤j dx hj dy hj for S = X + iY with X = (x hj ), Y = (y hj ) ∈ M r (R). Then we know that ψ(v * ) = ϕv * for some constant ϕ. Then by the result of [44] in p.247 , we have c r = 2 r(r+1)+1−(rk+2l) i rk+2l · ϕ. Here ϕ obviously depends on the inner product. We can explicitly calculate the inner product * , * 0 defined before for the necessary quantity. So if we put m = 2 and assume X and t A to be 2 × r matrices consisting of the first r columns of U and V respectively, and B = 1 2 0 r−2,2 , then the above formula means In the same way we have ∂R l r ∂u 1p ∂u 2q − ∂R l r ∂u 1q ∂u 2p = l(l + 1)(v 1p v 2q − v 1q v 2p )R l−1 r . So iterating these operations l times, we have the assertion. We note here that since we assumed that k is even, the number rk/2 is an integer.
Toward a better appraisal of urbanization in India : A fresh look at the landscape of morphological agglomerates Up to now, studies of urbanization in India have been based only on official urban figures as provided by the Census Surveys. This approach has inevitably introduced several avoidable biases into the picture, distortions further compounded by numerous regional inter-Census adjustments. A much sounder option is now available in the Geopolis approach [www.e-geopolis.eu], which follows the United Nations system of classifying as urban all physical agglomerates, no matter where, with at least 10,000 inhabitants. Looked at from this standpoint, the Indian scenario exhibits all signs that, far from a major demographic polarization led by mega-cities (as is commonly believed), what the country has been experiencing is a much-diffused process of urbanization. While 3,279 units were officially categorized as urban, the Geopolis criterion has identified 6,467 units—about twice as many— with at least 10,000 inhabitants. Again, in the matter of the rate of urbanization, the Geopolis yardstick places the figure at 37% for 2001, 10 points of percentage above the official estimate. In absolute terms, that difference accounts for 100 million inhabitants. Apart from this fact, brought to light by both physical identification and gradation of the census units of all localities, and a study of the morphological profiles of individual agglomerates, a major finding relates to the greater spread of the country’s metro and secondary cities than had been believed up to now. Yet another revelation thrown up by this study is that statistical and political considerations have obscured the emergence of small agglomerations of between 10,000 and 20,000 inhabitants. This omission can only be seen as a gap in the national policy on planning and urban development. In other words, the country seems to be firmly headed toward an extended process of metropolitanization alongside diffused combinations of localized socio-economic opportunities, clusters, cottage industries, and market towns partially interlinked by developmental corridors. It appears that, on the very wide and diverse Indian subcontinent, there have come into existence many sub-regional settings, which converge, overlap, and diverge, far indeed from a dual model of modern versus traditional, urban versus rural, metro city versus small town. This study of the distribution of today’s agglomerations and those emerging challenges the pertinence of the urban/rural divide as perceived through official eyes. Introduction With increasing attention being paid to India's economy after the impressive global surge of the country's top industrial corporations, a new perspective has emerged on the urbanization in the subcontinent.It now focuses on the metropolitanization process associated with the opening up of the economy and the concomitant high economic growth. The influential New Economic Geography (NEG) of Nobel Prize winner Paul Krugman accepts the view of India's metropolitan dynamics as the country's chief economic motor.The prominence of the NEG view in World Development Report 2009 1 lends particular weight to it. 
India's urban policies during the last 15 years have supported polarized growth. Metropolitan concentration and neo-liberal restructuring have advanced hand in hand, the former at times even leading economic liberalization. The current spending of the Jawaharlal Nehru National Urban Renewal Mission (JNNURM), launched in December 2005 with a Rs 100,000-crore fund, targets 65 strategic urban centers with strong economic growth potential. They include 35 metro cities and mega-cities plus other cities of under one million inhabitants which are either state capitals or cities of particular cultural, historical or touristic importance, such as Pondicherry, Ujjain and Shillong. Its stated objective, "to create economically productive, efficient, equitable and responsive cities", appears well aligned with national economic strategies such as the promotion of public-private partnerships in infrastructure development and the government's Special Economic Zone (SEZ) policy. Half of the amount has been earmarked for essential infrastructure projects by the end of 2010 (Sivaramakrishnan, 2009). In this context, the role of the State Governments in urban affairs remains central, as demonstrated by their efforts to promote their own capital cities as major investment centers using JNNURM resources, while quite often ignoring local urban authorities (Sivaramakrishnan, 2009). The effects of these policies are most visible in the metropolitan cities; Mumbai exhibits them to a considerable extent, along with Delhi (S. Banerjee-Guha, 2009).

However, this approach has led to a neglect of the secondary and small cities, which together account for a significant share of India's present urban dynamics.

Following the Geopolis guidelines, we classify as urban all physical agglomerates of over 10,000 inhabitants, an agglomerate being understood as a contiguous built-up area. The objective of the e-Geopolis project is to promote the use of a single, globally applicable technical definition of the term. This approach rests on:

• a simple morphological criterion applied across space and time: the contiguity of built-up areas, with a maximum of 200 meters separating constructions; and
• a single threshold (10,000 inhabitants) applied uniformly across the board, even when the national definition uses other criteria.

Defined in this way, the Geopolis approach eliminates the technical and methodological biases of the official definition, which in the case of India are linked to the spread of an agglomeration over different administrative units, officially rural or urban, and/or across State borders.

As far back as 1987, Ramachandran proposed the concept of the "geographical city" to avoid the biases introduced by the Census: "It [the geographical city] includes all contiguously built-up areas around a main settlement, irrespective of the administrative status of these areas… its limits can be accurately determined with the help of an aerial photograph of the city. However, such aerial photographs are not always available, so the geographical city is essentially an idealized, abstract concept" (p. 110).
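To make the morphological criterion concrete, here is a minimal sketch of how the two Geopolis rules (200-metre contiguity and the 10,000-inhabitant threshold) could be applied to a small table of built-up footprints. The data structure, the field layout and the use of plain Euclidean distances on projected point coordinates are illustrative assumptions; the actual e-Geopolis processing chain works on digitized built-up polygons derived from imagery.

```python
import math
from itertools import combinations

# Each built-up footprint: (id, x, y, population); coordinates assumed to be in metres.
footprints = [
    ("A", 0.0, 0.0, 6500),
    ("B", 150.0, 40.0, 5200),   # within 200 m of A -> same agglomerate
    ("C", 5000.0, 0.0, 9000),   # isolated and below the 10,000 threshold
]

parent = {fid: fid for fid, *_ in footprints}

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

# Rule 1: merge footprints separated by less than 200 m of non-built-up space.
for (i, xi, yi, _), (j, xj, yj, _) in combinations(footprints, 2):
    if math.hypot(xi - xj, yi - yj) <= 200.0:
        union(i, j)

# Rule 2: an agglomerate counts as urban if its total population reaches 10,000.
clusters = {}
for fid, _, _, pop in footprints:
    clusters.setdefault(find(fid), []).append((fid, pop))

for members in clusters.values():
    total = sum(pop for _, pop in members)
    status = "urban agglomerate" if total >= 10_000 else "below threshold"
    print([fid for fid, _ in members], total, status)
```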
Having carried out the necessary processes of standardization on a diachronic set of data, verification on the ground and digitalization of each morphological configuration, we can now offer a scientifically valid view of India's urban continuum.These steps have eliminated most (if not all) of the distortions resulting from an unquestioning dependence on official figures.Besides attempting a classical analysis of demographic trends, we also consider in this paper the country's urban forms and, indeed, the complete settlement system. In examining the processes of physical and socio-demographic agglomeration, we impose no prior assumptions or models regarding economic, political, or administrative influences on the setting.The socio-geographic transformation that we uncover as we consider the physical and geometrical dimensions of agglomerates, and the trends in their distribution, concentration, diffusion, and redistribution that will become discernible, enable an analysis of the factors of the agglomeration process.It then becomes possible to take a closer look at the hypothesis of the New Economic Geography, which sees the polarization of the population as correlated to the economy and the financial conditions in the major metropolitan regions. Indeed, India has been experiencing a high level of GDP growth but with a moderate trend in demographic polarization by the major cities.Clearly, the two processes could hardly be said to overlap to any significant degree.The diffused demographic urban system that characterizes India, in fact, does not seem to have affected the country's strong economic growth, with GDP advancing at an annual average of 7.2% from 2000 to 2008 at constant prices.In 2009, its economic resilience and its ability to avoid most of the effects of the global financial crisis, especially in terms of industrial production, provided a clear proof of the trend, its GDP registering a 6.9% growth rate for the year.The parliamentary elections of May 2009 served to confirm the emergence of the ordinary provincial settlement as a focus of an urban setting with a clear political consciousness: the provincial electorate or the mofussil society distanced itself from both the Hindu nationalist and the Communist parties and their agendas, proving that a rural India for which those agendas might have held an appeal only existed in the minds of politicians. Let us first highlight the problems connected with the irregular uses of the official urban definition from States to States and other discrepancies.Then, we display the difference between the landscape of morphological agglomerates observed and the official set of urban Local Bodies regarding the primacy and metropolitanization before reviewing the emergence of secondary and small agglomerates.An extended urban landscape is revealed, and its consequences are analyzed. Problems of definition and Census-induced myopia regarding the agglomeration process As officially recorded, urban demographic growth in India was very significant during the past century.The urban population doubled from 1901 to 1947 (Independence of India) and grew at least six-fold during the period from 1951 to 2001. 
However, the rate of urbanization, as defined by the Census of India, remains one of the lowest in the world (28% for 2001), while the natural growth rate for the country remains relatively high (+1.6%/ year).It is therefore generally agreed that India is presents the highest potential of urban growth among world economies.Paradoxically, India's rate of urbanization appears even lower than that of Western Africa, which was 30% in 2000 (Denis and Moriconi-Ebrard, 2009). This single relative figure does not reflect urban reality and regional heterogeneity: the urban population in several states like Tamil Nadu (43.9%),Maharashtra (42.4%), and Gujarat (37.5%) was well above the national average for 2001.Even at less than 28%, according to the 2001 Census results, India's urban population officially amounted to 285 million inhabitants, the equivalent of almost the entire population of the United States of America.One out of every 10 urban citizens of the world is Indian.India has 35 cities of one million-plus inhabitants with a total of 108 million inhabitants accounting for 38% of the country's official urban population. One of the most talked-about aspects of this growth is the emergence of three agglomerations (Delhi, Mumbai, and Kolkata) with populations exceeding 17 million inhabitants each in 2009, thus belonging to the class of the world's "mega-cities" as defined by the United Nations/ESA (World Urbanization Prospects). However, the process of urbanization extends well beyond these mega-cities and the so-called metrocities in India (cities having more than one million inhabitants), spilling over into vaguely defined small-and medium-sized localities.It is this category, hovering between the urban and the rural, that eludes the statistician's eye and therefore presents difficulties for a proper demographic profiling of the country. Without denying the importance of metrocities in India's booming economy, one must in fairness take account of the spectacular growth of small-and medium-sized agglomerates. Their classification as either rural or urban is a critical issue vis-à-vis harmonious or inclusive development and spatial redistribution of wealth. The existence of the category was first noticed in the 1980s.P. Rao (1983) pointed to the state of the country's "rural-urban continuum", calling the topic a key question of that decade.The subject has suffered neglect in recent times, as urban development research and planning have tended to conform to mainstream thinking on economic liberalization with metro cities as its motors. In 2001, for the first time in India, official statistics (see Tab. 1) showed that the number of "villages" with more than 10,000 inhabitants exceeded the number of "towns" and "urban areas" with comparable populations.If these "villages" were added to the urban category, then India's proportion of that category in its total population would rise significantly.Thus, one may legitimately ask: Are these "villages" rural or urban in reality?In the interest of scientific propriety, one may question not only the methods used in the official measurement of the urban growth in the country, but also the representation of the phenomenon, given that they failed to take into account the existence of several thousand small-sized agglomerates meeting the specified demographic criteria of the urban category. 
One suspects that statistical and political considerations have led to an omission of thousands of small agglomerations of between 10,000 and 30,000 inhabitants.The problem is exacerbated by another anomaly: while Gandhi viewed the village as the soul of India, there is the transnational, cosmopolitan and neo-liberal view of the metrocity as a uniquely effective motor of all-round national development, a view favored by India's private and dominant public actors. In fact, the vision of village development as the objective of the PURA 8 program (Providing Urban Amenities in Rural Areas), launched in 2004 in several states, has already been overtaken by the reality of population and settlement dynamics.The aim of the program, which was proposed by former President A. P. J. Abdul Kalam, is to enhance village facilities in order to reduce migration of villagers to urban areas for employment, education, healthcare, etc. Demographic versus discretionary criteria Aiming to do away with the confusion surrounding the meaning of the term "town", the Census of India (The Registrar General & Census Commissioner, 2001) in 2001 revived two definitions, one from 1951 and the other from 1961, though with some slight modifications (Verma, 2006).A statutory town was defined as one which possessed one of the following: a municipality, a municipal corporation, a cantonment board, or a notified town area committee.That State law could determine the status of such a town meant, in effect, that often contradictory political pulls could play a role in the decision.A Census town had to meet the following minimum criteria: (a) a population of 5,000, (b) 75% of the males in the working population engaged in non-agricultural activities, and (c) a population density of 400/km2. The regional list of statutory towns depends on administrative decision.Indeed, the 2001 Census showed 191 local units with fewer than 5,000 inhabitants designated as urban, sometimes with only a few hundred.In contrast, Census towns only depend on statistical criteria.The declared Census towns are not necessarily becoming towns in terms of services and programs provided as for tax status for its inhabitants and activities.They stay mainly under the Rural Development and Panchayat Raj administration. How one chose to define a Census town-either in very liberal terms of the phrase at one extreme or a very restrictive interpretation at the other-depended upon the State in which one was located, the Census period, and, of course, one's political affiliation.The population density element added to the general variability characterizing the entire approach.The local area chosen for this parameter rarely corresponded to the built-up extension, which it often overestimated.There was also the incontrovertible fact of wide variations in population density prevailing among regions even within a single State, thus making it difficult to apply a common density referential across the board. A further complication arose when the directors of the Census operations were authorized in 1991 to designate some areas as urban even if those areas only possessed some of the distinct urban characteristics 9 .That such a step was to be taken in consultation with the concerned officers in the state governments, the union territories and the Census Commissioner of India did little to mitigate the arbitrary nature of the measure 10 .Sivaramakrishnan et al. (2005, p. 
31) provides a more recent appraisal of the situation: "There are about 3,000 villages having a population above 10,000 in the country and their inclusion within the urban fold would immediately increase the percentage of the urban population by five percentage points. It may nevertheless be pointed out that, for acquiring urban status, it is necessary for a settlement to have 75% of the male workers involved in non-agricultural activities. Unfortunately, the share of non-agricultural employment in these 3,000 large villages is less than 40% in 1991." The set of urban units shown by the Census thus appears affected by many discrepancies in the regional application of the definition. Our own provisional results confirm, in the current state of our knowledge, the underestimation of the small agglomerations: many of the 3,986 "villages" of more than 10,000 inhabitants in fact correspond to dense and continuous built-up extensions. Census town notification is not linked to any change in administrative status; Census towns remain under the rural panchayat administration and do not benefit from any urban schemes. This fundamental point has to be kept in mind when assessing the urban dynamic.

Complexity of the concept of Urban Agglomeration (UA)

Also widely used for political, administrative and census purposes, the Urban Agglomeration (UA) notion introduces many instabilities and distortions into the common vision of India's urban distribution and its growth. The 382 Urban Agglomerations of 2001 followed the definition given by the census administration. One of the main problems with the urban agglomeration as presently interpreted is that it can only be defined within political and administrative boundaries, that is, by the State. In this scenario, the real sprawl of agglomerates, such as Delhi's toward Haryana or Uttar Pradesh, is totally neglected. Another example is Pondicherry, which extends unnoticed into Tamil Nadu, the surrounding State. Beyond this constraint, many core cities that morphologically extend beyond their administrative boundaries and encapsulate villages have not been constituted as UAs or allowed to enlarge their existing UA with outgrowths (OGs); the procedure is very long and politically debated and negotiated. So, again, a deformed view of the urban setting emerges, leading to a wide underestimation of the "real" morphological process of agglomeration. Furthermore, the UA category does not incorporate the contiguous spreading of urban agglomerates adequately or on time: between 1991 and 2001, only one solitary UA was promoted, against 105 between 1981 and 1991. As presently understood, the UA concept leads to a highly distorted view of the reality of urbanization in India.
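As a purely illustrative aid, the sketch below applies the three statistical tests for a Census town described in the preceding subsection (a population of at least 5,000, at least 75% of male workers outside agriculture, and a density of at least 400 inhabitants per km²) to a hypothetical settlement record. The settlement name and field names are assumptions, and the real designation process involves further administrative steps beyond these tests.

```python
from dataclasses import dataclass

@dataclass
class Settlement:
    name: str
    population: int
    male_workers: int
    male_nonagri_workers: int
    area_km2: float

def is_census_town(s: Settlement) -> bool:
    """Apply the three statistical tests used for Census towns."""
    if s.population < 5_000:
        return False
    nonagri_share = s.male_nonagri_workers / s.male_workers if s.male_workers else 0.0
    if nonagri_share < 0.75:
        return False
    return s.population / s.area_km2 >= 400

# A large "village": big enough and dense enough, but too agricultural to qualify.
example = Settlement("Largepur", population=14_000, male_workers=4_000,
                     male_nonagri_workers=1_500, area_km2=9.0)
print(is_census_town(example))  # False: only 37.5% of male workers are non-agricultural
```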
The increase in the number of "cities" between two consecutive Censuses, and more generally the rate of urban growth, together reflect the difference between rural agglomerations that become urban ("incoming") and urban agglomerations that revert to rural status ("outgoing"). Small towns and those at the fringe of an urban agglomeration form a continuum with unstable thresholds at the lower end of the urban hierarchy, and they are clearly manipulated for political and administrative expediency. At the national level, the population figures for newly promoted towns minus those for declassified towns indicate a general trend toward reduced upgrading of villages to town status, quite apart from a demographic and morphological trend clearly in favor of agglomerations: net promotions represent 13.8% of demographic urban growth during 1961-1971, 14.8% during 1971-1981, 9.4% during 1981-1991, and only 6.2% during 1991-2001. In Andhra Pradesh alone, the declassification of 79 agglomerations accounted for a decrease of 7% in the urban population, while the inclusion of 35 (within UAs only) increased the urban population by 2%. The apparent stagnancy of the urbanization rate in Andhra Pradesh between 1991 and 2001 (from 26.8% to 27.1%, while the regional annual demographic growth rate was 1.15% over the same decade) is largely the result of the missteps detailed above. In contrast, in Tamil Nadu, 456 new towns were created while 61 were declassified: about 40% of all urban promotions in India took place in a State that accounts for only 6% of the Indian population, and these promotions represent 21% of the regional urban population in 2001. Many similar anomalies could be cited.

The foregoing analysis exposes the regional distortions and flawed diachronic approaches that affect any reading of the urbanization scene in India. Schaffar's claim (2010, p. 109) that data from the official Census figures, based on apparently harmonized official definitions, could be used to obtain reliable statistical series becomes as untenable as P. Datta's (2006) naïve and unquestioning acceptance of the official urban/rural divide.

Under the dynamics of decentralization, especially after the progressive regional implementation since 1994 of the 73rd and 74th Amendments, State decisions have been seen to misrepresent the urban setting, driven by political considerations and local questions of development. Our colleague Amitabh Kundu (JNU, Delhi), like many other economists involved in urban analysis and in studies of labor and production statistics, underlined "that the different states in India are not amenable to comparison amongst themselves". Analyses trying to explain inter-state differences in terms of urbanization usually commit a political and statistical fallacy. For instance, the level of urbanization appears low, at 26%, in Kerala, where 1,072 settlements with more than 10,000 inhabitants each, together representing 21.7 million inhabitants, are registered as villages (Tab. 1). Chaurasia believes that this is "because of the reluctance of the state government to grant municipal status to large villages" (2007, p. 150). Certainly the same could be said about West Bengal, but more precise analysis based on fieldwork is required to understand the local, regional and national politics of promoting towns. We are not yet able to give a clear explanation for these regional distortions of the urban category.
It is clear that reliance on official urban growth rates or official urban categories can lead to severely flawed statistical constructions, not valid physical distributions. An unsuitable classification for the current urban system The official Indian definition of "urban" introduces a high degree of variability in the contours of the urban system, especially at the bottom of the urban hierarchy.It is therefore rightly criticized by many scholars (Sivaramakrishnan et al., 2005, p. 19;Ramachandran, 1987).Official Indian statistics classify cities into six categories: Class I groups together cities with at least 100,000 inhabitants; Class II, towns with more than 50,000 inhabitants; Class III, towns with more than 20,000 inhabitants; Class IV, towns with more than 10,000 inhabitants; Class V towns with more than 5,000 inhabitants; and Class VI, towns with fewer than 5,000 inhabitants.Table 3 shows an additional category of "metro cities" with one million or more inhabitants. Retaining only 3,279 settlements as urban with at least 10,000 inhabitants each -389 in Class I (35 with more than 1 million inhabitants), 399 in Class II, 1,154 in Class III and 1,337 in Class IV-, the official Indian urban data set presents a very "Malthusian" vision. Indiapolis: an alternative vision of contemporary Indian urbanization In the 1891 Census, Class I included 23 cities for an overall national population of about 250 million inhabitants; 14 in 2001, it had 389 Cities.This type of representation in a system of classes with demographically fixed thresholds in an expanding world leads to the image of an urban explosion, which is primarily that of statistical categories: from Census to Census, few new settlements are brought into the urban category, and those presently existing automatically move toward Class I, which now includes many secondary cities with well over 100,000 inhabitants.The increasing dominance of the Class I cities results from an automatic graduation of lower-order towns into the Class I category.In fact, in absolute terms, even Class I becomes much more important with Indiapolis, numbering 451 units versus 389 official units (Table 3).Indiapolis adds two agglomerations, each with more than one million inhabitants, and 60 supplementary agglomerates of between 100,000 and 1,000,000 inhabitants each.In total, some 49 million inhabitants have been added to Class I, of which 38 million are living in contiguous units on the fringes of the official Class I units.Following the Geopolis definition, some 4,790 towns and villages have been incorporated in Class I cities as they fall in the category of contiguous built-up structures.In the Indiapolis approach, the distribution appears less truncated, given 37.5% as the global rate of urbanization for 2001 and considering all the physical agglomerates of at least 10,000 inhabitants.Compared to the official rate 16 of 27.1%, this is 10.4% higher in terms of the population but almost double in terms of the number of settlements with at least 10,000 inhabitants considered. 
Officially, there were 888 towns of between 5,000 and 10,000 inhabitants in 2001, whereas the Indiapolis program observed 11,917 continuous built-up footprints in this range. These settlements represent some 79 million inhabitants, while the official Class V accounts for only 6.6 million. Globally, of the 464 million persons, or 45% of Indian citizens, living in agglomerates of at least 5,000 inhabitants each in 2001, 176 million (38%) live in administrative units with rural status. The use of the Geopolis definition makes it possible to assess the agglomeration system and its trend much more uniformly at the scale of the subcontinent. Bottom-level urbanization is unveiled, and regional and local political/administrative bias is avoided. A uniform use of the "contiguous built-up area" concept across the subcontinent reveals the spread of small agglomerates of below 50,000 inhabitants, but also the broad extent of the sub-metropolitan world that is not included in the official urban agglomeration category (UA), which numbered 382 units in 2001.

The problem that one encounters in studying urbanization in India today is as much the expansion of small cities as the representation of the phenomenon itself. Yet it is precisely in this category of localities that we presently observe the most significant changes, which raise questions of enlarged regional metropolitanization, corridor development, and urban villages (McGee, 1992; Denis, 2006). Are we witnessing a dynamic of polarization in India or a more dispersed trend? Are the small localities going to be incorporated into extended metropolises polarizing the economic growth potential, or will a more diffused urban landscape develop?

Metropolitanization versus proliferation of small agglomerations

A "soft" primacy

As in other large countries such as China and Brazil, the Indian urban system is not affected by an extreme primacy trend. India's size and its colonial legacy, mitigated by the rule of the princely states, have been among the major causes disallowing urban primacy; the subcontinent did not become a politically unified nation until 1947 (Verma, 2006, p. 194). However, over the course of the past half-century, the country's urban system and regional urban dynamics have changed. Mumbai, Delhi, and Kolkata, each a mega-city of well over 10 million inhabitants, and Chennai, with almost 8 million, have led the transformation.

While the absolute value of the slope of the rank-size curve for very large urban systems, such as those of the USA or Europe, is almost equal to 1.00, it is 1.02 for the official urban areas of over 10,000 inhabitants and 1.05 for the Geopolis/Indiapolis data (Fig. 2). The maturity of the urban system does not seem to be the only explanation for this conformity to Zipf's Law: an assemblage of multiple systems also introduces a kind of apparent harmonization, as in the case of Europe. India's long history of multiple and sometimes contradictory public interventions, as well as divergent economic and social actions (cf. Chakravorty and Lall, 2007), has certainly contributed to diffusing growth and has reduced the prospects of a clear and unique primacy. Today, these contradictory policies appear in the form of "support India Inc." versus inclusive growth policies, or the JNNURM program versus the National Rural Employment Guarantee Act (NREGA).
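The rank-size slope quoted above can be estimated by an ordinary least-squares fit of log(population) against log(rank). The sketch below shows the computation on a made-up list of agglomerate populations; the numbers are illustrative, not the Indiapolis data.

```python
import math

def rank_size_slope(populations):
    """OLS slope of log(population) on log(rank); Zipf's law corresponds to a slope near -1."""
    sizes = sorted(populations, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(sizes) + 1)]
    ys = [math.log(size) for size in sizes]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Toy example: a perfectly Zipfian system (size proportional to 1/rank) gives a slope of -1.
toy = [10_000_000 / r for r in range(1, 501)]
print(round(rank_size_slope(toy), 3))  # -1.0 (up to rounding)
```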
The various indicators of fast-growing regional capitals suggest a State-wise centralization in India. In this era of decentralization and economic liberalization, the role of the hierarchy of public institutions vis-à-vis the hierarchy of the urban network invites particular attention. In this context, the assertiveness of the States, starting from 1956, has given rise to a climate favorable to the "primate city" concept at the regional level. The regional hierarchies are understandably perturbed by the involvement of certain major cities in the economic dynamics of the country. Bangalore/Bengaluru's dominance is strengthened by its leading position in the IT sector, as is Chennai's with its automotive, IT, and BPO industries. The economic rise of Pune, notably with the presence of the Tata group and the IT sector, has weakened the regional primacy of Mumbai. Visakhapatnam (1.4 million inhabitants), an industrial hub supported by a major harbor and boosted by strong public investment by the Indian Navy since 2000, can now compete with Hyderabad. Siliguri, in West Bengal, is also a fast-growing industrial city capable of reducing Kolkata's primacy in the State as well as the physical expansion of the Kolkata metropolis. Nevertheless, Kolkata remains way ahead of the other agglomerates of West Bengal: its rate of primacy was 24.9 in 2001, and 189 of the State's largest agglomerates would be needed to equal its population. In Maharashtra, it would take the total population of just 14 of the largest agglomerates in the State to equal that of Mumbai.

The explosion of what?

The image of an urban explosion in the mega-cities belongs to the realm of statistics. The eight largest metropolitan cities of India, with projected populations of over 6 million in 2011, excepting Delhi, experienced a decline in their growth rates from 1991 to 2001 compared to the preceding decade (table 5). Mumbai, in particular, is not growing much faster than Kolkata in spite of its high economic growth, the opening up of the economy to FDIs, and the city's financial centrality. The slight rise in the growth rate of Delhi's population is an effect of the statistical definition provided by the Census of India, which, by minimizing the size of the real population of Delhi's agglomeration, biases the appraisal. The major hurdle to Mumbai's spreading is its highly constrained geographical area: its 472 km² make Mumbai the densest mega-city in the world, with 31,138 inhabitants/km², three times as dense as Delhi with its 1,413 km². In fact, Mumbai only covers an area equal to Chennai's while holding more than twice its population. Outside the industrialized countries, Kolkata became, in 1872, the first agglomeration in the world to have more than one million inhabitants. In 1991, it was overtaken by Mumbai, which had started to become a financial hub; today, Delhi's agglomeration takes the top spot.
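The primacy figures quoted here rest on two simple measures: the ratio of the largest agglomerate to the second largest, and the number of next-largest agglomerates whose combined population equals that of the leader. The sketch below uses invented populations purely to show the computation.

```python
def primacy_ratio(populations):
    """Population of the largest agglomerate divided by that of the second largest."""
    sizes = sorted(populations, reverse=True)
    return sizes[0] / sizes[1]

def agglomerates_needed_to_match_leader(populations):
    """How many of the next-largest agglomerates are needed to equal the leader's population."""
    sizes = sorted(populations, reverse=True)
    leader, total = sizes[0], 0
    for count, pop in enumerate(sizes[1:], start=1):
        total += pop
        if total >= leader:
            return count
    return None  # the rest of the system never catches up

# Invented regional system dominated by one very large agglomerate.
region = [13_000_000, 520_000, 300_000, 250_000] + [100_000] * 200
print(primacy_ratio(region))                       # 25.0
print(agglomerates_needed_to_match_leader(region))
```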
Kolkata is again ahead of Mumbai in demographic terms: its built-up area has extended greatly, encompassing the surrounding towns and villages, a feat that has not been possible for the capital of Maharashtra.The mitigating trend of Mumbai trying to correspond to the archetype of the Indian Global City, after the famous McKinsey report "Vision Mumbai", has been described by Appadurai (2002) and Varma (2004) as being caught up in the "provincializing", "denationalization" and "decosmopolitanization" processes after the riots of the closing months of 1992.As Table 6 shows, Chennai, Hyderabad, Bangalore/Bengaluru, and Ahmadabad either show attrition in their shares or stagnancy since the 1980s. Taking into account the 35 official metro cities of 2001 (table 7), one might make the following observation: these cities have held 35% of the country's urban population in 1941; that share remained stable at around 39% between 1961 and 2001.However, as a percentage of the total population of India for comparable periods, their population showed significant variations, rising from 0.4% to almost 11.0%-pointing to a strong process of metropolitan concentration at work during the preceding half-century.In fact, we estimate that the number of agglomerates of over one million will reach 60 in 2011 and will accommodate 18% of India's population.It is our expectation that the process of metropolitanization will extend further and cover more cities.Demographic magnetism continued to be felt along with polarization resulting from high economic growth in the metropolitan regions.Delhi had to find room for 530,000 migrants per year between 1991 and 2001, while Mumbai had to cope with 516,000 new arrivals per year.Such trends imposed pressures on transport services and housing, among other facilities. The negative impacts of economic concentration on the entire social system clearly outweigh its apparent benefits.A corollary of a decline in the rate of natural growth is the significant growth that migrations contribute to major metropolitan agglomerates.Migrations accounted for 75% of new agglomerations in Delhi, where the rate of natural growth fell from 1.4% to 1.3%.For Mumbai, it was 65% of new agglomerations vis-à-vis the fall in the rate of natural growth from 1.5% to 1.2%. Metropolitan polarization? Amid the restructuring in the metro cities, particularly in those located in the more dynamic States, recent political happenings in West Bengal might well lead to a quicker economic repolarization within the Kolkata metropolitan area.Together with the accelerating changes in urban governance since the late 1990s, this movement toward increasing private development has been driven by the need for modernization experienced by the growing cities themselves as the country enters its second decade of steadily high economic growth.It appears sensible to expect that, having gone along with the Center's macro-economic policy and having accepted its neo-liberal dogma and India's positioning in the global markets, these forces would be able to sustain and nourish this growth machine (Shaw, 2006;Banerjee-Guha, 2009).However, the agglomeration process extends well beyond the major million cities and metro cities and increasingly to secondary and smaller cities, not to mention village agglomerates.The questions that arise in this context concern the aspect of spatial structuring of this extended urban world, specifically, how metropolitan regions are expanding and where and how corridors are coming into being. 
This "peripheralization" and satellite reorientation of economic development has been encouraged by state policy and local government support through simplified building laws and zonal regulations, as well as through incentives.One of the main drivers of the cluster approach with the purpose of better polarized economic growth has been the SEZ Act of 2005, which has given a powerful impetus to investment.Metropolitan areas are now spreading outwards to meet the rising demand for housing, office space, schools, hospitals, and other services (Shaw, 2005).This has resulted in the merger of smaller towns or their inclusion in the larger urban agglomerations.More and more villages are being encroached upon through this process. Our own calculations, based on the per capita GDP as provided by the Regional Human Development Report Series, confirm the metropolitan concentration in terms of the Gross Urban Product (GUP).The seven major metropolises' agglomerates with more than 5 million inhabitants each (Delhi, Mumbai, Kolkata, Chennai, Bangalore/Bengaluru, Hyderabad, and Ahmadabad) accounted for 19.6% of India's GDP in 2008 and 14.4% in 2001. Fig. 3: Share of major financial cities in the National Stock Exchange capital turnover (%) Source: National Stock Exchange of India Limited Some 73.6% of the stock exchange capital turnover in 2004 is concentrated in the three major financial and demographic cities of India: Mumbai, Delhi and Kolkata (figure 3).While their respective share had been more or less stable from the end of the 1990s until 2004, Chennai's Cybergeo : European Journal of Geography share declined steadily from 4.6% to 2.8% during the same period, as did that of Pune (from 2.2% to 0.9%) and Coimbatore, the city of the textile and mechanical industries (from 3.4% to 0.4%).A financial concentration has clearly been at work alongside industrial redistribution through the expanding network of metropolises and secondary cities. The extent of global influences and metropolitan polarization can also be seen through the concentration of FDIs within the metropolitan areas.In an earlier study, Shaw (1999) detailed the size and type of both global and domestic investments that the major urban regions of the country had attracted during the 1991 to 1998 period.The trend toward metropolitan polarization that the study revealed has since been confirmed by Chakravorty (2007) Three non-exhaustive hypotheses can be tested regarding the disjunction between the continuing economic polarization and relatively low demographic attraction of major metrocities: • The non-inclusive dimension of this economic transition and polarization may lead to a very selective magnetism of cities; • The urban population growth during 1991-2001 did not take place entirely within the UAs' legal framework; rather, it lay along the transport axis and might have been more dispersed than has been thought; • The growing dispersal of micro-agglomerates is associated with alternative local dynamics at the edge of an urban-rural linkage at a level that remains to be assessed in regard to diffusion and potential percolation from metropolitan growth. 
These potential trends will have to be assessed and explained more precisely in the near future using complementary economic information, notably GDP and financial data by district, and plant locations. From the Indiapolis Geodatabase, it appears that 2,685 contiguous built-up areas with at least 10,000 inhabitants each were composed of more than one unit (Tab. 8). In other words, 41% of the agglomerates observed encroach on at least one adjacent administrative unit. These multi-unit agglomerates accommodate 77.2% of the urban population. Urban local bodies incorporated in multi-unit built-up areas experienced, on average, an accelerated annual demographic growth rate of 3.5% between 1991 and 2001, one percentage point more than the peripheral rural local units where 11% of the agglomerate populations reside. Those 42 million citizens were living on the fully adjacent outskirts of cities without enjoying any of the benefits of the agglomerate status guaranteed under the 74th Amendment: they depend on the rural administration for tax, services and schemes. Another 18 million were settled in 1,468 multi-village agglomerates with no affiliation to any urban local body; more than 80 of those units had over 50,000 inhabitants each in 2001. There were also almost 27 million Indians living in single rural localities forming contiguous built-up areas of at least 10,000 inhabitants. Thus, in 2001, more than 87 million Indian citizens were generating and experiencing agglomerated conditions of life without receiving any particular attention (Map 6). The agglomerated population living in contiguous built-up areas of at least 10,000 inhabitants in 2001 occupied 1.6% of the territory of the Indian Union. It is in this tiny patch of land that most of the private and much of the sizable public funds are invested. Nevertheless, the 54,366 km² that this figure represents is more than double the area which the Indian Ministry of Agriculture has notified as land for non-agricultural use (excluding forests and barren lands). This reveals yet another bias in the estimation of the agglomerated world in India (Indiastat.com, 2009): urban encroachment is officially underestimated. The 185,500 km² under permanent cultivation account for 5.6% of the total area of the country. Their functional proximity to the urban system puts this vital space at risk of housing, infrastructure and industrial encroachment, which occurs mostly in peripheral villages where urban legislation to control land use is not in place.

According to the Geopolis approach, the agglomeration of Thiruvananthapuram (previously Trivandrum), the capital of Kerala State (Map 7), has a contiguous built-up area of 9,040 km² (six times the built-up area of Delhi) and accommodates 16.7 million inhabitants. With an average of 1,850 inhabitants per km², this extremely large unit still has a higher density than most of the metropolises of North America and three times the density of the agglomerate of Brussels. In this strategically important city, the military sector sustains and supports the surrounding IT sector. The Desakota model, well known elsewhere in Asia, can be applied in Kerala as an urban/rural continuum, as identified by Chattopadhyay (1988) and analyzed as such by Casinader R.
(1992) and Pauchet and Oliveau (2008). In Kerala, the inclusion of the contiguous built-up villages within the urban agglomerations has the effect of substantially increasing the agglomerate area: one-third of the State appears completely agglomerated, with less than 200 meters between constructions. Kerala is an interesting case in terms of governance and planning if we consider that it is no longer a rural state by all major indicators (availability of good infrastructure facilities, a well-educated workforce and post-reform laws, all of which have attracted a large number of private firms). The share of the service sector in the Regional Gross Product was 64% in 2006, and IT is becoming the major driver of growth (Amir Ullah Khan and Harsh Vivek, 2007).

Among all the aspects that characterize the extreme diversity of forms of Indian urbanization, the most remarkable is the recent proliferation of small agglomerations. Besides (or accompanying) the concentration of population and activities within the million-plus cities and the formation of urban mega-regions, the process of urbanization in India seems more and more oriented toward small and medium agglomerations that are mostly left out of the urban administrative reckoning. The million-plus cities were the fastest growing class, at a 3.1% annual growth rate between 1991 and 2001. Agglomerates below 20,000 were growing at only 2.8%; however, during the 2001-2011 decade their rate could exceed 4% if we consider the stability of the residential migration pattern, a trend already confirmed by several national surveys. Small towns will thus have grown at a rhythm comparable to that of the million-plus cities, notably through the emergence of new units crossing the 10,000-inhabitant mark.

The development of the Indiapolis dataset helps in examining the genesis of this situation: in 2001, 49 million Indian citizens were living in agglomerates of between 10,000 and 20,000 inhabitants, almost 13% of the country's total urban population, distributed across 3,616 agglomerates. The Geopolis method has also opened the way to a morphologically determined projection of the agglomeration landscape for 2011. Using satellite imagery, the technique has enabled the observation and geolocalization of 11,913 agglomerates of between 5,000 and 10,000 inhabitants each. Together with a projection method based on the observed decrease in natural growth by region and constant migration, this yields a figure of 1,977 new settlements of at least 10,000 inhabitants each for 2011. The 26.7 million inhabitants of these agglomerates would have contributed some 15.6% of the absolute growth in population between 2001 and 2011 (Tab. 10).

The emergence of small agglomerates such as these is changing the urban setting with respect to rural access to services. Goods and service facilities are coming nearer to the consumer, and services of new kinds are emerging, inducing a diversification of jobs, notably in construction, food supply and processing, and groceries. They extend to the more advanced areas of telecommunications, the Internet, and transportation. Improved transport facilities have led to daily commuting for seasonal or casual employment in preference to non-residential migration.
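Returning to the 2011 projection mentioned above, here is a minimal sketch of how agglomerates observed in the 5,000-10,000 range in 2001 could be projected forward by combining a region-specific natural growth rate (assumed to decline slowly) with a constant migration component, then counting those that cross the 10,000 threshold. The rates, records and decline parameter are invented for illustration; the actual Indiapolis projections rest on the observed regional series.

```python
# Each record: (name, population in 2001, annual natural growth rate, annual net migration rate).
records = [
    ("Settlement-1", 9_400, 0.016, 0.004),
    ("Settlement-2", 8_100, 0.012, 0.001),
    ("Settlement-3", 6_500, 0.020, 0.000),
]

NATURAL_DECLINE_PER_YEAR = 0.0003  # assumed slow decline in natural growth
YEARS = 10                          # 2001 -> 2011

def project(pop, natural, migration, years=YEARS):
    """Compound the population year by year, letting natural growth decline slowly."""
    for _ in range(years):
        pop *= 1 + natural + migration
        natural = max(natural - NATURAL_DECLINE_PER_YEAR, 0.0)
    return pop

new_towns_2011 = []
for name, pop, nat, mig in records:
    projected = project(pop, nat, mig)
    if projected >= 10_000:
        new_towns_2011.append((name, round(projected)))

print(new_towns_2011)  # settlements projected to cross the 10,000-inhabitant threshold
```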
One will need to further analyze this structural trend of improving rural access to services, which has so far been identified only through a limited number of field studies. For this purpose, one will require a systematically organized set of data bearing on the distribution of services, local public funding, and related activities. The trend holds potential for India's particular strengths, such as cheap labor, low land prices, tax holidays, and flexibility of employment through non-formal linkages with the sectors concerned, all great assets in the global marketplace. Dispersion, rather than urban agglomeration, should lead to reduced costs. This is particularly the case in sectors in which India is leading, such as Business Process Outsourcing, and equally in the textile industry and food processing, driven by local demand and rising consumption and boosted by simplified access to finance. India's high economic growth is driven much more by low costs than by innovation.

Three spatial combinations can be discerned in this context: (i) continuing polarization inducing the enlargement of the metropolitan area by the incorporation of urbanized villages and small towns, tending toward emerging Desakota regions; (ii) axial interconnections supported by corridor infrastructure projects that will generate linear development merging rural and urban environments; and (iii) disseminated/centrifugal trends of work-intensive activities helping to generate some sort of dispersed city landscape, or città diffusa or agropolitana (Ferrario, 2009).

Settlement distribution in India: dispersion and/or concentration?

In the context of the relatively moderate and stable flows of residential migration in India over the past 20 years, the distribution of settlements and the progression toward concentration point to the future form of urbanization in the country. Distance to the nearest town is an easily understood and meaningful measure of urbanization (Ramachandran, 1987, p. 126). For 2001, the Indiapolis urban system, with its 6,467 units, gives a mean distance of 10.1 km between agglomerates with at least 10,000 inhabitants each, whereas the mean distance between official cities is 18.1 km (the distance falls to an average of 6 km when all agglomerates of at least 5,000 inhabitants are included). In 2011, the distance between agglomerates of at least 10,000 inhabitants is projected to be less than 10 km. The mean distance between towns with at least 20,000 inhabitants was 13.3 km in 2001, down from 56 km in 1981. In other words, the agglomerate landscape has shrunk significantly. Specialized services that are only available in large towns of over 100,000 inhabitants are on average 36 km away from any given location, meaning that it is always possible to reach them within a matter of hours.

In some highly urbanized regions, such as Tamil Nadu, the average maximum distance to the nearest city is already only 6 km and will drop to 5.6 km in 2011. For 2011, we can expect 12.3 km for Madhya Pradesh (13.7 km in 2001), 9.2 km for Gujarat (10.2 km in 2001), and 8.1 km for Andhra Pradesh (8.8 km in 2001).
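The distance figures above, and the Nearest Neighbor Index used in the following paragraphs, both rest on the mean distance from each agglomerate to its nearest neighbor. A minimal sketch of both computations follows, using invented projected coordinates (in kilometres) over an assumed study area; the real calculations run on the full set of geolocated Indiapolis units.

```python
import math

def mean_nearest_neighbor_distance(points):
    """Average distance (same unit as the coordinates) from each point to its nearest neighbor."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        total += min(math.hypot(xi - xj, yi - yj)
                     for j, (xj, yj) in enumerate(points) if j != i)
    return total / len(points)

def nearest_neighbor_index(points, area):
    """Clark-Evans ratio: observed mean NN distance over the mean expected for a random pattern."""
    expected = 0.5 / math.sqrt(len(points) / area)  # expected under complete spatial randomness
    return mean_nearest_neighbor_distance(points) / expected

# Invented coordinates (km) of agglomerates in a 100 km x 100 km region,
# grouped into two clusters to mimic an aggregated settlement pattern.
points = [(10, 10), (11, 12), (12, 10), (13, 11),
          (80, 80), (81, 82), (82, 80), (83, 81)]
print(round(mean_nearest_neighbor_distance(points), 1))           # ~1.8 km
print(round(nearest_neighbor_index(points, area=100 * 100), 2))   # well below 1 -> clustered
```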
We see that the main difference between the advanced States and the backward regions lies in their share of official cities relative to the morphological agglomerations existing on the ground. In Bihar, a State that appears very densely agglomerated, only 15% of the settlements with a population of 10,000 or more are officially designated as towns (Map 6). In comparison, in Tamil Nadu, 100% of the observed agglomerates are officially listed as towns, as are 67% in Maharashtra and 55% in Gujarat. This reveals the extent of the underestimation of the emerging agglomerates in backward states and the associated deficiency in governance. Bihar is a poorly governed State. For several reasons not yet clearly understood, and judging from previously observed trends in economic and political history, urbanization in terms of public infrastructures and services does not always follow the distribution of settlements or even support the trend of private investments.

The question then becomes one of examining the distribution pattern of agglomerates over the whole Indian territory. Using the Nearest Neighbor Index, based on the average Euclidean distance from each feature to its nearest neighboring feature, it is possible to obtain a first approximation of the aggregation level and trend of the urban system at the regional level. The aggregation level for all of India has been quite high for all settlements with at least 10,000 inhabitants: 0.60 in 2001, potentially moving to 0.58 in 2011. The estimated 3,197 new agglomerates that will have at least 10,000 inhabitants each in 2011 do not skew the current level of aggregation of settlements, implying that some of them will develop within the existing metropolitan halos and emerging corridors. Of course, the level of aggregation becomes more random as the cohort of towns considered moves up from Class IV to Class I and finally to the million cities, which had an index of 0.85 in 2001 for all India. In some regions, like Karnataka in the Deccan plateau environment, the level of aggregation appears lower than the average: with an index of 0.91 in 2001, the distribution is one of the closest to random, followed by Tamil Nadu's 0.71, West Bengal's 0.63 and Gujarat's 0.55. The more dispersed distributions correspond to "rural" states like Bihar (0.86), but also to Punjab (1.03), with its long urban history and its division after the Partition of 1947 (Grewal, 2005 & 2009). Historical structures of settlement distribution, shaped by a diversity of ancient territorial administrations, economic interlinkages, commercial and/or temple itineraries, and regional production specializations, clearly reflect regional urban settings and can constrain their future.

Conclusion

With the adoption of the Geopolis approach, central to which is the concept of the agglomerate as a contiguous built-up area, it has become possible to place the contemporary agglomeration process in India in a sounder perspective than before. The Geopolis methodology is closely aligned with the United Nations recommendations on the analysis and comparison of urban dynamics worldwide.
In the particular case of India, the Geopolis model of standardized data, verification on the ground, and digitalization of all morphological structures involved reveals the precise nature of the country's urbanization: not what has been widely believed to be a kind of demographic polarization, but in fact a much-diffused process, with agglomerations of 10,000 or more inhabitants occupying 46,884 km² or 1.6% of the country's total area. Mega-cities and their satellites have been thriving under the impetus of economic liberalization. The Geopolis measure gives a global rate of urbanization for agglomerates with over 10,000 inhabitants of 37% in 2001, compared to the official rate of 27%. For statistical and political reasons, the tremendous proliferation of small agglomerations of between 10,000 and 20,000 inhabitants is hidden: this is a missing link in the national policy of planning and urban development. With the Geopolis interpretations, while the major and secondary city layers of over 100,000 coincide with the official ones, differing only marginally in relative terms, a significant feature of the finding is the strong emergence of the small-city layer of fewer than 20,000; its share more than doubled using the Geopolis definition, from 19 million to 49 million inhabitants in 2001. At the same time, the group of Class I cities has grown larger, and 61 new agglomerates and 48 million potential urban inhabitants have been added. The dynamics of metropolitan agglomeration have been studied much more comprehensively and thoroughly, and bottom-level urbanization has been unveiled.

With up-to-date satellite imagery, and based on the existence of 11,698 agglomerates with between 5,000 and 10,000 inhabitants in 2001 and the local urban growth rates of 1981, 1991 and 2001, we estimate that 1,052 new settlements will have at least 10,000 inhabitants each in 2011. Their 11.7 million inhabitants would have contributed to 9.5% of the absolute growth between 2001 and 2011. The degree of urbanization in the country would be around 42% in 2011. In 2011, the average maximum distance to the nearest town (in a polygonal subdivision of space) will be less than 10 km.

In the context of the local and regional implementation of urban decentralization (74th Constitutional Amendment Act of 1992), this alternative view of urbanization can be useful in examining the governance of Urban Local Bodies, since we consider that many states are much more urbanized than the official statistics would seem to indicate, or at least have a higher share of agglomerates than is currently perceived. The exception is Tamil Nadu, where 100% of the observed agglomerates are included in the official Census list of towns, the other states recording between 14.7% (for Bihar) and 67% (for Maharashtra). On the other hand, the objectives of JNNURM, "to create economically productive, efficient, equitable and responsive cities", show close alignment with the national economic strategies, such as the promotion of public-private partnerships in infrastructure development for million-plus cities and the SEZ policy of support for corridors as a "major emerging spatial pattern" (Centre for Policy Research, 2001). With respect to metropolises, urban public policies have led to a wide underestimation of the level of urbanization existing and emerging. For example, the Ministry of Finance in 2009 estimated the urbanization rate for Bihar in 2026 at only 11.6% (C. Vaidya, 2009, p. 10).
Following the UN-Habitat projections, it also estimated that India's rate of urbanization will reach 50% of its total population in 2026. Consequently, we stress that globally the trend of concentration of the urban population in large cities and agglomerations is clearly getting stronger (ibid., p. 8).

As can be seen from their attempts to promote their own capital cities as major investment centers, the States command considerable clout in the matter of urban affairs. The country's metropolitan cities bear witness to that power. It would be superfluous to add that such policies have led to a neglect of the secondary and small cities, which in fact represent a significant share of the urban dynamics of present-day India. A restructuring of the system of governance could be the most crucial element in a strategy aimed at reinforcing the country's infrastructure and civic amenities and at attracting investment through public-private partnerships in order to participate more fully in India Inc. Such a strategy calls for support in the areas of sustainable development and education.

As elsewhere in the world, the responsibility for the policy of neglect that this study has uncovered rests squarely on the shoulders of researchers, geographers, sociologists, and economists, who have been focusing their attention on the top layer of the urban hierarchy and routinely cultivating a metropolis-biased vision in the context of globalization. World cities/global cities are grabbing their attention while the burgeoning common urban landscape, small and medium-sized, remains an unknown place. The ordinary city, the small and the medium-sized agglomerate where the growing majority of the world's urban citizens live, deserves more attention (Kundu, 2011). The local units promoted in Kerala, for instance, accounted for 47% of the official urban population in 2011. More than ever, extended urbanization is at stake and has to be studied precisely. What is needed, to begin, is a clearly defined agenda of research into the presently observed growth of small towns, how that expansion relates to the urban system, and how the two function vis-à-vis the new global division of labor, neo-liberal urban management and transnational investment (Satterthwaite and Tacoli, 2003; Bell and Jayne, 2009). The way in which some localities recombine local development potential and mobilize or attract talent, leading to clusters and innovations, must also be documented. Creativity should not be a priori confined to major metropolises and monopolized by a "creative class" (Florida, 2002). A "glocal" encoding of innovation combining globally circulating knowledge and "traditional" knowledge can produce new economic values that are much more diffused than the extended metropolis halos have so far been seen to do. The value of such a transformation cannot be overestimated in the context of a labour-intensive economy like India, where small-town locations can support lower costs.
spread" (Census of India, 2001). The total population of all constituents (UA + OG) should not be less than 20,000 (as per the 1991 Census). "This set of basic criteria having been met, the following are the possible different scenarios in which the UA could be constituted: i) A city or a town with one or more contiguous Out Growths; ii) Two or more adjoining towns with or without their Out Growths; iii) A city or one or more adjoining towns with their Out Growths all of which form a contiguous spread" (Census of India, 2001).

13 The 73rd and 74th Constitutional Amendment Acts were introduced in 1992 in a bid to achieve democratic decentralization and provide constitutional endorsement of local self-government authorities. These amendments confer authority on the legislatures of States to endow rural panchayats and Urban Local Bodies (ULBs), respectively, with such powers and functions as may be necessary to enable them to act as institutions of self-government. For this purpose, rural panchayats and ULBs have been charged with the responsibility of preparing and implementing plans for economic development and social justice. The central objective of these amendments is the decentralization of planning and decision-making procedures. They also have the implicit intention of removing centralized control and monopoly over the collection and distribution of resources. The 12th Schedule of the 74th Constitutional Amendment Act defines twelve new tasks in the functional domain of the Urban Local Bodies (ULBs) as follows: 1) Urban planning including town planning; 2) Regulation of land use and construction of buildings; 3) Planning for economic and social development; 4) Roads and bridges; 5) Water supply for domestic, industrial and commercial purposes; 6) Public health, sanitation conservancy and solid waste management; 7) Fire services; 8) Urban forestry, protection of the environment and promotion of ecological aspects; 9) Safeguarding the interests of the weaker sections of society, including the handicapped and the mentally retarded; 10) Slum improvement and upgrade; 11) Urban poverty alleviation; and 12) Promotion of cultural, educational and aesthetic aspects.

14 Total population of India within the country's current borders (excluding Pakistan, Bangladesh and Burma/Myanmar).

15 The official Class I has been split into two sub-classes: one of 1 million or above (the so-called "metrocity" in India) and the other of up to 1 million.

16 The official rate of urbanization was 27.8% in 2001, including 1,066 towns with fewer than 10,000 inhabitants, the smallest one having only 338 inhabitants (191 statutory towns have fewer than 5,000).

17 The Mahatma Gandhi National Rural Employment Guarantee Act aims to enhance the security of people's livelihood in rural areas by guaranteeing one hundred days of wage-employment in a financial year to a rural household whose adult members volunteer to do unskilled manual work.

18 The talks initiated in December 2009, under strong political pressure, to divide Andhra Pradesh between Telangana with Hyderabad as its capital and coastal Andhra with Vishakhapatnam as capital are to be viewed in that perspective.
19 Geopolis projections are based on the methodology that proved its reliability worldwide in the prediction of the 2000 census round. Its key features are as follows: 1) The natural growth for the last Census interval 1991-2001 is calculated for each urban unit (the series of the districts' urban natural growth given by the Registrar General & Census Commissioner, India, for 1996 has been used). 2) The migrant proportion of the 2001 population for each urban local unit is then calculated. 3) Using the 2006 set of data for natural growth, the natural growth is calculated for 2001-2011; to this figure is added the constant number of migrants from the previous Census interval to arrive at the estimated population for 2011. 4) The area and the number of local units are considered constant, based on the extent of the contiguous built-up area of 2001. The method considers the changes in fertility and mortality as given by variations in the natural growth and presupposes constant in-flows and out-flows of migrants. In the Indian context, the hypothesis of constant flows of migrants in both directions is validated by migration studies (Chandrasekhar, 2010), recent Census series and the National Sample Survey data (2008) showing a stable trend of migration toward the cities (see Tab. 1).

20 Shaw and Satish (2007) present a different picture of the distribution of FDI and domestic investments among the metropolises, giving less importance to Delhi than to Mumbai and Bengaluru. However, it must be noted that their ranking relates only to industry. Moreover, the database used, CAPEX, came from a private company named CMIE and only covered 'new and ongoing investment'.

Figures, maps and tables (captions only; sources: Census of India 2001 and the Geopolis/Indiapolis geodatabase unless otherwise noted):
Fig. 1: Evolution of the official Urban and Rural Divide for the local units of at least 10,000 inhabitants (1971-2001).
Map 1: The distribution of towns and villages in Chennai Metropolitan Region and Geopolis built-up areas in 2001 (the Geopolis population of Chennai's agglomerate is the sum of the populations of all local units, towns and villages, inside the continuous morphological agglomerate; sources: Census of India 2001, Indiapolis geodatabase, and multispectral Landsat image 2001).
Tab. 1: Number of official towns and villages of at least 10,000 inhabitants in 2001 and share of population (in thousands).
Tab. 2: Evolution of the official number of towns and villages 1971-2011 (town and village panchayats are the smallest local administrative bodies within which census results are given; between 1991 and 2001, 445 towns were declassified while 1,138 were promoted, including 636 within UAs, compared to 93 declassified and 856 promoted between 1981 and 1991; sources: Census of India series 1971 to 2001, Census geographical preparation conceptual note for 2011, The Registrar General & Census Commissioner, India).
Map 2: Comparison of the share of urban population by district in 2001.
Tab. 3: Official and Indiapolis sets of urban population compared, distributed by class of demographic size of urban units with at least 10,000 inhabitants in 2001.
Map 3: The distribution and physical extent of the 6,485 agglomerates with at least 10,000 inhabitants in 2001.
Tab. 4: Evolution of primacy (the ratio of the capital's population to the second agglomeration's population) in some of the major States of the Indian Union.
Tab. 5: Indian major metrocities with more than 6 million inhabitants in 2011 (2011 figures: Geopolis projection, note 19).
Tab. 6: Evolution of the population share of the top Indian cities in the official urban and total population of India.
Map 4: Mumbai, Delhi and Kolkata e-Geopolis agglomerates in 2001 (source: Geopolis geodatabase).
Tab. 7: The share of the 35 Indian metrocities with over 1 million inhabitants in the urban and total population of India (2001).
Map 6: Distribution of Geopolis agglomerates of at least 10,000 inhabitants in 2001 with their share of rural population.
Tab. 8: The distribution of the agglomerates of at least 10,000 inhabitants and their population, distributed by type of component in 2001.
Map 8: Emerging agglomerates with at least 10,000 inhabitants in 2011 around Delhi (in red, local units below 10,000 inhabitants in 2001; sources: Census of India 1991 & 2001, Geopolis geodatabase).
Tab. 12: Nearest Neighbor Index for India and selected States in 2001 and 2011 (sources: Census of India 1991, 2001 and Indiapolis geodatabase).

While the trends of the aggregate populations of cities of over 100,000, as provided by the Geopolis/Indiapolis geodatabase and as officially supplied, are near-equivalent, the small-city layer of under 20,000 (the official Class IV: 10,000 to 20,000) is growing fast. Its share more than doubled from 19 million to 50 million inhabitants between 1991 and 2001. While the urban population share of Class IV officially declined from 13.6% in 1951 to 6.9% in 2001, it is paradoxical that Class III had its maximum share in 1961 and Class II in 1981. The Geopolis/Indiapolis database also shows an expanded Class I in absolute terms, with the addition of 62 new agglomerates and 49 million urban inhabitants. These invisible citizens of the outskirts of large cities clearly lived and worked, from Independence to 2001, outside the purview of an Urban Local Body. They depended, for all practical purposes, on a rural local government of sorts for their needs.

When it was still the capital of the Moghul Empire in 1872, Delhi was no more than a modest city of 162,000 inhabitants. In 1951, the agglomeration of Delhi was half as populated as Bombay (Mumbai) and one third as populated as Calcutta. Delhi started expanding rapidly only after Independence, to become the most populous city of India in 2001. This measure, assessed on the ground, contradicts the last UN projection (2007) that Mumbai, with an expected population of 26 million in 2015, would become the world's second largest agglomeration. In fact, the ratio of Mumbai's population to the country's total urban population declined from 8% in 1961 to 5.7% in 2001 (see Tab. 6). Mumbai, in demographic terms, is […]

[…] sources, the extension of urban areas by means of the process of adding towns around core UAs accounts for 9.7% of the total urban population; specifically, 1,162 towns were included in the existing UAs in 2001. In fact, due to loss of urban status of peripheral units, that number was 140 less than in 1991. In other words, urban geographical extension appears officially to be shrinking, and demographic growth is consequently reduced. There were fewer UAs in 2001 than in 1991, partly due to the fusion of UAs, but in most cases because of loss of urban status. Those adjustments again reduce the region-wise comparability of urban data: for example, in West Bengal 17 UAs have been declassified, while in Andhra Pradesh 22 have been promoted.

The largest cities have captured most of the FDI funds as well as domestic investment. In 2008 alone, 58% of the foreign companies registered in India were located in the Delhi-Gurgaon region, 21% in Mumbai-Pune, 9% in Bangalore, and 5% in Chennai, accounting for 93% of the total (source: Lok Sabha Unstarred Question No. 4742, dated 25.04.2008). During the fiscal year 2006-2007, Delhi-Gurgaon was the location favored by 26.4% of new companies in India, representing 36.6% of authorized capital (source: Ministry of Corporate Affairs, Govt. of India). Mumbai appears second with 27.2% in terms of capital. 20 While 11,000 firms were set up in the State of Maharashtra (including Mumbai and Pune) between April 2009 and January 2010, there were 44,000 in the National Capital Region (source: Ministry of Finance, Government of India).

The preliminary results of the Indian Census 2011 are now being released. They indicate that the official Indian urbanization level remains very low: the official share of urban population gains 2.3 points from 2001 to reach only 31.1% in 2011. The growth rate of the biggest metrocities continues to decline. Some corrections have been made regarding bottom-level agglomerations, with 2,774 census towns added (against only 362 in 2001).
One-Shot Messaging at Any Load Through Random Sub-Channeling in OFDM Compressive Sensing has well boosted massive random access protocols over the last decade. In this paper we apply an orthogonal FFT basis as it is used in OFDM, but subdivide its image into so-called sub-channels and let each sub-channel take only a fraction of the load. In a random fashion the subdivision is consecutively applied over a suitable number of time-slots. Within the time-slots the users will not change their sub-channel assignment and send in parallel the data. Activity detection is carried out jointly across time-slots in each of the sub-channels. For such system design we derive three rather fundamental results: i) First, we prove that the subdivision can be driven to the extent that the activity in each sub-channel is sparse by design. An effect that we call sparsity capture effect. ii) Second, we prove that effectively the system can sustain any overload situation relative to the FFT dimension, i.e. detection failure of active and non-active users can be kept below any desired threshold regardless of the number of users. The only price to pay is delay, i.e. the number of time-slots over which cross-detection is performed. We achieve this by jointly exploring the effect of measure concentration in time and frequency and careful system parameter scaling. iii) Third, we prove that parallel to activity detection active users can carry one symbol per pilot resource and time-slot so it supports so-called one-shot messaging. The key to proving these results are new concentration results for sequences of randomly sub-sampled FFTs detecting the sparse vectors"en bloc". Eventually, we show by simulations that the system is scalable resulting in a coarsely 20-fold capacity increase compared to standard OFDM. I. INTRODUCTION T HERE is meanwhile an unmanageable body of literature on CS for the (massive) random access channel (RACH) in wireless networks, often termed as compressive random access [2], [3]. Zhu et al [4] and later Applebaum et al. [5] were the first to recognize the benefit of sparsity in multiuser detection, followed up by a series of works by Bockelmann et al. [6], [7] and recently by Choi [8], [9]. A single-stage, grant-free (i.e. one-shot) approach has been proposed in [10]- [12] where both data and pilot channels are overloaded within the same OFDM symbol. A new class of hierarchical CS (h-CS) algorithm tailored for this problem has been introduced in [13], [14] (for LASSO) [15] (for OMP) and recently in [16], [17] (for HTP and Kronecker measurements). A comprehensive overview of competitive approaches within 5G can be found in [18]. Recently, a surge of papers has combined RACH system design with massive MIMO which adds another design parameter (number of antennas) to the problem, see e.g. [19]- [22]. The informationtheoretic link between random access and CS, i.e. leveraging the use of a common codebook, has been explored in [23], [24]. This has been taken forward in many works, see [25]- [29]. Notably, CS together particularly with OFDM still plays a key role in upcoming 6G RACH design [30], see, e.g., the recent work by Fengler et al [31]. The very recent papers by Choi [32] [33], brought to our attention by the author, have revived our interest in the RACH design problem. In [32] a two-stage, grant-free approach has been presented. In the first stage, a classical CS-based detector detects the active n-dimensional pilots from a large set of size r > n. 
The second stage consists of data transmission using the pilots as spreading sequences. [33] has presented an improved version where the data slots are granted through prior feedback. The throughput is analysed and simulations show significant improvement over multi-channel ALOHA. However, by design the scheme cannot be overloaded (see equation (3) in [33]). Missed detection analysis is carried out under overly optimistic assumptions, such as ML detection, making the results fragile (e.g. the missed detection cannot be independent of r as the results in [33] suggest). Moreover no concrete pilot design (just random) and no frequency diversity is considered which is crucial for the applicability of the design. So, the achievable load scaling of this scheme remains unclear. We take a different approach here: Instead of overloading n compressive measurements with r > n pilots we use n-point DFT (orthogonal basis) and subdivide the available bandwidth into subchannels, each of which serving a few of the pilots. We send exclusive pilot symbols in the first timeslot only and data in the remaining time-slots. Then, we apply hierarchical CS algorithms for joint activity detection over a number of time-slots in each of the sub-channels. Notably, the system is reminiscent of an OFDM system where the subchannels correspond to bundled sub-carriers and the time-slot to a sequence of OFDM symbols. For this system, in essence, we provide a theoretical guarantee that the system can sustain any overload situation, as long as the other design parameters, e.g., number of pilots and time-slots, are appropriately scaled. Overload operation means that many more than n users, where n is the signal space dimension dictated by the DFT, can be reliably detected and each of which can carry one data symbol per pilot dimension and time-slot, i.e. the system supports one-shot messaging. The only price that we pay is delay, i.e. the number of time-slots might have to be adapted. This is achieved by roughly bundling log(n) sub-carriers for a pool of (only) log 2 (n) pilots over log 4 (n) log(log(n)) timeslots, which are then collaboratively detected. Not only will this scaling entail sparsity in each of the sub-channels by design, so-called sparsity capture effect, but still allow reliable detection by exploiting the joint detection over time-slots. The main tool is to establish new concentration results for a family of vectors with common sparsity pattern. Technically, our analysis rests entirely upon utilizing the mutual coherence properties of the DFT matrices instead of more sophisticated methods, such as the restricted isometry property (RIP) [34]. In the simulation section, this is validated for several system settings yielding a 20-fold increase in user capacity. The paper is organized as follows: In Section 2 we will introduce the system model in great detail. The sparsity capture effect is analysed in Section III. In Section IV the detection performance is analysed. Data recovery is analysed in Section V. Numerical experiments are provided in Section VI. An overview of our notation can be found in Table I. II. MODEL DESCRIPTION We imagine a set of u users k ∈ [u] := 0, 1, . . . , u − 1, that are communicating with a base station over t time-slots i ∈ [t] in an totally uncoordinated fashion. We assume an OFDM-like system, i.e. with some cyclic prefix, operating with IDFT/DFT matrix of size n×n where n is the signal space dimension. The time-slots correspond then to OFDM symbols. 
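To fix orders of magnitude for the scaling quoted above (roughly log(n) bundled sub-carriers for a pool of log²(n) pilots over log⁴(n) log(log(n)) time-slots), the following Python sketch evaluates these quantities for a few DFT sizes. The proportionality constants C_m, C_r and C_t are placeholders chosen purely for illustration; the text does not fix them, so the printed numbers should be read as shapes of growth rather than as the concrete system design.

```python
import numpy as np

def rough_design(n, C_m=1.0, C_r=1.0, C_t=1.0):
    """Illustrative parameter scaling (constants are placeholder assumptions).

    n : DFT size / signal space dimension.
    Returns the sub-channel size m, number of sub-channels c, pilot pool size r
    per sub-channel and number of time-slots t following the rough scaling
    m ~ log n, r ~ log^2 n, t ~ log^4 n * log log n.
    """
    ln = np.log(n)
    m = max(1, int(np.ceil(C_m * ln)))                     # sub-carriers bundled per sub-channel
    c = n // m                                             # number of parallel sub-channels
    r = max(1, int(np.ceil(C_r * ln ** 2)))                # pilots per sub-channel
    t = max(1, int(np.ceil(C_t * ln ** 4 * np.log(ln))))   # time-slots for joint detection
    return m, c, r, t

for n in (1024, 2048, 4096):
    m, c, r, t = rough_design(n)
    print(f"n={n}: m={m}, c={c}, r={r}, t={t}")
```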
The ultimate objective for the users is to be reliably detected and to transmit one datum d i k ∈ C per time-slot at the same time, i.e. one-shot messaging. Time-slot 0 is reserved for multi-path channel estimation only, so that d 0 k = 1 for all k. The users transmit their data by modulating a set of pilots p i k ∈ C n , i ∈ [t]. Let h k ∈ C s denote the (sampled) channel impulse response (CIR) of the k:th active user, where s n is the length of the cyclic prefix, which is assumed to be constant over all time-slots. Inactive users are modeled by h k = 0. Hence, the base station will receive for i = 0, ..., t − 1 The users choose their pilots p i k , i ∈ [t], as follows. At the beginning of the t time-slots, each user k chooses two indices j ∈ [c] and ∈ [r]. Notably, this choice determines their pilot choices for all timeslots. We will refer to j as the sub-channel index, and as the pilot index. We assume that rs ≤ n. Accordingly, to each sub-channel, we associate a sequence of sets all have the size m (so that c = n/m), and are for each time-slot i disjoint: B i j ∩ B i j = ∅ for j = j . As we will see later in Section II-A, the B i j correspond to a sub-division of the DFT sub-carriers [n] in frequency domain into c sub-channels, hence the name sub-channel index. The B i j , j ∈ [c], will one by one be chosen disjoint to all previously drawn subsets, but otherwise uniformly at random. We shall consider both the fixed case B i j = B 0 j and the case where this procedure is repeated independently for each i. This will be done by the base-station, which will then broadcast the selection to the users. Let us denote the (random) mapping from users to index pairs by k → (j, ). We then have where circ (s) (p) denotes the circular matrix in C n,s defined by vector p ∈ C n . Also, we have defined effective channels Here we have included the data as a part of the effective channels for ease of notation. Next, let us stack the circular matrices circ (s) (p i ,j ) ∈ C n,s and effective CIRs h i ,j ∈ C s into matrices C(p i ·,j ) = [circ (s) (p i 0,j ), . . . , circ (s) (p i r,j )] ∈ C n,rs and vectors h i j ∈ C rs . By possibly concatenating zero elements into them, we may think of them as matrices in C n,n and vectors h i j ∈ C n . Overall, the signal the base station receives in time slot i is given by where e i ∼ CN (0, σ 2 I n ) is the white noise. A schematic description of the proposed scheme is given in Figure 1. A. Proxy measurement model We shall analyse a particular choice of the pilots in each time-slot. To each sub-channel (and timeslot), we associate r pilots p i ,j . These are constructed as follows: first, a 'base pilot' p i 0,j is chosen as a vector with DFT supported on B i j with constant unit power. More concretely, The other (r −1) pilots are defined as cyclical shifts of the base pilots Due to the duality of modulation in DFT domain and translation in time domain, all p i ,j , ∈ [r] are supported in DFT domain on the set B i j . Now, importantly, the structure of the p i ,j in (1) implies that each C(p i ·,j ) has the structure of a circular matrix: A key idea in CS is to perform the user identification and channel estimation task within a linear subspace of much smaller dimension m n. We propose for the base station to take such compressive measurements as follows: Let Φ ∈ C n,n be the normalized DFT matrix, Φ pq = n − 1 /2 e − 2πιpq /n , p, q ∈ [n]. 
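Before writing down the compressive measurement, a small numerical sanity check of the pilot construction just described may help. The sketch below builds a base pilot with flat (unit-power) spectrum on a randomly drawn sub-carrier set B, generates the remaining pilots as cyclic shifts, verifies that all of them stay supported on B in the DFT domain, and forms the renormalised sub-sampled DFT that acts as the measurement map of one sub-channel. The shift step of s samples and the random phases of the base pilot are assumptions of this sketch; the last line reports the mutual coherence of the sub-sampled DFT, which becomes relevant in Section IV-B.

```python
import numpy as np

n, s = 64, 4                 # DFT size and cyclic-prefix / CIR length (toy values)
r = n // s                   # pilots per sub-channel in this toy example
m = 8                        # sub-carriers of one sub-channel
rng = np.random.default_rng(1)
B = np.sort(rng.choice(n, size=m, replace=False))     # randomly drawn sub-channel B

# Base pilot: constant-modulus spectrum on B, zero elsewhere (random phases assumed).
spec = np.zeros(n, dtype=complex)
spec[B] = np.exp(2j * np.pi * rng.random(m))
p0 = np.fft.ifft(spec) * np.sqrt(n)                   # time-domain base pilot

# Remaining pilots: cyclic shifts of the base pilot (here by multiples of s, an assumption).
pilots = np.stack([np.roll(p0, ell * s) for ell in range(r)])

# Shifting only changes phases in the DFT domain, so every pilot stays supported on B.
for p in pilots:
    P = np.fft.fft(p) / np.sqrt(n)
    assert np.allclose(np.delete(P, B), 0, atol=1e-9)

# Sub-sampled, energy-renormalised DFT acting as the measurement map of this sub-channel.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = np.sqrt(n / m) * F[B, :]
gram = A.conj().T @ A
coherence = np.max(np.abs(gram - np.diag(np.diag(gram))))
print("mutual coherence of A:", coherence)
```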
The j :th compressive measurement in time-slot i, j ∈ [c], is then given by where B i j are the subsets defined in the last section. Now, the circular structure of the C(p i ·,j ) (2) together with their simultaneous diagonalisation property implies that C(p i ·,j ) = Φ * diag( for j = j . Hence, the measurement b i j only depends on the effective channel h i j associated to the sub-channel j . Disregarding the phases ofp i 0,j (which are not important for the following analysis) and renormalising, we can write it as n is an (average) energy-preserving, sub-sampled version of the DFT matrix, and z i j ∈ C n is Gaussian with zero mean and covariance matrix σ 2 m n 2 I n . Hence, b i j is a lowdimensional image of the effective channel h i j associated to the index j. We will in the following drop the sub-channel index j, since the sub-channels clearly can be processed completely in parallel. Note that expressing the noise as z i instead of e i has an formal regularizing effect -the variance of the entries of the z i are smaller than the ones of e i . We will use this heavily in our proofs. Also note that since only coefficients corresponding to the first rs members of [n] are active in h i , we may think of the A i as operators in C m,rs instead of C n,n . We will use this fact at some critical steps of our argument in the analysis. B. Hierarchical sparsity (by design) So far we have not assumed any sparsity of the vectors h i j at all, although this is a fundamental prerequisite of any CS detection algorithm. In fact, we will not explicitly assume sparsity anywhere in this paper, but instead show in the analysis later that each sub-channel will become essentially sparse by design. To be precise, they will become sparse in a more generalized sense, so-called hierarchically sparse. Let us shortly define hierarchical sparsity. As it is well known, a vector x ∈ C n is s-sparse if at most s coefficients x i are non-zero. A vector C bs x = [x 0 , . . . x b−1 ] consisting of b blocks x k ∈ C s is likewise called (κ, σ)-hierarchically sparse if at most κ of the x i are non-zero, and additionally each x i in itself is σ-sparse. Iteratively, one can define hierarchical sparsity of any number of levels (s 1 , s 2 , . . . , s L ). For a more thorough introduction, we refer to [35]. We will give a detailed analysis of the sparsity pattern of the effective channels below. To already now give some intuition what happens, recall that the h i 's are composed of r active or non-active blocks of length s. Each block corresponds to a pilot index. First, clearly, the individual CIRs h k can be interpreted as k s -sparse (i.e., including the special case k s = s). If the number of channels c is appropriately scaled, the users will distribute approximately equally between the sub-channels. Hence, each specific user will not compete with significantly more than u c = m n u other users -or in mathematical terms, the number of active blocks in the vector h i will not be not significantly more than m n u with high probability. Concretely, as we show in Section III, the number of nonzero blocks is with high probability bounded by Now, if in addition the overall number of blocks r (i.e. pilot indices) per sub-channel is large enough, all users falling within the same sub-channel will choose different pilots with high probability. Interestingly, it will turn out that the right scaling for r is sub-linear in n (as shown in the main result in Section IV). Consequently, the blocks will still be k s -sparse. 
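The projection onto hierarchically sparse vectors that underlies the detection step described in the next subsection can be written in a few lines. The sketch below is a minimal two-level implementation: within every block the best σ-sparse approximation is kept, and then only the κ blocks with the largest remaining energy survive. The function and variable names (hi_project, kappa, sigma, block_len) and the toy dimensions at the end are our own choices for illustration.

```python
import numpy as np

def hi_project(x, kappa, sigma, block_len):
    """Project x onto (kappa, sigma)-hierarchically sparse vectors (a minimal sketch).

    x is viewed as consecutive blocks of length block_len. Within each block the
    sigma largest-magnitude entries are kept, then only the kappa blocks with the
    largest resulting norm survive; all other entries are set to zero.
    """
    x = np.asarray(x, dtype=complex)
    blocks = x.reshape(-1, block_len).copy()
    # Best sigma-sparse approximation of every block.
    for blk in blocks:
        if sigma < block_len:
            drop = np.argsort(np.abs(blk))[:-sigma]
            blk[drop] = 0
    # Keep the kappa blocks with the largest norms, zero out the rest.
    norms = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(norms)[-kappa:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

# Toy check: a (2, 1)-sparse vector with 4 blocks of length 3 is reproduced exactly.
x = np.array([0, 5, 0,  0, 0, 0,  0, 0, -3,  0, 0, 0], dtype=complex)
assert np.allclose(hi_project(x, kappa=2, sigma=1, block_len=3), x)
```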
Importantly, all (h i ) i∈ [t] have the same support. This fact is something we will heavily use in our analysis. The common support enables the detection of k i users with less measurements than in standard CS. Altogether, h = (h 0 , . . . , h (t−1) ) can be viewed as a (k u , k s , t)-sparse vector. C. A detection algorithm The recovery of hierarchically sparse vectors has been extensively studied in a series of papers by the authors of this article. We refer to [35] for an overview. One of the main findings is that the so-called HiHTP-algorithm can be used to recover them efficiently. The algorithm is in essence a projected gradient descent, whereby projection refers to projection onto the set of hierarchically sparse vector. This projection can be calculated efficiently using the principle of optimal substructures. As an example, to calculate the best (κ, σ)-sparse approximation of a vector (x 0 , . . . , x b−1 ), we first calculate the best σ-sparse approximationx k of each block x k , and subsequently choose the κ values of k for which x k are the largest. This strategy carries through to more levels of sparsity. In this paper, we will show that one step of the HiHTP algorithm can be used to detect the users and their data in each channel. Concretely, given a set of measurements (b i ) i∈[t] , we for each subchannel j calculate the vector ((A i ) * b i ) i∈[t] ∈ C trs , project it onto the set of (k u , k s , t)-sparse vectors, and subsequently solve a least-squares problem restricted to the support of the projection. Written out, this means: 1) For each k, determine the k s values of for which is the largest. Declare that set as ω k . 2) Determine the k u values of k for which are the largest, while still being larger than some threshold ϑ > 0. Call this set I. 5) Calculate estimates d i * of the data vectors by calculating the element-wise quotient h i * h 0 * . In the next sections, we will analyse this detection algorithm. The argument proceeds in two steps: We first prove in Section III that the h k are hierarchically sparse with high probability. With this in mind, we prove in the following Section IV that the users in any group are correctly classified with high probability. III. SPARSITY CAPTURE EFFECT Let us begin by carrying out the argument sketched in the introduction that the h i are, with high probability, hierarchically sparse. Proposition 1. For each sub-channel and λ > 0, the probability that there are more than users in the sub-channel is smaller than exp − 3λ 2 mu n(1+3λ) . Proof. Let X i , i ∈ [u] be random variables which are equal to 1 if user i is in the sub-channel, and zero otherwise. Of course, X i is Ber(p) distributed, where we for convenience defined p = m n . Therefore, E(X i ) = p, and V(X i ) = p(1 − p). Furthermore, we have |X i − p| ≤ 1 almost surely. Now set λ = λpu and estimate p(1 − p) ≤ p, to get the result. Hence, when analysing a specific channel, we may with very high probability assume that only k u of the potential u CIR's h k are non-zero, where k u is as defined in (3) (which corresponds to λ = 1). The same is true for the h k , since the k u users choose at most k u different pilots. We can however prove more. Proposition 2. Fix a channel. The probability of a collision in a sub-channel, i.e., two users choosing the same pilot, conditioned on the event that there are no more than k u users, is smaller than Proof. What we are dealing with is clearly a 'birthday paradox problem' with k u objects being distributed in r bins. 
For any given pair of objects, probability of a collision is clearly equal to r −1 . A union bound over the number of pairs now gives the claim. Remark 3. This bound is somewhat pessimistic, but not much so. It is not hard to prove that the probability of at least one collision happening is bigger than 1 − exp(− k 2 u 2r ). In the following, we will choose the parameters in a way that ensures that k 2 u /r is small. In this regime, 1 − exp(− k 2 u 2r ) is very close to the above value. In the event that no collisions occur, each effective CIR h is equal to at most one single h k . Hence, in this case, at most k u of the u blocks h in h are non-zero. Furthermore, each of these are the k ssparse by assumption. Putting things together, we obtain the following result. Theorem 4. Fix a channel. With a failure probability smaller than exp − 3mu 4n + 4m 2 u 2 n 2 r , the stacked vector of effective CIR's h is (k u , k s )sparse, with k u defined in (3), and h i = diag(d i )h, where d i ∈ C n are the stacked data of all users. IV. DETECTION ANALYSIS -MAIN RESULT We move on to analysing the performance of our recovery algorithm for detecting the correct users. We will pose the following assumptions on the transmitted data and effective channels. Assumption 1: The data scalars d i k ∈ C, k ∈ [u], i ∈ [t] are independent. Furthermore, they are independently distributed according to a centered distribution d on the complex unit circle. The above assumption is true if the users are sending messages which are uniformly randomly encoded using either a binary or QPSK coding. Likewise, we make the following assumption for the channels: Assumption 2: We assume that the norms of the h k are essentially constant. Formally, we assume that Note that the latter is simply a form of power control which keeps track of the received energy at the receiver. The absolute values of the constants in (4) are somewhat arbitrary -it would be possible to carry out the analysis under an assumption of the form α ≤ h k 2 ≤ β for any constants α, β > 0 -this would only lead to worse implicit constants. Since the concrete choice of constants ultimately increases readability, we have chosen to do so. In what follows, we present our main result. The figure of merit is the user load u/n. We use or to indicate that inequalities hold up to multiplicative constants independent of all other design parameters, similar to "big O" notation. Theorem 5. Let each user select its sub-channel and pilot independently. Let > 0 be a probability threshold and fix C o > 0 and κ > 2. Assume that the noise level and number of pilot sequences per sub-channel obey Then, if the sub-channel size m and threshold ϑ is chosen correctly, and 1) in the case of all A i being equal, the overload and acquisition times obey 2) in the case of the A i being independently drawn, the overload and acquisition times obey the probability that the algorithm will fail to classify the users in a specific sub-channel is smaller than The reader should pay close attention to the order of the words here: We do not claim that all users across all sub-channels will be correctly detected with high probability. Instead, we claim that for each user in a sub-channel there is a high probability that the user is correctly detected. In other words: In each transmission period, the base station will probably fail to a detect a few users correctly altogether. However, each sole user will only very infrequently experience not getting properly detected. 
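Before turning to the parameter scaling, it can be instructive to plug numbers into the sparsity-capture bounds above. The sketch below evaluates the tail bound of Proposition 1, a pairwise union bound on the collision probability in the spirit of Proposition 2 (the familiar k_u(k_u-1)/(2r) form is used as a stand-in for the displayed bound), and a Theorem 4 style failure probability. The example values of n, m, r and u are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

def sparsity_capture_bounds(n, m, u, r, lam=1.0):
    """Numerically evaluate the sparsity-capture bounds of Propositions 1-2 / Theorem 4.

    n: DFT size, m: sub-channel size, u: total users, r: pilots per sub-channel.
    Returns (k_u, P_too_many, P_collision, P_fail) with k_u = (1 + lam) * m * u / n.
    """
    p = m / n                                   # probability of a user landing in the sub-channel
    k_u = (1 + lam) * p * u                     # sparsity level per sub-channel
    p_too_many = np.exp(-3 * lam**2 * p * u / (1 + 3 * lam))     # Proposition 1 tail bound
    p_collision = k_u * (k_u - 1) / (2 * r)                      # pairwise union bound (Prop. 2 flavour)
    p_fail = np.exp(-3 * p * u / 4) + 4 * (p * u) ** 2 / r       # Theorem 4 style failure probability
    return k_u, p_too_many, p_collision, p_fail

# Illustrative numbers (assumptions, not the paper's simulation setup):
k_u, p_many, p_coll, p_fail = sparsity_capture_bounds(n=65536, m=256, u=2000, r=1024)
print(f"k_u={k_u:.1f}, P(too many users)<{p_many:.3g}, "
      f"P(collision)<{p_coll:.3g}, P(not hi-sparse)<{p_fail:.3g}")
```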
The theorem shows that as long as the number of time-slots t and pilots r grow at a rate polylog(n), the probability that the algorithm will fail to classify the users in a specific sub-channel correctly will for large n, correctly scaling with u, be very small. The 'correct scaling' depends on how the B i j are chosen: • If they are drawn once for all time-slots, the number of users that can be accomodated grows as n k 2 s log(n) , • if the B i j are independent for different timeslots i, the number even scales linearly with n. By adjusting values of other constants, this ultimately means (for independent B i j ) that it will in theory work at any load. Notice that n and t are design parameters of the algorithm. For a concrete system design, let us provide a 'cooking recipe' for choosing them: 1) The choice of DFT size n in OFDM is typically a trade-off between spectral efficiency and how fast the channel is changing within the OFDM symbol. Let us for simplicity assume that the mobility is not the major limiting part as it is common in massive IoT systems. Nevertheless this sets an upper bound on n. 2) We can choose the maximum expected load in the system by fixing the constant C o > 0 in Theorem 11. Notably, C o can be interpreted as an inverse overload factor (with regard to nk 2 s ). E.g., say C o = 1 2 means 2n k 2 s users are served. Fixing also κ > 2 and which both govern the detection failure probability, a lower bound on n is given by 3) Given n, r and the channel parameters s, k s the number of time-slots can be fixed. Unfortunately, the implicit constant in the scaling of t would be very technical to estimate, and would not bring that much insight -any obtainable bound will in any case be very crude, and we think that it is better to tune t empirically. Summarizing, the only caveat here to keep the detection failure probability below some threshold is that both n ≥ rs and t ≥ 1 have to be adapted. Hence, the price to pay for the overload is delay (which in turn is limited by the mobility of the channel). A. A proof sketch The proof of the Theorem 5 will be long and technical. Let us therefore here first sketch its main steps. In the first step of the algorithm, we are investigating the value of Within each block, we then determine the ω which gives the highest value of ν(ω). Clearly, we may just as well compare the average of those expressions over i, i.e. Using We will in the following prove that as soon as t and m are large enough, From that, we will be able to deduce that which means that all users will be correctly classified as active by our algorithm. An intuitive sketch of how we are going to prove the above is as follows. • Conditioned on the draws of A i and z i , each of the terms above are averages of independent data variables. This means that they should concentrate around their expected values as soon as t is reasonably large. • Next, we move on to analyse those expected values. These values are affected by two layers of randomness: the randomness of the noise z i and the randomness of the matrices A i . We will therefore first bound the deviation caused by the noise with high probability. • Finally, we will investigate the expected values without noise. We will show that it is close to h ω 2 under an assumption of the coherence which we then argue will hold with high probability. In particular, in the case of independent A i , we will leverage the averaging over time to allow this bound to be slightly higher than in the constant case. 
This makes it possible to get rid of a term log(n) −1 in the number of allowed users. Our argumentation differs from standard compressed techniques in several ways. In comparison to the general results on hierarchically sparse recovery [35], our results do not rely directly on HiRIP (hierarchical restricted isometry) properties of the measurement operators. In particular, the only property of the A i that we use is that it has a small mutual coherence with high probability. Therefore, the results in this section apply to much more general A i than randomly subsampled DFT matrices. Furthermore, we rely on concentration both due to the random nature of the measurements A i and the data d i k . This has the technical consequence that terms of order 4 in the noise vectors appear in the expressions we need to prove concentration for. As a result, we need to use results that to the best of our knowledge has not been utilized for compressed sensing before. B. Mutual coherence All of our proofs will rely on the coherence of the matrices A i being bounded. If we could deterministically find a way to make the coherence low, we could possibly arrive at a guarantee involving 'less' randomness. This is however not possible while keeping the number of measurements m under control. To explain this, recall that the mutual coherence is lower bounded by the so called Welch bound [37]. It states that if (a k ) k∈[n] is a set of normalized vectors in C n , the coherence fulfills with equality if and only if (a k ) k∈[n] is an equiangular tight frame, i.e has a constant value for | a k , a | for all k = and fulfills k∈[n] a k a * k = n m I n . Note that the Welch bound can be rewritten as That is, for large n, a condition of the form As a detailed reading of the proof of the main theorem shows, this sample complexity would be totally acceptable for our needs. However, it was shown in [38], [39] that a subsampled DFT matrix only achieves the Welch bound if B is a difference an equal amount of times, say λ times. Such sets however necessarily satisfy i.e. the number of elements m ≥ √ n. Hence, the DFT matrices achieving the Welch bound are not useable for our needs. We can however show that if we subsample the DFT matrix randomly, it will have a mutual coherence smaller than τ √ kuk 2 s with high probability already when m k u k 2 s log(n), which in turn will be enough to prove our main result. Theorem 7. ( [34, Corollary 12.14]). The randomly subsampled DFT matrix A fulfills Let us end by noticing that from a practical point of view, our analysis resting upon the coherence of the matrix rather than its RIP is beneficial. Whether a matrix obeys a coherence bound is immediate to check, whereas checking whether it obeys the RIP is NP-hard [40]. Practically, this means that once a user has been distributed into a group B i j , he or she can check the coherence of the respective A i . If the coherence is high, they will know that their recovered vector probably cannot be trusted. C. Auxiliary results The proof of our main recovery result is very technical, and rests on a collection of auxiliary results. Let us state, and prove some of them, here. The first result shows measure concentration with respect to the random draw of the data. ,p∈[ns] be random variables, identically and independently distributed according to a distribution d on the unit circle in the complex plane, with Letting A ∈ C m,s denote the :th block of A, A = [A 0 , . . . 
, A n s ], define D i as the block diagonal matrix with p:th block d i p I s , and Then, since for all i and p, d i (p)d i (p) = |d i (p)| 2 = 1. By defining a matrix M ∈ C nst,nst through its blocks and ∆ ∈ C nst through ∆ j,q = d j q , the random variable we are trying to bound is equal to ∆, M∆ . For this variable, we may invoke the Hanson-Wright-inequality (see e.g [41]), which states that In this case, the expected value is zero and which gives the claim. The above theorem suggests that we now need to analyse the expressions Ψ(A, B, v) and ψ(A, B, v) for v = h + z and for different values of ω. The smaller we can get them, the tighter the above bound will be. The first step is to analyse the effect of the noise, i.e. to bound the deviations |Ψ(A, Here, we mainly use the Gaussianity of z. We arrive at the following results. Lemma 9. For an arbitrary (1, k s )-sparse support ω, define n with a failure probability smaller than ). Lemma 10. For an arbitrary (1, k s )-sparse support ω, define Under the assumption that the coherences of all A i are smaller than τ √ where κ is a constant dependent on the value of the constant C. The proofs are long and technical and are therefore postponed to Section B in the appendix. The next step of the argument is to bound the 'undeviated' expressions Ψ(A, B ω , h) and ψ (A, B ω , h). We arrive at the following two lemmas. The proofs are, albeit conceptually easy, quite long and technical, whence we postpone them to Section B of the appendix. The main idea is to utilize the low mutual coherence -and in particular only argues probabilistically in the case of independent A i . Lemma 11. Assume that the coherences of A i all are bounded by τ √ kuk 2 s . Then Additionally, if the A i are independent, we have for with a failure probability smaller than 4 exp − t max(τ 2 /γ 2 ,τ 4 /γ 4 ) . Remark 12. We emphasize at this point that Lemma 11 is quite essential for the main result in the next section. In fact, we see that for the independent case, the τ parameter or equivalently the coherence, can be large with some small probability. We can use the γ-parameter to compensate for this, and make the term Ψ(A, B ω , h) still rightly concentrate around its expected value h ω 2 . This will be pivotal for the overload situation! Lemma 13. Under the assumption that all coherences of the A i are bounded by τ √ kuk 2 s , we have D. Proof of the main result We can now prove Theorem 5 Proof of Theorem 5. Let us fix the sub-channel, set β = 32κ 3C 0 , and fix the number of measurements in the sub-channel to m = β log(n) n u . Then, Theorem 4 shows that the effective vector h is (2m· u n , k s )-sparse with failure probability smaller than where we used our assumption on the size of r in the final step. We may without loss of generality assume that k u ≥ 1 -if not, there are no users to detect in the channel. Theorem 7 shows that the coherence of each A i is smaller than τ 2 √ kuk 2 s with a failure probability smaller than Now, in the case of a single measurement operator A, our corresponding assumption on u reads n u ≥ C o k 2 s log(n) for some constant C o . Setting τ equal to 1, the above is hence smaller than where we in the third step used that β = 16κ 3Co . In the case of independent A i , the assumption on the number of measurements instead implies n u ≥ C o k 2 s again for some C o . 
Setting τ = log(tn) 1 /2 and applying a union bound over i implies that all A i have a coherence smaller than log(tn where we in the third step again used β = 16κ 3Co , and t ≥ 1 in the final step. In the following, let us for convenience write τ for 1 in the case of a single A and log(tn) for the case of independent A i . Our power assumption of the active blocks im- and consequently Also, using the notation of Lemma 10, our assumption on the noise level implies that This has a number of consequences. First, with Lemma 13, we may estimate Also, again using the notation of Lemma 10, we can estimate where we estimated τ ≤ τ . All in all, we conclude that Lemma 10 shows that with a failure probability smaller than t 1−κ r −κ s ks −κ , we have By a union bound, the above is true for all r s ks (1, k s )-sparse supports ω with a failure probability smaller than (tr s ks ) 1−κ . In particular, the above implies that for every γ > 0 . (9) Next, we move on to the expressions involving Ψ. Our bounds (7) and (8) imply where we used the notation defined in Lemma 9. That lemma, together with a union bound, hence implies that (10) for every support ω with a failure probability smaller than Due to the assumptions on the t-parameter in the respective cases, we have Lemma 11 further implies that ku , deterministically in the case of equal A i , and with a failure probability smaller than 4 exp −γ t τ 4 ) ≤ in the independent case. Importantly, we may choose the size of γ adaptively by adjusting the values of the implicit constant in the bound on t. Now, set ρ = γ h 2 ku . Applying Lemma 8, in particular utilizing the bound (10) implies that we have for all ω with a failure probability smaller than , which is smaller than in both cases. Now it is clear that as long as the implicit constant in (11) is smaller than γ = 1 5 , that equation will imply that the best k s -approximations η j of the active blocks ω will obey This shows that if we choose the threshold θ = 3 h 2 10ku , the k u blocks that will be chosen by the algorithm are exactly the ones that are active. The proof is finished. V. RECOVERY ANALYSIS Above, we provided conditions for all users within a sub-channel to be correctly classified as active. In this section we derive some guarantees for the actual data detection. To do so, we assume for simplicity that the system is not overloaded, i.e. that u ≤ n. This will allow us to bypass information-theoretic treatment of multiuser detection involving successive interference cancellation, pseudo-inverses etc. In other words, it is merely a stability result for the under-determined systems to be solved. We need to make statements about the solutions of the restricted least squares problems It will turn out that the analysis of this problem is very simple as long as n is prime and Ω is exactly equal to the support of h. Note that this does not automatically follow from the previous section -there, we only show that the blocks are correctly classified. It can very much be that the best k s -sparse approximations within the blocks are not equal to the h i of the original vectors, in particular considering noise effects. By a simple trick, it is however not hard to give conditions for when Ω = supp h with high probability. Notice that when applying Theorem 5, there is nothing stopping us from regarding h as a vector of rs blocks of length 1, out of which k u k s are active and 1-sparse (instead of r blocks of length s, out of which k u are active and k s -sparse). 
From this point of view however, all assumptions of the theorem are not met -what is missing is the power assumption. But if that assumption is made, which exactly corresponds to all non-zero entries of all h k having essentially constant magnitudes, i.e. the theorem shows that under the same conditions as before, all entries h k ( ) of the vector are correctly classified -i.e., the support is exactly recovered. Let us record this as a corollary. Corollary 14. In addition to the assumptions of Theorem 5, assume the fine-grained power control (12), and that the constant C o < k −1 s . Then, our algorithm exactly recovers the support of h. Once the support of h has been detected, we may recover all h i by solving the least squares problem In order to show that this problem succeeds at approximately recovering the h i , we need the following theorem. Theorem 15 (Reformulation of [42], Theorem 1.1). Suppose that n is prime and F is the n × n DFT matrix. Then, every square sub-matrix of F is invertible. In particular, any submatrix formed by concatenating at least as many columns as the number of rows is injective. Remark 16. The assumption that n is prime is a technical one, but really needed for Theorem 15 to hold. To see why, assume that n = pq for some integers p, q. Define I = p · [q] and let v be the indicator of I, i.e. the vector equal to 1 on I and else zero. Then, we have for k with e − 2πιk /q = 1, Hence, for any J ⊆ [n] with |J| = |I| and J ∩ q · [p] = ∅, the square submatrix (F ji ) j∈J,i∈I has a non-trivial kernel and hence is non-invertible. Note however that since we can choose n ourselves, the primality of n is actually not a restriction. We may now prove that under the same conditions as in our main theorem, and that n is prime, all h i will be succesfully recovered. It is in fact at this point relatively simple to prove estimates for the data recovery. Lemma 17. Under the same assumptions as the main theorem, with the additional condition that n is prime, all solutions h i * of (13) are given by with failure probability smaller than n − 3β /4 + . Proof. Let us for notational simplicity set Ω = supp h and α = (A i ) Ω . It is clear that the solution of (13), as soon as α * α is invertible, is given by However, by Theorem 15 and the primality assumption on n, α * α is invertible, as soon as m |Ω| = k u k s . So let us argue that m is of that size with high probability. Our model implies that with a failure probability less than the one given in the main theorem, k u ≤ 2m u n . This proves . The theorem has been proven. The above lemma tells us that the recovered vector actually is equal to the ground truth contaminated by a Gaussian vector. This has the following consequence for the recovery of the data. Theorem 18. Consider the data (d i ( )) ∈ω , with |ω| = k s . Under the additional assumption that σ log(n) with a failure probability smaller than n − 3β /4 + + 2k s (e −n + t 1−n ). Proof. By the previous lemma, with failure probability smaller than n − 3β /4 + , h i * = h i +z i . Therefore, for each ∈ ω, Since the z i are Gaussians and independent of d i , the above variable has the same distribution as By a simple union bound and utilizing that the noise is Gaussian, we have |h( ) + z 0 ( )| ≥ |h( )|− σm 1/2 n 1/2 for all with a probability bigger than 2k s e −n . 
Furthermore, the variables z_i(ℓ) − z_0(ℓ), ℓ ∈ supp h, i ∈ [t], are Gaussians with variance 2σ²m/n², and hence they are all smaller than √2 σ m^{1/2} log(t)^{1/2}/n^{1/2}, except with a probability smaller than 2t k_s e^{−n log(t)} = 2k_s t^{1−n}. Also note that by the choice of m described in Theorem 5, we have m/n = β log(n) (1/u) inf_ℓ |h(ℓ)|² σ^{−2} for some constant β. Hence, assuming that the implicit constant above is chosen large enough, the two bounds above imply the claimed estimate. Let us give an interpretation of the last theorem. The additional assumption we are making is an assumption on the amplitude of the entries of the vectors h, in particular its relation to the noise level. This is natural: if the entries are too small, they will drown in the noise. Similarly, the estimate we prove is also meaningful only when σ|h(ℓ)|^{−1} is small enough. Note that the meaning of 'small enough' gets more relaxed as the number of users increases; in particular, if we choose u ∼ n/(k_s² log(rs)), which Theorem 5 tells us is possible, the maximal error will be very small unless t is exponentially large in n log(n)². This is also natural: when we send very long sequences of independent messages, the probability that at least one of them fails of course increases. VI. NUMERICAL EXPERIMENTS We have carried out two numerical experiments: in the first experiment we let n scale up and find the number of supported users given a predefined fixed probability of collisions in each of the sub-channels under noise. In the second experiment, we let the number of users u scale up while n is kept fixed, and count the correctly detected users under noise. It is important to emphasize that we have defined the Signal-to-Noise Ratio (SNR) as SNR = 10 log₁₀(1/σ²) dB. Hence, the true physical SNR in the system is SNR_true = SNR − 10 log₁₀(n/m) dB (see our proxy measurement model). By this definition, e.g., an SNR = 20 dB corresponds to 13.0 dB for the experiments depicted in Figures 3-5 with n = 2048 and m = 256, which is considerably lower! Parallel sub-channels are created by randomly partitioning the n-dimensional image space of a DFT matrix into blocks of length m, leading to c = n/m sub-channels. For simplicity, we set the number of available pilot resources to r = n/s. Consequently, for each sub-channel j = 1, . . . , c, a vector x_j ∈ C^n is divided into r blocks, each of length s. Hence, we use the full column space of the DFT and do not change the measurement pattern for simplicity. If a user chooses a pilot supported in the DFT domain on block B_k, k ∈ [r], block x_k is filled with the kth user's k_s-sparse signature. Each user also encodes data into diagonal matrices D_i, i = 2, . . . , t, containing entries of modulus 1. Hence, at the access point, data blocks y_i^j = A_j D_i x_j ∈ C^m for j = 1, . . . , c, i = 1, . . . , t are received, forming the observation y ∈ C^{c×m×t}. Here, A_j is a matrix consisting of m rows of an n × n DFT matrix, corresponding to the sub-carriers allocated to sub-channel j. User detection is performed by one step of HiIHT [22], [43], a slight variant of the HiHTP [17]. In the first experiment we investigate the scaling of the system under ideal conditions. We fix the number of sub-carriers bundled together to form one pilot resource to s = 8, resulting in r = n/8 available resources per sub-channel. Since each user chooses their sub-channel and resource (i.e. the pilot sequence) within that sub-channel uniformly at random, on average each sub-channel will be filled with k̄_u = u/c users.
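Before turning to how k̄_u is fixed, the following minimal Python sketch makes the SNR bookkeeping and the random sub-channel construction described above concrete; all parameter values are illustrative assumptions and the snippet is not the simulation code behind the figures.

import numpy as np

# Illustrative parameters only (assumed, not the exact experiment configuration).
n, m = 2048, 256
c = n // m                                  # number of parallel sub-channels

def true_snr_db(system_snr_db, n, m):
    # SNR_true = SNR - 10 log10(n/m), with SNR = 10 log10(1/sigma^2).
    return system_snr_db - 10 * np.log10(n / m)

print(true_snr_db(20.0, n, m))              # a system SNR of 20 dB maps to a lower physical SNR

# Random partition of the n sub-carriers into c blocks of length m.
rng = np.random.default_rng(seed=1)
carriers = rng.permutation(n).reshape(c, m)

# Measurement matrix of sub-channel j: the corresponding m rows of the n x n DFT matrix.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A_j = F[carriers[0], :]                     # here j = 0; shape (m, n)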
The number k̄_u of users per sub-channel is chosen such that the probability of two or more users trying to use the same pilot sequence is below a preset probability 0 < p_u < 1, i.e. as the largest k̄_u ∈ N for which inequality (14) holds. The left-hand side of inequality (14) is the probability that k indices out of [r], selected uniformly at random, are pairwise distinct. We set the number of sub-channels as c = n/m with m = 2 log₂(k̄_u · k_s) and adjust t such that in noise-free simulations the user detection with HiIHT has a detection rate ≥ 1 − p_md, where p_md denotes the probability of misdetection. Setting p_md = 0.1, the average collision probability p_u = 0.1 and the in-block sparsity k_s = 4 resulted in t = 100. With these choices, on average the total number of supported users is given by u = c · k̄_u. The number of supported users in this setting for n = 2^{10}, . . . , 2^{13} can be observed in Figure 3. With an SNR ≥ −10 dB (note again that this is the system SNR; the true physical SNR is lower) the system is able to recover all users as in the noise-free case, whereas the recovery breaks down for lower SNRs. To draw a comparison with other random access schemes (e.g. slotted ALOHA), we assume that there, too, the premise is to minimise collisions, as done in our scheme, in order to transmit the same amount of data in the same time. We do not assume that the same pilot design is used, though, and hence let the users choose out of n instead of r resources (cf. Figure 2). By our sub-channeling approach we can serve many more users without increasing the collision probability and at the same time stabilise the user detection by making use of the full data transmission time t. Comparing the orange curve in Figure 3, which, following our discussion, corresponds to the number of users that can be served by slotted ALOHA, with the curve for our method at an SNR ≥ −10 dB gives roughly a 20-fold increase in user capacity for the tested system dimensions n. While the first experiment gauges the average system behaviour under ideal conditions, we conduct another simulation to investigate the ability to overload the system under more realistic premises. In the second experiment the number of users trying to communicate over the system is not known, and the distribution of users to the sub-channels is random. In this case, the detection algorithm has no prior information on the sparsity in each sub-channel. To get a suitable estimate, the detection algorithm first thresholds each block to the assumed sparsity k_s and computes each block's 2-norm. Then the block norms are clustered into 2 clusters and the blocks belonging to the smaller cluster are set to zero. Figure 4 shows the number of recovered users over the number of total system users for different SNRs. In gray, the average number of users that send collision-free, i.e. using a pilot sequence not chosen by any other user, is shown. This is the maximum number of users that could possibly communicate reliably. Note that up to ∼ 1000 users, and with a noise level above 0 dB (system SNR), our recovery is almost optimal. The recovery rate is depicted in Figure 5. In this regime the false positive rate of our detection algorithm is also quite low, as seen in Figure 6. The recovery rate is shown to degrade gracefully with worsening SNR.
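As a side note on the load calculation used in the first experiment, the sketch below evaluates the collision probability behind inequality (14) and picks the largest admissible k̄_u; the parameter values are assumptions in the spirit of the setup above, not the exact values behind Figure 3.

import numpy as np

def prob_all_distinct(k, r):
    # Probability that k pilot indices drawn uniformly at random from [r] are pairwise distinct.
    return float(np.prod((r - np.arange(k)) / r))

def max_users_per_subchannel(r, p_u):
    # Largest k such that the collision probability 1 - prob_all_distinct(k, r) stays below p_u.
    k = 1
    while 1.0 - prob_all_distinct(k + 1, r) <= p_u:
        k += 1
    return k

# Assumed parameters: s = 8 sub-carriers per pilot resource, p_u = 0.1, n = 2048, m = 256.
n, s, m, p_u = 2048, 8, 256, 0.1
r, c = n // s, n // m                        # pilot resources per sub-channel, number of sub-channels
k_u_bar = max_users_per_subchannel(r, p_u)
print(r, k_u_bar, c * k_u_bar)               # resources, users per sub-channel, supported users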
When the noise power is higher than the signal power, the norms of the thresholded blocks do not differ between active blocks and pure noise blocks, and hence the hierarchical thresholding procedure is unable to produce a reasonable estimate for the active users. In this scenario, the number of false positives can even decrease with increasing user load, because many non-active blocks are classified as active and, with more users, fewer non-active blocks are present altogether. Note, however, that 0 dB in this simulation corresponds to a much lower true SNR, as discussed in the beginning of this section. Improvements of the detection algorithm in a low SNR regime, e.g. by running HiHTP until convergence, are a topic for future investigation. It is noteworthy that even in a setting where the recovery algorithm does not assume any knowledge of the number and distribution of users, the recovery is as good as in the first simulation, where exact values for the number of active resources per sub-channel are known. This experiment shows that the system can be overloaded with more users than the r resources available in a system without sub-channels. VII. CONCLUSION We designed a one-shot messaging massive random access scheme based on hierarchical compressed sensing, conducted a theoretical performance analysis and demonstrated its feasibility by numerical experiments. The proposed scheme promises huge gains in terms of the number of supported users. Specifically, we rigorously proved that the system can sustain effectively any overload situation, i.e. the detection failure of active and non-active users can be kept below any desired threshold regardless of the number of users. The only price to pay is delay, i.e. the number of time-slots over which cross-detection is performed. We achieved this by jointly exploiting the effect of measure concentration in time and frequency and careful system parameter scaling. The key to proving these results was new concentration results for sequences of randomly sub-sampled DFTs detecting the sparse vectors "en bloc". Notably, since we use just mutual coherence, the results in principle carry over to other families of matrices which are suited for the random access problem. This is a topic of further investigation. In the numerical experiments we were able to demonstrate the overload operation. Clearly, these need to be extended to test the scheme in more practical settings. Several improvements are immediate: first, one could run several iterations of the HiHTP/HiIHT algorithms to obtain better performance when the noise is strong. Second, with smaller n, say n ≤ 2048, one should think of additional measures to control the sub-channel load and collisions therein. Finally, we also theoretically investigated the data detection, but only for the non-overload situation, which gives merely stability results for the underlying underdetermined systems. It is left for future research to incorporate the effects of coding, successive interference cancellation etc. and thereby carry out a complete throughput analysis for the proposed system. Axel Flinth is an assistant professor at Umeå University. He obtained his PhD degree in 2018 from the Technische Universität Berlin. Before joining Umeå University, he was employed as a post-doc at Université de Toulouse III - Paul Sabatier and Chalmers University, and as a guest lecturer at the University of Gothenburg. Benedikt Groß received his diploma in mathematics from Humboldt Universität zu Berlin.
He is currently a PhD student at Freie Universität Berlin under supervision of Prof. Gerhard Wunder. A. Two measure concentration inequalities Here, we present some theoretical results we will need in our later proof. First, in the proof of Lemma 9, we will use a combination of the Hoeffding and Bernstein inequalitites. To increase readability, we formulate this combination as a separate lemma. be independent real random variables of the form where c q are constants, g q are subgaussians with subgaussian norms γ q , and x q are subexponentials with subexponential norms ξ q . Consider the random variable where κ is a numerical constant. Proof. As mentioned, this is a simple combination of the Hoeffding inequality for subgaussians and the Bernstein inequality. The first namely states In the proof of Lemma 10, the above will not suffice, because the expression we are going to try to control is a fourth-order polynomial in the noise vector z (and hence cannot be conceived as a sum of Gaussian and subexponential variables only). For this, we will need the following more powerful result. where e i is the i:th unit vector. Then, for every polynomial f : C n → C of degree d, we have where κ is a numerical constant and λ k is defined as where f (k) denotes the k:th derivative of f . A few remarks are in order. First, the theorem in [44] is formulated and proved for real variableshowever, since a Gaussian in C n can be interpreted as a real Gaussian in R 2n , and we always can split the real and imaginary value of f (z), the theorem goes through also for complex variables (possibly with slightly worse implicit constant). Secondly, the theorem actually holds for any subgaussian distributions of the z i -however, we only need the Gaussian case. Finally, the bound claimed in the cited source looks much more complicated, since it involves other norms of the multilinear forms. However, as is remarked earlier in the paper, those norms are all bounded by · F , so that the above theorem still is true. B. A few deterministic bounds. Next, let us bound a few expressions involving A * ω A p under a coherence assumptions. In contrast to many of the other bounds we will prove, these are completely deterministic. We will need them all in the coming sections. A notational remark is in order: For a vector x ∈ C p , we will in the following refer to its i:th element either by x i or x(i), depending on what is more convenient. For instance, in instances where the vector itself is endowed with a sub-index, we will opt for the second alternative. where the notation h was defined in Lemma 9. Proof. We begin by fixing p and estimating C p h p 2 . Let I p be the indices in [n] corresponding to the p:th block. We distinguish two cases ω ∩ I p = ∅ In this case, we have | a k , a a , a j | ≤ τ 2 kuk 2 s for all values of k, j and . Consequently ∈ω k,j∈Ip a k , a a , a j h p (k)h p (j) where we in the final step utilized the Cauchy-Schwarz inequality together with the fact that h is (k u , k s )-sparse (so that h p is k s -sparse). ω ⊆ I p Let us divide the inner sum into the index pairs where k = j and the ones where k = j. Summing this equality over yields The rest term can be bounded by since | a k , a | 2 ≤ τ 2 kuk 2 s for k = , and |ω| = k s . We continue with the terms for which k = j. 
In this case, we can estimate We furthermore have, due to the Cauchy-Schwartz inequality together with the k s -sparsity of the h p , k,j∈Ip Consequently, the sum as a whole can be estimated with Hence, Summing this inequality over p yields the first inequality. We move on to C p 2 F . We again distinguish between two cases. ω ⊆ I p In this case, for each value of , | a , a k | 2 = 1 for exactly one value of k (k = ), but is else smaller than τ 2 kuk 2 s . Therefore kuks . ω ∩ I p = ∅ In this case, | a , a k | 2 is always smaller than τ 2 kuk 2 s . Therefore This implies the last estimate. Also, since ω ⊆ I p for exactly one value of p, we get i.e., the third. We move on to the final inequality. This however easily follows from the previous ones: Lemma 22. Let p = q and let k ∈ I p and ∈ I q and ω (1, k s )-sparse be arbitrary. Under the assumption that the coherence of the matrix A is bounded by Proof. Since p = q, there are three cases: ∈ ω, k ∈ ω and , k / ∈ ω. Let us treat them separately ∈ ω Note that since p = q, we then necessarily have ω ∩ I p = ∅. Hence, | a j , a k | ≤ τ √ kuk 2 s for all j. Furthermore, a , a j is equal to 1 exactly when j = , and else also smaller than τ √ , and we may simply estimate j∈ω | a , a j a j , a k | ≤ τ 2 kuks . The bound follows. We now arrive at the lemmata we claimed in the main text. Let us start with the one analysing the deviation of Ψ(A, B, h + z) from Ψ(A, B, h). Note that since z is independent of all other random variables, we may treat the latter as constant in our considerations. Proof of Lemma 9. Note that Ψ(A, B, h) is a mean of independent variables We will first prove a high-probability bound on all of them. We may then apply Hoeffding to get the final concentration. To further ease the notation, let us also drop the index i in passing. Let us first note that for u, w arbitrary where we used the previously defined notation C p = A * ω B p . Hence, Here, X p = C p h p , C p h p + 2Re( C p h p , C p zz p ) + C p z p , CC p z p ) are variables with By Lemma 21, we have . for all i with a failure probability smaller than 4te −κ log(t)n = 4t 1−κn . We (crudely) estimated k −1 u k −1 s ≤ 1. Now, applying Hoeffding on the event that the above bound holds yields that n with a failure probability smaller than ). Now it is only left to note that The proof has been finished. We move on to the lemma controlling the deviation of ψ (A, B, h + z) from ψ(A, B, h). As stated earlier, we can treat all terms but z as constant in these considerations. As such, the expression we are trying to control is a fourth order polynomial in the Gaussian vector z. Hence, we will be able to control it with Theorem 20. To do so, let us first prove some auxiliary results about the expectation of derivatives of certain polynomials in Gaussians. We here used the symmetry of z, and the assumption of Γ having a zero diagonal. Consequently We move on to the first term in the second derivative. We have The two terms with z appearing linearly vanish in expectation (since z is centered). The other nonconstant term equals The following two equalities hold The latter equality can be seen through direct calculation, or by the fact that iz is identically distributed to z. Hence The second derivative has been calculated. We move on to the first derivative. Let us treat the constant, linear and quadratic term of δ(z) separately. As for the first, it causes a term α(β(x 0 ) + γ(z, x 0 )). Here, only the constant term survives calculating the expected value. 
The linear part induces a term of the form β(z)(β(x 0 ) + γ(z, x 0 )). The linear part again vanishes. The bilinear equals Using an argument similar to that above, we obtain that the above equals We now only have the term associated to the quadratic term in δ. It causes the term z, Γz β(x 0 ) + z, Γz γ(z, x 0 ). Here, the cubic term vanishes in the expectation due to symmetry of z, and E ( z, Γz ) = tr(Γ) = 0 due to the zero diagonal assumption. We now move on to the final claim, namely the one about the expectation of π(z) = δ(z)δ(z) itself. We apply the same strategy as for the derivative, i.e. treat each term in δ(z) separately. As for the constant part, we have αδ(z) = |α| 2 + αβ(z) + α z, Γz . Again, only the constant survives taking the expectation. We continue with the linear part The linear and third order terms vanish in expectation due to symmetry. As for the final term, we have E |β(z)| 2 =E z, b b, z + b , z b, z + z, b z,b + b , z z,b The quadratic term is left. We have z, Γz δ(z) = z, Γz α + z, ΓZ β(z) + | z, Γz | 2 . The expectation of the two first terms vanishthe first due to tr(Γ) = 0, and the second due to symmetry. Let's expand the third one | z, Γz | 2 = i,j,k, The expectation of z i z j z k z is zero unless either i = j and k = or i = k and j = . In the first cases, Γ ij Γ k = 0. The same thing happens when the common value of i and k is the same as the common value of j and . When i = k = j = , we have E (z i z j z k z ) = E |z i | 2 E |z j | 2 = ς 4 Therefore, the only terms that survives taking the expected value is i =j The proof is finished. We draw the following immediate corollary. F It is now just a matter of utilizing the linearity of the derivative and the Cauchy-Schwarz inequality to obtain the stated result. With the above two auxilary results in our toolbox, we may now prove Lemma 10. Proof of Lemma 10. Notice that The expressions we are trying to control are hence as in Corollary 24 with α i p,q = A i h p , B i q h q Γ i p,q = (C i p ) * C i q , b i p,q = (C i p ) * C i q h qb i p,q = (C i q ) * C i p h p , where we used the notation C p = A * ω A p again. Let us estimate the values of α i , β i and Γ i in this case. To ease the notational burden slightly, let us drop the index i. α Here, we just need to recognize the term from the ψ(A, B, h)-expression β Let us first notice that since b p,q andb p,q have disjoint supports (remember that p = q), we have Remembering the notation I p for the p:th block, we have Summing this over implies where c q equals 1 if ω ∩ I q = ∅, and zero otherwise. Again summing over p = q, we obtain the final estimate Here, we used that ω is contained in only one I q , so that q c q = 1 Γ The argument is similar to the one above. We have Again using the squared version of the bound (17), we see that the above is smaller than Summing this bound over p = q, we obtain Γ 2 ≤ 6nnsτ 2 kuk 2 s + 3n 2 τ 4 k 2 u k 2 s ≤ n 2 τ 2 kuks ( 1 ks + τ 2 kuks ) ≤ n 2 k −2 s τ 2 (1 + τ 2 ) ≤ n 2 k −2 s τ 2 (1 + τ ) 2 We can now bound the expressions λ k from Theorem 20: Remember that in this case, ς 2 = σ 2 m θ min(log(tr s ks ), k −1 s log(tr s ks ) 2 )· ψ(A, B, h) 1 /2 (σhτ + σ 2 τ (1 + τ ) + σ 2 (σ 2 (1 + τ ) 2 τ 2 + στ (1 + τ )h + τ 2 h 2 ≤ ∆ where we dropped a few k −α s -factors and used that log(tr s ks ) k −1 s log(tr s ks ) 2 to make the expression a bit more tidy -this surely only makes the expression larger. 
Consequently, Theorem 20 implies |X − E (X)| ≤ ∆ with a failure probability smaller than exp(−κ min 1≤k≤4 min 1≤j≤k θ λ k ) 2 /j ) ≤ exp(−κ log(tr s ks )) ≤ (tr s ks ) −κ , where the value of the constant κ is dependent on the implicit constant C in the above. By a union bound, we get the inequality above for all times i with a probability smaller than t 1−κ r −κ s ks −κ Now let us calculate E (X). But this is easyby Corollary 23 and the observation β p,q 2 F = b p,q 2 + b p,q 2 , we have
2022-09-23T06:43:00.255Z
2022-09-22T00:00:00.000
{ "year": 2022, "sha1": "1862ef381b36c0af22aa077b05149334ae347ec8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1862ef381b36c0af22aa077b05149334ae347ec8", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
210913922
pes2o/s2orc
v3-fos-license
Neonatal survival and determinants of mortality in Aroresa district, Southern Ethiopia: a prospective cohort study Background The first 28 days of aliveness are the biggest challenge mentioned for the continuity of life for children. In Ethiopia, despite a significant reduction in under-five mortality during the last 15 years, neonatal mortality remains a public health problem accounting for 47% of under-five mortality. Understanding neonatal survival and risk factors for neonatal mortality could help devising tailored interventions. The aim of this study was to determine the neonatal survival and risk factors for neonatal mortality in Aroresa district, Southern Ethiopia. Methods A community based prospective follow up study was conducted among a cohort of term pregnant mothers and neonates delivered from January 1/2018 to March 30/2018. A total of 586 term pregnant mothers were selected with a multistage sampling technique and 584 neonates were followed-up for a total of 28 days, with 12 twin pairs. Data were coded, entered cleaned and analyzed using SPSS version 22. Kaplan–Meier survival curve was used to show pattern of neonatal death in 28 days. Independent and adjusted relationships of different predictors with neonates’ survival were assessed with Cox regression model. The risk of mortality was explored and presented with hazard ratio and 95% confidence interval and P-value less than 0.05 were considered as significant. Result The overall neonatal mortality was 41 per 1000 live births. Hazards of neonatal mortality was high for neonates with complications (AHR = 3.643; 95% CI, 1.36–9.77), male neonates (AHR = 2.71; 95% CI, 1.03–7.09), neonates that mothers perceived to be small (AHR = 3.46; 95% CI, 1.119–10.704), neonates who had initiated exclusive breast feeding (EBF) after 1 h (AHR = 3.572; 95% CI, 1.255–10.165) and mothers who had no postnatal care (AHR = 3.07; 95% CI, 1.16–8.12). Conclusion Neonatal mortality in the study area was 4.1% which was high and immediate action should be taken towards achieving the Sustainable Development Goals. To improve neonatal survival, high impact interventions such as promotion of maternal service utilization, essential newborn care and early initiation of exclusive breast feeding were recommended. Background Neonatal mortality (NNM) is the death of a baby within the first 28 days of life and is expressed as neonatal deaths per 1000 live births. The first 28 days of aliveness are the biggest challenge mentioned for the continuity of life for children. Figures in 2015 pointed out that nearly six million under-five died before celebrating their fifth year; close to one million newborns will lose their life at the first day of their birth. A one million newborns will die in the early neonatal period and, about 2.8 million will die in the late neonatal period though the overall neonatal mortality rate went down by 49% from 37 in 1990 to 19 deaths per 1000 live births in 2016 [1]. Neonatal mortality has become an important public health issue in many developing countries. Among newborns in sub-Saharan Africa, about 1 among 36 children dies in the neonatal period, while in the world's richest countries the neonatal death is 1 in 333 children [1]. Africa is one of the global regions that show the smallest declines in Neonatal Mortality Rate (NMR) in the so far Millennium Development Goal. In Sub-Saharan Africa, neonatal mortality accounts for 35% of all child deaths [2]. 
Ethiopia is the third highest neonatal mortality contributor in Africa with 187,000 neonatal deaths in 2015 [3]. According to Ethiopian Demographic Health Survey (EDHS) 2016, the neonatal mortality rate in Ethiopia was 29 deaths per 1000 live births [4]. Leading causes for neonatal death are pre-term birth, severe infections, and asphyxia. Neonatal factors like birth size, birth rank and birth interval and maternal complication during labour, as well as health service seeking behavior are the potential determinants of neonatal mortality [5]. The Ethiopian government has formulated and implemented a number of policies including Integrated Management of Newborn and Childhood Illness (IMNCI) strategy [6], Kangaroo Mother Care (KMC) [7] and Health Sector Development Plan (HSDP) [8], which aim at continued improvements in childhood survival by increasing access to and quality of health services to every segment of the society. Despite these policy and intervention initiatives, currently, Ethiopia has the third highest reported number of newborn deaths in Africa and ranks fifth having the highest number of deaths globally [1] and unluckily, the available information on neonatal mortality is overly scarce to blueprint locally specific interventions [1]. Therefore, this study was aimed at determining determinants of neonatal survival in Aroresa District, Southern Ethiopia. Study setting and period The study was conducted in Aroresa district which is one of 23 districts in Sidama Zone, Southern Nations, Nationalities and Peoples' Region (SNNPR), Ethiopia. It is located at the distance of 181 kms from Hawassa, the capital of SNNPR and 554 kms from Addis Ababa. The district has 30 rural and 3 urban Kebeles (the smallest administrative unit in Ethiopia) with a total population of 220,332 and of this, females constitute 49.8%. The women of reproductive age group account for 51,337(23.3%) of the total population. According to the district health office report, the total number of estimated deliveries in 2017/18 was 7623 and proportion of the utilization of first ANC, institutional delivery, Postnatal care (PNC) services and contraceptive prevalence were 77, 38, 69 and 49%, respectively. The district has one primary hospital, 8 Study design and population A community based prospective cohort study was conducted among a cohort of term pregnant mothers and neonates delivered from January 1/2018 and March 30/ 2018 in randomly selected kebeles. All term pregnant mothers (≥37 week Gestational Age (GA)) who live in the study kebeles were included in this study and followed up until they gave birth and their neonates were followed-up for a total of 28 days. All term pregnant mothers who had a known psychiatric disorder, unable to speak and residents for less than 6 months were excluded from this study. Sample size and sampling procedure The sample size was calculated using single population proportion with the assumption that the proportion of neonatal mortality was 2.9% [4], margin of error 2%, confidence interval of 95%, and a design effect (DE) of 2. Using the sample size correction formula and adding 15% non-response and lost to follow-up rate, and the final sample size became 600. Multistage sampling technique was used to identify 600 term pregnant women to be enrolled in the follow up for the study (all term pregnant mothers were recruited consecutively until sample size was reached). First, all the Kebeles: in Aroresa district were determined to be 33. 
Then, 10 kebeles (9 rural and 1 urban) were selected from the district by simple random sampling method using OpenEpi 3.03. The calculated sample size was proportionally allocated to each study kebele based on expected number of term pregnant women per 'Kebele'. Then the calculated sample was selected consecutively from each kebele. Variables The outcome variable is neonatal survival dichotomized as (alive =1 and died = 0) The predictor variables include; socio-demographic and economic factors: ethnicity, religion, place of residence, marital status, education status of mother and father, and occupational status of mother, age of mother, maternal factors: age at child birth, maternal complication (excessive bleeding, puerperal sepsis and fever, prolonged labour, eclampsia and preeclampsia malpresentation and malposition, premature rupture of membrane, and obstructed labour), maternal service utilization factors: place of delivery, ANC visit, postnatal care, initiation of exclusive breast feeding (EBF) and neonatal factors: birth size, birth order and interval and neonatal complication like asphyxia, infection, hypothermia, and jaundice. Operational definition Neonatal death: a death of neonate within 28 days of life according to report of mother participated in the study. Neonatal survival is defined as being alive up to the end of follow-up period (28 days). Term pregnancy is a pregnancy between 37 completed weeks up to 42 completed weeks of gestation. Birth size is defined as the size of newborn at birth according to the perception of mother. Stillbirth is defined as any fetus born without a heartbeat, respiratory effort or movement, or any other signs of life. Neonatal complication in this study is defined as a neonate experiencing at least one or more of the following conditions; asphyxia, hypothermia, jaundice, convulsion, unable to breast feed or any conditions which endangers the life the neonate. According to maternal estimate, mother's perception of baby's size is defined in this study as small equivalent to less than 2500 g, average 2500 g to 4000 g and large greater than 4000 g. This estimate was obtained because birth weight is unknown for most (86%) newborns in Ethiopia however the mother's estimate of weight is subjective and interpretation of the finding should be viewed with caution [4]. Data collection tool and procedure A structured questionnaire, first prepared in English and translated into Sidamu Afoo (local language), were employed to collect the data. All term (> 37 week GA) pregnant women at selected kebeles were identified by Health Extension Workers (HEW). Trained data collectors were contacted the women to obtain informed consent, to perform interviews and later to conduct postpartum follow-up, home visits, at week 1, and 4. All data collectors were contacted with the supervisor by mobile phone every week and on site supervision. Baseline data collected during recruitment were maternal socio-demographic information, medical history and use of health services like antenatal care. Pregnancy outcomes, the circumstances of delivery, date of birth, date of death of neonate, feeding patterns, and illness episodes of neonate were collected during follow-up period. The data collection processes were supervised strictly by trained supervisors and the principal investigator. 
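As a rough illustration of the sample-size reasoning given in the previous subsection, the Python sketch below applies the single population proportion formula with the stated inputs; the exact order of the finite population correction and design effect steps is an assumption, so this is a plausible reconstruction rather than the authors' own calculation.

from math import ceil

# Inputs quoted in the methods: p = 2.9% (EDHS 2016), d = 2% margin of error, 95% CI,
# design effect 2, about 7623 expected deliveries, and 15% non-response/loss to follow-up.
Z, p, d, deff, N, attrition = 1.96, 0.029, 0.02, 2, 7623, 0.15

n0 = Z**2 * p * (1 - p) / d**2        # single population proportion formula
n_corrected = n0 / (1 + n0 / N)       # finite population correction (assumed step)
n_final = n_corrected * deff * (1 + attrition)

print(ceil(n0), ceil(n_corrected), ceil(n_final))   # lands close to the 600 used in the study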
The quality of the data was assured using a properly designed questionnaire, proper training of the interviewers and supervisors on the data collection and follow-up procedures, and proper categorization and coding of the questionnaire. Questionnaires were pre-tested on 5% of the sample outside the study area. Data were double entered and screened for missing values, outliers and data entry errors using frequency distributions. Errors were corrected against the raw data and the necessary corrections were made before the analysis. Statistical analysis Data were coded, entered, cleaned and analysed using SPSS version 22. Pregnancy outcome variables were described by descriptive statistics and neonatal outcome variables were examined against all confounding variables using regression analysis. A Kaplan-Meier survival curve was used to show the pattern of neonatal death over the 28 days. Independent and adjusted relationships of different predictors with neonates' survival were assessed with a Cox regression model. The risk of mortality was explored and presented as hazard ratios with 95% confidence intervals. A P-value less than 0.05 was considered significant. Multicollinearity between the independent variables was assessed using variance inflation factors (VIF), and a VIF greater than 10 was taken to indicate multicollinearity before interpreting the final output. Results A total of 586 term pregnant mothers were identified and enrolled in follow-up consecutively over a 3-month period, and these pregnancies resulted in 14 (2.4%) stillbirths and 584 live births, including 12 pairs of twins. The 584 neonates were then followed for 28 days to determine their survival status. From a total of 600 neonates planned, 584 neonates were recruited, giving a response rate of 97.33%. The perinatal mortality (stillbirths and early neonatal deaths) was 49/1000 pregnancies. Characteristics of term pregnant mothers and their neonates Among the 584 neonates born in the study area, 405 (69.3%) were born to mothers who had ANC follow-up, of whom 209 (35.8%) had 1-2 ANC visits in health facilities. Five hundred and fifty-seven (95.4%) of the mothers had no history of chronic medical diseases and 500 (85.6%) had no history of previous stillbirths. The majority of neonates, 501 (85.8%), were born to mothers who had no complication during delivery. More than half of the neonates, 318 (54.5%), were born at home, and 520 (89.0%) had started breast feeding within 1 h. According to the mothers' perception of their neonates' size, 404 (69.18%) were reported to be of average size, and 258 (44.2%) had a birth interval of 24-48 months. The incidence rate of neonatal mortality was 1.51 per 1000 person-days of observation. Kaplan-Meier survival analysis The Kaplan-Meier survival curve shows that the probability of survival of a neonate who had initiated EBF after 1 h was lower than that of a neonate who had initiated EBF within 1 h; this difference was already apparent by day 5 (Fig. 1). Moreover, the survival graph indicates that the probability of survival in both groups falls rapidly during the first week and stabilizes after the end of the second week. Cox proportional hazards regression models Variables with a p-value less than 0.25 in the crude model were included in the Cox proportional hazards regression model. Maternal educational status, neonatal complication, maternal history of stillbirth, place of child birth, baby's sex, baby's size at birth, birth type, initiation of BF and postnatal care were the variables included in the Cox regression model.
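A minimal sketch of this type of survival analysis, using the open-source lifelines package on a small made-up data set (the layout, variable names and values are assumptions for illustration, not the study data):

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Toy data only: follow-up time in days (censored at 28), death indicator and two binary predictors.
df = pd.DataFrame({
    "time":   [28, 3, 28, 7, 28, 28, 5, 28, 2, 28, 28, 12],
    "died":   [0,  1,  0, 1,  0,  0, 1,  0, 1,  0,  0,  1],
    "male":   [1,  1,  0, 1,  0,  1, 0,  0, 1,  1,  0,  1],
    "no_pnc": [1,  1,  0, 1,  0,  0, 0,  0, 1,  1,  0,  0],
})

# Kaplan-Meier estimate of neonatal survival over the 28-day follow-up.
km = KaplanMeierFitter().fit(df["time"], event_observed=df["died"])
print(km.survival_function_.tail())

# Cox proportional hazards model; exp(coef) corresponds to the adjusted hazard ratios.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
cph.print_summary()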
Maternal history of stillbirth, place of child birth and birth type lost their significance after adjusting for confounders. The risk of neonatal mortality was about 3.6 times higher for neonates who had a neonatal complication compared to those who had no complication (AHR = 3.64, 95% CI 1.39-9.77). Male neonates had a 2.7 times higher risk of neonatal mortality compared with female neonates (AHR 2.71, 95% CI 1.04-7.09). According to the mothers' perception of the babies' size, neonates that mothers perceived to be small had a 3.46 times higher risk of dying compared with average size neonates (AHR 3.46, 95% CI 1.119-10.704). Time of initiation of BF and postnatal care were also independent predictors of neonatal mortality. Neonates who initiated BF after 1 h and neonates who had no postnatal care were at about 3.6 (AHR = 3.57, 95% CI 1.26-10.17) and 3 (AHR = 3.07, 95% CI 1.16-8.12) times higher risk of dying than neonates who initiated BF within 1 h and those who had postnatal care, respectively (Table 3). Discussion Our study showed that the stillbirth rate in the study area was 24 per 1000 births, which is much lower than that of a hospital-based prospective cohort study in Uganda, which reported a stillbirth rate of 120 per 1000 births [9]. This finding is also lower than the perinatal mortality reported in the EDHS 2016, but it is higher than that of the study conducted in South-west Ethiopia [10]. Our result should be interpreted cautiously because it reports the stillbirth rate among term pregnancies only, which might underestimate the actual burden of stillbirth in the study area. On the other hand, there could be misclassification of pregnancy outcomes; for example, severely asphyxiated neonates might be classified as stillbirths, which could also overestimate the magnitude of the stillbirth rate in the study setting. The neonatal mortality rate in the study area was 4.1%, which was higher than the global neonatal mortality rate in 2016 [1] and reports from studies conducted in Ethiopia [11,12], Sudan and Zambia [13,14]. The differences might be attributed to study designs, health service coverage and socio-economic factors. Our result is consistent with the NMR of Pakistan [1] and studies conducted in Jimma, Ethiopia and Nigeria [15,16], but it is lower than a study done in Tigray, Ethiopia [17]. In our study, 62.5% of neonatal deaths occurred in the first week of life, which is similar to other studies [11,17,18]. The higher mortality in the study setting could be due to poor access to quality maternal and child care services, a high proportion of home deliveries, and poor access to skilled attendants, particularly midwives, in rural settings, given that the coverage of skilled birth attendance is 28% in Ethiopia and in Southern Ethiopia [4]. Tailored and high impact maternal and child care interventions should be implemented to avert avoidable deaths. Regarding determinants of neonatal survival, socio-demographic factors contributed little to neonatal mortality in this study. Additionally, place of child birth also contributed less to neonatal mortality, even though it was reported as one of the strongest predictors of neonatal mortality in different studies [15,19,20]. Our study revealed that male neonates were less likely to survive than female neonates. This is similar to studies done in different areas, even though there are differences in the strength of association [18,21-24]. In this study male neonates were about 2.7 times more likely to die than female neonates.
This finding is consistent with the study conducted in Northern Shoa, Ethiopia, and higher than studies conducted in Nigeria, Pakistan, Bangladesh and Indonesia [18,23,24]. The possible explanations for the increase in mortality among male neonates, as mentioned in many studies, could be that male neonates appear more likely to have respiratory problems, intrauterine growth restriction or insufficiency, prematurity and low Apgar scores [25,26]. We did not measure the birth weight of newborns since the study was conducted at community level; instead we used the mother's perception of the baby's size to estimate the size of neonates. Our result shows that neonates that mothers perceived to be small were 3.46 times more likely to die than those of average size. Similarly, a study from Indonesia revealed that newborns whose birth size, according to the mothers' perception, was smaller than average had a 2.8 times higher risk of dying than average sized babies, and neonates of low birth weight had a 5.5 times higher risk of neonatal death than normal weight babies [23]. Additionally, findings from Nigeria show that neonates perceived as small or smaller by their mothers were at 2.26 times higher risk of dying than average sized neonates [21]. Another study from Pakistan also reported that the hazard of neonatal mortality was higher among small sized neonates than average sized neonates [22]. The prospective follow up study in Jimma, Ethiopia also showed that small size babies at birth had an increased risk of death compared with average size neonates [15]. Post-natal care in this study was a statistically significant determinant of neonatal survival. Neonates who had no postnatal care during the neonatal period had a 3 times higher risk of neonatal mortality than neonates who had postnatal care. Consistently, a study from Ghana reported that utilization of postnatal services was inversely related to neonatal mortality [27]. Similarly, the study conducted in Northern Shoa, Ethiopia revealed that neonates born to mothers who did not receive postnatal services were 3 times more likely to experience neonatal death than neonates born to mothers who received postnatal services [19]. This might be because postnatal visits enable the HEWs to identify and screen the health condition of mothers and their neonates, which could help in providing neonatal health care services. Our finding had a lower strength of association compared with a study conducted in Jimma, Ethiopia. That study reported that neonates having poor neonatal care had about a ten times higher risk of dying during the neonatal period compared to those who received good comprehensive neonatal care [15]. A possible explanation for this difference might be the classification of study variables: that study classified postnatal care into poor and good comprehensive neonatal care, unlike our study. In this study, neonates who initiated breast feeding after 1 h of delivery were at higher risk of dying during the neonatal period. This agrees with studies conducted in other parts of Ethiopia. The case-control study in North Shoa Zone, Ethiopia indicated that neonatal deaths were in excess among neonates who were not breastfed within the first hour after delivery compared with those who were breastfed within the first hour after delivery [19]. A study from Tigray, Ethiopia also reported that initiating exclusive breast feeding within 1 h has a protective effect on the hazard of neonatal mortality [19].
Initiation of breast feeding within the first hour can help prevent neonatal deaths caused by infections such as sepsis, pneumonia and diarrhea, and may also prevent additional hypothermia-related deaths, especially in preterm and low birth weight infants in developing countries [28]. Our study showed that neonates with complications were less likely to survive than neonates who had no complications. Consistently, the study from Tigray, Ethiopia reported that neonates who had no complications after birth had a 99.86% lower hazard of death than those who had complications [17]. Similarly, the study from Ghana revealed that infections, preterm birth and low birth weight, birth trauma, and hypothermia were causes of neonatal mortality [29]. Preterm birth complications, intrapartum-related events and sepsis or meningitis were reported as among the leading causes of newborn deaths globally in 2016 [30]. In our study, mothers with preterm pregnancies did not participate, which might underestimate the strength of the association between neonatal complications and neonatal mortality. As a limitation, baby weight was not measured in this study but estimated from the mother's perception of the baby's size, which is subjective and may cause information bias. However, measures were taken to minimize the bias by helping mothers correctly estimate the size of their babies, and the results are consistent with other studies that used similar methods. We included term pregnant mothers and their newborns, which could underestimate the neonatal and perinatal mortality rates. However, the study provided valuable information about the magnitude of the problem in the study setting. The study did not address all determinants of neonatal mortality, so unmeasured factors might affect the true association of the studied variables. Conclusion Neonatal mortality in the study area was still high and calls for immediate action towards achieving the Ethiopian Health Sector Transformation Plan target of reducing neonatal mortality to 10% by the end of 2020 and the targets of the Sustainable Development Goals. A significant proportion of mothers also delivered at home, which requires strategies to improve the coverage of institutional delivery. Various factors such as neonatal complications, duration of EBF, sex of neonates, size of neonates at birth, and postnatal care were identified as independent predictors of neonatal survival. Tailored interventions
2020-01-27T16:11:23.897Z
2019-08-12T00:00:00.000
{ "year": 2020, "sha1": "6014500e2851b563410c35eff2b3f41726b1e4e1", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/s12887-019-1907-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6014500e2851b563410c35eff2b3f41726b1e4e1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54907230
pes2o/s2orc
v3-fos-license
Stochastic thermodynamics of rapidly driven systems We present the stochastic thermodynamics analysis of an open quantum system weakly coupled to multiple reservoirs and driven by a rapidly oscillating external field. The analysis is built on a modified stochastic master equation in the Floquet basis. Transition rates are shown to satisfy the local detailed balance involving the entropy flowing out of the reservoirs. The first and second law of thermodynamics are also identified at the trajectory level. Mechanical work is identified by means of initial and final projections on energy eigenstates of the system. We explicitly show that this two step measurement becomes unnecessary in the long time limit. A steady-state fluctuation theorem for the currents and rate of mechanical work is also established. This relation does not require the introduction of a time reversed external driving which is usually needed when considering systems subjected to time asymmetric external fields. This is understood as a consequence of the secular approximation applied in consistency with the large time scale separation between the fast driving oscillations and the slower relaxation dynamics induced by the environment. Our results are finally illustrated on a model describing a thermodynamic engine. Introduction The identification of thermodynamic quantities, such as heat, work and entropy, in open quantum systems driven by an external field is a central issue in quantum thermodynamics. Such systems are encountered in a variety of physical situations including the interaction with electromagnetic radiation [1], driven tunneling [2], switching in multi-stable quantum systems [3], transport properties of driven quantum dots [4], and non-equilibrium Bose-Einstein condensation [5]. Up to now, a consistent picture of the thermodynamics of these systems has only been given within specific limits or regimes. In particular, most studies have been focused on slowly driven and weakly coupled open systems [6,7,8,9,10]. Within this regime, the system dynamics is well described by a stochastic master equation in the basis of time dependent energies of the system. Entropy production, heat and work can then be identified at the single trajectory level, and the thermodynamic analysis of the system performed within the framework of stochastic thermodynamics (ST) [11,12,13]. More recently, work has been done on the study of thermodynamic properties of open quantum systems driven by a fast and periodic external field, weakly coupled to a single heat reservoir [4,14,15,16,17,18,19,20]. In the present paper, we perform the ST analysis of an open quantum system weakly coupled to multiple reservoirs and driven by a fast and time periodic external force. Considering multiple reservoirs considerably widens the scope of possible applications such as, for example, the study of energy conversion. We perform the statistics of the energy and matter currents out of the reservoirs using the counting statistics formalism (see Ref. [21] for a review). Within this formalism, the currents are determined by making initial and final measurement of the energy and particle number in the reservoirs. In the weakly coupled and fast driving regime, these statistics is shown to be independent of quantum coherences in the Floquet basis. 
This directly results from the dynamical decoupling between populations and coherences in the Floquet basis in this regime, together with the fact that measurements of the reservoirs energy and particle number are independent of the system state. The identification of the mechanical work further requires a double measurement of the initial and final energies of the system [22,23,24,25]. Contrary to the current statistics, the mechanical work statistics depends on the evolution of coherences in the Floquet basis, which both influence and depend on the outcome of the system energy measurements. However, we show that the double projection in the system becomes unnecessary at steady-state for the identification of the rate of mechanical work, i.e. power. In this limit, the mechanical power statistics is exclusively determined by the diagonal elements of the modified density matrix in the Floquet basis, independently of quantum coherences. The first law then leads to a balance equation for the currents and the mechanical power. Furthermore, the steady-state mechanical power is shown to be given by the transfer rate of quanta, with energy given by the driving frequency ( = 1), from the external driving to the reservoirs. An important consequence of the dynamical decoupling between populations and coherences in the Floquet basis is that the trajectory entropy production associated to the stochastic dynamics in the Floquet basis satisfies a transient FT. The Shannon entropy in the Floquet basis is thus the relevant entropy within this scheme. The local detailed balance (LDB) satisfied by the transition rates between Floquet states is here written in terms of the heat exchanged between the system and the reservoirs during the corresponding microscopic transition. The heat exchange includes multiples of the driving frequency which result from the presence of the non conservative external force due to the driving, and are identified as the dissipated mechanical work. We make use of the LDB condition in order to write the steady-state entropy production in terms of the currents and mechanical power. A steady-state FT for these quantities is also established by using the LDB, which is the steady-state version of the transient FT obtained in Ref. [10]. A steady-state FT for the mechanical power is recovered when considering a single heat reservoir [17,19]. This paper is organized as follows. In section 2, we first introduce the general Hamiltonian of a periodically driven open quantum system as well as the Floquet basis of the system and its associated quasi-energies. This section is mainly meant to fix notations. In section 3, we perform the counting statistics of the currents of energy and matter through the system by using the counting statistics formalism. We derive the modified stochastic master equation [21] by using standard assumptions: weak coupling between system and environment, wide spacing between the quasi-energies and fast driving as compared to the relaxation dynamics [26,27,6,28,16]. This section extends former results [4,14,15,16,18] to an environment consisting of multiple reservoirs. The ST analysis of the system starts in section 4. In the first part of this section we use the energy conservation law to construct the mechanical work statistics. The steady-state statistics is also discussed and the first law is introduced. In the second part, we show that the trajectory entropy production satisfies a transient FT and formally establish a FT for the currents and mechanical power. 
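To make the Floquet construction of the previous paragraphs concrete before moving on, the sketch below numerically extracts quasi-energies for a hypothetical driven two-level system from its one-period propagator; the Hamiltonian and parameter values are illustrative assumptions, not a model considered in this paper.

import numpy as np
from scipy.linalg import expm

# Hypothetical driven two-level system (hbar = 1): H(t) = (Delta/2) sigma_z + A cos(w t) sigma_x.
Delta, A, w = 1.0, 0.5, 5.0
T = 2 * np.pi / w
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def H(t):
    return 0.5 * Delta * sz + A * np.cos(w * t) * sx

# One-period propagator U(T) assembled from short time steps.
steps = 2000
dt = T / steps
U = np.eye(2, dtype=complex)
for k in range(steps):
    U = expm(-1j * H((k + 0.5) * dt) * dt) @ U

# Floquet theorem: the eigenphases of U(T) give the quasi-energies, folded into [0, w).
phases = np.angle(np.linalg.eigvals(U))
quasi_energies = np.mod(-phases / T, w)
print(quasi_energies)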
We apply our results to the analysis of a thermodynamic engine in section 5. This engine consists of a two level system, weakly coupled to two particle reservoirs. For this system, the stochastic master equation is exposed and the large deviation function of the output power is numerically obtained and illustrated. We also investigate both average and fluctuations of the output power. Quite remarkably, the output power is subject to large fluctuations in the regime of maximum output power in this model. Finally, a summary of the obtained results and possible perspectives are drawn in the concluding section 6. Model Hamiltonian We consider a periodically driven open quantum system modeled by a Hamiltonian of the form where H S (t) = H S (t + T ) denotes the Hamiltonian of the periodically driven system, H R stands for the environment Hamiltonian and V describes the interaction between the system and the environment. According to Floquet theory, the dynamics associated to the time-periodic Hamiltonian H S (t) admit a complete set of solutions under the form |ψ s (t) = e i st |s t , where s are the so-called quasi-energies of the system while the Floquet states |s t have the same periodicity as the Hamiltonian, that is, |s t+T = |s t [29,30]. These Floquet states and quasi-energies satisfy the eigenvalue problem which is obtained by inserting the quantum state |ψ s (t) into the Schrödinger equation associated to the system Hamiltonian H S (t). Floquet states can be Fourier expanded according to |s t = k e −ikωt |s k in terms of the driving frequency ω = 2π/T . The system quasi-energies are defined up to a multiple of the frequency ω, and can thus be restricted to the first Brillouin zone, s ∈ [0, ω]. We assume that the particle number operator in the system, denoted by N S , commutes with the system Hamiltonian at all times, i.e. [N S , H S (t)] = 0. As a result, the operators H S (t) − i∂ t and N S can be simultaneously diagonalized and the Floquet states |s may be chosen in such a way as to have a well defined particle number n s . The environment consists of a set of macroscopic reservoirs of energy and particles labelled by the index ν = 1, . . . , N . Its Hamiltonian is written as where |r ν is a quantum state in the reservoir ν with energy rν and particle number n rν . The particle number operator in reservoir ν is then given by Each reservoir ν is assumed to be initially at grand canonical equilibrium with inverse temperature β ν = (k B T ν ) −1 and chemical potential µ ν where Z ν = Tr e −βν (Hν −µν Nν ) is the partition function of reservoir ν. The interaction between the system and its environment is written as where the sum runs over all the possible interaction terms and S κ and R ν κ denote operators acting on the Hilbert space of the system and the reservoir ν, respectively. The total particle number operator, N = N S + ν N ν , is assumed to commute with the total Hamiltonian (1), i.e. [N, H] = 0, so that the total number of particles is conserved in the full system. Counting statistics of energy and matter currents At finite times, the statistical properties of the energy and matter currents are completely characterized by the generating function (GF) the average · t being taken with respect to the probability distribution p(∆ ν , ∆n ν , t) of observing an amount of energy ∆ ν and particles ∆n ν flowing out of reservoir ν between time 0 and time t. The counting statistics formalism provides a general framework to calculate the GF (7) in open quantum systems. 
One introduces the modified Hamiltonian [21] and the modified density matrix which satisfies the dynamical equation The current GF can then be written as the trace of the modified density matrix, G(ξ ν , λ ν , t) = Tr {ρ(iξ ν , iλ ν , t)}. We now proceed by making the standard assumptions leading to a stochastic master equation for the diagonal elements in the Floquet basis of the reduced density matrix of the system the trace Tr R {·} being taken over the reservoirs degrees of freedom. A similar approach has recently been used in order to study the thermodynamics of rapidly driven quantum systems connected to a single heat reservoir [16,18]. The whole system is assumed to be initially in the factorized state where ρ S (0) denotes the initial reduced density matrix of the system and the ρ eq ν are defined in (5). We further make the following assumptions: (i) The environment is composed of reservoirs which are weakly coupled to the quantum system and large enough to remain unaffected by the quantum system. Their correlation time τ C is then assumed to be much shorter than the typical relaxation time scale of the system τ R . (ii) The free oscillations at the driving frequency, ω, and at the Bohr frequencies of the Floquet basis, ω ss = s − s , are much faster than the relaxation process induced by the reservoirs over time scale τ R . We note that since quasi-energies are restricted to the first Brillouin zone, s − s ≤ ω, a fast driving frequency is necessary though not sufficient in order to have a sparse Floquet spectrum. This remark is particularly relevant when considering systems close to the so-called avoided crossings in the Floquet spectrum [28]. Under these assumptions, one can take the Born-Markov approximation and apply the rotating wave approximation (RWA) [26,27,6,28,16] by averaging the system dynamics over a time scale ∆t which is intermediate between As a result of this procedure, the dynamics of the populations and coherences in the Floquet basis are decouple. The GF of the currents is then completely determined by the diagonal elements of the system reduced density matrix where ρ ss (ξ ν , λ ν , t) = s|ρ S (ξ ν , λ ν , t)|s . We first give the modified stochastic master equation that rule the evolution of populations g s (ξ ν , λ ν , t) ≡ ρ ss (iξ ν , iλ ν , t). In the following, functions defined on the set of quasi-energy states f : s → f s are arranged into vectors with components [f ] s = f s . For brevity, the sum of their components are written as f ≡ s f s = 1 · f where 1 ≡ (1, 1, . . . , 1) and · denotes a matrix product. With these notations, populations in the Floquet basis follow the set of dynamical equationsġ where the matrix elements Γ(ξ ν , λ ν ) containing the counting parameters can be written as The transition rates appearing in (15) are given by In this last expression, the amplitudes characterize the time scale of the corresponding transitions and depend on the number of quanta l transferred from the driving protocol to the reservoirs. An important feature of these amplitudes is that they do not necessarily vanish for s = s leading to socalled pseudo-transitions between different modes of the same Floquet state [16]. These pseudo-transitions directly contribute to the statistics of the currents which is manifest by the presence of terms of the form e ξν lω along the diagonal of the transition rate matrix (15). 
Furthermore, relation (18) emphasizes the fact that the allowed number of quanta exchanged with the mechanical driving during stochastic transitions is determined by the spectral properties of the matrix elements s t |S κ |s t . The reservoir correlation functions, on the other hand, are given by with ρ ν denoting the grand canonical equilibrium density matrix (5) in reservoir ν and the trace Tr ν {·} being taken over its Hilbert space. These equilibrium correlation functions encapsulate the thermodynamic properties of the reservoir and satisfy the Kubo-Martin-Schwinger (KMS) condition where ∆n ν κ denotes the particle number change in reservoir ν induced by the operator R ν κ , that is, assuming that r|R ν κ |r ∝ δ(n r − n r − ∆n ν κ ). The coherences in the Floquet basis, ρ ss (ξ ν , λ ν , t) with s = s , are also shown to follow the dynamicṡ with damping rates given by and frequencies by where p.v. denotes the Cauchy principal value. The coherences thus evolve independently of each other and undergo exponentially damped oscillations. Quite remarkably the damping rates (22) depend on the energy counting fields ξ ν contrary to the autonomous situation. At this point, we note that the dynamical equations for the matrix elements of the reduced density matrix are simply obtained by setting the counting fields to zero in equations (14) and (21), i.e. ρ ss (t) = ρ ss (ξ ν , λ ν , t)| ξν =λν =0 . The amount of energy and matter exchanged between the system and reservoirs during a microscopic transition is apparent in the expression of the modified rates (15). The different transitions between states s and s are distinguished by their indices ν and l. Such transitions involve an energy and particle exchange between the system and reservoir ν respectively given by s − s − lω and n s − n s . The summation over integer multiples of the driving frequency ω in the rate matrix elements (15) is characteristic of the presence of the external periodic force. As we will later see, the non conservative contributions lω to the energy flow are identified as the mechanical work dissipated into the reservoirs at steady-state. The statistics of the mechanical work is then obtained in the long time limit by only counting these terms. In the absence of external driving, the quasi-energies s become the actual energies of the system and the summation over l disappears, leading to the usual master equation for autonomous open systems. An essential task in ST is the identification of the microscopic processes related through time reversal. Such processes involve opposite amounts of energy and particle exchanges with the environment as well as inverted initial and final states. From the above discussion, these pairs of processes have transition rates given by Γ νl ss and Γ ν−l s s . By virtue of the symmetry relation γ l κκ |ss = γ −l κ κ|s s and KMS condition (20), these pairs of transition rates satisfy the LDB condition where the right-hand side is the entropy flowing from reservoir ν during the transition. The presence of lω terms in the energetics of (24) shows that the mechanical driving can enhance or decrease the statistical frequency of particular transitions. For example, by providing an extra amount of energy through the exchange of quanta at the driving frequency, the mechanical driving effectively lowers the energy cost of a particular transition thus increasing its probability rate. This observation will prove useful in the study of the thermodynamic engine considered in section 5. 
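In explicit form, and with the energy and particle exchanges quoted above ($\epsilon_s-\epsilon_{s'}-l\omega$ and $n_s-n_{s'}$ drawn from reservoir $\nu$), the LDB condition can be sketched as follows. The overall sign depends on whether the entropy is counted as flowing out of or into the reservoir, so this should be read as indicative rather than as a reproduction of the original equation (24):

```latex
% Hedged form of the local detailed balance condition; the overall sign
% convention may differ from the original equation (24).
\ln\frac{\Gamma^{\nu l}_{ss'}}{\Gamma^{\nu,-l}_{s's}}
  \;=\; -\beta_\nu\bigl[\epsilon_s-\epsilon_{s'}-l\omega
        -\mu_\nu\,(n_s-n_{s'})\bigr].
```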
Finally, we note that the quantity −µ ν (n s − n s ) is the chemical work performed by the system to bring n s − n s particles into reservoir ν against the chemical potential µ ν . The fundamental relation (24) plays a key role in writing the entropy production as the sum of the system entropy change and the entropy flow from the environment [31,12,11]. In addition, it also leads to a steady-state FT for the mechanical work and the currents out of the reservoirs as we show in section 4.2. For systems maintained in a non-equilibrium steady-state by boundary constraints, such as temperature and chemical potential differences between the reservoirs, the cumulant generating function (CGF) is a measure of the current fluctuations at steady-state. In particular, all the moments and correlations between the currents can be obtained by successive derivation of the CGF with respect to its counting parameters ξ ν and λ ν at zero values. A related object is the large deviation function (LDF) of the currents where the currents are defined as The CGF (25) and LDF (26) are related through the Legendre-Fenchel transformation as stated by the Gärtner-Ellis theorem [32]. Using the formal solution of equation (14), the current GF can be written as where p 0 denotes the initial occupation probability of the system. This also shows that the CGF (25) is given by the dominant eigenvalue of the rate matrix Γ(ξ ν , λ ν ) [21]. Besides, the average values of the energy and matter currents, obtained as the first derivatives of the CGF, are then given by in terms of the steady-state probabilities p st = lim t→∞ p(t). Finally, let us mention some interesting differences between the fast driving limit considered here and the slow driving limit. In this latter case, populations of the density matrix are known to satisfy a stochastic master equation in the time dependent energy eigenbasis of the system. The presence of the external field is then manifest by the time dependent system energies appearing in the tunnelling rates. These rates are known to satisfy a LDB condition, which depends on the time dependent parameters of the system. In the rapidly driven systems considered here, populations and coherences of the density matrix are dynamically decoupled in the Floquet basis and the stochastic master equation is now time independent in this basis. In this regime, the external driving results in the presence of non conservative terms in the LDB, which are expressed under the form of integer multiples of the driving frequency. Stochastic Thermodynamics The whole framework of ST relies on the identification of the first and second laws at the microscopic level. The first law requires the discrimination between the mechanical and thermal contributions to the energy balance of the considered physical system. The microscopic version of the second law is expressed as a transient FT for the trajectory entropy production. In the following subsection we identify mechanical work by using initial and final measurements of the system energy. Its statistics is derived and particular emphasis is put on the steady-state fluctuations of mechanical power. Within this limit, the initial and final measurements of energy are shown to be irrelevant and the mechanical power can then be interpreted as the transfer rate of quanta to the system at the driving frequency. The second law and FTs are discussed in subsection 4.2. 
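The Legendre-Fenchel link between the CGF and the LDF invoked here is straightforward to evaluate numerically. The sketch below uses a stand-in CGF (a bidirectional Poisson jump current with arbitrary rates, not the CGF of the model considered here) to show how the LDF and the average current follow from it.

```python
import numpy as np

# Stand-in CGF of a bidirectional Poisson jump current (illustrative only):
# Lambda(xi) = k_plus (e^xi - 1) + k_minus (e^-xi - 1)
k_plus, k_minus = 2.0, 0.5

def cgf(xi):
    return k_plus * (np.exp(xi) - 1.0) + k_minus * (np.exp(-xi) - 1.0)

# Gaertner-Ellis / Legendre-Fenchel transform: I(j) = sup_xi [ xi*j - Lambda(xi) ]
xi_grid = np.linspace(-10.0, 10.0, 4001)

def ldf(j):
    return np.max(xi_grid * j - cgf(xi_grid))

# The average current is the first derivative of the CGF at xi = 0,
# and the LDF vanishes at that value.
j_avg = k_plus - k_minus
print(j_avg, ldf(j_avg))   # the second number should be ~0
```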
Since populations and coherences in the Floquet basis are decoupled in the regime considered here, the trajectory entropy production of the stochastic process ruled by (14) and (15) satisfies a transient FT [33,34,11,13]. Quite remarkably, this is true despite the quantum coherences in the Floquet basis introduced by the initial measurement of the system energy. We further consider the long time limit and formally establish a steady-state FT for the currents and mechanical power [10]. Energy balance and work statistics In the weak coupling limit, the mechanical work performed by the external driving is given by the changes in system and reservoir energies between initial and final times, respectively chosen as time 0 and time t. Measuring the energy change in the system requires projective measurements of its initial and final energies. The necessity to project the system at initial and final times in order to perform the energetic analysis stems from the fact that Floquet states are not eigenstates of the time dependent system Hamiltonian H S (t). In the following, we denote by |e t the instantaneous eigenstate of the system Hamiltonian H s (t) with eigenvalue e t , that is H S (t)|e t = e t |e t . The system is assumed to undergo measurements of its energy at initial and final times yielding the values e 0 and e t respectively. Just after the initial measurement, the reduced density matrix of the system is given by ρ S (0; e 0 ) ≡ |e 0 e 0 |ρ S (0)|e 0 e 0 |. The mechanical work is then given by the changes in the system and environment energies, i.e. w = ∆e S − ν ∆ ν , with ∆e S = e t − e 0 and ∆ ν denoting the change of energy in reservoir ν between times 0 and t. By following the general approach exposed in Ref. [21], we obtain the generating function of the work as The average in the first line is taken with respect to the work distribution p(w, t) of observing an amount of work w performed by the external driving from time 0 to t. On the second line, the modified density matrix elements ρ s s (α, t; e 0 ) are obtained from those of the modified rate matrix of the currents introduced in the previous section as ρ s s (α, t; e 0 ) = ρ s s (ξ ν , λ ν , t)| ξν =−α,λν =0 . Note that the initial condition used in order to solve the dynamical equations (14) and (21) is now to be taken as ρ s s (0; e 0 ) = s |ρ S (0; e 0 )|s (33) due to the initial measurement of the system energy. We further note that this initial measurement of the system energy also affects the current statistics at finite times. Indeed, if an initial measurement of the system energy is performed, one must consider the initial condition g s (ξ ν , λ ν , 0) = e 0 ρ ss (0; e 0 ) when solving the dynamical equations (14). However, though coherences in the Floquet basis of the system density matrix ρ S (0) may affect the initial weight of the populations after the measurement as taken place, the GF (7) is independent of the subsequent evolution of coherences in the Floquet basis induced by this measurement. To the contrary, the mechanical work GF (32) does depend on the coherences in the Floquet basis induced by the initial measurement. This is mainly due to the fact that the operator which is counted in order to perform the work statistics, H S (t) + ν H ν , does not necessarily commute with the initial density matrix of the system ρ S (0) before the first measurement has been performed [21]. 
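A compact way of writing the work generating function discussed here, consistent with the two-point measurement of the system energy and with the definition $w=\Delta e_S-\sum_\nu\Delta\epsilon_\nu$ given above, is the following hedged sketch; the counting variable may enter with a different sign, or as $i\alpha$, in the original equation (32).

```latex
% Hedged sketch of the mechanical work generating function.
G_w(\alpha,t) \;=\; \bigl\langle e^{\alpha w}\bigr\rangle_t
  \;=\; \int \! dw\; p(w,t)\, e^{\alpha w},
\qquad
w \;=\; \Delta e_S-\sum_\nu \Delta\epsilon_\nu .
```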
Nevertheless, the steady-state CGF of the mechanical work only depends on the populations of the modified density matrix ρ ss (α, t; e 0 ) since coherences vanish at steady-state, i.e. lim t→∞ ρ ss (t) = 0 for s = s (see the appendix for details). Provided the energy in the system remains finite in the long time limit, the mechanical work CGF (34) obtained by making the substitutions ξ ν → α and λ → 0 in the rate matrix (15) and noting that terms of the form e α( s − s) do not contribute to its eigenvalue. At the trajectory level, we see that the stochastic variable associated to the mechanical powerẇ is given by the transfer rate of quanta from the external driving to the system multiplied by the driving frequency, i.e.ẇ ∼ w∆l/t for t → ∞ and where ∆l denotes the number of quanta transfered from the driving during a given realization of the dynamics. At steady-state, this mechanical power is entirely dissipated into the reservoirs. The above discussion also shows that the mechanical power CGF (34) can be obtained from the current CGF (43) by the following substitution This relation emphasizes the fact that, at steady state, the mechanical power is equal to the sum of incoming energy currents from the reservoirs, that is,ẇ = ν j ν . We are now in position to write down the first law of thermodynamics at steadystate, relating the heat currents to the mechanical and chemical powers. By introducing the heat flowsq ν = j ν − µ ν j n ν in terms of the currents (27), as well as the chemical poweṙ w c = ν µ ν j n ν , the first law of thermodynamics reads νq ν +ẇ c +ẇ = 0 at steady-state, that is, for t → ∞. We note that the average rate of mechanical work is obtained from (34) aṡ which is the steady-state current of quanta with frequency ω injected into the system. A direct inspection of this relations together with (29) and (30) shows thatẆ = ν J ν . This relation can also be used in order to write the first law at the average level, in consistency with (37), where the average heat flow out of reservoir ν is given byQ ν = J ν − µ ν J ν , and the rate of chemical work provided to the system by particles flowing out of the reservoirs bẏ W c = ν µ ν J ν . Entropy balance and fluctuation theorem As we showed in section 3, the populations of the system in the Floquet basis satisfy a closed stochastic master equation. Since the corresponding transition rates satisfy the LDB (24), the trajectory entropy production associated to this stochastic process can be decomposed into [31,34] where ∆s denotes the change in the system entropy and ∆ e s = ν β ν ∆q ν is the entropy flow from the environment. The probability distribution of the entropy production p(∆ i s) in a system ruled by a stochastic master equation is known to satisfy a fundamental FT at finite times [33,34,11,13] p(∆ i s) The fact that this result applies in our case thus simply follows from the dynamical decoupling between populations and coherences in the Floquet basis when the driving frequency is sufficiently high. At steady-state, the entropy change in the system becomes negligible as compared to the entropy flow from the environment. As a result, the rate of entropy production becomes equal to the rate of entropy flow from the environment in this limit. The FT for the entropy production (41) then leads to a steady-state FT for the currents, independently of the initial condition in the system. We now rigorously prove such a steady-state FT for the currents by establishing a fluctuation symmetry for the current CGF (25). 
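Before turning to the proof of the fluctuation symmetry, the balance relations introduced in this part can be collected for reference. This is a hedged summary written in the notation used above, with Boltzmann's constant kept explicit; the precise signs of the heat and entropy flows follow the original's convention of counting currents out of the reservoirs.

```latex
% Hedged summary of the steady-state first law, the trajectory entropy
% balance and the transient fluctuation theorem discussed above.
\begin{gather*}
  \sum_\nu \dot q_\nu + \dot w_c + \dot w = 0,
  \qquad
  \dot q_\nu = j_\nu - \mu_\nu j^n_\nu,
  \qquad
  \dot w_c = \sum_\nu \mu_\nu j^n_\nu, \\
  \Delta_i s = \Delta s + \Delta_e s ,
  \qquad
  \frac{p(\Delta_i s)}{p(-\Delta_i s)} = e^{\Delta_i s / k_{\mathrm B}} .
\end{gather*}
```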
As a first step, we note that the rate matrix (16) satisfies by virtue of the LDB condition (24) and where denotes a matrix transposition. Since the CGF is obtained as the dominant eigenvalue of the modified rate matrix, this last relation leads in turn to the aforementioned fluctuation symmetry This FT can be equivalently restated in terms of the large deviation function of the currents as [21] where the stochastic variables j ν and j n ν stand for the steady-state currents of energy and matter, respectively, flowing out of reservoir ν. Alternatively, the current FT (41) leads to a FT for the currents and the mechanical power making explicit reference to the thermodynamic affinities applied to the system. By using the fact that the rate of power is equal to the sum of energy currents incoming from the reservoirs at steady-state, we note that a CGF of the work and currents can be obtained by making the following substitution in the counting fields In this last relation, the counting field α accounts for the mechanical power fluctuations. The symmetry relation (41) then leads to the steady-state FT for the mechanical work and current fluctuations, and in terms of the thermodynamic forces driving the currents Again, this FT is equivalent to in terms of the large deviation function of the mechanical power and currents This FT is the steady-state version of the finite-time FT for the work and currents obtained in Ref. [10]. The presence of a FT for the current fluctuations is known to have important consequences on the response properties of the system [35]. In the present case, the FT (48) can be used to obtain non-trivial relations between the mechanical response of a physical system and its electrical and/or thermal transport properties. In absence of mechanical driving, the mechanical power vanishes,ẇ = 0, and one recovers the usual steady-state FT for the currents [36,37]. On the other hand, a steady-state FT for the rate of mechanical work is recovered when considering a single heat reservoir [17,19]. Let us further mention the particular case of homogeneous temperatures, β ν = β, the FT (48) then relates the fluctuations of mechanical power performed by the external driving to the chemical power performed by the particle currents,ẇ c ≡ ν A n ν j n ν . At the average level, the entropy production can be decomposed intȯ the positivity resulting from the FT for the entropy production. The average rate of system entropy changeṠ is here given by the time derivative of the Shannon entropy in the Floquet basis, S = s p s ln p s . The average rates of entropy production and entropy flow are then given by [12,11,13] S i = respectively. At this point, we emphasize the importance of distinguishing between microscopic processes involving different numbers of quanta exchanged with the external driving, especially when assessing thermodynamic properties. Indeed, a coarse-graining of the dynamics over the number of quanta exchanged with the external driving leads to a systematic underestimation of the entropy production [12]. By using the log-sum inequality, one observes thaṫ where the coarse-grained entropy productionṠ cg i is written in terms of the coarse-grained transition rates Γ ν ss = l Γ νl ss . We note that this coarse-graining can be understood as a coarse-graining of the dynamics in an extended Schnakenberg network whose microstates correspond to individual Fourier modes of the Floquet states while the macrostates correspond to the Floquet states themselves [38,12]. 
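The fluctuation symmetry of the current CGF can be checked explicitly on a minimal example. The sketch below considers a single undriven fermionic level exchanging particles with two reservoirs at the same temperature (an illustration, not the driven model of section 5) and verifies numerically that the dominant eigenvalue of the tilted generator satisfies $\chi(s)=\chi(-s-A)$ with affinity $A=\beta(\mu_1-\mu_2)$; all parameter values are arbitrary.

```python
import numpy as np

# Toy check of the Gallavotti-Cohen symmetry chi(s) = chi(-s - A) for a
# single level exchanging particles with two reservoirs (undriven case).
beta, eps = 1.0, 1.0
mu1, mu2 = 0.5, -0.5
g1, g2 = 0.3, 0.7

fermi = lambda mu: 1.0 / (np.exp(beta * (eps - mu)) + 1.0)
in1, out1 = g1 * fermi(mu1), g1 * (1.0 - fermi(mu1))   # reservoir-1 rates
in2, out2 = g2 * fermi(mu2), g2 * (1.0 - fermi(mu2))   # reservoir-2 rates
A = beta * (mu1 - mu2)          # affinity driving the net particle current

def chi(s):
    """Dominant eigenvalue of the generator tilted by the counting field s
    attached to particle exchanges with reservoir 1."""
    L = np.array([[-(in1 + in2),           out1 * np.exp(-s) + out2],
                  [ in1 * np.exp(s) + in2, -(out1 + out2)          ]])
    return np.max(np.linalg.eigvals(L).real)

for s in np.linspace(-1.0, 1.0, 5):
    assert abs(chi(s) - chi(-s - A)) < 1e-10
```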
At steady-state, the average rate of entropy change in the system vanishes, i.e. S = 0, so thatṠ where we used the LDB condition (24) and the conservation laws for the currents and power at steady-state to obtain the last equality. This last expression shows that the average irreversible entropy production can be written as the sum of the powers dissipated by the currents against the thermodynamic affinities (47) and the dissipated mechanical power. This picture proves useful at the time of characterizing the efficiency of thermodynamic engines as illustrated on the example exposed in the next section. This finalizes the stochastic thermodynamic analysis of the model Hamiltonian introduced in section 2. The key points of the analysis are the following. Though the current statistics is shown to be independent of quantum coherences in the Floquet basis, this is not the case for the mechanical work statistics at finite times. This is generally understood in the context of counting statistics by the fact that the quantum operator which is used to count mechanical work, H S (t) + ν H ν , does not commute in general with the density matrix of the system when the counting experiment begins. We note that this is in contrast to the slow driving situation, in which the environment naturally projects the system onto instantaneous eigenstates of the system Hamiltonian. Nevertheless, the steady-state fluctuations of mechanical power are shown to be independent of quantum coherences in the Floquet basis. Within this limit, the contribution of the initial and final measurements becomes negligible, and the rate of dissipated mechanical work is equal to the rate of injection of quanta from the external driving to the system. Despite the presence of coherences at finite times, we have shown that a thermodynamically consistent definition of entropy production can be introduced which only depends on the populations and their dynamics. As we explained, this peculiar property is mainly due to the dynamical decoupling between populations and coherences resulting from the use of the RWA. Figure 1. Schematic picture of the ac-driven QD connected to two particle reservoirs. The particle transfer processes with the reservoirs can be separated into two categories: those involving the absorption/emission of exactly one quantum of energy ω by the driving, and those that do not. Model system We now make use of the analysis developed above in the study of a thermodynamic engine based on an ac-driven quantum dot (QD) coupled to two particle reservoirs [39,40,4]. The ac-driven QD is conveniently modeled by the time-dependent Hamiltonian in terms of the splitting ω 0 between the two single particle states of the system in absence of driving, | ↑ and | ↓ , the coupling strength µF to the laser field and its frequency ω. The Floquet states of this system are readily obtained as in terms of the detuning parameter δ = ω 0 −ω and the Rabi frequency Ω = δ 2 + (µF ) 2 . Their corresponding quasi-energies are given by The particle number in the QD fluctuates as a consequence of its interaction with the particle reservoirs. In the present case, the system is allowed to be in either the empty state |0 or the singly occupied states |+ and |− . 
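The Floquet construction for the driven two-level part of the dot can be illustrated numerically. The sketch below uses one common rotating-wave convention for the driving term, chosen so that the quasi-energy splitting reproduces the Rabi frequency $\Omega=\sqrt{\delta^2+(\mu F)^2}$ quoted above; the Hamiltonian, and hence the absolute quasi-energies, may differ from the paper's equation by constant shifts or factor conventions, and the numerical values used are placeholders.

```python
import numpy as np
from scipy.linalg import expm

# Driven two-level system in a rotating-wave form chosen so that the
# quasi-energy splitting equals Omega = sqrt(delta^2 + (muF)^2); this is an
# illustrative convention, not necessarily the paper's Hamiltonian.
w0, w, muF = 1.0, 0.8, 0.3
delta = w0 - w
Omega = np.hypot(delta, muF)

sz = np.diag([1.0 + 0j, -1.0])
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # |up><down|
sm = sp.conj().T

def H(t):
    return 0.5 * w0 * sz + 0.5 * muF * (sp * np.exp(-1j * w * t)
                                        + sm * np.exp(1j * w * t))

# One-period propagator by midpoint time stepping; quasi-energies are the
# phases of its eigenvalues, folded into the first Brillouin zone [0, w).
T, steps = 2.0 * np.pi / w, 5000
dt = T / steps
U = np.eye(2, dtype=complex)
for n in range(steps):
    U = expm(-1j * H((n + 0.5) * dt) * dt) @ U
quasi = np.sort(np.mod(np.angle(np.linalg.eigvals(U)) / (-T), w))

split = quasi[1] - quasi[0]
print(min(split, w - split), Omega)   # the two numbers agree up to dt errors
```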
The interaction between the QD and the particle reservoirs is then modelled by the interaction Hamiltonian where c νk (c † νk ) denotes the annihilation (creation) operator of a single particle state with wave number k and energy k in reservoir ν, and T σ νk is a parameter characterizing the strength of the coupling to the same reservoir. The reservoirs are themselves assumed to be composed of a collection of single particle states with Hamiltonian given by H ν = k k c † νk c νk . The transition rates (16) for this model can be evaluated by using the method described in section 3 yielding where the energy dependent tunneling rates are given by γ σ ν (x) ≡ 2π k |T σ νk | 2 δ(x − k ). The Fermi-Dirac distributions f ν (x) = (exp β(x − µ ν ) + 1) −1 characterize the statistical occupation of single particle states in reservoir ν. As expected, these transition rates satisfy the LDB condition (24) ln We further note that the transitions with l = 1 involve the exchange of smaller amounts of energy with the reservoirs as compared to those with l = 0. This remark will have its importance when we later identify the best working regime of a thermodynamic engine based on this setup. As a result of the non-equilibrium constraints applied to the system, the chemical bias ∆µ = µ 1 − µ 2 and the periodic mechanical driving with frequency ω, the system is subject to steady fluxes of energy and matter. These lead to a positive rate of entropy production (54) here given bẏ In this last equation,Ẇ denotes the average mechanical power provided by the acdriving, while the quantityẆ c = ∆µ J n 1 is the rate of chemical work provided by the current J n 1 in order to bring particles from reservoir 1 to reservoir 2. The statistical properties of the dissipating fluxesẇ andẇ c = ∆µj n 1 are fully captured by their CGF or LDF as discussed in section 3. Both were evaluated numerically and shown to satisfy the steady-state FTs (46) and (48). For example, the joined LDF fo the mechanical and chemical powers I(ẇ,ẇ c ) was shown to satisfy the steady-state FT in consistency with (44). The right-hand side of this last relation is the fluctuating rate of entropy production of the system (cf. eq. (64)). In figure 2 we illustrate the marginal LDF of the chemical power, I(ẇ c ), for different values of the bias applied to the circuit. We now consider a thermodynamic engine based on this setup which converts the input mechanical powerẇ in =ẇ performed by the external ac-driving into an output chemical powerẇ out = −∆µ j n 1 provided to the particle current which now works against the chemical bias ∆µ > 0. The efficiency of such machine is defined as the ratio of its average output power divided by the average input power the last inequality resulting from the second law of thermodynamics. The upper bound in (66) is only reached for vanishingly small output and input powers, i.e. close to equilibrium. This has motivated the investigation of the maximum output power regime, with regard to practical implementations [41,42,43,44,45]. Two aspects must be considered to attain this regime. One is the identification of the properties of the external system to which our engine will provide the highest output power. In the present case, the external system consists of the circuit formed by the reservoirs themselves, and its adjustable parameter is the output bias ∆µ. The design of the system performing the conversion and its connection to the environment constitute other important aspects of power optimization. 
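The average and the fluctuations of the output power discussed next both follow from derivatives of a CGF at zero counting field. The sketch below shows this procedure on a stand-in CGF (bidirectional Poisson statistics for the pumped particle number, with an arbitrary output bias); it is not the engine's actual CGF, and all names and values are illustrative.

```python
import numpy as np

# How mean and fluctuation measures of the output power follow from a CGF:
# first and second derivatives at zero counting field. The CGF below is a
# stand-in (bidirectional Poisson counting of pumped particles); the rates
# and the output bias dmu are placeholders.
k_plus, k_minus, dmu = 2.0, 0.5, 0.3

def cgf(s):                      # CGF of the net pumped-particle current
    return k_plus * (np.exp(s) - 1.0) + k_minus * (np.exp(-s) - 1.0)

h = 1e-4
j_mean = (cgf(h) - cgf(-h)) / (2.0 * h)                 # <j_n>  = chi'(0)
j_var  = (cgf(h) - 2.0 * cgf(0.0) + cgf(-h)) / h**2     # chi''(0): fluctuation rate

w_out_mean = dmu * j_mean                # average output power
w_out_rms  = dmu * np.sqrt(j_var)        # scale of the output-power fluctuations
print(w_out_mean, w_out_rms)
```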
Here, the quantum dot itself is the vector of the conversion and its spectrum and interaction parameters with the driving and reservoirs provide the adjustable parameters in order to reach maximum output power. In particular, we note that an asymmetry in the coupling between the system and reservoirs 1 and 2 is necessary for the conversion from mechanical to chemical work to be possible. An extreme and ideal situation is the one for which the input and output powers are tightly coupled, i.e.ẇ out ∝ẇ in , and are thus maximally correlated. In the following, we consider our engine to work in the tight coupling regime with the only non-vanishing tunnelling amplitudes being γ 1 ( ± − 0 ) and γ 2 ( ± − 0 − ω). In this situation, the mechanical driving provides one quantum of energy equal to ω to charge the system from reservoir 2, while the system can be discharged into reservoir 1 without any energy supply from the environment. This favors the net pumping of particles from reservoir 2 to reservoir 1 against the bias ∆µ > 0. In this regime, the output and input powers are proportional to each other so that the engine efficiency can be simply written as Within the regime of maximal output power, most studies have focused on the average output power and the corresponding efficiency of the considered engine. Here, we use the counting statistics formalism exposed in section 3 above in order to investigate the fluctuations of output power. The average and mean root square of the output power, respectively given bẏ are illustrated in figure 3 as a function of the efficiency (66). The regimes of maximum output power are shown to correspond to high, although not necessarily maximal, power fluctuations. Another striking feature is the relative size of power fluctuations which are one order of magnitude larger than the average power in the illustrated regime. This last observation is best illustrated in figure 4. Parameters of the ac-driven QD and the output bias ∆µ were individually adjusted to reach maximum output power for each value of the driving frequency ω. Quite remarkably, the magnitude of power fluctuations is significantly bigger than the average up to a certain value of the driving frequency. Above this value, the situation is reversed and the average and root mean square output power increase, respectively, quadratically and linearly with the driving frequency. The above observations may be summarized as follows. First, the fluctuations of output power in the quantum engine we consider can be substantial and exceed its average by more than one order of magnitude. This assertion is in the present case particularly justified in the regime of maximum output power for which the fluctuations are shown to systematically exceed the average output power up to a certain value of the driving frequency. Second, the relative magnitude of fluctuations with respect to the average may be lowered by getting away from equilibrium. These observations suggest that compromises may be necessary in the design of nano-scaled heat engines, depending on the priority to deliver a high or stable output power. In this sense, engines perform better far rather than close to equilibrium. Conclusion and perspectives We reported the stochastic thermodynamics analysis of a weakly coupled open quantum system connected to multiple reservoirs and driven by a fast external field. 
This analysis is of particular interest in the context of the study of nano-scaled thermodynamic engines as illustrated in the example we considered in the previous section. Such models of thermodynamic engines may be realized through ac-driven semiconducting circuits [39,40,4] or cold atom gases [46,47,48,49]. The use of the rotating wave approximation (RWA) has proven useful in order to gain physical insight into the thermodynamic properties of open quantum systems as was already the case for the slow driving limit. However, several open issues remain regarding the thermodynamic properties of driven open quantum systems outside the range of application of the RWA. In particular, the characterization of the entropy production in quantum systems with sustained coherences and their subsequent thermodynamic analysis constitute challenging problems of non-equilibrium quantum thermodynamics. Another issue is the identification of thermodynamic quantities when broadening effects cannot be neglected. On top of these fundamental difficulties, the expansion to second order in the interaction between system and reservoirs is void in this case, and one must appeal to complementary methods such as the non-equilibrium Green's functions formalism [50].
2015-05-11T15:39:26.000Z
2014-11-30T00:00:00.000
{ "year": 2014, "sha1": "1a05747a330c7522054c94db57e4b2906debff84", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/17/5/055002", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "1a05747a330c7522054c94db57e4b2906debff84", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
256741773
pes2o/s2orc
v3-fos-license
Bone density of the axis (C2) measured using Hounsfield units of computed tomography Introduction The assessment of bone density is of great importance nowadays due to the increasing age of patients. Especially in regard to the surgical stabilization of the spine, the assessment of bone density is important for therapeutic decision making. The aim of this work was to record trabecular bone density values using Hounsfield units of the second cervical vertebra. Material and methods The study is a monocentric retrospective data analysis of 198 patients who received contrast-enhanced polytrauma computed tomography in a period of two years at a maximum care hospital. Hounsfield units were measured in three different regions within the C2: dens, transition area between dens and vertebral body and vertebral body. The measured Hounsfield units were converted into bone density values using a validated formula. Results A total of 198 patients were included. The median bone density varied in different regions of all measured C2 vertebrae: in the dens axis, C2 transition area between dens and vertebral body, and in the vertebral body bone densities were 302.79 mg/cm3, 160.08 mg/cm3, and 240.31 mg/cm3, respectively. The transition area from dens axis to corpus had statistically significant lower bone density values compared to the other regions (p < 0.001). There was a decrease in bone density values after age 50 years in both men and women (p < 0.001). Conclusions The transitional area from dens axis to corpus showed statistically significant lower bone density values compared to the adjacent regions (p < 0.001). This area seems to be a predilection site for fractures of the 2nd cervical vertebra, which is why special attention should be paid here in radiological diagnostics after a trauma. Introduction The assessment of bone density is of great importance nowadays due to the increasing age of patients. Especially with regard to the surgical stabilization of the spinal vertebrae following trauma, the assessment of bone density is important for the therapy decision. There are already several studies in the literature showing a significant correlation between the Hounsfield units (HU) of a computed tomography and the bone density measured by DXA (Dual energy X-ray absorptiometry) [1][2][3][4][5][6][7]. Measuring bone density is important in the context of diagnosing osteoporosis. The clinical importance of osteoporosis refers to the increased risk of fractures [8,9]. Osteoporosis is a disease that has received increasing attention due to the increasing age of the population. Bone density assessment is therefore very important [9]. Worldwide, osteoporosis causes more than 8.9 million fractures annually [10]. Often, the diagnosis is not made until an osteoporotic fracture occurs. In 2010, it was estimated that there were 158 million people at high risk of fracture. Due to demographic change, this number 18:93 is expected to double by 2040 [11]. According to Klotzbuecher et al. [12], a loss of 10% bone mass in the vertebrae can double the risk of vertebral fractures, and a loss of 10% bone mass in the hip can similarly lead to a 2.5-fold higher risk of hip fractures. Osteoporotic fractures lead to increased morbidity and mortality [13]. It is already known that age is an independent factor in the reduction of bone density [14,15]. Yu et. al reported a more rapid decrease in spinal bone density in women in age groups after 40-49 years compared to men [16]. 
For bone density measurement, DXA is the gold standard [17]. However, this examination has some disadvantages: For example, it cannot distinguish between cortical bone and cancellous bone, cannot examine specific spinal segments, and has a high cost. There are several methods to determine bone density based on a clinical CT scan: simultaneous calibration, asynchronous calibration, internal calibration or using the HU directly. The simultaneous phantom-based calibration is used in standard quantitative CT (QCT). In this method, the bone density is calculated from the CT values using a phantom calibration containing usually hydroxyapatite which is positioned under the patient. This procedure minimizes the differences in bone density between different models of CT scanners. Asynchronous calibration does not require the presence of a phantom during CT scan. This method separates patient investigation and phantom scan. The calibration phantom can be scanned once weekly or once monthly [18]. The internal density calibration eliminates the need of a calibration phantom for opportunistic CT screening. This method uses in-scan regions of interest (ROI) in different body tissues such as subcutaneous adipose tissue and blood for calibration. Michalski et al. [19] showed in the cadaveric analyses that internal calibration performs equivalently to the phantom-based calibration. The direct use of HU is another method to determine bone density. A calibration is not performed, making this method surely the easiest to use. The direct use of HU requires a CT scanner produced by the same manufacturer or ideally the same scanner [18,20]. Determination of HU is associated with no additional cost or additional radiation because a CT scan is usually already available after trauma or before spinal instrumentation [3]. There are already several studies in the literature showing a correlation between directly obtained HU from a CT scan and DXA and QCT values [2,5,[21][22][23][24]. The measurement of HU is currently accepted as a good tool for measuring bone density [3]. Based on their results, Pickhardt et al. [2] claim that the cutoff for osteoporosis is 135 HU. Buenger et al. also demonstrated a significant relationship between HU measured on CT and QCT values. For both native CT examinations and contrast-enhanced CT examinations, a conversion formula for measuring bone density was proposed: QCT value = 0.7 × HU + 17.8 and QCT value = 0.71 × HU + 13.82, respectively [1,3]. The first two vertebral bodies (C1, C2) have a significantly different structure compared to the other cervical vertebrae. C1 and C2 share about 60% of the rotational and 40% of the flexion-extension movements [25]. Fractures of C2 are the most common cervical spinal injury among elderly [26,27]. C2 fractures can be subdivided into odontoid fractures, Hangman's fractures, and atypical fractures [28]. The odontoid type 2 fractures are the most common type of C2 fractures [27]. The aim of this work was to evaluate the trabecular bone density values using Hounsfield units of the second cervical vertebra. Furthermore, it was to be examined whether differences in bone density exist between the vertebral regions of the axis in relation to sex and age. Material and methods This study is a monocentric retrospective data analysis. 
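As a quick illustration of how the conversion formulas quoted above translate a measured HU value into an estimated bone mineral density, one might use a helper like the following; the function name and the input values are illustrative only.

```python
def hu_to_bmd(hu, contrast_enhanced=True):
    """Estimated bone density (mg/cm^3) from a mean trabecular HU value,
    using the Buenger et al. formulas quoted above (contrast-enhanced vs.
    native CT)."""
    if contrast_enhanced:
        return 0.71 * hu + 13.82
    return 0.70 * hu + 17.80

# Example: a mean of 200 HU measured on contrast-enhanced CT
print(hu_to_bmd(200))                              # ~155.8 mg/cm^3
# The 135-HU osteoporosis cutoff of Pickhardt et al. (native CT)
print(hu_to_bmd(135, contrast_enhanced=False))     # ~112.3 mg/cm^3
```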
198 patients who received contrast-enhanced polytrauma CT scans (256-slice Multi Detector Ct Scanner GE Healthcare Revolution; slice thickness 0.625 mm; tube spectra 80-120 kV, tube current: Smart mA 100-755, voxel size: 1,25 mm, pitch: 0.922:1, rotation time: 0.5 s, detection coverage: 80 mm) in a period between 01/01/2020 and 31/06/2021 at a maximum care hospital were included in the study. Patients were subsequently included in chronological order depending on the date of examination. Data collection was performed anonymously in an Excel spreadsheet by a single physician. Basic information (patient age, sex, examination date) and Hounsfield units and vertebral bone density of C2 were recorded. Bone density values were calculated using the formula of Buenger et al. (QCT value = 0.71 × HU + 13.82) [1]. The data were stratified by sex and decade of life. The overall study design and conduct were approved by the local ethic committee (Reg.-Nr.: 2020-2030-Daten). Patients without age limitation who underwent contrast-enhanced polytrauma CT polytrauma were included. Exclusion criteria were pathologies such as C2 fractures, surgery with material implantation in the upper cervical spine, signs of osteochondrosis or spondylodiscitis and artifacts due to implanted materials or other causes. Measurement of the hounsfield units Hounsfield units (HU) were recorded in the axial plane. HU were measured with an elliptical measurement field in three different localizations within the vertebral body: dens, transition area between dens and vertebral body and vertebral body. The transition zone was also localized using coronal CT imaging and was defined as the area between the dens and vertebral body of C2. In the axial plane, the "region of interest" (ROI) was chosen as large as possible, leaving out the cortical bone. Thus, only the trabecular bone was measured (Fig. 1). The HU were measured in the Centricity Universal Viewer Zero (GE Healthcare, Chicago, USA). The measurement of HU values was always performed by the same physician. Statistics The data were recorded anonymously in Microsoft Excel. Statistical analysis was performed using SPSS 26 (IBM Inc, USA). Data were grouped by sex and age (10-year intervals). The data were not normally distributed, so nonparametric tests were used for statistical analysis. Comparison of bone density of different localizations was performed using the two-sided Friedman test. Comparison of 2 localizations was performed using the twosided Wilcoxon test. The sex-based comparison was performed using the two-sided Mann-Whitney U test. Age-by-age comparison was performed using the twosided Kruskal-Wallis test. Paired comparisons between the age groups were performed using the two-sided Mann-Whitney U test. The p value was adjusted for multiple tests using Bonferroni correction. For all analyses, a p value < 0.05 was assumed to be significant. Table 1. The Shapiro-Wilks test (p < 0.05) and visual inspection of its histograms as well as QQ charts showed that bone density values were not normally distributed. Therefore, further statistical analysis was performed with nonparametric tests. There was a statistically significant difference in bone density values in the different regions of the second cervical vertebra (p < 0.001). Statistically significant lower values were measured in the C2 transition area between dens and vertebral body compared to the other regions (p < 0.001, Table 1). 
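The comparisons reported above follow the nonparametric workflow described in the statistics subsection; a minimal sketch of that workflow is given below. The bone density values are made up for illustration and are not the study's raw data, so the printed p values only demonstrate the procedure.

```python
import numpy as np
from scipy import stats

# Illustrative only: per-patient bone density (mg/cm^3) in three C2 regions.
dens       = np.array([310.0, 295.0, 280.0, 320.0, 305.0])
transition = np.array([165.0, 150.0, 158.0, 172.0, 160.0])
corpus     = np.array([245.0, 230.0, 238.0, 250.0, 242.0])

# Friedman test across the three repeated (within-patient) measurements
print(stats.friedmanchisquare(dens, transition, corpus))

# Pairwise Wilcoxon signed-rank tests with a Bonferroni-adjusted threshold
alpha_adj = 0.05 / 3
pairs = [(dens, transition, "dens vs transition"),
         (transition, corpus, "transition vs corpus"),
         (dens, corpus, "dens vs corpus")]
for a, b, label in pairs:
    _, p = stats.wilcoxon(a, b)
    print(label, p, "significant" if p < alpha_adj else "n.s.")
# With only five illustrative pairs the exact Wilcoxon test cannot reach
# alpha_adj; this sketches the workflow, not the study's results.
```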
The same decrease in bone density values occurred between dens axis to the corpus of C2 with a further decrease to the C2 transition area. This was also noticeable when comparing by sex (p < 0.001, Table 2). In Fig. 2 are shown the bone density values in the dens, transition area and vertebral body according to age group. The difference between the bone density values between males and females was not statistically significant (231.79 vs. 213.56 mg/cm 3 ; p = 0.172). There was a statistically significant difference between the bone density values of the different age groups (p < 0.001, Table 3). A reduction in bone density was observed in patients over 50 years of age, with a statistically significant difference in paired comparisons of age groups below 50 years with those above 50 years (p < 0.001). Similar results were seen when comparing bone density values above 50 years of age for the different regions of C2 vertebra. The decrease in values was observed in both females and males (p < 0.001). Discussion A decrease in bone density as well as changes in the basic structure of the bone characterizes osteoporosis. On the one hand, routine measurement of bone density can diagnose osteoporosis earlier, and on the other hand, measurement of bone density is a very important factor to be considered in spine surgery. There are a few studies in the literature describing the differences in bone density between different regions of C2 vertebra. This information may have an importance not only for osteoporosis diagnosis and spine surgery preparation, but also for medical device industry and scientists who want to study the biomechanical properties of vertebral bodies. It is already known that contrast agent administration leads to a slight increase in HU values. On average, the differences between native and arterial phases in the study by Pompe et al. were 12 HU [29]. The main goal of our study was to measure bone density of the C2 vertebra in different regions. A strength of our study is the large number of patients. Likewise, the patients received the same examination after trauma. Contrast medium was applied to all patients before the In our study, when examining the bone density of C2, it was noticed that there was a transitional area between the dens and the corpus, where statistically significantly lower values were detected than in the adjacent areas. This hypodense bone area, located immediately below the dens, was also described in the anatomical studies of Heggeness et al. and Kandziora et al. [30,31]. This area is the area where the most common type of C2 fracture occurs: the type II odontoid fracture. This certainly seems to be related to the decrease in bone density in this area. The study by Lodin et al. is the only one we were able to find in which the bone density of the C2 vertebra was examined using HU. Bone density was measured preoperatively and postoperatively in patients with an odontoid fracture. A similar decrease in transitional bone density was also described in this study. It should be noted that this study involved patients who had already sustained a dens axis fracture and not healthy patients [32]. In comparison, only patients without a fracture were examined in our study. Some studies have already compared spinal bone density by sex. Lehmann et al. described no significant difference in bone density between premenopausal women and men [33]. Similar data were also noted by Cvijetic and Korsic [34]. 
However, there are several studies describing higher bone density values for premenopausal women compared with men [5,35]. Zhang et al. described greater values of bone density in women compared with men at all ages. All patients were under 59 years of age [25]. When comparing bone density by sex, we did not find a significant statistical difference in values in our study. The bone density values of all C2 vertebra regions by gender were very similar, with no significant differences. Comparing bone density by age, in our study, we observed a significant decrease in bone density of C2 in both men and women over 50 years of age. The decrease in bone density in women can certainly be explained by postmenopausal changes and was similarly described by Lehman et al. when measuring bone density using DXA [33]. However, the authors did not observe a significant decrease in bone density with the increase in age in men. Zhang et al. described higher bone density values in women compared with men at all ages [25]. The difference between the Lehmann et al. and Zhang studies may perhaps come from the different study methods. It is well known that both cortical and trabecular bones are examined using the DXA examination. However, overall, a decrease in bone density in postmenopausal women is certainly very credible and physiologically explainable. In our study, this was also observed. One possible cause of the marked decrease in bone density in men after the age of 50 would be the decrease in testosterone levels. However, it is already known that testosterone levels only decrease at about 1 percent per year [36]. Another possible cause would be that at older ages, bones are less stressed due to limited exercise, resulting in a decrease in bone density. There are already several prospective randomized studies demonstrating the protective effect of strength training on bone density [37,38]. Of course, our study also has limitations. The study is purely retrospective and monocentric. The CT scans were conducted in the context of a trauma. As a result, the selection of patients may naturally include subjects with pre-existing conditions. The medication that could have an influence on bone density was not recorded. Furthermore, the individual radiation dose values were not collected. In addition, all examinations were performed with contrast medium administration. On the one hand, this is an advantage because the examinations can be compared well with each other; on the other hand, as already mentioned, the measurement of bone density is influenced by the administration of contrast medium. Another limitation of the study is that approximately three times more men than women were included in the study. This can generally be explained by a higher proportion of trauma patients being men. Another limitation of the study is the method of determining bone density from directly measured HU on CT using the above equation of Buenger et al. [1]. Compared to the other clinically used methods for bone density determination on CT like asynchronous calibration and internal calibration, in the case of directly measured HU no calibration of the bone density values is performed. For this reason, the values obtained are dependent on the used CT scanner and may differ between CT scanners of different manufacturers. This was confirmed by a study of 67392 CT investigations obtained from four different CT scanners, showing differences in HU obtained between CT scanners [20]. In addition, the Buenger et al. 
equation was validated on a single CT scanner, so it is not known if it is valid on CT scanner from a different manufacturer. In the present study, we tried to avoid possible errors coming from measurement of HU on CT investigations obtained from different scanners. All CT investigations in this study were performed on the same CT scanner. In our study, we did not examine the inter-observer reliability, which also could influence the measurements. All HU measurements, i.e., the selection of ROI in the different areas of C2 were performed by one single person. However, in a study from 2022 also from our institution, the inter-observer reliability in direct measurement of HU of the same vertebra by four different investigators was very high [39]. Conclusion In summary, bone density values of the 2nd vertebra of 198 patients were determined by HU in this study. Bone density values were calculated using the formula of Buenger et. al (QCT value in mg/cm 3 = 0.71 × HU of contrast-enhanced CT + 13.82) [1]. When bone density of the second cervical vertebra was examined, the transitional area from dens to corpus vertebra showed statistically significant lower bone density values compared to the adjacent regions (p < 0.001). The results are consistent with previous anatomical studies of the C2 vertebra and explain the frequency of dens fractures in this transitional region. Bone density values generally decreased with age in all C2 regions. There was a clear decrease in values from age over 50 years, in both males and females (p < 0.001). The decrease in bone density in females can be explained by postmenopausal changes. A similar decrease in bone density in females was already described in the lumbar spine [33]. To our knowledge, a clear decrease in bone density of C2 vertebra in males over 50 years of age has not yet been described in the literature. Further studies examining these parameters in a different patient population are certainly needed to confirm this. These data of bone density of C2 vertebra, among others, may be helpful to comprehensively evaluate the status of the spine and to design a better preoperative plan before instrumentation. The bone density of different vertebral bodies can be equally important to the medical device industry, which must develop instrumentation such as screws and disk replacements. These medical devices must be specifically adapted for the local anatomical conditions of different spinal regions and their bone density.
2023-02-11T14:58:45.230Z
2023-02-10T00:00:00.000
{ "year": 2023, "sha1": "2db93e0d28f45f59dd83d8f04e781574f79aa81d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "2db93e0d28f45f59dd83d8f04e781574f79aa81d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219041794
pes2o/s2orc
v3-fos-license
Tiger grass (Thysanolaena maxima) cultivation in CALSANAG watershed in Romblon, Philippines: dilemmas and prospects for sustainable natural resources management Landicho LD, Ocampo MTNP, Cabahug RED, Baliton RS, Andalecio EV, Inocencio R, Servanez MV, Cosico RSA, Catillo AKA,Famisaran LDJ. 2020. Tiger grass (Thysanolaena maxima) cultivation in CALSANAG watershed in Romblon, Philippines: dilemmas and prospects for sustainable natural resources management. Biodiversitas 21: 2322-2330. Promoting sustainable natural resources management is a complex issue such that striking a balance between socioeconomic productivity and environmental integrity remains a challenge. This paper highlights the results of a study conducted from April to December 2019, which assessed the state of natural resources management in Barangay Mari-norte, San Andres, Romblon, which is part of the CALSANAG (Calatrava, San Andres, and San Agustin) Watershed. Biophysical characterization was done to determine land use and biodiversity, while farm household survey was administered to 133 farmers to characterize their socioeconomic conditions. Results showed that all of the farmerrespondents were engaged in the production of tiger grass (Thysanolaena maxima Roxb), where most of the farm households derived an estimated annual income of >Php50,000. Although their household income is higher as compared to other upland farming communit ies in the Philippines, most of them expressed that their income is insufficient since tiger grass is harvested only once a year, and the farmers have no alternative sources of income. On the other hand, biophysical characterization revealed the following: the farms are generally rainfed, have rolled to steep slopes, and have indications of low soil fertility, soil erosion incidence, and very low level of biodiversity (0.92). Most of the farmers practiced "slash-and-burn" to cultivate tiger grass as a single crop and hence, the forest cover has declined. A multi-agency collaboration jointly initiated agroforestry promotion in the upland farming communities through capability-building of upland farmers in agroforestry and establishment of tiger grass-based agroforestry model which showcases the economic and ecological viability of agroforestry systems in CALSANAG Watershed. INTRODUCTION Sustainable natural resources management is a perennial concern in many developing countries. In the Philippines, striking a balance between environmental development and economic growth remains a challenge. Antonio et al. (2012) cited the claim of the forestry sector (Department of Environment and Natural Resources), "that the country suffers from severe deforestation as over 100,000 hectares of forest are lost every year. Forest diversity has been reduced and only 800,000 hectares of virgin forest is left ". Because of the complex and intertwined issues on rural poverty, sustainable development and conservation, many research and conservation and development organizations have made efforts to bring non-timber forest products (NTFPs) at the center of discourse (Belcher et al. 2005;Benjade and Paudel 2008). The forest-dependent communities explore ways for their livelihoods and survival. In most cases, forest resources (i.e. lands, flora, and fauna) serve as their capitals for economic activities. In their study, Chechina et al. 
(2018) forwarded that incorporating local livelihoods into forest conservation strategies had a positive impact on the socio-economic conditions of the forest-dependent communities in the Philippines. Tiger grass (Thysanolaena maxima Roxb.), which is also widely known as "broom grass" is one of the NTFPs which belong to the Poaceae family. According to Tiwari et al. (2012), tiger grass is an important NTFP which grows in almost all parts of South and Southeast Asia up to an elevation of 1600 meters and in tropical to subtropical climatic conditions. Many upland communities in the Philippines are engaged in the production of tiger grass ( (Thysanolaena maxima (Roxb.) as a source of household income, particularly in Nueva Ecija (Armas and Moralde 2019); Benguet (Baldino 2002), Romblon (Feltavera 2011). Alam et al. (2013) also stressed that tiger grass is an important non-timber forest product which is collected by the tribal people in Bangladesh. Fadriquel (2016) estimated around 400 hectares of land in Tablas Island in Romblon, Philippines are planted to tiger grass primarily because of its economic potentials as a raw material for soft broom. The families are engaged in tiger grass production, trading the grass, processing the grass into soft brooms, and trading of the brooms. Value addition takes form along the value chain in the industry and benefits several other community members indirectly. The families engaged in production of the grass number about 300 and are scattered in the highlands of Calatrava, San Andres, and San Agustin, which lie within the Calatrava-San Andres-San Agustin (CALSANAG) watershed which is a protected area and proclaimed as an Important Bird Habitat. As the farmers have little knowledge of cultural practices of tiger grass they resort to slash and burn farming to avail of inherent soil fertility in the forest floors. There are claims worldwide, however, that slash-and-burn cultivation poses threats on environmental integrity, particularly deforestation (Neto et al. 2019), soil disturbances such as fine particle losses and nutrient leaching, soil fertility and agricultural sustainability (Beliveau et al. 2015). This paper highlights the socioeconomic and environmental conditions of the selected upland farms in Barangay Mari-norte, San Andres, Romblon, Philippines with emphasis on the dilemmas and prospects of its cultivation in promoting sustainable natural resources management within the watershed. Study site The study was conducted in Barangay Mari-norte, San Andres, Romblon, one of the villages within the CALSANAG Watershed (Figure 1), from March to December 2019. A farm household survey was administered to a sample of 133 farmers using pre-tested questionnaire. The respondents were selected using random sampling. The sampling size was computed following the formula below: n = N/1+ (N*e 2 ) Where: n : sampling size N: population of farmers in the area e : sampling error (0.05) The socioeconomic characteristics were determined using a set of pre-tested questionnaire. Key informant interviews (KII) and focus group discussions (FGD) were also conducted to validate the information and identify key issues in tiger grass production and the overall ecological/environmental status of the study site. Results of the household survey were analyzed using descriptive statistics such as frequency counts, percentages, and weighted scores. FGD and KII results were captured using thematic analysis. 
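For clarity, the sampling-size formula above (Slovin's formula) can be evaluated as in the short sketch below; the helper name and the population figure are placeholders, since only the resulting sample of 133 respondents is reported.

```python
def slovin_sample_size(population, error=0.05):
    """Sample size n = N / (1 + N * e^2), as in the formula above."""
    return population / (1.0 + population * error ** 2)

# Placeholder population: with N = 200 farmers and e = 0.05 the formula
# gives about 133 respondents, consistent with the survey size reported.
print(round(slovin_sample_size(200)))   # 133
```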
Farm visit was conducted to validate the farming systems and farm components, observe occurrence and indications of soil erosion, and measure the slope of the farms. Soil sampling was conducted to facilitate soil fertility analysis Biodiversity assessment was conducted by measuring the parameters such as population density or the number of individual species per unit area; frequency of species distribution; relative and importance values based on density and frequency; and, diversity and evenness indices based on the relative and importance values. Importance Value (IV) was computed to determine the dominant species for each site. The IV is the sum of the relative frequency and relative coverage. These values were computed using the following formula: The measures of biodiversity were obtained using the Shannon-Wiener Diversity Index (H) with the formula used by Magurran (2004) as follows: Where: H' : Shannon-Weiner Diversity Index Pi : fraction of the entire population made up of species i S : numbers of species encountered/species richness ∑ : sum from species 1 to species S Note: The power to which the base e (e = 2.718281828.......) must be raised to obtain a number is called the natural logarithm (ln) of the number. In addition, Pielou's evenness index (J) serves as a measure of the relative abundance of the different species that make up the plant community, using the following equation: Where: J : Pielou's Evenness Index H': Shannon's diversity index ln(S) : natural logarithm of species richness Table 1 shows that most (63%) of the farmerrespondents were male, with a mean age of 47 years old. Most of them were married with a mean household size of five (5). More than half (59%) of them derived their income solely from farming, and many (40%) combined farming with non-farm activities to augment their household income. Compared with other upland farming communities with incomes ranging from PHp10000-20000 (Landicho et al. 2015;Landicho 2016), the farmers in Barangay Mari-norte had generally higher household income. About 40% had an estimated annual household income of Php>50000; 17% with income ranging from Php41000-50000; 12% with income ranging from Php31000-40000; 15% with income ranging from Php21000-30000; 15% with income ranging fromPhp10000-20000. The farmer-respondents were all engaged in tiger grass or luway production, which serves as their primary source of income. As shown in Table 1, the total farm size of the 133 farmer-respondents accounts for 348.10 hectares with a mean farm size of 2.78 hectares. As compared to the mean farm size of smallholder upland farmers of 1.50 hectares (Tolentino et al. 2010;Visco et al. 2013;Landicho et al. 2015;Landicho et al. 2017), the farm size being cultivated by the farmers in Barangay Mari-norte is relatively bigger. Of the total farm size, about 102 hectares or a mean of 0.90 hectares is allocated to tiger grass production. Many (43%) of the farmers cultivate the farms as tenants, while a number of them (37%) have reported owned the farms. Table 2 shows the general biophysical characteristics of the farms in the study site, Data shows that majority of the farms are located in rolling to steep areas (combined percentage of 81%). Most of the farmers obtain water from spring and rivers (73%) while others rely mainly on rain for their crops. Soil in the demo farm is slightly acidic as shown by lower pH (5.37). 
Based on the soil fertility classification scheme proposed by Badayos et al. (2007), soil in the demo farm has low to very low amounts of macro-nutrients.
Typology of farming systems
Eight types of farming systems were observed in the study site (Figure 2). These include (i) tiger grass monocropping; (ii) tiger grass + annual crops; (iii) tiger grass + perennials (fruit trees and/or coconut); (iv) tiger grass + forest species; (v) tiger grass + perennials + annuals; (vi) tiger grass + perennials + forest species; (vii) tiger grass + annuals + forest species; and (viii) tiger grass + perennials + annuals + forest species. The perennials refer to either fruit trees or coconut, while forest species refer to forest trees and non-timber forest products, particularly vines. Annual crops refer to vegetables and root crops. Tiger grass is the prominent, dominant crop across the different types of farming systems. The average farm size of upland farms in the study site is 2.78 hectares, which is higher than the mean farm size of 1.50 hectares in the Philippines. An average of one hectare is allocated for tiger grass cultivation. This indicates the farmers' preference for this species, primarily because of its economic contributions to the farm household. According to Sespene et al. (2011), the tiger grass industry is a promising economic activity in Marigondon Norte, San Andres, Romblon.
Biodiversity assessment
A total of 22 species were found across the 20 sampling plots, representing the top five farming systems in Barangay Mari-norte: tiger grass monocrop, tiger grass + perennials, tiger grass + annuals + perennials, tiger grass + perennials + forest species, and tiger grass + annuals + perennials + forest species. These species consisted of 1801 individuals, with tiger grass (luway) being the dominant species across the sampling plots. Using the Shannon-Wiener diversity index (H') and following the classification scheme proposed by Fernando et al. (1998) in Table 4, results show that diversity across the five farming systems is very low, with an index ranging from 0.00 to 1.28. Among the five farming systems, the combination of tiger grass + annual crops + perennials + forest species had the highest index (1.28), while tiger grass monocropping had the lowest (0.00). Similarly, Pielou's evenness index across the farming systems is low, ranging from 0.00 to 0.21, suggesting that the number of individuals per species is not evenly distributed.
Challenges confronting sustainable natural resources management in Barangay Mari-norte
Economic benefits derived from tiger grass production are not enough to meet the basic needs of the upland farmers. Growing tiger grass is a viable livelihood because of its potential to generate cash income from the harvested panicles, which are processed into soft brooms (www.pcaarrd.dost.gov.ph; Faltavera, undated). As shown in Table 1, nearly half (40%) of the farm households cultivating tiger grass derived a relatively higher annual income of more than Php50000 compared to other upland farming communities in the Philippines. In spite of this, however, 54% of the upland farmers reported that their income is not enough to sustain the basic needs of their households, primarily because harvesting of tiger grass is seasonal (only once a year) and because of the lack of alternative sources of income (Table 5).
Most (84%) of the farmers were not engaged in the processing of tiger grass into broom because of the lack of knowledge and skills in processing; lack of capital to invest on the machinery and equipment for processing; and, the assured and immediate cash when sold as raw materials. Tiger grass production poses threats to the ecological/environmental condition of the watershed The farmers in the study site practiced swidden cultivation, where new areas are being opened three to four years after the initial establishment of tiger grass or when the tiger grass becomes less productive. Bruun et al. (2009) highlighted that swidden cultivation puts emphasis on the rotation of fields or lands, and through fallow, sustains the production of food crops. Moreover, the farmers practice "slash-and-burn" where woody perennials or trees are cut to give way for the production of tiger grass (Figure 3). Pollini (2014) noted that slash-and-burn agriculture is a widely adopted strategy to practice agriculture in forested landscapes. While shifting cultivation and slash-and-burn system are age-old practice in the upland areas of the Philippines, conservation policies that have evolved since the 1970s restrict such practices because of the negative impacts on the environment. Hauser and Norgrove (2013) argued that partial or complete removal of vegetation has differential effects on the microclimatic and hydrological conditions after clearing such as the redistribution of rains in the absence of tall vegetation, and soil erosion in the absence of vegetation. This literature is validated by the farmers' own observations on the rampant soil erosion in farms with steep slopes. The farmers have also observed the scarcity of water in the river system especially during the dry season, which they attribute to the cutting of trees within the watershed. Results of soil analysis also indicate that the farm soils in the study site are already acidic, having a mean soil pH of 5.37 as shown in Table 2. Results of biodiversity assessment indicate a very low biodiversity index of 0.00-1.28. In a related study, Lucidos et al (2017) pointed out a moderate bird diversity in the Balago Sub-Watershed within the CALSANAG Watershed. However, the decreasing population of endemic and endangered bird species are indicators that the area is under threat due to illegal wildlife hunting, timber poaching, and land conversion. Table 4. Classification scheme of Shannon diversity index (Fernando et al. 1998). Dominance of tiger grass in the farming system limits biodiversity and food security The upland farms in the study site were dominated by tiger grass, on the perception that integration of other crops within the farm parcels, particularly trees, would cause shading, which could, later on, affect the production and yield of tiger grass. While Figure 4 shows that the dominant farming system is the combination of tiger grass+annuals+perennials+forest tree species, the annuals and perennials were just planted in patches. As shown in Table 4, the number of individuals per species of perennials, annual, and forests were very few as compared to tiger grass. This practice limits the production of food and cash crops. As shown earlier, the biodiversity index in the study site is very low. This could be because of the practice of "slash-and-burn" and the farmers' preference on tiger grass. Besides limiting the biodiversity, the dominance of tiger grass also limits the potentials of the farm to contribute to the households' food security. 
The cultivation of food crops is a secondary priority among the upland farmers, farmers do not cultivate food crops which should have been the source of their food and nutrition. Opportunities towards sustainable natural resources management in CALSANAG watershed Research results revealed that farmers are already aware that their current farming system tends to contribute to ecological problems and issues. Specifically, they recognized that the practice of "slash-and-burn" and cutting of trees are destructive to the ecological conditions of the watershed. Thus, they believed that there is a need to minimize the practice of "slash-and-burn"; concentrate tiger grass farming in one production area only and minimize transfer and opening of new sites; need to plant trees to replace the tree that they have cut, and cutting of trees should be stopped. Almost all (94%) of the tiger grass farmers perceived the need of improving their current farming practices, and are open to crop diversification, particularly integrating fruit trees in their farms. The biophysical and socio-economic conditions of the upland farmers in the study site necessitate the promotion of tiger grass-based agroforestry system. Agroforestry is a dynamic, ecologically-based natural resource management system that through the integration of trees in farm and rangeland, diversifies and sustains smallholder production for increased social, economic and environmental benefits (Leakey 2017). According to Lasco and Visco (2003), agroforestry is characterized by two or more species of plants (or plants and animals) with at least one woody perennial; two or more outputs; usually have longer than one-year cycle; and, with significant interactions between woody and non-woody components. These characteristics of Agroforestry make it an appropriate technology intervention in the upland farming communities with marginal socioeconomic and environmental conditions. Ros-Tonen and Wiersum (2005) also highlight that contribution of non-timber forest products (NTFPs) to improve livelihoods can best be assured through a process of gradual domestication of NTFPs in agroforestry system. The viability of tiger grass-based agroforestry system was showcased in the study site through the establishment of a demonstration farm. Afzal (1995) in Khan et al. (2009) argued that among the major weapons to introduce the findings of modern agricultural research is through the use of extension methods such as establishment of demonstration plots. In their study, Khan et al. (2009) found out that demonstration plots were not only successful or effective means of creating awareness among the farmers about modern technologies, but also provide motivation for them to apply these technologies in their own farming practices. The tiger grass-based agroforestry model showcases the viability of integrating short-term agricultural crops and woody perennials in the tiger grass farms to enhance socioeconomic productivity and ecological stability ( Figure 5). Specifically, the agroforestry model incorporated pigeon pea (Cajanus cajan), calamansi (Citrofotunella microcarpa), and yam (Dioscorea sp.) as source of food and additional household income. In addition, pigeon pea was intercropped with tiger grass, not only as a food crop but to help restore soil fertility by fixing atmospheric nitrogen. Besides being a nutritious grain legume, Khoury et al. 
(2015) mentioned that pigeon pea is a stress-tolerant legume that enhances the sustainability of dry sub-tropical and tropical agricultural systems. In terms of enhancing soil condition, Sarkar et al. (2017) reported that the leaves and immature stem of pigeon pea can be used as green manure, while the fallen leaves can be used as mulch to enhance the water holding capacity of the soil. Based on first two (2) years data, the prediction analysis by Bora (2014) projects a profit of Rs 6.6 lakh/ha and Rs 8.1 lakh/ha under broom grass monoculture and when intercropped with pigeon pea respectively within a period of four (4) years. Pili (Canarium ovatum) was integrated along the farm boundaries to serve as windbreak and provide as an additional income source in the long run. Pili is suitable for windbreak in agroforestry systems, having a remarkable resistance to strong winds (Coronel 1996), and hence, making it a good living windbreak for other crops. Tiger grass was retained as the dominant species in the agroforestry model being the primary source of household income. Soil and water conservation measures were showcased through the establishment of hedgerows and check dams ( Figure 5). Hedgerows trap sediments at their base thereby minimizing soil erosion and surface runoff velocity while check dam is a structural form of soil and water conservation measure (Figure 6), which is appropriate for areas with gully erosion. Cuttings of kakawate (Gliricidia sepium) were planted along the contours to serve as a soil and water conservation measure, and restore soil fertility. Kakawate is also a leguminous tree that has the potential for soil amelioration. Meanwhile, Yuan et al. (2019) reported that sediment discharge is being reduced by 83.92% in the watershed as influence by the check dam system. The establishment of the tiger grass-based agroforestry model in the study site was an initiative of a multi-agency technical working group composed of the state universities and government sectors. The technical working group works towards institutionalizing the multi-agency implementation and promotion of agroforestry technologies; replicate the tiger grass-based agroforestry model in other upland communities within the CALSANAG watershed; explore for other alternative livelihood activities of the tiger grass farmers; and, conduct continuous education and awareness programs among the upland farming communities towards sustainable natural resources management of the watershed. Lessons from many development projects indicate the value of multisectoral collaboration in natural resources management (Prager 2010;Hvenegarrd et al. 2015); community-based development projects (Landicho et al. 2009;Cruz et al. 2011;Elauria et al. 2017) and agricultural innovations systems (Eidt et al. 2020). In summary, research results suggest that there are economic and environmental challenges confronting the current tiger grass production of farmers in CALSANAG Watershed. This farming system does not satisfy the economic requirements of the farmers, while contributes to environmental degradation. The upland farmers recognized the need to integrate other crop species in their farms, and employ farming practices that would help restore and improve their degraded environment. 
This research also suggests the need for the establishment of a tiger grassbased agroforestry model to showcase the viability of crop diversity to address socioeconomic concerns of the upland farmers, and integration of woody perennials and soil and water conservation measures to address the need for ecological restoration of the watershed. As designed, the tiger grass-based agroforestry model aims at addressing a balance between the socioeconomic and ecological concerns of the entire farming household and in the long run, for sustainable natural resource management of the CALSANAG Watershed.
Dietary Vitamin C and Vitamin C Derived from Vegetables Are Inversely Associated with the Risk of Depressive Symptoms among the General Population Vitamin C is a water-soluble antioxidant. Reducing the level of oxidative stress can alleviate depression. Therefore, we investigated the correlation between dietary vitamin C intake and the risk of depressive symptoms in the general population. Data from the 2007–2018 National Health and Nutrition Examination Survey were used in our study. The dietary intake of vitamin C was assessed by two 24-h dietary recalls. Depressive symptoms were assessed with the Patient Health Questionnaire-9. Logistic regression and restricted cubic spline models were applied to assess the relationship between dietary vitamin C intake and the risk of depressive symptoms. The multivariate adjusted odds ratio (95% confidence interval) of depressive symptoms for the highest vs. lowest category of dietary vitamin C intake and vitamin C intake derived from vegetables were 0.73 (0.58–0.91) and 0.73 (0.56–0.95). In subgroup analyses, dietary vitamin C intake was negatively correlated with the risk of depressive symptoms in females 18–39 years old and 40–59 year-old groups. A dose-response analysis showed that there was a nonlinear relationship between dietary vitamin C intake and the risk of depressive symptoms. Dietary vitamin C intake and vitamin C intake derived from vegetables were inversely associated with the risk of depressive symptoms among the general population. We recommend increasing the intake of vegetables in daily diet. Introduction Depression might be related to a variety of factors, such as heredity [1], environment [2], and diet. According to previous research, there might be a negative correlation between depressive symptoms and the intake of nutrients, such as protein [3], carotenoids [4], fiber [5], natural folic acid [6], magnesium [7], and total zinc, iron, copper, and selenium intake [8]. Previous studies have shown that oxidative stress level in patients with depression is increased [9]. Vitamin C is a water-soluble antioxidant [10]. Vitamin C is also a cofactor of several important hydroxylation reactions in the human body, such as the synthesis of catecholamine [11,12]. The increase or decrease in catecholamine and other substances may be related to depression [13]. Some studies investigated the relationship between vitamin C and depression. A previous study among male students aged 18-35 in New Zealand [14] showed that vitamin C levels were negatively correlated with depression. Another study [15] found that the intake of vitamin C in Japanese elderly people's diet was negatively correlated with the risk of depression symptoms. It was also found that vitamin C supplementation was significantly related to a decreased depression score in depressed shift workers in an oil refinery [16]. However, a study on the elderly Merseyside residents over 60 years of age living in residential homes [17] did not find the association between vitamin C supplementation and the improvement of depressive symptoms. Ascorbic acid mainly depends on the dietary intake of vegetables and fruits. Rare studies focused on the connection between vitamin C intake and the risk of depressive symptoms in the general population. At the same time, their dose-response relationship was also unclear. Therefore, we investigated the relationships between vitamin C (from dietary or different sources) and the risk of depressive symptoms in the general population. 
Study Population
The National Health and Nutrition Examination Survey (NHANES), a nationally representative study, was approved by the National Health Organization Institutional Review Board, and all participants signed informed consent [18]. This study analyzed data from six survey cycles, comprising 59,842 respondents. Individuals younger than 18 years old, those with incomplete or unreliable dietary recall data, and those with incomplete depression questionnaire data were excluded. We further excluded pregnant females, lactating females, and individuals with extreme total energy intake (more than mean ± 3 standard deviations) [19]; participants under the age of 18 did not have complete data on depressive symptoms, and pregnant or lactating females might have special dietary intake and metabolism [20]. Finally, a total of 25,895 individuals were included in this study. The specific process is shown in Figure 1. Among the 25,895 individuals, 18,341 had complete data on vitamin C from vegetable sources, 10,700 had complete vitamin C data from fruit sources, and 8132 were users of vitamin C supplements.
Assessment of Depressive Symptoms
The outcome variable of interest was depressive symptoms. The Patient Health Questionnaire (PHQ-9) is a nine-item scale with each item scored from 0 to 3 [21]; we used it to assess depressive symptoms. The total score ranges from 0 to 27, and 10 was used as the cut-off value [22]. According to the cut-off value, participants were divided into the depressive symptoms group and the non-depressive symptoms group.
Evaluation of Dietary Vitamin C Intake and Vitamin C Supplements
The daily vitamin C intake and vitamin C supplement intake of each individual were calculated as the average of two 24-h dietary recall interviews [23]. Vegetable and fruit sources of vitamin C were identified according to US Department of Agriculture (USDA) food codes from the Individual Foods data in NHANES [24]. Dietary vitamin C intake, vitamin C from different sources, and total vitamin C intake (food + supplements) were each divided into 3 groups (T1, T2, and T3) according to terciles, representing low, medium, and high vitamin C intake levels, respectively.
Covariates
Following previous literature on dietary intake and depression [25][26][27], we included a series of covariates. Demographic variables included sex, age, educational level, marital status, poverty-income ratio (PIR), and race. Body mass index (BMI) was divided into three categories [28]. We also included health behavior variables, such as alcohol consumption status. Health factors included hypertension, diabetes, and stroke. In addition, we also adjusted for total energy intake [26]. Supplementary Table S1 describes the covariates in detail.
Statistical Analysis
In this study, we used Stata 15.0 (Stata Corp., College Station, TX, USA) for the main statistical analyses. According to the NHANES guidelines, we recalculated the sample weights when merging two or more cycles of data [29]. We compared the characteristics between the groups with and without depressive symptoms and between the high and low vitamin C intake from vegetables groups. Characteristics of the study population were presented as numbers (percentages) for categorical variables.
For continuous variables, we used the Kolmogorov-Smirnov normality test. If the distribution was normal, values were expressed as mean ± standard deviation (SD); otherwise, we used the median (interquartile range). If the distribution was normal, Student's t-test was applied; otherwise, the Mann-Whitney U test was used for comparison. We used the χ² test for categorical variables. The lowest vitamin C intake group (T1) was taken as the reference group. The associations between dietary vitamin C intake, vitamin C derived from vegetables and fruits, total vitamin C intake (food + supplements), and the risk of depressive symptoms were analyzed with logistic regression models, and the results were reported as odds ratios (OR). In addition, we divided the participants into two categories according to whether or not they were vitamin C supplement users. Model 1 adjusted for age and sex. Model 2 additionally adjusted for education level, marital status, PIR, race, smoking, drinking, occupational and recreational physical activity level, BMI, hypertension, stroke, diabetes, and dietary energy intake. In addition, due to the relatively long time span, we added a time dummy variable to the regression model and also included the interaction between the time dummy variables and vitamin C intake. We performed a sensitivity analysis by excluding participants taking antidepressant medication (Bupropion, Fluoxetine, Sertraline, Paroxetine, Venlafaxine, Citalopram, Escitalopram, and Duloxetine). Considering that different sex and age groups have different depression prevalence, we performed stratified analyses by sex and age. The dose-response relationship between dietary vitamin C intake and the risk of depressive symptoms was evaluated with a restricted cubic spline model, with three knots located at the 5th, 50th, and 95th percentiles of dietary vitamin C intake. Results were considered statistically significant when the two-sided p value was less than 0.05.
Characteristics of the Participants
A total of 25,895 individuals who met the inclusion criteria were included in this investigation. The average age was 49.00 ± 18.45 years. Among them, 2334 participants were in the depressive symptoms group, accounting for 9.01% of the total. Table 1 shows the comparison of characteristics between the depressive symptoms group (PHQ-9 ≥ 10) and the non-depressive symptoms group (PHQ-9 < 10). Between the groups with and without depressive symptoms, the latter was more likely to have a lower education level, higher BMI, lower PIR, lower occupational physical activity, lower recreational physical activity, lower vitamin C intake, lower vitamin C intake from vegetable sources, and lower total energy intake. In addition, for women, smokers, and patients with stroke, hypertension, and diabetes, the proportion in the depressive symptoms group was significantly higher than that in the non-depressive symptoms group. Supplementary Table S2 shows the comparison of characteristics between the high (>50 mg/day) and low (≤50 mg/day) vitamin C intake from vegetables groups. Between the groups of high and low vitamin C intake from vegetables, the latter was more likely to have a lower education level, lower PIR, lower recreational physical activity, and lower total energy intake. In addition, for women and smokers, the proportion in the low vitamin C intake from vegetables group was significantly higher than that in the high vitamin C intake from vegetables group.
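The sketch below illustrates, in simplified form, the kind of model described in the statistical analysis above: the PHQ-9 cut-off defines the outcome, dietary vitamin C is split into terciles, and a logistic regression with a spline term approximates the dose-response analysis. It uses made-up data and placeholder variable names, and it ignores the NHANES survey weights, strata, and PSU design, so it is an illustration rather than a reproduction of the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Made-up data standing in for NHANES variables (names are placeholders).
df = pd.DataFrame({
    "phq9_total": rng.integers(0, 28, size=2000),
    "diet_vc_mg": rng.gamma(shape=2.0, scale=40.0, size=2000),
    "age": rng.integers(18, 80, size=2000),
    "female": rng.integers(0, 2, size=2000),
})
# Outcome: depressive symptoms defined as a PHQ-9 total score >= 10.
df["depressive_symptoms"] = (df["phq9_total"] >= 10).astype(int)
# Exposure: dietary vitamin C split into terciles T1/T2/T3 (low/medium/high).
df["vc_tercile"] = pd.qcut(df["diet_vc_mg"], 3, labels=["T1", "T2", "T3"])

# Tercile model: exponentiated coefficients give ORs for T2/T3 vs the T1 reference.
tercile_fit = smf.logit("depressive_symptoms ~ C(vc_tercile) + age + female", df).fit(disp=0)
print(np.exp(tercile_fit.params))

# Dose-response: a natural cubic spline of continuous intake, approximating the
# restricted cubic spline with knots at the 5th/50th/95th percentiles.
spline_fit = smf.logit("depressive_symptoms ~ cr(diet_vc_mg, df=3) + age + female", df).fit(disp=0)
print(spline_fit.summary())
```

A full survey-weighted analysis would additionally supply the recalculated NHANES weights and design variables, which this sketch deliberately leaves out.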
Relationship between Dietary Vitamin C Intake and the Risk of Depressive Symptoms The results are shown in Table 2. After weighted calculation, the univariate logistic regression model shows that higher dietary vitamin C intake was associated with a lower risk of depressive symptoms (p < 0.001). Compared with T1, the ORs (95% confidence interval) of dietary vitamin C intake T2 and T3 were 0.58 (0.50-0.68) and 0.55 (0.48-0.63). In model 1, higher dietary vitamin C intake was related to a lower risk of depressive symptoms, and the correlation was statistically significant (p < 0.001). Compared with T1, the ORs (95% confidence interval) of dietary vitamin C intake T2 and T3 were 0.58 (0.50-0.68) and 0.56 (0.49-0.65), respectively. In model 2, the negative correlation between dietary vitamin C intake and depressive symptoms remained stable. Compared with T1, T2 and T3 of dietary vitamin C intake were negatively correlated with depressive symptoms, with ORs (95% confidence interval) of 0.69 (0.58-0.83) and 0.73 (0.58-0.91), respectively. The joint test of the effect for the multiple categorical variables was used, and dietary vitamin C intake was negatively correlated with depressive symptoms, with an OR value of 0.998, p = 0.018. In a sensitivity analysis, after excluding 2415 participants who took antidepressant medication, the association of dietary vitamin C intake with depressive symptoms was still significant in T2. Compared with T1, dietary vitamin C intake was negatively correlated with depressive symptoms, with an OR (95% confidence interval) of 0.72 (0.58-0.90). Table 3 shows the correlation between vitamin C intake and depressive symptoms after sex stratification. In a multiple-adjusted model, the T2 and T3 groups of female dietary vitamin C intake were negatively correlated with depressive symptoms, and the ORs (95% confidence interval) were 0.613 (0.48-0.78) and 0.648 (0.48-0.87), respectively. Table 4 shows the results of an age stratification analysis of the relationship between vitamin C intake and depressive symptoms risk. T2 in the age group 18-39 years old was inversely correlated with the risk of depressive symptoms, with an OR (95% confidence interval) of 0.708 (0.51-0.98). T2 and T3 were inversely correlated with the risk of depressive symptoms in the age group 40-59 years old, with ORs (95% confidence interval) of 0.630 (0.48-0.83) and 0.636 (0.44-0.91). In dose-response relationships, the adjustment of covariates was consistent with model 2. The dietary vitamin C intake was nonlinearly negatively associated with depressive symptoms (P -nonlinearity = 0.009). We found an L-shaped association. The prevalence of depressive symptoms reached a plateau when the dietary vitamin C intake was higher than 126 mg/day. Figure 2 shows the dose response relationship. Relationships between Dietary Vitamin C Intake Derived from Different Sources (Vegetables, Fruits), Vitamin C Supplements, Total Vitamin C Intake (Food + Supplements), and the Risk of Depressive Symptoms The association of dietary vitamin C intake derived from vegetables and fruits with depressive symptoms risk is shown in Table 5. In a multiple-adjusted model, compared with T1, the OR (95% confidence interval) of dietary vitamin C intake derived from vegetables T3 was 0.73 (0.56-0.95). The joint test of the effect for the multiple categorical variables was used, and the dietary vitamin C intake derived from vegetables was negatively correlated with depression, with an OR value of 0.996, p = 0.026. 
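The odds ratios and 95% confidence intervals quoted above follow from the fitted logistic coefficients in the usual way; a small helper illustrating the conversion is shown here, with purely illustrative coefficient values rather than the study's actual estimates.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic regression coefficient and its standard error
    into an odds ratio with an approximate 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for T3 vs T1 (illustrative numbers only).
or_, lo, hi = odds_ratio_ci(beta=-0.31, se=0.11)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```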
There was a linear association between dietary vitamin C intake derived from vegetables and depressive symptoms (P-nonlinearity = 0.105). When dietary vitamin C derived from vegetables was about 50 mg/day, the risk of depressive symptoms reached a relatively low level. When dietary vitamin C derived from vegetables was more than 70 mg/day, the relationships were no longer significant. The dose response between vitamin C derived from vegetables and depressive symptoms risk is shown in Figure 3. Among the 25,895 individuals, 8132 were vitamin C supplement users, accounting for 31.40% of the total participants. In the multiple adjustment model, compared with non-supplement users, vitamin C supplement users were inversely correlated with depressive symptoms risk, with an OR (95% confidence interval) of 0.78 (0.66-0.93). The association of vitamin C supplement users and non-supplement users with depressive symptoms risk is shown in Table 6. Table 6. Association between vitamin C supplement users and non-supplement users with depressive symptoms. In the multiple adjustment model, compared with T1, T2 and T3 of vitamin C intake (food + supplements) were negatively correlated with depressive symptoms, with ORs (95% confidence interval) of 0.78 (0.65-0.94) and 0.72 (0.55-0.93), respectively. The association of total vitamin C intake (food + supplements) with depressive symptoms risk is shown in Table 7. Table 7.
Association between total vitamin C intake (food + supplements) and depressive symptoms. Discussion We found that dietary vitamin C, total vitamin C intake (food + supplements), and vitamin C derived from vegetables were negatively correlated with the risk of depressive symptoms in the general population. In stratified analyses by sex, dietary vitamin C intake was negatively correlated with the risk of depressive symptoms in female. In age stratified analyses, we discovered a negative association in the 18-39 and 40-59 year-old groups. We found that there was a nonlinear association of dietary vitamin C intake and linear association of dietary vitamin C intake derived from vegetables with the risk of depressive symptoms. The relationship between ascorbic acid and depression was initially based on the observation of clinical manifestations of ascorbic acid deficiency [30], and some clinical studies had also explored vitamin C as an adjuvant therapy for depression [31,32]. Some studies explored the relationship between vitamin C and the emotional state in acutely hospitalized patients [31] and type 2 diabetic patients [33]; the study found that vitamin C supplementation improved their emotional levels. Some controlled experiments found that supplementing foods with high vitamin C can improve the mood of adult men [34], and the intake of vitamin C in the diet of depressed students decreased significantly [35]. In addition, some reviews summarized the existing evidence for the treatment of depression with ascorbic acid [36][37][38] and the relationship between vitamin C deficiency and depression [39,40]. However, it focused more on clinical trials, and lacked research on the relationship between dietary vitamin C intake and depressive symptoms in the general population. A cross-sectional survey of 139 male participants aged between 18 and 35 [14] found a reversed association between dietary vitamin C status and depression. This was consistent with our results. A cross-sectional study of 279 adults aged 65 to 75 [15] found that vitamin C intake was correlated with the alleviation of depressive symptoms in communitydwelling elderly persons in Japan. It was only statistically significant in men, while our results were only statistically significant in women by sex stratification. A study of 73 residents over 60 years old found that there was no relationship between the intake of vitamin C and the improvement of depressive symptoms in individuals over 60 years old [17]. Our results also found that there was no correlation between individuals over 60 years old. Inconsistent research results may be due to the different number, age, sex, and nationality of participants. Vitamin C may be associated with depression through the following mechanisms. Firstly, vitamin C may play an antidepressant role through its antioxidant and antiinflammatory properties [41]. Studies have shown that vitamin C can be used as an antioxidant at low doses and as a pre-oxidant at high doses [42]. In addition, vitamin C is essential for the synthesis of the monoamine neurotransmitter dopamine, norepinephrine, and serotonin [43], and studies have found that deficiencies and disorders of these substances can lead to depression [44]. In the results of age stratification, we did not find the relationship between dietary vitamin C intake and depressive symptoms in participants over 60 years old. Nutritional requirements may change with age, and the digestive organ function of the elderly grad-ually declines with aging [45]. 
Changes caused by aging on the digestive organs of the gastrointestinal tract may affect the absorption of vitamins [46]. The oxidative damage in the elderly is aggravated. Given that the activity of antioxidant enzymes decreases with age, it is very important to provide sufficient dietary antioxidants [47]. Almost all countries encourage the consumption of fruits and vegetables [48]. According to the 2020-2025 dietary guidelines for American residents [49], the recommended dietary allowance of vitamin C for men over 18 years old is 90 mg/day, and 75 mg/day for women. According to our data, 15,828 (61.12%) participants were below the recommended dietary allowance. So, we suggest that the intake of vegetables in the daily diet should be appropriately increased to prevent chronic diseases such as depression. There are several advantages in our research. Firstly, the dose-response relationship between vitamin C intake and depressive symptoms was discussed. Secondly, in multivariate analysis, we adjusted dietary energy intake and other confounding factors. How each covariate affects the depressive symptoms outcome was shown in Supplementary Figure S1. Thirdly, this study investigated the association between sex and age stratification. Fourthly, we also analyzed the association of vegetable-derived and fruit-derived vitamin C with the risk of depressive symptoms. Finally, because of the large sample size, the results are more reliable. There are also several limitations in our research. First of all, the study design was cross-sectional, so it was difficult to determine the causal relationship. Secondly, PHQ-9 is a kind of screening tool that might suffer from wrong classification bias. Thirdly, a 24-h dietary recall might lead to memory bias, which might lead to an overestimation or underestimation of the results in our study. However, we used the average of two 24-h dietary recollections as the dietary vitamin C intake, which might partially reduce the recollection bias. Fourthly, cooking often destroys vitamin C in vegetables. We have not controlled the influence of cooking on vitamin C derived from vegetables. Fifth, although vegetables contain high levels of vitamin C, they also contain reasonable levels of other important micronutrients that may help improve mood, such as B vitamins, magnesium, iron, vitamin A, and so on [50]. There is also a possibility that other vitamins and minerals in vegetables may increase the health benefits of vitamin C. Sixth, the best indicator of vitamin C status is plasma ascorbic acid. Due to the limitation of data, we only explored the dietary data. Seventh, due to the limitation of data, the full race classification was not available until 2011, so we cannot analyze the relationship between different races and depression. Conclusions Dietary vitamin C intake and vitamin C intake derived from vegetables were inversely associated with the risk of depressive symptoms among the general population. For the results of the stratified analysis, we found it in women and 18-59 years old group. We suggest increasing the intake of vegetables in our daily diet. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/antiox10121984/s1, Figure S1: How each covariate affects the depressive symptoms outcome. Table S1: The classifications of covariates, Table S2: The comparison results of the characteristic between the high (>50 mg/day) and low (≤50 mg/day) vitamin C intake from vegetables.
Leptons in Holographic Composite Higgs Models with Non-Abelian Discrete Symmetries We study leptons in holographic composite Higgs models, namely in models possibly admitting a weakly coupled description in terms of five-dimensional (5D) theories. We introduce two scenarios leading to Majorana or Dirac neutrinos, based on the non-abelian discrete group $S_4\times \Z_3$ which is responsible for nearly tri-bimaximal lepton mixing. The smallness of neutrino masses is naturally explained and normal/inverted mass ordering can be accommodated. We analyze two specific 5D gauge-Higgs unification models in warped space as concrete examples of our framework. Both models pass the current bounds on Lepton Flavour Violation (LFV) processes. We pay special attention to the effect of so called boundary kinetic terms that are the dominant source of LFV. The model with Majorana neutrinos is compatible with a Kaluza-Klein vector mass scale $m_{KK}\gtrsim 3.5$ TeV, which is roughly the lowest scale allowed by electroweak considerations. The model with Dirac neutrinos, although not considerably constrained by LFV processes and data on lepton mixing, suffers from a too large deviation of the neutrino coupling to the $Z$ boson from its Standard Model value, pushing $m_{KK}\gtrsim 10$ TeV. Introduction The idea that the Standard Model (SM) Higgs might be a composite particle arising from a strongly coupled theory [1] has received considerable attention lately. One of the main reasons of this renewed interest comes from the observation that the composite Higgs paradigm is closely related to theories in extra dimensions [2]. This connection is particularly transparent in Randall-Sundrum (RS) models [3], thanks to the AdS/CFT duality [4]. More precisely, certain theories in extra dimensions, including RS models, can be seen as a (relatively) weakly coupled description of a sub-set of 4D composite Higgs models. They consist of two sectors: an "elementary" sector, which includes the gauge and fermion fields of the SM, and a "composite" sector, which is strongly coupled and gives rise to the SM Higgs. The form of the couplings between these two sectors is not the most general one allowed by symmetry considerations only, but is more constrained. We denote in the following this more constrained class of models as Holographic Composite Higgs Models (HCHM). The flavour structure of HCHM has been studied in detail in the past mostly in the 5D context of RS models with fermion and gauge fields in the bulk and it has been shown to be particularly successful [5]. It automatically implements the idea of [6] to explain the hierarchy of the quark and charged lepton masses in terms of field localization in an extra dimension. Moreover, HCHM are equipped with a built-in GIM mechanism that goes under the name of RS-GIM [7] and automatically protects the SM fields from possibly large flavour violating interactions coming from the composite sector. 1 Small neutrino masses and large lepton mixing are not easily accommodated in this set-up, because the large mixing potentially leads to excessive LFV. Neutrino oscillation experiments clearly show that the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix has a very peculiar structure well compatible with Tri-Bimaximal (TB) mixing [9]. There has been much progress in recent years in explaining TB lepton mixing and the absence of LFV interactions for charged leptons by means of discrete non-abelian symmetries. 
It is thus natural to apply such symmetries also in the context of HCHM in order to resolve the aforementioned problems. Aim of this paper is to introduce a class of HCHM where, thanks to a non-abelian discrete symmetry, lepton mixing is nearly TB, and at the same time bounds on LFV processes in the charged lepton sector are satisfied (see [10] for other proposals). The mass spectrum in the neutrino sector can be normally or inversely ordered. The pattern of flavour symmetry breaking is dictated by symmetry considerations only, without relying on extra assumptions [11] or specific mechanisms for the breaking of the flavour symmetry, such as the ones used in [12,13] (see also [14]) in the case of A 4 to reproduce TB mixing [15]. We discuss the case of flavour symmetry breaking in the elementary and composite sectors to certain non-trivial subgroups of the original symmetry without advocating an explicit realization of the breaking. 2 In particular, no flavons or other specific sources of flavour breaking are present in our set-up. We consider in this paper the discrete group S 4 ×Z 3 . The group S 4 has been shown [16] to be the minimal group giving rise to TB lepton mixing using symmetry principles only. The presence of an irreducible two-dimensional representation is another feature of S 4 . Such a representation allows to disentangle the symmetry properties of the third generation from the first two and is expected to be important when applying the flavour symmetry in the quark sector. We focus on two possible scenarios which only differ in the way SM neutrinos get a mass. In the first one, the SM neutrinos are Majorana fermions and the type I see-saw mechanism explains the smallness of their masses, with no need to introduce additional (intermediate) mass scales in the theory. In the second one, SM neutrinos are Dirac fermions and tiny Yukawa couplings are naturally explained by the ultra-composite nature of the right-handed (RH) neutrinos [17]. In both scenarios, the flavour symmetry is broken to Z 2 × Z 2 × Z 3 in the elementary and to Z (D) 3 in the composite sector. Note that the strength of this symmetry breaking is in general expected to be O(1). In the composite sector for the charged leptons such a large breaking is actually favoured, because it allows to decrease the degree of compositeness of SM leptons, suppressing large deviations from the SM Zττ coupling. 3 The breaking felt by neutrinos in the composite sector is instead required to be weak in the Majorana scenario, in order to not perturb too much TB lepton mixing. An alternative is to resort to an extra symmetry protecting neutrinos from being affected by the flavour symmetry breaking in the composite sector. On the contrary, flavour symmetry breaking in the composite sector can be large in Dirac models, provided that the tiny component of RH neutrinos in the elementary sector is flavour universal. After a general presentation of the basic 4D flavourful HCHM, we pass to construct two explicit realizations in terms of 5D warped models. For concreteness, we consider the HCHM where the Higgs is a pseudo-Goldstone boson, i.e. gauge-Higgs unification models [18]. The 5D models are based on the minimal SO(5) × U (1) X gauge symmetry [19], while the flavour symmetry group contains, in addition to the S 4 × Z 3 factor, model-dependent discrete abelian factors necessary to minimize the number of allowed (and often unwanted) terms. 
In the Majorana model, the leading source of flavour violation arises from so called fermion boundary kinetic terms (BKT), whose effect is analyzed in detail. The only sizable constraints come from lepton mixing, being LFV processes for charged leptons below the current bounds. We also argue that CP violating effects, such as the Electric Dipole Moments (EDM) for charged leptons, are negligibly small. Keeping the prediction of the solar mixing angle θ 12 within the experimentally allowed 3σ range requires flavour symmetry breaking in the composite sector to be at most of O(3% ÷ 4%) for neutrinos, unless a Z 2 exchange symmetry is present on the IR brane, in which case no constraint occurs. This Z 2 -invariant 5D model is surprisingly successful, simple and constrained, and essentially contains only one free real parameter and two Majorana phases! The model is compatible with the mass of the first Kaluza-Klein (KK) gauge resonances being m KK 3.5 TeV, which is roughly the lowest scale allowed by electroweak considerations (S parameter). The masses of all fermion KK resonances (charged and neutral) are always above the TeV scale. In the Dirac model the most significant constraint does not arise from LFV processes or lepton mixing, but from a too large deviation of the gauge coupling of neutrinos to the Z boson from its SM value, which is constrained by LEP I to be roughly at the per mille level. This bound is satisfied by taking m KK 10 TeV, well above the LHC reach, with an O(1%) tuning in the electroweak sector. The masses of charged fermion KK resonances are above the TeV scale, while in the neutral fermion sector potentially light (sub-TeV) states can appear. The structure of the paper is as follows. In section 2 we describe our set-up from a general effective 4D point of view both for the Majorana and Dirac models. In section 3 we briefly review the relevant operators entering in the LFV processes we focus on, radiative lepton decays l 1 → l 2 γ, decays to three leptons l 1 → l 2 l 3l4 and µ − e conversion in nuclei. In section 4 we construct the 5D Majorana model, compute its mass spectrum in subsection 4.1, the deviations from gauge coupling universality in subsection 4.2, LFV processes and lepton mixing in subsection 4.3 and estimate uncalculable effects in subsection 4.4. In section 5 a similar, but more concise, analysis is repeated for the Dirac model. We conclude in section 6. Three appendices are added. In appendix A basic definitions and properties of S 4 are reviewed, in appendix B we report our conventions for the SO(5) generators and representations and in appendix C we write the detailed structure of the two form factors governing the charged lepton radiative decays. General Set-up We consider CHM with a non-abelian discrete flavour symmetry G f = S 4 × Z 3 . They consist of an "elementary" and a "composite" sector: (2.1) The symmetry G f is broken in the elementary sector to Z 2 × Z 2 × Z 3 , where Z 2 × Z 2 ⊂ S 4 is generated by S and U , and in the composite sector to Z (D) 3 , the diagonal subgroup of the external Z 3 and Z 3 ⊂ S 4 generated by T (see appendix A for our notation and details on S 4 group theory). We do not need to specify how the flavour symmetry breaking pattern is achieved. The term L mix governs the mixing between the two sectors and is assumed to be invariant under the whole flavour group G f . This is our definition of HCHM in the following. We have two different classes of models, depending on whether neutrino masses are of Majorana or Dirac type. 
We will refer to the two cases as Majorana/Dirac models (or scenarios). Majorana Models The elementary sector is invariant under the SM gauge group and includes three generations of SM left-handed (LH) and RH leptons l α L , l α R and three RH neutrinos ν α R . Here and in the following Greek letters from the beginning of the alphabet denote generation indices; depending on the context, α = e, µ, τ or equivalently α = 1, 2, 3. The LH leptons l α L and the RH neutrinos ν α R transform as (3, 1) under S 4 × Z 3 , while the RH leptons l α R transform as (1, ω 2(α−1) ), where ω ≡ e 2πi/3 is the third root of unity. The elementary Lagrangian (up to dimension four terms) is taken to be where the superscript c denotes charge conjugation and M is the most general mass matrix invariant under Z 2 × Z 2 × Z 3 . In flavour space, it is of the form with U T B the TB mixing matrix and M D a diagonal matrix. We use the notation ≡ γ µ A µ , for any vector A µ . The composite sector is an unspecified strongly coupled theory, that gives rise, among other states, to a composite SM Higgs field. The latter may or may not be Goldstone fields coming from a spontaneously broken global symmetry. In absence of any interaction between the elementary and the composite sector, the SM fermions are massless. They gain masses, after ElectroWeak Symmetry Breaking (EWSB), by mixing with fermion operators Ψ belonging to the strongly coupled sector. The mixing Lagrangian L mix is where Λ is a high UV cut-off scale of the composite sector, Ψ α l L , Ψ α l R and Ψ α ν R are fermion operators of (quantum) dimensions 5/2 + γ lL , 5/2 + γ α lR , 5/2 + γ νR , transforming as (3, 1), (1, ω 2(α−1) ) and (3, 1) under S 4 × Z 3 , respectively. The mixing parameters λ l L and λ ν R are flavour universal, while λ α l R are flavour diagonal, but non-universal. For simplicity, we assume that all of them are real. Although strictly not necessary, we take γ lL , γ α lR > 0, so that these mixing couplings are irrelevant. To a good approximation, l α L and l α R can be identified with the SM fields, with a small mixing with the strongly coupled sector. Integrating out the composite fermion operators and taking into account that L comp is invariant under Z (D) 3 only, gives the following charged lepton mass matrix (in left-right convention,ψ L M ψ R ) where v H is the electroweak scale, µ is the O(TeV) scale at which the composite theory becomes strongly coupled and b α are O(1) coefficients. 4 The hierarchy of the charged lepton masses naturally arises from the (µ/Λ) suppression factor in (2.6) with a proper choice of anomalous dimensions γ lL and γ α lR . The coupling λ ν R is in general relevant and ν α R strongly mix with the composite sector. The latter gives the following contribution to the kinetic terms of ν α R : 7) withb α O(1) coefficients. When γ ν R < 0, the kinetic term in (2.7) dominates over the O(1) term (2.2) present in the elementary Lagrangian, and it is more appropriate to say that ν α R are states in the composite sector with a small component in the elementary sector. When ν α R are canonically normalized, the relevant coupling λ ν R in (2.5) becomes effectively a constant. 5 The canonically normalized neutrino Dirac mass terms are of the form coefficients. Notice the crucial difference between the charged lepton (2.6) and neutrino (2.8) masses. The former explicitly break the flavour symmetry, sincel d L l R is not S 4 ×Z 3 invariant, while the latter do not, beingl u L ν R an invariant. 
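For the reader's convenience, the tri-bimaximal mixing matrix U_TB referred to above (in the mass matrix of eq. (2.4) and in the identification of the PMNS matrix in eq. (2.10)) can be written in its standard form, up to sign and phase conventions, as follows; this block is added here as a reminder and is not taken verbatim from the paper.

```latex
U_{TB} =
\begin{pmatrix}
 \sqrt{\tfrac{2}{3}} & \tfrac{1}{\sqrt{3}} & 0 \\
 -\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{2}} \\
 -\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{2}}
\end{pmatrix},
\qquad
\sin^2\theta_{12} = \tfrac{1}{3}, \quad
\sin^2\theta_{23} = \tfrac{1}{2}, \quad
\sin\theta_{13} = 0 .
```

These are the mixing angles of exact tri-bimaximal mixing, which the text describes as being realized only approximately ("nearly TB") once the various symmetry-breaking effects are included.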
This implies that the coefficients b α vanish in the limit of exact S 4 × Z 3 symmetry, whileb α =b,b α =b become flavour independent. Assuming a small breaking of the flavour symmetry in the neutrino sector, one can takeb α ≈b, b α ≈b and, independently of b α , the Dirac neutrino mass terms (2.8) become universal. We stress the importance of having a small breaking of the flavour symmetry in the neutrino sector but not necessarily for charged leptons, because the masses of the latter are already suppressed by their small degree of compositeness. Demanding a higher degree of compositeness, in particular for the τ lepton, might result in too large deviations of its coupling to the Z from its SM value. Integrating out ν α R in this limit gives the following see-saw like neutrino mass matrix: where we have again taken into account the scaling required to canonically normalize ν α R . Thanks to the latter, the factorsb cancel from the final formula (2.9) but, more importantly, we gain a crucial enhancement factor (µ/Λ) 2γν R , without which the light neutrino masses would be far too small for M D ∼ O(M P l ), M P l being the reduced Planck mass, considering that the Dirac mass terms are at most of O(v H ). No intermediate mass scale has then to be advocated for M D . The mass matrix (2.6) is diagonal in flavour space and no rotation of charged leptons is needed to go to the mass basis. On the other hand, the neutrino mass matrix (2.9) is diagonalized by the matrix U T B (up to phases), which leads to the identification (2.10) Dirac Models The elementary sector includes, like in the Majorana scenario, three generations of LH and RH leptons l α L , l α R in (3, 1) and (1, ω 2(α−1) ) of S 4 × Z 3 , respectively, and in addition we now have LH exotic neutrino singletsν α L in (3,1). The composite sector is assumed to contain two massless RH fermion bound states, singlets under G SM , both in (3, 1) of S 4 × Z 3 . One of them mixes withν α L giving rise to vector-like massive neutrinosν α . The remaining fermions, denoted by ν α R , mix with some heavy vector-like states in the elementary sector. When the latter are integrated out, one is left with a tiny mixing mass term ǫ between ν α R andν α L in the elementary sector, which is flavour universal. 6 The elementary Lagrangian (up to dimension four terms) is taken to be with M as in (2.3). The mixing Lagrangian L mix is The operators Ψ α l L , Ψ α l R and Ψ α ν L are of dimensions 5/2 + γ lL , 5/2 + γ α lR , 5/2 + γν L , transforming as (3, 1), (1, ω 2(α−1) ) and (3, 1) under S 4 × Z 3 , respectively. The mixing parameters λ l L and λν L are flavour universal, while λ α l R are flavour diagonal, but non-universal. The charged lepton mass matrix is the same as (2.6). The operators Ψ α ν L ,R excite, among other states, the RH massless neutrino bound states that pair up withν α L . The vector-like mass ofν α depends on the nature of the coupling λν L : where d α are O(1) coefficients. When EWSB occurs, Yukawa couplings between Ψ α l L and Ψ α ν L induce mixing among ν α L andν α . Whenν α are integrated out, one getŝ whered α are O(1) coefficients. Plugging (2.14) into the mass term in (2.11) gives the SM neutrino mass matrix In the limit in which the masses m α ν and the mixing are universal, m α ν = mν, d α = d,d α =d, the mass matrix (2.15) leads to TB mixing. The composite nature of the RH neutrino naturally explains the smallness of ǫ and hence the actual SM neutrino masses [17]. 
In both scenarios, the flavour symmetry Z (D) 3 , present in the composite sector, remains unbroken in the limit in which the neutrino mass term M in the elementary sector is proportional to the identity. Correspondingly, all tree-level flavour changing charged gauge boson interactions are vanishing in this limit. When M is not proportional to the identity, the latter are still negligible in the Majorana scenario, being suppressed by the masses of the heavy RH neutrinos, but can be sizable in the Dirac one, leading to processes such as µ → eγ. Tree-level flavour violating Higgs and neutral gauge boson interactions vanish in both scenarios. This summarizes the basic set-up of our Majorana and Dirac HCHM. There are of course several sub-leading effects that should consistently be analyzed. We have not performed such analysis, but have preferred to postpone their discussion to the explicit 5D models that will follow. We only comment here that a relevant source of flavour violation arises from the elementary sector, since the kinetic terms of the SM fermions (in the basis where L mix is G f invariant) are constrained in general to be only Z 2 × Z 2 × Z 3 invariant, rather than S 4 × Z 3 invariant: , and Z D l a diagonal matrix. Similar considerations apply of course to ν α R andν α L , while the additional unbroken Z 3 symmetry forbids flavour violating kinetic terms for l α R . As we will see, in 5D models the Z l factors are mapped to BKT at the UV brane. Effective Field Theory for LFV Processes In this section we review, closely following [22] and their notation, the most relevant effective operators entering in LFV processes. The most experimentally constrained LFV observables are the radiative lepton decays l 1 → l 2 γ, the decays to three leptons l 1 → l 2 l 3l4 and the µ − e conversion in nuclei. Particularly relevant are the muon decays µ → eγ and µ → eeē (µ → 3e for short). These LFV processes are described by the following effective dimension 5 and 6 operators Terms of the form (μ R e L )(ē L e R ) and (μ L e R )(ē R e L ), by a Fierz identity, are shown to contribute to g 5 and g 6 , respectively. The first two terms contribute to µ → eγ while all terms contribute to µ → 3e. One finds the following branching ratio for these processes: The µ − e conversion in nuclei is more involved and described by an additional set of effective operators, that contain quark fields. The most relevant ones are the vector 4 fermion operators: The branching ratio is given by (see [22] for more details) where Z and N are the proton and neutron numbers of the nucleus, F p is the nuclear form factor, Z ef f is the effective atomic charge and Γ capt is the total muon capture rate. A similar analysis applies to LFV processes involving the τ lepton, see e.g. [23] for details. Explicit 5D Majorana Model It is useful to construct a specific 5D weakly coupled description of our Majorana scenario, in order to concretely address its phenomenological viability beyond possible estimates based on naïve dimensional analysis only. We consider in the following a gauge-Higgs unification model in warped space [19,24]. As known, these models describe the sub-class of CHM where in the composite sector (a spontaneously broken CFT) a global symmetry G is spontaneously broken to a sub-group H, giving rise to a set of Goldstone fields including the SM Higgs field [25]. We consider the minimal symmetry breaking pattern SO(5) → SO(4), leading only to the SM Higgs doublet. 
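The dimension-5 and -6 operators of (3.1) and (3.3) can be written schematically as follows. The overall normalization and the labelling of the coefficients are assumptions matching a Kuno–Okada-type convention, which [22] is based on; they should be checked against that reference rather than read as the authors' exact definitions.

```latex
% Sketch of the effective LFV Lagrangian (dipole, four-lepton and lepton-quark vector
% operators); the 4 G_F/sqrt(2) prefactor and the g_1..g_6 labels are assumptions.
\mathcal{L}_{\rm eff} = \frac{4 G_F}{\sqrt{2}} \Big[
   m_\mu A_R\, \bar\mu_R \sigma^{\mu\nu} e_L F_{\mu\nu}
 + m_\mu A_L\, \bar\mu_L \sigma^{\mu\nu} e_R F_{\mu\nu}
 + g_1 (\bar\mu_R e_L)(\bar e_R e_L) + g_2 (\bar\mu_L e_R)(\bar e_L e_R)
 + g_3 (\bar\mu_R \gamma^\mu e_R)(\bar e_R \gamma_\mu e_R)
 + g_4 (\bar\mu_L \gamma^\mu e_L)(\bar e_L \gamma_\mu e_L)
 + g_5 (\bar\mu_R \gamma^\mu e_R)(\bar e_L \gamma_\mu e_L)
 + g_6 (\bar\mu_L \gamma^\mu e_L)(\bar e_R \gamma_\mu e_R)
 + \sum_{q=u,d} \big( g_{LV(q)}\, \bar\mu_L \gamma^\mu e_L
                    + g_{RV(q)}\, \bar\mu_R \gamma^\mu e_R \big)\, \bar q \gamma_\mu q
 + {\rm h.c.} \Big].
```

In this convention the dipole coefficients give BR(µ → eγ) = 384 π² (|A_L|² + |A_R|²), all terms enter µ → 3e, and the lepton–quark vector operators drive µ − e conversion, in line with the statements above; note that (µ̄_R e_L)(ē_L e_R) and (µ̄_L e_R)(ē_R e_L) Fierz onto the g_5 and g_6 structures, respectively.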
We use the conformally flat coordinates in which the 5D metric reads The UV and IR branes are located at z = R ∼ 1/M P l , where M P l is the reduced Planck mass, and at z = R ′ ∼ 1/TeV, respectively. The gauge symmetry in the bulk is and the flavour symmetry is The gauge symmetry breaking is standard, with G gauge broken at the UV and IR boundaries to where P LR is a LR Z 2 symmetry, useful to suppress deviations of the couplings of fermions to the Z boson from their SM values [26]. The flavour symmetry is broken to G flavour, 3 , respectively. In order to constrain the number of terms allowed at the UV and IR boundaries, two additional symmetries Z ′ 3 and Z ′′ 3 have been included. Bulk UV IR Table 1: Transformation properties of the 5D multiplets ξ l , ξ e and ξ ν under G flavour and their decomposition properties under the subgroups G flavour,UV and G flavour,IR in the Majorana model. The lepton particle content of the model consists of 5D bulk fermions only: one fundamental ξ l,α , one adjoint ξ e,α and one singlet representation ξ ν,α of SO(5), for each generation, all neutral under U (1) X (see [27] for a similar construction), where the first and second entries in round brackets refer to the + (−) Neumann (Dirichlet) boundary conditions (b.c.) at the UV and IR branes, respectively. We have written the SO(5) multiplets in (4.4) in terms of their SU (2) L × SU (2) R decomposition, where [ψ 1 , ψ 2 ] denotes the two components of the bi-doublet (2, 2) with T 3R = +1/2 (ψ 1 ) and T 3R = −1/2 (ψ 2 ). The SM LH lepton doublets arise from the zero modes of the 5D field L αL in the 5, the RH charged lepton singlets arise from the zero modes of e αR , T 3R = −1 component of the SU (2) R triplet in the 10, and the RH neutrinos arise from the zero modes of the singlet ν αR . 7 Notice that with the embedding (4.4), the LH SM charged leptons, originating from 5D fields with T 3R = T 3L = −1/2, are expected to have suppressed SM Z coupling deviations. In addition to the SM fields and their KK towers, the 5D fields (4.4) give also rise to a set of exotic particles. In terms of SU (2) L × U (1) Y , these are two doubletsL 1,αL andL 2,αL with Y = 1/2, one doublet L αL with Y = −1/2, two singletsν αL andν αL with Y = 0, one singlet x αL with Y = 1 and one triplet Z αL with Y = 0. The flavour properties of the fields (4.4) are summarized in table 1. Notice that the decomposition of the 3 of S 4 into representations of the remnant group Z 2 × Z 2 at the UV boundary implies a non-trivial basis transformation, see appendix A. 7 The hypercharge Y and electric charge Q are given by Y = X + T3R and Q = T3L + Y . The most general G gauge,IR × G flavour,IR invariant mass terms at the IR brane are 8 all flavour diagonal. The only G gauge,UV × G flavour,UV invariant mass terms at the UV brane are Majorana mass terms for RH neutrinos: m UV = diag (m UV,e , m UV,µ , m UV,τ ) and U T B as in (2.4). Notice that the UV and IR localized mass terms are dimensionless. The phases of the IR mass terms m l IR,α and m ν IR,α can be removed by properly re-defining the 5D SO(5) fields ξ l,α and ξ e,α . We can also remove one of the three phases of the UV mass terms m UV,α , so that, in total, the Majorana model contains just two phases. Mass Spectrum The mass spectrum of the theory (including all KK states) is efficiently computed using the so-called holographic approach [28], which is also very useful to match the 5D theory to the 4D description given in section 2. 
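In conformally flat coordinates the warped background takes the standard Randall–Sundrum form; the following is a sketch assuming the usual conventions of gauge–Higgs unification models in warped space.

```latex
% RS metric in conformally flat coordinates; the fifth coordinate z runs from the UV
% brane at z = R ~ 1/M_Pl to the IR brane at z = R' ~ 1/TeV.
ds^2 = \left(\frac{R}{z}\right)^{2} \left( \eta_{\mu\nu}\, dx^{\mu} dx^{\nu} - dz^{2} \right),
\qquad R \le z \le R'.
```

Both the holographic computation and the KK/ZMA expansion used below refer to this background.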
As far as the lightest modes are concerned, however, simple and reliable formulas are more easily obtained using the more standard KK approach and the so called Zero Mode Approximation (ZMA), which we use in the following. The ZMA is defined as the approximation in which EWSB effects (i.e. Higgs insertions) are taken as perturbations and mixing with the KK states coming from Higgs insertions is neglected. The spectrum of the zero modes is then entirely fixed by the unperturbed zero mode wave functions and their overlap with the Higgs field. These unperturbed wave functions satisfy the new b.c. as given by the localized IR terms. As explained in [29], the localized UV Majorana mass terms, instead, must be considered as a perturbative mass insertion (like the Higgs) if one wants to recover a meaningful mass spectrum for the light SM neutrinos without taking into account mixing with the KK states. Due to the wave function localization of zero and KK modes, as a rule of thumb, the lighter the zero mode masses are, the more accurate the ZMA is. Taking into account the localized IR mass terms (4.5), the IR b.c. for the non-vanishing 5D field components in ZMA arê We get the following zero mode expansion: αR (x) are the canonically normalized LH lepton doublets, RH charged leptons and RH neutrino zero modes, respectively. We use the standard notation where c = M R are the dimensionless bulk mass terms of the 5D fermions. We denote by c l and c ν the bulk mass terms of ξ l,α and ξ ν,α , constrained by the flavour symmetry to be flavourindependent. We denote by c α the remaining 3 bulk mass terms for ξ e,α . The parameters ρ α and σ α are defined as We take the unitary gauge for the SO(5) → SO(4) symmetry breaking pattern in which the Higgs field wave function is (see appendix B for our SO(5) conventions) with v H ≃ 250 GeV. We find useful to introduce where f H is the Higgs decay constant. It is defined as In the second equality we have used the approximate tree-level matching between the SO(5) 5D coupling g 5 and the SU (2) L 4D coupling g Flavour-independent bounds, essentially the S parameter in models with a custodial symmetry, constrain 1/R ′ 1.5 TeV, corresponding to h 1/3. By computing the wave-function overlap with the Higgs field, we get the following charged lepton mass matrix As usual, the SM fermion masses are naturally obtained by taking c α < −1/2, in which case f −cα are exponentially small and hierarchical. For the Dirac neutrino mass matrix we get The Majorana mass matrix in 4D is the one on the UV boundary. Taking into account the wave functions of the RH neutrinos, we get Integrating out the heavy Majorana fields N (0) αR (x), the factors σ α cancel out and the actual form of the light neutrino mass matrix is, using (4.7), , the size of the neutrino masses is mainly governed by the bulk mass term c ν . The latter is essentially fixed to be We have explicitly checked that the masses of the zero modes obtained in the ZMA (and treating the UV Majorana mass term as a perturbative mass insertion) are in excellent agreement with the exact tree-level spectrum. Let us consider the relation between the 5D model and the general 4D analysis performed in subsection 2.1. The strongly coupled sector is a CFT spontaneously broken at the scale µ ≃ 1/R ′ with a cut-off Λ ≃ 1/R. 
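To make the notation of (4.14) and the mass formula (4.20) concrete, a minimal numerical sketch is given below. The value of R/R', the unit overlap factor, and the sample bulk masses are illustrative choices made here for the purpose of the example, not the fitted parameters of the model.

```python
import numpy as np

# Zero-mode flavour function of warped models, f_c^2 = (1 - 2c)/(1 - (R/R')**(1 - 2c)):
# exponentially small for c > 1/2, of order one for c < 1/2.  The hierarchy of charged
# lepton masses then follows from M_l ~ v_H f_{c_l} f_{-c_alpha} x O(1), cf. (2.6), (4.20).
def f(c, eps=1e-16):                      # eps = R/R' ~ TeV/M_Pl, an illustrative value
    return np.sqrt((1.0 - 2.0 * c) / (1.0 - eps ** (1.0 - 2.0 * c)))

v_H = 246.0                               # electroweak scale in GeV
c_l = 0.52                                # LH bulk mass close to 1/2, as favoured below

# Illustrative RH bulk masses, chosen by hand so that the products land near the
# observed masses; the model's fitted values and O(1) IR couplings are not these.
for name, c_alpha in [("e", -0.79), ("mu", -0.63), ("tau", -0.54)]:
    m = v_H * f(c_l) * f(-c_alpha) * 1e3  # MeV, with the O(1) overlap factor set to 1
    print(f"{name:3s}: f(c_l) f(-c_a) = {f(c_l) * f(-c_alpha):.2e}  ->  m ~ {m:.3g} MeV")
```

In the holographic matching described next, these profiles correspond (roughly, as a sketch) to anomalous dimensions γ_lL ≈ c_l − 1/2 and γ^α_lR ≈ −c_α − 1/2, so that f_{c_l} f_{−c_α} reproduces the (µ/Λ) suppression of (2.6).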
The anomalous dimensions appearing in (2.5) are uniquely fixed by the bulk masses of the 5D multiplets ξ l , ξ e and ξ ν [21]: 9 It is straightforward to show that, for c l > 1/2 and c α < −1/2, the (µ/Λ) suppression factors appearing in (2.6) arise from the factors f c l and f −cα defined in (4.14) and that b α ∼ m l IR,α . With the value of c ν taken as in (4.24), the coupling λ ν R is relevant and The latter factors, as expected, do not appear in the final mass formula (2.9). The ρ α are wave function normalization factors that take into account the contribution of the composite sector to the kinetic terms of the LH doublets, given by ρ α − 1. In the limit of an S 4 invariant IR Lagrangian, m l IR,α → 0 (so that ρ α → 1) and m ν IR,α = m ν IR , the neutrino mass matrix (4.23) leads to TB mixing. As we will see, bounds on gauge coupling deviations favour the region in parameter space where c l is close to 1/2, in which case ρ α is equal to one (since f cα 1) to a reasonable approximation even for m l IR,α ∼ O(1). This accidental property allows us to also explore the region in parameter space where the flavour symmetry breaking in the charged lepton sector on the IR brane is large, while in the neutrino sector it remains small, namely m ν IR,α = m ν IR (1 + δm ν IR,α ), with δm ν IR,α ≪ 1. Instead of assuming a small breaking in the neutrino sector and for the sake of reducing the number of parameters in the model, one might also advocate an accidental Z 2 exchange symmetry present only in the IR localized Lagrangian, under whicĥ If the symmetry (4.26) is imposed, the IR mass parameters m ν IR,α are constrained to be equal to ±1. Among the four inequivalent choices of ±1, we can take the universal choice m ν IR,α = 1. Although not necessary, an analogous Z 2 symmetry exchanging the two bi-doublets in the 5 and the 10 of SO(5) (a single Z 2 exchanging the bi-doublets and the singlets is also a viable possibility) might be advocated to also set m l IR,α = 1. The resulting model can be seen as an ultra-minimal 5D model, with in total only 8 real parameters (5 bulk mass terms and 3 localized UV mass parameters) and two phases (contained in the UV mass parameters), 4 of which are essentially fixed by the SM charged leptons (c α , c l ), 1 by the overall neutrino mass scale (c ν ) and 2 by the neutrino mass square differences (two combinations of m UV,α ), leaving in this way just one free real parameter and two Majorana phases! We denote this constrained model by the "Z 2 -invariant" model. Deviations from Gauge Coupling Universality In this subsection we compute the deviations from the SM values of the couplings of leptons to the Z and W bosons. In RS-like models such deviations can play an important role, since their expected order of magnitude for natural models with 1/R ′ 1.5 TeV can be of the same order of magnitude or larger than the experimental bounds, which are at the per mille level. The size of the deviation is mainly fixed by the wave function profile in the fifth dimension of the 4D lepton. The more the field is UV peaked, the smaller the deviation is. On general grounds, one might expect sizable deviations for all the Zl LlL couplings and for the Zτ RτR one. Deviations of the LH neutrino couplings Zν LνL should also be studied. 
The latter have indirectly been measured by LEP I and are constrained at the per mille level with an accuracy comparable to that for charged leptons, using the invisible decay width of the Z boson, under the assumption that this is entirely given by neutrinos [30]. An efficient way to compute these deviations, automatically summing over all the KK contributions, is provided by the holographic approach. In the latter, the effective 4D gauge couplings between fermions and gauge bosons are obtained by integrating over the internal dimension the 5D gauge vertex, with the 5D fields replaced by bulk-to-boundary propagators and 4D fields. The main source of deviation arises from higher-order operators with Higgs insertions, which give a contribution of O(h 2 ). Higher-order derivative operators are negligible, being suppressed by the fermion masses or Z boson mass and are O(M l R ′ ) 2 or O(m Z R ′ ) 2 , respectively. The momentum of all external fields (and hence of all bulk-to-boundary propagators) can be then reliably set to zero. In this limit, the computation greatly simplifies and compact analytic formulae can be derived. In the following we do not report all the details of our computation but only the final results. We define the 4D SM couplings g l,SM as without additional factors of the coupling g or of the weak mixing angle θ W . Let us first consider the LH charged leptons l α L . Given our embedding of l α L into 5D multiplets with T 3L = T 3R , we simply have δg α l L = g α l L − g α l L ,SM = 0 (4. 28) and no deviations occur at all. 10 They occur for the RH charged leptons l α R . We get , (4.29) 10 The coupling deviations above are defined in the field basis in which a completely localized UV fermion has SM gauge couplings, with no deviations. In this basis the fermion independent universal coupling deviation arising from gauge field mixing is encoded in the S parameter. where M l,α are the charged lepton masses (4.20) and it is understood that c α entering in (4.29) are determined as a function of c l , m l IR,α and M l,α . Equation (4.29) clearly shows that a small flavour symmetry breaking in the composite sector for charged leptons, i.e. m l IR,α ≪ 1, is disfavoured. Keeping c l and M l,α fixed, for small IR mass terms δg α l R ∝ 1/|m l IR,α | 2 . It is intuitively clear that δg α l R grow when the localized IR mass terms decrease, since one needs to delocalize more the RH leptons to get their correct masses, resulting in larger mixing with the KK spectrum and hence larger deviations. Alternatively, one has to decrease the value of c l , increasing the degree of compositeness of the LH leptons. Let us now turn to the neutrino Z couplings. Since ν α L are embedded into SO(5) multiplets with T 3L = T 3R , non-trivial deviations are expected. In the limit R ′ ≫ R and in the relevant range c l > 0, c α < 0, c ν < 0, we have . (4.30) The couplings of the W boson to the LH doublets and their deviations from the SM values, denoted by δg α ν L l L , are computed in the same way. In the same limit as (4.30), we find We demand that δg α l g α l < 2 0 / 00 , δg α ν g α ν < 4 0 / 00 (4.32) for LH and RH charged leptons and LH neutrinos. The LH neutrino deviations (4.30) are mostly sensitive to c l , requiring c l 0.49, with a mild dependence on the other parameters, while the RH charged lepton deviations are also very sensitive to the bi-doublet IR mass parameters, disfavouring small values of m l IR,α . Independently of m l IR,α , we get an upper bound on c l from the τ lepton, c l 0.56. 
As in many warped models with bulk fermions, the region c l ≃ 1/2 is preferred by electroweak bounds. LFV Processes and BKT Due to our choice of discrete symmetries, no 5D operators that reduce to the operators appearing in (3.1) and in (3.3) are allowed in the bulk or on the IR brane. The flavour preserving dipole operators responsible for lepton EDM are also forbidden by gauge invariance. Operators associated with the couplings g 4 and g 6 in (3.1) and g LV Flavour violation occurs in the neutrino sector and hence radiative decays mediated by neutrinos and charged gauge bosons do not vanish, A L , A R = 0. However, these are negligible, because effectively mediated only by heavy Majorana neutrinos. This is best seen by considering again the UV Majorana mass term as a mass insertion, but beyond ZMA, including all KK wave functions. The mass terms (4.6) can be written as follows: are not yet in their mass basis and (4.34) is not diagonal in flavour space. However, since these fields are very heavy, we can integrate them out. In the limit of infinite mass, this implies setting N h αR = 0. Eventually, we see that the remaining terms in the Lagrangian involving the fields N l(n) αR are flavour-diagonal with real coefficients. For finite mass, flavour and CP violating interactions are generated, but suppressed by the heavy Majorana mass and are completely negligible. It is important to study at this stage the impact of higher dimensional flavour violating operators in the model. These can only occur at the UV brane. The lowest dimensional operators of this form are fermion BKT. In principle all possible BKT allowed by the symmetries must be considered. In practice this is rather difficult to do, so we focus only on those BKT, whose presence with all others set to zero, causes flavour violation. From the table 1, we see that Z 3 forbids the appearance of flavour violating BKT for ξ e,α , while these are allowed for ξ l,α and ξ ν,α . There are in principle four possible flavour violating BKT at the UV brane, forL 1,αR , ν αR , ν αR and L αL . The KK expansion of fields with b.c. modified by both boundary mass and kinetic terms is quite involved. In order to simplify the analysis, we consider the BKT as a perturbation and treat them as insertions, like the Majorana mass terms. Namely, we take as b.c. for all fields the ones with vanishing BKT and then plug the resulting KK expansion into the BKT. This approximation is clearly valid for parametrically small BKT, but it is actually very good at the UV brane even for BKT of O(1), as we will see (see [32] for an analysis of fermion BKT in warped models). Among the 4 BKT above, the UV BKT forL 1,αR ,ν αR and ν αR are strongly suppressed (at least for the most relevant low KK modes), due to the form of the wave functions of these fields, and can be neglected. We are only left with where Z 2 × Z 2 constrainsẐ l to be of the formẐ l = U T B diag (z el , z µl , z τ l )U t T B . The coefficients z αl are dimensionless and their natural values are O(1), although smaller values ∼ 1/(16π 2 ) can also be radiatively stable. If one assumes a small breaking of S 4 → Z 2 × Z 2 at the UV brane, the relative differences in the z αl can be taken parametrically smaller than ∼ 1/(16π 2 ). In presence of the flavour violating operators (4.35), the couplings (3.1) and (3.3) become non-vanishing. Let us first write down, in the mass basis, the relevant interaction terms of our 5D Lagrangian that give rise to the effective couplings present in (3.1) and (3.3). 
We have where a and b run over all charged leptons, q runs over the light SM quarks u and d, i runs over all the neutrinos, V − and V 0 run over all charged and neutral gauge fields, respectively. By "all" we here mean all species of particles, including their KK resonances. For simplicity of notation, we have omitted the implicit dependence of the couplings in (4.36) on the gauge fields V − and V 0 . The couplings in (3.3) depend on how the quark sector is realized in the theory. We assume here that up and down quarks are genuine 4D fields localized at the UV brane and singlets under the flavour symmetry. The coefficients A L and A R are radiatively generated and receive contributions from 3 different classes of one-loop diagrams, where A The operators associated with the couplings g 1−6 are generated at tree-level by Higgs and neutral gauge boson exchange. By matching, we have where m W and m H are the masses of the SM W and Higgs bosons, respectively. The couplings in (3.3) are given by Strictly speaking, the effective couplings appearing in (3.1) and (3.3) should be evaluated at the scale of the decaying charged lepton mass, while (4.37), (4.38) and (4.39) give the couplings at the energy scale corresponding to the mass of the state that has been integrated out. Contrary to, say, non-leptonic quark decays, renormalization group effects in leptonic decays are sub-leading and can be neglected in first approximation. We can then directly identify the coefficients (4.37), (4.38) and (4.39) as the low-energy couplings relevant for the LFV processes. We have numerically computed the LFV processes by keeping, for each independent KK tower of states, the first heavy KK mode. For tree-level processes this approximation is quite accurate and should differ from the full result by O(10%), as we have numerically checked by keeping more KK states. For radiative decays, the approximation is less accurate and might differ from the full result by O(50%). This accuracy is enough for our purposes. If one demands a higher precision, a full 5D computation, as e.g. in [33], should be performed, although one should keep in mind that the limited range of validity of the effective field theory of 5D warped models puts a stringent bound on the accuracy one can in principle achieve. As we already said, all LFV processes are induced by the BKT (4.35). More precisely, LFV processes are induced by the relative differences in the z el , z µl , z τ l factors, since universal BKT simply amount to a trivial rescaling of the fields. Let us first give an estimate of the relative relevance of the couplings g 1 -g 6 and g L/RV (q) . They are induced by the tree-level exchange of Higgs and neutral gauge bosons, namely the SM boson Z and its first KK mode Z (1) , the first KK mode of the photon γ (1) , the first KK mode of the neutral SO(5)/SO(4) fields A3 (1) and A4 (1) , the first KK mode of the 5D gauge field Z ′(1) . The 5D fields Z, γ and Z ′ are related as follows to the SO(5) × U (1) X fields W 3L , W 3R and X: with g 5X the 5D coupling of the U (1) X field, determined in terms of θ W : (4.41) Due to the IR-peaked profile of the KK wave functions, the leading effect of (4.35) is to mix the LH zero mode fields l (0) αL among themselves. The main source of flavour violation clearly arises from LH fields. Since fermion Yukawa couplings are negligible, we have (4.42) The LFV couplings D eµ L in (4.36) govern the size of the relevant effective couplings g 4 , g 6 and g LV (q) . 
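Since only the non-universal part of the BKT violates flavour, it is instructive to write out the e–µ entry of Ẑ_l = U_TB diag(z_el, z_µl, z_τl) U_TB^t with the conventional TB matrix given earlier:

```latex
% e-mu entry of the UV brane kinetic term in the TB parametrization:
(\hat Z_l)_{e\mu}
 = \sqrt{\tfrac{2}{3}}\Big(-\tfrac{1}{\sqrt{6}}\Big) z_{el}
 + \tfrac{1}{\sqrt{3}}\,\tfrac{1}{\sqrt{3}}\, z_{\mu l}
 = \tfrac{1}{3}\,\big(z_{\mu l} - z_{el}\big).
```

Universal z_αl therefore give Ẑ_l proportional to the identity and no LFV, while the e–µ violation is controlled by the single combination δz = z_µl − z_el = 3(Ẑ_l)_eµ used below.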
The dominant LFV effects arise from the rotation and rescaling of l (0) αL necessary to get canonically normalized kinetic terms. Before EWSB effects are considered, no flavour violation is expected from the SM Z boson by gauge invariance. The leading deviations arise from the gauge fields Z (1) , γ (1) and Z ′(1) . It is straightforward to derive a reasonable accurate estimate for the couplings D eµ L : where g loc and g bulk are the BKT and bulk contributions to the gauge couplings, respectively. When EWSB effects are considered, LFV effects are transmitted to the SM Z boson as well. The resulting D eµ L (Z) is suppressed by the mixing, but the latter is approximately compensated by the absence of the mass suppression factors appearing in the couplings g i (4.38). Eventually, the SM Z boson contribution to LFV is of the same order of magnitude of that of the fields Z (1) , γ (1) and Z ′(1) . In (4.43), Z l is the effective BKT felt by the zero mode, which is obtained by multiplyingẐ l by the square of the zero-mode wave function (4.9) evaluated at the UV brane: For c l > 1/2, the factor entering in (4.44) becomes of O(1), while it is exponentially small for c l < 1/2. For the relevant region where c l ≃ 1/2 and ρ α ≃ 1, the effective BKT Z l is considerably smaller thanẐ l . For c l = 1/2 + δ, at linear order in δ, we have The effect of the BKT on the LFV is naturally suppressed. This is the main reason why most of the parameter space of our model successfully passes the bounds imposed by LFV processes. The suppression factor (4.45) also explains why the approximation of treating the BKT as insertions is valid even for O(1) BKT at the UV brane. Let us now consider the couplings A L and A R . It is immediately clear from the more composite nature of the muon with respect to the electron that A L ≪ A R , so in first approximation A L can be neglected. Higgs and neutral gauge boson mediated contributions A L . It turns out to be rather difficult to derive an accurate analytic formula for A (W ) R since neutrino, charged lepton and gauge boson Yukawa couplings significantly contribute to the branching ratio. An order of magnitude estimate can be obtained by focusing on a definite contribution that is always one of the dominant ones, although not the only one. It arises from the Yukawa couplings between the SM neutrinos l u(0) L and the RH singlet fields N l αR , the combination of N (0) αR and N (1) αR orthonormal to the heavy Majorana fermions N h αR . It is relevant because these Yukawas are sizable and N l α is typically the lightest fermion resonance in the model. We get where c is an order 1 coefficient and Y is the approximate flavour universal value of the Yuakwa coupling in the original basis of fields, before the redefinitions needed to get canonically normalized kinetic terms. We plot in figure 1 the bounds arising from µ → eγ and µ − e conversion in Ti (the most constraining case) as a function of δz ≡ z µl − z el = 3(Ẑ l ) eµ . Both processes depend quadratically on δz, as expected from (4.43) and (4.46). As can be seen from figure 1, the IR masses m l IR,α do not play an important role, provided that m l IR,τ is large enough, as required by δg τ l R . Thanks again to the suppression factor appearing in (4.45), the branching ratio is almost always below the current limit of 2.4 × 10 −12 for |δz| < 1. Using however the future bound expected from the MEG experiment of 10 −13 , we find that |δz| is constrained to be less than 0.25. 
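A naive illustration of how the quadratic δz dependence translates bounds is given below; it is a rough sketch that uses the quoted MEG projection as a reference point and ignores sub-leading contributions and possible cancellations.

```python
# Naive rescaling of the |dz| bound using the quadratic dependence of BR(mu -> e gamma)
# on dz = z_mul - z_el stated above (eqs. (4.43), (4.46)).  The reference point
# (|dz| ~ 0.25 at BR ~ 1e-13) is read off the quoted MEG projection and is used purely
# for illustration.
def dz_bound(br_limit, br_ref=1.0e-13, dz_ref=0.25):
    """Maximum |dz| compatible with a given BR(mu -> e gamma) limit, assuming BR scales as dz**2."""
    return dz_ref * (br_limit / br_ref) ** 0.5

print(f"current MEG limit 2.4e-12 -> |dz| < {dz_bound(2.4e-12):.2f}")   # ~1.2, i.e. O(1) BKT pass
print(f"projected limit   1.0e-13 -> |dz| < {dz_bound(1.0e-13):.2f}")   # 0.25
```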
The decay to three leptons µ → 3e and radiative τ decays are always well below the experimental bounds and are not reported. 11 We also performed an analysis for larger m 0 and for both, normal and inverted, neutrino mass orderings, with results identical to those shown in figure 1. Let us finally consider the bounds arising from lepton mixing, assuming vanishing phases. As we have already mentioned, in order to avoid too large deviations from TB lepton mixing, the IR localized neutrino mass terms m ν IR,α should be taken close to universal. Parametrizing m ν IR,α in the following way: m ν IR,e = m ν IR , m ν IR,µ = m ν IR (1 + δm ν IR ), m ν IR,τ = m ν IR , 12 we can 11 Notice that, due to the smallness of the couplings of the SM leptons to the neutral KK gauge bosons for c l 1/2, the contribution of A (W ) R to (3.2) is comparable to that given by the couplings g4 and g6. 12 We have chosen this particular parametrization of m ν IR,α , since in this way all mixing angles are subject to a deviation linear in δm ν IR and neither accidental cancellation nor accidental enhancement of the coefficient of the linear perturbation is encountered. analyze neutrino masses and mixing arising from the light neutrino mass matrix in (4.23) in an expansion in δm ν IR . We neglect the effects of BKT in the following and take c l = 0.52 and h = 1/3 so that the parameters ρ α are universal to a good approximation. For normally ordered light neutrinos with m 0 = 0.01 eV and mass square differences given by the experimental best fit values [37], the mixing angles turn out to be sin θ 13 ≈ 0.05 |δm ν IR | , sin 2 θ 23 ≈ 1 2 + 0.82 δm ν IR , showing that the requirement of having sin 2 θ 12 in the experimentally allowed 3σ range [37], 0.27 sin 2 θ 12 0.38, leads to the constraint At the same time sin 2 θ 23 remains within its 1σ range, 0.475 sin 2 θ 23 0.533. The reactor mixing angle sin 2 θ 13 takes as maximal value 4 × 10 −6 , well below the current and prospective future bounds. These statements are in agreement with our numerical results. The validity of the expansion in δm ν IR strongly depends on m 0 . For instance, by taking δm ν IR = 0.1, the perturbative expansion in δm ν IR breaks down for m 0 0.03 eV. We also performed a study for inverted mass hierarchy. In this case the above perturbative expansion is not valid for any value of m 0 . From the numerical results we see that large corrections to the solar mixing angle always arise, whereas the atmospheric mixing angle gets small corrections and the reactor mixing angle remains always very small. The large deviations of θ 12 can be easily understood by noticing that for inverted neutrino mass ordering the relative splitting between the two heavier light neutrinos is in general small compared to the scale m 2 0 + ∆m 2 atm 0.049 eV. Thus, the angle θ 12 associated with the mixing in this almost degenerate sub-sector is subject to large deviations from its initial TB value even for very small deviations δm ν IR from universality. As a consequence, the latter have to be as small as possible in the case of inverted neutrino mass ordering, which is most naturally achieved in the Z 2 -invariant model. In summary, in the case of normally ordered light neutrinos and a rather small mass scale m 0 , deviations from universality of m ν IR,α are admissible up to the level |δm ν IR | 0.04. 
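A quick arithmetic check of the quoted linearized shifts, using only the coefficients given above:

```python
# Linearized mixing-angle shifts for the Majorana model with normal ordering and
# m0 = 0.01 eV; the coefficients 0.05 and 0.82 are taken directly from the text and
# |dm| = 0.04 is the quoted maximal admissible deviation from universality.
dm = 0.04

sin2_th13 = (0.05 * abs(dm)) ** 2        # sin(theta_13) ~ 0.05 |dm|
sin2_th23 = 0.5 + 0.82 * dm              # sin^2(theta_23) ~ 1/2 + 0.82 dm

print(f"sin^2(theta_13) = {sin2_th13:.1e}")   # 4.0e-06, the quoted maximum
print(f"sin^2(theta_23) = {sin2_th23:.3f}")   # 0.533, the upper edge of the 1-sigma range
```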
Generically, the solar mixing angle, which is the most precisely measured one up to date in neutrino oscillation experiments, turns out to be the most sensitive one to corrections. For a neutrino mass spectrum with inverted hierarchy, the most natural situation is the one in which an additional accidental Z 2 exchange symmetry on the IR brane renders the mass terms m ν IR,α universal. The deviations of θ 12 and θ 23 are well under control in the Z 2 -invariant model for all values of m 0 and both types of neutrino mass hierarchy (with sin 2 θ 23 in the experimentally allowed 1σ range and sin 2 θ 12 in the 2σ range). The angle θ 13 is in this case always constrained to be very small, sin 2 θ 13 10 −6 , and cannot be detected. Uncalculable Corrections and τ Decays Contrary to the operators in (3.1), where only two flavours appear, LFV operators involving simultaneously three different flavours are not constrained effectively by our choice of discrete symmetries. Dimension 8, 4 fermion S 4 × Z 3 (Z (D) 3 ) invariant bulk (IR localized) operators reducing to flavour violating dimension 6 LL, RR and LR/RL operators can be constructed. Among these, the ones of the form (τ Γµ)(ēΓµ), (ēΓµ)(ēΓτ ), with Γ = γ µ , γ 5 , and their hermitian conjugates, can directly mediate the τ decays τ → e2µ and τ → µ2e. The branching ratio for these decays is of order 10 −8 [38]. The size of the couplings of these operators, uncalculable within the 5D theory, can be estimated by using naïve dimensional analysis. For IR brane operators (bulk operators give roughly the same result) we get where c n = −c α for RH leptons, c n = c l for LH leptons, κ U V is an O(1) dimensionless coupling and in the second equality we have plugged in the zero mode wave function of the SM leptons l (0) for the 5D fermion fields ξ. The most stringent bounds arise from the LL operators, since f −ce , f −cµ ≪ f c l . By matching with (3.1), we get Notice that flavour preserving dimension 8 operators are also potentially dangerous, contributing, e.g., to the deviation from the SM values of the couplings of leptons to the vector bosons. From a quick estimate, we find that the bound (4.51) is more constraining. Summarizing, demanding that the uncalculable contributions coming from higher dimensional operators are sufficiently suppressed results in a bound on the degree of compositeness of the SM leptons. Explicit 5D Dirac Model In this section we provide an explicit 5D gauge-Higgs unification warped model realizing the Dirac scenario outlined in subsection 2.2. The model is very closely related to the Majorana model of section 4, so we focus on the key differences between the two. The gauge symmetry and its breaking pattern is the same as before, while the flavour symmetry is slightly different: Table 2: Transformation properties of the 5D multiplets ξ l , ξ e and ξ ν under G flavour and their decomposition properties under the subgroups G flavour,UV and G flavour,IR in the Dirac model. ω 5 is the fifth root of unity ω 5 ≡ e 2πi/5 . 3 × Z 5 at the UV and IR branes, respectively. Like in the Majorana model, Z 5 and Z ′ 3 are included to constrain the number of terms allowed at the UV and IR boundaries. The particle content and b.c. for the fields are identical to those in the Majorana model, with the only exception of a crucial flip in the b.c. for the singlet neutrinoν in the 5: The flavour properties of the fields are summarized in table 2. Notice that the discrete symmetries forbid the appearance of any bulk or boundary Majorana mass term. 
The invariant mass terms at the IR and UV branes are α=e,µ,τ m l IR,α L 1,αLL2,αR + L αLLαR + h.c. with M UV as in (4.7). The phases of the IR masses m l IR,α can still be absorbed by re-defining the 5D SO(5) fields ξ e,α and one of the three phases contained in M UV through re-phasing the fields ξ ν,α . Again, we are left with two non-trivial phases coming from the UV mass terms. Mass Spectrum The KK expansion in the ZMA of the doublets L αL ,L αL and e αR is identical to (4.9), (4.10) and (4.13) and gives rise to the same charged lepton mass matrix (4.20). The KK expansion of neutrinos is of course different. The IR b.c. are not affected by mass terms, while the UV b.c. read ν αR = M UV,αβ ν βR , ν αL = −M * UV,βανβL (5.4) and lead to the following canonically normalized zero mode expansion where By computing the wave function overlap with the Higgs field, we get the neutrino mass matrix: where ρ α is defined as in (4.15). Thanks to the factor (R/R ′ ) in (5.7), the correct order of magnitude for neutrino masses is naturally obtained by choosing The only source of deviation from TB mixing in (5.7) is given by the factor ρ α , which should be contrasted with the situation in the Majorana model, where the deviations are given by ρ α and the neutrino mass terms m ν IR,α . Let us consider the relation between the 5D model and the general 4D analysis performed in subsection 2.2. The anomalous dimensions of Ψ α l L ,R , Ψ α l R ,L are the same as in the Majorana model. Sinceν L and ν L belong to the same 5D bulk multiplet, we have γν L = γ lL . The states denoted byν α in (2.13) are the lightest KK vector-like states of the tower of modes coming from ν α and ν α in the 5D model. Their masses are determined as the zeros of a certain combination of Bessel functions and are approximately flavour independent. For c l 0.44, we have The coefficients d α appearing in (2.13) are correspondingly flavour independent in first approximation. Along the lines of [21], the parameter ǫ defined in subsection 2.2 can be seen to arise from the mixing between two heavy elementary fermions Ψ L and Ψ R of opposite chiralities with a RH massless bound state ν R of the CFT, all in 3 of S 4 . Omitting flavour indices, the relevant Lagrangian is with γ νR = |c ν + 1/2| − 1 the anomalous dimension of ν R and c a universal O(1) coefficient. If we assume that Λ is flavour independent, all the non-trivial flavour dependence is in the mass term M , which is of the form (2.3). Integrating out the fields Ψ gives at leading order Rescaling ν R → µ γ νR +1 ν R to effectively get canonical kinetic terms for a free fermion field, gives The form of the coefficientsd α introduced in (2.14) will be determined in subsection 5.2. We anticipate here that they are flavour independent, coming from S 4 invariant bulk interactions. It is important to notice that for γν L = γ lL = c l − 1/2 (taking c l + 1/2 > 0), (2.14) shows a non-decoupling effect. For γν L > 0 the explicit (µ/Λ) suppression factor in (2.14) cancels the one coming from mν. For γν L < 0, the mass ofν is unsuppressed, but the LH leptons are mostly composite and one has to perform a field rescaling to get the canonically normalized LH field ν L , as in (2.7). Its effect is again to compensate for the explicit factor (µ/Λ) in (2.14). For any γν L , then, we getν L ∼ O(h)ν L , a result that leads to unsuppressed deviations of neutrino couplings to the W and Z from their SM values, as shown below. 
The non-trivial factor κ α in (5.7) comes in the 4D picture from corrections to the kinetic term of ν R we have neglected, appearing when the heavy fields Ψ are integrated out. They are completely negligible, given the suppression factor appearing in (5.6). The factors ρ α , as in the Majorana model, encode corrections to the kinetic term of l (0) L coming from the composite sector. Summarizing, the mass formula (5.7) is a particular realization of the more general expression (2.15) where all deviations from TB mixing are naturally suppressed. In the limit of an S 4 invariant IR Lagrangian, m l IR,α → 0, the neutrino mass matrix (5.7) leads to exact TB mixing. However, the factors ρ α disfavour composite LH leptons, because the more these states are composite, the smaller m l IR,α should be to keep ρ α ≃ 1. Bounds on gauge coupling deviations favour the region in parameter space where c l 1/2. Given that TB lepton mixing and (4.51) favour c l 1/2, the region c l ≃ 1/2 is again the one of interest. The mass spectrum of the neutral KK resonances in the Dirac model differs from that in the Majorana model mostly for the presence of the light statesν. For the benchmark values h = 1/3, c l = 0.52, (5.9) gives m α ν 200 GeV. The masses of the next-to-lightest charged and neutral KK fermion resonances (taking m l IR,α between 1/2 and 3/2) are slightly below 2 TeV, so approximately comparable to the spectrum found in the Majorana model. The KK gauge boson masses are obviously identical in the two cases. Deviations from Gauge Coupling Universality The realization of the SM charged leptons in the 5D Dirac and Majorana models is identical, so (4.28) and (4.29) continue to apply. In the Dirac model, ν αL are still embedded into SO(5) multiplets with T 3L = T 3R , so non-trivial deviations are expected. The holographic analysis is complicated by the presence of the 4D singlet fieldsν αL (x, z = R) that should be kept and eventually integrated out. 14 Omitting intermediate steps, one simply getŝ independently of any parameter. As anticipated below (5.12), a non-decoupling occurs in (2.14). 15 At leading order in h, (5.13) leads to the following universal deviation (5.14) By demanding (4.32), we get the universal bound independently of c l . In light of the bound (5.15), the Dirac model appears to be fine-tuned at O(1%) level, unless one advocates exotic hidden physics that is responsible for a fraction of the invisible partial width of the Z boson. LFV Processes and BKT Several considerations made in the Majorana model continue to apply in the Dirac case. The form of the interaction Lagrangian is the same as in (4.36), and the matching given by (4.38) and (4.39) still holds. The analysis in (4.39)-(4.45) is valid also here. 16 The bound on c l (4.51) coming from UV uncalculable corrections also applies here. In contrast to the Majorana model, radiative decays mediated by neutrinos and charged gauge bosons are no longer negligible, even in the absence of BKT. 17 Interestingly enough, in this case we have been able to find a reasonable analytic formula for A (W ) R , see (5.18), working in the flavour basis where Yukawa couplings are treated as perturbative mass insertions. Given the difficulty of finding such formulae, we report in the following some details on how (5.18) has been derived. We still adopt a KK approach and keep, for each 5D fermion field, only the first KK resonance. Even in this approximation, an analytic computation is complicated by the large number of fields that are present. 
The most important point to note is that the flavour violation comes from the singlet fields N (1) α and N (0) αR , arising from the expansion of the 5D fields (5.5). The zero modes N (0) αR , due to their ultra-localization towards the IR brane, are effectively decoupled and can be neglected. Among the massive KK gauge bosons, the leading contribution comes from the charged gauge field A (1) in the SO(5)/SO(4) coset, since it directly couples N (1) α to the SM leptons. We can then safely neglect the SU (2) L × SU (2) R massive gauge fields W 15 A similar non-decoupling effect has recently been noted in 5D models in flat space, see (3.15) of [20]. 16 Notice that in the Dirac model, in principle, we might have LFV BKT on the IR brane. They arise from the RH KK neutrinos that, in analogy to the zero mode fields (5.5), contain UT B in their expansion. However, this effect is indirect, and driven by the UV mass terms mUV,α. Their impact on the model is sub-leading. We have numerically checked it in the BKT insertion approximation. 17 The same is valid for CP violating effects, that we have not studied in the Dirac model. where the flavour index has been omitted. The Yukawa couplings Y N ν are flavour non-diagonal with roughly the following structure: Y N ν ≃ U t T B Y 0,N ν + δY N ν , where Y 0,N ν is a number and δY N ν a matrix in flavour space with |δY N ν | ≪ |Y 0,N ν |. The gauge couplings g L N are also flavour violating and have the approximate form g L N ≃ U t T B g L 0 + δg L , where g L 0 is a number and δg L a matrix in flavour space with |δg L | ≪ |g L 0 |. It turns out that the leading contribution to A R comes from the first term in square brackets in A (W ) R , see (C.1). Indeed, the potential enhancement of the second term coming from the muon mass in the denominator is compensated by the smallness of the Yukawa coupling responsible for a non-vanishing RH coupling C µ iR . The leading contributions coming from the W and A (1) exchange are depicted in figure 2. Notice that no Yukawa insertion in the loop is needed in the diagram (b), because the relevant gauge interactions are already flavour violating. The computation of the two diagrams gives: is the mass splitting before EWSB and for simplicity we have taken δg L all equal in flavour space. Terms proportional to m N (1) δY N ν are sub-leading and have been neglected in A W R . In A A (1) R we keep the leading terms in the expansion m N (1) ≪ m A (1) . Indeed, in most of the parameter space, due to the chosen b.c. for the singlet in the 5, the neutrinos N (1) α (which should be identified with the fieldsν α defined in subsection 2.2) are sensibly lighter than the SO(5)/SO(4) gauge field A (1) . In particular, for c l > 1/2, m N (1) become very light. Roughly speaking, it turns out that |δg L | |δm N (1) ) 2 ≪ 1 and the dominant contribution comes from the exchange of the SM W boson. Expanding (5.9) up to O(|m UV,α | 2 ) and using the ZMA formula (5.7), we get the following estimate for the branching ratio: The Yukawa coupling Y 0,N ν depends of course on the input parameters as well, but there seems to be no simple expression for it. We have checked, by comparison with the full numerical [34] and the expected future bound [35] given by the MEG experiment. The (red) line in the right panel is the experimental bound as given by SINDRUM II [36]. The plots refer to the Dirac model with c l = 0.52, c ν = 1.33, h = 1/3 and normal neutrino mass hierarchy. 
The IR masses m l IR,α are random numbers chosen between 0.05 and 1.5 for m l IR,e,µ and 0.5 and 1.5 for m l IR,τ (blue points) or all set to one (green diamonds). The masses m UV,α are chosen such that the lightest neutrino mass is m 0 = 0.01 eV and the best fit values [37] of the solar and atmospheric mass square differences ∆m 2 sol = 7.59 × 10 −5 eV 2 and ∆m 2 atm = 2.40 × 10 −3 eV 2 are reproduced using (5.7), corrected for the effect of the BKT. computation, that (5.18) is accurate at the O(10%) level. The branching ratio crucially depends on the values of c ν and c l . When the UV BKT in (4.35) are considered, the branching ratio of µ → eγ receives extra contributions of the form (4.46) that for |δz| 0.02 dominate. Unfortunately, it is not simple to derive a reasonably accurate analytic expression for BR(µ → eγ) in this case. We plot in figure 3 the branching ratio of µ → eγ and µ − e conversion in Ti for the Dirac model for c l = 0.52 and c ν = 1.33, setting all phases to zero. As shown in subsection 5.2, the deviation of g α ν L from its SM value puts a strong bound on h, h 1/10, see (5.15). We take here a value of h = 1/3 in order to compare the results for the LFV processes in the Majorana model with those in the Dirac model. 18 The bounds are mainly governed by the UV BKT, with a very mild dependence on m l IR,α . As can be seen from figure 3, both processes depend quadratically on δz, as expected from (4.43) and (4.46), and the relative difference |δz| of the UV localized BKT is constrained to be smaller than 0.15 in order to pass the actual MEG bound of 2.4 × 10 −12 . This becomes smaller than 0.05 to pass the expected future MEG bound BR(µ → eγ) < 10 −13 . For such small values of δz, cancellations between the contribution (5.18) and the one associated with the UV localized BKT can occur and further suppress BR(µ → eγ), see figure 3. The results for BR(µ → 3e) and B conv (µTi → eTi) are automatically below the current experimental bounds, as soon as BR(µ → eγ) is below the new limit set by the MEG Collaboration. Radiative τ decays, τ → µγ and τ → eγ, have branching ratios 10 −9 . The branching ratio of µ → eγ in the Majorana model is about two orders of magnitude smaller than the one in the Dirac model, for equal values of c l . In contrast, B conv (µTi → eTi) is similar, being governed in both cases by the same tree-level FCNC. Concerning the lepton mixing angles, we find sin 2 θ 23 well within the experimentally allowed 1σ range and sin 2 θ 12 still within the 2σ range [37]. The value of sin 2 θ 13 is smaller than 10 −8 . As already discussed in detail in the case of the Majorana model with non-universal masses m ν IR,α , an inverted neutrino mass hierarchy is disfavoured, because the solar mixing angle receives in general too large corrections, while the atmospheric mixing angle still remains within the experimentally allowed 1σ range and sin 2 θ 13 10 −8 . A noteworthy effect in the Dirac scenario is the rather large deviation of the lepton mixing matrix U P M N S from unitarity. For example, in the case of the set of parameters used to generate the plots shown in figure 3, we checked that the diagonal elements of U † P M N S U P M N S and U P M N S U † P M N S can deviate up to 0.05 from one. Their off-diagonal elements are in general much smaller. The non-unitarity of U P M N S is associated with the non-decoupling of the light states N (1) α . 
This is in sharp contrast with the results found in the Majorana model in which the deviation from unitarity of U P M N S is in general less than 10 −3 for the diagonal elements of U † P M N S U P M N S and U P M N S U † P M N S . Conclusions We have introduced a class of 4D HCHM based on the non-abelian flavour group S 4 × Z 3 , where lepton masses can be naturally reproduced and nearly TB lepton mixing is predicted. Both Majorana and Dirac neutrinos can be accommodated. A small breaking of the flavour symmetry for charged leptons is disfavoured in the composite sector, typically leading to a too large deviation of the coupling of the τ to the Z from its SM value. The latter observation is linked to the choice of representations of the discrete flavour group used for the LH and RH charged leptons and needed to forbid large flavour violating effects. It applies to more general constructions based on different flavour groups. The breaking of the flavour symmetry for neutrinos in the composite sector can be large in the Dirac model, whereas it must be small in the Majorana model to suppress too large deviations from TB mixing. We have also constructed two explicit realizations of our framework in terms of 5D gauge-Higgs unification theories. We have computed in detail the relevant bounds coming from LFV processes in the charged lepton sector and shown that no significant constraints arise in both models. In the Majorana model, all the spectrum of fermion resonances is above the TeV scale, 19 while in the Dirac case light (sub-TeV) neutral fermions appear and are responsible for a too large deviation of the coupling of neutrinos to the Z from its SM value. A particularly economic and successful Majorana model can be constructed by postulating a Z 2 exchange symmetry on the IR boundary which protects neutrinos from being affected by the flavour symmetry breaking. In both models, Majorana and Dirac, two CP phases are present. We have not studied in detail their effects on the lepton EDM, but have argued that in the Majorana model these are expected to be negligibly small. Overall, the 5D Dirac model performs worse than the 5D Majorana model but, of course, this does not necessarily imply that more natural HCHM based on the Dirac scenario cannot be constructed, rather we might have missed to find a better representative. Note Added During the final stages of the preparation of this paper new experimental results have been released by the T2K [39] and MINOS [40] Collaborations indicating that θ 13 = 0 is disfavoured at the level of 2.5σ and 89% confidence level, respectively. Subsequently, three groups [41,42,43] have performed a global fit of the available neutrino data finding at different levels of significance θ 13 = 0. The strongest indication of θ 13 = 0 is found by [41] at a level of (more than) 3σ, while the analysis in [43] shows that the mixing angle θ 13 is still compatible with zero at the latter level. The best fit value of sin 2 θ 13 is 0.01 ÷ 0.02 in all three analyses [41,42,43]. If such sizable value of sin 2 θ 13 will be confirmed in the future, our models (with Dirac and Majorana neutrinos, respectively) become disfavoured, because they generically foresee small values of sin 2 θ 13 below 10 −4 without additional (new) sources giving rise to θ 13 = 0. However, this does not rule out HCHM with flavour symmetries (broken in the manner as proposed by us) in general, since other mixing patterns, see e.g. [44], apart from TB mixing, can be implemented as well [45]. 
where (±1, ±1) indicate the transformation properties under the two Z 2 factors of Z 2 × Z 2 . Since S and U are not diagonal, the decomposition of the S 4 representations under Z 2 × Z 2 is non-trivial. For φ i ∼ 2 we get 1 for a triplet ψ i ∼ 3
A Defected Metasurface for Field-Localizing Wireless Power Transfer The potential of wireless power transfer (WPT) has attracted considerable interest for various research and commercial applications for home and industry. Two important topics including transfer efficiency and electromotive force (EMF) leakage are concerned with modern WPT systems. This work presents the defected metasurface for localized WPT to prevent the transfer efficiency degraded by tuning the resonance of only one-unit cell at the certain metasurface (MTS). Localization cavities on the metasurface can be formed in a defected metasurface, thus fields can be confined to the region around a small receiver, which enhances the transfer efficiency and reduces leakage of electromagnetic fields. To create a cavity in MTS, a defected unit cell at the receiving coils’ positions for enhancing the efficiency will be designed, aiming to confine the magnetic field. Results show that the peak efficiency of 1.9% for the case of the free space is improved to 60% when the proposed defected metasurface is applied, which corresponds to 31.2 times enhancements. Therefore, the defected MTS can control the wave propagation in two-dimensional of WPT system. Introduction Nowadays, wireless technologies are important on our life societies. It is not only on emerging wireless communication systems e.g., 5G, WiFi6, but also on wireless power transfer (WPT) systems. A WPT system can deliver the electrical energy from one to another across the air gap without the need for wires or exposed contacts. An example of the most commonly used WPT is the charging systems of mobile phones and electronics gadget devices. Recently, iPhone12 has been released and all models feature wireless charging [1]. Its power is up to 15 W with the most up-to-date Qi-standard [2]. However, most of the mobile wireless charging is not truly wireless because a phone needs to touch a charging station. Another application of WPT technology is an electric vehicle (EV) charging station [3][4][5], which requires high power and without touching the charging station. The required power is up to a few kilowatts. Although the two systems seem different, both share the same technique, which are used in a near-field WPT. The near-field WPT can be A Defected Metasurface for Field-Localizing Wireless Power Transfer DOI: http://dx.doi.org /10.5772/intechopen.95812 the MTS. There are different mechanisms to create the cavity on the MTS. In [30], the cavity is created by Fano-type interference, while the hybridization bandgap is used in [20] and MIW is applied in [25]. Motivated by these observations, we modify the cavity mode concept to create a new defected metasurface (MTS) for enhancing transferred efficiency and reducing the electromotive force (EMF). The MTS is based on a defected unit cell at the desired receiving position formed a twodimensional cavity with configuration of the conventional WPT system. Besides, a free space case, a conventional uniform MTS and the proposed defected MTS have been studied and compared. System configuration and metasurface characteristics 2.1 System configuration A common WPT system consists of a transmitting coil and a receiving coil. 
Firstly, we investigate four configurations of the conventional WPT systems with and without uniform metasurface: (i) large Tx and large Rx coils (LTLR) without metasurface, (ii) large Tx coil and small Rx coil (LTSR) without metasurface, (iii) LTLR with uniform metasurface, and (iv) LTSR with uniform metasurface as shown in Figure 1(a) to Figure 1(d), respectively. These four configurations are the reference configurations to compare with the proposed defected metasurface. The large size of coils is compared with the unit cell size of an MTS. For uniform MTS, all unit cells are identical, and its property is the negative effective permeability. The transmitting and receiving coils are used planar four-coils WPT system configuration and the operating frequency is 13.56 MHz band, which is an ISM (Industrial, Scientific and Medical) band. The large transmitting coil composes of the feed/load coil and three-turns planar resonator coil with a trace width of 3 mm and 17 mm. The gap between the adjacent loop of the resonator coil is 5 mm with the largest diameter of 240 mm. The radius of the feed/load coil is 48.5 mm. Due to the requirement of ISM bands; it is vital to fix the resonant frequency of WPT system. To resonate at a desired frequency of 13.56 MHz, a 68 pF chip capacitor is connected in series. The details are shown in Figure 1(a). For the purposes of comparison, two different sizes of the receiving coil are examined. The first size of the receiving coil is identical to the large transmitting coil and the second one is smaller than the transmitting coil about five times, which the largest diameter is 60 mm. It is a two-turn planar coil and loaded with two capacitors at the driving loop coil and resonant loop coil. The detailed design of the two-turn planar coil is similar to our proposed in [32]. To enhance the efficiency and focus the magnetic field, the uniform MTS is placed between the transmitting and receiving coils as shown in Figure 1(c) and Figure 1(d). All transmitting coil, receiving coil and unit cell are designed on FR-4 material with a dielectric constant (ε r ) of 4.3 and loss tangent (tan δ) of 0.025. Then, to study the performance of the proposed defected metasurface on the WPT system, a system configuration is shown in Figure 2. It is composed of three parts including a transmitting coil, a defected MTS and a receiving coil. For the LTLR cases, the separation between the transmitting-to-metasurface and receivingto-metasurface is the equal to 240 mm, so the distance from the transmitting to receiving coils is the total 480 mm as shown in Figure 2(a). As we mentioned previous section, users prefer to able to freely location the receiver. A model of freely multiple receiver locations shows in Figure 2(b). This configuration is not only freely movement the receiver, but also like misalignment between transmitter and receiver when the location is positioned on No. 2 to No. 6, so the efficiency deviates from the center (No. 1). Then, the effects of localization field have been extensively studied when the small receiving coil is placed closer to the defected MTS as shown in Figure 2(b). The distance between the small receiving coil and the defected MTS is 40 mm, whereas the transmitting side is keeping the same. Numbers (No. 1 to No. 6) on unit cells are positioned of the cavities or hotspots formed in the defected MTS. The defected unit cell formed cavity is designed with different resonant frequencies from the uniform MTS. 
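The capacitor tuning of the resonator can be illustrated with the standard series-LC resonance relation. The sketch below assumes a simple series-LC model, which neglects parasitic capacitance and the mutual couplings of the four-coil system, so the inferred inductance is only indicative.

```python
import math

# Series-LC resonance check for the transmitting resonator described above: the text
# states that a 68 pF series chip capacitor tunes the three-turn coil to 13.56 MHz.
f0 = 13.56e6                              # target ISM-band frequency, Hz
C = 68e-12                                # series chip capacitor, F

L = 1.0 / ((2.0 * math.pi * f0) ** 2 * C)
print(f"implied coil inductance: {L * 1e6:.2f} uH")      # ~2 uH

# Conversely, for a coil of known inductance L the required tuning capacitance is
# C = 1 / ((2*pi*f0)**2 * L).
```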
The resonant frequency of the defected unit cell is higher than that of the uniform cells and can be tuned using series chip capacitors. We examine the effect of the defect position on the defected MTS using an EM simulator and measurements; the results are presented and discussed in the following sections. Metasurface characteristics The metasurface is constructed from locally resonant unit cells on a deep-subwavelength scale. To realize a compact metasurface while retaining sufficient magnetic-field enhancement, an array of 5 × 5 unit cells is chosen in this work. Each unit cell is a single-sided planar 5-turn spiral loaded with a capacitor so that it resonates at the desired frequency, as shown in Figure 3; the resonant frequencies for the 117 pF and 98 pF loadings are 11.88 MHz and 12.95 MHz, respectively. The frequency response of the 98 pF loading is also shown for comparison because this configuration is used for the selectively created cavity; the resonant frequency of the cavity cell is slightly higher than that of the uniform unit cell. The effective permeability of the proposed MTS is then obtained from CST simulations [33] by extracting the S-parameters of the proposed unit cell, as shown in Figure 3(b). Since the WPT system is based on magnetic-field coupling, with the incident magnetic field perpendicular to the plane of the MTS, the MTS follows the frequency-dispersive Lorentz model and produces an effective negative permeability. At 13.56 MHz, both capacitor loadings yield negative effective permeability. The resonant frequency can be adjusted by changing the series capacitor [18]. Results and discussion To compare the transfer efficiency (η) of all configurations, we use the magnitude of |S21|, which can easily be measured with a vector network analyzer (VNA) in experiments. From |S21| and |S11|, the transfer efficiency is η = |S21|²/(1 − |S11|²); when the network is matched at both ports, the transfer efficiency equals |S21|². A comparison of the simulated |S21| for the four reference cases and for the LTLR with the defected MTS (Figure 2(a)) is shown in Figure 4. Compared with free space, the magnitude of S21 increases when a uniform or a defected MTS is used. At 13.56 MHz, the uniform MTS case gives the maximum power transfer. The free-space case (without MTS) and the uniform MTS case each show only a single peak. The magnitude of S21 for the free-space case (|S21| = 0.24) is markedly lower than that of the uniform MTS (|S21| = 0.69). When the defected unit cell is placed at each position, the magnitude of |S21| separates into two or more peaks, a phenomenon called frequency splitting, whose mechanism is entirely different from that of a two-coil system as a function of coil separation. In a two-coil system, when the distance between the transmitting and receiving coils becomes smaller than a threshold value, two split frequencies appear because of magnetic over-coupling, and many research efforts have addressed system performance against frequency splitting using optimization and compensation methods such as non-identical resonant coils [34]. The frequency splitting of the defected MTS instead occurs because the resonant unit cells are non-identical. It can be explained in terms of Fano interference [30,35], which is highly sensitive to the position of the defect within the periodic structure.
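The efficiency definition above is straightforward to apply to VNA data. A small Python helper follows; the |S21| and |S11| readings passed to it are hypothetical values used only for illustration.

import numpy as np

def transfer_efficiency(s21, s11):
    """Transfer efficiency from two-port S-parameters, using the definition in
    the text: eta = |S21|^2 / (1 - |S11|^2). Reduces to |S21|^2 when port 1 is matched."""
    s21 = np.abs(np.asarray(s21))
    s11 = np.abs(np.asarray(s11))
    return s21 ** 2 / (1.0 - s11 ** 2)

# Hypothetical VNA readings at 13.56 MHz (illustration only):
print(transfer_efficiency(0.69, 0.10))   # well matched -> close to |S21|^2 = 0.476
print(transfer_efficiency(0.24, 0.60))   # poorly matched case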
It is observed that there are two peaks, and the first is smaller than the second except at positions No. 5 and No. 6. The first of the split peaks shifts toward the resonant frequency (13.56 MHz) as the defect position moves away from the center. As the defect position moves outward from the center of the MTS, the |S21| and the frequency of the first peak also increase slowly. With the defect at position No. 4, the magnitude of |S21| reaches a minimum of 0.106 at 13.56 MHz. These results show that the defected MTS does not improve the efficiency for the case of large transmitting and receiving coils, because the magnetic field is confined to the defected cell, as shown in Figure 5(c). In practice, a transmitting coil larger than the receiving coil is used for charging portable and implantable devices; the typical size ratio between transmitting and receiving coils is large, so the efficiency is low. The configurations with a large transmitting side and a small receiving side with uniform and defected MTSs are shown in Figure 1(d) and Figure 2(b), respectively. A comparison of |S21| at the six positions for the uniform and defected MTSs is shown in Figure 6. It can be seen in Figure 6 that ... Meanwhile, the defected MTS shows |S21| ranging from 0.64 to 0.57, which confirms that the proposed defected MTS provides a relatively constant transfer efficiency over the whole area of the MTS. In addition, the defected MTS increases the transfer efficiency compared with both the free-space case and the uniform MTS. This proposed configuration contrasts with [20,24], in which the cavity is constructed from an array of defected unit cells. The proposed defected MTS can therefore enhance the transfer efficiency even with its own loss. The magnetic field distributions are also used to compare the WPT systems, as shown in Figure 5. When either the uniform or the defected MTS is inserted in the system, strong surface waves exist on both sides of the MTS and are responsible for the increased magnetic coupling. The magnetic field intensity in free space is clearly weak compared with the metasurface cases. For free space and the uniform MTS with the LTLR configuration, the receiving coil is placed at the center; the intensity in both cases is more concentrated at the center than at the edge, and the uniform MTS gives a higher intensity because the magnetic field is enhanced and focused by its negative permeability. In the case of the uniform MTS in Figure 5(b), a focusing effect of the magnetic field appears because of the negative-refractive lens behavior, and the field gradually decays from the center to the edge, leading to better efficiency. When the defected unit cell is selected at No. 4, as shown in Figure 5(c), the focusing effect of the magnetic field appears at position No. 4; for the other unit cells of the defected metasurface, however, the magnetic field intensity drops significantly compared with the uniform MTS. Consistent with this field distribution, the |S21| of the defected metasurface is not as high as that of the uniform metasurface at 13.56 MHz, as shown in Figure 4, because the magnetic field intensity is confined to one specific position.
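Under the matched-port approximation (eta ≈ |S21|²), the magnitudes quoted above translate into rough efficiencies as follows. This is a sketch using only the end-point values given in the text, with matching assumed.

# Matched-port approximation applied to |S21| magnitudes quoted in the text.
for label, s21 in [("defected MTS, best position", 0.64),
                   ("defected MTS, worst position", 0.57),
                   ("LTLR with defect at No. 4", 0.106)]:
    print(f"{label}: |S21| = {s21:.3f} -> eta ~ {s21 ** 2:.1%}")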
Figure 7 shows the magnetic field intensity distributions of the LTSR configuration with the uniform and defected MTSs at 13.56 MHz. On the uniform MTS in Figure 7(a), the magnetic field spreads over a relatively large area, whereas the defected MTS shows a focusing effect of the magnetic field arising from the cavity, which suppresses the field elsewhere. It is observed in Figure 7(b) that strong magnetic-field confinement is realized on the defected MTS at the receiving position, and the field outside the cavity remains relatively low and uniform. This means that the defected MTS creates field localization, so it can enhance the transfer efficiency for a small receiver and reduce EMF leakage. The effect arises because the defected unit cell forms the cavity resonance while the surrounding cells resonate at a lower frequency; when the resonant frequency of the surrounding cells falls into the hybridization bandgap, the negative permeability of the metasurface forms a stopband for the cavity, so the magnetic fields are prohibited from propagating outside the cavity. For the uniform MTS, the magnetic field magnitude is distributed relatively evenly at about 14-22 dB along the surface of the MTS, whereas for the defected MTS it stays below 6 dB outside the cavity. The defected MTS can therefore not only enhance the field at the receiving coil but also suppress the field elsewhere without an additional shielding box or ferrite. To experimentally validate the performance of the uniform and defected MTSs, several experimental studies were conducted. Figure 8 shows the prototype of the proposed metasurface and the experimental setup using a VNA (Rohde & Schwarz model ZVB20). The fabricated MTS consists of a 5 × 5 array of unit cells; Figure 8(b) depicts the LTLR case with the metasurfaces. In the LTLR case (Figure 8(b)), the large receiving coil is not fixed at the center but is moved to follow the defected unit cell. Figure 8(c) shows the LTSR case with the metasurfaces, in which the receiving coil can be moved over all unit-cell locations. The distance between the receiving coil and the MTS is fixed and maintained using a plastic pipe; since the pipe is thin and non-magnetic, its effect is negligible. The connections are made with standard SMA connectors through two identical low-loss cables. A standard SOLT (short-open-load-through) calibration was performed over the desired frequency range before measurement, so the ends of the cables serve as the reference planes for the S-parameters. Figure 9 shows the measured results of the LTLR and LTSR cases with the uniform and defected MTSs. Figure 9(a) shows the variation of the measured efficiency with the defect position; the defected cell at position No. 0 corresponds to the uniform MTS case. As seen, the efficiency shows a significant dependence on the receiving position. The WPT system was measured directly at the operating frequency of 13.56 MHz using the two-port method with the VNA. It is clearly seen in Figure 9(b) that the transfer efficiency of the defected MTS is quite flat, meaning that the defected MTS can enhance the efficiency regardless of the position of the receiving coil at various distances. Figure 9(b) shows the simulated and measured power transfer efficiency of the LTLR and LTSR cases with the uniform and defected MTSs. The measured transfer efficiencies for the LTSR case are 51.5%, 50.1%, 48.3%, 43.2%, 39.5%, and 26.6% at receiver positions No. 1 to No. 6, respectively. Contrastingly, for the uniform MTS, the efficiency decreases markedly when the receiving position moves away from the center, since the magnetic field intensity at the edge is lower than at the center for the uniform MTS.
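A quick summary of the measured LTSR efficiencies listed above (positions No. 1 to No. 6) illustrates how flat the defected-MTS response remains across most of the surface.

import statistics

# Measured LTSR efficiencies (%) at receiver positions No. 1-6, as listed above.
eff = [51.5, 50.1, 48.3, 43.2, 39.5, 26.6]

print(f"mean = {statistics.mean(eff):.1f}%")
print(f"stdev = {statistics.stdev(eff):.1f}%")
print(f"drop from center (No. 1) to edge (No. 6) = {eff[0] - eff[-1]:.1f} points")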
Conclusions In this chapter, we propose a defected MTS for enhancing the transfer efficiency and reducing EMF leakage. The MTS is based on a defected unit cell at the desired receiving position, forming a two-dimensional cavity. It improves the localization of the magnetic field, so strong confinement at the receiving location can be realized. Compared with previous defected MTSs, especially in the case of a small transmitter, the proposed method is a simple way to create a cavity over a two-dimensional area, because only a single unit cell is used and controlled, in contrast to a series or array of defected unit cells forming a hopping or MIW route. The power transfer efficiency with the proposed defected MTS increases significantly, from 1.9% (free-space case) and 38% (uniform MTS) to 60%, with a 4:1 size ratio between transmitter and receiver. Moreover, the confinement reduces EMF leakage around the system without an additional shielding box; with the defected MTS, EMF leakage is reduced by about 10 to 15 dB at 13.56 MHz. When the defected MTS is integrated in the WPT system, however, the size of the receiver affects the overall efficiency: a receiver comparable in size to the unit cell achieves better efficiency than a larger receiver. Consequently, the defected MTS is not suitable for the case of large transmitting and receiving coils because of the frequency-splitting effect. We hope that, although the proposed defected MTS operates at low frequency, the structure can easily be scaled and tuned by adjusting the coil geometry and changing the capacitor value.
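As a simple arithmetic check of the quoted figures, the ratio of the peak efficiencies can be computed directly; the small difference from the 31.2-fold value stated earlier is attributable to rounding of the quoted percentages.

free_space = 1.9   # % peak efficiency quoted for the free-space case
defected   = 60.0  # % peak efficiency quoted with the defected metasurface
uniform    = 38.0  # % quoted for the uniform metasurface

print(f"defected vs free space: x{defected / free_space:.1f}")   # ~31.6 from the rounded figures
print(f"defected vs uniform MTS: x{defected / uniform:.2f}")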
2021-05-10T00:04:15.196Z
2021-01-30T00:00:00.000
{ "year": 2021, "sha1": "a2fb9ddfe1cd267a5cc4ddb6365db35877b84917", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/74990", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "c560e03122b7e3f99f099b623e42193c9a93015a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
238770425
pes2o/s2orc
v3-fos-license
Green Job Opportunities Through the Curriculum and Community Enterprise for Restoration Science in New York City This paper examines the current understanding of the green economy movement and the critical role that education plays in attracting a viable workforce for this relatively new crusade. By connecting youth with the importance of environmental concerns in their community, tangible opportunities for sustainable change are created. Giving human agency to some of the most marginalized populations in New York City, and the opportunity to experience environmental challenges in the community in which they live, exposes these students to a plethora of enriching and rewarding employment opportunities. By combining stewardship of their environment with formal and informal education, the Curriculum and Community Enterprise for Restoration Science in New York City presents multiple pathways to employment and educational opportunities in the green economy. ... growth increased the growth of commerce, triggering the over-harvesting of the oyster. Differences in how species respond to physical conditions lead to changes in their relative abundance within an ecosystem as species decline or increase in abundance, colonize new locations, or leave places where conditions are no longer favorable (Poloczanska et al., 2013). The population growth in New York City had a ripple effect on the oyster population in the harbor. Gradually, oyster beds began to dwindle, and with the contamination of the waters in the estuary, the oysters became too polluted to eat. A ban on harvesting oysters from New York Harbor was enacted when the oysters were deemed too polluted for consumption. New York Harbor and its watershed areas continued to suffer from the expansion of the city, and both its biotic and abiotic systems deteriorated through increased contamination and pollution. It was not until 1972 that the Clean Water Act was passed (EPA, 33 U.S.C. §1251 et seq., 1972). Although the Federal Water Pollution Control Act was passed in 1948, its mandates on industrial pollution were limited. In 1972, the act was restructured and expanded, requiring industrial and municipal facilities to obtain permits limiting their direct discharges into surface waters in the United States. Gradually, there was a resurgence in the quality of the waters of New York Harbor, but oyster restoration efforts were not introduced for several more decades. Environmental restoration is a term that has been used extensively in the past few decades. The need for restoration of the environment is in part a consequence of natural phenomena but, to a greater extent, of the actions of humankind. The degeneration of natural habitats and the destruction of natural resources have consequential effects on both the biotic and abiotic components of the world's ecosystems. Energy flows connect the biotic and abiotic components and maintain the delicate balance of these dynamic and homeostatic systems, and nothing strains that balance more than the rapid expansion of the human population and the demands it places on the environment. By meeting, and most times exceeding, our needs, we have pushed environmental degradation to its breaking point. Unfortunately, the pollution of the waterways around New York City was not the only area affected by the adverse influence of human impact. Global degradation of water, air, and land was occurring at a rapid and unchecked pace.
As noted by several historians, rapid population growth and imbalances in the distribution of population relative to natural resources have increased environmental degradation, and this is particularly evident in the pollution of coastal areas, especially where urban populations grow rapidly and critical resources are depleted at an accelerated rate (Furtado, Belt, & Jammi, 2000). Because of their access to transportation and water, most metropolitan areas around the world are situated on the coasts of continents and support economic and cultural exchange. The boundary between the land and the ocean is home to coastal environments. Coastal environmental systems play a vital role in human well-being and economic capital, providing resources and employment opportunities, but in turn they are also vulnerable to anthropogenic impacts (Sherbinin, Carr, Cassels, & Jiang, 2007). Anthropogenic factors (the influence of humans on the natural world) emerge as the dominant driver of change, both as planned exploitation of coastal resources and as unforeseen side effects of human activities. The coastal active zone, the buffer area between permanent land and water, hosts a wide range of precious marine bio-systems (Mentaschi, Vousdoukas, Pekel, Voukouvalas, & Feyen, 2018). Coastal areas host key infrastructure, ecosystems, and about 40% of the world's population (Martinez et al., 2007). In the United States, the environmental movement began to take shape in the 1960s and 1970s. The use of toxic chemicals in insecticides was brought to the forefront with marine biologist Rachel Carson's scathing exposé, Silent Spring (1962). The book not only documented the misuse of chemical pesticides but also called into question the role of the federal government and its environmentally altering decision-making. Other events, such as the pollution of Lake Erie, which was so polluted that it caught fire in 1969, galvanized the need for laws to protect the environment and meaningful consequences for violations of those laws. The 1970s brought forth several monumental legislative acts intended to create a more sustainable environment. In 1970, the Clean Air Act became the basic legislative framework for the control of air pollution. This was quickly followed by the Clean Water Act of 1972, regulating water quality and monitoring the discharge of pollutants into the water. Today, legislation can be found at the local, national, and international levels. In response, metropolitan cities such as New York City and Los Angeles have initiated large urban sustainability plans (National Academies of Sciences, Engineering & Medicine, 2016). ... career opportunities in the green economy at the foundational level. Angela Calabrese Barton (2012) cites the need for a place in science education for disengaged youth, and further emphasizes the importance of the community immersion model and a deep and critical connection to the community. An example of this educational opportunity, woven into workforce development, STEM education, and sustainability, is the Curriculum and Community Enterprise for New York Harbor Restoration. The Billion Oyster Project (BOP) is a citizen science restoration project that has been active in New York Harbor since 2008. It was inspired by the Chesapeake Bay Oyster Recovery Project, which succeeded in replanting approximately 6.7 billion oysters in the bay.
BOP has its origins in the restoration activities started by the faculty and students at the Urban Assembly Harbor School, located on Governors Island in New York Harbor. The goal of the Billion Oyster Project is to restore 1 billion oysters to New York Harbor by the year 2035. In August 2014, the National Science Foundation (Award # DRL 1440869) awarded $5,374,972.00 to the Curriculum and Community Enterprise for New York Harbor Restoration in New York City Public Schools Project (CCERS). This added more structure and several new components to the Billion Oyster Project by creating a model of curriculum and community enterprise within the New York City public school system. Middle school students attending New York City schools in underserved communities were selected to study New York Harbor and its impact on the community within and surrounding it. Historical, geological, and environmental impacts are explored with the intent of eliciting empathy for the restoration of the native oyster population while instilling motivation for stewardship of the harbor, heightened motivation in STEM disciplines, and a possible pipeline to future career paths. Students experience higher levels of engagement and take a deeper approach to learning when they can apply what they are studying to address a real-world problem (Lombardi, 2007). To accomplish these goals, the BOP CCERS project engages students in the restoration of native oyster (Crassostrea virginica) habitats in New York Harbor through a combination of field research and classroom investigations. The project is based on five separate but interconnected pillars: (1) a teacher education curriculum, created by CCERS staff and content experts in the partnership; (2) a student learning curriculum, created by CCERS staff and classroom teachers and implemented in NYC middle school classrooms; (3) a digital platform for resources, housing lessons, units, empirical data from the field stations, and research papers; (4) an after-school STEM program, conducted through the joint efforts of Good Shepherd Services, the New York Academy of Sciences, and the NYC Department of Education; and (5) public exhibits, at the Hudson River Project headquarters and the New York Aquarium. The project is based on partnerships with several stakeholders, including Pace University, the New York City Department of Education, the Columbia University Lamont-Doherty Earth Observatory, the New York Academy of Sciences, the New York Harbor Foundation, Good Shepherd Services, and the Hudson River Project, among others. The strength of many community-based partnerships (Minkler, Vásquez, Tajik, & Petersen, 2008) is the ability of the partners to demonstrate a strong commitment to collaborate with all stakeholders as well as with the other partners in the project. The oyster filtration has removed approximately 16 million pounds of nitrogen. The project has been in existence since 2014, and its major impacts can be seen in Table 1. In February 2018, the National Science Foundation (Award # DRL 759006) awarded an additional $2,000,000.00 to the Curriculum and Community Enterprise for a Keystone Species in New York Harbor. In addition to the original model and its areas of implementation, the project is expanding its real-world, problem-based curriculum model to focus on practices that increase student motivation and capacity to pursue careers in fields of science, technology, engineering, or mathematics (STEM).
Science for all is a moral and ethical imperative. It opens the door to high-paying professions and demystifies urban environmental issues (Calabrese Barton, 2002). It levels the playing field by giving all students an equal opportunity. Continuing to focus on New York Harbor and oyster restoration, students and teachers in grades K-12 conduct field research in support of restoring native oyster habitats, and the project is implemented by a broad partnership of institutions and community resources, including Pace University, the New York City Department of Education, the Columbia University Lamont-Doherty Earth Observatory, the New York Academy of Sciences, the New York Harbor Foundation, the New York Aquarium, and others. Each of the pillars in the project has been expanded to accurately reflect the additions to the program. The project model includes several interrelated components, including a teacher education curriculum with a component for elementary teachers that focuses on restoration science; a student learning curriculum; a digital platform for project resources; an aquarium exhibit; an after-school STEM mentoring program and a near-peer mentoring program; community-based restoration science hubs; and advanced methods in restoration science for high school students, including genetic barcoding (species ID), environmental DNA sampling and analysis, bacterial monitoring, and basic water chemistry analysis. The project targets students in low-income neighborhoods with high populations of English language learners and students from groups underrepresented in STEM fields and education pathways. The project directly involves 97 schools, over 300 teachers, and approximately 15,000 K-12 students over a period of four years. The Confluence of Green Careers and the Billion Oyster Project Curriculum and Community Enterprise for Restoration Science BOP-CCERS has become widely recognized and sought after as a comprehensive STEM education program within the New York City Department of Education. Its fundamental goal has been to incorporate authentic environmental restoration science and inquiry-based research into the educational experience of students in predominantly low-income urban public schools. Of great significance in the pedagogical world is the fact that thousands of students who might otherwise have been lost in the complexities of the educational system in New York City are experiencing education through connections to community and nature. In a study on the impact of real-world learning and minority student achievement, researchers documented a statistically significant increase in the achievement of low-income children when coupled with explicit support and high standards (Cervantes, Hemmer & Kouzekanani, 2015). As citizen science becomes more holistic, it embodies the responsibility of youths who are prepared to engage real concerns in their community (Mueller, Tippins & Bryan, 2011). However, the enduring objective is focused on systemic outcomes that encourage and equip students to pursue practicable STEM career pathways. There is an urgent need for students to understand the employment opportunities in STEM-related career pathways. As reported in 2015, ninety-three of the 100 STEM occupations in the United States had wages above the national average (Fayer, Lacey & Watson, 2017), and STEM occupations are growing at a much more robust rate than non-STEM occupations.
The showcase for the connection between sustainable environmental education and green careers is the Urban Assembly New York Harbor School. The New York Harbor School is a secondary school in the New York City Department of Education and is unique in many ways. Located on Governors Island in Lower New York Harbor, its 550 students can only reach their school by ferry. Once there, they are offered coursework found in no other school in New York City. The school's Career and Technical Education (CTE) department offers seven programs of study, all focused on maritime content and skills, each with practicing specialists acting as advisors and instructors. Research finds that high school CTE programs are associated with higher future employment and earnings (Jacobs, 2017). Each of the programs extends from the classroom to include a work-based learning component and provides a real-world context for practicing maritime skills. A section of the school's overview, found in the 2018-19 School Quality Snapshot (New York City Department of Education, 2020), reads: "We work to develop authentic activities on, around and related to the water that creates a sense of responsibility to New York Harbor and develops a new generation of maritime advocates, enthusiasts, employees, and decision-makers". The seven programs offered at the Harbor School are: 1. Marine Policy & Advocacy - this program is designed to guide students through the principles and applications of environmental and marine policy and law by studying the maritime industry and oyster restoration. Students develop a clear understanding of the Public Trust Doctrine, the Clean Water Act, and other applicable statutes and regulations in state and city marine and environmental policy. The students also attend lobbying hearings related to the Albany and City legislatures and visit with Pace University faculty in the Marine Policy Law Program. Skills in analytics, web design, public speaking, writing, and critical thinking are developed, allowing students to pursue college and begin to think about the fields of environmental education, environmental law, environmental journalism, environmental management, environmental ethics, environmental policy, urban planning and design, international environmental politics, and sustainable development management. The role of citizen involvement in policymaking on community, educational, and governmental policy for water resources is emphasized. 2. Marine Biology Research - students create aquatic ecosystem models through three college-level courses, learning the basics of biology, ecology, oceanography, and statistics. The program consists of two paths: GIS - geographic information system (map making) - and Indy - independent marine research (statistics). With the help of a scientist, these come together to create an environmental project and a solution to a resource management issue for the Hudson-Raritan Estuary. In the geographic information system path, students learn strong basic and intermediate skills in map-making. These skills are then applied to real-world projects as the students work in the Indy program. The Indy students acquire college-level reading, writing, and statistics skills and apply them to the real-world model. A scientist-mentor helps complete the research, using the student data to propose resource management solutions for the Hudson-Raritan Estuary.
Included throughout the program are the management of projects, the submission of professional reports, and presentations at national and international conferences. Finally, a focus is placed on career development through career exploration techniques, resume writing, and ePortfolio development. 3. Aquaculture - aquaculture is an industry that is continually growing in response to the anthropogenic impact on the environment. Due to the growth of the human population and overfishing, the aquaculture industry has been expanding, both locally and globally. This program allows students to understand the rudiments of marine biology, environmental science, water chemistry, and animal husbandry through real-world projects. In concert with the Billion Oyster Project, students actively participate in the oyster restoration efforts by preparing the oysters for rebuilding. Students learn nutrient balance, water quality, feed ratios, filtration, system design, hydrology, fish disease identification and treatment, and entrepreneurship by designing recirculating aquaculture systems. Research projects, fieldwork, laboratory activities, classwork, internships, and work-based learning activities are all included to prepare the students for college and the workforce. 4. Ocean Engineers - this program is designed to expose students to the areas of engineering and technology. Using three-dimensional modeling programs, students are tasked with designing, building, testing, and making alterations to remote underwater vehicles. These vehicles are used to probe the harbor and are equipped with cameras and lights, making it possible to study marine life. Attaining certification in SolidWorks, the students then have the opportunity to apply this knowledge in the maritime industry or to further pursue marine engineering and technology in college. This program is unique to the Harbor School. 5. Professional Diving - students in this program have the opportunity to participate in the only such program in the country. The location of the Harbor School is the ideal setting for this diving program, which allows the students to do underwater research connected to the CCERS Billion Oyster Project. Certification in entry-level open water diving and advanced open water diving prepares the students for both commercial and recreational work in the diving industry. The students also receive a foundation in scientific diving, which can be applied if they decide to study marine science after high school. Among the skills learned are CPR and first aid, care for marine life injuries, emergency oxygen treatment, and rescue diving techniques. The knowledge and use of diving equipment such as dry suits, full face masks, harnesses, and tethers, as well as underwater video and camera equipment, are all part of this unique program. Career and Financial Management training is given to the students so that they can either work in recreational diving upon graduation or intern in professional diving opportunities like those offered at the New York Aquarium. 6. Vessel Operations - the focus of this program is geared toward commercial vessels and small passenger vessels. Students receive training in boat handling, navigation, safety at sea, and seamanship. This training prepares the students to become deckhands with the opportunity to advance to higher-level maritime operations and management positions.
The Harbor School's vessels serve as the training ground for this program, and the students have firsthand experience as they practice mooring, overboard drills, and fire drills on ships such as the Indy 7 (originally part of the navy's fleet during the Vietnam War). 7. Marine Systems Technology - a vital part of the maritime industry is engineering. Marine technicians and welders, marine mechanics, and marine engineers all make a vital contribution to the maritime industry. Students in this program are trained both on board vessels and in shore-side marinas and shipyards. Areas of study include marine electrical systems and marine mechanics. The students work with wood, fiberglass/composite, and welded metal fabrications. Students can also take college-level courses through the Kingsborough Community College Maritime Technology program. Each of these fields of expertise (marine engineers, marine mechanics, marine technicians, and welders/metal fabricators) is in high demand, offering competitive salaries with room for growth and advancement. The student population of the Harbor School is 67% male and 33% female. Over 40% of the population is Hispanic, 25% of the population is categorized as Special Education students, and the school is entitled to Title I funding. According to the 2018-2019 Quality Review for the Harbor School, "Strengths of the school include the integration of the instructional shifts into the CTE curricula to incorporate literacy, technology, scientific research, and real-world applications that offer college and career readiness. Additionally, the school effectively uses partnerships with The Harbor Foundation, industry partners, and university partners to support the school's mission of preparing students for college and careers" (New York City Department of Education, 2020). The data for students enrolled in the Certified Industry Programs are as follows: Table 2. NYSED - 2017-2018 CTE Programs - Urban Assembly Harbor School (02M551). As shown in Table 2, the programs align with the industry clusters approved by the New York State Education Department. COVID-19 - Serendipitous Benefits of the COVID-19 Pandemic In March 2020, COVID-19 invaded New York City, and its insidious effects were far-reaching. Businesses were converted to at-home, online operations, restaurants were closed or reimagined as take-out-only facilities, and all of the schools in New York City were shuttered. This turn of events abruptly halted the shell collection program at the Billion Oyster Project. Additionally, it drastically decreased the stewardship of New York Harbor and virtually placed maintenance of the Oyster Restoration Stations on hold. The waters of New York Harbor experienced a reduction in maritime activity, and the impact on the aquaculture was palpable. The result was a serendipitous resurgence of vitality throughout the New York estuary. Samples of the collected water were clearer and less polluted, aquatic life was seen in areas of the harbor that had not supported life for some time, and people were even swimming in waters that were once deemed "the most heavily polluted and physically degraded of any in the United States" (U.S. Department of Commerce, 1988). While these improvements in environmental pollution are considered temporary, the current level of pollution in the atmosphere, biosphere, and hydrosphere could be much lower than in the pre-COVID-19 period (Yunus, Masago & Hijioka, 2020).
Although COVID-19 has had a temporarily positive effect on the environment, much of the progress of the oyster restoration must be made in person. Without the citizen science volunteers and the students to monitor the Oyster Restoration Stations and reintroduce new oyster reefs into the harbor, the fieldwork on the project came to a near standstill. Shell collection also suffered during the COVID-19 crisis, as restaurants had to limit their capacity to outdoor and take-out dining only. The shells are needed as the surface for the oyster spat to adhere to and grow. With the help of the New York State Environmental Protection Agency, the fieldwork began again in July 2020. Millions of oysters fastened to the recycled shells were barged up the East River and placed in the ... (Simko, 2020). In direct response to the pandemic, socially distanced "quaran-teams" were able to install an estimated 40 million oysters during the summer of 2020. Remote schooling was the primary form of education in New York City, and in response, the staff at the BOP+CCERS Project created over 500 hours of educational videos to provide a seamless educational experience for the students. Implications for Expansion of the Project The findings garnered by the external evaluation team for the BOP-CCERS Project indicate that participating students reported knowing more about careers in the marine, engineering, and environmental sciences than students in the comparison group. Participating students responded 0.73 points more positively than the comparison group, and in an unpaired t-test between the groups, the results were statistically significant. In addition, it was reported that participating in the project does increase student knowledge about STEM careers and improves students' perceptions of their scientific skills and, by extension, of STEM careers, compared to those with less involvement in the project. The Curriculum and Community Enterprise for Restoration Science, in its many iterations, can be found in over 100 K-12 schools in New York City. This alternative career exploration, with its focus on the restoration of New York City Harbor's waterways and the native species found within, especially the oyster restoration, opens possibilities to thousands of students who would otherwise not have been exposed to this potential field. Being able to initiate the STEM pipeline early in a student's educational path, and to continue it through all grades and into postsecondary education, allows for better career choices in the green economy. The potential implications drawn from the success of this project can have universal appeal and application, much like the other oyster restoration projects underway throughout the country (e.g., Charlotte Harbor Estuary, Punta Gorda, Florida; Oyster Recovery Partnership, Chesapeake Bay, Maryland; Olympia Oyster Restoration, Fidalgo Bay, Washington). Each of these restoration efforts is supported by the communities in which they are situated and relies on the partnerships they have established. What makes the Curriculum and Community Enterprise for Restoration Science unique is the inclusion of the New York City Department of Education as one of its lead partners. These students are benefiting from a socially and environmentally significant learning experience. Its replication in other locations could have a substantial impact on other communities as well.
Building on the practical applications seen in this study and others with similar goals, the recent developments in the Biden Administration's focus on Global Warming and Greenhouse Gas Reduction seem to indicate that green education and employment opportunities will continue along their current trajectory. Conclusion Before the last 25 million oysters were placed in the waters of Soundview Park, 30 million oysters had already been restored to New York Harbor and the surrounding waters that compose its estuary, 13 reefs had been installed across the five boroughs, and 15 million shells from 75 donating restaurants had been collected. Pre-COVID-19, over 6,000 NYC students from more than 100 schools had been participating in the project. This is the biggest shellfish installation in the history of New York, and it is poised to be the answer to protecting the city from storm surge while keeping the waterways clean. After the devastation of Superstorm Sandy in 2012, funding from the U.S. Department of Housing and Urban Development and the State of New York was provided for the Staten Island Living Breakwater Project. The purpose of the project is three-fold: 1. Curb Environmental Degradation - to reduce or prevent shoreline erosion, both short and long term, aimed at preventing damage to buildings and infrastructure. 2. Habitat Enhancement - to increase the fauna, flora, and aquatic habitats in the harbor and simulate the rocky structure of the acres of oyster reefs that were once found in the area. 3. Human Capital - to build community through pedagogy, ecosystem stewardship, scientific research, and citizen science activities, and to enhance the green economy. At the forefront of the Living Breakwater Project is the inclusion of the CCERS + BOP Project. Active oyster restoration on and adjacent to the breakwaters will further enhance the habitat opportunities created by the breakwaters. The oyster wirework containers will use the same design employed by the Billion Oyster Project at locations in the Harbor as part of the Hudson-Raritan Estuary comprehensive restoration plan (GOSR, 2020). In addition, a proposal for a Climate Center on Governors Island, offering public programs, offices for green technology companies, and a research institute, is on the slate for review by City Hall (Kimmelman, 2020). The CCERS + BOP Project and the CTE coursework strands are the essential conduits for connecting New York City students to their marine environment. Facilitating early access to the scientific community is important for encouraging participation in science, technology, engineering, and mathematics (STEM) research careers and fostering persistence in that career pathway. For STEM engagement in higher education and the workforce to succeed, youth need exposure to role models and connections with mentors who can show them that a STEM career path is attainable (Garces & Espinosa, 2013). One of the major tasks of high school students is to plan and make career decisions regarding post-secondary career options (Mei, Wei, & Newmeyer, 2008). To make such decisions, students need to understand the career opportunities available to them early enough to engage in proactive secondary school course options. Nowhere is that more evident than in the Career and Technical Education options.
The Urban Assembly Harbor School has combined the CTE programs and CCERS + BOP to fill a void that is unique to New York Harbor and its estuaries, but it can also serve as a model for any school system located in or near a major harbor. No swath of the economy has been more widely celebrated as a source of economic renewal and potential job creation than the green economy (Muro, Rothwell & Saha, 2011). With employment levels that range from lower-skilled work to high-end innovative careers, and with its benefits to human capital and environmental improvement, the green economy seems to hold the most promise for social and economic betterment. Models such as the CCERS + BOP are key to building the workforce needed to make this promise a reality. Shellfish restoration can transform, through education and hands-on participation in rebuilding nature, how local communities value and perceive the ecological landscape in their backyards (DeAngelis et al., 2018).
2021-08-19T19:49:34.585Z
2021-05-26T00:00:00.000
{ "year": 2021, "sha1": "788b008687eb0f8fed9904a011f0e2a3685f21a0", "oa_license": "CCBY", "oa_url": "http://journal.julypress.com/index.php/jed/article/download/920/658", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ddaa7583a84efe45a38b213f3b98565bf49c6283", "s2fieldsofstudy": [ "Environmental Science", "Education" ], "extfieldsofstudy": [ "Political Science" ] }
13933149
pes2o/s2orc
v3-fos-license
Visual tracing of diffusion and biodistribution for amphiphilic cationic nanoparticles using photoacoustic imaging after ex vivo intravitreal injections To visually trace the diffusion and biodistribution of amphiphilic cationic micelles after vitreous injection, various triblock copolymers of monomethoxy poly(ethylene glycol)–poly(ε-caprolactone)–polyethylenimine were synthesized with different structures of hydrophilic and hydrophobic segments, followed by labeling with the near-infrared fluorescent dye Cyanine5 or Cyanine7. The micellar size, polydispersity index, and surface charge were measured by dynamic light scattering. The diffusion was monitored in real time using photoacoustic imaging after intravitreal injections. Moreover, the labeled nanoparticle distribution in the posterior segment of the eye was imaged histologically by confocal microscopy. The results showed that the hydrophilic segment increased vitreous diffusion, while a positive charge on the particle surface hindered diffusion. In addition, the particles diffused through the retinal layers and were enriched in the retinal pigment epithelial layer. This work studied the diffusion rate via a simple, image-based method and provides basic data for the development of intraocular drug carriers. Introduction Diabetic retinopathy, age-related macular degeneration, and uveitis are disorders that significantly impact vision and quality of life. 1,2 To treat these disorders, most drugs need sustained and effective delivery to the posterior segment of the eye. 3 This part of the eye is composed of a blood-retinal barrier and an inner retinal limiting membrane that are relatively impermeable to many therapeutic agents. To improve the effectiveness of the drugs, invasive local therapy such as intravitreal injection is generally required. Intravitreal injections have been considered the preferred route of drug delivery to the eye during the past 2 decades, and they are the most effective drug delivery method for the retina. [4][5][6][7] Compared with systemic administration, this method has the advantage of localizing the drug effect, with higher drug concentrations in the vitreous and retina. It is commonly performed by injecting a drug suspension or solution into the vitreous cavity in the center of the eye via the pars plana using a 30 G needle. Synthesis of Cy-labeled amphiphilic copolymers Various MPEG-PCL-g-PEIs with different block formations were synthesized according to previous reports. 18,19 Briefly, MPEG-PCL with the designed formations was synthesized using ring-opening polymerization of ε-CL initiated by PEG2000 or PEG5000. After being modified by acryloyl chloride at the end of the PCL, the copolymer MPEG-PCL-C=C was added dropwise to a PEI2000 solution in methanol and stirred for 24 hours at 50°C to complete the reaction. The products were purified by dialysis and then lyophilized. By mixing Cy7-NHS or Cy5-NHS with the triblock copolymer MPEG-PCL-g-PEI in aqueous solution at room temperature, the self-assembled NPs were labeled with the fluorescent probe. Excess probe was removed by dialysis over 48 hours. Preparation and characterization of self-assembled NPs All synthesized copolymers were amphiphilic, and their NPs were prepared using the self-assembly method in aqueous solution at 55°C. The particle size distribution, polydispersity index, and zeta potential of the NPs were measured by dynamic light scattering, using a Zeta Sizer (NanoZS; Malvern Instruments, Malvern, UK).
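For context, DLS sizing of this kind rests on the Stokes-Einstein relation, which converts the measured translational diffusion coefficient into a hydrodynamic diameter. A minimal sketch follows; the diffusion coefficient used below is illustrative and is not a value from this study.

import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)

def hydrodynamic_diameter(diff_coeff, temp_k=298.15, viscosity=8.9e-4):
    """Stokes-Einstein relation used by DLS: d_H = kT / (3*pi*eta*D).
    viscosity in Pa*s (water at 25 C by default), diff_coeff in m^2/s."""
    return K_B * temp_k / (3.0 * math.pi * viscosity * diff_coeff)

# Illustrative diffusion coefficient (~9.8 um^2/s), not from the paper:
d = hydrodynamic_diameter(9.8e-12)
print(f"hydrodynamic diameter ~ {d * 1e9:.0f} nm")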
The critical micelle concentration was determined by fluorescence measurements using pyrene as the fluorescent probe at 25°C. 18 The excitation wavelength was 333 nm with a scanning speed of 500 nm/min, and the emission fluorescence was monitored at 373 nm and 384 nm. The critical micelle concentration was estimated by analyzing the ratio of the intensities at 373 nm and 384 nm (I373/I384). To confirm labeling with the fluorophore, fluorescence spectra of the Cy7/Cy5-labeled NPs were measured using a fluorescence spectrometer (LS55; PerkinElmer Inc., Waltham, MA, USA). The cytotoxicity of the mPEG-PCL-g-PEI NPs was evaluated by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. ARPE cells were seeded in 96-well plates at a density of 4×10 4 cells/well in 100 μL of growth medium and incubated for 24 hours, and then 100 μL of a series of mPEG-PCL-g-PEI NPs at different concentrations was added. After incubating the cells for a further 24 hours, 10% Cell Counting Kit-8 was added. All absorbances were measured at 450 nm using a Spectramax microplate reader (Molecular Devices LLC, Sunnyvale, CA, USA), and untreated cells were used as controls. Intravitreal injections Cy-labeled MPEG-PCL-g-PEI NPs were prepared by dissolving the copolymer in water at 60°C, and after self-assembly, the solution was filtered through a 0.22 μm membrane filter before in vivo injection. Sprague Dawley rats (female, 4-8 weeks old) were anesthetized by intraperitoneal injection of 100 mg/kg ketamine and 10 mg/kg xylazine. Both eyes were injected intravitreally with 3 μL of Cy5-labeled amphiphilic copolymer using a 33 G syringe (Hamilton, Reno, NV, USA) into a quadrant posterior to the limbus. ... (2-6-2) group, and a free Cy7 group. Each of the eyes received an intravitreal injection of 20 μL of the abovementioned micelle solutions (5 mg/mL). Photoacoustic (PA) imaging at 1, 10, and 18 minutes after injection was performed using a preclinical PA computerized tomography scanner (Nexus 128; Endra, Inc., Ann Arbor, MI, USA). A freshly enucleated eye was used for PA imaging before intravitreal injection and was designated as 0 minutes in each group. Three eyes were used for each treatment (or control) group at each time point. Characterization of NPs with different formations As a triblock copolymer, the formation of the MPEG-PCL-g-PEI NPs was adjustable during the synthesis process. Two PEG ligands with molecular weights of 2,000 Da or 5,000 Da were chosen to initiate the ring-opening reaction with ε-CL. PCL blocks with different molecular weights can be synthesized by varying the ratio between PEG and ε-CL, which ultimately adjusts the proportion of hydrophilic and hydrophobic segments in the amphiphilic polymer. After chemically grafting PEI, we obtained three amphiphilic polymer models: MPEG2000-PCL2000-g-PEI2000 (2-2-2), MPEG5000-PCL2000-g-PEI2000 (5-2-2), and MPEG2000-PCL6000-g-PEI2000 (2-6-2). To assess the effect of micelle formulation conditions, the size distribution and zeta potential of the particles were characterized using dynamic light scattering and zeta potential analysis, respectively. The results are shown in Table 1. As previously reported, these amphiphilic copolymers could self-assemble into nanoscale particles with a uniform dispersion (Figure 1A). The particle sizes of the three polymers showed no significant difference, falling within the range of 40-62 nm.
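Returning to the pyrene method described at the start of this section, the CMC is typically read off as the concentration at which the I373/I384 ratio drops between its two plateaus. The following is a minimal estimation sketch using synthetic, illustrative data rather than the study's measurements.

import numpy as np

def estimate_cmc(conc, ratio):
    """Estimate the CMC as the concentration at which the pyrene I373/I384 ratio
    crosses the midpoint between its low- and high-concentration plateaus,
    interpolating linearly on a log-concentration axis (assumes a monotonic drop)."""
    conc = np.asarray(conc, dtype=float)
    ratio = np.asarray(ratio, dtype=float)
    mid = 0.5 * (ratio.max() + ratio.min())
    logc = np.log10(conc)
    i = int(np.argmax(ratio < mid))            # first point below the midpoint
    frac = (mid - ratio[i - 1]) / (ratio[i] - ratio[i - 1])
    return 10 ** (logc[i - 1] + frac * (logc[i] - logc[i - 1]))

# Synthetic example (mg/mL); values are illustrative only, not the paper's data.
c = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1]
r = [1.80, 1.79, 1.77, 1.60, 1.25, 1.05, 1.02]
print(f"estimated CMC ~ {estimate_cmc(c, r):.1e} mg/mL")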
Owing to the presence of PEI segments, the surface charges of the three self-assembled micelles were positive: MPEG2k-PCL2k-g-PEI2k (2-2-2) at 24.5±1.67 mV, MPEG2k-PCL6k-g-PEI2k (2-6-2) at 32.5±0.35 mV, and MPEG5k-PCL2k-g-PEI2k (5-2-2) at 20.47±2.74 mV. Owing to the longer hydrophobic segment in MPEG2000-PCL6000-g-PEI, the hydrophobic interactions between PCL segments made it more stable during the self-assembly process, and thus MPEG2000-PCL6000-g-PEI appears to be tenfold more thermodynamically stable based on the critical micelle concentration. After labeling with the two fluorochromes Cy5 and Cy7 (Lumiprobe), fluorescence spectra were characterized as shown in Figure 1B. These three polymers were used for pairwise comparisons of vitreous cavity diffusion and tissue distribution after intravitreal injection. PA imaging of NP diffusion These triblock copolymers were labeled with Cy7 so that they could be traced by PA imaging (Figures 3 and 4). A blank eye with no injection was used as a control and designated "0 minutes". Owing to the presence of blood vessels in the choroid, significant PA signals were observed in all the control groups. A significant weakening of the Cy7 PA signal from 1 minute to 10 minutes indicated diffusion after the vitreous cavity injection of Cy7. The diffusion of the different micelles could be tracked in real time after labeling with Cy7 followed by PA imaging. Figure 4 shows the PA signal intensities (indicated by white circles) of the three experimental groups, which decreased continuously with increasing time after the injection. The rate of signal weakening corresponded to the rate of diffusion in the vitreous cavity. Comparing the PA signal intensities at 1 and 18 minutes, the signal weakening rate of the MPEG5000-PCL2000-g-PEI polymer was faster than that of the other two groups, followed by the MPEG2000-PCL2000-g-PEI polymer and then the MPEG2000-PCL6000-g-PEI polymer. Biodistribution after intravitreal injection After injection, all three Cy5-labeled NPs diffused freely throughout the vitreous cavity. The distribution of these NPs was examined by taking fluorescence images at intervals of 1, 3, 5, and 7 days (Figure 5). During the first week after the intravitreal injection, each eye was enucleated and cut into 8 μm thick sections. One day after intravitreal injection, fluorescence microscopy showed that the Cy5-labeled NPs had already accumulated preferentially within the retinal pigment epithelial (RPE) cells (Figure 5). With increasing time, the fluorescence intensity of the RPE layer increased (Figure 5), with a maximum intensity on the fifth day (Figure 5). The three polymer compositions had different diffusion rates in the retina. Next, the kinetics of tissue and cellular localization in the retina of these three amphiphilic self-assembled NPs were followed by semi-quantitative statistics (Figure 6). The confocal fluorescence microscope could quantitate the fluorescence intensity via its own software, and this tool was used to evaluate the biodistribution of the micelles in each retinal layer. All eight layers of the retina were observed, and the fluorescence intensities of three selected sections in each layer were recorded. The biodistribution and infiltration of the intravitreally injected NPs were followed over time; Figure 5 shows that the cationic NPs permeated the retina continually, with a maximum fluorescence on the fourth day (Figure 6C).
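Since diffusion away from the injection site appears as a decay of the ROI PA signal, one simple way to compare the three polymers is to normalize each group's signal to its own 1-minute value. The intensities below are hypothetical placeholders chosen only to illustrate the calculation; their ordering mirrors the trend reported above.

# Relative PA signal decay as a crude diffusion-rate proxy: each group's ROI
# signal is normalized to its own 1-minute value. Intensities are hypothetical.
times = [1, 10, 18]  # minutes after injection, matching the imaging schedule
roi_signal = {
    "MPEG5000-PCL2000-g-PEI": [1.00, 0.55, 0.30],
    "MPEG2000-PCL2000-g-PEI": [1.00, 0.70, 0.45],
    "MPEG2000-PCL6000-g-PEI": [1.00, 0.80, 0.60],
}

for name, s in roi_signal.items():
    remaining = s[-1] / s[0]
    print(f"{name}: {100 * (1 - remaining):.0f}% signal loss by {times[-1]} min")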
Discussion With advances in bionanotechnology, a variety of polymeric controlled-release systems have been developed for the delivery of drugs to the posterior segment of the eye. These systems include degradable polymeric NPs, polypeptide hydrogels, block copolymer micelles, and ocular implants. They were developed to minimize the frequency of injection and increase drug targeting to specific target sites. As an intraocular administration route, intravitreal injection is more clinically applicable and practical than subretinal injection. 10,20 Therefore, the relationship between the composition of small NPs and their diffusion and distribution after intravitreal injection is very important. In a previous study by Shi et al, 18 MPEG-PCL-g-PEI copolymers with various compositions were used as gene and drug co-delivery vectors. In this study, three composite micelles with various hydrophobic and hydrophilic ligands were used to study the relationship between diffusion rate and composition. These micelles had similar particle sizes, which allowed them to diffuse freely in the vitreous cavity; however, MPEG2000-PCL2000-g-PEI had the largest particle size. In MPEG2000-PCL2000-g-PEI, the hydrophobic segment is relatively short, and thus the self-assembly force is weaker than in the other two polymers. On the other hand, as PEI is hydrophilic, it can appear at the hydrophilic shell rather than in the hydrophobic core of these self-assembled micelles. In this study, we used PA imaging, a method widely used in cancer research, to monitor the diffusion of the injected Cy7-labeled triblock amphiphilic polymers in the vitreous cavity in real time. By comparing the images before and after injection, the PA signal areas were determined, and the changes in signal intensity in the PA images represented the diffusion of the Cy7-labeled polymers (Figure 2). Slices of rat retina were used to evaluate the biodistribution after intravitreal injection, while PA images of rabbit eyes were used to study the diffusion rate. The two experiments made full use of the available experimental conditions, and their results are complementary. Both results confirmed that the copolymer MPEG-PCL-g-PEI could be quickly dispersed through the retinal cell layers and enriched in the RPE layer after intravitreal injection. The three block copolymers (MPEG5000-PCL2000-g-PEI, MPEG2000-PCL2000-g-PEI, and MPEG2000-PCL6000-g-PEI) could self-assemble into NPs with similar sizes and different surface charges. In the three polymer PA images of Figure 3, MPEG5000-PCL2000-g-PEI, with the longest PEG segments and a surface charge of 20.47±2.74 mV, had the most obvious changes in PA signals. Comparing the signal changes in the eyes injected with MPEG2000-PCL2000-g-PEI or MPEG2000-PCL6000-g-PEI, the results shown in Figure 3 indicate that a hydrophilic segment on the particle surface enhanced the diffusion efficiency of the particle after intravitreal injection. The different rates of diffusion between the groups injected with MPEG2000-PCL2000-g-PEI and MPEG2000-PCL6000-g-PEI might be due to their different surface charges. MPEG2000-PCL2000-g-PEI could self-assemble into 62 nm NPs with a 24 mV surface charge, while MPEG2000-PCL6000-g-PEI assembled into 40 nm particles with a 32 mV zeta potential. These three self-assembled NPs, with sizes below 100 nm, can diffuse freely in the vitreous (Table 1).
Thus, a strongly positive charge (about 30 mV) on the particle surface had no effect on diffusion after intravitreal injection.

Conclusion
Hydrophilic segments on the surface of amphiphilic self-assembled NPs improved their diffusion after intravitreal injection. A positive surface charge did not affect the rate of diffusion. The cationic amphiphilic polymer quickly reached the retina and was concentrated in RPE cells.
Quantitative Evaluation of Twelve Major Components of Astragali Radix Sulfur-Fumigated with Different Durations by UPLC-MS 1 Key Laboratory of Bioactive Substances and Resource Utilization of Chinese Herbal Medicine, Ministry of Education, Beijing Key Laboratory of Innovative Drug Discovery of Traditional Chinese Medicine (Natural Medicine) and Translational Medicine, Key Laboratory of Efficacy Evaluation of Chinese Medicine against Glycolipid Metabolic Disorders,State Administration of Traditional Chinese Medicine, Institute of Medicinal Plant Development, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100193, China; xyxing@implad.ac.cn (X. Xing), sun_zhonghao@126.com (Z. Sun), mhyang@implad.ac.cn (M. Yang), nlzhu@implad.ac.cn (N. Zhu), jsyang@implad.ac.cn (J. Yang), mgxfl8785@163.com (G. Ma), xdxu@implad.ac.cn (X. Xu) a These authors contributed equally to this work. * Correspondence: mgxfl8785@163.com (G. Ma), xdxu@implad.ac.cn (X. Xu); Tel.: +86-010-5783-3296 Introduction Astragali Radix (AR), the dry roots of Astragalus membranaceus (Fisch.)Bunge.or Astragalus mongholicus Bunge., is one of the most widely used Qi-tonifying Chinese herbal medicines.In theory of traditional Chinese medicines (TCM), AR has good effects, including tonify Qi of the kidney [1], strengthening exterior and reducing sweat, tonifying Qi and lifting yang, inducing diuresis to alleviate edema, sweet and warm tastes removing fever, promoting wound healing and tissue regeneration [2].Combining with pharmacological studies, AR has been used in clinic to treat diabetic and reduce the risk of diabetic complications [3], cardio-cerebrovascular disease, respiratory disease, and digestive system disease [4], due to its immunomodulation [5], anti-inflammation [6], anti-tumor [7], nerve cell protecting and recovery [8], anti-aging, and cardioprotective effects [9].Previous researches found that the main active ingredients of AR includes flavonoids and isoflavones, saponins, polysaccharide, and others [10].Traditionally, the post-harvest processing of the roots of AR is sun-dry the whole fresh root after cleaning.Because of mildew prone, AR was recently reported as being sulfur-fumigated during post-harvesting handled to storage.So it's necessary to compare its chemical profiles variations after the sulfur-fumigation.Sulfur-fumigation, which is low-cost and easy operation, has been commonly used to prevent medicinal herbs from pest infestation, mold, and bacterial contamination [11].But recent studies demonstrated that this method could cause the residue of hazardous substances such as sulfur dioxide and heavy metals, which posed a threat to human health [12].Furthermore, sulfur-fumigation was reported to reduce the content of active ingredient in herbs, and influence the chemical transformation of bioactive components, even alleviate the pharmacological activities of edible herbs [13][14][15].In 2004, the State Food and Drug Administration of China pointed out that sulfur fumigated medicinal herbs are inferior [16].However, the method of sulfur fumigation handled medicinal herbs and foods still prevails all over the world, which exerts a negative impact on the safe application of edible herbs.To the best of our knowledge, few systemic studies were reported on quantitative evaluation of sulfur-fumigated AR, in particular the effects of sulfur-fumigation durations on the proportions of bioactive components in AR has not been quantitatively evaluated. 
The fresh reference Astragali Radix sample was collected from the Inner Mongolia Autonomous Region, the indigenous cultivation region of Astragali Radix, and was authenticated by Prof. Rong-Tao Li. The voucher specimen (AM171114-1) was deposited at the Institute of Medicinal Plant Development, Beijing, China.

Sulphur-fumigation of AR
The sulphur-fumigated AR samples were prepared in our laboratory from the non-fumigated reference AR sample (AM171114-1), following a modified procedure similar to that used by herbal farmers and wholesalers: 50 g of AR slices were moistened with 4 mL of water and left for 0.5 h. Two grams of sulphur powder was heated until burning; the burning sulphur and the moistened AR slices were then carefully placed into the lower and upper layers of a desiccator, respectively. Seven portions (50 g each) were prepared to study the extent of sulfur-fumigation at collection points of 1, 2, 4, 6, 8, 12, 16, 24, 36, 48, 60, and 72 h. After fumigation, the AR slices were dried at 40 °C and ground into a fine powder.

An Applied Biosystems 3200 Q-Trap system (AB SCIEX, Singapore) equipped with an electrospray ionization (ESI) source was used, and the system was operated in positive and negative ion modes. Multiple reaction monitoring (MRM) conditions were optimized with the following source-dependent parameters: gas 1 and gas 2 were set at 50 psi; the optimized ion spray voltages were set at 5500 V and -4500 V in positive and negative ion modes, respectively; the temperature was set at 700 °C; and the operating vaporizer temperature was 500 °C. Nitrogen gas was used in all analyses, and data acquisition and processing were performed using Analyst software version 1.6.2. The MRM parameters are outlined in Table 1. Standard stock solutions 1 and 2, consisting of seven (1-7) and five (8-12) accurately weighed reference compounds, respectively, were prepared directly in methanol. The final concentrations of these twelve reference compounds in the stock solutions were 11.88 μg/mL for formononetin, 21.6 μg/mL for calycosin, 17.41 μg/mL for methylnissolin, 15.5 μg/mL for 7,2'-dihydroxy-3',4'-dimethoxyisoflavane, 17.44 μg/mL for calycosin-7-glucoside, 38.064 μg/mL for calycosin-7-glucoside, 6.427 μg/mL for astraisoflavan-7-O-β-D-glucoside, 62.4 μg/mL for 7,2'-dihydroxy-3',4'-dimethoxyisoflavan-7-O-β-D-glucopyranoside, 214 μg/mL for astragaloside I, 61.2 μg/mL for astragaloside II, 28.2 μg/mL for astragaloside III, and 114.6 μg/mL for astragaloside IV. The working standard solutions were prepared by diluting the stock solutions with methanol to a series of appropriate concentrations. The solutions were brought to room temperature and filtered through 0.22 μm membrane filters, and an aliquot of 5 μL was injected into the UPLC-MS system for analysis.

Sample preparation
Methanol extracts: each AR sample was accurately weighed (approximately 1.0 g) and heated under reflux with 50.0 mL of methanol for 4 h. The extract was then filtered through a 0.22 μm PTFE syringe filter before LC-MS analysis.

Method validation
Method validation assays were carried out according to currently accepted Food and Drug Administration (FDA) guidance.

Calibration curves, limits of detection and quantification
The calibration curves for 12 reference compounds were established by plotting peak area ratios of each analyte using the linear regression analysis using 1/X 2 as a weighting factor.Calibration curves had to have a correlation coefficient (r) of 0.995 or better.The limit of detection (LOD) was determined as signal-to-noise ratio >3 and the limit of quantification (LOQ) was measured as signal-to-noise ratio >10 (Table 2).The intra-and inter-day precisions were determined by analyzing 12 analytes from standard stock solution in six replicates during a single day and by duplicating the experiments on three successive days.To further evaluate the repeatability of the developed assays, samples were analyzed in six replicates.Their criteria for acceptability of data were within ± 15% relative error (R.E.) from the nominal values and a precision of within ±1 5% relative standard deviation (R.S.D.).Stability of AR sample was tested at room temperature and analyzed at 0, 2, 4, 6, 8, 10, 12 and 24 h.The contents of the corresponding compounds were calculated from the corresponding calibration curves. Recovery test The measured recoveries of the compounds were determined by the method of standard addition.Three concentration levels (low, medium, high) of the mixed standard solutions were spiked with a sample of AR, which was analyzed previously using the above described method and the concentration of each component was calculated according to the calibration curves (Table 3). Optimization of suitable LC-MS conditions We initially attempted to optimize one suitable LC-MS method to simultaneously determine all 12 chemical marker compounds in AR.However, we could not obtain acceptable results using one method, where the tested compounds could simultaneously achieve well ion response in single ion mode.Thus, separated two batches of analysis were performed under different ion modes.Compounds 1 -7 and 8 -12 were performed in positive and negative ion mode, respectively.Figure.Meanwhile, two different columns, different mobile phases and detecting ion modes were tested during method development.The selection of UPLC columns with high separation efficiency is a prerequisite.Here, two chromatographic columns, BEH (ethylene bridged hybrid) C18 column (2.1 mm × 100 mm, 1.7 μm, Waters) and HSS (high strength silica) T3 column (2.1 mm × 100 mm, 1.8 μm, Waters), were utilized to investigate for the comprehensive metabolome.The BEH C18 column is the universal column choice for UPLC separations.While HSS T3 column with 100% silica particle, is used to retain and separate smaller, more water-soluble polar organic compounds than the BEH C18 column (Zhao et al., 2013).The result showed that HSS T3 column could gain a more extensive retention and a better chromatographic separation for the 12 tested analysts. Analytes Mobile phases including acetonitrile-water and methanol-water with modifiers such as acetic acid, formic acid, and different gradient elution modes were all investigated.The results showed that the mobile phase consisted of water (0.2% formic acid) and acetonitrile (0.2% formic acid) gave the best separation and peak shape. Calibration curves, limits of detection and quantification. 
Standard stock solutions were prepared as described in the section 'Preparation of standard solutions' and diluted to appropriate concentrations to establish the calibration curves. At least six different concentrations were analyzed in triplicate, and the calibration curves were then constructed by plotting the peak areas against the concentration of each analyte. As shown in Table 2, all the analytes showed good linearity (R² ≥ 0.9986) over a relatively wide concentration range. The LOD and LOQ values also indicated good sensitivity for quantification, ranging from 0.001 to 1.55 ng/mL and from 0.01 to 3.10 ng/mL, respectively. The precisions were determined by analyzing known concentrations of the 12 analytes from the two standard stock solutions in six replicates during a single day and by duplicating the experiments. To further evaluate the repeatability of the developed assays, samples were analyzed in six replicates as described above. The stability of the AR sample was tested at room temperature and analyzed at different time points within one day. The contents of the 12 analytes were calculated from the corresponding calibration curves. Table 2 indicates that the RSD values for precision, repeatability and stability of the 12 compounds were all less than 5.0%, which demonstrates good precision, repeatability and stability of the developed method.

Accuracy
The accuracy of the analytical method was evaluated by measuring the percentage recovery of the 12 analytes. The results of the recovery test are shown in Table 3; the recoveries all ranged from 95% to 105% at the three spiked concentration levels.

Quantification of the major components in AR with and without sulphur-fumigation
The validated LC-MS method was applied for the quantitative determination of the 12 components with and without sulphur-fumigation. The contents of the eight flavonoids and four triterpenoid saponins are summarized in Table 4. Compared with the non-fumigated sample, the contents of the two flavonoids calycosin (1) and formononetin (3) decreased significantly, by 39.2% to 45.4% and 35.5% to 40.5%, respectively; 7,2'-dihydroxy-3',4'-dimethoxyisoflavane (7) fluctuated widely, by 6.5% to 39.8%; and the content of methylnissolin (5) showed no obvious change in the sulfur-fumigated samples. In contrast, the contents of the four flavonoid glycosides (compounds 2, 4, 6, and 8) all increased remarkably, which suggests a chemical transformation between flavonoid aglycones and glycosides in the sulfur-fumigated samples. In addition, the contents of astragaloside III (11) and astragaloside IV (12) decreased moderately, by 11.5% to 40.0% and 15.5% to 47.7%, respectively, compared with the non-fumigated sample; the content of astragaloside I (9) displayed no obvious change in the sulfur-fumigated samples; and astragaloside II (10) was below the limit of detection. Furthermore, analysis of the detected compounds' contents at different sulfur-fumigation durations suggested that the reductions in compounds 7, 11, and 12 were proportional to the fumigation time. All of the above results indicate that sulphur-fumigation can significantly decrease the contents of some aglycones and triterpenoid saponins and increase the contents of flavonoid glycosides in AR. Therefore, it can be concluded that sulphur-fumigation significantly influences the inherent quality of AR raw material.
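The validation arithmetic summarized above (a 1/x² weighted calibration fit, LOD and LOQ defined by signal-to-noise thresholds, and recovery by standard addition) can be illustrated with a short sketch. The Python snippet below is only a minimal illustration with hypothetical peak areas, noise level, and spiked amounts; none of the numbers are taken from the reported tables.

```python
import numpy as np

# Hypothetical calibration data for one analyte: concentration (ng/mL) vs peak area (counts).
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
area = np.array([210.0, 1020.0, 2050.0, 10100.0, 20300.0, 101000.0])

# 1/x^2 weighted least-squares fit. np.polyfit squares the supplied weights,
# so passing 1/conc yields an effective weighting of 1/conc**2.
slope, intercept = np.polyfit(conc, area, 1, w=1.0 / conc)

# LOD and LOQ from signal-to-noise thresholds (S/N > 3 and > 10, respectively).
noise = 60.0  # hypothetical baseline noise, in area counts
lod = (3 * noise - intercept) / slope
loq = (10 * noise - intercept) / slope

# Recovery by standard addition: (found in spiked sample - found in unspiked sample) / amount spiked.
unspiked_found = 12.0    # hypothetical content found in the unspiked AR extract (ng/mL)
spiked_amount = 10.0     # hypothetical amount of reference standard added (ng/mL)
spiked_found = 21.6      # hypothetical content found in the spiked extract (ng/mL)
recovery = 100.0 * (spiked_found - unspiked_found) / spiked_amount

print(f"slope={slope:.1f}, intercept={intercept:.1f}")
print(f"LOD ~ {lod:.2f} ng/mL, LOQ ~ {loq:.2f} ng/mL, recovery = {recovery:.1f}%")
```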
General procedure for the synthesis of flavonoid glycosides The variation of flavonoids and glycosides contents in the sulphur-fumigation of AR compared with the reference sample suggested the flavonoids may have a reaction with glucoses under high temperature and acidic conditions during the sulphur-fumigation process.In order to confirm the deduction, we further designed the procedure for the synthesis of flavonoid glycosides which was similar to the sulphur-fumigation circumstances. Conclusions In the present study, a LC-MS method was established for simultaneous quantification of twelve major components in AR, and successfully applied for quantitatively evaluating the effects of sulfur-fumigation on the quality of AR.Compared with previously reported methods, the newly developed method used MRM mode of LC-MS which was the first application to simultaneously detect flavonoids and triterpenoid saponins in A. mongholicus. On the other hand, the contents of the major flavonoids decreased significantly, while its corresponding glycosides increased accordingly when compared with non-fumigated AR.The contents of the major triterpene glycosides also decreased in the sulfur-fumigation samples, but the degree of reductions were limited.sulphur-fumigation can influence not only the contents of components in AR, but also the chemical transformation of flavonoids and glycosides.It was suggested that sulphur-fumigation should be forbidden for processing and conservation of Chinese medicinal herbs before the efficacy and safety of sulphur-fumigated herbs are systematically investigated.Alternatives to sulphur-fumigation for processing and conservation of AR should also be further developed. Figure 3 . Figure 3.The synthesis of flavonoid glycosides Table 1 . MRM transitions and parameters for the detection of the 12 analytes. Table 3 . Results of recovery. Table 4 . The contents of twelve reference compounds in AR with and without sulphur-fumigation (mg/g, n=3).
Chest Wall Mass in Infancy: The Presentation of Bone-Tumor-Like BCG Osteitis
Chest wall mass in infancy is rare. Malignant lesions are more common than infection or benign tumors. This is a case of a 12-month-old girl who presented with a 2 cm mass at the right costal margin and poor weight gain. A chest radiograph demonstrated a moth-eaten osteolytic lesion at the 8th rib. Resection was performed, and a mass containing pus was found. Acid-fast bacilli (AFB) staining was positive. Pathology confirmed caseous granulomatous inflammation compatible with mycobacterial infection. However, QuantiFERON-TB Gold was negative, so Mycobacterium bovis (M. bovis) osteitis was highly suspected. She was treated with antimycobacterial drugs and showed good results. Osteomyelitis can manifest by mimicking bone tumors. Without a biopsy, the pathogen may go undetected. Interventions such as biopsy are therefore warranted, and mass resection without indication should be avoided. High C-reactive protein (CRP), alkaline phosphatase (ALP), a periosteal reaction with radiating spicules, and the penumbra sign on magnetic resonance imaging (MRI) are helpful for discriminating osteomyelitis from bone tumors.

Introduction
Chest wall mass in infancy is rare [1]. Infectious causes as well as benign and malignant tumors must be considered. However, malignant lesions are more common than the others [2], and metastatic lesions predominate [3]. Primary neoplasms of the thoracic wall account for only 5-10% of bone tumors [4]. Malignancies include Ewing's sarcoma, Langerhans cell histiocytosis, osteosarcoma, primitive neuroectodermal tumor, metastatic neuroblastoma, and leukemia [5]. Infection in the form of chest wall osteomyelitis can be caused by bacteria, fungi, and mycobacteria, whether from direct extension of empyema and pneumonia or from hematogenous spread [5]. The worldwide incidence of new tuberculosis (TB) is about 142 per 100,000 population each year [6], with 29% of cases being extrapulmonary. Bones and joints are affected in only 1% of all TB manifestations [7].

Case
A 12-month-old girl from Nakornsawan province presented with a painless, progressive, palpable, round mass at the right costal margin. She had no fever. Her appetite was good, but she had poor weight gain, with height and weight both below the fifth percentile. There was no history of contact with tuberculosis, but her mother had a history of TB lymphadenitis and had completed treatment more than ten years earlier. The natal history was unremarkable. She had completed the Thai national immunization program and had been given complementary vaccines, including pneumococcal and H. influenzae vaccines, and the intradermal tuberculosis vaccine at birth. Physical examination revealed a 2 cm round, fixed, firm mass at the right anterior lower chest wall. A Bacillus Calmette-Guérin (BCG) scar was found on the right upper leg. The examination was otherwise unremarkable. A chest radiograph demonstrated a moth-eaten osteolytic lesion at the right anterior 8th rib, shown in Figure 1. The complete blood count showed the following: hemoglobin 11.2 (>10.5) g/dL, white blood cells 12,790 (6,000-17,500)/mcL (N: 20.4%, L: 73.1%), platelets 339,000/mcL, alkaline phosphatase 201 (150-420) U/L, and hs-CRP 20.98 (0-5) mg/L. Chest magnetic resonance imaging (MRI) is shown in Figure 2. The presumptive diagnosis was probable Ewing's sarcoma or Langerhans cell histiocytosis. Resection of the mass was arranged. Intraoperative examination revealed a firm, round, 5 cm mass with signs of osteomyelitis on the rib, and pus.
The pus was further processed and revealed a positive acid-fast bacilli (AFB) stain and was reactive on polymerase chain reaction (PCR) for Mycobacterium tuberculosis complex (MTBC). Pathology, shown in Figure 3, confirmed caseous granulomatous inflammation compatible with mycobacterial infection. A positive purified protein derivative (PPD) skin test at 72 hours was reported afterward. However, mycobacterial pus culture, QuantiFERON-TB Gold, and 3-day morning gastric content for AFB, MTBC PCR, and culture were all negative. Because of the clinical picture and the negative QuantiFERON-TB Gold result, Mycobacterium bovis (M. bovis) osteitis was highly suspected. She was treated with the standard HRZE regimen (isoniazid, rifampicin, pyrazinamide, and ethambutol) for 2 months and then switched to HR (isoniazid and rifampicin) for a total of 12 months. Outpatient 3-year follow-up demonstrated excellent weight gain, good appetite, and normal wound healing without mass progression.

Discussion
The diagnostic guidelines recommend that the diagnosis of TB be based on a careful history of TB contact, symptoms and signs, an abnormal chest radiograph, sputum/gastric aspiration smear AFB/culture, and the tuberculin skin test [7]. A negative QuantiFERON-TB Gold result suggests that the organism may not be M. tuberculosis; therefore, other species should be suspected, including M. bovis. The Bacillus Calmette-Guérin (BCG) Tokyo 172 strain has been employed to prevent tuberculosis and has shown excellent results in preventing miliary and meningeal tuberculosis [8]. A report on the Tokyo 172 strain vaccine in Taiwan demonstrated that the incidence of BCG osteitis/osteomyelitis was 30.1 cases per million vaccinations during 2008-2012. M. bovis infection is more likely to present as extrapulmonary infection [9,10]. Osteomyelitis commonly involves the long bones of the extremities, with 80% of lesions located in the epiphysis or metaphysis [11]. Rib involvement was reported in only 2% of tuberculosis cases in the endemic Southeast Asian area [12].

A review of 8 cases of chest wall mycobacterial infection (Table 1) shows a male-to-female ratio of 1:1. The age range was from 9 months to 10 years (median age 13.5 months). Pathogens varied from M. bovis to M. tuberculosis. The infected site was most often the rib (6 of 8 cases); the others were the sternum, the chest wall, and pulmonary and hematologic dissemination. M. bovis infection was found only in the infant period; therefore, if the patient is older than 12 months, M. tuberculosis should be considered first [9,12-18]. M. bovis is naturally resistant to pyrazinamide, so the combination of 2HRZE + 10HR is strongly recommended for osteomyelitis, and a longer course is considered in severe disease [8].
Osteomyelitis can manifest by mimicking bone tumors. The most common pathogens in areas not endemic for tuberculosis are bacteria such as Staphylococcus aureus and Klebsiella pneumoniae [19]. Without a mass biopsy, the pathogen may go undetected, so interventions such as biopsy are warranted. Mass resection with bone debridement will avoid the need for, and the delay caused by, multiple trials of antimicrobial agents. Infection should therefore always be part of the differential diagnosis of a chest wall mass, and a biopsy alone is then required. Even incision and drainage should be avoided in BCG osteitis because of the possibility of delayed wound healing. However, the needle tract should be appropriately placed so that an en-bloc resection can subsequently be performed in case the biopsy reveals a malignant pathology. It is not possible to distinguish between osteomyelitis and a bone tumor clinically. The majority of osteomyelitis cases did not show concurrent fever, and local reactions were mild or absent [20]. However, multiple laboratory and imaging methods are proposed in Table 2. A high C-reactive protein (CRP) shows a sensitivity of 60% and a specificity of 90.8% for the diagnosis of osteomyelitis [22]. The levels of calcium, phosphate, and alkaline phosphatase are unremarkable in osteomyelitis, in contrast to metastatic or some metabolic bone diseases [23]. The white blood cell count in some reports fails to show a correlation [22]. Pathological fractures are often reported in malignant bone tumors and are also reported in osteomyelitis of the extremities [22]. Plain radiographs fail to distinguish the two disease entities, since lytic and sclerotic signs can be found in both. In osteomyelitis, however, most periosteal reactions do not show radiating spicules, which differs from osteosarcoma and Ewing's sarcoma. The penumbra sign on T1-weighted magnetic resonance (MR) images has also been reported to be helpful for discriminating osteomyelitis, with a sensitivity of 73.3% and a specificity of 99.1% [22]. This sign is characterized by a "target" appearance with four layers: the hypointense central abscess cavity is followed by an inner, relatively hyperintense ring of granulation tissue, an outer hypointense ring, and a peripheral hypointense halo. However, tumors with internal bleeding may cause false positive results, and acute osteomyelitis may cause a false negative sign.

Summary
This case report shows an atypical finding of a rib mass in infancy. Atypical infection as well as benign and malignant tumors must be considered. Although infection is in the differential, tissue biopsy or resection may be necessary. CRP, ALP, and the penumbra sign on MRI may support the diagnosis of osteomyelitis and favor a biopsy rather than mass excision. For younger patients with mycobacterial osteomyelitis or abscess, M. bovis should be confirmed or excluded because the drug regimens differ.

Figure 2: Chest MRI shows a lobulated, irregular, rim-enhancing lesion centered on the right lateral 8th rib, with destruction of the rib and a pressure effect on the right hepatic lobe. The penumbra sign is positive.
Table 1: Review of reports from 8 cases of chest wall mycobacterial infection.
Table 2: Comparison of clinical characteristics of bone tumors and osteomyelitis of the chest wall.
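The sensitivity and specificity values quoted for CRP and the penumbra sign are standard two-by-two diagnostic-test quantities. The small sketch below shows how such figures are computed in Python; the counts are hypothetical and were chosen only so that they reproduce the percentages quoted for the penumbra sign, not because they are the data of the cited study.

```python
# Hypothetical two-by-two table for an imaging sign (e.g., the penumbra sign)
# against a biopsy-proven diagnosis of osteomyelitis. Counts are illustrative only.
tp, fn = 11, 4     # osteomyelitis cases with / without the sign
fp, tn = 1, 110    # bone-tumor cases with / without the sign

sensitivity = tp / (tp + fn)   # fraction of osteomyelitis cases the sign detects
specificity = tn / (tn + fp)   # fraction of tumors the sign correctly excludes

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```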
The Life and Death of Residential Room Types: A Study of Swedish Building Plans, 1750–2010 While the study of building types is a well-known and relatively active research field, the topic of room types is less explored. This article takes a broad approach to spatial categorization, enabling the examina - tion of different types of spaces over longer periods. How do different room types evolve and die? How do the different residential room types relate to each other? Do they act alone or do they follow each other over time? The article looks at the particular evolution and development of Swedish residential room types and is based on the study of plans of 2,340 Swedish buildings from about 1750 to 2010. Six themes emerged from this study: thresholds of birth and extinction, abruptive change, the relation between absent and present room types, contagious types, different temporal scales and the stabilization of prototypical sets. Introduction Spatial entities can be classified into different types of rooms.These types are often used in building programmes and briefs (Markus and Cameron 2002), and to set the plan of buildings, buildings that subsequently are aggregated into urban areas, and urban areas into cities.Room types here play an important part in how we behave in everyday life (for example, justifying certain restrictions, such as ' quiet, this is a reading room'), and they take part in the transformation of objects and cultures of different scales.Like the classification of building types, the naming and designing of room types is a matter of territorialising specific kinds of spaces (Kärrholm 2013), and as such, types of rooms participate in the controlling and ordering of movements and behaviour (Sack 1986).However, whereas the study of building types is a well-known and active research field (Markus 1993;Forty 2000;Scheer 2010;Guggenheim and Söderström 2009;Steadman 2014;Karlsmo and Löfgren 2016), the topic of room types has received less scrutiny.Research on residential room types is so far a quite fragmented field, encompassing everything from general and specific design guidelines (Neufert 1936) to more descriptive and historical writings on room types (Barley 1963;Muthesius 1979;Gejvall 1988).It also includes research on the relationships of room types and their spatial distribution within dwellings (Hanson 1999;Nylander 2013).In general, literature on residential room types has often focused on a specific historical, typological and geographical setting, such as large country houses in Sweden (Selling 1937), England (Girouard 1978) or Ireland (MacCarthy 2016); bourgeois apartments in Stockholm during the 19th century (Gejvall 1988); Victorian homes (Girouard 1979;Flanders 2004); or the room types of specific rural contexts (Erixon 1947;Barley 1963;Hansson 1999). 
I look at the evolution and development of residential room types and how they relate to each other.The article can thus be seen as an initial investigation of some general themes of room type transformation: How do different room types evolve and die?How do different residential room types interrelate?Do they act alone, or do they follow each other over time?Sweden provides an interesting case study, since the country underwent an unusual, quick, thorough and dramatic modernization (and urbanization) during the 20th century, and so trends in transformation can be easily identified.However, this transformation is set within the broader historical context of the modern era of architecture, starting (as suggested, for example, by Collins 1965) around 1750.The article is thus based on a study of 2,340 building plans of Swedish buildings from about 1750 to 2010.It explores different themes that emerge about the transformation of room types and ends with six ways that room types intermingle and come and go. Spatial Types, Room Types and Territorial Sorts In this article I discuss type as a spatial category that matters in everyday use.I do thus not follow the morphological conception of type, sometimes called form type, that can be found, for example, in the famous studies of Jean-Nicolas-Louis Durand (2000Durand ( [1802-5]-5]) and Saverio Muratori (1959).Since form and use must be studied together, it is better not to employ the more common notion of use type (Scheer 2010: 10;Steadman 2014: 354).Rather, I follow Steadman's more general definition of building type as ' a classificatory unit by which similar buildings can be grouped and enumerated ' (2014: 353).To this definition I would add a more pragmatic perspective: a type is also always a kind of actor, something that has an effect (Latour 2005) in an everyday life situation.A certain type of space, such as a bathing place, might come in a variety of different forms and host a series of different functions (Carl 2011).However, it is because someone recognizes it as a bathing place and uses it accordingly that it makes a difference in our everyday life.Both social (who can bathe and how?) and material aspects (what kind of bathing does this specific place afford?) have their role to play, and these roles are interdependent.Issues of form have often been distinguished from issues of activity or use, so that the difference between single-family detached houses and row houses has do with form, whereas that between student housing and elderly housing is about use.One problem with putting a focus on either use or form is that it tends to omit buildings without clear purposes or with irregular forms (Karlsmo and Löfgren 2016: 12). 
1 Also, when we look at transformations, there is always (as we shall see) a change both in form and use.The room type of the Swedish kitchen, for example, has changed in both form and use over the centuries.What it can be used for and by whom, as well as its form, location and integration in the house has changed many times, yet it has retained its identity as a kitchen.Types can also be described as 'territorial sorts', since they territorialize a certain object or space with a certain meaning/intensity (Brighenti 2010;Kärrholm 2013).Types are not innocent, but are soaked in power.They are in fact a way of turning a certain space into a socio-material actor with a certain effect, i.e., into a territory.One could also describe them as 'sorts' to make clear that they are not defined by a standard set of entities (like prototypes might be), but must be seen as a more fluid assemblage where no entity is in itself obligatory.Instead, different entities can come and go over time as long as they share some kind of family resemblance (Law and Mol 1994). So, how do we approach the question of typological transformation?In 1825, Quatremère de Quincy defined an idealistic concept of type: ' an object after which each [artist] can conceive works of art that have no resemblance' (Quatremère de Quincy in Steadman 2014: 353) but instead have an elementary principle in common.Ever since then, type has often been used from a normative and prototypical perspective (Steadman 2014: 354).In Aldo Rossi's L'architettura della citta (1966), for example, identifying recurrent types of buildings was a way to justify architectural form, because recurrent types ensured a certain meaning, producing a historical continuity within the city (Forty 2000: 304-11).Urban morphologists as well as urban and architectural historians have also used a more empirical notion of type, but nevertheless remarked on its form as a mental image.For example, Caniggia and Maffei argue that types can be defined a posteriori, but they also claim that their existence actually depends on the fact that 'it [the building type] exists in the builder's mind before producing a house' (Caniggia and Maffei 2001: 53; see also Kropf 2001). 
Types, or territorial sorts, are abstractions that enable us to think and do, but they should not be seen as ready-made solutions or an ordered list of rules.Territorial sorts are often too dependent on vague associations, atmospheres and affects to be formalized as some sort of mental model or a rule of thumb.As crime fiction has taught us, a gunshot in the dark might change one type of place into another, an idyllic village street into ' a dangerous and scary place' or even into ' a crime scene' in the blink of an eye.To name or categorize something as a specific type or sort of space is thus a quite basic phenomenon, and should not be confused with the much more specific case of typification that we see in modern and industrialized housing (where room types might be sorted into taxonomies, and where each type might be defined more formally through different kinds of regulations).Formalization is an exception rather than a rule when it comes to the effect of types.I would therefore like to suggest a much more fluid view on spatial types.They might of course hold a certain stability, but arriving at a definition is always a struggle, because all types are always on the way towards something new (Kärrholm 2016).In short, when it comes to types, there is always continuity as well as continuous change in both form and use (Koch 2014). Typological transformations can be investigated through comprehensive historical and ethnographic studies (e.g.Paulsson 1950).Although I advocate studies of this kind, I also consider it necessary to address the question on a more abstract and general level.One way to do this is through a biological analogy (Kropf 2001;Steadman 2008Steadman , 2014)).The analogy between typological transformation and Darwinian evolution was often taken to mean that types improved and became more complex and advanced (with a 'better fit') over time.This view was especially popular during the late 19th century (Karlsmo & Löfgren 2016;Werne 1997), but can actually also be found in more recent building type theory (Scheer 2010: 27).The theory of evolution does not, however, state that there is a pre-determined hierarchy of types (in terms of value), nor that types always develop from basic to more complex ones -both directions are always possible.The development of room types or building types seldom occurs through random variations (Steadman 2014: 3).Neither are such types provided by a certain environment or context in any deterministic sense. 
Nevertheless, a metaphor of spatial species, or more specifically, room species, is a first step towards a more animated and ecological discussion of types.Darwin himself was well aware of the problem of defining a species (Darwin 2011: 44-50), and species are always, as Brighenti suggests, 'both territories and movement' (Brighenti 2014: 11).Species are constantly moving figures, and categorization is always a temporary abstraction.Evolution theory has shown us that we are machines of difference (cf.Deleuze 1994).Life is continually producing differences, and if selection (random or not) seems to temporarily stabilize a species, enabling an abstraction, the forces of deterritorialization are always working in its very midst.This process can, for example, be discussed in relation to the visitor centre type of building (Kärrholm 2016).The establishment of different kinds of information and welcoming spaces relating to tourist attractions led to the development of the visitor centre as a new building type in Sweden during the early 1990s.However, only a decade or so later it seems as if different subtypes were formed, bearing the seeds of several new species of space.Theoretically, every individual difference always has the potential to be the start of a new species.Room types, involved and entangled as they are in our everyday lives, are no exception. The Study of Building Plans A large base of empirical material, preferably covering a long period, is necessary to study the evolution of room types.The empirical material for this article is based on the study of the plans of 2,340 buildings made in Sweden or drawn by Swedish architects from around 1750 until 2010.Of these, 816 buildings are residential types and 1,524 represent building types primarily built for other purposes (hospitals, schools, governmental buildings, railway stations, museums, etc.).The residential types were more important for this article, but residential room types in Sweden were also found in public buildings, factories, office buildings, etc., until at least the 1960s. 2 In the study, all of the different room types in each building plan were noted and sorted chronologically according to building type.The database consists of a total of approximately 40,000 to 45,000 rooms.The building plans were collected through an inventory of all issues of the Swedish magazine Arkitektur (the Swedish architectural review and the largest Nordic magazine about architecture), which began in 1901. 3To better account for the years before 1901, a number of reference works on important building types and architects was also used (see reference list).Information on room types between 1750 and 1900 is also taken from the well-documented architectural history of residential buildings in Sweden (e.g., Selling 1937;Lundberg 1942;Erixon 1947;Gejvall 1988;Nylander 2013). The homes of the middle and upper classes are well represented because these sources focus on buildings drawn by architects and published in architectural journals.Some room types might therefore also be omitted altogether, such as antiquated but enduring rural room types and room types common in poorer housing, like the spisrum (literally, stove room), found in and around Stockholm in the second half of the 19th century.In single room apartments, the spisrum was a combined living room, bedroom and kitchen.Although this type was quite common, it does not appear on any of the plans I studied. 
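The inventory described above, noting every labelled room on each plan and sorting the occurrences chronologically by building type, amounts to straightforward tallying. A minimal sketch of that bookkeeping is given below, assuming a simple record per plan (year, building type, room labels as written on the drawing); the data structure and the example entries are hypothetical illustrations, not the actual database.

```python
from collections import Counter, defaultdict

# Hypothetical plan records: (year, building type, room labels as they appear on the drawing).
plans = [
    (1887, "apartment", ["tambur", "sal", "förmak", "kök", "jungfrukammare"]),
    (1908, "villa", ["hall", "kapprum", "vardagsrum", "matsal", "kök"]),
    (1954, "apartment", ["hall", "allrum", "kök", "sovrum", "badrum"]),
]

# Tally room-type occurrences per decade; a type's first and last decade of
# appearance then approximate its lifespan in the studied material.
by_decade = defaultdict(Counter)
for year, _building_type, rooms in plans:
    by_decade[(year // 10) * 10].update(rooms)

lifespan = defaultdict(list)
for decade, counts in by_decade.items():
    for room_type in counts:
        lifespan[room_type].append(decade)

for room_type, decades in sorted(lifespan.items()):
    print(f"{room_type}: {min(decades)}s to {max(decades)}s")
```

On a full dataset of this kind, the same tally would also expose the thresholds of birth and extinction discussed in the following sections, since whole groups of types appear or disappear within a few decades.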
The work also relies on terms found in the Swedish national dictionary, Svensk Akademisk Ordbok (SAOB), an ongoing project that began in 1893, as well as in the shorter but complete and more updated Svensk Ordbok (SO). 4 While naming is of course an important part in the spread of spatial species (Steadman 2014: 360), it should be remembered that these dictionaries are based on texts and not on plans, and actually account for words used for room types whether built as such or not. Themes in the Life and Death of Residential Room Types Rather than presented in a strictly chronological way, the findings are explored as a series of different themes.Tables 1 and 2 provide background for the discussion, since they present the approximate lifespans and dates of origin of a number of room types.Room types often feature in everyday language long after they have ceased appearing in plans (during my own childhood in the 1970s and 80s, it was thus not unusual for older people to refer to a room in their apartment or villa as a maiden chamber, even though the room had not been used as such since the 1940s or '50s).A room type might also appear both as a concept and as an actual place before it receives a specific and deliberate design by an architect (cf.Steadman 2014: 368). Thresholds of Birth and Extinction In the plans for the large Swedish houses of the 17th and early 18th centuries, rooms were often classified according to size, and not always according to function (Gejvall 1988: 171;cf. Rybczynski 1986: 42).Houses included kammare (small chambers) and kabinett (cabinets), and salar (larger rooms) and salonger (salons), and the interior arrangement of these rooms was often important.Larger bedrooms could, for example, have both antechambers and smaller back rooms to which inhabitants could withdraw (Figure 1).The sequence of movement and the notion of hosting guests was thus an important matter in these plans (cf.Baeckström, 1917: 46). The chambers and cabinets, however, soon grew more specific -porcelain chambers, milk chambers, writing chambers, guest chambers, etc. -while the more general types of chambers and cabinets gradually disappeared.The early Swedish texts on architecture, such as those by Johan Eberhard Carlberg (1740), Carl Wijnblad (1755-56) andCarl Stål (1834), were all greatly influenced by French architecture, and Swedish residential plans often followed the French style of organizing prominent houses, with an enfilade of predominantly large rooms towards the front of the building, combined with two sets of rooms (antechamber, bedroom, cabinet and wardrobe), one for the husband and one for the wife (Figure 2).These were principles that French architects such as Augustin-Charles d'Aviler, Charles-Étienne Briseux and Jacques-François Blondel introduced during the 17th and 18th centuries.An interesting Swedish example can, for example, be found in Baron F. Löwen's house in Stockholm, from the 1740s (Gejvall 1988: 105f). The earlier Swedish houses and apartments, influenced by French architecture, could thus be seen as divided into two parts (Gejvall 1988: 255).One part contained social rooms and living quarters, with a large room -the sal -in the middle, and with one series of rooms on the husband's side and one on the wife's side.These two different suites of living quarters often included a förmak (antechamber), a bedroom and a cabinet.The other part of the house contained the kitchen area with servants' rooms and possibly children's chambers. 
The great room, or sal (quite similar in use and style to the French salle; Rybczynski 1986: 38) was an important Swedish room type that acted as a living room, a dining room, and a room for work as well as for parties and for hosting guests.It was also an important room for movement, as many apartments actually required all visitors and family members to pass through the sal to reach the other rooms of the apartment (Gejvall 1988: 188-200).The room type tambur, a kind of lobby or antechamber, was introduced in Stockholm apartments at the end of the 18th century, and came to act as a kind of small entrance room to the sal.It was often used to hang clothes and store wood for the fireplaces.Later it became an important connector to other rooms as well, and as such was well integrated in the spaces of the home.The idea of an antechamber to main rooms was a French idea, but the role that the tambur soon took on was probably more influenced by the way that the Vorzimmer was used in German apartments during the 19th century (see Gejvall 1988: 175-86 for a more thorough discussion on the tambur). The importance of the tambur increased, and it become larger and lighter, growing into a kind of lobby during the second half of the 19th century (Figure 3).The tambur also came to be physically connected to such increasingly popular types as serving rooms and linen rooms, as well as to corridors and passages, and as such it became key to enabling movement through the dwelling without passage through the more presentable, formal rooms (Gejvall 1988: 107f).The role of the tambur thus changed.It, together with the sal, takes on a less presentable and more functional role, becoming a passage, while the sal often is reduced to the function of dining room (mat-sal).Instead of the sal, which became less central over time, the main formal rooms now became the salong and the förmak.The spatial organization of the late 19th-century Swedish apartment, inspired by the continental apartment in countries such as France, and interestingly also Germany, can, according to Gejvall (1988: 255), be described as divided into four areas (Figure 3): 1942 247).The distance between these rooms thus tended to be extended during the second half of the 19th century (Gejvall 1988: 199). From the 1890s onwards, the English influence grew stronger, and the hall started to become a more important and more dominant room (Lundberg 1942: 257;Gejvall 1988: 168ff;Paulsson 1950: 120).The hall plan became popular in the new middle and upper-class villas of the late 19th century, where rooms were arranged around the hall, rather than in line.Soon, the tambur was exchanged for a hall and a cloakroom.The cloakroom allowed the hall to be free from clothes and other paraphernalia that often had cluttered the tambur (Figure 4).The hall, which was more of a room to dwell in than the tambur, also opened up for the spatial connection to even more rooms (of different types), such as the vardagsrum (approximately equivalent to living room, but literally meaning ' every-day room'). 
5At this time, there was also a critique of the salong/förmak (Gejvall 1988: 224) and the idea that hosting visitors seemed to be more important than the living conditions and comforts of the inhabitants.Perhaps this was one of the reasons why the centrally integrated hall, the cloakroom and the vardagsrum became popular, whereas the salong and the förmak slowly disappeared (Figure 5).In Sweden, the number of residential room types also seems to peak around 1910 with Isak Gustaf Clason's houses.Clason designed a series of large houses during the decades around the year 1900 (the last of the large Swedish estates of the 19th-century tradition), still following the Victorian tradition and its plenitude of room types (Edestrand and Lundberg 1968: 48-62).After the 1920s, the number of room types seems to decrease, and during the 1950s several of the former important types were gone altogether (Figure 6).In non-residential building types, the proliferation of more and more specific room types went on longer, until the 1950s.One example is the hospital Sahlgrenska, in Gothenburg, which had about 170 different room types.Perhaps it was only with structuralism, and the call for flexibility during the 1960s (Forty 2000), that the decrease in room types became a general trend. As the functional differentiation of the home increased during the 19th century, so did different kinds of power asymmetries.Moving the living quarters of servants, as well as kitchens, bathrooms, etc., into the home also played a part in the co-production of distances and asymmetries within the home itself.Social distinctions were important and thus came to take an architectural form (Rybczynski 1986: 49), where servants' rooms were named according to the title or category of the servant.The builtin asymmetries at the room type level seem to decrease as servants moved out and as the number of room types declined in the mid-20th century. In summary, residential room types proliferated, first through the French idea of different suites of rooms, then on through the continental apartment plan and the English hall model; all three different models made it possible for the number of room types to increase.The first wave of extinction started slowly during the 1910s and '20s, with the fall of the representative room types.A second wave of extinction was around the Second World War.The linen rooms, brushing room, maid's chambers, and gentleman's rooms 6 all disappeared during the 1940s; the tambur and the serving room declined quickly at the same time but endured for a few more decades before disappearing altogether.The tambur, which perhaps was the first catalyst for this rich tradition of different room types, was thus the last to go.Following this history, we can observe that spatial species do not always come and go one at a time.Instead, it seems that there are quite often thresholds where series of new spaces evolve or die.This has been made clear in studies of building types that show, for example, that industrial society came with a series of new building types concerned with production (Markus 1993), but these thresholds seem less studied when it comes to room types.This change of the mid-20thcentury residential spaces was perhaps most of all about fewer people and more things (Westerberg and Eriksson 1998: 268); as a welfare society developed and the home became a place for the family to host things rather than people, room types connected to an older kind of society started to decline. 
Abruptive Room Types Not all room types come and go in groups.Indeed, a type might also arise from a sudden, abrupt or disruptive invention, a ' chronic' moment (Brighenti and Kärrholm 2019) of upheaval and change.Here, I will just briefly mention two examples of room types where this more subversive aspect (breaking with former tradition), is stronger: namely the divan room and the allrum (family room, or literally, ' everything room'). The divan as a piece of Ottoman furniture was popular in Europe from the last decades of the 18th century and into the first half of the 19th century.Sweden had good diplomatic connections with Turkey during the 18th century (Avcioglu 2011: 255ff), and King Gustav III made several divan rooms in his palace at Haga (for example, in the pavilion drawn by Olof Tempelman in 1787).The divan thus migrated into Swedish architectural culture and got its very own room.In fact, in Swedish, divan did not refer to the furniture at that time; in a Swedish dictionary from 1853, a divan is defined as a room with low sofas along the wall (Gejvall 1988: 241f).Most often the divan room was a bit smaller than the salon and could be found in large private apartments or houses, like at Stora Bjurum, where a divan room was introduced in 1869 (Figure 7).In at least one case, however, the divan room was also used in a public building.In Helgo Zettervall's first plans for Sweden's Parliament House and Central Bank building in 1884, he placed a large divan room very centrally and in direct relation to the foyer (Bodin 2017: 815).This room disappeared in later plans for the building, and to my knowledge it has not appeared on any Swedish plans since then.As a room type, the divan room has no clear (Swedish) predecessor, and it disappeared almost as quickly as it appeared.The second example of abrupt inventions is the allrum.Although the vardagsrum had replaced the salon during the early part of the 20th century, it appears as if this room also became used for more formal events and the reception of guests, and thus the struggle of architects and politicians to establish a room for everyday use continued.The importance of a space used for display and hosting guests, for example, often meant that sleeping facilities were less prioritized, and this was perceived as a societal problem (cf.Paulsson 1950: 120;Perers et al. 2013).One important attempt to change this was the allrum (literally, ' everything room'), a new room type that was introduced in the housing competition of Baronbackarna in Örebro 1950Baronbackarna in Örebro (built 1953-57)-57).The housing competition was not aimed at inventing a new room type, but the winning proposal, announced in 1951 (and designed by P.A. Ekholm and S. 
White), suggested that the traditional living room could act as a kind of study room or studio (arbetsrum).On later drawings this room was referred to as an allrum.An experimental apartment with the very first allrum was built during the summer of 1952, and furnished with a kitchen sofa, a dinner table, a bookshelf and a desk with a sewing machine (Krantz 1987: 96ff; see also Mack 2017: 228ff).The new room type received a lot of attention among architects (Figure 8), but studies in 1956 showed that the introduction of the allrum initially failed.Rather than being used as the family room, as intended, it was used as a kind of salon or finrum (literally, 'nice room'), and it was only with the introduction of TV sets during the early 1960s that the allrum became a more everyday kind of space (Krantz 1987: 103).Its popularity increased from the 1970s onwards, finally making the idea of a salon or a finrum (even if enacted within spaces tagged as living rooms) obsolete. Absent Friends and the Ever-Changing Boundaries of the Dwelling In his book Objects of Desire (1986), Adrian Forty has argued that the modern Western home is generally a product of the Industrial Revolution.Through the development of specific work places, coupled with the regulation of work and work hours, the home soon became an important haven of privacy, comfort and leisure: an antipode to work (Forty 1986: 99).During the 19th and 20th centuries, the boundaries of the home were also changing.Rooms and functions that formerly were located outside the dwelling itself, like places for food, bathing, laundry, latrines, etc., now moved inside the dwelling (Gejvall 1988: 101).From a situation where rooms could be rented out even without kitchen facilities (so-called bachelor flats), the standard and comfort of living thus slowly began to increase.New technical infrastructures, such as water pipes, were also introduced in several Swedish cities during the 1860s (Paulsson 1950: 185), and the elevator, which afforded an easier movement of goods and people in and out of the home, was introduced during the 1880s. 
New technology developed between around 1880 and 1920 made it possible to achieve a degree of domestic comfort without servants and the accompanying spatial separation between room types.Some room types were no longer needed, or their functions could be integrated into other rooms.As we have seen, the number of 'indoor' room types seems to decline from about the 1920s and onwards, and the decrease in secondary spaces outside the home proper actually also continues slowly throughout the 20th century.During the 1930s and '40s, storage spaces in cellars decreased (Gejvall 1998: 212), as refrigerators and freezers became more common and fresh food could be bought all year round.Another function that moved out of the home was child care.The building of Swedish day-care centres in the 1940s and '50s meant that child care in the home or in other people's homes decreased.Bringing laundry facilities closer to the home was another important issue during this century.Collective laundry rooms with washing machines were introduced from the mid-1920s by HSB (a Swedish cooperative association for housing); a national suggestion for municipal regulation requiring laundry rooms was made in 1948 (Figure 9), and in 1965, 90% of the Swedish population had access to a laundry room (Björkman 1985: 88;Lund 2009).However, in the 1990s a series of neighbourhood disputes centred around these places, and as technical developments were made, washing machines moved into apartments (into kitchens and bathrooms). Parallel to the privatization and movement of activities and goods into the home, the possibility to actually harbour all of these objects and functions had grown increasingly problematic.Although the rising problem of storage space was noted already in an investigation in 1980 (Konsumentverket 1980), the decline of community spaces in the large residential housing areas continued during the 1980s and 1990s, and storage spaces today have to an extent been commercialized and outsourced, as new companies specializing in storage facilities have developed, often in former industrial areas (Brembeck 2019). Dwellings changed as new room types moved in and others moved out.This can partly be explained by new technology (like water toilets, freezers, etc.), as well as an ongoing urbanization in combination with higher living standards and increasing consumption, which meant that storage spaces were externalised.Activities that were once performed in the home are now executed in other neighbourhoods, cities, regions or even countries (sites of production that were formerly in the home might, for example, have moved to the other side of the world).The home and its room types thus co-evolve with the environment in which they are located.The absence or presence of the different activities of the home seems, for example, to be related to longer trends such as technological development, transformations from a rural to an urban society and globalization. Contagious Room Types Related to yet different from the notion of absence or presence is the exchange of room types between different building types.Until the 1960s, it was not uncommon for the homes of rectors, cleaners, teachers, drivers or janitors to be integrated into public buildings such as schools, museums, public baths, etc. 
(Figure 10). Residential functions do, however, disappear from public buildings with the growing tendency to see the home as a protected and individualized unit separate from work, a place exclusively reserved for leisure and a nuclear family life. The sharing and exchange of room types between residential and non-residential building types nevertheless continued. For example, the vilrum (resting room), which was established as a Swedish room type at workplaces during the 1940s, appeared in residential buildings during the 1970s. Similarly, as larger structures and building complexes became more common in the modernistic large-scale plans of the 1960s and '70s, more traditionally urban categories such as 'square', 'area' and 'street' began appearing indoors. This is connected to the state-subsidised large-scale housing projects in Sweden that began in the mid-1960s, when housing blocks and houses took on a whole new scale (Andersson 1976; Hall & Vidén 2005). Influenced by Team X, large corridors were for example called 'interior streets' in Bengt Edman's Sparta student housing in Lund in 1971. The suffix -yta (area) also became more common, such as arbetsyta (work area) or lekyta (play area) (Figure 11). Indoor squares may not be found in residential buildings, but they began appearing quite often in schools, churches, commercial buildings and offices. There are also examples of how residential room types move into non-residential building types; for example, the allrum soon became popular in schools, and the hobbyrum appeared in churches and parish halls from the 1960s and up until at least the 1980s.

Trends of Different Temporal Scales
The evolution of room types proceeds at no predictable pace; some develop slowly and over a long time, whereas others might bloom during a short period of time and then fade away. Old and new room types can thus always be found side by side. One example of a short-lived trend was to refer to room types in a diminutive form, by exchanging the suffix -rum for -vrå (nook). Due to the density of the population and a housing shortage, there was an interest in the concept of small dwellings during the late 1920s and '30s (Björkman 1985: 97), when small room types with the suffix -vrå also developed. The kokvrå (cooking nook, or kitchenette) makes its first appearance in 1927 as the most long-lived and important of these types. Other examples include the frukostvrå (breakfast nook), introduced in 1920, sittvrå (sitting nook) 1925, bokvrå (book nook) 1927 and arbetsvrå (study nook) in the 1940s (Figure 12). The suffix did not take off to the extent first suggested, but it was not a total failure either. Some of the nook types still exist, and new versions were also tried later (like the short-lived tonårsvrå (teenager nook) in public libraries during the 1960s).
Another trend that peaked between the 1950s and the 1980s was room types related to leisure, consumption and free time.For example, although it already existed as a room type, the sauna suddenly became more popular during the '50s (Figure 6).Rooms for hobbies, ping-pong and weaving also appeared during these post-war decades, as well as storage spaces specifically for sports equipment like skis and sleighs.Some rooms might thus only live or thrive for a couple of decades, but there are also room types that disappear even quicker.The 'battery room for the door bell' (on a plan from 1870) is one such very short-lived room; another one is the bodega (Figure 13).These shortlived rooms are quite often (but not always, as the case of the bodega shows) related to new technology, and stabilized through what can be called network stabilization (Law 2002); they depend on a series of obligatory actors, such as laws, certain technical infrastructures, etc.One example of this is the telephone room. The long-lived room type, on the other hand, evolves slowly and mutates, like the garderob (wardrobe), which over time changed almost beyond recognition (Figure 14).In the plans of Jean Eric Rehn's Lambohov of 1762-66, the garderob is a quite spacious through-way room with a window and a tiled stove.Throughout the 19th century, however, the Swedish garderob was a small, often dark, walk-in closet.It was quite popular, and there were often more wardrobes in Swedish flats than in corresponding flats on the continent (Gejvall 1986: 240).Today, walk-in closets are more often referred to as klädkammare (clothes chamber), whereas garderob more often refers to a standardized, built-in piece of furniture for clothes (cf.Rybczynski 1986). 7 Other very long-lived room types manage to resist evolution and to some extent remain the same over centuries.A case in point is the kitchen.Kitchens may have changed from closed to integrated, small to large, but they are still recognizable as kitchens.The kitchen relies on a more fluid stabilization (Law and Mol 1994); i.e., actors may come and go, new ones may be added, others may disappear.Fluid stabilization does not rely on a specific set of actors, but more on a family resemblance between actors.The associations between actors might change -as long as changes are not too abrupt, new actors might be welcomed or released from 'the family'.One way of illustrating the fluidity and versatility of the kitchen as a category is through its many variants.In the plans studied, from 1750 to 2010, at least 35 different kinds of kitchen types appear, including the quite common grovkök (literally, 'rough kitchen', which is the Swedish term for a scullery), sandwich kitchens, milk kitchens, children's play kitchens (Figure 15), training kitchens, tea kitchens, paint kitchens (färgkök, found in theatres), and barium kitchens (found in children's hospitals). 
8 Prototypical Sets During the period 1750 to 2010, Sweden developed from a rural to an urban society less focused on production and more on leisure and comfort.In cities, residential room types such as the dining room, salon and gentleman's room, thrived and evolved during the 19th century.However, many rural room types disappeared during the same time (cf.Erixon 1947).The first Swedish norms and recommendations around housing, and its minimal requirements, were published in 1921, and a more proper Swedish science around housing started with the foundation of the research institute on housing (Hemmets forskningsinstitut) in 1944.These efforts eventually led to better living conditions, but also to standardization, and to fewer and more uniform room types.In fact, the number of residential room types appears to have decreased over the centuries, reaching its minimum with a small set of general (and on plans often nameless), standardized room types in the 1960s and 70s. The stabilization of a series of aligned room types (first by research and then by state recommendations and legislation), is related to the home as a stabilized type.The home grows more and more stabilized as a place of comfort and retreat, but also of functional efficiency.The increasing research on residential room types during the 20th century -how to design kitchens that were easy to work in (Thiberg 1968) or bathrooms that were easy to clean (Linn 1985), etc. -together with the decrease in the number of room types, also paved the way for the standardization of a set of obligatory room types (bedroom, kitchen, bathroom, living room).The home became a type produced 'from a standard "kit of parts"' (Steadman 2014: 358). The general decline in the number of room types also means that some rooms had to take in activities which had previously been performed in a set of other rooms; the bathroom, for example, now includes activities which formerly might have been done in boudoirs, laundry rooms, nurseries and dressing rooms (Rybczynski 1986: 223).The bedchamber, the children's chamber and the guest room were all transformed into the more standardized 'bedroom'.From the 1960s and 1970s, the names of residential room types also began to disappear from plans in the journal Arkitektur.Since these room types were now rather few and tended to be the same from house to house, year after year, there was no need to mark them out.In the early 1990s, however, this changed.The Swedish building standard for room sizes was abandoned, and the building code no longer dictated a detailed prescription for the layout of housing plans, which in turn opened the way for new room types, as well as a new fluidity when it came to the existing ones.The names of room types on residential plans become somewhat more common again, and new types, such as the relaxation room and the spa, have appeared, a trend that seems to continue during the 2000s. 
Conclusions The aim of this article has been to explore an evolutionary approach to room types, arguing that the historical development of room types should not be studied on the basis of single entities, but must be understood in relation to other room types and their ecology -in the house and elsewhere, and at other times.From the study of Swedish residential plans, I have derived and discussed six themes that are most probably not unique to Sweden (even though they sometimes took a particular form in Sweden), but are similar to trends in other European countries.After all, Sweden was entangled with the development of residential cultures in other countries -like France during the 18th century (with an emphasis on rooms such as the salon), Germany (with the influence of the Herrenzimmer) and later Britain (with the hall plan) during the 19th century.Room types tend to come and go in packs, and they seem to be subject to thresholds of evolution and extinction.Room types can, however, also be more abruptive.Although room types often are associated with each other during longer times, single room types can also come and go suddenly (like the divan room or the allrum).Room types are also always dependent on 'absent friends'.The room types associated with a certain place, such as the home, are produced in relation to the environment of this place.There is thus a constant exchange of room types as the home and its environment are co-produced over time.Furthermore, some room type trends are contagious.Room types do not stick to specific building types but can spread between different contexts and be reused or reinvented across different building types or urban types (for example, between homes, schools and churches, between cities and buildings, etc.).Series of rooms also follow different temporal scales, that is, old and young room types often live side by side.Room types that are stabilized through a series of obligatory actors seem to be more short-lived than the more fluid territorial sorts.Finally, room types can stabilize into prototypical sets, the obligatory room types that tend to make up a certain place or building type. It is fruitful to discuss room types as a kind of spatial species.Rather than separating them into use types or form types, we address them as important and transformative actors in architectural and societal production.Individual differences always have the potential to be the start of a new species and if we are interested in studying this change -types on the move (cf.Latour and Yaneva 2008) -we cannot reduce the notion of type to a certain category, but need to follow all the different ways in which it makes a difference.When looking at how types transform, the question is thus not whether they are defined by form or use, but how they have an effect on ongoing life, how they keep or change their identity and how they evolve, decline or even die. Notes 1 On a more general note, a focus on types might also risk a focus on spaces as objects, obscuring situations and practices (see, for example, Carl 2011 on the relation between 'type' and the 'typical'). 
2 At that time, there was no longer any need for someone to keep an eye on the building during evenings and weekends (keeping the fire going to heat the building, etc.). Housing became more affordable and readily available during the 1960s, and the idea of 'the home' was also changing rapidly during the post-war decades. Home became a place of leisure, and more firmly separated from the place of work (see, for example, Forty 1986).
3 The journal Arkitektur was published in Stockholm by Arkitektur förlag between 1901 and 2010, and appeared under the following titles: Arkitektur och dekorativ konst, from 1901 to 1908; Arkitektur, from 1909 to 1921; Byggmästaren, from 1922 to 1959; and Arkitektur, from 1960 to 2010.
2. Dining room (sal or matsal). 3. Living and bedroom areas. These sometimes included a small living room for the family. Especially in winter, the bedrooms were also important living areas, especially for the woman of the house, who might use the bedroom for receiving female guests, writing, and sewing (Paulsson 1950: 121; Gejvall 1988: 234ff). 4. Kitchen area with servants' rooms. The difference between serving spaces and served spaces became more important, as did the effort to keep the sounds and smells of the kitchen and servant's quarters away from the salons and drawing rooms (Lundberg
Figure 1: J.G. Destain's plan of Björksund from the 1720s (Selling 1937: 47). Here the room types are still quite generic, relating to the size or position of the rooms rather than to their use, like antechamber (förmak), chamber (kammare) and cabinet (kabinet).
Figure 2: E. Palmstedt's plan of Skinnskatteberg from the 1770s (from Selling 1937: 324). The wife's quarters are on the right side at the back, whereas the husband's quarters are on the left side at the front. The kitchen (kjök), maid's chamber (jungfrukamare) and a chamber for porcelain (porcelainer) are at the back left side. The dining room (matsal) is connected to the entrance (förstufva), and the great room (sal) is connected to the main stairs on the second floor.
Figure 3: Two Stockholm flats, drawn by A. Johansson in 1894-96 (Teknisk tidskrift, Afd. för byggnadskonst, 1897: pl. 13). The tambur to the left follows an older tradition where movement through more representational spaces becomes obligatory. The tambur to the right is connected to a set of passages and secondary spaces. Here we can also see a division of the apartment into four different areas, as described above.
Figure 5: The death of the salon (salong) and the antechamber/drawing room (förmak) and the rise of the living room (vardagsrum) and the family room (allrum), showing the percentage of residential plans with a specific room type, 1890-1990 (number given for the decade: 1890 comprises 1881-90, etc.). Diagram by Mattias Kärrholm.
Figure 8: Allrum with dinner table and a table designated for work and play (with the kitchen in the background); interior design by Lena Larsson. Photo from an apartment in 'Das Schwedenhaus' at the Interbau exhibition in Berlin 1957 (Byggmästaren, 1957: 210).
Figure 9: The building of collective laundry rooms in housing areas increased in Sweden during the 1940s. Here is an example from Klippan, built around 1940 (Byggmästaren 1942: 295).
Figure 11: Here we can see how the left part of the hall has been designated for play with the inclusion of an early example of the room type lekyta (play area). Row house designed by Gustaf Lettström for the Housing exhibition H55 in Helsingborg (Byggmästaren 1955: 233).
Figure 10: The Hovrätt (Court of Appeal) in Malmö, designed by Ivar Callmander (Arkitektur 1919: 139). The ground floor has offices, an archive and an apartment for the building supervisor (on the upper right side).
Figure 12: House in Växjö designed by Gösta Brügger (presented in Byggmästaren 1946: 417). Here one can see a small arbetsvrå (study nook) next to the terrace. The house also shows a late example of a borstrum (brush room) next to the entrance.
Figure 13: The bodega room is a one-off room type in the journal Byggmästaren (1946: 433), and can be found on the plans for a detached house in Tyresö drawn by Holger Blom and Jan Wahlman. The bodega room was placed in the basement and was intended to be used as a kind of party room. The name did not quite catch on, but the name gillestuga was later used for a very similar room type that played an important role in Swedish dwellings from the early 1960s and up to the 1980s.
Table 1: The years of first and last appearances of still-active residential room types, based on the database developed by the author from architectural plans dating between 1750 and 2010. The second column shows the first appearance according to the Swedish national dictionaries (SAOB and SO). The following two columns show the first and last appearances according to the room type database (on which this article is based). The final column shows the age of each room type (in years), calculated as the difference in years between 2010 and its first appearance (marked in bold).
Table 2: Lifespan of short-lived or unusual residential room types, based on the database developed by the author from architectural plans dating between 1750 and 2010. The second column shows the first appearance according to the Swedish national dictionaries (SAOB and SO). The following two columns show the first and last appearances according to the room type database (that this article is based on). The final column shows the total number of years since the room type's last appearance in a residential building (in 2010).
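The quantities summarised in Tables 1 and 2 are simple derivations from the plan database: the age of a still-active room type is the difference between 2010 and its first recorded appearance, and the lifespan figures follow from the first and last recorded appearances. The sketch below is only an illustration of that arithmetic under assumed column names; the database itself is not published with this article, so apart from the attested first appearances mentioned in the text (kokvrå 1927, allrum 1952, bodega 1946) the values are placeholders.

```python
import pandas as pd

# Hypothetical extract of the room-type database: one row per
# (plan, room type) observation, with the year of the plan.
# Only the first-appearance years for kokvrå, allrum and bodega are
# taken from the article; the later years are placeholders.
obs = pd.DataFrame({
    "room_type": ["kokvrå", "kokvrå", "allrum", "allrum", "bodega"],
    "year":      [1927,     1998,     1952,     2009,     1946],
})

# First and last appearance per room type
summary = (
    obs.groupby("room_type")["year"]
       .agg(first_appearance="min", last_appearance="max")
       .reset_index()
)

# Age of a room type, as in Table 1: years between 2010 and its first appearance
summary["age_years"] = 2010 - summary["first_appearance"]

# Years since last appearance in a residential plan, as in Table 2
summary["years_since_last"] = 2010 - summary["last_appearance"]

print(summary)
```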
2020-01-09T09:11:28.561Z
2020-01-02T00:00:00.000
{ "year": 2019, "sha1": "e5a3efe539b1fcc848ca1e7d5b8fcaffe6ea593b", "oa_license": "CCBY", "oa_url": "http://journal.eahn.org/articles/10.5334/ah.343/galley/351/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "46e4cd9795e56e9d89238863e040f690aa67dbd3", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Geography" ] }
210948820
pes2o/s2orc
v3-fos-license
Glutathione S-Transferases Play a Crucial Role in Mitochondrial Function, Plasma Membrane Stability and Oxidative Regulation of Mammalian Sperm Glutathione S-transferases (GSTs) are essential sperm antioxidant enzymes involved in cell protection against oxidative stress and toxic chemicals, preserving sperm function and fertilising ability. Artificial insemination (AI) in pigs is commonly carried out through the use of liquid-stored semen at 17 °C, which not only reduces sperm metabolic activity but also sperm quality and AI-farrowing rates within the 72 h of storage. While one may reasonably suggest that such enzymes are implicated in the physiology and maintenance of mammalian sperm function during liquid-storage, no previous studies conducted on any species have addressed this hypothesis. Therefore, the objective of the present work was to characterise the presence and function of sperm GSTs in mammalian sperm, using the pig as a model. In this regard, inhibition of such enzymes by ethacrynic acid (EA) during semen storage at 17 °C was performed to evaluate the effects of GSTs in liquid-preserved boar sperm by flow cytometry, immunofluorescence, and immunoblotting analysis. The results of this study have shown, for the first time in mammalian species, that the inhibition of GSTs reduces sperm quality and functionality parameters during their storage at 17 °C. These findings highlight the key role of such enzymes, especially preserving mitochondrial function and maintaining plasma membrane stability. In addition, this study has identified and localised GSTM3 in the tail and equatorial subdomain of the head of boar sperm. Finally, this study has set grounds for future investigations testing supplementation of semen extenders with GSTs, as this may improve fertility outcomes of swine AIs. Introduction Pig breeding worldwide is fundamentally based on the use of artificial insemination (AI). Such a technique is commonly performed through the use of liquid-preserved semen diluted with a proper extender and stored at 15-20 • C, usually for 1 to 5 days [1]; in some cases, however, media may preserve sperm up to 12-15 days [2]. Extender composition and low temperatures contribute to the decrease of sperm metabolic activity, thus maintaining their function and fertilising ability [2]. However, AI-fertility rates using liquid-stored sperm are known to decline within 72 h of its storage [3]. Experimental Design Ten ejaculates (one per boar) were split into two aliquots containing 100 µmol/L of EA and the same volume of DMSO as a vehicle control group. The concentration of DMSO in all treatments was 0.15% (v:v). Inhibitor concentration was selected based on previous studies [23] and preliminary concentration tests performed at our laboratory. All samples were stored for 72 h in closed plastic containers with constantly and gently agitation at 17 • C. Sperm motility and flow cytometry parameters were evaluated at 0, 24, 48, and 72 h, whereas immunofluorescence and immunoblotting analysis against GSTM3 were assessed at 0 and 72 h of semen storage at 17 • C. Immunofluorescence Localisation of GSTM3 in boar sperm during liquid preservation was evaluated through immunofluorescence at 0 and 72 h of storage at 17 • C in each treatment. Samples containing 3 × 10 6 sperm/mL were fixed with 2% (w:v) paraformaldehyde and subsequently washed. 
The different slides containing two drops per sample were blocked and permeabilised with a blocking solution containing 0.25% (v:v) Triton X-100 and 3% (w:v) Bovine serum albumin (BSA) for 40 min. Then, samples were incubated with a primary anti-GSTM3 antibody (1:250; v:v) overnight. Following this, slides were washed and incubated with an anti-rabbit antibody (1:500; v:v). Then, 10 µL of Vectashield mounting medium containing 4 ,6-Diamidino-2-phenylindole dihydrochloride (DAPI) was added. Finally, a coverslip was placed, and samples were sealed with nail varnish. In negative controls, the primary antibody was omitted. For the peptide competition assay, samples were incubated with GSTM3-specific blocking peptide, which was 20 times in excess with regard to the corresponding primary antibody. A confocal laser-scanning microscope (CLSM, Nikon A1R; Nikon Corp., Tokyo, Japan) was used to evaluate all samples. Immunoblotting Boar sperm samples of all treatments at 0 and 72 h of storage at 17 • C were used for Western blot analysis. In brief, samples were centrifuged twice at 3000× g for 5 min and resuspended in lysis buffer (RIPA Buffer, Sigma-Aldrich) prior to incubation in agitation at 4 • C for 30 min. Triple sonication per sample was carried out, followed by centrifugation at 10,000× g, and the supernatant was stored at −80 • C. A detergent compatible method (BioRad; Hercules, CA, USA) was used to quantify total protein. Ten micrograms of total protein were diluted 1:1 (v:v) in Laemmli reducing buffer 2× and boiled at 96 • C for 5 min before proteins were loaded onto the gel and subsequently transferred onto polyvinylidene difluoride (PVDF) membranes using Trans-Blot ® Turbo™ (BioRad) and blocked with 5% BSA. Blocked membranes were then incubated with the anti-GSTM3 primary antibody (1:20,000; v:v) overnight. Next, membranes were washed thrice and incubated with the secondary antibody for an hour with agitation (1:35,000; v:v). Finally, bands were visualised using a chemiluminescent substrate (Immobilion TM Western Detection Reagents, Millipore) and scanned with G:BOX Chemi XL 1.4 (SynGene, Frederick, MT, USA). Next, membranes were stripped and blocked prior to incubation with an anti-α-tubulin antibody (1:100,000, v:v) overnight. Subsequently, membranes were washed trice and incubated with an anti-mouse antibody (1:200,000, v:v) for 1 h. Finally, membranes were washed, visualised, and scanned as described previously. The specificity of the primary antibody was confirmed through peptide competition assays utilising GSTM3-immunising peptides, 20 times in excess with regard to the antibody. Bands of three technical replicates per samples were quantified using Quantity One software package (Version 4.6.2; BioRad), and pattern quantifications were normalized using α-tubulin. Statistical Analysis Results were analysed using a statistical package (IBM SPSS for Windows 25.0; Armonk, NY, USA). First, data were checked for normality and homogeneity of variances using Shapiro-Wilk and Levene tests, respectively. When required, data were transformed with arcsin √ x and then re-assessed for normality and homogeneity of variances. Each statistical case consisted of a separate biological replicate. 
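As an illustration of the workflow described in this section, the sketch below runs the normality and homogeneity checks, applies the arcsine square-root transformation where needed, and fits a mixed model of the kind described in the following paragraph (treatment and storage time as fixed factors, boar as a random effect). It is only an outline under assumed column names and a hypothetical input file; the analysis reported here was carried out in IBM SPSS 25.0, whose repeated-measures mixed model and Sidak-adjusted pairwise comparisons are not reproduced exactly by this sketch.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Assumed layout: one row per (boar, treatment, time) with a measured
# percentage, e.g. total motility. File name and column names are hypothetical.
df = pd.read_csv("sperm_quality.csv")  # columns: boar, treatment, time_h, motility_pct

# 1) Check normality (Shapiro-Wilk) and homogeneity of variances (Levene)
w, p_shapiro = stats.shapiro(df["motility_pct"])
groups = [g["motility_pct"].values for _, g in df.groupby("treatment")]
lev, p_levene = stats.levene(*groups)

# 2) If assumptions fail, apply the arcsine square-root transformation
#    (percentages are rescaled to 0-1 before transforming)
if p_shapiro < 0.05 or p_levene < 0.05:
    df["motility_t"] = np.arcsin(np.sqrt(df["motility_pct"] / 100.0))
else:
    df["motility_t"] = df["motility_pct"]

# 3) Mixed model: treatment (between-subjects), storage time (within-subjects),
#    boar as a random intercept
model = smf.mixedlm("motility_t ~ treatment * C(time_h)", data=df, groups=df["boar"])
result = model.fit()
print(result.summary())

# 4) Pearson correlation between two parameters, e.g. motility vs high-ΔΨm sperm
#    (the jc1_agg_pct column is an assumed name)
# r, p = stats.pearsonr(df["motility_pct"], df["jc1_agg_pct"])
```

Note that a random intercept per boar is only an approximation of a full repeated-measures covariance structure; the choice of covariance structure and the multiple-comparison adjustment are design decisions left to the original SPSS analysis.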
Sperm quality and functionality parameters, as well as the relative content of GSTM3, were compared between treatments (EA-treated and control spermatozoa) and throughout storage time (0, 24, 48 and 72 h) with a linear mixed model (repeated measures); the within-subjects factor was the time of storage, the between-subjects factor was the treatment, and the random-effects factor was the boar. The post-hoc Sidak test was used for pair-wise comparisons. Finally, Pearson correlation coefficients were calculated between the relative content of the GSTM3 band and quality and functionality parameters. Data are shown as mean ± SEM. For all analyses, the level of significance was set at p ≤ 0.05.

Results
All sperm quality and functionality parameters (total and progressive motility, ΔΨm, viability, membrane lipid disorder, acrosome membrane integrity, apoptotic-like changes, intracellular Ca2+ levels, and total intracellular O2−• and H2O2 levels) of semen samples incubated with EA and the control group were assessed at 0, 24, 48 and 72 h of storage at 17 °C. No differences between groups were found in any sperm quality and functionality parameter at 0 h of storage at 17 °C.

Inhibition of GSTs Impairs Sperm Motility and ΔΨm
Motility was assessed by the percentage of total and progressively motile sperm and the VAP at 0, 24, 48, and 72 h of liquid-storage at 17 °C, whereas sperm mitochondrial function was assessed by the percentage of high ΔΨm resulting from the orange-stained populations (JC1agg) (Figure 1). Compared to the control group, total and progressive motilities and the VAP of EA-treated sperm samples dramatically decreased within the first 24 h of liquid-storage and remained low until 72 h of storage (p < 0.05). On the other hand, a dramatic decrease in the percentage of sperm showing high ΔΨm was observed in EA-treated samples compared to the control within the first 24 h of liquid-storage (p < 0.05). Moreover, a strong correlation between total motility and ΔΨm was observed (r = 0.873; p < 0.01).

Inhibition of GSTs Causes Sperm Plasma Membrane but not Acrosome Damage
Sperm plasma membrane status was characterised through SYBR14/PI, M540/YO-PRO-1, PNA-FITC/PI, and Annexin V/PI staining (Figure 2). Although no statistically significant differences in the percentage of viable spermatozoa (SYBR14+/PI−) were found between control and EA-treated samples at 0, 24, and 48 h of semen storage, a reduced viability was evidenced at 72 h (p < 0.05).
On the other hand, the percentage of sperm with high membrane lipid disorder (M540+/YO-PRO-1−) was higher in EA-treated samples at 24, 48, and 72 h of liquid-storage (p < 0.05). Related to this, the percentage of viable membrane-intact sperm (PNA-FITC−/PI−) was used to assess acrosome membrane intactness, whereas the percentage of viable Annexin V-positive sperm (Annexin V+/PI−) was used to assess apoptotic-like changes. EA-treated samples did not show either acrosome membrane damage or apoptotic-like changes at any time-point in comparison to the control group.

Sperm GSTs Are Involved in Ca2+ Homeostasis
The percentage and fluorescence intensity of viable spermatozoa showing high Ca2+ levels (Fluo3+/PI− and Rhod5+/YO-PRO-1−) were used to assess sperm intracellular Ca2+ levels (Figure 3). Although no differences in the percentage of Fluo3+/PI− sperm were found, Fluo3+/PI− fluorescence intensity in EA-treated spermatozoa was higher in comparison to the control group after 24, 48 and 72 h of liquid preservation (p < 0.05), showing increased Ca2+ levels in GSTs-inhibited samples. On the other hand, percentages of Rhod5+/YO-PRO-1− sperm and Rhod5+ fluorescence intensity did not show differences between treatments at any time-point of semen storage at 17 °C.

Sperm GSTs Are Involved in Intracellular ROS Regulation
Percentages of E+/YO-PRO-1− and DCF+/PI− sperm and fluorescence intensities of E+ and DCF+ were assessed to evaluate intracellular levels of O2−• and H2O2 (Figure 4). An increase of O2−• in sperm cells due to GSTs inhibition was detected, since E+ fluorescence intensity, although not the percentage of E+/YO-PRO-1− sperm, in EA-treated samples was higher than the control at 24, 48, and 72 h of liquid-storage (p < 0.05). On the other hand, a decrease in the percentage of H2O2-positive sperm cells (DCF+/PI− sperm) was found due to GSTs inhibition at any evaluation time (p < 0.05), even though the mean fluorescence intensity of DCF+ did not differ from the control group.
GSTM3 Partially Disappears from the Boar Sperm Mid-Piece during Liquid-Storage
Localisation of GSTM3 was resolved at 0 and 72 h of storage at 17 °C by immunofluorescence. Figure 5 shows representative localisation patterns of GSTM3 in GSTs-inhibited and non-inhibited sperm samples at 0 and 72 h of liquid-storage at 17 °C. All sperm cells showed the GSTM3 fluorescence signal, and the negative control and peptide competition assay confirmed the specificity of the GSTM3 antibody. GSTM3 was found to be localized in the mid, principal, and end pieces of the tail and the equatorial subdomain of the head in sperm samples at 0 h of liquid-storage. However, after 72 h of semen storage at 17 °C, the GSTM3 signal partially disappeared from the mid-piece in both the control and GSTs-inhibited samples.

Sperm GSTM3 Content Was Reduced during Sperm Liquid Preservation
Immunoblotting analysis of GSTM3 showed a triple-band pattern of ~25, ~28, and ~48 kDa in every experimental condition. Peptide competition assay utilising GSTM3 immunising peptide confirmed both the ~25 (GSTM3-A) and ~28 (GSTM3-B) kDa bands as GSTM3-specific (Figure 6). As shown in Figure 7, normalised GSTM3-A content was found to be significantly higher than GSTM3-B after 0 h of storage at 17 °C (p < 0.05). Additionally, GSTM3-A at 0 h was significantly higher than GSTM3-A and GSTM3-B after 72 h of storage in control and GSTs-inhibited groups (p < 0.05). However, the relative abundance of GSTM3 in GSTs-inhibited samples at 72 h of liquid storage did not differ from the control.
Relative Content of GSTM3 Was Highly Correlated with ΔΨm and Motility
Pearson correlation coefficients of the relative content of GSTM3-A and GSTM3-B at 0 h with sperm quality and functionality parameters of liquid-stored sperm at 72 h are shown in Table 1. No correlation between the relative abundance of GSTM3-A and any sperm quality or functionality parameter was found. However, the relative abundance of GSTM3-B was negatively correlated with total and progressive sperm motility and ΔΨm (JC1agg) (p < 0.05).

Discussion
Preservation of boar semen in liquid storage at 17 °C leads to a decrease in sperm metabolic activity in order to maintain their function and fertilising ability [2]. However, sperm liquid-preservation may result in impaired motility, viability, membrane stability, OS, and apoptotic-like changes [3,4]. GSTs in sperm are membrane-attached, detoxifying enzymes [14], which have been considered to be fertility [16] and cryotolerance [17] biomarkers in boar sperm. Furthermore, previous studies have shown that extender supplementation with glutathione decreases OS and improves the quality of boar semen during liquid storage at 17 °C [33]. Such findings suggest that GSTs play a vital role in maintaining sperm physiology during liquid preservation. However, their effects upon quality and functionality parameters of sperm had never been investigated. Findings from this work are in accordance with the aforementioned studies, since GSTs-inhibition during boar semen storage was found to decrease sperm quality and function parameters dramatically. The most noticeable effect of GSTs-inhibition was evidenced by the complete loss of total and progressive motility and a significant reduction in VAP within the first 24 h of semen liquid-storage at 17 °C. Such motility impairment is in agreement with previous studies performed in the goat, where sperm motility is known to decrease due to GSTs-inhibition [23]. Furthermore, the fact that GSTM3 was localised along the principal piece of boar sperm supports these results. In addition, not only did JC1 staining show a dramatic decrease in ΔΨm due to GSTs-inhibition, which was also described in goat sperm by Hemachand and Shaha [34], but ΔΨm was strongly and positively correlated with total motility. Correlation between these factors has been extensively reported in the literature, as adenosine triphosphate (ATP) production [35] and controlled ROS levels [36] are known to be required for proper sperm motility. Together, these findings suggest that sperm GSTs play an essential role in regulating mitochondrial function and motility performance during liquid-storage of boar semen. The results of the present study have also confirmed that sperm plasma membrane status is impaired by GSTs-inhibition.
Although the percentage of viable sperm was not significantly affected until 72 h of semen liquid-storage at 17 • C, the percentage of viable sperm with high membrane lipid disorder dramatically increased within the first 24 h of semen storage. Such findings are in agreement with previously-reported studies confirming that GSTs function is mainly located in the sperm plasma membrane [34], and their inhibition causes sperm membrane damage in goat sperm [23]. Along these lines, the present study provides evidence confirming that membrane-bound GSTs prevent cholesterol efflux and membrane lipid disorder, and thus delay capacitation-like changes in liquid-stored boar sperm. However, further experiments regarding the specific role of GSTs in sperm capacitation should be performed in order to clarify their specific role in the changes in the sperm plasma membrane. Although the inhibition of sperm GSTs during liquid-storage was found to cause sperm membrane destabilisation, the acrosome membrane remained intact. These findings suggest that despite GSTs-inhibition increasing the lipid disorder of the sperm plasma membrane and causing capacitation like-changes in sperm cells, GSTs do not exert a direct effect on the acrosome membrane. On the other hand, and in agreement with the maintenance of sperm viability during the first hours of semen storage, apoptotic-like changes in sperm do not increase due to GSTs inhibition. Such results suggest that GSTs are not involved in apoptotic-like processes in sperm during boar semen liquid-storage. Likewise, GSTs were found to be involved in sperm intracellular Ca 2+ content release. Intracellular Ca 2+ levels from the mid-piece and sperm head were observed to increase within 24 h of semen storage due to GSTs-inhibition, whereas Ca 2+ levels in sperm head did not. These findings indicate that the inhibition of sperm GSTs augment Ca 2+ levels in the sperm mid-piece rather than in the head. While mitochondrial Ca 2+ signalling is not completely understood, such organelles are known to function as intracellular Ca 2+ stores, since the negatively charged mitochondrial matrix can sequester Ca 2+ ions [36]. The impairment of mitochondrial Ca 2+ homeostasis due to GSTs-inhibition may be caused by the destabilisation of mitochondrial membranes. However, further research is required in order to elucidate this hypothesis. In spite of the aforementioned, these results evidence, altogether, the crucial role of sperm GSTs in the regulation of mitochondrial Ca 2+ homeostasis during liquid-storage of boar semen. Our results also demonstrated that the inhibition of GSTs led to changes in the physiological ROS levels of sperm during storage at 17 • C. Although the percentage of O 2 − •-positive sperm increased because of GSTs-inhibition, intracellular levels of H 2 O 2 decreased. The fact that the main ROS source in sperm is thought to reside in the mitochondria [37], which have been shown to be impaired by GSTs-inhibition, supports the apparent role of GSTs in sperm ROS production. Impaired mitochondrial activity by GSTs-inhibition may contribute to the formation, but not removal of the O 2 − • in sperm [38] [11]. Therefore, our study could serve as a basis for further studies aimed at clarifying the specific role of GSTs during sperm capacitation and fertilisation, as physiological ROS levels are essential for both processes. 
Results from the immunoblotting analysis of GSTM3 showed a specific two-band pattern consisting of~25 (GSTM3-A) and~28 (GSTM3-B) kDa-bands, and a non-specific band of~48 kDa. The double-band pattern found in the present work could be caused as a result of post-translational modifications of GSTM3 such as phosphorylation, acetylation or glycosylation, among others, which are widely reported in the literature (reviewed by [40]). Quantification of both bands showed higher relative levels of GSTM3-A than GSTM3-B at 0 h of liquid storage. Furthermore, GSTM3-A at 0 h showed higher relative levels than GSTM3-A and GSTM3-B after 72 h of liquid storage in control and GSTs-inhibited samples. Therefore, the results shown herein indicate that a loss of GSTM3 content occurs during liquid storage at 17 • C. However, inhibition of GSTs does not induce changes in the GSTM3 content. In this regard, the preservation of boar semen in liquid storage could induce GSTM3 loss and, consequently, impairment of its function. Nevertheless, a specific assay confirming the presence of post-translational modifications should be performed to gain further insights into the molecular action of GSTM3 in sperm. The localisation patterns of GSTs in boar sperm during liquid preservation has been established for the first time in the present study. Sperm GSTM3 was localised in the mid, principal, and end pieces of the tail and the equatorial subdomain of the head of samples at 0 h of liquid-storage. This localisation pattern is similar to that found in the boar [17] and other species, such as the buffalo [41]. Moreover, the localisation of GSTM3 in the sperm tail would contribute to explaining the dramatic effect of GSTs-inhibition upon sperm motility and mitochondrial function. Interestingly, the GSTM3 signal was found to be partially reduced from the mid-piece during boar semen liquid-storage in both the control and GSTs-inhibited samples. Since immunoblotting analysis found GSTM3 content to be reduced during semen storage, it becomes apparent that such enzyme is lost rather than relocalised from the mid-piece during liquid-storage. Contrary to the results of the present study, GSTM3 was reported to relocalise to the mid-piece following boar sperm cryopreservation [17]. Finally, the present study also attempted to find a relationship between sperm quality and functionality parameters after 72 h of liquid storage and the relative amounts of GSTM3 at 0 h. Interestingly, a negative correlation between relative levels of GSTM3-B at 0 h and motility and mitochondrial function after 72 h of sperm preservation was observed. Mounting evidence in the literature supports the relationship between GSTM3 and mitochondrial function, since GSTM3 content is known to be higher in mitochondrial-altered sperm of men [42]. Moreover, recent studies demonstrated that GSTM3 content in fresh sperm is highly correlated to the mitochondrial activity of frozen-thawed sperm, and relocalisation of this enzyme from the entire tail to the mid-piece occurs during cryopreservation of boar [17] and buffalo [41] sperm. Therefore, the relationship between GSTM3 content and mitochondrial activity found in the present study strengthens the hypothesis of a tight molecular relationship between sperm GSTs and mitochondrial function. Moreover, GSTM3 is clearly related to sperm quality, as it has been established as a quality [42][43][44], fertility [16], and cryotolerance [17] biomarker in both boar and human sperm. 
Hence, one could suggest that the GSTM3 content in fresh boar semen may be used as a biomarker of sperm quality during liquid preservation. Conclusions In conclusion, the data reported in the present study revealed the essential role of membrane-attached sperm GSTs to preserve sperm function and quality in liquid-stored boar semen. Specifically, inhibition of sperm GSTs evidenced that these enzymes are highly related to the preservation of mitochondrial function and maintenance of the plasma membrane stability, thus preserving sperm motility, maintaining physiological ROS levels, and regulating mitochondrial Ca 2+ homeostasis. In addition, this study identified and localised GSTM3 for the first time in boar sperm during storage at 17 • C for 72 h. GSTM3 was localised in the mid, principal and end pieces of the tail and the equatorial subdomain of the head, and was partially lost from the mid-piece after 72 h of liquid preservation. Matching with this, immunoblotting showed that the relative amounts of sperm GSTM3 decreased after 72 h of liquid storage at 17 • C. Additionally, relative GSTM3-content at 0 h of storage was negatively correlated to sperm mitochondrial function and motility after 72 h of storage, supporting the mitochondrial-protective role of GSTs and suggesting GSTM3 as a putative biomarker of sperm quality during semen liquid-storage. Finally, while the molecular role of GSTs on sperm physiology and specifically on mitochondrial function is yet to be elucidated, the findings reported in this study warrant further research testing the supplementation of boar semen extender with GSTs, as this may preserve sperm mitochondrial function and plasma membrane stability during liquid storage and improve subsequent reproductive performance of boar AI-doses.
2020-01-30T09:05:59.664Z
2020-01-24T00:00:00.000
{ "year": 2020, "sha1": "e5bc9b48401af56b2b39f5c6c6d01dfb52fb8fcb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3921/9/2/100/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3252050c1f31e968f0bced9fa8f76446a4baa3ed", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
247368733
pes2o/s2orc
v3-fos-license
LEGAL ANALYSIS OF THE PROTECTION OF THE RIGHTS OF VICTIMS OF THE CRIME OF HUMAN TRAFFICKING This study aimed to analyze the law on protecting the rights of victims of the crime of human trafficking. This research uses normative juridical analysis and a statute approach. The sources of law in this study are based on primary and secondary sources of law, which refer to the norms contained in the legislation. Then the research results are described systematically to answer the questions that have been asked. Analysts of legal materials in this study use a grammatical-interpretation method, namely interpretation according to the grammar and words used for legislators to state their objectives. To protect the rights of victims of human trafficking, there are still dualisms in the TIP law and the Criminal Procedure Code in settlement of restitution for victims. This makes solving the problem of human trafficking difficult. In this case, it is necessary to review the substance of the TIP law; this is due to other issues as contained in this law. Figure 1. Age of Detected Victims Source: The Counter-Trafficking Data Collaborative (CTDC) (2019) The data above is still tiny because there are many Indonesian immigrant workers with undocumented status who do not hesitate to smuggle themselves through the port route using illegal boats through the waters of the Riau Islands or Johor waters which have an impact on the lack of security factors such as the safety of the ship itself. It was found that Indonesian migrant workers died because they drowned in the waters. The evolution of the quality of criminal acts or crimes demonstrates that territorial barriers between countries, both within and beyond regions, have increasingly dissolved (Okaych et al., 2018). The development of cross-territorial criminal acts further heightens the difficulty of cooperation between countries in their prevention and eradication efforts, especially if the crime involves foreign nationals (Sweileh, 2018). A transnational crime, transnational offense, or transnational offense is a crime or offense that spans national borders. This notion was initially promoted internationally in the 1990s at the United Nations' Eighth Congress on Crime Prevention and Offender Treatment (Annafi, 2020). Transnational crime, as defined by the United Nations, is large-scale and complex criminal activity carried out closely or loosely by organized groupings with the purpose of establishing, supplying, and exploiting unlawful markets at the expense of society (Putri et al., 2016). A crime can be said to be transnational if it has the following elements: a) it is committed by more than one country; b) preparation, planning, direction and supervision are carried out in other countries; c) involving an organized criminal group where the crime is committed in more than one country, and d) have a severe impact on other countries. Data on victims of human trafficking based on the educational background is presented in the following figure: . Trafficking in Persons by Education Level Source: IOM Indonesia Based on the data above, it can be said that minimal education can make the chances of being entangled in human trafficking higher. In addition to low education, other factors can influence victims to be implicated in the crime circle of human trafficking, namely natural disasters, inability or lack of life skills. In some cases, the perpetrators of human trafficking act as labour agents who distribute workers to certain companies abroad. 
The victims who survived and testified have admitted that they were caught in the circle of a human trafficking syndicate precisely because they were tempted by the amount of income they would earn if they worked abroad (Sumirat, 2017). But specifically, what they experienced was far from what they had imagined. Coercion, intimidation, and violence are things they have to face. In several cases found, victims of human trafficking who were not rescued always ended up in death or mental illness due to the violence they experienced (Sulistiyo, 2012). To break the cycle of trafficking, the government must provide a decent livelihood to help curb this illegal activity. If the state can give certainty to its citizens, citizens will not think about looking for a living abroad and breaking the chain (Sari et al., 2021). In addition, other efforts can be made, namely providing information and understanding to various groups regarding this issue. The socialization efforts are more than just giving knowledge to the community about this matter. It also warns public awareness about human trafficking (Salsa, 2020). Meanwhile, the existing state of affairs means that many victims of human trafficking are effectively unable to obtain reparation for their suffering. Where payback is extremely rare because victims are unaware of their rights and law enforcement officials failed to tell victims of their rights from the start, it was discovered that even law enforcement personnel were unaware of how to file for the appropriate restitution method (Wagianto, 2014). Within the scope of the notion of victim protection legislation, the first consideration must be the nature of the victim's loss. The loss is significant not only in terms of material or physical pain, but also in terms of psychological distress (Yusitarani & Sa'adah, 2020). This is a result of the trauma associated with the loss of trust in society and public order. Anxiety, distrust, cynicism, despair, loneliness, and other avoidance behaviors are all possible symptoms of the condition. As a result, this study will examine the law governing the protection of the rights of victims of human trafficking. B. METHOD This research uses normative juridical analysis and a statute approach. The sources of law in this study are based on primary and secondary sources of law, which refer to the norms contained in the legislation. The technique of tracing legal materials is obtained from library research, regulations, journals, and legal cases. Then the research results are described systematically to answer the questions that have been asked. Analysts of legal materials in this study use a grammatical-interpretation method, namely interpretation according to the grammar and words used for legislators to state their goals (Nenohai, 2021). C. RESULT AND DISCUSSION 1. Government Efforts in Combating Human Trafficking To prevent the crime of human trafficking, the government has begun to improve the legal system. This improvement starts with improving the legal system, from the substance of the law and the legal culture that lives in society-then implemented with a law enforcement process following existing rules. The government's role in providing protection and regulating placement for migrant workers includes issuing various legal instruments ranging from the constitution to implementing regulations. The following laws regulate the protection of migrant workers who are victims of the crime of human trafficking. 
Most Indonesian migrant workers face forced labour conditions, with most cases related to working conditions in the receiving country, such as unpaid wages, forced labour, irregular working hours, sexual harassment and physical violence (Mawardi, 2020). Paragraph IV of the preamble to the 1945 Constitution of the Republic of Indonesia makes safeguarding the entire nation and homeland of Indonesia one of the Indonesian state's objectives. Along with the protection of sovereignty and natural resources, one of the components that must be safeguarded is the protection of the people and the fulfillment of citizens' rights. Additionally, Article 27 paragraph (2) of the Republic of Indonesia's 1945 Constitution states that every person has the right to work and to a standard of living that is decent for humanity. The meaning and significance of decent work and residence are thus clearly stated in the constitution. However, in reality, the limited number of job vacancies has made many people choose to become Indonesian migrant workers abroad. According to Article 28 letter (D) of the Republic of Indonesia's 1945 Constitution, everyone has the right to recognition, guarantees, protection, and fair legal certainty, as well as equal treatment before the law, and has the right to work and to be compensated and treated fairly in an employment relationship. This is also regulated in various laws, including Law Number 13 of 2003 concerning Manpower (the Manpower Act). Article 31 and Article 32 of the Manpower Law explain that it is the right of Indonesian migrant workers (TKI) to obtain, choose and change places of work according to their respective abilities, expertise and skills. This also includes receiving a decent income at home or abroad, with due regard to dignity, human rights and legal protection.
Law No. 39 of 2004 concerning the Placement and Protection of Indonesian Migrant Workers Abroad
This is the first law that regulates the legality of sending migrant workers as well as the prevention and eradication of human trafficking. However, it does not provide a proportional division of tasks and authorities between the central government, regional governments, and the private sector. Additionally, Law Number 21 of 2007 is aimed at combating human trafficking. Human trafficking is defined in this statute as the act of conveying, recruiting, sending, transferring, or receiving a person by force, kidnapping, forgery, fraud, or abuse of power or position. This law regulates the criminal penalties for perpetrators of trafficking in persons, adhering to a range from a minimum to a maximum punishment. The victim is also entitled to compensation and restitution from the perpetrator. This law also provides opportunities for government efforts to provide protection for victims, witnesses and reporters. Besides that, the punishment of trafficking in persons is also recognised in Indonesian criminal law. The current enforcement of criminal law in dealing with TIP cases is based on several statutory provisions, namely the PTPPO Law, the Criminal Code, the Criminal Procedure Code, and several law enforcement regulations.
Legal Protection for Victims of Human Trafficking
Various studies and reports from several NGOs state that Indonesia is still a source of trafficking in persons as well as a transit and receiving country. At least ten provinces in Indonesia were identified as sources, 16 provinces as transit points, and at least 12 provinces as recipients.
An accurate figure for the number of women and children who are victims of trafficking in persons in Indonesia has not been established (Wulandari & Wicaksono, 2014). For 2018, the available data put the number of victims in that year at 74,616 people. However, most victims are afraid to report this crime to the authorities because of the possibility of deportation and of the threatened conduct actually being carried out. A crucial problem in eradicating the crime of trafficking in persons is that a patriarchal culture still positions women as unequal to men, and career opportunities for women remain limited. The ability and professionalism of women are still not regarded as equal to those of men, and women are still treated as subordinates in the family. In many cases there is also still a culture of shame or taboo around reporting abusive treatment by husbands towards wives, children, and women in their environment. The functions of legal protection provided by representatives of the Republic of Indonesia include preventing or correcting practices of the placement country that are discriminatory towards the state and its citizens, providing assistance or services to citizens who violate legal regulations abroad, and providing legal protection and assistance. The forms or models of protection that can be given to victims of the crime of trafficking in persons are as follows: (1) provision of restitution and compensation, (2) counseling and medical services/assistance, (3) legal aid, and (4) provision of information (Tatali, 2021). The International Organization for Migration Indonesia (IOM Indonesia) has established itself as a critical actor and partner of the Indonesian government in the fight against human trafficking. As part of its Victim Assistance Program, IOM Indonesia assists Indonesian and foreign victims with repatriation, recovery, and reintegration through its "victim relief fund" program. Physical and mental health care programs, temporary housing, family counseling, educational help, livelihood assistance, and legal assistance are all included in reintegration assistance. The service is delivered via a referral mechanism that collaborates with over 80 countries and non-state organizations. Along with protection, the legislation grants victims of human trafficking rights, including the right to confidentiality of their identity and the right to seek protection against threats that jeopardize their lives or property. This includes the right to receive restitution from the government, as well as the right to rehabilitation, health care, social rehabilitation, repatriation, and social reintegration. Victims residing abroad have a right to protection and repatriation at the expense of the state. Meanwhile, legal protection for Indonesian migrant workers who are victims of trafficking currently prioritizes the fulfilment of victims' rights.
3. Analysis of the Ineffectiveness of Restitution Settlement Procedures for Victims
By statute, the government acts as law enforcement and is accountable for victim protection within the criminal justice system. In practice, however, law enforcement authorities hold divergent views on how to administer the criminal justice system, particularly when it comes to reparations for human trafficking victims.
This is prompted, among other things, by the dualism inherent in the treatment of trafficking victims under existing legislation. On the one hand, some law enforcers prefer to apply the joinder of cases as defined in the Criminal Procedure Code because it is believed to provide greater legal certainty (the Criminal Procedure Code ranks higher in the hierarchy of norms than Government Regulation (PP) Number 44 of 2008, an elaboration of the Witness and Victim Protection Law). However, under that route restitution is restricted in scope to material losses. On the other hand, others support applying the Witness and Victim Protection Law and Government Regulation No. 44 of 2008, believing that this system can provide restitution with a broader scope than the Criminal Procedure Code. Furthermore, under the rules agreed upon by the international community, any loss suffered by the victim of a crime can be compensated and is counted among the rights of the victim. Compensation may take the form of the return of stolen goods, reparation for damage, payment for lost money, injuries, and psychological stress suffered by victims, as well as payments for pain and suffering and assistance to victims. This notion highlights the need for a victim's recovery to be as comprehensive as possible and to cover all areas affected by the crime. Restitution enables the victim to reclaim his or her freedom, legal rights, social position, family life, and citizenship, as well as to return to his or her place of residence, regain his or her work, and recover his or her property. Based on these two settlement principles for recovering losses, there is a dualism in how restitution payments are handled, which can be summarized as follows.
(1) Under the TIP Law, the request for restitution is handled jointly with the criminal case: a) from the moment the victim reports the case to the local police (POLRI), the police must notify/inform the victim; b) investigators handle requests for restitution together with the handling of the trafficking case, meaning the police are obliged to handle the request (explanation of Article 48 paragraph (1)). Under the Criminal Procedure Code, a compensation claim must be filed no later than: a) before the Public Prosecutor submits the criminal charge; or b) in the absence of the Public Prosecutor, no later than the judge delivers judgment (Article 98).
(2) Under the TIP Law, the Public Prosecutor is responsible for notifying the victim of her right to seek restitution and also conveys the magnitude of the victim's damages resulting from trafficking together with the charges (explanation of Article 48 paragraph (1)). Under the Criminal Procedure Code, the public prosecutor has no obligation to notify/inform the victim of the right to apply for compensation; as a result, claims for damages joined to criminal cases have so far had little success.
(3) Under the TIP Law, restitution is deposited first with the court where the case is decided (Article 48 (1)). Under the Criminal Procedure Code, compensation is delivered to victims through consignment/deposit.
(4) Under the TIP Law, if the perpetrator is acquitted on appeal or cassation, the court orders in its ruling that the restitution money be returned to the victim (Article 48 paragraph (7)). Under the Criminal Procedure Code, if the case itself is not appealed, an appeal concerning the compensation alone is not permitted (Article 100 paragraph (2)).
(5) The TIP Law regulates restitution (compensation) for victims and gives prosecutors the authority to represent victims in applying for reimbursement. Under the Criminal Procedure Code, the compensation claim must be submitted by the victim or the victim's family themselves through a joinder of cases.
Within this dualism, settlement of restitution through the mechanism contained in the Criminal Procedure Code is considered more effective in pressing for compensation for victims of human trafficking, but only under several conditions, such as law enforcement at the police, prosecutor, and court levels sharing the same perception of awarding losses in the form of restitution to the victim. Restitution does not go beyond what a court ruling on an act of trafficking may order, as it is governed by Article 48 paragraph (1) of the TIP Law, which provides that every victim of the crime is entitled to compensation. In this instance, imposing a penalty on the convict is insufficient to resolve the issue, even if imposing a penalty satisfies the aim of punishment, which is to impose suffering on and educate the offender so that he does not commit similar acts in the future. But what about the recovery of the victim, the party most harmed by the criminal conduct? According to Primoratz, punishment is unjustifiable if it has no effect on, or even harms, the perpetrator. Along with these issues, the TIP Law contains several other legal flaws, including the following. The TIP Law does not regulate the Public Prosecutor's authority to pursue ordinary legal remedies, both appeal and cassation, against a court ruling in a human trafficking case. Article 28 of the TIP Law, on the other hand, specifies that investigations, prosecutions, and examinations in court proceedings involving human trafficking are conducted in accordance with the applicable Criminal Procedure Code, unless otherwise specified in this law. Thus, the Criminal Procedure Code becomes the procedural law in this case. This article can be detrimental to the victim with respect to the right to restitution, which is governed by Articles 98-101 of the Criminal Procedure Code. If the criminal case is not appealed, a separate request for review of the compensation decision is denied, and the victim is harmed by having to accept that outcome. After all, if the defendant files an appeal, the joined civil claim is automatically reopened; otherwise, the victim cannot challenge a decision denying compensation or restitution on the basis of his or her material and immaterial losses. According to the explanation of Article 48 paragraph (1) of the TIP Law, the procedure for requesting compensation begins when the victim reports the incident to the local police and is handled by the investigator assigned to the crime. Additionally, the Public Prosecutor advises the victim of her right to seek restitution and submits the amount of loss in the claim (requisitoir). This method does not preclude the victim from suing for her loss; nonetheless, although the Public Prosecutor has the authority to seek restitution, the implementation mechanism is not governed by laws or regulations, for example in terms of limiting the amount of reimbursement claimed. The requirements governing the restitution process are not contained in the body of the article. According to Article 48 paragraph (5) of the TIP Law, reparation may be deposited first with the court that decides the case. As a result of the above provisions, it is clear that the TIP Law contains provisions that contradict the law's spirit of victim protection, specifically the clauses on voluntary restitution deposits.
By comparison, the elucidation of the article states that restitution in the form of money is deposited with the court in accordance with the applicable rules and regulations. This provision is analogous to resolving civil disputes through conciliation, and the deposit of restitution funds begins at the investigation stage. The article's use of the word "may", rather than a word such as "must", implies that depositing restitution with the court first is optional. Ideally, the wording should be changed to a mandatory formulation. A mandatory formulation carries a connotation of firmness or urgency, implying that the statutory order must be followed without exception. In other words, perpetrators of human trafficking should be compelled to make the deposit; if no coercive measures are available, the rule will be ineffective, because a perpetrator who keeps withholding restitution funds from the court faces no sanction. This prevents one component of the legal system, the application of its rules, from being realized; and if one component of the legal system fails to function properly, the provisions as a whole become ineffective.
D. CONCLUSION
Based on the analysis of legal sources, the government's efforts to eradicate human trafficking involving Indonesian workers include issuing various legal instruments, ranging from the constitution to implementing regulations, and cooperating bilaterally, regionally, and multilaterally with other countries. However, in protecting the rights of victims of human trafficking, a dualism remains between the TIP Law and the Criminal Procedure Code in the settlement of restitution for victims, and this makes solving the problem of human trafficking difficult. It is therefore necessary to review the substance of the TIP Law, also because of other shortcomings contained in it, including the unregulated authority of the Public Prosecutor to pursue further legal remedies, the lack of information for the public and for law enforcement on the ideal mechanism for filing restitution claims, and, in short, the question of substitute confinement when restitution is not paid.
2022-03-11T16:09:34.625Z
2022-02-10T00:00:00.000
{ "year": 2022, "sha1": "6760f073b8e797ec59adb5156564be6003714a5f", "oa_license": "CCBYSA", "oa_url": "https://ejournal.goacademica.com/index.php/japp/article/download/493/459", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "641a4fe9755929cd40e3c71c7ce126fa8a1ad6e1", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [] }
10987393
pes2o/s2orc
v3-fos-license
DeepID3: Face Recognition with Very Deep Neural Networks
The state-of-the-art of face recognition has been significantly advanced by the emergence of deep learning. Very deep neural networks recently achieved great success on general object recognition because of their superb learning capacity. This motivates us to investigate their effectiveness on face recognition. This paper proposes two very deep neural network architectures, referred to as DeepID3, for face recognition. These two architectures are rebuilt from the stacked convolution and inception layers proposed in VGG net and GoogLeNet to make them suitable for face recognition. Joint face identification-verification supervisory signals are added to both intermediate and final feature extraction layers during training. An ensemble of the proposed two architectures achieves 99.53% LFW face verification accuracy and 96.0% LFW rank-1 face identification accuracy, respectively. A further discussion of the LFW face verification results is given at the end.
Introduction
Using deep neural networks to learn effective feature representations has become popular in face recognition [12,20,17,22,14,13,18,21,19,15]. With better deep network architectures and supervisory methods, face recognition accuracy has been boosted rapidly in recent years. In particular, a few noticeable face representation learning techniques have evolved recently. An early effort of learning deep face representation in a supervised way was to employ face verification as the supervisory signal [12], which required classifying a pair of training images as being the same person or not. It greatly reduced the intra-personal variations in the face representation. Then learning discriminative deep face representation through large-scale face identity classification (face identification) was proposed by DeepID [14] and DeepFace [17,18]. By classifying training images into a large number of identities, the last hidden layer of deep neural networks would form rich identity-related features. With this technique, deep learning got close to human performance for the first time on tightly cropped face images of the extensively evaluated LFW face verification dataset [6]. However, the learned face representation could also contain significant intra-personal variations. Motivated by both [12] and [14], an approach to learning deep face representation by joint face identification-verification was proposed in DeepID2 [13] and was further improved in DeepID2+ [15]. Adding verification supervisory signals significantly reduced intra-personal variations, leading to another significant improvement in face recognition performance. Human face verification accuracy on the entire face images of LFW was finally surpassed [13,15]. Both GoogLeNet [16] and VGG [10] ranked at the top in general image classification in ILSVRC 2014. This motivates us to investigate whether the superb learning capacity brought by very deep net structures can also benefit face recognition. Although supervised by advanced supervisory signals, the network architectures of DeepID2 and DeepID2+ are much shallower compared to recently proposed high-performance deep neural networks in general object recognition such as VGG and GoogLeNet. VGG net stacked multiple convolutional layers together to form complex features. GoogLeNet is more advanced by incorporating multi-scale convolutions and pooling into a single feature extraction layer coined inception [16].
To learn efficiently, it also introduced 1x1 convolutions for feature dimension reduction. In this paper, we propose two deep neural network architectures, referred to as DeepID3, which are significantly deeper than the previous state-of-the-art DeepID2+ architecture for face recognition. DeepID3 networks are rebuilt from basic elements (i.e., stacked convolution or inception layers) of VGG net [10] and GoogLeNet [16]. During training, joint face identification-verification supervisory signals [13] are added to the final feature extraction layer as well as a few intermediate layers of each network. In addition, to learn a richer pool of facial features, weights in higher layers of some of the DeepID3 networks are unshared. Being trained on the same dataset as DeepID2+, DeepID3 improves the face verification accuracy from 99.47% to 99.53% and the rank-1 face identification accuracy from 95.0% to 96.0% on LFW, compared with DeepID2+. The "true" face verification accuracy when wrongly labeled face pairs are corrected, together with a few hard test samples, will be further discussed at the end.
DeepID3 net
For comparison purposes, we briefly review the previously proposed DeepID2+ net architecture [15]. As illustrated in Fig. 1, DeepID2+ net has three convolutional layers followed by max-pooling (neurons in the third convolutional layer share weights in only local regions), followed by one locally-connected layer and one fully-connected layer. Joint identification-verification supervisory signals [13] are added to the last fully-connected layer (from which the final features are extracted for face recognition) as well as to a few fully connected layers branched out from intermediate pooling layers to better supervise early feature extraction processes. The proposed DeepID3 net inherits a few characteristics of the DeepID2+ net, including unshared neural weights in the last few feature extraction layers and the way of adding supervisory signals to early layers. However, the DeepID3 net is significantly deeper, with ten to fifteen non-linear feature extraction layers, compared to five in DeepID2+. In particular, we propose two DeepID3 net architectures, referred to as DeepID3 net1 and DeepID3 net2, as illustrated in Fig. 2 and Fig. 3, respectively. The depth of the DeepID3 net is due to stacking multiple convolution/inception layers before each pooling layer. Continuous convolution/inception helps to form features with larger receptive fields and more complex nonlinearity while restricting the number of parameters [10]. The proposed DeepID3 net1 takes two continuous convolutional layers before each pooling layer. Compared to the VGG net proposed in previous literature [10,19], we add additional supervisory signals in a number of fully connected layers branched out from intermediate layers, which helps to learn better mid-level features and makes optimization of a very deep neural network easier. The top two convolutional layers are replaced by locally connected layers. With unshared parameters, top layers could form more expressive features with a reduced feature dimension. The last locally connected layer of our DeepID3 net1 is used to extract the final features without an additional fully connected layer. DeepID3 net2 starts with every two continuous convolutional layers followed by one pooling layer, as in DeepID3 net1, while taking inception layers [16] in later feature extraction stages: there are three continuous inception layers before the third pooling layer and two inception layers before the fourth pooling layer.
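The joint identification-verification supervision referred to above is, in the formulation of [13], a softmax identification loss over the training identities combined with a verification term that pulls together features of the same person and pushes apart features of different people by at least a margin. The NumPy sketch below is a minimal illustration of that kind of objective; the function names, the margin, and the weight lam are illustrative assumptions, not the exact losses or hyperparameters used to train DeepID3.

```python
import numpy as np

def identification_loss(features, weights, biases, labels):
    """Softmax cross-entropy over identity classes (illustrative)."""
    logits = features @ weights + biases                 # (batch, n_identities)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def verification_loss(f1, f2, same, margin=1.0):
    """Contrastive-style pair loss: pull same-identity features together,
    push different-identity features apart by at least `margin`."""
    d = np.linalg.norm(f1 - f2, axis=1)
    pos = 0.5 * d ** 2                                   # same identity
    neg = 0.5 * np.maximum(0.0, margin - d) ** 2         # different identity
    return np.where(same, pos, neg).mean()

def joint_loss(features, weights, biases, labels, pair_idx, lam=0.05):
    """Weighted sum of the two terms, of the kind attached both to the final
    feature layer and to the branched fully connected layers during training."""
    i, j = pair_idx
    same = labels[i] == labels[j]
    return (identification_loss(features, weights, biases, labels)
            + lam * verification_loss(features[i], features[j], same))
```

In a full training setup this objective would be attached to the last feature extraction layer and to each branched supervisory layer, with gradients flowing back through the shared convolution/inception stack.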
Joint identification-verification supervisory signals are added to fully connected layers following each pooling layer. In the two proposed network architectures, rectified linear non-linearity [9] is used for all layers except the pooling layers, and dropout learning [5] is applied to the final feature extraction layer. Although of significant depth, our DeepID3 networks are much smaller than the VGG net or GoogLeNet proposed for general object recognition, owing to a restricted number of feature maps in each layer. The proposed DeepID3 nets are trained on the same 25 face regions as the DeepID2+ nets [15], with each network taking a particular face region as input. These face regions were selected by feature selection in previous work [13]; they differ in positions, scales, and color channels so that different networks can learn complementary information. After training, these networks are used to extract features from their respective face regions. Then an additional Joint Bayesian model [3] is learned on these features for face verification or identification. All the DeepID3 networks and Joint Bayesian models are learned on the same approximately 300 thousand training samples used for DeepID2+ [15], a combination of the CelebFaces+ [14] and WDRef [3] datasets, and are tested on LFW [6]. People in these two training data sets and the LFW test set are mutually exclusive. The face verification performance on LFW of each individual DeepID3 net is compared to the DeepID2+ net in Fig. 4 on the 25 face regions (with horizontal flipping). On average, DeepID3 net1 and DeepID3 net2 reduce the error rate by 0.81% and 0.26%, respectively, compared to the DeepID2+ net.
Experiments
To reduce redundancy, DeepID3 net1 and net2 are used to extract features on either the original or the horizontally flipped face region, but not both. In test, feature extraction takes 50 forward propagations, with half from DeepID3 net1 and the other half from net2. These features are concatenated into a long feature vector of approximately 30,000 dimensions. With PCA, it is reduced to 300 dimensions, on which a Joint Bayesian model is learned for face recognition. We evaluate DeepID3 networks under the LFW face verification [6] and LFW face identification [1,18] protocols, respectively. For face verification, 6000 given face pairs are verified to tell if they are from the same person. We achieve a mean accuracy of 99.53% under this protocol. Comparisons with previous works on mean accuracy and ROC curves are shown in Tab. 1 and Fig. 5, respectively.
Table 1. LFW face verification accuracy (%): High-dim LBP [4] 95.17 ± 1.13; TL Joint Bayesian [2] 96.33 ± 1.08; DeepFace [17] 97.35 ± 0.25; DeepID [14] 97.45 ± 0.26; GaussianFace [7,8] 98.52 ± 0.66; DeepID2 [13,11] 99.15 ± 0.13; DeepID2+ [15] 99.47 ± 0.12; DeepID3 99.53 ± 0.10.
For face identification, we adopt one closed-set and one open-set identification protocol. For closed-set identification, the gallery set contains 4249 subjects with a single face image per subject, and the probe set contains 3143 face images from the same set of subjects in the gallery. For open-set identification, the gallery set contains 596 subjects with a single face image per subject, and the probe set contains 596 genuine probes and 9494 imposter ones.
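The test-time pipeline described above (50 forward passes over 25 face regions, concatenation to roughly 30,000 dimensions, and PCA reduction to 300 dimensions before the Joint Bayesian model) can be sketched as follows. This is only a schematic illustration: the per-pass feature extractors are placeholders, and a simple distance threshold stands in for the Joint Bayesian model, which the paper actually uses for the pair decision.

```python
import numpy as np

N_PASSES = 50        # 25 face regions, half processed by net1 and half by net2
PCA_DIM = 300        # dimensionality used before the Joint Bayesian model

def build_representation(per_pass_features):
    """Concatenate the per-pass feature vectors (a list of 1-D arrays) into
    one long vector (~30,000 dimensions in the paper)."""
    assert len(per_pass_features) == N_PASSES
    return np.concatenate(per_pass_features)

def fit_pca(X, n_components=PCA_DIM):
    """Plain PCA via SVD on mean-centered training representations."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]        # principal directions as rows

def project(x, mean, components):
    """Reduce one concatenated representation to the PCA subspace."""
    return components @ (x - mean)        # 300-dim feature

def verify(x1, x2, threshold):
    """Stand-in pair decision: a thresholded Euclidean distance instead of
    the Joint Bayesian model learned on the 300-dim features."""
    return np.linalg.norm(x1 - x2) < threshold
```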
Discussion
There are three test face pairs which are labeled as the same person but are actually different people, as announced on the LFW website. Among these three pairs, two are classified as the same person while the other one is classified as different people by our DeepID3 algorithm. Therefore, when the labels of these three face pairs are corrected, the actual face verification accuracy of DeepID3 is 99.52%. For DeepID2+ [15], the same correction applies, and the improvement of DeepID3 over DeepID2+ on face verification then essentially vanishes (see the Conclusion). We examine the test face pairs in LFW which are wrongly classified by all the DeepID series algorithms, including DeepID [14], DeepID2 [13,11], DeepID2+ [15], and DeepID3. There are nine common false positives and three common false negatives in total, around half of all face pairs wrongly classified by DeepID3. The three face pairs labeled as the same person but classified as different people are shown in Fig. 6. The first pair of faces shows a great contrast in age. The second pair is actually of different people, due to errors in labeling. The third one is an actress with significantly different makeup. Fig. 7 shows the nine face pairs labeled as different people while being classified as the same person by the algorithms. Most of them look similar or have interference such as occlusions.
Conclusion
This paper proposes two significantly deeper neural network architectures, coined DeepID3, for face recognition. The proposed DeepID3 networks achieve state-of-the-art performance on both LFW face verification and identification tasks. However, when a few wrong labels in LFW are corrected, the improvement of DeepID3 over DeepID2+ on LFW face verification vanishes. The effectiveness of very deep neural networks will be further investigated on larger-scale training data in the future.
2015-02-03T06:28:55.000Z
2015-02-03T00:00:00.000
{ "year": 2015, "sha1": "b8084d5e193633462e56f897f3d81b2832b72dff", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b8084d5e193633462e56f897f3d81b2832b72dff", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
221077948
pes2o/s2orc
v3-fos-license
Emerging Insights on the Biological Impact of Extracellular Vesicle-Associated ncRNAs in Multiple Myeloma
Increasing evidence indicates that extracellular vesicles (EVs) released from both tumor cells and the cells of the bone marrow microenvironment contribute to the pathobiology of multiple myeloma (MM). Recent studies on the mechanisms by which EVs exert their biological activity have indicated that the non-coding RNA (ncRNA) cargo is key in mediating their effect on MM development and progression. In this review, we will first discuss the role of EV-associated ncRNAs in different aspects of MM pathobiology, including proliferation, angiogenesis, bone disease development, and drug resistance. Finally, since ncRNAs carried by MM vesicles have also emerged as a promising tool for early diagnosis and therapy response prediction, we will report evidence of their potential use as clinical biomarkers.
Introduction
Multiple myeloma (MM) is a hematological disease caused by the monoclonal expansion of malignant plasma cells (PCs) within the bone marrow. It evolves from two asymptomatic conditions, namely monoclonal gammopathy of uncertain significance (MGUS) and smoldering multiple myeloma (SMM) [1]. Diagnostic criteria for the different stages include a bone marrow PC infiltration of less than 10% in MGUS, increasing up to 60% in subjects with SMM, who have a higher risk of progression to overt disease [2,3]. The major complication observed in patients with MM is the occurrence of osteolytic lesions in the later stages of the neoplasm. The therapeutic scenario of MM has evolved in the last two decades; currently, the combination of proteasome inhibitors such as bortezomib with immunomodulatory agents like lenalidomide and pomalidomide is the preferred pharmacological choice [4]. The use of bisphosphonates is intended for the management of patients with osteolytic bone disease [5,6]. However, MM unfortunately remains an incurable disease.
Piwi-interacting RNAs (piRNAs) work similarly; these are highly enriched in the germline tissues where, in combination with Piwi proteins, they selectively silence mobile genetic elements (transposons). LncRNAs localize both in the nucleus and in the cytoplasm of cells, thus controlling gene expression at different levels. Inside the nucleus, lncRNAs can control chromatin condensation, such as the X-inactive-specific transcript (XIST), which has a role in the formation of Barr bodies (reviewed in [35]). However, the ability to recruit chromatin modifiers such as the polycomb complex is common to several lncRNAs, i.e., HOTAIR (Hox antisense intergenic RNA), antisense non-coding RNA in the INK4 locus (ANRIL), and metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) [36]. In addition, by physical interaction with DNA, lncRNAs regulate transcription by favoring or inhibiting transcription factor recruitment and RNA polymerase activity [37,38]. Meanwhile, by orchestrating protein and nucleic acid interactions, lncRNAs give shape to nuclear bodies [39]. Mature lncRNAs, provided with 5′-capping, polyadenylation, and methylation of adenosine (N6-methyladenosine, m6A), are exported to the cytoplasm, where they control post-transcriptional gene regulation. Once associated with RNA binding proteins (RBPs) such as HuR, they compete with mRNAs, thus controlling their stability or alternative splicing [40].
Recent work aimed at correlating lncRNA expression levels with the onset/progression of malignancies has also disclosed the capability of lncRNAs to sponge miRNAs, thus protecting mRNAs from miRNA-mediated targeting; this was largely demonstrated, e.g., for lncH19, NEAT1, MALAT1, and HOTAIR [41][42][43][44][45][46]. Strongly involved in the regulation of mRNA stability is the family of circular RNAs (circRNAs), ncRNAs lacking free 5′ or 3′ ends, generated by the splicing machinery through a process called back-splicing. circRNAs emerged relatively recently and were found to be involved in various cellular events, both in physiological and pathological conditions. CircRNAs work as competing endogenous RNAs, thus sponging miRNAs and RNA-binding proteins and competing with linear splicing; they have been found localized in nuclear and cytoplasmic compartments, and growing evidence has demonstrated that they can also be transported between cells by EVs [47]. The identification of new forms of RNAs has led to a scientific fervor aimed at understanding their mechanisms of action, while the discovery of circulating non-coding RNA, associated or not with EVs, emerged from high-throughput RNA sequencing studies. To sort through this complex amount of information, the Extracellular RNA Communication Consortium (ERCC) developed an exRNA Atlas (https://exrna-atlas.org), collecting data from different human biofluids. This represents a map of cell-cell communication mediated by extracellular RNA, associated with vesicular and non-vesicular (RNP or lipoprotein) extracellular RNA carriers [48]. Mainly focused on non-coding RNA carried by EVs is exoRBase (http://www.exoRBase.org), which collects circRNA, lncRNA, and mRNA data derived from RNA-seq analyses of human blood exosomes [49]. Conversely, focused on miRNAs is the EVmiRNA (http://bioinfo.life.hust.edu.cn/EVmiRNA#!/) database, which collects comprehensive miRNA profiles in EVs [50]. Despite the technical constraints of each study, with limits and biases associated with RNA extraction and/or vesicle isolation protocols, it is nevertheless possible to make some general considerations about the most abundant types and the roles of non-coding RNAs in the EVs. Starting from the pioneering studies of Valadi [25] and Nolte-'t Hoen and colleagues [51], subsequent research has confirmed that EVs are enriched in small non-coding RNAs, such as vault RNA, Y-RNA, and specific tRNAs [51], repeat and transposable elements, small nuclear RNA (snRNA), and signal recognition particle RNA (srpRNA) [52]. Increasing evidence has suggested that the presence of these ncRNAs in the EVs is a highly controlled and specific process; therefore, the evaluation of the mechanisms underlying the sorting of these ncRNAs into the EVs is among the most interesting questions for researchers. In a study by Villarroya-Beltri et al., it was found that the most abundant miRNAs in EVs from human primary T-cells possess a specific sequence called the EXOmotif (GGAG); this sequence is bound by the heterogeneous nuclear ribonucleoprotein A2/B1 (hnRNPA2B1), which guides the sorting process of miRNAs into EVs [29]. In accordance with this groundbreaking study, another group demonstrated that the loading of specific miRNAs in hepatocyte-derived EVs depends on the presence of the EXO motif, GGCU, which is recognized by the RBP SYNCRIP [30]. Overall, these findings provide the basis for the identification of further sequence-specific transport systems.
Another important aspect analyzed by the scientific community concerns the abundance of ncRNAs in EVs as compared with the EV-producing cells. Today, RNA-seq technologies allow the qualitative comparison of the RNA species in the two biological samples; however, various limits still exist for the definition of their relative abundances, and it remains difficult to assess whether an enrichment in the EVs exists. The absence of exclusive housekeeping genes for vesicle samples [53], as well as the heterogeneity of EVs produced by the same cell type [28], still represent important challenges to be addressed.
EV-Associated ncRNAs Contribute to Tumor Pathobiology
The generation of a complex network of interactions among different cells of the bone marrow microenvironment is critical in MM pathophysiology. In this context, EVs released by several cell types, including MM cells and host cells, contribute to MM onset and progression [54,55]. In this section, we will discuss findings highlighting the pivotal contribution of EV-ncRNAs to MM onset and progression, focusing on different aspects of MM pathobiology.
EV-ncRNAs in MM Proliferation and Spreading
The EV-mediated crosstalk between MM cells and bone marrow (BM) cells has been found to affect the proliferation, survival, and aggressiveness of cancer cells [55,56]. The ncRNA content of EVs is responsible for this phenomenon, which seems to involve EVs from both cancer and normal cells [57]. Cheng and colleagues observed that three different human MM cell lines release EVs with a high content of miR-21 [58], which is involved in MM initiation and recurrence [59]. In this pioneering study, the authors found that EVs from the OPM2 cell line were able to promote MSC proliferation and cancer-associated fibroblast (CAF) transformation by increasing SDF-1, FAP, and α-SMA expression levels. The observed effects were ascribed to the presence of miR-21 in EVs; in fact, the inhibition of miR-21 led to a decrease in the CAF markers [58]. Along with miR-21, the authors also examined the role of miR-146a, since it was also found to be highly abundant in EVs and correlated with cancer progression [60]. They demonstrated that this miRNA can be transferred to MSCs after treatment with OPM2 EVs, inducing an increase in IL-6, although the underlying mechanism remains to be clarified [58]. Other studies correlated the presence of miR-146a in MM EVs with their pro-inflammatory effects in target cells. De Veirman and colleagues correlated this with the IL-6 increase in MSCs treated with EVs from U266 [61]. In the same study, the authors demonstrated that EV miR-146a stimulated the expression and release in MSCs of factors affecting MM viability and migration, namely CXCL1, IP-10, and CCL5 [61]. The authors hypothesized the involvement of the Notch signaling pathway in these effects. Indeed, Notch signaling inhibition using the gamma secretase inhibitor DAPT resulted in a reduced expression of its target genes, Hes5 and Hey2, and of the pro-inflammatory cytokines released by MSCs transfected with miR-146 mimics [61]. Several studies have revealed that EVs from bone marrow stromal cells (BMSCs) of MM patients also play a key role in disease progression. A recent study showed that the EVs released by BMSCs isolated from MM patients have a different miRNA profile compared to EVs from healthy donors. In particular, the authors demonstrated that MM-BMSC EVs carry higher amounts of miR-10a, miR-346, and miR-135b.
Among these miRNAs, miR-10a was transferred to MM cells by MM-BMSC EVs, leading to tumor cell proliferation. Bioinformatic analysis identified TAK1 and β-TRC as being positively regulated by miR-10a; interestingly, by using a β-TRC inhibitor, miR-10a-mediated MM cell proliferation was arrested [62]. Roccaro et al. treated MM cells with BMSC-derived EVs from healthy donors (HD) or MM patients; the authors observed a reduction in the proliferation of MM cells when treated with HD-BMSC EVs. They further analyzed the miRNA content of EVs and found a higher level of miR-15a in HD-BMSC-derived EVs compared to those of MM- and MGUS-BMSCs [56]. Moreover, miR-15a, which is considered a tumor suppressor miRNA in MM [63], was underexpressed in MM cells. To better understand the role of miR-15a in the HD-BMSC EV-mediated anti-proliferative effect on MM cells, the authors isolated BMSC EVs from wild-type or miR-15a/16-1 −/− mice and found that miR-15a/16-1 −/− BMSC-derived EVs did not stimulate MM cell proliferation. By performing gain- and loss-of-function assays, the authors confirmed that miR-15a acts as a tumor suppressor miRNA [56]. The involvement of the long non-coding RNA cargo of MSC EVs in the promotion of MM cell proliferation has also been documented. A recent study revealed that LINC00461 is highly expressed both in MM cell lines and in MM patients' plasma cells, where it exerts an anti-apoptotic role [64]. LINC00461 is transferred to MM cells through MSC-derived EVs and sponges miR-15a and miR-16, inhibiting their expression and upregulating their target, Bcl2, which in turn favors MM cell proliferation and suppresses apoptosis [64].
EV-ncRNAs in MM Angiogenesis
Bone marrow angiogenesis is essential for MM progression. Recent studies have shown that MM EVs enhance angiogenesis since they increase the viability of the endothelium by modulating different signaling pathways, such as the STAT3, JNK, AKT, p38, and p53 pathways [65]. MM EVs have been shown to induce endothelial cell proliferation [66] by delivering several angiogenic factors such as angiogenin, basic fibroblast growth factor (bFGF), and vascular endothelial growth factor (VEGF) [65] and by inducing endothelial cells to secrete IL-6 and VEGF [67]. ncRNAs have also been demonstrated to act as molecular mediators of EV-mediated angiogenesis. Interestingly, a study demonstrated that MM EVs released under hypoxic conditions (H-EVs) significantly increase the tube formation of endothelial cells compared with EVs derived from MM cells under normoxic conditions (N-EVs) [68]. By analyzing the miRNA profile of EVs, the authors found that the miRNA content of H-EVs differed from that of N-EVs; in particular, H-EVs had higher levels of miR-210 and miR-135b compared to N-EVs. Nevertheless, miR-210 was upregulated in the endothelial cell line HUVEC under hypoxic conditions regardless of treatment with MM EVs; the miR-135b level, instead, was dependent on EV treatment. H-EV miR-135b was transferred to endothelial cells and increased angiogenesis both in vitro and in vivo through the suppression of factor inhibiting HIF-1 (FIH-1), a negative regulator of HIF-1 [68]. Among the ncRNAs playing a key role in MM angiogenesis is piRNA-823. It is noteworthy that MM patients' plasma cells have a higher level of piRNA-823 compared to healthy individuals' cells, and overexpression of this piRNA stimulated MM cell proliferation and angiogenesis [69].
Recently, it was found that MM cells package piRNA-823 in their EVs for subsequent transfer to endothelial cells, enhancing their proliferation through the inhibition of pro-apoptotic proteins and the triggering of reactive oxygen species (ROS); the mechanisms underlying these effects have not yet been clarified. Moreover, EV piRNA-823 could promote endothelial cell release of IL-6 and VEGF, thus favoring angiogenesis. Finally, piRNA-823 triggered the expression of ICAM-1 and CXCR4, two essential molecules in the cell invasion process [70].
EV-ncRNAs in MM Bone Disease
The main complication of MM is osteolytic bone disease, which affects more than 80% of patients [71][72][73]. Under physiological conditions, bone homeostasis is maintained by the balanced activity of osteoblasts (OBs) and osteoclasts (OCs). In MM, this balance is lost since osteoblast functions are inhibited while osteoclasts are hyper-activated, thus promoting the occurrence of osteolytic lesions [72][73][74]. This condition is due to the crosstalk within the bone marrow niche, among MM cells and surrounding cells, mediated by secreted factors, including EVs [71,74]. In recent years, increasing evidence has shown that EVs, and specifically their ncRNA content, are responsible for the onset of bone disease since they can promote osteoclast activation and inhibit the osteogenic differentiation of mesenchymal stem cells (MSCs) [75][76][77][78][79]. A groundbreaking study by Li and colleagues was the first to demonstrate that MSCs internalize MM cell-derived exosomes, which causes a reduction in MSC bone nodule formation, a key marker of osteogenic differentiation, compared to control cells treated with phosphate-buffered saline (PBS) or with MM-conditioned medium depleted of exosomes [77]. They identified in MM-EVs lncRUNX2-AS1, a long non-coding RNA that is negatively correlated with the expression of RUNX2, a marker of osteogenic differentiation [72]. EV-carried lncRUNX2-AS1 forms a duplex with RUNX2 pre-mRNA, thus blocking its splicing and leading to the inhibition of the osteogenic differentiation of target cells. The in vitro results were also confirmed in in vivo experiments by treating mice xenografted with MM cells with GW4869, an inhibitor of EV secretion. The MSCs derived from mice treated with GW4869 presented higher levels of RUNX2 and lower lncRUNX2-AS1 expression with respect to MSCs derived from mice of the control group [77]. Moreover, another group confirmed that treatment of MSCs with MM-EVs inhibited osteoblast differentiation and reported the presence of some miRNAs in EVs with a putative role in MM bone disease [78]. They found that treatment of MSCs with MM-EVs caused an increase in miR-103a-3p levels, which inhibited bone formation through RUNX2 targeting [80]. Indeed, miR-103a-3p upregulation in MSCs led to decreased osteoblastogenesis [78]. Similarly, in a recent study, we identified a panel of miRNAs packed in MM-EVs from MM cell lines or primary plasma cells from bone marrow aspirates that can negatively regulate osteogenesis [79]. Among them, we identified and characterized miR-129-5p, which targets various mRNAs involved in osteoblast differentiation [81][82][83][84]. Moreover, we observed that this miRNA is transferred to MSCs by MM-EVs, causing a downregulation of SP-1, a transcription factor implicated in osteogenesis through the targeting of alkaline phosphatase (ALPL) [79] and also involved in MM cell proliferation [85].
Overall, in this section, we have reported the studies that correlate the ncRNA content of EVs with disease onset or progression. The evidence discussed above is summarized in the Figure 1 cartoon.
EV-ncRNAs Mediate Drug Resistance in MM
Although considerable progress has been made in the development of effective treatment strategies, MM remains an incurable malignancy. Different therapeutic approaches have been adopted over time, including immunomodulatory drugs (pomalidomide), proteasome inhibitors (bortezomib, carfilzomib, and ixazomib), histone deacetylase inhibitors (panobinostat), and monoclonal antibodies (elotuzumab and daratumumab) [9,86]. One of the limitations of the existing treatments is the emergence of drug resistance [9,86]. Accumulating evidence shows that EVs could participate in this process [87,88]. It was found that BMSC-derived EVs isolated from both normal donors and MM patients could induce drug resistance in human MM cells; in fact, EVs increased MM cell viability by 25% in the presence of bortezomib and by 9% in the absence of the drug [87].
Furthermore, Faict and colleagues highlighted that MM cell treatment with melphalan or bortezomib induced the release of EVs with a higher amount of acid sphingomyelinase, which could have a role in the MM drug resistance mechanism [88]. Intriguingly, results from different studies have underlined that ncRNAs contained in EVs are partially responsible for this phenomenon [89,90]. In a study focused on bortezomib resistance, the authors identified, through a microarray approach, the miRNA content of EVs isolated from the peripheral blood of bortezomib-resistant (Bz-resistant) MM patients. First, the authors found a significant difference in the total RNA content of EVs purified from the Bz-responsive and Bz-resistant groups. In particular, a higher concentration of EV-RNA was found in Bz-resistant samples; the reason underlying this has yet to be investigated. Furthermore, 83 miRNAs were found to be more abundant and 88 less abundant in the EVs of Bz-resistant patients compared to EVs from Bz-responders. miRNAs present at different levels included miR-513a-5p, miR-20b-3p, and let-7d-3p (more abundant) and miR-125b-5p, miR-19a-3p, miR-21-5p, miR-20a-5p, miR-17-5p, miR-15a-5p, and miR-16-5p (less abundant). By performing computational analyses, the authors speculated that these miRNAs could have a role in the bortezomib-resistance mechanism, as they participate in post-transcriptional regulation by modulating co-factors of the MAP kinase and ubiquitin-conjugating enzyme activity pathways [89]. Along with cancer cells, BMSCs may take part in the drug resistance process. BM stromal cells are involved both in the onset and in the maintenance of the drug-resistant phenotype of MM cells [91][92][93]. Xu and colleagues studied the role of EV-ncRNAs transferred from MSCs to MM cells in the resistance to proteasome inhibitors [90]. The authors demonstrated that treatment of MM cells with EVs from MSCs isolated from bortezomib-resistant patients (r-MSCs) reduced cancer cell sensitivity to proteasome inhibitors; conversely, treating MM cells with EVs from MSCs of bortezomib-sensitive patients (s-MSCs) did not affect the tumor response to therapy. By analyzing the EV content, the authors identified the presence of PSMA3 and PSMA3-AS1 transcripts in MSC-derived EVs, especially in vesicles isolated from resistant patients' cells. PSMA3 encodes the proteasome type-3 alpha subunit [94], whereas PSMA3-AS1 is a lncRNA that modulates PSMA3 levels by increasing its stability. PSMA3 and PSMA3-AS1 expression levels were upregulated in MM cells treated with r-MSC-EVs but not in cells treated with s-MSC-EVs. The upregulation of these two transcripts led to enhanced proteasome activity, which could explain the resistance to proteasome inhibitors. These results were also confirmed in vivo using mice xenografted with U266-luc cells. The authors demonstrated that PSMA3-AS1 downregulation via siRNAs increased the sensitivity of MM cells xenografted in mice to proteasome inhibitors [90].
EV-ncRNAs as Diagnostic and Prognostic Biomarkers in MM
Nucleic acids circulating in biological fluids represent an important source of biomarkers for the early detection of cancer and the monitoring of treatments. This preventive, diagnostic, and monitoring approach is currently known as liquid biopsy, a minimally invasive procedure for the patient.
In particular, the advantage of liquid biopsy in oncology is to obtain the tumor molecular profile when the biopsy from the primary or metastatic tumor is not feasible due to risks associated with the surgical procedure. The existence of extracellular RNAs [95,96], also called circulating tumor RNAs (ctRNAs), in serum or plasma as well as in other biological fluids such as urine, saliva, cerebrospinal, seminal, and ascitic fluid, has highlighted the possibility that they may represent ideal candidates as tumor biomarkers. To date, increasing scientific evidence has indicated that several miRNAs and lncRNAs are found in the plasma of MM patients and that these may correlate with the disease stage, thus representing potential biomarkers. In this context, the majority of studies have focused on circulating miRNAs differentially expressed in the distinctive monoclonal gammopathies. In a study by Jones et al., differences in serum miRNA levels of healthy subjects and MM and MGUS patients were identified. Among these, miR-720 was higher in MM and MGUS patients compared with healthy subjects, while miR-1308 was lower; also, the combination of miR-1246 and miR-1308 levels might be used to discriminate MGUS from MM [97]. Similarly, plasma levels of miR-92a were found to be lower in MM patients than in healthy and MGUS individuals [98]. Interestingly, increased levels of miR-214 and miR-135b were found in the serum of MM patients with osteolytic lesions, thus representing a predictive marker of MM bone disease [99]. More recently, a study characterized the peripheral blood plasma transcriptomic profile of newly diagnosed and relapsed and refractory MM patients, and this was compared to that of healthy individuals [100]. The authors not only identified differences in the expression of protein-coding genes but also reported variations in the levels of some non-coding genes; these include antisense genes such as FAM83C-AS1, ZNF32-AS1, TMC3-AS1, and TAT-AS1, long intergenic noncoding RNA (LincRNA) such as LINC00863, LINC01123, LINC00349, LINC00677, and LINC00462, and microRNAs including miR-301A, miR-378H, miR-425, and miR-647 [100]. Circulating non-coding RNA has proven to be a predictive tool to monitor patient response to therapies with lenalidomide and dexamethasone [101]. In this study, the authors analyzed the profiles of miRNAs from the serum of patients with relapsed/refractory MM (RRMM) exhibiting different responses to treatment. Although various miRNAs were differentially expressed between the two groups, the levels of five of these (miR-26a-5p, miR-29c-3p, miR-30b-5p, miR-30c-5p, and miR-331-3p) were significantly reduced in partially-responsive patients [101]. To date, the major limitations for the use of ctRNAs as biomarkers in clinical settings are their instability in biological fluids and the difficulty of processing and analysis [102,103]. However, RNAs are also contained in EVs that, thanks to their stability and abundance, may represent today a great opportunity for cancer biomarker development, also for MM. In particular, increasing evidence indicates that the analysis of ncRNA levels in serum/plasma EVs could be a useful tool for the differential diagnosis of patients with different gammopathies [104] and the prediction of patient outcomes [105]. 
For example, a comparison of the miRNAs in the vesicles of patients with MM and SMM and healthy subjects showed differences between the different groups; moreover, in the same study, it was found that the levels of miRNAs associated with vesicles differ from those circulating in serum [104]. Among miRNAs differentially packed in EVs of affected subjects, let-7c-5p, miR-20a-5p, miR-103a-3p, miR-140-3p, and miR-185-5p were consistently lower in patients with MM than in those with SMM, while miR-4505 and miR-4741 were higher [101]. In our recent study, we found higher levels of miR-129-5p in vesicles isolated from the bone marrow of MM patients than in SMM [79]; since this miRNA targets different mRNAs involved in osteoblast differentiation [81][82][83][84], its presence in plasma EVs could be relevant to discriminate between the two pathological conditions. In addition to miRNAs, lncRNAs delivered by EVs have the potential to distinguish patients with different monoclonal gammopathies. Characterization of the lncRNA content of vesicles isolated from the peripheral blood of MM, MGUS, and healthy individuals showed differential amounts of various lncRNAs [106]; in particular, among 84 identified lncRNAs, the levels of PRINS (psoriasis susceptibility-related RNA gene induced by stress) discriminated patients with monoclonal gammopathies from healthy subjects. Importantly, the abundance of PRINS in the EVs from patients correlated with clinical parameters such as bone marrow plasma cell infiltration rate, albumin, creatinine, and lactate dehydrogenase levels [106]. The identification of EV-associated ncRNAs may also have a prognostic impact. In a cohort of MM patients, the correlation between EV-miRNAs and patient outcomes revealed that the levels of let-7b and miR-18a were predictors of overall survival; in particular, low levels of these two miRNAs were associated with poor outcomes [105]. EV-associated ncRNAs have also emerged as tools for the non-invasive evaluation of the therapeutic response. For example, the identification of the miRNA profile of EVs from MM patients' plasma revealed different miRNA patterns between patients who were either responsive or resistant to bortezomib treatment. Among them, levels of miR-16-5p, miR-15a-5p, miR-20a-5p, and miR-17-5p were reduced in the EVs from resistant patients [89]. Altogether, in this section, we have reported findings suggesting that EV-associated ncRNAs represent new biomarkers for the differential diagnosis of monoclonal gammopathies and the monitoring of the therapeutic response (Figure 2).
Figure 2. Schematic representation of the clinical value of EV-associated ncRNAs: EVs can be isolated from the peripheral blood or the bone marrow aspirates of patients with different gammopathies. EV-ncRNAs may serve as biomarkers for the differential diagnosis, prognosis, or prediction of therapeutic response.
Conclusions
In summary, we have reported and discussed evidence indicating that ncRNAs contribute to the pathogenetic activity of EVs in MM. In particular, the involvement of ncRNAs contained in EVs in the proliferation of tumor and microenvironmental cells, drug resistance, increased angiogenesis, and the bone disease associated with MM is now clear. However, it has to be underlined that, although many works have confirmed a lower abundance of miRNAs in EVs with respect to other ncRNAs [107][108][109], most of the functional studies on the EV-mediated transfer of non-coding RNAs, some of which are reviewed above, focus on miRNAs. Further in-depth characterization of the ncRNA cargo of vesicles is therefore necessary and will allow us to identify new possible therapeutic targets as well as to develop diagnostic and prognostic tools. Furthermore, additional validation studies of the identified EV-ncRNAs are needed before translation into the clinical setting. The tables reported below summarize the major findings on the biological (Table 1) and clinical (Table 2) impact of EV-ncRNAs in MM.
2020-08-06T09:07:33.308Z
2020-08-05T00:00:00.000
{ "year": 2020, "sha1": "a5414342053da0eb22328387dbd25e47d9b07feb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2311-553X/6/3/30/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b129637c4fcbf04a86b8d6b7cbbf9d2df9a12ebe", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
2577351
pes2o/s2orc
v3-fos-license
Current status of the plant phosphorylation site database PhosPhAt and its use as a resource for molecular plant physiology
As the most studied post-translational modification, protein phosphorylation is analyzed in a growing number of proteomic experiments. These high-throughput approaches generate large datasets, from which specific spectrum-based information can be hard to find. In 2007, the PhosPhAt database was launched to collect and present Arabidopsis phosphorylation sites identified by mass spectrometry from and for the scientific community. At present, PhosPhAt 3.0 consolidates phosphoproteomics data from 19 published proteomic studies. Out of 5460 listed unique phosphoproteins, about 25% have been identified in at least two independent experimental setups. This is especially important when considering issues of false positive and false negative identification rates and data quality (Durek et al., 2010). This valuable data set encompasses over 13205 unique phosphopeptides, with unambiguous mapping to serine (77%), threonine (17%), and tyrosine (6%). Sorting the functional annotations of experimentally found phosphorylated proteins in PhosPhAt using Gene Ontology terms shows an over-representation of proteins in regulatory pathways and signaling processes. A similar distribution is found when the PhosPhAt predictor, trained on experimentally obtained plant phosphorylation sites, is used to predict phosphorylation sites for the Arabidopsis genome. Finally, the possibility to insert a protein sequence into the PhosPhAt predictor allows species-independent use of the prediction resource. In practice, PhosPhAt also allows easy exploitation of proteomic data for the design of further targeted experiments.
INTRODUCTION
Protein post-translational modifications (PTMs) are one of the fastest processes through which plants respond to various stimuli. Thus, they have increasingly become the focus of scientific studies. Among the various PTMs, phosphorylation is one of the most studied modifications, due to the large number of affected proteins and its involvement in many cellular processes like signaling, nutrient uptake, and transport. In the mammalian field, the function of particular phosphorylation sites for activation or inactivation of proteins or as docking sites for interaction partners has been particularly well studied (Pawson and Gish, 1992;Chung et al., 1999;Yaffe, 2002;Pawson, 2004). So far, most studies of protein phosphorylation in plant biology have been focused on phosphorylation of specific proteins and protein families (Camoni et al., 2000;Hrabak et al., 2003) and the study of specific signaling pathways (Wang et al., 2005). However, more recently several unbiased large-scale studies of plant protein phosphorylation have been carried out, comparing different physiological states or mutants (Li et al., 2009;Reiland et al., 2009;Nakagami et al., 2010), or analyzing a time-course after stimulation (Niittyla et al., 2007;Chen et al., 2010;Kline et al., 2010;Engelsberger and Schulze, 2011). One characteristic of mass spectrometric analyses, however, is the generation of large datasets, which usually remain difficult to access for the general public. Even if identified peptides are listed, access to spectral data is often very limited. One way of allowing public access is the use of data repositories like Tranche (Smith et al., 2011), but even here much of the data is uploaded in raw and possibly vendor-specific format.
This often results in a time penalty and a high workload when individual verification of only a few specific peptides is required. As a consequence, already existing measurements are difficult to re-assess or use for different purposes. Thus, providing data storage and accessibility to this type of experimental information is of utmost importance. The PhosPhAt database of plant phosphorylation sites, including a plant phosphorylation site predictor, provides such a resource. It aims to compile predicted and experimental evidence of protein phosphorylation from large-scale proteomic studies with bioinformatics resources. Since its launch in 2007, the database has constantly been updated with new experimental evidence from the growing number of phosphoproteomic experiments (see list at http://phosphat.mpimp-golm.mpg.de/). Dynamic links to further external resources, such as Aramemnon (Schwacke et al., 2003), the eFP browser (Winter et al., 2007), co-expression networks ATTED-II (Obayashi et al., 2009), and subcellular localization (Heazlewood et al., 2008), are implemented in PhosPhAt for each phosphoprotein. Additionally, the phosphorylation prediction function implemented in PhosPhAt allows users to paste any given protein sequence into the prediction query window and obtain predictions of phosphorylation sites. Thus, the support vector machine-based predictor trained on Arabidopsis-specific phosphorylation sites can be used independently of the plant species (Durek et al., 2010). Recently, the PhosPhAt database itself has been integrated into a larger common interface, the GATOR portal (Joshi et al., 2011), which allows concurrent query of various proteomic resources. In its current form, the PhosPhAt database contains evidence for 12404 different phosphorylation sites mapping to 5460 different proteins and 94284 high-confidence predicted sites mapping to 21764 proteins. We have found a significant overrepresentation of proteins involved in regulatory and signaling processes among the high-confidence phosphorylated proteins, while housekeeping and other enzymatic functions are underrepresented (Heazlewood et al., 2008; Figure 1). The proteome-wide magnitude of protein phosphorylation becomes apparent when looking at the high-confidence prediction of protein phosphorylation: mapping these predicted phosphorylation sites to the number of proteins that are affected by phosphorylation, about 64% of the proteins listed in TAIR9 (January 2010; Lamesch et al., 2012) are predicted to be phosphorylated with high confidence (score >1). However, until now only about one-quarter of these predicted sites have been experimentally confirmed using mass spectrometry (Figure 1). Probably due to the focus of various proteomic studies on particular cellular compartments (e.g., plasma membrane, chloroplasts), larger numbers of experimentally confirmed phosphorylated proteins have been found for particular functional categories (MapMan bins; Thimm et al., 2004). Examples of these include proteins with functions in photosynthesis (bin 1), glycolysis (bin 4), N-metabolism (bin 12), C1 metabolism (bin 25), as well as microRNA and natural antisense-related proteins (bin 32). In other functional categories, the fraction of proteins with only predicted phosphorylation is very high, while often only one-third of the phosphorylated proteins have been identified experimentally (Figure 1).
These include signaling functions (bin 30), cytoskeleton and vesicle trafficking (bin 31), as well as major carbohydrate metabolism (bin 2). This points not only to the versatility of processes in which phosphorylation plays a role, but also, from an experimental point of view, indicates the work remaining to confirm predicted phosphorylation sites. The most challenging part, however, lies in the precise molecular characterization of the identified phosphorylation sites with regard to their effect on protein function. In this regard, current knowledge is still very limited. To this end, it is extremely valuable to study these experimentally determined phosphorylation sites and their role in specific physiological conditions, tissue types, or in the whole organism context. Thus, besides global analysis of protein phosphorylation and discovery of new phosphorylation sites, precise targeted studies of particular proteins of interest are necessary to finely elucidate the role of individual phosphorylation sites. This becomes especially important as protein phosphorylation functionally interacts with other protein modifications such as methionine oxidation (Hardin et al., 2009), lysine-acetylation (van Noort et al., 2012), and ubiquitination (Hunter, 2007;Thomas et al., 2009). Therefore, in this mini review, we aim to provide a detailed overview of the PhosPhAt features and, through a specific example, to illustrate further utilization of the PhosPhAt resources in the design of new targeted phosphoprotein analyses.
FUNCTIONS OF PhosPhAt RESOURCE
The PhosPhAt web resource allows the user to search for experimental and predicted phosphorylation sites in a given protein (see phosphat.mpimp-golm.mpg.de). Queries can be run based on Arabidopsis gene identifiers (AGI coded) or based on peptide sequences or protein annotation text queries. The advanced search possibilities allow users to include meta-information from experimental context (tissue type, experimental treatment, etc.). Both for queries of experimental sites and for queries of phosphorylation site predictions, multiple AGI codes can be submitted (see phosphat.mpimp-golm.mpg.de). Query results will then be displayed on a multipage result window, sorted by gene identifiers. Upon selecting one of the protein identifiers, followed by a peptide, the protein prediction tab becomes active and, upon clicking, displays a detailed protein view tab. The top right corner of this protein tab contains links to various other resources: SUBA, TAIR, ATTED, Aramemnon, and GabiPD. Below the protein ID, its functional description, and the MapMan bin classification, the middle part of the protein tab is allocated to the phosphorylation site predictor. Here the amino acids from experimentally identified peptides are underlined, and predicted phosphorylated amino acids are marked with a green background. Amino acids that were experimentally confirmed to be phosphorylated are shown in bold, and hovering with the mouse over one of those will display the details for this identification or prediction just below the protein sequence. Positive score values indicate a positive prediction, with increasing values indicating an increasing probability of phosphorylation. Predicted Pfam domain structures are mapped onto the protein sequence and displayed on a yellow background, allowing the user to put the experimental and predicted phosphorylation sites in functional context (Durek et al., 2010).
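As a rough illustration of the kind of input such a site predictor works on, the sketch below extracts every serine, threonine, and tyrosine from a pasted protein sequence together with its surrounding sequence window. The window length is an arbitrary placeholder and no scoring is performed; the actual PhosPhAt predictor applies a support vector machine trained on experimentally identified plant phosphorylation sites (Durek et al., 2010).

```python
# Minimal sketch of preparing predictor-style input from a pasted protein sequence:
# each candidate site is an S/T/Y residue plus its surrounding sequence window.
# The flank length is a placeholder; the real PhosPhAt predictor scores such
# windows with an SVM trained on plant phosphorylation sites.

def candidate_sites(sequence, flank=6):
    sequence = sequence.upper().replace("*", "")
    sites = []
    for i, residue in enumerate(sequence):
        if residue in "STY":
            start, end = max(0, i - flank), min(len(sequence), i + flank + 1)
            sites.append({"position": i + 1, "residue": residue,
                          "window": sequence[start:end]})
    return sites

if __name__ == "__main__":
    # Toy sequence; in practice this would be the full protein of interest.
    for site in candidate_sites("MSVSTPFMNTTAKIIERS"):
        print(site)
```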
Below the sequence display, a list of experimentally identified phosphopeptides is available with icons signifying MS spectrum availability and quantitative information. In the list of experimental data, phosphorylation sites are marked as defined if the precise location of the phosphorylated amino acid has been unambiguously determined by mass spectrometric analysis. Clear identification of the phosphorylated amino acid in the phosphopeptides often requires manual interpretation of mass spectra and use of additional fragment ion scoring algorithms (Olsen et al., 2006;MacLean et al., 2008). These defined sites in PhosPhAt are marked with brackets and a lowercase p, such as (pS), (pT), (pY). Phosphorylation sites marked as ambiguous were not clearly resolved by the mass spectrometric experiments. These sites are marked as lowercase letters in brackets, e.g., (s), (t), (y). The undefined sites are usually putatively phosphorylated amino acids in close proximity. In PhosPhAt, the remark "site undetermined" on the modified tryptic peptide is used to mark those situations where no statement could be made on the location of the phosphorylation site based on the mass spectrum (Heazlewood et al., 2008).
FIGURE 1 | Distribution of experimental and predicted phosphorylation sites to functional categories of MapMan.
FIGURE 2 | Example spectra from PhosPhAt and selected transitions for quantification in targeted SRM analyses for NIA2.
Upon double-clicking on a peptide-row in the list of experimentally validated phosphopeptides, the fragment spectrum of this ion, experimental origin, and available quantitative information are displayed. The annotated ions in the fragment spectrum are indicated by blue bars, and clicking one of them displays the fragment-specific information. The mass list of each particular peptide ion can be exported as a peak list (.csv format). Also, at the level of the primary query result, custom information can be exported as tab-delimited tables, in Mascot-compatible .mgf format, or in Motif-X format (Schwartz and Gygi, 2005). In all view pages, the displayed information can be custom-adjusted by clicking on the column title and selecting the desired information for display. A complete tab-delimited table of all database contents can be downloaded from the PhosPhAt main page (phosphat.mpimp-golm.mpg.de).
USING PhosPhAt RESOURCE FOR DESIGN OF TARGETED EXPERIMENTS
The experimental and predicted phosphorylation sites available in PhosPhAt and particularly the spectra deposited in the database provide a valuable resource for targeted in-depth analysis of the role of protein phosphorylation in physiological contexts. The use of fragment spectrum libraries for the design of targeted analyses has previously been described in detail (Gillet et al., 2012).
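Before turning to a worked example, the complete tab-delimited export mentioned above can also be mined offline. The snippet below is a minimal sketch that filters such a table for a set of AGI codes; the file name and the column headers used here ("protein", "peptide") are assumptions made for illustration, and the actual export may label its columns differently.

```python
# Minimal sketch: filter a downloaded tab-delimited PhosPhAt export for a set of
# AGI codes. Column names and file name are assumed for illustration only.
import csv

def phosphopeptides_for(agi_codes, export_path="phosphat_export.tsv",
                        protein_col="protein", peptide_col="peptide"):
    wanted = {code.upper() for code in agi_codes}
    hits = {}
    with open(export_path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            agi = row[protein_col].upper().split(".")[0]   # drop splice-variant suffix
            if agi in wanted:
                hits.setdefault(agi, set()).add(row[peptide_col])
    return hits

# Example: peptides for nitrate reductase NIA2 and the transporters NRT2.1 and AMT1;1
# print(phosphopeptides_for(["AT1G37130", "AT1G08090", "AT4G13510"]))
```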
Examples for targeted analysis of metabolic pathways are already available from yeast, providing a detailed dynamic proteome profile of the glycolytic pathway or of microbial proteomes (Carroll et al., 2011;Schmidt et al., 2011). However, the combination of targeted protein analysis with monitoring of phosphorylation stoichiometry has not yet been widely applied in plant science. The commonly used methods for targeted protein quantification can also be well applied to phosphopeptides (Johnson et al., 2009), and synthetic standard (phospho)-peptides can be used to determine phosphorylation stoichiometry (Steen et al., 2005). Both approaches require reliable information on phosphopeptide identity and fragmentation properties. The starting point for a targeted phosphorylation site analysis is a limited set of proteins of interest. The query will return experimentally identified phosphorylation sites from the desired proteins, and ideally an experimentally acquired fragment spectrum is hosted in the PhosPhAt database. By clicking on the peptide-rows, the individual spectra can be assessed, and a combined export of the peak list is available from the first query result page. The information from PhosPhAt may be complemented by additional literature information about the biological relevance of particular phosphorylation sites. As an example, we are interested in studying the phosphorylation stoichiometry of proteins involved in nitrogen uptake and assimilation. A query of ammonium and nitrate transporters as well as nitrate reductase reveals spectral information for nitrate reductase (AT1G37130.1), a nitrate transporter NRT2.1 (AT1G08090), and an ammonium transporter AMT 1.1 (AT4G13510.1) among others. We have selected peptides found in phosphorylated and non-phosphorylated form for each one of these proteins. They are: SV(pS)TPFMNTTAK/SVSTPFMNTTAK for nitrate reductase; EQSFAFSVQ(pS)PIVHTDK/EQSFAFSVQSPIVHTDK for nitrate transporter NRT2.1, and ISSEDEMAGMDM(pT)R/ISSEDEMAGMDMTR for ammonium transporter AMT 1.1. Independent studies show that phosphorylation of S534 in nitrate reductase (Kaiser and Huber, 2001) is involved in the activity regulation of nitrate reductase NIA2 and especially in its interaction with 14-3-3 proteins (Kaiser and Huber, 2001;Lillo et al., 2004). Conveniently, this region is covered by the experimentally identified phosphopeptide. To our knowledge, there is no precise information about the function of the phosphorylation site of the NRT2.1 transporter, although the level of phosphorylation of this peptide has been found to change upon nitrate re-supply to starved seedlings (Engelsberger and Schulze, 2011). For the ammonium transporter peptide it has been shown that it is subject to inactivation by C-terminal phosphorylation at the threonine residue in the experimentally confirmed phosphopeptide (Yuan et al., 2007). Thus, on the one hand, among these three proteins we have clear examples of experimentally verified phosphopeptides for which phosphorylation has been shown to influence protein activity and can be used for diagnostic purposes in various mutants. On the other hand, there are novel phosphopeptides, like the NRT2.1 peptide, for which we know that the level of phosphorylation changes but are not yet sure how this influences the protein itself. Following the choice of target peptides, a selected reaction monitoring method is designed as described (Lange et al., 2008) and applied to mutant or wild-type plants subjected to various treatments.
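The theoretical precursor and fragment masses needed for such a transition list can be derived directly from the peptide sequences. The sketch below computes [M+2H]2+ precursor and singly charged y-ion m/z values for the three (phospho)peptides listed above, with a lowercase letter marking a phosphorylated residue. It only illustrates the underlying arithmetic; the transitions actually monitored would still be chosen from the annotated PhosPhAt spectra.

```python
# Minimal sketch: theoretical precursor and y-ion m/z values for (phospho)peptides,
# as candidate SRM transitions. Monoisotopic residue masses; a lowercase letter
# marks a phosphorylated residue (e.g., "SVsTPFMNTTAK" for SV(pS)TPFMNTTAK).

RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER, PROTON, PHOSPHO = 18.010565, 1.007276, 79.96633

def residue_masses(peptide):
    return [RESIDUE_MASS[aa.upper()] + (PHOSPHO if aa.islower() else 0.0) for aa in peptide]

def precursor_mz(peptide, charge=2):
    mass = sum(residue_masses(peptide)) + WATER
    return (mass + charge * PROTON) / charge

def y_ions(peptide, charge=1):
    masses = residue_masses(peptide)
    ions = []
    for i in range(1, len(masses)):               # y1 .. y(n-1)
        mass = sum(masses[-i:]) + WATER
        ions.append((f"y{i}", (mass + charge * PROTON) / charge))
    return ions

if __name__ == "__main__":
    for pep in ("SVsTPFMNTTAK", "EQSFAFSVQsPIVHTDK", "ISSEDEMAGMDMtR"):
        print(pep, round(precursor_mz(pep), 3), y_ions(pep)[:4])
```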
When selecting the transitions to be monitored for each peptide, we could use the annotated fragment spectra available from PhosPhAt and select a number of reliable ions that can be reproducibly monitored, as in the example of NIA2 shown in Figure 2. PhosPhAt therefore also serves as a phosphopeptide library resource.
SUMMARY
The PhosPhAt database was initiated to provide a resource that consolidates our current knowledge of phosphorylation sites identified by mass spectrometry in the model plant Arabidopsis. It is combined with a phosphorylation site prediction tool specifically trained on plant-type phosphorylation motifs. Thus, PhosPhAt not only serves as a searchable knowledge base for experimentally identified phosphorylation sites, but also provides a powerful resource for the characterization and annotation of yet unidentified phosphorylation sites in plant proteins. Furthermore, the stored spectra for large numbers of phosphorylation sites provide a direct resource for the design of additional targeted experiments.
2016-06-17T23:13:12.537Z
2012-06-19T00:00:00.000
{ "year": 2012, "sha1": "fef7e4532cc0c86ec805d4b32f05c94561803535", "oa_license": "CCBYNC", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2012.00132/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fef7e4532cc0c86ec805d4b32f05c94561803535", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
40428796
pes2o/s2orc
v3-fos-license
Hubble and Spitzer Space Telescope Observations of the Debris Disk around the Nearby K Dwarf HD 92945 [ABRIDGED] We present the first resolved images of the debris disk around the nearby K dwarf HD 92945. Our F606W (V) and F814W (I) HST/ACS coronagraphic images reveal an inclined, axisymmetric disk consisting of an inner ring 2".0-3".0 (43-65 AU) from the star and an extended outer disk whose surface brightness declines slowly with increasing radius 3".0-5".1 (65-110 AU) from the star. A precipitous drop in the surface brightness beyond 110 AU suggests that the outer disk is truncated at that distance. The radial surface-density profile is peaked at both the inner ring and the outer edge of the disk. The dust in the outer disk scatters neutrally but isotropically, and it has a low V-band albedo of 0.1. We also present new Spitzer MIPS photometry and IRS spectra of HD 92945. These data reveal no infrared excess from the disk shortward of 30 micron and constrain the width of the 70 micron source to<180 AU. Assuming the dust comprises compact grains of astronomical silicate with a surface-density profile described by our scattered-light model of the disk, we successfully model the 24-350 micron emission with a minimum grain size of a_min = 4.5 micron and a size distribution proportional to a^-3.7 throughout the disk, but with a maximum grain size of 900 micron in the inner ring and 50 micron in the outer disk. Our observations indicate a total dust mass of ~0.001 M_earth. However, they provide contradictory evidence of the dust's physical characteristics: its neutral V-I color and lack of 24 micron emission imply grains larger than a few microns, but its isotropic scattering and low albedo suggest a large population of submicron-sized grains. The dynamical causes of the disk's morphology are unclear, but recent models of dust creation and transport in the presence of migrating planets indicate an advanced state of planet formation around HD 92945. Introduction A circumstellar disk is called a "debris disk" if the age of its host star exceeds the lifetimes of its constituent dust grains. Intergrain collisions, radiation pressure, and Poynting-Robertson drag eliminate dust around solar-type stars in ∼ 10 Myr (Backman & Paresce 1993;Lagrange et al. 2000); (Zuckerman 2001;Meyer et al. 2007;Wyatt 2008). The presence of dust around older stars suggests that the dust has been replenished by cometary evaporation or by collisions among unseen planetesimals. Resolved images of debris disks enable studies of the properties and dynamics of grains that compose the debris in extrasolar planetary systems. They also provide opportunities for indirectly detecting planets via their dynamical effects on the distribution of the dust. Observing light scattered by debris disks is difficult because the disks have small optical depths (τ 10 −3 ) and their host stars are bright. Dozens of optically thick disks around young stars have been imaged in scattered light, but only 16 debris disks have been resolved in this manner ( Table 1). Five of these debris disks surround stars with masses 1 M ⊙ : AU Microscopii (spectral type M1 Ve; Kalas et al. 2004), HD 107146 (G2 V; Ardila et al. 2004), HD 53143 (K1 V; Kalas et al. 2006), HD 61005 (G8 V; Hines et al. 2007), and HD 92945 (K1 V; this paper). Thus, few examples of the circumstellar environments of low-mass stars in the late stages of planet formation are known. 
The nearby star HD 92945 (K1 V, V = 7.72, distance = 21.57 ± 0.39 pc; ESA 1997) was first identified by Silverstone (2000) as a candidate for attendant circumstellar dust based on its 60 µm flux measured by the Infrared Astronomical Satellite (IRAS), which significantly exceeds the expected photospheric flux at that wavelength. Mid-infrared and submillimeter images of HD 92945 from the Spitzer Space Telescope and the Caltech Submillimeter Observatory also reveal excess fluxes at 70 µm and 350 µm that, along with the IRAS measurement, are consistent with an optically thin disk of blackbody grains having an inner radius of 15-23 AU, an equilibrium temperature of 40-46 K, and a fractional infrared luminosity of L_IR/L_* = 7.7 × 10^-4, where L_* is the bolometric luminosity of the star (Chen et al. 2005; Plavchan et al. 2009). This fractional luminosity is about half that of the disk surrounding the ∼12 Myr-old, A5 V star β Pictoris. López-Santiago et al. (2006) estimate an age of ∼80-150 Myr for HD 92945 based on its luminosity, lithium absorption strength, and association with the Local (Pleiades) Moving Group. However, Plavchan et al. (2009) suggest an age of ∼300 Myr based on correlations of age with X-ray and Ca II H and K emission (Mamajek & Hillenbrand 2008). In this paper, we present the first resolved scattered-light images of the debris disk around HD 92945. These coronagraphic images from the Hubble Space Telescope (HST) provide only the second opportunity to study a debris disk around a K dwarf at visible wavelengths. We also present new mid-infrared photometry and spectra of the disk obtained from the Spitzer Space Telescope. Together, these HST and Spitzer observations allow us to constrain the optical properties, sizes, and spatial distribution of the dust grains within the disk. We use these constraints to revise the estimates of the disk's fractional infrared luminosity and mass previously reported by Chen et al. (2005) and Plavchan et al. (2009). Finally, we discuss the morphology of the disk in the context of other spatially resolved debris disks and current models of dust production and dynamics during the late stages of planet formation.
Observations and Data Processing
Our HST and Spitzer observations are presented jointly as the result of agreements between the HST Advanced Camera for Surveys (ACS) Investigation Definition Team (IDT), the Multiband Imaging Photometer for Spitzer (MIPS) Instrument Team, and the investigators associated with Spitzer Fellowship program 241 (C. H. Chen, Principal Investigator), which utilizes Spitzer's InfraRed Spectrograph (IRS) and the MIPS Spectral Energy Distribution (SED) mode. The inclusion of HD 92945 in the ACS IDT's dedicated effort to discover and characterize nearby debris disks was informed by an early 70 µm MIPS detection of cold circumstellar dust around the star (Chen et al. 2005). The MIPS detection confirmed the previous IRAS measurement and established an acceptable likelihood of detection by ACS based on previous successful imaging of the debris disk around HD 107146.
Imaging Strategy and Reduction
The ACS observations of HD 92945 were performed as part of HST Guaranteed Time Observer (GTO) program 10330, which followed the IDT's standard strategy of first obtaining coronagraphic images through a single broadband filter to assess the presence of a debris disk and later obtaining multiband images of the disk, if warranted.
The first images of HD 92945 were obtained on UT 2004 December 1 using the High Resolution Channel (HRC) of ACS Maybhate et al. 2010). The HRC has a 1024 × 1024-pixel CCD detector whose pixels subtend an area of 0. ′′ 028 × 0. ′′ 025, providing a ∼ 29 ′′ × 26 ′′ field of view (FOV). HD 92945 was acquired in the standard "peak-up" mode with the coronagraph assembly deployed in the focal plane of the aberrated beam. The star was then positioned behind the small (0. ′′ 9 radius) occulting spot located approximately at the center of the FOV. Two successive coronagraphic exposures of 1100 s were recorded using the F606W (Broad V ) filter. Two 0.1 s and two 35 s direct (i.e., non-coronagraphic) exposures of HD 92945 were also recorded for the purposes of photometric calibration and detection of any bright circumstellar material obscured by the occulting spot. The 0.1 s exposures were digitized using an analog-to-digital (A/D) gain of 4 e − DN −1 to allow unsaturated images of the unocculted star; the other images were recorded with an A/D gain of 2 e − DN −1 for better sampling of faint sources. All images were recorded within one HST orbit. A similar set of F606W images of HD 100623 (K0 V, V = 5.96, d = 9.54 ± 0.07 pc; ESA 1997) was recorded during the orbit immediately following the observation of HD 92945. These images provide references for the instrumental point-spread functions (PSFs) of a nearby star with colors similar to HD 92945 but having no known circumstellar dust (Smith et al. 1992). Follow-up HRC observations of HD 92945 were conducted on UT 2005 July 12. (The same observations attempted previously on UT 2005 April 21 were unsatisfactory because of failed guide-star acquisition.) Because of changing seasonal constraints on the spacecraft's roll angle, the FOV was rotated counterclockwise by 157 • from the orientation obtained 7 months earlier. Multiple coronagraphic exposures totalling 4870 s and 7600 s were recorded through the F606W and F814W (Broad I) filters, re-spectively, over 5 consecutive HST orbits. One direct exposure of 0.1 s was also recorded through each filter for photometric calibration. The A/D gain settings conformed with those used in the first-epoch observation. Likewise, coronagraphic exposures of HD 100623 totalling 2178 s and 2412 s were recorded through F606W and F814W, respectively, over 2 consecutive orbits immediately following the observation of HD 92945. One 10 s coronagraphic exposure of HD 100623 was recorded through each filter to ensure unsaturated imaging of the PSF along the perimeter of the occulting spot. One direct exposure of 0.1 s was also recorded through each filter for photometric purposes. The initial stages of image reduction (i.e., subtraction of bias and dark frames and division by a direct or coronagraphic flat field) were performed by the ACS image calibration pipeline at the Space Telescope Science Institute (STScI; Pavlovsky et al. 2006). We averaged the images recorded with the same combinations of filter, exposure time, and roll angle after interpolating over permanent bad pixels and rejecting transient artifacts identified as statistical outliers. We then normalized the averaged images to unit exposure time. We replaced saturated pixels in the long-exposure images of HD 100623 with unsaturated pixels at corresponding locations in the 10 s images. Throughout this process, we tracked the uncertainties associated with each image pixel. 
In this manner, we created cosmetically clean, high-contrast images and meaningful error maps for each combination of star, filter, and roll angle. The reduced, second-epoch F606W images of HD 92945 and HD 100623 are shown in the top and middle panels of Figure 1, respectively. The corresponding firstepoch F606W and second-epoch F814W images are qualitatively similar to those shown in Figure 1. Subtraction of the Coronagraphic PSF To examine the scattered-light component of the dust around HD 92945, we removed from each image the overwhelming light from the occulted star's PSF. By observing HD 92945 and HD 100623 in consecutive HST orbits, we limited the differences between the PSFs of the two stars that would otherwise be caused by inconsistent deployment of the coronagraph assembly, gradual migration of the occulting spot, or changes in HST's thermally driven focus cycles (commonly called "breathing"; Krist 2002). We measured the positions of the stars behind the occulting spot using the central peaks of the reduced coronagraphic PSFs (Figure 1) that resulted from the reimaging of incompletely occulted, spherically aberrated starlight by ACS's corrective optics (Krist 2000). The positions of the two stars differed by ∼ 0.3-0.5 pixel (∼ 0. ′′ 008-0. ′′ 013). Similar offsets between two coronagraphic images of the same star cause ∼ 1% variations in the surface brightness profiles of the occulted PSFs over most of the HRC's FOV (Krist 2000). For different stars, this alignment error may be compounded by differences between the stars' colors and brightnesses. Optimal subtraction of the coronagraphic PSF requires accurate normalization and registration of the filter images of the reference star HD 100623 with the corresponding images of HD 92945. We used the 0.1 s direct F606W and F814W images and conventional aperture photometry to measure the relative brightnesses of each star in each bandpass. The firstepoch F606W images of HD 100623 were slightly saturated, but the flux-conserving A/D gain of 4 e − DN −1 (Gilliland 2004) and a large (0. ′′ 5 radius) aperture permitted accurate measurement of the integrated flux. We computed flux ratios, F HD100623 /F HD92945 , of 5.00 and 4.83 for F606W and F814W, respectively. We divided the images of HD 100623 by these ratios to bring the integrated brightnesses of the reference PSFs into conformity with those of HD 92945. We aligned the normalized images of HD 100623 with the corresponding images of HD 92945 using an interactive routine that permits orthogonal shifts of an image with subpixel resolution and cubic convolution interpolation. The shift intervals and normalization factors (i.e., F HD100623 /F HD92945 ) were progressively refined throughout the iterative process. We assessed the quality of the normalization and registration by visually inspecting the difference image created after each shift or normalization adjustment. Convergence was reached when the subtraction residuals were visibly minimized and refinements of the shift interval or normalization factor had inconsequential effects (e.g., bottom panel of Figure 1). Based on these qualitative assessments, we estimate that the uncertainty of the registration along each axis is ±0.0625 pixel and the uncertainty of F HD100623 /F HD92945 in each bandpass is ±1%. After subtracting the coronagraphic PSFs from each image of HD 92945, we transformed the images to correct the pronounced geometric distortion across the HRC's image plane. 
In doing so, we used the coefficients of the biquartic-polynomial distortion map provided by STScI (Meurer et al. 2002) and cu-bic convolution interpolation to conserve the imaged flux. This correction yields a rectified pixel scale of 0. ′′ 025 × 0. ′′ 025. We then combined the first-and second-epoch F606W images by rotating the former images counterclockwise by 157 • , aligning the images according to the previously measured stellar centroids, and averaging the images after rejecting pixels that exceeded their local 3σ values. Regions shaded by the HRC's occulting bar and large occulting spot were excluded from the average, as were regions corrupted by incomplete subtraction of the linear PSF artifact seen in Figure 1. (Because the orientations of the first-and second-epoch F606W images differed by 157 • , the regions obscured by the occulters and the linear artifact at one epoch were unobscured at the other epoch.) Again, we tracked the uncertainties associated with each stage of image processing to maintain a meaningful map of random pixel errors. We combined in quadrature the final random-error maps with estimates of the systematic errors caused by uncertainties in the normalization and registration of the reference PSFs. The systematic-error maps represent the convolved differences between the optimal PSF-subtracted image of HD 92945 and three nonoptimal ones generated by purposefully misaligning (along each axis) or misscaling the images of HD 100623 by amounts equal to our estimated uncertainties in PSF registration and F HD100623 /F HD92945 . The total systematic errors are 1-5 times larger than the random errors within ∼ 1. ′′ 5 of HD 92945, but they diminish to 2-25% of the random errors beyond ∼ 3 ′′ -4 ′′ from the star. We refer to the combined maps of random and systematic errors as total-error maps. Note that these maps exclude systematic errors caused by changes in HST's focus between the observations of HD 92945 and HD 100623 or by small differences in the stars' colors, which affect the spatial and spectral distributions of light within the PSFs. (Krist 2000;Krist et al. 2003). Unfortunately, without precise knowledge of HST's thermally varying optical configuration or the stars' spectra, these systematic errors cannot be accurately predicted or assessed. Figure 2 shows the PSF-subtracted and distortioncorrected images of HD 92945 in F606W and F814W. Each image has been divided by the measured brightness of the star in the respective bandpass. The alternating light and dark bands around the occulting spot reflect imperfect PSF subtraction caused by the slightly mismatched centroids of HD 92945 and HD 100623 . These subtraction residuals preclude accurate photometry of the disk within ∼ 2. ′′ 0 (∼ 43 AU) of the star. Likewise, the residual bands extending from the occulting bar in the F814W image are caused by imperfect subtraction of the linear PSF artifact seen in Figure 1. (Similar residual bands were removed from the F606W image, as described above.) The images show three distinct structures centered about the star: (1) a relatively bright but partially obscured elliptical ring extending ∼ 2. ′′ 0-3. ′′ 0 from the star, (2) an elliptical disk of lower but more uniform surface brightness extending beyond the ring and oriented at a position angle (PA) of ∼ 100 • , and (3) a circular halo of very-low surface brightness encompassing both the ring and disk and extending outward to ∼ 6 ′′ and 7 ′′ in the F606W and F814W images, respectively. 
The halo itself is surrounded by a ∼ 2. ′′ 5-wide ring having slightly lower surface brightness than the residual sky at the edges of the FOV. The faint extended halo and surrounding dark ring are well-known artifacts of the subtraction of two coronagraphic HRC images recorded at different phases of HST's breathing cycle (Krist 2000;Krist et al. 2003). The prominent halo in the F814W image may also reflect incomplete subtraction of red photons (λ 0.7 µm) that have diffusely scattered from the CCD substrate back into the CCD (Sirianni et al. 2005). Some of this red halo may remain if the far-red colors of HD 92945 and HD 100623 differ slightly. To remove the halos from our F606W and F814W images, we azimuthally subtracted 9th-and 10th-order polynomials fitted to the average radial profiles of the halos measured beyond 3. ′′ 0 and 3. ′′ 2 from the star, respectively, and within ±30 • of the minor axis of the elliptical disk. The accuracy of this subtraction is clearly contingent on the assumptions that the halo is azimuthally symmetric and that any actual circumstellar flux in the fitted region is negliglible. The halosubtracted images are shown in Figure 3; they represent the final stage of our HRC image processing. The 70 µm fine-scale images were recorded on UT 2008 February 16 using two cycles of 10 s exposures and a small field dither pattern located at each of 4 target positions on a square grid with sides of 16. ′′ 22 (3.16 pixels). The total effective on-source exposure time was 672 s. The 160 µm images were obtained on UT 2008 June 22 using one cycle of the small field dither pattern and 10 s exposures at each of 9 target positions with relative offsets of 0 ′′ and ±36 ′′ (±2.5 pixels) along the short axis of the 2×20 pixel array and with 0 ′′ , 72 ′′ , and 140 ′′ offsets along the long axis of the array to provide a large area for background measurement. For both imaging bandpasses, the observing strategy provided enhanced subpixel sampling for improved PSF subtraction and deconvolution. The 70 µm and 160 µm data were processed using the MIPS Data Analysis Tool (DAT; Gordon et al. 2005) version 3.10, which produces mosaics with rectified pixel sizes of 2. ′′ 62 and 8 ′′ , respectively. No field sources brighter than 3% of HD 92945's brightness were detected within 2 ′ of the star. Photometry within a 12-pixel square aperture yielded a 70 µm flux density of 278 ± 42 mJy after subtraction of the local background, which is consistent with the defaultscale measurement of Plavchan et al. (2009). After applying a 7% color correction relative to a fiducial Rayleigh-Jeans spectrum within the 70 µm bandpass, we obtain a final 70 µm flux density of 298 ± 45 mJy for HD 92945. We derived a 160 µm flux density of 285 ± 34 mJy using a 12-pixel square aperture and an aperture correction of 0.73 derived from an Tiny Tim/Spitzer model PSF (Krist 2006). In both bandpasses, the formal error is dominated by the absolute photometric calibration. The near-infrared spectral leak of the MIPS 160 µm bandpass is not a concern for HD 92945, which has a relatively faint K-band magnitude of 5.7 (Colbert et al. 2010). MIPS and IRS Spectroscopy We observed HD 92945 with the MIPS SED mode on UT 2005 June 23. This mode produces a lowresolution (R ≈ 15-25) spectrum from 55 to 90 µm (Heim et al. 1998;Colbert et al. 2010). The 120 ′′ × 20 ′′ slit was oriented at PA = 300 • , i.e., ∼ 20 • from the projected major axis of the disk seen in the ACS/HRC images. 
Ten cycles of 10 s exposures were recorded for a total integration time of 629 s. We obtained a background SED by chopping 1 ′ from the nominal pointing, and we then subtracted the background SED from the target SED. The images were reduced us-ing MIPS DAT version 3.10, which produces a mosaic with 44 pixels × 4. ′′ 9 pixel −1 in the spatial direction and 65 pixels in the spectral direction (Gordon et al. 2005). We smoothed the mosaic in the spectral direction with a 5-pixel boxcar to improve the measured signal-to-noise ratio (S/N). We then summed the signal in 5 columns centered on the peak emission along each row of the mosaic and extracted the SED. To calibrate the flux, we scaled a similarly reduced and extracted SED of the spectrophotometric standard star Canopus (spectral type G8 III) to a flux density of 3.11 Jy at 70 µm and then corrected for the MIPS spectral response function by assuming that Canopus has a Rayleigh-Jeans SED at far-infrared wavelengths. We used a point-source aperture correction, keeping in mind that a secondary correction is needed when fitting a spatially extended disk model to the SED ( §4.2). We performed IRS observations of HD 92945 on UT 2005 May 21 using both Short-Low (SL1: 7.4-14.5 µm; SL2: 5.2-8.7 µm) and Long-Low (LL1: 19.5-38.0 µm; LL2: 14.0-21.3 µm) spectroscopic modes (Houck et al. 2004). The widths of the SL and LL entrance slits are 3. ′′ 7 and 10. ′′ 6, respectively. For each mode, we obtained 6 s exposures at two positions along the slit. We extracted and summed the spectra from the two slit positions to enhance the S/N and then subtracted the same spectra to establish the observational uncertainty above the normal shot noise and 5% uncertainty in the absolute photometric calibration. We then spliced the four spectral orders after removing their low-S/N edges, thereby obtaining a total wavelength range of 5.3-34 µm. Our measured flux density of 42 mJy at 24 µm matches very well the MIPS 24 µm value reported by Plavchan et al. (2009). As before, we assumed a point-source aperture correction with the understanding that a secondary correction should be applied if the source is found to be extended ( §4.2). Surface Brightness The fully processed ACS/HRC images ( Figure 3) suggest that HD 92945 is surrounded by an azimuthally symmetric disk of dust that is moderately inclined with respect to the line of sight and sharply confined to the region within ∼ 5. ′′ 1 (∼ 110 AU) of the star. Any obvious azimuthal variations can be attributed either to geometric projection or to PSF-subtraction residuals. Such axisymmetry is rare within the present group of debris disks resolved in scattered light (Table 1), which typically display asymmetries associated with perturbations from embedded planets or nearby stars (e.g., β Pic and HD 141569A; Golimowski et al. 2006;Clampin et al. 2003), interaction with the interstellar medium (e.g., HD 61005; Hines et al. 2007), and/or enhanced forward scattering by micron-sized dust grains (e.g., HD 15745; Kalas et al. 2007a). The apparent axisymmetry of HD 92945's disk permits us to examine the surface brightness profiles along the projected semimajor axes of the disk and to ascribe these profiles to the entire disk with reasonable confidence. Figures 4a and 4b show the F606W and F814W surface brightness profiles (and their ±1 σ errors) along the opposing semimajor axes of the projected disk. The profiles were extracted from the respective images shown in Figure 3 after smoothing the images with an 11×11 pixel boxcar. 
The F606W profile has a credible inner radial limit of ∼ 2 ′′ (∼ 43 AU) because the two orientations used for the F606W images allow us to remove obvious PSF-subtraction artifacts near the occulting spot ( §2.1.2). However, the single orientation of the F814W image prevents credible measurement of the F814W profiles within ∼ 3. ′′ 4 of the star. The F606W profiles are peaked between 2 ′′ and 3 ′′ (43-65 AU) from the star -the region we call the inner ring -with maxima of 21.0-21.5 mag arcsec −2 . In the region 3. ′′ 0-5. ′′ 1 (67-110 AU) from the star, the F606W and F814W profiles decline approximately as radial power laws, r −α , where α = 0.5-1.5 for F606W and α = 0.3-0.7 for F814W. Beyond r = 5. ′′ 1 (110 AU), the profiles decrease so precipitously (α = 6-11) that the disk is effectively truncated at that radius. The ±1 σ error profiles indicate no significant differences between the F814W profiles of the eastern and western sides of the projected disk, but the opposing F606W profiles differ by 2-3 σ over most of the imaged region. These error profiles, which are obtained from our 1 σ error maps ( §2.1.2), reflect the combined uncertainties incurred from random errors (i.e., read and photon noise) and systematic errors from improperly scaled or aligned reference PSFs. They do not include systematic errors stemming from any intrinsic but unquantifiable discrepancies between the coronagraphic PSFs of HD 92945 and HD 100623 caused by differences in the stars' colors, HST's focus during the stars' observations, or the positions of the stars behind the occulting spot. The maximum difference of ∼ 0.5 mag arcsec −2 between the opposing F606W profiles corresponds to a PSF-subtraction error of ∼ 20% at r = 3 ′′ , which is within the range of local residuals expected for focus variations of a few microns typically associated with breathing or changes in HST's orientation with respect to the Sun (Krist 2000;Krist et al. 2003). Therefore, the F606W and F814W surface brightness profiles along the opposing semimajor axes are consistent with our initial assumption of an axisymmetric disk. Figure 5 shows the averages of the F606W and F814W surface brightness profiles extracted from both sides of the projected disk. The profiles have been normalized to the measured brightnesses of the star in the respective bands, so that the ordinate now expresses the surface brightness of the disk as a differential magnitude: (mag arcsec −2 ) disk -mag star . The F606W and F814W profiles are nearly coincident 3. ′′ 5-6. ′′ 7 from the star and can be represented by radial power laws with α ≈ 0.75 for r = 3. ′′ 0-5. ′′ 1 (65-110 AU) and α ≈ 8 for r > 5. ′′ 1. These overlapping profiles indicate that the disk has an intrinsic color of m F606W − m F814W ≈ 0, i.e., the dust grains are neutral scatterers at red wavelengths. This neutrality is unusual among the debris disks listed in Table 1; only HD 32297's disk was previously reported to have intrinsic visible and nearinfrared colors that are neither strongly blue nor red (Mawet et al. 2009). Intrinsic Color The disk's neutral color at visible wavelengths suggests that the minimum size of the dust grains is at least a few microns for the expected varieties of grain compositions and porosities (Golimowski et al. 2006). If so, then the near-infrared colors of the disk should not deviate significantly from gray. Schneider et al. 
(2011, in preparation) report a possible detection of HD 92945's disk using the coronagraphic imaging mode of HST's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) in the F110W bandpass (λ_c = 1.1246 µm, Δλ = 0.5261 µm; Viana et al. 2009). The PSF-subtracted NICMOS image shows excess signal beyond 3″ from the star in all directions, but prominent residuals from imperfect PSF subtraction obscure any morphology resembling the disk seen in our ACS images. An azimuthal median profile of the unobscured circumstellar region indicates that the residual F110W surface brightnesses 3″-4″ from the star are nearly an order of magnitude higher than those expected from our ACS images, assuming neutral scattering at visible and near-infrared wavelengths. If valid, the NICMOS results indicate that the disk exhibits a nonintuitive combination of gray colors at visible wavelengths and increased brightness around λ ≈ 1 µm.
Surface Density of the Dust
The surface brightness S of an optically and geometrically thin disk relative to the incident flux F at a given angular radius r from the star is S(r)/F(r) = σΣ(r)/4π, where σ and Σ are the scattering cross section and surface density of the dust, respectively. Assuming that the grains have homogeneous composition and scatter isotropically, we can map the surface density of dust in HD 92945's disk by scaling a deprojected version of the scattered-light image by r^2. Figure 6 shows the sequential stages of the conversion of our F606W image to such a map. Figures 6a and 6b are false-color reproductions of our F606W image shown in Figure 3 before and after rotating the disk by ∼62° about its projected major axis so that the disk appears circular and coplanar with the sky. Figures 6c and 6d show, with different color tables and smoothing factors, the deprojected image after multiplying by r^2 to compensate for the geometric dilution of incident starlight with angular distance from the star. These images represent the surface brightness of the dust scaled by an assumed constant value σ/4π. Our scaled surface-density map reveals concentrations of grains not only in the previously described inner ring, but also along the outer edge of the disk defined by the sharp decline in the radial surface brightness profile at r = 5.″1 or 110 AU (§3.1). These overdensities are apparent on both the eastern and western sides of the disk; they appear to be concentric rings of dust centered on the star. The outer ring appears less azimuthally uniform on the eastern side, which may indicate nonisotropic scattering but more likely reflects lower S/N in the southeastern region of the image. If the major axis of the projected disk is properly identified and the dust distribution is axisymmetric, then any evidence of forward scattering should be symmetric about the projected minor axis of the disk and not concentrated in its southeastern quadrant.
Scattered Light Model
To obtain more quantitative constraints on the density distribution and scattering properties of the circumstellar dust, we applied the three-dimensional scattering model and nonlinear least-squares fitting code previously used to characterize the disk around AU Mic. Using the azimuthally averaged radial profile from Figure 6c as a guide, the model simultaneously computes the scattering phase function, radial density profile, and inclination of the disk that best fits the observed scattered-light image.
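The r^2 scaling described above amounts to a simple per-pixel operation once the disk geometry is fixed. The following sketch computes a deprojected radius for each pixel and multiplies the image by r^2; the inclination, position angle, and pixel scale in the example are placeholders to be replaced by the measured values, the sky-coordinate convention is simplified, and the sketch does not include the PSF convolution or fitting steps of the full scattering model.

```python
# Minimal sketch of the r^2 scaling that turns a scattered-light image into a
# relative surface-density map (S(r)/F(r) = sigma * Sigma(r) / (4*pi)).
# The deprojection is done implicitly: the disk-plane radius of each pixel is
# obtained by stretching the minor-axis coordinate by 1/cos(i), with i measured
# from face-on. Geometry values in the example are placeholders.
import numpy as np

def scaled_density_map(image, x0, y0, incl_from_faceon_deg, pa_deg, pixel_scale):
    y, x = np.indices(image.shape)
    dx, dy = (x - x0) * pixel_scale, (y - y0) * pixel_scale
    pa = np.radians(pa_deg)
    # rotate so the disk major axis lies along the first coordinate
    major = dx * np.cos(pa) + dy * np.sin(pa)
    minor = -dx * np.sin(pa) + dy * np.cos(pa)
    r = np.hypot(major, minor / np.cos(np.radians(incl_from_faceon_deg)))
    return image * r ** 2, r

# density_map, radius = scaled_density_map(f606w_image, x_star, y_star,
#                                          incl_from_faceon_deg=62.0,
#                                          pa_deg=100.0,
#                                          pixel_scale=0.025)  # arcsec per pixel
```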
The model image also provides a better estimate of the ratio of integrated reflected light from the disk and emitted star light than can be obtained from coronagraphic images that have regions with large PSF-subtraction residuals. Consequently, we modeled only our F606W image of the disk, which has better overall S/N and PSF subtraction than the F814W image. The F606W radial density profile used to guide the model is represented by the solid curve in Figure 7. Because the model code is designed to produce best-fitting radial density profiles as a series of contiguous power-law functions, we allowed the code to fit up to 18 distinct power-law segments to approximate the unusually complex F606W profile of HD 92945. These 18 segments were established for computational convenience; they are not associated with any physical attributes of the disk itself. We set the inner radius of the model profile at 41 AU, as there appears to be a true clearing in the F606W image despite the large PSF-subtraction residuals near the star. (The disk's surface brightness inside 41 AU certainly does not increase in a manner consistent with the steep increase associated with the outer edge of the inner ring.) We also placed an outer limit of 150 AU on the model profile, in conformity with the largest distance at which the disk is confidently seen in our F606W image. Because the plane of HD 92945's disk is inclined ∼28° from the line of sight (i.e., ∼28° from an "edge-on" presentation), our scattering model cannot constrain the vertical distribution of the dust as a function of radial distance from the star. We can only derive a radial density distribution integrated along lines of sight across the disk. Nevertheless, if we assume that the disk is vertically thin relative to its radial extent (as in the case of AU Mic, whose disk is viewed edge-on), the models show that the vertical density profile does not strongly affect the appearance of a moderately inclined disk in scattered light. We therefore adopted a priori for HD 92945 a Lorentzian vertical density profile similar to those observed in the edge-on disks around AU Mic and β Pic (Golimowski et al. 2006). We also assumed a flat disk with a fixed scale height z of either 0.5 AU or 3.0 AU, which bracket the scale heights observed for AU Mic's disk within ∼60 AU of the star. Having fixed the disk's inner and outer radii, thickness, and vertical density profile, we allowed our code to determine the radial density profile, intensity normalization, scattering phase function (as described by the asymmetry parameter g; Henyey & Greenstein 1941), and disk inclination that best fits the observed F606W image of HD 92945. The model images were convolved with an "off-spot" coronagraphic HRC PSF appropriate for unocculted field sources (Krist 2000) before comparison with the actual F606W image. Figure 8 shows the best-fit model for z = 0.5 AU; the corresponding model for z = 3.0 AU is effectively identical. The dashed and dotted curves in Figure 7 represent the radial-density profile of the best-fit model and the azimuthally averaged profile of the model's surface density map (i.e., the equivalent of Figure 6c), respectively. The models yield g = 0.015 ± 0.015 for both vertical scale heights, which confirms our qualitative presumption of isotropic scattering. The derived inclinations are 27.4° and 27.8° (relative to the line of sight) for z = 0.5 AU and 3.0 AU, respectively.
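For reference, the Henyey-Greenstein phase function that defines the asymmetry parameter g used in the model can be written down in a few lines. The sketch below is a generic implementation of that standard formula, not the fitting code itself; with g ≈ 0.015 the phase function is essentially flat, which is what isotropic scattering means in practice.

```python
# Henyey-Greenstein scattering phase function, normalized so that its integral
# over all solid angles is 1. g = 0 is isotropic scattering; g -> 1 is strongly
# forward scattering.
import numpy as np

def henyey_greenstein(theta_deg, g):
    cos_t = np.cos(np.radians(theta_deg))
    return (1.0 - g ** 2) / (4.0 * np.pi * (1.0 + g ** 2 - 2.0 * g * cos_t) ** 1.5)

# theta = np.linspace(0.0, 180.0, 7)
# print(henyey_greenstein(theta, 0.015))   # nearly constant, ~1/(4*pi)
# print(henyey_greenstein(theta, 0.5))     # strongly peaked toward theta = 0
```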
Finally, both models yield an integrated F606W flux ratio of (F_disk/F_*)_F606W = 6.9 × 10^-5.
Far-Infrared Spectral Energy Distribution
Figure 9 shows our MIPS and IRS data together with previously reported MIPS and ground-based submillimeter measurements of HD 92945. The solid line represents the theoretical Rayleigh-Jeans profile extrapolated from a T_eff = 5000 K model atmosphere (Castelli & Kurucz 2003) normalized to the measured Two Micron All Sky Survey (2MASS; Skrutskie et al. 1997) magnitude of K_s = 5.66. The stellar photospheric emission detected with IRS closely tracks the theoretical Rayleigh-Jeans profile and dominates the total emission shortward of 30 µm. An infrared excess begins in the 30-35 µm region of the IRS spectrum, rises steeply at the onset of MIPS SED coverage at 55 µm, and then flattens across the 70-160 µm region spanned by the MIPS broadband photometry. Undulations in the MIPS SED between 55 and 90 µm are artifacts of the SED extraction process. Fits of elliptical Gaussian profiles to the MIPS fine-scale 70 µm and coarse-scale 160 µm images reveal no significant source extension. Our results constrain the full width of the 70 µm source at half the maximum emission to <180 AU. These findings are consistent with the coarse-scale 70 µm results of Chen et al. (2005).
Modeling the Infrared SED
We seek a model that reproduces our photometric and spectroscopic Spitzer observations of HD 92945, including an upper limit on the angular extent of the 70 µm MIPS image. To attain this model, we followed the strategy applied by Krist et al. (2010) to their HST and Spitzer observations of the dust ring surrounding HD 207129. We initially assumed the following:
1. The radial distribution of dust obtained from our scattered-light model also delimits the dust responsible for the mid-infrared excess detected in the MIPS and IRS data.
2. All dust is in local thermal equilibrium (LTE) with a star of luminosity 0.38 L⊙ and mass 0.77 M⊙.
3. The dust comprises astronomical silicate grains whose wavelength-dependent emissivities are calculated from Mie theory and the optical constants of Laor & Draine (1993).
4. The grains have a size distribution, dn/da ∝ a^-3.5, appropriate for a steady-state collisional cascade (Dohnanyi 1969), but bounded between minimum and maximum radii a_min and a_max.
We calculated model thermal images with 1″ spatial resolution for each of 31 wavelengths spanning 10-850 µm for this dust distribution and various combinations of vertical optical depth, a_min, and a_max. Our initial value of a_min = 0.23 µm was based on the maximum radius (a_blow) of grains with mass density 2.5 g cm^-3 that are immediately blown out from the disk by radiation pressure (Strubbe & Chiang 2006;Plavchan et al. 2009). We compared the total flux from each set of 31 models with the observed SED of HD 92945's excess infrared emission. We combined appropriate subsets of the model images to simulate the MIPS 24, 70, and 160 µm broadband images. To produce synthetic IRS and MIPS SED data, we convolved the model images with instrumental PSFs suitably windowed by the entrance slits and position angles used in our Spitzer observations. Because the disk appears unresolved in our broadband MIPS images, we assume that the losses of flux through the slits are identical to those of a point source. The strongest constraint on a_min is provided by the weak infrared excess seen in the IRS spectrum longward of 30 µm.
Our initial model yielded a min = 3.5 µm, a max = 900 µm, and a vertical optical depth of 0.0027 for grains of size a min located at the inner edge of the disk (r = 41 AU). Thus, a min ≫ a blow . Although these results are consistent with the observed upper limit of the 70 µm source size, we had to steepen the grain-size distribution to dn/da ∝ a −3.7 to fit the 70 µm and 160 µm photometry simultaneously. Even with this adjustment, the initial model overestimated the 350 µm flux density by a factor of 1.7. To address this problem, we postulated that the large grains are not evenly distributed throughout the observed disk but are confined to the inner ring at 41-63 AU. We also surmised that the outer disk (63-150 AU) consists of smaller particles that have been ejected from the inner ring by some combination of collisions, stellar wind, radiation pressure, and/or gravitational scattering by planetary bodies. Similar structures have been inferred for the disks surrounding AU Mic and β Pic (Strubbe & Chiang 2006), HD 61005 (Maness et al. 2009), Fomalhaut (Chiang et al. 2009), and Vega (Müller et al. 2010). We thus adopted a two-component dust model for HD 92945's disk. We maintained the grain-size distribution and surface-density profile derived from the scattered-light images, but assumed that a max = 900 µm within the ring and a max = 50 µm elsewhere in the disk. This change reduced the total number of large grains in the disk and brought the model into agreement with the 350 µm measurement. While this two-component model provides a good fit to the MIPS and IRS data, it is inconsistent with the observed surface brightness of the disk in scattered light. The average albedo of the dust is

ω = F scat / (F scat + F emit),

where F scat and F emit are the fractions of bolometric stellar flux that are scattered and emitted by the dust, respectively. According to Mie theory, astronomical silicate grains with a > 1 µm should have ω ≈ 0.55 from visible to near-infrared wavelengths (Voshchinnikov et al. 2005). We used this value in the LTE calculations intrinsic to our SED model. However, if we assume from our ACS F606W and F814W images that the disk scatters neutrally at all wavelengths that significantly heat the dust (notwithstanding the NICMOS results of Schneider et al. 2011, in preparation), then our scattered-light model implies F scat = 6.9 × 10 −5 ( §3.4). We also derive F emit = 6.0 × 10 −4 from the measured infrared SED (Figure 9), which supersedes previous estimates of L IR /L * reported by Chen et al. (2005) and Plavchan et al. (2009). We therefore obtain ω ≈ 0.10, which is ∼ 5 times smaller than the predicted Mie value. As noted before for HD 207129's disk, simple Mie grains are inconsistent with the combined visible and far-infrared observations of the disk surrounding HD 92945. We produced another two-component model using ω ≈ 0.10 but retaining the original Mie value for the grain absorption efficiency, Q abs . In other words, we retained the emissive properties of astronomical silicate grains but modified their reflectivity so they are essentially black. To compensate for the effect of this adjustment on the short-wavelength end of the SED, we increased a min to 4.5 µm because larger, less reflective grains achieve the same equilibrium temperature as smaller, more reflective grains at a given distance from the star. Figure 10 shows our final model, which best matches the observed SED and surface brightness of the disk if the sources of the scattered and emitted light are cospatial.
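The albedo quoted above follows directly from the two flux fractions given in the text; the only assumption in this small check is that the F606W scattering fraction stands in for the bolometric scattered fraction (i.e., neutral scattering), as assumed above.

```python
F_scat = 6.9e-5   # fraction of the stellar flux scattered by the disk (F606W model)
F_emit = 6.0e-4   # fraction of the stellar flux re-emitted thermally (infrared SED)

omega = F_scat / (F_scat + F_emit)   # single-scattering albedo: scattered / (scattered + absorbed)
print(round(omega, 2))               # 0.10, about 5x below the Mie expectation of ~0.55
```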
The component SEDs show that the outer disk is the dominant contributor to the observed 70, 160, and 350 µm emission and that the larger grains in the inner ring should become the dominant emitters longward of ∼ 400 µm. The constraints placed on dn/da, a min , and a max by our final model are unique for the assumed emissive properties of compact astrosilicate grains, but other constraints may be obtained for different assumptions about these properties. Supplemental photometry at 450 and 850 µm would be a useful check of our values of a max in both the inner ring and outer disk. We can however exclude blackbody grains from consideration, as they would be too cold at the radii delimited by the scattered-light images to produce the observed SED. Moreover, the disk is too extensive to be characterized by a single blackbody temperature. Any resemblance of our SED model longward of 160 µm (Figure 10) to the Rayleigh-Jeans tail of a single Planck distribution is coincidental. Integrated Dust Mass Combining our final values of dn/da, a min , and a max for each dust component with the surface-density profile from our scattered-light model, we obtain an integrated dust mass of 5.5 × 10 24 g (∼ 0.001 M ⊕ ) for compact grains of pure astronomical silicate. Although a max is 18 times larger in the inner ring than the outer disk, the inner ring is 4 times less massive than the outer disk. This disparity is caused by the relatively large surface area of the outer disk and the steeply declining grain-size distribution throughout both the inner ring and outer disk. Our estimated mass of 1.1 × 10 24 g for grains smaller than 900 µm in the inner ring is ∼ 1% of the computed mass of Fomalhaut's ring (Holland et al. 1998;, 3 and ∼ 0.05% of the predicted mass of the ring around HR 4796A (Klahr & Lin 2000). Our integrated dust mass is ∼ 60 times larger than the value obtained by Plavchan et al. (2009) from early MIPS 24 and 70 µm photometry of HD 92945. This discrepancy is due to prior assumptions that the dust lies within a thin shell of radius 23 AU, dn/da ∝ a −3.5 , and a ≈ 0.25 µm. None of these assumptions is upheld by our scattered-light images and thermal model. For comparative purposes, we applied the same formula used by Plavchan et al. (2009) to compute the dust mass and obtained a value that is only 20% larger than the one derived from our observed surfacedensity profile. Therefore, our revised dust mass of ∼ 0.001 M ⊕ supersedes those previously reported by Chen et al. (2005) and Plavchan et al. (2009). Dust Properties The dust's neutral color and negligible emission at 24 µm indicate that the collisional cascade is suppressed below radii of several microns, which is an order of magnitude larger than a blow . This truncation of the grain-size distribution is counterintuitive for a star of subsolar mass and luminosity, especially as the dust's isotropic scattering and low albedo indicate a large population of submicron-sized grains (Voshchinnikov et al. 2005). This contradiction suggests that our initial assumptions about the location, composition and/or size distribution of the thermally emitting dust should be reviewed. Radial Distribution Our invocation of two dust components with common a min but different a max to explain the disk's neutral color and infrared SED is based on the assumption that the dust grains responsible for the scattered light and the infrared excess are cospatial. 
Although the inner ring seen in our coronagraphic F606W image suggests that the dust is created in a coincident planetesimal belt, the presence of dust closer to the star cannot be ruled out by our coronagraphic images alone. If such dust is present and in LTE with the stellar radiation field, it must have a min > 4.5 µm (our derived value for the inner ring) or a large albedo to avoid a detectable 24 µm excess. The former option worsens the discrepancy between the large a min needed to fit the infrared SED and the small a blow for a young K dwarf like HD 92945. The latter option counterintuitively suggests that conditions favorable for icy grains increase within 40 AU of the star rather than beyond the expected snow line at ∼ 2 AU (Kennedy & Kenyon 2008). Thus, neither option favors the presence of much dust between the star and the inner ring. Conversely, if much dust exists beyond the apparent outer limit of the scattered-light disk (r ≈ 5. ′′ 1 or 110 AU), then its albedo must be nearly zero to conform with the observed surface brightness profiles ( Figure 5). Moreover, dn/da must be much steeper than the traditional a −3.5 power law (Dohnanyi 1969) to account for both the observed SED longward of 160 µm and the unresolved 70 µm image of the disk. Such a small albedo again counters the expected formation of icy mantles beyond the snow line, and it im-plies a corresponding increase in emissivity for grains larger than ∼ 0.1 µm. Such small grains would be efficiently expelled from the disk by radiation pressure. Collectively, these arguments support our assumption that all the thermally emitting dust is confined within the angular limits of the visible scattered light seen Figure 3. Composition and Porosity Recent theoretical studies of other debris disks have frequently invoked highly porous aggregate grains to explain the observed scattered light and/or thermal emission from the dust (Li & Greenberg 1998;Li & Lunine 2003a,b;Li et al. 2003;Graham et al. 2007;Fitzgerald et al. 2007;Shen et al. 2009). Models of the optical properties of porous grains have yielded diverse and sometimes contradictory results, so we defer a detailed model of HD 92945's disk using porous aggregate grains to other investigations. For now, we qualitatively assess the substitution of such grains for the compact grains in our scattered-light and thermal models. Voshchinnikov et al. (2005) showed that the Henyey-Greenstein scattering asymmetry parameter, g, of moderately porous grains increases by factors of 1-1.25 over that of compact grains for a = 0.5-2.5 µm and V -band wavelengths. Shen et al. (2009) showed that the degree of porosity has little effect on the scattering phase functions of grains at wavelengths λ ≈ 2πa. Consequently, neither compact nor porous micron-sized grains account for the isotropic scattering observed from HD 92945's disk. The effect of porosity on the albedo is unclear, however, as Hage & Greenberg (1990) and Voshchinnikov et al. (2005) oppositely predict that highly porous grains have lower and higher albedos, respectively, than compact grains of similar composition. The effect of porosity on the sizes of grains that succumb to radiation pressure is also ambiguous because a blow also depends on the composition of the grains. Mukai et al. (1992) showed that the ratio β of the forces of radiation and gravity on a grain decreases with increasing porosity for a 1 µm, regardless of grain composition. Saija et al. 
(2003) confirmed this result and emphasized that the decreasing β was limited to ultraviolet and visible wavelengths. Mukai et al. (1992) also showed that β increases with porosity for absorptive grains (e.g., magnetite and graphite) with a 1 µm, and it becomes nearly independent of grain size at the highest porosities. On the other hand, dielectric grains (e.g., astronomical silicate) with a 10 µm experience decreasing β as porosity increases. Köhler et al. (2007) determined that β increases rapidly with increasing porosity for both graphite and silicate grains with a 10 µm. Collectively, these results indicate that high porosity of grains in HD 92945's disk does not explain the a min ≈ 4.5 µm obtained from our silicate-based thermal model. However, high porosity may be responsible for such a large value of a min if the dust is mostly composed of more absorptive materials like graphite.

Size Distribution

Our need to steepen HD 92945's grain-size distribution from the traditional a −3.5 representation (Dohnanyi 1969) to reproduce the observed SED is consistent with thermal models of the disks around AU Mic (Strubbe & Chiang 2006) and HD 207129, which, like our model, employ compact spherical grains. On the other hand, models that employ fluffy porous grains, like those developed for the disks around HR 4796A (Li & Lunine 2003a), HD 141569A (Li & Lunine 2003b), and ǫ Eridani (Li et al. 2003), require grain-size distributions that are significantly shallower than a −3.5 . Recent models of collisional cascades in the presence of radiation pressure have shown that the rapid loss of grains with a < a blow creates a wave in the size distribution of the remaining grains (Thébault et al. 2003; Krivov et al. 2006; Thébault & Augereau 2007), so a single power-law representation may be inappropriate regardless of porosity or composition. Thébault & Augereau (2007) noted that the wave becomes sufficiently damped for a ≳ 100 a blow that the size distribution can be approximated by a −3.7 , which matches our best-fitting thermal model of HD 92945. This agreement is surprising because our model suggests that most grains in the outer disk, which is the dominant contributor to the observed SED at λ ≲ 400 µm (Figure 10), are smaller than 100 a blow = 23 µm and should therefore lie in the wavy part of the distribution. Our result may indicate that the wave is actually smaller and/or less extensive than predicted by Thébault & Augereau (2007) and that the size distribution follows the a −3.7 power law for grains as small as 10 a blow (A. Gáspár, personal communication). Thébault et al. (2003) noted that the wavy structure in the size distribution is more pronounced for materials that are prone to cratering rather than catastrophic fragmentation. If so, then softer and more absorptive grains like graphite may not easily account for both the large a min ≈ 4.5 µm ( §5.1.2) and the monotonic a −3.7 size distribution obtained from our thermal model. Of course, our model is based on the common a priori assumption of a power-law distribution without any regard to goodness-of-fit, so we cannot discount any possible compositions of the dust without an exhaustive investigation of alternative size distributions.

Effects of Stellar Wind

Plavchan et al. (2005, 2009) studied the tangential and radial contributions of corpuscular stellar wind to the removal of grains in the debris disks of late-type dwarfs.
They found that the tangential component (or "corpuscular drag") is more important than Poynting-Robertson drag for K and M dwarfs whose mass-loss rates from stellar wind (Ṁ sw ) are similar to the solar rate. Strubbe & Chiang (2006) obtained Ṁ sw ≲ 10 Ṁ ⊙ for AU Mic from their scattered light and thermal model of its disk, but they concluded that corpuscular drag is not a significant mechanism of grain removal from the disk because the grains are more quickly destroyed by mutual collisions within the dust's "birth ring." Plavchan et al. (2009) confirmed that the collisional lifetime of grains in the birth ring is ∼ 10 4 times smaller than the timescale for removal by corpuscular drag, so stellar wind does not presently contribute to the evolution of AU Mic's disk. Wood et al. (2005) developed an empirical relationship between X-ray surface flux and Ṁ sw for G, K, and M dwarfs with X-ray surface fluxes ≲ 8 × 10 5 erg cm −2 s −1 and ages ≳ 700 Myr. Although HD 92945's X-ray surface flux of 1.4 × 10 6 erg cm −2 s −1 is formally beyond the applicable range of this relation, we nonetheless apply the relation to determine whether stellar wind may be a contributing factor to the large a min observed in HD 92945's disk. We find that Ṁ sw ≈ 100 Ṁ ⊙ , which implies that the stellar wind augments the radiation pressure on the grains by only ∼ 8%. In other words, HD 92945's radiation and corpuscular wind together yield a blow ≈ 0.25 µm, which is 18 times smaller than a min obtained from our thermal model. For a blow to be consistent with a min , HD 92945's mass-loss rate would have to be an implausible 2 × 10 4 Ṁ ⊙ . However, if the stellar wind velocity was larger than the υ sw /c = 10 −3 assumed by Plavchan et al. (2009), then the required Ṁ sw could be significantly reduced. This possibility is unlikely, however, because υ sw is approximately equal to the escape velocity, which is approximately constant for stars on the lower main sequence. We therefore conclude that neither corpuscular drag nor blow-out is responsible for the a min derived for HD 92945's inner ring.

Disk Morphology

The other known neutrally scattering debris disk surrounds the A star HD 32297 (Mawet et al. 2009) and presents a nearly edge-on and highly asymmetric appearance that has alternately been attributed to collisional interaction with the interstellar medium, the destruction of a large planetesimal (Grigorieva et al. 2007), or the resonant trapping of dust by an inner planet (Maness et al. 2008). Although HD 92945's disk shows no significant asymmetry along its projected major axis, its morphology may represent an azimuthally smoother remnant of collisions within one or more planetesimal belts. In fact, the combination of an inner ring surrounded by a diffuse disk conforms very well to structures predicted by a variety of dynamical models that explore the evolution of debris disks as embedded planets form and/or migrate (Kenyon & Bromley 2002; Wyatt 2003, 2006; Thébault & Wu 2008), or as the dust alone migrates in the face of radiation pressure and gas drag (Klahr & Lin 2000; Takeuchi & Artymowicz 2001). HD 92945's age and undetected H 2 emission (Ingleby et al. 2009) suggest that the disk is largely depleted of gas, so we discount gas-dust coupling as a viable cause of the disk's ringed structure.
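As a rough check on the stellar-wind figures quoted above, the ratio of the wind's ram-pressure force to the radiation-pressure force on a grain scales as Ṁ sw υ sw c / L * (geometric cross-sections and efficiencies of order unity assumed). The nominal solar mass-loss rate used below, ∼ 2 × 10 −14 M ⊙ per year, is an assumption of this sketch rather than a value given in the text; the wind speed follows the υ sw /c = 10 −3 assumed by Plavchan et al. (2009).

```python
M_sun = 1.989e30                      # kg
year = 3.156e7                        # s
L_sun = 3.846e26                      # W
c = 2.998e8                           # m s^-1

Mdot_sun = 2e-14 * M_sun / year       # nominal solar wind mass-loss rate (assumed)
Mdot_sw = 100.0 * Mdot_sun            # ~100 x solar, from the Wood et al. scaling
v_sw = 1e-3 * c                       # wind speed assumed by Plavchan et al. (2009)
L_star = 0.38 * L_sun

wind_over_rad = Mdot_sw * v_sw * c / L_star
print(f"wind force / radiation force ~ {wind_over_rad:.2f}")   # ~0.08, i.e. the ~8% quoted above
```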
Comparison with Dynamical Models The surface brightness and density maps of HD 92945's disk ( Figure 6) strikingly resemble those produced by Kenyon & Bromley (2004) and Wyatt (2006) from their dynamical models of dust created from collisions of planetesimals confined to rings or planetary resonances. Kenyon & Bromley (2004) modeled the scenario in which the collisions occur in an expanding ring associated with an outwardly propagating wave of planet formation. At any given epoch, this scenario conforms to the "birth ring" concept of Strubbe & Chiang (2006). Wyatt (2006) considered an alternative scenario in which the colliding planetesimals are trapped in a gravitational resonance of an outwardly migrating planet. The resulting dust grains either remain in or migrate from resonance according to their size (or, more accurately, their associated value of β). In this model, Kuiper Belt grains with 0.008 β 0.5 would fall out of resonance with a migrating Neptune but remain bound to the Sun in increasingly eccentric and axisymmetric orbits as they are scattered by multiple close encounters with the planet. Whereas HD 92945's inner ring may be readily described by such dynamical models, the precipitous decline in its surface brightness beyond 110 AU is more problematic. For example, Kenyon & Bromley (2004) initially considered a 3 M ⊙ star with a quiescent disk having a surface-density profile ∝ r −1.5 at r = 30-150 AU, which after several hundred Myr remains smoothly asymptotic at large radii except when a ring, gap, or shadow passes through the region. Changes in the the initial parameters of the system affected the time scales but not the outcomes of the disk evolution. To reproduce HD 92945's double-peaked surfacedensity profile (Figure 7), this model requires two concurrent waves of planet formation at the inner and outer edges of the imaged disk. Kenyon & Bromley (2004) showed that a close encounter with a passing star can also initiate a wave of planet formation at the inner edge of the disk, but we identify no such fly-by candidates in our HST and Spitzer images. 4 Therefore, the model of Kenyon & Bromley (2004) is probably insufficient to explain the apparently sharp boundaries of HD 92945's disk. The resonant-dust model of Wyatt (2006) accommodates sharp inner and outer edges of the disk if the grains have values of β between those necessary for escape from the gravitational resonance and radiative expulsion from the disk. The former limit depends on the planet's mass, so we can assess whether the different values of a max obtained from our thermal model of the inner ring and outer disk ( §4.2) are consistent with the presence of a migrating planet between the star and the inner ring. If we crudely extrapolate plots of β(a) computed for a variety of solar-system grains with a < 10 µm and apply them to the dust surrounding HD 92945, then our value of a max = 50 µm in the outer disk indicates that β 0.004 for compact grains that leave the resonance. For a 0.77 M ⊙ star like HD 92945, this lower limit of β is consistent with a migrating planet of mass ∼ 3 M ⊕ (Wyatt 2006). This scenario implies that larger grains in the disk -like those confined by our thermal model to the inner ring -librate resonantly over a broad but finite range of azimuths. Returning to the birth-ring scenario, Thébault & Wu (2008) found that the surface brightness of an optically thin dust disk of mass ∼ 0.1 M ⊕ achieves a collisionally steady-state profile ∝ r −3.5 with no sharp outer edge. 
The smooth decline at large distances is mainly caused by the forced outward migration of small grains by radiation pressure. They also found that if either the migration or the production of small grains were somehow inhibited, then the disk would retain the sharp edge initially defined by the planetesimal belt. The former possibility does not apply to HD 92945 because its disk has very little mass ( §4.3) and is optically thin, but the latter one may be relevant to both the morphology and large a min of the disk if it is dynamically cold. Thébault & Wu (2008) determined that the low eccentricities of bodies in dynamically cold disks not only reduce the production of small grains from destructive collisions of larger bodies, but also increase the destruction of small grains because the grains are more likely to collide with larger bodies as they are radiatively expelled from the disk. Although the potential link between a cold disk and large a min is intriguing, the low eccentricities needed for such a condition conflict with the high eccentricities of grains in the outer disk (caused perhaps by repeated encounters with a migrating planet) that likely produce the double-peaked surface-density profiles observed for HD 92945 (Figure 7) and modeled by Wyatt (2006). Effect of Disk Inclination The intermediate inclination of HD 92945's disk makes our scattered-light model ( §3.4) insensitive to the disk's vertical thickness and its possible radial dependence. We are therefore unable to constrain the scale height of the disk, which is an indicator of its dynamical temperature and a means for assessing the relevance of the dynamical models just described. Moreover, the disk's inclination inhibits the detection of small-scale clumps or perturbations like those seen in the disks of AU Mic and β Pic (Golimowski et al. 2006), which are viewed along their edges through extensive columns of dust. Consequently, we are unable to ascribe the apparent axisymmetry of the dust to the dominance of radiation pressure in a quiescent disk, a temporary lull in the stochastic collisions of planetesimals, or some other process. Resolved images of the disk at far-infrared or submillimeter wavelengths would allow us to determine whether the axisymmetry persists over a large range of particle sizes and whether resonant trapping of planetesimals dust by a migrating planet is indeed a viable explanation for the observed surface density of the disk (Wyatt 2006). Summary and Concluding Remarks Our ACS/HRC coronagraphic images of HD 92945 reveal an inclined axisymmetric debris disk comprising an inner ring ∼ 2. ′′ 0-3. ′′ 0 (43-65 AU) from the star and a faint outer disk with average F606W (Broad V ) and F814W (Broad I) surface brightness profiles declining as r −0.75 for r = 3. ′′ 0-5. ′′ 1 (65-110 AU) and r −8 for r = 5. ′′ 1-6. ′′ 7 (110-145 AU). The sharp break in the profiles at 110 AU suggests that the disk is truncated at that distance. The observed relative surfacedensity profile is peaked at both the inner ring and the outer edge of the disk. This morphology is unusual among the 15 other disks that have been spatially resolved in scattered light, which typically exhibit either solitary rings with sharp edges (e.g., HR 4796A and Fomalhaut) or asymmetric nebulosity with indefinite outer limits (e.g., β Pic and HD 61005). 
Only HD 181327 has an axisymmetric ring and extended nebulosity somewhat akin to HD 92945's disk, but its nebulosity is asymmetric and has a steep (but not truncated) surface brightness profile within ∼ 450 AU . The dust in HD 92945's outer disk scatters neutrally and isotropically in the V and I bands. These characteristics contradict current optical models of compact and porous grains, which predict that grains larger than a few microns are neutral scatterers and submicron-sized grains are isotropic scatterers at these wavelengths. The disk's anomalously low V -band albedo (∼ 10%) also suggests a large population of submicron-sized grains. If grains smaller than a few microns are absent, then stellar radiation pressure may be the cause only if the dust is composed of highly absorptive materials like graphite. Optical models of compact silicate grains suggest a maximum blow-out size of ∼ 0.25 µm, and this size decreases as porosity increases (Mukai et al. 1992). Our Spitzer MIPS and IRS measurements reveal no significant infrared excess from HD 92945's disk shortward of 30 µm, and they constrain the width of the 70 µm source to 180 AU. Assuming that the dust comprises compact grains of astronomical silicate confined to the disk imaged with ACS, we modeled the 24-350 µm emission with a grain-size distribution dn/da ∝ a −3.7 and a min = 4.5 µm throughout the disk, but with a max = 900 µm and 50 µm in the inner ring and outer disk, respectively. Combining these thermal constraints with the albedo and surface-density profile obtained from our ACS images, we obtain an integrated dust mass of ∼ 0.001 M ⊕ . Conflicting indicators of minimum grain size are not unique to HD 92945. Krist et al. (2010) reported that the narrow ring around the G0 V star HD 207129 exhibits both isotropic scattering and very low albedo (∼ 5%), but its far-infrared SED is adequately modeled with silicate grains with a min = 2.8 µm and dn/da ∝ a −3.9 . As in the case of HD 92945, this value of a min greatly exceeds the radiative blow-out size for the host star, and dn/da is significantly steeper than the traditional a −3.5 representation for a steady-state collisional cascade (Dohnanyi 1969). The results for HD 92945 and HD 207129 are based on reliable and well-calibrated HST and Spitzer data, so it appears that the conflicting indicators stem from an incomplete understanding of the composition and optical properties of compact and porous grains around low-mass stars. Current grain models indicate that the scattering asymmetry parameter and albedo are insufficiently sensitive to composition and porosity to account for our observational results, but some contradictory trends demand that caution be exercised when applying the models to observational data. As demonstrated by Graham et al. (2007) for AU Mic, polarimetric imaging of HD 92945's disk may provide definitive constraints on the sizes and porosity of the grains and thus avoid some ambiguities of the grain models. HD 92945's disk morphology is remarkably like those predicted from dynamical models of dust produced from collisions of planetesimals perturbed by coalescing or migrating planets. The planet-resonance model of (Wyatt 2006) is particularly intriguing because it yields, for a plausible range of grain sizes, a double-peaked surface density profile that resembles the observed profile of HD 92945's disk. 
Furthermore, this model predicts differences between the size distributions of grains that remain in the resonance in which they were created and those that leave the resonance on bound, axisymmetric orbits because of radiation pres-sure. Such spatial segregation by grain size may be relevant to our thermal model of HD 92945's infrared SED, which requires two components of dust distinguished by values of a max that are 18 times larger in the inner ring than in the outer disk in order to match the observed 70, 160, and 350 µm fluxes. As Wyatt (2006) has advocated and as existing multiband images of the disks around β Pic, Vega, and Fomalhaut have shown, resolved images over a broad spectral range are needed to constrain the composition and location of the HD 92945's dust, as well as the mechanism(s) responsible for the disk's morphology at each wavelength. The high-resolution infrared and millimeter imaging capabilities of the Herschel Observatory and the Atacama Large Millimeter Array (ALMA) are well suited for determining the location or distribution of the unresolved thermal emission detected with Spitzer. If the planet-resonance model of Wyatt (2006) applies to HD 92945, then Herschel and ALMA images should reveal increasing concentrations of resonant dust as the imaging wavelength increases. Constraining the location of the resonances would in turn constrain the mass and location of a putative migrating planet, which, at an age of ∼ 300 Myr and possible mass of only a few M ⊕ ( §5.2.1), may not be directly detected with a high-resolution, nearinfrared coronagraph such as that used to image the younger giant planet β Pic b (Lagrange et al. 2010). Given the demise of the ACS/HRC and the uncertain future of NICMOS, the next likely opportunity for imaging HD 92945's disk in scattered light will follow the launch of the James Webb Space Telescope. JWST's Near-Infrared Camera and Tunable Filter Imager will provide coronagraphic imaging from 1.5-5.0 µm, which together with our ACS images will permit an assessment of the chromatic dependence of the albedo, color, and scattering asymmetry of the dust over nearly a decade of wavelengths. Coronagraphic imaging with JWST's Mid-Infrared Instrument (MIRI) will probably be less fruitful, as Spitzer observations have shown no significant excess flux from dust shortward of 30 µm. However, MIRI will be useful for assessing the presence and characteristics of an infant planetary system. That said, more immediately accessible 450 and 850 µm ground-based photometry would help to constrain the submillimeter end of HD 92945's SED and, consequently, the distribution and size limits of grains in the inner ring and outer disk. We gratefully acknowledge Paul Smith from the University of Arizona for his assistance with MIPS SED data processing. We also thank Glenn Schneider and collaborators for sharing the results of their NIC-MOS observations of HD 92945 prior to publication. ACS was developed under NASA contract NAS 5-32865, and this research has been supported by NASA grant NAG5-7697 to the ACS Investigation Definition Team. Additional support for John Krist Fig. 1.-Reduced F606W (Broad V ) images of HD 92945 (top) and the PSF-reference star HD 100623 (middle) obtained with the ACS HRC coronagraph on UT 2005 July 12. The linear scattered-light feature extending from the tip of the occulting bar is an intrinsic component of the coronagraphic PSF. 
The bottom panel shows the image of HD 92945 after normalization, registration, and subtraction of the reference PSF. This "difference" image reveals HD 92945's dusty debris disk, which is partly obscured by PSF-subtraction residuals surrounding the central occulting spot. The overlapping shadows of the HRC's occulting bar and large occulting spot are seen protruding from the top of each image. All images are displayed with logarithmic scaling and 2 × 2-pixel binning, but without correction of geometric distortion.

Fig. — F606W and F814W images of HD 92945 after subtraction of the coronagraphic PSF and correction of HRC's geometric distortion, but before removal of the wide-angle halo. Both images are displayed with logarithmic scaling and 2 × 2-pixel binning. The orientation of the FOV reflects HST's roll angle during the second-epoch observations. The F606W image is the average of the images recorded at both observing epochs, excluding regions obscured by the occulting bar, the large occulting spot, and subtraction residuals from the linear PSF artifact seen in Figure 1. No F814W images were recorded at the first epoch, so the regions affected by these artifacts remain. The 5 ′′ scale bar corresponds to a projected distance of 108 ± 2 AU.

Fig. — Surface brightness profiles derived from the images in Figure 3 after smoothing the images with an 11 × 11 pixel boxcar. The dotted curves show the same profiles after addition and subtraction of the 1 σ error maps (see §2.1.2) to the pre-smoothed images. The heavy dashed lines represent radial power law fits to the regions 3. ′′ 0-5. ′′ 1 (F606W only), 3. ′′ 5-5. ′′ 1 (F814W only), and 5. ′′ 1-6. ′′ 7 (both F606W and F814W). The vertical lines represent the boundaries of these regions. The surface brightnesses were calibrated by matching the observed brightness of HD 92945 in our unocculted HRC images to synthetic photometry from a model K1 V spectrum (T eff = 5000 K, log g = 4.5; [M/H] = 0.0; Castelli & Kurucz 2003) obtained with STScI's synphot package (Laidler et al. 2005). The synthetic magnitudes of HD 92945 itself are m F606W = 7.54 and m F814W = 6.80.

Fig. — The solid curve shows the observed F606W radial density profile, the equivalent of Figure 6(d). The dashed curve shows the 18-segment density profile generated by our scattered-light model that best fits the F606W image of the disk, assuming a scale height of 0.5 AU. The dotted curve is the equivalent of the solid curve for the best-fit model image (Figure 8) after convolution with a synthetic off-spot coronagraphic PSF obtained with the HST version of Tiny Tim (Krist & Hook 2004).

Fig. — Panel labels: Data, Model, Data−Model.

Fig. — Comparison of our final SED model ( §4.2) with observed excess infrared flux after removal of the Rayleigh-Jeans fit to the stellar photosphere. An average albedo of ω = 0.10 is assumed. The measured fluxes are represented with the same symbols described in Figure 9, but the IRS and MIPS SED data are no longer corrected for point-source flux losses due to the finite slit widths. The thick solid line shows our preferred two-component disk model; the thin solid lines reflect the contributions from the inner ring and the outer disk. The dashed lines show the disk model windowed by the IRS and MIPS SED slits.
2011-05-04T18:25:58.000Z
2011-05-04T00:00:00.000
{ "year": 2011, "sha1": "71297311511263dbd9103bd337e522a2c0121741", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.1088/0004-6256/142/1/30/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "71297311511263dbd9103bd337e522a2c0121741", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
249969159
pes2o/s2orc
v3-fos-license
RANKING FACTORS THAT AFFECT SATISFACTION AND MOTIVATION OF EMPLOYEES USING THE PIPRECIA METHOD

Motivating employees is a crucial factor in the use and development of human resources: it focuses them on achieving organizational goals, increases satisfaction, retains quality people, encourages creativity, and eliminates all forms of counterproductive behavior. Through a series of activities, managers can increase individual development needs, making the objective situation more demanding. The paper focuses on two generations: Generation Y (born 1981-2000), often called Millennials, and Generation Z, the youngest cohort entering the labor market after the Millennials. The task of human resource managers is to make each member feel involved in the organizational culture in the right way and to feel valued. Therefore, a multicriteria approach based on the PIvot Pair-wise Relative Criteria Importance Assessment (PIPRECIA) method has been applied in this paper. A detailed review of the literature defines a list of factors and relevant subfactors evaluated by three Generation Y decision-makers and three Generation Z decision-makers. The obtained results are relevant and authoritative, thus unequivocally confirming the usefulness and applicability of the proposed approach.

Introduction

Human resource management is important for creating a stimulating environment for acquiring knowledge and personal development (Grmuša, 2021). Human resource managers need to look at change in the light of success. Therefore, their role is great both in overcoming resistance to change, in teaching employees to do business in new, changed conditions, and in encouraging all kinds of change. Research has shown that in 87% of companies, organizational development and change were managed by the Human Resources Management Department. In today's turbulent environment, human resources managers have a very important leadership role, which brings the leadership skills of managers to the fore. Leadership today includes helping an organization manage change and requires the ability to diagnose problems, implement organizational change, and evaluate results. Changes most often lead to conflicts, resistance, and confusion among employees. Employee performance is linked to what each employee achieves under the rules, regulations, or expectations set by organizations or leaders (Fuertes et al., 2020). Employee performance reflects the abilities of each individual in the organization (Kovačević, 2021). The most competent and experienced employees tend to show a high level of expertise and commitment at work, which leads to greater performance compared to employees with less expertise and skill (Jiang et al., 2020). What most researchers state is that between leadership and employee performance there should be a mediator variable, and in most research that mediator is employee satisfaction (Paais & Pattiruhu, 2020; Zaim et al., 2020; Eliyana et al., 2019; Chiniara & Bentein, 2016). We could define job satisfaction as a positive attitude that an employee has towards his or her work, which is the result of the assessment and evaluation of job characteristics. There are several basic factors in the work environment that can affect job satisfaction. The most important are the nature of the job, salary and benefits, and the relationship with supervisors and colleagues.
There are numerous negative consequences of job dissatisfaction: leaving the organization and absence from work, lack of commitment to the organization, lack of organizational citizenship behavior, reduced productivity, stress, and burnout syndrome. Given the importance of job satisfaction for employee productivity and the efficiency of the organization itself, organizations today invest a lot of time and effort in measuring the degree of satisfaction of their employees. The measurement is conducted once or more times a year, with the aim of monitoring changes in employee satisfaction so that the organization can react to them on time and prevent possible consequences that may occur due to employee dissatisfaction. In practice, two approaches are most often used in measuring job satisfaction: a single overall assessment and an analysis of several aspects of work (Đorđević-Boljanović, 2018). For the millennial generation, skills development, acquisition of new knowledge, mentoring opportunities, fast and efficient feedback, an environment in which cooperation with others is possible, flexible schedules, free time, and the use of the latest technology are of great importance. Among financial rewards, they would be most motivated by participation in company shares. The approach to motivation is reflected primarily in respect for diversity among employees. Hence, the focus of our interest is on the differences in the motivation of different generations of employees: from the baby boom generation, through Generation X and the Millennials, to Generation Z, whose greater impact on the labor market is eagerly awaited (Rampton, 2018). Millennials have made a real turnaround in the world labor market, and the same is still expected from Generation Z. Table 1 speaks in favor of the fact that changes in the labor market are becoming more intense, and that the demands and needs of employees, as well as their values, beliefs, and perceptions, are increasingly different (Đorđević-Boljanović, 2018). From the research aspect, this paper aims to fill the research gap by applying the PIPRECIA method to the factors that affect employee satisfaction and motivation. The motivation for this kind of research is based on answering the question of which factors and subfactors of motivation affect different generations and make them satisfied with their work. The methodological part presents the steps of the PIPRECIA method applied in this paper. In the numerical part, using the method, we calculate the weights of the factors and subfactors relevant to employee satisfaction and motivation and present the results, showing which factors are more important and which are less important. The conclusion summarizes which factors stand out, as well as why they are important for human resource management: each generation has its own characteristics, so a different approach, tailored to their needs, is necessary for employees to be motivated and satisfied.

Table 1. Millennials vs. Generation Z

Millennials: Don't just work for a paycheck; they want a purpose. — Generation Z: Money and job security are their top motivators. They want to make a difference, but surviving and thriving are more important.

Millennials: They aren't pursuing job satisfaction; they are pursuing their development. They want to accumulate rewarding experiences. — Generation Z: Gen Z tends towards being impatient and often experiences FOMO (Fear of Missing Out), so instant feedback and satisfaction are key.
Millennials: They don't want bosses; they want coaches. — Generation Z: They want to be mentored in an environment where they can advance quickly. They want to look their leader in the eye and experience honesty and transparency.

Millennials: They don't want annual reviews; they want ongoing conversations. — Generation Z: They don't want an annual work assessment; they want to be mentored and given feedback on an ongoing, frequent (daily) basis.

Millennials: They don't want to fix their weaknesses; they want to develop their strengths. — Generation Z: They were raised during the Great Recession and believe that there are winners and losers, and that more people fall into the losing category. They want to have the tools to win, either through developing weaknesses or strengths.

Millennials: They have a collaborative mentality where everyone pitches in and works together. — Generation Z: They are competitive; 72% of Gen Z said they are competitive with others doing the same job. They are independent and want to be judged on their own merits and showcase their talents.

Millennials: It's not just their job, it's their life. — Generation Z: Salary, benefits, and how they can advance are pivotal. They are a DIY generation and feel that other generations have overcomplicated the workplace.

Source: Millennials vs. Generation Z, research credit: https://www.forbes.com/sites/christinecomaford/2018/01/20/why-leaders-need-to-embrace-employeemotivation/?sh=6e65abbc1272 https://www.shrm.org/ResourcesAndTools/hr-topics/behavioral-competencies/global-and-culturaleffectiveness/Pages/Move-over-Millennials-Generation-Z-Is-Here.aspx

For this paper, a relatively new MCDM (multiple-criteria decision-making) method was used: the PIvot Pair-wise Relative Criteria Importance Assessment (PIPRECIA) method, proposed by Stanujkić et al. (2017). This method is primarily intended to define the importance (weight) of evaluation criteria, but it can be equally successfully applied to solve MCDM problems, i.e., to assess alternatives and choose the best one (Stanujkić et al., 2021). So far, the authors have used the PIPRECIA method to facilitate decision-making in various fields, such as hospitality and tourism.

The PIPRECIA method

The PIPRECIA method (Stanujkic et al., 2017) is very suitable for defining the significance of criteria, especially in the conditions of group decision-making. The idea for the development of the PIPRECIA method originated from the Step-Wise Weight Assessment Ratio Analysis (SWARA) method (Keršuliene et al., 2020), or, more precisely, from the perceived shortcoming of the SWARA method related to the need to pre-sort criteria according to expected importance. This initial step of the SWARA method automatically disqualifies it as a technique suitable for use in group decision-making conditions. The authors of the PIPRECIA method have made some adjustments, so it does not require prior sorting of criteria and allows the definition of importance simply and understandably. The PIPRECIA method can be illustrated by the following series of steps:

Step 1. Selection of the evaluation criteria, where presorting is not obligatory.

Step 2. Determination of the relative importance s_j, which begins from the second criterion, as follows:

s_j > 1 when criterion C_j is more important than C_(j-1); s_j = 1 when C_j is equally important as C_(j-1); s_j < 1 when C_j is less important than C_(j-1). (2)

Step 3. Definition of the coefficient k_j in the following way:

k_j = 1 for j = 1; k_j = 2 - s_j for j > 1. (3)

Step 4. Detection of the recalculated value q_j as follows:

q_j = 1 for j = 1; q_j = q_(j-1) / k_j for j > 1. (4)

Step 5. Determination of the relative weights of the estimated criteria by using the following equation:

w_j = q_j / (q_1 + q_2 + ... + q_n), (5)

where w_j represents the relative weight of the criterion j.

Step 6. In the case of a larger number of decision-makers, the mean value is calculated using the formula:

w_j* = (w_j^1 + w_j^2 + ... + w_j^n) / n, (6)

where w_j* is the average value of w_j over the decision-makers and n is the number of decision-makers.
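A compact sketch of Steps 2-6 is given below. It assumes the standard PIPRECIA definitions consistent with the step descriptions above (k_j = 2 - s_j and q_j = q_(j-1)/k_j for j > 1); the function name and the illustrative assessments are placeholders, not values taken from the paper's tables.

```python
def piprecia_weights(s):
    """Criteria weights from the PIPRECIA relative-importance assessments (Steps 2-5).
    s[j] (j >= 1) compares criterion j with criterion j-1:
    s[j] > 1 -> more important, s[j] = 1 -> equally important, s[j] < 1 -> less important.
    s[0] is unused, because the first criterion has no preceding criterion."""
    n = len(s)
    k = [1.0] * n                      # Step 3: coefficients k_j (k_1 = 1, k_j = 2 - s_j)
    q = [1.0] * n                      # Step 4: recalculated values q_j (q_1 = 1)
    for j in range(1, n):
        k[j] = 2.0 - s[j]
        q[j] = q[j - 1] / k[j]
    total = sum(q)
    return [qj / total for qj in q]    # Step 5: relative weights w_j, summing to 1

# Step 6: average the weight vectors of several decision-makers
# (illustrative assessments, not the values behind Tables 3-12):
dm_assessments = [[None, 1.1, 0.9, 1.05],
                  [None, 1.0, 1.2, 0.80],
                  [None, 0.9, 1.1, 1.00]]
dm_weights = [piprecia_weights(s) for s in dm_assessments]
group_weights = [sum(col) / len(col) for col in zip(*dm_weights)]
```

Each decision-maker's assessments produce one weight vector, and the group result is simply the element-wise mean of those vectors.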
Table 2 shows a list of factors that are relevant to employee satisfaction and motivation. All the listed factors and subfactors indicate, to a greater or lesser extent, the degree of satisfaction and motivation. However, it is important to establish which factors and subfactors have a greater and which a lesser influence on different generations, in this case Generations Y and Z (Dramićanin, 2021).

A Numerical Illustration

In this case, the six decision-makers are human resource managers: three belong to Generation Z and three to Generation Y. They are all involved in the decision-making process because the paper aims to show the convenience of applying the PIPRECIA method in the conditions of group decision-making. Initially, the importance of the basic factors that motivate Generations Y and Z and influence their job satisfaction will be determined. This will be shown by applying formulas (1)-(6). Tables 3-4 show the results obtained. Source: Author's research. Note: DMa denotes the first decision-maker, DMb the second, and DMc the third. According to the first decision-maker from Generation Y, the most important factor is C1 - leadership, while the second decision-maker believes that the most important factor is C3 - employee satisfaction. The third decision-maker believes that the most important factor is C4 - employee motivation. When it comes to Generation Z, all decision-makers equally believe that the most important factor is C1 - leadership. To minimize the subjectivity of the decision-makers and determine the most relevant results, the mean value of the obtained weights was calculated using formula (6). The obtained results showed that factor C4 - employee motivation - is in first place for the Generation Y decision-makers, while factor C1 - leadership - is of the greatest importance for the Generation Z decision-makers. Based on Table 2, we can notice that each of the factors includes several subfactors, so in the next phase of the analysis the relative importance of the respective subfactors will be determined in Tables 5-12. According to the first decision-maker of Generation Y, the subfactor that has the greatest importance is C15a - a leader allows work to be done creatively, as long as it does not violate the rules of the organization. According to the second decision-maker, the most important subfactor is C11b - the reliable direction in which my organization is moving. The third decision-maker believes that the most important subfactor is C16c - a leader always thinks about the interests of employees. According to the first and second decision-makers of Generation Z, the subfactor that has the greatest importance is C12ab - a leader always invites employees to talk to him, especially if the issues concern them directly. The third decision-maker believes that the most important subfactor is C13c - a leader is an honest person. The overall results of Generation Y, obtained by applying formula (6), showed that the most important subfactor is C11*, while the least important is C13* - a leader is an honest person. The overall results according to the Generation Z decision-makers showed that subfactor C12* - a leader always invites employees to talk to him, especially if the issues concern them directly - is in first place in terms of importance, while in last place is the least important subfactor, C14* - a leader treats employees professionally and distinguishes between personal and professional issues. In this case, according to the first decision-maker of Generation Y, the subfactor that has the greatest importance is C24a - training for acquiring skills for using information systems. The second decision-maker believes that the most important subfactor is C21b - developing managers' ability to support innovation and creativity. The third decision-maker believes that the most important subfactor is C26c - the development of effective continuous professional development programs. In this case, according to the first and second decision-makers of Generation Z, the most important subfactor is C22ab - level of qualification, while according to the third decision-maker the most important subfactor is C25c - building teamwork skills and cooperative work environment models. The overall results of Generation Y show that subfactor C21* - developing managers' ability to support innovation and creativity - is in first place in terms of importance, while in last place is subfactor C22* - the desire for employees to have the skills to manage their learning and development. The average value of the obtained results of Generation Z shows that the most important subfactor is C22* - the desire for employees to have the skills to manage their learning and development, while in last place is subfactor C26* - development of effective continuous professional development programs. The first decision-maker of Generation Y gave the greatest importance to subfactor C37a - management supports employee career development, and the second decision-maker to subfactor C36b - the organization considers all suggestions and complaints received from its employees. The third decision-maker considers that the most important subfactor is C32c - I feel proud to work for this organization. According to the first decision-maker of Generation Z, the most important subfactor is C31a - there is a balance between social life and work. According to the second and third decision-makers, the most important subfactor is C34bc - the organization inspires me and the team. The general results of Generation Y indicate that the most important subfactor is C36* - the organization considers all suggestions and complaints received from its employees, while the last is C34* - the organization inspires me and the team. The general results of Generation Z indicate that the subfactor with the greatest importance is C33* - I feel motivated to actively continue working, while in this case the least important subfactor was C37* - management supports employee career development. The subfactor that is most important to the first and second decision-makers of Generation Y is C45ab - employees come to work regularly (no absences). The third decision-maker considers subfactor C44c - training and development motivate employees to work optimally - to be the most important. According to the decision-makers of Generation Z, the first decision-maker gave the greatest importance to subfactor C41a - there are strict rules and regulations that employees must abide by.
The second and third decision-makers gave the greatest importance to subfactor C43bc - the organization rewards employees who have achieved their goals. Finally, the overall results show that the C45* subfactor - employees come to work regularly (no absences) - is the most important for Generation Y, while the C41* subfactor - there are strict rules and regulations that employees must abide by - is the least important. The overall results for Generation Z show that the subfactor of greatest importance is C43* - the organization rewards employees who have achieved their goals, while C45* - employees come to work regularly (no absences) - is the subfactor of least importance for the Generation Z decision-makers. To obtain the global significance of the observed subfactors, the local significance of each factor was multiplied by the local significance of its corresponding subfactors. The obtained results are shown in Tables 13-14. The results indicate that it is extremely important to enable employees to participate in decision-making in order to better understand what motivates them. Generation Y singled out employee motivation as an important factor, and its most important subfactors are C45 - employees come to work regularly (no absences) - and C44 - training and development motivate employees to work optimally. Generation Z singled out C12 - a leader always invites employees to talk to him, especially if the questions concern them directly - and C11 - the reliable direction in which my organization is moving - as the leadership subfactors of greatest importance. In addition, in Tables 13-14 it can be seen that some factors occupy the same rank, which means that they have the same significance for the decision-makers. It is obvious that all factors are important for employee satisfaction and motivation, but it is useful to define which of them need special attention in the current conditions and to adapt to employees' needs in line with the requirements of each generation. We can notice that the differences between these generations are evident and drastic, and of great importance for the organization; leaders who guide employees need to pay attention to the factors and subfactors that have a special impact on them.

Conclusion

Today's organizations face several challenges. These developments have an impact on employees as well as on managers and leaders. On the one hand, such developments can be an opportunity; on the other hand, they can lead to problems for organizations. That is why managers need to be aware of the differences and similarities between the generational cohorts that are currently in the workplace. These differences and similarities have several effects on organizations as well as on managers. Managers need to address the differences between generations by paying attention to their preferred leadership styles in order to lead effectively and keep employees motivated and satisfied with the organization and their work. The ability to recognize and understand generational differences and preferences in leadership style gives organizations and managers an advantage in effectively managing their diverse workforce. In that way, they can achieve higher productivity and generate a competitive advantage, which benefits both the organization and its employees. To develop the human resources sector and a better understanding of employees, it is necessary to determine which motivational factors need special attention.
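As an aside, the multiplication that produces the global significance values in Tables 13-14, mentioned above, can be written out in a couple of lines. The weights below are placeholders for illustration only, not the published values.

```python
# Global significance of a subfactor = (local weight of its parent factor)
#                                    x (local weight of the subfactor within that factor).
factor_weights = {"C1": 0.27, "C2": 0.24, "C3": 0.24, "C4": 0.25}          # illustrative
c4_subfactor_weights = {"C43": 0.23, "C44": 0.20, "C45": 0.22}             # illustrative

global_weights = {sf: factor_weights["C4"] * w for sf, w in c4_subfactor_weights.items()}
# e.g. C45 -> 0.25 * 0.22 = 0.055; subfactors from all factors can then be ranked together.
```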
Therefore, the PIPRECIA method was applied in this paper under the conditions of group decision-making. Based on the literature review, four key factors were identified, each comprising an appropriate number of subfactors to be evaluated. The final results show that the key motivating subfactors for Generation Y and Generation Z employees are the following: C11 (leadership); C21 (training and development of employees); C31 (employee satisfaction); C43 (employee motivation). The obtained results would be more relevant if several groups of decision-makers were involved in the decision-making process, as well as if the evaluation process itself were related to a certain type of organization or a specific type of work. Nevertheless, the proposed methodology has confirmed its usefulness and applicability in decision-making in this area. The application of this method should not be limited to the issue of employee motivation and satisfaction; it is necessary to examine the possibility of its application in other areas and at other levels of business as well. The recommendation for future work includes the evaluation and ranking of the factors that affect the motivation of employees engaged in a particular type of work, with respect to the list of previously defined subfactors.
2022-06-24T15:07:46.342Z
2022-06-21T00:00:00.000
{ "year": 2022, "sha1": "b185196f77393b4ccdb35f4731e714bb7c1fc39e", "oa_license": "CCBY", "oa_url": "https://aseestant.ceon.rs/index.php/jouproman/article/download/38300/20369", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7e1d7b161abea383a1b9feaf0e54ac0857f435a8", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
10398360
pes2o/s2orc
v3-fos-license
A selenium species in cerebrospinal fluid predicts conversion to Alzheimer’s dementia in persons with mild cognitive impairment Background Little is known about factors influencing progression from mild cognitive impairment to Alzheimer’s dementia. A potential role of environmental chemicals and specifically of selenium, a trace element of nutritional and toxicological relevance, has been suggested. Epidemiologic studies of selenium are lacking, however, with the exception of a recent randomized trial based on an organic selenium form. Methods We determined concentrations of selenium species in cerebrospinal fluid sampled at diagnosis in 56 participants with mild cognitive impairment of nonvascular origin. We then investigated the relation of these concentrations to subsequent conversion from mild cognitive impairment to Alzheimer’s dementia. Results Twenty-one out of the 56 subjects developed Alzheimer’s dementia during a median follow-up of 42 months; four subjects developed frontotemporal dementia and two patients Lewy body dementia. In a Cox proportional hazards model adjusting for age, sex, duration of sample storage, and education, an inorganic selenium form, selenate, showed a strong association with Alzheimer’s dementia risk, with an adjusted hazard ratio of 3.1 (95% confidence interval 1.0–9.5) in subjects having a cerebrospinal fluid content above the median level, compared with those with lower concentration. The hazard ratio of Alzheimer’s dementia showed little departure from unity for all other inorganic and organic selenium species. These associations were similar in analyses that measured exposure on a continuous scale, and also after excluding individuals who converted to Alzheimer’s dementia at the beginning of the follow-up. Conclusions These results indicate that higher amounts of a potentially toxic inorganic selenium form in cerebrospinal fluid may predict conversion from mild cognitive impairment to Alzheimer’s dementia. Electronic supplementary material The online version of this article (doi:10.1186/s13195-017-0323-1) contains supplementary material, which is available to authorized users. Background Neurodegenerative dementias are well-recognized, severe medical conditions that are prevalent worldwide and expected to rise in western countries in the coming years [1,2]. Effective therapies are lacking, as is adequate knowledge of their risk factors. In addition to genetic susceptibility, there is increasing evidence that environmental determinants, including environmental pollutants [3,4], are important in dementia etiology. Among the large number of chemical factors that have been implicated in the etiology of dementia, particularly its most common form, Alzheimer's dementia (AD), recent concern has focused on both increased and decreased exposure to the metalloid selenium (Se), an element of strong nutritional and toxicological interest [5][6][7][8]. Se exists in several chemical species with markedly different and even opposite biological properties [9][10][11]. In its selenocysteine-bound organic form, Se is an indispensable component in selenoprotein biosynthesis [7,12], while other organic species such as selenomethioninebound Se [13,14] and the inorganic forms such as selenate or selenite [11,[15][16][17] are also well recognized as powerful toxicants. Se exposure in the human mainly occurs through diet and in its organic forms, its major sources being meat and fish, cereals, eggs, and dairy products [18,19]. 
Se has been a topic of interest in recent decades mainly with reference to its possible role in cancer prevention and therapy [20][21][22]. More recently, its involvement in human brain pathology, and particularly with the risk of amyotrophic lateral sclerosis and AD, has become a focus [17,[23][24][25][26][27][28][29]. Such a relation, however, may exist only for some Se species, and particularly for the inorganic ones [30,31]. While the results of a randomized trial assessing the effect of selenomethionine supplementation in dementia prevention have been published recently [32], there are no observational cohort studies specifically taking into account the speciation of the different chemical forms of Se in relation to the risk of AD. We collected cerebrospinal fluid (CSF) samples in a cohort of Italian participants diagnosed with mild cognitive impairment (MCI) [33]. These participants were then followed for occurrence of dementia, enabling us to assess the relation between concentrations of various Se species in CSF and the risk of conversion to AD or other dementia.

Study cohort

As shown in the flowchart (Fig. 1), following approval by the Modena Ethics Committee, we considered as eligible for our cohort study all participants who received a clinical diagnosis of MCI (amnestic MCI, single domain or multiple domain, or nonamnestic MCI [34,35]) and who were admitted from 2008 to 2014 to the Neurology Memory Clinic of Sant'Agostino-Estense Hospital of Modena, Italy. Participants were then further selected if, following informed consent, they underwent a lumbar puncture (LP) for diagnostic purposes and had no brain imaging abnormalities or medical history suggestive of a vascular origin of their cognitive impairment [33,36,37].

Fig. 1 Flowchart for design of the cohort study. AD Alzheimer's dementia, CSF cerebrospinal fluid, FTD frontotemporal dementia, LBD Lewy body dementia, MCI mild cognitive impairment

Out of 71 potentially eligible participants, 56 had 1 mL of CSF or more available, and these constituted the cohort for the present study. At baseline, all participants underwent routine blood tests, neurological and neuropsychological examination, and brain MRI. They also had an LP (within 1 month of clinical and neuropsychological examination) to measure CSF levels of Aβ 1-42 (β-amyloid), total tau (t-tau), and phosphorylated tau (p-tau) proteins. APOE ε4 allele status was determined for 39 participants. All participants were subsequently followed up every 6 months through December 2016. At each visit they were classified according to whether their condition was stable or had converted to any clinical type of dementia, including AD [38], Lewy body dementia (LBD) [39], and frontotemporal dementia (FTD) [40,41].

Analytical determinations

Lumbar punctures were performed using a standard procedure to minimize the risk of biological and chemical contamination [17]. We collected CSF in sterile polypropylene tubes, which were transported to the adjacent laboratory within 30 min of collection. We centrifuged CSF for 15 min at 2700 × g at controlled room temperature and aliquoted it into polypropylene storage tubes. CSF β-amyloid, t-tau, and p-tau 181 were measured as described previously [33]. The remaining, anonymized aliquots were immediately stored at -80°C and later transported deep frozen in dry ice by air courier to the element speciation laboratory at the Helmholtz Zentrum München, and kept continuously frozen until use.
Concerning the analytical figures of merit and analytical quality control, the limit of detection (LOD) was 19.5 ng Se/L for Se species. Only values above the LOD are reported throughout the manuscript. Accuracy of Se determination and Se species quantification was checked by analyzing control materials and a certified reference material: quality control for total Se determination was performed by analyzing control materials 'human serum' and 'urine' from Recipe (Munich, Germany). Control materials were reconstituted as indicated on the flask labels. The resulting solutions were diluted 1/50 (serum, measurement concentration 1.25 μg/L) or 1/10 (urine, measurement concentration 2.35 μg/L) with Milli-Q water before measurements for adjusting the measurement concentration to the expected concentration range of CSF (no CSF reference material for Se was available). Accuracy values were 98.4 ± 3.8% (serum) and 102.1 ± 5.4% (urine). Data analysis For analytical values below the LOD, we input half of the threshold [44,45]. Most participants had Se species well above the LOD (from 89 to 100% depending on the single Se form), with lower values only for the three organic Se species Se-Cys, Se-GPX, and Se-TXNRD, which had values above the LOD only in 21%, 43%, and 0% of the participants, respectively. We assessed the association between Se species through Spearman correlation. To evaluate the possible influence of Se species on β-amyloid and p-tau, we also fitted a linear regression model of log-transformed CSF concentrations of βamyloid and, separately, p-tau at baseline. In both the Spearman correlation analysis and the linear regression analysis, values of Se species below the LOD were excluded. After defining the person-time of follow-up as the time of MCI diagnosis/CSF sampling until the last follow-up visit, December 2016, or the date of AD/dementia diagnosis, whichever occurred first, we estimated the hazard ratio (HR) of progressing to AD (as well as to any dementia subtype, i.e., AD + FTD + LBD) in a Cox proportional hazards model. After assessing all variables for the proportional hazard assumption, we fitted a multivariable Cox model stratified by sex, and adjusted for age (years), education (years), and duration of sample storage (years). Table 1 reports the main demographic and clinical characteristics of MCI cohort members at baseline according to dementia diagnosis during the follow-up, and Table 2 reports their CSF concentrations of Se species, βamyloid, t-tau, and p-tau. Of the original 56 participants, 21 converted to AD, four to FTD, and two to LBD, and 29 did not convert at the end of the follow-up. Followup lasted on average 43.3 months, with a median of 42 months and an interquartile range of 30.4-51.2, with a total number of person-months of follow-up equal to 2423.5. Results When we assessed the correlation between Se species, the only forms not associated with total Se were Se(VI), Se-GPX, and Se-Cys, the latter form being also unrelated to total organic Se. Inorganic Se (particularly Se(VI)) was inversely correlated with organic Se and particularly Se-SelenoP and Se-Met. Se-HSA was directly correlated with all Se species and categories except for Se-Met, Se-Cys, and Se-GPX. However, all correlations involving Se-Cys and especially Se-GPX were based on a small number of individuals, because a large proportion of the sample concentrations fell below the LOD. 
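To make the data-analysis steps described in this section more concrete, the following is a minimal sketch of the substitution of non-detects with half the limit of detection, a Spearman correlation restricted to detected values, and a linear regression of log-transformed CSF β-amyloid on one Se species with the covariates named above. The synthetic data frame and column names (se_vi, se_met, beta_amyloid, and so on) are illustrative assumptions, not the authors' dataset or code.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 56
df = pd.DataFrame({
    "se_vi": rng.lognormal(-2.0, 0.6, size=n),        # hypothetical CSF selenate, ug/L
    "se_met": rng.lognormal(-1.5, 0.5, size=n),       # hypothetical Se-Met, ug/L
    "beta_amyloid": rng.normal(600, 150, size=n).clip(100),
    "age": rng.normal(70, 6, size=n),
    "education": rng.integers(5, 18, size=n),
    "storage_years": rng.uniform(1, 8, size=n),
    "sex": rng.choice(["M", "F"], size=n),
})

LOD = 19.5e-3  # limit of detection, 19.5 ng Se/L expressed in ug/L

def impute_below_lod(series: pd.Series, lod: float) -> pd.Series:
    """Replace values below the limit of detection with half the LOD."""
    return series.where(series >= lod, lod / 2.0)

df["se_vi_imp"] = impute_below_lod(df["se_vi"], LOD)

# Spearman correlation between two Se species, using detected values only
detected = df[(df["se_vi"] >= LOD) & (df["se_met"] >= LOD)]
rho, p = spearmanr(detected["se_vi"], detected["se_met"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Linear regression of log-transformed CSF beta-amyloid on one Se species,
# adjusted for sex, age, education and duration of sample storage
ols = smf.ols(
    "np.log(beta_amyloid) ~ se_vi_imp + C(sex) + age + education + storage_years",
    data=df,
).fit()
print(ols.params)
```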
Inorganic Se and Se(VI) concentrations in baseline CSF samples were inversely associated with CSF βamyloid concentration (Table 3). By contrast, higher organic Se concentration and particularly Se-Met in CSF were associated with higher CSF β-amyloid. There was little evidence of an association of any Se species in CSF with CSF concentration of p-tau; Se(VI) showed some evidence of a direct relation with this protein, although this was statistically unstable as shown by the wide confidence interval of the regression coefficient. Further adjustment for APOE ɛ4 allele carriership did not substantially change the results (Additional file 1: Table S1). Results of the proportional hazards regression analysis are reported in Table 4. We observed an excess, albeit statistically unstable, AD risk associated with higher total Se, with exposure classified into a dichotomy, above or below the median both in the crude analysis and taking Table 1 Baseline characteristics of study population according to diagnosis at the end of follow-up into account age and sex, time elapsed since year of first storage, and education. When looking at the single Se species, we found a strongly increased AD risk associated with Se(VI) exposure and to a lesser extent with Se-Met, and more weakly with Se-HSA. Results were roughly comparable in the analysis based on continuous values of Se exposure (both one-unit and one-standarddeviation increase; data not shown). When we adjusted for β-amyloid and p-tau, in addition to age and sex, we obtained HRs comparable with those obtained in the less adjusted analysis, although in this adjusted model the excess risk associated with overall Se, organic Se, Se-SelenoP, and particularly Se-Met levels was enhanced, and that associated with Se(VI) was reduced. Finally, adding APOE ε4 status to the most adjusted multivariable model shown in Table 4 had little effect on the estimates (Additional file 1: Table S2), with the exception of the HR associated with selenate which became 7.6 (95% CI 1.2-49.5). However, results of the latter analysis were statistically less stable due to fewer participants (for a few cohort members this genetic datum was not available) and more variables in the model. When we replaced AD with any dementia occurrence in the aforementioned analyses, effect estimates were substantially unchanged (data not shown). We also repeated the aforementioned Cox analysis by omitting the AD cases detected after the first 2 years of follow-up (two participants): results were substantially unchanged, with the HR associated with selenate levels above the median being 3.5 (95% CI 1.0-11.9). When we stratified the analysis according to the APOE ε4 status, a direct association of Se(VI) with AD risk emerged in both APOE ε4 categories (Additional file 1: Table S3), either in crude analyses or after adjusting for potential confounders. Considering the most adjusted model, in the 21 APOE ε4 noncarriers HRs were also increased for total Se, Se-GPX, and Se-HSA, while in the 18 APOE ε4 carriers an increased AD risk was apparent for total Se, Se-Met, and Se-Cys. Because of the small numbers within strata, however, these effect estimates were imprecise, as indicated by their wide confidence intervals. Discussion We investigated whether the risk of conversion to AD in patients with MCI is influenced by exposure to Se. 
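Before turning to the interpretation, the hazard-ratio analysis summarised in Table 4 above (a Cox model stratified by sex with exposure dichotomised at the median) can be sketched as follows. The snippet assumes the lifelines package and reuses hypothetical column names and synthetic follow-up data; it is an illustration of the analysis design, not the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 56
df = pd.DataFrame({
    "se_vi": rng.lognormal(-2.0, 0.6, size=n),
    "age": rng.normal(70, 6, size=n),
    "education": rng.integers(5, 18, size=n),
    "storage_years": rng.uniform(1, 8, size=n),
    "sex": rng.choice([0, 1], size=n),                 # used as the stratifying variable
    "followup_months": rng.uniform(6, 60, size=n),     # time to conversion or censoring
    "converted_ad": rng.choice([0, 1], size=n, p=[0.6, 0.4]),
})

# Dichotomise exposure at the median: 1 = above the median, 0 = at or below it
df["se_vi_high"] = (df["se_vi"] > df["se_vi"].median()).astype(int)

cph = CoxPHFitter()
cph.fit(
    df[["followup_months", "converted_ad", "se_vi_high",
        "age", "education", "storage_years", "sex"]],
    duration_col="followup_months",
    event_col="converted_ad",
    strata=["sex"],              # Cox model stratified by sex
)
print(cph.hazard_ratios_["se_vi_high"])   # adjusted HR for high selenate
```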
We found that one out of several Se species in CSF was positively associated with subsequent AD, and results were similar when we included in the outcome the few additional incident cases of neurodegenerative dementia; that is, the four cases of FTD and the two cases of LBD. The Se species associated with AD was the inorganic hexavalent one, selenate (Se(VI)), a species that lacks a direct physiological role by itself as it is not incorporated into selenoproteins. In addition, we found some associations between the organic form Se-Met and AD risk, but these were not confirmed in analyses based on the exclusion of cases diagnosed early during the follow-up. Se(VI) is characterized by a peculiar metabolic pattern and toxicity [8,11,14,24,[46][47][48][49][50]. Our findings, which AD Alzheimer's dementia, β-amyloid Aβ 1-42 , FTD frontotemporal dementia, IQR interquartile range, LBD Lewy body dementia, MCI mild cognitive impairment, p-tau phosphorylated tau protein, Se selenium, Se(IV) selenite, Se(VI) selenate, Se-SelenoP selenoprotein P-bound Se, Se-Met selenomethionine-bound Se, Se-Cys selenocysteine-bound Se, Se-GPX glutathione-peroxidase-bound Se, Se-HSA human serum albumin selenium-bound Se, t-tau total tau protein were confirmed in analyses that take into account potential confounders such as education [51] and duration of sample storage [10], indicate that higher amounts of this Se species may predict and possibly cause AD. This association remained, but was weaker, after including in the multivariable model two biomarkers of Alzheimer's disease pathology, β-amyloid and p-tau levels. This finding indicates that these two CSF proteomic indicators may be mediators of Se(VI) toxicity, in which case they should be omitted from the regression model. Some association remained when these factors were included in the model, which could indicate that these two factors may not mediate the entire possible neurodegenerative effect of Se(VI), or result from discrepancy in the CSF measures as indicators of neuropathology in the parenchyma. We also noted in the linear regression analysis an inverse association between baseline CSF Se(VI) (and more generally inorganic Se) and β-amyloid. This observation strengthens our findings from the Cox regression model, since low CSF β-amyloid levels are a marker of AD conversion risk in individuals with MCI [33]. The reason for such an inverse relation specifically restricted to inorganic Se and particularly to Se(VI) is difficult to surmise. It may be linked to some specific toxic properties such as pro-oxidant activity and promotion of protein misfolding by this Se species, as suggested or documented in laboratory and animal studies [16,[52][53][54] and suspected to occur in humans [8,24]. It also may relate to specific genetic features in Se(VI) metabolism characterizing some individuals [49]. A deleterious effect of Se(VI) on the central nervous system (CNS), and more generally inorganic Se, is biologically plausible, since these Se compounds have been long known to be very toxic [24,55]. Such an effect would contrast with suggestions of potential beneficial effects of Se and selenoproteins in AD progression from laboratory studies [56][57][58], although this hypothesis was recently contradicted by results from the PREADVISE study (Prevention of Alzheimer's Disease by Vitamin E and Selenium Trial) [32]. 
In that trial, 7540 asymptomatic Adjusted for sex, age at entry, years of storage, years of education, and β-amyloid and phosphorylated tau protein level older adults in North America were randomized to either placebo or 200 μg/day Se as L-selenomethionine or both L-selenomethionine and vitamin E for an average of 5.4 years, but there was no effect of any Se supplementation on dementia or AD incidence during the active supplementation period, or within a subset of the study cohort up to 6 additional years [32]. In the laboratory and veterinary medicine studies, inorganic and some organic Se species were shown to disrupt physiological pathways related to the etiology of neurological disorders or induce frank neurotoxicity [24]. This is particularly true for inorganic Se [59][60][61] including Se(VI) [62,63], which may induce oxidative stress [62,64,65] and cause genotoxicity and apoptosis [53,[66][67][68][69], particularly in neural cells [60]. This Se species may also be incorporated into protein as a replacement for sulfur, with consequent misfolding and functional impairment [65,70] and endoplasmic reticulum stress [54,71], all mechanisms potentially involved in AD etiology [72,73]. In humans, specific neurotoxicity data are available for Se(VI) only for acute high-dose intoxication, which includes confusion, memory loss, anxiety, depression, irritability, insomnia, and dizziness [74,75]. Exposure to inorganic Se species and Se(VI) in the human is limited, since Se(VI) levels in food are low compared with organic Se forms, and Se in drinking water (generally containing Se as Se(VI)) contributes little to total Se intake [76]. However, there are scarce data on Se speciation in food, and some sources such as seafood or Se-enriched vegetables may contain higher levels of the inorganic Se(VI) species [18,77]. Dietary supplements may also represent a source of Se(VI), although they contain a mixture of Se species, in most cases represented mainly by its organic forms, especially Se-Met [78,79]. The two key features of our study are the longitudinal design and the speciation approach. The cohort design allowed us to avoid reverse causality, the major potential limitation of certain case-control studies and all crosssectional studies. In fact, a progressive deterioration of nutritional status may characterize progression to AD, in parallel with the worsening of cognitive impairment [29,80], and selectively involve at an early stage of the disease some dietary factors including Se [81]. In addition, an effect of disease itself on Se tissue distribution and metabolism might exist. This effect is also suggested by the higher levels in post-mortem AD brains of the antioxidant SelenoP, which has been interpreted as a compensatory response to the oxidative stress characterizing disease progression [82]. Despite the strength of the longitudinal design of our study, we also took into account the possibility that some clinically undetected incipient disease already characterized our participants later converting to AD, and therefore that some possibility of reverse causation still existed. We addressed this point by removing participants developing AD in the first period of follow-up, and our associations did not change or were even strengthened. Focusing on Se speciation is something that has not been done before in similar research. Earlier studies assessed only overall Se levels or, very rarely, selenoprotein activity [82,83]. 
Since the chemical form of Se plays a major role in driving both its toxicological and nutritional effects, any exposure assessment based on overall Se content may be misleading [8,9,11,15]. Also neurotoxic properties of various Se species may differ considerably, independently of the overall Se exposure [24,84]. These considerations accentuate the potential for bias due to exposure misclassification based on overall Se determination in epidemiologic studies [10,21,22], and highlight the relevance of speciation analysis in neurodegeneration research [30]. Another important feature of this study is the investigation of Se status in a CNS compartment. In fact, peripheral biomarkers of exposure, either based on overall Se or on single Se species, may not adequately predict CNS levels, especially in view of the known peculiarities of metabolism and retention of this element in the brain [85,86] and the lack of correlation between some circulating Se species, especially its inorganic forms, with CSF levels [15,43,87]. Most case-control studies of AD have focused on peripheral indicators of exposure, such as blood, urine, hair, and nail samples, finding conflicting results ranging from adverse to protective [27,28], while little association was noted with Se CSF levels [88][89][90]. A recent study based on 286 autopsied samples found Se brain content to be positively associated with brain neuropathology [91]. Se content was directly and positively correlated with neurofibrillary tangle severity, and in the highest exposure category a higher but statistically unstable risk of global Alzheimer's disease pathology and of Lewy bodies also emerged [91]. However, the crosssectional study design and lack of speciation analyses made it impossible to assess whether the higher Se levels preceded brain neuropathology or were due to compensatory selenoprotein synthesis [7,8,92]. We also found an excess but statistically imprecise AD risk associated with Se-HSA. However, the interpretation of this finding is challenging because of uncertainties about the exact nature of this chemical species, which might include both organic and inorganic Se forms [10,15]. Finally, we found some evidence of an excess AD risk associated with 'unknown' Se species, but the very wide confidence interval of the effect estimate and the uncertainties of the nature of this Se compound(s) hamper a reliable assessment of this finding. We found some evidence of effect modification by the APOE ε4 status on AD risk, indicating that carriers of the APOE ε4 allele may undergo an excess disease risk for higher levels of Se-Met and Se-SelenoP. Interestingly, an indication that APOE ε4 status and Se metabolism may interact has been provided in an older Chinese population [93]. Se-Met is a Se form which has cytotoxic and pro-oxidant activities [69,94], and this species has been recently observed to induce cognitive impairment in an animal model [95]. As already indicated, our study was small, with insufficient data to assess the associations within subgroups according to sex, age, or other factors. Similarly, we lacked data to assess the role of the three Se species (Se-Cys, Se-GPX, and Se-TXNRD) for which most samples (all for Se-TXNRD) fell below the LOD. Therefore, the involvement of these species in AD etiology could not be adequately or even partially assessed in the present study. Another limitation is the prospect of unmeasured confounding [96], which appears to be of particular relevance in epidemiologic studies dealing with Se [97]. 
Finally, the association between selenium as Se(VI) and AD risk found in our study may apply only to a population having the Se exposure typical of residents in the study area, already shown in previous studies to be comparable with the Italian national average [98][99][100], while such an association may not necessarily exist in other populations characterized by considerably lower intake of this element [90,101].

Conclusions

We found in persons with mild cognitive impairment of nonvascular origin that a higher cerebrospinal fluid content of an inorganic Se species, selenate, predicted progression toward AD. No other Se form was related to either increased or decreased AD risk. Since results were strengthened when participants who were diagnosed early during the follow-up were excluded from the analysis, thus limiting any effect of reverse causality, our results indicate that selenate levels in the central nervous system compartment may predict and possibly influence AD risk. However, the possibility of unmeasured confounding and the statistical imprecision of our results emphasize the need to replicate these findings in other studies.

Additional file

Additional file 1: Table S1. presenting APOE ɛ4-adjusted linear regression analysis estimates of CSF Se species versus log-transformed values of biomarkers of Alzheimer's disease pathology (β-amyloid and p-tau as dependent variables) in the MCI study participants at baseline (values below the limit of detection for each specific species excluded from analysis). Most-adjusted estimates are from a multivariable model including sex, age, education, and duration of sample storage as potential confounders, in addition to APOE ɛ4 allele carriership (presence/absence). Table S2. presenting crude and APOE ɛ4-adjusted HRs of developing AD in a Cox proportional hazards model according to baseline CSF Se species in the 39 MCI subjects at baseline having information about their APOE ɛ4 status. Se exposure status defined as 0 (below or equal) and 1 (above) with reference to the median value. Table S3. presenting crude and adjusted HRs of developing AD in a Cox proportional hazards model, related to baseline CSF Se species in MCI subjects at baseline, according to APOE ɛ4 carriership status (carriers, N = 18; noncarriers, N = 21). Se exposure status defined as 0 (below or equal) and 1 (above) with reference to the median value. (
2017-12-19T15:04:00.549Z
2017-12-19T00:00:00.000
{ "year": 2017, "sha1": "96168d970ca7b3dd30135ed5d8e36fe3da4544b8", "oa_license": "CCBY", "oa_url": "https://alzres.biomedcentral.com/track/pdf/10.1186/s13195-017-0323-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96168d970ca7b3dd30135ed5d8e36fe3da4544b8", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
255171844
pes2o/s2orc
v3-fos-license
Multi-View Robust Tensor-Based Subspace Clustering In this era of technology advancement, huge amount of data is collected from different disciplines. This data needs to be stored, processed and analyzed to understand its nature. Networks or graphs arise to model real-world systems in the different fields. Early work in network theory adopted simple graphs to model systems where the system’s entities and interactions among them are modeled as nodes and static, single-type edges, respectively. However, this representation is considered limited when the system’s entities interact through different sources. Multi-view networks have recently attracted attention due to its ability to consider the different interactions between entities explicitly. An important tool to understand the structure of multi-view networks is community detection. Community detection or clustering reveals the significant communities in the network which provides dimensionality reduction and a better understanding of the network. In this paper, a new robust clustering algorithm is proposed to detect the community structure in multi-view networks. In particular, the proposed approach constructs a 3-mode tensor from the normalized adjacency matrices that represent the different views. The constructed tensor is decomposed into a self-representation and error components where the extracted self-representation tensor is used to detect the community structure of the multi-view network. Moreover, a common subspace is computed among all views where the contribution of each view to the common subspace is optimized. The proposed method is applied to several real-world data sets and the results show that the proposed method achieves the best performance compared to other state-of-the-art algorithms. I. INTRODUCTION Recently, the world has witnessed a great and rapid development in data science. This development has led to a massive availability of structured data where the structure reflects crucial information about the data. Over the past decades, graph and network theory has emerged to analyze and interpret this structure [1], [2]. In fact, real-world systems can be modeled as graphs where objects and interactions between them are modeled as nodes and edges of the graph, respectively [3]. Early work adopted simple or static graphs to model systems where this is considered inadequate for complex systems, especially when the data is collected from The associate editor coordinating the review of this manuscript and approving it for publication was Chao Tong . multiple resources or views [4], [5]. In particular, each view carries unique information about the system and reflects different interactions. This type of systems can be modeled properly as a multi-view network [6], [7]. Learning from multi-view network-type data has become an essential and advanced task in machine learning [8]. In multi-view network representation, the interactions between the nodes are considered explicitly for each view. One of the most popular approaches to understand the structure of multi-view networks is community detection [9]. Community detection aims to reduce the dimensionality of the network and partition it into a set of clusters or communities, where the nodes within the cluster are strongly connected with each other and sparsely connected with nodes from other clusters [10]. Early work in clustering multi-view networks adopted Single-View Clustering (SVC) techniques to reveal the structure of the network [11]. 
The existing SVC techniques that are widely used can be divided into two classes. First, aggregation of all views to create a single network, then apply classical SVC techniques [12]. Second, applying any conventional SVC technique to each view separately, then use ensemble or consensus clustering to determine the final community structure [13]. Examples of SVC include Low-Rank Representation (LRR) [14] and SubSpace Clustering (SSC) [15]. However, SVC approaches are insufficient to reveal the structure of the multi-view network since each view has its own statistical properties that are not considered properly during the aggregation process. Consequently, Multi-View Clustering (MVC) approaches have recently arisen to overcome the disadvantages of single-view clustering [16], [17]. These approaches include multi-view clustering based on LRR (MVC-LRR) [17], [18], [19], multi-view clustering based on robust principal component analysis (MVC-RPCA) [20], [21], [22] and multi-view clustering based on graphs (MVC-G) [23], [24], [25]. Under MVC-LRR, In [17], a Low-rank Tensor constrained Multi-view Subspace Clustering (LT-MSC) approach was proposed to recover the data structure via exploiting the complementary information of different views. Another technique that was proposed in [16] called Tensor Singular Value Decomposition for Multi-view Spectral Clustering (t-SVD-MSC). This approach studied how to obtain the low-rankness of the representation tensor using the Tensor Nuclear Norm (TNN) instead of the sum nuclear norm. On the other hand, the authors in [26] introduced a Diversity-induced Multi-view Subspace Clustering (DiMSC) approach, which studied the multi-view clustering based on Hilbert-Schmidt Independence Criterion (HSIC). The HSIC is used to measure the dependence of variables by mapping the variables into a reproducing kernel Hilbert space. Furthermore, the authors in [27] introduced an Exclusivity-Consistency regularized Multi-view Subspace Clustering (ECMSC) algorithm. In particular, ECMSC studied the multi-view clustering by using the complementary information between different representations using a position-aware exclusivity term. In addition, the authors in [19] proposed a Split Multiplicative Multi-View Subspace Clustering (SM 2 SC) approach, which is based on multiplicative multi-view clustering and a variable splitting scheme. However, the proposed methods under MVC-LRR used the original data set, i.e. the data samples and their features to recover a clean tensor. Moreover, they need to follow a two-step strategy to construct the affinity matrix from the recovered clean tensor which is used to detect the network's structure. Unlike our proposed algorithm that uses the multi-view network as an input and solves for the clean tensor and the community structure through a single optimization problem. Under MVC-RPCA, the authors in [28] proposed a single view clustering based on robust graph learning. The proposed model vectorized all views in columns to create a single matrix and then applied RPCA to recover the low-rank matrix, which is used to detect the network's structure. For high dimensional array, the authors in [20] proposed an Essential Tensor Learning for Multi-view Spectral Clustering (ETLMSC) approach that considers high-dimensional data, i.e. tensors. ETLMSC constructs the adjacency tensor using random walk and the transition probability matrix definitions. 
Also, tensor RPCA is used to recover the low-rank component, which is used to detect the structure's network using Markov chain spectral clustering. However, ETLMSC follows a two-step strategy to construct the affinity matrix from the recovered clean tensor which is used to detect the network's structure. In [22], the authors studied the different possible errors in the multi-view features and proposed an error robust multi-view spectral clustering model. In [29], the authors proposed a multi-view subspace clustering model based on the transition probability matrix learning and a nonconvex low-rank tensor approximation instead of a convex nuclear norm. For more MVC-RPCA, please refer to [21] and [30]. Under MVC-G, in [31], the introduced model studied the multi-view clustering based on the partition of the graph into specific clusters directly by learning the local manifold structure of the similarity matrix. Moreover, this method can automatically estimate the factors after finite iterations to deal with real-world applications. On the other hand, a parameter-free algorithm was proposed in [32] called Adaptive Weighted Procrustes (AWP), which is based on weighting each view with its clustering capacities. Another spectral clustering-based approach was proposed in [33], namely Co-regularized multi-view Spectral Clustering (CoRegSC). The CoRegSC approach studied the multi-view clustering, assuming that the clustering is consistent across the views. In [24], a Multi-View for Graph Learning (MVGL) method was proposed. The purpose of MVGL is to enhance the quality of the graph by learning a global graph from different single-view graphs. The authors in [34] proposed a Multi-view Consensus Graph Clustering (MCGC) approach to minimize the disagreement between all views and constrain the Laplacian matrix rank. In addition, the introduced model in [35] called Weighted Multi-view Spectral Clustering (WMSC) was proposed to model the weights of all views using spectral perturbation, where the clustering results on each view are close to the consensus clustering result. In [36], the authors proposed a Non-negative Matrix Factorization with Co-orthogonal Constraints (NMF-CC). The NMF-CC model aimed to capture the intra-view diversity matrices to achieve the best clustering representation. In [37], the authors proposed a Consensus Graph Learning for multi-view clustering (CGL). The CGL method was proposed to weight the nuclear tensor norm by assigning different singular values with different weights to improve the flexibility of the nuclear tensor norm in the low-rank approximation problem. Also, the consensus similarity graph was constructed from the normalized multi-view spectral embedding matrices using an adaptive neighbor graph learning manner to study the network's structure. In [38], a Multiplex Cellular communities for Tissue Phenotyping (MCTP) was proposed to cluster the multi-dimensional cellular based on symmetric nonnegative matrix tri-factorization. In [39], a Common Subspace Fusion (CSF) model was proposed to track the objects based on a low-rank response map representation of various features and trackers. In [40], the authors proposed a Consensus and Complementary information for Multi-View data (2CMV). More precisely, the 2CMV model studied the consensus and complementary information from all views based on Coupled Matrix Factorization (CMF) and NMF. 
At the same time, the authors in [41] proposed a Diverse Manifold for Multi-Aspect Data (DiMMA) to study the diverse manifold for data clustering by using distance information of data points from different views. However, the aforementioned methods under MVC-G are sensitive to noise and their performance decays in the presence of noise and outliers. Table 1 summarizes some examples of some SVC and MVC methods and introduces the proposed method. In this paper, we propose a novel robust tensor-based multiview clustering approach to detect the community structure in multi-view networks. In particular, the proposed objective function benefits from prior work in tensor decomposition and subspace decomposition to create a unified framework that jointly extracts a clean low-rank adjacency tensor from the original corrupted multi-view network and use it to compute a common subspace across all views. The contributions of the proposed method are summarized as follows: 1) The proposed algorithm can efficiently recover the low-rank representation component from the noisy corrupted tensor by minimizing the tensor nuclear norm. In particular, the recovered low-rank tensor is a clean representation of the original network where we use it to detect the community structure of the network. 2) The proposed approach endorse the L 2,1 -norm to minimize the error in order to soothe the noise effect. 3) The proposed approach adopts Tucker decomposition to obtain a common subspace across all views where the contribution of each view to the common subspace is optimized. The resultant common subspace is then used to obtain the nodes' cluster assignment. 4) The proposed objective function is solved efficiently using an iterative alternating approach. In particular, the low-rank self representation tenor is obtained using Alternating Direction Method of Multipliers (ADMM) and the common subspace is estimated using High Order Orthogonal Iteration (HOOI). 5) The performance of the proposed approach is evaluated by conducting extensive experiments on multiple realworld multi-view networks and the results show that MV-RTSC outperforms other existing state-of-the-art algorithms. The organization of this paper is as follows. In Section II, the background and notations are summarized. The proposed method is introduced and optimized in Section III. The results and discussions are presented in Section IV. Finally, we conclude our work in Section V. II. BACKGROUND AND NOTATIONS A. NOTATIONS AND PRELIMINARIES In this section, the notations and basic operations used throughout the paper are presented. In this paper, we use {A, A, a, a ij .} for tensor, matrix, vector and scalar elements, respectively. Let A be a matrix, then the Frobenius norm of A can be calculated as where Q l and Q r are the left and right singular vectors matrices, respectively. is a diagonal matrix with the singular values of A on its diagonal, where σ i (A) is defined as i th singular value of A. The nuclear norm of A is defined as the summation of its singular values, i.e. A * = i σ i (A). For high dimensional arrays, let A ∈ R n 1 ×n 2 ×n 3 be a 3-mode tensor, then the slice of A is 2D while the fiber of A is 1D. In MATLAB notations, A(j, :, :), A(:, j, :) and A(:, :, j) are used to denote the j th horizontal, lateral and frontal slices of A, respectively. In this paper, we denote the j th -frontal slice of the tensor by A (j) . Also, A(:, i, j), A(i, :, j) and A(i, j, : ) are used to denote mode-1, mode-2, and mode-3 fibers, respectively. 
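The notation above maps directly onto array indexing. The following NumPy rendering (purely illustrative, not the paper's MATLAB code) shows a frontal slice and a mode-3 fiber of a 3-mode tensor together with the matrix Frobenius and nuclear norms:

```python
import numpy as np

A = np.arange(24, dtype=float).reshape(3, 4, 2)   # a 3 x 4 x 2 tensor

frontal = A[:, :, 0]       # frontal slice A(:, :, j) is a 2-D matrix
lateral = A[:, 0, :]       # lateral slice A(:, j, :)
fiber_mode3 = A[0, 1, :]   # mode-3 fiber A(i, j, :) is a 1-D vector

M = np.random.default_rng(0).normal(size=(5, 4))
frobenius = np.linalg.norm(M, "fro")                  # ||M||_F
nuclear = np.linalg.svd(M, compute_uv=False).sum()    # ||M||_* = sum of singular values
print(frontal.shape, lateral.shape, fiber_mode3.shape, frobenius, nuclear)
```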
The Fourier transform of A can be calculated B. GRAPH THEORY A simple graph can be defined as a triplet, G = {V, E, A} where V is the set of vertices and E is the set of edges. The matrix A ∈ R n×n is known as the adjacency matrix which indicates the similarities between each pair of vertices, where A is a undirected symmetric nonnegative matrix and a ij ∈ [0, 1]. The degree of any vertex is defined as d i = n j=1 a ij , and the degree matrix D, is defined as the diagonal matrix with {d 1 , d 2 , . . . , d n } on its diagonal [42], [43]. The normalized adjacency matrix is defined as A N = D (−0.5) AD (−0.5) , which is a positive semi-definite matrix. A multi-view network, G, with n nodes and V views can be defined as a set of static or simple graphs that reflects the interactions between nodes in the different views, i.e. C. TENSOR OPERATIONS Let A ∈ R n 1 ×n 2 ×n 3 be a 3-mode tensor, then the block circulant matrix can be computed as follows: where A (j) represents the j th frontal slice of the tensor A. The block vectorizing operation: and the block vectorizing opposite, bvfold, operation is given as: The block diagonal matrix of a tensor A is given by: and the block diagonal opposite, bdfold, operation is given by: Tensor Product: Let J 1 ∈ R n 1 ×n 2 ×n 3 , and J 2 ∈ R n 1 ×n 4 ×n 3 . The tensor product (t-product) J 1 * J 2 ∈ R n 1 ×n 4 ×n 3 tensor: Tensor Transpose: Let A be a 3-mode tensor with dimension n 1 × n 2 × n 3 then the tensor transpose has a dimension of n 2 × n 1 × n 3 . Identity Tensor: The tensor I ∈ R n 1 ×n 2 ×n 3 is called identity if and only if all slices are equal to zero except the frontal slice is an n 1 × n 2 identity matrix. Orthogonal Tensor: A tensor A is orthogonal if it satisfies the following condition: Tensor SVD (t-SVD): The t-SVD of A ∈ R n 1 ×n 2 ×n 3 is defined as: where Q l ∈ R n 2 ×n 2 ×n 3 and Q r ∈ R n 1 ×n 1 ×n 3 are the left and right singular vectors tensors, respectively. The tensor S ∈ R n 1 ×n 2 ×n 3 is an f-diagonal tensor in Fourier domain and its contains the singular values on its diagonal. T-SVD-Based Tensor Nuclear Norm (T-SVD-TNN): Let A ∈ R n 1 ×n 2 ×n 3 , then the T-SVD-TNN defined as: where| · | is the absolute value, and A f is the SVD tensor of A in the Fourier domain. Tucker Decomposition: Assume A ∈ R n 1 ×n 2 ×n 3 is a 3-mode tensor, the Tucker decomposition is used to provide the orthogonal subspace along each mode of A. The tucker decomposition of tensor A is given as [44]: where C r ∈ R r 1 ×r 2 ×r 3 is the tensor core, is the orthogonal matrix along the i-mode, and r i is the rank of the orthogonal i th matrix where r i ≤ n i . Tucker decomposition problem can be solved using HOOI. Multi-view Networks Construction: Given a multi-view data set, n ] ∈ R d v ×n , has n samples and d v features. In this paper, an adjacency tensor is constructed from the multi-view data set. This is achieved by first constructing a similarity matrix, A (v) ∈ R n×n , for each view where the similarity matrix can be constructed using different similarity measures. In this paper, the similarity measure that is adopted to construct the similarity matrices is the Gaussian kernel similarity [42] and can be defined as: where α, β ∈ {1, 2, . . . , n}, . 2 is the L 2 -norm, and σ is used to control the width of the neighborhoods. 
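A minimal sketch of the multi-view network construction described above: one Gaussian-kernel similarity matrix per view as in Eq. (11), symmetric normalisation A_N = D^(-1/2) A D^(-1/2), and stacking the views into an n × n × V adjacency tensor. The toy feature matrices and the value of σ are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_similarity(X: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Symmetric nonnegative similarity matrix for one view (n samples x d features)."""
    D = cdist(X, X, metric="euclidean")
    return np.exp(-(D ** 2) / (2.0 * sigma ** 2))

def normalise(A: np.ndarray) -> np.ndarray:
    """Normalised adjacency A_N = D^(-1/2) A D^(-1/2)."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
views = [rng.normal(size=(60, 10)), rng.normal(size=(60, 25))]   # V = 2 toy views
A_N = np.stack([normalise(gaussian_similarity(X, sigma=5.0)) for X in views], axis=2)
print(A_N.shape)   # (60, 60, 2): each frontal slice is one normalised view
```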
After constructing a set of similarity matrices, , a 3-mode tensor, A ∈ R n×n×V , is constructed by concatenating the constructed adjacency matrices where each adjacency matrix represents a frontal slice in the adjacency tensor, A(:, D. SPECTRAL CLUSTERING In graph theory, spectral clustering is an important method to reveal the clusters in static networks. An efficient solution is provided to the relaxed versions of the cut problem by spectral clustering. More precisely, it solves the following optimization problem [42]: where L N is the normalized symmetric Laplacian matrix and defined as L N = I − A N . Eq. (12) can also be written as follows: The normalized Laplacian matrix, L N , and the normalized adjacency matrix, A N , are always positive semi-definite, then VOLUME 10, 2022 optimizing Eq. (13) is equivalent to optimizing the following problem: The solution of optimization problem in Eq. (12) can be found by computing the matrix U as the matrix that contains the k eigenvectors that correspond to the smallest k eigenvalues of L N . Whereas, the solution of optimization problems in Eq. (13) and Eq. (14) can be found by choosing the matrix U as the matrix that contains the k eigenvectors that correspond to the largest k eigenvalues of A N . The nodes' community assignment is then determined by applying kmeans to U [45]. In recent work [46], we have shown that minimizing the normalized cut across a multi-view network can be equivalently written as: This optimization problem can be reformulated in terms of Tucker decomposition as: where A N ∈ R n×n×V corresponds to the tensor representation of the adjacency matrices across views. The optimization problem in Eq. (16) can then be solved efficiently using HOOI [47]. III. MULTI-VIEW ROBUST TENSOR-BASED SUBSPACE CLUSTERING (MV-RTSC) In this paper, we propose a novel approach to detect the community structure in multi-view networks. In particular, a 3-mode tensor is assembled from the normalized adjacency matrices that represent the different views of the network. The constructed tensor is decomposed into a self-representation and error components where the extracted self-representation tensor is used to detect the community structure of the multiview network. The algorithm is explained in details in the following subsections. A. PROBLEM FORMULATION Given a corrupted multi-view adjacency tensor, A ∈ R n×n×V that represents a corrupted multi-view network, G, the objective of the proposed algorithm is to detect the community structure of the input multi-view network. In order to account for noise or missing edge value, a low-rank approximation of the corrupted adjacency tensor is extracted and used to obtain a common subspace, U ∈ R n×k , across all views. In particular, this common subspace will be obtained by optimizing the contribution of each view to the common subspace. The resultant common subspace can then be used to determine the community assignment of each node in the network. In order to attain this objective, we propose minimizing the following objective function: (1) ; E (2) ; . . . ; The proposed objective function consists of multiple terms, where each term is included to meet a certain goal as follows: 1) The first two terms represent the tensor LRR where the first term, Z , is included to recover the selfrepresentation tensor, Z ∈ R n×n×V , of the original adjacency tensor, A, by using the T-SVD-TNN. The tensor Z can be constructed by using the function (·) that converts the self-representation matrices into a 3-mode tensor. 
The function, (·), is also used to rotate the tensor Z where its dimension will be n × V × n [16]. The main advantage of the tensor rotation is to reduce the computational complexity of updating Z. The matrix n ] represents a clean version of the corrupted adjacency matrix, A (v) . In particular, each z (v) i presents a new representation of the node i which refers to the sample i. Ideally, the low-rank representation of each node corresponds to a combination of all the other nodes that belong to the same subspace [15] which leads to a block diagonal structure in the clean adjacency matrix that is constructed from Z (v) . In fact, it was proved in [14] that as the adjacency matrix gets closer to a block diagonal structure, better clustering results can be achieved. The second term, E 2,1 , is considered to detect the outliers of the original tensor A, by using the L 2,1 norm of the matrix E ∈ R nV ×n which is obtained by concatenating the error matrices that correspond to each view vertically. The L 2,1 norm of the matrix E can be defined as E 2,1 = m i=1 n j=1 e 2 i,j , where m = nV [48]. 2) The third term represents the subspace decomposition. The main goal of this term to estimate the common subspace across all views, U, where Z is the normalized adjacency matrix of the self-representation matrix of each view. w is the weighting vector across views where w v , v = {1, . . . , V } weighs the contribution of each view to the common subspace. B. OPTIMIZATION OF THE PROPOSED MODEL The solution of the proposed model can be obtained by following an iterative alternating approach. In particular, fixing the common subspace between all views U, the variables Z and E can be obtained by ADMM, where the convergence of ADMM is guaranteed [16], [49], [50]. After recovering the self-representation tensor, Z, the common subspace, U, can be updated efficiently using HOOI. This problem can be solved by separating the tensor decomposition problem from the subspace decomposition problem and alternating between the two optimization problems. Once Z is obtained U and w will be optimized using HOOI. The solution of the optimization problem introduced in Eq. (17) starts with introducing an auxiliary variable, B, to provide variables' separability where the optimization problem will be reformulated as follows: (1) ; E (2) ; . . . , E (V ) ]. Eq. (18) can be solved by dividing it into multiple subproblems and optimizing each variable while fixing the others as follows: 1) B-Subproblem: In order to update B l+1 , we fix all the other variables and consider the terms with B only as follows: The solution of Eq. (19) can be found in Appendix A. 2) E-Subproblem: To update E l+1 , we keep the terms that only includes E in Eq. (18) and fix Z (v) . The Esubproblem can be formulated as: (1) ; E (2) ; . . . , E ( l+1 -subproblem can be defined as: The solution of Eq. (21) is given in Appendix C where the solution of each Z (v) is computed separately. 4) U-Subproblem : Considering the terms with U in Eq. The solution of Eq. (22) can be found in Appendix D. 5) Updating Lagrange multipliers and penality parameters: The Lagrangian multipliers Y (v) 1 l+1 , Y 2l+1 , and the penalty parameters γ 1l+1 , γ 2l+1 , are updated as follows: where τ is set to be 2. The stopping criteria of the proposed method are defined as: where is defined as the tolerance. Finally, k-means is applied to the normalized common subspace, U norm , to obtain the final clustering labels. 
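The E-subproblem above reduces to the standard column-wise shrinkage used in LRR-style models, i.e. the proximal operator of the L2,1 norm. The sketch below shows that operator in isolation; the threshold tau stands in for the µ/γ1 ratio appearing in the update and is an illustrative placeholder, not the authors' implementation.

```python
import numpy as np

def prox_l21(C: np.ndarray, tau: float) -> np.ndarray:
    """Solve min_E tau*||E||_{2,1} + 0.5*||E - C||_F^2 column by column."""
    E = np.zeros_like(C)
    norms = np.linalg.norm(C, axis=0)
    keep = norms > tau                       # columns with norm <= tau are set to zero
    E[:, keep] = C[:, keep] * (1.0 - tau / norms[keep])
    return E

C = np.random.default_rng(0).normal(size=(8, 5))
E = prox_l21(C, tau=0.5)
print(np.linalg.norm(E, axis=0))   # small columns are shrunk toward zero
```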
The pseudo code of the proposed algorithm is summarized in Algorithm 1. C. COMPUTATIONAL COMPLEXITY OF THE PROPOSED METHOD The computational cost of the proposed method is mainly due to the cost of updating E, B, and U. For each iteration, the computational cost of updating E and B is equal to O(Vn 2 ) and O(Vn 2 log(n) + V 2 n 2 ), respectively. Moreover, the computational complexity of HOOI is O(Vn 3 ). Consequently, the total computational complexity of the proposed method is O(L(Vn 2 log(n)+V 2 n 2 +Vn 3 )) which is approximately equal to O(LVn 3 ), where V , L and n represent the number of views, iterations, and nodes, respectively. IV. RESULTS AND DISCUSSIONS Extensive experiments have been conducted to evaluate the performance of the proposed method, MV-RTSC. In particular, MV-RTSC is applied to several well-known realworld multi-view networks, and its performance is compared to other existing state-of-the-art SVC and MVC methods. The SVC methods include LRR 1 [14] and SSC 2 [15]. The MVC methods include LT-MSC 3 [17], t-SVD-MSC 4 [35], NMF-CC [36], CGL 9 [37], MCTP [38], CSF [39], 2CMV 10 [51], and DiMMA 11 [41]. The performance evaluation of the different algorithms is conducted in terms of the most famous quality metrics, including ACCuracy (ACC), Normalized Mutual Information (NMI), Adjusted Rand index (AR), F-score, Precision, and Recall [52], [53]. The values of all evaluation metrics are normalized between [0, 1]. Higher values of the quality metrics refer to better clustering results. All experiments are conducted using MATLAB 2020a on a desktop with the specifications Intel(R) Core(TM) i7 with RAM of 16GB. The MATLAB code of the proposed MV-RTSC approach is available online: https://github.com/wardat99/MV-RTSC A. EXPERIMENTAL SETTING In order to test the performance of the proposed algorithm, several famous real-world multi-view data sets are adopted. The multi-view networks are constructed following Section II-C where all adjacency matrices are nonnegative, symmetric and undirected. The data sets we use in our experiments are: • MSRC-V1 is a data set consisting of 210 images of different scenes, including cars, trees, bicycles, faces, cows, buildings and airplanes with 7 communities [54]. Each one of the images is represented by 5 features including Color Moment (CM), Histogram of Oriented Gradient (HOG), GIST, Local Binary Pattern (LBP) and Centrist features (CENTRIST). • Newsgroups (NGs) 12 is a text data set that contains 500 newsgroups documents that are collected from 20 newsgroups data sets, with 5 communities. • BBCSPORT 13 is a text data set which consists of 544 news articles from the BBC Sport website with 5 communities including athletics, cricket, football, rugby, and tennis [55]. • BBC4view 13 is a text data set that consists of 685 documents from the BBC website with 5 communities including business, entertainment, politics, sport, and technology [55]. • Flowers 14 is a data set containing 1360 flower images belonging to 17 communities with three different visual features, including color, shape and texture. • COIL-20 15 is a data set from the Columbia object image library that contains 1440 images with 3 views and 20 communities with three features including intensity, LBP, and Gabor. • UCI 16 is a handwritten digits images data set which contains 2000 images from the UCI machine learning repository with 10 communities (included digits are 0 − 9) [56]. 
It consists of three features including 216 profile correlations, 76 Fourier coefficients and 6 Karhunen-Loéve coefficients of the character shapes. • CiteSeer 17 is a text data set that contains of 3312 documents described by 2 views (citation links and word presence), and 6 communities. • Scene-15 is a scene data set which consists of 4485 indoor and outdoor scene images with 15 communities and 3 views including Pyramid Histograms Of visual Words (PHOW), LBP, and CENTRIST. [57]. • Handwritten digit (Hdigit) 18 is a handwritten digits images data set which contains 10000 images with 2 views including MNIST Handwritten Digits and USPS Handwritten Digits and 10 communities (includes the digits from 0 to 9). TABLE 3. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on MSRC-V1 data set. We set µ = 0.2 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 4. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on NGs data set. We set µ = 0.5 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 5. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on BBCSport data set. We set µ = 4 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. VOLUME 10, 2022 TABLE 6. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on BBC4view data set. We set µ = 8 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 7. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on Flowers data set. We set µ = 4 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 8. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on COIL-20 data set. We set µ = 9 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 9. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on UCI data set. We set µ = 5 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 10. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on CiteSeer data set. We set µ = 7 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. TABLE 11. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on Scene-15 data set. We set µ = 7 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined. B. CLUSTERING RESULTS The performance of the proposed approach in clustering multi-view networks is compared to the state-of-the-art algorithms using multiple clustering evaluation metrics. 
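Two of the quality metrics listed earlier, ACC and NMI, can be computed as in the following sketch: NMI is taken from scikit-learn, and clustering accuracy is obtained by matching predicted clusters to ground-truth labels with the Hungarian algorithm. The snippet is illustrative and not tied to the MV-RTSC code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Best-match accuracy between predicted clusters and ground-truth labels."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    counts = np.zeros((classes.size, classes.size), dtype=int)
    for i, c_pred in enumerate(classes):
        for j, c_true in enumerate(classes):
            counts[i, j] = np.sum((y_pred == c_pred) & (y_true == c_true))
    row, col = linear_sum_assignment(-counts)          # maximise matched counts
    return counts[row, col].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print(clustering_accuracy(y_true, y_pred))              # 1.0 after optimal relabelling
print(normalized_mutual_info_score(y_true, y_pred))     # 1.0
```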
The performance comparison is conducted in terms of ACC, NMI, AR, F-score, precision, and recall.

TABLE 12. Clustering performance comparison between the different methods using multiple quality metrics averaged over 10 runs (Mean±Standard deviation) on Hdigits data set. We set µ = 6 and λ = 0.1 for MV-RTSC. The best performance is boldface and the second best performance is underlined.

For an unbiased comparison, each experiment is repeated ten times, and the average values and standard deviation of the different quality metrics are reported in Tables 3−12. The best and second-best performance results are denoted by bold font and underlined, respectively. The methods under comparison can be divided into SVC and MVC methods. As it can be seen from Tables 3−12, the proposed algorithm achieves the highest scores in terms of the different evaluation metrics for all data sets. In addition, it outperforms many recent methods, including t-SVD-MSC, ETLMSC, NMF-CC, CGL, 2CMV, and DiMMA, especially in BBC4views, Flowers, and Hdigit data sets. For example, the proposed method improves around 16.5% and 11.3% in terms of NMI over t-SVD-MSC in BBC4views and Flowers data sets, respectively. On the other hand, in line with previous studies [16], [20], we observe that the performance of many MVC methods is better than the SVC methods in most data sets and the proposed MV-RTSC achieves the best result over SVC methods in all data sets. For example, in the Scene-15 data set, the proposed method improves the performance over LRR and SSC by 43.7% and 43.8% in terms of ACC, respectively. Furthermore, we notice that the tensor-based optimization methods accomplish better performance than the matrix-based optimization methods, such as t-SVD-MSC, ETLMSC, DiMSC, and LT-MSC. Moreover, the proposed approach, which relies on decomposing the multi-view adjacency tensor, achieves the best results compared with the methods that decompose the original data set tensor into clean and error components such as t-SVD-MSC, DiMSC, and LT-MSC methods. For instance, in the UCI data set, the proposed method achieves better performance over t-SVD-MSC by 4.5% and 6.8% in terms of ACC and NMI, respectively. Besides, in the COIL-20 data set, the proposed method achieves better performance over t-SVD-MSC by 9.7% and 9.5% in terms of ACC and NMI, respectively. This illustrates that decomposing the adjacency tensor leads to better results compared to the algorithms that decompose the data tensor. Finally, even though some of the existing algorithms achieve good results in detecting the community structure in some networks, none of them maintains consistently good performance across all the networks, unlike the proposed MV-RTSC.

The proposed MV-RTSC takes ≈ 7 − 11 iterations to converge. As can be seen from Table 13, the running time of the proposed algorithm is higher than that of some existing methods.

D. PARAMETERS SELECTION

Two regularization parameters are included in the objective function of the proposed approach, MV-RTSC, namely µ and λ. µ is used to control the L 2,1 -norm of the error component, and λ is used to penalize the subspace decomposition term. In order to study the effect of the regularization parameters on the performance of the proposed approach, the two parameters are tuned in the range [0.01, 10]. In particular, the consequence of varying one of the parameters is investigated while fixing the other one. Fig. 1 and Fig.
Fig. 1 and Fig. 2 show the influence of tuning µ and λ in terms of ACC and NMI for some data sets, respectively. As can be concluded from the figures, the range of λ and µ that leads to the best performance of MV-RTSC is λ ∈ [0.1, 0.5] and µ ∈ [3, 10]. Another parameter that is considered in this paper prior to applying the proposed approach is the parameter σ in the Gaussian kernel similarity defined in Eq. (11). The parameter σ is used to control the width of the neighborhoods during the construction of the multi-view network. In particular, if the value of σ is chosen too small, the constructed graph will be very sparse, i.e., the similarities between nodes will be close to zero; whereas, if the value of σ is chosen too large, the constructed graph will be fully connected, i.e., the similarities between nodes will be close to one. Fig. 3 shows the effect of tuning the parameter σ on the results for some data sets. In order to construct a meaningful graph, we tuned σ ∈ [1, 50], and the recommended range is σ ∈ [1, 30].
V. CONCLUSION
In this paper, a novel robust tensor-based approach is proposed to detect the community structure in multi-view networks. The proposed MV-RTSC presents a unified framework that combines tensor and subspace decomposition terms to reveal the community structure in multi-view networks. In particular, the proposed approach recovers a clean adjacency tensor from the noise-corrupted adjacency tensor, which is then used to compute a common subspace across all views. More specifically, this common subspace is computed by optimizing the contribution of each view through Tucker decomposition. Several real-world multi-view networks are used to test the performance of the proposed method and compare it to other existing methods. Finally, the experimental results show that the proposed method achieves better clustering results in terms of multiple quality metrics compared to many state-of-the-art SVC and MVC algorithms.
APPENDIX A SUBPROBLEM B
The subproblem B in Eq. (19) can be rewritten as Eq. (27), which in turn can be expressed in the form of Eq. (28). Let N_l = Z_l + Y_{2l}/γ_2; by using the tensor tubal-shrinkage operator [16], the solution of Eq. (28) is given by B_{l+1} = Q_l ∗ S̄ ∗ Q_r, where the t-SVD of N_l is N_l = Q_l ∗ S ∗ Q_r, S̄ = S ∗ J, and J ∈ R^{n×n×V} is an f-diagonal tensor whose diagonal elements in the Fourier domain are J(i, i, j) = (1 − τ/S_f(i, i, j))_+, with τ = V/γ_2 and (t)_+ denoting the positive part of t.
APPENDIX B SUBPROBLEM E
The Lagrangian unconstrained problem of Eq. (20) can be written as Eq. (30), which can be simplified as Eq. (31). Letting C^(v) denote the view-wise term of Eq. (31), which includes the scaled multiplier Y^(v)_{1l}/γ_1, a matrix C can be constructed by arranging all views C^(v) vertically, so that Eq. (31) can be rewritten in terms of C. The update of E can then be calculated column-wise following [14], where ‖C_{:,j}‖_2 is the L_2-norm of the j-th column of the matrix C.
APPENDIX C SUBPROBLEM Z
The subproblem in Eq. (21) can be written as Eq. (34), which can be simplified as Eq. (35). A closed-form solution of Eq. (35) can be computed by taking the gradient of Eq. (35) with respect to Z^(v) and setting it to zero [58]. The solution Z^(v)_{l+1} is then given by the resulting closed-form expression.
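As a concrete illustration of the role of σ discussed above, the following minimal sketch (not the authors' code) builds a single-view Gaussian-kernel affinity matrix and shows how the average similarity moves from near zero (very sparse graph) towards one (almost fully connected graph) as σ grows. The exact form of Eq. (11) is assumed here to be the standard kernel exp(−‖x_i − x_j‖² / (2σ²)); the constant in the denominator may differ in the paper.

import numpy as np

def gaussian_affinity(X, sigma):
    # X: (n_samples, n_features) data of one view; returns an (n, n) affinity matrix.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops in the view's adjacency matrix
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
for sigma in (0.5, 5.0, 50.0):
    W = gaussian_affinity(X, sigma)
    n = W.shape[0]
    mean_sim = W.sum() / (n * (n - 1))  # mean over off-diagonal entries
    print(f"sigma={sigma:5.1f}  mean similarity={mean_sim:.4f}")

Similarly, the E-update in Appendix B relies on the closed-form column-wise shrinkage for L2,1-regularised least squares popularised in [14]. The sketch below is a hedged reconstruction of that standard operator; the threshold µ/γ_1 is an assumption based on the roles of µ and γ_1 described in the text, and the exact constant used in the paper may differ.

def l21_shrink(C, tau):
    # Solves  min_E  tau * ||E||_{2,1} + 0.5 * ||E - C||_F^2  column by column.
    E = np.zeros_like(C)
    norms = np.linalg.norm(C, axis=0)  # L2-norm of each column C[:, j]
    keep = norms > tau                 # columns whose norm exceeds the threshold survive
    E[:, keep] = C[:, keep] * (1.0 - tau / norms[keep])
    return E

# Example (hypothetical variable names): E_update = l21_shrink(C, mu / gamma_1),
# with C built by stacking the view-wise matrices C^(v) vertically.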
A Graceful Basis in the Solution Space of an ODE with Constant Coefficients
We revisit the classical problem of construction of a fundamental system of solutions to a linear ODE whose elements remain analytic and linearly independent for all values of the roots of the characteristic polynomial. It is well-known that the canonical basis in the space of analytic solutions to the ordinary differential equation with constant coefficients (1) is formed by the exponential functions (2), provided that the roots α_1, . . . , α_m of the characteristic polynomial are pairwise distinct. The search for fundamental systems of solutions to (1) that do not degenerate for confluent values of its parameters α ∈ C^m (i.e., such that at least two of them are equal) received a lot of attention, both classically and recently (see [1, § 26] and the references therein). The canonical method of multiplying an exponent in (2) by an arbitrary polynomial whose degree is one smaller than the multiplicity of a root yields a basis whose elements depend on α in a discontinuous fashion. Besides, it does not work in the case of symbolic α_j whose values are not a priori known. In [2], a universal fundamental system of solutions to a generic ordinary linear differential equation with constant coefficients is given in terms of formal power series. In [6, Section 10.1], the space of solutions to a second-order differential equation is endowed with a basis whose elements remain analytic and linearly independent for all values of the roots of the characteristic polynomial. This behavior of a basis is referred to in [6, p. 134] as "graceful". Following [6], we adopt the next definition.
Definition 1. An m-tuple f_j(x; α), j = 1, . . . , m, of analytic solutions to the equation (1) is called a graceful basis if the following hold: 1) the f_j(x; α) are entire functions of x ∈ C and α ∈ C^m; 2) for any fixed α ∈ C^m the univariate functions f_j(x; α) are linearly independent.
Clearly the canonical basis (2) is not graceful, nor is its standard completion by polynomial factors in the case of confluent values of the characteristic roots α_1, . . . , α_m. The next lemma remedies this flaw: the family of functions (3) is a graceful basis in the space of analytic solutions to (1). For m > 1 the basis (3) is obtained by applying the inverse of the Vandermonde matrix V(α) to the canonical basis (2); it is well-known that the determinant of this matrix equals ∏_{1≤j<k≤m}(α_j − α_k). Since the functions (3) are linear combinations of the elements of the canonical basis (2), they satisfy the equation (1). By [3, p. 38] (see also [4, 7] and the references therein), the coefficients of these linear combinations form the inverse to the matrix V(α) defined by nonconfluent characteristic roots, whose determinant vanishes if and only if the characteristic polynomial of (1) has multiple roots. By the conservation principle for analytic differential equations [5], it follows that for nonconfluent α the Wronskian determinant of the family of functions (3) is given, up to sign, by e^{(α_1+⋯+α_m)x}. By the uniqueness theorem for analytic functions this equality holds for all α ∈ C^m. Thus the above Wronskian does not vanish anywhere, and hence (3) is indeed a graceful basis in the space of solutions to (1). Skipping the explicit inverse to the nondegenerate Vandermonde matrix, we arrive at a particularly simple form of (3); the resulting entire functions form a graceful basis in the space of solutions to (1) for m = 3.
This research was performed in the framework of the state task in the field of scientific activity of the Ministry of Science and Higher Education of the Russian Federation, project no. FSSW-2023-0004.
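To make the notion of gracefulness concrete, here is a worked illustration for m = 2, written independently of the (omitted) formula (3) and therefore not claimed to coincide with it: a divided-difference modification of the canonical exponentials stays entire in (x, α) and keeps a nowhere-vanishing Wronskian, including at the confluent point α_2 = α_1.

\[
  f_1(x;\alpha) = e^{\alpha_1 x},
  \qquad
  f_2(x;\alpha) = \frac{e^{\alpha_2 x} - e^{\alpha_1 x}}{\alpha_2 - \alpha_1}
                = \sum_{n \ge 1} \frac{x^n}{n!}\,
                  \frac{\alpha_2^{\,n} - \alpha_1^{\,n}}{\alpha_2 - \alpha_1},
\]
\[
  W(f_1, f_2)(x) = f_1 f_2' - f_1' f_2 = e^{(\alpha_1 + \alpha_2) x} \neq 0,
  \qquad
  \lim_{\alpha_2 \to \alpha_1} f_2(x;\alpha) = x\, e^{\alpha_1 x}.
\]

Each coefficient (α_2^n − α_1^n)/(α_2 − α_1) is a polynomial in α_1 and α_2, so f_2 is entire in all variables; for α_1 ≠ α_2 both functions are linear combinations of e^{α_1 x} and e^{α_2 x}, and at the confluent point the limit x e^{α_1 x} is the familiar second solution, so the pair satisfies the requirements of Definition 1 for every α ∈ C^2.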
Immune-Checkpoint Inhibitors in B-Cell Lymphoma
Simple Summary
Immune-based treatment strategies, which include immune checkpoint inhibition, have recently become a new frontier for the treatment of B-cell-derived lymphoma. Whereas checkpoint inhibition has given oncologists and patients hope in specific lymphoma subtypes like Hodgkin lymphoma, other entities do not benefit from such promising agents. Understanding the factors that determine the efficacy and safety of checkpoint inhibition in different lymphoma subtypes can lead to improved therapeutic strategies, including combinations with various chemotherapies, biologics and/or different immunologic agents with manageable safety profiles.
Abstract
For years, immunotherapy has been considered a viable and attractive treatment option for patients with cancer. Among the immunotherapy arsenal, the targeting of intratumoral immune cells by immune-checkpoint inhibitory agents has recently revolutionised the treatment of several subtypes of tumours. These approaches, aimed at restoring an effective antitumour immunity, rapidly reached the market thanks to the simultaneous identification of inhibitory signals that dampen an effective antitumor response in a large variety of neoplastic cells and the clinical development of monoclonal antibodies targeting checkpoint receptors. Leading therapies in solid tumours are mainly focused on the cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed death 1 (PD-1) pathways. These approaches have found a promising testing ground in both Hodgkin lymphoma and non-Hodgkin lymphoma, mainly because, in these diseases, the malignant cells interact with the immune system and commonly provide signals that regulate immune function. Although several trials have already demonstrated evidence of therapeutic activity with some checkpoint inhibitors in lymphoma, many of the immunologic lessons learned from solid tumours may not directly translate to lymphoid malignancies. In this sense, the mechanisms of effective antitumor responses differ between the different lymphoma subtypes, while the reasons for this substantial difference remain partially unknown. This review will discuss the current advances of immune-checkpoint blockade therapies in B-cell lymphoma and build a projection of how the field may evolve in the near future. In particular, we will analyse the current strategies being evaluated both preclinically and clinically, with the aim of fostering the use of immune-checkpoint inhibitors in lymphoma, including combination approaches with chemotherapeutics, biological agents and/or different immunologic therapies.
Biology of B-Cell Lymphoma
The term B-cell lymphoma encompasses different neoplasms characterised by an abnormal proliferation of lymphoid cells at various stages of differentiation. B-cell lymphoma develops more frequently in older adults and immunocompromised individuals.
Follicular Lymphoma
The crosstalk between malignant follicular lymphoma (FL) cells and the surrounding cells of their tumour microenvironment (TME) is driven by some recurrent genetic events [14]. FL is strongly regulated by direct interaction with a germinal centre (GC)-like microenvironment, including myeloid cells, follicular helper T-cells (TFH), and stromal cells, that may orchestrate efficient immune escape mechanisms [15]. The TME of FL also displays deregulation of the extracellular matrix proteins involved in collagen deposition and organization [16].
Cancer-associated fibroblasts (CAFs) are another important tumour-promoting actor in FL, providing a niche with high levels of factors involved in B-cell activation and the activation/recruitment of some TME components such as TAMs [17]. The crosstalk between TFH cells and FL cells is orchestrated by the interaction between antigen-loaded MHC class II molecules and antigen-specific T-cell receptors.
Burkitt Lymphoma
Burkitt lymphoma (BL) includes a heterogeneous group of highly aggressive malignancies of intermediate-sized B-cells that may be found infiltrating either nodal or extranodal tissues in a diffuse pattern [18]. BL is invariably associated with chromosomal translocations that dysregulate the expression of c-MYC and, consequently, several downstream genes involved in the control of cellular processes such as cell cycle progression and apoptosis [19]. The malignant cells usually express the B-cell-specific surface markers CD19 and CD20, as well as low-to-intermediate levels of the common acute lymphoblastic leukaemia (ALL) antigen (CD10/CALLA) [20]. The complex interplay between BL cells and the TME also regulates lymphomagenesis and provides new insights for targeted immunotherapies. Like DLBCL, BL tumours harbour a noninflamed environment with low infiltration of immune cells and are usually resistant to immune checkpoint blockade. One of the hallmarks of the TME in BL tumours is the high content of TAMs, which contribute to tumour progression through the secretion of cytokines and chemokines, and the expression of immune checkpoint proteins such as programmed death ligand 1 (PD-L1) [21] (see below). The crosstalk between tumour cells, TAMs, PD-1 signalling, viral antigens, and T-cells may result in the high prevalence of M2 macrophages in the TME and contribute to the failed immunity of BL patients [22].
Marginal Zone Lymphoma
The course of marginal zone lymphoma (MZL) is strongly influenced by the TME, which may therefore represent a promising target for early diagnosis and therapy choice. SMZL cells are supported by immune cells such as mast cells and macrophages, which may be recruited by tumour cells through the secretion of cytokines and chemokines [29]. The TME components of SMZL can regulate stromal cell proliferation, angiogenesis, extracellular matrix remodelling, and induction of adhesion molecule expression [29]. The chronic inflammation of MALT lymphomas not only triggers B-cell growth but also recruits T-cells, macrophages and neutrophils to the site of inflammation, which contribute to genetic aberrations, DNA damage and genetic instability of the B-cells during somatic hypermutation and class-switching recombination [30].
Mantle Cell Lymphoma
Mantle cell lymphoma (MCL) originates from B-cells, a proportion of them being antigen-experienced B-cells, in the mantle zone of lymph nodes. MCL is usually diagnosed as a late-stage disease and may be observed in both the gastrointestinal tract and bone marrow [31]. The diagnosis of MCL is mainly performed by a microscopic evaluation of a biopsy, although the detection of the chromosomal translocation t(11;14), with the consequent cyclin D1 expression, is considered the molecular hallmark [32]. The crosstalk between MCL tumour cells and their microenvironment has a central role in disease expansion [33]. MCL cells have shown constitutive expression of PD-1 and its ligand PD-L1, which makes them an interesting candidate for immunotherapy targeting this checkpoint [34] (see below).
Aggressive MCL cases are characterised by a low number of T-cells [35] and a high frequency of regulatory T-cells (Treg) [36]. Moreover, follicular dendritic cells (FDCs) have been shown to support MCL cell survival through a cell-cell interaction mechanism [37]. Autocrine and paracrine secretion of soluble factors could also have an important role within the MCL TME. Interestingly, the blood of MCL patients contains high levels of several cytokines and chemokines, such as IL-8, CCL3 and CCL4, which are correlated with poor survival [38].
Classical Hodgkin Lymphoma
Classical Hodgkin lymphoma (cHL) is a neoplasm derived from B-cells and is mainly constituted by a small number of neoplastic mononuclear cells, i.e., Hodgkin cells, and multinucleated Reed-Sternberg (HRS) cells. cHL accounts for 15-25% of all lymphomas and represents the most common lymphoma subtype in children and young adults in the Western world. The cell of origin (COO) is nowadays unequivocally considered to be a (post)germinal centre B-cell [39]. Several genetic alterations, targeting a few pathways, have been identified, but none of them can be considered "dominant". The affected pathways include NF-κB and JAK-STAT, whose aberrant activation fuels HRS cells with proliferative and antiapoptotic stimuli [40]. Moreover, the LMP1 protein, encoded by EBV, which often latently infects HRS cells, likely contributes to NF-κB signalling since LMP1 mimics constitutively active CD40 [41]. Genetic lesions of NF-κB pathway genes largely contribute to aberrant activation of this cascade in a cell-intrinsic manner and/or by amplifying signals from the microenvironment [42]. In addition, HRS cells are outnumbered by reactive cells in the TME, including T- and B-lymphocytes, eosinophils, macrophages, mast cells, plasma cells and stromal cells [43,44].
Different therapeutic strategies to block PD-1/PD-L1 interaction are under clinical development in order to prevent PD-1-mediated attenuation of TCR signalling, allowing for activity restoration of exhausted CD8+ T-cells. CTLA-4 inhibition by monoclonal antibodies may induce tumour rejection through direct blockade of CTLA-4 competition for CD-80 (B7-1) and CD-86 (B7-2) ligands, which enhances CD28 costimulation and, thus, activation. Alternative immune checkpoint molecules expressed on tumour cells or immune cells in the TME can be simultaneously modulated to restore an effective antilymphoma immune response. Overexpression of PD-1 and its ligands, PD-L1 (CD274) and/or PD-L2 (PDCD1LG2), by malignant neoplastic cells allows the ligation of PD-1 on T-cells and the consequent induction of T-cell "exhaustion", a phenomenon closely linked to peripheral tolerance and homeostasis. That way, the malignant cells escape from the antitumor immune response in a process known as immune evasion [52]. The PD-L1 protein is encoded by the CD274 gene on chromosome 9p24.1 and harbours two extracellular domains, a transmembrane domain, and a short cytoplasmic tail that lacks signalling motifs [57]. The expression of PD-L1 is strongly affected by structural alterations such as amplifications, gains, and translocations of chromosome 9p24.1 [58]. Remarkably, 9p24.1 amplification also induces Janus kinase 2 (JAK2) expression, leading to activation of JAK/signal transducers and activators of transcription (STAT) signalling, which in turn, upregulates PD-L1 [41]. Upon engagement with PD-L1, PD-1 becomes phosphorylated by Src family kinases and transmits a negative costimulatory signal through tyrosine phosphatase proteins to attenuate the strength of T-cell receptor (TCR) signals and downstream signalling pathways such as PTEN-PI3K-AKT and RAS-MEK-ERK. The functional outcome of this regulation is the inhibition of cytotoxic T-lymphocyte function [59][60][61][62][63]. In DLBCL, PD-L1 has been shown to be expressed by the nonmalignant compartment in only 26% to 75% of the cases [65,[72][73][74][75]. Godfrey et al. showed that 27% of DLBCL patients (especially from the nongerminal centre subgroup) presented a PD-L1 amplification associated with inferior PFS following front-line chemoimmunotherapy [58,71,72,74,[76][77][78]; this was more often detected in de-novo than transformed cases [65,76]. Similar to cHL, EBV infection has been correlated with a much higher PD-L1 expression in DLBCL tumours [74]. The prognostic significance of PD-L1 expression in DLBCL patients is controversial, but most of the studies have reported a poorer outcome in cases with PD-L1+ macrophages [74]. Additionally, overexpression of PD-L1 is associated with the immune escape gene signature involving Bruton's tyrosine kinase (BTK) and JAK/STAT signalling [79]. Regarding PD-1, receptor expression was detected in 39.5-68.6% of DLBCL cases [85], and data support the notion that a high number of PD-1+ tumour-infiltrating lymphocytes (TILs) are associated with favourable clinical features and prognosis [72,86]. In contrast to DLBCL, FL tumour cells are largely negative for PD-L1 and PD-L2, and in this disease, the TILS are characterised by high PD-1 expression and suppressed cytokine signalling [87]. Importantly, the presence of PD-1+ TILs is a favourable prognostic factor, whereas a low number of TILs is associated with increased risk of histologic transformation [88,89]. 
Finally, in MCL, available data on the expression of PD-L1 are often conflicting. Several studies have shown that PD-L1 expression is low or absent in MCL [65,90], whereas others have shown a variable but constitutive expression of PD-L1 on tumour cells in both cell lines and primary patient samples [34]. PD-1/PD-L1 Inhibition in B-Cell Lymphoma The blockade of the PD-1/PD-L1 pathway ( Figure 2) has transformed immunotherapy with a promising increase in OS rates, leading to U.S. Food and Drug Administration (FDA) approval of these immune checkpoint blockade drugs for the treatment of a broad range of tumour types over the past decade. Two anti-PD-1 antibodies (nivolumab (BMS-936558/ONO-4538, Opdivo ® ) and pembrolizumab (Keytruda ® )) and three anti-PD-L1 antibodies (durvalumab, atezolizumab, and avelumab) have been approved for the treatment of various types of cancer, including lymphomas [68,[91][92][93][94][95][96]. Nivolumab and pembrolizumab, two fully humanised IgG4-kappa-blocking monoclonal antibodies, target the PD-1 receptor on human T-cells [97][98][99]. Nivolumab binds specifically to PD-1 and does not affect the related members of the CD28 family, such as CD28, CTLA-4, inducible co-stimulator, and B-or T-lymphocyte attenuator. The blockade of the PD-1 signalling pathway by nivolumab induces both the proliferation of lymphocytes and the release of IFN-γ. Pembrolizumab binds with high affinity to human PD-1, blocking receptor ligation by both PD-L1 and PD-L2 and leading to enhanced T-lymphocyte immune responses in preclinical models of cancer, with the modulation of key cytokines like interleukin (IL)-2, tumour necrosis factor (TNF)-α, and IFN-γ [100,101]. Most DLBCL patients were initially thought to be not amenable to PD-1 blockade since PDL-1/2 alterations are nonfrequent in this disease, and, accordingly, PD-1 blockade therapy has been disappointing to date in R/R DLBCL and FL. While several ongoing clinical trials are evaluating the use of pembrolizumab in different DLBCL subtypes, this antibody failed to improve PFS in ASCT-relapsed patients [102]. Similarly, a first phase-1b dose-escalation cohort expansion study evaluating nivolumab in R/R DLBCL patients (NCT01592370) and a subsequent larger phase-2 study (NCT02038933) in ASCTrelapsed and ASCT-ineligible DLBCL patients reported overall response rates (ORRs) <40% [78,97,100,103] (Table 1). In contrast, in CLL patients with Richter's transformation (RT), the recent phase-2 trial, MC1485 (NCT02332980), demonstrated an ORR of 44%, including 1 complete response (CR), 2 partial responses (PR) and median PFS and OS of 5.4 months and 10.7 months, with manageable adverse events (AEs; Table 2). As expected, those patients displayed higher levels of PD-L1 expression related to the presence of chromosome 9p24.1 amplification or EBV infection [104]. Considering the recurrent alteration of the PD-L1 gene in PCNSL and PTL and the poor prognosis of these rare subtypes of DLBCL [80], nivolumab was evaluated in patients with R/R PCNSL or R/R PTL, in whom it demonstrated impressive activity (NCT02857426), with clinical and radiographic response and PFS extended to 13+ to 17+ months for some patients [105]. The phase-2 CheckMate 436 clinical trial further demonstrated that nivolumab combined with brentuximab vedotin represents a promising therapy in PBML patients post-ASCT or after ≥2 prior chemotherapy regimens [106]. Similarly, pembrolizumab therapy has also yielded excellent results in PMBL patients. 
In the phase-1b multicohort KEYNOTE-013 study, the ORR was 41%, while the median duration of response (DOR) and OS were not reached in this subset of patients [107]. The subsequent international phase-2 KEYNOTE-170 study (NCT02576990), with 53 R/R PMBL patients enrolled, reported an ORR of 45% (including 13% CR) [108]. As expected, the magnitude of the chromosome 9p24.1 abnormality was associated with PD-L1 expression in responding patients [108]. Importantly, in both KEYNOTE studies, no patient who previously achieved CR relapsed during the follow-up. Altogether, these results led to the accelerated FDA approval of pembrolizumab in 2018 for the treatment of R/R PMBL [99]. Similarly, the phase-1/2 studies employing nivolumab and pembrolizumab have reported high ORRs in patients with R/R cHL; the patients that reached CR were characterised by higher PFS [67,68,[109][110][111][112] (Table 1). In subsequent phase-2 studies, pembrolizumab led to 100% OS and 82% PFS at 18 months in post-ASCT consolidation settings, suggesting that pembrolizumab could be used in high-risk patients after ASCT to remodel the immune landscape [113]. Following these trials, pembrolizumab is currently being used in frontline and salvage regimens in R/R cHL patients [114]. Two ongoing and three completed clinical trials evaluated the safety and efficacy of the humanised IgG1 monoclonal anti-PD-L1 antibody atezolizumab (MPDL-3280A), in combination with the anti-CD20 antibody obinutuzumab, for the treatment of aggressive B-cell lymphoma. A phase-1/2 trial (NCT02596971) evaluated the safety and efficacy of atezolizumab in combination with either obinutuzumab + the alkylating agent bendamustine or obinutuzumab + chemotherapy (CHOP) in FL patients, and atezolizumab + rituximab + chemotherapy in DLBCL patients. The analysis of 40 patients demonstrated high efficacy (ORR of 95%) and durable responses (24 months for 80% of patients) for the combination approach. Another phase-1/2 study (NCT02631577) that enrolled 38 patients with R/R FL demonstrated durable clinical responses and a remarkable ORR for atezolizumab in combination with obinutuzumab plus the immunomodulatory drug lenalidomide. Nevertheless, a phase-1 trial (NCT02220842), including 14 patients with R/R FL and 17 patients with R/R DLBCL, showed the weak efficacy of atezolizumab in combination with obinutuzumab or the EZH2 inhibitor tazemetostat. Subsequently, a multicentre phase-2 trial (NCT03276468) assessed the antilymphoma activity of atezolizumab associated with venetoclax (a BCL-2 inhibitor) and blinatumomab in three cohorts: R/R FL patients, R/R DLBCL patients and iNHLs, including MZL and MALT cases. The data from the 58 DLBCL patients enrolled at the time of the primary analysis demonstrated that the efficacy of the combination therapy is comparable with currently available options for this population, with durable responses. The phase-1/2 clinical trial NCT02729896 evaluated the combination of atezolizumab with obinutuzumab and polatuzumab, an anti-CD79b agent, in 13 participants with R/R FL, and atezolizumab with the anti-CD20 antibody rituximab and polatuzumab in 21 participants with R/R DLBCL. The percentage of participants with an objective response (CR + PR) was 33.33-57.14% (depending on the polatuzumab dose) for FL patients and 25% for DLBCL patients.
The results of a large phase-1 clinical trial (NCT02500407) that enrolled 72 iNHL patients (including 69 FL) and 141 cases with aggressive B-NHL (87 DLBCL and 29 tFL) to evaluate the combination of atezolizumab with mosunetuzumab, a bispecific CD20-CD3 monoclonal antibody, demonstrated high response rates and durable complete remissions, as well as the maximum tolerated dose. The ORR and CR of the iNHL patients across all dose levels were 64% and 42%, respectively. The ORR and CR of aggressive NHL patients across all dose levels were 34.7% and 18.6%, respectively (Table 1). According to the outcome of the NP39488 study (NCT03533283), the combination of atezolizumab with glofitamab, another bispecific antibody designed to target CD20 on the surface of B-cells and CD3 on the surface of T-cells, resulted in low ORR in 38 aggressive B-NHL patients or iNHL. The use of CAR-modified T-cells targeting specific tumour cell antigens to enhance immune responses against tumour cells is certainly a great breakthrough in oncoimmunotherapy research. In NHLs, targeting CD19-malignant B-cells has proven highly efficacious in the refractory-disease setting, resulting in T-cell activation, proliferation and secretion of inflammatory cytokines and chemokines, with consequent tumour cell lysis [115,116]. KTE-C19 (Axi-cel) is an autologous anti-CD19 CAR T-cell that was approved by the FDA in October 2017 for the treatment of R/R aggressive B-cell lymphomas after two or more lines of systemic therapy. As PD-1/PD-L1 blockade has been shown to be upregulated after CAR T-cell infusion, the ZUMA-6 clinical trial (NCT02926833) evaluated outcomes of KTE-C19 combined with the anti-PD-L1 atezolizumab. The data suggested that PD-L1 blockade with atezolizumab after KTE-C19 has a manageable safety profile and a promising efficacy outcome (Table 1). Durvalumab is a selective, high-affinity, humanised IgG1-kappa monoclonal antibody against PD-L1 [117]. In vitro and in vivo xenograft assays have demonstrated that durvalumab evokes a 75% tumour growth reduction in the presence of tumour-reactive human T-cells, supporting the immunological mechanism of action of this drug [118]. Currently, ten clinical trials are underway to investigate the use of durvalumab as monotherapy or in combination with other reagents or CAR T-cells to treat B-NHL patients. Data from murine lymphoma models suggest that the BTK inhibitor ibrutinib, combined with an anti-PD-L1 therapy, may have synergistic antitumor activity [119]. A phase-1b/2 study (NCT02401048) evaluating the efficacy and safety of the combination of ibrutinib and durvalumab in patients with R/R DLBCL or FL has highlighted longer PFS and OS in patients with FL compared to those with DLBCL. However, the efficacy of ibrutinib + durvalumab treatment demonstrated similar activity to single-agent ibrutinib [120] (Table 1). FUSION NHL 001 (NCT02733042) is a phase-1/2, open-label, multicentre study to assess the safety and tolerability of durvalumab as monotherapy or in combination with different regimens (lenalidomide ± rituximab; ibrutinib; rituximab ± bendamustine (an alkylating agent)) in subjects with B-NHL or CLL. From the 106 enrolled participants, 23 were FL, 37 were DLBCL, 17 were MCL, 5 were MZL, 1 was t-FL, 5 were cHL and 18 were CLL/SLL. The efficacy of the durvalumab and rituximab or durvalumab and lenalidomide + rituximab combination was evaluated initially in 3 B-NHL patients. 
The ORR of durvalumab and rituximab therapy was 33.3% and reached 66.7-80% with the addition of lenalidomide. A remarkable ORR was seen in ten MCL patients after durvalumab and ibrutinib combination therapy. The combination treatment of durvalumab, rituximab and bendamustine led to an ORR of 88.9% in FL patients and 30% in DLBCL patients. On the other hand, none of FL (n = 5), MCL (n = 5) or DLBCL (n = 10) patients responded to durvalumab as a monotherapy. Although these early findings are encouraging, serious AEs were commonly seen in patients treated with durvalumab when administrated either alone or in combination therapy (Table 2). Another phase-2, two-arm, open-label clinical trial (NCT03003520) is ongoing to evaluate the safety, activity, and predictive biomarkers of durvalumab in combination with chemoimmunotherapy (R-CHOP) or lenalidomide plus R-CHOP, followed by Durvalumab consolidation therapy, in previously untreated subjects with DLBCL. The ORR from the evaluable patients of the durvalumab-R-CHOP arm showed that 54.10% of the patients achieved CR but 51% of the cases presented serious AEs. Finally, NCT03310619 (PLATFORM) and NCT02706405 are two studies aimed at determining the safety, tolerability, and efficacy of CAR T-cells (JCAR017 and JCAR014, respectively) in combination with durvalumab in subjects with R/R B-cell malignancies. Among the first 11 evaluable patients, investigators reported an ORR of 91%, including 64% CR. The NCT02706405 study enrolled 15 patients, in which 12 were DLBCL, 2 were high-grade B-cell lymphoma with MYC and BCL2 and/or BCL6 rearrangements, and 1 was PMBL. The ORR from the 12 evaluable patients was 50%, with 42% CR and 8% PR. Only one patient who achieved CR has relapsed. CTLA-4 Signalling and Inhibition Cytotoxic T-lymphocyte antigen 4 (CTLA-4 or CD152) is expressed by both CD4+ and CD8+ T-cells and mediates T-cell activation together with CD28 as both receptors are homologous and share a pair of ligands, CD80 and CD86, found on the surface of APCs [164]. The interaction between CTLA-4 and both ligands is of higher affinity and avidity than CD28 and plays opposite roles. While CD28 mediates T-cell costimulation in conjunction with TCR signals, CTLA-4 and its interaction with its ligands drive the inhibi-tion of T-cell responses, although the precise mechanisms are not fully understood [165]. Two CTLA-4 blocking antibodies have been developed: tremelimumab, the first full human CTLA-4 antibody [166], and ipilimumab, an anti-CTLA-4 IgG2 monoclonal antibody and IL-2 stimulant [167]. Both agents are able to recognise human CTLA-4 and to block its interaction with CD80 or CD86 [168], potentiating an antitumor T-cell response [169]. Though ipilimumab binds to the same epitope, with a similar affinity as tremelimumab, the higher dissociation rate of ipilimumab may indicate a dynamic binding to CTLA-4, which may provide it with an improved pharmacokinetic profile [170]. While only one ongoing trial is evaluating the efficacy of tremelimumab as a single agent and in combination with durvalumab and the JAK/STAT inhibitor AZD9150 in R/R DLBCL patients, several trials have evaluated the efficacy of ipilimumab in lymphoma patients in combination with other existing therapies (mostly rituximab and nivolumab). The first phase-1/2 trial evaluating ipilimumab in relapsed settings was launched in 2004 with 18 lymphoma patients (NCT00089076). 
Although only two patients could be evaluated, both had clinical responses: 1 DLBCL patient had a CR of 31+ months and 1 FL patient had a PR for up to 19 months [143]. Due to the phase-1 study design, the trial was finally terminated. Subsequently, a phase-1 clinical trial was launched in 2012 to assess the effect of ipilimumab in combination with rituximab in the same settings as the previous trial (NCT01729806). The enrolment comprised patients with FL (n = 13), DLBCL (n = 7), MCL (n = 2), SLL (n = 2) and 9 patients with an undetermined diagnosis. At 7 weeks, toxicity was evaluated and considered manageable (Table 2). The combination of rituximab and ipilimumab resulted in more effective B-cell depletion, together with an increase in IL-2 and TNF-α levels; both phenomena were associated with treatment response [144]. Ipilimumab was also evaluated in patients with relapsed hematologic malignancies after an allogeneic stem cell transplant (Allo-SCT), in combination with lenalidomide (NCT01919619) or nivolumab (NCT01822509). With the latter combination, only modest antitumour activity was observed, mainly in lymphoid patients. However, substantial toxicities were also observed due to graft-vs.-host disease (GVHD) [171]. In a second trial evaluating the combination of lenalidomide and ipilimumab in 13 post-Allo-SCT patients with DLBCL, FL or MCL, a 46% CR rate and only one GVHD were reported, although one patient died after developing a T-cell lymphoproliferative disorder after treatment [146]. The combination of ipilimumab and nivolumab is currently under study in a phase-1/2 trial in patients at high risk of recurrence after Allo-SCT (NCT02681302). Out of 31 patients, 14 have DLBCL, both primary refractory (n = 7) and relapsed (n = 7). As of 2018, 65% of patients had developed immune-related AEs of grade 2 or higher, which required treatment with systemic steroids, but no GVHD (Table 2).
CD47 is a broadly expressed transmembrane glycoprotein [173] that interacts with SIRPα (signal regulatory protein-alpha), spreading the "don't eat me" signal to macrophages, a strategy employed as an immune-mediated clearance evasion mechanism in several types of cancers [174] (Figure 1). Mechanistically, CD47-SIRPα binding leads to tyrosine phosphorylation of SIRPα immunotyrosine inhibitory motifs and activates the Src homology 2 (SH2) domain-containing protein tyrosine phosphatases SHP-1 and SHP-2. The interaction of the phosphatase SH2 domains with phosphorylated SIRPα disrupts their autoinhibitory activity, triggering enzymatic activity and, ultimately, leading to the blockade of macrophage phagocytic function [175,176].
CD47-SIRPα Axis Inhibition
One of the first attempts to target CD47 was carried out therapeutically in AML primary human xenograft models [174]. The CD47 antibody B6H12 induced phagocytosis and eliminated AML stem cells. Subsequently, it was demonstrated that this antibody synergised with anti-CD20 (which can also bind Fc-receptors), promoting a more potent prophagocytic signal in B-NHL xenograft models [179,180,183]. Based on these results, the humanised anti-CD47 antibody Hu5F9-G4 was developed [184]. Preclinical studies showed that this antibody could bind specifically to CD47, blocking the CD47-SIRPα interaction and enabling macrophage-mediated phagocytosis in primary AML cells. This antibody was further shown to potently synergise with rituximab in B-NHL xenografts, supporting its evaluation in a phase-1 clinical trial (NCT02216409) that revealed its safety, pharmacokinetics, and pharmacodynamics [185].
Subsequently, a larger phase-1b/2 study was launched to evaluate the combination of Hu5F9-G4 with rituximab in 115 B-NHL patients (70 DLBCL and 45 iNHL (41 FL and 4 MZL); NCT02953509). In this trial, a Hu5F9-G4 and rituximab combination was well tolerated, with rapid and durable responses [147,186] (Table 1). Recently, the fully human anti-CD47 IgG4 antibody TJC4 (TJ011133) was shown to specifically block the CD47-SIRPα axis, enhancing phagocytosis in a set of tumour cell lines and AML primary cells. In BL and DLBCL xenograft models, TJC4 inhibited tumour growth and extended mice OS as monotherapy. When combined with rituximab, the antibody showed superior efficacy in a DLBCL model over the single agent. In addition, singledose or repeat-dose treatment of TJC4 minimally affected red blood cells in cynomolgus monkeys, with no impact on platelets [187]. TTI-621 is a fully human recombinant fusion protein based on the structure of SIRPα linked to the Fc region of human IgG1; it was conceived and designed as a decoy receptor. First, in vitro data showed that TTI-621 could bind CD47 and induce a potent macrophage-mediated antibody-dependent cell phagocytosis (ADCP) and apoptosis in an extensive range of hematologic and solid tumour cells [188][189][190]. In vivo data indicated that the fusion protein was able to block CD47 and impair the tumour growth in several haematological xenograft models, including AML, BL and DLBCL. Preclinical data also suggested that TTI-621 was less likely to evoke anaemia when compared to other anti-CD47, thanks to its low erythrocyte-binding profile [188]. TTI-621 was also suggested to enhance the adaptive immune response [188,191]. A phase-1, open-label, multicentre study is currently ongoing to evaluate the activity of TTI-621 as a single agent or in combination with rituximab in R/R cohorts of haematologic malignancies (NCT02663518; Table 1) [148]. Another set of preclinical data show TTI-622, a new human SIRPα linked to human IgG4, induces ADCP in a panel of haematological and solid tumour cells, with a superior affinity for tumour cells than for platelets. In vivo DLBCL xenograft models indicated that TTI-622 treatment leads to a decrease in tumour growth and improves survival [192]. Based on these data, a phase-1 dose-escalation study was initiated (NCT03530683). As of April 2020, 19 R/R lymphoma patients have been enrolled (n = 10 DLBCL, n = 5 HL, n = 1 FL, n = 1 MCL, n = 2 peripheral T-cell lymphoma (PTCL)) and objective response has been reported in 2 DLBCL patients (1 PR and 1 CR) [149]. A newly engineered high-affinity SIRPα-Fc fusion protein, ALX148, was able to trigger both innate and adaptive antitumor immune responses, characterised by an enhancement on phagocytosis. In an MCL xenograft mice model, although ALX148 was able to inhibit tumour growth, superior activity was observed by combining this agent with obinutuzumab. Similarly, in a BL xenograft mice model, the combination of ALX148 with rituximab enhanced tumour growth inhibition (TGI) and improved mice survival when compared to the control group [193]. Currently, ALX148 is being investigated in a phase-1 doseescalation/expansion in patients with R/R B-NHL patients (NCT03013218). Preliminary data showed that ALX148 is well tolerated, with ORR ranging from 41% (9% CR) to 62.5% (11% CR) [150,151]. Assuming that CD47 is upregulated in both tumour cells and erythrocytes and platelets, it is understandable that targeting CD47 leads to side effects, including anaemia. 
To circumvent such unwanted effects, a fully human bispecific antibody, TG-1801 (NI-1701), comprising a high-affinity CD19-targeting arm combined with CD47-blocking arms with a range of affinities, on a human IgG1 Fc backbone, was developed [194]. In vitro, TG-1801 specifically and strongly binds to human B-cells, avoiding hemagglutination. The specific blockade of the CD47-SIRPα axis on CD19-expressing cells mediates effective killing of primary and immortalised B-NHL cells via ADCP and antibody-dependent cell cytotoxicity (ADCC) [195,196]. Moreover, the bispecific antibody prevented the recruitment of CD19 to the BCR signalling complex, and the coligation of CD19 and CD47 by TG-1801 limited CD19 mobility at the B-cell surface through the cytoskeleton-anchored glycoprotein CD47, inhibiting B-cell proliferation and BCR-mediated gene expression [197]. While TG-1801 has been shown to be superior to rituximab in killing B-cells from primary leukaemia and lymphoma samples [196], its combination with the novel glycoengineered anti-CD20 mAb ublituximab, or with the U2 regimen associating ublituximab with the dual PI3Kδ/CK1ε inhibitor umbralisib, produced a synergistic effect in both ADCC and ADCP [198,199]. In vivo, xenograft BL and B-ALL models showed that TG-1801 reduced tumour growth and also increased survival time [196]. Complementarily, in DLBCL patient-derived xenografts (PDX), the antibody reduced tumour burden, with significantly higher efficacy than ibrutinib [200]. Lastly, the TG-1801-U2 combination has shown synergistic activity in vivo in a BL xenograft model, associated with infiltration of effector cells (NK cells and macrophages) [198,199]. Based on the preclinical data, TG-1801 is currently in a phase-1 trial (NCT03804996) for histologically confirmed B-cell lymphoma, relapsed or refractory to prior standard therapy.
CD40 Signalling and Inhibition
CD40, a member of the TNF receptor family expressed by APCs (DCs, macrophages, NK cells, and mature B-cells), interacts with its ligand CD40L (CD154), which is expressed by activated T-cells, stimulating cytokine secretion by B-cells and allowing T-cell activation [201,202]. CD40 activation promotes the conversion of DCs to APCs, the phagocytic ability of macrophages, and proliferation and antigen presentation on B-cells [203]. CD40 is expressed in a wide range of B-NHL, CLL and MM [204]. CDX-1140 is a novel agonist antibody against CD40, binding outside of the CD40L ligation site. Preclinical data showed enhanced DC and B-cell activation by CDX-1140, which synergises with recombinant CD40L to enhance agonist activity [205]. While xenograft models using CD40+ lymphoma cell lines have shown antitumour activity of CDX-1140, with attenuated tumour growth and increased survival, safety studies in cynomolgus macaques support the use of the antibody in humans [203,205,206]. A phase-1 trial (NCT03329950) is currently recruiting and will evaluate the safety and efficacy of CDX-1140 alone or in combination with the soluble recombinant Flt3 ligand CDX-301, pembrolizumab or chemotherapy (gemcitabine and nab-paclitaxel) [207]. Selicrelumab is an agonist antibody that activates both memory and naïve B-cells and triggers T-cell activation [208]. Preclinical studies, both in vivo and in vitro, demonstrated antitumour activity via immune activation; synergy was observed in vivo when combined with chemotherapy agents or in a triple combination with PD-L1 inhibition and the FAP-IL2v immunocytokine [209].
A phase-1 clinical trial is ongoing (NCT03892525), with an estimated enrolment of 44 patients, to assess selicrelumab's safety profile in combination with atezolizumab in patients with R/R lymphoma. Ad-ISF35 is a replication-defective adenovirus vector that encodes the chimeric CD154 protein. Its induction results in an antitumour response associated with macrophage infiltration and an increased proinflammatory cytokine release that leads to a break in tumour immune tolerance and tumour regression [210,211]. Both in vitro and in vivo assays have shown safe administration and significant antitumoral activity as a single agent. In parallel, combinations of this agent with an anti-PD1, or a triple combination with an anti-PD1 and an anti-CTLA-4, have shown synergistic effects in melanoma [212]. Dacetuzumab, also known as SGN-40, delivers proliferation-inhibitory and proapoptotic signals in high-grade B-NHL. Its signalling contributed to cell death through the degradation of BCL-6 and an increased expression of proapoptotic proteins [213][214][215]. A phase-1 clinical trial (NCT00103779) [152] was completed with 50 patients with refractory or recurrent B-cell lymphomas, and a phase-2 clinical trial (NCT00435916) [153] was completed with 46 relapsed DLBCL patients; however, due to its modest effect as a single agent, clinical trials were continued in combination with other therapeutic agents. A phase-1 clinical trial (NCT00655837) was completed, with 30 patients receiving dacetuzumab in combination with rituximab and chemotherapy (gemcitabine) [155]. A phase-2 clinical trial (NCT00529503) was completed with 154 DLBCL and FL patients, with improved OR when dacetuzumab was combined with rituximab and chemotherapy (etoposide, carboplatin, and ifosfamide) [156]. SEA-CD40 is an agonist antibody with improved properties in vitro and in vivo when compared to dacetuzumab, as it induces more robust cytokine production and results in the activation of CD4+ and CD8+ T-cells [216,217]. A phase-1 clinical trial (NCT02376699) with an estimated enrolment of 135 patients is currently open to assess SEA-CD40's safety profile as a single agent [218]. Lucatumumab, also known as CHIR-12.12, is an antagonist antibody that blocks the CD40/CD40L interaction, thereby blocking a survival signal in B-cell lymphomas [219]. In xenograft models, the antibody reduced tumour growth and increased CD40 expression on tumour tissue [220,221]. Lucatumumab was tested in a phase-1/2 clinical trial (NCT00670592) with 74 NHL patients; nevertheless, it was discontinued in 2013 due to minimal clinical activity [154].
CD27 Signalling and Inhibition
CD27 is a transmembrane homodimeric phosphoglycoprotein and a member of the TNF superfamily; its ligand is CD70. It is constitutively expressed by most CD4+ and CD8+ T-cells, memory B-cells, and a portion of NK cells [222,223]. CD27-CD70 activation on T-cells causes activation, proliferation, survival, and maturation of the effector and memory capacity of those cells, as in-vivo stimulation of CD27 with its ligand promotes strong cytotoxic T-cell responses. Naïve T-cells express CD27, and TCR signalling further upregulates its expression, suggesting a role during T-cell priming. Its stimulation on the B-cell subpopulation activates B-cells and promotes their proliferation, the generation of plasma cells, and the production of immunoglobulin [222][223][224]. Finally, it is also expressed in NK cells, where its activation induces cytolytic activity.
Its expression is also detected in Tcell populations of different cancer subtypes, including B-cell malignancies, suggesting potential therapeutic targeting of CD27 immunomodulation [225]. Varlilumab (CDX-1127), is a monoclonal antibody that acts as an agonist of CD27-CD70 interaction. This anti-CD27 mAb provided costimulatory signals to human T-cells in a TCR-dependent manner and enhanced the number and activity of TILs [226,227]. Both in vitro assays and in vivo models have shown direct antitumor activity against CD27-positive lymphomas [228]. In-vivo assays, in combination with other immunecheckpoint-blocking antibodies such as anti-PD-L1 or anti-CD20 Abs, have demonstrated a synergistic antitumour activity [228,229]. A phase-1 clinical trial (NCT01460134) was completed with 25 DLBCL and FL patients to assess the safety and pharmacokinetic profiles of varlilumab [157]. Doses up to 10 mg/kg weekly were well tolerated, and the results obtained in this clinical trial support the hypothesis that combination therapy can enhance and improve the overall outcome. Nowadays, two clinical trials are active: a phase-1/2 (NCT03307746) study and a phase-2 (NCT03038672) study, with an estimated enrolment of 40 and 106 patients, aimed at evaluating varlilumab-rituximab and varlilumab-nivolumab combinations in R/R B-cell lymphoma patients, respectively [230,231]. CD80 Signalling and Inhibition Cluster of differentiation 80 (CD80, B7-1) is a type I membrane protein member of the Ig superfamily that is expressed by various immune cells, from monocytes to APCs [232]. It binds to CD28 on the T-cell surface to activate the autoregulation of several functions, including CTLA-4 signalling (Figure 1). The interaction between this protein and the CD28 antigen is a costimulatory signal for the activation and proliferation of T-cells, inducing cytokine production [233]. Due to its intricate role in immune regulation, targeting CD80 for diverse B-cell lymphomas and autoimmune diseases has been attractive to both researchers and clinicians [234]. To date, only one antibody targeting CD80 has been developed; it is being evaluated in several clinical trials in B-NHL patients, specifically in FL. Galiximab (IDEC-114) is an IgG1 lambda mAb, with a high affinity to CD80. Galiximab effectively blocks CD80-CD28 interactions on T-lymphocytes but has no significant effect on CD80-CTLA-4 interactions [235]. This interaction usually leads to downregulation of T-cell activity, and it should, therefore, remain intact during galiximab therapy. Galiximab acts primarily via cross-linking of CD80 molecules and induction of ADCC, but it also inhibits cellular proliferation and upregulates apoptotic proteins [159]. In 2002, the first phase-1/2 clinical trial was launched, with the enrolment of 38 R/R FL patients (NCT00575068) [159]. In the same year, the combination of galiximab with rituximab was also evaluated in 73 patients with progressive FL that had failed at least one prior standard therapy, excluding rituximab (NCT00048555). This combination has also been evaluated as a first therapy for stages 3 and 4 or bulky FL in a 2005 clinical trial with 61 patients enrolled (NCT00117975) [160]. In 2006, a randomised phase 3 trial was initiated to evaluate if the galiximab-rituximab combination extended PFS compared to rituximab + placebo in 337 patients with grade 1-3a FL that had progressed or relapsed after at least one prior treatment (NCT00363636). 
One hundred seventy-five patients were given the combination and the remaining 162 were given rituximab + placebo, with a 3% higher incidence of side effects in the combination group [158].
4-1BB Signalling and Inhibition
4-1BB (CD137, TNFRSF9) is another surface glycoprotein member of the TNF receptor superfamily that is expressed in a variety of immune cells, including T-lymphocytes and NK cells. 4-1BB ligation by its natural ligand 4-1BBL (CD137L), expressed by DCs, macrophages and B-cells, among others, induces the activation of the NF-κB and MAPK pathways [236], increasing survival, proliferation and effector function [236,237]. 4-1BB is considered a promising target for immunotherapy in B-NHL patients since microarray analyses have shown the overexpression of 4-1BB in DLBCL and FL biopsies [238]. Accordingly, treatment with agonistic anti-4-1BB antibodies in a mouse model of B-cell lymphoma eliminated the tumour in 60% of the animals, which became immune to a rechallenge after 100 days [238]. Stimulation of NK cell proliferation and function [239] and inhibition of Treg cell suppressive activity [240] could be contributing to the antitumoral effect of anti-4-1BB therapy as well; however, the role of 4-1BB signalling in these cell types is still controversial [241,242]. Urelumab (BMS-662513), the first anti-4-1BB agent to enter clinical trials, is an agonistic antibody that has shown costimulatory activities both in vitro and in primates [243,244]. In a phase-1 clinical trial (NCT01471210) with R/R B-NHL patients dosed with urelumab as a single agent, ORRs were modest in both DLBCL and FL patients (Table 1). Furthermore, half of the responses occurred in patients treated with urelumab 0.3 mg/kg, above the later-determined maximum tolerated dose (MTD) of 0.1 mg/kg, and toxicity was prominent (Table 2) [161]. The combination of urelumab plus rituximab was evaluated in a phase-1 clinical trial (NCT01775631) in relapsed B-NHL patients. The toxicity profile was similar to that of monotherapy, and ORRs were similar to or lower than those previously reported for rituximab monotherapy (37% in DLBCL and 36-48% in FL), indicating no synergistic effect of the two drugs [161]. On the other hand, the combination of urelumab with nivolumab was well tolerated in a phase-1/2 clinical trial (NCT02253992) for refractory DLBCL patients. Again, no significant clinical benefit was found, as none of the patients achieved a response [172], despite the promising additive effect observed in animal models of solid cancers [245]. After these overall discouraging results in the clinical setting, there are currently no trials evaluating urelumab in B-NHL patients. Utomilumab (PF-05082566) is an anti-4-1BB antibody with promising costimulatory activity in vitro and in vivo and antitumor efficacy in several solid cancer models [237,244,246]. Utomilumab monotherapy displayed manageable toxicity in a phase-1 clinical trial (NCT01307267) with 55 patients, including 2 with relapsed B-NHL; however, these were not included in the efficacy analyses [247]. In the same trial, utomilumab in combination with rituximab achieved an ORR of 21% (n = 67 B-NHL) and presented an improved safety profile [162] that is likely due to the ability of utomilumab to block ligand binding, in contrast with urelumab [244].
Several clinical trials (NCT02951156, NCT03440567, NCT03704298) are currently evaluating this antibody in combination with other immunotherapeutic agents like avelumab, ibrutinib, CD19-CAR T-cells, or chemotherapeutic agents, but no results are available at the moment. Two novel anti-4-1BB antibodies are being evaluated in clinical trials that include refractory B-NHL patients. The ligand-blocking agonistic antibody ADG106 has shown promising results in animal models of several cancers [248,249] and is being tested as a monotherapy in two clinical trials (NCT03707093; NCT03802955). The 4-1BB x PD-L1 bispecific antibody MCLA-145 has been developed with the specific aim of activating 4-1BB signalling in the tumour, where PD-L1 is expressed, as well as blocking immuneinhibitory signalling from the PD-1/PD-L1 axis. Antitumor efficacy has been reported in mouse models of several solid cancers [250,251], and, consequently, a phase-1 clinical trial (NCT03922204) is testing MCLA-145 as a single agent. CD70 Signalling and Inhibition CD70 is another transmembrane glycoprotein of the TNF superfamily that acts as a ligand for CD27. CD70 is transiently found on T-cells, B-cells, DCs, and also NK cells [222,223,252]. CD70 is controlled and induced by antigen receptor stimulation and its expression is under cytokine regulation; its expression is enhanced due to proinflammatory cytokines, such as IL-1a or IL12, or decreased due to anti-inflammatory cytokines like IL-4 or IL-10 [253]. The protein is also expressed in highly activated lymphocytes, and its expression was confirmed across different subtypes of T-and B-cell lymphomas but found absent in their normal counterparts [254,255]. SGN-CD70A is a potent antibody-drug conjugate (ADC) that consists of three functional subunits composed of an anti-CD70 antibody, a protease-cleavable linker, and a DNA-crosslinking pyrrolobenzodiazepine (PBD) dimer drug. Upon binding with its target, CD70, the complex is internalised and traffics to the lysosomes, where the drug is released and will initiate cellular events when it crosslinks DNA. The drug works by activating the DNA damage pathways, in both in-vitro and in-vivo studies, causing a G2 cell cycle arrest and high levels of DNA damage in treated cells [256]. Preclinical in-vitro assays have demonstrated that the formation of double-strand breaks (DSB) is an early event that will be followed by an inhibition of proliferation and induction of apoptosis in NHL cell lines [254,257]. SGN-70A inhibited cell growth and induced higher caspase activity in CD70-positive cell lines of cutaneous T-cell lymphoma (CTCL) and patient-derived T-cell lymphoma primary cells. A phase-1 clinical trial (NCT02216890) with 38 patients of R/R MCL and DLBCL was terminated to assess the safety profile of SGN-CD70A [163]. The treatment showed antitumor activity, but no further clinical trials were conducted due to the frequency and severity of the AEs (Table 2). LAG-3 Signalling and Inhibition LAG-3 (lymphocyte activation gene-3), a CD4 homolog, is a member of the Ig superfamily expressed by TILs, activated CD4+ and CD8+ T-cells, regulatory T-cells, and NK, DC and B-cells [258][259][260]. LAG-3 has a high affinity for MHC class II molecules and exerts an inhibitory role on T-cell-mediated immune responses [261] and CD4+ and CD8+ memory T-cell activation [262]. Previous data has shown that LAG-3 is coexpressed with PD-1 in the development of T-cell exhaustion in viral infections [263]. 
In FL patients, LAG-3 expression was observed within PD-1+ functionally exhausted T-cells. Interestingly, the dual treatment with anti-PD-1 and anti-LAG-3 antibodies restored the T-cell function more efficiently. In a small cohort of 28 patients with FL, LAG-3 expression by T-cells was clinically relevant and related to patient outcome [264]. Recently, LAG-3 overexpression has been shown in a cohort of 163 DLBCL patients, mostly at the surface of CD4+ Tregs and CD8+ TILs. In this cohort, a high expression of LAG-3 and PD-1 was associated with inferior PFS and OS. In addition, the authors were able to identify a population of regulatory LAG3-high B-cells that polarise tissue-resident macrophages to promote a tolerogenic TME, which could influence the response to therapy [265]. Recently, a very innovative approach was used to unravel TME architecture in cHL by spatial-resolution-based single-cell analysis. The authors were able to identify and characterise novel cellular subpopulations, including immunosuppressive LAG3+ T-cells. Interestingly, it was observed that LAG3+ CD4+ T-cells did not coexpress PD-1. Mechanistically, an in-vitro HL coculture system revealed crosstalk between the cytokines and chemokines released by Hodgkin and HRS cells and LAG3+ T-cells, favouring the immunosuppressive activity in the cHL TME. Furthermore, the removal of the LAG3+ population in primary samples of cHL could restore T-cell activity [266]. IMP321 was the first LAG-3 Ig fusion protein investigated in clinical trials [267]. Relatlimab (BMS-986016), the first anti-LAG-3 antibody, is being evaluated as a single agent or in combination with nivolumab in a phase-1/2a clinical trial (NCT02061761) with 132 patients with R/R B-NHL, CLL, cHL and MM. The results of this study are expected by January 2021. In parallel, a phase-2 open-label study (NCT03365791) of the humanised IgG4 mAbs spartalizumab and ieramilimab (LAG525) was conducted in patients with solid tumours and haematological cancers (n = 7 DLBCL). The results were assessed as a clinical benefit rate after 24 weeks, and the combination therapy showed promising activity in DLBCL patients who reached the expansion criteria [268]. Recently, a high-affinity anti-LAG-3 IgG1κ antibody, INCAGN02385, was engineered. Preclinical data showed that the antibody, alone or in combination with an anti-PD-1 antibody, enhanced T-cell responsiveness to TCR stimulation. These data supported the evaluation of INCAGN02385 in early phase-1 (NCT03538028) testing in patients with advanced or metastatic cancers, including DLBCL [269]. Lastly, fianlimab (previously known as REGN3767), a human-engineered IgG4 antibody, was able to rescue T-cell activation in vitro and synergise with cemiplimab. This combination was able to surpass the inhibitory effects of MHC II/LAG-3 and PD-L1 signalling. Indeed, in a PD-1xLAG-3 knock-in mouse model, REGN3767 treatment was able to reduce tumour growth and enhance the antitumor efficacy of cemiplimab [270]. Accordingly, simultaneous PD-1 and LAG-3 blockades are currently being investigated in a phase-1 study in multiple tumour subtypes (NCT03005782). Finally, a radionuclide-conjugated antibody, 89Zr-REGN3767, designed for immuno-PET analysis, was useful for identifying LAG-3-expressing intratumoral T-cells in a BL xenograft model and human PBMCs [271]. Following these results, an early phase-1 study is evaluating the 89Zr-DFO-REGN3767 anti-LAG-3 antibody for PET in R/R DLBCL patients (NCT04566978).
TIM-3 Signalling and Inhibition TIM-3 (T-cell immunoglobulin and mucin-domain containing-3) is an Ig superfamily member that is preferentially expressed in fully differentiated Th1 lymphocytes [272]. TIM-3 has recently emerged as an immune checkpoint receptor in cancer due to its selective expression in tumour tissue and the key role it plays in immunosuppression [273]. In DLBCL patients, an increased level of TIM-3 was observed in both CD4+ and CD8+ T-cells, which was positively correlated with tumour stages [274]. In addition, it was described that TIM-3 is coexpressed with PD-1 in the CD3+ T-cells of patients with DLBCL, and high levels were related to tumour stage and response to conventional chemotherapy [275]. More recently, in a set of DLBCL patients, it was shown that high levels of TIM-3 in tumour cells and TILs were associated with worse OS. The authors also suggested that the TME could be directly affected by TIM-3, which leads to decreased immune surveillance and tumour clearance [276]. Preclinical data pointed out the relevance of blocking TIM-3 together with PD-1, in particular, in several cancer models [277]. These data led to the clinical development of TIM-3 antibodies, which are being tested in combination with anti-PD-1/L1 mAbs. However, so far, no clinical trial is evaluating the effect of anti-TIM-3 alone or in combination therapy in patients with B-NHL. OX40 Signalling and Inhibition OX40 (CD134, TNFRSF4) is another member of the TNFR superfamily, mostly expressed in CD4+ and CD8+ T-cells. Its natural ligand, OX40L (CD252), is expressed in DCs, B-cells and macrophages, among others, and activates the NF-kB, MAPK and BCL-2/XLdependent antiapoptotic pathways [278,279]. OX40 signalling on T-cells promotes survival and proliferation, as well as CD4+ T-cell production of IL-2, IL-5 and IFN-γ [280,281]. OX40 signalling has been shown to reduce the ability of Treg to suppress T-cell proliferation [282], setting the basis for the antitumor activity of the anti-OX40 antibody [283]. The first anti-OX40 antibody evaluated in NHL patients, MEDI-6469, is a murine antibody with demonstrated agonistic T-cell activity in animal models [284]. The combination of MEDI6469 plus rituximab was tested in a phase-1/2 clinical trial, which included 4 DLBCL cases. No responses were detected, as two patients presented stable disease and the other two patients died during the trial (NCT02205333). After early termination of this trial in 2016, MEDI-6469 has not been further evaluated in lymphoma patients. The human OX40-agonist PF-04518600 has been reported to promote human T-cell proliferation, to increase cytokine production, to mediate Treg cell depletion ex vivo and to inhibit tumour growth in a mouse model of lymphoma [285,286]. Currently, a phase-1 clinical trial (NCT03636503) is evaluating PF-04518600 in combination with rituximab, utomilumab and avelumab in FL patients. BMS-986178 is another human OX40-agonist with costimulatory effects observed in ex vivo human CD4+ T-cells as it enhances their T-cell effector functions and inhibits T-cell suppression by Treg cells [287,288]. After demonstrating a favourable safety profile in mice [287], BMS-986178 is currently being evaluated in a phase-1 clinical trial (NCT03410901) in combination with the TLR9 agonist SD-101 and radiotherapy in B-NHL patients, including FL, MCL and MZL, among others. 
TIGIT Signalling and Inhibition TIGIT (T-cell immunoglobulin and ITIM (immunoreceptor tyrosine-based inhibitory motif) domain) is an inhibitory receptor that is expressed in CD4+ and CD8+ T-cells, Tregs, and NK cells, among others. Its main ligand, CD155 (poliovirus receptor, PVR), is expressed on DCs, B-cells and macrophages [289,290]. Studies with mouse and human cells have revealed a variety of immunomodulatory functions of the TIGIT/CD155 axis, including polarization towards tolerogenic phenotypes of DCs and M2 macrophages, inhibition of NK cell functions, inhibition of proliferation and cytokine production by T-cells, and stimulation of Treg cells. Moreover, TIGIT blocks the costimulatory activity of DNAM1 (DNAX accessory molecule-1, CD226), which binds to CD155 with lower affinity than TIGIT [290]. Tumoral samples from B-NHL patients, including FL, DLBCL, MCL, MZL and CLL, present overexpression of TIGIT in CD4+ and CD8+ T-cells, which produce lower levels of proinflammatory cytokines as well as expression of the ligand CD155 in the TME [291]. In FL patients, high numbers of TIGIT+ T-cells have been associated with lower EFS and OS [292]. Two monoclonal antibodies targeting TIGIT are currently being evaluated in clinical trials with NHL patients. SEA-TGT (SGN-TGT) is a human anti-TIGIT antibody that blocks ligand binding and allows DNAM1 costimulatory signalling. This led to Treg depletion and CD8+ T-cell activation in in-vitro and in-vivo models, in which the drug induced a long-term antitumor response and made animals immune to a tumour rechallenge [293]. A phase-1 clinical trial (NCT04254107) recently tested SEA-TGT as a monotherapy in patients with cHL, DLBCL and PTCL, among others. The other anti-TIGIT antibody, tiragolumab (MTIG7192A, RG6058), is also being tested in a phase-1 clinical trial (NCT04045028), alone or in combination with rituximab, in B-NHL patients. Following the observation that TIGIT and PD-1 are coexpressed in the intratumoral Tcells of NHL patients [291] and the favourable results of the combination of anti-TIGIT and anti-PD-1/PD-L1 in preclinical models and some clinical trials for solid cancers [289,294], the combination will, most likely, be evaluated in NHL patients in future clinical trials. Mechanisms Underlying B-Cell Lymphoma Refractoriness to Immune Checkpoint Blockade Resistance to immune checkpoint blockade therapy in human cancers was extensively reviewed by Bellone M and Elia A [295]; however, the mechanisms underlying B-cell lymphoma refractoriness to immune checkpoint blockade are still poorly understood. Although major advances have been made in the last 20 years to overcome the refractoriness of B-NHL patients to standard therapies through the introduction of immune checkpoint blockades in the clinical setting, either as single agents or in combination therapies (Table 1 and Figure 2), several parameters can impair the efficacy of these new approaches. Patient-intrinsic factors such as age, sex, HLA heterozygosity or loss of β2microglobulin (B2M), amplification of oncogenic signalling pathways, immunosuppressive cells and molecules present in the TME may impair antigen recognition and contribute to the failure of immune checkpoint blockades [296,297] (Figure 3). The frequent PD-L1 aberrant expression that is found among lymphoma patients results in the most responsive cancer type to anti-PD1 therapy. 
However, the PD-1/PD-L1 blockade can be strongly influenced by disease-specific factors, and its predictive value in clinical trials is still controversial. The inconsistent data could be attributable mainly to the variable PD-L1 resources (tumour cells, tumour microenvironment cells, peripheral blood), the differences in staining (including detecting antibodies) procedures, and positive/negative PD-L1 cut-offs. Additionally, it has been shown that PD-L1 can interact in cis with CD80 on APCs and then disrupt the binding between PD-1 and PD-L1 [298]. The functional effects of alternative binding partners also highlight the differences seen in the efficacy of PD-1/PD-L1 immunotherapy in different biological settings. Figure 3. Mechanisms of resistance to immune checkpoint blockade. The deregulation of MHC class I components such as the loss of β2 microglobulin (B2M) and the loss of human leukocyte antigen (HLA) heterozygosity as well as defects in IFN signalling pathways may impair antigen recognition by antitumor CD8+ T-cells. Amplification of oncogenic signalling pathways such as PI3K/AKT/mTOR, Wnt/β-catenin, and MAPK increases the production of immunosuppressive cytokines, trigger T-cell exclusion from TME and may also result in resistance to immune checkpoint blockade. Epigenetic (histone acetylation or DNA methylation) and genetic (deleterious mutations) alterations are crucial triggers of gene expression disorders related to sustained T-cell exhaustion that could eventually cause the failure of immune checkpoint therapy. Moreover, myeloid-derived suppressor cells (MDSCs), Tregs, tumour-associated macrophages (TAMs), and cancerassociated fibroblasts (CAFs) are major immunosuppressive cell types within the TME that may contribute to resistance to immune checkpoint blockade. Immunosuppressive molecules such as TGF-β and IFN-γ, secreted by tumour cells, myeloid cells and macrophages in the TME, may also suppress the functions of effector T-cells, rendering immune checkpoint blockade ineffective. There are several mechanisms determining if a patient will respond or not to a PD-1/PD-L1 blockade. Weakly immunogenic tumours may have an insufficiently active T-cell population to respond to PD-1/PD-L1 blockade; additionally, potentially immunogenic tumours will also be resistant to PD-1/PD-L1 blockade if they develop mechanisms to suppress the activation and infiltration of T-cells after the treatment. Moreover, patients may become resistant to PD-1/PD-L1 therapy if they have insufficient reinvigoration of exhausted tumour-specific CD8+ T-cells or if they have lost target antigens or the ability to present them. Finally, some patients might initially respond to PD-1/PD-L1 blockade but become resistant if the antitumour T-cells are short-lived [299]. There are also epigenetic mechanisms of B-lymphoma cells driving the resistance to immune checkpoint blockades. The in vitro and in vivo data from Zheng et al. [300] showed that miR-155 overexpression enhances PD-L1 expression, reduces peripheral blood immune cells, induces CD8+ T-cell apoptosis and dysfunction via AKT/ERK dephosphorylation, and decreases the survival of DLBCL patients. It has also been shown that histone deacetylase 3 (HDAC3) is another important epigenetic regulator of PD-L1 in B-cell lymphoma as its inhibition increases PD-L1 transcription, resulting in a better clinical response to PD-L1 blockade [301]. 
The molecular mechanisms of the immune environment in regulating the efficacy of immune checkpoint blockades in B-cell lymphoma is poorly understood. It has been shown that T-cell-inflamed tumours are enriched for sensitivity to PD-1 blockade therapy [302]. Conversely, T-cell noninflamed tumours present low-infiltrating immune cells and are typically resistant to immune checkpoint blockade therapy [303]. Inflamed lymphomas are characterised by the presence of prominent T-cell infiltration [304], genetic alterations that facilitate escape from immune surveillance [78,82,305,306], and frequent mutations, resulting in hyperactivity of the NF-kB signalling pathway [78,307]. Hyperprogressive disease has been found in 9% of patients who received anti-PD-1/PD-L1 therapy [308,309]. The hyperprogression is associated with the elderly age of patients but not tumour burden or cancer type [308]. MDM2/MDM4 amplification and EGFR aberration are correlated with a higher risk of hyperprogression in solid cancers [309], although it was little known in lymphoma. Recent data showed that patients experiencing an hyperprogression have a higher prevalence of PD-L1 − disease [308]. It is predicted to be because the engagement of PD-1 with anti-PD-1 mAb inhibits but does not augment T-cell activations. Therefore, anti-PD-1 mAbs might be PD-1 agonists rather than antagonists in PD-L1 − status. The disease might also rapidly progress through interaction between PD-L1 and CD80 instead of PD-1 for the blocking duration by anti-PD-1 mAbs in PD-L1+ status [310]. Some polymorphisms of PD-1 could also affect the action of anti-PD-1 mAbs, and, thus, hyperprogression could be possible after PD-1 blockade [311]. Conclusions and Future Perspectives Despite the remarkable implementation of immune checkpoint therapies in the last 5 to 6 years in determined subtypes of lymphoma, including cHL and PMBL, the applicability of these approaches in the management of R/R B-NHL has, so far, been mixed and predicting which lymphoma will respond to immune checkpoint blockade is currently not accurate. To date, pembrolizumab is the only FDA-approved agent for use with R/R B-NHL (i.e., PMBL) patients, illustrating the fact that, conceptually, PD-1 blockade in B-NHL appears to be promising and rarely related to severe immune-related AEs [7,97,103]. However, the efficacy of this approach is low, with no long-term durable responses except in PMBL, PCNSL, and PTL, which are due to alterations of chromosome 9p24.1 and the expression of PD-L1/PD-L2. To improve the efficacy of these agents, there have been great efforts made on combination immunotherapy with PD-1/PD-L1 and CTLA-4 checkpoint inhibitors. The superior outcomes of combined immunotherapy over single-agent regimens in preclinical studies, together with the approval of nivolumab plus ipilimumab, give hope to the therapeutic potential of CTLA-4 blockade and its possible combination with PD1/PD-L1 blockers. Finally, understanding the complex interplay between malignant cells, lymphoid TME and immune-accompanying cells is mandatory in order to identify the specific lymphoma types that are vulnerable to a determined checkpoint and still requires improvement in the detection methods. To this aim, patient preselection based on accurate genomic and phenotypic examination of the TME will be necessary to identify the best target(s) of interest, either in monotherapy or in combination therapy, and facilitate the design of biomarker-driven trials. 
Author Contributions: All authors made a substantial contribution to all aspects of the preparation of this manuscript. All authors have read and agreed to the published version of the manuscript. Conflicts of Interest: G.R. received research support from TG Therapeutics. Remaining authors declare no conflict of interest.
2020-12-24T09:09:59.913Z
2020-12-17T00:00:00.000
{ "year": 2021, "sha1": "fd9a169af1767460fbf0044e82def3faabedf2cd", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc7827333?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "b7cfe3970a040db891154390c33b2a4989f03a74", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
19254108
pes2o/s2orc
v3-fos-license
Negative and Positive Lateral Shift of a Light Beam Reflected from a Grounded Slab We consider the lateral shift of a light beam reflecting from a dielectric slab backed by a metal. It is found that the lateral shift of the reflected beam can be negative while the intensity of reflected beam is almost equal to the incident one under a certain condition. The explanation for the negativity of the lateral shift is given in terms of the interference of the reflected waves from the two interfaces. It is also shown that the lateral shift can be enhanced or suppressed under some other conditions. The numerical calculation on the lateral shift for a realistic Gaussian-shaped beam confirms our theoretical prediction. It is well known that the Goos-Hänchen (GH) effect [1][2] refers to the lateral shift ∆ of a totally reflected beam displaced from the path of geometrical optics. This phenomenon has been widely analyzed both theoretically [3][4][5][6][7][8][9] and experimentally [1,[10][11][12][13][14]. The investigation on the GH shift has been extended to other areas of physics, such as acoustics, quantum mechanics and plasma physics [15]. The GH shift is usually proportional to the penetration depth at the order of a wavelength. Many attempts have been made to achieve large lateral shifts (positive or negative) under different circumstances, such as absorbing media [4,9], atomic resonant absorptive vapors [12,16], resonant artificial structures [7,[17][18], negative-permittivity media [19][20], negatively refractive media [21][22][23][24][25][26][27][28], dielectric slabs [8,[29][30], multi-layered structures [6,[31][32]. Recently Li [8] found that the lateral shift of a light beam passing through a lossless transmitting slab could be negative due to the interference between the reflected waves from the slab's two interfaces. However, in Ref. 8, the amplitude of the reflected beam is strongly dependent on the parameters of the slab. In this paper, we consider a light beam reflecting from a dielectric slab backed by a metal, where the lateral shift of the reflected beam can also be negative with the advantage that the reflected beam intensity is almost equal to the incident one at any incident angles (this makes the comparison of the measured lateral shifts easy). When the thickness of the dielectric slab is suitable chosen, the negative lateral shift is dominated. The physics behind the negative lateral shift is the interference between the two reflected waves from two interfaces. At the same time, the lateral shift can also be positive, and it is shown that the lateral shift can be greatly enhanced or suppressed under certain conditions. Finally, we present a simulation for the lateral shift of a Gaussian-shaped beam. Consider a TE-polarized light beam of angular frequency ω (corresponding wavelength λ ) with angle θ incident from the vacuum ( 0 1 ε = ) upon a nonmagnetic dielectric slab backed by a metal ( 2 ε ). The slab's thickness is d and its dielectric constant is 1 ε ( 1 ε is real number) as shown in Fig. 1. TM polarization can be discussed similarly. From Maxwell's equations and boundary conditions, the reflection coefficient could be expressed by [24][25] where 0 0 cos , and c is the light speed in vacuum. For the simplicity, we first consider the metal layer as a perfect conductor. In this case, Eq. 
(1) can be simplified into and the phase of ( ) r θ can be analytically given by For the wide incident beam (i.e., with a very narrow angular spectrum, k k << ∆ ), the lateral shift ∆ of the reflected beam can be defined by: [8,27] . Then the analytical result of the lateral shift ∆ is given by This is our key result of the present problem. For the optical denser medium ( 1 1 ε > ), the denominator in Eq. (4) is always positive. It is clear that when the inequality holds, the lateral shift ∆ becomes negative, i.e., the reflected beam will be appeared at the left side of the dotted line as shown in Fig. 1 (Ray 1). This result is counterintuitive in comparison with the prediction of geometric optics. In the frame of geometric optics, the lateral shift of the reflected wave from the first interface is zero and that of the reflected wave from the second interface is ' ), see the dotted and dashed lines in Fig. 1. The inequality (5) indicates that the negative lateral shifts are much easily obtained at the large angles of incidence θ (or for the large value 1 ε ). In fact, we can rewrite Eq. (4) term origins from the interference between two reflected waves from the two interfaces of the dielectric slab. The interference leads to the oscillation of the lateral shift [8]. However, in our case, the value of ( ) r θ is always equal to one, so that there is no angular shift even for the finite beam (which refers to the departure from the original propagation direction) that could not be avoided in Refs. [8,29]. To average the first and second terms over 1 z k d in one period ] , 0 [ π will erase the interference effect, and we can easily obtain the averaged lateral shift, ' tan 2 , which is the same as the result of the reflected wave from the second interface, expected from the geometric optics. Actually, under the condition of The physical origin of the enhancement (or suppression) of the lateral shift comes from the destructive interference (or the constructive interference) of the reflected wave at the first interface with the incident wave. In practice, the plane wave assumed above is an approximation. Normally the front of an incident wave is finite, that is to say, it has a profile with a certain width, typically a Gaussian profile. Therefore, we consider the lateral shift of the finite beam, such as a Gaussian-shaped beam, incident upon the system, for example, a glass layer backed by silver. The electric field of the incident beam at the plane of 0 = z is given by the following form: is the angular spectrum of the Gaussian-shaped beam with the angle θ , and 0 0 sin , W is the half-width of the beam at waist. The reflection coefficient ( ) x r k can be obtained from Eq. (1). Then the electric field of the reflected beam can be written by In the numerical simulations using Eqs. (7,8), we take 13 W λ = > λ (the incident beam is well collimated and ( ) x A k is sharply distributed around 0 x k ), and consequently, it is expected that the shape of the reflected beam is the same as that of the incident beam without obvious distortion. We use the peak position difference between the incident and reflected beams to denote the lateral shift ∆ , see the black dots in Fig. 4. In Fig. 4, we also plot the result from
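Because the displayed equations (1)-(4) did not survive extraction, the following Python sketch is only an illustration of the quantities discussed above: it uses the standard two-interface (Airy) reflection coefficient for a TE wave incident from vacuum on a lossless dielectric slab backed by a perfect conductor, and evaluates the stationary-phase lateral shift ∆ = -dφ/dk_x by numerical differentiation. The textbook formula is a stand-in assumption rather than the paper's own Eq. (1), and the parameter values are arbitrary choices, not those used for the paper's figures.

```python
import numpy as np

def reflection_te(theta, eps1, d, wavelength):
    """TE reflection coefficient of a lossless dielectric slab (eps1, thickness d)
    on a perfect conductor, incidence from vacuum at angle theta (radians).
    Standard two-interface (Airy) formula, assumed here as a stand-in for Eq. (1)."""
    k0 = 2 * np.pi / wavelength
    kx = k0 * np.sin(theta)
    kz0 = k0 * np.cos(theta)
    kz1 = np.sqrt(eps1 * k0**2 - kx**2 + 0j)
    r01 = (kz0 - kz1) / (kz0 + kz1)   # vacuum/slab Fresnel coefficient (TE)
    r12 = -1.0                        # perfect conductor: field node at the metal
    phase = np.exp(2j * kz1 * d)
    return (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)

def lateral_shift(theta, eps1, d, wavelength, dtheta=1e-6):
    """Stationary-phase lateral shift Delta = -d(phi)/d(kx), central difference."""
    k0 = 2 * np.pi / wavelength
    th = np.array([theta - dtheta, theta + dtheta])
    phi = np.unwrap(np.angle(reflection_te(th, eps1, d, wavelength)))
    dkx = k0 * (np.sin(th[1]) - np.sin(th[0]))
    return -(phi[1] - phi[0]) / dkx

if __name__ == "__main__":
    wavelength, eps1 = 1.0, 2.25              # illustrative values only
    for d in (0.2, 0.4, 0.6):                 # slab thickness in units of the wavelength
        shifts = [lateral_shift(np.deg2rad(a), eps1, d, wavelength) for a in (30, 50, 70)]
        print(d, [round(s, 3) for s in shifts])
```

Scanning the slab thickness and the angle of incidence with such a script should reproduce the qualitative behaviour discussed above: the modulus of the reflection coefficient stays equal to one for the lossless slab on a perfect conductor, while the shift oscillates between positive and negative values as the interference condition between the two reflected waves changes.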
2017-09-23T15:01:33.181Z
2005-12-09T00:00:00.000
{ "year": 2005, "sha1": "33c483981f7d39d151f6ba9022702a66ecdc9cac", "oa_license": null, "oa_url": "http://arxiv.org/pdf/physics/0512083", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "33c483981f7d39d151f6ba9022702a66ecdc9cac", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
15646672
pes2o/s2orc
v3-fos-license
Mining featured micro ribonucleic acids associated with lung cancer based on bioinformatics Background Few genetic markers useful for the screening of lung cancer risk exist. Although related research has shown that certain expression profiles of micro ribonucleic acids (miRNAs) are different in lung cancer versus the normal lung, such as miR-29a and miR-29s, the precise molecular mechanism of lung cancer remains obscure. In order to get a better understanding of the pathogenetic mechanism of lung cancer, we analyzed the differentially expressed genes (DEGs) and identified featured miRNAs in lung cancer tissues. Methods We used the gene expression profile GSE10072, including 49 gene chips of non-tumor tissues and 58 gene chips of lung tumor specimens. The DEGs between these two groups were identified by the Limma package in R. The TarBase database was used to construct the networks of miRNAs regulating DEGs related to lung cancer. After ordering the miRNAs regulating DEGs, we further screened featured miRNAs in combination with the miR2Disease database. Results A total of 5572 DEGs were obtained between lung cancer and control specimens. After constructing a miRNA regulatory network, a total of 398 regulatory relationships between 57 miRNAs and 321 target genes were found. By integrating the miR2Disease database and using a sorting algorithm, a total of six featured miRNAs related to lung cancer were identified, including miR-520h, miR-133a, miR-34, miR-103, miR-370, and miR-148. They might be involved in lung cancer progression by regulating ABCG2, PKM2, VAMP2, GPD1, MAP3K8, and DNMT3B, respectively. Conclusion The top 10 significant miRNAs, such as miR-520h, miR-133a, miR-34, and miR-103, may be potential therapeutic targets for lung cancer. Introduction Lung cancer is one of the leading causes of cancer-related deaths in the world, largely because of the genetic and epigenetic damage caused by tobacco smoke. 1 In general, lung cancer is divided into two categories for the purpose of diagnosis and treatment: small cell lung carcinoma and non-small cell lung carcinomas (NSCLCs). Both types are difficult to diagnose at an early stage and are associated with poor survival. 2 To improve the patient survival rate, it is critical to investigate the mechanism of tumorigenesis in lung cancer in order to determine effective therapies. 3 Recently, molecular investigations have provided evidence that the development of lung cancer involves genetic alterations, which has contributed to defining the molecular network of lung carcinogenesis. The expressions of Kirsten rat sarcoma viral oncogene homolog (K-ras), phosphatase and tensin homolog (PTEN), fragile histidine triad gene (FHIT) and myosin XVIIIB (MYO18B) are frequently altered. 4,5 Studies also show that p53 and RB/p16 pathways are usually deficient. 6 In addition, some unknown markers, such as noncoding RNA gene products, may lend insight into lung cancer. Micro ribonucleic acids (miRNAs) are small noncoding RNA gene products considered to be important regulators of gene expression and play a crucial role in cellular growth, differentiation, and death. 7,8 Some miRNA expression levels are closely related to human tumorigenesis. It is reported that miR-29s is inversely correlated with DNA (cytosine-5-)-methyltransferase 3 alpha (DNMT3A) and DNA (cytosine-5-)-methyltransferase 3 beta (DNMT3B) in lung cancer tissues and plays a role in the epigenetic normalization of NSCLCs. 9
Let-7 microRNA, as a tumor suppressor gene, has been demonstrated to directly repress cancer growth in the lung. 8 Although numerous studies have contributed to exploring the mechanisms of lung cancer, the role of miRNAs in lung cancer progression is not yet clarified. In our study, a set of gene expression profiles of lung cancer and controls were analyzed to identify the differentially expressed genes (DEGs). We then applied bioinformatics tools to identify miRNAs regulating DEGs. Using TarBase and the miR2Disease database, we further screened featured miRNAs involved in the occurrence of lung cancer. Our work may help identify potential targets for lung cancer therapies. Methods and data Affymetrix microarray data Gene expression profiles under the accession number GSE10072 were downloaded from the Gene Expression Omnibus (GEO). 10 A total of 107 samples were used for the development of a microarray profile, which contained 58 NSCLC tissues, including 16 from never smokers (NS), 18 from former smokers (FS), 24 from current smokers (CS), and 49 normal samples, which included 15 NS, 18 FS, and 16 CS. There was no significant difference in the number of smokers between the two groups. The raw data were obtained based on the GPL96 Platform. Data preprocessing and screening of lung cancer related genes The probe-level data in CEL files were converted into expression profiles; MAS 5.0 performed background correction and standard summarization. 11 For genes corresponding to multiple probe sets, and thus multiple expression values, the expression values of those probe sets were averaged. Eventually, a total of 12,752 gene expression profiles were obtained, including 58 lung cancer and 49 control specimens. CancerResource (http://bioinformatics.charite.de/cancerresource/) is a database integrating cancer-relevant relationships of compounds and targets. 12 A total of 211 lung cancer-related genes were obtained from CancerResource, and 209 of these genes were present in the gene expression profiles. Differentially expressed gene (DEG) analysis The Limma package in R was used to analyze the DEGs between the 58 lung cancer and 49 control specimens. 13 The P-values were adjusted by the Benjamini and Hochberg (BH) method based on the multtest package. 14,15 A false discovery rate (FDR) of <0.01 was used as the cut-off criterion for DEGs. To get a better understanding of the DEGs, their expression values were collected and hierarchical clustering analysis was performed based on Euclidean distance. 16 An enrichment analysis of DEGs corresponding to lung cancer-related genes was then performed using the Fisher test. 17 To facilitate the analysis, we named the DEGs related to lung cancer (P < 0.01) as annotated differentially expressed genes (ADEGs). TarBase (http://diana.cslab.ece.ntua.gr/DianaToolsNew/index.php?r=tarbase/index) is a database providing a collection of all experimentally tested miRNA targets. 18 A total of 1094 miRNA-target interactions in humans were selected based on TarBase data. miRNA-targeted DEG regulation networks were constructed, and the regulatory relations, target genes, and miRNAs were counted. Screening of featured miRNAs related to lung cancer After miRNA-targeted DEG regulation networks were obtained, we aimed to further identify featured miRNAs related to lung cancer. In this study, we applied a ranking approach to obtain featured miRNAs.
The rank value of each miRNA regulating DEGs was calculated from the ranks of its target DEGs, where DEGRankj is the rank of a given DEG among all DEGs. Featured miRNAs were those endowed with smaller rank values. We then tested whether the DEGs were related to lung cancer genes by calculating the rank sum of the ADEGs and comparing it with the rank sums of randomly selected DEG sets of the same size. The random selection was repeated 1000 times, and the average and variance of the random rank sums were used to compute a P-value by a Z-score test. We also ranked the miRNAs regulating DEGs. The miR2Disease database (http://www.mir2disease.org) is a manually curated database that aims to provide a comprehensive record of miRNA deregulation involved in various human diseases. 19 A total of 42 lung cancer-related genes were stored in the miR2Disease database. We mapped the DEGs in the miRNA regulatory networks onto the miR2Disease database to further screen miRNAs. Screening of DEGs and annotated DEGs We obtained the publicly available microarray dataset GSE10072 from the GEO database. The Limma package in R was used to analyze the DEGs between 58 lung cancer tissues and 49 non-tumor samples. According to the threshold criterion for DEGs (FDR < 0.01), 5572 DEGs associated with lung cancer were identified. The results of hierarchical clustering are shown in a heat map (Fig 1). Genes with similar expression levels clustered together, and the case and control samples were broadly distinguished on the basis of their gene expression profiles. By integrating the CancerResource database, the DEGs were found to be significantly enriched in ADEGs (P-value = 0.001171), which suggested that the DEGs identified in this work were valid. Screening of featured miRNAs related to lung cancer There were 115 ADEGs among the DEGs identified in our paper. Their rank sum was 300251. For randomly selected sets of 115 DEGs, the average rank sum, its variance, and the resulting P-value were 321423.8, 16512.12, and 0.09, respectively. These data showed that the lung cancer-related genes ranked high among the DEGs. We screened the significant miRNAs associated with lung cancer by ordering the 57 miRNAs (Table 1). Using the miR2Disease database to search for the 57 miRNAs, we found 11 overlapping miRNAs. These 11 miRNAs ranked highest among all of the miRNAs (Table 2). We found that four of the top 10 miRNAs (miR-130a, miR-20a, miR-19a, and miR-133b) were present in miR2Disease (Table 1). According to PubMed, four miRNAs (miR-520h, miR-133a, miR-34, and miR-103) have been reported to have an intimate relationship with lung cancer.
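As a rough illustration of the ranking and permutation procedure described in the Methods, the sketch below implements a rank-sum Z-score test in Python. Because the paper's rank formula did not survive extraction, the aggregation of target-DEG ranks into a per-miRNA rank value (here, their mean) is an assumption, and all gene identifiers and numbers in the toy usage are placeholders rather than data from the study.

```python
import numpy as np
from scipy.stats import norm

def mirna_rank_value(deg_rank, target_degs):
    """Assumed aggregation: mean rank of the DEGs targeted by one miRNA
    (smaller values correspond to more strongly deregulated targets)."""
    return float(np.mean([deg_rank[g] for g in target_degs]))

def adeg_rank_sum_test(deg_rank, adegs, n_perm=1000, seed=0):
    """Compare the rank sum of the ADEGs with rank sums of randomly drawn
    DEG sets of the same size, returning a one-sided Z-score P-value."""
    rng = np.random.default_rng(seed)
    ranks = np.fromiter(deg_rank.values(), dtype=float)
    observed = sum(deg_rank[g] for g in adegs)
    random_sums = np.array([rng.choice(ranks, size=len(adegs), replace=False).sum()
                            for _ in range(n_perm)])
    z = (observed - random_sums.mean()) / random_sums.std(ddof=1)
    return observed, random_sums.mean(), norm.cdf(z)   # small rank sums give small P

# Toy usage with placeholder identifiers (not data from the study):
deg_rank = {f"gene{i}": i + 1 for i in range(5572)}        # rank 1 = most significant DEG
adegs = [f"gene{i}" for i in range(0, 4600, 40)]           # a placeholder set of 115 "ADEGs"
print(adeg_rank_sum_test(deg_rank, adegs))
print(mirna_rank_value(deg_rank, ["gene3", "gene10", "gene250"]))
```

In this sketch the denominator of the Z-score is the standard deviation of the permuted rank sums, which is the usual form of such a test; whether the study's reported 16512.12 is a variance or a standard deviation cannot be determined from the text alone.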
21 Resveratrol can suppress the migratory ability of tumor cells to lungs by regulating the miRNA-520h-mediated signal cascade. 20 It has been demonstrated that ABCG2 is overexpressed in human cancers and hsa-miR-520h can downregulate ABCG2 in pancreatic cancer to inhibit migration and invasion. [22][23][24] Another recent study indicates that the expression of miR-133a significantly declined in lung squamous cell carcinoma compared with normal tissues. 25 miR-133a, as a tumor suppressor, shows a significant effect in inhibiting tumor cell proliferation. PKM2, as a protein kinase and a transcriptional coactivator, represents an attractive target for cancer therapy. 26 Increased expression of PKM2 can provide advantages for diverse cancer cell growth and survival. 27 Recent studies show that miR-133a is a targeting transcriptor of PKM2 and the overexpression of PKM2 is associated with the downregulation of miR-133a. 28 We can infer that miR-133a may play an important role in lung cancer by regulating PKM2. In addition, in mammalians, the miR-34 family comprises three processed miRNAs, including miR-34a, which is highly expressed in the brain, and miR-34b/c, which is mainly expressed in the lung. 29,30 miRNA-34 has a tumor suppression function in lung cancer. 31 Exogenous miRNA-34 can reduce the proliferation and invasion of lung cancer epithelial cells. Moreover, the aberrant expression of miR-103 is found in metastasis-associated gene 1 silencing lung cancer cells. 32 The differential expression of miR-103 is closely related with lung cancer progression. VAMP2 is thought to participate in neurotransmitter release. However, little is known about miR-34 regulating VAMP2 in lung cancer. Four miRNAs (miRNA-130a, miRNA-20a, miRNA-19a and miR-133b) are related with lung cancer based on the miR2Disease database. miRNA-130a has been suggested as a novel prognostic marker for NSCLC patients. 33 miRNA-130a is differentially expressed in smoking patients with NSCLC compared with non-smoking ones. A previous study has also shown that miRNA-130a was able to inhibit tumor migration of NSCLC. 34 The expression of miRNA-19a was found to be downregulated in lung cancer tissues compared with controls. 35 The low expression of miRNA-19a is related to a poor prognosis in patients with lung cancer. Numerous studies have suggested that the expression of miRNA-133b is involved in various cancer progressions. [36][37][38] The expression of miRNA-133b is decreased in lung cancer cells and functions in inducing apoptosis in tumor cells. 39 Although evidence of miRNA-20a regulating lung cancer development is insufficient, previous studies have shown that miRNA-20 is differentially expressed in cervical cancer and has been considered to be a prognostic marker for oral squamous tumors. 40,41 The relationship between miRNA-20a and lung cancer needs to be further investigated. Conclusion Our study more intuitively shows the relationship between miRNA and DEGs in lung cancer than previous reports. We ordered miRNAs based on reasonable indicators and obtained four miRNAs related to lung cancer reported in the literature. The featured miRNAs identified in our paper play key roles in the initiation and progression of lung cancer and may be potential targets for lung cancer treatment. Further research is required to more closely investigate the exact mechanism of lung cancer. Disclosure No authors report any conflict of interest.
2016-05-16T10:01:05.248Z
2015-07-01T00:00:00.000
{ "year": 2015, "sha1": "2db8739babdfcc94e864532be5e8f57cc0268cbf", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1111/1759-7714.12187", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2db8739babdfcc94e864532be5e8f57cc0268cbf", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
227128432
pes2o/s2orc
v3-fos-license
Case Report: Huge Dumbbell-Shaped Primary Hydatid Cyst Across the Intervertebral Foramen Primary hydatid cyst of the spinal canal is extremely rare. We reported a 42-year-old Kazakh man with right lower back pain and weakness in both lower limbs for 2 months, who lived in the pastoral area. Clinical examination revealed that the patient had no cysts on other organs and no previous medical history except for a huge cyst inside and next to the vertebrae. MRI examination revealed a huge dumbbell-shaped primary cyst across the intervertebral foramen. Pathological examination after operation confirmed a fine-grained hydatid cyst disease. Therefore, in the pastoral area, doctors should be alert to the occurrence of hydatid cyst disease if patients complained about progressive back pain and lower limb weakness or other spinal cord compression symptoms. Once hydatid cysts in other organs or systems were detected, the occurrence of the disease could be highly suspected. Complete resection is an effective treatment for hydatid cyst disease. INTRODUCTION Hydatid cyst disease, also known as echinococcosis, is a parasitic disease that affects humans and other mammals such as sheep, dogs, mice, and horses, relatively common in livestock areas. Humans may become infected by direct contact with the definitive host (dogs) or by ingestion of food infected with the parasite's eggs (1). In clinical practice, the liver was the most susceptible organs of echinococcosis infection, accounting for about 75% of cases; the lung follows, accounting for about 15%; other organs such as the brain can also be affected, but <10%. And only 0.5-2% of patients have bone involvement, with half of them mainly affecting the spine (1,2). The most common affected sites of the spine were the lumbar, thoracic, sacral, and cervical vertebrae, which were mainly affected by the vertebral body (3). And echinococcosis originating from the spinal cord or spinal canal was extremely rare. Spinal hydatid cyst disease usually presents as cauda equina symptoms or symptoms of spinal cord compression. And it is typed into five categories: (1) primary intramedullary hydatid cyst; (2) intradural extramedullary hydatid cyst; (3) extradural intraspinal hydatid cyst; (4) hydatid disease of the vertebrae; and (5) paravertebral hydatid disease (4,5). To the best of our knowledge, primary hydatid cyst of the spinal canal across the intervertebral foramen has been rarely and seldom reported in the literature. This case report describes a rare presentation of primary extradural intraspinal and paravertebral connected primary hydatid disease. It was the first time that the initial diagnosis of the patient was the hydatid cyst disease, but no hydatid involvement was found in other organs except the spinal area, the serological test (ELISA) of echinococcosis was also negative. The purpose of this case report is to share our experience in the diagnosis and treatment of hydatid cyst disease and to review the relevant literature. CASE PRESENTATION Presentation and Examination A 42-year-old Kazakh patient was admitted to the hospital with pain in his right lower back and weakness in both lower limbs with no inducement for 2 months. The pain was aggravated after activity and was relieved after rest, the Visual Analog Scale (VAS) pain score was 7. Neurological examination revealed bilateral lower extremity weakness: left muscle strength was 3/5, the right was 4/5. 
With brisk tendon reflexes and negative Babinski sign, the remainder of the physical examination was unremarkable. Besides, the patient grew up in a pastoral area with close contact with cattle, sheep, horses, and other livestock. Neuroimage The CT scan showed the L1 vertebral body and peripheral bone were hyperostotic (smooth edge), the relevant intervertebral were also enlarged. The MRI scan performed an oval solid-cystic cavity at the right side of the vertebral body of the T12∼L2 vertebrae and the enlarged L1∼L2 intervertebral foramen, the cystic cavity was hypointense on T1-weighted-image (T1WI), hyperintense on T2-weighted-image (T2WI), and T2-fat-suppression-image (T2FS). The multiloculated cystic lesions could be seen inside and had clear boundaries with surrounding tissues. Corresponding intervertebral foramina around the cavity was enlarged, which compressed the conus medullaris of L1 and extended into the adjacent psoas (Figures 1, 2). The L4∼L5 and L5∼S1 intervertebral disc protruded to the left rear. The enhancement MRI image showed the abnormal signals and significantly enhanced septum on the right side of the T12∼L2 paravertebral and the right side of the L1∼L2 intervertebral foramen. According to MRI and pastoral area life history, we highly suspected it as the hydatid cyst disease. The CT and B ultrasound examination of other body parts showed no signs of systemic cyst. Operation We used the posterior straight incision approach of T12∼L2. The incision was made to expose the T12∼L2. The laminectomy was performed for the right part of the vertebral plate and inferior articular process of the T12, the right vertebral plate and attachment of the L1, and superior articular process at the right side of the L2. Entering the incision, the oval solidcystic cavity with clear boundaries located at extradural (across the intervertebral foramen and the dumbbell extension was intraspinal) and had compressed the spinal cord to the left ventral side. The lesion has high tension and tight adhesion to the dura mater and intervertebral foramina. Gauzes were used to protect the incision and the surrounding tissues, we carefully cut the ectocyst to decompress, a whitish, pearly, translucent cystic endocyst was found. After careful separation and loosening, we scratched the edge of the cyst to remove the endocyst as a whole carefully. And loosen the attachment of the dural sac, ensure that it was intact and pulsates well. Then used hydrogen peroxide and 5% hypertonic saline to inject for 10 min, checked again whether there was residual endocyst. Peeled off most of the ectocyst carefully, followed by hydrogen peroxide, 5% hypertonic saline, and dexamethasone irrigated the operative site again to avoid relapse and anaphylactic reaction. Finally, spinal fusion and fixation of T12∼L2 have been performed and the wound was closed in layers. Histopathological Examination In histopathological examination, we found that after Hematoxylin and Eosin (H&E) staining of the tissue, granulomas and inflammatory granulation tissue could be seen, consisting of proliferative fibers, vascular epithelioid cells, and multinucleated giant cells. The hydatid and endocyst's germinal layer could be found (Figure 3). Chronic interstitial cells are present in the stroma and support a severe inflammatory reaction. All histopathology features above revealed it was the hydatid cyst disease. 
Post-operative Course The patient recovered well after surgery, the muscle strength of both lower limbs was restored to level 5/5, and the pain was almost completely relieved (VAS score from 7 to 2), Albendazole session was continued for 12 months (10 mg/kg body weight/day) as for standard anti-infectious therapy of Echinococcus. During the follow-up and reexamination for 6 months after surgery, the patients did not show the same similar symptoms, and there was no sign of recurrence on imaging evaluation. We advise the patient to take the medication on time and to review for further imaging every 6 months. DISCUSSION Spinal hydatid disease is rare and was rarely reported in the literature. The main infectious source of this disease is livestock, most of which are caused by the invasion of Echinococcosis Granulosa into the human body, and a few are caused by Echinococcosis Multilocularis. The former is the more benign form and is characterized by cyst formation, whereas the latter is not encapsulated and presents as a ramifying, porous, and necrotic mass. The prognosis for the multilocular form is extremely poor, with an almost 100% mortality rate (6). Spinal hydatid disease is mostly hematogenous metastasis, due to the rich blood circulation and the slowly flowing blood of the spine area. Therefore, spinal involvement is more common in bone hydatid disease, accounting for about 50% (7). Spinal hydatid disease can occur in any segment of the spine, while 50% of the vertebral involvements are seen in the thoracic area and 20% are in the lumbar area, sacral and pelvic involvement are rare (3). It can be originated in the vertebral canal wall, or it can invade into the spinal canal by sacrospinous muscle hydatid through the intervertebral foramen and compress the spinal cord. Due to the limitation of the surrounding bone wall, the development is slow, but the proliferation and spread can be outward along with the Haversian osteon lamella system. When the cyst breaks through the cortical bone into the spinal canal, the compression of the spinal cord or nerve root may occur, due to various shapes of the space formed by the spinal canal and its surrounding soft tissues, the different shapes of the hydatid cyst can be formed. In this case, vertebral canal involvement was primary, and no signs of hydatid were found in the vertebrae and other organ systems, the serological test (ELISA) was negative, which made the case extremely rare. In our literature review, Parvaresh et al. (8) reported a dumbbell hydatid cyst of the spine, but in their case, spinal involvement of hydatid disease was considered secondary, because the primary lesion was located in the liver, whereas in the presented case spinal involvement was primary. Karakasli et al. (6) reported a primary dumbbell hydatid cyst of the thoracic spine, but the part of the paravertebral was smaller than that in our case. For the case we reviewed, primary hydatid infestation of the spine without any other systemic involvement can be explained through the direct Porto-vertebral venous shunt theory: in rare instances, the disease begins from the extradural area, suggesting that the parasite's embryo is possibly being carried through the Porto-vertebral venous shunts (9)(10)(11). The disease has been reported for various names in different literature: including hydatid disease invasion of the spinal canal, extradural intraspinal hydatid cyst, spinal hydatid disease or hydatid cyst, etc., currently collectively known as spinal hydatid disease. 
The early diagnosis of this case for primary hydatid cyst across the intervertebral foramen was difficult, because such cases were extremely rare, and the patient had no hydatid cysts of other organs and no previous history, the serological test was also negative. There were only abnormal MRI signals and significantly enhanced septum of paravertebral and intervertebral foramina. Therefore, it is easy to be misdiagnosed. We collected the detailed data of all published cases of primary extradural intraspinal together with paravertebral hydatid cyst in Table 1 (exclude primary involvement of other organs and vertebrae. Soft tissue involvement without bony origin and some literature did not mention whether it originated from other organs, which we viewed with skepticism), there were only 11 cases in all, but there were many early misdiagnosed cases among them. Therefore, the early correct diagnosis of this disease was much more important. There should be differential diagnosis of the disease from schwannoma and neurofibromatosis (10,20): the lesion was not multiloculated, multiple loculations being more common with hydatid cyst. Hydatid cysts contain the hydatid fluid component and tend to invade anatomical cavities. Besides, it does not show contrast enhancement but exhibit a cerebrospinal fluid-like signal intensity on MR imaging. Differential diagnosis is also needed to avoid misdiagnosis with the aneurysmal bone cyst, giant cell tumor of bone, isolated bone cysts, arachnoid cysts, fibrocystic diseases, chondrosarcomas, and tuberculosis, etc. (11). Therefore, in the pastoral area, patients with progressive back pain and progressive paraplegia or other spinal cord compression symptoms should be suggestive of the occurrence of the primary disease. If the patient also has a history of hydatid disease in other organ systems, the occurrence of this disease can be highly suspected. The diagnosis of hydatid disease depends on radiology, biology, and serology examinations. Serological diagnosis is usually performed by Enzyme-Linked Immunosorbent Assay (ELISA), Western Blotting, Indirect Hemagglutination Assay (IHA), and Polymerase Chain Reaction (PCR). The sensitivity rates of serological diagnosis of the liver, lungs, and other organs are 80-100%, 50-56%, and 25-56%, respectively (6). Among the 11 cases reviewed, only 1 presented positive serological for IHA, while most of the other cases presented negative serological diagnosis and a few pieces of literature did not mention serological diagnosis. Because the serological test is less sensitive to echinococcosis of the nervous system, the characteristic imaging performance can also support the diagnosis, even if the serological test is negative. So, medical imaging is the main basis for the diagnosis of patients with a suggestive medical history (living in an endemic area) and clinical manifestations. CT or MRI examination is of great significance for the localization, qualitative, and quantitative diagnosis of this disease. Bone destruction has no specificity in imaging, it can be osteolytic destruction, vertebral deformities, or hyperplastic changes, and these changes are more prominent in the thoracic segment. And the symptoms may present in different nerve roots or spinal cord compressions depending on the occurrence of different segments of the spine. 
According to the statistics, the different symptom of spinal cord compression rate of paraparesis at presentation (61-73%), associated or not with back pain (27.8-43%), bladder dysfunction (11.1-32%), sensory loss (24%), and radicular pain (27-60%) (18). In our case, the patient's spinal cord compression symptoms showed lower back pain and paraparesis. Surgical removal of the lesion is the best choice for the treatment of spinal hydatid disease, especially for the treatment of patients with spinal cord compression symptoms, surgical removal of the lesion is still the "gold standard." For the operative approach selection, the lesion is usually deep in the side of the spinal bypass incision approach, which makes it difficult to quickly enter the lesion area and may cause retroperitoneal spread during the operation. Therefore, we need to choose the posterior straight incision approach of the affected spine segment, which can quickly enter the corresponding lesion area and facilitate the complete resection of the lesion. And it is also more convenient for spinal fusion and fixation. For the resection, the removal of hydatid cysts must be thorough, as much as possible to remove all endocyst in the field of surgery. During the operation, spinal cord injury must be avoided, and care should be given to prevent the puncturing of the endocyst wall. Finally, spinal fusion and fixation should be performed to restore spine stability while removing the lesion. Because the damage of hydatid to the spinal cord is mainly mechanical compression, rather than invasive damage. So, if it can be early detection and early surgical removal, the prognosis will be better. In our case, the lesions occurred in the extradural, and the intraspinal lesions were small, with a clear boundary between the internal and external spinal canal and relatively limited adhesion around the surrounding tissues. So, the capsule can be completely removed. Remove the ectocyst completely and avoid residuum of the endocyst, then the disease could be cured. Besides, due to the slow onset of this patient, the large volume of the lower lumbar spinal canal, and it has a certain buffering effect, hydatid cysts extending along the nerve root sheath to the outside of the spine also play a certain role in decompression. And through the fusion fixation of the spine, the patient can do off-bed activity earlier. Therefore, the neurological function recovered well in this case after the operation. Due to the limited space in the spinal canal, if the lesion in the spinal canal is large or at a high position, to avoid excessive compression of the spinal cord during intraoperative operation, the ectocyst can be cut and gauze can be used to protect the surrounding tissues, suction out the fluid of endocyst to decompression, and remove the ectocyst in a whole. If the endocyst ruptures during the operation, the rate of recurrence will be higher. Therefore, it is necessary to avoid intraoperative contamination as much as possible to prevent the hydatid cyst from rupturing and causing it to be planted in the surrounding tissues. Anti-hydatid drugs and appropriate hormones should be used as adjuvant therapy before and after surgery to avoid the occurrence of anaphylactic shock. After the removal of the lesions, it should be washed with hydrogen peroxide and 5% hypertonic saline to prevent recurrence or reduce the recurrence rate, then use dexamethasone water to avoid serious allergic reactions. 
All these measures are used to avoid various symptoms caused by allergic reactions or overflow of the content of the hydatid head section, such as itching, urticaria, edema, dyspnea, asthma, vomiting, diarrhea, colic abdominal pain, and even anaphylactic shock. CONCLUSION Hydatid cyst across the intervertebral foramen has been reported only sporadically in the literature, and the primary cyst is extremely rare. For diagnosis, hydatid cyst disease should be considered when imaging findings indicate cystic changes in the spinal region and the patient has a pastoral life history, even if the serology test is negative. For treatment, the main method is still surgical treatment, and great attention should be paid to completely remove the cyst to avoid extravasation of the cystic fluid to cause spread. As the incidence of this disease is extremely low, rarely encountered in clinical practice, so this case would be reported in the hope of improving the understanding, diagnose it and treat it in time. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. AUTHOR CONTRIBUTIONS YT and MJ performed the image data analyses and wrote the manuscript. XS contributed to perform the operation and give the manuscript preparation. YH helped perform the analysis with constructive discussions. LJ helped for giving the literature review. All authors contributed to the article and approved the submitted version.
2020-11-24T14:11:44.699Z
2020-11-24T00:00:00.000
{ "year": 2020, "sha1": "5523937624d74315f89227873b89950e4927fbf0", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2020.592316/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5523937624d74315f89227873b89950e4927fbf0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125630887
pes2o/s2orc
v3-fos-license
Bootstrapping Average Value at Risk of Single and Collective Risks Almost sure bootstrap consistency of the blockwise bootstrap for the Average Value at Risk of single risks is established for strictly stationary β-mixing observations. Moreover, almost sure bootstrap consistency of a multiplier bootstrap for the Average Value at Risk of collective risks is established for independent observations. The main results rely on a new functional delta-method for the almost sure bootstrap of uniformly quasi-Hadamard differentiable statistical functionals, to be presented here. The latter seems to be interesting in its own right. Introduction One of the most popular risk measures in practice is the so-called Average Value at Risk which is also referred to as Expected Shortfall (see Acerbi and Szekely (2014); Acerbi andTasche (2002a, 2002b); Emmer et al. (2015) and references therein).For a fixed level α ∈ (0, 1), the corresponding Average Value at Risk is the map AV@R α : L 1 → R defined by AV@R α (X) := R α (F X ), where F X refers to the distribution function of X, L 1 is the usual L 1 -space associated with some atomless probability space, and for any F ∈ F 1 with F 1 the set of the distribution functions F X of all X ∈ L 1 .Here, g α (t) := 1 1−α max{t − α; 0} and F ← (s) := inf{x ∈ R : F(x) ≥ s} denotes the left-continuous inverse of F. The statistical functional R α : F 1 → R is sometimes referred to as risk functional associated with AV@R α .Note that AV@R α (X) = E[X|X ≥ F ← X (α)] when F X is continuous at F ← X (α).In this article, we mainly focus on bootstrap methods for the Average Value at Risk.Before doing so, we briefly review nonparametric estimation techniques and asymptotic results for the Average Value at Risk.Given identically distributed observations X 1 , . . ., X n (, X n+1 , . ..) on some probability space (Ω, F , P) with unknown marginal distribution F ∈ F 1 , a natural estimator for R α (F) is the empirical plug-in estimator where ∞) is the empirical distribution function of X 1 , . . ., X n and X 1:n , . . ., X n:n refer to the order statistics of X 1 , . . ., X n .The second representation in Equation (2) shows that R α ( F n ) is a specific L-statistic which was already mentioned in Acerbi (2002); Acerbi and Tasche (2002a); Jones and Zitikis (2003). In particular, if the underlying sequence (X i ) i∈N is strictly stationary and ergodic, classical results of van Zwet (1980) and Gilat and Helmers (1997) show that R α ( F n ) converges P-almost surely to R α (F) as n → ∞, i.e., that strong consistency holds.If X 1 , X 2 , . . .are i.i.d. and F has a finite second moment and takes the value α only once, then a result of Stigler ((Stigler 1974, Theorems 1-2) ) yields the asymptotic distribution of the estimation error: where σ 2 F := g α (F(x 0 ))Γ(x 0 , x 1 )g α (F(x 1 )) dx 0 dx 1 with Γ(x 0 , x 1 ) , and ; refers to convergence in distribution (see also Shorack 1972;Shorack and Wellner 1986).In fact, for independent X 1 , X 2 , . . . the second summand in the definition of Γ(x 0 , x 1 ) vanishes.Results of Beutner and Zähle (2010) show that Equation (3) still holds if (X i ) i∈N is strictly stationary and α-mixing with mixing coefficients α(i) = O(i −θ ) and lim x→∞ (1 − F(x))x 2θ/(θ−1) < ∞ for some θ > 1 + √ 2. 
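To make the plug-in estimator above concrete, the following sketch computes R_α(F̂_n) by integrating the empirical quantile function over (α, 1], which coincides with the L-statistic representation in Equation (2). The function name and the simulated log-normal data are illustrative choices, not part of the paper.

```python
import numpy as np

def empirical_avar(x, alpha):
    """Plug-in estimator R_alpha(F_n): integrate the empirical quantile
    function over (alpha, 1] and divide by (1 - alpha)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = int(np.ceil(n * alpha))            # order-statistic block containing level alpha
    partial = (k / n - alpha) * x[k - 1]   # fractional contribution of X_{k:n}
    tail = x[k:].sum() / n                 # full contribution of X_{k+1:n}, ..., X_{n:n}
    return (partial + tail) / (1.0 - alpha)

# illustrative data: i.i.d. log-normal observations
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(empirical_avar(sample, alpha=0.95))
```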
Tsukahara (2013) obtained the same result.A similar result can also be derived from an earlier work by Mehra and Rao (1975), but under a faster decay of the mixing coefficients and under an additional assumption on the dependence structure.We emphasize that the method of proof proposed by Beutner and Zähle is rather flexible, because it easily extends to other weak and strong dependence concepts and other risk measures (see Beutner et al. 2012;Beutner andZähle 2010, 2016;Krätschmer et al. 2013;Krätschmer and Zähle 2017). Even in the i.i.d.case the asymptotic variance σ 2 F depends on F in a fairly complex way.For the approximation of the distribution of √ n(R α ( F n ) − R α (F)), bootstrap methods should thus be superior to the method of estimating σ 2 F .However, to the best of our knowledge, theoretical investigations of the bootstrap for the Average Value at Risk seem to be rare.According to Gribkova (2016), a result of Gribkova (2002) yields bootstrap consistency for Efron's bootstrap when X 1 , X 2 , . . .are i.i.d, while Theorem 3 of Helmers et al. (1990) seems not to cover the Average Value at Risk, because there the function J (which plays the role of g α ) is assumed to be Lipschitz continuous.In these articles, bootstrap consistency is typically proved by first proving consistency of the bootstrap variance and then using this result by showing that upper bounds for the difference between the sampling distribution and the bootstrap distribution converge to zero.Employing different techniques, Beutner and Zähle (2016) established bootstrap consistency in probability for the multiplier bootstrap when X 1 , X 2 , . . .are i.i.d. as well as bootstrap consistency in probability for the circular bootstrap when X 1 , X 2 , . . .are strictly stationary and β-mixing with mixing coefficients β(i) = O(i −b ) and |x| p dF(x) < ∞ for some p > 2 and b > p/(p − 2).Recently, Sun and Cheng (2018) established bootstrap consistency in probability for the moving blocks bootstrap when X 1 , X 2 , . . .are strictly stationary and α-mixing with mixing coefficients α(i) ≤ cδ i and |x| p dF(x) < ∞ for some p > 4, c > 0 and δ ∈ (0, 1).Strictly speaking, Sun and Cheng did not consider the Average Value at Risk (Expected Shortfall) but the Tail Conditional Expectation in the sense of Acerbi andTasche (2002a, 2002b). The contribution of the article at hand is twofold.First, we extend the results of Beutner and Zähle (2016) on the Average Value at Risk from bootstrap consistency in probability to bootstrap consistency almost surely.Second, we establish bootstrap consistency for the Average Value at Risk of collective risks, i.e., for R α (F * m ) and more general expressions. The rest of the article is organized as follows.In Section 2, we present and illustrate our main results which are proved in Section 3. Section 3 is followed by the conclusions.The proofs of Section 3 rely on a new functional delta-method for the almost sure bootstrap which seems to be interesting in its own right and which is presented in Appendix B. Roughly speaking, the (functional) delta method studies properties of particular estimators for quantities of the form H(θ). 
Here, H is a known functional, such as the Average Value at Risk functional, and θ is a possibly infinite dimensional parameter, such as an unknown distribution function.The particular estimators covered by the (functional) delta method are of the form H( T n ) where T n is an estimator for θ.In general and in the particular application considered here, the appeal of the (functional) delta method lies in the fact that, once "differentiability" of H (here, the Average Value at Risk functional) is established, the asymptotic error distribution of H( T n ) can immediately be derived from the asymptotic error distribution of T n (here F n ).This also applies to the (functional) delta method for the bootstrap where bootstrap consistency of the bootstrapped version of H( T n ) will follow from the respective property of the bootstrapped version of T n (here F n ).Thus, if in financial or actuarial applications the data show dependencies for which the asymptotic error distribution and/or bootstrap consistency of plug-in estimators for the Average Value at Risk have not been established yet, it would be enough to check if for these dependencies the asymptotic error distribution and/or bootstrap consistency of F n is known; thanks to the (functional) delta method the Average Value at Risk functional would inherit these properties.In Appendix A.1, we give results on convergence in distribution for the open-ball σ-algebra which are needed for the main results, and in Appendix A.2 we prove a delta-method for uniformly quasi-Hadamard differentiable maps that is the basis for the method of Appendix B. Readers interested in these methods used to prove the main results might wish to first work through Appendices A and B before reading Sections 2 and 3. The Case of i.i.d. Observations Keep the notation of Section 1. Assume that (X i ) i∈N is a sequence of i.i.d.real-valued random variables on some probability space (Ω, F , P) with distribution function and (W ni ) be a triangular array of nonnegative real-valued random variables on another probability space (Ω , F , P ) such that one of the following two settings is met.S1.The random vector (W n1 , . . ., W nn ) is multinomially distributed according to the parameters n and p 1 = • • • = p n := 1/n for every n ∈ N. S2.W ni = Y i /Y n for every i = 1, . . ., n and n ∈ N, where Y n := 1 n ∑ n j=1 Y j and (Y j ) is any sequence of nonnegative i.i.d.random variables on (Ω , F , P ) with is nothing but Efron's boostrap (Efron 1979).If in Setting S2. the distribution of Y 1 is the exponential distribution with parameter 1, then the resulting scheme is in line with the Bayesian bootstrap of Rubin (1981).Let Theorem 1.In the setting above assume that φ 2 dF < ∞ for some continuous function φ : R → [1, ∞) with 1/φ(x) dx < ∞ (in particular F ∈ F 1 ), and that F takes the value α only once.Then Theorem 1 is a special case of Corollary 1 below.For the bootstrap Scheme S1. the result of Theorem 1 can be also deduced from Theorem 7 in Gribkova (2002).According to Gribkova (2016), Condition (1) of this theorem is satisfied if there are 0 = a 0 < a 1 < • • • < a k = 1 for some k ∈ N such that J is Hölder continuous on each interval (a i−1 , a i ), 1 ≤ i ≤ k, and the measure dF −1 has no mass at the points a 1 , . . ., a k−1 .For the bootstrap Scheme S2. the result seems to be new. 
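The two weighting schemes can be put into code directly. The sketch below is a minimal, hypothetical implementation (the names weighted_avar and bootstrap_avar are illustrative): it draws multinomial weights for Setting S1 (Efron's bootstrap) and normalized Exp(1) weights for Setting S2 (the Bayesian bootstrap), and returns Monte Carlo draws of √n(R_α(F*_n) − R_α(F̂_n)), whose empirical law approximates the limit in Theorem 1.

```python
import numpy as np

def weighted_avar(x, w, alpha):
    """AVaR at level alpha of the weighted empirical distribution sum_i w_i * delta_{x_i}."""
    order = np.argsort(x)
    x, w = np.asarray(x, float)[order], np.asarray(w, float)[order]
    cum = np.cumsum(w)
    prev = np.concatenate(([0.0], cum[:-1]))
    # probability mass of each atom lying above the level alpha
    mass_above = np.clip(np.minimum(cum, 1.0) - np.maximum(prev, alpha), 0.0, None)
    return (x * mass_above).sum() / (1.0 - alpha)

def bootstrap_avar(x, alpha, n_boot=2000, scheme="efron", seed=None):
    """Monte Carlo draws of sqrt(n) * (R_alpha(F*_n) - R_alpha(F_n))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x)
    center = weighted_avar(x, np.full(n, 1.0 / n), alpha)   # R_alpha(F_n)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        if scheme == "efron":          # Setting S1: multinomial weights
            w = rng.multinomial(n, np.full(n, 1.0 / n)) / n
        else:                          # Setting S2: normalized Exp(1) weights
            y = rng.exponential(size=n)
            w = y / y.sum()
        draws[b] = weighted_avar(x, w, alpha)
    return np.sqrt(n) * (draws - center)
```

Quantiles of the returned draws can then serve as bootstrap approximations of the quantiles of the limiting distribution, for instance when building confidence intervals for R_α(F).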
We now consider the collective risk model.Let (X i ) i∈N and F n be as above, and let p = (p k ) k∈N 0 be the counting density of a distribution on N 0 .Let F denote the set of all distribution functions on R, and consider the functional Theorem 2. In the setting above assume that |x| 2λ dF(x) < ∞ for some λ > 1 (in particular F ∈ F 1 ) and ∑ ∞ k=1 p k k 1+λ < ∞, and that C p (F) takes the value α only once.Then, and Theorem 2 is a special case of Corollary 4 below.Lauer andZähle (2015, 2017) derive the asymptotic distribution as well as almost sure bootstrap consistency for the Average Value at Risk (and more general risk measures) of F * m n when m n /n is asymptotically constant, but we do not know any result in the existing literature which is comparable to that of Theorem 2. The Case of β-Mixing Observations Keep the notation of Section 1. Assume that (X i ) i∈N is a strictly stationary sequence of β-mixing random variables on (Ω, F , P) with distribution function F. As before let be a sequence of integers such that n ∞ as n → ∞, and n < n for all n ∈ N. Set k n := n/ n for all n ∈ N. Let (I nj ) n∈N, 1≤j≤k n be a triangular array of random variables on (Ω , F , P ) such that I n1 , . . . ,I nk n are i.i.d.according to the uniform distribution on {1, . . ., n − n + 1} for every n ∈ N. Let (Ω, F , P) := (Ω × Ω , F ⊗ F , P ⊗ P ) and Note that the sequence (X i ) and the triangular array (W ni ) regarded as families of random variables on the product space (Ω, F , P) := (Ω × Ω , F ⊗ F , P ⊗ P ) are independent.At an informal level, this means that, given a sample X 1 , . . ., X n , we pick k n − 1 blocks of length n and one block of length n − (k n − 1) n in the sample X 1 , . . ., X n , where the start indices I n1 , I n2 , . . ., I nk n are chosen independently and uniformly in the set of indices {1, . . ., n − n + 1}: block 1: The bootstrapped empirical distribution function F * n is then defined to be the distribution function of the discrete probability measure with atoms X 1 , . . ., X n carrying masses W n1 , . . ., W nn , respectively, where W ni specifies the number of blocks which contain X i .This is known as the blockwise bootstrap (see, e.g., Bühlmann (1994Bühlmann ( , 1995) ) and references therein).Assume that the following assertions hold: A1. φ p dF < ∞ for some p > 4 (in particular F ∈ F 1 ).A2.The sequence of random variables (X i ) is strictly stationary and β-mixing with mixing coefficients (β i ) satisfying β i ≤ cδ i for some constants c > 0 and δ ∈ (0, 1).A3.The block length n satisfies n = O(n γ ) for some γ ∈ (0, 1/2). and note that which can be verified easily.Let σ 2 F := g α (F(x 0 ))Γ(x 0 , x 1 )g α (F(x 1 )) dx 0 dx 1 with Γ(x 0 , x 1 ) : Theorem 3. In the setting above (in particular under A1.-A3.)assume that F takes the value α only once.Then, we have Theorem 3 is a special case of Corollary 1 below.To the best of our knowledge, there does not yet exist any result on almost sure bootstrap consistency for the Average Value at Risk when the underlying data are dependent. Bootstrapping the Down Side Risk of an Asset Price Let (A i ) i∈N 0 be the price process of an asset.Let us assume that it is induced by an initial state A 0 ∈ R + and a sequence of R + -valued i.i.d.random variables (R i ) i∈N via A i := R i A i−1 , i ∈ N. Here, R i is the return of the asset in between time i − 1 and time i.For instance, if A 0 , A 1 , A 2 , . . 
.are the observations of a time-continuous Black-Scholes-Merton model with drift µ and volatility σ at the points of the time grid {0, h, 2h, . ..}, then the distribution of R 1 is the log-normal distribution with parameters (µ − σ 2 /2)h and σ 2 h.However, the adequacy of a specific parametric model is usually hard to verify.For this reason, we do not restrict ourselves to any particular parametric structure for the dynamics of (R i ) i∈N . Let us assume that we can observe the asset prices A 0 , . . ., A n up to time n, and that we are interested in the Average Value at Risk at level α of the negative price change A n − A n+1 (which specifies the down side risk of the asset) in between time n and n + 1.That is, since for any a 0 , . . ., . copies of X, we can use R α ( F n ) as an estimator for R α (F) and derive from Equation (4) an asymptotic confidence interval at a given level τ ∈ (0, 1) for R α (F) where one has to estimate σ 2 F by )) dx 0 dx 1 .As the estimator for σ 2 F depends on F n in a somewhat complex way, the bootstrap confidence interval at level τ derived from Equations ( 4) and ( 5) is supposed to have a slightly better performance.Here, q * t (ω) denotes a t-quantile of (a Monte Carlo approximation of) the distribution of the left-hand side in Equation ( 5) for fixed ω.For Equations ( 4) and ( 5) it suffices to assume that E[|R 1 | 2+ε ] < ∞ for some arbitrarily small ε > 0. Bootstrapping the Total Risk Premium in Insurance Models In actuarial mathematics, the collective risk model is frequently used for modeling the total claim distribution of an insurance collective.If the counting density p = (p k ) k∈N 0 corresponds to the distribution of the random number N of claims caused by the whole collective within one insurance period, and if X 1 , . . ., X N (, X N+1 , . ..) denote the i.i.d.sizes of the corresponding claims with marginal distribution F, then C p (F) is the distribution of the total claim ∑ N i=1 X i (the latter sum is set to 0 if N = 0).Now, R α (C p (F)) is a suitable insurance premium for the whole collective when the Average Value at Risk at level α is considered to be a suitable premium principle. Assume that p is known, for instance p m = 1 for some fixed m ∈ N, and let X 1 , . . ., X n be observed historical (i.i.d.) claims with n large.On the one hand, the construction of an exact confidence interval for R α (C p (F)) at level τ ∈ (0, 1) based on X 1 , . . ., X n is hardly possible.Likewise, the performance of an asymptotic confidence interval at level τ derived from Equation ( 6) with (nonparametrically) estimated σ 2 p,F is typically only moderate.Take into account that σ 2 p,F depends on the unknown F in a fairly complex way.On the other hand, the bootstrap confidence interval at level τ derived from Equation ( 7) should have a better performance.Here, q * t (ω) denotes a t-quantile of (a Monte Carlo approximation of) the distribution of the left-hand side in Equation ( 7) for fixed ω. 
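The exact plug-in value R_α(C_p(F̂_n)) would require convolution powers of the discrete empirical claim distribution; in practice the premium and its bootstrap distribution are usually approximated by simulation. The sketch below is such a Monte Carlo approximation under stated assumptions: the function names, the Poisson claim-number law, and the log-normal example claims are illustrative and not prescribed by the paper.

```python
import numpy as np

def avar(sample, alpha):
    """Plug-in Average Value at Risk of a sample."""
    s = np.sort(np.asarray(sample, dtype=float))
    n = len(s)
    k = int(np.ceil(n * alpha))
    return ((k / n - alpha) * s[k - 1] + s[k:].sum() / n) / (1.0 - alpha)

def compound_premium(claims, alpha, claim_number_sampler, n_sim=100_000, seed=None):
    """Monte Carlo approximation of R_alpha(C_p(F_n)): simulate totals
    sum_{i=1}^{N} X_i* with N drawn from p and the X_i* resampled from
    the observed claims, then take the plug-in AVaR of the totals."""
    rng = np.random.default_rng(seed)
    claims = np.asarray(claims, dtype=float)
    counts = claim_number_sampler(rng, n_sim)        # N_1, ..., N_{n_sim} ~ p
    totals = np.array([claims[rng.integers(0, len(claims), size=c)].sum() if c > 0 else 0.0
                       for c in counts])
    return avar(totals, alpha)

# illustrative example: Poisson(20) claim numbers, observed log-normal claims
rng = np.random.default_rng(1)
observed_claims = rng.lognormal(mean=0.0, sigma=1.0, size=5_000)
print(compound_premium(observed_claims, alpha=0.99,
                       claim_number_sampler=lambda g, size: g.poisson(20, size)))
```

Rerunning the simulation with bootstrap weights on the observed claims (for instance the schemes S1 or S2 above) gives an approximation of the bootstrap distribution underlying the confidence interval in Equation (7).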
Note that Theorem 2 ensures that Equations ( 6) and ( 7) hold true when the marginal distribution F of the X i is any log-normal distribution, any Gamma distribution, any Pareto distribution with tail index greater than 2, or any convex combination of one of these distributions with the Dirac measure δ 0 , and the counting density p corresponds to any Dirac measure with atom in N, any binomial distribution, any Poisson distribution, or any geometric distribution.The former distributions are classical examples for the single claim distribution and the latter distributions are classical examples for the claim number distribution. Proofs of Main Results Here, we prove the results of Section 2. In fact, Theorems 1-3 are special cases of Corollaries 1 and 4. The latter corollaries are proved with the help of the technique introduced in Appendix B.2, which in turn avails the concept of uniform quasi-Hadamard differentiability (see Definition A1 in Appendix B.1). Keep the notation introduced in Section 1.Let D be the space of all cádlág functions v on R with finite sup-norm v ∞ := sup t∈R |v(t)|, and D be the σ-algebra on D generated by the one-dimensional coordinate projections π t , t ∈ R, given by π t (v) := v(t).Let φ : R → [1, ∞) be a weight function, i.e., a continuous function being non-increasing on (−∞, 0] and non-decreasing on [0, ∞).Let D φ be the subspace of D consisting of all x ∈ D satisfying v φ := vφ ∞ < ∞ and lim |t|→∞ |v(t)| = 0.The latter condition automatically holds when lim |t|→∞ φ(t) = ∞.We equip D φ with the trace σ-algebra of D, and note that this σ-algebra coincides with the σ-algebra B • φ on D φ generated by the • φ -open balls (see Lemma 4.1 in Beutner and Zähle ( 2016)). Average Value at Risk functional Using the terminology of Part (i) of Definition A1, we obtain the following result. Proposition 1.Let F ∈ F 1 and assume that F takes the value α only once.Let S be the set of all sequences (G n ) ⊆ F 1 with G n → F pointwise.Moreover, assume that 1/φ(x) dx < ∞.Then, the map R α : F 1 (⊆ D) → R is uniformly quasi-Hadamard differentiable with respect to S tangentially to D φ D φ , and the uniform quasi-Hadamard derivative Ṙα;F : where as before g Proposition 1 shows in particular that for any F ∈ F 1 which takes the value α only once, the map R α : F 1 (⊆ D) → R is uniformly quasi-Hadamard differentiable at F tangentially to D φ D φ (in the sense of Part (ii) of Definition A1) with uniform quasi-Hadamard derivative given by Equation (13). Proof.(of Proposition 1) First, note that the map Ṙα;F defined in Equation ( 13) is continuous with respect to • φ , because Let us denote the integrand of the integral in Equation ( 14) by we have lim n→∞ F n (x) = F(x) and lim n→∞ (F n (x) + ε n v n (x)) = F(x) for every x ∈ R. Thus, for every x ∈ R with F(x) < al pha, we obtain g α (F(x))v(x) = 0 and i.e., lim n→∞ I n (x) = 0. Since we assumed that F takes the value α only once, we can conclude that lim n→∞ I n (x) = 0 for Lebesgue-a.e. x ∈ R.Moreover, by the Lipschitz continuity of g α with Lipschitz constant 1 1−α we have , the assumption 1/φ(x) dx < ∞ ensures that the latter expression provides a Borel measurable majorant of I n .Now, the Dominated Convergence theorem implies Equation ( 14). As an immediate consequence of Corollary A4, Examples A1 and A2, and Proposition 1, we obtain the following corollary. Corollary 1.Let F, F n , F * n , C n , and B F be as in Example A1 (S1. or S2.) 
or as in Example A2 respectively, and assume that the assumptions discussed in Example A1 or in Example A2 respectively are fulfilled for some weight function φ with 1/φ(x) dx < ∞ (in particular F ∈ F 1 ).Moreover, assume that F takes the value α only once.Then, Compound Distribution Functional Let C p : F → F be the compound distribution functional introduced in Section 2.1.For any λ ≥ 0, let the function φ λ : R → [1, ∞) be defined by φ λ (x) := (1 + |x|) λ and denote by F φ λ the set of all distribution functions F that satisfy φ λ (x) dF(x) < ∞.Using the terminology of Part (ii) of Definition A1, we obtain the following Proposition 2. In the proposition, the functional C p is restricted to the domain F φ λ in order to obtain D φ λ as the corresponding trace.The latter will be important for Corollary 3. where as before H p,F := ∑ ∞ k=1 k p k F * (k−1) .In particular, if p m = 1 for some m ∈ N, then Proposition 2 extends Proposition 4.1 of Pitts (1994).Before we prove the proposition, we note that the proposition together with Corollary A4 and Examples A1 and A2 yields the following corollary. Corollary 2. Let F, F n , F * n , C n , and B F be as in Example A1 (S1. or S2.) or as in Example A2 respectively, and assume that the assumptions discussed in Example A1 or in Example A2 respectively are fulfilled for φ = φ λ for some λ > 0.Then, for λ ∈ [0, λ) To ease the exposition of the proof of Proposition 2, we first state a lemma that follows from results given in Pitts (1994).In the sequel we use f * H to denote the function defined by f * H(•) := v( • − x) dH(x) for any measurable function f and any distribution function H of a finite (not necessarily probability) Borel measure on R for which f * H(•) is well defined on R. Then, the following two assertions hold. (i) There exists a constant C 1 > 0 such that for every k, n ∈ N Proof.(i): From Equation (2.4) in Pitts (1994) we have so that it remains to show that |x| λ dF n (x) is bounded above uniformly in n ∈ N. The functions Pitts (1994)).Therefore, |x| λ dF n (x) ≤ C 1 for some suitable finite constant C 1 > 0 and all n ∈ N. (ii): With the help of Lemma 2.3 of Pitts (1994) (along with Pitts (1994), and Equation (2.4) in Pitts (1994), we obtain It hence remains to show that |x| λ dF n (x) and |x| λ dG n (x) are bounded above uniformly in n ∈ N.However, this was already done in the proof of Part (i). Proof.Proof of Proposition 2. First, note that for G 1 , G 2 ∈ F φ λ , we have by Equation (2.1) in Pitts (1994).Moreover, according to Lemma 2.2 in Pitts (1994), we have that the integrals |x| λ dC p (F)(x) and |x| λ dC p (G)(x) are finite under the assumptions of the proposition.Hence, D φ λ can indeed be seen as the trace.Second, we show ( where the first and the second inequality follow from Lemma 2.3 and Equation (2.4) in Pitts (1994) respectively.Hence, Now, the series converges due to the assumptions, and with the usual convention that the sum over the empty set equals zero.We find that for every M ∈ N where for the third "=" we use the fact that for By Part (ii) of Lemma reflemma preceding qHD of compound (this lemma can be applied since Since λ < λ and v n − v φ λ → 0, we have v n φ λ ≤ K 1 for some finite constant K 1 > 0 and all n ∈ N. Hence, the right-hand side of Equation ( 17) can be made arbitrarily small by choosing M large enough.That is, S 1 (n, M) can be made arbitrarily small uniformly in n ∈ N by choosing M large enough. 
Furthermore, it is demonstrated in the proof of Proposition 4.1 of Pitts (1994) that S 3 (M) can be made arbitrarily small by choosing M large enough. Next, applying again Part (ii) of Lemma 1, we obtain It remains to consider the summand We show that for M fixed this term can be made arbitrarily small by letting n → ∞.This would follow if for every given k ∈ {1, . . ., M} and ∈ {0, . . ., k − 1} the expression for some suitable finite constant c(λ , v) > 0 depending only on λ and v.The first inequality in Equation ( 18) is obvious (and holds for any v ∈ D φ λ ).The second inequality in Equation ( 18) is obtained by applying Lemma 2.3 of Pitts (1994) to the first summand (noting that Pitts (1994) to the second summand (which requires that v is as described above), and by applying Lemma 2.3 of Pitts (1994) to the third summand. We now consider the three summands on the right-hand side of Equation ( 18) separately.We start with the third term.Since v ∈ D φ λ , Lemma 4.2 of Pitts (1994) ensures that we may assume that v is chosen such that v − v φ λ is arbitrarily small.Hence, for fixed M the third summand in Equation ( 18) can be made arbitrarily small. We next consider the the second summand in Equation ( 18).Obviously, We start by considering the first summand in Equation ( 19).In view of Equation ( 16), it can be written as Applying Lemma 2.3 of Pitts (1994) where we applied Part (i) of Lemma 1 to 1 [0,∞) − F * n φ λ to obtain the last inequality.Hence, for the left-hand side of Equation ( 20) to go to zero as n → ∞ it suffices to show that (ε where we applied Part (ii) of Lemma 1 with v = ε n v n to all summands in For every k and ∈ {0, . . ., k − 1} this expression goes indeed to zero as n → ∞, because, as mentioned before, v n φ λ is uniformly bounded in n ∈ N, and we have ε n → 0. Next, we consider the second summand in Equation ( 19).Applying Equation ( 16) to F * (k−1) n and F * (k−1) and subsequently Part (ii) of Lemma 1 to the summands in H k−1 (F n , F), we have Clearly for every k this term goes to zero 0 as n → ∞, because by assumption.This together with the fact that Equation (20) goes to zero 0 as n → ∞ shows that Equation ( 19) goes to zero in • φ λ as n → ∞.Therefore, the second summand in Equation ( 18) goes to zero as n → ∞. It remains to consider the first term in Equation ( 18).We find where for the last inequality we used Formula (2.4) of Pitts (1994).In the following, Equation ( 19) we showed that n φ λ goes to zero as n → ∞ for every k and ∈ {0, . . ., k − 1}.Hence, for every such k and , it is uniformly bounded in n ∈ N. Therefore, we can make Equation ( 22) arbitrarily small by making v − v φ λ small which, as mentioned above, is possible according to Lemma 4.2 of Pitts (1994).This finishes the proof. Composition of Average Value at Risk Functional and Compound Distribution Functional Here, we consider the composition of the Average Value at Risk functional R α defined in Equation ( 1) and the compound distribution functional C p introduced in Section 2.1.As a consequence of Propositions 1 and 2, we obtain the following Corollary 3. Note that, for any λ > 1, Lemma 2.2 in Pitts (1994) , and assume that C p (F) takes the value α only once.Then, the map T α,p := R α • C p : F φ λ (⊆ D) → R is uniformly quasi-Hadamard differentiable at F tangentially to D φ λ D φ λ , and the uniform quasi-Hadamard derivative Ṫα,p;F : with g α and v * H p,F as in Proposition 1 and 2, respectively. 
Proof.We intend to apply Lemma A1 to H = C p : (for which we applied Lemma 2.3 and Inequality (2.4) in Pitts (1994)), the convergence of the latter series (which holds by assumption), and v φ λ ≤ v φ λ < ∞.Further, it follows from Proposition 1 that the map R α is uniformly quasi-Hadamard differentiable tangentially to D φ λ D φ λ at every distribution function of F φ λ that takes the value 1 − α only once.This is Assumption (c) of Lemma A1.It remains to show that Assumption (a) of Lemma A1 also holds true.In the present setting, Assumption (a) means that for every sequence where we used Equation ( 16) for the second "=" and applied Part (ii) of Lemma 1 to the summands of H k to obtain the latter inequality.Since the series converges, we obtain As an immediate consequence of Corollary A4, Examples A1 and A2, and Corollary 3, we obtain the following corollary. Corollary 4. Let F, F n , F * n , C n , and B F be as in Example A1 (S1. or S2.) or as in Example A2, respectively, and assume that the assumptions discussed in Example A1 or in Example A2 respectively are fulfilled for φ = φ λ for some λ > 1 (in particular F ∈ F 1 ).Moreover, assume ∑ ∞ k=1 p k k 1+λ < ∞ and that C p (F) takes the value α only once.Then, Conclusions In this paper, we consider the sub-additive risk measure Average Value at Risk and presented in Sections 2.1 and 2.2 results on almost sure bootstrap consistency for the corresponding empirical plug-in estimator based on i.i.d. or strictly stationary, geometrically β-mixing observations.Our results supplement those by Beutner and Zähle (2016) on bootstrap consistency in probability and those by Sun and Cheng (2018) on bootstrap consistency in probability for the Tail Conditional Expectation (which is not sub-additive).In Section 2.1, we also look at the case where one is interested in Average Value of Risk in the collective risk model.Note that one might interpret the collective risk model as a pooling of independent risks.In the context of Solvency II, pooling of risks has received increased attention (see, for example, Bølviken and Guillen 2017).However, one should keep in mind that our results of Section 2.1 can typically not be applied in the Solvency II context.In Solvency II applications risks are usually dependent, whereas in the collective risk model the different risks (claims) are assumed to be independent. Appendix A. Convergence in Distribution • Let (E, d) be a metric space and B • be the σ-algebra on E generated by the open balls B r (x) := {y ∈ E : d(x, y) < r}, x ∈ E, r > 0. We refer to B • as open-ball σ-algebra.If (E, d) is separable, then B • coincides with the Borel σ-algebra B. If (E, d) is not separable, then B • might be strictly smaller than B and thus a continuous real-valued function on E is not necessarily (B • , B(R))-measurable.Let C • b be the set of all bounded, continuous and (B • , B(R))-measurable real-valued functions on E, and M • 1 be the set of all probability measures on (E, B • ). Let X n be an (E, B • )-valued random variable on some probability space (Ω n , F n , P n ) for every n ∈ N 0 .Then, referring to Billingsley (1999, sct. 1.6) In this case, we write X n ; • X 0 .This is the same as saying that the sequence ( Beutner and Zähle (2016).It is worth mentioning that two probability measures µ, ν ∈ M (Billingsley 1999, Theorem 6.2)). 
In Appendices A-C in Beutner and Zähle (2016), several properties of convergence in distribution • (and weak • convergence) have been discussed.The following two subsections complement this discussion. Appendix A.1.Slutsky-Type Results for the Open-Ball σ-Algebra For a sequence (X n ) of (E, B • )-valued random variables that are all defined on the same probability space (Ω, F , P), the sequence (X n ) is said to converge in probability • to X 0 if the mappings ω → d(X n (ω), X 0 (ω)), n ∈ N, are (F , B(R + ))-measurable and satisfy In this case, we write X n → p,• X 0 .The superscript • points to the fact that measurability of the mapping ω → d(X n (ω), X 0 (ω)) is a requirement of the definition (and not automatically valid).Note, however, that in the specific situation where X 0 ≡ x 0 for some x 0 ∈ E, measurability of the mapping ω → d(X n (ω), X 0 (ω)) does hold (see Lemma B.3 in Beutner and Zähle (2016)).In addition, note that the measurability always hold when (E, d) is separable; in this case, we also write → p instead of → p,• .Theorem A1.Let (X n ) and (Y n ) be two sequences of (E, B • )-valued random variables on a common probability space (Ω, F , P), and assume that the mapping ω → d(X n (ω), Y n (ω)) is (F , B(R + ))-measurable for every n ∈ N. Let X 0 be an (E, B • )-valued random variable on some probability space (Ω 0 , F 0 , P 0 ) with P 0 [X 0 ∈ Proof.In view of X n ; • X, we obtain for every fixed f ∈ BL Since f lies in BL • 1 and we assumed d(X n , Y n ) → p 0, we also have , where the inclusion may be strict. Corollary A1.Let (X n ) and (Y n ) be two sequences of (E, B • )-valued random variables on a common probability space (Ω, F , P).Let X 0 be an (E, B • )-valued random variable on some probability space (Ω 0 , F 0 , P 0 ) Let ( E, d) be a metric space equipped with the corresponding open-ball σ-algebra B • .Then, X n ; • X 0 and Y n → p,• y 0 together imply: Proof.Assertion (ii) is an immediate consequence of Assertion (i) and the Continuous Mapping theorem in the form of (Billingsley 1999, Theorem 6.4); take into account that (X 0 , y 0 ) takes values only Then, the following two assertions hold: Proof.The proof is very similar to the proof of Theorem C.4 in Beutner and Zähle (2016). Moreover, define the map h 0 : Now, the claim would follow by the extended Continuous Mapping theorem in the form of Theorem C.1 in Beutner and Zähle (2016) applied to the functions h n , n ∈ N 0 , and the random variables Third, the map h 0 is continuous by the definition of the quasi-Hadamard derivative.Thus, h 0 is (B • 0 , B • )-measurable, because the trace σ-algebra B • 0 := B • ∩ E 0 coincides with the Borel σ-algebra on E 0 (recall that E 0 is separable).In particular, ḢS (ξ) is (F 0 , B • )-measurable.(ii): For every n ∈ N, let E n and h n be as above and define the map h n : E n → E by Moreover, define the map h For Equation (A5), it suffices to show that the assumption of the extended Continuous Mapping theorem in the form of Theorem C.1 in Beutner and Zähle (2016) applied to the functions h n and ξ n (as defined above) are satisfied.The claim then follows by Theorem C.1 in Beutner and Zähle (2016).First, we have already observed that ξ n (Ω n ) ⊆ E n and ξ 0 (Ω 0 ) ⊆ E 0 .Second, we have seen in the proof of Part Bauer (2001) shows that the map 2016) is ensured by Assumption (d) and the continuity of the extended map ḢS at every point of E 0 (recall Assumption (f)).Hence, Equation (A5) holds. 
By Assumption (g) and the ordinary Continuous Mapping theorem (see (Billingsley 1999, Theorem 6.4)) applied to Equation (A5) and the map h By Proposition B.4 in Beutner and Zähle (2016), we can conclude Equation (A4). The following lemma provides a chain rule for uniformly quasi-Hadamard differentiable maps (a similar chain rule with different S was found in Varron ( 2015)).To formulate the chain rule, let V be a further vector space and E ⊆ V be a subspace equipped with a norm Let E 0 and E 0 be subsets of E and E, respectively.Let S and S be sets of sequences in V H and V H , respectively, and assume that the following three assertions hold. (b) H is uniformly quasi-Hadamard differentiable with respect to S tangentially to E 0 E with trace E and uniform quasi-Hadamard derivative ḢS : E 0 → E, and we have ḢS (E 0 ) ⊆ E 0 .(c) H is uniformly quasi-Hadamard differentiable with respect to S tangentially to E 0 E with trace E and uniform quasi-Hadamard derivative ˙ H S : E 0 → E. Then, the map T := H • H : V H → V is uniformly quasi-Hadamard differentiable with respect to S tangentially to E 0 E with trace E, and the uniform quasi-Hadamard derivative ṪS is given by ṪS := ˙ H S • ḢS . Proof.Obviously, since H(V H ) ⊆ V H and H is associated with trace E, the map H • H can also be associated with trace E. Note that by assumption, H(θ n ) ∈ V H and in particular (H(θ n )) ∈ S. By the uniform quasi-Hadamard differentiability of H with respect to S tangentially to E 0 E with trace E, because H is associated with trace E and ḢS (E 0 ) ⊆ E 0 .Hence, by the uniform quasi-Hadamard differentiability of H with respect to S tangentially to E 0 E , we obtain This completes the proof. Appendix B. Delta-Method for the Bootstrap The functional delta-method is a widely used technique to derive bootstrap consistency for a sequence of plug-in estimators with respect to a map H from bootstrap consistency of the underlying sequence of estimators.An essential limitation of the classical functional delta-method for proving bootstrap consistency in probability (or outer probability) is the condition of Hadamard differentiability on H (see Theorem 3.9.11 of van der Vaart Wellner (1996)).It is commonly acknowledged that Hadamard differentiability fails for many relevant maps H. Recently, it was demonstrated in Beutner and Zähle (2016) that a functional delta-method for the bootstrap in probability can also be proved for quasi-Hadamard differentiable maps H. Quasi-Hadamard differentiability is a weaker notion of "differentiability" than Hadamard differentiability and can be obtained for many relevant statistical functionals H (see, e.g., Beutner et al. 2012;Beutner andZähle 2010, 2012;Krätschmer et al. 2013;Krätschmer and Zähle 2017).Using the classical functional delta-method to prove almost sure (or outer almost sure) bootstrap consistency for a sequence of plug-in estimators with respect to a map H from almost sure (or outer almost sure) bootstrap consistency of the underlying sequence of estimators requires uniform Hadamard differentiability on H (see Theorem 3.9.11 of van der Vaart Wellner (1996)).In this section, we introduce the notion of uniform quasi-Hadamard differentiability and demonstrate that one can even obtain a functional delta-method for the almost sure bootstrap and uniformly quasi-Hadamard differentiable maps H. 
To explain the background and the contribution of this section more precisely, assume that we are given an estimator T n for a parameter θ in a vector space, with n denoting the sample size, and that we are actually interested in the aspect H(θ) of θ.Here, H is any map taking values in a vector space.Then, H( T n ) is often a reasonable estimator for H(θ).One of the main objects in statistical inference is the distribution of the error H( T n ) − H(θ), because the error distribution can theoretically be used to derive confidence regions for H(θ).However, in applications, the exact specification of the error distribution is often hardly possible or even impossible.A widely used way out is to derive the asymptotic error distribution, i.e., the weak limit µ of law{a n (H( T n ) − H(θ))} for suitable normalizing constants a n tending to infinity, and to use µ as an approximation for µ n := law{a n (H( T n ) − H(θ))} for large n.Since µ usually still depends on the unknown parameter θ, one should use the notation µ θ instead of µ.In particular, one actually uses µ T n := µ θ | θ= T n as an approximation for µ n for large n.Not least because of the estimation of the parameter θ of µ θ , the approximation of µ n by µ T n is typically only moderate.An often more efficient alternative technique to approximate µ n is the bootstrap.The bootstrap has been introduced by Efron (1979) and many variants of his method have been introduced since then.One may refer to Davison and Hinkley (1997); Efron (1994); Lahiri (2003); Shao and Tu (1995) for general accounts on this topic.The basic idea of the bootstrap is the following.Re-sampling the original sample according to a certain re-sampling mechanism (depending on the particular bootstrap method) one can sometimes construct a so-called bootstrap version T * n of T n for which the conditional law of a n (H( T * n ) − H( T n )) "given the sample" has the same weak limit µ θ as the law of a n (H( T n ) − H(θ)) has.The latter is referred to as bootstrap consistency.Since T * n depends only on the sample and the re-sampling mechanism, one can at least numerically determine the conditional law of a n (H( T * n ) − H( T n )) "given the sample" by means of a Monte Carlo simulation based on L n repetitions.The resulting law µ * L can then be used as an approximation of µ n , at least for large n. (ii) If S consists of all sequences (θ n ) ⊆ V H with θ n − θ ∈ E, n ∈ N, and θ n − θ E → 0 for some fixed θ ∈ V H , then we replace the phrase " with respect to S" by "at θ" and " ḢS " by " Ḣθ ". (iii) If S consists only of the constant sequence θ n = θ, n ∈ N, then we skip the phrase "uniformly" and replace the phrase " with respect to S" by "at θ" and " ḢS " by " Ḣθ ".In this case, we may also replace "H(y 1 ) − H(y 2 ) ∈ E for all y 1 , y 2 ∈ V H " by "H(y) − H(θ) ∈ E for all y ∈ V H ". (iv) If E = V, then we skip the phrase "quasi-".(v) If E = V, then we skip the phrase "with trace E". The conventional notion of uniform Hadamard differentiability as used in Theorem 3.9.11 of van der Vaart Wellner (1996) corresponds to the differentiability concept in (i) with S as in (ii), E as in (iv), and E as in (v).Proposition 1 shows that it is beneficial to refrain from insisting on E = V as in (iv).It was recently discussed in Belloni et al. (2017) that it can be also beneficial to refrain from insisting on the assumption of (ii).For E = V ("non-quasi" case), uniform Hadamard differentiability in the sense of Definition B.1 in Belloni et al. 
(2017) corresponds to uniform Hadamard differentiability in the sense of our Definition A1 (Parts (i) and (iv)) when S is chosen as the set of all sequences Belloni et al. (2017), it is illustrated by means of the quantile functional that this notion of differentiability (subject to a suitable choice of (K θ , d K )) is strictly weaker than the notion of uniform Hadamard differentiability that was used in the classical delta-method for the almost sure bootstrap, Theorem 3.9.11 in van der Vaart Wellner (1996).Although this shows that the flexibility with respect to S in our Definition A1 can be beneficial, it is somehow even more important that we allow for the "quasi" case. Of course, the smaller the family S the weaker the condition of uniform quasi-Hadamard differentiability with respect to S. On the other hand, if the set S is too small, then Condition (e) in Theorem A4 ahead may fail.That is, for an application of the functional delta-method in the form of Theorem A4 the set S should be large enough for Condition (e) to be fulfilled and small enough for being able to establish uniform quasi-Hadamard differentiability with respect to S of the map H. We now turn to the abstract delta-method.As mentioned in Section 1, convergence in distribution will always be considered for the open-ball σ-algebra.We use the terminology convergence in distribution • (symbolically ; • ) for this sort of convergence; for details see Appendix A and Appendices A-C of Beutner and Zähle (2016).In a separable metric space the notion of convergence in distribution • boils down to the conventional notion of convergence in distribution for the Borel σ-algebra.In this case, we use the symbol ; instead of ; • . Let (Ω, F , P) be a probability space, and ( T n ) be a sequence of maps Regard ω ∈ Ω as a sample drawn from P, and T n (ω) as a statistic derived from ω. Somewhat unconventionally, we do not (need to) require at this point that T n is measurable with respect to any σ-algebra on V. Let (Ω , F , P ) be another probability space and set (Ω, F , P) := (Ω × Ω , F ⊗ F , P ⊗ P ). The probability measure P represents a random experiment that is run independently of the random sample mechanism P. In the sequel, T n will frequently be regarded as a map defined on the extension Ω of Ω. Theorem A3 is a consequence of Theorem A2 in Appendix A.2 as we assume that T n takes values only in V H .The proof of the measurability statement of Theorem A3 is given in the proof of Theorem A4.Theorem A3 is stated here because, together with Theorem A4, it implies almost sure bootstrap consistency whenever the limit ξ is the same in Theorem A3 and Theorem A4. Theorem A3.Let (θ n ) be a sequence in V H and S := {(θ n )}.Let E 0 ⊆ E be a separable subspace and assume that E 0 ∈ B • .Let (a n ) be a sequence of positive real numbers with a n → ∞, and assume that the following assertions hold: for some (E, B • )-valued random variable ξ on some probability space (Ω 0 , F 0 , P 0 ) with ξ(Ω 0 ) ⊆ E 0 .(b) a n (H( T n ) − H(θ n )) takes values only in E and is (F , B • )-measurable.(c) H is uniformly quasi-Hadamard differentiable with respect to S tangentially to E 0 E with trace E and uniform quasi-Hadamard derivative ḢS . Then, ḢS (ξ) is (F 0 , B • )-measurable and Theorem A4.Let S be any set of sequences in V H . 
Let E 0 ⊆ E be a separable subspace and assume that E 0 ∈ B • .Let (a n ) be a sequence of positive real numbers with a n → ∞, and assume that the following assertions hold: for some (E, B • )-valued random variable ξ on some probability space (Ω 0 , F 0 , P 0 nonnegative real-valued random variables on (Ω , F , P ) such that Setting S1. or Setting S2. of Section 2.1 is met.Define the map F * . Recall that Setting S1. is nothing but Efron's boostrap (Efron (1979)), and that Setting S2. is in line with the Bayesian bootstrap of Rubin (1981) if Y 1 is exponentially distribution with parameter 1.In Section 5.1 in Beutner and Zähle (2016), it was proved with the help of results of Shorack and Wellner (1986) and van der Vaart Wellner (1996) that respectively Condition (a) of Corollary A3 (with F n := F) and Condition (a) of Corollary A4 (with C n := F n ) hold for a n := √ n and B := B F , where B F is an F-Brownian bridge.Here, C φ can be chosen to be the set C φ,F of all v ∈ D φ whose discontinuities are also discontinuities of F. In addition, note that, in view of C n = F n , Condition (e) holds if S is (any subset of) the set of all sequences (G n ) of distribution functions on R satisfying G n − F ∈ D φ , n ∈ N, and G n − F φ → 0 (see, for instance, Theorem 2.1 in Zähle ( 2014)). Example A2.Let (X i ) be a strictly stationary sequence of β-mixing random variables on (Ω, F , P) with distribution function F, and F n be given by Equation (A12).Let ( n ) be a sequence of integers such that n ∞ as n → ∞, and n < n for all n ∈ N. Set k n := n/ n for all n ∈ N. Let (I nj ) n∈N, 1≤j≤k n be a triangular array of random variables on (Ω , F , P ) such that I n1 , . . ., I nk n are i.i.d.according to the uniform distribution on {1, . . ., n − n + 1} for every n ∈ N. Define the map F * with W ni given by Equation ( 8 . Now, assume that Assumptions A1.-A3. of Section 2.2 hold true.Then, as discussed in Example 4.4 and Section 5.2 of Beutner and Zähle (2016), it can be derived from a result in Arcones and Yu (1994) that under Assumptions A1. and A2.we have that Condition (a) of Corollary A3 holds for a n := √ n, B := B F , and F n := F, where B F is a centered Gaussian process with covariance function Further examples for Condition (a) in Corollary A4 for dependent observations can, for example, be found in Bühlmann (1994);Naik-Nimbalkar and Rajarshi (1994); Peligrad (1998). Theorem A5.In the setting of Example A2 assume that assertions A1.-A3. of Section 2.2 hold, and let S be the set of all sequences (G n ) ⊆ D(H) with G n − F ∈ D φ , n ∈ N, and G n − F φ → 0.Then, the second part of assertion (a) (i.e., Equation (A14)) and assertion (e) in Corollary A4 hold. Here, the bracketing number N [ ] (ε, F φ , • p ) is the minimal number of ε-brackets with respect to • p (L p -norm with respect to dF) to cover F φ , where an ε-bracket with respect to • p is the set, [ , u], of all functions f with ≤ f ≤ u for some Borel measurable functions , u : R → R + with ≤ u pointwise and u − p ≤ ε. Proof of ( because the analogue for the positive real line can be shown in the same way.Let ε i and u ε i be as defined in Equation (A17).By assumption A1. we have φ dF < ∞, so that similar as above we can find a finite partition −∞ = ε-brackets with respect to • 1 (L 1 -norm with respect to F) covering the class F φ := { f x : x ∈ R} introduced above.We proceed in two steps. Step 2. Because of Equation (A19), for Equation (A18) to be true, it suffices to show that for every i = 1, . . 
., k ε + m ε .We only show the second convergence in Equation (A21), the first convergence can be shown even easier.We have The first summand on the right-hand side of arbitrarily small by letting n → ∞.For every such k and we can find a linear combination of indicator functions of the form 1 [a,b) , −∞ < a < b < ∞, which we denote by v, such that v To verify that the assumptions of the lemma are fulfilled, we first recall from the comment directly before Corollary 3 that C p (F φ λ ) ⊆ F 1 .It remains to show that the Assumptions (a)-(c) of Lemma A1 are fulfilled.According to Proposition 2 we have that for every λ ∈ (1, λ) the functional C p is uniformly quasi-Hadamard differentiable at F tangentially to D φ λ D φ λ with trace D φ λ , which is the first part of Assumption (b).The second part of Assumption (b) means Ċp,F (D φ λ ) ⊆ D φ λ and follows from Ċp;F (v) φ λ = v * which together with the Portmanteau theorem (in the form of(Beutner and Zähle 2016, Theorem A.4)) implies the claim.Set E := E × E and let B• be the σ-algebra on E generated by the open balls with respect to the metric d and ξ 0 := ξ if we can show that the assumptions of Theorem C.1 in Beutner and Zähle (2016) are satisfied.First, by Assumption (a) and the last part of Assumption (b), we have Fourth, Condition (a) of Theorem C.1 in Beutner and Zähle (2016) holds by Assumption (b).Fifth, Condition (b) of Theorem C.1 in Beutner and Zähle (2016) is ensured by Assumption (d). one can argue as above) and in particular (B • 0 , B • )-measurable.Fourth, Condition (a) of Theorem C.1 in Beutner and Zähle (2016) holds by Assumption (b).Fifth, Condition (b) of Theorem C.1 in Beutner and Zähle ( Here, C φ can be chosen to be the set C φ,F of all v ∈ D φ whose discontinuities are also discontinuities of F.Moreover, Theorem A5 below shows that under the assumptions A1.-A3. the second part of Condition (a) (i.e., Equation (A14)) and Condition (e) of Corollary A4 hold for C n := E [ F * n ] = 1 n ∑ n i=1 w ni 1 [X i ,∞) with w ni := E [W ni ] (see also Equation (9)) and the same choice of a n , B, and F n , when S is the set of all sequences (G n ) ⊆ D(H) with G n − F ∈ D φ , n ∈ N, and G n − F φ → 0. ε i d(F − C n ) −→ 0 and u ε i d( C n − F) −→ 0 P-a.s.(A21) Let T * n : Ω −→ V be any map.Since T * n (ω, ω ) depends on both the original sample ω and the outcome ω of the additional independent random experiment, we may regard T * n as a bootstrapped version of T n .Moreover, let C n : Ω −→ V be any map.As with T n , we often regard C n as a map defined on the extension Ω of Ω.We use C n together with a scaling sequence to get weak convergence results for T * n .The role of C n is often played by T n itself (see Example A1), but sometimes also by a different map (see Example A2).Assume that T n , T * n , and C n take values only in V H . 
Let B • and B • be the open-ball σ-algebras on E and E with respect to the norms • E and • E , respectively.Note that B • coincides with the Borel σ-algebra on E when (E, • E ) is separable.The same is true for B • .Set E := E × E and let B • be the σ-algebra on E generated by the open balls with respect to the metric d(( x 1 , x 2 ), ( y 1 , y 2 takes values only in E and is (F , B • )-measurable.(c) H is uniformly quasi-Hadamard differentiable with respect to S tangentially to E 0 E with trace E and uniform quasi-Hadamard derivative ḢS .(d) The uniform quasi-Hadamard derivative ḢS can be extended from E 0 to E such that the extension ḢS : E → E is (B • , B • )-measurable and continuous at every point of E 0 .(e) ( C n (ω)) ∈ S for P-a.e. ω.(f) The map h : E → E defined by h( x 1 , x 2 ), and recall from Section 2.2 that this is the blockwise bootstrap.Similar as in Lemma 5.3 in Beutner and Zähle (2016) it follows that a n ( F * n − C n ), with C n := E [ F * n ], takes values only in D φ and is (F , B • φ )-measurable.That is, the first part of Condition (a) of Corollary A4 holds true for C n
2019-04-22T13:12:24.556Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "8c97b35a2b577070a32108635520c652823c7b82", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9091/6/3/96/pdf?version=1536816834", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "8c97b35a2b577070a32108635520c652823c7b82", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
12043761
pes2o/s2orc
v3-fos-license
Emerging Threats for Human Health in Poland: Pathogenic Isolates from Drug Resistant Acanthamoeba Keratitis Monitored in terms of Their In Vitro Dynamics and Temperature Adaptability
Amphizoic amoebae pose a serious threat to human health due to their pathogenic potential as facultative parasites and causative agents of vision-threatening Acanthamoeba keratitis (AK). Recently, AK cases have been reported with increasing frequency worldwide, particularly in contact lens wearers. In our study, severe cases of AK in Poland and the respective pathogenic isolates were assessed at the clinical, morphological, and molecular levels. Misdiagnoses and unsuccessful treatment in other ophthalmic units delayed suitable therapy, and resistance to the applied chemicals resulted in severe courses and treatment difficulties. Molecular assessment indicated that all sequenced pathogenic corneal isolates derived from the Polish patients with AK examined by us showed 98–100% homology with Acanthamoeba genotype T4, the most prevalent genotype in this human ocular infection worldwide. In vitro assays revealed that the pathogenic strains are able to grow at elevated temperature and have a wide adaptive capability. This study continues our in vitro investigations of pathogenic Acanthamoeba strains causing AK in Polish patients. Further investigations designed to foster a better understanding of the factors underlying the increase in AK observed in Poland in recent years may help to prevent, or at least better cope with, future cases.
Introduction
Amoebae belonging to the genus Acanthamoeba are ubiquitous and widely distributed in natural and man-made environments worldwide. Acanthamoeba spp. are free-living organisms existing as vegetative mononuclear trophozoites with characteristic acanthopodia and as double-walled dormant cysts, which develop after the growth phase as well as under harsh conditions. The protists occur in seawater, fresh water, tap water, and drinking water systems; in swimming pools, air conditioning systems, and humidifiers; in dust and soil; on fruits and vegetables; and in animal bodies. They have been recognized in the hospital environment as contaminants of surgical instruments and dental irrigation units, as well as in various human cavities and tissues, including skin surfaces, oral cavities, paranasal sinuses, lungs, and brain [1][2][3][4]; trophozoites and cysts of Acanthamoeba have also been found by us among the microbiota of periodontal biofilms, accompanying infections with Entamoeba gingivalis in patients with systemic diseases [5].
may be causative agents of the rare but usually fatal granulomatous amoebic encephalitis, developing in immunocompromised individuals as an opportunistic infection [4], and of the vision-threatening Acanthamoeba keratitis (AK) that occurs mainly in immunecompetent persons. AK was first recognized in 1973 in a Texas rancher [10]. The eye disease symptoms include redness, photophobia, excessive tearing, severe eye pain, and significant deterioration of the visual acuity; without adequate therapy the amoebic infections may lead to blindness [3,[10][11][12][13][14][15][16][17]. The clinical symptoms of AK are nonspecific, similar to those observed in the course of other eye diseases, thus misdiagnosed as viral, fungal, or bacterial keratitis; a mixed keratitis caused by concomitant bacterial, viral, fungal, and Acanthamoeba infections is also known. This is why the diagnosis based on clinical symptoms alone is not sufficient to indicate the causative agent of human keratitis. The proper diagnosis needs laboratory identification of the specific pathogen for confirmation. Corneal scrapings are optimal materials for AK diagnosis. The microscopic visualization of amoebae in slides prepared directly from corneal scraping and by in vitro cultivation of the amoebic isolates deriving from these samples may be helpful also to verify previous misdiagnoses [3,17,18]. For years, Acanthamoeba isolates/strains were classified based on morphological criteria, mainly cyst size and structure: three morphological groups and 18 Acanthamoeba species were determined [3,19,20]. In the recent past, with the development of molecular systematics, PCR techniques and sequence analysis of the 18S rRNA gene have been used for diagnostics and for characterization of clinical and environmental Acanthamoeba isolates [4,[21][22][23][24][25]. At present, 19 genotypes are distinguished [17]. The treatment of AK is difficult, and often a resistance to pharmacotherapy develops, among other factors, due to an improper diagnosis leading to delayed suitable therapy. Moreover, the amoeba cysts are highly resistant to chemicals: disinfectants and antimicrobial and antiparasitic drugs; antiamoebic drugs are often efficient in high concentrations, which, however, are toxic for human cells [3,4,[26][27][28][29][30]. Thus, despite therapeutic advances, the treatment of the keratitis caused by pathogenic Acanthamoeba strains continues to be difficult and is often unsuccessful. In our earlier studies of 2011-2014 [16,31], several pathogenic Acanthamoeba strains acquired from serious AK cases, variable in corneal symptom intensity and in the response to the instituted therapy, were assessed. The corneal strains have been classified as morphological group II and, in genotype identification carried out by us with PCR based on sequence analysis of the 18S rRNA, determined as T4 genotypes. In vitro viabilities of particular strains were monitored and compared. Results of the monitoring we analyzed in regard to the survival time of amoebae, AK course as well as therapeutic management difficulties/efficacies [16]. It has been revealed in our preliminary studies that monitoring of in vitro viability of pathogenic Acanthamoeba strains isolated from the infected eyes may be a useful tool for therapeutic prognosis. A temperature tolerance of the amoebae, particularly growth/multiplication at high temperature, is often studied and reported, because it is considered as an indirect marker of potential pathogenicity of Acanthamoeba strains [3,17,32,33]. 
However, apart from our preliminary investigations, no studies were undertaken with diagnosed pathogenic Acanthamoeba strains originating of AK in terms of their in vitro viability in changed temperature conditions. In the present study, subsequent pathogenic Acanthamoeba isolates acquired by us from human Acanthamoeba keratitis cases unsuccessfully treated with antibacterial and antifungal medications and poorly responding to topical antiamoebic pharmacotherapy, assessed at clinical, morphological, and molecular levels, were in vitro monitored. The corneal isolates diagnosed as pathogenic Acanthamoeba strains were examined and compared to one another as well as to the environmental Acanthamoeba castellanii Neff strain in terms of their in vitro temperature sensibility/tolerance. Moreover, dynamics of these amoebic populations, cultivated parallel in vitro, their density, and morphophysiological status of particular developmental stages were assessed. Acanthamoeba Pathogenic Corneal Strains: Isolates and Cultures. The material deriving from twenty-two patients who reported to our hospital in 2010-2014 at different times after first symptoms of keratitis appeared and who were under suspicion of Acanthamoeba infection was analyzed. The patients complained of photophobia, pain, excessive tearing, and deterioration of visual acuity. In the clinical diagnosis, noninvasive methods of slit-lamp and in vivo confocal microscopy were used. During the laboratory microbiological and parasitological diagnosis, direct microscopic examinations of corneal scraping material and in vitro cultures derived from those scrapings were performed to determine etiological agents of the eye deteriorations. Although Acanthamoeba infections were confirmed for all cases, the isolates marked as I-1 up to I-22 showed a high degree of variation. In ten cases that were properly diagnosed early, 3 to 15 days after first AK symptoms appeared, and thus received early suitable treatment, an improvement was observed relatively quickly. As the isolates showed in vitro weak dynamics, trophozoites were dead in the culture medium after 8-10 days, no transformation into cysts was observed, and thus no material from these ten strains was kept for further in vitro investigations. Another two isolates have originated from incidences in which AK has been diagnosed and pathogenic amoebae were isolated; however after moderate improvement, the two patients have not continued therapy and no information on treatment efficacy was available. For these reasons, finally, ten remaining isolates acquired from the corneal material have been included in this analysis. The patients, three men, all contact lens wearers, and seven women, six contact lens wearers, one of whom bathed in swimming pool, and one non-contact-lens wearer with a history of swimming in a lake, all 26-42 years old, had different intensities of pathogenic changes in their eyes. All of them had previously been unsuccessfully treated in other ophthalmic units with antifungal and antibacterial pharmaceutics; thus proper diagnosis was delayed ranging from 25 to 45 days after first symptoms appeared. The initial identification of causative agents was achieved by in vivo confocal microscopy. The final diagnosis was made/confirmed by corneal scrapings examinations in the light microscope, first directly and next with enrichment during in vitro cultivation; the isolates were assessed at cytological and molecular levels. 
The isolates originating from corneal samples of the AK patients, initially examined in wet-mount slides to visualize cysts and/or trophozoites of amoebae, were cultured under bacteria-free conditions for one to three years in sterile 15 mL tubes containing BSC culture medium [34] enriched with 10% calf serum, incubated at 27°C, and subcultured twice each month. Additionally, the environmental A. castellanii Neff strain, maintained for years by serial passage in the same growth medium in the Laboratory of the Department of Medical Biology, Medical University of Warsaw, Poland, was used in this study. Genotyping of Acanthamoeba Strains. All samples/isolates were also examined by PCR techniques for specific detection of Acanthamoeba DNA and to determine the genotypes of the particular strains. Extraction of DNA from the samples was performed using the commercial Sherlock AX Kit (A&A Biotechnology, Gdynia, Poland). Extraction of DNA from isolates cultured in vitro was performed using the commercial Genomic Mini Kit (A&A Biotechnology) for routine genomic DNA extraction, according to the manufacturer's instructions. The DNA was then stored at −20°C. An Acanthamoeba-specific PCR following the protocol established by Schroeder et al. [21], amplifying a fragment of the 18S rRNA gene with the primers JDP1 (5′-GGCCCAGATCGTTTACCGTGAA-3′) and JDP2 (5′-TCTCACAAGCTGCTAGGGAGTCA-3′), was applied. PCR products were analyzed using GelDoc-IT Imaging Systems (UVP, USA) after electrophoresis on agarose gel (Sigma, St. Louis, Missouri) stained with Midori Green DNA stain (Nippon Genetics Europe GmbH, Germany). Cycle sequencing was performed, and the sequences obtained were compared with data available in GenBank using GeneStudio Pro Software (GeneStudio, Inc., Suwanee, Georgia). In Vitro Growth of Acanthamoeba Isolates at Different Temperatures. The population dynamics of the corneal and environmental Acanthamoeba isolates cultured in vitro in the aforementioned growth medium under bacteria-free conditions at 27°C were systematically monitored in terms of developmental stage status by phase-contrast light microscopy. For temperature assays, on the second day following subculturing, all cultures were shaken intensively and 1 mL samples of the strains were transferred to 1.5 mL Eppendorf tubes containing culture medium. Next, the samples of the respective cultured strains were exposed to either 20°C, 37°C, or 42°C for 3-7 days following regular subculturing. The in vitro viability and dynamics of each particular strain population were then assessed and compared. The morphophysiological changes and overall numbers of the amoebae, as well as the proportion of trophozoites and cysts, were determined microscopically in the exponential and stationary growth phases. During exposure to the changed temperature, cultures were vigorously shaken and 10 µL samples were successively taken for assessment of each isolate. The changes in the overall number of amoebae and in the numbers of trophozoites and cysts were counted with the aid of a Bürker hemocytometer. The ability of the amoebae to multiply in vitro was examined; the ranges of four counts calculated per 1 mL of culture medium were compared for particular strains and assays. Results of the investigations were analyzed statistically (ANOVA, Student-Newman-Keuls method, p < 0.05).
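To make the counting and statistics workflow above concrete, the short sketch below illustrates how replicate per-millilitre counts from the temperature assays could be compared across incubation temperatures with a one-way ANOVA. This is only an illustration, not the authors' analysis code: the counts, strain labels, and the pairwise follow-up comparisons are hypothetical placeholders, and the Student-Newman-Keuls post hoc test used in the paper is not reproduced here.

```python
# Illustrative sketch (not the authors' code): summarizing hemocytometer counts
# taken at different incubation temperatures and testing for group differences
# with one-way ANOVA. Counts and labels are hypothetical placeholders.
import numpy as np
from scipy import stats

# Four replicate counts (amoebae per 1 mL of culture medium, already converted
# from raw chamber counts) for one isolate at three incubation temperatures.
counts_per_ml = {
    "20C": np.array([1200, 1350, 1100, 1280]),
    "37C": np.array([5200, 6100, 4800, 5600]),
    "42C": np.array([4300, 3900, 4600, 4100]),
}

# One-way ANOVA across temperature groups (the paper additionally applies the
# Student-Newman-Keuls post hoc procedure, which is not reproduced here).
f_stat, p_value = stats.f_oneway(*counts_per_ml.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Simple pairwise t-tests as an illustrative follow-up comparison only.
    labels = list(counts_per_ml)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            t, p = stats.ttest_ind(counts_per_ml[labels[i]], counts_per_ml[labels[j]])
            print(f"{labels[i]} vs {labels[j]}: t = {t:.2f}, p = {p:.4f}")
```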
Results The material assessed in our study was acquired from ten patients with symptoms of Acanthamoeba keratitis including redness, photophobia, severe eye pain, excessive tearing, and lid edema, as well as significant deterioration of visual acuity. Active epithelial inflammation, corneal ulcers, and characteristic ring-like stromal infiltration were detected by slit-lamp in the affected eyes (Figure 1). Keratitis symptoms intensified to different degrees as the disease progressed. 3.1. Effects of Differential Diagnosis. AK was finally confirmed in all ten cases; however, several patients experienced a significantly delayed proper diagnosis. In the five cases in which patients reported late to their physicians and the AK diagnosis was made more than four weeks after the first keratitis symptoms appeared, hyperreflective objects identified presumably as Acanthamoeba cysts were revealed by in vivo confocal microscopy (Figure 2). At the same time, in three of these five cases, amoebic, bacterial, and fungal coinfections (P. aeruginosa, E. faecalis, and Candida sp.) were revealed by the microbiological diagnosis (Table 1). In the five remaining cases, in which the duration of symptoms prior to proper diagnosis was somewhat shorter, no cysts were visualized by in vivo confocal microscopy. Finally, Acanthamoeba infection was confirmed in all samples by laboratory methods. In parasitological microscopic examinations, Acanthamoeba cysts and trophozoites were found in different materials: some were detected immediately in wet-mount slides prepared from corneal scrapings, whereas others were detected after 2-7 days of in vitro cultivation of material derived from these samples (Table 1). The results of molecular examinations of the isolates and a comparison of the obtained sequences with those available in GenBank revealed that all sequenced isolates showed 98-100% homology with isolates belonging to the T4 genotype. However, there were differences between particular strains. The highest sequence identities were found with strains originating from both environmental sources and human ocular infections, and from various countries of Europe, Asia, and America. Treatment Difficulties. The material included in this analysis originated from AK cases that had previously been unsuccessfully treated because of improper diagnosis. The moderate or severe course of the eye disease required prolonged therapeutic management. Affected eyes were treated with topically applied agents, among others a combination of chlorhexidine (0.02%) and polyhexamethylene biguanide (PHMB, 0.02%), propamidine isethionate (0.1%), and the antibiotic neomycin; however, the treatment brought no clear clinical improvement. A moderate response or resistance to the applied chemicals was observed during the long-term pharmacotherapy; in several cases the treatment difficulties resulted in the necessity for surgical management (corneal cross-linking, penetrating keratoplasty). Monitoring of In Vitro Dynamics of Pathogenic Corneal Isolates and Their Temperature Adaptability. Successive monitoring of the clinical isolates and comparison to the environmental A. castellanii Neff strain cultivated in vitro allowed evaluation of morphophysiological features, the associated developmental stages, and changes in population dynamics. The living trophozoites, with pseudopodia and characteristic protrusions, acanthopodia, were 12-38 µm in diameter, with a nucleus and a prominent, centrally placed nucleolus.
Cysts, 8-24 µm in diameter, had two cyst walls, exhibiting a wrinkled ectocyst and a polygonal or round-to-ovoid endocyst (Figure 3). The amoebae detected were identified and classified as belonging to Acanthamoeba castellanii morphological group II. The comparative evaluation of the monitored strains, initially cultivated in vitro at 27°C, showed that the numbers of live amoebae were low in the early adaptive phase and successively increased in the exponential growth phase, as the amoebae multiplied and the population density of the particular Acanthamoeba strains increased. The exposure of parallel cultures of the strains to 20°C and 37°C from the 4th day following subculturing caused clear changes in population dynamics, expressed in the overall number of amoeba cells. Statistically significant differences were observed between the pathogenic strains and the environmental A. castellanii Neff strain in terms of the number of viable amoebae, and thus in population density, during the stationary growth phase. Although numerous living trophozoites were detected in all cultures at 20°C, the population density of the pathogenic isolates cultured in vitro was significantly lower than that of the Neff strain cultures (Table 2). In contrast, the numbers of viable trophozoites in clinical isolates cultured at 37°C (near the temperature of the eye and of the human body in general, about 35°C to 37°C) were significantly higher than those found in the environmental strain. At this temperature, during the stationary growth phase, the highest numbers of amoebae were determined for the pathogenic strain I-19 cultured in vitro, with a range of four counts of 44.0-102.2 × 10² per mL, compared with 3.3-7.8 × 10² for the A. castellanii Neff strain at the same temperature. Comparative assessment of the isolates exposed to 42°C showed differences between several strains. The pathogenic isolates showed lower population activity than at 37°C, expressed in somewhat decreasing amoebic numbers. However, in general, a high amoebic population density was observed in all examined pathogenic strains during successive days of exposure to 42°C, while the number of amoebae significantly decreased in cultures of the environmental Neff strain. Comparative data of the environmental and clinical Acanthamoeba strains deriving from severe cases of AK cultivated in vitro at 20°C, 27°C, 37°C, and 42°C are presented in Table 2 (statistical significance was set at p < 0.05; asterisked data are from the exponential phase of population growth). Discussion In recent years, Acanthamoeba strains have generated a serious human health threat due to their pathogenic potential as facultative parasites. In addition, the amoebae may also act as vehicles/sources/reservoirs of other organisms pathogenic for humans: fungal, protozoan, viral, and bacterial microorganisms which can survive and even multiply within the amoeba cells. For these reasons, epidemiological aspects are included in a majority of studies on severe vision-threatening Acanthamoeba keratitis. Investigations on the distribution of virulent amoebic strains in different soil, air, and water environments have been conducted worldwide, also with the aid of molecular techniques. Potentially pathogenic strains have been detected in environmental samples and reported worldwide [18,25,[35][36][37][38][39][40][41][42]. In Poland, free-living amoebae have been isolated from waters in the vicinity of Poznań [43].
Subsequently, in further environmental studies on Acanthamoeba, potentially pathogenic strains have been detected in Lake Żarnowieckie, the Piaśnica River, and a canal used as a recreational resort in northern Poland; moreover, free-living amoebae have been isolated from natural water bodies including lakes, ponds, rivers, and lagoons of the West Pomeranian and Lubuskie areas, from tap water of the water supply system of the city of Szczecin, from surface water layers and water with sediment in northern Poland in the area of the cities Gdańsk, Gdynia, and Sopot, as well as from swimming pools and fountains in western Poland [44][45][46][47][48][49][50]. At present, the approach based on genotype identification is more often applied to detect and characterize Acanthamoeba strains from environmental and clinical samples below the genus level. There is evidence that, among the seven genotypes detected in patients with AK, about 90% of cases are linked with the T4 genotype [3,4,17,24]. In our investigations, all pathogenic strains of AK were identified as belonging to the T4 genotype. This is in accordance with the fact that this genotype is considered to be the most common cause of this vision-threatening eye disease. Our previous studies [16,31] and the present study are the only ones in which molecular assessment has been performed on diagnosed pathogenic Acanthamoeba isolates originating from Polish patients with drug-resistant Acanthamoeba keratitis. Complete results of these examinations will be reported in detail in a separate publication. The leading risk factor for Acanthamoeba keratitis is considered to be contact lens wear. After the first case of AK associated with contact lenses in Central Europe was reported from Germany, more cases were recognized in different countries and an association between contact lens wear and Acanthamoeba keratitis was revealed [51,52]. Nevertheless, Acanthamoeba corneal infections are occasionally also detected in persons not using contact lenses. Corneal epithelial injury, eye surgery, and, especially, exposure of the eye to water or moist soil in which Acanthamoeba forms exist are considered other important risk factors for acquiring AK [2-4, 17, 53]. The popularity of contact lens use is rising, and severe AK cases are reported with increasing frequency year after year, particularly in contact lens wearers (85% of all cases), from various regions of the world, including Poland. Since the first cases of AK were reported [54], further AK cases have been described in Poland [16,31,46,47,55]. All patients with keratitis symptoms from whom the corneal isolates analyzed in our studies derived were initially assessed in order to identify or confirm the etiological agent of the disease. As Lorenzo-Morales et al. [17] put it, "the most important step in AK diagnosis is to think of it." In many Acanthamoeba cases there is a history of misdiagnoses and improper therapy; in the ten cases finally included in our study as well, diagnostic difficulties and a prolonged disease process occurred. The direct detection of etiological agents is considered the only reliable diagnostic method for AK; at the same time, cultivation remains the gold standard for Acanthamoeba laboratory diagnosis [17]. It is noteworthy that examiners have to be familiar with the morphological characteristics of Acanthamoeba spp.
Currently, it has been shown that both environmental and clinical Acanthamoeba strains/isolates vary in their pathogenicity; they may be virulent, weakly virulent, or nonvirulent [4]. Simultaneously, the ability of Acanthamoeba to grow at high temperatures is considered to be correlated with the pathogenicity of Acanthamoeba isolates and to be a good indicator of the pathogenic potential of a given isolate. Thermal tolerance examinations and growth at high temperature are used as indirect markers of Acanthamoeba virulence/pathogenicity of some environmental samples [4,17,32]. Results of our earlier investigations [33] on environmental strains of Acanthamoeba in terms of their temperature tolerance showed that the amoebae may grow at 37 ∘ C, a temperature that is higher than that in which they had been cultured for many months; at the same time, the number of trophozoites of this strain was ∼10% lower than at 26 ∘ C. In general, our experimental investigations of ten pathogenic strains revealed the ability of all isolates deriving from AK cases to grow in higher temperature than 27 ∘ C at which they had been cultured for many months. The maintenance of metabolic activities was expressed in the relatively high population density that characterized all pathogenic Acanthamoeba isolates incubated in parallel at 37 ∘ C and 42 ∘ C. It is noteworthy that cultures of the environmental Neff strain exhibited optimal growth at 20 ∘ C and weakest growth in temperatures higher than 27 ∘ C. Corneal pathogenic isolates developed well also at higher temperatures. Conclusions Acanthamoeba keratitis is a vision-threatening emerging eye disease caused by the facultative parasite Acanthamoeba, ubiquitous in human environments. The leading risk factor for the disease is contact lens wear, steadily rising in popularity; thus, AK is detected with increasing frequency worldwide, including Poland. Awareness of these risk factors and thus strict hygiene while cleaning and using contact lenses are crucial as preventive measures. The diagnostic and therapeutic difficulties, coupled with severe course of the disease in the cases analyzed in the current study, were due to unspecific clinical symptoms, misdiagnosis, and resulting delay in suitable treatment, as well as resistances to antimicrobial and antiparasitic therapy. It was shown in our study that the ability of in vitro cultured corneal Acanthamoeba isolates to adapt to higher temperature was typical for all pathogenic isolates monitored. At the same time, the in vitro metabolic activities of these strains were also maintained at 20 ∘ C. In our opinion, the pathogenic strains are not just thermotolerant but rather have a wide adaptive capability. Our study is the first detailed study from Poland providing evidence of the significant role of adaptability to temperature changes as one of a complex of contributory factors allowing free-living amoebae to exist as parasites, namely, as the causative agents of AK, a serious human eye disease. Nevertheless, from which sources our AK patients acquired their infections remains uncertain, whether they came from natural or from man-made habitats. Also, which mechanisms enable particular strains to grow at a wide temperature range remains unclear, while others do not show this ability. These remaining areas need further studies if we are to understand the epidemiology of these opportunistic infections and to prevent future cases.
Identifying potential microRNA biomarkers for colon cancer and colorectal cancer through bound nuclear norm regularization Colon cancer and colorectal cancer are two common cancer-related deaths worldwide. Identification of potential biomarkers for the two cancers can help us to evaluate their initiation, progression and therapeutic response. In this study, we propose a new microRNA-disease association identification method, BNNRMDA, to discover potential microRNA biomarkers for the two cancers. BNNRMDA better combines disease semantic similarity and Gaussian Association Profile Kernel (GAPK) similarity, microRNA function similarity and GAPK similarity, and the bound nuclear norm regularization model. Compared to other five classical microRNA-disease association identification methods (MIDPE, MIDP, RLSMDA, GRNMF, AND LPLNS), BNNRMDA obtains the highest AUC of 0.9071, demonstrating its strong microRNA-disease association identification performance. BNNRMDA is applied to discover possible microRNA biomarkers for colon cancer and colorectal cancer. The results show that all 73 known microRNAs associated with colon cancer in the HMDD database have the highest association scores with colon cancer and are ranked as top 73. Among 137 known microRNAs associated with colorectal cancer in the HMDD database, 129 microRNAs have the highest association scores with colorectal cancer and are ranked as top 129. In addition, we predict that hsa-miR-103a could be a potential biomarker of colon cancer and hsa-mir-193b and hsa-mir-7days could be potential biomarkers of colorectal cancer. Introduction Cancers are seriously threatening and endangering human health (Yang et al., 2013;Yang et al., 2022). Colon cancer and colorectal cancer are two of leading causes of cancer-related deaths worldwide (Lee et al., 2018;Piawah and Venook, 2019). Patients with colon cancer only have a survival rate of 10% when diagnosed at late stage. More importantly, colon cancer shows a higher incidence rate in elder populations. The survival rate of patients with colon cancer is densely associated with the size, location, and stage of the tumor. Metastasis may be the leading cause of deaths for patients suffered from late-stage colon cancer. Thus, understanding the mechanisms of colon cancer could contribute to designing more strong therapeutic options (Ma et al., 2021). Nowadays, patients with colorectal cancer show a younger trend. In the last decade, incidence rates and death rates of colorectal cancers separately increased by 22 and 13% among adults under 50 years in the United State. However, their precise aetiologic factors still remain unknown. Many evidence demonstrate that early screening of colorectal cancer can reduce their incidence and mortality. Thus, the identification of diagnosis or prognosis biomarkers can contribute to assessment of tumour initiation, progression and therapeutic response for colorectal cancer (Sampath et al., 2021). Many researches show that numerous RNA data play important roles in the development and metastasis of various diseases including cancers and COVID-19 (Huang et al., 2017;Peng L. et al., 2020;Yang et al., 2020;Zhang et al., 2021;Peng L. et al., 2022;Shen et al., 2022;Tian et al., 2022). In particular, noncoding RNAs could be biomarkers to boost drug design Meng et al., 2022). For example, lncRNAs and circRNAs have been used as biomarkers of cancers (Peng et al., 2021a;Peng et al., 2021b;Verduci et al., 2021;Wang et al., 2021;Peng L. H. et al., 2022). 
MicroRNAs (miRNAs) are a class of small non-coding RNAs with 22-24 nucleotides in length Chen et al., 2020). MicroRNAs can bind to mRNAs of target genes to inhibit expression of these genes. In addition, a few microRNAs may suppress tumors while other microRNAs may affect the progression and metastasis of tumors. The dysfunction of microRNAs is densely linked to the inflammation of colon cancer. For example, Ma et al. (Ma et al., 2021) found that M2 macrophage-derived exosomal miR-155-5p may have an association with the immune escape of cells in colon cancer. Pagotto et al. (Pagotto et al., 2022) observed that the miR-483 gene could have a responsive to glucose availability for colon cancer. Miao et al. (Miao et al., 2021) identified that miR-4284 could be a therapeutic target in colon cancer. Dougherty et al. (Dougherty et al., 2021) inferred that the upregulations of microRNA-143 and microRNA-145 have close linkages with colonocytes suppresses colitis and inflammation-related colon cancer. Zhang et al. (Zhang et al., 2021) suggested that microRNA-24-3p could heighten the resistance of colon cancer cell to MTX. Yue et al. reported that NEDD4 could trigger colon cancer progression through microRNA-340-5p suppression. In summary, the identification of microRNAs in the blood, tissues, and faecal matter will help us use these microRNA as biomarkers in early detection of colon cancer and thus design strong targeted therapeutic strategies for inflammation-mediated colon cancer (Peng et al., 2018;Sampath et al., 2021). Recently, many researchers have been devoted to microRNA biomarker identification for cancer including colon cancer and colorectal cancer by computational microRNA-disease association prediction (Peng et al., 2017;. Huang et al. (Huang et al., 2021) innovatively represented microRNA-disease-type triples as a tensor and further designed a tensor decomposition model to detect new microRNA-disease associations. Li et al. considered that the abnormal expression of microRNAs is densely associated with the evolution and progression of human diseases and inferred disease-related microRNAs as new biomarkers through a graph auto-encoder model. Chen et al. designed a deep learning model for microRNA-disease association identification based on deep belief network. Wang et al. (2022)) pretrained a stacked autoencoder to predict potential microRNA-disease associations in an unsupervised manner. These methods effectively improved microRNA biomarker identification of human complex diseases. In this study, we design a MicroRNA-Disease Association prediction algorithm (BNNRMDA) to find potential microRNA biomarkers for colon cancer and colorectal cancer based on disease semantic similarity, microRNA functional similarity, Gaussian association profile kernel (GAPK) similarity, and the Bound Nuclear Norm Regularization model. (Li et al., 2014). The hierarchical structures between diseases can be downloaded from the MeSH database (https://www.nlm.nih.gov/ mesh/). Experimentally supported microRNA-gene interactions can be downloaded from TarBase (Vergoulis et al., 2012), miRTarBase (Hsu et al., 2014), and miRecords (Xiao et al., 2009). We acquired microRNA-disease associations between 495 microRNAs and 378 diseases, hierarchical structures for 4,663 diseases, and 38,089 microRNA-gene interactions between 477 microRNAs and 12,422 genes. 
Finally, we obtained 4,791 associations between 353 microRNAs and 327 diseases after removing microRNAs without target genes and diseases without hierarchical structures. Disease semantic similarity For a known disease d, it can be described as a directed acyclic graph (DAG) based on its MeSH descriptor, where T_d denotes the set of nodes containing d and all of its ancestors, and E_d represents the corresponding direct edges. Given a disease t ∈ T_d, its semantic contribution to d can be defined as in Eq. 1, where Δ denotes the semantic contribution decay factor (Δ = 0.5) (Wang et al., 2010). In general, two diseases d_i and d_j are more similar when they share more common ancestors. Thus, the pairwise semantic similarity between d_i and d_j can be defined as in Eq. 2. MicroRNA functional similarity MicroRNA similarity can be computed based on microRNA-gene associations and a gene functional network. First, the associated log-likelihood scores LLS(g_i, g_j) between two genes g_i and g_j can be calculated using HumanNet (Lee et al., 2011). Second, LLS(g_i, g_j) is normalized by Eq. 3, where LLS_min and LLS_max represent the minimum and maximum associated log-likelihood scores computed by HumanNet, respectively. Third, the similarity between g_i and g_j can be calculated by Eq. 4, where e(g_i, g_j) indicates an interaction between g_i and g_j. Finally, the functional similarity between two microRNAs m_i and m_j can be computed by Eq. 5 based on their associated genes, where G_i and G_j denote the gene sets associated with m_i and m_j, respectively, |G_i| and |G_j| denote the corresponding cardinalities, and S(g, G) = max_{g_i ∈ G} {S_g(g, g_i)}. GAPK similarity For a known disease d_i in a microRNA-disease association matrix X of size a × b, let the i-th row of X denote its Gaussian association profile GAP(d_i), representing its association features with all microRNAs. The GAPK similarity between diseases d_i and d_j can be measured by Eq. 6, where γ_d indicates the normalized kernel bandwidth derived from the parameter γ′_d, and a indicates the number of diseases. Similarly, for a known microRNA m_i, let the i-th column of X denote its Gaussian association profile GAP(m_i), describing its association features with all diseases. The GAPK similarity between microRNAs m_i and m_j can be measured by Eq. 7, where γ_m indicates the normalized kernel bandwidth derived from the parameter γ′_m, and b indicates the number of microRNAs. Similarity fusion Disease semantic similarity S_d and GAPK similarity G_d are fused to calculate the final disease similarity matrix S_D by Eq. 8, where the parameter w weighs disease semantic similarity against GAPK similarity. MicroRNA functional similarity S_m and GAPK similarity G_m are fused to calculate the final microRNA similarity matrix by Eq. 9, where the parameter w weighs microRNA functional similarity against GAPK similarity. Heterogeneous microRNA-disease network construction A heterogeneous microRNA-disease network is created by fusing the microRNA similarity network, the disease similarity network, and the microRNA-disease association network. Each edge in the similarity networks is weighted based on the computed similarity.
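As an illustration of the similarity computations described in this section, the sketch below implements a Gaussian association profile kernel and a simple weighted fusion in Python. Because Eqs. 6-9 are not reproduced in the text above, the bandwidth normalization and the convex-combination fusion shown here are assumptions based on the common Gaussian interaction profile kernel construction; the matrix X, the weight w, and the toy values are hypothetical placeholders.

```python
# Illustrative sketch of a Gaussian association profile kernel (GAPK) and a
# weighted similarity fusion, following the description above. The exact forms
# of Eqs. 6-9 are not shown in the text, so the bandwidth normalization and the
# weighted-sum fusion below are assumptions, not the authors' exact formulas.
import numpy as np

def gapk_similarity(profiles: np.ndarray, gamma_prime: float = 0.5) -> np.ndarray:
    """profiles: one association profile per row (e.g., rows of X for diseases,
    rows of X transposed for microRNAs)."""
    # Normalized bandwidth: gamma = gamma' divided by the mean squared profile norm.
    gamma = gamma_prime / np.mean(np.sum(profiles ** 2, axis=1))
    # Squared Euclidean distances between all pairs of profiles.
    sq_norms = np.sum(profiles ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * profiles @ profiles.T
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))

def fuse_similarities(semantic: np.ndarray, gapk: np.ndarray, w: float = 0.3) -> np.ndarray:
    """One plausible reading of the fusion step: a convex combination weighted by w."""
    return w * semantic + (1.0 - w) * gapk

# Toy example with 4 diseases and 6 microRNAs (values are placeholders).
rng = np.random.default_rng(0)
X = (rng.random((4, 6)) > 0.7).astype(float)   # disease x microRNA associations
S_d = np.eye(4)                                 # stand-in for semantic similarity
G_d = gapk_similarity(X)                        # disease GAPK similarity
G_m = gapk_similarity(X.T)                      # microRNA GAPK similarity
S_D = fuse_similarities(S_d, G_d, w=0.3)
print(S_D.round(3))
```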
The heterogeneous microRNA-disease network can be described using a bipartite graph G(M, D, E), where M and D represent the microRNA set and the disease set, respectively. W_md denotes the known microRNA-disease association matrix, and W_mm and W_dd denote the adjacency matrices of the microRNA similarity network and the disease similarity network, respectively. Hence, the adjacency matrix can be rewritten as in Eq. 11. BNNRMDA model In the known microRNA-disease association dataset, the majority of microRNA-disease pairs have no known association. Inspired by the bounded nuclear norm regularization model provided by Yang et al. (Yang et al., 2019), in this study we design a bounded nuclear norm regularization-based MDA prediction method to score each unknown microRNA-disease pair. We describe microRNA-disease association inference as a matrix completion problem and construct model (12) to predict new microRNA-disease associations in the microRNA-disease association matrix, where Y denotes the matrix to be completed, rank(Y) denotes the rank of Y, W ∈ R^((m+n)×(m+n)) denotes the known microRNA-disease association matrix, Ω denotes the set containing all index pairs (i, j) that correspond to known microRNA-disease associations in W, and P_Ω represents the projection operator onto Ω, defined by Eq. 13. Model (12) is non-convex and difficult to solve. Thus, we transform it into a nuclear norm model through the nuclear norm optimization method proposed by Candes et al. (2013), as in Eq. 14, where ‖Y‖_* represents the nuclear norm of Y, i.e., the sum of its singular values. Because the value of each element in the microRNA and disease similarity matrices S_m and S_d lies in the range [0,1] and the value of each element in the microRNA-disease association matrix X_md is 1 or 0, the computed microRNA-disease association scores should be restricted to [0,1]; a higher score indicates a greater association probability for a microRNA-disease pair. But the elements of Y lie in (−∞, +∞). Therefore, we add a bound constraint to Eq. 14 to keep the computed scores within [0, 1]. In addition, considering the effect of data noise on the prediction performance, we develop a rank minimization-based matrix completion model as in Eq. 15, where ‖·‖_F indicates the Frobenius norm and ϵ represents the noise level. Considering the difficulty of selecting an appropriate noise parameter in Eq. 15, we introduce a soft regularization term to tolerate data noise. Consequently, a bounded nuclear norm regularization model is built to infer potential microRNA-disease associations, as in Eq. 16, where the parameter α weighs the importance of the nuclear norm against the error term. We then introduce an auxiliary matrix Z and define model (17) to optimize model (16), where Y_1 = P_Ω(W). The corresponding augmented Lagrange function is written as Eq. 18, where L and β represent the Lagrange multiplier and the penalty parameter, respectively. At the k-th iteration, we alternately compute each of Y_(k+1), Z_(k+1), and L_(k+1) by fixing the other two values, following the solution from Yang et al. Finally, the predicted microRNA-disease association matrix is obtained by completing the unlabeled elements in the Z_md block. Experimental settings and evaluation In this study, we perform five-fold cross-validation 10 times to investigate the microRNA-disease association inference ability of BNNRMDA. During five-fold cross-validation, 80% of the elements in the microRNA-disease association matrix X are randomly chosen as the training set and the remaining elements are taken as the test set.
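Since Eqs. 12-18 are referenced but not shown above, the following sketch illustrates one standard way a bounded nuclear norm regularization matrix completion of this kind can be realized: alternating singular value thresholding, a clipped update of the auxiliary matrix, and a multiplier update. It follows the general BNNR scheme of Yang et al. rather than the authors' exact derivation; the block-matrix layout, mask handling, iteration count, and toy data are assumptions introduced for illustration.

```python
# Illustrative sketch of a bounded nuclear norm regularization (BNNR) style
# matrix completion, following the general scheme referenced above. The update
# rules below are a standard ADMM / singular-value-thresholding realization of
# the bounded nuclear-norm idea and should be read as an assumption, not the
# authors' exact implementation.
import numpy as np

def svt(M: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def bnnr_complete(W: np.ndarray, mask: np.ndarray, alpha: float = 1.0,
                  beta: float = 10.0, n_iter: int = 200) -> np.ndarray:
    """W: block matrix [[W_mm, W_md], [W_md.T, W_dd]]; mask: 1 for observed entries."""
    Y = W.copy()
    Z = W.copy()
    L = np.zeros_like(W)
    T = mask * W  # observed entries
    for _ in range(n_iter):
        # Y-update: nuclear-norm proximal step.
        Y = svt(Z - L / beta, 1.0 / beta)
        # Z-update: weighted average on observed entries, then clip to [0, 1].
        Z = Y + L / beta
        Z[mask == 1] = (beta * Y + L + alpha * T)[mask == 1] / (alpha + beta)
        Z = np.clip(Z, 0.0, 1.0)
        # Multiplier update.
        L = L + beta * (Y - Z)
    return Z

# Toy usage: 5 microRNAs and 4 diseases give a 9 x 9 block matrix (placeholder data).
rng = np.random.default_rng(1)
n_m, n_d = 5, 4
W_mm, W_dd = np.eye(n_m), np.eye(n_d)
W_md = (rng.random((n_m, n_d)) > 0.7).astype(float)
W = np.block([[W_mm, W_md], [W_md.T, W_dd]])
mask = np.ones_like(W)  # here the whole block matrix is treated as observed
scores = bnnr_complete(W, mask)[:n_m, n_m:]  # completed microRNA-disease block
print(scores.round(3))
```

In this layout, the upper-right block of the completed matrix holds the predicted microRNA-disease association scores, which is where top-ranked candidate associations of the kind discussed in the case studies would be read off.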
Parameters α, β, w, and γ′ are set by grid search. We find that BNNRMDA obtains the best AUC when the four parameters are set to α = 1, β = 10, w = 0.3, and γ′ = 0.5, respectively; therefore, we fix the four parameters at these values. In addition, AUC is widely used to measure the performance of association prediction methods, and thus we use it to measure the performance of BNNRMDA. Performance measurement To measure the microRNA-disease association prediction performance of BNNRMDA, we compare it with MIDPE (Xuan et al., 2015), MIDP (Xuan et al., 2015), RLSMDA (Chen and Yan, 2014), GRNMF, and LPLNS. MIDP (Xuan et al., 2015) and MIDPE (Xuan et al., 2015) are two random walk-based microRNA-disease association prediction methods: MIDP detects association information for microRNAs related to diseases, and MIDPE detects association information through a bilayer network. RLSMDA (Chen and Yan, 2014) is a semi-supervised learning-based microRNA-disease association inference framework. GRNMF is a graph-regularized non-negative matrix factorization-based microRNA-disease association prediction model; in addition, GRNMF builds an association probability profile for each disease or miRNA based on weighted K-nearest-neighbor profiles. LPLNS combines label propagation and linear neighborhood similarity for microRNA-disease association prediction. MIDP, MIDPE, RLSMDA, GRNMF, and LPLNS have all previously obtained good AUCs for microRNA-disease association prediction. Case study In the above section, we computed the performance of BNNRMDA. The results show that BNNRMDA obtains a better AUC and outperforms the other five microRNA-disease association prediction methods. We therefore carry out case analyses to identify possible microRNA biomarkers for colon cancer and colorectal cancer. Inferring possible microRNA biomarkers for colon cancer Colon cancer is a common malignant tumor and has a very high incidence rate in adults aged 40-50. More importantly, it often has no symptoms in the early stage. Therefore, it is important to infer possible biomarkers to support the diagnosis and treatment of colon cancer. In the HMDD dataset, 73 of the 353 microRNAs are known to be associated with colon cancer. Based on the proposed BNNRMDA method, we compute the association score for each microRNA-disease pair. The results show that all 73 known microRNAs associated with colon cancer in the HMDD database have the highest association scores with colon cancer and are ranked as the top 73. We then investigate the following 30 miRNAs that have the next highest association scores with colon cancer and are ranked 74-103. The results are shown in Table 2 and Figure 1. From Table 2 and Figure 1, we find that 18 of these microRNAs are confirmed to be associated with colon cancer by literature retrieval. The remaining 12 microRNAs are inferred to be associated with colon cancer and are potential biomarkers of colon cancer. In particular, we infer that the microRNA hsa-miR-103a may be associated with colon cancer. The Wnt signaling pathway is hyperactivated in many human cancers; therefore, the Wnt pathway shows promising diagnostic and therapeutic potential in cancer medicine. Fasihi et al. (2018) found that hsa-miR-103a may be a possible regulator of the Wnt signaling pathway by examining its effect on Wnt pathway components in colorectal cancer-derived cell lines and its expression in colorectal cancer tissues.
They also found that hsa-miR-103a has an upregulation function in colorectal cancer tissues through RT-qPCR and its overexpression could cause elevated Wnt activity. Therefore, we infer that hsa-miR-103a could be a potential biomarker of colon cancer (Fasihi et al., 2017). FIGURE 1 Associations between the predicted top 30 microRNAs and colon cancer except for known 73 microRNA-colon cancer associations in the HMDD database that are predicted to have the highest association scores with colon cancer. Black dot lines denote associations between microRNAs and colon cancer and these associations have been reported by publications. Blue dot lines denote associations between microRNAs and colon cancer and these associations are unknown and need to experimental validation. FIGURE 2 Associations between the predicted top 30 microRNAs and colorectal cancer except for known 129 microRNA-colorectal cancer associations in the HMDD database. Black dot lines denote associations between microRNAs and colorectal cancer and these associations have been reported by publications. Orange solid lines denote associations between microRNAs and colorectal cancer and these associations are unknown and need to experimental validation. Frontiers in Genetics frontiersin.org Inferring possible microRNA biomarkers for colorectal cancer Colorectal cancer is the third leading cause of cancerrelated deaths in the United States. In the United State, there are about 1.85 million cases and 850 thousand deaths annually. In 2020, there are 53,200 colorectal cancer deaths in the United State. Among new colorectal cancer diagnoses, approximately 20% of patients suffered from metastatic disease and approximately 25% of patients suffered from localized disease that may later develop metastases. Of patients who are diagnosed as metastatic colorectal cancer, about 70-75% of patients survive more than 1 year, about 30-35% patients survive more than 3 years, and less than 20% patients survive more than 5 years (Xie et al., 2020;Biller and Schrag, 2021). Among the HMDD dataset, there are 137 known microRNAs associated with colorectal cancer among 353 microRNAs. Based on the proposed BNNRMDA method, we compute the association score for each microRNA-colorectal cancer pair. The results show that 129 known microRNAs associated with colorectal cancer in the HMDD database have the highest association scores with colorectal cancer and are ranked as top 129. We continue to investigate the following 30 miRNAs that have higher association scores with colorectal cancer and are ranked as 130-159. The results are shown in Table 3 and Figure 2. From Table 3 and Figure 2, we can find that 8 microRNAs are known to associate with colorectal cancer in the HMDD database. In addition, the remaining 22 microRNAs are inferred to associate with colorectal cancer and are reported by publications. The results confirm the strong microRNA identification performance of BNNRMDA for colorectal cancer. In addition, we predict that hsa-mir-193b and hsa-mir-7 days may associate with colorectal cancer and need validation. Conclusion Colon cancer and colorectal cancer are two of leading causes of cancer-related deaths worldwide and are seriously threatening human health. Inference of diagnosis or prognosis biomarkers for colon cancer and colorectal cancer can help to evaluate their initiation, progression and therapeutic response. 
In this study, we developed a new microRNA-disease association prediction method, BNNRMDA, to find possible microRNA biomarkers for colon cancer and colorectal cancer. BNNRMDA effectively integrated disease semantic similarity and GAPK similarity, microRNA function similarity and GAPK similarity, and bound nuclear norm regularization. Compared to other five classical microRNA-disease association prediction methods, BNNRMDA obtains the best AUC of 0.9071, demonstrating its powerful microRNA-disease association prediction performance. We continue to use the proposed BNNRMDA method for finding possible microRNA biomarkers for colon cancer and colorectal cancer. The results show that hsa-miR-103a could be a potential biomarker of colon cancer and hsa-mir-193b and hsa-mir-7 days could be potential biomarkers of colorectal cancer. Our proposed BNNRMDA method fully considers the affect of Gaussian association profile similarity on the prediction performance. In addition, the bound nuclear norm regularization approach can effectively learn the intrinsic distribution of data. Therefore, BNNRMDA significantly outperform other MDA prediction methods. Although BNNRMDA obtains better AUC, its performance including AUC, precision, recall, and accuracy need to further improve. In the future, we will improve the bound nuclear norm regularization model to discover possible biomarkers for colon cancer and colorectal cancer. Data availability statement The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author. Author contributions Conceptualization: S-YZ and CQ; Methodology: S-YZ, X-LL and CQ; Project administration: CQ, YW, X-LS and B-BJ; Software: S-YZ, X-LL and CQ; Writing-original draft: S-YZ, X-LL and CQ; Writing-review and editing: S-YZ, and CQ. Funding This work was supported by the Medical and Health Science Technology Development Program in Shandong Province (202104080159) and Science Technology Development Program in Weifang City (2021YX007). Conflict of interest Authors YW, XS, and BJ were employed by the company Geneis (Beijing) Co. Ltd. In addition, this manuscript was conducted by a multicenter study initiated by the corresponding author (CQ). Geneis Beijing Co., Ltd. Contacted the doctors in hospitals around the country. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
LOCKE A Defense of Locke’s Moral Epistemology : In An Essay concerning Human Understanding , John Locke provides an empirical account of all of our ideas, including our moral ideas. However, Locke’s account of moral epistemology is difficult to understand, which leads to mistaken objections against his moral epistemological theory. In this paper, I offer what I believe to be the correct account of Locke’s moral epistemology. This account of his moral epistemology resolves the objections that morality is not demonstrable, that Locke’s account fails to demonstrate the normativity of statements, and that Locke has not provided us with the means to determine the correctness of the moral rules. Introduction In An Essay concerning Human Understanding, John Locke provides an empirical account of all of our ideas, including our moral ideas. However, Locke's account of moral epistemology is difficult to understand. While much work has been expended on Locke's moral theory, little of that has been directed at explaining Locke's moral epistemology. One notable exception is by Catherine Wilson. 1 Her account addresses the different sources of our moral ideas, Locke's rejection of innate ideas, Locke's comparison of morality to mathematics, and morality being a mixed mode. However, her paper is focused more on issues with Locke's moral epistemology than with presenting a cogent account of his moral epistemology. The clearest account Wilson provides is an attempt to articulate how morality is demonstrable according to Locke. She argues that Locke's account fails to show that morality is demonstrable. Her reasons seem to be that Locke's account of demonstrability relies on set definitions of moral words. Given that people can contest the meaning of words, then morality is not demonstrable. Second, even if morality is demonstrable, we "cannot claim to have demonstrated a normative statement." 2 Her argument seems to be that although Locke can show that, say, theft is unjust, Locke cannot show that theft is wrong. A related worry to Wilson's objection is that Locke has not provided us with a means to assess the correctness of our moral rules. 3 In this paper, I will argue that Locke can respond to all three objections. I am not claiming that Locke's theory is correct but only that Locke does address these objections. However, I will argue that while Locke can respond to these objections, Locke' account is unlikely to provide the certainty needed to know if one is following the divine law that Locke believes is the foundation of true morality. I believe a significant source of Wilson's criticisms of Locke is due to an incomplete account of Locke's moral epistemology. Her focus on the definitions of words rather than the lawgiver as the source of moral knowledge is a key error that she is making, which leads to her mistaken objections to Locke. Therefore, I will begin by explaining what I believe is the correct interpretation of Locke's moral epistemology. I will then explain Wilson's objections to Locke and how Locke's theory provides an explanation, satisfying or not, to her objections against his theory. I conclude by attempting to demonstrate how a proper accounting of Locke's moral epistemology could provide us with the means necessary to know if we are following the divine law. While promising, my account is unlikely to provide the certainty we would want if we are trying to follow the diving law. 
However, Locke's moral epistemology does provide us a means for assessing at least the degree of correctness of our moral rules. Moral Ideas as Mixed Modes Understanding Locke's moral epistemology requires an understanding of the nature and origin of our moral ideas. Locke claims that moral ideas are about human actions, and "their various Ends, Objects, Manners, and Circumstances" (II.xxviii.4,351). 4 Locke classifies moral ideas as mixed modes. According to Locke, "these mixed Modes being also such Combinations of simple Ideas, as are not looked upon to be characteristical Marks of any real Beings that have a steady existence, but scattered and independent ideas, put together by the Mind, are thereby distinguished from the complex Ideas of Substance" (II.xxii.1, 288). Mixed modes are created "often using an active power" of the mind. These ideas have a "constant Existence, more in the Thoughts of Men, than in the reality of things" (II.xxii.2,288). Since these ideas do not presuppose a corresponding substance for reference, it is possible for the same set of simple ideas to result in different mixed modes. Mixed modes are created by the mind and do not resemble ideas produced by our senses. If we observe a tree, we have various simple ideas about that tree that we unite in our minds to form our idea of that tree. When it comes to a mixed mode, we have various simple ideas, but they are combined arbitrarily by the mind. If there is an outside world, we like to think that our idea of a tree matches the actual tree. However, there is nothing in the outside world for mixed mode ideas to correspond to. While Locke describes mixed mode ideas as arbitrary, these ideas are not random. The mind has created them because these ideas are useful (III.v.6,431). Adultery is a mixed mode. We have the idea of sex and the idea of marriage. We label sex outside of marriage as adultery. Murder is another example of a mixed mode. We have the idea of killing, the idea of voluntary action, the idea of a sheep, and the idea of a person. When we combine the idea of a man committing a voluntary act of killing another man, we label it murder. If it is killing a sheep, we do not consider it murder. Other ideas can be added to murder to specify how in one situation a killing is murder and not in other cases. By this process, we create our moral ideas. Given that all we have knowledge of are our ideas, then moral ideas are just as real as our idea of a tree. Mixed mode ideas are created by combining simple ideas. However, in the process of making a moral judgment, we create additional moral ideas. If one judges an action, that judgment expresses a relation. A relation is another means of creating a complex idea. A moral relation exists when we see "Conformity or Disagreement" of "Men's voluntary Actions" to some rule (II.xxviii.4,350). The first step is to develop a moral idea. That moral idea is then used to create rules. We can then compare an action to some rule that governs that action. For example, polygamy is having more than one spouse-usually more than one wife. This does not tell us anything about if having more than one spouse is right or wrong. If rules exist about polygamy, then we compare a person's action or intended action to that rule. If a rule states that one should not engage in polygamy and a man is married to more than one wife, then his action is in disagreement with that rule. Without rules, moral relations are impossible. 
The Lawmaker and Moral Relations In order for there to be rules, there must be a rule-maker or a lawmaker. The concept of a lawmaker plays a central role in Locke's moral epistemology. The lawmaker is the source of the rules that are necessary for a moral relation. Those rules are necessary for us to judge whether an action is morally good or evil. Good and Evil, as hath been shewn, Bk.II.Ch.XX. § 2. and Ch.XXI. § 42. are nothing but Pleasure or Pain, or that which occasions, or procures Pleasure or Pain to us. Morally Good and Evil then, is only the Conformity or Disagreement of our voluntary actions to some Law, whereby Good or Evil is drawn on us, from the Will and Power of the Law-maker; which Good and Evil, Pleasure or Pain, attending our observance, or breach of the Law, by the Decree of the Law-maker, is that we call Reward and Punishment. (II.xxviii.5,351) Locke is using "good and evil" in multiple ways in this passage. The first is how we judge an action to be morally good or evil. We judge our action as morally good not because of anything intrinsic to that action but because of its relationship to the law. If you obey the law, then your action is judged as good or right; if you break the law, it is judged as evil or wrong. Second, Locke is explaining that our concepts of good and evil depend on pleasure and pain. 5 An action is good if it brings pleasure and evil if it brings pain. Third, that the 5 J. B. Schneewind believes that pleasure is a "stand in" for whatever a person prefers. (Schneewind "Locke's Moral Philosophy," 204) His conclusion is based on the following passage from Locke: "So the Greatest Happiness consists, in having those things, which produce that greatest pleasure; and in the absence of those, which cause any disturbance, any pain. Now these, to different Men, are very different things." (II.xxi.55,269) It is true that if two people disagree over what is pleasurable, but still seek out pleasure, then it appears that pleasure can mean different things to different people. However, Schneewind has gone farther than what Locke intended. In section 55 Locke is examining how people, if they are motivated by happiness, come to live different lives. In this section, Locke is referring to food. Lobster may be a delight to some people; but those who hate lobster will find the thought of eating it nauseating. In this case, a person may prefer hunger to eating lobster. This is a case of preferring the lesser of two pains. It would be absurd, however, to suggest that hunger is bringing pleasure. Given that Locke is not discussing pleasure itself, we have no reason to assume that pleasure means whatever a person prefers. We might be justified in concluding that a person always acts on one's preferences, but a preference is not the same as pleasure. We can prefer things because they bring the least amount of pain. Pain and pleasure are simple ideas. Certainly, it is true that, at least for some things, people derive pleasure from different things. Yet it seems obvious to us, and I suppose to Locke as well, that there are going to be certain fundamental things that bring pleasure and pain to all persons. Punishment and rewards based on pleasure and pain are a central feature of Locke's moral epistemology. If pleasure and pain are extremely variable, then a system of good or evil that is the result of our action conforming or disagreeing with the law is the result of the lawmaker. The lawmaker is the one that will reward or punish. 
Locke appears to be claiming that a law that is not backed by reward and punishment will render us incapable of judging if obeying a law is morally good or evil. According to Locke, in order for an action to be considered morally good or evil, it cannot be "the natural product and consequence of the Action it self" (II.xxviii.6,352). If an action conforms to a rule, and the action results in pleasure, then the action is the source of good and not the law. Morality expresses a relationship between an action and some rule. In order to bring good and evil into the relationship, it must be connected to the moral rules and not the action. The role of reward and punishment serves the purpose of connecting the laws to good and evil. By doing so, it is possible to now view the relationship between the rule and an action as good or evil. Further, morality is the relationship between a rule and an action. Combining those two ideas, we are in a position to say that if one obeys a law that has reward and punishment attached to it, we can judge a person's action to be morally good or bad. Without this interconnectedness, we can only judge the action in a non-moral sense. That is to say, we can only judge whether performing that action brings pleasure or pain. Based on Locke's argument, it seems that a law without reward or punishment is not a real law. Thomas Hobbes might have influenced Locke on this point. According to Hobbes, "where there is no common Power, there is no Law." 6 While Locke's language is not as strong as Hobbes's language, it does express a similar sentiment. As Locke writes, "for the law of nature would, as all other laws that concern men in this world, be in vain, if there were no body that in the state of nature had a power to execute that law." 7 Given the fact that laws require an enforcement mechanism in order to be of use, Locke writes that everyone in the state of nature has the authority to execute the law of nature. Whether we should view a law written in vain as still a law may be disputed. My purpose is to point out that Locke's concept of a law is close to a Hobbesian interpretation of law. Hobbes and Locke agree that the nature of a law is to provide rules to govern a person's action. Punishment and reward serve as a psychological motivation to follow the law. "We must, where-ever we suppose a Law, suppose also some Reward or Punishment annexed to that Law. It would be in vain for one intelligent Being, to set a Rule to the Actions of another if he had it not in his Power, to reward the compliance with and punish the deviation from his Rule" (II.xxviii.6, 351). If it would be vain to make a law, if a person did not have in his power the means to reward and punish, then it must be because a person will not follow the law unless reward and punishment are involved. Locke states, "that every rewards and punishments will not be effective. Therefore, while there is some variability, I think it is wrong to conclude that pleasure is whatever a person prefers. While pleasure is a preference people have to pain, pleasure is not the same as a preference. Pleasure is nothing more than the sensation of physical or mental pleasure-a simple idea. intelligent Being really seeks Happiness, which consists in the enjoyment of Pleasure, without any considerable mixture of uneasiness" (II.xxi.62,. A person will never act to choose not to pursue what will make him the happiest unless that choice is based on a mistake. People act to pursue pleasure and avoid pain. 
If you make a law, people will have a tendency to follow that law as a means to acquiring pleasure and avoiding pain. It would be vain to punish and reward someone if it had no effect on how that person acts. My interpretation is not without textual difficulties. Locke states, "our Actions are considered, as Good, Bad, or Indifferent; and in this respect, they are Relative, it being their Conformity to, or Disagreement with some rule, that makes them to be regular or irregular, Good or Bad; and so, as far as they are compared with a rule, and thereupon denominated, they come under relation" (II.xxviii.15,359). In this passage, Locke does not mention punishment or reward. We judge an action as good or bad if it conforms to some rule. Yet if this is the case, it is in tension with Locke's claim that our ideas of good and evil are tied to our simple ideas of pleasure and pain. Pleasure gives us the idea of good and pain gives us the idea of evil. The idea of good and evil cannot come from a relation. All we can see is that an action agrees or disagrees with some rule. Unless one wishes to define good as conformity, then we need a means to connect pleasure and pain to the rule. It is only by this connection that we are able to use the terms good and evil and preserve their meanings. This is why I insist that reward and punishment are not only necessary for motivation; they are also necessary to generate a relationship in which we can evaluate an action as good or evil. Without some punishment or reward rendering pain or pleasure to us based on the disagreement or conformity of our action to some rule, we are incapable of judging an action as morally good or evil. There are two important aspects to Locke's theory of our idea of morality. First, there must be a relation. The relation allows us to judge if an action conforms or disagrees with some standard. Second, we cannot judge that relationship as a moral relationship without rewards or punishments. The rewards and punishments cannot come from the act itself because morality is about a relation to some rule. If it were to come from the act itself, then morality is no longer about the conformity of one's action to some rule. For example, if I prick my finger on a cactus, that causes pain. I believe that pricking my finger is bad because of it. However, it is not morally bad unless there is some rule by a lawmaker proclaiming not to prick one's finger. Further, I cannot judge pricking my finger as morally bad unless the lawmaker will punish me for pricking my finger. Rewards and punishments attached by a lawmaker to adherence to rules are the source of morality. They are also the source of moral motivations and obligations. One ought to follow the rules because of the rewards and punishments. Before discussing the different types of moral relations, I believe a further example may help to clarify the process. Suppose one wants to know if adultery is wrong. Sex, regardless of marital status, tends to bring pleasure. However, pain or pleasure by itself does not allow us to know if the action is moral or not. In order for a moral rule to be a rule, there must be a lawmaker who will reward or punish a person for obeying or breaking the law. If the lawmaker proclaims adultery to be wrong but cannot reward and punish, then adultery is not wrong. If the lawmaker proclaims adultery to be wrong and yet does not attach rewards or punishments for adhering to or breaking the rule, then adultery is not wrong.
Adultery is only wrong when the lawgiver says it is wrong and attaches rewards and punishments for obeying or breaking the rule.
Three Types of Moral Relations
With this groundwork in place, Locke describes three types of relations. Each relation results in different moral concepts such as duty, criminality, and virtue. Depending on the type of lawmaker, the result is a different type of relation and a different label for expressing that relation. Each relation is based on the type of lawmaker that creates the rules. The three moral relations Locke describes are the divine law, the civil law, and the law of public opinion. Divine law corresponds to our concept of duty and sin. It is "that Law which God has set to the actions of Men, whether promulgated to them by the light of Nature, or the voice of Revelation." Locke considers the divine law to be the "only true touchstone of moral rectitude." By comparing an action to the divine law, "Men judge of the most considerable Moral Good or Evil." God is the ultimate authority because he has the power to reward and punish people in an "infinite weight and duration, in another Life" (II.xxviii.8, 352). The second moral relation Locke describes is the civil law. Governments create rules and if a person breaks the rules, that person is considered a criminal. Every government has a system of rewards and punishments to reward those who obey the law and punish those who break it. The third type of moral relation is "the Law of Opinion or Reputation" (II.xxviii.10, 353). These rules are based on the opinions of one's fellow citizens. Many actions the public disapproves of become part of the civil law, but not every action offensive to public sentiment becomes criminal. In many cases, Locke believes people will base their approval or disapproval of an action based upon the divine law. God may punish in the afterlife, but he does not do so during a person's life. Society, however, can punish a person for breaking the divine law. The public may also have rules not based on the divine law. This is especially true in places without a concept of God prescribing a set of laws. A person is either praised or blamed for one's actions depending on the action's conformity or disagreement with the society's norms. If a person obeys the rules, one may be considered virtuous; if one disobeys them, one is considered acting on vice. It is public pressure that serves as reward or punishment for as Locke wrote, "no Man scapes the Punishment of their Censure and Dislike, who offends against the Fashion and Opinion of the Company he keeps, and would recommend himself to. Nor is there one of ten thousand, who is stiff and insensible enough, to bear up under the constant Dislike, and Condemnation of his own Club" (II.xxviii.12, 357).
Summary of Locke's Moral Epistemology
At this point, I would like to summarize Locke's moral epistemology before turning to objections to his theory. Moral ideas are mixed modes. We combine simple ideas in the mind that deal with human actions and their ends. This is how we develop ideas such as lying, adultery, and murder. Rules, or laws, are then generated based on those moral ideas. For example, the state can create a law that forbids murder. We can now compare a person's action or proposed action to that law. This generates a relationship of conformity or disagreement between the action and the law. This relationship results in additional moral ideas such as duty, criminality, and virtue.
The different names are used to express a relationship between a specific type of law and an action. Our ideas of good and evil are the result of our experiences of pleasure and pain. The lawmaker must attach punishment and rewards to the lawmaker's rules in order for us to view an action as either morally good or evil. Without this attachment, we can only see an agreement or disagreement. Yet once the lawmaker attaches pleasure and pain to the rules, we are able to judge an action not just in conformity with some rule but that the action is morally good because obeying it results in pleasure or the avoidance of pain.
Is Morality Demonstrable?
With Locke's theory described, we can now turn to Wilson's objections. Wilson objects to Locke's belief that morality is demonstrable in the same way that mathematics is demonstrable. "The Idea of a supreme Being, infinite in Power, Goodness, and Wisdom, whose Workmanship we are, and on whom we depend; and the Idea of ourselves, as understanding, rational Beings, being such as are clear in us, would, I suppose, if duly considered, and pursued, afford such Foundations of our Duty and Rules of Action, as might place Morality amongst the Sciences capable of Demonstration" (IV.iii.18, 549). In making his case that morality is demonstrable, Locke draws a parallel with mathematics. Mathematical concepts are mixed modes and exist only in the mind. Yet, mathematical knowledge counts as real knowledge. Morality also exists only in the mind and, like mathematical concepts, is real knowledge. Further, in the same way that one can reason about mathematical concepts, one can reason about morality. "Moral knowledge is as capable of real Certainty as Mathematicks. For Certainty being but the Perception of the Agreement, or Disagreement of our Ideas; and Demonstration nothing but the Perception of such Agreement, by the Intervention of other Ideas or mediums, our moral ideas, as well as mathematical, being Archetypes themselves, and so adequate, and complete Ideas, all the Agreement or Disagreement, which we shall find in them, will produce real Knowledge, as well as in mathematical Figures" (IV.iv.7, 565). Given that moral ideas only exist as ideas, how do we demonstrate the truth of those ideas? Locke uses property as an example. Take the proposition "there can be no injustice in a world without property" (IV.iii.18, 549). We can see this is true because of our concepts of injustice and property. Injustice is the violation of someone's rights to an object. Property is to have a right to some object. Therefore, in a world without property there is no means for a person to commit an injustice. If one were to maintain that it is possible to commit injustice in a world without property, we know his statement to be false given our definition of property and injustice. The truth of a statement depends on its conformity or disagreement to our ideas. Definitions serve as a means to specify the content of ideas. If morality is demonstrable, then definitions are necessary. We can only judge the truth or falsity of a statement of a moral proposition by comparing it to the relevant moral ideas. Wilson's objection is that definitions of moral terms are always contestable. "Disagreement over precisely what murder entails or excludes cannot be resolved by mathematical methods, since mathematics begins with precise stipulative definitions." 8 Locke believes many of the problems with morality not being demonstrable relate to definitions.
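To make explicit the sense in which such a proposition can be demonstrated from definitions alone, the property example just discussed can be regimented in a simple first-order sketch. The symbolization below is my own illustration and not Locke's; the predicate names are stipulated only for this purpose.
Let $\mathrm{Prop}(x,o)$ abbreviate "person $x$ has a right to object $o$" and let $\mathrm{Inj}(a)$ abbreviate "action $a$ violates some person's right to some object."
1. $\forall a\,\bigl(\mathrm{Inj}(a) \rightarrow \exists x\,\exists o\,\mathrm{Prop}(x,o)\bigr)$ (from the two definitions)
2. $\neg\exists x\,\exists o\,\mathrm{Prop}(x,o)$ (assumption: a world without property)
3. Therefore, $\neg\exists a\,\mathrm{Inj}(a)$ (from 1 and 2)
The conclusion follows with the same certainty as a geometrical theorem, but only because the ideas signified by "property" and "injustice" have already been fixed; this is exactly why disputes over definitions threaten the demonstrability Locke claims.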
If we could agree on the "Collection of simple Ideas" that constitute each term and always use that term to refer to that specific collection of simple ideas, then we will "come nearer perfect Demonstration" of morality (IV.iii.20, 552). Wilson is pessimistic that an agreement on terms can be reached. Philosophers do spend considerable time arguing over what moral terms mean. Locke, however, does not believe it is agreement that resolves the issues. 9 "That where GOD, or any other Law-maker, hath defined any Moral Names, there they have made the Essence of that Species to which that Name belongs; and there it is not safe to apply or use them otherwise: But in other cases 'tis bare impropriety of Speech to apply them contrary to the common usage of the Country" (IV.iv.10, 567-68). Thus, according to Locke, the lawgiver determines the meaning of moral words. The meanings of moral words are not contestable if a lawgiver has provided a definition. Wilson believes that mathematics is demonstrable because of stipulative definitions. The lawmaker provides us with the stipulative definitions needed. Thus, morality would be demonstrable in the same way. This leaves open the issue of moral terms that are not set by a lawgiver. Locke does agree with Wilson that "the precise Collection of simple Ideas they stand for" are not easily agreed on (IV.iii.19, 550). Thus, when a person uses moral language to communicate one's idea to another person, the person hearing the word may have a different idea produced from hearing the word. This disagreement is the reason Locke stresses the need for definitions. Yet, if we cannot agree on the meaning of the words not defined by a lawmaker, does that mean we are stuck at the level of arguing over definitions? Locke believes we can avoid arguing over definitions: "If we but separate the Idea under consideration from the Sign that stands for it, our Knowledge goes equally on in the discovery of real Truth and Certainty, whatever Sounds we make use of" (IV.iv.9, 567). If we focus on the ideas involved and not the words being used, then we can avoid arguing over definitions. While inconvenient, at least we avoid the argument over definitions. However, I believe there is perhaps a related concern. When one is arguing over the definition, one is also arguing over whether something is moral or not. One can say, "theft is just." The question then becomes whether the person is using just to mean what Locke and others would label unjust or whether the person means that theft is morally appropriate. While the first option is confusing, there is not a disagreement over the rightness of theft. The second option is a greater worry, but one resolved by Locke. Morality requires a lawgiver.
8 Wilson, "Moral Epistemology," 397.
9 Antonia LoLordo defends the claim that morality is demonstrable. According to LoLordo, "Locke makes clear both that a demonstrative science of morality is possible and why it's possible: because we know the real essences of the things our moral terms stand for. This is a consequence of the fact that moral ideas are ideas of mixed modes." (Antonia LoLordo, Locke's Moral Man [Oxford: Oxford University Press, 2012], 82) Substance is mind-independent and mixed modes are mind-dependent. In virtue of being mind-dependent, we can have complete knowledge. This is what makes morality demonstrable in theory. Her concern about demonstrability is rooted in issues of what we can have knowledge of, not in the issues related to Wilson's objections.
If a lawgiver has stipulated a law, then that is what determines whether theft is right or not. A person cannot make theft right if a lawgiver has a law specifying that it is wrong. In cases where a lawgiver is not involved, disagreement is still possible. The easiest solution is to have a lawmaker settle the issue. Lacking that, we are stuck in a definitional argument that is actually about the rightness or wrongness of an action. If no lawgiver has defined murder, and one wants to know whether killing an adulterer is murder or not, one is most likely not just arguing over whether it should be included in the definition. Rather, it is by including or excluding the adulterer from the definition of murder that one is also either approving or disapproving of killing adulterers. In this case, one is asking what the morally correct action is. This is moving beyond Locke's moral epistemology into normative ethics. Locke is explaining the origin of our moral ideas, not the justification of them. From a pragmatic point of view, Locke addresses most of the concerns over demonstrability. The lawgiver provides the definitions permitting us to demonstrate that an action is right or wrong.
Can Locke Demonstrate Normative Statements?
Wilson never addresses whether morality would be demonstrable if we did agree on the definitions of moral terms. It seems plausible, given Locke's account of demonstrability, that agreed-upon definitions would solve many of the issues with the demonstrability of morality. The greater worry Wilson has is that "we cannot claim to have demonstrated a normative statement." We can demonstrate what actions are unjust, but we cannot demonstrate that committing an injustice is wrong. Wilson gives a few examples in support of her argument. For example, she claims that we cannot demonstrate that "slavery is not unjust, since it deprives no one of his property." 10 However, Wilson's objection is unsuccessful if we stick to Locke's theory. We make moral judgments based on the relationship between a rule and an action. 11 One could maintain that slavery is not unjust, but at the same time, slavery is morally evil. If there is a rule that forbids slavery, then slavery conflicts with that rule. Therefore, we can judge slavery as morally evil. Wilson ignores the role of moral relations. Moral relations are the source of normativity. Locke demonstrates that slavery is unjust in the Two Treatises of Government. 12 We are God's property. To make someone a slave is to take God's property. Therefore, slavery is unjust. If one does not believe in God, one can appeal to the idea of self-ownership. Each person owns oneself; thus, slavery denies someone property of oneself. Finally, even if there is no property involved, we can demonstrate that slavery is immoral even though it is not unjust. It is immoral because there are rules forbidding it. Judging by the criticism, I believe that Wilson is overlooking a significant part of Locke's theory. In fact, she is falling into a mindset that Locke himself comments on. When we say that something is theft, we are not merely describing the action, but condemning it. Yet, while a person may do this, one is conflating one's ideas. First, there is the idea of what theft is and second there is the rule claiming that theft is wrong. The rule stipulates when the taking of another person's property is permissible or impermissible (II.xxviii.16, 359-60).
If we separate the rule from the label of the action, we arrive at what I believe is Wilson's actual objection. Wilson wants to know whether it is possible to demonstrate the correctness of moral rules. Locke's account allows us to know whether an action conforms to or conflicts with a rule, but what seems missing is an account of whether the rule is the correct rule to follow. When Wilson wants to show that slavery is unjust, she wants a demonstration that slavery not only conflicts with a rule but that conflicting with the rule is morally wrong. I believe this is her argument because she wants to show the demonstrability of normative statements. Normative statements deal with what one should or should not do. What a person should or should not do is tied to the rules. If Wilson is not making this claim, it seems to me she should be because Locke easily answers her arguments, taken as they are stipulated. He can demonstrate that without property, there is no injustice, but he never claimed this showed that injustice is truly wrong. If one wants to know whether an unjust act is wrong, we compare it to a rule issued by the lawmaker who has the power to reward and punish. Reward and punishment provide one with pleasure and pain. Pleasure and pain are the source of our concept of good and evil. With the basic assumption that a person should do the good and avoid the bad, Locke has demonstrated a normative statement.
Are There Correct Moral Rules?
A more serious objection against Locke, I believe, concerns demonstrating that the rules are the correct rules. This is what is necessary in order to demonstrate that something truly is right or wrong. If we measure something by a yardstick, we know the relationship between the yardstick and some object, but we do not know that the yardstick really is a yard. That is what Locke's account seems to fail to provide. We can know if an action conforms to a moral rule or not. What we do not seem to know is if the rule is the correct rule. In terms of our knowledge of the world outside of our ideas, we have some assurance that our ideas match the way the world is. It may turn out that our ideas do not match the way the world is; however, we at least have some assurances that they do. In terms of moral knowledge, Locke needs to provide a method for knowing if moral rules correspond to the correct moral rules. Moral epistemology needs to provide an account of our moral ideas, but it also needs to provide a method for judging whether our moral concepts are the correct ones. Locke himself seems to endorse this claim when he writes "there being no part of Knowledge wherein we should be more careful to get determined Ideas, and avoid, as much as may be, Obscurity and Confusion" than our moral ideas (II.xxviii.4,351). Having correct moral rules, I believe, depends on the rules being universal. This is not to suggest that circumstances do not matter, but assuming the exact circumstances are the same, the morality of an action should be judged the same across all cultures. If killing a baby is murder, and murder is always wrong, then this should be true across all cultures. It cannot be acceptable in China but unacceptable in Norway. It might be the case that people in China personally believe that it is acceptable, but their belief does not make something right or wrong. If my assumption is correct that having the correct moral rules requires them to be universal and objective, then there are three questions we can ask of Locke.
First, are any of the three sources of moral rules described by Locke the source for the correct rules? Second, if none of the three sources of moral rules are correct, then does Locke provide us with the tools to build a correct set of rules? Third, how can we even judge what rules are the correct ones? Locke only refers to the divine law as the "only true touchstone of moral Rectitude" (II.xxviii.8, 352). Given that he realizes public opinion varies by culture, if any of the three sources of law is the correct one, then Locke must believe it is the divine law.
Do Any of the Sources of Moral Rules Provide the Correct Moral Rules?
Regarding the first question, of whether any of the three sources of moral rules are the source of the correct rules, I believe Locke has his sympathies but not a definitive argument. Locke refers to the Law of Nature as an "unalterable Rule" that people ought to use to judge a person's moral rectitude (II.xxviii.11, 354n). Locke also seems to give an argument for why we must obey God's law: "God has given a Rule whereby Men should govern themselves, I think there is no body so brutish as to deny. He has a Right to do it, we are his Creatures: He has Goodness and Wisdom to direct our Actions to that which is best: and he has Power to enforce it by Rewards and Punishments of infinite weight and duration, in another life" (II.xxviii.8, 352). I do believe that Locke believes in some version of the divine command theory. 13 However, in the quoted section, he is describing why God is a lawgiver and why we should obey the rules. He details similar justification for the civil law. The commonwealth has the right and power "to protect the Lives, Liberties, and Possessions, of those who live according to its Laws, and has power to take away Life, Liberty, or Goods, from him, who disobeys" (II.xxviii.9, 352-53). In the same manner that Locke points out why we should follow the laws of the state, he is pointing out why we believe God to be a lawmaker and why we ought to follow the divine law. The Essay is a work of human understanding and not a moral theory. He is attempting to provide an account of how we come to have moral knowledge. These moral relations are how "Men judge whether their Actions are Sins, or Duties; . . . Criminal, or Innocent; and . . . whether they be Vertues or Vices" (II.xxviii.7, 352). We, including Locke, may judge the divine law to be the true law, but this is our assessment and not necessarily the actual truth. "And thus, much for the Relation of humane Actions to a Law, which therefore I call Moral Relations. 'Twould make a Volume, to go over all sorts of Relations: 'tis not therefore to be expected, that I should here mention them all" (II.xxviii.17, 360).
13 Schneewind argues that Locke "tried to work out a Grotian theory of natural law in voluntarist terms." (Schneewind, "Locke's Moral Philosophy," 222) Schneewind does present a compelling case that Locke is a natural law theorist, yet most of the evidence is from Locke's other works. It is my conjecture that Locke's overall epistemological project could not justify natural law. Thus, we see Locke using language that indicates his personal beliefs, but those cannot be justified through Locke's epistemological theory. If Locke were to be arguing for natural law, we should expect a clear argument instead of rhetorical phrases, but there is no argument in the Essay. Hence, we know Locke's position; but it is one that Locke realized he could not justify.
Locke is not providing an account of all possible moral relations. There could be other relations. In order to have a moral relation, all one needs is a lawmaker with the ability to reward and punish and then to create some rule covering the permissibility of some human action. Locke is providing an account of how we can have moral knowledge at all. Locke himself denies that he is giving us an exact standard of morality and that topic is for "another Enquiry" (II.xxviii.20, 362).
The Building Blocks of Normative Theory
It does not seem as though we can offer a satisfactory account of how one can know if one is correctly following the divine law. This should not come as a surprise because Locke is only offering us an account of moral relations and is not claiming we can know if the rule giving us the relation is the correct moral rule (II.xxviii.20, 362). While a full normative theory is beyond Locke's project, he does need to provide an account of the building blocks necessary for building a moral theory if his account of moral epistemology is to be adequate. Locke provides an account of the origin of our moral ideas. We pass moral judgments based on the relation an action has to some rule. We perform an action because of rewards and punishment. We are motivated to seek pleasure and avoid pain. One could press Locke and ask why we should avoid pain and seek pleasure. Yet to ask this is to ask a normative question. Locke is not offering a normative theory. He is offering, at least partially, a metaethical theory. One could ask why we do seek pleasure and avoid pain. This question can be answered by empirical study. For Locke, it just appears to be true that people seek pleasure and avoid pain. Locke has described how we come to view something as good or evil. He has provided us with an account of normativity, motivation, moral assessment, and moral ideas. While one may not agree with his account, if Locke is correct, then it does appear he has provided us an account of all of the moral concepts needed to build an ethical theory.
How We Judge a Rule to be the Correct Rule
This leaves us with the third question of how we can judge a particular rule to be correct. We do know that Locke holds that the true moral law is unalterable (II.xxviii.11, 354n). Locke describes three moral relations. This means there are three separate standards to judge an action by. It is conceivable that the divine law requires a person to perform an action, but the civil law forbids it. If a person performs the action required by the divine law, then we judge that person's action to be good, but if at the same time it breaks the civil law, we judge it to be bad. How do we know if the action is good or bad? We would need to decide on what set of rules is correct. This involves a relation. We need to compare the actual rules to the correct rules. Yet we lack the ability to know what rules are correct, so it is impossible to make a comparison. While there may be an objective unalterable moral rule that applies to everyone as Locke claims, our ability to know those rules is something to be doubted. I believe it is possible for Locke to respond to the concern that we have no means to judge whether a given set of moral rules are the correct rules. Locke claims that practical sciences deal with "the Skill of Right applying our own Powers and Actions, for the Attainment of Things good and useful.
The most considerable under this Head is Ethicks, which is the seeking out those Rules, and Measures of humane Actions, which lead to Happiness, and the Means to practise them. The end of this is not bare Speculation, and the Knowledge of Truth; but Right, and a Conduct suitable to it" (IV.xxi.3, 720). The purpose of ethics is to discover what rules promote happiness. If we assume that happiness is pleasure, then the task of ethics is to lay out the rules that best promote people's pleasure. Insofar as a set of rules promotes pleasure, they are right; insofar as they do not, then they are wrong. The correct set of rules will be those rules that provide the most happiness. Further, with this view of ethics, we can understand why punishment and reward are attached to rules. When we get to normative ethics, then we can assess rules by how much they promote the good where good is understood as pleasure. People make mistakes in understanding what the good is. We are often led astray by the appearance of immediate good and ignore a larger good in the future (II.xxi.63,275). Given that we can misjudge things, then this allows for moral progress. The task of ethics, therefore, can be seen as developing and modifying rules over time in pursuit of the correct set of rules. Given that people are often focused on the short-term pleasure over a long-term pleasure, even if the long-term pleasure is the greater of the two, then we need to find a method to make people act only on their long-term pleasure. A proper rule will be one that promotes the most pleasure. Often this pleasure will not be an immediate gain of pleasure, but one that happens later. If you make it in a person's short-term interest to follow the rules by the use of rewards and punishments, then you help ensure that the person will benefit in the long term as well. One possible objection to my account of how we judge the correctness of moral rules, relates to the use of pleasure and pain. Previously it appeared as though we could only judge something as good or bad based on a rule and if that rule was backed by punishment and rewards. The use of reward and punishment are meant to provide us with a means of assessing actions as morally good or evil and not the rules themselves. Pleasure and pain are the standards by which we evaluate anything as good or bad. At this point, we have stepped back from evaluating if an action is good or bad, to evaluating if the rules are good or bad. The morality of an action is evaluated by conformity to some rule. In interpreting Locke, we always need to keep in mind that evaluating anything in terms of good and evil needs to connect directly to pleasure and pain because these are the origins of those concepts. If my account is correct, then Locke has provided us with a means for determining the correct moral rules. Or rather, he has provided us with a way to evaluate the degree of correctness of moral rules. What we are ignorant about is how close the moral rules we have are to the moral rules God enforces. That is problematic if we are concerned with avoiding hell. However, it is not a problem for us knowing if there is a moral relation. It is also not a problem if we accept the idea of moral progress over time. Thus, if we focus on developing a secular morality, then Locke's moral epistemology is more useful because it does not require us to know God's moral rules. 
Yet, even if we accept that God is the source of morality, at least we potentially have a means to judge how close our moral rules are to God's rules by the degree of happiness the rules bring.
Is the Divine Law Demonstrable?
While I have argued that Locke is not attempting to argue in the Essay that the divine law is the correct moral law, some scholars may disagree. Based on Locke's rhetoric, it is clear he does believe God to be the source of the correct moral law. Morality is about a relation and there are as many different possible moral relations as there are possible lawmakers. Are we to take it that every moral relation is equal to another? In this section, I will argue that it is the command of God that makes something moral. However, the content of the divine law must be fixed in a manner that allows people to discover the divine law without the need for revelation if morality is demonstrable. If I am correct, then the correct moral rules must be objective and apply to all. Since we need a lawmaker for a moral relation, we need one that can punish all of humanity. God clearly could do so. However, we could imagine a world government with such power or perhaps an occupying alien force. It seems unreasonable that Locke believes that the commands of God are on par with those of a world government. Yet, both would have the ability to reward and punish all persons for their adherence to law. God has an infinite ability to reward or punish, making God more powerful than any world government. Yet, are we to think that moral truth resides solely in the power of the most powerful lawmaker? Thus, we are left with the question of whether the source of morality resides solely in God's infinite power or in something else. The voluntarist position holds that morality is dependent on God's will. As such, God can make anything moral. Such a position makes morality arbitrary. The intellectualists reject that the source of morality is God's will and place it within reason. Andrew Israelsen argues that God's will cannot be the basis of morality if we believe that morality is capable of demonstration. If morality is based on God's will, then morality is arbitrary, and one cannot make reasonable inferences about moral rules since those rules are arbitrary. Thus, "we must think of Locke as advocating a view of moral concepts that stems necessarily from God's nature rather than arbitrarily from God's will," and by God's nature Locke must mean "God's rational nature." 14 Voluntarists, however, can argue that if morality stems from reason, then God is not necessary for morality. There are at least two additional problems with the intellectualist account. The first stems from the origin point of arbitrariness. For example, we can suppose that God creates the universe and everything within it. Then, God decrees moral laws for humanity. These are both done arbitrarily. Thus, there is no way to infer from aspects of reality to the moral laws God will use to judge humanity. If this is what occurs, then Israelsen is correct. On the other hand, if we situate the arbitrariness only within the creation of the universe, then we could still infer moral rules. This is because the natural law follows necessarily from the nature of the universe. For example, it just happens to be that humans reproduce sexually. It could have been otherwise. This is where things are arbitrary. However, given the way the universe is, the natural laws are set. This allows us to infer the natural law from reality.
The second, and more problematic, aspect of the intellectualist position is that it fails to account for moral obligation. A moral relation requires a command backed up by rewards and punishments. It is power that is the source of the obligation. If we think that reason is the source of moral obligation, then not only is God unnecessary to determine the content of morality, God is unnecessary for moral obligation. Regarding God's will being the source of moral obligation, there is a possible response the intellectualists can make. According to Israelsen, in order for one to follow the divine law, one must know that there is a God who has articulated in some manner what rules to follow and one must know that God will reward or punish one for following or breaking those rules. 15 Drawing on a passage from The Reasonableness of Christianity, Israelsen points out that Locke believes that reason or revelation can provide us with knowledge that God rewards and punishes. 16 While revelation is sufficient, it is not a necessary condition for moral understanding. Reason alone can provide knowledge of moral concepts. 17 Morality flows from God's rational nature and not his will. Thus, human nature and the natural law are fixed by God's nature. If Israelsen is correct, then we can know that God punishes those who break the law. However, that does not address the source of obligation. Presumably, if Israelsen argues that we can know that God rewards and punishes through reason alone, then he recognizes the need of rewards and punishment. He quotes Locke to demonstrate that a law without rewards and punishments would be in vain. However, Israelsen interprets this in terms of motivation and not normativity. God is the enforcer of the law and that provides people with a "motivation to follow the natural law." 18 One interpretation of Israelsen is that he is drawing a distinction between a motivating reason and a justificatory reason. The enforcement of the natural law provides people with a motivating reason to follow it. However, the justification for following the natural law is that it is rational. God's nature is reasonable and morality flows from that. If this interpretation is correct, then the normative force is reason and not the command. This is highly problematic because it seems to remove our ability to evaluate something as morally good or evil. As I previously argued, Locke believes that pleasure and pain are the source of our ideas of good and evil. The lawmaker must attach punishment and rewards to his rules in order for us to view an action as either morally good or evil instead of naturally good or evil. Without this attachment, one can only see an agreement or disagreement with the rule. By attaching pleasure and pain to the rules, one is able to judge an action as morally good because obeying it results in pleasure and breaking it involves pain. Locke is clear that the rewards and punishments must come from the lawmaker and not as a natural consequence of the action (II.xxviii.6, 352). To relegate rewards and punishments to mere motivation strips one of the ability to view a rule as a moral rule. An alternative interpretation is that Israelsen does believe that the enforcement of the natural laws by God is necessary for a moral obligation. In this case, we see that the normative force is through enforcement, but the content of the natural law is through reason. If that is correct, then Israelsen's position seems like the third option argued for by Alex Tuckness. 
The third option claims that the reason we should obey God is God's will, while the content of those moral commands is not determined by God's will. 19 It is the command itself, i.e., God's willing, that makes something moral. However, what is commanded is determined by reason. It is important to keep in mind that for Locke, a law is a type of command. The law requires the ability of the lawgiver to reward and punish for adherence to the law in order for it to be a law. Therefore, in order for God's commands to be commands, God must reward and punish. The result is that Locke is a voluntarist in terms of the grounding of morality but an intellectualist in terms of the content. I do not intend to settle the voluntarist-intellectualist debate in this paper. Rather, what I have argued is that Locke's account of morality requires a lawgiver that rewards and punishes depending on one's adherence to law. God backs the divine law with rewards and punishments. This is necessary for us to have a moral relation and be able to know if something is morally good or evil. The content of the law does not determine if something is morally good or morally evil. For example, we can judge that being beaten is bad because it causes pain. However, we cannot say it is bad in a moral sense without the lawgiver making a rule that is enforced with rewards and punishments. The moral rules that God proclaims may flow from God's rational nature or not. However, a mechanism must be available by which one can know the moral rules without revelation. If the intellectualists are correct that we cannot know the moral rules unless morality flows from God's rational nature, then Tuckness's position would be correct. On the other hand, as I previously suggested, if the natural law flows necessarily from the nature of the universe and the universe is made arbitrarily, then morality may be arbitrary but can still be inferred. In a different universe, morality could have been different. However, given this arbitrarily created universe, this is what the natural law is. We can then infer the natural law from observations of reality. In any case, all I need to assume going forward is that Locke is a natural law theorist because that provides the best explanation for how one can learn morality without the need for revelation.
Can We Know the Content of the Divine Law?
If the divine law is, or can be known through, the natural law, then this raises the question of how one can know that the moral ideas one has about the natural law correspond to God's ideas. Israelsen's argument for the natural law being demonstrable begins by correctly pointing out that Locke thinks we need to look at the ideas and not the words. From here, his account is that there would be an "inferential chain in which one would go from the most basic ideas of natural good and evil, namely pleasure and pain (cf. E.II.XX.2) respectively, and in a movement from the recognition of natural goods to the higher class of moral goods, rightly attach the notions of good and bad to modes in a way commensurate with our rational recognition of Natural Law." 20 Israelsen's account seems to be that humans start with natural good and evil. Humans also have a recognition of the natural law. Then, people connect the idea of natural good and bad to the natural law. This gives us a moral relation allowing us to have moral concepts. However, how is that process supposed to occur? Locke has provided an account where we form the idea of good and bad through pleasure and pain.
However, we can only judge whether an action is right based on whether our actions conform to a rule set down by a lawmaker with the ability to reward and punish. Even if we accept the idea that God's laws are natural laws, the most we can know is that if our action conforms to the natural law, then we are rewarded and if not, we will be punished. Thus, we can say that following the natural law is right and breaking it is wrong. What we cannot know is whether our account of the natural law is the correct account of the natural law. One of Wilson's complaints is that we cannot know that our moral ideas are the correct moral ideas. As I previously argued, Locke's account says that the lawgiver determines the correct meanings. Thus, God's ideas are the correct moral ideas. Yet, there is a problem in that there does not seem to be a way to know that our moral ideas correspond to God's ideas. At least in terms of civil law and public opinion, one has a means to learn the correct moral ideas and can be corrected by others. No such recourse is possible with God. Mark Mathewson raises a similar complaint to that of Wilson. Mathewson begins with the distinction between certain knowledge and certain real knowledge. Certain real knowledge requires that we know not only about relations between ideas but also about the agreement or disagreement of those relations with external reality. 21 Mathewson treats moral knowledge as something that counts as knowledge of external reality because the divine law that one is judged by is external to us. Therefore, we would need to know the divine law to have real knowledge. Mathewson presents a solution to obtaining certain real knowledge of the divine law, but ultimately rejects it. Having knowledge of external things only requires having assurance. We have knowledge of moral ideas because these are just archetypes in the mind. We have assurance that our moral ideas match the divine law. Therefore, we could have certain real knowledge about morality. Mathewson rejects this as a possibility because of "the high degree of certainty that Locke sets for knowledge." 22 Consequently, we can have certain knowledge of morality but not certain real knowledge. 23 Mathewson's argument conflicts with Locke's argument that we can have certain real knowledge of morality. Locke believes that we can have knowledge of morality just as we have knowledge of mathematics because they are both mental constructs that do not correspond to an external world (IV.iv.5-7, 564-65). Locke believes that we can have certain real knowledge of morality. Based on his comparison to mathematics, one can have certain real knowledge of angles being congruent just as one could have certain real knowledge that murder is wrong. In both cases, it is because it only involves relations between ideas and not anything external to the mind. The mistake that Mathewson makes is treating the divine law as an external thing. In regard to geometry, Locke writes one "is certain all his Knowledge concerning such Ideas, is real Knowledge: because intending Things no farther than they agree with those his Ideas" (IV.iv.6, 565). Regarding morality, Locke is clear that "our moral Ideas . . . will produce real Knowledge" like one can have with geometrical figures (IV.iv.7, 565). However, since moral ideas require a relation, we need to specify that the relation involves one's own idea of the moral rule rather than the lawmaker's idea of the moral rule.
The concern Mathewson has is whether our idea of the rule corresponds to the lawmaker's actual rule. Those rules are ideas and should not be treated in the same way as external objects. Locke believes that people often have moral disagreements due to the words used rather than ideas. As previously mentioned, theoretically this can be resolved by discourse such that we can understand the use of the terms the lawmaker is using. No such discourse is possible with God. The lack of knowing the divine law is problematic. However, Locke has provided a means to identify the divine law: That several Moral Rules, may receive, from Mankind, a very general Approbation, without either knowing, or admitting the true ground of Morality; which can only be the Will and Law of a God, who sees Men in the dark, has in his Hand Rewards and Punishments, and Power enough to call to account the Proudest Offender. For God, having, by an inseparable connexion, joined Virtue and publick Happiness together; and made the Practice thereof, necessary to the preservation of Society, and visibly beneficial to all, with whom the Virtuous Man has to do; it is no wonder, that every one should, not only allow, but recommend, and magnifie those Rules to others, from whose observance of them, he is sure to reap Advantage to himself. . . . Since we find that self-interest and the Conveniences of this life, make many Men, own an outward Profession and Approbation of them, whose Actions sufficiently prove, that they very little consider the Law-giver, that prescribed these rules; nor the Hell he has ordain'd for the Punishment of those that transgress them. (I.iii.6, 69) 24 We can see that Locke believes there are several moral rules. However, the rules God intends us to follow are the ones necessary for the preservation of humanity. This gives us both a way to identify the correct rules and a motivation to follow them. Locke goes so far as to claim that following the correct moral rules is in a person's self-interest even if one does not consider God's punishment for breaking them. These moral rules are so obviously beneficial that we recommend these rules to others and people do not need to know the rules come from God. This view matches what I have previously argued regarding our ability to identify the correct moral rules in terms of pleasure and pain. If my account is correct, then one could argue that we can identify the natural law by those rules that promote happiness. However, it could be objected that there are multiple rules that promote happiness. How do we identify which is the correct rule? A possible response is that the correct rules are those that generate the most happiness. Scalar utilitarianism allows one to judge an action as being right and another action as more right based upon how much happiness is produced. The less right action is not wrong. If God's divine law is on a scale that permits actions to be judged as right and more right, without the less right action being wrong, then this provides us hope. We can judge a rule by how well it tends to produce happiness.
22 Mark D. Mathewson, "Problems of Moral Knowledge," 522.
23 In making his argument, Mathewson relies on arguments from Lex Newman relating to certain knowledge versus certain real knowledge. Newman, however, articulates that we can have certain real knowledge of moral ideas. (Newman, "Locke on Knowledge," in Cambridge Companion to Locke's "Essay", 349) Thus, Mathewson is either misinterpreting Newman or Newman failed to see the issues raised by Mathewson.
It may not be the one that maximizes happiness, but at least it is still a right action. We would also need to assume that it is only the divine law that can produce public happiness. If not, then we could be following a rule that produces public happiness and yet is not a divine law at all. On the other hand, if we reject the idea of morality being scalar, then we have a problem. We would need to know the moral rules that produce the maximum amount of happiness. Anything less than following the moral rule that produces the maximum amount of happiness would be a deviation from the divine law. Therefore, even though following the law is producing public happiness, it is not the correct law. Thus, God would punish us for not following the divine law. Patrick Connolly argues that what is important is not that we correctly follow the divine law, but rather that we try. 25 According to Locke, "he that makes use of the Light and Faculties GOD has given him, and seeks sincerely to discover Truth, by those Helps and Abilities he has, may have this satisfaction in doing his Duty as a rational Creature, that though he should miss Truth, he will not miss the Reward of it" (IV.xvii.24, 688). Connolly believes that while Locke is not addressing moral knowledge in this section, it applies to all knowledge. As such, as long as people use reason and attempt to ascertain the divine law, God would reward rather than punish us for getting the divine law wrong. Of course, Locke needs an argument for how we can know that God would not punish. Even granting that Locke is correct, this does not answer how we can have moral knowledge. Rather, Locke is claiming that we should not worry about getting it wrong as long as we try. At this point, we have an option. We can argue that the divine law is on some scale. Such an argument is beyond the purpose of this paper. However, as I have argued, the idea of a scale allows us to salvage the idea that Locke's moral epistemology can provide us with at least a secular way of determining the correct moral rules. The other alternative is to admit we cannot know what the divine law is. I believe the best recourse for textual interpretation of the Essay is to accept the idea that humans cannot know the content of the divine law. According to Locke, the divine law is "the only true touchstone of moral Rectitude; and by comparing them to this Law, it is, that Men judge of the most considerable Moral Good or Evil of their Actions" (II.xxviii.8,352). Notice that Locke is saying that people judge whether an action is moral based on whether they think the person's action conforms to what they take the divine law to be. People compare their idea of the divine law with the action. They are not comparing actions with the actual divine law. If taken as an account of what people mean when they say morally good or evil, then Locke's account is adequate. He does not claim people are correct in their evaluation. 26 Thus, as an epistemological account of what people mean, Locke's account is adequate. On the other hand, as an account of how we can correctly know what the divine law is, Locke's account is inadequate. Yet, it does not appear that Locke is making the latter claim. Locke believes that the divine law is promulgated "by the light of Nature, or the voice of Revelation" (II.xxviii.10, 353). Where Locke's account is entirely inadequate is on our ability to know whether what we infer as natural law is what God intends to be natural law.
At best, we can know whether a moral rule brings pleasure or public happiness. We could assume that we are on the right path because God has linked the correct moral rules to public happiness. Yet, this does not allow us to know whether we really are following the divine law. Still, we may take comfort in knowing that this seems a good enough inference for God not to punish us. Fundamentally, the light of nature is not as clear as Locke seems to be implying or perhaps hoping that it is. 27
Conclusion
I believe that the most serious objections against Locke's moral epistemology are: that morality is not demonstrable; that even if it were demonstrable, Locke has not demonstrated a normative statement; and that Locke has not provided us a means to evaluate the correctness of moral rules. However, as I have argued, a proper understanding of Locke's moral epistemology resolves these objections. One needs to keep in mind that Locke is only providing us with the building blocks of a normative theory and not a normative theory itself. One also needs to keep in mind the central and necessary role of the lawgiver for Locke's moral epistemology. Locke's belief in the divine law as the correct moral law is the problem, not his moral epistemology. This has to do with our inability to know God's commands. The fact that one cannot know God's commands should give one pause not about Locke's moral epistemology but about God being the source of morality. Viewed as a secular theory, Locke's moral epistemology is useful for work in modern ethics and metaethics. Given that Locke has provided an empirical account of all of our moral ideas, it would serve the metaethicist well to revisit Locke. Many aspects of metaethics have taken a linguistic or psychological turn. Locke has provided a means to ground our ethical concepts in empiricism. He has also shown how moral progress is possible. Certainly, Locke's theory is more inclined to lead to a consequentialist ethic than a deontological or virtue ethic. This fact should raise questions for those other theories. If empiricism is true, then philosophers need to provide a metaethical account that is supported by empiricism. If one cannot build a normative theory out of empirical concepts, then one needs to question whether empiricism is right or whether one's ethical theories are wrong. 28
Portland State University
27 The natural law theory of ethics is widely challenged. Even if we ignore the naturalistic fallacy, there is the issue of proper inference about what the natural law is. Consider the belief held by many natural law theorists that sex is only for procreation. One can observe that sex can result in pregnancy. Since God designed everything, God intends this to be the result. However, sex also results in pleasure. Why can't we say that the purpose of sex is pleasure instead of procreation? Is using birth control a violation of natural law? God did provide us with sense, reason, and intellect. Can it be wrong for us to use that aspect of nature to forgo children while embracing that aspect in other technological developments such as flying on airplanes?
2021-05-11T00:03:04.366Z
2021-01-23T00:00:00.000
{ "year": 2021, "sha1": "7e3a13346d4b9c63f772fa036c218397c2b34827", "oa_license": "CCBYNCSA", "oa_url": "https://ojs.lib.uwo.ca/index.php/locke/article/download/8249/11104", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "387d9dd4448f7e41439146d8338d35498532b6fc", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Philosophy" ] }
13571804
pes2o/s2orc
v3-fos-license
Knowledge, attitudes, and values among physicians working with clinical genomics: a survey of medical oncologists
Background
Although it has been over a decade since the completion of the Human Genome Project (HGP), genomic sequencing technologies have yet to become part of the standard of care in Canada. This study investigates medical oncologists' (MOs) genomic literacy and their experiences based on their participation in a cancer genomics trial in British Columbia, Canada.
Methods
The authors conducted a survey of MOs from British Columbia, Canada (n = 31, 52.5% response rate), who are actively involved in a clinical genomics trial called Personalized Onco-Genomics (POG). The authors also measured MOs' level of genomic knowledge and attitudes about clinical genomics in cancer medicine.
Results
The findings show a low to moderate level of genomic literacy among MOs. MOs located outside the Vancouver area (the major urban center) reported less knowledge about new genetics technologies compared to those located in the major metropolitan area (26.7 vs 73.3%, P < 0.07, Fisher exact test). Forty-two percent of all MOs thought medical training programs do not offer enough genomic training. The majority of the respondents thought genomics will have a major impact on drug discovery (67.7%) and treatment selection (58%) in the next 5 years. They also thought the major challenges are cost (61.3%), patient genomic literacy (48.3%), and the clinical utility of genomics (42%).
Conclusions
The data suggest a high need to increase genomic literacy among MOs and other doctors in medical school training programs and beyond, especially for physicians in regional areas who may need more educational interventions. Initiatives like POG play a critical role in the education of MOs and the integration of big data clinical genomics into cancer care.
Electronic supplementary material
The online version of this article (doi:10.1186/s12960-017-0218-z) contains supplementary material, which is available to authorized users.
Background
Physicians are increasingly working with genomic big data in their research and clinical work. Genomic sequence data helps scientists understand the molecular causes of diseases such as cancer [1,2]. Since the completion of the Human Genome Project (HGP) in 2003, scientists have promoted a genomic revolution in which genomics would create radical breakthroughs in scientific and biomedical practice [3]. At the end of the decade, scholars raised questions about the pace at which these lofty promises were translating into clinical action [4]. Despite massive funding and research by both public and private agencies for several decades, clinical application of genomic technologies is still facing many risks and hurdles [5]. Physicians have begun to adopt genomic data and technologies into clinical practice, and single gene tests have become integrated into the standard of care for the treatment of some specific cancer types. As with many big data technologies, discovery takes time and translation from scientists to doctors is typically incremental. There are a number of challenges to the adoption of genome sequencing into routine practice, such as doctors' attitudes toward genomics, their level of genomic literacy, and their experiences working with clinical genomic trials. One of the main challenges is a lack of familiarity and understanding of cancer genomics among healthcare professionals.
Physicians often report their level of genomic knowledge is inadequate in order to make treatment decisions based on a patient's genome sequencing information [6][7][8]. The current status of medical genomic education in Canadian medical school is also limited [9]. Despite limited genomic knowledge, physicians express a positive attitude toward increasing their genomic knowledge, and a desire to adopt genomics into their practices [10][11][12][13][14]. Other studies have shown, otherwise, a mixed attitude toward genomics and their willingness to adopt the technologies [15][16][17]. This is an early stage of adoption as genomics moves from scientist stakeholders to medical practitioners and the public [18]. Put another way, genomics is moving from the research bench to the clinical bedside. It is critical to update our understanding of doctors' perspectives and experiences during the adoption process. This knowledge can be fed back to clinical trial researchers to help develop clinical genomic technologies. The purpose of this study was to understand doctors' genomic literacy and help direct pedagogy for medical students and practicing doctors. Methods We conducted a web-based survey from Nov 2015 to April 2016 with a sample of Canadian physicians who are medical oncologists. The Department of Research Ethics Review Board (DORE) at Simon Fraser University approved the study (Approval #2014s0172). Sample We surveyed medical oncologists (MOs) involved in an experimental clinical genomics trial in the province of British Columbia, Canada. These specific MOs were chosen for the survey because they are investigators in a precision medicine clinical trial called Personalized Onco-Genomics (POG), led by clinicians and scientists at the British Columbia Cancer Agency (BCCA) in Canada (ClinicalTrials.gov Identifier: NCT02155621). Therefore, our study sample is a relatively select group of oncologists who have experience working with clinical genomics and a higher exposure than other physicians. The cancer clinical trial enrols patients from across the province with incurable cancers that is primarily based on stages such as incurable stage II and IV advanced cancers, who have limited or no standard treatment options available (full enrolment criteria are outlined in the trials link above and in Laskin et al. study [19]). Each patient undergoes a tumor biopsy and this sample undergoes comprehensive DNA and RNA sequencing. Genome scientists perform in-depth bioinformatic analyses comparing the normal DNA to the cancer DNA and the RNA expression in order to identify variants that may be cancer "drivers" or therapeutically actionable targets [19]. Theses analyses involve a dedicated team of genome scientists and bioinformaticians and the process from the time of consent to the generation of a report takes approximately 10 weeks; details on the methodology for this complex analytic process can be found in Laskin et al. [19] Thus, POG generates medical big data from genomes and transcriptomes, and other types of biological and medical information; each case represents 1.5 terabytes of data that needs interpretation. This is big data. POG is an interdisciplinary collaboration between physicians, medical oncologists, genome scientists, pathologists, bioinformaticians, medical geneticists, and social scientists from communication, bioethicists, and health economists. The group meets weekly to discuss two to four individual patient cases. There are three parts to the analysis. 
First, a MO presents an overall background of the patient, their current cancer treatment, and may ask the data analysts specific questions regarding the next therapy that would be standard for this patient. Second, a pathologist presents the tumor analysis of the cancer patient. Third, a bioinformatician/ genome analyst presents genomic sequencing and a genomic pathway data and identifies potential biological pathways to be considered for a therapeutic intervention. The presentations are followed by a collective discussion and assessment for potential treatment strategy. This is different than commercial panel-based profiling tests in which more simplified versions of genomic analysis can be ordered by MOs, who then receive a report of genomic data that they must interpret for themselves. The POG meetings signify the communication culture of medicine where the meaning making of genomic results takes place through the social interactions, discussions, and communication of the multidisciplinary medical stakeholders. As of this writing, POG's enrolment is over 900 patients and includes 50 pediatric cancer cases. We surveyed 59 MOs who enrolled patients in POG. This represents over 60% of MOs in the BCCA network (n = 103) and almost 48% of the MOs in the region of British Columbia (n = 123). The response rate was 53%. Some MOs did not participate in the survey because they only had one patient enrolled in the program. Hence, they did not have enough experience working with POG to provide any useful inputs for the survey. Complex clinical genomics is not widely adopted at this historical point, so this survey captures a high number relative to the actual amount of doctor early adopters in Western Canada. Survey measures The survey instrument is a 29-item web-based questionnaire including questions about MO genomic literacy, experiences and attitudes toward genomics, and their perceived values of POG. The survey also included questions about MOs' demographic information such as gender, years of practicing oncology, their location, and the number of cancer patients they treat per year. We organized the survey into three sections including our respondents' knowledge of genomics, their experiences and attitudes toward the use of genomics in oncology, and their perceived values of POG. We undertook a number of steps to construct the survey. Clinical genomics is an emerging area, so there is little research on its application at present. We aimed to be as consistent as possible with past survey research in the area and the specifics of POG [20, 21]. We conducted a systematic review of empirical research on genomic literacy and genetic education as well as literature review of research on attitudes and experiences with genomic and genetic technologies (Ha et al., under review). We adapted existing measures from UBC Physician Education [20] and Middleton et al. [21], because these two studies have similar research objectives of examining the level of knowledge, attitudes, and experiences of physicians working with clinical genomics. We also consulted with the leadership team of POG, which includes a genome scientist, a bioinformatician, and three MOs. We interviewed the five co-leads with a semi-structured and open-ended protocol to understand what they want to know from the MOs and collaborate to co-construct the survey. The goal of these semistructured interviews with the project principals of POG is to understand their thoughts and interests in the survey. 
As a result, we incorporated the findings from the interviews to construct the survey questions. Although different interviewees have different perspectives on what they want to know from the POG survey, those perspectives are inter-correlated with each other. The main interests they want to find out from POG survey include four main themes: (1) the clinical values of POG in which whether POG help change their decisionmaking process or management plan for their cancer patient treatment; (2) the oncologists' expectations when coming to POG and experiences after collaborating with POG; (3) the knowledge or understanding about genomics among the oncologists; and (4) the communication process of POG. Taken together, we constructed the survey protocol based on the extensive literature review, systematic review, and semi-structured interviews. After we developed the protocol, we piloted the survey with a small sample (n = 9) of MOs at POG. We gave the MOs in the pilot survey an opportunity to provide feedback into improving the survey. We incorporated the feedback from our pilot survey respondents and revised the survey protocol. We then validated the survey protocol with a clinician who is an expert in clinical genomics. We relied on her expertise to build content and scientific validity of the instruments and to ensure the instruments would make sense to their communicative cultural context. Overall, this survey is a coconstruction between social scientists and medical experts including a genome scientist, a bioinfomatician, and three clinicians. Statistical analyses We analyzed categorical response frequencies to report our descriptive statistics. Conceptually related sets of rating scaled responses were subjected to withinsubjects repeated measures analysis of variance (ANOVA) to compare the differences of their mean scores. We also performed inferential statistics between gender, location, years of practicing oncology, and number of cancer patients as independent variables with the level of genomic knowledge and POG values as dependent variables. Due to the relatively small sample, instead of using χ 2 , we used Fisher exact test to examine the statistical significance of the findings. Participant characteristics The results showed almost equal gender distribution between female (n = 15) and male (n = 16) MOs (Table 1). Other variables included their years of practicing oncology and their number of cancer patients per year in order to get a sense about their oncology experience. We used central tendency measurements of median for our ratio variables of "years of practicing oncology" and "number of cancer patients per year" to distribute the data equally for these two variables into two groups divided by their median. The median for "number of cancer patients per year" was 180, so we grouped the responses into two groups of less than or equal to 180 or more than 180 patients per year. Likewise, the median for "years of practicing oncology" was 12, which coincidently matched with the number of years since the Human Genome Project (HGP) was completed. We assumed that based on the impactful discoveries of the HGP, it could result in a paradigm shift in medical research and in styles of thoughts between MOs who had been practicing before and after the HGP. Location where MOs practice was also an important factor to take into account. The majority of our respondents (n = 13) work in Vancouver and the rest work outside Vancouver. 
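As a small illustration of the statistical approach described above, the Fisher exact test used for the location comparisons can be reproduced along the following lines; the 2 x 2 table of counts below is a placeholder chosen only to show the mechanics, not the survey's raw data, and SciPy is assumed to be available.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (illustrative counts, not the study data):
# rows = practice location, columns = self-rated knowledge of newer genomic technologies.
table = [[4, 9],     # Vancouver:          little knowledge, knowledgeable or better
         [11, 7]]    # outside Vancouver:  little knowledge, knowledgeable or better

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_value:.3f}")
```

Because the expected cell counts in a sample of roughly 30 respondents are small, the exact test is preferable to the chi-square approximation, which is the rationale given in the Methods above.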
Genomic knowledge In the first section of the survey, we explored MOs' perceptions and level of knowledge about genomics. We asked our participants to rate their level of knowledge based on a scale of 1 = "little knowledge", 2 = "knowledgeable", 3 = "very knowledgeable", and 4 = "expert" on three different topics of genomic science and technologies: (1) basic genetic principles (i.e., inherited patterns), (2) newer genetic/genomic technologies (i.e., high-throughput sequencing, genotyping and copy number variation analysis), and (3) the process of whole genome sequencing or WGS (i.e., features, eligibility criteria for sequencing, benefits, risks, and non-medical implications). The results showed the majority of the MOs ranked themselves as knowledgeable (57%) or very knowledgeable (33%) (mean = 2.36; SD = 0.66) about the topic of basic genetics principles (Table 2). Seven percent of the physicians claimed that they have little knowledge. However, the results shifted as more MOs acknowledged they had little knowledge about newer genetic technologies (50%) (mean = 1.61; SD = 0.67) and WGS process (41%) (mean = 1.77; SD = 0.76). Only one MO considered themselves to be an expert on the field of basic genetics principles and whole genome sequencing process, and no MO regarded themselves as an expert in newer genetic technologies. 45.2% of the respondents did not have enough information and knowledge to understand the POG meeting and results (Table 5). 32.3% of them did not feel confident that they could communicate POG results to their patients ( Table 5). As a result, the majority of our respondents reported little or adequate knowledge about genomics (mean = 1.61-2.35; item main effect F(1.5,46) = 30.7, P < 0.0001). We also asked the respondents to rate the sufficiency level of genomic education and training in medical schools. We were aware that the majority of our participants graduated from medical schools at least 5-10 years ago. However, many of them are professors in medicine and genetics or supervising medical students at a local medical school and are familiar with current medical school curriculum. The results showed that the majority of our respondents either do not know (54.8%) or think medical training (4-5 years) program did not sufficiently (42%) prepare students with enough genomic materials or training. Likewise, the majority of the MOs also thought there was not enough genomic training during their specialized medical training (54.8%), residency or fellowship (67.8%), or postgraduate medical training (58%). We also explored how important it is for MOs to improve their knowledge of clinical applications of genomic science and technologies by asking them to rate on a scale of 1 = "unimportant", 2 = "somewhat important", 3 = "important", and 4 = "very important". The data showed that 45% of our respondents (n = 14) considered it very important to improve their genomic knowledge. Another majority of our respondents (39%) only thought it was "important" to improve genomic knowledge. Even though none of MOs consider updating their genomic knowledge unimportant, 16% of the respondents (n = 5) considered improving genomic knowledge only somewhat important. In sum, the majority of MOs felt improving their genomic knowledge was highly important but this activity was not urgent. Since most MOs considered it important to improve their genomic knowledge, we asked who they think should be responsible for updating them about genomics. 
Respondents could choose multiple answers for this question (i.e., "check all that apply"). The MOs considered themselves to bear the primary responsibility for updating genomic knowledge (84%) followed by medical training and research institutions (Additional file 1: Table A2, online only). 93.6% of the MOs agreed that meeting with the POG team was worthwhile, as they could learn more about genomics through interpersonal channels and face-to-face exchanges with other physicians involving with clinical genomics (Table 5). Most notably, there were two respondents who collaborated with POG for the sole reason of learning more about genomic research. We found geographic location plays a role in levels of genomic knowledge in BC. MOs who work in Vancouver reported a higher level of knowledge about genomics on average than those who work outside Vancouver. More respondents who work outside Vancouver reported little knowledge about new genetics technologies compared to those who work in Vancouver (73.3 vs. 26.7%, P < 0.07, Fisher exact test). Likewise, no respondents who work outside of Vancouver reported being very knowledgeable or expert in whole genome sequencing compared with those who work in Vancouver (0 vs. 30.8%, P < 0.09, Fisher exact test). The data showed the domain experts who reported the highest levels of knowledge about genomic technologies are located in Vancouver. Those located outside greater Vancouver, the major urban center in BC, reported lower levels of genomic knowledge on average. Physicians' attitudes and experiences We asked the respondents to envision the impact of genomic technologies on their practice in the near future. The respondents rated seven items on a scale from 1 = "no impact", 2 = "minor impact", 3 = "major impact". We found 67.7% (Table 3) of the respondents envisioned that in the next 5 years genomic technologies would have major impact on drug discovery (mean = 2.68; SD = 0.48). Genomic technologies would also have major impact on helping physicians select course of treatment (58%) (mean = 2.55; SD = 0.57), and sequence whole genomes for their cancer patients (58%) (mean = 2.48; SD = 0.68). 58.1% of the respondents felt more confident making treatment decisions after becoming informed about their patients' genome (Table 5). However, the majority of our respondents thought genomic technologies would only have a minor impact (58%) or no impact (9.7%) on making a diagnosis (mean = 2.23; SD = 0.62). 61.3% of our respondents thought genomic technologies would have a minor impact on extending and improving lives (mean = 2.19; SD = 0.6). Overall, the majority of the MOs envisioned genomic science and technologies would have some impact on their oncology practices but nothing as major or significant (mean = 2.19-2.68; item main effect F(6,180) = 5.1, P < 0.0001). Low level of genomic literacy The results suggest there is a need for better strategies and guidelines for enhanced genomic education among MOs specifically and doctors more generally. Most MOs were comfortable with basic genetic principles. However, they felt much less comfortable when it came to more complex cancer genomics. Most of the respondents agreed it is important for them to improve their understanding of clinical applications of genomic sciences and technologies, which reflects other survey findings The results suggested MOs are looking for alternative sources to learn about genomics. 
Some MOs thought that a local government funding agency, Genome British Columbia, should devote more funding to research and projects that can enhance their genomic knowledge and address their genomic educational needs. Educating professionals and the public is a goal of Genome BC, so this would be a strategic opportunity to focus on. Furthermore, the data also indicated MOs collaborated with POG both to find effective treatment options and to learn about genomic research. POG plays a pedagogical role on top of a scientific/clinical role. This suggests clinical genome projects and trials serve more than their expected knowledge-discovery and knowledge-application functions. They are also places where doctors go to further their own learning. As mentioned earlier, POG is different from other models of clinical profiling in that the knowledge production of genomics takes place through interdisciplinary meetings between different medical stakeholders. These social interactions, discussions, and communications at the interdisciplinary meetings are inherently educational and help clinicians learn and improve their genomic literacy. Additionally, professional training, workshops, clinical rounds, and continuing medical education (CME)-accredited events are potential tools that can help MOs update their knowledge of genomic sciences and technologies. Furthermore, health information technology systems and other online, point-of-care tools are innovative and effective educational resources for physicians, which in turn supports better utilization of genetic information in clinical practice. For example, social media platforms such as Twitter and YouTube are low-cost and wide-reaching platforms for interactive educational tools. Mixed attitudes toward genomic technologies The MOs in our sample showed a mix of attitudes toward the use of genomic technologies in clinical practice. The findings on the impact of genomics on oncology practices (Table 4) and concerns about genomic science and technology suggest that MOs recognize the potential of genome sequencing to personalize drugs and treatments to a patient's genome. However, the uncertainties of clinical utility and validity of genomic information are a hurdle for MOs to incorporate genomic data into their diagnoses and treatments. The reluctance to adopt genomic technologies in clinical practice could also result from a lack of the genomic knowledge needed to analyze, evaluate, and apply genomic information. These findings are consistent with the results of the Gray et al. [8] study, in which physicians who decided not to adopt genetic testing in clinical practice or not to disclose test results tended to have "lower genomic confidence and lower reported baseline understanding" (p 1320). As a result, the lack of genomic literacy could engender a negative attitude among physicians about the effect of genomic technologies on diagnosis and treatment and impede the adoption of genomic technologies into healthcare systems. If doctors are not on board, then it will be difficult to implement and develop clinical genomic technologies at the population level. On the basis of the educational deficiencies identified in this survey, the POG team has initiated applied cancer genomics symposiums for the physicians of BC to address some of these educational gaps. Geographical factors in genomic literacy In the global era of rapid information movement on the Internet, there is a tendency to think geography is not a particularly important factor in information gathering and learning.
However, sociologists of globalization find geography matters because a central hub of innovation can attract capital, experts, and infrastructures for the development and diffusion of innovation. Consistent with this research we found geography plays an important role in variations in MOs genomic literacy. Respondents who work outside Vancouver reported lower levels of genomic knowledge than those who work in the metropolitan center. Vancouver is the central hub and a milieu of innovation for professional and educational networks connecting different medical stakeholders with medical skills and expertise. POG is an interdisciplinary group of oncologists, pathologists, bioinformaticians, bioethicists, and health economists. One of the ten principles for good interdisciplinary team work is communications strategies and structures [22]. Proximity to a metropolitan center like Vancouver, Boston, or New York can have an impact on the level of genomic knowledge possibly due to easier access to more genomic training, workshops, or conferences and other face-to-face community opportunities. As a result, it is going to be important to take into account geographic location to identify the best strategies and targets to address the educational needs. To design better genomic training pipelines, genomic scientists and policy makers should target doctors who work outside major cities and metropolitan centers. The POG team has started creating targeted educational strategies for those working in regional areas of the province. Also, they are creating "POG-casts", which are short form educational videos for YouTube. Clinical genomics trial programs such as POG have a significant pedagogical role for working doctors and physicians to learn more about genomics through utilizing the technologies and collaborating with other medical stakeholders. Conclusion The main potential limitations of this study are the raw numbers of participants and reliance on some self-report item, namely the instruments for measuring levels of genomic literacy. The sample size limited our ability to apply more inferential analyses such as logistic regression models to identify more associations between our variables. Other studies have employed tests to measure genomic literacy. We considered this option but did not pursue it because of the other goals of the survey and the limited time respondents would most likely spare to complete it. However, a strength of the study is the response rate. We surveyed 54% of all MOs in POG, which represents almost 30% of all working MOs in BC in the survey population. Fifty-four percent is a very good response rate for a survey and generalizable to the study population. Both the response rate and the sample population relative to the overall population provide a solid foundation to build on for future studies. Other strengths of the study include the rigorous, multi-step process to construct and validate the questionnaire. We consider this survey a co-production between social scientists and medical domain experts. We adapted existing items and measures from other questionnaires examining genomic knowledge of physicians [20,21]. We also incorporated findings from the semi-structured interviews with the POG project principals to construct the survey questions. Then, the survey was assessed twice through a physician who is an expert in clinical genomics, and a pilot test of MOs with a feedback mechanism. 
This survey provides useful measures to assess general genomic literacy and yields interesting findings. Future research could apply our survey protocol to assess the reliability and validity of the measures and its performance compared to other measures. Another strength of our study is the consistency in our findings with other studies in the same research, which showed a low level of genomic knowledge and a mixed attitude regarding genomics [6][7][8][9][10][11][12][13][14][15][16][17]. Some might argue that physicians who work at experimental clinical trials like POG would have higher genomic knowledge than other physicians. However, majority of our respondents appeared to have low genomic literacy. This implies that other physicians outside the BCCA network are likely to have even lower levels of awareness, knowledge, and favorable attitudes toward genomic technology. A recent report from the Secretary's Advisory Committee on Genetics, Health, and Society (SACGHS) also pointed out a lack of basic genetic understanding among many health professionals, which in turn limited the adoption of genomic technologies into clinical practices [23]. We suggest this points to clinical genomics still in early stages of adoption where the validity and application is highly uncertain. There is a critical need to understand these early adopters, however. Technology development of any kind can be better strengthened with domain expert input at the earliest stages. If not, then the risk is creating something that does not fit the user needs or their buy in. Therefore, our findings point to a high need for substantive applied genomic education for cancer physicians specifically right now. Working doctors also need opportunities to further their education at conferences and workshops as well as selfpacing methods. Medical schools will need to address all students as genomics diffuses into different areas of health care practice. It is also important for medical schools to keep updating their curricula topics in relevance with the rapid advancement of medical genomics. As POG is a self-funded clinical trial through BCCA, we did not examine the impact of the pharmaceutical and biotechnology industry on the clinical practices at POG. Future research could investigate the impact of the pharmaceutical industry on cancer clinical trials and any ethical issues including transparency that could be derived from this conflict of interests. Finally, our findings shed light on the current level of genomic literacy among physicians in Western Canada. Physicians who locate outside metropolitan areas tend to have lower genomic knowledge than those who work in the city. More genomic training and workshop should be offered in regional areas to physicians who need more educational interventions. Initiatives like POG play a critical role in the education of MOs and the integration of big data clinical genomics into cancer care. Additional file Additional file 1: Table A1. Stakeholders responsible for updating physicians about genomics.
2017-06-27T20:21:09.748Z
2017-06-27T00:00:00.000
{ "year": 2017, "sha1": "c733ceefe609fc81970621a2e56148fb2bd5781d", "oa_license": "CCBY", "oa_url": "https://human-resources-health.biomedcentral.com/track/pdf/10.1186/s12960-017-0218-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d97da8b1052a306f104fe2e16ade895d9b9964c3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9258620
pes2o/s2orc
v3-fos-license
Measurement of the p-pbar ->Wgamma + X cross section at sqrt(s) = 1.96 TeV and WWgamma anomalous coupling limits The WWgamma triple gauge boson coupling parameters are studied using p-pbar ->l nu gamma + X (l = e,mu) events at sqrt(s) = 1.96 TeV. The data were collected with the DO detector from an integrated luminosity of 162 pb^{-1} delivered by the Fermilab Tevatron Collider. The cross section times branching fraction for p-pbar ->W(gamma) + X ->l nu gamma + X with E_T^{gamma}>8 GeV and Delta R_{l gamma}>0.7 is 14.8 +/- 1.6 (stat) +/- 1.0 (syst) +/- 1.0 (lum) pb. The one-dimensional 95% confidence level limits on anomalous couplings are -0.88 (Dated: January 12, 2022) The W W γ triple gauge boson coupling parameters are studied using pp → ℓνγ+X(ℓ = e, µ) events at √ s = 1.96 TeV. The data were collected with the DØ detector from an integrated luminosity of 162 pb −1 delivered by the Fermilab Tevatron Collider. The cross section times branching fraction for pp → W (γ) + X → ℓνγ + X with E γ T > 8 GeV and ∆R ℓγ > 0.7 is 14.8 ± 1.6(stat)±1.0(syst) ±1.0(lum) pb. The one-dimensional 95% confidence level limits on anomalous couplings are −0.88 < ∆κγ < 0.96 and −0.20 < λγ < 0.20. The W γ final states observed at hadron colliders provide an opportunity to study the self-interaction of electroweak bosons at the W W γ vertex. The standard model (SM) description of electroweak physics is based on SU(2) L ⊗U(1) Y gauge symmetry and specifies the W W γ coupling. In the SM, production of a photon in associ-ation with a W boson occurs due to radiation of a photon from an incoming quark, from the W boson due to direct W W γ coupling, or from the outgoing W boson decay lepton. To allow for non-SM couplings, a CPconserving effective Lagrangian can be written with two coupling parameters: κ γ and λ γ [1,2]. The SM predicts ∆κ γ ≡ κ γ − 1 = 0 and λ γ = 0. Non-standard couplings cause the effective Lagrangian to violate partial wave unitarity at high energies; it is necessary to introduce a form-factor with scale Λ for each of the coupling parameters. The form-factors are introduced via the ansatz λ → λ/(1 +ŝ/Λ 2 ) 2 with √ŝ the W γ invariant mass. In this analysis, the scale Λ is set to 2 TeV. For sufficiently small values of Λ the dependence on Λ is relatively small. Deviations from the SM W W γ couplings would cause an increase in the total W γ production cross section and would enhance the production of photons with high transverse energy. Limits on the W W γ coupling parameters have been previously reported by the DØ [3] and CDF [4] collaborations using direct observation of W γ final states in data collected from hadron collisions at the Fermilab Tevatron collider and by the UA2 [5] collaboration using the SppS collider at CERN. Searches for W + W − final states at DØ [6] and CDF [7] have also been used to test W W γ and W W Z coupling parameters simultaneously. Similarly, experiments at the CERN LEP collider constrain the W W γ and W W Z coupling parameters simultaneously through observations of W + W − , single-W boson, and single-γ final states in electron-positron collisions [8]. Observation of b → sγ decays by the CLEO collaboration has also been used to constrain the coupling parameters [9]. The analyses discussed here use the DØ detector to observe pp → ℓνγ + X(ℓ = e or µ) events in collisions at √ s = 1.96 TeV at the Fermilab Tevatron collider. The data samples used for the electron and muon channels correspond to integrated luminosities of 162 pb −1 and 134 pb −1 , respectively. 
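As a numerical illustration of the dipole form-factor ansatz quoted above, the short sketch below evaluates how strongly a bare anomalous coupling is suppressed at high W γ invariant mass for Λ = 2 TeV; the example coupling value and invariant masses are chosen only for illustration.

```python
LAMBDA = 2000.0  # form-factor scale in GeV (Lambda = 2 TeV, as used in this analysis)

def suppressed_coupling(coupling0, sqrt_s_hat, scale=LAMBDA):
    """Dipole form factor: lambda(s_hat) = lambda0 / (1 + s_hat / Lambda^2)^2."""
    return coupling0 / (1.0 + (sqrt_s_hat / scale) ** 2) ** 2

# A bare coupling of 0.20 evaluated at a few W-gamma invariant masses (GeV):
for m_wgamma in (100.0, 500.0, 1000.0, 2000.0):
    print(f"sqrt(s_hat) = {m_wgamma:6.0f} GeV -> effective coupling {suppressed_coupling(0.20, m_wgamma):.4f}")
```

Because most W γ events produced at Tevatron energies have invariant mass far below 2 TeV, the suppression is small over the accessible range, while the form factor still restores partial wave unitarity at high energy.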
The DØ detector [10] features an inner tracker surrounded by a liquid-argon/uranium calorimeter and a muon spectrometer. The inner tracker consists of a silicon microstrip tracker (SMT) and a central fiber tracker (CFT), both located within a 2 T superconducting solenoidal magnet. The CFT covers |η| 1.8 and the SMT covers |η| 3.0 [11]. The calorimeter is longitudinally segmented into electromagnetic and hadronic layers and is housed in three cryostats: a central section covering |η| 1.1 and two end-cap cryostats that extend coverage to |η| 4.0. The muon detectors reside outside the calorimeter and consist of tracking detectors, scintillation counters, and a 1.8 T toroidal magnet. The muon detectors cover to |η| 2.0. Luminosity is measured using scintillator arrays located in front of the end-cap cryostats and covering 2.7 |η| 4.4. Candidate events with electron decays of the W boson (W → eν) are collected using a suite of single elec-tron triggers that require electromagnetic clusters in the calorimeter with at least 11 GeV of transverse energy (E T ). Offline electron identification requires the candidate electrons to be in the central calorimeter (|η| < 1.1), isolated in the calorimeter, have shower profiles consistent with those of electromagnetic objects, and have a track found in the tracking detectors matched to the calorimeter cluster. Similarly, photons are identified as central electromagnetic calorimeter clusters without a matched track that are isolated both in the calorimeter and in the tracking detectors. To suppress events with final state radiation of the photon from the outgoing lepton, and to avoid collinear singularities in calculations, the photon is required to be separated from the electron Events used in this analysis are required to have E e T > 25 GeV, E γ T > 8 GeV, missing transverse energy using the full calorimeter E / T > 25 GeV, and M T > 40 GeV/c 2 , where M T is the transverse mass 2E e T E / T (1 − cos φ eν ) of the electron and E / T vectors which are separated by φ eν in azimuth. Candidate events with muon decays of the W boson (W → µν) are collected using a suite of single muon triggers that require a high p T track in the muon detectors and a high p T track in the central tracking detectors. Offline muon identification additionally restricts muon candidates to the full central tracking acceptance (|η| < 1.6), requires matched central tracks, and imposes timing cuts to reduce backgrounds from cosmic and beam halo muons. Events with more than one identified muon are rejected to reduce backgrounds from Z → µµ(γ). Events are required to have p µ T > 20 GeV/c, E γ T > 8 GeV, E / T > 20 GeV, and there is no M T requirement for this analysis. Photon identification is the same for both electron and muon analyses. The dominant background for both decay channels is W +jet production where a jet mimics a photon. The contribution of this background is estimated by using a large multijet data sample to measure the probability of jets to mimic photons. Some fraction of multijet events contains true photons, and this fraction has previously been seen to increase with increasing transverse energy as 1 − e a−bET [12]. The systematic uncertainty on the probability of a jet being misidentified as a photon is taken to be the full difference between ignoring the presence of true photons in the multijet data sample and estimating their contribution with the above functional form. 
The method described above is dependent on agreement between the jet energy calibration and the electromagnetic energy calibration; as a check of the accuracy of the jet energy calibration, the method is repeated using jet-like objects that have a high fraction of calorimeter energy in the electromagnetic layers. This yields a background estimate consistent with the method based on jets. A second class of background events comes from processes which produce an electron or muon, an electron that is misidentified as a photon, and missing transverse energy. This background, labeled ℓeX, is small for the muon channel since very few processes produce a high E T muon and an electron. However, this background is significant for the electron channel since Z(→ ee)+jet (with a mismeasured jet leading to apparent missing transverse energy) processes have a relatively large cross section. To reduce this background, an additional criterion on the invariant mass of the electron and photon candidates is imposed, and events with 70 < M eγ < 110 GeV/c 2 are rejected. In both the electron and muon analyses, the ℓeX background is estimated by reversing the track match requirement on the photon candidate (i.e. require a matched track) in W γ candidate events. The number of ℓeX events in which the electron is isolated and does not have a matched track (and therefore is misidentified as a photon) is then estimated using the known track matching and track isolation inefficiencies. Small backgrounds from Zγ, where one lepton from the Z decay is not reconstructed, and W → τ νγ, where the τ decays into an electron or muon, are estimated from Monte Carlo samples. The background estimates and numbers of events observed in the data are summarized in Table I. The efficiencies of the triggers and the lepton identification cuts are measured using Z → ee, µµ events. Efficiencies for electrons are 0.96 ± 0.02 for the trigger, 0.84±0.01 for the calorimeter identification requirements, and 0.78 ± 0.01 for the track match requirement. For muons, the trigger efficiency is 0.74 ± 0.01, the offline reconstruction efficiency is 0.77 ± 0.02, and the efficiency of the track match requirement is 0.98 ± 0.01. The efficiency of the requirement of no more than one muon in muon candidate events is estimated to be 0.942 ± 0.004 by counting the fraction of Z → ee events containing a muon. The track isolation efficiency used for the ℓeX background estimation is measured using Z → ee events and is 0.95 ± 0.01. The efficiency of the calorimeter requirements in photon identification is estimated using a full geant3 simulation of the detector [13]. The probability for unrelated tracks to overlap with the photon and cause it to fail the track isolation requirements is mea-sured using Z → ee events by measuring the probability of an electron to have nearby tracks after the event is rotated in φ by ninety degrees. The overall efficiency for photon identification is 0.81 ± 0.01. The total efficiencies are 0.51 ± 0.02 for the electron channel and 0.43 ± 0.01 for the muon channel. The acceptances due to the kinematic and geometric requirements in the analyses are calculated using a Monte Carlo generator [2] that fully models W γ production to leading order in quantum chromodynamics (QCD) and electroweak couplings and allows anomalous coupling values to be set. The detector response is simulated using a parameterized detector simulation. 
The effects of higher order QCD processes are accounted for by the introduction of a K-factor of 1.335 [2], and the transverse momentum spectrum of the W boson is simulated using parton showers in pythia [14]. The detector acceptance calculation has a very small dependence on the simulation of the transverse momentum of the W boson. The CTEQ6L parton distribution function (PDF) [15] is used for the proton and anti-proton. The acceptances are 0.045 ± 0.002 for the electron channel and 0.102 ± 0.003 for the muon channel with the uncertainties dominated by the PDF uncertainty. The measured cross sections times branching fractions σ(pp → W (γ) + X → ℓνγ + X) with E γ T > 8 GeV and ∆R ℓγ > 0.7 are 13.9 ± 2.9(stat)±1.6(syst)±0.9(lum) pb for the electron channel and 15.2 ± 2.0(stat)±1.1(syst)±1.0(lum) pb for the muon channel. The three components of the cross section uncertainty are: statistics; systematic effects associated with the background subtraction, acceptance calculation, and object identification; and the systematic uncertainties in the luminosity measurement. Combining events from the two decay channels and accounting for correlations in the systematic uncertainties yields a combined cross section times branching fraction of 14.8 ± 1.6(stat)±1.0(syst)±1.0(lum) pb. The SM prediction calculated by the Monte Carlo generator using the K-factor and the CTEQ6L PDF is 16.0 ± 0.4 pb, where the uncertainty is due to PDF uncertainty. The prediction is in agreement with the measurements. The photon E T spectrum of the candidate events is shown with the background estimation and the SM expectation in Fig. 1. The distribution is described well by the SM, and no enhancement of the photon E T spectrum is seen at high transverse energy. Limits on anomalous couplings are determined by performing a binned likelihood fit to the photon E T spectrum. The effect of anomalous couplings is more pronounced at high W γ transverse mass, M T (W, γ), so only events with M T (W, γ) > 90 GeV/c 2 are used for the distributions in the likelihood fit. The M T (W, γ) distribution before this requirement is shown in Fig. 2. Monte Carlo distributions of the photon E T spectrum are generated with a range of anomalous coupling values, and the likelihood of the data distri- bution being consistent with the generated distribution is calculated. The uncertainties in the background estimates, efficiencies, acceptances, and the luminosity are included in the likelihood calculation using Gaussian distributions. The limits on the W W γ coupling parameters are shown in Fig. 3 one-dimensional 95% CL intervals. The one-dimensional exclusion limits on each parameter are −0.88 < ∆κ γ < 0.96 and −0.20 < λ γ < 0.20, where the limit on ∆κ γ assumes λ γ is fixed to the SM value and vice versa and Λ = 2 TeV. In summary, the cross section times branching fraction for the process pp → W (γ) + X → ℓνγ + X with E γ T > 8 GeV and ∆R ℓγ > 0.7 is measured to be 14.8 ± 1.6(stat)±1.0(sys)±1.0(lum) pb using the DØ detector during Run II of the Tevatron. The measured cross section is in agreement with the SM expectation of 16.0 ± 0.4 pb. Limits at the 95% confidence level on anomalous W W γ couplings are extracted using the photon transverse energy spectrum and are −0.88 < ∆κ γ < 0.96 and −0.20 < λ γ < 0.20. These limits represent the most stringent constraints on anomalous W W γ couplings obtained by direct observation of W γ production. We thank the staffs at Fermilab and collaborating institutions, and acknowledge support from the DOE and
2015-03-07T18:39:34.000Z
2005-03-01T00:00:00.000
{ "year": 2005, "sha1": "97f57a4c0bf0518a6f4827bfc53b3a498caab473", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ex/0503048", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5e323d8b1cc294a84108819d5da44425ff3dca5a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
35337204
pes2o/s2orc
v3-fos-license
Multisensory Prediction Fusion of Nonlinear Functions of the State Vector in Discrete-Time Systems We propose two new multisensory fusion predictors for an arbitrary nonlinear function of the state vector in a discrete-time linear dynamic system. Nonlinear function of the state (NFS) represents a nonlinear multivariate functional of state variables, which can indicate useful information of the target system for automatic control. To estimate the NFS using multisensory information, we propose centralized and decentralized predictors. For multivariate polynomial NFS, we propose an effective closed-form computation procedure for the predictor design. For general NFS, the most popular procedure for the predictor design is based on the unscented transformation. We demonstrate the effectiveness and estimation accuracy of the fusion predictors on theoretical and numerical examples in multisensory environment. Introduction The integration of information from a combination of different types of sensors is often used in the design of highaccuracy control systems.Typical applications that benefit from usage of multiple sensors include industrial tasks, military commands, mobile robot navigation, multitarget tracking, and aircraft.One problem that arises from the use of multiple sensors is that if all local sensors observe the same target, the question then becomes how to effectively combine the corresponding local estimates.Several decentralized fusion architectures have been discussed and algorithms for estimation fusion have been developed in [1][2][3][4].An important practical problem in the above systems and architectures is to find a fusion estimate to combine the information from various local estimates to produce a global (fusion) estimate.Optimal mean square linear fusion formulas, for an arbitrary number of local estimates with matrix and scalar weights, have been reported in [5][6][7][8][9][10]. However, because of the lack of prior information, in general, the decentralized estimation using the fusion formula is globally suboptimal compared with optimal centralized estimation [11].Nevertheless, in this case it has advantages of lower computational requirements, efficient communication costs, parallel implementation, and fault-tolerance [11][12][13].Therefore, in spite of its limitations, the decentralized estimation has been widely used and is superior to the centralized estimation in real applications.The aforementioned papers [5][6][7][8][9][10][11][12][13] have not focused on the prediction problem, but most of them have considered only decentralized filtering of state variables in multisensory dynamic models.The decentralized prediction of the state requires special algorithms presented in [14,15].Some applications require the estimation fusion of nonlinear function of the state variables, representing useful information for system control, for example, a quadratic form of a state vector, which can be interpreted as a current distance between targets or as the energy of an object [16,17].We refer to the nonlinear function as the nonlinear function of the state (NFS).In [17], we have not focused on the prediction of the NFS, considering instead only filtering.To the best of our knowledge, there are no methods reported in the literature for prediction fusion of the NFS in a multisensory environment.Direct generalization of the distributed fusion filtering algorithms to the prediction problem of the NFS is impossible. 
Therefore, in this paper, the prediction fusion problem of NFS is considered under a multisensory environment.The primary aim of this paper is to propose centralized and decentralized prediction fusion algorithms and analyze their statistical accuracies. This paper is organized as follows.Section 2 presents a statement of the prediction fusion problem for NFS.In Section 3, the centralized global optimal predictor is derived.In Section 4, we propose the nonlinear decentralized prediction fusion algorithm for NFS.In Section 5, we propose effective closed-form computational procedure for prediction of multivariate polynomial functions.For prediction of a general NFS, we use the unscented transformation.In Section 6, we study the comparative analysis of the proposed fusion estimators via a theoretical example.In Section 7, the efficiency of the fusion predictors is studied for prediction of the instantaneous impact point of space launch vehicle.Finally, we conclude our results in Section 8. Problem Statement The general Kalman multisensory framework involves estimation of the state of a discrete-time linear dynamic system: where ∈ R and () ∈ R are unknown state and measurement vectors, respectively, and ∈ R × , ∈ R × , and () ∈ R × .Assume that sensors are used to observe the state vector simultaneously.The process noise ∈ R ∼ N(0, ) and the measurement noises V () ∈ R ∼ N(0, () ), = 1, . . ., , represent normally distributed uncorrelated random processes. Our goal is to find a fused prediction estimate of the NFS at future time + , ≥ 0, where based on overall current sensor measurements, where Typical examples of such NFS may be an arbitrary quadratic form ( ) = Ω of the state vector or magnitude of position and velocity of three-dimensional state vectors ( ) = √ In general, there are two fusion estimation approaches commonly used to process the overall measured data.If a central processor receives measurements [1:] from all local sensors directly and processes them in real time, the corresponding result is known as centralized data processing.However, this approach has several serious drawbacks, including poor survivability and reliability, as well as heavy communication and computational burdens. The second approach is called decentralized estimation fusion, in which every local sensor is attached to a local processor.In this approach, the processor estimates the state of a system based on its own local measurements () [1:] and then transmits its local linear x() +| or nonlinear ẑ() 𝑘+𝑠|𝑘 predicted estimates to the fusion center.Finally, the fusion center predicts the state (object) + and NFS + = ( + ) based on all received local estimates.For this reason, the proposed estimation algorithm is referred to as decentralized prediction fusion algorithm.Clearly, decentralized prediction has significant practical value, because it has greater survivability in extreme situations because it can estimate objects even though the fusion center is destroyed. We propose centralized and decentralized prediction fusion algorithms for NFS in the subsequent sections. Centralized Multisensory Prediction Fusion: Global Optimal Predictor In this section, the best global optimal (in the mean square error (MSE) sense) prediction algorithm for an NFS is derived.In the centralized fusion setup, a multisensory dynamic system (1) can be reformulated into a composite form: where V ∼ N (0, ) , = diag { (1) , . . ., () } . 
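Because the symbols in the model equations above were lost in extraction, a minimal restatement of the standard multisensory linear state-space setup assumed throughout may help; the notation below (F_k, G_k, H_k^{(i)}, Q_k, R_k^{(i)}, and the dimensions) is introduced here for readability and follows common Kalman-filtering conventions rather than the paper's original labels.

```latex
% Assumed standard model with N sensors (notation introduced for readability):
x_{k+1} = F_k x_k + G_k w_k, \qquad w_k \sim \mathcal{N}(0, Q_k), \qquad x_k \in \mathbb{R}^{n},
\\
y^{(i)}_k = H^{(i)}_k x_k + v^{(i)}_k, \qquad v^{(i)}_k \sim \mathcal{N}(0, R^{(i)}_k), \qquad i = 1, \dots, N,
\\
z_{k+s} = f(x_{k+s}), \qquad s \ge 0 \quad \text{(nonlinear function of the state, NFS)}.
```

The centralized form then stacks the N measurement equations into a single composite measurement whose noise covariance is the block-diagonal expression given at the end of the passage above.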
( The optimal centralized Kalman predictor (CKP) xCKP +| = E( + | [1:] ) of the state vector and its error covariance are given by the Kalman predictor equations [18,19]: or where is an × identity matrix, Φ ,ℓ is the transition matrix for the system model (1), and the initial conditions xCKP Note that the optimal Kalman predictor xCKP +| and filter xCKF represent the centralized estimators, which simultaneously process the overall measurements .Next, the global optimal mean square predictor of NFS + = ( + ), based on the overall sensor measurements (3), also represents a conditional mean; that is, where ( + | 6)- (10), in this case it is impossible to separate prediction of the state + from prediction of the NFS + .Many different approximate filters have been proposed in the literature.For instance, the most common one is the extended Kalman filter (EKF), obtained by linearizing the nonlinear model equations along the state.In this case, the EKF must perform the simultaneous calculation of equations determining x/ and the auxiliary matrices / , (−)/ , and .Also computational complexity of the nonlinear approximate filters is considerably greater than complexity of the linear Kalman estimators ( 6)- (8). In decentralized fusion, the fusion center tries to get the best prediction of an NFS with the processed data received from each local sensor () [1:] = { () 1 , . . ., () }, = 1, . . ., .In Section 4, we propose decentralized multisensory prediction fusion algorithm based on the local Kalman predictors of NFS + = ( + ): which are available at the fusion center. In the following, we discuss effective computational algorithms for the evaluation of local nonlinear predictors ẑ() +| and cross-covariances () ,+| in ( 15) and ( 18), respectively, depending on the type of NFS. Consider an arbitrary quadratic cost function Show that optimal local nonlinear predictor ( 15) can be calculated explicitly in terms of a local Kalman predictor and its error covariance.Using formula E( Ω) = tr[Ω( + )], = E(), = cov(, ) [21], we obtain an optimal local predictor for the quadratic function where the local Kalman predictor and error covariance ( x() +| , () +| ) satisfy ( 13) and ( 14).In a special case of a polynomial NFS (20), the local cross-covariances General Nonlinear Function and Unscented Transformation. The unscented transformation (UT) makes it much easier to calculate statistics of the transformed random variable, for example, the mean and covariance [23,24].The UT has become a powerful approach for designing new filtering and control algorithms for nonlinear dynamic models [23][24][25][26].Following this, the UT procedure to calculate the best local predictor of an NFS (conditional mean) can be summarised as follows. Generate the sigma points { ℎ,+ } 2 ℎ=0 with corresponding weights { ℎ } 2 ℎ=0 : where [√ () +| ] ℎ is the ℎth column of the matrix square root of () +| and ℓ is the scaling parameter influencing the spread of points in the state-space and, thus, the accuracy of the approximation [26].Propagate each of these sigma points through the original nonlinear function as and the resulting best local estimate of the NFS is given as Similar to ( 23)-( 26), the local cross-covariance () ,+| also can be calculated based on the UT: Therefore, ẑ() +| and ,+| are represented by the known functions of the local Kalman predictors x() +| and covariances () +| , , = 1, . . ., . 
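As a concrete illustration of the two local prediction rules just described, the sketch below evaluates the closed-form predictor for a quadratic NFS, E[xᵀΩx] = tr[Ω(P + m mᵀ)], and a standard unscented-transformation approximation for a general NFS; the function names, the scaling parameter kappa, and the example numbers are introduced here for illustration and are not taken from the paper.

```python
import numpy as np

def predict_quadratic_nfs(m, P, Omega):
    """Closed-form prediction of a quadratic NFS: E[x' Omega x] = tr(Omega (P + m m'))."""
    return np.trace(Omega @ (P + np.outer(m, m)))

def predict_general_nfs_ut(m, P, f, kappa=1.0):
    """Unscented-transformation approximation of E[f(x)] for x ~ N(m, P)."""
    n = m.size
    S = np.linalg.cholesky((n + kappa) * P)   # columns give the scaled sigma-point offsets
    sigma = [m] + [m + S[:, j] for j in range(n)] + [m - S[:, j] for j in range(n)]
    w = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    return sum(wi * f(xi) for wi, xi in zip(w, sigma))

# Example with an illustrative local predictor mean and covariance:
m = np.array([1.0, 2.0])
P = np.diag([0.5, 0.2])
Omega = np.eye(2)
print(predict_quadratic_nfs(m, P, Omega))                      # 5.7
print(predict_general_nfs_ut(m, P, lambda x: x @ Omega @ x))   # 5.7 (UT is exact for quadratics)
```

For a quadratic NFS the two rules agree exactly, which makes the closed form a convenient check of a UT implementation before moving to general nonlinear functions such as vector magnitudes.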
Discussion (1) The local covariances and fusion weights can be precomputed, because they do not depend on the sensor measurements but only on the noise statistics, the system matrices, and the initial conditions, which are part of the system models (1) and (2). Thus, once the measurement schedule has been settled, the real-time implementation of the fusion estimators requires only the computation of the local predictors and the final fusion predictor. (2) The implementation of the decentralized predictor consists of two stages: offline and online. The offline stage is more complex than the online stage, because it requires the computation of the local covariances and the fusion weights, which depend on the NFS. However, this is not a serious limitation, because the offline stage can be precomputed. The online stage (real-time implementation) requires the computation of only the local and fusion estimates (predictors). The centralized predictor requires all sensor measurements together at each time instant, whereas the decentralized predictor computes the fusion prediction sequentially. To demonstrate the performance of the proposed centralized and decentralized predictors, they are evaluated in the next section for the theoretical example originating from [7]. Here, we derive a precise formula for the MSE of the proposed fusion estimators and present a comparative analysis. Comparative Analysis of Estimators The MSE is an important quantity, since it reflects the accuracy of state estimation. Experimental Analysis of Fusion Predictors A comparative experimental analysis of the proposed predictors is considered for the example of predicting the instantaneous impact point (IIP) of a space launch vehicle (SLV). During a space rocket launch, precise and real-time prediction of the IIP plays an important role in range safety operations. Hence, online IIP prediction is carried out during the launch to follow the expected touchdown point of the rocket body.
The dynamic model of a SLV, in general, varies from linear model to nonlinear model.Typical nonlinear model considers comprehensive factors such as thrust, gravity, drag coefficient, March number, and air density [27].Although the nonlinear model precisely describes a motion of SLV, it needs complex prior information concerning a SLV flight environment.In case of the linear dynamic model, on the other hand, a constant acceleration (CA) model with multiple hypotheses which takes advantage of Singer's model [28] is introduced in [29].In [29], to describe motion of a sounding rocket using the CA model, the rocket motion is separated into two parts, propelled flight and free fall flight phase by utilizing empirically tuned, independent probability density function.This multiple model approach is suitable for the sounding rocket which has relatively short propelled flight phase with large free fall flight phase.In contrast, most of the SLV flight phases fall into the propelled flight.Therefore, for simplicity, we can reduce the dynamic model of the SLV to a CA model.To apply the proposed algorithm for prediction of IIP, a discretized CA model for motion of SLV takes the following form: where the state vector ∈ R 9 consists of the position, velocity, and acceleration components along the -axis, axis, and -axis, respectively, ∈ R 9×9 is the system matrix, and ∈ R 9 is the white Gaussian noise, ∼ N(0, ): 0 0 0 0 0 0 0 1 Δ 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 Δ Δ 2 2 0 0 0 0 0 0 0 1 Δ 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 Δ Δ 2 2 0 0 0 0 0 0 0 1 Δ 0 0 0 0 0 0 0 where Δ is the discretization interval. Next radar sensor observes the range (), azimuth (), and elevation () of the SLV.In general, due to reliability reason of the SLV, tracking system uses multiple radar to cover large trajectory.Let us assume that radars simultaneously observe trajectory of the SLV within joint radar coverage.Then, the nonlinear multisensory measurement equations are given by where and is index of radar ( = 1, . . ., ). Using 3D debiased converted measurement form [30], we can transform the original nonlinear equations (39) into linear form as International Journal of Distributed Sensor Networks 9 Here V () , is the converted measurement noise expressed in terms of Cartesian coordinates; that is, V () , ∼ N(0, () , ), Figures 1 and 2 show the global optimal and fusion prediction MSEs: for the unit vectors ,+ and ,+ along the -axis for position and velocity, respectively.In Figures 1 and 2, we can observe that the global optimal MSEs opt ,+| and opt ,+| are smaller than the decentralized fusion MSEs fus ,+| and fus ,+| ; however, the difference between MSEs of these predictors is negligible.Analogously with Figures 3-6, comparison analysis of the MSEs for the -component and -component of the unit vectors , and , represents similar results shown in Figures 1 and 2. In addition, Figure 7 illustrates the relative errors between the optimal and fusion prediction MSEs: for position along -axis, -axis, and -axis, respectively.Analogously, Figure 8 illustrates the relative errors Δ ,+| , Δ ,+| , and Δ ,+| for velocity components. Figure 7 shows that the relative errors Δ ,+| and Δ ,+| are very close and their values are ranging from 0.1% to 18% at > 1.The relative error Δ ,+| along -axis varies from 3% to 9%.However, all relative errors for position display a tendency around 5.5% at > 15. 
Figure 8 illustrates similar results for the velocity components. Two of the relative velocity errors range from 0.1% to 7.3%, and the third varies from 0.1% to 5.8%. At time steps beyond 15, the relative errors for velocity settle around 5.5-6.2%. The results in Figures 7 and 8 therefore demonstrate that, for our example, the proposed decentralized predictor produces good results over a long time period. Conclusion In some control problems, nonlinear functionals of state variables are interpreted as cost functions, which carry useful information about the target system for control. To predict an NFS under a multisensory environment, prediction fusion algorithms are proposed and their estimation accuracies are discussed. In general, the centralized fusion algorithm is considered the most accurate. However, owing to the inherent drawbacks of centralized processing, the decentralized algorithm is found here to be the best among the fusion prediction algorithms. To show the performance of the fusion predictors for an NFS in a practical application, multisensory fusion prediction of the unit vectors of position and velocity under constant-acceleration motion of an SLV is considered. The comparative analysis and simulation results for this example show that the proposed decentralized fusion predictor for an NFS has competitive performance in terms of MSE and relative errors. Figure 7: Relative errors for position components. Figure 8: Relative errors for velocity components. There is an alternative idea for estimating the NFS. In this approach, the unknown NFS is treated as an additional state variable governed by a nonlinear difference equation; including this variable in the state vector yields a nonlinear discrete-time system with an extended state. Thus, the problem of predicting the unknown NFS is reduced to a nonlinear filtering problem by replacing the real state vector with the extended state vector, and approximate nonlinear filters can be used for simultaneous prediction of the unknown state vector and the NFS. Table 1: Comparison of MSEs and relative errors. Table 1 shows the relative errors of the decentralized estimator with respect to the global optimal centralized estimator. Under the Keplerian motion assumption [31], the IIP can be represented as a nonlinear function of the unit vectors of position and velocity; therefore, the IIP prediction problem is reduced to estimation of these unit vectors.
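As a footnote to the experimental model above, the discretized constant-acceleration transition matrix whose layout was garbled during extraction can be written compactly as a block-diagonal Kronecker product; the sketch below assumes the nine-dimensional state is ordered per axis as (position, velocity, acceleration), consistent with the description in the text, although the published matrix may order the components differently.

```python
import numpy as np

def ca_transition(dt):
    """9x9 constant-acceleration transition matrix for three Cartesian axes."""
    block = np.array([[1.0, dt, 0.5 * dt**2],   # position <- position, velocity, acceleration
                      [0.0, 1.0, dt],           # velocity <- velocity, acceleration
                      [0.0, 0.0, 1.0]])         # acceleration treated as (nearly) constant
    return np.kron(np.eye(3), block)            # one block per axis (x, y, z)

F = ca_transition(dt=0.1)
print(F.shape)   # (9, 9)
```

The process noise would then enter mainly through the acceleration terms, in the spirit of Singer-type models, with the debiased converted radar measurements of range, azimuth, and elevation mapped to Cartesian position observations of this state.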
2018-04-03T02:52:11.872Z
2015-11-01T00:00:00.000
{ "year": 2015, "sha1": "cc9daa22a2bf0d5950fd248d04fc57f9e85aad7b", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1155/2015/249857", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "918bf6544c84bde4ddb06add9db50b9cdf34de24", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }