South Galactic Cap u-band Sky Survey (SCUSS): Data Reduction
The South Galactic Cap u-band Sky Survey (SCUSS) is a deep u-band imaging survey in the Southern Galactic Cap, using the 90Prime wide-field imager on the 2.3 m Bok telescope at Kitt Peak. The survey observations started in 2010 and ended in 2013. The final survey area is about 5000 deg^2 with a median 5-sigma point-source limiting magnitude of about 23.2. This paper describes the survey data reduction process, which includes basic image processing, astrometric and photometric calibrations, image stacking, and photometric measurements. Survey photometry is performed on objects detected both on SCUSS u-band images and in the SDSS database. Automatic, aperture, point-spread function (PSF), and model magnitudes are measured on stacked images. Co-added aperture, PSF, and model magnitudes are derived from measurements on single-epoch images. We also present comparisons of the SCUSS photometric catalog with those of the SDSS and CFHTLS.
Introduction
The South Galactic Cap u-band Sky Survey (SCUSS; see X. Zhou et al. 2015, in preparation for more detail) is a deep imaging survey in the South Galactic Cap in u band with an effective wavelength of 3538 Å. It is an international cooperative project between the National Astronomical Observatories of China and the Steward Observatory of the University of Arizona. The survey utilizes the 2.3 m Bok telescope at Kitt Peak. The camera, installed at the prime focus, provides a field of view (FOV) of about 1 deg^2. The adopted filter is similar to the u band of the Sloan Digital Sky Survey (SDSS; York et al. 2000). The SCUSS project started in the summer of 2009 and began its observations in fall 2010. Survey observations were completed in the fall of 2013. The final survey area is about 5000 deg^2 with uniform imaging depth, of which more than 75% is covered by the SDSS footprint. The median imaging depth for point sources is about 23.2 mag at a signal-to-noise ratio (S/N) of 5 with a 5 minute exposure time. Table 1 gives the basic characteristics of SCUSS. The main goal of the survey is to supply input photometric catalogs to select spectroscopic targets for the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (Cui et al. 2012). In addition, by combining with other bands in large-scale photometric surveys, such as the SDSS and the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS; Kaiser 2004), the survey data can be used for a wide range of scientific investigations, such as Galactic structure, Galactic extinction, galaxy photometric redshifts, galaxy star formation rates, and stellar populations of nearby galaxies. This paper describes the data reduction pipeline specially designed for the SCUSS survey. There are some instrumental issues that need to be specially handled, such as issues in the overscan, substructures in the bias, and crosstalk. In addition to detecting sources on SCUSS images, we also provide photometry for SDSS objects using consistent object parameters. Section 2 introduces the SCUSS survey and related facilities. Section 3 describes the basic image processing. Astrometric and photometric calibrations are presented in Sections 4 and 5, respectively. Section 6 provides image-quality statistics of the observations. Section 7 describes image stacking. Sections 8 and 9 present photometry methods and a comparison with other surveys, respectively. A summary is given in Section 10.
The survey and facilities
SCUSS is a u-band imaging survey in the northern part of the southern Galactic Cap. The survey originally covered the region of Galactic latitude b < −30° and equatorial latitude δ > −10°, with a total area of about 3700 deg^2. It was further extended to the Galactic anti-center region and the extra area covered by the SDSS, with a final survey area of about 5000 deg^2 (X. Zhou et al. 2015, in preparation). Normally, there are two exposures for each field and the total exposure time is 5 minutes, which generates images 1-1.5 mag deeper than the SDSS u band. In the following, we give a general description of the telescope, camera, and filter that were used in SCUSS.
Telescope
The Bok telescope (http://james.as.arizona.edu/~psmith/90inch/90inch.html) is a 90 inch (2.3 m) telescope operated by the Steward Observatory of the University of Arizona. It is located on Kitt Peak, at latitude +30°57′46″.5 and longitude 111°36′01″.6 W. The elevation is about 2071 m and the typical seeing is about 1″.5. The telescope runs year-round except on Christmas Eve and during a maintenance period in August. The accuracy of the absolute pointing of the telescope is recorded as 3″ over the entire sky.
Camera
An imaging system, named 90Prime, is deployed at the prime focus (corrected focal ratio: f/2.98; corrected focal length: 6829.2 mm). The detector is a CCD array consisting of four 4k×4k backside-illuminated CCDs. They are STA2900 CCDs made by Semiconductor Technology Associates, Inc. and backside processed at the University of Arizona Imaging Technology Laboratory. These CCDs have been optimized for the u-band response, giving a quantum efficiency at u band of about 80%. Figure 1 displays the layout of the CCDs on the focal plane. The edge-to-edge FOV is about 1°.08 × 1°.03. The pixel scale is 0″.454. There are inter-CCD spacings in the center of the array: 166″ in right ascension and 54″ in declination. Each CCD is read out by 4 amplifiers located at the corners. Each detector has 4096×4032 physical pixels and 20 rows of overscan pixels for each amplifier. The full well is about 90,000 electrons or 65,000 data numbers (DNs). The current system gain is set to about 1.5 electrons per DN. The dark current is about 7 electrons per pixel per hour. The readout time is about 30 s and the average readout noise is about 8.8 electrons. The response non-uniformity with the u filter, defined as the standard deviation divided by the mean of a u-band flat-field image, is about 2%. Table 2 summarizes these parameters of the detector. In 2010, CCD #4 had problems and only a quarter of this CCD could be used. CCD #2 and CCD #3 had relatively large readout noise. The camera was upgraded in 2011, at which time these two CCDs were replaced with new detectors. CCD #4 was replaced and swapped with CCD #1. New video preamps were added to all four CCDs in order to reduce crosstalk and improve noise immunity.
Filter
The SCUSS u filter is similar to the SDSS u band. Figure 2 displays both SCUSS and SDSS system response functions. The SCUSS u response curve includes the filter transmission, the CCD quantum efficiency, and the atmospheric extinction at the typical airmass of 1.3. The adopted atmospheric extinction is the same as that of the SDSS, which is based on the standard Palomar monochromatic extinction coefficients but here scaled to the elevation of Kitt Peak assuming an exponential scale height of the atmosphere of 7000 m (Doi et al. 2010). The effective wavelength of the filter is defined as λ_eff = ∫λR(λ)dλ / ∫R(λ)dλ, where R(λ) is the filter response curve, and the bandwidth is defined from the same response curve. The effective wavelength and bandwidth of the SCUSS u band are about 3538 Å and 345 Å, respectively. The FWHM is about 520 Å. The effective wavelength, bandwidth, and FWHM of the SDSS u filter are 3562 Å, 385 Å, and 575 Å, respectively. The SCUSS u band is slightly bluer than the SDSS filter. In the rest of this paper, we will use the symbol u* to refer to the SCUSS u band and the symbols u, g, r, i, and z for the five SDSS bands. Unless otherwise specified, objects are classified as point-like or extended throughout this paper using the SDSS star-galaxy separation.

Basic image processing

Image division and overscan correction

90Prime has four CCDs and each CCD is read out by four amplifiers. Thus, every exposure file includes 16 FITS extensions. We split the file into four smaller FITS images, each of which represents one of the four CCDs. Overscan lines (40 pixels) are moved to the right side of the image.
We compute the median of the overscan columns for each amplifier and subtract it from the raw frame. There are some subtle issues about the overscan that require special care. For example, when a bright star is located right beside the overscan region, nearby overscan pixels are contaminated. Occasionally, some brighter stripes appear within or close to either end of the overscan region. The overscan in these regions needs to be interpolated or extrapolated. After overscan subtraction, frames are rotated counter-clockwise by 90° to keep north up and east left. These frames are then trimmed to the size of 4096×4032 pixels.
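A minimal sketch of the per-amplifier overscan subtraction described above (the array layout, with the overscan pixels assumed to sit in the last columns of each amplifier image, is illustrative; real frames would be read from the FITS extensions with astropy):

```python
import numpy as np

def subtract_overscan(amp_image, n_overscan=20):
    """Subtract the median of the overscan region from one amplifier image.

    `amp_image` is a 2-D array whose last `n_overscan` columns are assumed to
    be overscan pixels (this layout is an assumption for the sketch).
    """
    data = amp_image[:, :-n_overscan].astype(float)
    overscan = amp_image[:, -n_overscan:]
    # A median is robust against contamination by bright stars next to the
    # overscan; row-by-row interpolation could replace it when stripes are
    # present, as discussed in the text.
    return data - np.median(overscan)

# Toy example: a 100x120 amplifier readout with a bias level of 1000 ADU.
frame = np.random.poisson(50, size=(100, 120)).astype(float) + 1000.0
corrected = subtract_overscan(frame)
print(corrected.mean())   # close to the simulated sky level of ~50 ADU
```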
Dark Current
The average dark current of the 90Prime imager is about 7.2 electrons per hour. For an exposure of 300 s, it is negligible (about 0.6 electrons) relative to the readout noise of about 8.8 electrons, so we did not take dark exposures.
Bias
Bias frames are taken before and after the scientific exposures. A total of 20 bias frames are obtained each night. After combining them, there are large-scale structures, especially for CCD #2 and #3 as shown in Figure 3. Counts in some peaks of the structures can be more than 10 DN. These structures affect the accuracy of photometry significantly, especially in u band, because counts of scientific frames in this band are low. The bias structures are stable for a long time (month-to-month variation is about 0.25 ADU), so we derive a median "super" bias by combining all the bias frames taken within a month. All overscan-subtracted frames are corrected by this "super" bias.
Flat-fielding
Flat-fielding removes the instrumental signatures from raw frames using several exposures taken with the telescope facing a uniform light source. At the beginning and end of each night, a total of 20 dome flats are taken with the dome closed and the telescope pointing to a white screen. This screen is illuminated by UV lamps of Philips MasterColor Ceramic Metal Halide ED-17. Usually, a 6 s exposure generates about 20,000 DN on the CCD. Although dome flats have extremely high S/Ns, there are some disadvantages to applying them in flat-fielding. First, lamps are point sources and the scattered light on the screen might be uneven in the radial direction. Second, the optical path of the light from the screen is not the same as the light from night sky. In addition, we also find that the gain of each amplifier changes considerably during the whole night. When using only dome flats, we find that average sky backgrounds in the four amplifiers of each CCD can be quite different.
Twilight flats are an alternative and are regarded as more uniform than dome flats. They are obtained by observing the sky during evening and morning twilight. The brightness of the twilight sky changes rapidly and strongly depends on the weather, so it is difficult to obtain enough high-S/N flats. The problem of gain variation also persists.
Raw science images also contain the flat-field characteristics. By combining all the science frames with outliers rejected, we can obtain a "super" sky flat. This kind of flat has more advantages than the two types of flats above. The light in the night-sky "super" flat passes through the same optical path as that from the observed objects, and the CCD gain is the average over all science frames, which is closer to the real-time variation during the night. There are some structures in the "super" flat that show a different sensitivity amplitude relative to the dome flats, possibly due to a different light path. About 150 science images on average are observed each night, and the average level of the sky background in each image is about 150 ADU. As a result, the "super" sky flat has an average count of about 23,000 ADU. We present the sky flats of 2011 December 28 as an example in Figure 4. After applying either the dome flat (higher S/N) or the super sky flat, we find less than a 0.2% difference in the sky background fluctuations. The error introduced by the lower S/N of the "super" flat is therefore negligible relative to the sky background fluctuation of about 12.7 ADU and the CCD readout noise of 8.8 electrons. Thus, we use the "super" sky flat for the flat-fielding corrections of the science frames.
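A minimal sketch of building the "super" sky flat by combining many science frames with outlier rejection (a simple per-pixel sigma-clipped mean; the clipping threshold, iteration count, and toy frames are illustrative, not the pipeline's actual parameters):

```python
import numpy as np

def super_sky_flat(frames, nsigma=3.0, niter=3):
    """Combine bias-corrected science frames into a normalized sky flat.

    `frames` is a stack of shape (n_frames, ny, nx). Each frame is first
    scaled to its own median sky level so that objects, cosmic rays, and
    sky-level differences can be rejected by sigma clipping.
    """
    stack = np.array([f / np.median(f) for f in frames], dtype=float)
    keep = np.ones_like(stack, dtype=bool)
    for _ in range(niter):
        mean = np.nanmean(np.where(keep, stack, np.nan), axis=0)
        std = np.nanstd(np.where(keep, stack, np.nan), axis=0)
        keep = np.abs(stack - mean) <= nsigma * std
    flat = np.nanmean(np.where(keep, stack, np.nan), axis=0)
    return flat / np.median(flat)   # normalize the flat to unity

# Toy example: 20 frames of pure sky (~150 ADU) with ~12.7 ADU fluctuations.
frames = 150.0 + np.random.normal(0.0, 12.7, size=(20, 64, 64))
flat = super_sky_flat(frames)
print(flat.mean(), flat.std())
```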
Crosstalk
Crosstalk frequently occurs in multi-channel CCD readout. When there are saturated objects in one of the CCD amplifiers, the effect is best seen as mirror images in the other amplifiers. Crosstalk usually causes contamination across the output amplifiers at the level of 1:10,000 (Freyhammer et al. 2001). The output counts of each amplifier are the sum of the true counts from the sky and a small fraction of the counts from the other amplifiers. The crosstalk signal can be either positive or negative and should be corrected for high-precision astronomical photometry.
The 90Prime camera has four CCDs and each CCD has four amplifiers. The crosstalk effect is clearly seen in our SCUSS raw images with saturated stars. The crosstalk signal is positive. We characterize the effect assuming that it is additive and proportional to the number counts of other amplifiers. The proportionality coefficients are relatively stable in SCUSS images. We correct crosstalk using the following prescription: a series of images with bright stars appearing in one of the CCD amplifiers is selected; the proportionality coefficients are estimated by comparing the mirror signals with their original signals; the crosstalk signals in one quadrant are removed with the corresponding coefficients of the other three quadrants. The average coefficient for the SCUSS images is about 2:10,000. Figure 5 illustrates the crosstalk and the performance of its correction. The crosstalk signals are clearly seen as mirror images (arrows) of a bright star that is marked with a circle.
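A minimal sketch of the correction prescription described above (coefficient values are illustrative; the ghost in the real camera appears mirrored, so this sketch assumes the four amplifier images have already been remapped to a common readout orientation so that pixels read out simultaneously share the same array indices):

```python
import numpy as np

def correct_crosstalk(amps, coeffs):
    """Remove crosstalk between the four amplifiers of one CCD.

    `amps` is a list of four 2-D arrays in a common readout orientation
    (remapping assumed done beforehand). `coeffs[i][j]` is the fraction of
    the signal in amplifier j that leaks into amplifier i; values of order
    2e-4 are typical for SCUSS images according to the text.
    """
    amps = [a.astype(float) for a in amps]
    corrected = []
    for i, victim in enumerate(amps):
        ghost = sum(coeffs[i][j] * amps[j] for j in range(4) if j != i)
        corrected.append(victim - ghost)
    return corrected

# Toy example: a saturated star in amplifier 0 leaves ghosts in the others.
amps = [np.full((64, 64), 150.0) for _ in range(4)]
amps[0][30:34, 30:34] = 65000.0
coeffs = 2e-4 * (np.ones((4, 4)) - np.eye(4))   # illustrative, uniform coefficients
clean = correct_crosstalk(amps, coeffs)
```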
Astrometry by UCAC4
Astrometric solutions are derived by cross-identifying objects in science frames with the Fourth US Naval Observatory CCD Astrograph Catalog (Zacharias et al. 2013, UCAC4). UCAC4 is an all-sky star catalog that is complete in R band down to 16 mag. It contains proper motions for most stars. An approximate first-order guess of the astrometric solution for each CCD is first estimated by using UCAC4 objects in the same area with positions revised according to their proper motions. Then, the projection center of the telescope can be calculated using this rough solution. A more accurate astrometric solution is then derived with the projection center and a radial second-order correction for the focal plane distortion. This distortion term is the same for all frames in our survey.
The left panel in Figure 6 shows the distribution of the astrometric errors in the plane of R.A. and decl. differences between SCUSS and UCAC4. It includes objects in one exposure of a randomly selected field. About 1800 objects are matched with UCAC4. The average position offsets are 0″.002 and 0″.004 for R.A. and decl., respectively. The 1σ position errors in R.A. and decl. are about 0″.128 and 0″.120. The right plot of Figure 6 illustrates the distortion of the focal plane. The plate scale varies from the center (0″.455) to the edge (0″.447) of the FOV. The pixel area near the corner is about 3.3% smaller than that of the central pixel.
External astrometric errors
External astrometric errors are estimated by matching SCUSS objects with the UCAC4 catalog. About 165 objects on average for each CCD are cross-identified, and the global external position error is about 0″.13 ± 0.02. The mean external astrometric offsets and RMS errors in R.A. and decl. are Δα = −0″.0012 ± 0.0079 and Δδ = −0″.0015 ± 0.0071, where the quoted errors correspond to the 68.3% confidence level. Figure 7 displays the external coordinate offsets and RMS errors as functions of R.A. and decl. The offsets and RMS errors are uniform over the entire survey.
Internal astrometric errors
Internal astrometric calibration errors are estimated using cross-identifications of UCAC4 sources inside the overlapping areas between two adjacent exposures. Each field is observed with two exposures dithered by half a CCD size, so every CCD frame is covered by four other frames whose astrometric solutions are independently derived. We calculate the internal astrometric uncertainties using the objects common to both frames. The global average internal error is about 0″.09 ± 0.03. The mean internal R.A. and decl. offsets and RMS errors over the whole survey are Δα = 0″.0009 ± 0.0102 and Δδ = 0″.0004 ± 0.0113.
External calibration
The SDSS imaging survey has covered more than one third of the sky within both northern and southern Galactic caps. It provides photometric catalogs of about 5200 square degrees in the SGC. More than 21 million objects in this area are recorded. The exposure time for each band is about 54 s. The u magnitude limit with 95% completeness for point sources is about 22.0 mag. The photometric calibration accuracy in u band is about 1.3%. The ninth data release (Ahn et al. 2012, DR9) is utilized to make photometric calibrations of the SCUSS images.
Because of gain and observational condition variations, individual zeropoints must be determined separately for each frame. Aperture photometry is performed using DAOPHOT (Stetson 1987) on the bias-subtracted and flat-fielded images. We choose an aperture radius of about 7″.3 (16 pixels), similar to the 7″.43 adopted by the SDSS photometric calibration. The aperture diameter is about 7 times the typical seeing FWHM, which is large enough not to be affected by aperture corrections. The transformation from the instrumental magnitude u_inst = −2.5 log10(ADU) to the SDSS calibrated magnitude is given by Equation (1), a combination of the instrumental magnitude, the instrumental zeropoint c, an atmospheric extinction term kX, and a color term f(u − g), where k is the atmospheric extinction coefficient, X is the airmass, and u − g is the SDSS color. The final term, f′(u − g, X), describes the color effect due to the atmospheric extinction. The central wavelength of the SCUSS u* band at airmasses of 1.0, 1.3, and 1.5 is estimated to be 3534 Å, 3538 Å, and 3542 Å, respectively. The photometric effect of the color terms for most main-sequence stars at different airmasses is less than 0.3%, estimated using theoretical stellar spectral libraries, so we ignore this second-order color term. Since the standard stars are the common SDSS objects within the same area of each image, the term solely related to the airmass is constant. Thus, Equation (1) can be simplified to Equation (2), where u is the calibrated SCUSS u-band magnitude and C is still termed the "photometric zeropoint". The color term is estimated as follows: (1) SCUSS instrumental magnitudes are calibrated using the photometric magnitudes of common SDSS objects without considering the color term; (2) magnitude differences between these calibrated SCUSS magnitudes and the SDSS magnitudes are calculated as a function of the SDSS u − g color; (3) an approximate system transformation formula, Equation (3), is fitted by a second-order polynomial in u − g for 0.8 < u − g < 2.7 (see Figure 8). This color term is only used for the photometric calibration. Here the SDSS is used for the zeropoint only, and the SCUSS photometry on its native system is defined to match the SDSS at u − g = 0. We choose stars with 16.0 < u < 20.5 to eliminate objects that are saturated or have large photometric errors. Equation (3) is then applied to transform u to SCUSS u*. Following Equation (2), we measure the final zeropoint C by iteratively rejecting outliers. The CCD gain varies slightly during the observation; furthermore, the CCD gains of the four amplifiers do not change synchronously. In addition, there is a photometric response non-uniformity on the flat-fielded images, possibly caused by the focal plane distortion and scattered light reflected in the optical system (Regnault et al. 2009; Betoule et al. 2013). Thus, we perform the photometric calibration for each amplifier of each CCD. There are about 47 stars in each amplifier on average to derive the zeropoint, whose accuracy is estimated to be about 0.01 mag. The zeropoint here is expressed in units of mag for 1 ADU s^−1. After deriving independent zeropoints for each amplifier, we find a small residual pattern in the sensitivity, as derived from objects observed at different positions in the field, as shown in Figure 9. We use this residual map to further refine the flat-field.
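A minimal sketch of the per-amplifier zeropoint determination against SDSS standards, with a quadratic color term applied and outliers rejected iteratively; the color-term coefficients below are placeholders (the fitted SCUSS values are given by Equation (3) of the original paper), and the toy data are not real measurements:

```python
import numpy as np

def color_term(u_minus_g, a=0.0, b=0.0, c=0.0):
    """Second-order polynomial color term f(u - g); coefficients are placeholders."""
    return a * u_minus_g**2 + b * u_minus_g + c

def fit_zeropoint(u_inst, u_sdss, g_sdss, nsigma=3.0, niter=5):
    """Zeropoint C such that u* = u_inst + C, after transforming the SDSS u
    magnitudes to the SCUSS system with the color term (cf. Equations (2)-(3))."""
    u_scuss_ref = u_sdss + color_term(u_sdss - g_sdss)   # SDSS u transformed to u*
    good = (u_sdss > 16.0) & (u_sdss < 20.5)              # avoid saturated/faint stars
    for _ in range(niter):
        resid = u_scuss_ref[good] - u_inst[good]
        zp, scatter = np.median(resid), np.std(resid)
        good &= np.abs(u_scuss_ref - u_inst - zp) <= nsigma * scatter
    return zp, scatter

# Toy example: 50 standard stars with a true zeropoint of 23.8 mag.
rng = np.random.default_rng(1)
u_sdss = rng.uniform(16.5, 20.0, 50)
g_sdss = u_sdss - rng.uniform(0.8, 2.7, 50)      # u - g colors in the fitted range
u_inst = u_sdss - 23.8 + rng.normal(0.0, 0.02, 50)
print(fit_zeropoint(u_inst, u_sdss, g_sdss))      # ~ (23.8, 0.02)
```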
Photometric response differences of the four amplifiers
The average zeropoints of the four amplifiers are shown in the left panel of Figure 10 as a function of time. Usually during the night, we scan the sky from east to zenith and then to west; the airmass and the corresponding atmospheric extinction first decrease and then increase, so the zeropoints show nightly variations. From this figure, it can be seen that the weather conditions during 2010 were the worst. The right panel of Figure 10 shows the relative zeropoint variation with time for each amplifier of CCD #1, compared with the average zeropoint of all amplifiers. The relative zeropoint variations do not track each other, and sometimes two amplifiers show opposite variations. The detectors performed poorly in 2010, so the relative zeropoints changed more noticeably with time. Thus, it is critical that we obtain the photometric solutions independently for the four amplifiers. Table 3 gives the zeropoint scatter of the four amplifiers for each year and each CCD. The zeropoint differences among amplifiers originate from gain variation, the photometric response non-uniformity mentioned before, and the photometric calibration error. The overall zeropoint scatters of the four amplifiers around the averages are 0.012 ± 0.006, 0.016 ± 0.007, 0.015 ± 0.009, and 0.011 ± 0.006 for CCD #1, #2, #3, and #4, respectively. The general response differences of the four amplifiers in all CCDs are less than 1.5%, except for CCD #3 in 2010, which is about 2.8%.
Internal calibration
Each SCUSS field is observed twice and overlaps with the adjacent fields. Thus, each CCD image can be calibrated using the common objects cross-identified in surrounding images. The calibration process begins by balancing the zeropoints of images covered by the SDSS and then transferring the photometric solutions to the images outside the SDSS footprint. The calibration iterates until the whole grid of photometric solutions converges. Figure 11 shows the magnitude difference distribution between measurements for objects that are observed twice. The histograms in black and red are the distributions for the same objects with the SCUSS u-band magnitude calibrated externally and internally, respectively. The dispersion of the external calibration is about 0.028 mag, while that of the internal calibration is about 0.025 mag. The internal calibration is thus at least as good as, or even slightly better than, the external calibration.
Image Quality Statistics
Both the weather and the instrumental status affect the image quality. The camera was regularly updated after each observing season, so its performance improved gradually. The weather and night-sky conditions strongly affect the imaging depth of the survey. We measure characteristics that trace the image quality, such as the airmass, sky background brightness, seeing, and photometric zeropoint.
Seeing
The seeing is estimated from the FWHM measurements of isolated and bright point sources. The seeing distribution is presented in Figure 12a. The best seeing is 1″.2 and the overall median seeing is about 2″.0. The Bok telescope is located in the trough between two peaks of the mountain, so the wind speed is usually higher than at other places on the same site, which has an effect on the seeing.

Fig. 11. — Magnitude difference distributions between two measurements for objects observed more than twice. The black histogram is externally calibrated by the SDSS catalog, while the red one is internally calibrated. Only bright objects with 16 < u < 19 are selected.

Airmass

Figure 12b shows the airmass distribution of all survey images. The median airmass is about 1.28, and this is used to determine the typical u-band filter response as shown in Section 2.3.
Photometric zeropoint
The variation of the photometric zeropoint mainly reflects the change in atmospheric transparency. The distribution of the photometric zeropoints is plotted in Figure 12c. The mean photometric zeropoint is 23.81 mag for 1 ADU s^−1. The median zeropoints for airmasses of 1.0, 1.2, and 1.4 are 23.93, 23.85, and 23.75, respectively. There are some images with low zeropoints, most of which were observed when the weather was cloudy.
Sky brightness
The sky background brightness is an important parameter characterizing a ground-based observing site. Compared with other light sources, the artificial light pollution from nearby cities is the more serious contributor at Kitt Peak. The distribution of the u-band sky brightness at Kitt Peak is shown in Figure 12d. Some images were taken when the moon was above the horizon. The average night-sky brightness of all observations is about 22.05 mag arcsec^−2. The moonless median sky background at zenith is about 22.37 mag arcsec^−2, which is comparable to that of the Apache Point site (22.1 mag arcsec^−2). Note that the calibrated sky brightness uses the frame zeropoints, which implicitly include atmospheric extinction, so the actual sky brightness is darker; the u-band atmospheric extinction coefficient is about 0.5, as estimated from observations taken during photometric nights.
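A minimal sketch of converting a measured sky background level into a surface brightness using the frame zeropoint, pixel scale, and exposure time; the 150 s exposure assumed in the example is one of the two exposures making up the quoted 5-minute total, and the numbers are illustrative:

```python
import numpy as np

def sky_surface_brightness(sky_adu, exptime, zeropoint, pixscale=0.454):
    """Sky surface brightness in mag/arcsec^2.

    `zeropoint` is the frame photometric zeropoint in mag for 1 ADU/s (as in
    Section 6.3), `sky_adu` the per-pixel background level, `exptime` the
    exposure time in seconds, and `pixscale` the pixel scale in arcsec.
    Because the zeropoint implicitly contains atmospheric extinction, the
    result is the sky brightness as seen below the atmosphere.
    """
    adu_per_sec_per_arcsec2 = sky_adu / exptime / pixscale**2
    return zeropoint - 2.5 * np.log10(adu_per_sec_per_arcsec2)

print(sky_surface_brightness(sky_adu=150.0, exptime=150.0, zeropoint=23.81))
# ~22.1 mag/arcsec^2, consistent with the average value quoted in the text
```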
Quality Control
Most SCUSS fields are observed under good weather and moderate seeing conditions. There are some cases with bad image quality: (1) high sky background due to observations during astronomical twilight or when the moon was up; (2) large seeing due to strong wind; (3) low atmospheric transparency due to cirrus in the FOV; (4) bad focus of the CCD camera. In 2013, we spent one and a half observing runs re-observing most of the bad-quality fields.
To ensure the homogeneity of the imaging depth and the completeness of the SCUSS survey, we keep only the images with seeing < 3″.0, sky background < 500 ADU, and photometric zeropoint > 22.56 mag. About 92.6% of the survey area is covered by these images. For the remaining area, we take images with seeing < 3″.0, regardless of the sky brightness and photometric zeropoint (3.6% of the area). If none are available, we take all remaining images, which cover about 3.8% of the total area.
Image Resampling and Stacking
Based on the central coordinates of each field, we stack the related single-epoch images to form a combined image. We first project and resample the single-epoch images onto a grid with a fixed pixel scale of 0″.454. The grid has 8640×8200 pixels, covering a sky area of 1°.090 × 1°.034. If a single-epoch image contributes less than 128×128 pixels, it is not included in the stacking process. For each pixel of the grid, there are more than four related pixels in each single-epoch image. These pixels are regarded as having the same size after being flat-fielded. We calculate the fractions of these pixel areas that are covered by the grid pixel and sum them to conserve flux.
We subtract the sky backgrounds from the resampled images. Their photometric zeropoints are converted to linear flux weights. The remaining signal after background removal is weighted by these weights and then co-added. If there are more than three pixels involved in the stacking, cosmic-ray rejection is applied using a sigma-clipping algorithm. We redo the flux calibration for each stacked image with the SDSS DR9 catalog to derive the final photometric zeropoint, which is approximately 29 mag for 1 DN. In addition, a mask image for each field is also generated. Each mask pixel gives the number of images actually involved in the flux co-addition; the mask value is reduced by one for each epoch in which a pixel is bad, saturated, or blank. Figure 13 shows a typical stacked image and its mask image. Most of the stacked image has two exposures, an overlapping area has more than three exposures, and there are some small holes located in the CCD gaps that are blank. Some of the CCD gaps have only one exposure, so the depth is about 0.75 mag shallower in those locations.
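A minimal sketch of the per-pixel co-addition step. The text converts the zeropoints to "linear flux weights"; this sketch reads that as scaling each background-subtracted, resampled frame to a common zeropoint before a straight mean, which is an assumption, and the rejection threshold and minimum number of epochs are illustrative:

```python
import numpy as np

def coadd(frames, zeropoints, target_zp=29.0, nsigma=5.0):
    """Combine background-subtracted, resampled frames onto a common flux scale.

    Each frame is multiplied by 10**(0.4 * (target_zp - zp_i)) so that all
    epochs share the same zeropoint. Pixels deviating by more than `nsigma`
    from the per-pixel median are rejected (cosmic rays) when at least three
    epochs overlap, mirroring the criterion described in the text.
    Returns the stacked image and a mask giving the number of contributing epochs.
    """
    frames = np.asarray(frames, dtype=float)
    scale = 10.0 ** (0.4 * (target_zp - np.asarray(zeropoints)))
    scaled = frames * scale[:, None, None]

    good = np.isfinite(scaled)
    n_epochs = good.sum(axis=0)
    median = np.nanmedian(np.where(good, scaled, np.nan), axis=0)
    std = np.nanstd(np.where(good, scaled, np.nan), axis=0)
    reject = (np.abs(scaled - median) > nsigma * std) & (n_epochs >= 3)
    good &= ~reject

    stacked = np.nansum(np.where(good, scaled, 0.0), axis=0) / np.maximum(good.sum(axis=0), 1)
    return stacked, good.sum(axis=0)

# Toy example: two epochs, one with a cosmic-ray hit (not rejected here,
# because rejection requires at least three overlapping epochs).
epoch1 = np.random.normal(0.0, 1.0, (64, 64))
epoch2 = np.random.normal(0.0, 1.0, (64, 64)); epoch2[10, 10] = 500.0
image, mask = coadd([epoch1, epoch2], zeropoints=[23.8, 23.7])
```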
Photometry
The SCUSS photometric pipeline generates comprehensive catalogs using multiple photometric techniques so that different users can choose the one that suits their needs. Aperture, automatic aperture, point-spread function (PSF), and model photometry are applied to both SCUSS stacked and single-epoch images. The model photometry is consistent with the SDSS model photometry that utilizes the SDSS r-band model shape parameters to measure brightnesses on the SCUSS u-band images.
Source detection
Source detections in astronomical science images are never complete, especially at the fainter magnitude end. Morphologies of extended sources in u band are more fragmented and diffuse than those in other optical bands, so many fainter sources with low surface brightness might be missed. In addition, most of the science projects based on SCUSS data also need deeper data in redder bands from other large-scale surveys. More than 3/4 of the SCUSS area is covered by the SDSS. Our sources include both SDSS-detected objects and detections unique to our SCUSS images. For the area not covered by the SDSS, we perform photometry for SCUSS detections only. The resulting catalog within these areas will be useful for matching with other wide imaging surveys (e.g., Pan-STARRS).

Fig. 13. — North is up and east is left. The galaxy cluster Abell 400 (marked as a plus symbol in the left plot) happens to appear in this area. The mask image shows the exposure numbers. There are two black boxes with no observations. Most of the area has two exposures, a small part shows more than two exposures, and some CCD-gap areas have only one exposure. The small horizontal lines are caused by missing CCD rows that were not recorded by the camera controller during 2011 and 2012.
The SDSS objects with any one of the PSF, Petrosian, model, and CModel ugriz-band magnitudes brighter than 23.5 mag are selected. SExtractor is used to detect objects in the SCUSS stacked images, and these detections are matched with the SDSS objects. The unmatched detections are then rematched with SDSS catalog entries fainter than 23.5 mag in order to find missing SDSS objects. The remaining objects are SCUSS-unique detections.
Aperture photometry by DAOPHOT
Circular aperture photometry is a simple procedure to measure magnitudes of sources. The core code of the aperture photometry in DAOPHOT is utilized to measure aperture magnitudes (Stetson 1987). We use 12 apertures with radii ranging from 1″.4 to 18″ (from 3 to 40 pixels; see Table 4). It is important to apply aperture corrections to aperture magnitudes because of the flux lost outside a finite aperture; this is especially important for smaller apertures and larger seeing. Figure 14 presents growth curves under different seeing conditions. The growth curve is calculated as the magnitude difference between each of the 12 apertures and the 7″.3 radius aperture, as a function of aperture radius. This reference aperture is the one used for the photometric flux calibration. We choose isolated and point-like objects with photometric errors in all apertures of less than 0.05 mag to derive the growth curve. Outlier objects are eliminated when the median growth curve is calculated.
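A minimal sketch of deriving a median growth curve from bright, isolated point sources and turning it into an aperture correction by interpolation. Only the smallest (3 pixel) and largest (40 pixel) radii are given in the text; the intermediate radii below are placeholders for the Table 4 values, and the cubic-spline interpolation mirrors what is done for the automatic magnitudes in the next subsection:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Aperture radii in arcsec (3 to 40 pixels at 0.454"/pixel; intermediate
# values are placeholders, not the actual Table 4 entries).
radii = np.array([3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40]) * 0.454
ref_radius = 7.3                      # reference aperture used for flux calibration

def median_growth_curve(aper_mags):
    """Median magnitude difference m(r) - m(ref) for a set of stars.

    `aper_mags` has shape (n_stars, n_apertures); stars should be isolated,
    point-like, and have small photometric errors in every aperture.
    """
    ref = aper_mags[:, np.argmin(np.abs(radii - ref_radius))]
    return np.median(aper_mags - ref[:, None], axis=0)

def aperture_correction(radius, growth):
    """Correction (in mag) to add to a magnitude measured at `radius` so that
    it matches the reference aperture, via cubic-spline interpolation."""
    return -CubicSpline(radii, growth)(radius)

# Toy example: analytic growth curve for a Gaussian PSF with 2" seeing (FWHM).
sigma = 2.0 / 2.3548
frac = 1.0 - np.exp(-radii**2 / (2 * sigma**2))       # enclosed flux fraction
mags = -2.5 * np.log10(frac)
growth = mags - mags[np.argmin(np.abs(radii - ref_radius))]
print(aperture_correction(2.27, growth))               # correction for the 5-pixel aperture
```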
Automatic photometry by SExtractor
SExtractor (Bertin & Arnouts 1996) provides precise magnitudes of sources using automatic aperture photometry, generating the so-called "automatic magnitude", which is motivated by the Kron algorithm (Kron 1980). An elliptical aperture is automatically determined for each object to integrate the flux. Although most of the flux is expected to lie within the elliptical aperture and the flux loss should be almost independent of the source magnitude, we find that the flux loss in fact changes with both the source magnitude and the seeing: brighter objects and objects observed in worse seeing require larger corrections. We consider an equivalent circle whose area is equal to that of the ellipse, so the radius of this circular aperture is the square root of the product of the semi-major and semi-minor axis lengths. We call this circular radius the characteristic radius; it has the same meaning as the aperture size in the normal aperture photometry discussed above. Figure 15 presents the aperture corrections for the automatic magnitudes at different seeing. The black dots are point-like and isolated objects detected by SExtractor. They lie along the growth curve of the aperture correction described in the previous section, so an interpolation from that curve is good enough for correcting the automatic magnitudes.
We also perform Petrosian-like photometry on SCUSS images with SExtractor. The Petrosian aperture is very similar to the Kron one: they share the same position angle and ellipticity. The Petrosian aperture radius is determined by the ratio of the isophotal brightness at a certain radius to the average surface brightness within this radius. The ratio is set to 0.2, and the corresponding Petrosian radius is larger than the Kron radius. The aperture corrections for Petrosian magnitudes are estimated in the same way as mentioned above.

Fig. 15. — The growth curves of Figure 14 are overlaid in red; the cyan curves are the cubic spline interpolation of those growth curves.
PSF photometry
The PSF is obtained using PSFEx (http://www.astromatic.net/software/psfex; Bertin 2011). The form of the PSF in PSFEx is expressed as a linear combination of basis vectors; the pixel basis is selected for our PSF modeling. The spatial PSF variation over the focal plane usually shows a smooth profile, which can be modeled by a low-order polynomial. For SCUSS single exposures, a second-order polynomial is good enough to describe the PSF variation over the CCD plane. For SCUSS stacked images, a seventh-order polynomial is used to describe the more complex image quality.
A code is specially designed to measure the PSF magnitudes at known object positions. The code takes the SCUSS image, the SDSS and SCUSS-only object positions, the bad-pixel list, and the PSF model derived by PSFEx as inputs, and outputs the Gaussian-fitted position, the local sky background and its error, the local PSF FWHM, the fitted PSF integrated flux and its estimated error, and a flag giving the status of each object. Figure 16 shows the flowchart of the PSF photometry code. The local sky background for each object is measured using the pixels at r > 7.5 × FWHM, where the FWHM is that of the local PSF model. Outliers, such as cosmic rays and signals from real objects, are iteratively rejected by a sigma-clipping algorithm. After the sky background is subtracted, the object position on the CCD is fitted by a two-dimensional Gaussian function. The position is allowed to shift because the coordinates in the SDSS redder bands are slightly different from those in the SCUSS u band due to atmospheric refraction and stellar proper motions. The pixels centered at the initial position with r < 2.5 × FWHM are considered in the calculation. If the newly fitted position is more than 2.0 pixels away from the old one, the PSF photometry is performed at the original position. The PSF model is interpolated to the pixel scale of the CCD image and fitted to the fluxes of the object pixels within r = 2.0 × FWHM. Flags are also provided by our PSF photometry to indicate the reliability. They are coded in decimal and expressed as a sum of powers of 2: (1) CCD artifacts; (2) bad pixels; (4) containing saturated pixels; (8) contaminated by neighbors; (16) near image edges.
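A minimal sketch of the core of such a forced PSF fit at a known position: estimate the local sky with sigma clipping, then fit a single amplitude of the local PSF model to the sky-subtracted stamp by linear least squares. The Gaussian "PSF", the stamp geometry, and the error estimate are illustrative; in the pipeline the PSF comes from PSFEx and is evaluated at the object position:

```python
import numpy as np

def sigma_clipped_sky(pixels, nsigma=3.0, niter=5):
    """Local sky level and its error from a set of background pixels,
    with outliers (cosmic rays, neighbours) rejected iteratively."""
    good = np.isfinite(pixels)
    for _ in range(niter):
        med, std = np.median(pixels[good]), np.std(pixels[good])
        good &= np.abs(pixels - med) <= nsigma * std
    vals = pixels[good]
    return np.median(vals), np.std(vals) / np.sqrt(len(vals))

def psf_flux(stamp, psf, sky):
    """Best-fit PSF flux and a crude error for a stamp with known sky.

    The amplitude A minimizing sum((stamp - sky - A*psf)**2) is
    A = sum(psf * (stamp - sky)) / sum(psf**2); the PSF is assumed to be
    normalized to unit total flux so that A is directly the object flux.
    """
    resid = stamp - sky
    amp = np.sum(psf * resid) / np.sum(psf**2)
    err = np.std(resid - amp * psf) / np.sqrt(np.sum(psf**2))
    return amp, err

# Toy example: a Gaussian "PSF" star of total flux 1000 on a flat sky of 150.
yy, xx = np.mgrid[0:31, 0:31]
psf = np.exp(-((xx - 15)**2 + (yy - 15)**2) / (2 * 2.0**2))
psf /= psf.sum()
stamp = 150.0 + 1000.0 * psf + np.random.normal(0.0, 3.0, psf.shape)
sky, sky_err = sigma_clipped_sky(stamp[psf < 1e-6])   # pixels far from the star
print(psf_flux(stamp, psf, sky))                      # ~ (1000, a few)
```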
We compare our PSF measurements with one of the popular PSF photometry packages, DAOPHOT, which is widely used for accurate stellar photometry and designed to deal with crowded fields. We use DAOPHOT to find objects in one single-epoch image and, at the same time, to measure their PSF magnitudes. Then, our code performs PSF photometry for those objects with the PSF model derived by PSFEx. The magnitude comparison is shown in Figure 17. Two groups of objects are chosen: a brighter one with 16 < u < 18 mag (left panel of Figure 17) and a fainter one with 21 < u < 22 mag (right panel of Figure 17). The scatters of the PSF magnitude differences between our code and DAOPHOT are 0.009 mag for the bright group and 0.021 mag for the faint group. Our PSF photometry is consistent with DAOPHOT, although the PSFs used by the two techniques are derived in different ways.
Model photometry
The SDSS also provides a type of model magnitude, "modelMag," which can measure the unbiased colors of galaxies (in the absence of color gradients) through equivalent apertures in different bands. The "modelMag" is generated by choosing a shape model, either a deVaucouleurs or an exponential profile, based on its best-fit likelihood in the SDSS r band, convolving it with the seeing in the other bands, and finally forcing magnitude measurements with the same aperture shape.
We divide SDSS objects into two groups, point sources and extended sources, based on the SDSS classification. For extended sources, we construct theoretical 2D models with the effective radii, axis ratios, and position angles from the SDSS measurements. These models are convolved with the local PSF profile derived by PSFEx. For point sources, we use the local PSF from PSFEx directly as their models. In addition, extended sources with small sizes or low brightnesses (effective radii less than 0.5 arcsec or SDSS r-band magnitudes fainter than 23.5 mag) are also treated as point sources.
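A minimal sketch of how such a forced model flux can be obtained for an extended source: build an exponential-profile image from the r-band shape parameters, convolve it with the local u-band PSF, and fit its amplitude exactly as in the PSF case above. All shape values and the Gaussian PSF are illustrative; the pipeline uses the PSFEx model, restricts the fit to a small radius, and also handles the deVaucouleurs case:

```python
import numpy as np
from scipy.signal import fftconvolve

def exponential_model(shape, x0, y0, r_eff, axis_ratio, pa_deg, pixscale=0.454):
    """Unit-flux exponential-disk image with half-light radius `r_eff` (arcsec),
    axis ratio b/a, and position angle in degrees."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pa = np.deg2rad(pa_deg)
    dx, dy = (xx - x0) * pixscale, (yy - y0) * pixscale
    xr = dx * np.cos(pa) + dy * np.sin(pa)
    yr = -dx * np.sin(pa) + dy * np.cos(pa)
    r = np.hypot(xr, yr / axis_ratio)
    model = np.exp(-1.678 * r / r_eff)   # 1.678 makes r_eff the half-light radius
    return model / model.sum()

def forced_model_flux(stamp, model, sky):
    """Amplitude of the PSF-convolved model fitted to the sky-subtracted stamp."""
    resid = stamp - sky
    return np.sum(model * resid) / np.sum(model**2)

# Toy example: a galaxy with r_eff = 2", b/a = 0.6, PA = 30 deg, total flux 5000.
yy, xx = np.mgrid[0:63, 0:63]
psf = np.exp(-((xx - 31)**2 + (yy - 31)**2) / (2 * 2.0**2)); psf /= psf.sum()
gal = exponential_model((63, 63), 31, 31, r_eff=2.0, axis_ratio=0.6, pa_deg=30.0)
convolved = fftconvolve(gal, psf, mode="same"); convolved /= convolved.sum()
stamp = 150.0 + 5000.0 * convolved + np.random.normal(0.0, 3.0, convolved.shape)
print(forced_model_flux(stamp, convolved, sky=150.0))   # ~5000
```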
After the models are constructed, model amplitudes are calculated as the ratios between the raw SCUSS fluxes and the models. The pixels within r = 1.0 × FWHM are considered in the calculation, which is optimized for faint SDSS extended sources. From these amplitudes, we then compute the corresponding deVaucouleurs and exponential magnitudes. Therefore, the "modelMag" in the SCUSS u band can be estimated from the SDSS r-band deVaucouleurs and exponential profiles. The SCUSS model magnitude is aperture-corrected to make the magnitudes of bright point sources (16 < u < 20) equal to the SDSS u-band model magnitudes.

Fig. 16. — Flow chart of our PSF photometry pipeline. First, the PSF model is extracted by combining SExtractor and PSFEx. Second, the coordinate of each object is fitted by a Gaussian function. Third, the image of each object, with the sky background subtracted, is fitted by the local PSF profile. Finally, the photometric results, together with the fitted positions and flags, are given as outputs.
Co-added photometry
The stacked images are composed of single-epoch images taken under different observing conditions. The seeing varies between single-epoch images, which makes the PSF profile of a stacked image quite complicated. Consequently, the fraction of flux lost outside a finite aperture may differ for objects in different parts of the stacked images. In addition, the PSF profile cannot be perfectly determined unless the seeing of the related single-epoch images is similar enough. On the other hand, the PSF profile of a single image varies smoothly and is much easier to model. We can therefore first perform photometry for an object in each exposure and then co-add its fluxes to generate co-added magnitudes at the catalog level.
The aperture photometry for single-epoch images is obtained by DAOPHOT with 12 apertures as defined before. Aperture corrections are applied to the resulting aperture magnitudes. Corresponding corrected fluxes are weighted by the errors and averaged to generate the co-added aperture magnitudes.
The PSF photometry for single-epoch images is performed using our PSF fitting code. The PSF magnitudes are also aperture-corrected by comparison with the aperture magnitudes within a 7″.3 radius. The co-added PSF magnitudes are calculated by averaging the fluxes weighted by their errors. We also perform the model photometry for single-epoch images and measure the exponential and deVaucouleurs magnitudes, adopting the model magnitude with the higher SDSS r-band likelihood. Aperture corrections are applied to make the model magnitudes equal to the SDSS model magnitudes for unresolved objects. The co-added model magnitude is calculated by averaging the model fluxes weighted by their errors.
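A minimal sketch of combining single-epoch measurements into a co-added magnitude at the catalog level. The text says the fluxes are "weighted by their errors"; this sketch reads that as a standard inverse-variance weighted mean in flux space, which is an assumption:

```python
import numpy as np

def coadded_magnitude(mags, mag_errs, zeropoint=29.0):
    """Error-weighted mean of single-epoch magnitudes, combined in flux space.

    Magnitudes are converted to fluxes with a common zeropoint, averaged with
    inverse-variance weights, and converted back to a magnitude.
    """
    mags, mag_errs = np.asarray(mags, float), np.asarray(mag_errs, float)
    flux = 10.0 ** (-0.4 * (mags - zeropoint))
    flux_err = flux * mag_errs * np.log(10.0) / 2.5
    w = 1.0 / flux_err**2
    mean_flux = np.sum(w * flux) / np.sum(w)
    mean_flux_err = 1.0 / np.sqrt(np.sum(w))
    mag = zeropoint - 2.5 * np.log10(mean_flux)
    mag_err = 2.5 / np.log(10.0) * mean_flux_err / mean_flux
    return mag, mag_err

# Two epochs of the same star, one shallower than the other.
print(coadded_magnitude([21.03, 20.97], [0.15, 0.10]))
```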
Some photometric comparisons
The photometric methods described above are applied to stacked and single-epoch images. Figure 18 illustrates the general photometric scheme. The detections are based on both SDSS and SCUSS images. We perform aperture, PSF, model, and automatic photometry on the stacked images. Note that the automatic magnitudes are only available for objects detected by SExtractor itself. We also obtain co-added aperture, PSF, and model magnitudes from the photometry of single-epoch images. Since point sources can be modeled best while extended sources are much more complicated, the photometry of point-like objects is better suited for comparisons of our different photometric methods. Thus, the comparisons below are mostly for point sources and, for comparison with the SDSS, we correct the magnitudes with the SCUSS/SDSS color term.
Photometry of stacked images
We compare the different SCUSS photometric measurements on stacked images with the SDSS PSF or model magnitudes of point sources in Figure 19. The SCUSS model magnitude is compared with the SDSS u-band model magnitude, while the other magnitudes are compared with the SDSS PSF magnitude. We choose the 5 pixel (2″.27) radius aperture to represent the aperture magnitude in this figure. For the smallest apertures, the aperture magnitudes might be problematic due to the inhomogeneous image quality in some stacked images. Table 5 gives the magnitude offset and scatter in two magnitude intervals: 16 < u < 19 mag and 20.5 < u < 21.5 mag (i.e., ∼21 mag). The automatic magnitude performs best among all the photometric magnitudes for stacked images, with a scatter of 0.033 for bright stars and 0.178 for faint stars at u ∼ 21.0 mag.
Co-added photometry for single images
The comparisons of the co-added PSF, aperture, and model magnitudes with the SDSS PSF or model magnitudes are shown in the right panels of Figure 19 and the last three rows of Table 5. The co-added PSF magnitude performs the best for point sources among all the magnitude types. The scatter is 0.033 for brighter sources and 0.174 for fainter sources at u ∼ 21.0 mag. The co-added aperture magnitude is also adequate if a proper aperture radius is considered. The aperture size should be chosen according to the object type, object brightness, signal-to-noise requirement, etc.
Comparisons with the CFHTLS deep u band
The Canada-France-Hawaii Telescope Legacy Survey (CFHTLS; Astier et al. 2006) used the wide-field optical imaging camera on CFHT to obtain deep multicolor photometry over wide areas over 5 years. The photometric system is similar to that of the SDSS, except for the u filter. We use the CFHTLS W4 field, as it is fully covered by the SCUSS and located at a higher declination, where the SCUSS data quality is more typical than it is in the W1 field; we make the photometric comparisons using the W4 catalogs. Figure 20 shows the photometric comparisons of point sources between the SCUSS and SDSS with the CFHTLS wide data as a reference. The left panels in this figure present the magnitude difference between the SDSS (blue points) or SCUSS u-band magnitude (red points) and the CFHTLS automatic magnitude as a function of the CFHTLS u magnitude. The SDSS magnitude is the PSF magnitude and the SCUSS magnitudes are, from top to bottom, the automatic, co-added PSF, and co-added aperture (2″.27) magnitudes. These three types of SCUSS magnitudes are considered to be the best flux measurements for point sources. The number of objects for the automatic magnitude is smaller than for the other two magnitude types (mainly at the faint end) because the source detection is done by SExtractor itself. Both the SCUSS and SDSS u-band magnitudes are converted to the CFHTLS photometric system using the color term indicated on the CFHTLS webpage (http://cfht.hawaii.edu/Instruments/Imaging/MegaPrime/generalinformation.html): u_CFHT = u_SDSS/SCUSS − 0.241 (u_SDSS/SCUSS − g_SDSS). The SDSS scatter for brighter sources is similar to the SCUSS scatter, while it is much larger than the SCUSS scatter at the faint magnitude end. The histograms in the right panels of Figure 20 show the magnitude difference distributions for 22 < u_CFHT < 23. The scatters for the SCUSS and SDSS are also given in these panels. The photometric accuracy of the SCUSS is much better than that of the SDSS for fainter sources due to the deeper imaging.
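A minimal helper applying the MegaCam color term quoted above to put SCUSS or SDSS u-band magnitudes onto the CFHTLS system before comparison (function name is illustrative):

```python
def to_cfht_u(u_mag, g_sdss):
    """u_CFHT = u - 0.241 * (u - g_SDSS), as quoted from the CFHTLS webpage."""
    return u_mag - 0.241 * (u_mag - g_sdss)

print(to_cfht_u(21.5, 20.3))   # 21.21 for a star with u - g = 1.2
```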
Photometry for extended sources
Extended sources are much more complicated, especially in ultraviolet bands, since their morphologies look much more fragmented than in redder bands. By comparing the different magnitude measurements on stacked images with the SDSS model magnitude, we find scatters at u ∼ 21 of about 0.2, 0.31, 0.4, and 0.23 mag for the automatic, aperture (2″.27), PSF, and model magnitudes, respectively. The co-added photometric magnitudes give similar results. The automatic magnitudes seem to be the best for extended sources, but they only describe the brightnesses of SCUSS-detected objects. The model and aperture magnitudes are also adequate. The PSF photometry fails to measure the magnitudes of galaxies.
Guidelines to use magnitudes
The choice of photometric technique depends on the science one wants to do; here we present some general guidelines. More than 90% of the stacked images are assembled from single-epoch images of consistent image quality, and the photometry on these images is as good as the corresponding co-added photometry. However, because the background noise in single-epoch images is larger than that in stacked images, there are about 23% more objects with available magnitude measurements in the stacked images than with available co-added magnitudes.
The automatic magnitude performs as well as the PSF and model magnitudes because it adaptively fits elliptical apertures to both point and extended sources with similar flux loss; it can be regarded as a universal magnitude for both point-like and extended sources. However, unlike the other photometric methods, the automatic photometry is based on the objects detected on SCUSS images, which are about 35% of the SDSS objects with available SCUSS u-band fixed-parameter measurements. For point sources like quasars and stars, the PSF and aperture magnitudes with appropriate aperture sizes are recommended; the aperture size needs to be determined based on the scientific objectives. For nearby galaxies with extended morphological structures, the automatic magnitude and aperture magnitude are good choices. The SCUSS model magnitude is defined in the same way as the SDSS "modelMag"; when combining with magnitudes in other SDSS bands to measure the colors of extended sources, it is better to use the model magnitude.

Fig. 20. — Photometric comparisons of point sources between the SCUSS (in red) and SDSS (in blue) with the CFHTLS W4 data as a reference. The SDSS magnitude is the PSF magnitude and the CFHTLS magnitude is the automatic magnitude. The left panels are the magnitude differences between the SCUSS or SDSS and the CFHTLS as a function of the CFHTLS u-band magnitude, and the right ones are the normalized distributions of the differences at the faint magnitude end (the hatched area: 22 < u < 23 mag). From top to bottom, the comparisons are for three types of SCUSS magnitudes: the automatic, co-added PSF, and co-added aperture magnitudes, respectively. Only about 20,000 randomly selected objects are plotted in order to avoid crowding. The colored text in the right panels gives the magnitude scatter of the SCUSS and SDSS, corresponding to the 68.3% confidence level.
Summary
The SCUSS survey is a wide-field u-band sky survey in the Southern Galactic Cap. The survey used the Bok telescope on Kitt Peak, and the filter is close to the SDSS u band. The survey observations were completed by the end of 2013, and the total area is about 5000 deg^2. This paper describes the detailed data reduction dedicated to the survey, including the basic image processing, which has some special features related to the detectors, the astrometric and photometric calibrations, and the photometry. The general astrometric error is about 0″.13. The SCUSS photometric calibration is tied to the SDSS catalogs and is performed for each amplifier of each CCD because of gain and weather variations.
We apply different photometric techniques to the stacked images, including automatic photometry by SExtractor, aperture photometry by DAOPHOT, PSF photometry, and model photometry. Our PSF photometry, with more controllable parameters, is consistent with DAOPHOT. The model photometry is similar to the SDSS "modelMag": it uses the SDSS r-band model-derived shape parameters and the SCUSS PSF profiles to make consistent and unbiased model magnitude measurements. We perform photometry on stacked images and also on single-epoch images, from which co-added photometry is derived. More than 90% of the stacked images are assembled from single-epoch images with consistent quality. There are about 23% more objects with available magnitudes on stacked images than with available co-added magnitudes. The photometry on these stacked images is as good as the co-added photometry. However, for the rest of the stacked images, the photometry is worse than the co-added photometry due to uneven image quality.
We thank the referee for his/her thoughtful comments and insightful suggestions that greatly improved our paper. This work is supported by the National Natural Science Foundation of China (NSFC, Nos. 11203031, 11433005, 11073032, 11373035, 11203034, 11303038, 11303043) and by the National Basic Research Program of China (973 Program, Nos. 2014CB845704, 2013CB834902, and 2014CB845702). Z.Y.W. was supported by the Chinese National Natural Science Foundation grant No. 11373033. This work was also supported by the joint fund of Astronomy of the National Nature Science Foundation of China and the Chinese Academy of Science, under Grant U1231113.
The SCUSS is funded by the Main Direction Program of Knowledge Innovation of Chinese Academy of Sciences (No. KJCX2-EW-T06). It is also an international cooperative project between National Astronomical Observatories, Chinese Academy of Sciences, and Steward Observatory, University of Arizona, USA. Technical support and observational assistance from the Bok telescope are provided by Steward Observatory. The project is managed by the National Astronomical Observatory of China and Shanghai Astronomical Observatory. Data resources are supported by Chinese Astronomical Data Center (CAsDC).
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
"year": 2015,
"sha1": "38982ca84a327009bcb635cdfd09210ea8b8e2a6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1509.02647",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dfadfbe9636e466edce537c9158a8766738f98b5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
Effect of Rivaroxaban Versus Warfarin on Health Care Costs Among Nonvalvular Atrial Fibrillation Patients: Observations from Rivaroxaban Users and Matched Warfarin Users
Introduction New target-specific oral anticoagulants may have benefits, such as shorter hospital length of stay, compared to warfarin in patients with nonvalvular atrial fibrillation (NVAF). This study aimed to assess, among patients with NVAF, the effect of rivaroxaban versus warfarin on health care costs in a cohort of rivaroxaban users and matched warfarin users. Methods Health care claims from the Humana database from 5/2011 to 12/2012 were analyzed. Adult patients newly initiated on rivaroxaban or warfarin with ≥2 atrial fibrillation (AF) diagnoses (The International Classification of Diseases, Ninth Revision, Clinical Modification: 427.31) and without valvular AF were identified. Based on propensity score methods, warfarin patients were matched 1:1 to rivaroxaban patients. Patients were observed up to end of data, end of insurance coverage, death, a switch to another anticoagulant, or treatment nonpersistence. Health care costs [hospitalization, emergency room (ER), outpatient, and pharmacy costs] were evaluated using Lin’s method. Results Matches were found for all rivaroxaban patients, and characteristics of the matched groups (n = 2253 per group) were well balanced. Estimated mean all-cause and AF-related hospitalization costs were significantly lower for rivaroxaban versus warfarin patients (all-cause: $5411 vs. $7427, P = 0.047; AF-related: $2872 vs. $4147, P = 0.020). Corresponding estimated mean all-cause outpatient visit costs were also significantly lower, but estimated mean pharmacy costs were significantly higher for rivaroxaban patients ($5316 vs. $2620, P < 0.001). Although estimated mean costs of ER visits were higher for rivaroxaban users compared to those of warfarin users, differences were not statistically significant. Including anticoagulant costs, mean overall total all-cause costs were comparable for rivaroxaban versus warfarin users due to cost offset from a reduction in the number and length of hospitalizations and number of outpatient visits ($17,590 vs. $18,676, P = 0.542). Conclusion Despite higher anticoagulant cost, mean overall total all-cause and AF-related cost remains comparable for patients with NVAF treated with rivaroxaban versus warfarin due to the cost offset from reduced health care resource utilization. Electronic supplementary material The online version of this article (doi:10.1007/s12325-015-0189-1) contains supplementary material, which is available to authorized users.
INTRODUCTION
Atrial fibrillation (AF) is the most common heart rhythm disturbance, with a prevalence estimated between 2.7 and 6.1 million cases in the United States [1]. Compared to non-AF patients, AF patients have been found to be at a near five-fold higher risk of stroke and at an eight-fold higher risk of having multiple cardiovascular hospitalizations [2,3]. The associated health care costs of patients with AF are high. The incremental cost burden of AF patients versus non-AF patients was estimated at $26 billion in the United States in 2010, with more than 50% of this amount being hospitalization costs [3,4]. Moreover, the AF-related hospitalization rate increased by 23% among US adults from 2000 to 2010 [5].
Chronic anticoagulation has been the standard of care for patients with chronic nonvalvular atrial fibrillation (NVAF) in the previous decades and, until recently, warfarin and other vitamin K antagonists were the only available options [6,7]. Recently, the target-specific oral anticoagulants rivaroxaban, dabigatran, and apixaban have been approved by the US Food and Drug Administration (FDA) for the treatment of NVAF [8][9][10]. These new agents have predictable pharmacokinetic properties and minimal food-drug interactions, and they do not require frequent monitoring as compared to warfarin [11][12][13][14]. Recent studies have compared these new agents with warfarin and found that target-specific oral anticoagulants were a cost-effective option [15][16][17].
AF is a significant driver of hospitalizations [18] and a considerable burden for the health care system. Since the use of new target-specific oral anticoagulants may result in potential economic benefits, the aim of the present study was to compare health care costs between NVAF patients using rivaroxaban and a matched sample of patients using warfarin.
Data Source
The analysis was conducted using health care claims from the Humana database [23][24][25]. In each of the phase III trials, a total of 50-62% of patients had used warfarin before enrollment and randomization.
The observation period spanned from the date of the first dispensing (i.e., the first filled pharmacy prescription) of rivaroxaban or warfarin, defined as the index date, to the earliest among the end of data availability, end of insurance coverage, death, a switch to another anticoagulant, or 14 days after treatment nonpersistence (i.e., 14 days after the end of the days of supply of the first dispensing for which the next dispensing of the index medication, if any, was more than 60 days later). The nonpersistence criterion increased the certainty that health care costs were evaluated during exposure to the medications of interest.
Study Endpoints
The primary endpoint of this study was all-cause health care costs, which included hospitalizations, ER visits, outpatient visits, and pharmacy costs. Health care costs were calculated as the sum of the following elements: amount paid by insurance, copay amount, coinsurance amount, deductible amount, and secondary insurance amount. AF-related costs were also evaluated. Costs for AF-related hospitalizations, ER visits, and outpatient visits were defined as costs associated with claims that had a primary or secondary diagnosis for AF. AF-related pharmacy costs were the costs of anticoagulant or antiplatelet agents that were dispensed.
Statistical Analysis
Propensity score matching was performed to adjust for confounding bias. Patients in the warfarin group were matched 1:1 to patients in the rivaroxaban group based on random selection among propensity score calipers of 5%. Propensity scores were calculated using a multivariate logistic regression model that incorporated the following baseline characteristics: age, gender, type of insurance, comorbidity index scores (i.e., Quan-Charlson Comorbidity Index, CHADS2 score, CHA2DS2-VASc score, ATRIA score, and HAS-BLED score), baseline resource utilization, baseline costs, the month of the index date, and specific comorbidities (≥5%; Table 1).
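To make the matching step concrete, the sketch below illustrates 1:1 propensity score matching with a caliper in Python. The column names, covariate list, and the 0.05 caliper on the propensity score scale are illustrative assumptions, not the study's actual data or specification.

```python
# Minimal sketch of 1:1 propensity score matching with a caliper, assuming a
# pandas DataFrame with a 0/1 treatment flag and baseline covariate columns;
# all names and the 0.05 caliper are illustrative, not the study's data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df, treat_col, covariates, caliper=0.05, seed=42):
    """Match each treated patient to one untreated patient whose propensity
    score lies within `caliper`, choosing randomly among eligible controls."""
    X, y = df[covariates].values, df[treat_col].values
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)

    rng = np.random.default_rng(seed)
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0]
    used, pairs = set(), []
    for idx, row in treated.iterrows():
        eligible = controls[(controls["ps"] - row["ps"]).abs().le(caliper)
                            & ~controls.index.isin(used)]
        if eligible.empty:
            continue                      # treated patient left unmatched
        pick = eligible.index[rng.integers(len(eligible))]
        used.add(pick)
        pairs.append((idx, pick))
    return pd.DataFrame(pairs, columns=["treated_idx", "control_idx"])
```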
Patients' baseline characteristics evaluated during the 6 months prior to the index date were summarized using means [±standard deviation (SD)] for continuous variables, and frequencies and percentages for categorical variables. Baseline characteristics were compared between cohorts using standardized differences. Baseline characteristics with standardized differences of less than 10% were considered well balanced [26][27][28].
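As a companion, a minimal sketch of the standardized difference used as the balance criterion is shown below; it assumes NumPy arrays holding one covariate per matched cohort and uses the common pooled-standard-deviation formula.

```python
# Sketch of the standardized difference used to assess balance after matching,
# assuming NumPy arrays of a covariate for the two matched cohorts.
import numpy as np

def standardized_difference(x_treated, x_control):
    """Return the absolute standardized difference (in %) for a continuous or
    binary covariate: |mean1 - mean0| / sqrt((s1^2 + s0^2) / 2) * 100."""
    m1, m0 = np.mean(x_treated), np.mean(x_control)
    v1, v0 = np.var(x_treated, ddof=1), np.var(x_control, ddof=1)
    pooled_sd = np.sqrt((v1 + v0) / 2.0)
    return 100.0 * abs(m1 - m0) / pooled_sd

# A value below 10% is taken to indicate adequate balance, as in the study.
```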
Health care costs (i.e., hospitalizations, ER visits, outpatient visits, and pharmacy costs) between rivaroxaban and warfarin users were reported and compared using Lin's method to account for death and the censored observation periods of patients [29].
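The sketch below is a rough, partitioned, survival-weighted cost estimator in the spirit of Lin's method; the interval construction, the simplified Kaplan-Meier step, and the array layout are assumptions for illustration and may differ from the exact variant used in the study.

```python
# Hedged sketch of a partitioned, survival-weighted mean-cost estimator in the
# spirit of Lin's method for censored cost data; a simplification, not the
# study's implementation.
import numpy as np

def km_survival(times, events, t):
    """Simplified Kaplan-Meier estimate of P(T > t); ties handled one at a time."""
    order = np.argsort(times)
    times, events = np.asarray(times, float)[order], np.asarray(events)[order]
    surv, at_risk = 1.0, len(times)
    for ti, ev in zip(times, events):
        if ti > t:
            break
        if ev == 1:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return surv

def lin_mean_cost(follow_up, died, interval_costs, interval_starts):
    """interval_costs[i, k] holds patient i's cost in interval k (NaN if the
    patient was no longer observed); returns sum_k S(start_k) * mean cost in
    interval k among patients still under observation at start_k."""
    follow_up = np.asarray(follow_up, dtype=float)
    total = 0.0
    for k, start in enumerate(interval_starts):
        observed = follow_up > start
        if not observed.any():
            break
        total += km_survival(follow_up, died, start) * np.nanmean(interval_costs[observed, k])
    return total
```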
Patient Characteristics
A total of 2253 rivaroxaban and 10,796 warfarin users were identified (Fig. 1). All rivaroxaban users were propensity matched with the same number of warfarin users to form the study cohorts. Overall, baseline characteristics were well balanced (i.e., standardized difference below 10%) between rivaroxaban and warfarin users. The baseline characteristics of the matched cohorts are summarized in Table 1 (table notes: p = (P_warfarin + P_rivaroxaban)/2; evaluated during the 6-month baseline period). Rivaroxaban was associated with a significant reduction in all-cause and AF-related estimated costs of hospitalization compared to warfarin (27% and 31%, respectively).
Significant differences between costs incurred by rivaroxaban and warfarin users were also found for estimated all-cause and AF-related outpatient visits (25% and 37%, respectively). Estimated pharmacy costs were significantly lower for warfarin users compared to rivaroxaban users (51% lower costs for all-cause pharmacy costs and 95% for AF-related pharmacy costs).
Patients in the current study treated with rivaroxaban who had previous use of warfarin were classified in the rivaroxaban cohort. Since the results of the ROCKET AF trial suggested that rivaroxaban users who were naïve to warfarin experienced better primary efficacy and safety endpoints relative to warfarin-exposed patients [24], including warfarin-experienced patients in the rivaroxaban cohort likely produced more conservative estimates of differences between groups in the current study.
The proportion of rivaroxaban patients with prior use of warfarin in the current study (23%) was lower than the proportion reported in the ROCKET AF trial, where 62% of rivaroxaban patients had previous use of vitamin K antagonists [24]. Since the current study was conducted with real-world data, it may be more representative of the real rivaroxaban patient population than a clinical trial with stricter inclusion criteria. | 2016-05-12T22:15:10.714Z | 2015-03-18T00:00:00.000 | {
"year": 2015,
"sha1": "6e6d0e7afba5d5ff3176700af0a0495451de2259",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12325-015-0189-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e6d0e7afba5d5ff3176700af0a0495451de2259",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
158490916 | pes2o/s2orc | v3-fos-license | Defining U.S. consumers’ (mis)perceptions of pollinator friendly labels: an exploratory study
The decline of pollinator insect populations is an important global concern due to potential negative environmental and economic consequences. However, research on consumer perceptions of pollinator friendly traits is limited. Understanding consumer perceptions is important because they impact behavior and product selection. In turn, this affects the effectiveness of relevant policies and pollinator insects’ access to beneficial plants. This manuscript quantifies consumers’ perceptions of plant traits that aid pollinators. U.S. consumers (n=1,243) were surveyed to identify their perceptions of pollinator friendly traits. Binary logit models and marginal effects were estimated using 22 plant traits and consumers’ purchasing interest, existing knowledge, and demographic variables. Results imply that consumers interested in purchasing pollinator friendly plants selected positive traits regardless of accuracy. Furthermore, consumers selected traits that aligned with their knowledge. Older participants had more accurate perceptions of pollinator friendly traits. Results highlight the challenges facing regulatory efforts geared towards promoting pollinator friendly products/practices.
Introduction
Recently, pollinator insects have become an important environmental topic due to decreasing populations and their global significance (Hanley et al., 2015; Klein et al., 2007; Wratten et al., 2012). Estimates indicate 70% of the world's food crops rely on insect pollination (Klein et al., 2007), worth a total value of €153 billion (~$194.7 billion; Gallai et al., 2009). Additionally, pollinators contribute to biodiversity, wildlife food availability, and prevention of soil erosion and water runoff (Hanley et al., 2015; Wratten et al., 2012). Thus, declining pollinator populations have the potential to harm global markets, food availability, and the environment. Contrary to recent trends, in 2017, the U.S. Department of Agriculture's National Agricultural Statistics Service reported that honeybee populations were increasing; however, there are many questions that have yet to be addressed (National Agricultural Statistics Service, 2017). To date, pollinator-related research has focused on causes of declining populations (Fairbrother et al., 2014) and overall economic and production impacts (Figueiredo Jr et al., 2016; Gallai et al., 2009; Klein et al., 2007), but relatively few studies address consumer perceptions of 'pollinator friendly' products (Rihn and Khachatryan, 2016; Wollaeger et al., 2015).
Consumer perceptions are important because they influence behavior, purchasing intentions (Costanigro et al., 2015; Stranieri and Banterle, 2015), and (in this case) pollinator insects' access to habitat and nutrient sources (Fairbrother et al., 2014; McIntyre and Hostetler, 2001). Evidence suggests consumers are confused about pollinator-related claims, which can influence behavior (Wollaeger et al., 2015). For example, consumer perceptions and their intrinsic definitions influence their purchasing choices for eco-friendly foods (Campbell et al., 2015; Stranieri and Banterle, 2015). This may be problematic since consumer perceptions may not align with the actual product characteristics, which can impact marketing efforts, labeling strategies, promotional message clarity, and policy effectiveness (Campbell et al., 2015; Stranieri and Banterle, 2015). This issue is amplified by 'pollinator friendly' being a credence attribute, which is not searchable unless in-store promotions (e.g. labels) are used. But, with the wide variety of pollinator-related labels (Rihn and Khachatryan, 2016), how do consumers perceive and define 'pollinator friendly' plants? We do not know.
The present study's objective is to better understand consumers' definitions of 'pollinator friendly' products by investigating the relationship between consumer factors (i.e. purchase interest, knowledge, demographics) and perceptions of pollinator friendly product attributes. Section 2 provides a brief review of relevant literature summarizing pollinator friendly product attributes, policy implications, and the existing pollinator-related consumer behavior research. Section 3 outlines the research methodology while Section 4 presents the results. Lastly, Section 5 provides a brief discussion and concluding remarks.
Background: definitions, policy, and consumer behavior research
Several definitions of practices that aid pollinators are available; however, very few definitions exist that clearly identify product characteristics that aid pollinators. The U.S. Forest Service (2015) and Xerces Society (2015) indicate that providing habitat and/or nutrients to pollinators constitutes 'pollinator friendly' products. Several studies have identified product-specific (plant) traits related to aesthetics (Kendal et al., 2012), production practices (Gabriel and Tscharntke, 2007;Kiester et al.,1984), and physiological characteristics (Kiester et al., 1984), including: integrated pest management (IPM) strategies, organic production, natural production, environmentally friendly production, native origins, fragrant flowers, reduced/no pesticide use, and (often) the production of fruit, nectar, flowers, and/or pollen. Thus, a 'pollinator friendly' label can imply many different traits which may result in consumer confusion and reduce the label's effectiveness.
Policy implications associated with defining and labeling pollinator friendly products are related to mandatory labeling or restrictions on use. Currently, a relevant debate is the mandatory labeling of neonicotinoid (neonic) pesticides. Neonic pesticides are systemic pesticides used to protect crops from insect predation. The systemic nature of the pesticide means it is present within the entire plant, including parts utilized by pollinators (pollen, nectar). This means neonics may affect pollinator insects' health and behavior (Blacquiére et al., 2012). Currently, the UK government and several U.S. retailers (e.g. Home Depot) have restricted the use of neonic pesticides (Environmental Protection Agency, 2013). However, existing scientific research on the risks of neonic pesticides to pollinators is inconclusive (Barbosa et al., 2015; Blacquiére et al., 2012; Fairbrother et al., 2014; Hanley et al., 2015). For instance, Pilling et al. (2013) studied the effect of neonics in pollen over 4 years and found no differences in health between neonic-treated and control hives. Blacquiére et al. (2012) determined that the lethal and sublethal effects of neonics on pollinator insects only occurred in lab experiments but not in field experiments. Another study (Fairbrother et al., 2014) reported that Varroa mites and disease are the primary cause of worldwide bee loss. This finding is supported by the USDA's report on honeybee health (National Agricultural Statistics Service, 2017). Regarding consumer behavior, research shows that not many consumers are aware of neonic pesticides and many are confused about what 'neonic-free' labeling means (Rihn and Khachatryan, 2016; Wollaeger et al., 2015). This is problematic since, in order for a policy to be effective, consumers must understand the key message being communicated to them (Brécard, 2014). Without a clear understanding of consumers' perceptions of products that aid pollinators, the marketing potential and policy effectiveness of pollinator-related labels are limited.
The effectiveness of pollinator-related labels is especially important because evidence suggests consumers are interested in pollinator-benefiting policies and products. In 2008, UK consumers were willing to pay £1.77 billion/year (~$3.52 billion) to support bee protection policies (Mwebaze et al., 2010). Breeze et al. (2015) determined UK tax payers were willing to pay £13.4 per year (~$21.61/year) to conserve wildflowers for pollinators. In 2012, U.S. consumers were willing to pay $4.78-6.64 billion to purchase beneficial plants or donate to butterfly conservation programs (Diffendorfer et al., 2014). While these studies emphasize broad consumer awareness of the importance of conserving pollinators, consumer perception studies are needed to understand the motives behind this behavior. Currently, there are two relevant consumer perception studies. Wollaeger et al. (2015) demonstrate consumers are more likely to purchase plants produced using 'bee friendly' production methods when compared to traditional insect management practices. Consumers' purchasing frequency positively affected their awareness and knowledge of 'bee friendly' production methods. Similarly, Rihn and Khachatryan (2016) found consumer knowledge affects purchasing behavior and that broad pollinator labels (e.g. 'pollinator friendly') are preferred to species-specific labels (e.g. 'bee friendly'). However, neither of the studies delved into consumers' underlying perceptions and their accuracy. In this study we address this gap.
Survey design
An online survey was used to assess consumer perceptions of 'pollinator friendly' traits. In the survey, participants indicated from a pre-determined list which traits they considered to be beneficial to pollinator insects. Ornamental plants (in general) were selected as the product because they are key nutrient and habitat sources for pollinator insects (U.S. Forest Service, 2015; Xerces Society, 2015). In order to capture participants' overall perceptions of 'pollinator friendly' traits, specific ornamental plant examples were not included. The 22 listed traits were developed from consultations with green industry professionals and existing literature. The list also included an 'other, please list' option to ensure all potential traits were covered. Product traits were randomized to eliminate any order effect and participants were asked to 'select all that apply.' Likert scales were used to measure participants' purchase interest for products that aid pollinators (1=not at all interested; 7=very interested) and knowledge of pollinator-related topics (1=not at all knowledgeable; 7=very knowledgeable; similar to Campbell et al. (2013) and Wollaeger et al. (2015)). Lastly, participants completed a standard set of socio-demographic questions.
Sample summary
A sample of 1,243 U.S. participants was collected during January 2015 using an online survey conducted by Qualtrics, LLC. Participants were recruited from Qualtrics' online panel. Online surveys have previously been used to collect data from a wide variety of participants in consumer perception studies (Campbell et al., 2014, 2015; Wollaeger et al., 2015). The average age of participants was 52 years old (Table 1). Males comprised 42% of the sample. Most participants (54%) had less than a 4-year college degree. Participants' 2014 household income was in the $51,000-60,000 range and the average household size was 2.6 people. Of the sample, 86% classified themselves as Caucasian/white. U.S. population statistics are provided for comparison purposes (U.S. Census Bureau, 2014). Overall, the sample over-represented older consumers, females, higher education levels, higher income households, and Caucasian/white consumers. Some of these results may be attributed to the study product (plants), for which older women are the core consumers (Mason et al., 2008).
Econometric model
The empirical model focused on the following themes: (1) understanding consumers' perceptions of traits that aid pollinators; (2) how their interest in purchasing products to aid pollinators affected those perceptions; and (3) how their existing knowledge of pollinators/related topics and their socio-demographics influenced those perceptions. Following Campbell et al. (2013), a set of binary logit models and marginal effect estimates were used to determine the impact of the explanatory variables (i.e. knowledge, purchase interest, and socio-demographic characteristics) on their perceptions of 'pollinator friendly' traits. To accommodate the binary logit model, the traits were coded to equal 1 if selected and 0 if they were not selected. A binary logit model was analyzed for each trait. Specifically, the probability (P_i) of the i-th participant selecting each trait can be represented by P_i = exp(x_i'β) / [1 + exp(x_i'β)], where x_i represents participant i's purchasing likelihood, knowledge, and socio-demographic variables and β indicates the estimated coefficients. Marginal effects were then estimated. The marginal effects indicate 'the percent change given a one-unit increase from the mean' for continuous variables while the dummy explanatory variables specify 'the percent change for a move from the base attribute level to the level of interest' (Campbell et al., 2015). Alternative models were also run to test for heterogeneity, but the results were similar and are available from the corresponding author upon request.
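For illustration, the sketch below fits one such binary logit with statsmodels and reports average marginal effects; the file name, variable names, and covariates are hypothetical placeholders rather than the authors' actual dataset or specification.

```python
# Sketch of one binary logit model with average marginal effects, using
# statsmodels; variable names are illustrative, not the study's.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")          # hypothetical file
y = df["selected_nectar_producing"]               # 1 if the trait was selected
X = sm.add_constant(df[["purchase_interest", "pollinator_knowledge",
                        "age", "male", "income", "education"]])

logit_fit = sm.Logit(y, X).fit(disp=0)
margeff = logit_fit.get_margeff(at="overall")     # average marginal effects
print(margeff.summary())
```

In practice the same fit-and-report step would be looped over all 22 trait indicators, one model per trait.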
Exploratory analysis of perceptions
Participants' perceptions of different 'pollinator friendly' traits varied (Table 2). Most participants selected traits associated with flowers (i.e. pollen producing, flower producing, nectar producing, bright colored flowers, fragrant, and produces fruit) as being beneficial. This is likely due to consumers realizing that flowers are a main source of nutrients for adult pollinator insects (Kiester et al., 1984). However, bright colored flowers were not always beneficial to pollinators since plant breeding efforts emphasizing aesthetic characteristics can reduce nutrient availability (Landry, 2010). The aesthetic results may also reflect that consumers associate bright colors with aiding pollinators since 31.9% selected bright colored foliage. Additionally, 35.9% of participants selected native as a beneficial trait. This is not surprising since native plants have coevolved to aid native pollinators and are often preferred by pollinator insects over exotic plant species (Frankie et al., 2005). Production methods were also frequently selected (including environmentally friendly, pesticide free, grown using natural practices, organic, and grown using IPM strategies). A small percentage (1.9%) of consumers viewed aiding pollinators as a marketing gimmick.
Many of these findings are consistent with previous literature on products that aid pollinators. However, there were some inconsistencies as well. 30% of consumers associated locally grown with aiding pollinators and 22% indicated that a product classified as 'pollinator friendly' meant it was safer for humans (Table 2). To date, neither of these traits has been shown to positively affect pollinators. Increasing consumer interest and demand for local and sustainable products is likely responsible for these misperceptions. Local production is popular due to product acclimation to the local environment and consumers' perceptions of local community benefits (i.e. economy, jobs, etc.) (Campbell et al., 2014;Wehry et al., 2007). Interest in sustainably produced plants (i.e. ones perceived as 'safer for humans') is often due to human and environmental health concerns (Campbell et al., 2014). If consumers perceive 'pollinator friendly' positively, they may project additional positive traits (such as local and safe for humans) onto those products to enhance their benefits and attractiveness. Alternatively, consumers may not be knowledgeable about pollinator friendly products and therefore used their personal preferences and past experiences to shape their perceptions (Campbell et al., 2015;Wollaeger et al., 2015).
These results provide an overview of consumer perceptions of product traits that aid pollinators; however, additional quantitative results need to be considered in order to make inferences from the data. In the next section, the influence of purchase interest, knowledge, and socio-demographic variables on consumer perceptions of products that aid pollinators is discussed using the marginal effect estimates from the binary logit models.
Marginal effects for accurate traits
Marginal effect estimates provide insights on why consumers perceive certain traits as beneficial and not others. For ease of interpretation, accurate traits were divided into production method traits (Table 3) and product traits (Table 4). Consumers who were interested in purchasing products to aid pollinators had an increased probability of correctly identifying beneficial production methods (Table 3). Consumers who were knowledgeable about neonic pesticides were 9.7% more likely to select organic production methods as being 'pollinator friendly'. Additionally, consumers who were knowledgeable about environmental stewardship were 2.8% more likely to indicate pesticide free as a production practice that aids pollinators. Interestingly, being knowledgeable about pollinator friendly features reduced the likelihood of selecting IPM by 2.0%. This may reflect low consumer knowledge about what constitutes IPM strategies. Regarding the influence of socio-demographic variables, older participants were 0.23% more likely to select environmentally friendly as a beneficial trait, while males and consumers with higher incomes were 7.0% and 1.8% less likely to select environmentally friendly. Higher income individuals were 1.3% less likely to select natural practices. More educated respondents were 1.8% and 2.4% more likely to select organic and environmentally friendly production methods. Caucasian/white consumers were 11.4% less likely to indicate organic practices.
Notes to Tables 3 and 4: prior evidence supports the benefit of natives (Frankie et al., 2005), organic systems (Gabriel and Tscharntke, 2007; Morandin and Winston, 2005), environmentally friendly practices, and natural practices (Frankie et al., 2005), and plants have coevolved with pollinator species to attract specific pollinators through fragrance, flower morphology, and nutrient sources (i.e., pollen and nectar) (Kiester et al., 1984). Conversely, pesticides have been shown to negatively influence pollinator health (Fairbrother et al., 2014; Hanley et al., 2015; Pimentel, 2005). Not all fruit producing crops require insect pollination; however, several fruit producing crops rely on insect pollination (Gallai et al., 2009; Klein et al., 2007) and 23% of fruits are highly economically vulnerable to pollinator population loss, so the 'fruit producing' trait is listed as 'varies'. Although flowers are beneficial to pollinator insects (Kiester et al., 1984), bright colored, long-lasting flowers are often bred at the expense of the plant's reproductive organs (including pollen and nectar), which can be detrimental to pollinators (Landry, 2010); therefore the 'bright colored flowers' trait is listed as 'varies' since it can vary between species and cultivars.
Consumers' purchase interest also increased their probability of selecting accurate product traits (Table 4).
Consumers who were knowledgeable about landscapes, gardens, and plants were 4.7% more likely to select flower producing as a beneficial trait. Plant aesthetics were a primary attribute when making purchasing decisions (Kelley et al., 2001; Kendal et al., 2012). As a result, this group of consumers may have an increased interest in aesthetic characteristics. Consumers knowledgeable in environmental stewardship were 2.7% more likely to select pollen producing and 2.5% more likely to select native. Entomology knowledgeable consumers were 4.0% less likely to select fruit producing. Consumers knowledgeable in agriculture were more likely to select pollen (2.0%) and fruit producing (2.7%) traits as beneficial. Older participants were also more likely to select pollen producing (0.4%). Individuals with higher incomes were less likely to select fruit producing (-1.0%). Individuals who had obtained a higher education level were 1.9% more likely to select native as a beneficial trait. Caucasian/white consumers had a higher probability of selecting the nectar (9.1%) and flower producing (13.4%) traits.
Marginal effects for inaccurate traits
Regarding inaccurate traits, consumers who were interested in purchasing pollinator friendly products did not perceive 'pollinator friendly' as a marketing gimmick (Table 5). This is intuitive because if consumers are interested in purchasing products that aid pollinators, they are more likely to actively seek out those products rather than discount the information as a marketing gimmick. Neonic pesticide knowledgeable consumers were 0.8% more likely to inaccurately select genetically modified. Consumers who were knowledgeable about pollinators were 3.0% less likely to inaccurately select safer for humans. Consumers knowledgeable about bee keeping were 1.2% more likely to select expensive. Consumers interested in purchasing pollinator friendly products were more likely to select bright colored foliage (4.3%) and flowers (7.2%) as traits that aid pollinators. Consumers knowledgeable about neonicotinoid pesticides were 9.2% less likely to select 'bright colored flowers'. For socio-demographics, age negatively influenced the probability of selecting genetically modified and marketing gimmick. Males were less likely to select bright colored foliage (-7.7%) and flowers (-8.8%). Caucasian/white consumers were 8.6% less likely to select safer for humans but 8.7% more likely to select bright colored foliage and 9.4% more likely to select bright colored flowers.
Consumers' increased purchase interest improves the probability of inaccurately selecting locally grown by 5.0% (Table 6). Knowledge about pollinator friendly features or agriculture increased consumers' likelihood of selecting greenhouse grown by 2.9 and 1.4%, respectively. Purchase interest negatively impacted the probability of selecting 'none of the above'. Age negatively affected the likelihood of selecting 'pesticides were used'.
Discussion: emerging consumer perception patterns
Cumulatively, when examining consumers' accurate and inaccurate perceptions and how purchase interest, knowledge, and socio-demographics influence these perceptions, several interesting patterns emerge (Tables 3-6). First, increased interest in purchasing products to aid pollinators results in the consumer selecting more positive traits even if they are not accurate (e.g. locally grown). A potential explanation for this result is that if consumers perceive pollinator beneficial products positively (as indicated by increased purchase interest), they associate them with other positive traits (much like the 'halo effect' discussed by Wu and Petroshuis (1987)). Thus they are more likely to have positive opinions regardless of accuracy, which subsequently influences their product choices.
There are advantages and disadvantages to this phenomenon. Advantages include the opportunity to promote products that aid pollinators, which increases product availability and can be leveraged to generate consumer interest in those products. In turn, this may lead to increased profits and greater abundance of pollinator friendly products in the environment, which may have substantial long-term impacts on pollinator insect populations (Frankie et al., 2005; Hanley et al., 2015). However, if consumers obtain greater satisfaction from bright colored foliage and flowers (depending on species/cultivar) than from pollinator friendly traits, the non-beneficial traits may outweigh the beneficial traits. This may be problematic since plant aesthetics are a primary purchase driver but do not always benefit pollinators (Kelley et al., 2001; Kendal et al., 2012; Landry, 2010). Pollinator-related labels may be able to overcome this issue; however, to what extent is unknown and outside the scope of this study.
Consumers' existing knowledge also influences their perceptions of what constitutes a product that aids pollinators. Results imply that existing knowledge and interests strongly affect consumer perceptions which, in turn, influence their choices (Wollaeger et al., 2015). For instance, consumers knowledgeable in landscaping, gardens, and plants select flower producing (an important aesthetic trait). Environmental stewardship knowledgeable consumers primarily select environment friendly attributes (pesticide free, pollen producing). Similarly, neonic pesticide knowledgeable consumers avoid selecting pesticide-containing options (as reflected through the selection of organic), which is consistent with Wollaeger et al. (2015). These patterns provide insights into how consumers' existing knowledge influences their perceptions, which can be used to increase awareness of traits that positively affect pollinator health.
Regarding socio-demographic variables, age appeared to have the most impact with older participants having a more accurate perception of traits that aid pollinators. This is not surprising considering older consumers are the core consumers of plants (Mason et al., 2008), meaning they are likely more familiar with the products and their impact on pollinators. Education also appeared to increase the accuracy of participants' selection of traits that benefit pollinators.
In conclusion, research has shown consumers are interested in pollinator conservation measures but, to date, very few studies investigate consumer perceptions of products that aid pollinators. We found consumers' interest in purchasing pollinator friendly products, existing knowledge, and socio-demographics all contribute to their perceptions of beneficial traits. Overall, findings indicate some confusion exists about what traits are actually beneficial to pollinator insects. However, results should be interpreted cautiously since there are unobserved individual/consumer characteristics that (due to data limitations) were not included in the analyses. Nevertheless, the study results are consistent with previous studies addressing the impact of consumer knowledge on behavior (Rihn and Khachatryan, 2016) and consumer behavior toward traits that benefit pollinators (Wollaeger et al., 2015), indicating the robustness of the present results. Future studies incorporating additional variables and experimental methods (e.g. incorporation of live plants, exposure to pollinator-related news in mass media, treatment groups, etc.) could further test the robustness of results.
There is an opportunity for researchers to further quantify how different consumer characteristics influence their definitions of 'pollinator friendly' products. Furthermore, policy makers and industry stakeholders could benefit from educating consumers about pollinator beneficial traits and use in-store promotions to influence consumer behavior toward those items. Ultimately, this could positively influence demand for pollinator beneficial products and improve pollinator health through increased availability of beneficial products. | 2019-05-20T13:04:05.162Z | 2018-02-26T00:00:00.000 | {
"year": 2018,
"sha1": "8a30bd9a7f726144ec8fe410fdd5cf052ba26082",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.22434/ifamr2017.0044",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "027a76a3cb3cbf104e1950d039025081c311fbe6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
} |
237340808 | pes2o/s2orc | v3-fos-license | Genomic Variation and Diversification in Begomovirus Genome in Implication to Host and Vector Adaptation
Begomoviruses (family Geminiviridae, genus Begomovirus) are DNA viruses transmitted in a circulative, persistent manner by the whitefly Bemisia tabaci (Gennadius). As revealed by their wide host range (more than 420 plant species), worldwide distribution, and effective vector transmission, begomoviruses are highly adaptive. Still, the genetic factors that facilitate their adaptation to a diverse array of hosts and vectors remain poorly understood. Mutations in the virus genome may confer a selective advantage for essential functions, such as transmission, replication, evading host responses, and movement within the host. Therefore, genetic variation is vital to virus evolution and, in response to selection pressure, is manifested in the emergence of new strains and species adapted to diverse hosts or with unique pathogenicity. The combination of variation and selection forms a genetic imprint on the genome. This review focuses on the factors that contribute to the evolution of begomoviruses and their global spread, whose unforeseen diversity and dispersal have been recognized and continue to expand.
Introduction
Circular, Rep-encoding single-stranded (CRESS) DNA viruses (phylum Cressdnaviricota) are a group of single-stranded DNA (ssDNA) viruses encoding a replication-associated protein (Rep) that appears to have originated from a common ancestor [1]. Plant-infecting CRESS DNA viruses are categorized into the families Geminiviridae and Nanoviridae [2]. Begomoviruses (BGVs) belong to the largest known genus of ssDNA viruses (the genus Begomovirus represents 88% of the family Geminiviridae), and they are responsible for a substantial amount of crop loss worldwide [3,4]. They are efficiently spread by a polyphagous whitefly vector, i.e., Bemisia tabaci (a collection of biotypes), to a broad host range, which encompasses both wild and cultivated plant species [5]. However, other species can also transmit BGVs, such as Trialeurodes ricini [6] and Trialeurodes vaporariorum [7]. Key symptoms of BGV infections include yellowing, inward curling of the leaves, and stunting of the plants, resulting in significant yield loss [8]. The genome can either be a single component (monopartite) of 2.5-3.1 kb or, in the case of some BGVs, two similar-sized components (bipartite), each between 2.6 and 2.8 kb [3]. Monopartite BGVs encode six proteins, viz. C1/Rep, C2/TrAP, C3/REn, C4, V2, and V1/CP [9]. In bipartite BGVs, homologs of these proteins are encoded on the DNA-A component (termed AC1/Rep, AC2/TrAP, AC3/REn, AC4, AV2, and AV1/CP), while DNA-B, the additional component in bipartite species, encodes two additional proteins: the nuclear shuttle protein (NSP) and the movement protein (MP).
The two genomic components of bipartite BGVs are denoted as DNA-A and DNA-B. These components share no significant sequence identity, excluding an intergenic region (IR) of ~200 nucleotides. The IR comprises the replication origin (ori), a conserved stem-loop structure containing the conserved nonanucleotide TAATATT//AC, and repeat sequences (iterons) that are specifically recognized by the viral replication-associated protein, Rep [10]. The IR is vital for preserving the integrity of the bipartite genome, allowing both components to be replicated by Rep, which confers high specificity for its cognate ori [11].
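As a small illustration of how the conserved origin-of-replication motif can be located in sequence data, the snippet below scans a placeholder genome string for the geminivirus nonanucleotide TAATATTAC, allowing for the genome's circularity; it is a toy sketch, not part of the cited analyses.

```python
# Minimal sketch: locate the conserved nonanucleotide (TAATATTAC) that marks
# the origin of replication in a begomovirus genome sequence; the sequence
# variable is a placeholder, not a real accession.
import re

genome = "GGCCATCCGTATAATATTACCGGATGGCCGC"  # circular genome as a linear string (placeholder)

def find_nonanucleotide(seq, motif="TAATATTAC"):
    """Return 0-based positions of the conserved nonanucleotide, scanning the
    sequence plus a short wrap-around to respect the genome's circularity."""
    extended = seq + seq[:len(motif) - 1]
    return [m.start() % len(seq) for m in re.finditer(motif, extended)]

print(find_nonanucleotide(genome))
```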
Although the mechanism by which segmented and multipartite genomes have emerged is uncertain, evidence suggests they may have derived from fragmentation of the genome of their non-segmented ancestors, with defective segments becoming virulent by complementation [12]. Specifically for BGVs, Briddon et al. [13] proposed that the DNA-B could have emerged from a satellite molecule captured by the monopartite progenitor of all BGVs. Perhaps this combination offered greater flexibility to the monopartite ancestors, and accordingly, it was sustained over the evolutionary process.
The two main phylogenetic clades of BGVs are predominantly revealed based on their geographical distribution and genome organization: The Old World BGVs can be divided into African, Indian, Japanese, and Oceania, with a small number of strains falling outside these [13]. The New World BGVs are distributed into Central and Southern America [13]. The NW BGVs (~140 species) have a bipartite genome (DNA-A and DNA-B) with a few reported exceptions, while the OW BGVs include both bipartite and monopartite species, with a predominance of the latter (ca. 85%) [9]. A key difference between the New World (NW) and Old World (OW) BGVs is that the latter has an extra small open reading frame (ORF) that leads and moderately overlaps the CP gene, termed AV2/V2 or "precoat" gene [14]. The OW BGVs are recognized as more ancient and diverse than the NW BGVs.
Vectors are imperative in spreading viruses from infected plants to healthy plants through various transmission strategies, including the transovarial mode [15]. More than 320 species of BGVs are known to be transmitted by B. tabaci (Hemiptera: Aleyrodidae), a cryptic species complex that includes more than 44 morphologically indistinguishable species [16][17][18]. Accordingly, the potential permutations of interactions in nature could be over 320 species of BGVs × more than 44 putative cryptic B. tabaci species × 1000s of crop species and varieties [17,19,20]. Consequently, BGVs are considered fast-evolving DNA viruses due to the global expansion and dispersal of their whitefly vector populations and the worldwide movement of plant materials, usually driven by human activity [19]. Previous studies have shown differences in virus transmission efficiency among the races of B. tabaci isolated from OW and NW geographical regions [20]. B. tabaci is a complex of at least 39 biotypes that are challenging or impossible to discriminate based on morphology. For example, B. tabaci B or Middle East-Asia Minor 1 (MEAM1), which originated in Middle East-Minor Asia, and B. tabaci Q or MED, which originated in the Mediterranean region, are the two most invasive and destructive whiteflies [16]. B. tabaci B and Q (hereafter B and Q) vary in feeding behavior, virus transmission efficiency, host range, endosymbionts, and insecticide resistance. However, both B and Q enormously damage plants by feeding on phloem tissue and transmitting BGVs [21]. In the past 30 years, both biotypes have come to dominate many countries worldwide and have displaced some native cryptic biotypes. A recent study revealed that MED populations had a higher level of genetic variation with multiple invasions than MEAM1. Molecular genetic methods, such as mitochondrial cytochrome oxidase I (mtCOI) [22] and nuclear (microsatellite) DNA [23], have been used to investigate the ecological and evolutionary aspects of biological invasions and their concurrent impacts on the genetic structure and variation of an invasive species [24].
BGVs are often associated with DNA satellites, designated beta- and alphasatellites, which promote vector-host interaction, suppress host defense, and support symptom development [25]. Rolling circle amplification (RCA) has revolutionized the diagnosis and genomics of BGVs and their associated satellites. This success is mainly due to the accessibility of RCA using ϕ29 DNA polymerase, a technique that allows the amplification of ssDNA viral genomes without any prior knowledge of nucleotide sequences [21,22]. RCA has also enhanced the detection of many small noncoding DNA satellites that are a quarter of the size of their cognate helper BGV genomic components [23,24]. The name deltasatellites has recently been proposed for these satellites [26]. The association of betasatellites (also called symptom-modulating satellites) with the majority of the Old World monopartite BGVs and their unrestrained trans-replication by diverse helper BGVs have made them a severe threat to the agro-economy [27][28][29][30]. Alphasatellites are self-replicating circular single-stranded DNA molecules (1.3 kb to 1.4 kb in size) that require helper viruses for their movement inside the host plant and for vector transmission. Their exact function is not well known [27][28][29]. However, in another study, the role of alphasatellites in disease severity via affecting the virulence of the helper virus has been demonstrated [30][31][32]. Betasatellites are circular single-stranded DNA molecules (~1.3 kb in size) that are entirely reliant on their helper viruses for replication, encapsidation, movement, and vector transmission [33,34]. Betasatellites are often associated with symptom development, disease diversification, and increased accumulation of viral nucleic acids in the host [35,36].
BGVs from the NW and OW geographical regions are known to be genetically divergent. Phylogenetic analyses suggested independent segregation, with the OW BGV clade displaying greater genetic diversity [37]. Diverse environmental factors frequently influence virus transmission and the tritrophic interaction between plant, vector, and virus, where the vectors are vital mediators. Virus replication in both vectors and plants imposes evolutionary pressure on the virus genome. For example, viruses jump between different hosts and experience strong adaptive selection as they increase their fitness for the new niche. Therefore, the host might act as a primary driver of the longer-term evolution of viruses. Based on the same hypothesis, Simmonds et al. proposed a "niche-filling model" and highlighted the role of host interactions in shaping virus evolution [38]. Some non-cultivated plant species, especially of the families Malvaceae, Euphorbiaceae, Fabaceae, and Solanaceae, are identified hosts of BGVs [39].
Genomic variation, evolution, and adaptation of the viruses to distinct hosts are mediated by the combined effect of genetic factors in their genome and the selection pressure imposed by the host [40][41][42]. Besides, different host species may play an essential role in the standing genetic variability of BGV populations [38]. Mutations are the leading source of variation for most BGV populations [43]. Selective pressures applied by the host play a critical role in shaping virus populations, and these populations are likely being selected for at both the protein and DNA or RNA levels [44]. Accordingly, understanding how certain mutational patterns (nucleotide and amino acid substitutions) emerge over time across the virus genome is key for anti-viral defense. Recent studies based on computational analyses have used the ratio of non-synonymous to synonymous substitutions to characterize virus evolution (as diverging clades) [41,45]. Purifying selection reduces the number of non-synonymous substitutions before they arise or are fixed in the genome and favors the fixation of those conferring adaptive benefits. In contrast, synonymous substitutions are more likely to be maintained [46]. Owing to their small genome size and high potential for genomic variation (due to mutation and recombination), BGVs are attractive models for studying the evolutionary and ecological factors driving their emergence [47]. Substitution rates (or µ) of whitefly-vectored BGVs have been described to be as high as those of ssRNA viruses [48,49], and positive selection pressure on mutations or the products of recombination events plays a crucial role in BGV evolutionary dynamics [50,51]. Regulatory mechanisms of BGVs and RNA viruses promote host adaptation; the betasatellite silencing suppressor βC1 avoids excessive inhibition of antiviral pathways and cell toxicity through autophagy activation [52]; the Cucumber mosaic virus (CMV) silencing suppressor 2b and its interacting partner ARGONAUTE 1 (AGO1) are antagonized by the viral CP and 1a [53,54]; and the regulated proteolysis of the Plum pox virus (PPV) P1 modulates the HCPro silencing suppressor activity to promote long-term virus fitness [55].
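To make the substitution-counting idea tangible, the sketch below contrasts non-synonymous and synonymous codon differences between two aligned coding sequences using Biopython's standard codon table; this per-codon tally is a deliberate simplification of dN/dS estimation (no pathway counting or site normalization), and the example sequences are invented.

```python
# Crude sketch of contrasting non-synonymous and synonymous substitutions
# between two aligned begomovirus coding sequences; a simplification, not a
# full Nei-Gojobori or maximum-likelihood dN/dS estimate.
from Bio.Data import CodonTable

TABLE = CodonTable.unambiguous_dna_by_id[1]  # standard genetic code

def translate_codon(codon):
    return "*" if codon in TABLE.stop_codons else TABLE.forward_table[codon]

def count_substitutions(seq1, seq2):
    """Count codons with non-synonymous vs synonymous differences between two
    aligned, gap-free, in-frame sequences of equal length."""
    nonsyn = syn = 0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3].upper(), seq2[i:i + 3].upper()
        if c1 == c2 or len(c2) < 3:
            continue
        if translate_codon(c1) == translate_codon(c2):
            syn += 1
        else:
            nonsyn += 1
    return nonsyn, syn

# Illustrative in-frame fragments (invented, not real begomovirus sequences):
n, s = count_substitutions("ATGGCTAAAGGT", "ATGGCCAAAGAT")
print(f"non-synonymous: {n}, synonymous: {s}")
```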
DNA/RNA methylation is also an essential epigenetic modification that could affect plant immunity, virus adaptation, and evolution [56,57]. Independent studies have disclosed that geminivirus-betasatellite complexes are both robust inducers of and targets for post-transcriptional gene silencing (PTGS) and transcriptional gene silencing (TGS) and thus play a fundamental role in virus-host interaction [58,59]. To lower the host antiviral RNA silencing defense, the βC1 protein encoded by several betasatellites can suppress PTGS. Moreover, epigenetic modifications of histones (ubiquitination, methylation) associated with the minichromosomal structure of monopartite capsicum-infecting BGVs have been shown to play a crucial role in virus-host interaction [58]. An excellent recent report investigated BGVs adept at modulating plant immunity to enhance the fitness of their whitefly vector and diminish the performance of two nonvector herbivores [60]. The authors indicated that the βC1 proteins encoded by the satellites associated with Cotton leaf curl Multan virus (CLCuMuV) and Tomato yellow leaf curl China virus (TYLCCNV) could interact with the transcription factor WRKY20 and thus stimulate a plant tissue-specific response against different herbivores. Consequently, satellite DNAs need further investigation, as they may be a key factor driving the diversification of begomovirus-satellite disease complexes.
Begomoviral proteins have been characterized for understanding the mechanism of symptom recovery [61,62], virulence, and host resistance [63,64]. Coat protein (CP) is a multifunctional protein due to its interaction with plants and vectors [65]. The CPs of all the whitefly-transmitted geminiviruses have one or more antigenic epitopes in common, suggesting that these could be determinants of vector specificity and that they play a leading role in virus transmission [66,67]. Recently, an in silico study showed a higher mean diversity in the cp gene of OW BGVs compared to the NW [68]. However, highly mutable amino acids have been identified in the CP of Squash leaf curl China virus (SLCCNV) [69], which did not alter their fitness in the host plant but rendered the virus more competitive for certain species of whiteflies.
Although several techniques ranging from conventional methods to molecular advances have been implemented to control geminiviral infections, success has been limited due to synergistic virus infections. CRISPR-Cas (clustered regularly interspaced short palindromic repeats, CRISPR, associated protein), a bacterial adaptive immune strategy against interfering foreign nucleic acids, has emerged as an effective genome editing technology that has been successfully applied in many organisms, including several plant species [70][71][72]. Nevertheless, evidence of rapid genetic variation and virus evolution includes the characterization of escape mutants from CRISPR-Cas9 plants engineered to target BGV genomes. Ali et al. underlined a potential problem with the technique by determining the probability of virus escape from the CRISPR-edited plants [73][74][75]. Virus escape from editing was also demonstrated by Mehta et al., whose efforts to induce resistance against African cassava mosaic virus (ACMV) in stable transgenic cassava (Manihot esculenta) lines showed limited success [76]. Therefore, selecting targets within the viral genome is a crucial factor in achieving durable resistance. In this perspective, non-coding targets are more efficient than coding regions as they embed the key elements for virus replication and pathogenicity maintenance [77]. Furthermore, detecting the potential host factors involved in resistance during plant-geminivirus interaction, multiplexed genetic engineering tools directing multiple targets, and targeted deletions in viral genomes can assist in developing disease-free plants and counteracting the emergence of CRISPR-resistant BGVs.
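As a toy illustration of target selection, the snippet below enumerates candidate SpCas9 protospacers (20 nt followed by an NGG PAM) on one strand of a placeholder intergenic sequence; real guide design would also consider the reverse strand, editing efficiency, and off-targets, none of which is attempted here.

```python
# Hedged sketch: enumerate candidate SpCas9 targets (20-nt protospacer followed
# by an NGG PAM) within a begomovirus intergenic region; the input sequence is
# a placeholder and no off-target or efficiency scoring is attempted.
import re

def spcas9_targets(seq, protospacer_len=20):
    """Yield (start, protospacer, pam) for every NGG PAM on the given strand."""
    seq = seq.upper()
    pattern = r"(?=([ACGT]{%d})([ACGT]GG))" % protospacer_len
    for m in re.finditer(pattern, seq):
        yield m.start(), m.group(1), m.group(2)

intergenic_region = "ACGTTAATATTACCGGATGGCCGCGCGATTTTTGGAGTCC"  # placeholder
for pos, spacer, pam in spcas9_targets(intergenic_region):
    print(pos, spacer, pam)
```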
Tomato yellow leaf curl virus (TYLCV) has the broadest host range among BGVs and has been discovered in 49 species belonging to 16 different plant families [78]. In the tomato plant, various sources of resistance to TYLCV have been recognized and employed to produce resistant cultivars. Despite broad efforts to control TYLCV by deploying resistance in the field, new variants capable of overcoming resistance have continuously emerged, and TYLCV remains the most widespread and damaging virus in both tomato and pepper crops. Wild tomato species, such as Solanum pimpinellifolium, S. peruvianum, S. chilense, S. habrochaites, and S. cheesmaniae, are resistant to TYLCV and other BGVs. Resistance genes, such as Ty-1 to Ty-6 [79,80], from these wild relatives have been repeatedly backcrossed into cultivated tomato varieties, leading to improved resistance to the virus, although the resulting varieties were never 100% resistant [81]. Resistance-driven selective pressure combined with the high evolutionary capacity of TYLCV might have contributed to the unique evolution of TYLCV. Therefore, more cohesive advancements that complement host resistance are essential for the successful control of TYLCV.
Discussion
BGVs have become the most devastating group of plant viruses in tropical and subtropical regions of the world. The current emergence of BGVs is noteworthy, as these viruses have been co-evolving with their dicotyledonous plant hosts for ages. The plant hosts and varieties grown will influence virus diversity through selection acting on virus and vector populations. Agricultural growth has been suggested as one of the leading causes, together with expansions in populations of their vector Bemisia tabaci, partly due to the worldwide spread of the more prolific B-biotype, with new diseases and associated epidemics.
The fecundity of different B. tabaci populations varies significantly on diverse hosts [82], and fluctuations in cultivated crops might result in discrete changes in vector abundance. For instance, the increased cultivation of cotton, soybean, and other horticultural crops in Latin America in the 1970s led to greater B. tabaci populations and subsequent BGV disease [39]. The main driving force behind the destructive cassava mosaic pandemic that has spread quickly in East Africa since the late 1980s [83] appears to be an interaction between virus strains, vector populations, and host genotypes rather than a single factor [84]. The fecundity of B. tabaci increases drastically on cassava plants infected with the recombinant EACMV-[UG], an East African cassava mosaic virus Uganda (Uganda variant), leading to much higher population densities on the restricted green areas of severely affected leaves and an increased migration rate of infective adults.
One other fundamental area that needs clarification is the role and mode of interaction of the newly discovered circular ssDNA satellites with each other, their helper viruses, and their role in BGV epidemiology. These DNA satellites share no significant sequence homology with their helper BGV sequences and are of various types. The epidemiological role of DNA-β satellite molecules seems to be in extending the host range of BGVs. For example, at least five diverse BGV species, including Papaya leaf curl virus, can cause cotton leaf curl disease in Pakistan but only when associated with a particular DNA-β molecule [85].
Little is known about the selection pressures that seem to operate and drive BGV evolution towards increased virulence and an extended host range. However, the genomes of BGVs show extreme plasticity, leading to rapid evolution in response to changing cropping systems. Genetic factors determining virulence, host adaptation, and suppression of defense responses are under positive selection [86,87]. In the context of naturally distinct hosts and vectors, BGVs may face differential selection pressure to maintain functionality [46]. Furthermore, geographically distinct host and vector genetic diversity enforces various selective constraints [58,86]. Every combination of a host and virus is unique, and the selection of different variants provides new host adaptation, new strain and species emergence, and ultimately host range extension. Irrespective of whether human or plant viruses are considered, experimentally validated co-evolving amino acids are associated with host shifts [88,89]. It is known that a change in a set of a few co-evolving amino acids of viral proteins can lead to a change in the host infectivity range. However, recent progress in analyzing mutation libraries and the interaction between viral three-dimensional protein structures and host factors can enhance co-evolutionary amino acid discovery and our understanding of the viral evasion landscape [90,91]. Some studies have proven the relation between amino acid co-variation in viral determinants and host adaptation [92][93][94]. Such modification occurs through a genetic adaptation process that overcomes viral entry and replication barriers in a new cellular environment.
Previous findings indicate that additional virus-induced driving forces for BGV epidemics might be the alteration of plant biochemistry so that infected plants emit volatiles as vector attractants, alter feeding behavior, enhance vector fertility [95], and allow increased virus acquisition [96]. These interactions could have the consequence of increasing the robustness of the virus population. Biological and genetic studies to elucidate such interactions are a high priority for future research. Additionally, experimental study should be combined with mathematical modelling studies [97], which offer prospects for dissecting and incorporating the various layers of interaction and exploring the consequences for virus epidemiology [98][99][100].
BGVs co-evolve with their hosts and vectors in diverse environments and face selection pressure, for any host-vector combination, to maintain the genomic organization and protein functions that facilitate vector transmissibility, replication, and movement [69,101]. Based on the same hypothesis, we present a model for BGV evolution using the example of a typical BGV master/founder genome (Figure 1A). While hosts from different niches favor variants better adapted to replication and movement, vectors select them based on transmission efficiency before or after adaptation to a particular host (Figure 1B). Additionally, the trans-replication of betasatellites by different BGVs may trigger diversity in the BGV genome through the acquisition of homologous iteron-like motifs [102] (Figure 1B). A previous study based on phylogenetic analyses has shown the segregation of betasatellites according to their host and geographic origin [103]. These results strongly support the concept of coadaptation of betasatellites with their corresponding helper BGVs. Accordingly, genetic plasticity in key segments of the BGV genome must sustain functionality in genetically diverse hosts, vectors, and environments. Tolerance for new mutations may provide the robustness required for generating the diversity on which selection acts to identify variants with a competitive advantage. Thus, the repeated cycle of virus replication in a host plant, vector transmission, and selection may lead to host adaptation. In the present model, mutations become fixed in begomoviral proteins that are determinants of host adaptation and vector transmission (Figure 1C).
Investigations of the molecular diversity of BGV populations need to focus on the population rather than the 'molecular' level [104], as simply determining the number of different molecular sequences present in a host plant, crop, or region is insufficient to track evolutionary change and to determine the influence of factors such as the introduction of host-plant resistance or changes in cropping systems. Also, there is still a lack of evidence on the exact rate at which virus variants arise, and inevitably there will be biases in the current information on virus diversity. Diagnostic practices, such as polymerase chain reaction (PCR), are selective even when degenerate BGV PCR primers are used. Besides, many reports of gene or genome function have dealt only with the properties of infectious clones of one sequence. In the field, the biological function of the virus may depend on the interaction between a 'swarm' of variant sequences upon which selection acts [105]. Recent advances in DNA synthesis have allowed the establishment of a synthetic genomics framework that can significantly accelerate the biological characterization of BGVs and their satellites [106].
In summary, these few illustrations highlight the importance of beneficial mutations in the plant viral genome, providing insight into resistance-breaking outbreaks. Understanding the evolutionary trajectories of virus populations is therefore vital for developing more durable strategies to control begomoviral diseases in crop fields. Such mutations are abundant, and it is safe to assume that many of the viral determinants in BGVs have not yet been identified. In conjunction with improvements in practical genome engineering approaches, novel viral functions will continue to be discovered.
Figure 1. ... in genetically diverse hosts (species, cultivars, or landraces). Owing to differing environmental climates and geographical niches, the genotypes of the plant and vector may differ. Virus replication and selection within the host is a continuous process. During this process, the interaction of begomoviral proteins with pro-viral and antiviral proteins (of host and vector) regulates the balance between variation and selection, leading to the selection of the fittest, best-adapted strains. Vectors contribute to selection by transmitting the virus to new plant species or to different genotypes/cultivars of the same species. Some BGVs retain a satellite called DNA β (betasatellite), and this interaction is referred to as a begomovirus-betasatellite complex (red dotted arrows). Betasatellites depend on the helper virus for their replication and spread within and between hosts. Selection pressure imposed on a virus genome by a given environment will alter the virus population, excluding less fit entities. Mutations that confer a beneficial advantage are likely to become fixed in the genome. Some mutations generated in alternate hosts (from different niches) might break resistance and expand the host range. (C) During the evolutionary process, beneficial (non-synonymous) mutations, including sites under positive selection, differentially accumulate in different viral proteins. They may contribute to fitness by enhancing stability, transmission, replication competence, escape from immunity, suppression of immune responses, or a combination of these.
Conclusions
Genetic determinants facilitate virus variation, evolution, and adaptation to diverse hosts and environments. The potential of BGVs to evolve rapidly by acquiring genes from other BGVs, or from viruses of different genera, adds further complications. The starting points are mutation (nucleotide substitutions, insertions, deletions), recombination, and reassortment (in segmented viruses). While these mutations may arise randomly, selection separates beneficial mutations from unfavorable and neutral ones. Selection is imposed by the host, the environment, and their interaction. Mutations that provide a beneficial advantage are more likely to become fixed in the genome and to accumulate at higher-than-random frequencies in regions of the genome that contribute to robustness by enhancing stability, transmission, replication competence, escape from immunity, suppression of immune responses, or a combination of these. There will undoubtedly be no single solution for monitoring these epidemics, and the human impact on BGV evolution can only be minimized by constraining or altering some of the practices that have led to the rise of these viruses and their vector populations.
Prospects
Our review illustrates that we are only beginning to comprehend the tripartite interactions between BGVs, vectors, and host plants. The continual emergence of new recombinant strains of TYLCV, or of any other BGV species, may lead to resistance breaking, more efficient vector transmission, and expansion of the host range, and thus poses a significant threat to crop production and disease management. In the future, it will be imperative to adopt a multidisciplinary approach, combining studies of host and geography, vector- and human-mediated dispersal, and the fundamentals of the molecular interactions of begomovirus-satellite disease complexes, for an in-depth understanding of their expanding virosphere. This would help identify viral determinants of vital importance, the basis of broader infectivity, and potential antibody targets. | 2021-08-29T06:16:17.908Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "ca90814071540f866957ba0ea817d9531f038852",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/10/8/1706/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1c8a9b5d326542c68f001b8a162017324572032",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53016526 | pes2o/s2orc | v3-fos-license | Sensory and cultural acceptability tradeoffs with nutritional content of biofortified orange-fleshed sweetpotato varieties among households with children in Malawi
Background Biofortified orange-fleshed sweetpotato (OFSP) varieties are being promoted to reduce vitamin A deficiencies due to their higher beta-carotene content. For OFSP varieties to have impact they need to be accepted and consumed at scale amongst populations suffering from vitamin A deficiencies. Objective We investigated the sensory and cultural acceptability of OFSP varieties amongst households with children aged between 2–5 years old in two areas in Central and Southern Malawi using an integrated model of the Theory of Planned Behavior (TPB) and the Health Belief Model (HBM). Methods Sensory acceptability was measured using a triangle, preference and acceptance test using three OFSP varieties and one control variety, among 270 adults and 60 children. Based on a food ethnographic study, a questionnaire on cultural acceptability was developed and administered to 302 caretakers. Data were analyzed by calculating Spearman’s correlations between constructs and multiple linear regression modeling. Results The sensory evaluation indicates that all three OFSP varieties are accepted (scores >3 on 5-point scale), but there is a preference for the control variety over the three OFSP varieties. Almost all caretakers are intending to frequently prepare OFSP for their child in future (97%). Based on regression analysis, the constructs ‘subjective norms’ (β = 0.25, p = 0.00) reflecting social pressure, and ‘attitudes toward behavior’ (β = 0.14, p = 0.01), reflecting the feelings towards serving their child OFSP, were the best predictors for caretakers’ behavior to prepare OFSP for their child. Conclusions Our study shows that both sensory and cultural attributes can influence acceptability of varieties and consumption amongst households with children. Considering these attributes can improve the impact of biofortified crops in future programming, by reducing vitamin A deficiencies through the intake of these nutrient-rich crops.
Objective
We investigated the sensory and cultural acceptability of OFSP varieties amongst households with children aged between 2-5 years old in two areas in Central and Southern Malawi using an integrated model of the Theory of Planned Behavior (TPB) and the Health Belief Model (HBM).
Methods
Sensory acceptability was measured using a triangle, preference and acceptance test using three OFSP varieties and one control variety, among 270 adults and 60 children. Based on a food ethnographic study, a questionnaire on cultural acceptability was developed and administered to 302 caretakers. Data were analyzed by calculating Spearman's correlations between constructs and multiple linear regression modeling.
Results
The sensory evaluation indicates that all three OFSP varieties are accepted (scores >3 on 5-point scale), but there is a preference for the control variety over the three OFSP varieties. Almost all caretakers are intending to frequently prepare OFSP for their child in future (97%).
Introduction
Sweetpotato (Ipomoea batatas (L.) Lam) is one of the world's most important crops for food and nutrition security, particularly in Sub-Saharan Africa, parts of Asia, and the Pacific Islands [1,2]. Malawi is the main producer of sweetpotatoes in Sub-Saharan Africa, with an average production of 3.9 million tons per year in the period 2012-2014 [2]. From a nutrition perspective, sweetpotato roots are a good source of carbohydrates, fiber and vitamins B, C and E [3]. Most of the sweetpotato varieties currently grown and consumed in Sub-Saharan Africa are white- or yellow-fleshed and contain little beta-carotene [2]. In recent years, breeding programs have developed improved biofortified orange-fleshed sweetpotato (OFSP) varieties that are a good source of beta-carotene, a precursor of vitamin A [4]. Vitamin A deficiency is one of the major nutritional deficiencies in the world, affecting 190 million preschool children globally [5]. Micronutrient surveys conducted in Malawi in 2001 and 2009 reported that 59% and 23% of preschool children, respectively, were vitamin A deficient [6]. Recent data on vitamin A deficiency, however, suggest that only 4% of preschool children living in rural areas in Malawi are vitamin A deficient [7], which is defined by the World Health Organization as a mild public health problem [8]. A possible explanation for this drop in deficiency rates is the mandatory vitamin A fortification of oil and sugar in Malawi since 2015. Only 67% of preschool children received a vitamin A capsule in the last 6 months [9]; hence there remains a need for a more sustainable and cost-effective approach to reduce vitamin A deficiencies.
Biofortification strategies to improve human nutrition can be complementary to supplementation, dietary diversification, and fortification initiatives to combat vitamin A deficiency. Biofortification is a food-based approach to combat micronutrient malnutrition by breeding staple crops with higher levels of micronutrients (e.g., iron, zinc, beta-carotene). Biofortification has been shown to be effective in alleviating micronutrient deficiencies in several populations [10]. The HarvestPlus Program has met its pre-set breeding goal for OFSP of a beta-carotene level of 3200 μg/100 g, which meets the daily vitamin A requirement of a child aged 4-6 years consuming 100 grams of OFSP per day [10]. The consumed beta-carotene is converted in the human body to vitamin A, which is one of the essential micronutrients for human nutrition [11]. Hence, OFSP consumption has major potential to contribute to decreases in vitamin A deficiency rates in children as well as adults, as has been shown by both efficacy and effectiveness studies [4,12].
The first OFSP variety, Zondeni, was locally available in farmers' fields and officially recognized in Malawi in 2008, followed by an additional five varieties released in 2011 through a breeding program [13]. The OFSP varieties have different visual phenotypes and tastes from the pre-existing varieties of sweetpotato used by farmers. The orange color intensity of OFSP varieties is associated with higher beta-carotene levels and lower dry matter content [14,15]. Such trait changes can influence sensory and cultural acceptability: newly introduced varieties must remain acceptable to consumers if they are to have the intended effect of improving the vitamin A status of the target consumers. As acceptability can differ due to cultural and demographic factors, it is important to conduct research on each country-crop combination [16]. Talsma et al. have reviewed nine studies on the sociocultural drivers and determinants of acceptance and adoption of OFSP [17]. Overall, these studies indicated that acceptability and adoption of OFSP were high in areas where it was promoted. While OFSP has been promoted throughout Malawi since 2009, no in-depth research has been published identifying factors that can influence the acceptability of consuming OFSP. To assess such cultural acceptability, an integrated model combining the Theory of Planned Behavior (TPB) and the Health Belief Model (HBM) can be used to investigate food- or health-related behavior [18].
The TPB model assumes that the intention to perform a behavior, in our case consumption of OFSP, is closely related to the behavior itself. The intention to perform this behavior can be predicted by attitudes toward the behavior, subjective norms, and perceived behavioral control [19]. The HBM is used for explaining and predicting acceptance of health-related recommendations. It combines individual perceptions and modifying factors into a likelihood of action, e.g., of adopting a certain behavior, in our case OFSP consumption. The most important elements are the perceived susceptibility to and threat of the health problem, the cues to action to adopt the behavior, and the perceived benefits of the preventive action [20]. This combined TPB/HBM model has been used to investigate the acceptance of foods such as amaranth, iron-fortified soy sauce, fonio and yellow cassava [18,[21][22][23], but has to date not been used to investigate the acceptability of OFSP.
The aim of our study was to investigate the sensory and cultural acceptability of OFSP amongst households with children between 2-5 years old in two areas in Central and Southern Malawi using the integrated model of the TPB and HBM.
Ethics statement
Written informed consents were collected among research participants or caretakers before the start of the study and all children were asked for their verbal consent. Ethical clearance for this research project was obtained from the National University of Ireland Galway Research Ethics Committee (Reference 16/FEB/07) and the National Commission of Science and Technology in Lilongwe, Malawi (Protocol number P.06/16/114.).
Study area
The research was conducted in Central and Southern Malawi, in the Mngwangwa location in Lilongwe district and the Katuli location in Mangochi district, respectively. These rural research sites were selected based on their high production levels of sweetpotato and beans, the difference in culture (Chewa ethnic group in Mngwangwa and Yao ethnic group in Katuli) and the presence of collaborating organizations (International Potato Center and Concern Worldwide).
Mngwangwa is situated relatively close to the capital of Malawi, Lilongwe, at a distance of approximately 30 km. Katuli is more isolated, lying behind hills about 60 kilometers from Mangochi and bordering Mozambique. Malawi has one rainy season stretching from December to April, followed by a long dry season [24]. Both locations can be described as rural areas where over 90% of the population is engaged in agricultural activities [25]. Major crops grown in both locations include maize and groundnuts. In Lilongwe, tobacco, beans and soy are also important crops. Diets are mainly cereal-based, with over 50% of calorie intake coming from maize, with Nsima (maize flour mixed with water) as a staple, supplemented with starchy roots (cassava, potatoes), vegetables and beans [26]. Literacy is higher in Lilongwe (64.5%) than in Mangochi (57.2%) [25].
Study participants & sampling
This study consisted of two parts: the sensory evaluation and the cultural acceptability survey. To prevent bias, the two parts of the study were conducted in different areas within the locations. For the sensory study, five villages were identified in each location through random sampling; adults, preschool children aged 4-5 years and their caretakers were interviewed using convenience sampling and central location testing [27]. Sample sizes were large enough to perform analyses stratified by location, except for the preference test for children, where all children were analyzed as one group.
For the cultural acceptability study, participants were selected using multistage sampling: five areas were selected within each location (each location was subdivided into 20 areas) and three villages were then randomly selected in each selected area. Inclusion criteria for the cultural acceptability study were the presence in the household of a child between 2 and 5 years of age in the participant's care and previous exposure to the OFSP varieties. Accordingly, villages were included if there had been any previous OFSP-related activity (nutrition promotion, agricultural training, demonstration plots). The list of villages was compiled and cross-checked by Agriculture Extension Development Coordinators and field staff from the International Potato Center in the respective areas who were responsible for implementing the OFSP-related activities. Probability proportional to (population) size sampling was used to calculate the number of participants per village, in which the sample size for a sub-population (area/village) is weighted in proportion to its size (evenly distributed over the 2 locations). Per location, 15 villages were included. Participants within each village were randomly sampled using household lists. If the selected participant was not available, a new randomly selected household was invited for the interview.
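A minimal sketch of the allocation step described above (not the authors' code; the village names and populations are invented placeholders, and the target of 151 interviews simply splits the 302 caretakers roughly evenly over the two locations) shows one way to allocate interviews across villages in proportion to population, with largest-remainder rounding so the totals add up:

# Hypothetical probability-proportional-to-size allocation of interviews
village_populations = {"Village A": 420, "Village B": 310, "Village C": 270,
                       "Village D": 560, "Village E": 380}

def pps_allocation(populations, total_sample):
    """Allocate a fixed total sample across villages in proportion to population."""
    total_pop = sum(populations.values())
    raw = {v: total_sample * p / total_pop for v, p in populations.items()}   # fractional shares
    alloc = {v: int(x) for v, x in raw.items()}                               # floor each share
    leftovers = sorted(raw, key=lambda v: raw[v] - alloc[v], reverse=True)    # largest remainders first
    for v in leftovers[: total_sample - sum(alloc.values())]:
        alloc[v] += 1
    return alloc

print(pps_allocation(village_populations, total_sample=151))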
The study was conducted between September and November 2016 by four trained enumerators for the sensory evaluation and five trained enumerators for the cultural acceptability study. Interviews took place in a communal place and were conducted in Chichewa or Yao depending on the preference of the participant. All questionnaires were developed in English and translated into the Chichewa and Yao local languages. Correctness of translation was checked by back-translation to English. Pretesting was done for both studies and resulted in small changes to the explanation given to participants and the language used. To assess whether study participants understood the modified 5-point Likert scale with both faces and check symbols that was used for both parts of the study, an example question unrelated to the research was asked. When more than one child was eligible, the caretaker decided which child the interview concerned, based on the child's availability to attend the interview. Participants were excluded if they did not understand the scale or if the mother could not present proof of the child's birth date.
Sensory evaluation. Sweetpotato samples were prepared and cut into roughly equal-sized portions of between 25 and 40 grams. All varieties were harvested in the dry season from the same irrigated farmer's field at the Chiwamba location in Lilongwe district, and stored for the same number of days before the tasting took place.
The Kenya variety is a yellow sweetpotato that is widely available in Malawi and was therefore used as the control variety. Kadyaubwerere is a high-yielding OFSP variety, Chipika is a more drought-resistant OFSP variety, and Zondeni is the oldest OFSP variety available in Malawi and has lower yields [28]. The samples were boiled until the texture, assessed with a fork by the researchers, was considered right for consumption [29]. To assess sensory acceptability three different tests were done. All participants did a preference test (n = 270), followed by either a triangle (n = 66) or an acceptance test (n = 210).
A triangle test (n = 66) was conducted with the control variety and the Kadyaubwerere OFSP variety to determine whether blindfolded participants could perceive a difference in taste between the two sweetpotato varieties. Previous studies indicate that 24-30 participants are sufficient for a difference test to establish statistical significance for noticeable differences in sensory testing [30]. Three samples of sweetpotato were presented to each blindfolded participant, who was asked to taste them and identify the odd sample [27,30]. A preference test was conducted with adult participants (n = 270) and with 60 children aged 4-5 years to study the preference between the control variety and the Kadyaubwerere OFSP variety. Sample sizes over n = 20 can be analyzed, but the ideal sample size is >100 participants, at which point the binomial distribution closely approximates the normal distribution [27]. Participants were given two samples and indicated whether they preferred the OFSP or the control variety and for what reasons. The same procedure was used for the children aged 4-5 years [31]. An acceptance test (n = 210) was conducted to evaluate the overall liking of sweetpotato, as well as the liking of the following attributes: taste, colour, smell, texture, starchiness and sweetness. Four sweetpotato varieties were presented to the participants: one control yellow sweetpotato variety (Kenya) and the three OFSP varieties, Kadyaubwerere, Chipika and Zondeni. The participants were then asked to rate the samples for each attribute on a 5-point modified Likert scale with both smiley faces and checks. All test samples were color-coded and presented in random order. Participants were allowed to swallow the samples and were asked to rinse their mouth with water before the test and after tasting each sample.
Cultural acceptability. For the cultural acceptability survey, questionnaire-based interviews were conducted. The questionnaires consisted of two parts: in part one, information on socio-demographics was gathered, followed in part two by statements corresponding to the 13 constructs of the TPB and HBM. These statements were identified based on a literature study and a food ethnographic study consisting of focus group discussions, a food attribute and food difference study, key informant interviews, and a pile sorting session [32]. In total, 109 statements were categorized into thirteen constructs as described by Sun et al. and Ajzen et al. [18,33]. Anticipated affect was added as a construct to this model, as it has been shown to explain additional variance in predicting the intention to perform the behavior [33]. The construct 'attitude towards behavior' consisted of a maximum of twenty-two questions, whereas there was only one question for the construct 'health behavior identity'. Respondents were asked to respond to all statements on a 5-point modified Likert scale with both faces and checks, ranging from "I completely disagree" to "I completely agree".
For the constructs '(prior) behavior' and 'behavioral intention' a different scale was used, reflecting the frequency of (intended) consumption scored from 0 to 5: (0) never, (1) once a month, (2) 2-3 times a month, (3) once a week, (4) 2 or more times a week and (5) every day. The items of most other constructs were scored from 1 to 5. For two constructs, paired questions were asked: the construct 'attitude toward behavior' consisted of behavioral beliefs and the evaluation of these beliefs, and the construct 'subjective norms' consisted of normative beliefs and the motivation to comply. Beliefs were scored from 1 to 5, whereas the evaluation of these beliefs and the motivation to comply were scored from -2 to 2; the paired answers were then multiplied, giving a score between -10 and 10 per pair. Total scores per construct were calculated for each caretaker by adding the scores of the individual statements within that construct. The adjusted combined model of the TPB and the HBM as used in our study is shown in Fig 1. In our case, compared with the original model presented by Sun et al. [18], the construct 'prior behavior' was a better predictor of behavior than the construct 'behavioral intention'; we therefore swapped the constructs 'behavior' and 'behavioral intention' as shown in Fig 1. The questionnaire-based interview format is provided in the S1 Appendix.
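A minimal sketch of this scoring scheme (not the authors' code; the item responses below are invented for one hypothetical caretaker) shows how a paired construct score and a simple summed construct score could be computed:

def paired_construct_score(beliefs, evaluations):
    # Beliefs scored 1-5, paired evaluations/motivations scored -2 to +2;
    # each pair is multiplied (range -10 to 10) and the products are summed.
    assert len(beliefs) == len(evaluations)
    return sum(b * e for b, e in zip(beliefs, evaluations))

def simple_construct_score(items):
    # Most other constructs: items scored 1-5 and simply summed.
    return sum(items)

attitude_toward_behavior = paired_construct_score(beliefs=[5, 4, 3], evaluations=[2, 1, -1])  # 11
subjective_norms = paired_construct_score(beliefs=[4, 5], evaluations=[2, 2])                 # 18
perceived_barriers = simple_construct_score([2, 1, 3])                                        # 6
print(attitude_toward_behavior, subjective_norms, perceived_barriers)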
Sweetpotato measurements. Dry matter content (%) was determined in triplicate for three randomly sampled raw roots of each of the four sweetpotato varieties, calculated as the ratio of dry weight to fresh weight. Sweetpotato samples were dried in an oven at 70 °C until the weight remained unchanged (on average 27 hours) [34]. Color charts were used to estimate beta-carotene levels for the different OFSP varieties [35].
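As a small illustrative calculation of this determination (the weights below are hypothetical, chosen so the result lands near the 39.2% later reported for the Kenya variety):

def dry_matter_percent(fresh_weight_g, dry_weight_g):
    # Dry matter content: the share of the fresh weight remaining after oven drying
    return 100.0 * dry_weight_g / fresh_weight_g

print(round(dry_matter_percent(fresh_weight_g=120.0, dry_weight_g=47.0), 1))  # ~39.2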
Statistical analyses. The triangle and preference tests were analyzed using a binomial distribution. Critical values to determine the required number of correct or agreeing choices were retrieved from statistical tables [27]. Sample sizes were sufficient to perform the analyses at the location level. Data from the acceptance test for individual attributes were treated as ordinal data. Non-parametric tests were used to assess whether there were significant differences between locations and varieties: the Mann-Whitney U test for independent samples and the Wilcoxon signed-rank test for paired samples. Mean liking was defined as the average of all attributes assessed.
Fig 1. About the model: the model predicts behavior based on the construct 'prior behavior' (frequency of serving the child OFSP in the past when available). Prior behavior is linked to 'behavioral intention' (intention to serve the child OFSP in future), which was the original predictor in the model of Sun et al. Both constructs can be influenced by anticipated affect (feelings of regret when not serving the child OFSP). All other constructs are divided into three categories. 'Background and perception' consists of the constructs 'knowledge' (on vitamin A and OFSP), 'perceived susceptibility' (perceptions of susceptibility to vitamin A deficiency), 'perceived severity' (perceptions of the severity of vitamin A deficiency) and 'health value' (perceptions of the importance of health in general). These are followed by constructs around 'beliefs and attitudes': 'health behavior identity' (perception that it is healthy and good to eat OFSP), 'attitudes toward behavior' (feelings towards serving OFSP to their child) and 'perceived barriers' (perceived sensory (1) or agricultural (2) barriers that prevent the caretaker from serving OFSP to the child). The last category covers the external factors: 'subjective norms' (perceived social pressure on serving OFSP to their child), 'control beliefs' (perceived ability to make decisions in the household) and 'cues to action' (external triggers, either (1) health-related or (2) activities that encourage serving OFSP to their child). *p<0.05, **p<0.01 (both two-tailed); Spearman's correlation coefficients between constructs were calculated.
For the cultural acceptability survey, multiple-item constructs were tested for reliability using Cronbach's α and the item-total correlation. Items were removed from the analysis if the item-total correlation was low (<0.3) or if the Cronbach's α of the construct increased markedly upon removal of the item. In total, 25 items were excluded. For each respondent, a total score per construct was calculated by adding all individual item scores. Spearman's correlation was used to calculate bivariate associations between constructs. Multiple linear regression modeling was performed to build the models. Models were adjusted for interviewer, education level, age and location (where applicable).
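For readers wanting to reproduce this kind of reliability screening, a rough sketch is given below (not the authors' code; it assumes NumPy is available, and the response matrix is randomly generated, so the resulting α will be near zero rather than in the 0.53-0.81 range reported later):

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total_correlations(items):
    # Correlate each item with the total of the remaining items
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return [np.corrcoef(items[:, j], total - items[:, j])[0, 1] for j in range(items.shape[1])]

rng = np.random.default_rng(0)
construct = rng.integers(1, 6, size=(302, 5))   # 302 caretakers, 5 hypothetical Likert items
print(round(cronbach_alpha(construct), 2))
print([round(r, 2) for r in corrected_item_total_correlations(construct)])   # drop items with r < 0.3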
Sensory evaluation
To investigate the sensory acceptability of the four different sweetpotato varieties, a total of 270 adults and 60 children participated in a range of different tests. The overall mean age of the adults was 31.9 (±10.7) years; slightly more women were included (63%). For the children, the mean age was 4.5 ±0.8 years. The percentage of participants who reported growing OFSP differed significantly between areas: 36% in Lilongwe versus 72% in Mangochi. The consumed sweetpotatoes mainly came from participants' own production (52%), the market (33%), or other farmers (26%). Participants reported consuming sweetpotato mostly as a breakfast dish (98%), and to a much lesser extent at lunch or dinner (14% and 13%, respectively). The most commonly prepared dishes were boiled OFSP (74%), boiled OFSP mixed with peanut flour (called 'Futali', 18%) and roasted OFSP (6%). As a benefit of sweetpotato, 59% of the participants reported health- and nutrition-related reasons, and 17% reported it as a source of income. The main reason for not consuming more sweetpotatoes was limited availability (59%). In the triangle test, blindfolded participants in both areas were able to perceive the difference between the orange and control sweetpotato samples: in total, 49 out of 66 participants identified the odd sample, where 29 correct answers were needed for a significant difference (p<0.05) (Table 1).
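The critical value used here can be derived from the binomial distribution rather than looked up in a table. The sketch below (not the authors' code; it assumes SciPy is available) computes the smallest number of correct identifications significant at the 5% level for 66 tasters under a 1-in-3 guessing probability, which should reproduce the value of 29 quoted above, and also gives the chance probability of the 49 correct answers observed:

from scipy.stats import binom

def triangle_critical_value(n_tasters, alpha=0.05, p_guess=1/3):
    # Smallest k such that P(at least k correct by pure guessing) <= alpha
    for k in range(n_tasters + 1):
        if binom.sf(k - 1, n_tasters, p_guess) <= alpha:
            return k
    return None

print(triangle_critical_value(66))   # expected to print 29
print(binom.sf(48, 66, 1/3))         # P(>= 49 correct by chance): vanishingly small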
Adult consumers display preference for the non-OFSP control variety over the OFSP variety
For the preference test, all participants (n = 270 adults, n = 60 children) were asked about their preference for either the yellow-fleshed control variety (Kenya) or the OFSP variety (Fig 2 and S1 Table). Participants who preferred the OFSP variety mainly cited its sweetness (22%), odor (19%) and taste (19%). Two hundred and eleven adult participants (76%) favored the control variety because of its sweetness (36%), starchiness (24%), and odor (13%). Color was also mentioned as a reason for preferring one of the varieties: 8% for the OFSP variety and 3% for the control variety. Among the children (n = 60), the OFSP variety was preferred by 35 children, although this was not statistically significant (p>0.05) (Fig 2 and S1 Table). No significant differences in preference were found between locations for either adults or children.
To further investigate the difference in liking between varieties for seven different attributes, an acceptance test with four different varieties of sweetpotato was conducted (S2 Table). The color of the Zondeni and Kadyaubwerere varieties was rated highest, followed by the control variety Kenya; among these three no significant differences were found (p>0.05). Rated lowest was the Chipika variety, which was not rated significantly differently from the Kenya variety (p>0.05). For smell and texture only small differences were found in the liking of these attributes; for all varieties the median was 4. Hedonic scores for starchiness for the varieties Kenya and Zondeni (median 4) were significantly higher than for Chipika and Kadyaubwerere (median 3) (p<0.05). Sweetness was scored significantly differently across all varieties (p<0.05): Kenya was rated highest with a median of 5, followed by Zondeni (4), Kadyaubwerere (3) and Chipika (3). For the attribute taste, the median score for Kenya was highest (5) and significantly higher than for the other three varieties (p<0.05). There was a significant difference in the scores for overall liking across all varieties (p<0.05), with Kenya the preferred variety (5). No differences in liking for any of the seven attributes were found between locations for the Kenya variety. For the OFSP varieties Chipika and Kadyaubwerere no difference was found in overall liking between Lilongwe and Mangochi districts, but all the other attributes were evaluated significantly differently. For the Zondeni variety a significant difference in hedonic scores between areas was found for all attributes except taste (p<0.05).
Zondeni is the highest rated OFSP variety
After combining all attributes into a mean score for each variety (Table 2), a significant difference was found between all varieties. Overall, the control variety Kenya is liked most. In Mangochi however, the OFSP variety Zondeni is rated higher than Kenya, but this difference is not significant. Analysis of the difference in liking between the control and the OFSP varieties shows that there is on average a 0.50-point higher rating given for the control variety. There is a significant location effect between the areas, where the scores that were given for OFSP in Mangochi are much closer to the scores given for the control varieties (mean difference -0.22) than in Lilongwe (mean difference -0.76).
Cultural acceptability study identifies opportunities and barriers for including OFSP in children's diets by caretakers
To investigate the cultural acceptability of the OFSP varieties, a total of 302 caretakers were interviewed in a cultural acceptability study. The mean age of study participants was 31.9 (±9.1) years, and almost all were women (99.7%). A household consisted on average of 5.9 persons (±1.8), and 74% of the caretakers had more than 3 children. In total, 23.6% of the caretakers were illiterate, with most having attended only (part of) primary school (71%). The main household income source was farming (56%). The majority of caretakers had consumed OFSP before (92.5%), with 20% reporting having grown OFSP in the last season and 60% reporting having grown any variety of sweetpotato in the last season. Additional descriptive information on the study population and differences between locations can be found in Table 3.
Sixty percent of caretakers reported that their children ate OFSP at least once a week when it was in season (between April and August), whereas 97% had the intention to feed their child OFSP once a week or more. The intention of feeding children OFSP was thus much higher than current in-season consumption. The initial outcome construct 'behavioral intention' showed very little variation in responses, which made it unsuitable for linking the different constructs of the model to. Therefore, we used the construct 'prior behavior' as the outcome measure instead, which had a more even distribution of responses and was included in the adjusted model in Fig 1. Furthermore, over half of the caretakers agreed that OFSP is rich in vitamin A (59%), that vitamin A could improve eyesight (62%), and that it could prevent diseases (60%). Most caretakers agreed that children between 2-5 years old are at risk of developing vitamin A deficiency (69%), and almost the same proportion of caretakers considered their own child to be at risk of developing vitamin A deficiency (65%). Most caretakers agreed with the statements that 'vitamin A deficiency makes the child more frequently ill' (77%) and 'lack of vitamin A can lead to stunted growth of my child' (71%). Caretakers acknowledged that it was very important to them that their child can see properly during dusk (96%) and has good health (99%). The majority of the caretakers (88%) were convinced that eating OFSP was good for their child, while the remaining 12% gave a neutral response to this question.
The majority of participants agreed that OFSP has an attractive color (86%) and that it tastes good (96%). Over one third of respondents indicated that they would rather sell OFSP than consume it themselves (37%). Caretakers indicated that provision of OFSP vines would make them decide to cultivate and prepare OFSP for their child (98%) and that information sessions on the benefits of OFSP would convince them to feed OFSP to their children (94%). Most caretakers agreed that other cues to prepare OFSP for their child would be (a) if their child was sick, (b) if the child had vitamin A deficiency, or (c) if the child had problems seeing properly at dusk or dawn (65-71%). The opinions that most influenced caretakers' decisions on food preparation were those of health workers (96%), the child growth centers (90%) and health extension workers (80%). The opinions of friends and neighbors were much less valued in making decisions on what food to prepare for their children (both 53%). Furthermore, the majority of caretakers indicated that they would regret it if they did not give OFSP to their child (91%). Table 4 provides an overview of the different constructs. Cronbach's α scores ranged from 0.53 to 0.81, demonstrating medium reliability for most of the constructs; median construct scores ranged from 3 to 51. Fig 1 shows the correlations between the different constructs of the model. Within the background and perception section, significant correlations were found between health behavior identity and the constructs 'knowledge' (r = 0.230), 'perceived susceptibility' (r = 0.168) and 'perceived severity' (r = 0.219, all p<0.01). Within the beliefs and attitudes section, 'health behavior identity' was correlated with the constructs 'attitude toward behavior' (r = 0.159, p<0.01) and 'perceived barriers-1' (r = -0.118, p<0.05); perceived barriers-1 are barriers related to the color, taste and starch content of the OFSP. Of these constructs, only 'attitude toward behavior' was significantly correlated with the construct 'prior behavior' (r = 0.149, p<0.05). For the external factors, the constructs 'subjective norms' (r = 0.158) and 'cues to action-2' (r = 0.187) were significantly correlated with prior behavior (p<0.01); cues to action-2 relate to activities and recommendations by others promoting OFSP. The construct 'anticipated affect' was significantly correlated with 'behavioral intention' (r = 0.145, p<0.05), suggesting that greater regret about not giving OFSP to the child leads to a higher behavioral intention to give OFSP to the child. However, the construct 'anticipated affect' was not correlated with prior behavior (r = -0.012). No significant correlation was found between the constructs 'behavior' and 'behavioral intention'.
Attitudes towards behavior and subjective norms can predict prior behavior in relation to caretakers serving their child OFSP
To further assess the relationships between multiple constructs and to investigate which constructs can predict prior behavior, multiple linear regression was used. An overview of the relative contributions of the constructs to prior behavior, using three different models, is provided in Table 5. Overall, the constructs could explain only a small percentage of the total variance in predicting the behavior. Model 1, 'background and perception', explained 10% of the variance in 'health behavior identity'; no significant predictors were found in the total model, nor when it was analyzed at the location level. Model 2, using the internal factors, explained 24% of the variance in the prior behavior of caretakers giving OFSP to their children. Only 'attitudes towards behavior' (β = 0.14) was a significant predictor (p = 0.01) of prior behavior. When the model was fitted for Lilongwe only, 'health behavior identity' (β = 0.24) was a significant predictor of prior behavior (p = 0.01); for Mangochi, 'attitude toward behavior' (β = 0.22) was a significant predictor (p = 0.01). For model 3, predicting prior behavior using the external factors, the construct 'subjective norms' was a significant predictor (β = 0.25, p = 0.00), both overall and when the model was run for either of the two research locations.
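A rough sketch of how such a model could be fitted (not the authors' code; it assumes pandas and statsmodels are available, and the file name and column names are hypothetical placeholders) is given below. Standardizing the outcome and the continuous predictors yields coefficients on roughly the same scale as the standardized betas reported above; an interviewer adjustment could be added as a further categorical term in the same way:

import pandas as pd
import statsmodels.formula.api as smf

def standardize(series):
    return (series - series.mean()) / series.std(ddof=0)

df = pd.read_csv("construct_scores.csv")   # hypothetical per-caretaker construct scores
for col in ["prior_behavior", "subjective_norms", "control_beliefs",
            "cues_to_action_1", "cues_to_action_2", "age"]:
    df[col + "_z"] = standardize(df[col])

# Roughly in the spirit of model 3: external factors, adjusted for age, education and location
model = smf.ols(
    "prior_behavior_z ~ subjective_norms_z + control_beliefs_z "
    "+ cues_to_action_1_z + cues_to_action_2_z + age_z + C(education) + C(location)",
    data=df,
).fit()
print(model.summary())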
Discussion
For biofortified crops to have an impact on micronutrient intake, biofortified varieties must be consumed in sufficient quantities by the vulnerable target populations, in particular to improve maternal and child health. Beyond dissemination barriers (e.g., ineffective seed systems) that can limit access by smallholder farmers to healthy planting materials (seeds, vines) of new biofortified varieties [36], further barriers to acceptability and sustained consumption can arise if biofortified varieties do not have equivalent or improved sensory characteristics, or if the caretakers of children do not prefer them. The first objective of our study was to assess the sensory acceptability of the OFSP varieties in comparison with a control variety. The difference in preference for the varieties per location highlights the importance of conducting sensory evaluation research in different areas, so that variety dissemination initiatives can be adjusted to local preferences where possible, particularly to increase the acceptability of varieties.
In our study, caretakers in Lilongwe significantly preferred the yellow-fleshed variety over the OFSP varieties, whereas children did not significantly prefer either. In contrast, research amongst children and mothers in Tanzania found higher mean acceptability scores for OFSP than for pale-fleshed sweetpotato varieties, although children gave significantly lower scores than mothers [29]. These contrasting results could potentially be due to regional differences in acceptability, and/or to differences in dry matter percentage, color, flavor, smell and other important sensory characteristics. The preference for the control variety expressed in our study group may be of concern when promoting OFSP in Malawi, since foods liked by mothers are more likely to be offered to their children [37]. This would decrease the exposure of children in our study population to OFSP, as the mothers are the primary caregivers. To address this potential barrier, it would be important to have an effective strategy to promote OFSP amongst mothers, making it more likely they will feed it to their children.
Dry matter has been identified as an important varietal trait when comparing OFSP to other pale-fleshed sweetpotato varieties, and has been reported as an important attribute for the liking of OFSP by consumers [38,39]. The OFSP varieties used for the triangle test differed in their dry matter content, which has been shown to make it easier to identify the odd sample [27]. However, our aim in this study was to compare the most promising OFSP variety (based on yield and beta-carotene content), Kadyaubwerere, with the control variety used by farmers and in households. According to a study in Kenya, children have a preference for OFSP varieties with lower dry matter content, whereas adults prefer high dry matter content (>27%) [40], which was not recapitulated in our study. Analysis of the dry matter content of the OFSP varieties used in our study demonstrated that the control variety Kenya had the highest dry matter content (39.2%), followed by Chipika and Zondeni (respectively 34.6% and 34.3%), with the lowest dry matter content found for Kadyaubwerere (29.8%). In our study, the dry matter content factor alone cannot explain the difference in liking between Chipika (lowest rated OFSP) and Zondeni (highest rated OFSP). Other studies on sensory characteristics of OFSP and cream fleshed sweetpotato varieties have concluded that major varietal differences are differences in color, dry mass, sweet flavor and maltose content [38]. These characteristics could likely also explain the differences in liking of the OFSP varieties in our tests. Therefore, further research is needed to test the relationship between the hedonic test results with sensory characteristics of the different varieties to be able to explain the differences in liking in more detail. It is also important to take into account that textural traits can potentially be influenced by genotype-environment interactions, which can complicate the testing and selection of varieties for consumer acceptance and breeding for improved textural traits [39].
Another angle that provides opportunities for increasing the sensory acceptability of OFSP is researching the effect of information provision. Research has shown that nutrition information combined with tasting OFSP is positively weighted and integrated by the consumer to form emotions that can be associated with product acceptance [41]. A review summarizing acceptance studies on biofortified crops concluded that information on health benefits is an important determinant of acceptance [42].
Our acceptance tests revealed high scores (means >3) for all sweetpotato varieties, which indicates that all of the sweetpotato varieties are accepted. Since uptake of OFSP among a population whose source of income is mainly farming depends not only on sensory acceptability but also on production and farming system attributes (e.g., yield, resistance to pests and diseases) [43,44], such additional factors determining adoption for cultivation and marketing should also be taken into account (see S3 Table). While the sensory acceptability of the Zondeni variety was high, the potential yields for Chipika and Kadyaubwerere are 35 t/ha under ideal circumstances, whereas Zondeni's yield potential is only 8-16 t/ha. Therefore, from a food security point of view based on the aggregate supply of sweetpotato, the promotion of the Zondeni variety amongst smallholders might not be justified. On the other hand, from a nutritional perspective, the beta-carotene levels of the different OFSP varieties are also an important factor to take into account. The Chipika variety has a much lower beta-carotene content (3500 μg/100g) than the other OFSP varieties Kadyaubwerere and Zondeni (respectively 8900 and 9000 μg/100g), while the control variety contains only 770 μg/100g (see S3 Table for more data on characteristics of the different varieties used). We acknowledge that measurement of beta-carotene using the HPLC method would have been preferable [42], since estimations made with color charts are less precise. However, even with our estimation approach, the differences in potential yield and beta-carotene content of the different varieties (resulting in different nutritional yields) are clear.
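To make the nutritional-yield point concrete, the following back-of-the-envelope sketch (illustrative only; it uses the colour-chart beta-carotene estimates quoted above together with the HarvestPlus intake target of 3200 μg beta-carotene per day for a child aged 4-6 years cited in the Introduction, and ignores cooking losses and bioconversion efficiency) computes the approximate daily portion needed per variety:

TARGET_UG_PER_DAY = 3200   # HarvestPlus breeding/intake target cited in the Introduction
beta_carotene_ug_per_100g = {
    "Kenya (control)": 770,
    "Chipika": 3500,
    "Kadyaubwerere": 8900,
    "Zondeni": 9000,
}
for variety, content in beta_carotene_ug_per_100g.items():
    grams_needed = 100 * TARGET_UG_PER_DAY / content
    print(f"{variety:16s} ~{grams_needed:4.0f} g/day")   # e.g. Zondeni ~36 g, control ~416 g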
It is possible that the preference for the Zondeni variety is due to an inertia effect: it was the first OFSP variety introduced, 3 years before the other two OFSP varieties tested, so participants had had more exposure to it. It is also possible that the much higher yield of the other OFSP varieties compared with Zondeni might act as a driver of smallholder farmer adoption that is sufficient to override the relatively small differences in liking between the varieties.
The second objective of our study was to take a cultural acceptability approach to identify the constructs that contribute most to the behavior of caretakers serving their children OFSP. Our findings indicate that this behavior was most strongly correlated with the constructs 'subjective norms' and 'attitudes toward behavior'. The discrepancy between intention and prior behavior reflects the caretakers' difficulty in implementing the behavior, possibly owing to various personal and environmental control factors [45,46]. Depending on the type of behavior, the strength of the intention-behavior relationship can vary widely, and the discrepancy is larger when multiple steps have to be taken before the intention can be realized as the behavior [45].
Using prior behavior as an outcome measure, the constructs could explain only a small portion of the total variance in predicting the caretakers' frequency of serving OFSP to their children (23-29%). This is in concordance with other studies [18,[21][22][23], which found similarly low explained variance when using intention as the outcome measure. Therefore, the construct 'anticipated affect' was added to the model, which explained an additional 3% of the total variance. Despite its small contribution, anticipated affect is important to take into account [33]. However, our study reveals that other, as yet unknown, factors will need to be identified to explain the remaining variance.
Most of the respondents tended to agree with the statements in the cultural acceptability survey, and as a result, scores were high relative to the possible ranges. This high level of agreement with the statements can have several explanations. Firstly, it might be related to unfamiliarity with the behavior. We attempted to prevent this unfamiliarity effect by enrolling only people in the survey who knew OFSP, by selecting areas where it had been introduced by ongoing agri-development programs, and by including both negatively and positively phrased questions. The intention to consume OFSP was very high, but at the same time access to OFSP was low, since the roots were not widely available in markets and the planting material was hard to obtain. Unfamiliarity with the behavior makes it less likely that respondents hold strong beliefs about the statements, and they may therefore have had difficulty deciding on their level of agreement or disagreement. Secondly, another important factor that might have influenced our findings is that, in general, respondents attempt to understand the goal of the research in order to tailor their responses in ways they hope will benefit themselves, their family or their community in future [47]. By giving positive responses, respondents might have hoped that the survey would show their community to be a good place to continue OFSP programming and so receive planting material or more support.
Concerning the constructs within the background and perception section, the construct 'health behavior identity' was significantly correlated with the internal factors 'knowledge', 'perceived susceptibility' and 'perceived severity'. However, none of these constructs were predictors in the model. From our study we can conclude that specific knowledge of vitamin A and the threats of vitamin A deficiency is more likely to positively influence caretakers' behavior of serving OFSP to their children than more general knowledge of health, which is reflected by the construct 'health value'.
The prior behavior of caretakers serving their child OFSP was predicted (p<0.05) by the constructs 'attitude towards behavior' and 'subjective norms'. The construct 'attitudes toward behavior' was a good predictor within the section beliefs and attitudes, which confirms results of other studies [22,48]. These attitudes were determined by questions about beliefs on serving the child OFSP and the importance of these beliefs for the caretakers. The most important attitudes were the (sweet) taste, the attractive appearance, that it can cure and protect against diseases, and that it is easy to prepare.
The construct 'subjective norms' was a good predictor within the external factors. It reflects social pressure, which can be understood as the influence other people have on whether a caretaker will serve OFSP to their children or not. In particular, the opinions of extension workers, health workers and parents were highly valued, according to the responses. It has been highlighted that Malawi is a collectivist society [49], meaning individuals may put the priorities of the group above those of the individual [50]. In addition, the values of the extended family and the community have a major influence on the behavior of the individual. This is important to take into account when promoting OFSP: the focus should not only be on the positive attitudes and knowledge of women, but should also include a wider range of social 'influencers'. Other studies have also found that subjective norms were correlated with intention but were not a good predictor of it [22,51], or found no correlation at all [18,21].
The perceived economic and health benefits of OFSP in Malawi have been studied among OFSP farmers [52]. The health benefits most frequently mentioned were increased energy, improved eyesight and the perception that OFSP is good for healthy bodies. From an economic perspective, the benefits cited were the ability to invest the income earned from selling OFSP (vines) in housing, livestock and food. In addition, women mentioned increased self-esteem through the increased income. The most important benefits of producing and consuming OFSP can be used in information and nutrition sessions where knowledge on OFSP is communicated to potential consumers and/or farmers. These benefits would also help to create positive attitudes towards the behavior of consuming OFSP and to increase the already high acceptability of OFSP.
Conclusions
Overall, our study reveals that biofortified OFSP varieties are well accepted in Lilongwe and Mangochi districts in Malawi from both a cultural and a sensory perspective. However, we find that there is a preference for the yellow-fleshed control variety and for the Zondeni variety, which is rich in beta-carotene. Our cultural acceptability analysis indicates that attitudes toward behavior and subjective norms were correlated with, and important predictors of, the caretakers' behavior of serving their child OFSP. Our study findings provide guidance and direction for improving ongoing and planned programs to increase the uptake of OFSP in Malawi among households with children. We consider that there is a need for a follow-on in-depth study quantifying the sensory characteristics (sweetness, maltose concentration, dry matter) of the OFSP varieties and providing a more accurate quantification of their beta-carotene levels, so that favourable traits can be unravelled by linking this information to the hedonic test results. Our results also indicate that there is both a need and an opportunity to promote a more diversified use of OFSP, as it is currently almost exclusively consumed as a breakfast snack (where the OFSP are mostly prepared by boiling) or in a dish called Futali. The high energy density of OFSP should be taken into account, to make sure it is a good and nutritious replacement when increasing intake or diversifying its use. The ongoing programs for promoting uptake of OFSP varieties in Malawi will need to decide which specific OFSP varieties to promote based on criteria that include sensory acceptability, beta-carotene content and agricultural characteristics. For increasing adoption and consumption of OFSP to improve maternal and child health in Malawi, there is an additional opportunity to focus on positive attitudes and to identify and include important influencers around the caretaker in the promotion strategy, to increase the frequency of caretakers serving OFSP to their children.
Overall, while biofortified crops such as OFSP hold major promise for combatting hidden hunger (micronutrient deficiencies), our study highlights that considering the sensory and cultural attributes that influence both acceptability and consumption amongst smallholder farmers and households can improve the impact pathways for biofortified crops.
Supporting information
S1 Table. Results for the preference test with OFSP and a control yellow-fleshed sweetpotato variety among adults (n = 270) and children (n = 60) per location. (TIF)
S2 Table. Results for the acceptance test with 3 OFSP varieties and a control variety, per area and total. (TIF)
S3 Table. Beta-carotene content and dry matter (%) content for the various sweetpotato varieties. | 2018-11-09T20:33:54.219Z | 2018-10-18T00:00:00.000 | {
"year": 2018,
"sha1": "51e124ef37ca279f86ba5c514771b364c0edc3e9",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0204754&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51e124ef37ca279f86ba5c514771b364c0edc3e9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
31663110 | pes2o/s2orc | v3-fos-license | Tripartite Motif-containing 33 (TRIM33) Protein Functions in the Poly(ADP-ribose) Polymerase (PARP)-dependent DNA Damage Response through Interaction with Amplified in Liver Cancer 1 (ALC1) Protein*
Background: PARP activation at sites of DNA breaks leads to recruitment of chromatin remodeling enzymes such as ALC1. Results: TRIM33 associates with ALC1 after DNA damage and regulates its retention at DNA breaks. Conclusion: TRIM33 has a role in the PARP-dependent DNA damage response pathway. Significance: The role of TRIM33 in the DNA repair may contribute to its known tumor suppressor function. Activation of poly(ADP-ribose) polymerase (PARP) near sites of DNA breaks facilitates recruitment of DNA repair proteins and promotes chromatin relaxation in part through the action of chromatin-remodeling enzyme Amplified in Liver Cancer 1 (ALC1). Through proteomic analysis we find that ALC1 interacts after DNA damage with Tripartite Motif-containing 33 (TRIM33), a multifunctional protein implicated in transcriptional regulation, TGF-β signaling, and tumorigenesis. We demonstrate that TRIM33 is dynamically recruited to DNA damage sites in a PARP1- and ALC1-dependent manner. TRIM33-deficient cells show enhanced sensitivity to DNA damage and prolonged retention of ALC1 at sites of DNA breaks. Conversely, overexpression of TRIM33 alleviates the DNA repair defects conferred by ALC1 overexpression. Thus, TRIM33 plays a role in PARP-dependent DNA damage response and regulates ALC1 activity by promoting its timely removal from sites of DNA damage.
Higher-order chromatin structure acts as a major barrier for the detection and repair of DNA damage. Rapid and efficient modification of chromatin facilitates the accessibility of damaged DNA to the DNA repair machinery (1,2). Breaks in DNA are known to result in rapid activation of the poly(ADP-ribose) (PAR) polymerases PARP1 and PARP2, which catalyze the assembly of PAR chains onto chromatin substrates (3-10). PAR modification at damage sites is believed to facilitate DNA repair by attracting PAR-binding DNA repair factors and promoting local chromatin relaxation (11-16).
PARP1/2 activity is required for efficient single-strand break repair, with PARP inhibition causing increased entry of unrepaired breaks into S phase, leading to replication fork stalling and DNA double-strand break equivalents (9,17,18). PARP1 and PARP2 are also activated at stalled replication forks, where they play a role in efficient fork restart (19). Upon synthesis by PARP activity, PAR can be rapidly degraded by poly(ADP-ribose) glycohydrolase (PARG). Deletion of the nuclear isoform of PARG leads to DNA repair defects and genomic instability, demonstrating that regulation of PAR production and degradation is critical for efficient DNA repair (20). The rapid assembly and disassembly of PAR and PAR binding proteins at DNA damage sites implies that this process is tightly controlled in the cell and raises the possibility that yet-unknown factors are involved in mediating and regulating these events.
DNA damage-induced PARylation can directly recruit DNA repair proteins such as XRCC1 (X-ray repair cross-complementing protein 1) and APLF (aprataxin- and PNKP-like factor), which contain specific PAR-binding motifs. The chromatin remodeling enzyme Amplified in Liver Cancer 1 (ALC1) is recruited to sites of DNA damage by directly binding PAR through its macro domain. PARP1 activity is thus required to locally target ALC1-dependent nucleosome remodeling, which may facilitate local chromatin relaxation and repair (12,21,22,23). ALC1 is an oncogene that is amplified in some solid tumors, including hepatocellular carcinoma (49). Overexpression of ALC1 leads to altered chromatin structure and increased sensitivity to intercalating agents, such as phleomycin, that induce breaks preferentially at linker DNA (12,21).
Tripartite motif-containing 33 (TRIM33) is a member of the TIF1 family of transcription regulators, which possess a RING domain, two B-boxes, and a coiled-coil domain at the N terminus as well as the plant homeo domain (PHD) and Bromo domains at the C terminus (Fig. 2A) (24). TRIM33 has been implicated previously in transcriptional regulation during hematopoiesis and interactions with elongation factors (25-28). TRIM33 was also shown to regulate the TGF-β pathway by interacting with both SMAD2/3 and SMAD4 (26, 29-31). TRIM33 helps recruit SMAD2/3 to chromatin via interaction of its PHD and Bromo domains with histone H3 trimethylated at lysine 9 (H3K9me3) and histone H3 acetylated at lysine 18 (H3K18ac), respectively. In embryonic stem cells, binding of the TRIM33 PHD domain to H3K9me3 displaces HP1-γ from regions of silenced chromatin, enhancing the transcriptional activation ability of the SMAD2/3-SMAD4 complex at target promoter regions (32). Histone binding is also required for TRIM33 ubiquitin ligase activity and its transcriptional repression function (33). TRIM33 may play a dual role in TGF-β signaling, initially enhancing the transcriptional activation function of the SMAD2/3-SMAD4 complex and then promoting the dissociation of this complex from chromatin (32,33).
TRIM33 also functions as a tumor suppressor in multiple tissues. Targeted knockout of TRIM33 in liver leads to hepatocellular carcinoma in mice (34), whereas targeted knockout of TRIM33 in hematopoietic precursors leads to myeloproliferative disorders similar to chronic myelomonocytic leukemia (35,36). Loss of TRIM33 also cooperates with K-ras activation to induce cystic tumors and adenocarcinomas of the pancreas in mice (37). Although the mechanism underlying the tumor suppression function of TRIM33 in these tissues remains unclear, a recent study suggests that this tumor suppressor function is separate from its functions in regulating SMAD4 (38).
Here we identify TRIM33 as an ALC1-interacting protein that is required for efficient DNA repair. We show that TRIM33 is rapidly recruited to sites of DNA breaks in a PAR- and ALC1-dependent manner. We further demonstrate that TRIM33 is required for the timely dissociation of ALC1 from damaged chromatin. Our results raise the possibility that TRIM33 acts to regulate ALC1 activity at DNA lesions. Indeed, we show that increased sensitivity to certain DNA-damaging agents associated with ALC1-overexpressing cells is reversed by concomitant overexpression of wild-type TRIM33. We propose that TRIM33 functions during the PARP-dependent DNA damage response to promote timely removal of ALC1 from damaged chromatin. Thus, TRIM33 regulates ALC1 function in the DNA damage response to facilitate efficient DNA repair.
Laser Microirradiation-Laser microirradiation was carried out as described previously with some modifications (39). To generate subnuclear DNA damage, a laser was focused with an LD ×40, NA 0.6 Achroplan objective to yield a spot size of ~1 μm. The laser output was set to 35% to generate localized damage assisted with PALM Robo software supplied by the manufacturer (P.A.L.M. Microlaser Technologies, Bernried, Germany). Approximately 50 cells were microirradiated in each experiment.
Immunofluorescence Microscopy-Cells were fixed with 4% buffered paraformaldehyde for 10 min, followed by permeabilization with 0.5% Triton X-100. Cells were then incubated for 1 h with the appropriate primary antibodies diluted in 5% goat serum. Cells were then washed and incubated with secondary antibodies coupled with FITC and rhodamine for immunodetection and mounted in Vectashield with DAPI (Vector Laboratories). Images were taken with a ×40 objective using a Nikon Eclipse 80i microscope.
PAR Binding Assay-A PAR binding assay was performed as described previously (12). Proteins were dot-blotted onto a nitrocellulose membrane and blocked with TBST (Tris-buffered saline and Tween 20) buffer supplemented with 5% milk. The nitrocellulose membrane was then incubated with radiolabeled PAR polymer in TBST buffer. The membrane was washed and subjected to autoradiography.
Protein Purification and Mass Spectrometry-Purification of ALC1-associated immunocomplexes was performed as described previously (12). Briefly, stable HEK293T FLP-In FLAG (control) and ALC1 cells were grown in roller bottles, pelleted, washed in PBS, and lysed for 10 min at 4°C in sucrose buffer (10 mM HEPES (pH 7.9), 0.34 M sucrose, 3 mM CaCl2, 2 mM magnesium acetate, 0.1 mM EDTA, and protease inhibitors) containing 0.5% Nonidet P-40. Nuclei were then pelleted by centrifugation at 3900 × g for 20 min. Residual cytoplasmic contamination was removed by washing with sucrose buffer and subsequent centrifugation at 3900 × g for 20 min. Nuclei were resuspended in nucleoplasmic extraction buffer (20 mM HEPES (pH 7.9), 3 mM EDTA, 10% glycerol, 150 mM potassium acetate, 1.5 mM MgCl2, 1 mM DTT, and protease inhibitors), homogenized, and rotated for 20 min at 4°C. The chromatin-enriched fraction was pelleted by centrifugation at 13,000 rpm for 30 min. The pellet was resuspended in digestion buffer (150 mM HEPES (pH 7.9), 1.5 mM MgCl2, 150 mM potassium acetate, and protease inhibitors), homogenized, and incubated with benzonase (25 units/μl stock) for 1 h at room temperature. The digested chromatin was cleared by centrifugation at 38,000 × g for 30 min. The soluble chromatin extract was recovered and used for anti-FLAG immunoprecipitation with anti-FLAG M2-agarose. Immunoprecipitated proteins were eluted by 3× FLAG peptide and then precipitated with 10% TCA. Proteins were then trypsinized, purified, and analyzed by LC-MS/MS on an LTQ mass spectrometer (Thermo).
Immunoprecipitation of the TRIM33-associated Complex-Endogenous coimmunoprecipitation using TRIM33 antibody was performed according to the protocol of the manufacturer using a nuclear complex coimmunoprecipitation kit (Active Motif). HeLa cells (8.8 × 10^6) were washed with ice-cold PBS/phosphatase inhibitors and collected by gentle scraping in the same buffer. Cells were centrifuged and resuspended in hypotonic buffer followed by incubation for 15 min and lysis using detergent. After centrifugation, the nuclear pellet was resuspended in complete digestion buffer with an enzymatic shearing mixture and incubated for 90 min at 4°C. After centrifugation, supernatants were collected, protein was quantitated, and an equal quantity of protein was mixed with coimmunoprecipitation buffer and precleared with protein A-agarose beads for 2 h. Protein extracts were then incubated overnight at 4°C with antibody against TRIM33. Protein A-agarose beads were then added and incubated for protein binding for 2 h at 4°C. Beads were then washed with wash buffer, and proteins were eluted by boiling the beads with 2× Laemmli buffer and loaded onto the SDS-PAGE gels. Resolved proteins were transferred to nitrocellulose membrane and incubated with the appropriate antibody. Aliquots of the extracts were processed directly for Western blotting as an input control.
MTS Assay-To evaluate the effect of TRIM33 on bleomycin sensitivity, 293T cells were transfected with control or TRIM33-specific shRNA. To study the effect of exogenously expressed WtTRIM33 on rescuing the bleomycin sensitivity of TRIM33 knockdown cells, cells were cotransfected with TRIM33 shRNA and FLAG-WtTRIM33. To study the effect of TRIM33 on rescuing the sensitivity of ALC1-overexpressing cells to bleomycin, FLP-In-WtALC1 cells were transfected with either empty vector or FLAG-tagged WtTRIM33. 48 h after transfection, cells were seeded into 96-well plates (3000 cells/well), treated with the indicated concentrations of DNA-damaging agents, and grown for another 3 days. The relative cell number was measured by incubating cells with Celltiter 96 Aqueous One Solution reagent (Promega) for 3 h and measuring the absorbance at 490 nm. The values were plotted as an average of two different experiments in the case of TRIM33 shRNA-treated cells and three experiments in ALC1 cell lines.
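The MTS readout described above reduces to a simple normalization: blank-corrected absorbance at 490 nm in treated wells is expressed relative to untreated control wells and then averaged across independent experiments. The following is a minimal sketch of that calculation; the function names, the explicit blank correction, and the example values are illustrative assumptions rather than the authors' analysis code.

```python
import numpy as np

def relative_cell_number(a490_treated, a490_untreated, a490_blank=0.0):
    """Blank-corrected A490 of treated wells relative to the untreated control."""
    treated = np.asarray(a490_treated, dtype=float) - a490_blank
    control = np.mean(np.asarray(a490_untreated, dtype=float) - a490_blank)
    return treated / control

def average_over_experiments(per_experiment_values):
    """Mean relative cell number across independent experiments (as plotted)."""
    return np.mean(np.asarray(per_experiment_values, dtype=float), axis=0)

# Hypothetical example: two experiments, three bleomycin concentrations each
exp1 = relative_cell_number([0.82, 0.60, 0.31], a490_untreated=[0.95, 1.01], a490_blank=0.05)
exp2 = relative_cell_number([0.78, 0.55, 0.35], a490_untreated=[0.92, 0.98], a490_blank=0.05)
print(average_over_experiments([exp1, exp2]))
```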
Comet Assay-A comet assay was carried out as described previously (12). FLP-In-FLAG, FLP-In-ALC1, and FLP-In-ALC1 cells transfected with WtTRIM33 were treated with the indicated concentrations of phleomycin. A Comet assay single cell gel electrophoresis kit (R&D Systems) was used to prepare samples according to the instructions of the manufacturer. Approximately 1 × 10^5 cells/ml were combined with molten low-melting agarose at 37°C at a ratio of 1:10 (v/v). 75 μl of this mix was pipetted onto a comet slide. The slides were left at 4°C for 10 min, immersed in lysis buffer for 30 min followed by 20-min incubation in alkaline solution, and subjected to electrophoresis at 300 mA for 20 min. Following electrophoresis, the slides were washed in 70% ethanol and left to dry overnight. Samples were then stained with SYBR Green and analyzed with image analysis software (Comet IV, Perceptive Instruments).
Western Blot Analysis-Nuclear extracts were prepared using an Active Motif kit following the instructions of the manufacturer. Equal amounts of protein were resolved on SDS-PAGE, and Western blotting was carried out using the indicated antibodies. Densitometric measurements of bands on Western blots were carried out using Adobe Photoshop software. Western blot images were inverted, and mean intensity was measured. This was then normalized to the background intensity, and normalized values were plotted as mean ± S.E.
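The densitometric procedure just described (invert the blot image, measure mean band intensity, normalize to background, and report mean ± S.E. across replicates) can be written out as a short calculation. The sketch below assumes the band and background regions have already been selected as pixel arrays from an 8-bit image; those assumptions and the function names are illustrative, not part of the published protocol.

```python
import numpy as np

def normalized_band_intensity(band_pixels, background_pixels, bit_depth=8):
    """Mean intensity of an inverted band region, normalized to local background."""
    max_val = 2 ** bit_depth - 1
    band = max_val - np.asarray(band_pixels, dtype=float)              # invert image
    background = max_val - np.asarray(background_pixels, dtype=float)  # invert image
    return band.mean() / background.mean()                             # normalize to background

def mean_and_sem(replicate_values):
    """Mean and standard error across replicate measurements (mean +/- S.E.)."""
    x = np.asarray(replicate_values, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)
```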
TRIM33 Interacts with ALC1 upon Induction of DNA Damage-Previous studies have shown that ALC1 is rapidly recruited to DNA damage sites via a macro domain-dependent interaction with PAR. The retention of ALC1 on damaged chromatin is short-lived, with a half-life of 2.5 min (12,21). The rapid association/disassociation kinetics of ALC1 from damage sites implies that its chromatin association is subject to strict regulatory control. To gain insight into the regulation of ALC1 during the DNA damage response, we sought to identify proteins that interact with ALC1 upon DNA damage. HEK293 control cells and cells expressing FLAG-tagged wild-type ALC1 (12) were either mock-treated or treated with bleomycin and subjected to immunoprecipitation using FLAG beads. Immunoprecipitates were then analyzed by LC-MS/MS to identify potential interacting proteins. In agreement with previous reports, peptides for PARP1, APLF, H2B, Ku70, Ku80, and DNA-PKcs were identified in ALC1 immunoprecipitates both before and after DNA damage but not in controls (12). Intriguingly, peptides for TRIM33 were also found in ALC1 immunoprecipitates but only from the bleomycin-treated samples (Fig. 1A). This observation raised the possibility that TRIM33 interacts with ALC1 upon induction of DNA damage. To confirm these findings, HeLa cells were mock-treated or treated with either 300 μM bleomycin for 1 h or 3 mM hydroxyurea for 3 h and then subjected to immunoprecipitation using antibody against endogenous TRIM33 and processed for Western blotting using antibodies specific for ALC1 and TRIM33. As shown in Fig. 1B, ALC1 is found in the TRIM33 immunoprecipitation only after induction of DNA damage with either bleomycin or hydroxyurea, indicating that TRIM33 and ALC1 interact in a DNA damage-dependent manner. Analysis of the cell extracts demonstrated that TRIM33 was present in undamaged cells and that its total protein level is not increased upon DNA damage.
TRIM33 Localizes to DNA Breaks-Given that TRIM33 interacts with ALC1 in response to DNA damage, we next sought to determine whether TRIM33 is also recruited to DNA damage acutely induced by UV laser scissors in HeLa cells sensitized with iododeoxyuridine. Endogenous TRIM33 rapidly localized to DNA damage, as seen by its colocalization with γH2AX, a marker of DNA strand breaks (Fig. 2, A-C). Unlike continuing exposure to agents such as bleomycin that induce ongoing DNA damage, laser scissors induce acute, transient DNA damage at a defined time point, making it amenable to high-resolution time course analysis. TRIM33 recruitment to DNA damage sites was rapid and short-lived. TRIM33 was detected within 5 min of damage induction and disassociated from DNA lesions between 15 and 20 min (Fig. 2, B and C).
To determine whether TRIM33 also localizes to sites of replication stress, HeLa cells were treated with either 0 or 3 mM hydroxyurea for 3 h, and localization of TRIM33 and γH2AX were evaluated by immunofluorescence. Hydroxyurea treatment led to induction of nuclear foci of TRIM33 that colocalized with γH2AX (data not shown). To determine which domains of TRIM33 contribute to its localization to DNA breaks, HeLa cells were transfected with FLAG-tagged constructs encoding either WtTRIM33 or a series of mutant constructs. These included a RING domain mutant (TRIM33CA) that has two cysteine-to-alanine mutations at amino acids 125 and 128, internal deletion mutants of the histone-binding PHD (TRIM33ΔPHD) or Bromo domain (TRIM33ΔBromo) (32), and the mutation of highly conserved residues in the PHD domain (TRIM33-PHD(AAA)) (40) (Fig. 2A). The TRIM33CA mutant has been shown to lack ubiquitin ligase activity (41), and TRIM33-PHD(AAA) has been shown to be unable to bind methylated histone residues (40). Cells were then subjected to UV laser scissor-induced DNA damage, and TRIM33 localization was monitored by immunofluorescence (IF). The WtTRIM33 and TRIM33CA mutants both localized rapidly to DNA breaks. In contrast, deletion of either the PHD or Bromo domain abrogates TRIM33 localization to sites of laser scissor-induced DNA breaks. TRIM33 PHD(AAA) also has greatly reduced localization to DNA breaks (Fig. 2, A, D, and E). Thus, the chromatin-binding PHD and Bromo domains are critical for robust localization of TRIM33 to sites of DNA breaks.
TRIM33 Knockdown Sensitizes Cells to DNA-damaging Agents and Activates Cell Cycle Checkpoints-To investigate a potential role for TRIM33 in the DNA damage response, we next examined the effect of depleting TRIM33 on the sensitivity of cells to DNA-damaging agents. HeLa cells were transfected with control sh/siRNA, with either of two different TRIM33 shRNAs, or one TRIM33 siRNA, and after 48 h, the cells were treated with different concentrations of bleomycin or hydroxyurea. TRIM33 knockdown enhanced sensitivity to both bleomycin (Fig. 3A) and hydroxyurea treatment (data not shown). Introduction of shRNA-resistant WtTRIM33 could rescue the bleomycin sensitivity of TRIM33 depletion (Fig. 3B). These results indicate that TRIM33 plays a role in DNA damage response.
Cells treated with TRIM33 shRNA also exhibited evidence of spontaneous unrepaired DNA damage, evident from the elevated levels of γH2AX (Fig. 3C). Furthermore, TRIM33 shRNA-treated cells also exhibited hallmarks of damage-induced checkpoint activation, including increased levels of p21 and enhanced phosphorylation of CHK2 on threonine 68 when compared with cells treated with control shRNA (Fig. 3C). Collectively, these results reveal that TRIM33 knockdown results in increased sensitivity to DNA-damaging agents, accumulation of spontaneous DNA damage, and activation of DNA damage-induced checkpoints.
FIGURE 1. TRIM33 interacts with ALC1 upon induction of DNA damage. A, HEK293 cells expressing either vector or FLAG-ALC1 were treated with either vehicle or bleomycin, subjected to immunoprecipitation using FLAG beads, and then processed by LC-MS/MS. The relative peptide counts for each condition are shown for ALC1, PARP1, APLF, histone H2B, Ku70, and TRIM33. As shown, TRIM33 peptides were identified only in the bleomycin-treated samples. B, endogenous interaction of TRIM33 and ALC1 upon hydroxyurea (HU) and bleomycin treatment. DNase-treated nuclear extracts from untreated (Un) cells or cells treated with HU (3 mM, 3 h) or bleomycin (300 μM, 1 h) were immunoprecipitated with anti-TRIM33 antibody. Immunoprecipitates were processed for Western blotting (WB) using antibodies to TRIM33 and ALC1 (top two panels, IP). Aliquots of nuclear extract were also directly processed for Western blotting with these antibodies (bottom two panels, Inputs).
TRIM33 Localization to DNA Breaks Is Dependent upon PARP Activity-TRIM33 localization to sites of DNA breaks was intact in Ataxia telangiectasia and Rad3-related (ATR)-deficient (GM18366) and DNA-PKcs-deficient (M059J) cells and was unaffected in HeLa cells treated with an ATM inhibitor, KU-55933 (42) (Fig. 3, D and E). The level of γH2AX at the UV laser scissor stripes was reduced but still detectable in these cells, as has been reported previously (10). These data demonstrate that recruitment of TRIM33 to sites of DNA damage is independent of ATM/ATR/DNA-PKcs activation.
Given that PAR formation is required for the recruitment of some DNA repair proteins, including ALC1, to the sites of DNA damage (6, 12, 21, 43), we next examined the role of PAR in TRIM33 recruitment. The effect of inhibiting PAR formation on the localization of TRIM33 to UV laser scissor-induced DNA breaks was evaluated by treating cells with 1 μM PARP inhibitor ABT-888 (44). PARP inhibitor treatment abolished the induction of PAR polymers at sites of laser scissor-induced DNA breaks and greatly reduced the localization of TRIM33 to these sites when compared with mock-treated cells (Fig. 4, A and B). Furthermore, recruitment of TRIM33 to UV laser scissor-induced DNA damage was also greatly reduced in Parp1−/− mouse embryonic fibroblasts when compared with Parp1+/+ mouse embryonic fibroblasts (Fig. 4, C and D). PAR polymers are normally rapidly degraded by the action of PARG (45,46). Treatment of HeLa cells with a PARG inhibitor, gallotannin, led to prolonged retention of both PAR and TRIM33 at sites of DNA breaks (data not shown).
PARP1 is also known to be activated at sites of replication stress, where it is believed to play a role in replication fork restart (19). Treatment of HeLa cells with PARP inhibitors also reduced TRIM33 focus formation in response to hydroxyurea treatment, with only 19% of the TRIM33 foci colocalizing with γH2AX foci compared with 74% observed in controls (data not shown). Together, these results demonstrate that the recruitment of TRIM33 to sites of DNA breaks and stalled replication forks is PAR-dependent.
TRIM33 Recruitment to DNA Damage Is Dependent on ALC1-The DNA repair proteins APLF and ALC1 are recruited and bind directly to sites of active PAR synthesis via their PAR-binding PBZ (PAR-binding zinc finger) and macro domains, respectively (12,47,48). Because TRIM33 is rapidly recruited to sites of DNA damage in a PAR-dependent manner, we sought to determine whether TRIM33 also binds directly to PAR. Purified recombinant FLAG-tagged WtTRIM33, WtALC1, C1 (macro domain) fragment of ALC1, and APLF were spotted onto nitrocellulose, and their ability to bind 32P-radiolabeled PAR was measured. Both the WT and C1 (macro domain) region of ALC1 exhibited a strong interaction with labeled PAR, as demonstrated previously (12,21). However, TRIM33 failed to bind PAR directly (Fig. 4E), suggesting that PAR-dependent recruitment of TRIM33 to DNA damage is not via direct binding to PAR and may involve some intermediary factor.
Given that TRIM33 and ALC1 associate in response to DNA damage, we investigated whether the recruitment of TRIM33 to DNA damage is dependent on ALC1. The localization of TRIM33 to sites of laser scissor-induced DNA breaks was therefore examined in U2OS cells expressing either control shRNA or ALC1-shRNA. TRIM33 recruitment to UV laser-induced DNA damage was greatly reduced in ALC1 shRNA-expressing cells (Fig. 4, F, G, and H). Thus, TRIM33 recruitment to DNA breaks is dependent on the presence of ALC1.
We further investigated which regions of ALC1 are required for TRIM33 localization to DNA breaks. Cells stably expressing ALC1sh were reconstituted with WtALC1, ALC1-K77R (ATPase dead), or the ALC1-D723A macro domain mutant, which is unable to interact with PAR and fails to localize to DNA breaks. Cells were then subjected to UV laser-induced DNA breaks, and TRIM33 localization was observed by IF using antibodies against endogenous TRIM33. Reconstitution of ALC1sh cells with either WtALC1 or ALC1-K77R rescued TRIM33 localization to DNA breaks (Fig. 4, F, G, and H). However, reconstitution with the ALC1-D723A macro domain mutant failed to rescue TRIM33 localization to DNA damage. This result suggests that PAR binding of ALC1, but not its catalytic activity, is required for its function in localizing TRIM33 to DNA breaks.
The Interaction of TRIM33 with ALC1 Is PARP-dependent-To determine whether the DNA damage-induced interaction of ALC1 and TRIM33 is dependent upon PAR synthesis, endogenous coimmunoprecipitations were performed in DNase-treated nuclear extracts. HeLa cells were either mock-treated or treated with PARP inhibitor (PARPi) and then exposed to 0 Gy, 10 Gy IR, or 100 J/m^2 UV light (Fig. 5A). These treatments were chosen because we could follow the dynamics of interaction after an acute episode of DNA damage. Although an interaction between ALC1 and TRIM33 was not detected in untreated cells, a robust interaction was evident 5 min after either IR or UV light treatment, which diminished after 10 min. The interaction of TRIM33 and ALC1 after IR and UV light treatment was significantly reduced in cells pretreated with PARP inhibitor ABT-888 (PARPi) (Fig. 5, A and B). These results suggest that TRIM33 and ALC1 interact in response to DNA damage and that this is partly dependent on active PAR synthesis.
FIGURE 3. TRIM33 knockdown results in DNA damage sensitivity. A, HeLa cells treated with control shRNA, TRIM33 shRNA1, TRIM33 shRNA2, and TRIM33 siRNA were exposed to increasing concentrations of bleomycin. Relative cell counts measured by MTS assay, normalized to no treatment, were performed on day 3 and were plotted. p < 0.005 for control versus TRIM33 shRNA or siRNA. Relative expression of TRIM33 and tubulin is shown. B, HeLa cells treated with control shRNA, TRIM33 shRNA, or TRIM33 shRNA cells complemented with WtTRIM33 were exposed to increasing concentrations of bleomycin, and relative cell counts were measured as above. The Western blot analysis shows levels of TRIM33 and tubulin. C, whole cell extracts from control and TRIM33 sh2-treated cells were processed for Western blotting using antibodies to the indicated proteins. D, TRIM33 localization to DNA damage is not dependent on ATM, ATR, or DNA-dependent protein kinase (DNA-PK). HeLa cells treated with vehicle or ATM inhibitor (ATMi) (KU-55933), GM18366 (ATR mutant), and M059J (DNA-PK−/−) cells were subject to laser scissor-induced DNA damage. Cells were fixed after 10 min and processed for IF using antibodies to γH2AX (green) and TRIM33 (red). E, quantitation of TRIM33 at sites of DNA damage is shown. Each data point is the mean ± S.D. of at least 20 cells.
To determine whether the PAR binding activity of ALC1 or its ATPase activity is required for its interaction with TRIM33, cells were transfected with FLAG-tagged constructs encoding either WtALC1, the ATPase dead mutant ALC1-K77R, or the ALC1-D723A macro domain mutant that cannot bind to PAR (12). The cells were subjected to UV damage; cell extracts were collected 5 min post-damage, immunoprecipitated with anti-FLAG beads, and processed for Western blot analyses using antibodies to TRIM33. Both the WtALC1 and ALC1-K77R mutant interact with TRIM33 after UV damage. However, the macro domain mutant ALC1-D723A, which does not localize to DNA breaks, fails to interact with TRIM33 (Fig. 5C). These data are consistent with the IF data presented in Fig. 4, and together, they suggest that the interaction of TRIM33 and ALC1 requires PARP-dependent localization of ALC1 to DNA breaks but that it is not dependent upon its ATPase activity.
To determine whether PAR binding of ALC1 is sufficient to induce interaction with TRIM33, PAR-bound ALC1 immobilized on nitrocellulose was incubated with purified TRIM33 and, after washing, analyzed by immunoblotting with antibodies to TRIM33. No interaction of PAR-bound ALC1 with TRIM33 was observed using this approach (Fig. 5D). This suggests that PAR binding of ALC1 is not, by itself, sufficient to induce interaction of ALC1 with TRIM33 in vitro.
FIGURE 4. Localization of TRIM33 to DNA breaks is dependent upon PARP and ALC1. A, PAR (top two panels) and TRIM33 (bottom two panels) were localized by IF in untreated (Un) cells and in cells pretreated with 1 μM PARPi (Pi) ABT-888 for 1 h. B, quantitation of PAR and TRIM33 colocalization with γH2AX at sites of laser scissors. *, p < 0.005. C, Parp1+/+ or Parp1−/− mouse embryonic fibroblasts were treated with laser scissors, and γH2AX and TRIM33 were localized by IF. Images are shown at identical magnification. D, quantitation of PAR and TRIM33 colocalization with γH2AX at sites of laser scissors. *, p < 0.005. E, APLF, WtALC1, C1 (ALC1 macro domain only), and TRIM33 proteins were dot-blotted onto a nitrocellulose membrane and incubated with 32P-labeled PAR. F, TRIM33 localization to DNA breaks is ALC1-dependent. U2OS cells stably expressing control sh or ALC1sh, and ALC1sh cells reconstituted with WT ALC1, KR (ATPase dead), or DA (PAR-binding mutant), were analyzed. All cells were subjected to UV laser scissor-induced DNA breaks. After 10 min, cells were fixed, and IF was performed using antibodies to γH2AX and TRIM33. G, Western blot analyses showing levels of ALC1 and TRIM33 in U2OS cells expressing control and ALC1 shRNA and different constructs of ALC1 in ALC1sh cells. H, quantitation of relative intensity of TRIM33 at sites of DNA damage. Each data point is mean ± S.D. of at least 20 cells. *, p < 0.005.
TRIM33 Knockdown Results in Prolonged Accumulation of ALC1 at Sites of DNA Damage-Previous studies have shown that ALC1 transiently localizes to laser scissor-induced DNA damage, appearing within seconds of induction and disassociating from the damage site within 10-20 min. Mutant forms of ALC1 that retain the macro domain but are inactive for chromatin remodeling show prolonged retention and persistence of XRCC1 on damaged chromatin (12). To investigate the effect of TRIM33 on the dynamics of ALC1 recruitment to damage sites, we depleted TRIM33 by shRNA in HeLa cells (Fig. 6, A-C). In control shRNA-treated cells, ALC1 was rapidly recruited to sites of laser scissor-induced damage but was undetectable at these sites 45 min after damage (Fig. 6, A and C). In contrast, treatment of cells with TRIM33 shRNA resulted in prolonged retention of ALC1 (Fig. 6, A-C) at sites of laser scissor-induced DNA damage, with ALC1 evident at damage sites 45 min after treatment. Consistent with a prior report, the prolonged retention of ALC1 was also accompanied by prolonged retention of XRCC1 at sites of DNA damage (data not shown) (12).
Of note, TRIM33 knockdown has no effect on protein levels of ALC1 after DNA damage, as analyzed by Western blot analysis (Fig. 6D), suggesting that TRIM33 does not influence ALC1 protein stability. TRIM33 knockdown has no effect on the dynamics of PAR at the UV laser-induced DNA breaks. In both control and TRIM33 knockdown cells, PAR rapidly localized to DNA breaks at 5 min but was not present at breaks after 45 min. This suggests that, in the absence of TRIM33, ALC1 remains at breaks even when PAR is no longer present (Fig. 6, E, F, and G).
To examine the impact of TRIM33 on ALC1 recruitment and retention at damage sites, we complemented TRIM33 knockdown cells with shRNA-resistant wild-type TRIM33 or the RING domain TRIM33CA mutant. Importantly, the prolonged retention of ALC1 at damage sites observed in TRIM33 knockdown cells was corrected by introduction of the shRNA-resistant WtTRIM33 construct but not by the TRIM33CA RING domain mutant (Fig. 6, A-C). Collectively, these data suggest that TRIM33 is required for timely dissociation of ALC1 from sites of damaged DNA and that this function requires an intact RING domain.
The DNA Repair Phenotype Associated with ALC1 Overexpression Is Reversed by TRIM33 Overexpression-ALC1 dissociation was delayed in TRIM33-depleted cells. To determine whether ALC1 overexpression leads to a similar effect, we evaluated the effect of ALC1 overexpression on the dynamics of its localization to UV laser scissor-induced DNA breaks. Cells overexpressing WtALC1 (FLP-In-ALC1) show prolonged retention of ALC1 at DNA breaks (Fig. 7, A, B, and C). Overexpression of WtTRIM33 restores the ALC1 dynamics, and ALC1 is no longer detectable at sites of breaks after 45 min. These data suggest that proper stoichiometry between TRIM33 and ALC1 is essential for timely dissociation of ALC1 from sites of DNA breaks (Fig. 7, A, B, and C).
FIGURE 5. TRIM33 dynamically interacts with ALC1 in a PARP-dependent manner. A, HeLa cells were untreated (Un) or treated with IR or UV light, with or without PARP inhibitor (PARPi), and DNase-treated nuclear extracts were prepared at 5 and 10 min. TRIM33 IP was performed, followed by Western blotting (WB) using the indicated antibodies. IP, immunoprecipitation; No Ab, no antibody. B, quantitation of ALC1 interaction with TRIM33. The plot shows the ratio of the signal of ALC1 coimmunoprecipitation to ALC1 input. C, the FLAG WtALC1, ALC1-K77R, and ALC1-D723A mutants were expressed in 293 cells and subjected to UV irradiation. Protein extracts were prepared after 5 min and immunoprecipitated with anti-FLAG antibodies, followed by Western blotting using antibody against TRIM33 and FLAG. D, PAR-bound ALC1 does not bind TRIM33 in vitro. WtALC1, KR (ATPase dead) and DA (PAR-binding mutant) ALC1 mutants and TRIM33 proteins were dot-blotted onto a nitrocellulose membrane, incubated with PAR, washed, and then incubated with purified TRIM33. Membranes were then processed for Western blot analysis with antibodies to PAR (top panel) or antibody to TRIM33 (bottom panel).
ALC1 is amplified and overexpressed in certain cancers, suggesting that it may function as an oncogene (49). Overexpression of ALC1 leads to chromatin relaxation and sensitivity of cells to the DNA-damaging agent phleomycin, which induces breaks preferentially in linker DNA (12). Our data raise the possibility that the relative levels of ALC1 and TRIM33 may be important for the regulation of ALC1 activity. To directly investigate the effect of TRIM33 expression on the phenotype associated with ALC1 overexpression, cells stably overexpressing either an empty vector (FLP-In-FLAG) or WtALC1 (FLP-In-ALC1) were analyzed for sensitivity to bleomycin. Confirming prior reports, overexpression of WtALC1 confers increased sensitivity to bleomycin (Fig. 7D) (12). Overexpression of WtTRIM33 greatly reduced the sensitivity of ALC1-overexpressing cells to bleomycin, with these cells now showing similar sensitivity as vector-expressing cells. However, overexpression of the RING domain mutant TRIM33CA failed to rescue the bleomycin sensitivity of ALC1-overexpressing cells (Fig. 7D).
The effect of TRIM33 overexpression on induction of DNA breaks by phleomycin in ALC1-overexpressing cells was also evaluated using the comet assay. Consistent with prior reports, phleomycin exposure produces longer tail moments in ALC1-overexpressing cells compared with control cells, suggesting that ALC1 overexpression promotes chromatin relaxation and increased accessibility of linker DNA to phleomycin (12). Overexpression of WtTRIM33, but not the TRIM33CA mutant, counteracted the effect of ALC1 overexpression on phleomycin-induced DNA breaks, as measured by comet tail moments (Fig. 7E). This is consistent with our prior results, and collectively these data demonstrate that elevated TRIM33 expression can counteract the phenotype of ALC1 overexpression and that this requires an intact RING domain.
DISCUSSION
PARP activation at sites of DNA breaks leads to local changes in chromatin structure required for efficient DNA repair. A key insight into how PARylation impacts chromatin structure came from the finding that PAR-binding proteins such as ALC1 are recruited to sites of DNA breaks and participate in chromatin remodeling and DNA repair (12,21,23). Here, we implicate TRIM33 in the PAR-induced DNA damage response through its interaction with ALC1. This assertion is supported by the following observations. 1) TRIM33 is rapidly recruited to DNA damage sites in a PAR-and ALC1-dependent fashion; 2) TRIM33 interacts with ALC1 in response to DNA damage; 3) TRIM33 knockdown in cells confers DNA damage sensitivity
and delays disassociation of ALC1 from damaged chromatin; and 4) overexpression of TRIM33, but not the TRIM33-CA mutant, alleviates the DNA repair phenotype resulting from ALC1 overexpression.
Our data suggest that TRIM33 does not directly interact with PAR but that its enrichment at damage sites is dependent upon interaction with ALC1. Recruitment also depends upon the presence of an intact Bromo domain and an intact PHD domain in TRIM33, suggesting an important role for the interaction with modified histone residues (32,33). The interaction of TRIM33 with ALC1 is highly dynamic and is evident within minutes after acute, transient DNA damage but declines rapidly. Although we cannot rule out the possibility of a low level of interaction between TRIM33 and ALC1 in undamaged cells, this interaction is clearly enhanced by DNA damage with kinetics that parallel production of PAR. Indeed, the interaction between TRIM33 and ALC1 is reduced by PARP inhibitor treatment or by point mutations in the macro domain of ALC1 that abolish PAR binding, suggesting that PAR binding by ALC1 is important for its interaction with TRIM33. However, PAR binding alone might not be sufficient to induce interaction of ALC1 with TRIM33 because PAR-bound ALC1 could not directly interact with TRIM33 by a far-western approach (Fig. 5D). It is possible that binding of the TRIM33 PHD-Bromo domain with histones, which is required for recruitment of TRIM33 to DNA breaks, may also be required for optimal interaction of ALC1 and TRIM33. It is also possible that other proteins recruited to DNA breaks may facilitate or mediate the interaction between ALC1 and TRIM33.
Loss of TRIM33 leads to increased baseline H2AX phosphorylation and activation of cell cycle checkpoints, suggesting that it may be required for efficient DNA repair. TRIM33 loss also leads to increased sensitivity to DNA-damaging agents such as bleomycin. Of note, loss of TRIM33 leads to somewhat slower growth (data not shown). However, the sensitivity to DNA-damaging agents remains proportionately increased. This is similar to other DNA repair proteins whose loss can lead to both decreased proliferation and increased sensitivity to DNA-damaging agents. However, we cannot rule out that the role of TRIM33 in transcriptional regulation may contribute to these phenotypes.
TRIM33 appears to regulate the dynamics of ALC1 retention at sites of DNA breaks. ALC1 is normally recruited rapidly to sites of DNA damage but dissociates quickly. Loss of TRIM33 leads to prolonged retention of ALC1 at sites of DNA breaks. Furthermore, reintroduction of WtTRIM33, but not the RING domain mutant TRIM33, was found to rescue the effect of TRIM33 knockdown on ALC1 retention. Our observations are consistent with a model in which, upon DNA damage, TRIM33 interacts with ALC1 and promotes the timely removal of ALC1 from damaged chromatin.
We surmise that overexpression of ALC1, a putative oncogene, disrupts the normal stoichiometry of ALC1 and TRIM33, leading to dysregulated ALC1 activity, which confers promiscuous chromatin relaxation and enhanced sensitivity to bleomycin. Concomitant overexpression of WtTRIM33 was found to counteract the DNA repair phenotype and enhanced susceptibility to phleomycin-induced DNA breaks evident in ALC1-overexpressing cells. Conversely, knockdown of TRIM33 leads to a phenotype similar to that induced by ALC1 overexpression, including increased sensitivity of cells to bleomycin. Thus, loss of TRIM33 may be, in part, functionally analogous to ALC1 overexpression, with both leading to abnormal ALC1 retention at sites of DNA breaks. This effect may also contribute to the tumor suppressor function of TRIM33. The functional impact of abnormal ALC1 retention at DNA breaks on the DNA repair process needs to be further characterized. TRIM33 likely has additional ALC1-independent functions that contribute to the phenotype associated with TRIM33 loss.
TRIM33 has been implicated in the regulation of the TGF-β pathway, where it interacts with and regulates the SMAD3-SMAD4 complex and its chromatin association (29,41). TRIM33 also interacts with FACT1 and other members of the transcriptional elongation complex and plays a key role in transcriptional regulation during development (27). It is not currently clear whether the transcription and DNA repair roles of TRIM33 are separate functions or are related mechanistically. It is, however, intriguing to note that both the SMAD pathway and the FACT1 complex are regulated by PARP activity, with both SMAD3 and FACT1 reported as substrates for PARylation by PARP1 (50,51). Because PARP1 functions in both transcription and DNA repair, it is possible that TRIM33 may act downstream of PARP1 activation in several distinct cellular contexts where it interacts with specific target proteins: SMAD4 linked to transcription and ALC1 associated with DNA repair.
Our findings have potential clinical implications because TRIM33 is mutated, translocated, or expressed at decreased levels in several human cancers, including hepatocellular cancer, pancreatic cancer, and chronic myelomonocytic leukemia (34-36, 38, 52, 53). Tissue-specific TRIM33 knockout in mouse liver leads to hepatocellular carcinoma (34). Because amplification of ALC1 and knockout of TRIM33 are both implicated in the pathogenesis of hepatocellular cancer (49), this observation lends support to the potential antagonistic role of TRIM33 and ALC1. Decreased TRIM33 expression is also seen in a subset of human pancreatic cancers, and a tissue-specific knockout of TRIM33 is known to cooperate with KRAS mutation in the development of adenocarcinomas of the pancreas in mice (37). A recent study also demonstrates that pancreas-specific TRIM33 knockout is not epistatic with SMAD4 knockout in the development of Kras-associated pancreatic cancer, suggesting that the role of TRIM33 as a pancreatic tumor suppressor may be independent from its effect on SMAD4 (38). Our findings raise the possibility that the DNA repair defects associated with TRIM33 loss may contribute to tumorigenesis. Moreover, treatment of tumors exhibiting loss of TRIM33 function could be designed to exploit the DNA repair defect present in these cells. | 2018-04-03T04:12:55.510Z | 2013-08-06T00:00:00.000 | {
"year": 2013,
"sha1": "036d97f20b21c1c2a9f49df6b3fa522e78c2d2ba",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/288/45/32357.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "a3eda3499f45e7b78fe42c46a2f0a7cfe4850e76",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
255441240 | pes2o/s2orc | v3-fos-license | Safety, reactogenicity, and immunogenicity of Ad26.COV2.S: Results of a phase 1, randomized, double-blind, placebo-controlled COVID-19 vaccine trial in Japan
Background This study evaluated safety, reactogenicity, and immunogenicity of a 2-month homologous booster regimen of Ad26.COV2.S in Japanese adults. Methods In this multicenter, placebo-controlled, Phase 1 trial, adults (Cohort 1, aged 20–55 years, N = 125; Cohort 2, aged ≥ 65 years, N = 125) were randomized 2:2:1 to receive Ad26.COV2.S 5 × 10^10 viral particles (vp), Ad26.COV2.S 1 × 10^11 vp, or placebo, followed by a homologous booster 56 days later. Safety, reactogenicity, and immunogenicity were assessed. Results Two hundred participants received Ad26.COV2.S and 50 received placebo. The most frequent solicited local adverse event (AE) was vaccination-site pain, and the most frequent solicited systemic AEs were fatigue, myalgia, and headache. After primary vaccination, neutralizing and binding antibody levels increased through Day 57 (post-prime) in both cohorts. Fourteen days after boosting (Day 71), neutralizing antibody geometric mean titers (GMTs) had almost reached their peak value in Cohort 1 (5 × 10^10 vp: GMT = 1049; 1 × 10^11 vp: GMT = 1470) and peaked in Cohort 2 (504; 651); at Day 85, GMTs had declined minimally in Cohort 2. For both cohorts, binding antibody levels peaked at Day 71 with minimal decline at Day 85. Conclusion A single dose and homologous Ad26.COV2.S booster increased antibody responses with an acceptable safety profile in Japanese adults (ClinicalTrials.gov Identifier: NCT04509947).
Introduction
Despite expanded availability of vaccines to prevent coronavirus disease 2019 (COVID-19), the pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continues to cause serious illness [1]. In Japan, as of July 2022, >10 million confirmed COVID-19 cases and > 31,000 related deaths have been reported [2]. Five vaccines for the prevention of COVID-19 have been approved for use in Japan, including Ad26.COV2.S [3][4][5][6]. As variant strains continue to emerge [7], further development of safe and effective vaccines is critical to control COVID-19 in Japan.
Here, we report the results of a primary analysis of a Phase 1 study in Japanese adults with or without stable underlying conditions to assess the safety, reactogenicity, and immunogenicity of Ad26.COV2.S at 2 dose levels.
Study design
This Phase 1 randomized, double-blind, multicenter, placebo-controlled trial (ClinicalTrials.gov Identifier: NCT04509947) was conducted at 3 centers in Japan beginning in August 2020. Data cut off for the present analysis was 22 February 2021. The trial enrolled 2 cohorts with participants randomly assigned 2:2:1 to receive an intramuscular injection of 5 × 10^10 vp Ad26.COV2.S, 1 × 10^11 vp Ad26.COV2.S, or placebo. Randomization was performed using a central randomization scheme generated before the trial began. The randomization was balanced using randomly permuted blocks and stratified by study site for each cohort. The primary objective was to assess the safety and reactogenicity of Ad26.COV2.S at 2 dose levels, each administered as a single-dose primary vaccination followed by a homologous booster 56 days later. Participants were Japanese adults.
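For readers unfamiliar with this allocation scheme, a 2:2:1 ratio with randomly permuted blocks stratified by study site can be illustrated with a short sketch. The block size of 5, the string-based per-site seeding, and the group labels are assumptions made for illustration only and do not represent the trial's actual central randomization system.

```python
import random

def permuted_block_schedule(n_participants, site, seed=2020):
    """2:2:1 allocation using randomly permuted blocks of size 5, stratified by site."""
    rng = random.Random(f"{seed}-{site}")  # independent random stream per stratum (site)
    block = ["Ad26 5e10 vp", "Ad26 5e10 vp", "Ad26 1e11 vp", "Ad26 1e11 vp", "placebo"]
    schedule = []
    while len(schedule) < n_participants:
        permuted = block[:]      # copy the block
        rng.shuffle(permuted)    # randomly permute assignments within the block
        schedule.extend(permuted)
    return schedule[:n_participants]

# Example: allocation list for 25 hypothetical participants at one site
print(permuted_block_schedule(25, site="Site A"))
```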
All participants provided written informed consent. The trial adhered to the principles of the Declaration of Helsinki and to the Good Clinical Practice guidelines of the International Council for Harmonisation. The protocol (available in Supplementary Materials) and its amendments were approved by institutional review boards.
Trial participants
Cohort 1 included 125 healthy adults aged 20 to 55 years and Cohort 2 included 125 adults aged ≥ 65 years with or without well-controlled underlying conditions not related to an increased risk for severe COVID-19 [13]. Eligibility criteria included body mass index < 40.0 kg/m^2, normal immune system function, no prior receipt of a COVID-19 vaccine, and SARS-CoV-2 infection negative at screening. Full inclusion and exclusion criteria are detailed in Table S1. Randomization and vaccination of participants began following internal review of safety data 7 days after vaccination of participants enrolled in Janssen's first-in-human Ad26.COV2.S study (ClinicalTrials.gov Identifier: NCT04436276) [12].
Procedures
Participants were screened up to 28 days before vaccination. Eligible participants received vaccine (batch 20E27-04) or placebo (batch 05353DK) as an intramuscular injection into the deltoid on Days 1 and 57, with follow-up visits up to 1 year after primary vaccination. The study duration from screening until the last followup visit was approximately 13 months for each participant. Ad26.COV2.S and placebo were prepared as previously described [14].
Each participant was closely observed for the development of acute reactions for a minimum of 30 min after vaccination. Participants were asked to record signs and symptoms of any solicited adverse events (AEs) in a diary for 7 days after vaccination. Unsolicited AEs were reported for each vaccination until 28 days postvaccination (Days 29 and 85 after the primary vaccination). All other serious AEs, AEs of special interest, and AEs leading to study or treatment discontinuation were reported for all participants from primary vaccination to the end of the study.
Neutralizing antibodies capable of inhibiting wild type (wt) SARS-CoV-2 infections in vitro were quantified using the virus neutralization assay (VNA) developed and qualified by UK Health Security Agency, Porton Down, United Kingdom [12]. The concentrations of antibodies specific for SARS-CoV-2 prefusion conformation spike protein were determined using the validated human SARS-CoV-2 pre-spike immunoglobulin G indirect enzyme-linked immunosorbent assay (S-ELISA) [12]. Collection of blood samples for immunogenicity assessments were planned on study Days 1, 15, 29, 57, 71, 85, 239, and 366.
Concomitant therapy
Use of concomitant therapies, including antipyretic or analgesic medications, non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, antihistamines, and vaccinations up to 30 days before administration of the primary vaccination were recorded at the screening visit. Use of these products was also recorded for both doses before administration on the day of vaccination until 28 days post-vaccination. Use of any other concomitant therapies was recorded if administered in conjunction with a confirmed COVID-19 case or with a new or worsening AE. The use of analgesics and NSAIDs were permitted following vaccination at the first signs of symptoms. Prophylactic use of these medications prevaccination was prohibited. Antipyretics were recommended by study staff post-vaccination for symptom relief as needed.
Statistical analysis
The planned total sample size was 250 participants, with 125 participants enrolled in each cohort. The number of participants chosen for this study was to provide a preliminary safety and immunogenicity assessment. Analyses for Cohorts 1 and 2 were conducted when approximately 125 participants per cohort reached Day 29, 28 days after primary vaccination, or discontinued earlier. Primary analyses for Cohorts 1 and 2 were conducted when approximately 125 participants per cohort reached Day 85, 28 days after booster vaccination, or discontinued earlier.
The full analysis set (FAS) included all participants with ≥ 1 documented vaccine administration. The per-protocol immunogenicity (PPI) population included all randomized and vaccinated participants for whom immunogenicity data were available, excluding participants with major protocol deviations expected to affect immunogenicity outcomes. In addition, samples obtained after missed vaccinations or from participants who became infected with SARS-CoV-2 after screening were excluded from the analysis set.
No formal statistical testing of safety and immunogenicity data was planned, and data were analyzed descriptively by vaccine group. Geometric mean and 95 % confidence interval (CI) were calculated for wild-type virus neutralization assays (wtVNA) and S-ELISA assays. The immunogenicity analyses were performed on both the FAS and PPI populations. The ratio and correlation between neutralizing and binding antibodies as determined by wtVNA and S-ELISA, respectively, were calculated.
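The descriptive statistics named above, geometric means with 95 % CIs for wtVNA titers and S-ELISA concentrations and the correlation between the two assays, are conventionally computed on the log10 scale. A minimal sketch follows, assuming a t-based confidence interval on log-transformed values; the function names and that choice of interval are assumptions for illustration, not the study's statistical analysis plan.

```python
import numpy as np
from scipy import stats

def gmt_with_ci(titers, alpha=0.05):
    """Geometric mean titer with a t-based 95% CI computed on the log10 scale."""
    logs = np.log10(np.asarray(titers, dtype=float))
    n = logs.size
    mean = logs.mean()
    sem = logs.std(ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(1 - alpha / 2, df=n - 1) * sem
    return 10 ** mean, (10 ** (mean - half_width), 10 ** (mean + half_width))

def log_scale_correlation(vna_titers, elisa_concentrations):
    """Pearson correlation between the two assays on the log10 scale."""
    return stats.pearsonr(np.log10(vna_titers), np.log10(elisa_concentrations))
```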
A baseline sample was considered positive if the wtVNA titer or S-ELISA concentration value was greater than the lower limit of quantification (LLOQ). After vaccination, a participant was considered a responder if ≥ 1 of the following criteria were met: 1) the baseline sample value was ≤ LLOQ and the post-baseline sample was > LLOQ, or 2) the baseline sample value was > LLOQ and the post-baseline sample value was ≥ 4 times greater than the baseline sample value. Once a participant met responder criteria, the participant was considered thereafter to be a responder, regardless of the titer value.
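The responder rule can be expressed as a small decision function. The sketch below assumes that values at or below the LLOQ are reported as the LLOQ itself and that the example numbers are purely hypothetical; both are illustrative assumptions rather than details taken from the trial's analysis code.

```python
def is_responder(baseline, post_baseline, lloq):
    """Responder rule: seroconversion from a baseline at or below the LLOQ,
    or a >= 4-fold rise over a quantifiable baseline."""
    if baseline <= lloq:
        return post_baseline > lloq
    return post_baseline >= 4 * baseline

# Hypothetical values for illustration only
print(is_responder(baseline=50, post_baseline=400, lloq=50))   # True (seroconversion)
print(is_responder(baseline=120, post_baseline=300, lloq=50))  # False (< 4-fold rise)
```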
In Cohort 1, timing of the Day 57 (post-primary vaccination/ pre-boost) visit ranged from 73 to 88 days (median, 78 days) due to a pause in study vaccination. Median time to Day 71 (14 days post-boost) and Day 85 (28 days post-boost) was 92 days and 106 days, respectively.
Participant demographics
Demographic and baseline characteristics, including SARS-CoV-2 seropositivity status and Ad26 VNA seropositivity status, are shown in Table 1.
The use of antipyretics or analgesics was observed more frequently in the 1 × 10^11 vp group than the 5 × 10^10 vp group of both cohorts. After primary vaccination in Cohort 1, 43.1 % and 74.0 % used antipyretics/analgesics in the lower- and higher-dose groups, respectively, and in Cohort 2, 8.0 % and 18.4 % used antipyretics/analgesics. After boosting, 32.6 % and 45.2 % of participants used antipyretics/analgesics in Cohort 1; in Cohort 2, 2.1 % and 4.4 % used antipyretics/analgesics. The use of antipyretics or analgesics was more frequent in Cohort 1, with 47.2 % and 28.9 % of participants reporting use of any antipyretics/analgesics 7 days after primary and booster vaccinations, respectively, compared with 11.2 % and 3.4 % in Cohort 2. Use of antipyretics or analgesics was generally less frequent after boosting than after primary vaccination in both vaccine groups in both cohorts. Paracetamol (acetaminophen) was most frequently used in the vaccine groups of both cohorts. No participants in the Cohort 1 placebo group received antipyretics or analgesics 7 days post-primary vaccination; 1 placebo recipient in Cohort 2 received paracetamol post-booster.
Safety
After primary or booster vaccination with Ad26.COV2.S, the most frequently reported solicited local AE in both cohorts was vaccination-site pain (Fig. 1), with a median duration of 2 to 4 days. In Cohort 1 (20-55 years), vaccination-site pain after primary vaccination was reported by 87 participants (vaccine, n = 85; placebo, n = 2), whereas 60 participants reported pain after boosting (vaccine, n = 60; placebo, n = 0) (Table 2). In Cohort 2 (≥ 65 years), the corresponding frequencies are shown in Table 2. In Cohort 1, a higher proportion of participants in the 1 × 10^11 vp group experienced AEs after primary vaccination compared with the 5 × 10^10 vp group. After the booster, similar proportions of participants in both groups of Cohort 1 reported headache, myalgia, and nausea; more participants in the higher-dose group reported fatigue and pyrexia. Solicited systemic AEs were generally less common in Cohort 2 compared with Cohort 1. In the placebo groups of both cohorts, fatigue and myalgia were generally the most common systemic AEs after each dose. Most solicited systemic AEs were grade 1 or 2 in severity. In Cohort 1, the frequency of grade 3 solicited systemic AEs was higher after primary vaccination than after boosting; there were no grade 3 events after boosting in Cohort 2. Following primary vaccination with 1 × 10^11 vp, pyrexia was the most frequently reported solicited systemic AE of grade ≥ 3, with all events occurring in Cohort 1 (Table S2). Three of the pyrexia events in Cohort 1 were grade 4, all of which resolved in 4 days following vaccination and were considered related to the study vaccine. One grade 3 event of pyrexia occurred in Cohort 1 after the booster; no events of grade 3 pyrexia were reported in the 5 × 10^10 vp group, and no events of grade 4 were reported at either dose level after the booster.
The majority of unsolicited AEs were grade 1 or 2 in severity in both cohorts (Table S3). In Cohort 1, the most frequently reported unsolicited AE was arthralgia (3/51 [5.9 %] in the 5 × 10^10 vp group; 5/50 [10.0 %] in the 1 × 10^11 vp group). The most frequently reported unsolicited AE in Cohort 2 was administration-site pruritus (4/50 [8.0 %] in the 5 × 10^10 vp group; 1/49 [2.0 %] in the 1 × 10^11 vp group; and 2/26 [7.7 %] in the placebo group). For participants who received Ad26.COV2.S, the frequency of unsolicited AEs was lower in Cohort 2 than in Cohort 1. In both cohorts, the frequency of unsolicited AEs was generally higher after primary vaccination than after boosting. After primary vaccination in the 1 × 10^11 vp group, 3 unsolicited AEs of grade ≥ 3 were reported (Cohort 1, n = 2; Cohort 2, n = 1). In the 5 × 10^10 vp group of Cohort 2, there was one grade 3 unsolicited AE reported after boosting. Cohort 1 participants in the 1 × 10^11 vp group reported 2 grade 3 unsolicited AEs considered related to vaccination, both of which occurred post-primary vaccination (1 AE each of arthralgia and myalgia). No grade ≥ 3 unsolicited AEs related to vaccination were reported in either Cohort 1 or Cohort 2 post-boost with either dose. No AEs of special interest were reported in either cohort.
Discussion
This Phase 1 trial conducted in Japanese adults demonstrated that a single dose of Ad26.COV2.S followed by a booster 56 days later had an acceptable safety and reactogenicity profile at the 5 × 10^10 vp dose level. Higher reactogenicity was observed with the higher dose (1 × 10^11 vp) than the lower dose level, supporting use of Ad26.COV2.S at the 5 × 10^10 vp dose level under emergency use authorization and (conditional) marketing approval. Overall, in healthy adults aged 20-55 years and adults aged ≥ 65 years without underlying conditions or with well-controlled underlying conditions, the homologous booster was generally well tolerated. No safety signals were identified in this study.
At both dose levels, reactogenicity was less frequent in older adults. Reactogenicity was also lower following the booster than primary vaccination. Generally, higher frequencies of solicited local AEs, solicited systemic AEs, and unsolicited AEs were observed at the higher dose level compared with the lower dose level in both cohorts. Overall, the 5 × 10^10 vp dose level safety data evaluated in this study align with safety data reported in a phase 1/2a study [12] and two global phase 3 studies of Ad26.COV2.S [9,11], although direct comparisons cannot be made between the studies owing to differences in study design. These other studies reported a similar frequency of solicited AEs to those observed in our study [9,11,12], demonstrated a lower reactogenicity profile for older adults compared with younger adults [10,11], and showed lower reactogenicity after a booster dose compared with primary vaccination [11].
Pyrexia (systemic solicited AE) was reported most frequently in the higher-dose group in younger adults after primary vaccination (74.0 %, 37/50; 14 events were grade ≥ 3). In the lower-dose group of younger adults, pyrexia occurred in 25.5 % (13/51) of participants after primary vaccination and 7.0 % (3/51) after boosting; no grade ≥ 3 pyrexia events were reported in the lower-dose group. The median duration of pyrexia was 1 day in both dose groups. Antipyretics and analgesics were used more by the higher-dose group than the lower-dose group.
In this study, a single dose of Ad26.COV2.S elicited robust humoral responses in a large majority of vaccine recipients, with neutralizing antibodies present in > 90 % of participants by Day 15 post-primary vaccination, irrespective of age group or vaccine dose level. Neutralizing and binding antibody levels and responder rates increased up to Day 57 (56 days post-primary vaccination/pre-boost), with a trend for higher antibody levels in those aged 20 to 55 versus ≥ 65 years. Increased humoral immunogenicity was observed after the booster in both cohorts, albeit with different kinetics. The younger adults had almost reached peak immunogenicity 14 days after boosting, and levels remained stable up to 28 days after boosting. These results are consistent with observations in other ongoing studies [12,15], and suggest that a single dose and a homologous booster of Ad26.COV2.S further increased SARS-CoV-2-specific antibody responses.
The booster elicited a modest increase in neutralizing antibody titers for older adults compared with younger adults. The older adults reached peak immunogenicity at 14 days post-boost, with a trend towards decline by 28 days post-boost. Detectable baseline levels of Ad26 titers, indicative of previous Ad26 exposure, were observed in 36 of 124 older adults. Although no significant impact of pre-existing Ad26 humoral immunity on Ad26.COV2.S humoral immunogenicity was observed based on pre-existing Ad26 neutralizing antibodies at baseline in older adults, age, potential comorbidities, vaccination interval and/or pre-existing Ad26 immunity could play a role in both the lower responses and the trend for faster waning of post-boost neutralizing titers observed in older adults versus younger adults. The small number of participants with pre-existing Ad26 immunity in Cohort 1 (4/125) precluded any meaningful conclusions regarding immunogenicity of Ad26.COV2.S in younger adults.
Results of the humoral immune response correlation analysis (wtVNA vs S-ELISA) indicated that the high correlation observed between the 2 assays was independent of time, and that S-ELISA can be considered for future use as a surrogate for wtVNA.
In conclusion, we have demonstrated that a single dose of Ad26.COV2.S in Japanese participants induced humoral immune responses and had an acceptable safety and reactogenicity profile at the 5 × 10¹⁰ vp dose level, and that a booster dose increased immunogenicity while maintaining an acceptable safety profile. The results of this study indicated that the protective effect observed after primary vaccination with Ad26.COV2.S in the large Phase 3 ENSEMBLE trial (COV3001, NCT04505722) [9,10] and after an Ad26.COV2.S booster in the Phase 3 ENSEMBLE2 trial (COV3009, NCT04614948) [16] can also be expected in the Japanese population. These findings support the continued development of Ad26.COV2.S for the prevention of COVID-19.
Prior Presentation
Data reported in this manuscript were previously presented at the 25th Annual Meeting of the Japanese Society for Vaccinology; 3-5 December 2021; Nagano, Japan.
Data Availability
The data sharing policy of Janssen Pharmaceutical Companies of Johnson & Johnson is available at https://www.janssen.com/clinical-trials/transparency. As noted on this site, requests for access to the study data can be submitted through Yale Open Data Access (YODA) Project site at http://yoda.yale.edu.
Declaration of Competing Interest
YT is an employee of Janssen Pharmaceutical K.K. KF, HN, KT, and HT are employees of Janssen Pharmaceutical K.K. and are share- | 2023-01-05T14:04:58.786Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "28d7ad2c401277b0fef8bc729dc37d8f425a4a5b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.vaccine.2023.01.006",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a94436579dbabe866a4a7787fbc586ce7f3ade7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53495042 | pes2o/s2orc | v3-fos-license | Comparative microbial sampling from eutrophic caves in Slovenia and Slovakia using RIDA®COUNT test kits
Mulec J., Krištůfek V. and Chroňáková A. 2012. Comparative microbial sampling from eutrophic caves in Slovenia and Slovakia using RIDA®COUNT test kits. International Journal of Speleology, 41 (1), 1-8. Tampa, FL (USA). ISSN 0392-6672. http://dx.doi.org/10.5038/1827-806X.41.1.1
1 Karst Research Institute, Scientific Research Centre of the Slovenian Academy of Sciences and Arts, Titov trg 2, 6230 Postojna, Slovenia (janez.mulec@guest.arnes.si); 2 Institute of Soil Biology, Biology Centre of the Academy of Sciences of the Czech Republic, Na Sádkách 7, 370 05 České Budějovice, Czech Republic (kristuf@upb.cas.cz, alicach@upb.cas.cz)
RIDA®COUNT test plates were used as an easy-to-handle and rapid indicator of microbial counts in karst ecosystems of several caves in Slovakia and Slovenia. All of the caves had a high organic input from water streams, tourists, roosting bat colonies or terrestrial surroundings. We sampled swabs, water and air to test the robustness and universality of the RIDA®COUNT test kit (R-Biopharm AG, Germany, http://www.r-biopharm.com/) for quantification of total bacteria, coliforms, yeast and mold. Using data from swabs (colony-forming units, CFU, per cm²) we proposed a scale for description of the biocontamination level or superficial microbial load of cave niches. Based on this scale, surfaces of Ardovská Cave, Drienovská Cave and Stará Brzotínská Cave (Slovakia) were moderately colonized by microbes, with total microbial counts (sum of total bacterial count and total yeast and mold count) in the range of 1,001-10,000 CFU/100 cm², while some surfaces from the show cave Postojna Cave (Slovenia) can be considered highly colonized by microbes (total microbial counts ≥ 10,001 CFU/100 cm²). Ardovská Cave also had a high concentration of airborne microbes, which can be explained by restricted air circulation and regular bat activity. The ratio of coliform to total counts of bacteria over the 9 km of underground Pivka River flow in Postojna Cave dropped approximately 4-fold from the entrance, indicating high anthropogenic pollution at the most exposed site in the show cave.
The RIDA®COUNT test kit was proven to be applicable for regular monitoring of eutrophication and human influence in eutrophic karst caves.
INTRODUCTION
Transport and handling of research equipment for detection of microorganisms during expeditions in the underground is inconvenient; therefore it is important to adopt a sensitive procedure and robust materials. The protocol should include appropriate microbial indicator groups, and it should be easy, reproducible and cost-efficient. For this study, we adopted the RIDA®COUNT test kit (R-Biopharm AG, Germany, http://www.r-biopharm.com/) for quantitative microbial detection to monitor underground water and air quality and to get an insight on viable microbes in eutrophic caves. The RIDA®COUNT test has been used successfully in the dairy industry to find critical points where special attention or improved cleaning is needed (Salo et al., 2006). We tested the use of RIDA®COUNT plates to count total aerobic and heterotrophic bacteria (RIDA®COUNT Total Aerobic Count), total number of coliform bacteria (RIDA®COUNT Coliform) and colony-forming units of yeast and molds (RIDA®COUNT Yeast&Mold Rapid) in the underground. Similar methodology was applied in the Cave of Altamira for enumerating total aerobic bacteria in dripping water using Petrifilm plates (Laiz et al., 1999).
In some rare cave ecosystems energy originates from in situ bacterial chemoautotrophy (e.g. Movile Cave in Romania; Sârbu et al., 1996); however, most caves depend on nutrient input originating from the cave exterior. Nutrients enter caves via underground rivers, penetrating plant roots, migrating animals, and percolation water (Culver & Pipan, 2009). Along with nutrients, allochthonous microorganisms also enter caves, and due to their small size, they can easily penetrate deep underground in the form of bioaerosols, which are simply transported by air currents. Many microbes enter caves as airborne particles, while some become airborne in the underground, for example due to splashing water or local air currents caused by bat movements.
Heterotrophic microorganisms tend to colonize parts of caves where nutrients have been introduced, such as areas near surface openings, underground rivers, sediments, and surfaces associated with animal excrement. Many caves naturally face increased input of organic matter, while others are subjected to high anthropogenic impact due to drainage of polluted water into the underground or extensive tourist visits of show caves (e.g. Kartchner Caverns, Arizona, USA; Ikner et al., 2007; and Lascaux Cave, Montignac, France; Bastian et al., 2009). For sanitary microbiology, caves in karst represent a natural window to monitor underground water conditions and accumulation of organic matter. The most important question for public safety is to locate the source of biological pollution, especially in those karst areas with well developed underground drainage systems.
Less discussed than microbiological water quality in the underground are cave airborne microbes. Air contains various inanimate particles, such as dust, and many viable propagules. This issue is especially urgent in caves with mass tourism. Humans each shed on average about seven million particles and cells per minute, and each of these particles carries an average of four microbial cells (Binnie, 1991). Furthermore, coughing and loud talking are reported to release approximately 10⁴ droplets, while sneezing releases approximately 10⁶ droplets (Stetzenbach, 1997), which can significantly increase the potential of transfer and deposition of pathogens in a cave ecosystem. Laiz et al. (1999) found that dripping waters in Altamira Cave (Spain) contain mainly gram-negative bacteria related to Enterobacteriaceae and Vibrionaceae. Other significant sources of airborne microorganisms are bat guano heaps and bats. Chroňáková et al. (2009) found as much as 1.6-3.9 × 10¹⁰ total bacterial counts in 1 gram of bat guano (dry weight) deposited in Domica Cave (Slovak Karst National Park, Slovakia). The most static cave microhabitats are solid materials and surfaces subjected to microbial colonization. An insight on these cave microhabitats can be obtained by swabs. A swab of a particular site also gives us an idea of how microbes have been spread in a cave, for example by footprints of animals and humans.
The assurance of naturally occurring conditions should be in the focus of sustainable management of karst caves, especially for all caves in protected areas and caves listed as UNESCO World Heritage Sites (www.unesco.org). Here we present a list of different and suitable microhabitats and a methodology based on already existing and established protocols (AOAC Performance Tested Method SM status) that allow ecologists a rapid insight on microbiological load, eutrophic level and eventual biohazards in the underground.
DESCRIPTION OF STUDIED CAVES
One cave in Slovenia (Postojna Cave, 11 June 2009, 28 August 2009, 24-30 June 2010) and three caves in Slovakia (Ardovská Cave, Drienovská Cave, Stará Brzotínská Cave, 12-14 April 2010; Slovak Karst National Park, Slovakia) were studied. Background on the caves is summarized in Table 1. Postojna Cave is part of the longest cave system in Slovenia (the whole system is 20,570 m long) with the underground Pivka River, and is partly equipped for tourist visits. This show cave is visited by approximately 500,000 tourists per year. Ardovská Cave, Drienovská Cave and Stará Brzotínská Cave are wild caves with roosting bat colonies; in addition, in Drienovská Cave there is an active underground stream.
Cultivation-based analysis
The RIDA®COUNT AOAC Performance Tested Method SM 100402 (R-Biopharm AG, Germany) was used for enumeration of microorganisms in cave samples. The total counts of bacteria (RIDA®COUNT Total Aerobic Count, Fig. 1), conventional total coliform bacteria (RIDA®COUNT Coliform, Fig. 1), and yeasts and molds (RIDA®COUNT Yeast&Mold Rapid) were detected by the test. The principle behind the RIDA®COUNT test plates is generally based on cultivating microorganisms using standard nutrients combined with a specific chromogenic detection system (Morita et al., 2003). During the growth phase, microorganisms will form typical colonies whilst the presence of specific enzymes will change the originally colourless substrate to produce a distinctively coloured colony. All of the different products of the RIDA®COUNT line are suitable for the detection of microorganisms deriving from food or feed, contact samples, membrane filtration and air sampling systems (www.r-biopharm.com). Comparative recovery between the medium sheet and different agar or Petrifilm was conducted (Morita et al., 2003; Morita et al., 2006). The correlation coefficients to plate count agar or Petrifilm in the internal accuracy studies were 0.94-0.99.
Sampling sites
Various microhabitats were sampled in caves. To observe the potential impact of environmental gradient and influences from the surface, samples were taken from the cave entrance towards the interior of each cave; in addition, in Postojna Cave samples were taken in a heavily visited part and in a wild part deep in the cave. Three different types of samples were taken from all studied caves: air, swabs of solid surfaces and water.
Air samples
Air samples were taken simultaneously with measurement of atmospheric parameters, temperature, and relative humidity by a Kestrel 4500 Pocket Weather Tracker (USA). Samples for microbial counts were collected at various distances from cave openings and from bioaerosols formed by cave streams and aerosolized from guano heaps. A RIDA®COUNT test kit was applied for air samples by a depositional sedimentation method, in which open RIDA®COUNT test plates were exposed to the cave atmosphere for 20 minutes. Microbial counts (colony-forming units, CFU) from depositional sampling on RIDA®COUNT test plates (4.7 × 4.7 cm) were recalculated per Petri plate (9 cm in diameter) per hour, allowing comparison with data of standard sanitary conditions for concentration of microorganisms in indoor air (Klánová, 2002). Two hours prior to air or swab sampling in a cave, 1 ml of sterile physiological solution (R-Biopharm AG, Germany) was applied on the sheet.
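The paper does not spell out the exact conversion formula; the minimal sketch below (Python) assumes simple scaling by plate area and exposure time, which is one way to rescale a count from a 4.7 × 4.7 cm sheet exposed for 20 minutes into CFU per 9-cm Petri dish per hour. The function name and the example count are illustrative, not taken from the study.

```python
import math

def cfu_per_petri_per_hour(cfu_on_sheet, sheet_side_cm=4.7,
                           petri_diameter_cm=9.0, exposure_min=20.0):
    """Rescale a depositional CFU count from a square medium sheet to an
    equivalent count per Petri dish per hour (area and time scaling only)."""
    sheet_area = sheet_side_cm ** 2                       # 22.09 cm^2
    petri_area = math.pi * (petri_diameter_cm / 2) ** 2   # ~63.6 cm^2
    return cfu_on_sheet * (petri_area / sheet_area) * (60.0 / exposure_min)

# Illustrative example: 10 CFU on the sheet after a 20-minute exposure
# corresponds to roughly 86 CFU per Petri dish per hour.
print(round(cfu_per_petri_per_hour(10), 1))
```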
Surface swab samples
The most variable samples were swabs. In Ardovská Cave the following sampling locations were selected: a cave wall covered with vermiculites (defined by Gèze, 1973), a cave wall covered with a black organic layer of bat guano, limestone bedrock along the caving route, and a dry pond with animal excrement. In Drienovská Cave, the following samples were taken: dead wood with bat excrements, a cave wall covered with a black organic layer originating from bat guano, and a wet stalagmite. In Postojna Cave we sampled a stainless steel fence touched by tourists at the cave entrance, the concrete of a riverbed (in the past, the water course in this part of the cave was regulated with a barrier constructed to obtain a permanent lake at the ponor, the point where a surface stream flows underground, in order to minimize external influences on the cave environment), the surface of sediments occasionally flooded by the underground Pivka River, a railroad tie of the tourist train inside the cave, a speleothem covered with dust, vermiculites, a stalagmite touched by tourists, the surface of a tourist trail, and a pristine untouched stalagmite. In Stará Brzotínská Cave swabs were taken from bedrock covered with aerophytic algae, from flowstone with seeping water, and from "cave gold". The golden aspect of the organic layer and microbial colonies usually appears when illuminated water droplets magnify the yellowish pigment of the microbial mat beneath the water film (Mulec, 2008). Surface swab samples (20 cm²) were taken aseptically in caves from solid surfaces with minimum irregularities.
Water samples
In Ardovská Cave and Stará Brzotínská Cave water samples were taken from pools filled with percolation water. In Drienovská Cave an underground river and a karst spring were also sampled. In Postojna Cave the underground Pivka River was sampled at several sites along 9 km of the underground flow, from the ponor in Postojna Cave to the underground stream confluence with the Rak River in Planina Cave. Basic physical parameters of the water (temperature, specific electric conductivity - SEC, pH) were measured with the use of WTW Multiline P4 equipment. At the site, 1 ml of water specimen was directly applied with a sterile plastic Pasteur pipette onto the RIDA®COUNT growth medium surface.
Analysis and reading results
All samples were applied onto RIDA®COUNT plates at the place of sampling in a cave. In the laboratory, plates were incubated at 35°C for 24-48 hours for total aerobic and coliform bacteria counts, and at 25°C for 48-72 hours for yeast and mold counts. Twenty-four hours of prolonged cultivation according to the user's manual (R-Biopharm AG) gave higher numbers of viable bacteria. Microbial colonies were enumerated and expressed as colony-forming units (CFU) per surface (100 cm²) or volume unit (ml).
Statistical evaluation
Pearson's correlation was calculated between physical parameters of microhabitats (air and water samples) and microbial counts. Relative standard error (RSE) was calculated to evaluate the repeatability on natural water samples from the underground Pivka River in Postojna Cave for the RIDA®COUNT Total Aerobic Count and RIDA®COUNT Coliform.
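As an illustration only, the two statistics named above can be computed as in the following Python sketch; the replicate counts and environmental values are hypothetical, and the standard textbook definitions of Pearson's r and of RSE (standard error of the mean divided by the mean) are assumed, since the paper does not state the exact formulas used.

```python
import math
import statistics as st

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def relative_standard_error(replicates):
    """RSE (%) = (standard error of the mean / mean) * 100 for replicate counts."""
    sem = st.stdev(replicates) / math.sqrt(len(replicates))
    return 100.0 * sem / st.mean(replicates)

# Hypothetical data: water temperature (deg C) vs coliform CFU/ml at four sites,
# and triplicate total aerobic counts (CFU/ml) from one water sample.
print(round(pearson_r([10.2, 10.8, 11.5, 12.1], [120, 180, 310, 420]), 3))
print(round(relative_standard_error([410, 430, 420]), 1))
```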
RESULTS
Total bacterial count, counts of coliform bacteria, and yeast and mold counts expressed as CFU per volume or surface unit were used as a measure to compare various habitats between wild and show caves. All selected caves had a high organic input of different origin. A large fraction of the organic input in Postojna Cave originates from the Pivka River and tourist activities. The presence of bat colonies and guano in the Slovak caves was the source of another type of organic input there (Table 1).
Limit values of airborne microorganisms for residential rooms are 50 bacterial CFU/Petri dish/hour and 50 fungal CFU/Petri dish/hour (Klánová, 2002), corresponding approximately to the secondary category of pollution by EUR 14988 EN (Verhoeff, 1993), which corresponds to fewer than 500 bacterial CFU/m³ and fewer than 500 mold CFU/m³ (Klánová, 2002). The standard sanitary limit was significantly exceeded in all studied caves in the following order: Ardovská Cave > Drienovská Cave > Postojna Cave > Stará Brzotínská Cave. Depositional sedimentation for airborne microbiota showed that Ardovská Cave had the highest number of viable bacteria in the air (10-298 CFU/Petri dish/hour) and the second highest count of yeast and molds (29-115), but sampling was performed with very short distances between sampling stations (Table 2). We found low total bacterial counts and no statistically significant correlations between microbial counts and environmental air parameters in Postojna Cave, which can be attributed to its large size and huge underground galleries. As expected, airborne concentrations of coliform bacteria were zero except in places with evident and fresh organic inputs such as bat guano (Ardovská Cave) or an organically polluted underground stream (Postojna Cave). Yeasts and molds were detected in higher concentrations in smaller and poorly ventilated caves, in descending order: Ardovská Cave, Drienovská Cave, and Stará Brzotínská Cave. Analyses of variance showed that in smaller cave systems particular atmospheric parameters can have a notable effect on microbial distribution. For example, the correlation between temperature and yeast and mold counts in Stará Brzotínská Cave was R² = 0.99 (p = 0.04; total cave length 120 m, Table 1).
Swabs of surfaces with obviously high organic material had high microbial counts. For example, the surface of a speleothem with animal droppings had a total aerobic count of 2,285 bacterial CFU/100 cm² (Ardovská Cave), the bedrock of a riverbed with attached guano showed 65 coliform bacteria, and dead wood had greater than 2,500 CFU/100 cm² of yeast and mold (Drienovská Cave) (Table 3). The highest detected density of microbes per surface area was in the show cave Postojna Cave, primarily from swabs taken from the surface of tourist trails and frequently touched speleothems. Swabs of tourists' footprints showed counts of up to 15,100 total aerobic bacteria and 825 coliform bacteria (CFU/100 cm²) (Table 3). Sediments regularly flooded by the underground Pivka River were also rich in viable bacteria, including coliform bacteria (Fig. 1). Bacterial counts from an untouched stalagmite (15 CFU/cm²) were 30-fold lower than from a touched stalagmite. Besides yeast and mold, tourists also spread coliform bacteria in a show cave. Coliform bacteria in the tourist part of caves might indicate their origin from human and/or animal pollution, which raises an important biohazard issue.
RIDA®COUNT test kits were also successfully applied to assess the water quality of the underground Pivka River (highly eutrophic) in the Postojna-Planina Cave System and of the Drienovská stream (Drienovská Cave, less eutrophic) in the Slovak Karst National Park (Table 4, Fig. 2). The kits proved to be a reliable field test to determine quickly the possible existence of water contamination relevant for public health. To evaluate the repeatability on natural water samples, RSE (two samplings with triplicates) was calculated for samples from the underground Pivka River (Postojna Cave). When using RIDA®COUNT Total Aerobic, RSE ranged from 1-5%, and for RIDA®COUNT Coliform from 3-6%. These results showed that RIDA®COUNT test plates are satisfactory when we want to get a quick insight into the microbial status of underground water quality; however, this ready-to-use kit cannot completely substitute for classical microbiological media, because the variety of commercial kits to detect different groups of microbes is rather limited.
To establish water quality we enumerated total aerobic counts of bacteria and the number of coliform bacteria. The Pivka River is highly eutrophic at the ponor in Postojna Cave with regard to both total aerobic counts of bacteria and coliform bacteria. After the polluted Pivka sank into the cave, the number of bacteria started to decrease, together with the ratio of coliforms to total bacterial counts. The highest ratio, at the ponor, was 0.80, which after 9 km of underground water flow dropped to 0.2 (Fig. 2). The ratio of coliform to total counts of bacteria in the underground Pivka River in Postojna Cave in the summer period with low discharge ranged from 0.17 to 0.80. In this regard, it should be taken into account that the microbial count in underground streams is a reflection of many different interactions, such as dilution, mineralization, predation, etc. A temperature gradient was established along the groundwater flow, but statistically significant correlations between measured physical parameters and microbial counts were obtained only occasionally; for example, on 11 June 2009 the Pearson correlation coefficient between specific electric conductivity and the count of coliform bacteria was 0.967 (p = 0.033). By using impaction as an alternative to depositional air sampling (28 August 2009, impactor MAS-100, Merck), it was shown that high concentrations of coliform bacteria (>560 CFU/ml) in the river resulted in aerosolization, and consequently the formation of 2.8 CFU/m³ of airborne coliform bacteria in the cave air (Mulec, 2010). Alternatively, by depositional sampling on 25 June 2010 we found 1 coliform bacterium in the air (CFU/20 cm²/20 min).
The studied part of the underground Drienovská stream was only 0.52 km long, and the total bacterial count decreased with increasing distance of the sampling sites upstream from the spring (Table 4).
Captured underground water in caves rich with bat guano had higher microbial counts compared to a cave with low organic input, with the highest numbers in Ardovská Cave (172 CFU/ml), followed by Drienovská Cave (maximum 168 CFU/ml) and finally by Stará Brzotínská Cave (44 CFU/ml). The numbers of total aerobic bacteria in percolation water in Drienovská Cave were as high as 87 CFU/ml (Table 4).
DISCUSSION
In this study, we tested the versatility and potential use of RIDA®COUNT plates in the underground. The selected commercially available test plates reveal only part of the chemoheterotrophic microorganisms in eutrophic environments. In general, only about 1%-10% of the microorganisms in soil can be accessed by culturing, and the great majority of microorganisms from the environment cannot be grown in culture (van Elsas et al., 2006). Nevertheless, the use of cultural enrichment techniques has its place where selective media are required to demonstrate the presence, and possibly the magnitude, of particular microorganisms in a system, and it has been proven that such expression is representative and relevant to the objectives of the study (Ritz, 2007). For this study we selected caves which all have high organic input either from water streams, tourist visits, roosting bat colonies, or inputs from the surface. In the caves we sampled microhabitats that reflect a high organic load and, on the other hand, we also sampled in the same caves microhabitats that have low microbial abundance and face low impact from the cave exterior.
In all studied caves we sampled swabs, water (if present), and air. Although RIDA®COUNT test plates were not initially designed for use in the underground, the results of their application showed that they can be easily used in extreme environments, especially due to their easy handling and small size. Plates can be used to determine organic load indirectly by viable microbial counts in the underground and, with proper selection and application, also the biohazard level. For future use of RIDA®COUNT test plates in underground cave microbiology we propose to adopt the 24 h prolonged cultivation according to the user's manual of the producer. The additional 24 h of incubation revealed higher microbial counts and probably gives more realistic viable microbial counts, because some cave microbes showed slow growth on the selected media.
The first insight into microbiological conditions in cave air can be revealed using depositional sampling. Conditions in the cave atmosphere can vary a lot, especially close to cave openings, underground streams, or bat colonies. More stable atmospheric conditions in the underground are found in small caves and caverns, and when the atmosphere in such spaces is not disturbed, a gradient of airborne microorganisms can be observed, for example reflecting the temperature gradient. On the other hand, depositional sampling can give first information on whether any kind of disturbance has recently appeared in the cave atmosphere.
We collected swabs of various surfaces for microbial count estimation (Table 3). Swabs are frequently analysed in the food and pharmaceutical industries to test for microbial contamination and the presence of pathogenic microbes. In this respect there are several guidelines on how to sample and analyse, and on the critical values of microbial biomass detected in swab samples (e.g. APHA - American Public Health Association, or HACCP - Hazard Analysis Critical Control Point). For example, in the dairy industry it is suggested that the total bacterial count should not exceed 10 CFU/100 cm² and the coliform count 0 CFU/100 cm² (Gavron & Luck, 1990). For defining the degree of hazard or biocontamination level a logarithmic scale is frequently used, for example up to 10 CFU/100 cm² represents biocontamination level 4, ≤100 CFU/100 cm² represents level 3, ≤1,000 CFU/100 cm² represents level 2, and ≤10,000 CFU/100 cm² indicates level 1. Based on the field results we propose a similar gradation for superficial microbial load in the underground: numbers ≤100 total CFU/100 cm² represent a low level of microbially colonized surfaces, 101-1,000 CFU/100 cm² represent low-medium, 1,001-10,000 CFU/100 cm² represent medium, and ≥10,001 CFU/100 cm² represent a high level of surface colonization by microbes. To evaluate the different surfaces representing specific habitats, we summed up the total bacterial count after 48 h and the total fungal count after 72 h (Table 3). Based on the scale above, swab surfaces from Ardovská Cave, Drienovská Cave and Stará Brzotínská Cave (Slovakia) fell in the range of 1,001-10,000 total microbial counts/100 cm², which indicates a medium level of surface colonization. One swab from Postojna Cave (tourist trail) exceeded 10,001 total CFU/100 cm², which indicates that some surfaces in this cave can be considered densely colonized by microorganisms. Generally, in Postojna Cave, the total microbial counts of swabs varied (Table 3).
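The proposed gradation maps a summed swab count (total bacteria plus yeasts and molds, per 100 cm² of surface) to one of four load classes. A minimal sketch in Python, assuming integer CFU counts and the class boundaries exactly as stated above (the function name is ours):

```python
def surface_load_class(total_cfu_per_100cm2):
    """Classify superficial microbial load of a cave surface from the summed
    swab count (total bacterial CFU + yeast and mold CFU per 100 cm2)."""
    if total_cfu_per_100cm2 <= 100:
        return "low"
    if total_cfu_per_100cm2 <= 1000:
        return "low-medium"
    if total_cfu_per_100cm2 <= 10000:
        return "medium"
    return "high"

# Example: the tourist-trail swab from Postojna Cave, with 15,100 total aerobic
# bacteria per 100 cm2 alone, already falls in the "high" class.
print(surface_load_class(15100))
```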
For assessment of drinking water quality several standards and reports are used worldwide (Canada, European Union, United Kingdom, South Africa, United States), including international standards to detect crucial microbiological parameters (Escherichia coli, coliform bacteria, enterococci, Clostridium perfringens, number of colonies at 22°C, total number of colonies at 37°C, etc.). Water quality regulated by the International Organization for Standardization (ISO) is covered in the section ICS 13.060: Water quality (www.iso.org). These standards can also be adopted when using RIDA®COUNT test plates. To enumerate total bacteria, application of 1 ml of tested water was sufficient for counting. For coliform and yeast and mold counts, the application of 1 ml of water sometimes resulted in no growth (Table 4). Coliforms are usually screened in a volume of 100 ml (BS EN ISO 9308-1, 2007); however, if there was a high number of coliforms in the sample, a volume of 1 ml was enough to enumerate bacterial colonies.
In summary, surfaces in Ardovská Cave, Drienovská Cave and Stará Brzotínská Cave (Slovakia) showed a medium level of microbial colonization. The tourist section of Postojna Cave (Slovenia) can be considered highly colonized by microbes. In addition, Ardovská Cave had a high concentration of airborne microbes, including total coliforms, yeasts and molds, which can be explained by restricted air circulation and regular bat activity, such as migration, roosting, and defecation.
Fig. 1. Swab samples of sediment regularly flooded by the underground Pivka River in Postojna Cave, after 48 h of incubation at 35°C; left: RIDA®COUNT Coliform, right: RIDA®COUNT Total Aerobic Count.
Fig. 2. Ratio of coliform bacteria to total aerobic bacteria (RIDA®COUNT test plates) in the Postojna-Planina Cave System along the underground flow of the Pivka River, from the ponor in Postojna Cave to the spring in Planina Cave.
Table 2. Ranges of microbial counts for air quality, expressed as colony-forming units (CFU per Petri plate per 20 minutes of RIDA®COUNT test plate exposure; after 24, 48 and/or 72 hours of plate incubation), with the number of stations per cave indicated.
N - not tested. a Microbial counts (colony-forming units) in air detected on RIDA®COUNT test plates (4.7 × 4.7 cm, exposure time 20 minutes) were recalculated per one Petri plate (9 cm in diameter) per hour.
Table 4. Physical parameters of water bodies in caves and microbial counts expressed as colony-forming units (CFU per one millilitre; after 24, 48 and/or 72 hours of plate incubation). | 2018-10-18T09:36:30.776Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "75e419d778dd780a1e4d17de4b4cde04f74cee2c",
"oa_license": "CCBYNC",
"oa_url": "https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1011&context=ijs",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "75e419d778dd780a1e4d17de4b4cde04f74cee2c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
226238069 | pes2o/s2orc | v3-fos-license | Preferences of healthcare professionals regarding hexavalent pediatric vaccines in Italy: a survey of attitudes and expectations
Summary Introduction In Italy, three hexavalent pediatric vaccines are available: two are ready-to-use (RTU) as pre-filled syringes, while the third must be reconstituted (need-for-reconstitution [NFR]). The formulation is related to the vaccination timing, safety of preparation and administration, and possible errors in immunization. We surveyed Italian healthcare professionals (HCPs) experienced with RTU and NFR vaccines in order to investigate their opinions on key aspects of the vaccines. Methods In Q1 2018, a qualitative study, ethnographic observations and in-depth interviews were performed in public vaccination settings of three Italian Regions. Data on how the vaccination process was managed and perceptions about the value of the RTU formulation were collected. In Q2 2018, face-to-face interviews were carried out to explore the attitude and preferences of Italian HCPs from nine Regions, assessing advantages and disadvantages of the two formulations from a quantitative point of view. In Q3-Q4 data analysis was carried out, using both qualitative and quantitative methodologies. Results The first phase demonstrated the following advantages of the RTU versus the NFR formulation: time-saving, lower probability of needle contamination and needle stick incidents, better handling, simpler procedure, easier disposal of waste. For the survey, 149 HCPs were interviewed; 80% and 40%, respectively, were very satisfied with the RTU and NFR vaccine. Conclusions Our study demonstrated that HCPs prefer the RTU formulation, as it simplifies vaccinations, reduces preparation time and minimizes the risk of errors. This formulation also saves time that can be spent on more in-depth counseling.
Introduction
The development of combination vaccines can undoubtedly be considered an important innovation for the prevention of infectious disease that has led to enormous improvements in health, and has also brought economic benefits to healthcare systems [1]. Indeed, combined vaccines have played a central role in prophylaxis of the pediatric population from infectious diseases over the past decades. The availability of combination vaccines represents an important means of achieving successful protection against numerous pathogens simultaneously, and is associated with several advantages. By reducing the number of injections, better compliance with the vaccination schedule and higher rates of coverage can be achieved, and a safer profile assured, since most adverse events reported after vaccination are related to the act of injection [2,3]. Furthermore, in terms of healthcare service organizations, combination vaccines have been proven to improve the efficiency of the vaccination service, both for the healthcare professionals (HCPs) involved, namely physicians, nurses, and pediatricians, and for the organization itself. In fact, combination vaccines save HCPs time during vaccine preparation [4], reduce administration costs, minimize the storage space needed and reduce waste [3,5]. Depending on the practice of vaccination in terms of the number and role of HCPs involved, the impact of using combination vaccines can be very relevant, especially in situations of personnel constraints, which are common nowadays, as well as in crowded pediatric vaccination schedules, as already implemented in many high-income countries [6,7]. Currently, several pediatric combination vaccines are available. Among these, hexavalent vaccines represent the most innovative formulation to protect babies against six diseases: diphtheria, tetanus, pertussis, hepatitis B, poliomyelitis, and infection from Haemophilus influenzae type b. In the European Region, three hexavalent vaccines are authorized by the European Medicines Agency: Infanrix Hexa®, available since 2000 [8]; Hexyon®, available since 2013 [9]; and Vaxelis®, available since 2017 [10]. These three hexavalent vaccines have the same indication of use, including immunization against the six diseases and age of utilization, as described in their Summary of Product Characteristics (SmPC) [8-10]. Although a maximum age limit of use is not indicated for any of them, the fact that they contain a "pediatric" dose of antigens makes them recommended up to 7 years of age by health authorities and scientific societies [1]. Safety, immunogenicity and effectiveness of hexavalent vaccines are described in each SmPC and confirmed in several studies and clinical trials [1,11-13]. Beyond indications, the main difference among the hexavalent vaccines regards the preparation that is required for their administration: both Hexyon® and Vaxelis® are ready-to-use (RTU) in a pre-filled syringe, whereas for Infanrix Hexa® there is a need-for-reconstitution (NFR) of the Hib antigen with a syringe containing the five other components.
Preference for an RTU or NFR vaccine may be related to several factors, such as the preparation time required, the possibility to reduce mishandlings and dosage errors, cost, vaccination waste, the organization of the vaccination services in terms of time set for each vaccination, and to the characteristics of packaging that render the vaccine easier to integrate within existing databases. Moreover, individual experience and preferences of HCPs for a specific hexavalent vaccine may also dictate the selection of an RTU or NFR vaccine. Notably, it has been demonstrated that both physicians and nurses tend to prefer vaccines that require less time to prepare and manage [14]. As a consequence, the time saved may be spent on streamlining the vaccination session and providing parents with a more detailed vaccination counselling [15]. In addition, it has been reported that the higher acquisition costs of RTU vaccines are counterbalanced by lower administrative costs and increased safety compared with single-dose and multi-dose vial vaccines [16,17]. In Italy, pediatric vaccinations are delivered by the public health sector, either in vaccination centers or in family pediatricians' medical offices. In vaccination centers, public health physicians (also defined as hygienists) are those medical doctor specialists who are in charge of vaccines in vaccination centers, from the organizational and practical point of view. Within Italy, each Region runs independent tenders that are driven by price and/or scientific criteria, while product technical criteria are usually not taken into account in the assessment. To date, there remains limited data on the opinion of HCPs regarding technical aspects related to vaccination. To gain more insight into the opinions of HCPs on key aspects of the vaccination process, as well as on preferences for hexavalent vaccines, we carried out a survey of HCPs experienced in pediatric vaccinations, working in nine Italian Regions that differ by the organizational models of the vaccination services. Our survey investigated preferences and critical issues reported by the HCPs, in order to obtain information that may be useful for optimizing pediatric vaccinations in the public setting.
Qualitative phase
In Q1 2018 an experienced researcher performed ethnographic observations followed by in-depth interviews in public vaccination settings (vaccination centers and family pediatricians' offices) of three Italian Regions: in Liguria, with 6 HCPs (3 hygienists and 3 nurses) where the NFR hexavalent vaccine is used; in Apulia with 3 nurses and in Tuscany with 3 primary care pediatricians, where the RTU hexavalent vaccine is used. In general, all HCPs were experienced with both NFR and RTU formulations that are commonly available in Italy. The main purpose of the ethnographic observation was to understand how the vaccination process was managed in different Regions, in terms of HCPs involved and their role in the vaccination process. The purpose of the subsequent interviews was to highlight and discuss critical issues emerging from the daily routine vaccination process, investigating the overall image of the hexavalent vaccine (safety and tolerability), and the value of the RTU formulation.
Quantitative phase: survey target
In Q2 2018, personal in-depth interviews were carried out by inviting 265 HCPs (hygienists, nurses, and family pediatricians) from nine Italian Regions covering the north, center, and south of the country (Liguria, Lombardy, Piemonte, Emilia Romagna, Tuscany, Calabria, Campania, Apulia and Sicily). In these Regions, three hexavalent vaccines are used, including both RTU and NFR vaccines. Invited participants were selected through a purposive sampling methodology among those professionals that are in charge of the hexavalent pediatric vaccination at regional vaccination centers or as family pediatricians. The inclusion criteria for the HCPs to be interviewed were: a minimum of 10 years of experience in pediatric vaccinations and a minimum of 200 children under 2 years of age vaccinated monthly in vaccination centers or around 50 children under 2 years of age vaccinated monthly for family pediatricians.
Quantitative phase: survey characteristics
The survey consisted of 46 questions, requiring approximately 20 minutes for its completion (questionnaire in Annex 1). Computer-assisted interviews were conducted in person by an experienced interviewer and the anonymity of the results were assured before starting the interview. The overall objective was to identify the attributes of vaccination devices that may be valuable for HCPs and to evaluate advantages and disadvantages of the RTU formulation compared with the NFR formulation. Firstly, demographic and professional data were collected including: region where HCPs work, gender, age, profession, years of experience in administering vaccination, number of children under 2 years of age vaccinated in a typical month (either in vaccination centers or with family pediatricians), number of children under 2 years of age vaccinated with hexavalent vaccines, and typology of the hexavalent vaccine used. In order to investigate the daily practice of HCPs working in vaccination centers, where hygienists and nurses work together, the following data were collected: time and number of HCPs dedicated to vaccinations and ac-tivities that each of the two professional categories mostly deal with. With the aim of assessing perceptions and satisfaction towards hexavalent vaccines, participants were asked to describe: their individual experience while preparing and administering hexavalent vaccines to children, the attributes they consider more valuable for a hexavalent device, and the time dedicated to the various phases of the vaccination session (counselling, vaccine preparation, vaccine administration). Lastly, the survey asked the participants to indicate which one of the two hexavalent formulations, RTU and NFR, had certain characteristics related to the ease and safety in the preparation, administration, and disposal of the vaccine. The satisfaction and agreement of HCPs with the proposed statements were measured on a 1-10 scale (8-10 indicating high satisfaction/agreement).
Descriptive statistics were used to analyze and present results.
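Many of the percentages reported below (e.g., the share of HCPs who were "very satisfied") appear to be simple top-box summaries of the 1-10 ratings, with 8-10 counted as high satisfaction/agreement. A minimal illustrative sketch in Python, using hypothetical ratings rather than the study data:

```python
def share_rating_8_to_10(ratings):
    """Percentage of respondents whose rating on a 1-10 scale falls in 8-10,
    the band the survey treats as high satisfaction/agreement."""
    high = sum(1 for r in ratings if 8 <= r <= 10)
    return 100.0 * high / len(ratings)

# Hypothetical ratings from ten interviewed HCPs
print(share_rating_8_to_10([9, 10, 8, 7, 9, 6, 10, 8, 9, 5]))  # -> 70.0
```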
Qualitative phase
In the Liguria region, the observed vaccination staff included 2 HCPs: one hygienist and one nurse (dedicated or working mainly in other specialties). It was observed that when the nurse was dedicated, the role of the hygienist and of the nurse were interchangeable, while when the nurse was "rented" temporarily from another unit, the nurse prepared the vaccine but vaccine administration and family counselling were managed by the hygienist.
In Apulia, the vaccination staff included 2 or 3 HCPs: one hygienist and one to two nurses (one in small towns, two in the cities). It was observed that in this setting the nurse played a major role in the vaccination process, being involved in all phases from ordering to administration to disposal of the vaccine. The hygienist was in charge of checking the child's record on the database, their vaccination history, their clinical history (filled in by the parents), and scheduling the following vaccination appointment.
Considering the time and the professional figures dedicated to vaccinations in vaccination centers, the respondents working in this setting declared that approximately 4 hours for 4 days were dedicated to the vaccination of children under 2 years of age, with 2 hygienists and 3 nurses dedicated to vaccination activities only. In Tuscany, following a recent agreement with the Regional Health Authority, pediatric vaccinations have been shifted to family paediatricians, who also provide hexavalent vaccination in their practice. As a result of the interviews, 6 HCPs (3 hygienists and 3 nurses) were interviewed in Liguria, 3 nurses in Apulia and 3 family pediatricians in Tuscany (Tab. I).
The hexavalent vaccine showed a positive image across the board: it was perceived as safe and with a good level of tolerability. Moreover, although on a practical point of view vaccination is considered easy and simple to manage for the HCP, on a more emotional level, vaccine administration often becomes a potentially anxious moment for the family. As a consequence, the need for family counselling when administering the first dose of hexavalent vaccination emerged strongly and was across all Regions. The value of the RTU formulation emerged clearly, across both target and geographic areas: its value was spontaneously recognized, by users of both RTU and NFR vaccines. The advantages of the RTU formulation that emerged compared with the NFR formulation can be ranked as follows (from more relevant to less relevant): time-saving, better safety profile, better handling, simpler procedure, easier disposal of waste, more convenient set of needles. These results were considered as preliminary and were further tested during the survey phase.
Quantitative phase
In the quantitative phase, face-to-face computer-assisted personal interviews were carried out with 149 out of the 265 (56.2%) invited HCPs from the nine selected Italian Regions. Among the respondents, 60 were hygienists, 59 were nurses working in vaccination centers, and 30 were family pediatricians; 66% were female and the overall mean age was 55 years (58 years for hygienists, 51 years for nurses, and 63 years for pediatricians). The overall average number of years spent in vaccination activities was 15 years (18, 13, and 12 years, respectively, for hygienists, nurses and pediatricians). The sociodemographic and professional data of the survey participants are described in Table II. Among the HCPs, 84 (56%) used the RTU hexavalent vaccine and 65 (44%) used the NFR one. The activities in which HCPs reported being mostly involved varied amongst the professional category: talking to parents and collecting the medical history of the child were activities that hygienists mostly deal with, while nurses were in charge of preparing the vaccines and the room, taking inventory and orders, managing the stock, scheduling appointments and disposing of the waste materials. Pediatricians spent more time counselling (an average of 11 minutes) compared with hygienists (10 minutes) and nurses (8 minutes). Abbreviations: HCPs, healthcare professionals; NFR, need-for-reconstitution; RTU, ready-to-use.
Assessment of hexavalent vaccines
As for the time spent during vaccination, HCPs answered that out of an average of 17 minutes requested for each vaccination, more than half (approximately 10 minutes) was spent explaining the hexavalent vaccine and vaccination process to the parents. Vaccine preparation required an average of 3 minutes, 2 minutes were spent administering the vaccine, and 2 minutes for disposal of waste materials. Regarding hexavalent vaccination sessions, most HCPs (83.2% of the target pediatricians, 90.2% of the hygienists, and 97.2% of the nurses) expressed an 8-10 rate of agreement (very or mostly) with the declaration that giving information regarding vaccination/vaccines to parents was very demanding and time-consuming. As for managing and administrating the vaccine, 27.4% of hygienists, 29.4% of nurses, and 47.4% of pediatricians expressed an 8-10 rate of agreement (very or mostly) with the possibility of making errors during the vaccine preparation; 20.5% of hygienists, 22.5% of nurses, and 40.5% of pediatricians expressed a high rate of agreement (very or mostly) with the possibility of making errors during the vaccine administration; 18.6% of hygienists, 20.6% of nurses, and 33.6% of pediatricians very/ mostly agreed that it could be possible to forget the reconstitution of the vaccine. Key aspects of the hexavalent vaccines rated as "very important" were: minimizing the risk of needle contamination (80% of all respondent HCPs) and of needle stick injuries (79% of HCPs), being stable in case of problems of the cold chain (78% of HCPs), having low risk of errors in the reconstitution (78% of HCPs), being easy to prepare and to manage (74% of HCPs), and being ready to use (66% of HCPs). These last two aspects were particularly important for pediatricians.
RTU vs NFR vaccines
As for the overall comparison between RTU and NFR hexavalent formulations, 80% of HCPs declared their satisfaction with the advantages of RTU hexavalent vaccines was "very good": easy preparation and administration, no risk to reconstitute, low risk of needle contamination and stick injuries. On the other hand, only 40% of HCPs declared they were satisfied by the NFR formulation to a level of "very good", due to more manipulations, higher risk of needle contamination and stick injuries (Fig. 1). Figures 2 and 3 describe in detail the assessment of the two formulations, as rated by HCPs. As for safety issues related to the different syringe formulations, HCPs declared to be overall satisfied with the safety of hexavalent vaccines (49% very satisfied and 40% mostly satisfied), but a difference appeared between the two formulations with 61% of HCPs very satisfied with RTU overall syringe safety compared with only 34% of HCPs being very satisfied with NFR overall syringe safety (Fig. 4). Lastly, when asked how much the use of an RTU vaccine could facilitate when vaccinating children under the age of 2 years, 92% (from 90% of hygienists to 93% of both nurses and pediatricians) expressed a score of 8-10 (indicating high satisfaction/agreement). Moreover, HCPs declared that the time saved in preparation of RTU vaccines can be more effectively spent on vaccination counselling during the visit.
Tab. II. Quantitative phase: demographic and professional characteristics of healthcare professionals.
Discussion
This survey focused on relevant aspects of the hexavalent vaccines, such as handling, time needed for the different phases of vaccination sessions, errors and safety related to the formulation, with a comparison between RTU and NFR vaccines. Issues related to the safety or immunogenicity of hexavalent vaccines were not our objective because these aspects are already well documented and considered similar [18]. According to the inclusion criteria, vaccination centers and family pediatricians, respectively, had to vaccinate a minimum of 200 children and around 50 children under the age of 2 years each month. Of these, more than twothirds were administered a hexavalent vaccine. Thus, the surveys respondents' long-standing knowledge of the issues involved in vaccinations constitutes a reasonable guarantee of validity in the assessment of hexavalent vaccines. For Italian family pediatricians, vaccination is not a routine activity in their daily practice, but we chose to include this category as the Tuscany region has recently stated that family pediatricians should administer hexavalent vaccines in their medical offices, and this practice could be soon adopted by the other Italian Regions as a measure to increase coverage rates. In this regard, it has been demonstrated that physicians' recommendation is an important predictor of vaccine acceptance, constituting a major factor in receiving or intending to receive any vaccine [19]. For this reason, the involvement of all HCPs in our survey resulted essential to identify critical issues and thus highlight potential areas for additional intervention targeted at specific professional categories. Family pediatricians work autonomously in their office, thus being in charge of all the different phases of vaccine administration. As a consequence, as emerged in our study, they are able to perform only a limited number of vaccinations per month (i.e., 48 vaccinations to children < 2 years of age) and appeared more concerned about making errors during preparation, administration and reconstitution of the hexavalent vaccine compared with other HCPs. As is known in the literature, pediatricians can have a key role in increasing awareness about the benefits of pediatric vaccinations and educating parents [20]: in our study, pediatricians spent more time counselling than hygienists and nurses. For all these reasons, an RTU formulation may be preferable, for all HCPs, and in particular for pediatricians, as it was demonstrated to render all processes not only easier and safer, but also more rapid. Similarly, our research demonstrated that RTU formulation of hexavalent vaccines was widely preferred to NFR vaccines among all HCPs because it simplified the preparation, minimized the number of manipulations and error risks: in fact, 80% of HCPs declared they were very satisfied with RTU vaccines compared with only 40% of HCPs who were very satisfied with NFR. The perceived benefits of an RTU vaccine included easier and quicker preparation with less risk of errors such as the risk of forgetting to reconstitute the Hib or not taking all the Hib antigen from the vial. It was also seen to minimize the risk of needle contamination and needle stick injury and to produce less waste material.
Although previously published studies used different definitions of vaccine preparation time, as well as different methodologies for data collection and analysis, our results are in line with the existing literature. Handling, dosage errors, and reduced preparation time were all highlighted as important attributes of a fully liquid RTU vaccine versus one requiring reconstitution in a previous survey of physicians and nurses on hexavalent pediatric vaccines conducted in Germany [14]. In particular, both the present and previous studies highlighted that HCPs are concerned about minimizing the risk of errors during vaccination, which may be reduced by using a fully liquid hexavalent vaccine [4,14,21]. Indeed, a time and motion study comparing RTU versus non-fully liquid vaccines showed that mishandlings were five times more common with an NFR hexavalent vaccine than with the RTU vaccine [4]. In our study, 77% of HCPs rated the low risk of reconstitution errors as "very good" for RTU vaccines, versus 46% for the NFR formulation. In addition to the reduced risk of error, it has been reported that an RTU hexavalent vaccine can be prepared in less than half the time needed to prepare an NFR vaccine [4,15].

Using the time difference of 35 seconds observed in the study by De Coster and colleagues for an HCP to prepare an RTU hexavalent vaccine versus an NFR vaccine, we can estimate the number of hours saved per year thanks to the simpler and quicker preparation of the RTU formulation. Applying these data to the Italian context, using the hexavalent vaccination coverage (95%) of the birth cohort (440,000 newborns in 2018) and the number of hexavalent doses in the recommended pediatric schedule (3 doses, 2+1 schedule), we estimated approximately 12,000 hours saved per year, corresponding roughly to the workload of 7 HCPs working in public settings; from a broader healthcare service perspective, this time could be re-allocated to other tasks or units, with a potential saving for the public organization. Time saved is a significant aspect considering that the HCPs involved in our study devoted a substantial amount of time to vaccinations (approximately 17 minutes per vaccination), a large part of which was dedicated to informing and educating parents (around 10 minutes). Therefore, the time saved in preparing and administering the vaccine could be spent more productively with the parents and the baby.

Our study is limited in the generalizability of its results. The purposive sampling used to select the HCPs and the regions involved in the two phases of the study may reduce the representativeness of our findings. Moreover, our results may not generalize to other countries, owing to potential differences in the organization of vaccination programs and in cultural preferences for specific pharmaceutical forms. On the other hand, this study is one of the very few pieces of evidence supporting the switch from NFR to RTU vaccines that takes into consideration HCPs' preferences as well as time saved and the simplification of vaccine preparation and management, as already described in the literature. Extending this work to a larger sample and to other contexts could confirm our findings.
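As a rough cross-check of this estimate, the short calculation below reproduces the figure of roughly 12,000 hours per year from the inputs reported above (35 seconds saved per dose, 95% coverage of a 440,000-newborn cohort, 3 doses per child). The assumed 1,700 annual working hours per HCP is our own illustrative value, not a figure taken from the study, so the resulting "7 HCPs" should be read only as an order-of-magnitude equivalence.

# Illustrative re-calculation of the time-saving estimate (assumptions noted above).
seconds_saved_per_dose = 35      # De Coster et al.: RTU vs NFR preparation time
birth_cohort = 440_000           # Italian newborns in 2018
coverage = 0.95                  # hexavalent vaccination coverage
doses_per_child = 3              # 2+1 schedule

doses_per_year = birth_cohort * coverage * doses_per_child
hours_saved = doses_per_year * seconds_saved_per_dose / 3600
print(f"Hours saved per year: {hours_saved:,.0f}")            # ~12,200

annual_hours_per_hcp = 1_700     # assumed full-time workload; illustrative only
print(f"Equivalent full-time HCP workloads: {hours_saved / annual_hours_per_hcp:.1f}")  # ~7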
Conclusions
The present study has highlighted aspects that are important to HCPs when considering a hexavalent vaccine. We observed that HCPs prefer a vaccine that reduces the time needed for preparation while minimizing the risk of errors as much as possible. Accordingly, easy-to-use, fully liquid vaccines are desirable, and fully liquid hexavalent vaccines in pre-filled syringes have many characteristics that HCPs value as important. An RTU vaccine minimizes the risk of errors, especially the risk of forgetting to reconstitute the powder in the main syringe or of failing to take up all the powder. RTU vaccines also reduce the risk of needle contamination and needle stick injuries, as only one needle is used. The advantages in terms of time saving are clear: less time is needed for vaccine preparation and administration, which leaves more time for counselling by the individual HCP or allows re-allocation to other tasks or units if a broader healthcare service perspective is used. Therefore, in comparable contexts of immunogenicity, tolerability and safety, RTU vaccines appear to offer practical advantages over NFR vaccines. We also expect that these technical aspects will be taken into account by regional decision makers when choosing which type of vaccine to adopt.
Questionnaire Paediatric Vaccination 36598
Length of interview: 20 minutes
III. INTRODUCTION
Good morning, the Healthcare Department of Gfk Italy Company is conducting a survey on pediatric vaccinations. We would like to ask whether you are willing to take part in this survey. The interview will take about 20 minutes. Everything you say will be treated anonymously, with the utmost confidentiality and for statistical purposes only. Thank you for your collaboration. (Privacy Law)
PHARMACOVIGILANCE
Adverse events/exposure to the drug during pregnancy/complaints about the product.
We are now asked, as a company operating in marketing research, to pass on to pharmacovigilance (PV) services details of any adverse events, including exposure to the drug during pregnancy or breast-feeding, suspected transmission of infectious agents, technical/qualitative issues, drug interactions and particular situations such as overdose, abuse, improper use, administration errors, drug prescription errors, occupational exposure and lack of effectiveness, that are mentioned during the discussion in relation to a product of the Company that commissioned the survey. Although what you say will, of course, be treated in confidence, should you mention during the discussion any adverse event (or any of the situations described above) that occurred in a specific patient, we will need to report it, even if you have already reported it directly to the company or to the Italian bodies in charge (we remind you that you can report using the AIFA web site http://www.agenziafarmaco.gov.it/it/content/modalit%C3%A0-di-segnalazione-delle-sospette-reazioniavverse-ai-medicinali). In this situation you will be asked whether you are willing to waive the confidentiality granted to you under the codes of conduct, specifically in relation to that adverse event/drug exposure during pregnancy or breast-feeding/complaint about the product. All the other information that you provide during the interview will remain confidential.
PV_1 [S]
Are you willing to take part in this interview on the basis of these premises?
"year": 2020,
"sha1": "ca6cd2656b7170cbbc0b9d388f77fa5067d29921",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ca6cd2656b7170cbbc0b9d388f77fa5067d29921",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
A convenient protocol for generating giant unilamellar vesicles containing SNARE proteins using electroformation
Reconstitution of membrane proteins in artificial membranes is an essential prerequisite for functional studies that depend on the context of an intact membrane. While straightforward protocols for reconstituting proteins in small unilamellar vesicles were developed many years ago, it is much more difficult to prepare large membranes containing membrane proteins at biologically relevant concentrations. Giant unilamellar vesicles (GUVs) represent a model system that is characterised by low curvature, controllable tension, and a large surface that can be easily visualised with microscopy, but protein insertion is notoriously difficult. Here we describe a convenient method for efficient generation of GUVs containing functionally active SNARE proteins that govern exocytosis of synaptic vesicles. Preparation of proteo-GUVs requires a simple, in-house-built device and standard, inexpensive electronic equipment, and employs a straightforward protocol that largely avoids damage to the proteins. The procedure allows upscaling and multiplexing, thus providing a platform for establishing and optimizing the preparation of GUVs containing membrane proteins for a diverse array of applications.
Nevertheless, the preparation of proteo-GUVs still requires time-consuming optimization, as protocols are frequently difficult to reproduce between laboratories (own experience and personal communication with other researchers in the field), largely because not all variables affecting the outcome are controlled and optimised. These include, for instance, details of the electroformation chamber design: the slide resistance and the spacer thickness (needed to calculate the electric field) when ITO slides are used, or the wire thickness and the axial distance between the wires when Pt electrodes are used. Moreover, the parameters of the applied electric field are critical for the outcome, with the results depending on the chamber geometry and the precise voltage-time profile of the applied field.
Here we report a convenient protocol for the preparation of proteo-GUVs containing functionally active neuronal SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) proteins for the study of membrane fusion in vitro. SNARE proteins represent a superfamily of small, mostly membrane-anchored proteins that catalyse the fusion of membranes in all eukaryotic cells. In neurons, exocytosis of synaptic vesicles is mediated by the SNARE proteins syntaxin-1A and SNAP-25, present at the plasma membrane, and synaptobrevin-2, present on the vesicles. Our protocol is straightforward and requires only a simple and affordable, in-house-built setup; it can therefore be easily adapted for other proteins and lipid compositions.
Results
Setup Design. Electroformation of GUVs can be performed by applying an alternating electric field in a formation chamber that either consists of two glasses coated with conductive material (such as ITO) separated by a spacer (see e.g. refs 18, 20-22; Fig. 1a), or contains two Pt electrodes (presented in this work; Fig. 1b). In the first approach, the parameters that critically influence electroformation are the electrical resistance of the ITO coat and the distance between the two conductive surfaces. When Pt wires are used for preparation of GUVs, the two important parameters are the axial distance and the thickness of the two parallel electrodes. In both cases, additional parameters have a strong influence on GUV formation, such as chamber volume, chamber cleanliness and deterioration due to repeated use, the concentration of lipids, the method of lipid drying, and the composition of the buffer used for electroformation.
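Because both chamber geometries ultimately apply a known voltage across a known gap, a crude way to compare protocols between laboratories is the nominal field strength, i.e. the applied voltage divided by the electrode separation. The sketch below, given only as an illustration, computes this figure for the Pt chamber used here (2.2 V peak-to-peak across a 2.5 mm axial distance); note that this plate-style estimate ignores the strongly non-uniform field around cylindrical wires, and the ITO spacer value shown is a hypothetical example rather than a number from this work.

# Nominal field strength E = V / d, a rough figure for comparing electroformation setups.
# For Pt wires this ignores the field non-uniformity around the cylindrical electrodes.
v_pp = 2.2            # applied peak-to-peak voltage (V)

d_pt = 2.5e-3         # axial distance between the Pt wires (m), from this work
d_ito = 1.0e-3        # example ITO spacer thickness (m); hypothetical value

for label, d in [("Pt chamber", d_pt), ("ITO chamber (example spacer)", d_ito)]:
    print(f"{label}: E ~ {v_pp / d:.0f} V/m ({v_pp / d / 1000:.2f} V/mm)")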
Previously, we prepared SNARE-containing GUVs using ITO-coated glasses 18,20 (see also Fig. 1a). However, GUV formation was not easily reproducible and yielded relatively small GUVs (often below 5 µm in diameter), prompting us to work out a more reliable protocol that would yield larger GUVs. We therefore designed a Pt wire-based electroformation chamber (in short, Pt chamber; Figs 1b and 2a; other designs are presented, for example, in refs 13, 23). The main idea was to design a chamber that could be effectively cleaned with organic solvents (to remove residual lipids) and that would allow easy monitoring of GUV formation (compatibility with a standard Zeiss microscopy stage). For this purpose, largely chemically inert PTFE (polytetrafluoroethylene) was chosen as the main material for the chamber, with two Pt wires (0.5 mm in diameter) embedded close to the chamber bottom. The dimensions of the chamber were chosen to allow sealing with a standard-size (25 mm in diameter) microscopy coverslip and to fit, together with the wiring, on a microscopy stage (Figs 1b and 2a,e). For electroformation we used a digital function generator (Velleman PCGU1000), connected via USB to a Windows PC (Fig. 1c). This function generator is inexpensive in comparison with other (usually stand-alone) laboratory function generators, and the output voltage waveform can be easily programmed within the accompanying software (PcLab2000SE, Velleman). The Pt chamber was connected to the function generator using a cable with a BNC connector and pin sockets. In this setup, the socket pitch of 2.54 mm fits the 2.5 mm axial distance of the Pt electrodes, while for the ITO chamber the pin sockets were replaced with crocodile clips (Fig. 1c). Additionally, by using BNC Y-splitters, multiple chambers can be connected to one function generator. In conclusion, the whole electroformation setup consists of a PC, a function generator, connecting cables, and electroformation chambers (Fig. 1c). One electroformation chamber can then be placed on a microscope (as shown in Fig. 2e, to allow live monitoring, see Supplementary Video 1), while others can be placed for stability in a suitable stand (like the one made from polyethylene foam shown in Fig. 2d).
Formation of GUVs in a Pt chamber. The most common approach for the formation of GUVs containing transmembrane proteins is to start with small liposomes reconstituted with membrane proteins using standard protocols, e.g. detergent removal by size exclusion chromatography 3,20,24 (Fig. 3a). These proteoliposomes are deposited on the conductive surface in the electroformation chamber and dried in order to remove the aqueous buffer. For instance, for preparing SNARE-GUVs with a Pt chamber, 5-7 × 1 µl drops of SUVs are deposited on each Pt wire (10-14 drops/Pt chamber) and dried under vacuum for around 30 minutes (Fig. 3a). Next, the Pt chamber is sealed with a coverslip (25 mm diameter, coated with β-Casein to prevent bursting of GUVs that make contact with the glass surface) and silicone glue (see photo in Fig. 2c). The sealed chamber is then connected to the function generator and filled with an electroformation solution, typically water with sucrose (we used 800 µl of 200 mM sucrose solution in each chamber; see photos in Fig. 2d and e). Immediately afterwards, electroformation is started by switching on the AC field. In our hands, the best GUV quality and highest protein activity were obtained when electroformation was performed for 1 h at 10 Hz, 2.2 V pp (peak-to-peak voltage, sine wave shape), followed by a detachment phase (detaching GUVs from the Pt wires into the solution) of 30 min at 2-4 Hz, 2.2 V pp (sine wave shape; Fig. 3b and Supplementary Video 1). After detachment, GUVs are collected by pipetting with a cut 1 ml micropipette tip and transferred directly to the imaging chamber, or stored refrigerated for up to a week in a microcentrifuge tube (Fig. 3a).
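For record keeping, the voltage-time profile just described can be written down as a small table; the sketch below is only a convenient summary of the published parameters, not an interface to the generator, whose waveform is in practice programmed through the accompanying PcLab2000SE software. The 3 Hz value in the detachment step is one choice from the reported 2-4 Hz range.

# Summary of the electroformation profile reported in the text (both steps use a sine wave).
ELECTROFORMATION_PROFILE = [
    # (phase,       duration_min, frequency_Hz, peak_to_peak_V)
    ("growth",      60,           10,           2.2),
    ("detachment",  30,           3,            2.2),   # 2-4 Hz range; 3 Hz chosen here
]

total_minutes = sum(step[1] for step in ELECTROFORMATION_PROFILE)
print(f"Total electroformation time: {total_minutes} min")   # 90 min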
GUV quality analysis -vesicle diameter and efficiency of protein reconstitution. Depending on the biological problem to be studied, the average diameter of GUVs may be critical. For instance, in some experiments handling and visualization of larger GUVs may be beneficial. The SNARE-GUVs prepared with a Pt chamber have diameters ranging from around 5 to 30 µm (Fig. 4a and c). Thus, the average diameter (13.5 µm, Fig. 4a) is substantially larger than that of the same GUVs prepared with ITO slides (5.8 µm) 18 . Another parameter critical for the assessment of GUV quality is the amount of protein incorporated in the membrane. In GUVs prepared with a Pt chamber we observe efficient protein incorporation by monitoring fluorescence intensity of Texas Red labelled proteins in the GUV membrane (Fig. 4b). By comparing these intensities with those of labelled lipid (for details see Materials and Methods and ref. 18 ), the protein concentration in the membrane can be estimated (see histogram in Fig. 4b).
Although the protein to lipid ratio showed some variability, there was no correlation with the size of the GUVs.
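The ratiometric estimate described above can be illustrated with a minimal sketch. The intensity values and the reference dye fraction below are hypothetical numbers chosen only to show the arithmetic; the actual calibration follows refs 13 and 18 and involves imaging controls not reproduced here.

# Sketch of the ratiometric protein-density estimate (hypothetical numbers).
# Assumes the labelled protein and the reference dye (Texas Red DHPE) are imaged
# under identical settings, so membrane rim intensities can be compared directly.
I_protein_membrane = 1200.0     # rim intensity of Texas Red-labelled SNARE complex (a.u.)
I_reference_membrane = 800.0    # rim intensity of calibration GUVs with Texas Red DHPE (a.u.)
reference_dye_fraction = 0.001  # mol fraction of labelled lipid in the calibration GUVs

protein_to_lipid = reference_dye_fraction * I_protein_membrane / I_reference_membrane
print(f"Estimated protein:lipid molar ratio ~ 1:{1 / protein_to_lipid:,.0f}")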
GUV quality analysis -protein activity. SNARE proteins catalyse most membrane fusion reactions in eukaryotic cells 25 . Therefore, the best test for their activity upon membrane reconstitution and vesicle formation is to perform fusion assays 26 . Here we measured fusion using a lipid mixing assay 18,27 . In this experiment, immobilised GUVs containing labelled lipid NBD-PE as fluorescence donor and a stabilised complex of plasma membrane SNARE proteins 28 , were incubated with SUVs containing Lissamine Rhodamine-PE (Rho-PE) as fluorescence acceptor and the vesicular SNARE (schematic illustration in Fig. 4d). Upon SNARE-mediated membrane fusion, these two labels are in the same membrane and undergo Förster resonance energy transfer (FRET), causing quenching of NBD. If Rho is then bleached, a corresponding recovery of the NBD fluorescence intensity is observed (Fig. 4e, red). As a control for the specificity of this reaction, we used a synaptobrevin mutant (Δ84) 29 that stops the fusion reaction at the docked state, preventing mixing of lipids and thus reducing FRET (Fig. 4e, grey).
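Donor-dequenching data of this kind are often summarised as a single number; the sketch below shows one such readout, using hypothetical NBD intensities measured before and after bleaching the Rhodamine acceptor. The exact normalisation used in the study follows refs 18 and 27 and may differ in detail.

# Sketch of a donor-dequenching readout for the NBD/Rhodamine lipid-mixing assay.
# Hypothetical intensities; the published analysis may normalise differently.
F_before_bleach = 520.0   # NBD intensity with Rho present (partially quenched by FRET)
F_after_bleach = 760.0    # NBD intensity after bleaching the Rho acceptor

# Fraction of NBD signal lost to FRET, i.e. the lipid-mixing signal.
fret_efficiency = 1.0 - F_before_bleach / F_after_bleach
print(f"Apparent FRET efficiency (lipid-mixing signal): {fret_efficiency:.2f}")   # ~0.32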
Discussion
Here we describe a convenient procedure for preparing proteo-GUVs containing SNARE proteins of the presynaptic plasma membrane using in-house-built devices. The electroformation chamber described here is made from PTFE and can thus be cleaned with organic solvents. Moreover, the chamber allows the formation of GUVs to be monitored directly under a microscope. Additionally, the function generator used in this study can be easily programmed, allowing multiple electroformation protocols to be tested (a crucial step when establishing a protocol for proteo-GUV formation). The protocol described here is convenient and avoids some of the problems associated with other methods. For instance, procedures involving osmotic shock 6 require repetitive drying-rehydration cycles, which are likely to be detrimental for maintaining membrane proteins in a functional state. Furthermore, gel-assisted swelling 30 was reported to yield GUVs with altered mechanical properties 31 . Another possibility is to reconstitute proteins into preformed GUVs with the aid of low concentrations of a detergent 10 , yet this requires extensive optimization of detergent type and concentration 3 , and it is very difficult to achieve efficient protein insertion while keeping the GUVs intact. For special purposes, i.e. when membrane asymmetry is required, GUVs may be prepared with an inkjet method 12 ; however, this technique requires more specialised and expensive equipment.
We conclude that our protocol offers a convenient method for the preparation of large GUVs containing moderate to high concentrations of membrane proteins. The yield of high-quality GUVs is comparatively high, and only a single drying step is required, which helps to preserve protein activity. For sensitive proteins, additional protection during the drying process may be necessary, for instance by adding disaccharides or ethylene glycol 3,32-34 .
Labelled ΔN complex was formed by replacing SNAP-25 with a S130C mutant labelled with Texas Red.
Preparation of small unilamellar vesicles and fluorescent labelling of vesicles. Small unilamellar vesicles (SUVs) containing SNARE proteins (the plasma membrane SNARE complex or synaptobrevin) were prepared by co-micellization followed by size exclusion chromatography as described before 18 .
Preparation of giant unilamellar vesicles.
GUVs containing SNARE proteins were prepared from vacuum-dried proteo-SUVs by the electroformation procedure, using an in-house-built Pt electrode electroformation chamber (referred to as the Pt chamber, see Fig. 2). The detailed GUV preparation protocol is described in the Results section. Prior to use, the Pt chamber was cleaned by bath sonication (around 5-10 min) in ethanol and subsequently in chloroform. For sealing of the chamber, microscopy coverslips (25 mm in diameter) were used; these were first cleaned with ethanol and isopropanol, then coated with β-Casein (3 mg/ml, 5 min), and finally rinsed with water and dried.
Microscopy imaging and data analysis. The formation of GUVs was directly monitored at low magnification in the electroformation chamber with an epifluorescence microscope. For visualization at higher magnification, GUVs were collected after the electroformation procedure and transferred to the imaging chamber containing a coverslip functionalised with biotinylated BSA and neutravidin 18 , and imaging buffer (20 mM HEPES/KOH pH 7.4, 150 mM KCl, 1 mM MgCl2; at least 1.5× the volume of the GUV solution to be added). GUVs were allowed to settle for around 30 min prior to imaging, resulting in surface attachment. Microscopy imaging was done with a Zeiss Axiovert 200 epifluorescence microscope or with a Zeiss LSM 780 confocal microscope.
The efficiency of protein reconstitution was determined as described in ref. 13 , following the detailed protocol described in ref. 18 , by comparing the membrane fluorescence intensity of the Texas Red labelled ΔN complex with that of a known concentration of Texas Red labelled DHPE. Bulk lipid mixing experiments were performed essentially as described in ref. 18 . Image analysis was performed in Fiji 41 with self-written scripts 18,42,43 . Data availability. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
"year": 2018,
"sha1": "6ed0e968d5922839e530af9b63905a53d6c7a805",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-27456-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ed0e968d5922839e530af9b63905a53d6c7a805",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
SERBIAN EFL LEARNERS' PREFERENCES REGARDING STANDARD PRONUNCIATION MODELS 1
This paper observes Serbian EFL students' attitudes to standard models of pronunciation such as General American (GA) and Southern British Standard (SBS). Previous research on this issue disclosed learners' overall inclination towards SBS as the preferred English variety. The aim of this research was to test whether Serbian students still hold such views, or whether the American variety is on its way to replacing British English and becoming the preeminent reference accent for pronunciation learning. Another aim was to observe the underlying reasons for learners' pronunciation preferences. The data was collected using a direct method, i.e. a written questionnaire. The respondents were 85 English-major students at the Faculty of Philology and Arts in Kragujevac. The analysis showed that accuracy of pronunciation remains a high-priority goal, since 85% of the respondents reported that a native-like accent was their ultimate objective. However, unlike previous attitudinal studies, this research revealed learners' higher preference for GA. Such a preference is to be expected, since the respondents' self-reported exposure to this particular variety was much higher than their self-reported exposure to SBS.
INTRODUCTION
In contexts where English is learned as a second (ESL) or foreign language (EFL), the question as to which pronunciation model 2 aligns more closely with the overall aim of pronunciation teaching remains unanswered. There are, in fact, strong reasons to believe that it may be more feasible to adhere to the standard models of pronunciation. Still, provided that we take this stance, we are left with another probing question: which standard model do we refer to, precisely? Researchers generally accept that the Standard British and Standard American varieties are the two varieties studied by most foreign learners (Algeo 2006: 1; Trudgill, Hannah 2008: 6). The term Standard English denotes a variety 3 of the English language that has been codified and is, therefore, widely used in dictionaries, handbooks, and grammars (Biber et al. 1999: 18). It is the variety which is adopted by publishers, represented in the media, and encouraged in classrooms (Bussmann 1999: 1117; Biber et al. 1999: 18). Several studies have previously demonstrated that Standard English varieties generally enjoy greater prestige among native speakers (NS) (Wells 1982a: 34). Such varieties are also widely regarded as more desirable given the fact that they tend to be associated with the better-educated members of society (Crystal 2008: 404). Since standard varieties presumably enjoy this privileged status in the eyes of native English speakers, it is not surprising that many non-native learners and teachers do not seem to hold different views.
1 The author gratefully acknowledges support from the Ministry of Education, Science and Technological Development of the Republic of Serbia (Contract No. 451-03-68/2022-14/200198).
2 The term model is used here to describe "pronunciation characteristics of the language a teacher presents to learners in the classroom" (Kelly 2001: 14).
Still, in EFL contexts, the decision to teach standard models of pronunciation, specifically Southern British Standard (SBS) 4 and General American (GA) 5 , is not necessarily rooted in language ideologies and social preferences. Rather, there are more practical reasons. The overall objective is to avoid any comprehension problems which might arise when EFL students are faced with native accented speech (Dimitrova, Chernogorova 2012: 207). Moreover, GA and SBS are understood by all NSs, they are codified for teaching purposes, pedagogically tested, and are, in fact, the language of the media (Ibid.: 209). More importantly, they are the reference accents for nearly all teaching materials on English pronunciation, not only in the UK and the US, but also internationally (Ashby 2011: 11;Dimitrova, Chernogorova 2012: 209). These, as well as some other factors which will be more thoroughly explored in the following sections, are the primary reasons why native pronunciation models, such as SBS and GA, have been widely advocated.
However, it is important to mention that not all researchers regard Standard English accents as being suitable for EFL classrooms. In fact, pronunciation models such as GA or SBS are, at times, rated less favorably. This is primarily because EFL students often fail to attain native-like pronunciation. That, in particular, has led scholars to believe that some sort of neutral, simplified, and more universal pronunciation variety might be a more realistic goal (Jenkins 1998: 120). More specifically, it is believed that EFL learners might benefit more from being exposed to varieties such as Euro-English, International English, World Englishes, or the nowadays most commonly cited non-standard variety, English as a lingua franca (ELF) (Jenkins 2006; Bugarski 2004; Ošmjanski 2016a; 2016b). 6 This is simply because, in EFL contexts, English is no longer learned in order to communicate with NSs (Jenkins 1998: 119). Instead, EFL students are more likely to encounter speakers of other non-standard varieties (Ibid.). This idea is what originally inspired the development of ELF, which is defined as "a contact language used only among non-mother tongue speakers" (Jenkins 2006: 160). Therefore, the proponents of ELF believe learners' main objective should not be to approximate any standard model of pronunciation, since, as they claim, that might only dishearten the learners (Jenkins 1998: 124; Derwing, Munro 2005: 384). Instead, these scholars believe it is necessary to shift the focus from teaching standard varieties to teaching something more attainable, specifically in terms of pronunciation.
3 In this paper, the term variety will be used synonymously with the term accent. Namely, we choose to observe only the spoken language, i.e. pronunciation.
4 Recently the term Southern British Standard (SBS) has been used to refer to the Standard British model of pronunciation (Brooks 2015: 13). SBS is nowadays considered to be a more politically correct term, even though some scholars still prefer using the widely known label RP (Čubrović 2004: 40). Another commonly used term is Standard Southern British English (SSBE) (Carr 2008: 9).
5 General American (GA) is an umbrella term for the majority of American accents that do not show marked regional characteristics, unlike, for example, the Southern US accent (Wells 1982b: 470).
6 The aim of this research was to closely inspect Serbian students' attitudes towards standard accents of English. Therefore, we were unable to provide an in-depth study of the non-standard varieties mentioned here. For a more comprehensive review see Jenkins (1998) and Ošmjanski (2016a).
Nevertheless, there are several issues when it comes to teaching non-standard varieties such as ELF. Firstly, there is still no detailed linguistic description of ELF (Ošmjanski 2016b: 150), or of the other non-standard varieties mentioned here (Bugarski 2004: 9). This presents some practical difficulties when it comes to designing teaching materials, and it proves perhaps even more problematic when evaluating students' progress. The plurality of linguistic features which ELF and other non-standard varieties seem to promote might also make it difficult to distinguish between local variation and errors (Jenkins 2006). Lastly, when we look at the results of attitudinal studies concerning EFL students' perception of English varieties, students' preference for native pronunciation models becomes even more evident. This notion will be examined more fully in the following sections; for the present, suffice it to say that non-standard models of pronunciation might not, in fact, be what EFL students truly wish to emulate.
In the hope of reaching a more amicable conclusion about the preferred pronunciation model in the Serbian educational context, this paper will explore students' attitudes towards varieties such as GA and SBS. Our decision to focus on standard pronunciation models was motivated by the aforementioned advantages of using standard models as reference accents. Another reason is that Serbian students are more likely to be familiar with these accents, since that is what they are generally exposed to in their formal education. Thus, we assume that, if they do strive for native-like pronunciation, they will most likely gravitate towards these accents. At the same time, we wish to observe whether students still regard British English as their preferred variety, as some of the previous studies on pronunciation preferences have indicated (Jerotijević Tišma, Karavesović 2019: 72; Grubor et al. 2008: 126). Before we do so, however, it is necessary to note some of the most common methods for conducting attitudinal research. Some of these methods and approaches will be employed in this research as well.
DIRECT AND INDIRECT ASSESSMENT OF ATTITUDES
There are linguists (e.g. Hassan 2018) who believe that acquiring a native accent is plausible provided that conditions such as high motivation, a strong desire to sound native, and good linguistic aptitude are met. Apart from these conditions, the attained proficiency is believed to depend heavily on positive attitudes 7 towards a variety (Dalton-Puffer et al. 1997: 115). It is important to note that having a positive attitude towards a specific variety does not, in itself, guarantee success in mastering it. Nevertheless, positive attitudes most certainly govern students' choice as to which pronunciation model they wish to attain. That is why nearly all studies on pronunciation preferences are, in fact, attitudinal studies which aim at eliciting learners' direct or indirect responses to different models of pronunciation.
In a direct approach, subjects are either interviewed or asked to complete a questionnaire with directly posed questions about their opinion regarding a certain variety (Coupland, Bishop 2007: 75;Stojić 2017: 311). This approach can yield concrete results provided that the respondents are truly familiar with the accent they are evaluating. Direct assessment of attitudes can also limit the chances of misidentification and is particularly useful when analyzing specific reasons underlying the respondents' preference for a certain pronunciation model.
In a more indirect approach, subjects are first presented with speech samples of various English accents. They are then asked to rate those samples in terms of prestige or social attractiveness (Coupland, Bishop 2007: 74; Pilus 2013: 145-146). When it comes to indirect assessment of attitudes, there are two methods which are commonly used so as to gather information about learners' responses to different varieties. These are the verbal-guise and matched-guise techniques. The verbal-guise technique employs different speakers to represent different speech varieties, whereas the matched-guise technique employs only one speaker to represent multiple varieties (Carrie, McKenzie 2017: 316). Both verbal-guise and matched-guise tasks are usually accompanied by an interview or a written questionnaire (Evans 2005: 240). In the latter, subjects are presented with a list of attributes and are asked to indicate to what degree those attributes apply to the speech samples they hear, i.e. to the speakers whose speech they are rating (Dalton-Puffer et al. 1997: 118). The purpose of such tasks is to elicit qualitative comments so as to gain a better insight into the subjects' pronunciation evaluations. Ideally, indirect approaches can uncover more deeply held beliefs and stereotypes regarding accents (Coupland, Bishop 2007: 75). Nevertheless, cross-validation of results often requires a combination of both direct and indirect approaches (Stojić 2017: 311).
7 Attitudes are mental constructs which are shaped by our experience. They are a psychological tendency in that they predispose us to either favorable or unfavorable reactions to certain situations, people, or objects. This suggests that attitudes cannot be neutral by definition (Dalton-Puffer 1997: 115-118; Stojić 2017: 310).
PREVIOUS RESEARCH ON PRONUNCIATION PREFERENCES
Recent studies on accent preferences (e.g. Pilus 2013; Dalton-Puffer et al. 1997; Wong 2018; Paunović 2009) have mostly focused on exploring the attitudes of both teachers and learners towards Standard British, Standard American, as well as some localized English accents, using the verbal-guise experiment. Interestingly enough, studies report different findings in terms of whether or not non-standard accents are negatively evaluated. Generally, both learners and teachers tend to rate standard varieties, especially SBS, higher than their localized variety (Carrie, McKenzie 2017; Dalton-Puffer et al. 1997; Wong 2018; Evans 2005; Henderson et al. 2012; Grubor, Hinić 2011; Jerotijević Tišma, Karavesović 2019). The negative evaluation of localized English varieties is often motivated by learners' belief that heavily accented speech might confuse the hearer (Wong 2018: 180). On the other hand, some of the most commonly cited reasons for preferring the British accent are greater familiarity with the model, since it was taught at school, and greater ease in understanding, as well as speaking, British English (Pilus 2013: 150). Of course, this is not to say that there are no studies that advocate the accented speech of non-native speakers (NNS). Such studies usually report on learners' desire to retain an accent so as to communicate their identity to others (Ibid.). Even so, scholars like Wong (2018: 177) emphasize that very few studies actually show that English learners would not like to speak like natives. Although some scholars might be critical of such aspirations, a native accent seems to remain a high-priority goal according to most research findings.
It appears that learners are strongly influenced by the accents they hear around them, particularly those accents which they hear in their classrooms. The fact that pronunciation models that served as reference accents in EFL classrooms are typically rated higher was corroborated in a study carried out by Dalton-Puffer et al. (1997). In this research, the authors tested the attitudes of 132 university students of English in Austria to both native (RP and GA) and non-native English varieties (Austrian English). The findings led the authors (Ibid.: 120) to conclude that the learners' preference for British English can partly be explained by the geographical closeness of the British Isles to Austria. Because of the geographical closeness, the authors hypothesized that their learners might have greater chances of encountering and thus interacting with speakers of that particular variety. It is presumably this which then leads to the greater familiarity with the British accent and the learners' desire to imitate it.
However, the proximity might not always be strictly geographical. Rather, it might also be psychological due to the prominence of a specific culture (mostly through media) (Carrie, McKenzie 2017: 328). Familiarity with the target accent, which seems to be the consequence of the aforementioned proximity, is what determines students' ability to correctly identify diverse accents (Flege 1984: 704). Namely, in a recent study, Carrie and McKenzie (2017: 316) investigated the Spanish learners' (N = 71) ability to correctly identify speakers of RP and GA. They (Ibid.: 330) found that recognition rates correlate with the previous exposure to a variety (either through education or through media). It is precisely this psychological closeness which might explain why even those learners that come from countries which are geographically far from the native English speaking countries (like the UK, the USA, Canada, etc.) are occasionally capable of approximating their pronunciation to the native model.
We must not overlook the fact that learners sometimes fail to correctly identify English accents. This can, of course, greatly affect the pronunciation evaluation. Namely, a number of recent studies (Wong 2018;Carrie, McKenzie 2017) disclosed learners' inability to correctly identify accents such as Australian, Canadian, New Zealand, American and British English. Wong's (2018: 180) analysis, for instance, showed rather poor accent recognition rates (14%), despite the subjects' high preference for British English, and the fact that they labeled their own accent as British. On the other hand, Carrie and McKenzie's (2017: 313) research indicated that whenever GA speakers were wrongly identified as RP speakers, they were rated higher regarding social status. Clearly, when analyzing learners' attitudes to specific accents, misidentification can bring forth imprecise results concerning pronunciation judgments. This is why some researchers prefer using a more direct approach, i.e. they prefer using direct-method questionnaires without employing the verbal guises. Another way to ensure the validity of results is to opt for combining direct and indirect methods.
In the Serbian educational context, the circumstances do not differ greatly from those just mentioned. Namely, the purpose of nearly every attitudinal study conducted so far has been to disclose the most widely used and preferred accent in Serbian classrooms, with standard accents remaining the prime focus. A few studies disclosed subjects' preference for SBS (Jerotijević Tišma, Karavesović 2019: 72; Grubor et al. 2008: 126). However, what is interesting is that a number of papers (Grubor et al. 2008; Grubor, Hinić 2011; Stojić 2017; Čubrović, Bjelaković 2020) reported on respondents' tendency to mix Standard English varieties (SBS and GA). Authors like Grubor and Hinić (2011: 299) are of the opinion that this occurrence stems from a bivalent influence: subjects are exposed to SBS 8 through education, and to GA through the media. Yet, the authors (Ibid.: 303) do take into account that the respondents' decision to label their accent as a "mix" variety might be the result of a general tendency to pick a more neutral option (mix variety) over two extremes (GA and SBS). The main conclusion of Grubor and Hinić's (Ibid.: 306) research was that SBS keeps its prevalence in the educational context, while GA proves to be more dominant in informal situations.
8 In their paper, Grubor and Hinić (2011: 301) used the label RP for Standard British English.
Still, the increasing dominance of American English in various domains is beginning to, or rather, has already overshadowed students' exposure to SBS. Students in EFL contexts no longer depend solely on the input they receive in their classrooms. To illustrate this more clearly, Stojić (2017) conducted a comparative study of Serbian first-year students' overt attitudes to Standard American and Standard British English. The author (Ibid.: 312) wished to see whether the two generations of students, separated by the span of 19 years, expressed different attitudes to the accents in question. The results of the more recent survey (2016) disclosed a much higher use of the American accent (61.7%) among the students, compared to the earlier (1997) study where only 15.8% of the respondents claimed to be using the American variety (Ibid.: 312). Still, nearly all respondents (around 90%) in both studies reported that "the best" English is, in fact, British English (Ibid.: 318). The advance of GA among Serbian students was confirmed in yet another recent study conducted by Čubrović and Bjelaković (2020). As discussed previously, the growing use of GA was believed to be the result of the students' greater exposure to this variety, mostly due to the worldwide popularity of American pop culture or the Internet (Ibid.: 149).
The remaining sections of this paper will focus on analyzing pronunciation preferences of Serbian EFL learners in the hopes of making a modest contribution to the previous research on this topic. We primarily wish to observe the potential change in the learners' growing fondness for GA and to uncover the possible reasons for such fondness. By understanding our students' pronunciation goals, we are one step closer to finding ways to tailor the materials and the overall pronunciation teaching practice more to our students' liking, and by doing so, we can improve our students' chances of success.
RESEARCH QUESTIONS
The empirical part of this study focused on the following research questions: • What is the preferred pronunciation model in Serbian classrooms?
• What are the underlying reasons for students' preference for a specific accent?
• Is there a correlation between the students' accent preference and the overall pronunciation teaching practice?
PARTICIPANTS
Attitudinal studies concerning pronunciation preferences are usually conducted with university students as respondents. This is because attitudes are typically formed in adolescence and are believed to remain relatively consistent throughout life (Carrie 2017: 434;Kovačević 2004: 38). Therefore, the participants in this study were English-major students at the Faculty of Philology and Arts, University of Kragujevac. All the respondents were native Serbian speakers. The total sample size consisted of 85 students, with a mean age of 21.04. There was, however, a noticeable imbalance in the sample in terms of gender distribution. Namely, more women (N = 64) volunteered for the survey than men (N = 21). The number of respondents also varied according to the level of undergraduate study. More specifically, there were 9 first-year students, 47 second-year students, 22 third-year students and 7 fourth-year students who took part in the survey. Ideally, the number of respondents should be equal across categories like age and the level of study. However, this condition could not be met given that the participation in the research was voluntary. Every student who completed the questionnaire was accepted as a respondent, which led to certain groups (like the first-year and the fourth-year students) being under-represented in the sample. However, we did not want to disregard these groups since analyzing possible differences in pronunciation tendencies across variables like sex or the educational level was not our primary goal.
INSTRUMENT AND PROCEDURE
The data for the present study were collected using a direct-method questionnaire which was designed and distributed via e-mail to English-major students at the Faculty of Philology and Arts, University of Kragujevac. The students responded to the questionnaire anonymously. The survey period lasted from December 2020 until June 2021, during which a total number of 85 completed questionnaires was obtained. Before completing the questionnaire, the subjects were informed about the goals and the methodology of the ongoing study and they voluntarily agreed to participate in the research.
The questionnaire comprised a total of 18 questions. Since most of the questions were in multiple choice format, the questionnaire took less than 10 minutes to complete. Three supplementary questions (Q1-Q3) were posed so as to gather the students' demographic details (age, gender and the level of undergraduate study). The following 7 questions (Q4-Q10) examined students' exposure to English language and its varieties, both institutionally and outside of the educational context. More specifically, questions 4-6 elicited information on the respondents' overall exposure to English. Those questions were formulated as follows: Q4: How old were you when you first started learning English? Q5: Did you take any private English lessons as a child? If yes, for how long? Q6: What language did your teachers mostly use during your English classes?
In Question 6, students could choose between three options: "English", "Serbian" or "Both", while the previous two questions were open-ended. In Question 7, students were asked to report on the English variety their professors mostly used in their elementary school, high school and college. Four answers were offered here: "SBS", "GA", "Mix (SBS and GA)" 9 and "Other" 10 . This question was posed since previous studies (Pilus 2013; Dalton-Puffer et al. 1997) revealed that students' decision to choose a certain variety is often motivated by the fact that that particular variety was taught at school. Question 8 asked the students whether their teachers tolerated the use of various English accents, or whether they persisted in using one particular variety. The goal of this question was to see if much of the students' attitudes towards English varieties actually came from their teachers, i.e. whether the teachers perhaps imposed their own beliefs and preferences upon their students. Conversely, Questions 9 and 10 examined the students' exposure to English accents in less formal settings. We chose to observe these two types of exposure (formal and informal) separately, since previous research (e.g. Grubor, Hinić 2011) reported on different accents being prevalent in different domains. Therefore, in Question 9, the students were asked if they had had any chance to travel to an English-speaking country, while Question 10 dealt with the students' exposure to different English varieties through the media. In the latter, the students were presented with the following options: "SBS", "GA", "Both SBS and GA", and "Other".
9 The "Mix" variety was offered as a choice since a number of previous studies (Grubor et al. 2008; Grubor, Hinić 2011; Stojić 2017; Čubrović, Bjelaković 2020) reported how, instead of staying true to the chosen variety, Serbian students tend to mix Standard English accents. This option was offered because we wanted to observe whether choosing this particular variety was perhaps a conscious decision.
10 For several questions, the answer "Other" was offered as an option so that the respondents could specify different reasons or alternatives that were excluded from the given set of answers.
In order to observe our students' accent goals as well as their stance on the acquisition of a native-like accent, we presented them with the following set of questions (questions 11-16):
Q11: Are you able to differentiate between various pronunciations of English (e.g. American, British, Australian, Canadian, etc.)?
Q12: What do you strive for in terms of your pronunciation? Q13: Do you think a native-like accent is fully attainable by EFL students? Q14: As an EFL student, do you think it is important to adhere to the standard pronunciation models (like SBS and GA), as opposed to non-standard varieties?
Q15: Which English variety do you use? Q16: When it comes to choosing an English variety, which factor has mostly influenced your choice?
Question 11 was asked in order to inspect the students' self-reported ability to identify different English accents before asking them to label their own pronunciation. The options presented here were: "Yes", "Sometimes" and "No". In Question 12, we wished to see whether a native-like pronunciation is, in fact, a high-priority goal for our students. Hence, the suggested answers were: "A native-like pronunciation", "A native-like pronunciation with little mother tongue interference", or "Serbian English". Questions 13 and 14 correlated with Question 12 in that we hypothesized that those students who believed a native-like accent was both an achievable and a desirable goal would consequently wish to attain it. Since the aim of this research was to observe students' preference for either SBS or GA, the following set of options was offered in Question 15: "SBS", "GA", "A mix (SBS and GA)", "Serbian English", and "Other". Question 16 aimed at disclosing some of the possible reasons underlying the students' accent preferences. This question was purposefully posed directly in order to gain insight into what our students thought the main reason for their pronunciation model selection was. The students could choose here between options such as: "Greater exposure to the variety through media", "It is easier to understand/speak that variety", "The variety was predominantly taught at school/university", "Greater chances of interacting with the speakers of that variety" or "Other".
In Question 17, the students were asked: "Which variety do you find more prestigious?", and the options were: "SBS", "GA" and "They are equally prestigious". The purpose of asking this question was to observe whether the growing use of GA among Serbian students could be attributed to the possibly greater, or at least, equal prestige of GA compared to SBS. Lastly, in Question 18, the students were asked: "Which variety should be used as a model for pronunciation teaching?". The set of possible answers included options like "SBS", "GA" and "Both". This question aimed at revealing the students' overt opinion about the accent most suitable for Serbian EFL classrooms. Namely, we wanted to see whether our students thought the accent similar to their own was the most suitable model for pronunciation teaching, as some previous studies have indicated (Carrie, McKenzie 2017: 314).
DATA ANALYSIS
The respondents' answers to the open-ended questions were analysed qualitatively, while the analysis of the multiple-choice questions consisted of calculating percentage scores.
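As a small illustration of this tallying step, the sketch below counts hypothetical answers to one multiple-choice item and converts them to percentages; the response values are invented and do not reproduce the actual dataset.

# Minimal percentage tally for a multiple-choice item (hypothetical responses).
from collections import Counter

responses_q15 = ["GA", "GA", "Mix", "SBS", "GA", "Mix"]   # invented answers to Q15
counts = Counter(responses_q15)
total = len(responses_q15)

for option, n in counts.most_common():
    print(f"{option}: {n / total:.0%}")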
RESULTS AND DISCUSSION
Overall, we could say our students are relatively experienced English learners, in that the mean age at which they started learning English was 6.68 years. This indicates that even the youngest respondents in the sample spent at least 12 years learning English institutionally. Yet, only 25% of our students reported taking additional English lessons. Those who did take private lessons spent on average 5.5 years attending private schools. However, concerning the language their teachers mainly used in class, 17% of our examinees said their teachers mostly used Serbian, 28% said English, while the majority of our respondents (55%) mentioned their teachers used both languages. This has several consequences for the students. The most important one is that using the learners' mother tongue significantly decreases their exposure to the target language. This is particularly problematic since learning English as a foreign language already implies limited exposure to the target language, because it is not the language used for day-to-day communication.
When asked to point out the English variety their teachers predominantly used, 16% of our subjects stated their teachers used SBS, 55% answered GA, whereas 29% of our students revealed that their teachers used a mix of these two standard varieties. These results are inconsistent with the results obtained from the previous studies (Jerotijević Tišma, Karavesović 2019;Grubor, Hinić 2011;Grubor et al. 2008) which disclosed students' higher exposure to SBS in the educational context. In fact, nearly all of the teaching materials that are available to Serbian teachers and students have British English as the reference accent (Jerotijević Tišma, Karavesović 2018: 72). This is why higher rates for GA presented here appear to be quite unexpected. It is possible, though, that the teachers recognized their students' tendencies and have already taken steps in order to approximate the pronunciation teaching model more to their students' liking, despite the majority of the materials being based on the British variant.
Even more interestingly, 93% of our subjects mentioned that their teachers tolerated the use of different English accents in class. The remaining 7% of our respondents, who claimed that their teachers insisted on using only one variety, were divided on this point. That is, half of these students reported that their teachers used GA, while the other half answered that their teachers insisted on using SBS. Still, the high reported rates of teachers' tolerance of accent variation exclude the possibility that a certain model was made compulsory.
The percentage of those who had a chance to visit an English-speaking country was quite low. That is, only 5% of our students visited either the USA or the UK. Although the majority of our students were not exposed to any standard English variety directly, their exposure to the observed varieties via the Internet, television, music, gaming, etc., was significant. Namely, 7% of the respondents said they were mostly exposed to online content which was in British English, 74% opted for American English, while 19% reported equal exposure to these standard varieties. It is evident that America has acquired superiority in areas such as technology, commerce, popular culture, science, etc. (Drljača Margić 2011: 65). It is precisely this growing dominance in nearly every sphere of life that has enabled the rapid spread of American English, which, we believe, led to the results presented here. The students were also asked about their ability to successfully identify different English accents. Here, up to 82% of our subjects believed they could successfully differentiate between various accents. This could be explained, in part, by the previously mentioned psychological proximity. More specifically, because of the diverse content students are exposed to via the Internet, the chances of familiarizing themselves with different English varieties are stronger than ever. What is more, when it comes to accents such as GA and SBS, a significant level of familiarity is likely due to the students' greater exposure to these variants through teaching materials (Carrie 2017: 432). Yet, the results concerning the learners' ability to identify English varieties are self-reported. In order to corroborate them, it would be best to conduct additional indirect assessment, i.e. to employ the verbal guises.
In the questions regarding our students' pronunciation goals as well as their stance on the importance of being true to the standard models, the results demonstrate that 85% of our learners wish to attain a native-like accent. Almost the same percentage (80%) of students reportedly believe this is an achievable goal, while 63% of our learners think adhering to the standard models is, in fact, important. The remaining 15% of our examinees want to speak with a native-like accent with little mother tongue interference. Interestingly enough, there were no students who opted for the localized variety, i.e. Serbian English, as their goal. Such results are consistent with the findings of several previous studies (Carrie, McKenzie 2017; Dalton-Puffer et al. 1997; Wong 2018; Evans 2005; Henderson et al. 2012; Grubor, Hinić 2011; Jerotijević Tišma, Karavesović 2019) which disclosed rather poor ratings for non-native varieties.
Nevertheless, it appears that there are slight changes concerning the level of prestige students attribute to standard accents like SBS and GA. Previous research on this issue disclosed higher ratings for SBS in terms of social status. The results obtained here demonstrate that SBS remains the more prestigious variety, since 58% of our respondents thought so. Although only 3% of our students believed GA was more prestigious, 38% said both accents were equal in this respect. Yet, it does not appear that the level of prestige was what motivated the students' choice of a specific variety. That is, despite the higher ratings for SBS regarding prestige, the majority of our respondents (64%) labeled their own accent as American. Only 2% of students labeled their accent as SBS, 4% reported speaking Serbian English, and the remaining 30% believed they spoke a mix of SBS and GA. It appears that mixing standard varieties, a recurring issue according to a number of studies (Grubor et al. 2008; Grubor, Hinić 2011; Stojić 2017; Čubrović, Bjelaković 2020), is something that students themselves are aware of.
According to our sample of students, the most common reason for choosing a particular variety was greater exposure to it through the media (62%). Then followed reasons such as "it is easier to speak/understand that variety" (14%), "the variety was taught at school" (11%), and "greater chances of interacting with speakers of that variety" (7%). For some (2%), all these reasons contributed to their choice of the pronunciation model. There were, however, students (4%) who opted for the answer "Other". Those were mostly students who described their pronunciation as SBS. They provided the following reasons for their choice: "I like the sound, the melody of SBS", "SBS just sounds a bit better", "I find SBS more sophisticated", "I am interested in British history, I respect countries with long tradition and rich history".
Though the majority of our students labeled their pronunciation as GA, when asked about the best pronunciation model for pronunciation teaching, 18% of students opted for SBS, the same percentage (18%) chose GA, whereas most of our students (64%) chose the option "Both". Thus, rather than choosing their preferred variety as the best variety for pronunciation teaching, our students largely expressed tolerance for pronunciation variation.
As can be seen in the above results, there are reasons to believe that students' exposure to a certain variety is most likely what governs their pronunciation choice. The more the students are exposed to a specific variety, the greater their familiarity with that variety and their motivation to adopt it. It is, however, important to note that it is nearly impossible to single out the exact factor which governs the students' pronunciation model selection with utmost certainty. Rather, we should perhaps speak of a combination of diverse factors, where one, or most likely several factors, might prevail in a given context.
CONCLUSION
The results of the present study indicate an important change concerning Serbian EFL students' latest preferences regarding Standard English varieties. Namely, the conducted research corroborated the findings of a few recent attitudinal surveys (Stojić 2017;Čubrović, Bjelaković 2020) which disclosed learners' greater inclination towards GA as opposed to SBS. This change is significant since it disclosed a mismatch between students' preferred reference model on the one hand, and the materials used for pronunciation teaching on the other. More specifically, most, if not all, teaching materials available to Serbian learners have SBS as their reference accent (Jerotijević Tišma, Karavesović 2019; Čubrović, Bjelaković 2020). This does not, however, correlate with the reported accent preferences presented in this paper. Hence, including more materials on American pronunciation presents itself as a necessity, since such practice aligns more closely with our students' pronunciation goals. Of course, this is not to say that we believe GA should have superiority over SBS, or any other variety for that matter, nor that it is to be regarded as the norm. Rather, we look at it in the same sense as some other scholars have previously indicated -as a "point of reference", i.e. a "model for guidance" (Jenkins 1998: 124). EFL students should be given the chance to familiarize themselves with various native and non-native English varieties. Therefore, exposing students to only one particular variety brings us one step away from working towards that goal. We should, however, be aware of the fact that the results presented here might differ greatly from those obtained by analyzing the attitudes of learners who do not study English for academic purposes. Here, the focus was on those learners who are training to be English teachers. Slightly greater demands are placed on such learners since they are more likely to give public presentations or lectures, attend seminars and international conferences, and perhaps even try to enter some English-speaking colleges (Morley 1991: 492-493). This is why we should respect the students' desire to try to approximate the native model, if they wish to do so, and provide them with the necessary means that can help them realize their goal. We should, as some scholars would say, enable our students "not just to survive, but to succeed" (Ibid.: 489).
Lastly, we must note that analyzing students' actual pronunciation would require a different method from the one presented in this study. Questionnaires cannot, for that matter, elicit reliable information. The same applies to using assessment sheets. This is because the students' estimation of their own pronunciation might not necessarily reflect their actual linguistic behavior (Stojić 2017: 313). The data presented here should be taken more as an indicator of students' aspirations, rather than their actual performance. A more detailed acoustic analysis is needed in order to confirm that students' pronunciation is truly in accordance with their reported preferences. | 2022-07-21T15:21:43.877Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "7eb5728fa4242ce3759765c3df89c8c7927aec40",
"oa_license": null,
"oa_url": "https://doi.org/10.46793/uzdanica19.1.155j",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "44294d66e2b7df2430926e18ee540e17a8d33b32",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": []
} |
14146182 | pes2o/s2orc | v3-fos-license | Detecting Structural Metadata with Decision Trees and Transformation-Based Learning
The regular occurrence of disfluencies is a distinguishing characteristic of spontaneous speech. Detecting and removing such disfluencies can substantially improve the usefulness of spontaneous speech transcripts. This paper presents a system that detects various types of disfluencies and other structural information with cues obtained from lexical and prosodic information sources. Specifically, combinations of decision trees and language models are used to predict sentence ends and interruption points and, given these events, transformation-based learning is used to detect edit disfluencies and conversational fillers. Results are reported on human and automatic transcripts of conversational telephone speech.
Introduction
Automatic speech-to-text (STT) transcripts of spontaneous speech are often difficult to comprehend even without the challenges arising from word recognition errors introduced by imperfect STT systems (Jones et al., 2003). Such transcripts lack punctuation that indicates clausal or sentential boundaries, and they contain a number of disfluencies that would not normally occur in written language. Repeated words, hesitations such as "um" and "uh", and corrections to a sentence in mid-stream are a normal part of conversational speech. These disfluencies are handled easily by human listeners (Shriberg, 1994), but their existence makes transcripts of spontaneous speech ill-suited for most natural language processing (NLP) systems developed for text, such as parsers or information extraction systems. Similarly, the lack of meaningful segmentation in automatically generated speech transcripts makes them problematic to use in NLP systems, most of which are designed to work at the sentence level. Detecting and removing disfluencies and locating sentential unit boundaries in spontaneous speech transcripts can improve their readability and make them more suitable for NLP. Automatically annotating discourse markers and other conversational fillers is also likely to be useful, since proper handling is needed to follow the flow of conversation. Hence, the overall goal of our work is to detect such structural information in conversational speech using features generated by currently available speech processing systems and statistical machine learning tools.
This paper is organized as follows. In Section 2, we describe the types of metadata that this work addresses, followed by a discussion of related prior work in Section 3. Section 4 describes the system architecture and details the algorithms and features used by our system. Section 5 discusses the experimental paradigm and results. Finally we provide a summary and directions for future work in Section 6.
Table 1: Filled pauses and discourse markers to be detected by our system.
Filled Pauses: ah, eh, er, uh, um
Discourse Markers: actually, anyway, basically, I mean, let's see, like, now, see, so, well, you know, you see
Table 2: Examples of edit disfluencies.
Repair: (I was) + she was very interested... / (I was) + { I mean } she was very...
Restart: (I was very) + Did you hear the news?
Defining an all-inclusive set of English filled pauses and discourse markers is a problematic task. Our system detects only a limited set of filled pauses and discourse markers, listed in Table 1, which cover a large majority of cases (Strassel, 2003). An explicit editing term is a filler occurring within an edit disfluency, described further below. For example, the discourse marker I mean serves as an explicit editing term in the following edit disfluency: "I didn't tell her that, I mean, I couldn't tell her that he was already gone."
A repetition occurs when a speaker repeats the most recently spoken portion of an utterance to hold off the flow of speech. A repair happens when the speaker attempts to correct a mistake that he or she just made. Finally, in a restart, the speaker abandons a current utterance completely and starts a new one. Previous studies characterize edit disfluencies using a structure with different segments (Shriberg, 1994;Nakatani and Hirschberg, 1994). The first part of this structure is called the reparandum, a string of words that gets repeated or corrected. The reparandum is immediately followed by a non-lexical boundary event termed the interruption point (IP). The IP marks the point where the speaker interrupts a fluent utterance. Optionally, there may be a filled pause or explicit editing term. The final part of the edit disfluency structure is called the alteration, which is a repetition or revised copy of the reparandum. In the case of a restart, the alteration is empty. In Table 2, reparanda are enclosed in parentheses, IPs are represented by "+", optional fillers are in braces, and alterations are in boldface.
Annotation of complex edit disfluencies, where a disfluency occurs within an alteration, can be difficult. The data used here is annotated with a flattened structure that treats these cases as simple disfluencies with multiple IPs (Strassel, 2003). IPs within a complex disfluency are detected separately, and contiguous sequences of edit words associated with these IPs are referred to as a deletable region.
Previous Work
In an early study on automatic disfluency detection, a deterministic parser and correction rules were used to clean up edit disfluencies (Hindle, 1983). However, theirs was not a truly automatic system, as it relied on hand-annotated "edit signals" to locate IPs. Bear et al. (1992) explored pattern matching, parsing and acoustic cues and concluded that multiple sources of information would be needed to detect edit disfluencies. A decision-tree-based system that took advantage of various acoustic and lexical features to detect IPs was developed in (Nakatani and Hirschberg, 1994). Shriberg et al. (1997) applied machine prediction of IPs with decision trees to the broader Switchboard corpus by generating decision trees with a variety of prosodic features. Stolcke et al. (1998) then expanded the prosodic tree model with a hidden event language model (LM) to identify sentence boundaries, filled pauses and IPs in different types of edit disfluencies. The hidden event LM used in their work adapted Hidden Markov Model (HMM) algorithms to an n-gram LM paradigm to represent non-lexical events such as IPs and sentence boundaries as hidden states. Liu et al. (2003) built on this framework and extended prosodic features and the hidden event LM to predict edit IPs on both human transcripts and STT system output. Their system also detected the onset of the reparandum by employing rule-based pattern matching once edit IPs have been detected.
Edit disfluency detection systems that rely exclusively on word-based information have been presented by Heeman et al. (Heeman et al., 1996) and Charniak and Johnson (Charniak and Johnson, 2001). Common to both of these approaches is a focus on repeated or similar sequences of words and information about the words themselves and the length and similarity of the sequences.
Our approach is most similar to (Liu et al., 2003), since we also detect boundary events such as IPs first and use them as "signals" when identifying the reparandum in a later stage. The motivation to detect IPs first is that speech before an IP is fluent and is likely to be free of any prosodic or lexical irregularities that can indicate the occurrence of an edit disfluency. Like Liu et al., we use a decision tree trained with prosodic features and a hidden event language model for the IP detection task. However, we incorporate SU detection in those models as well. We use part-of-speech (POS) tags and pattern match features in decision tree training, whereas Liu et al. (2003) developed language models for them. We explore three different methods of combining the hidden event language model and the decision tree model, namely linear interpolation, joint tree-based modeling and an HMM-based approach. Moreover, our system uses the transformation-based learning algorithm rather than hand-crafted rules for the second stage of edit region detection.
Figure 1: System Diagram
Another key difference between our system and most previous work is the prediction target. Our system incorporates detecting word boundary events such as SUs and IPs, locating onsets of edit regions, and identifying filled pauses, discourse markers and explicit editing terms. We believe that such a comprehensive detection scheme allows our system to better model dependencies between these events, which will lead to an improvement in the overall detection performance.
Overall Architecture
As shown in Figure 1, our system detects disfluencies in a two-step process. First, for each word boundary in the given transcription, a decision tree predicts one of the four boundary events IP, SU, ISU (incomplete SU), and the null event. Then in the second stage, rules learned via the transformation-based learning (TBL) algorithm are applied to the data containing predicted boundary events and other lexical information to identify edits and fillers. Following edit region and filler prediction, the system output was post-processed to eliminate edit region predictions not associated with IP predictions as well as IP predictions for which no edit region or filler was detected. An analysis of post-processing alternatives confirmed that this strategy reduced insertion errors.
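To make the post-processing step concrete, here is a minimal sketch in Python; the per-word tag and per-boundary event representation, and the exact adjacency checks, are illustrative assumptions rather than the system's actual code.

def postprocess(boundary_events, word_tags):
    # boundary_events[i] is the predicted event at the boundary after word i
    # ('IP', 'SU', 'ISU' or 'none'); word_tags[i] is 'edit', 'FP', 'DM', 'EET' or 'none'.
    n = len(word_tags)
    i = 0
    while i < n:
        if word_tags[i] == "edit":
            j = i
            while j + 1 < n and word_tags[j + 1] == "edit":
                j += 1
            if boundary_events[j] != "IP":      # edit run must end at a predicted IP
                for k in range(i, j + 1):
                    word_tags[k] = "none"
            i = j + 1
        else:
            i += 1
    for i in range(n):
        if boundary_events[i] == "IP":
            backed_by_edit = word_tags[i] == "edit"
            backed_by_filler = i + 1 < n and word_tags[i + 1] in {"FP", "DM", "EET"}
            if not (backed_by_edit or backed_by_filler):
                boundary_events[i] = "none"
    return boundary_events, word_tags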
Detecting Boundary Events
In order to detect boundary events, we trained a CART-style decision tree (Breiman et al., 1984) with various prosodic and lexical features. Decision trees are well-suited for this task because they provide a convenient way to integrate both symbolic and numerical features in prediction. Furthermore, a trained decision tree is highly explainable by its nature, which allows us to gain additional insight into the utilities of and the interactions between multiple information sources.
Prosodic features generated for decision tree training included the following:
• Word and rhyme durations.
• Rhyme duration differences between two neighboring words.
• F0 statistics (minimum, mean, maximum, slope) over a word.
• Differences in F0 statistics between two neighboring words.
• Energy statistics over a word and its rhyme.
• Silence duration following a word.
• A flag indicating start and end of a speaker turn and speaker overlap.
• Ordinal position of a word in a turn.
Energy and F0 features were generated with the Entropic System ESPS/Waves package and the F0 stylization tool developed in (Sönmez et al., 1998). Word and rhyme duration were normalized by phone duration statistics (mean and variance) calculated over all available training data. F0 and energy features were normalized for each individual speaker's baseline. A turn boundary was hypothesized for word boundaries with silences longer than four seconds.
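As a rough illustration of this normalization, the sketch below z-scores a word's duration against pooled phone-level statistics and expresses F0 statistics relative to a per-speaker baseline; the data layout and the log-ratio form of the F0 normalization are assumptions made for illustration, not the system's actual implementation.

import numpy as np

def normalize_duration(word_phones, phone_stats):
    # Duration z-score: per-phone means and variances are pooled over the word.
    # word_phones: list of (phone_label, duration_sec); phone_stats: {phone: (mean, var)}.
    total = sum(d for _, d in word_phones)
    mean = sum(phone_stats[p][0] for p, _ in word_phones)
    var = sum(phone_stats[p][1] for p, _ in word_phones)
    return (total - mean) / np.sqrt(var + 1e-8)

def normalize_f0(f0_values, speaker_baseline_hz):
    # Express F0 statistics relative to the speaker's baseline (here: log-ratio).
    f0 = np.asarray(f0_values, dtype=float)
    f0 = f0[f0 > 0]                      # drop unvoiced frames
    if f0.size == 0:
        return {"min": 0.0, "mean": 0.0, "max": 0.0}
    rel = np.log(f0 / speaker_baseline_hz)
    return {"min": float(rel.min()), "mean": float(rel.mean()), "max": float(rel.max())}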
Since inclusion of features that do not contribute to the classification of data can degrade the performance of a decision tree, we selected only the prosodic features whose exclusion from the training process led to a decrease in boundary event detection accuracy on the development data by utilizing the leave-one-out method.
Lexical features consisted of POS tag groups, word and POS tag pattern matches, and a flag indicating existence of filler words to the right of the current word boundary. The POS tag features were produced by first predicting the tags with Ratnaparkhi's Maximum Entropy Tagger (Ratnaparkhi, 1996) and then clustering them by hand into a smaller number of groups based on their syntactic role. The clustering was performed to speed up decision tree training as well as to reduce the impact of tagger errors.
Word pattern match features were generated by comparing words over the range of up to four words across the word boundary in consideration. Grouped POS tags were compared in a similar way, but the range was limited to at most two tags across the boundary since a wider comparison range would have resulted in far more matches than would be useful due to the low number of available POS tag groups. When words known to be identified frequently as fillers existed after the boundary, they were skipped and the range of pattern matching was extended accordingly.
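A small sketch of how such cross-boundary match features might be computed is given below; the window sizes follow the description above, while the filler list and the function signature are illustrative assumptions.

FILLERS = {"uh", "um", "well", "like", "so"}   # illustrative subset; see Table 1

def cross_boundary_matches(words, pos_groups, b, max_words=4, max_pos=2):
    # b is the index of the boundary, i.e. the number of words to its left.
    left_words = words[max(0, b - max_words):b]
    right_words = [w for w in words[b:] if w.lower() not in FILLERS][:max_words]
    right_lower = [w.lower() for w in right_words]
    word_match = any(w.lower() in right_lower for w in left_words)

    left_pos = pos_groups[max(0, b - max_pos):b]
    right_pos = pos_groups[b:b + max_pos]
    pos_match = any(p in right_pos for p in left_pos)
    return {"word_match": word_match, "pos_match": pos_match}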
Another useful cue for boundary event detection is the existence of word fragments. Since word fragments occur when the speaker cuts short the word being spoken, they are highly indicative of IPs. However currently available STT systems do not recognize word fragments. As our goal is to build an automatic detection system, our system was not designed to use any features related to word fragments. However, for a control case, we conducted an experiment with reference transcripts using a single "frag" word token to show the potential for improved performance of a system capable of recognizing fragments.
In addition to the decision tree model, we also employed a hidden event language model to predict boundary events. A hidden event LM is the same as a typical n-gram LM except that it models non-lexical events in the n-gram context by counting special non-word tokens representing such events. The hidden event LM estimates the joint distribution P(W, E) of words W and events E. Once the model has been trained, a forward-backward algorithm can be used to calculate P(E|W), or the posterior probability of an event given the preceding word sequence (Stolcke et al., 1998;Stolcke and Shriberg, 1996). The SRI Language Modeling Toolkit (SRILM) (Stolcke, 2002) was used to train a trigram open-vocabulary language model with Kneser-Ney discounting (Kneser and Ney, 1995) on data that had boundary events (SU, ISU, and IP) inserted in the word stream. Posterior probabilities of boundary events for every word boundary were then estimated with SRILM's capability for computing hidden event posteriors.
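The following toy sketch illustrates the idea: event tokens are interleaved into the word stream for n-gram training, and a per-boundary posterior is obtained by comparing the probability of the local context with and without an inserted event token. The real system uses Kneser-Ney smoothing and the forward-backward algorithm over all boundaries jointly; the add-alpha smoothing, the single-boundary approximation, and the token names below are simplifications for illustration.

from collections import Counter

EVENTS = ["<SU>", "<ISU>", "<IP>"]

def insert_events(words, events):
    # Interleave boundary-event tokens (empty string for no event) into the word stream.
    out = []
    for w, e in zip(words, events):
        out.append(w)
        if e:
            out.append(e)
    return out

def train_bigram(token_sequences, alpha=0.1):
    uni, bi = Counter(), Counter()
    for seq in token_sequences:
        seq = ["<s>"] + seq + ["</s>"]
        uni.update(seq)
        bi.update(zip(seq, seq[1:]))
    vocab = len(uni)
    def prob(w, prev):   # add-alpha smoothing as a stand-in for Kneser-Ney
        return (bi[(prev, w)] + alpha) / (uni[prev] + alpha * vocab)
    return prob

def event_posterior(prob, prev_word, next_word):
    # P(event | local context) for one boundary, all other boundaries assumed event-free.
    scores = {e: prob(e, prev_word) * prob(next_word, e) for e in EVENTS}
    scores[None] = prob(next_word, prev_word)
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}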
While the hidden event LM alone can be used to detect boundary events, prior work has shown that it benefits from also using prosodic cues, so we combined the language model and the decision tree model in three different ways. In the first approach, which we call the joint tree model, the boundary event posterior probability from the hidden event LM is jointly modeled with other features in the decision tree to make predictions about the boundary events. In the second approach, referred to as the linearly interpolated model, a decision is made based on the combined posterior probability P(E|W, A) = λ · P_DT(E|A, W) + (1 − λ) · P_LM(E|W), where A corresponds to the acoustic-prosodic features and the weighting factor λ can be chosen empirically to maximize target performance, i.e. bias the prediction toward the more accurate model. In the third approach, the decision tree features, words and boundary events are jointly modeled via an integrated HMM (Shriberg et al., 2000). This approach augments the hidden event LM by modeling decision tree features as emissions from the HMM states represented by the word and boundary event. Under this framework, the forward-backward algorithm can again be used to determine posterior probabilities of boundary events. Similar to the linearly interpolated model, a weighting factor can be used to introduce the desired bias to the combination model. The joint tree model has the advantage that the (possibly) complex interaction between lexical and prosodic cues can be captured. However, since the tree is trained on reference transcriptions, it favors lexical cues, which are less reliable in STT output. In the linearly interpolated and joint HMM approaches, the relative weighting of the two knowledge sources is estimated on the development test set for STT output, so it is possible for prosodic cues to be given a higher weight.
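A minimal sketch of the linearly interpolated combination, including a grid search for the weight on development data, is shown below; the posterior dictionaries and the grid are assumed inputs for illustration.

def interpolate(p_tree, p_lm, lam):
    # Combine decision-tree and hidden-event-LM posteriors for one boundary.
    return {e: lam * p_tree[e] + (1.0 - lam) * p_lm[e] for e in p_tree}

def tune_lambda(dev_boundaries, grid=None):
    # Pick the weight that maximizes boundary-event accuracy on the dev set.
    # dev_boundaries: list of (p_tree, p_lm, reference_event) tuples.
    grid = grid or [i / 20.0 for i in range(21)]
    def accuracy(lam):
        correct = 0
        for p_tree, p_lm, ref in dev_boundaries:
            combined = interpolate(p_tree, p_lm, lam)
            if max(combined, key=combined.get) == ref:
                correct += 1
        return correct / max(1, len(dev_boundaries))
    return max(grid, key=accuracy)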
Edit and Filler Detection
After SUs and IPs have been marked, we use transformation-based learning (TBL) to learn rules to detect edit disfluencies and conversational fillers. TBL is an automatic rule learning technique that has been successfully applied to a variety of problems in natural language processing, including part-of-speech tagging (Brill, 1995), spelling correction (Mangu and Brill, 1997), error correction in automatic speech recognition (Mangu and Padmanabhan, 2001), and named entity detection (Kim and Woodland, 2000). We selected TBL for our tagging-like metadata detection task since it has been used successfully for these other tagging tasks.
TBL is an iterative technique for inducing rules from training data. A TBL system consists of a baseline predictor, a set of rule templates, and an objective function for scoring potential rules. After tagging the training data using the baseline predictor, the system learns a list of rules to correct errors in these predictions. At each iteration, the system uses the rule templates to generate all possible rules that correct at least one error in the training data and selects the best rule according to the objective function, commonly token error rate. The best rule is recorded and applied to the training data in preparation for the next iteration. The standard stopping criterion for rule learning is to stop when the score of the best rule falls below a threshold value; statistical significance measures have also been used (Mangu and Padmanabhan, 2001).
Table 3: Example word and POS matches.
Word Match: that IP that
POS Match: the dog IP the cat
To tag new data, the rules are applied in the order in which they were learned. This allows rules which are learned later in the process to fine-tune the effects of the earlier rules. TBL produces concise, comprehensible rules, and uses the entire corpus to train all of the rules. We used Florian and Ngai's Fast TBL system (fnTBL) (Ngai and Florian, 2001) to train rules using disfluency-annotated conversational speech data. The input to our TBL system consists of text divided into utterances, with IPs and SUs inserted as if they were extra words. (For simplicity, these special words are also assigned "IP" and "SU" as part of speech tags.) Our TBL system used the following types of features:
• Identity of the word.
• Part of speech (POS) and grouped part of speech (GPOS) of the word (same as the decision tree).
• Does this word/POS/GPOS match the word/POS/GPOS that is 1/2/3 positions to its right?
• Is this word at the beginning of a turn or utterance?
• Tag to be learned.
The "tag" feature is the one we want the system to learn. It is also used in templates that consider features of neighboring words. The baseline predictor sets the tag to its most common value, "no disfluency," for all words. Other values of the tag are the three types of fillers (FP, EET, DM) and edit. The objective function for our learner is token error rate, and rule learning is stopped at a threshold score of 5.
We generated a set of rule templates using these features. The rule templates account for individual features of the current word and/or its neighbors, the proximity of potential FP/EET/DM terms, and matches between the current word and nearby words, especially when in close proximity to a boundary event or potential filler. Example word and POS matches are shown in Table 3.
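For illustration, a self-contained toy version of the TBL loop might look as follows; the rule representation (a single feature test plus a tag change) is a deliberate simplification of fnTBL's templates, and the data layout is an assumption.

def learn_tbl_rules(data, features, threshold=5):
    # data: list of dicts with feature values, 'tag' (current prediction, initialized by
    # the baseline predictor) and 'ref' (reference tag).
    # A rule (feature, value, from_tag, to_tag) means: "if feature == value and the
    # current tag is from_tag, change the tag to to_tag".
    rules = []
    while True:
        # propose only rules that would correct at least one current error
        candidates = set()
        for ex in data:
            if ex["tag"] != ex["ref"]:
                for f in features:
                    candidates.add((f, ex[f], ex["tag"], ex["ref"]))
        best, best_score = None, 0
        for f, v, frm, to in candidates:
            score = 0
            for ex in data:
                if ex[f] == v and ex["tag"] == frm:
                    # errors fixed minus errors introduced by this rule
                    score += int(to == ex["ref"]) - int(frm == ex["ref"])
            if score > best_score:
                best, best_score = (f, v, frm, to), score
        if best is None or best_score < threshold:
            break
        rules.append(best)
        f, v, frm, to = best
        for ex in data:
            if ex[f] == v and ex["tag"] == frm:
                ex["tag"] = to
    return rules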
Experimental Setup
For training our system and its components, we used two different subsets of Switchboard, a corpus of conversational telephone speech (CTS) (Godfrey et al., 1992). One of the data sets included 417 conversations (LDC1.3) that were hand-annotated by the Linguistic Data Consortium for disfluencies and SUs according to the V5 guidelines detailed in (Strassel, 2003). Another set of 1086 conversations from the Switchboard corpus was annotated according to (Meteer et al., 1995) and is available as part of the Treebank3 corpus (TB3). We used a version of this set that contained annotations machine-mapped to approximate the V5 annotation specification.
For development and testing of our system, we used hand transcripts and STT system output for 72 conversations from Switchboard and the Fisher corpus, a recent CTS data collection. Half of these conversations were held out and used as development data (dev set), and the other 36 conversations were used as test data (eval set). The STT output, used only in testing, was from a state-of-the-art large vocabulary conversational speech recognizer developed by BBN. The word error rates for the STT output were 27% on the dev set and 25% on the eval set.
To assess the performance of our overall system, disfluencies and boundary events were predicted and then evaluated by the scoring tools developed for the NIST Rich Transcription evaluation task.
Boundary Event Prediction
Decision trees to predict boundary events were trained and tested using the IND system developed by NASA (Buntine and Caruana, 1991). All decision trees were pruned by ten-fold cross validation. The LDC1.3 set with reference transcriptions was used to train the trees (experiments combining the LDC1.3 set with the mapped TB3 set were not as successful as the LDC1.3 set alone for decision tree training), and the dev set was used to evaluate their performances.
Several decision trees with different combinations of feature groups were trained to assess the usefulness of different knowledge sources for boundary event detection. The tree was then used to predict the boundary events on the reference transcription of the dev set. The results are presented in Table 4. The inclusion of a special token for fragments resulted in improved precision and recall for SUs and IPs but, surprisingly, degraded performance for ISUs. These results show that prosodic features by themselves failed to detect ISUs and IPs, though
Use of lexical features brought substantial performance improvement in all aspects, and classification accuracy increased when features extracted from different knowledge sources were combined. However, we observed that a smaller number of prosodic features ended up being used in the tree and they were placed at or near leaf nodes as more lexical features were made available for training. The importance of prosodic features is likely to be much more apparent for STT data. The word errors prevalent in the STT transcriptions will affect lexical features far more severely than prosodic features, and therefore the prosodic features contribute to the robustness of the overall system when lexical features become less reliable.
Edit and Filler Detection
After the prediction of boundary events, the rules learned by the TBL system described in section 4.3 were applied to detect fillers and edit regions. As with the decision trees, we trained rules using the LDC1.3 data alone, and combined with the mapped TB3 data, finding that the combined dataset gave better results for TBL training. Again we used only reference word transcripts but discovered that training with SUs and IPs predicted by the first stage of our system was more effective than using reference boundary events.
It is difficult to formally assess the effectiveness of the TBL module independently, and results for the entire system are discussed in detail in the next section. Informal inspection of the rules learned by the TBL system indicates that, not surprisingly, word match features and the presence of IPs are very important for the detection of edit regions. The most commonly used features for identifying discourse markers are the identity or POS of the current and/or neighboring words and the tag already assigned to neighboring words.
Overall System Results
The performance of our system was evaluated on the fall 2003 NIST Rich Transcription Evaluation test set (RT-03F) using the rt-eval scoring tool (NIST, 2003), which combines ISUs and SUs in a single category, and reports results for detection of SUs, IPs, fillers, and edits without differentiating subcategories of fillers and edits. This tool produces a collection of results, including percentage correct, deletions, insertions, and Slot Error Rate (SER), similar to the word error rate measure used in speech recognition. SER is defined as the number of insertions and deletions divided by the number of reference items. Note that scores are somewhat different from those in Table 4, because of differences in scoring and metadata alignment methods. Results of our system on the RT-03F task are shown in Table 5 for the joint tree version of the system as applied to the STT transcription of the test data. SU detection by our system is relatively good. IP detection is not as successful, which also impacts edit detection. Figure 2 contrasts the results of the joint tree model for STT output with those obtained on reference data with and without fragments. As expected, all error rates are higher on STT output; IPs and fillers take the biggest hit. Filler performance in particular seems to be affected by recognition errors, which is not surprising, since misrecognized words would likely not be on the target lists of filled pauses and discourse markers. In particular, nearly all missed and incorrectly inserted filled pauses are due to recognition errors. Detection of discourse markers is more challenging; fewer than half the errors on discourse markers are due to recognition errors. Most non-STTrelated filler errors involved the words "so" and "like" used as DMs, which are hard problems since the vast majority of the occurrences of these two words are not DMs. It is also not surprising that improved IP detection on reference data contributes to a lower error rate for edits.
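For reference, the slot error rate defined above can be computed as follows; alignment of system and reference slots (handled by the NIST tool) is assumed to have been done already.

def slot_error_rate(num_insertions, num_deletions, num_reference_slots):
    # SER = (insertions + deletions) / number of reference slots
    return (num_insertions + num_deletions) / float(num_reference_slots)

# example: 30 spurious SUs and 50 missed SUs against 400 reference SUs -> SER = 0.20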
As expected, the inclusion of fragments improves performance on IP and edit detection, where fragments frequently occur. In LDC1.3, 17.2% of edit IPs have word fragments occurring before them; 9.9% of edits consist of just a single fragment. In the dev set, 35.5% of edit IPs are associated with fragments. However, fragments are rarely output by the STT system, so for most of our work we chose to use the identical system for processing reference and STT transcripts and did not include fragments. IP detection performance was significantly worse for those IPs associated with fragments, as shown in Table 6. However, since fragments are often deleted or recognized as a full word, STT output actually "helps" with detection of IPs after fragments, apparently because the POS tagger and hidden event LM tend to give unreliable results on the reference transcripts near fragments. Figure 3 compares the eval test set performances of the different alternatives for incorporating the hidden event LM posterior, i.e. inclusion in the decision tree, linear interpolation and the joint HMM. For this experiment, the interpolation weighting factor was selected empirically to maximize boundary event prediction accuracy on the STT transcription of the dev set. The results of this comparison are mixed: SU detection is better with the joint tree model, but IP detection and consequently edit detection are better with the interpolation and HMM approaches. The degradation of SU detection performance with the HMM is counter to findings in previous work (Stolcke et al., 1998;Shriberg et al., 2000). This may be due to differences in evaluation criteria, given that the HMM approach typically had higher precision which might benefit earlier word-based measures more. In addition, the difference in conclusions may be due to the fact that the decision trees used here include lexical pattern match features in addition to hidden event posteriors.
A problem in our system is the inability to predict more than one label for a given word or boundary. Words labeled as both filler and edit account for only 0.5% of all fillers and edits in the LDC1.3 training data, so it is probably not a significant problem. We also do not predict boundaries as both SU and IP. In LDC1.3, these account for 12.8% of SU boundaries, and are treated as simply SU in training. This does not affect IPs for edits, but impacts 38.6% of IPs before fillers. By predicting a combined SU-IP boundary in addition to isolated SUs and IPs, we obtain a small reduction in SER for IPs but at the expense of an increase in SU SER. However, separating prediction of IPs after edit regions vs. before fillers also yields small improvements in edit region precision and filler recall, resulting in 3.3% and 0.8% relative reduction in filler and edit SERs respectively for the joint HMM.
Conclusions
We have demonstrated a two-tiered system that detects various types of disfluencies in spontaneous speech. In the first tier, a decision tree model utilizes multiple knowledge sources to predict interword boundary events. Then the system employs a transformation-based learning algorithm to identify the extent and type of disfluencies. Experimental results show that the large variance and noise inherent in prosodic features makes them much less effective than lexical features for reference data; however, in the presence of word recognition errors prevalent in automatic transcripts of spontaneous speech, prosodic features have more value. Performance differences for the various score combination methods were small, but combining decision tree and HE-LM scores with a weight optimized on dev data is slightly better for edit disfluencies. Transformation-based learning is an effective way to tag fillers and edit regions after boundary events are tagged, but the best performance is obtained when training with automatically predicted SU and IP boundary events.
As this is a new task, error rates are relatively high (though significantly better than chance), but this approach achieved competitive results on the Fall 2003 NIST Rich Transcription Evaluation, and there are many directions for future improvements. | 2014-07-01T00:00:00.000Z | 2004-01-01T00:00:00.000 | {
"year": 2004,
"sha1": "b229f65e481639a6b2ea58fc022b40dbca5cbb81",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "ACL",
"pdf_hash": "fbf1904e108953a2575126a57322c1c33761be21",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53237157 | pes2o/s2orc | v3-fos-license | Combinatorial regulation of hepatic cytoplasmic signaling and nuclear transcriptional events by the OGT/REV-ERBα complex
Significance Using an interactomic approach, we have identified the nuclear receptor REV-ERBα as an O-GlcNAc transferase (OGT) protein partner. REV-ERBα protects cytoplasmic OGT from proteasomal degradation and facilitates cytosolic and nuclear protein O-GlcNAcylation, while REV-ERBα ligands decrease cytoplasmic OGT activity. REV-ERBα thus exerts pleiotropic activities through OGT, coordinating signal transduction, epigenomic programming, and transcriptional response in the liver.
Rapid immunoprecipitation mass spectrometry of endogenous proteins (RIME)
RIME assay was performed as previously described (2). Briefly, 4 µg of rabbit monoclonal anti-REV-ERBα (#13418, Cell Signalling Technology), 10 µg of rabbit polyclonal anti-OGT (HPA030751, Sigma-Aldrich) or control rabbit IgG (sc-2027, Santa Cruz Biotechnology) were immobilized on SureBeads™ Protein A magnetic beads (Bio-Rad). HepG2 cells grown on P150 plates were cross-linked for 10 min. Tryptic digests were then analysed on a NanoAcquity (Waters) LC system coupled to a Q-Exactive Plus orbitrap mass spectrometer (Thermo Fisher Scientific). The HPLC system consisted of a solvent degasser, a nanoflow pump, a thermostated column oven kept at 60°C and a thermostated autosampler kept at 10°C. Mobile phase A (0.1% FA in water) and mobile phase B (0.1% FA in acetonitrile) were delivered at 450 nL/min. Samples were loaded onto a Symmetry C18 precolumn (0.18 × 20 mm, 5 µm particle size, Waters) over 3 minutes in 1% B at a flow rate of 5 µL/min. This step was followed by a reverse-phase separation using an ACQUITY UPLC® BEH130 C18 separation column (200 mm × 75 µm i.d., 1.7 µm particle size, Waters). Peptides were eluted using a gradient from 1% to 8% B over 2 minutes, followed by an 8% to 25% B step in 88 minutes, and finished by 25% to 90% B in 10 minutes. A 5 min plateau at 90% B was observed before column reconditioning at 1% B. The mass spectrometer was equipped with a nanospray ion source. The applied voltage was 1.8 kV and the ion transfer tube temperature was set to 250°C. MS spectra were acquired at a resolution of 70,000 at 200 m/z, the automatic gain control (AGC) was fixed at 3 × 10^6 ions, and the maximum injection time was set at 50 ms with the underfill ratio fixed at 15%. Peptide fragmentation was performed via higher-energy collisional dissociation with the normalized collision energy set at 27. The ten most intense peptide ions in each survey scan with a charge state ≥2 were selected for MS/MS in the mass range of 300 to 1800. MS/MS were performed at a resolution of 17,500 at 200 m/z, AGC was fixed at 1 × 10^5, and the maximum injection time was set to 100 ms. Peaks selected for fragmentation were automatically put on a dynamic exclusion list for 10 s. MS data were saved in RAW file format (Thermo Fisher Scientific) using XCalibur and then converted into ".mgf" files using MSConvert (ProteoWizard), which were then submitted to the Mascot search engine (version 2.5.1, Matrix Science, London, UK) installed on a local server. Searches were performed against an in-house generated protein database composed of protein sequences of Homo sapiens extracted from the Uniprot database (May 2016) and common contaminants (human keratins, trypsin), combined with reverse sequences for all entries (total 299,458 entries), using an in-house database generation toolbox [https://msda.unistra.fr (3)]. Searches were performed without any molecular mass or isoelectric point restrictions; trypsin was selected as the enzyme; carbamidomethylation of cysteine (+57 Da) and oxidation of methionine (+16 Da) were set as variable modifications; and mass tolerances on precursor and fragment ions of 5 ppm and 0.05 Da were used, respectively. Mascot results were loaded into the in-house Proline software (4) and filtered in order to obtain a false discovery rate of less than 1%.
RNA extraction and RT-QPCR
Total RNA was isolated using Trizol (Life Technologies) according to the manufacturer's instructions. RNA quantity and purity were measured using a Nanodrop device (Thermo Fisher). Total RNA was treated with DNAse I (Thermo Scientific) and reverse-transcribed into cDNA with the High Capacity cDNA reverse transcription kit (Applied Biosystems). Quantitative PCR were performed using the Brillant III SYBR Green QPCR Master mix (Agilent) in a MX3005 qPCR system (Agilent). Settings were: step 1: 3 min at 95°C, step 2: 40 cycles of 5 sec at 95°C and 20 sec at 55°C.
Plasmid constructs and transient transfection experiments
Wild type human REV-ERBα cDNA (NM_021724) cloned into pEZ-M11 was purchased from GeneCopoeia. The pEZ-M11-REV-ERBα H602F construct was obtained by mutagenesis [QuickChange II Site-Directed Mutagenesis Kit (Agilent Technologies)] of the wild type vector following the supplier's recommendations to convert histidine 602 into a phenylalanine residue. The Gal4 DBD-REV-ERBα vector was built by cloning the sequence encoding the human ligand binding domain of REV-ERBα (from AA 216 to AA 614) into the pM backbone (Clontech). The Gal4-UAS tk Luc reporter gene and the pCMV NCoR RID-VP16 expression vector have been described elsewhere (5,6). The Bmal-Luc vector was built by cloning the human Bmal1 promoter region (from -280 to +38) into pGL4 basic (Promega). The normalization vector pCMV-renilla was purchased from Promega. Detailed sequence information is available upon request. HepG2 cells (5 × 10^5) or HEK293 cells (3 × 10^5) were plated on 6-well plates and transfections were performed using JetPEI (Polyplus transfection). Twenty-four hours after transfection, the medium was replaced by complete medium. Cells were treated or not with 10 µM GSK4112 or vehicle (DMSO) for 24 hours. Reporter assays were quantified using the Dual-Glo® luciferase assay system (Promega) following the supplier's recommendations.
Adenovirus transduction
Control and REV-ERBα adenovirus were purchased from Atlantic Gene Therapies. HepG2 cells were transduced in serum-free DMEM medium for 150 min. After a 48h incubation in complete DMEM medium, cells were washed with 1X PBS and incubated 24 hours in serum-free medium. Cells were then treated or not with 60 nM insulin for the indicated time.
Simple Western immunoassays
Proteins were analysed by Simple Western ® size-based assays using a Wes system as recommended by the manufacturer (ProteinSimple). Proteins (0.5 mg/mL) were detected with primary antibodies described above. Secondary antibodies were provided by the manufacturer (PS-MK14, ProteinSimple). Samples were processed according to manufacturer's recommendations. Data were analyzed using the Compass software (ProteinSimple).
Subcellular fractionation
HepG2 cells were harvested and washed twice with ice-cold 1× PBS. Cells were suspended in a hypotonic buffer [ca. 10^7 cells/mL of 20 mM Tris-HCl, pH 7.5, 10 mM NaCl, 3 mM MgCl2, 0.2% NP40 with protease and phosphatase inhibitors (Roche)] and homogenized with 20 strokes of a Dounce grinder (pestle A). Homogenates were spun at 600 g for 5 min, and the supernatant (cytosol) was collected and stored in 10% glycerol until use. Pellets were suspended in 500 µL of lysis buffer (25 mM Tris-HCl, pH 7.5, 500 mM NaCl, 2 mM EDTA, 0.5% NP-40, and protease inhibitors) for 30 min at 4°C, then sonicated for 10 min with a Bioruptor device at high power mode (sonication cycle: 30 sec ON/30 sec OFF). Lysates were centrifuged at 12,000 rpm for 5 min at 4°C. The soluble fraction (nucleoplasm) was collected and stored in 10% glycerol until use.
Immunocytochemistry
Immunofluorescent detection of REV-ERBα and OGT was performed on paraformaldehyde-fixed HepG2 cells treated or not with 10 µM GSK4112 and cultured at high Glc concentration (25 mM).
After a 24-h treatment, HepG2 cells were fixed with 4% formaldehyde and permeabilized with 0.1% Triton X-100. Immunostaining was performed with rabbit anti-REV-ERBα or mouse anti-OGT (ab184198, Abcam) monoclonal antibodies and FITC-coupled secondary antibodies. DNA was stained using the Hoechst 33258 intercalating agent. Images were acquired using a Leica DMI6000B microscope.
TET enzyme activity assays
TET enzymatic activity was assayed in the nuclear fraction of treated cells as follows. Cells were grown on P150 plates and washed twice using ice-cold 1xPBS and harvested in 1 mL ice-cold PBS. Cells were centrifuged at 600g for 2 min. The pellet was suspended into 200 µL buffer A (10 mM HEPES pH 7.9, 10 mM KCl, 1.5 mM MgCl2, 0.34 M sucrose, 10% glycerol) and incubated on ice for 10 min. After centrifugation at 1,300 rpm for 10 min, pellets were washed once with buffer A and suspended in 100 µL buffer B (10 mM HEPES pH 7.9, 10 mM KCl, 3 mM EDTA, 0.2 mM EGTA, 1 mM DTT) for 30 min. TET enzyme activity was measured with the Epigenase TM 5mC Hydroxylase TET Activity/Inhibition Assay kit (Epigentek) according to manufacturer's instructions.
5-hmC assay
The 5hmC content was determined as follows. HepG2 and mouse liver genomic DNA were extracted using the phenol/chloroform method after proteinase K and RNase treatment (7). HepG2 5hmC levels were determined using the MethylFlash TM hydroxymethylated DNA quantification kit (Epigentek) according to manufacturer's instructions. Mouse liver 5hmC level was determined using a dot blot assay. Briefly, 200 ng DNA were denatured in 0.1 M NaOH at 99°C for 5 min followed by cooling down at 4°C and a neutralization step with 0.66 M ammonium acetate. DNA was blotted on Hybond TM -N+ membrane (Amersham GE Healthcare) and UV crosslinked. Hydroxymethylated DNA was detected using an anti-5hmC-DNA rabbit polyclonal antibody (1/1000, C15310210, Diagenode).
Hydroxymethylated DNA immunoprecipitation (hMeDIP)
The hMeDIP assay was performed on mouse liver DNA using the hMeDIP kit (Diagenode) according to the manufacturer's instructions. Briefly, DNA was extracted with the phenol/chloroform method as above. Seven µg of purified DNA were sonicated for 4x10 min with a Bioruptor device at high power mode (sonication cycle: 30 sec ON/30 sec OFF) and heat-denaturated (10 min at 95°C).
After 5 min incubation on ice, hydroxymethylated DNA was immunoprecipitated with an anti-5hmC-DNA mouse monoclonal antibody bound to magnetic beads. After an overnight incubation, beads were washed and DNA (IPed and input) was treated with proteinase K and purified. The Srebf1 hydroxymethylated region enrichment were then quantified by qPCR and compared to input.
Mass spectrometry analysis of the HepG2 O-GlcNAcylome
REV-ERBα−specific and control siRNA treatments were performed in HepG2 cells. HepG2 cell extracts were immunoprecipitated with the anti O-GlcNAc antibody RL2. An aliquot from immunoprecipitates was separated by SDS-PAGE, and each band was cut into small pieces to perform in-gel tryptic digestion. Briefly, chopped gel pieces were washed three times with 25 mM ammonium bicarbonate containing 50% (v/v) acetonitrile (ACN). Samples were dehydrated by ACN and dried for 10 min at 37°C. Reduction and alkylation of samples were performed by adding dithiothreitol and iodoacetamide, respectively. Then, gel pieces were rehydrated in a digestion buffer containing 50 mM ammonium bicarbonate and 10 ng/µL trypsin (Promega, sequencing grade). The rehydrated transparent gel pieces were placed into 50 mM ammonium bicarbonate, and then incubated overnight at 37°C. The digested products were extracted with 100 µL of 5% formic acid in 80% acetonitrile (v/v). The peptide solution was then dried completely by vacuum centrifugation.
In parallel, another aliquot of immunoprecipitated sample was used for label free quantification. The samples were treated with 0.5 M N-acetyl-D-glucosamine to elute bound proteins from beads. The supernatant was subjected to eFASP tryptic digestion. UF filters from Amicon® units (10 kDa cutoff limit; Millipore, Billerica, MA) were incubated overnight in 5% (v/v) TWEEN-20 (T20, Sigma-Aldrich).
After incubation, the filter units were rinsed thoroughly by three immersions in MS-grade water. The eFASP digestion (8,9) was as follows. Samples were mixed in 50 µL of reducing buffer [4% SDS, 0. for 60 min with shaking in the dark. After centrifugation at 13,000 g for 30 min, the filtrate was discarded. To remove residual IAA, 200 μL of exchange buffer A was added to each filter unit and centrifuged. This buffer addition/centrifugation step was repeated once. Three washes with the eFASP digestion buffer (100 μL) (50 mM ABC, 0.2% DCA pH 8) were performed then 1 μg trypsin (1:50 w/w) was added. Digestion proceeded for 16 h at 37°C. Peptides were recovered by transferring the UF filter to a new collection tube and spinning at 13,000 g for 20 min. To achieve complete peptide recovery, filters were rinsed twice with 50 μL of 50 mM NH4HCO3. Ethyl acetate (200 µL) was added to the peptide-containing filtrate and was transferred to a 2 mL tube to which 2.5 μL TFA was added and quickly vortexed. White thread-like precipitates were visible for large quantities of peptides. Peptide precipitates were mixed with 800 µL of ethyl acetate and were centrifuged at 13,000 g for 10 min. The organic supernatant was discarded and this step was repeated twice. The aqueous phase was placed in a thermomixer at 60°C for 5 minutes to evaporate residual ethyl acetate and organic solvents and volatile salts were then removed by vacuum-drying. This step was repeated two times with 50% methanol. Samples were then diluted tenfold in buffer A of nano-HPLC (5% acetonitrile and 0.1% formic acid) and each sample (n = 4) was injected four times in HPLC instrument to be analyzed in triplicates.Peptides mixtures were analyzed using a nanoflow HPLC instrument (U3000 RSLC Thermo Fisher Scientific) coupled on-line to a quadrupole-Orbitrap mass spectrometer (Q Exactive Plus, Thermo Scientific) with a nano-electrospray ion source. One µL of peptide mixture (corresponding to 500 ng of proteins) was loaded onto the pre-concentration trap (Thermo Scientific, Acclaim PepMap100 C18, 5 µm, 300 µm i.d. × 5 mm) using partial loop injection, for 5 min at a 10 μL/min flow rate with buffer A. Peptides were separated on analytical column (Acclaim PepMap100 C18, 3 The "target-decoy" search strategy was used for estimating the frequencies of incorrect protein identifications (FDR), based on a reverse database generated automatically in MaxQuant. The precursor mass and fragment mass were identified with an initial mass tolerance of 10 ppm and 20 ppm, respectively. The search included variable modifications of methionine oxidation, asparagine and glutamine deamidation, tyrosine, serine and threonine phosphorylation and N-terminal acetylation and glutamine to pyroglutamate conversion, and fixed modifications of carbamidomethyl cysteine and HexNAc serine and threonine. Minimal peptide length was set to six amino acids and a maximum of three mis-cleavages was allowed. The FDR was set to 0.01 for peptide and protein identifications. To maximize the number of quantification events across samples, MS runs from skeletal muscle were analysed with the "match between runs" option in the MaxQuant software, which allowed the quantification of high-resolution MS1 features that were not identified in each single measurement. This algorithm was enabled using a 60-sec retention time window for individual matching and a 20-min retention time window for complete alignment of the spectrum. 
In the case of identified peptides that are all shared between two proteins, these were combined and reported as one protein group.
Moreover, protein contaminants, proteins identified only based on variable modification sites, and proteins matching the reverse decoy database were filtered out. LFQ intensities for the respective protein groups were loaded into Perseus (1.5.6.0) and analysed. Raw LFQ intensities were log2-transformed. At least four LFQ values per protein group needed to be present for the analysis. To replace non-quantified values with low intensities, data imputation was performed based on the normal distribution of LFQ intensities. Significant interactors were determined using a two-sample test with Benjamini-Hochberg FDR at 0.05.
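A rough sketch of this label-free quantification analysis (log2 transform, imputation from a down-shifted normal distribution, two-sample testing with Benjamini-Hochberg control) is given below; the imputation parameters, the t-test, and the data layout are stand-ins for Perseus defaults, which are not restated here.

import numpy as np
from scipy import stats

def analyze_lfq(lfq, group_a, group_b, min_valid=4, fdr=0.05, seed=0):
    # lfq: {protein: {sample_name: intensity or np.nan}}; group_a/group_b: sample name lists.
    rng = np.random.default_rng(seed)
    rows = []
    for prot, vals in lfq.items():
        x = np.log2(np.array([vals.get(s, np.nan) for s in group_a + group_b], dtype=float))
        if np.sum(~np.isnan(x)) < min_valid:
            continue
        # impute missing values from a narrow, down-shifted normal distribution
        mu, sd = np.nanmean(x), np.nanstd(x)
        miss = np.isnan(x)
        x[miss] = rng.normal(mu - 1.8 * sd, 0.3 * sd + 1e-9, size=miss.sum())
        a, b = x[:len(group_a)], x[len(group_a):]
        t_stat, p = stats.ttest_ind(a, b)
        rows.append((prot, float(np.mean(a) - np.mean(b)), float(p)))
    # Benjamini-Hochberg step-up procedure on the sorted p-values
    rows.sort(key=lambda r: r[2])
    m = len(rows)
    cutoff = 0
    for rank, row in enumerate(rows, start=1):
        if row[2] <= fdr * rank / m:
            cutoff = rank
    return [(prot, diff, p, rank <= cutoff)
            for rank, (prot, diff, p) in enumerate(rows, start=1)]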
The wig files for 5hmC-DNA ChIP-seq data were converted to bigwig and lifted to mm10 using the liftOver tool from the UCSC Genome Browser web site. Functional regions of interest were defined as regions spanning 5kb in each direction around the center of TSS. The average ChIP-seq intensities were computed with SitePro (16) on these regions with a resolution of 50bp. The Heatmap tool (based on SitePro script) was used to cluster functional regions into 4 groups according the average signal intensity for all ChIP-seq data (setting « step » to 100bp and « saturation » to 0.01). | 2018-11-10T12:54:43.434Z | 2018-11-05T00:00:00.000 | {
"year": 2018,
"sha1": "24943d031fb015fc2fff14135478b00f509a39cb",
"oa_license": "CCBYNCND",
"oa_url": "https://www.pnas.org/content/pnas/115/47/E11033.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d9f5145a4d8bffb82d39711c2cbf449501d62e9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
224784090 | pes2o/s2orc | v3-fos-license | PTBP1‐targeting microRNAs regulate cancer‐specific energy metabolism through the modulation of PKM1/M2 splicing
Abstract Understanding of the microRNAs (miRNAs) regulatory system has become indispensable for physiological/oncological research. Tissue and organ specificities are key features of miRNAs that should be accounted for in cancer research. Further, cancer‐specific energy metabolism, referred to as the Warburg effect, has been positioned as a key cancer feature. Enhancement of the glycolysis pathway in cancer cells is what primarily characterizes the Warburg effect. Pyruvate kinase M1/2 (PKM1/2) are key molecules of the complex glycolytic system; their distribution is organ‐specific. In fact, PKM2 overexpression has been detected in various cancer cells. PKM isoforms are generated by alternative splicing by heterogeneous nuclear ribonucleoproteins. In addition, polypyrimidine tract‐binding protein 1 (PTBP1) is essential for the production of PKM2 in cancer cells. Recently, several studies focusing on non‐coding RNA elucidated PTBP1 or PKM2 regulatory mechanisms, including control by miRNAs, and their association with cancer. In this review, we discuss the strong relationship between the organ‐specific distribution of miRNAs and the expression of PKM in the context of PTBP1 gene regulation. Moreover, we focus on the impact of PTBP1‐targeting miRNA dysregulation on the Warburg effect.
implies that their expression profiles are organ-specific. Besides, the organ distribution of miRNAs is closely associated with the biological function of the organ. [13][14][15][16][17][18] Recently, cancer-specific energy metabolism (Warburg effect) has been reviewed; 19,20 increased glycolysis has been proposed as a cancer hallmark. 21 Although many genes regulate the glycolytic system, pyruvate kinase M1/2 (PKM1/2) are rate-limiting glycolytic enzymes. PKM1 and PKM2 promote TCA cycle and glycolysis, respectively. 22 PKM1 is abundantly expressed in high-energy demanding (glucose-demanding) organs such as the brain and muscle. In contrast, PKM2 is primarily expressed in other tissues (eg, fatty tissue, lung, and kidney). [23][24][25] Notably, the dimeric form of PKM2, with low affinity to phosphoenolpyruvic acid, induces a higher nucleic acid synthesis through the pentose phosphate pathway. Furthermore, PKM2 is also expressed in various proliferating cells (eg, embryonic and tumor cells). 23,24 In particular, an increase in PKM2 promotes cancer progression. [26][27][28] PKM isoforms (PKM1 and PKM2) are produced through alternative splicing, 29 under the regulation of several splicing factors, such as hnRNP and serine/arginine-rich splicing factors. [30][31][32] Of these, PTBP1, also known as hnRNPI, promotes cancer through the enhancement of PKM2 expression. [33][34][35][36][37] PTBP1 is an exonic splicing silencer, binding to optimal motifs (eg, UCUUC) in the polypyrimidine tract near the 3′ splicing site, and suppressing the downstream exon's inclusion. 38 In the PKM mRNA, the favorable sequence for PTBP1 is located at intron 8. Therefore, PTBP1 blocks the inclusion of exon 9, resulting in the expression of PKM2 through the inclusion of exon 10. 39 Importantly, the expression of PTBP1 is promoted by transcription factors with oncogenic functions, such as MYC; 39 these transcription factors are, therefore, glycolysis enhancers in cancer cells. Moreover, based on recent findings, PTBP1 is negatively regulated by miRNAs.
In this review, we discuss the miRNA-mediated regulation of PTBP1 and PKM isoforms. In particular, we elaborate on the following points. First, under physiological conditions, the expression of PTBP1 and PKM isoforms is regulated by miRNAs that are unevenly distributed throughout the organs. Second, during carcinogenesis, the dysregulation of PTBP1-targeting miRNAs affects cancer-specific energy metabolism in various types of cancer cells via PKM2 upregulation.
| THREE DISTINCT CONTEXTS OF miRNA DYSREGULATION CAUSE PTBP1/PKM2 UPREGULATION DURING CARCINOGENESIS
The organ-specific dysregulation of miRNAs and the consequent impact on PTBP1 and PKM isoforms during carcinogenesis can lead to distinct types of cancer; in this review, we focus on three major contexts. First, in glucose-demanding organs, dysregulation of brain-and muscle-specific miRNAs directly targeting PTBP1 is associated with brain tumors and sarcomas, respectively (Section 4).
Second, cooperative dysregulation of both brain- and muscle-specific miRNAs is associated especially with gastrointestinal cancers (Section 5). Third, dysregulation of liver-specific miRNAs directly targeting PKM occurs in HCC, together with cooperative dysregulation of PTBP1-targeting miRNAs (Section 6). The following sections describe each context in detail.
| REGULATION OF PTBP1 BY BRAIN- OR MUSCLE-SPECIFIC miRNAs
Brain-specific MIR124-3p is the most representative regulator of PTBP1 expression; it promotes neuronal differentiation through the repression of PTBP1 expression. 58,59 A reciprocal relationship between MIR124-3p and PTBP1 has also been described: PTBP1 binds to pre-MIR124, inhibiting the expression of mature MIR124. 60 Upregulation of PTBP1 has been detected in brain tumors, such as GBM. 39,61,62 Interestingly, this upregulation is partly due to the downregulation of MIR124-3p during carcinogenesis. 44 MIR124-3p has the largest number of binding sites in the PTBP1 3′UTR, which supports a tight regulatory connection between the two.
Another representative brain-specific miRNA, MIR9-5p, promotes differentiation of neuronal cells from retinal stem cells through downregulation of PTBP1; 63 the association of MIR9-5p and PTBP1 was also reported in glioma. 45 A natural antisense transcript (PTB-AS) stabilizes the expression of PTBP1, preventing the binding of MIR9-5p to the PTBP1 3′UTR. 45 Furthermore, brain-specific MIR137-3p suppresses PTBP1 expression through direct binding to PTBP1 in GBM cells. 25 Similar to MIR124-3p, a miRNA/PTBP1/PKM axis was demonstrated in these studies, suggesting that PTBP1 is strongly regulated by brain-specific miRNAs.
Furthermore, several muscle-specific miRNAs (MIR1-3p, MIR133b, and MIR206) also suppress PTBP1 expression. 25,43 As with the brain-specific miRNAs, dysregulation of these muscle-specific miRNAs may significantly impact carcinogenesis, especially in sarcomas of muscle origin. 25,43 In RMS, downregulation of MIR1-3p and MIR133b promoted the expression of PTBP1, contributing to the Warburg effect. 43 Interestingly, the chimeric PAX3-FOXO1 gene, a feature of alveolar RMS, was reportedly associated with PTBP1, whereas MIR133b directly regulated PAX3-FOXO1 expression. 43 However, further research on miRNAs in the context of sarcoma (a rare tumor) is warranted.
TABLE 1 (Continued)
Note: Gene names are described according to the Gene Nomenclature Committee of the Human Genome Organization (https://www.genenames.org/). The miRNA terminology used follows the proposed miRNA nomenclature guidelines. 76 The distribution characteristics and TSI were described with reference to data from the human miRNA tissue atlas (https://ccb-web.cs.uni-saarland.de/tissueatlas/). 18 The actual expression values are shown in Figure S1.
a Poorly conserved site for microRNA families broadly conserved among vertebrates. b Poorly conserved site for microRNA families conserved among mammals. Each definition is as given in the TargetScan database (http://www.targetscan.org/vert_72/).
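The tissue specificity index (TSI) referred to in the table notes can be illustrated with a small calculation. The sketch below is not taken from the tissue atlas pipeline; it assumes the commonly used tau-style definition of the TSI and uses purely hypothetical expression values, so it only shows how such an index separates tissue-specific from ubiquitously expressed miRNAs.

```python
# Minimal sketch (not from the review): a tau-style tissue specificity index (TSI).
# Expression values below are hypothetical normalized read counts.
def tissue_specificity_index(expression):
    """TSI = sum(1 - x_i / x_max) / (N - 1); 0 = ubiquitous, 1 = tissue-specific."""
    if len(expression) < 2 or max(expression) == 0:
        raise ValueError("need >= 2 tissues with non-zero maximum expression")
    x_max = max(expression)
    return sum(1 - x / x_max for x in expression) / (len(expression) - 1)

# Hypothetical values across five tissues (brain, muscle, liver, lung, colon)
mir124_like = [950.0, 3.0, 1.0, 2.0, 4.0]        # brain-enriched pattern
mir_broad = [120.0, 100.0, 90.0, 110.0, 95.0]    # broadly expressed pattern

print(round(tissue_specificity_index(mir124_like), 2))  # close to 1 -> tissue-specific
print(round(tissue_specificity_index(mir_broad), 2))    # close to 0 -> ubiquitous
```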
| IMPACT OF BRAIN- AND/OR MUSCLE-SPECIFIC MICRORNA DYSREGULATION ON OTHER TYPES OF CANCER
Although the expression of brain- and muscle-specific miRNAs is unevenly distributed among organs, the dysregulation of both sets of miRNAs reportedly affects carcinogenesis cooperatively in various types of cancer.
Impaired regulation of the PTBP1/PKM axis by MIR124-3p has been observed in CRC, chronic myelocytic leukemia, and pancreatic cancer. 24,40,46,47 MIR340-5p, which is abundant in the brain (Figure S1), also negatively regulates PTBP1 expression in CRC cells. 48 Furthermore, PTBP1 and PKM2 upregulation through the dysregulation of muscle-specific MIR1-3p and MIR133b was associated with carcinogenesis in CRC and gastric cancer. 41,42 Interestingly, our investigations showed that impairment of the miRNA/PTBP1 axis was frequently detected in colorectal adenoma specimens. 40,41 The impairment of the miRNA/PTBP1 axis may be the initial step toward carcinogenesis, especially in CRC. These findings suggest that miRNA/PTBP1 axis-induced PKM2 overexpression represents a key, intrinsic mechanism of carcinogenesis.
| miRNA-MEDIATED REGULATION OF PKM ISOFORM EXPRESSION IN HEPATOCELLULAR CARCINOMA
Both PKM isoforms are rarely expressed in the liver; 25 in contrast, pyruvate kinase L/R (PKLR) is specifically expressed in the liver, 23,25,64 where the PKL isoform predominates; the liver is the main gluconeogenesis-governing organ. 23 Hence, a different perspective is required regarding PKM2 upregulation in HCC carcinogenesis. Interestingly, the 3′UTR of PKM, which is common to PKM1 and PKM2, has a binding region for liver-specific MIR122-5p, the only miRNA with a conserved site across most vertebrates as determined in silico; its dysregulation also contributes to the onset of HCC. Although we found relatively high PTBP1 expression in healthy liver compared to brain or muscle, 25 […] consistent organ distribution of miRNAs and PTBP1; therefore, the organ distribution of miRNAs under normal conditions should always be considered. Furthermore, dysregulation of the miRNA/PTBP1 axis in multiple cancer types may suggest that this mechanism is universal and essential for the development and maintenance of the Warburg effect in cancer cells. The relevant miRNAs are listed in Table 2; a summary of the systematic PTBP1 and PKM regulatory mechanisms by miRNAs is shown in Figure 1.
TABLE 2 (notes)
Note: The miRNA terminology used follows the proposed miRNA nomenclature guidelines. 76 The distribution characteristics and TSI are described with reference to the data in the human miRNA tissue atlas (https://ccb-web.cs.uni-saarland.de/tissueatlas/). 18 The actual expression values are shown in Figure S1. The number before each reference corresponds to the number of the designated type of cancer studied.
| SIGNIFICANCE OF PTBP1 IN THE WARBURG EFFECT
Recently, we found an association between PTBP1 and PKM isoforms in the Warburg effect. [40][41][42][43] Many reports show upregulation of PKM2 during carcinogenesis. [26][27][28] However, this upregulation involves two different patterns. For example, in brain and muscle, the expression of PKM1 is mainly due to the suppression of PTBP1; the switching of PKM isoforms from PKM1 to PKM2 is induced by dysregulation of the miRNA/PTBP1 axis during carcinogenesis. 24,25,43 In contrast, in gastrointestinal organs, both PKM1 and PKM2 are expressed under healthy conditions; 24,25 the PKM2/PKM1 ratio is increased (not switched) during carcinogenesis. 24,[40][41][42] The switching of PKM isoforms may have a more significant impact on cancer energy metabolism. Of note, in both high-PKM1 and high-PKM2 contexts, the dysregulation of PTBP1-targeting miRNAs further contributes to the upregulation of PKM2 (Figure 2).
We investigated the roles of PTBP1 in cancer cells through transient PTBP1 downregulation. PTBP1 silencing induced autophagy in various cancer cells, together with a PKM2 to PKM1 switch; of note, this effect was also observed after the introduction of PTBP1-targeting miRNAs. [40][41][42][43]46 In turn, this switch led to the production of ROS and ATP, activating the TCA cycle. N-Acetyl-l-cysteine […]
FIGURE 1 Regulation of polypyrimidine tract-binding protein 1 (PTBP1) and pyruvate kinase M (PKM) isoforms by microRNAs: schematics. Brain- and muscle-specific miRNAs bind to the 3′UTR of PTBP1 and downregulate PTBP1 expression. PKM1 dominance is induced through the suppression of alternative splicing in these healthy organs. PKM1 promotes the tricarboxylic acid (TCA) cycle for energy production. In the process of carcinogenesis, coordinated dysregulation of miRNAs induces PKM2 upregulation through the increase of PTBP1 expression. PKM2 promotes glycolysis and/or the synthesis of nucleic acids, especially in proliferating cells. Dysregulation of brain-specific miRNAs such as MIR9-5p, 124-3p, and 137-3p occurs in brain tumors; that of muscle-specific miRNAs (MIR1-3p, 133b, and 206) arises in sarcoma. In gastrointestinal cancers (eg, colorectal cancer), these miRNAs are dysregulated coordinately. In contrast, in the pyruvate kinase L (PKL)-dominant normal liver, MIR122-5p is abundant and downregulates both PKM1 and PKM2 by binding to the PKM 3′UTR. We assume that in hepatocellular carcinoma, the dominance of PKM2 is caused by coordinated dysregulation of PKM-targeting (MIR122-5p) and PTBP1-targeting miRNAs (MIR194-5p). Thus, there are three types of miRNA dysregulation behind the upregulation of PKM2 in cancer cells.
| FUTURE PERSPECTIVES
Our review highlights many shades of gray in the field. First, we have not discussed the organ distribution of all potential PTBP1-binding miRNAs. In addition, several miRNAs have been suggested (in silico) as PTBP1-binding miRNAs. For example, MIR133a-3p is a muscle-specific miRNA, 14,18 and its relationship with PTBP1 has been reported in the context of human islet insulin biosynthesis and dengue virus replication. 67,68 A list of miRNAs that can potentially bind to PTBP1, based on a target-predicting database, is provided in Table 3. However, further studies are needed to integrate these findings in the context of PTBP1 targeting.
Second, the miRNA-mediated regulatory mechanisms of PKM isoforms are not entirely understood. For instance, MIR369 enhances the expression of PKM2 via the stabilization of HNRNPA2B1 in cell reprogramming. 69 Various splicing factors and miRNAs may form a complex regulatory network for PKM isoforms, which deserves further exploration. Third, the regulatory mechanisms of PKLR remain unclear. Although a previous study showed that the expression of PKLR was not changed in HCC, 51 this finding needs to be investigated in more detail.
Fourth, the PTBP1 functions other than the regulation of PKM isoforms have not been sufficiently elucidated. PTBP1 is involved in several steps in the metabolism of mRNAs, including mRNA stability, mRNA transport, 3'-end processing, and internal ribosome entry site-mediated translation. 45,70 In cancer cells, PTBP1 was shown to impact migration, invasion, apoptosis, and cell cycle. 71 Hence, the molecular mechanisms of PTBP1 in cancer cells, with a focus on other splicing target genes or mRNA metabolism, need to be investigated.
Fifth, the roles of PKM1 are not well understood; of note, PKM1 is upregulated in various chemo-resistant cells. 72 Moreover, PKM1 is an activator of glucose metabolism, boosting tumor cell growth. 73 Besides, in neuroendocrine lung tumors (NET), higher PKM1 expression was observed compared to non-NET tumors. 73 Therefore, PKM1 should be considered a biomarker of chemo-resistance and a potential therapeutic target in some types of cancer.
FIGURE 2 Relationship between pyruvate kinase M (PKM) isoforms, cancer development, and anticancer effects. In carcinogenesis, the establishment of PKM2 dominance follows two patterns. PKM1 to PKM2 switching occurs in PKM1-dominant organs such as brain and muscle. Dysregulation of polypyrimidine tract-binding protein 1 (PTBP1)-targeting miRNAs (brain- and muscle-specific) induces the switch to PKM2 dominance through PTBP1 upregulation in brain tumors and myosarcoma. This PKM2-dominant change is defined as the "switching type." In contrast, in the gastrointestinal tract, both PKM1 and PKM2 are expressed. PKM2 expression is further upregulated through dysregulation of the PTBP1-targeting miRNA/PTBP1 axis in carcinogenesis. This PKM2-dominant change is defined as the "increasing type." In cancer cells, PKM2 is consistently dominant. Downregulation of PTBP1, via PTBP1-targeting miRNAs or PTBP1 gene silencing (siRNA-PTBP1), induces growth inhibition, metabolic change, and the production of reactive oxygen species through PKM2 to PKM1 switching.
We should also consider organ specificity in the context of clinical applications. Recently, MIR34a-5p (MRX34) was selected as a therapeutic tool in various solid tumors; a phase I study (NCT01829971) was conducted 74 and terminated due to immune-related adverse events, and the suitability of the drug delivery system was questioned. 75 Nonetheless, we suggest that the organ specificity of MIR34a-5p should also be considered; the organ distribution of the particular miRNA under healthy conditions should be factored in to maximize the effectiveness of the treatment and to avoid potential side effects.
| CONCLUSION
In summary, the regulation of PTBP1 is organ-specific; brain- or muscle-specific miRNAs partially contribute to the organ-specific expression of PKM isoforms. Moreover, the Warburg effect in cancer cells is due to the upregulation of glycolysis-related proteins, such as PKM2, through the dysregulation of single or multiple miRNA/PTBP1 axes. This review suggests that the organ specificity of miRNAs partially governs the characteristics of each tissue and that miRNA dysregulation profoundly contributes to carcinogenesis.
ACKNOWLEDGMENTS
We thank our collaborators, including colleagues from Gifu University and Osaka Medical College. We would like to thank Editage (www.editage.com) for English language editing.
DISCLOSURE
The authors declare no conflicts of interest. Abbreviations: PTBP1, polypyrimidine tract-binding protein 1; 3′UTR, three prime untranslated region. a Listed as a set of MIR124-2 in the TargetScan database. | 2020-10-20T13:05:28.706Z | 2020-10-18T00:00:00.000 | {
"year": 2020,
"sha1": "0dc927b9db90aede41d90cba1aaaad17439363ce",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cas.14694",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "547a2ddf71f3dd4b01e9f5d6518196d6fdf28f84",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
118402084 | pes2o/s2orc | v3-fos-license | Branching ratios of $B_c$ Meson Decaying to Vector and Axial-Vector Mesons
We investigate the weak decays of $B_c$ mesons in Cabibbo-Kobayashi-Maskawa favored and suppressed modes. We present a detailed analysis of the $B_c$ meson decaying to vector meson (V) and axial-vector meson (A) in the final state. We also give the form factors involving $B_c \to A$ transition in the Isgur-Scora-Grinstein-Wise II framework and consequently, predict the branching ratios of $B_c \to V A$ and $AA$ decays.
I. INTRODUCTION
The $B_c$ meson was first discovered by the CDF collaboration at Fermilab [1] in 1998. At present, more precise measurements of its mass and lifetime are available from the Particle Data Group (PDG) [2], i.e. $M_{B_c} = 6.277 \pm 0.006$ GeV and $\tau_{B_c} = (0.453 \pm 0.042) \times 10^{-12}$ s. LHC-b is expected to produce $5 \times 10^{10}$ events per year [3][4][5][6], which is around 10% of the total $B$ meson data. This will provide a rich amount of information regarding the $B_c$ meson.
The $B_c$ meson is a unique Standard Model (SM) particle: it is a quark-antiquark bound state ($b\bar{c}$) consisting of two heavy quarks of different flavors and is, therefore, flavor asymmetric. The study of the $B_c$ meson is of special interest as compared to the flavor-neutral heavy quarkonium ($b\bar{b}$, $c\bar{c}$) states, as it decays only via weak interactions, while the latter decay predominantly via strong and/or electromagnetic interactions. The decay processes of the $B_c$ meson can be divided into three categories involving: (i) decay of the $b$ quark with the $c$ quark as spectator, (ii) decay of the $c$ quark with the $b$ quark as spectator, and (iii) the relatively suppressed annihilation of $b$ and $\bar{c}$, which is ignored in the present work. One can find several theoretical works based on a variety of quark models [7][8][9][10][11][12][13][14][15][16][17][18] for the semileptonic and nonleptonic decays of $B_c$ emitting s-wave mesons, i.e. pseudoscalar ($P$) and vector ($V$) mesons. Relatively less attention has been paid to the p-wave meson emitting weak decays of the $B_c$ meson [19][20][21][22][23][24][25]. In the recent past, several relativistic and non-relativistic quark models [13,15,[19][20][21][22] have been used, employing the factorization approach, to calculate the branching ratios (BRs) of the $B_c$ meson decaying to a p-wave charmonium ($c\bar{c}$) state in the final state. Most recently, the Salpeter method [24] and the improved Bethe-Salpeter approach [25] have been used to probe nonleptonic decays of the $B_c$ meson. On the experimental side, more measurements regarding the $B_c$ meson will be available soon at the Large Hadron Collider (LHC), LHC-b and Super-B experiments. High-precision instrumentation at these experiments may provide precise measurements of BRs of the order of $10^{-6}$, which makes the study of $B_c$ meson decays more interesting. The developing theoretical and experimental aspects of $B_c$ meson physics motivate us to investigate weak hadronic decays of the $B_c$ meson emitting vector ($V$) and axial-vector ($A$) mesons in the final state. We employ the improved Isgur-Scora-Grinstein-Wise quark model (known as the ISGW II model) [26,27] to obtain the $B_c \to A$ transition form factors. Using the factorization approach, we calculate the decay amplitudes and predict the branching ratios of $B_c \to VA$ and $AA$ decays. For the $B_c \to V$ transition form factors we rely on our previous work [18] based on flavor dependence effects in the Bauer-Stech-Wirbel (BSW) model framework [28].
The presentation of the article goes as follows. We discuss the mass spectrum and the methodology in Sections II and III, respectively. Decay constants are discussed in Section IV. We present the $B_c \to A$ transition form factors in the ISGW II model and give a brief account of the $B_c \to V$ transition form factors in Section V. Consequently, the branching ratios are estimated. Results and discussions are presented in Section VI, and the last section contains the summary and conclusions.
In the present work, we use a standard mixing scheme for the isoscalar ($1^{++}$) mesons; likewise, an analogous mixing is used for the isoscalar ($1^{+-}$) mesons. It has been observed that $f_1(1.285) \to 4\pi/\eta\pi\pi$, $f'_1(1.512) \to K\bar{K}\pi$, $h_1(1.170) \to \rho\pi$ and $h'_1 \to K\bar{K}^*/\bar{K}K^*$ predominantly, which seems to favor ideal mixing for both the $1^{++}$ and $1^{+-}$ nonets. The hidden-flavor diagonal $^3P_1$ and $^1P_1$ states have opposite C-parity and therefore cannot mix. However, there is no restriction on such mixing for the strange and charmed states, which are most likely mixtures of $^3P_1$ and $^1P_1$ states. The strange partners of the $A$ ($J^{PC} = 1^{++}$) and $A'$ ($J^{PC} = 1^{+-}$) states, i.e. the $K_{1A}$ and $K_{1A'}$ mesons, mix to generate the physical states. Numerous phenomenological analyses indicate that the strange axial-vector mixing angle $\theta_K$ lies in the vicinity of $\sim 35^\circ$ or $\sim 55^\circ$; see [29] for details. Experimental information based on $\tau \to K_1(1.270)/K_1(1.400) + \nu_\tau$ data yields $\theta_K = \pm 37^\circ$ and $\theta_K = \pm 58^\circ$ [30]. However, the negative mixing angle solutions are favored by $D \to K_1(1.270)\pi/K_1(1.400)\pi$ decays and by the experimental measurement of the ratio of $K_1\gamma$ production in $B$ decays [31]. Following the discussion given in Ref. [29], which states that the mixing angle $\theta_K \sim 35^\circ$ is preferred over $\sim 55^\circ$, we use $\theta_K = -37^\circ$ in our numerical calculations. This choice is based on the observation that the angles adopted for the $f-f'$ and $h-h'$ mixing schemes (which are close to ideal mixing) are intimately related to the choice of the mixing angle $\theta_K$. In general, analogous mixing relations hold for the charmed and strange charmed states. As pointed out in [31], for heavy mesons the heavy quark spin $S_Q$ and the total angular momentum of the light antiquark can separately be used as good quantum numbers. In the heavy quark limit, the physical mass eigenstates $P_1^{3/2}$ and $P_1^{1/2}$ with $J^P = 1^+$ can be expressed as combinations of the $^3P_1$ and $^1P_1$ states. Thus, the states $D_1(2.427)$ and $D_1(2.422)$ can be identified as $P_1^{1/2}$ and $P_1^{3/2}$, respectively. However, beyond the heavy quark limit there is a mixing between the $P_1^{1/2}$ and $P_1^{3/2}$ states, and similarly for the strange charmed axial-vector mesons. A detailed analysis by Belle [32] yields the mixing angle $\theta_2 = (-5.7 \pm 2.4)^\circ$, while the quark potential model [33,34] determines $\theta_3 \approx 7^\circ$.
B. Decay Amplitudes
In the generalized factorization hypothesis, the decay amplitudes can be expressed as a product of the matrix elements of weak currents (up to the weak scale factor $\frac{G_F}{\sqrt{2}} \times$ CKM elements $\times$ QCD factor). Using Lorentz invariance, the hadronic transition matrix elements [26][27][28] of the relevant weak current between meson states can be expressed in terms of form factors; similar expressions hold for axial-vector meson states. It may be noted that the $B_c \to A/A'$ transition form factors in the ISGW II framework are related to the BSW-type form factor notations [28], i.e. $A$, $V_{0,1,2}$. Sandwiching the weak Hamiltonian (3.1) and (3.2) between the initial and final states, the decay amplitudes for the various $B_c \to MA$ decay modes ($M = V$ or $A$) can be obtained for the following three categories [28]:
1. Class I transitions: contain those decays which are caused by the color-favored diagram; the decay amplitudes are proportional to $a_1$, where $a_1(\mu) = c_1(\mu) + \frac{1}{N_c}\,c_2(\mu)$, and $N_c$ is the number of colors.
2. Class II transitions: consist of those decays which are caused by color-suppressed diagrams. The decay amplitude in this class is proportional to $a_2$, i.e. for the color-suppressed modes $a_2(\mu) = c_2(\mu) + \frac{1}{N_c}\,c_1(\mu)$.
3. Class III transitions: these decays are caused by the interference of color-singlet and color-neutral currents and involve both color-favored and color-suppressed diagrams, i.e. the amplitudes $a_1$ and $a_2$ interfere.
For numerical calculations, we follow the convention of taking $N_c = 3$ to fix the QCD coefficients $a_1$ and $a_2$, using the Wilson coefficients of Ref. [35]. A detailed analysis regarding $N_c$ counting and the role of color-octet current operators is available in [34]. It may be noted that $N_c$, the number of color degrees of freedom, may be treated as a phenomenological parameter in weak meson decays, which accounts for non-factorizable contributions. This implies that the effective expansion parameter is something like $1/[(4\pi)N_c]$, $1/N_c^2$, ..., or that the non-leading $1/N_c$ terms are suppressed for some reason [35]. In order to study the variation in the decay rates and branching ratios, we effectively vary the parameter $N_c$ from 3 to 10. The obtained results are thus presented as an average with uncertainties spanning the branching ratios at $N_c = 3$ to $N_c = 10$. Taking into account the constructive interference observed in $B$ meson decays involving both color-favored and color-suppressed diagrams [35], we take the ratio $a_2/a_1$ to be positive in the present calculations.
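As an illustration of how the effective coefficients respond to the $N_c$ variation described above, the short script below evaluates $a_1$ and $a_2$ for $N_c = 3$ to 10. The Wilson coefficient values $c_1$ and $c_2$ are placeholders chosen only for illustration, since the specific inputs quoted from Ref. [35] are not reproduced here.

```python
# Illustrative sketch (input values are assumptions, not the paper's): effective QCD
# coefficients a1, a2 built from Wilson coefficients c1, c2 while the color factor Nc
# is varied from 3 to 10, mirroring the phenomenological treatment of Nc in the text.
c1, c2 = 1.12, -0.26   # hypothetical Wilson coefficients at the b-quark scale

def effective_coefficients(nc, c1=c1, c2=c2):
    a1 = c1 + c2 / nc   # color-favored (Class I) combination
    a2 = c2 + c1 / nc   # color-suppressed (Class II) combination
    return a1, a2

for nc in range(3, 11):
    a1, a2 = effective_coefficients(nc)
    print(f"Nc = {nc:2d}:  a1 = {a1:+.3f},  a2 = {a2:+.3f},  a2/a1 = {a2 / a1:+.3f}")
```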
C. Decay Widths
Like the vector meson ($V$), the axial-vector meson ($A$) also carries spin degrees of freedom; therefore, the decay rate [31] of $B_c \to VA$ is composed of three independent helicity amplitudes $H_0$, $H_{+1}$ and $H_{-1}$, where $p_c$ is the magnitude of the three-momentum of a final-state particle in the rest frame of the $B_c$ meson and $M = V$ or $A$. The helicity amplitudes $H_0$, $H_{+1}$ and $H_{-1}$ are defined in terms of the coefficients $a$, $b$ and $c$, which describe the s-, d- and p-wave contributions, respectively; $m_M$ and $m_A$ denote the masses of the respective mesons.
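For orientation, the sketch below evaluates a two-body decay width from a set of helicity amplitudes, assuming the commonly used relation $\Gamma = \frac{p_c}{8\pi M^2}\,(|H_0|^2 + |H_{+1}|^2 + |H_{-1}|^2)$. The explicit expressions of $H_0$ and $H_{\pm 1}$ in terms of the coefficients $a$, $b$ and $c$ used in the paper are not reproduced; the masses and amplitude values in the example are placeholders.

```python
import math

# Hedged sketch: two-body decay width from helicity amplitudes, assuming the commonly
# used form Gamma = p_c / (8 * pi * M^2) * (|H_0|^2 + |H_+1|^2 + |H_-1|^2).
# Helicity amplitudes below are placeholder numbers (in GeV), not results of the paper.

def momentum_cm(M, m1, m2):
    """Magnitude of the final-state three-momentum in the rest frame of the parent."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)  # Kaellen function
    return math.sqrt(lam) / (2.0 * M)

def width_from_helicity(M, m1, m2, H0, Hp, Hm):
    p_c = momentum_cm(M, m1, m2)
    return p_c / (8.0 * math.pi * M**2) * (abs(H0)**2 + abs(Hp)**2 + abs(Hm)**2)

M_Bc, m_V, m_A = 6.277, 2.010, 2.422        # GeV; parent and generic V, A masses
H0, Hp, Hm = 1.0e-6, 4.0e-7, 2.0e-7          # placeholder helicity amplitudes (GeV)
print(f"Gamma ~ {width_from_helicity(M_Bc, m_V, m_A, H0, Hp, Hm):.3e} GeV")
```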
IV. DECAY CONSTANTS
The decay constants for axial-vector mesons are defined by the matrix elements given in the previous section. It may be pointed out that the axial-vector meson states are represented by a $3 \times 3$ matrix and transform accordingly under charge conjugation [30]. Since the weak axial-vector current transforms as $(A_\mu)^b_a \to (A_\mu)^a_b$ under charge conjugation, the decay constant of the $^1P_1$ meson should vanish in the SU(3) flavor limit [30]. Experimental information based on $\tau$ decays gives the decay constant $f_{K_1(1270)} = 0.175 \pm 0.019$ GeV [20,31], while the decay constant for $K_1(1.400)$ can be obtained from the relation given in [31]. In the case of non-strange axial-vector mesons, Nardulli and Pham [36] used the mixing angle for the strange axial-vector mesons and SU(3) symmetry to determine $f_{a_1} = 0.223$ GeV for $\theta_1 = -58^\circ$. Since $a_1$ and $f_1$ lie in the same nonet, we assume $f_{f_1} \approx f_{a_1}$ under SU(3) symmetry. Due to charge conjugation invariance, the decay constants of the $^1P_1$ non-strange neutral mesons $b_1^0(1.235)$, $h_1(1.170)$, and $h'_1(1.380)$ vanish. Also, owing to G-parity conservation in the isospin limit, the decay constant $f_{b_1} = 0$.
V. FORM FACTORS
In this section, we give a short description of the calculation of the $B_c \to A$ and $B_c \to V$ transition form factors.
We use the ISGW II model [27] to calculate the $B_c \to A/A'$ transition form factors. The ISGW model is a non-relativistic constituent quark model [26], which obtains an exponential $q^2$-dependence of the form factors. It employs variational solutions of the Schrödinger equation based on harmonic oscillator wave functions, using a Coulomb plus linear potential. In general, the evaluated form factors are considered reliable at $q^2 = q^2_m$, the maximum momentum transfer $(m_B - m_X)^2$. The reason is that the form-factor $q^2$-dependence in the ISGW model is proportional to $e^{-(q^2_m - q^2)}$, and hence the form factor decreases exponentially as a function of $(q^2_m - q^2)$. This has been improved in the ISGW II model [27], in which the form factor has a more realistic behavior at large $(q^2_m - q^2)$, expressed in terms of a certain polynomial term. In addition to this, the ISGW II model incorporates a number of improvements, such as the heavy quark symmetry constraints, the heavy-quark-symmetry-breaking color magnetic interaction, relativistic corrections, etc.
The form factors have simplified expressions in the ISGW II model for $B_c \to A/A'$ transitions caused by the $b \to c$ quark transition [26,27]; the $t\,(\equiv q^2)$ dependence enters through $\tilde{\omega}$ and the functions $F^{(l)}$, in particular $F_5$. Here $m$ is the sum of the constituent quark masses of the meson, $\tilde{m}$ is the hyperfine-averaged physical mass, $n_f$ is the number of active flavors, which is taken to be five in the present case, $t_m = (m_{B_c} - m_A)^2$ is the maximum momentum transfer, and $\mu_{QM}$ is the quark model scale. The values of the parameter $\beta$ for the different s-wave and p-wave mesons [26,27] are given in Table I, and the resulting form factors are listed in Tables II and III. It may be pointed out that the form factors are sensitive to the choice of quark masses. The variation in quark masses, particularly in the light-quark sector, may lead to uncertainties in the form factors; therefore we allowed a certain range based on the literature [38]. These uncertainties in the form factors are shown in Tables II and III.
For the $B_c \to V$ transition form factors we use our previous work [18] based on the BSW framework [28], in which one of the authors investigated the possible flavor dependence in the $B_c \to P/V$ form factors and consequently in the $B_c \to PP/PV$ decay widths. It may be noted that in the BSW model [28] the form factors depend upon the average transverse quark momentum inside a meson, $\omega$, which is fixed in the model to 0.40 GeV. However, it has been pointed out that $\omega$, being a dimensional quantity, may show flavor dependence. Therefore, it may not be justified to take the same $\omega$ for all the mesons. Following the analysis described in [18], we estimate $\omega$ for the different mesons from $|\psi(0)|^2$, i.e. the square of the wave function at the origin obtained from the hyperfine splitting term for the meson masses, which in turn fixes the quark masses (in GeV) to be $m_u = m_d = 0.31 \pm 0.04$, $m_s = 0.49 \pm 0.04$, $m_c = 1.7 \pm 0.04$, and $m_b = 5.0 \pm 0.04$ for $\alpha_s(m_b) = 0.19$, $\alpha_s(m_c) = 0.25$, and $\alpha_s = 0.48$ (for the light flavors u, d and s). Here also, the variation in $\alpha_s$ may lead to uncertainty in the quark masses [38] and consequently in the form factors. For further details we refer the interested reader to [18]. We find that all of the form factors get significantly enhanced due to the flavor dependence of $\omega$. The obtained form factors, along with the corresponding uncertainties due to the variation in quark masses, are shown in Table IV.
It may also be noted that consistency with Heavy Quark Symmetry (HQS) requires certain form factors, such as $F_1$, $A_0$, $A_2$ and $V$, to have a dipole $q^2$-dependence [28]. Therefore, we use the corresponding $q^2$-dependence for the different form factors, with appropriate pole masses $m_i$.
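Since the explicit $q^2$-dependence expression is not reproduced above, the following sketch illustrates the generic pole-dominance behavior, $F(q^2) = F(0)/(1 - q^2/m_{pole}^2)^n$, with $n = 2$ for the dipole case mentioned for $F_1$, $A_0$, $A_2$ and $V$. The $F(0)$ value and pole mass used are placeholders, not values from the paper's tables.

```python
# Sketch of a pole-dominance q^2 extrapolation (the paper's explicit expression is not
# recoverable here): monopole (n=1) or dipole (n=2) behavior,
#   F(q^2) = F(0) / (1 - q^2 / m_pole^2)^n .
# The pole mass and F(0) below are placeholders, not values from the paper's tables.

def pole_form_factor(q2, f0, m_pole, n=2):
    return f0 / (1.0 - q2 / m_pole**2) ** n

f0, m_pole = 0.30, 6.34          # hypothetical F(0) and pole mass in GeV
for q2 in (0.0, 2.0, 5.0, 10.0):  # GeV^2
    print(f"q2 = {q2:5.1f} GeV^2 ->  F = {pole_form_factor(q2, f0, m_pole):.3f}")
```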
VI. RESULTS AND DISCUSSIONS
Using the decay constants and form factors described in Section IV and V, respectively, we predict the branching ratios of B c → V A and B c → AA decays in CKM favored and CKM suppressed modes.
The branching ratios for $B_c$ decaying to a vector and an axial-vector meson in the final state in the CKM-favored and CKM-suppressed modes are given in column 2 of Tables V-X. We also give the helicity amplitudes of the corresponding decay channels in columns 3, 4 and 5 of the respective Tables V-X. We observe the following.
For the CKM-favored modes:
1. The branching ratios for the dominant decays in the Cabibbo-enhanced ($\Delta b = 1$, $\Delta C = 1$, $\Delta S = 0$) mode are: […] We wish to remark here that the first quoted uncertainty in the branching ratios is due to the effective variation of the parameter $N_c$, and the second uncertainty is caused by the variation of quark masses in the form factors. The same convention has been followed throughout the presentation of results, including the Tables.
3. It may be noted that the branching ratios for $B_c \to VA$ decays are higher for axial-vector mesons $A(^3P_1)$ in the final state as compared to $A(^1P_1)$ with the same quark content, except for strange axial-vector meson emitting decays, which are roughly of the same order.
4. We find that the longitudinal helicity amplitudes are higher in magnitude for all the decay modes.
For the CKM-suppressed modes:
4. Here also, the branching ratios for decays involving $A(^3P_1)$ mesons in the final state are higher than those of their $A(^1P_1)$ partners with the same flavor content. However, for decays involving $K_1$ and $\bar{K}_1$ the branching ratios are of the same order.
5. The longitudinal helicity amplitudes for the CKM-suppressed decays show the same trend as observed in the CKM-favored modes.
The calculated branching ratios for $B_c$ decaying to two axial-vector mesons in the final state in the CKM-favored and CKM-suppressed modes are given in column 2 of Tables XI-XVI. The corresponding helicity amplitudes of the decay channels are presented in columns 3, 4 and 5 of Tables XI-XVI. Here also, the uncertainties in the obtained results caused by the $N_c$ variation and by the quark mass variation in the form factors, respectively, are given in Tables XI-XVI. We made the following observations.
For the CKM-favored modes:
3. In the Cabibbo-favored ($\Delta b = 1$, $\Delta C = 0$, $\Delta S = -1$) mode, the dominant decay channels are: […] It may also be noted that the effective variation in $N_c$ leads to a change in the amplitudes and hence in the branching ratios of these decays. The branching ratios of color-favored class I decays show $\sim 6\%$ variation in the central value and color-suppressed class II decays show a variation of $\sim 30\%$. However, class III decays, involving both color-favored and color-suppressed diagrams, show a variation from 7% to 15%.
We wish to emphasize that, with remarkable improvements in experiments and sophisticated instrumentation, branching ratios of the order of $10^{-6}$ could be measured precisely [39] at the LHC, LHC-b and Super-B factories in the near future. Therefore, these experiments may provide the necessary information for the phenomenological study of $B_c$ meson physics.
Since there is no experimental information available at present for such decays, we compare our results with other theoretical works (see Table XVII). There are several theoretical models, such as the Bethe-Salpeter approach (BSA) [25], the Relativistic Quark Model (RQM) [13,23] and the Non-Relativistic Quark Model (NRQM) [15], which give predictions for $B_c \to VA$ decays with charmonium in the final state. We find that the results given by the different models are comparable, with some exceptions. We have used $a_1 = 1.12$ to obtain the branching ratios for these models in the comparison table. It may be noted that H.F. Fu et al. [24] also predict branching ratios for a few decay modes; their predictions are larger than our results by an order of magnitude, except for $B_c^- \to \chi_{c1} D^{*-}$, which is comparable to our prediction. In addition to these, H.F. Fu et al. [24] predict branching ratios of $B_c^- \to D_{s1}^-\phi^0/D_{s1}^-\phi^0/D_{s1}^-K^{*0}$ decays based on contributions from penguin diagrams, which we ignore in the present analysis. We wish to remark here that for $B_c \to AA$ decays, theoretical predictions for only four decay channels are available [24]. Here also, the branching ratios predicted in the present work are small as compared to the results given by [24].
VII. SUMMARY AND CONCLUSIONS
In the present work we have calculated the $B_c \to A$ transition form factors using the ISGW II model framework. Consequently, we have predicted the branching ratios of $B_c \to VA/AA$ decays. We have used flavor-dependent $B_c \to V$ transition form factors in the BSW model framework. Also, we have calculated the helicity components corresponding to the different polarization amplitudes in $B_c \to VA/AA$ decays. We draw the following conclusions:
1. […] Their branching ratios range from $10^{-3}$ to $10^{-11}$.
2. The branching ratios of the CKM-enhanced modes in the case of $B_c \to AA$ decays are smaller by an order of magnitude in comparison to those of $B_c \to VA$ decays. The dominant decays are: […] Here also, the branching ratios range from $10^{-4}$ to $10^{-10}$.
3. In the CKM-suppressed modes, the branching ratios are smaller by a further order of magnitude for both $B_c \to VA$ and $B_c \to AA$ decays. The branching ratios for the dominant decays […]. Since the LHC and LHC-b are expected to accumulate data for more than $10^{10}$ $B_c$ events per year, we hope that the predicted BRs will be measured soon in these experiments.
TABLE IV: $B_c \to V$ transition form factors at $q^2 = 0$ using flavor-dependent $\omega$ in the BSW model [34]. | 2013-01-22T10:01:26.000Z | 2012-10-30T00:00:00.000 | {
"year": 2012,
"sha1": "495d1a4ac5d691a51e9340c7da3547e5eb1fbfc7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1210.7890",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "495d1a4ac5d691a51e9340c7da3547e5eb1fbfc7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15434008 | pes2o/s2orc | v3-fos-license | Relatively Recent Evolution of Pelage Coloration in Colobinae: Phylogeny and Phylogeography of Three Closely Related Langur Species
To understand the evolutionary processes leading to the diversity of Asian colobines, we report here on a phylogenetic, phylogeographic and population genetic analysis of three closely related langurs, Trachypithecus francoisi, T. poliocephalus and T. leucocephalus, which are all characterized by different pelage coloration, predominantly on the head and shoulders. Therefore, we sequenced a 395 bp long fragment of the mitochondrial control region from 178 T. francoisi, 54 T. leucocephalus and 19 T. poliocephalus individuals, representing all extant populations of these three species. We found 29 haplotypes in T. francoisi, 12 haplotypes in T. leucocephalus and three haplotypes in T. poliocephalus. T. leucocephalus and T. poliocephalus form monophyletic clades, which are both nested within T. francoisi and diverged from T. francoisi recently, 0.46-0.27 (T. leucocephalus) and 0.50-0.25 million years ago (T. poliocephalus). Thus, T. francoisi appears as a polyphyletic group, while T. leucocephalus and T. poliocephalus are most likely independent descendants of T. francoisi that are both physically separated from T. francoisi populations by rivers, open sea or larger habitat gaps. Since T. francoisi populations show no variability in pelage coloration, the pelage coloration of T. leucocephalus and T. poliocephalus is most likely the result of new genetic mutations arising after the split from T. francoisi and not of the fixation of different characters derived from an ancestral polymorphism. This case study highlights that morphological changes, for example in pelage coloration, can occur in isolated populations within relatively short time periods, and it provides a solid basis for studies in related species. Nevertheless, to fully understand the evolutionary history of these three langur species, nuclear loci should be investigated as well.
Introduction
Primates exhibit striking examples of skin and pelage color variation. Closely related species often exhibit marked color differences, especially in the Colobinae [1]. We know little about how this color variation is generated and maintained by the processes of evolution. Whether morphological changes, for example in pelage coloration, evolve rapidly in isolated populations or require a long time to accumulate has not yet been determined.
Colobine monkeys (subfamily Colobinae) are a diverse group of Old World primates with 59-78 species grouped in up to 10 genera [2][3][4]. Extant colobines are found in a wide range of forest and woodland habitats in Africa and Asia. The Asian colobines are undoubtedly more diverse numerically and morphologically, and comprise 55 species across seven genera (Pygathrix, Rhinopithecus, Nasalis, Simias, Presbytis, Trachypithecus and Semnopithecus) [4]. The genus Trachypithecus comprises up to 20 species, which are grouped into various species groups according to similarities in morphology, ecology, behavior, distribution and genetics [2][3][4][5][6][7][8]. One of these species groups is the "limestone" or Trachypithecus francoisi langur species group [2,4,6]. The three northernmost species of the group, Trachypithecus francoisi, T. poliocephalus and T. leucocephalus, occur in nearby distribution areas but differ markedly in pelage coloration, and are thus a good example for studying the evolutionary processes underlying pelage color variation in colobine monkeys.
The François's langur (T. francoisi, Figure 1) is endemic to karst hills in tropical and subtropical south-western China (Guizhou and Guangxi Provinces) and northern Vietnam (Figure 2). The species is medium-sized, with mostly black silky hair and only white sideburns on the cheeks. The golden-headed or Cat Ba langur (T. poliocephalus, Figure 1) occurs only on the island of Cat Ba, 30 kilometers off the coast of Hai Phong in Halong Bay, north-eastern Vietnam (Figure 2). As indicated by its common name, its head and neck down to the shoulders are bright golden to yellowish. The white-headed langur (T. leucocephalus, Figure 1) is distributed within a narrow range in Guangxi Province, China, surrounded by T. francoisi populations (Figure 2). Its coloration is similar to that of T. poliocephalus, but its head, crest hair, neck, upper shoulders and tail tip are white.
As the habitats of T. poliocephalus and T. leucocephalus are close to or nested within the range of T. francoisi (Figure 2), it has been hypothesized that T. poliocephalus and T. leucocephalus might represent isolated local populations of T. francoisi with different pelage colorations [9][10][11]. This would imply that T. leucocephalus and T. poliocephalus derived from T. francoisi independently and that the former two do not share a common ancestor (hypothesis 1). On the other hand, it can be hypothesized that T. leucocephalus and T. poliocephalus share a common ancestor, because both are more similar in pelage coloration to each other than either species is to T. francoisi (hypothesis 2) [2,7,[12][13][14].
To assess both hypotheses, we have analyzed sequence variation in the hypervariable region I (HVI) of the mitochondrial control region of T. francoisi, T. leucocephalus and T. poliocephalus, with the aim to (1) elucidate the phylogenetic, phylogeographical and demographical history of these three taxa, (2) detect which and when physical barriers isolated these relatively large-bodied primates, (3) evaluate the level and partitioning of genetic variation within and among them, and (4) describe phylogeographic relationships among extant populations and identify factors influencing genetic divergence and phenotypical differences. Although we used only a maternally inherited marker, we predict that our findings will provide deep insights into the evolutionary history of these three species, which are characterized by female philopatry [15,16].
Ethics Statement
Our work was conducted according to relevant Chinese, Vietnamese and international guidelines, including those of the countries where we analyzed samples. Approval for sample collection in the wild was obtained from the China Wildlife Conservation Association, the Chinese State Forestry Administration and the Vietnamese National Forest Protection Department. Fecal samples from 165 wild animals were collected non-invasively during field surveys, without disturbing, threatening or harming the animals. Twenty-seven tissue samples were obtained from deceased individuals found in the wild. Hair samples from 36 T. francoisi individuals were provided by the Guangxi Wuzhou Inbreeding Center, and 20 and three frozen blood samples of T. francoisi and T. leucocephalus were provided by Nanning Zoo and Shanghai Safari Park, respectively. Blood collection and invasive plucking of hairs were performed during routine health checks. In the Guangxi Wuzhou Inbreeding Center, Nanning Zoo and Shanghai Safari Park, family units were housed in single indoor cages (4-6 m × 5-6 m × 6 m). Outdoor play cages for 3-5 family units are equipped with swings and trees. The langurs were fed three times per day with fruits, vegetables and leaves, which they were used to eating in the wild; thus, they were never deprived of water or food. Periodically, they received a multivitamin supplement. Samples were collected by institutional staff, and the directors gave permission to use them in our study. Collection of fecal and blood samples adhered to the American Society of Primatologists (ASP) Principles for the Ethical Treatment of Non-Human Primates (see www.asp.org/society/resolutions/EthicalTreatmentOfNonHumanPrimates.cfm).
Sample Collection, DNA Extraction and Individual Identification
Twenty-three blood, 27 muscle, 36 hair and 165 fecal samples of different individuals (178 T. francoisi, 54 T. leucocephalus, 19 T. poliocephalus) from 17 forest lots (lots 1-17), representing all of the extant populations of these three langurs, were collected (Figure 2, Table S1). Blood samples were stored at 0°C in acid citrate dextrose (ACD) solution B [17] until they were banked at −80°C. Hairs were stored under dry conditions in plastic bags. Muscle samples from deceased individuals found in the wild were stored in 95% ethanol. Fecal samples were collected during direct behavioral observations in the wild and stored in 95% ethanol. To avoid resampling of the same individual, each dropping was distinguished by freshness, size, shape and color, and feces found less than 1.5 m apart were not sampled [18]. DNA from blood and muscle samples was extracted using the standard PCI (25:24:1 mix of phenol, chloroform and isoamyl alcohol, and chloroform) method [19], while DNA from hair and feces was extracted with the Chelex-100 method [20] and the DNA Stool Mini Kit (Qiagen), respectively.
Amplification and Sequencing of Mitochondrial DNA
A 395 bp long fragment of the HVI region was amplified and sequenced with the primers 5′-AAC TGG CAT TCT ATT TAA ACT AC-3′ and 5′-ATT GAT TTC ACG GAG GAT GGT-3′. Amplification was performed in a total volume of 50 µl containing 50 mM KCl, 10 mM Tris-HCl, 1.5 mM MgCl2, 200 µM dNTPs, 0.2 µM of each primer, 1 mg/ml BSA, 1.5 U HotStart Taq DNA polymerase (Qiagen), and approx. 10 ng total DNA extract. Forty cycles were run on a Perkin-Elmer Cetus 9700 DNA thermocycler with pre-denaturation at 95°C for 15 min; denaturation at 95°C for 1 min, annealing at 56°C for 1 min and extension at 72°C for 1 min; and a final 10 min extension step at 72°C. Positive (DNA extracted from blood) and negative (water) controls were used to check PCR performance and contamination [21]. PCR products were purified with the QIAquick PCR Purification Kit (Qiagen) and sequenced with the PrismTM BigDye Terminator Ready Reaction kit (Applied Biosystems Inc.) on an ABI 377 or 3130xL Genetic Analyzer. To avoid errors in amplification and sequencing, PCR amplifications of all samples were performed twice or more, and products were sequenced from both strands.
Excluding "numts" and Cross-species Contamination
To exclude contaminations of the dataset with nuclear integrations of mitochondrial fragments ("numts"), we mainly used material in which nuclear DNA is highly degraded (feces) [22]. Moreover, from several specimens two material types (hairs, feces) were available, which resulted in identical sequences, and no multiple amplifications of different copies were detected by direct sequencing of PCR products. Most importantly, the primers for this study were constructed on the basis of complete mitochondrial genome data, generated via 2-6 overlapping long-range PCRs, from one individual of each of the three species. The comparison of the HVI sequences as derived from the HVI primer pair with the complete mitochondrial genome sequences revealed no inconsistent positions in these three individuals.
To prevent cross-species contamination, benches and plastic ware were cleaned with 10% bleach and sterile water, and then exposed to UV light for 30 min. The surface of muscle samples was also exposed to UV light (30 min). DNA extraction, PCR amplification and sequencing was conducted in separate laboratories and repeated with random samples after several months. Further, negative controls were clean and sequences from independent analyses were identical.
Mitochondrial DNA Diversity, Phylogeny and Population Structure
Sequences were aligned using ClustalX [23] and rechecked by eye. Haplotypes were likewise identified with ClustalX. Pairwise sequence differences between haplotypes were calculated using Mega 2.1 [24], and genetic diversity within populations was estimated by haplotype (h) and nucleotide (π) diversities [25] in DnaSP 4.10 [26]. For phylogenetic reconstructions, we performed maximum-likelihood (ML) and maximum-parsimony (MP) analyses using PAUP* 4.0 [27] and Bayesian analysis in MrBayes 3.0 [28]. The sequences from T. delacouri and T. obscurus were used as outgroups. MODELTEST 3.06 [29] was run to determine the appropriate model of sequence evolution in a likelihood ratio test framework. In MP analyses, gaps were treated as a fifth state. Bootstrap analyses were performed with 5,000 replicates for MP and 100 full heuristic replicates for ML. For Bayesian phylogenetic inference, four Markov chain Monte Carlo (MCMC) runs were performed for 100,000 generations, sampling every ten generations. The initial 5% of trees were discarded as burn-in. Finally, a minimum spanning network [30] and a median joining network were constructed with TCS 1.13 [31] and Network 4.5.1.6 [32], respectively.
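For illustration, both diversity measures mentioned above can be computed directly from an alignment. The sketch below uses the standard estimators of haplotype diversity and nucleotide diversity on a toy alignment; it is not the DnaSP implementation and the sequences are invented.

```python
from collections import Counter
from itertools import combinations

# Hedged sketch with toy sequences (not the langur data): haplotype diversity,
# h = n/(n-1) * (1 - sum p_i^2), and nucleotide diversity, pi = mean pairwise
# differences per site, as reported by programs such as DnaSP.

def haplotype_diversity(seqs):
    n = len(seqs)
    freqs = [c / n for c in Counter(seqs).values()]
    return n / (n - 1) * (1.0 - sum(p * p for p in freqs))

def nucleotide_diversity(seqs):
    n, length = len(seqs), len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

toy_alignment = ["ACGTACGTAC", "ACGTACGTAC", "ACGTTCGTAC", "ACGTTCGAAC", "ACGTACGTAC"]
print(f"h  = {haplotype_diversity(toy_alignment):.3f}")   # 0.700 for this toy set
print(f"pi = {nucleotide_diversity(toy_alignment):.4f}")  # 0.1000 for this toy set
```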
Divergence Time Estimation
Molecular dating was conducted with BEAST 1.5.3 [33] with a relaxed-clock MCMC approach. As modern Bayesian methods allow for the incorporation of a prior distribution of ages, two calibration points based on Perelman et al. [34] were applied as log-normal or normal priors to constrain the age of the following nodes: (1) the divergence between T. obscurus and the T. francoisi group was calibrated using a log-normal distribution so that the earliest possible sampled age corresponds to 2.40 (95% highest posterior density [HPD], 1.57-3.23) million years ago (mya), and (2) a normal distribution with mean of 0.64 mya and a standard deviation (SD) of 0.213 for the time to most recent common ancestor (TMRCA) of T. delacouri and the ancestor of T. francoisi, T. leucocephalus and T. poliocephalus. We applied these molecular-based calibration points, because no fossil data are available. The uncorrelated log-normal model was used to estimate substitution rates for all nodes in the tree with uniform priors on the mean (0, 100) and standard deviation (0, 10) of this model. In addition, we employed the Yule process of speciation as the tree prior with the ingroup assumed to be monophyletic with respect to outgroups. Each BEAST analysis consisted of 20 million generations with a random starting tree and sampling every 1,000 generations. Log files from each run were imported into Tracer 1.5 [35] and trees sampled from the first 1 million generations were discarded. Analysis of these parameters in Tracer suggested that the number of MCMC runs was adequate, with effective sample sizes (ESSs) of all parameters often exceeding 200, and Tracer plots showing strong equilibrium after discarding the burn-in. Tree files from the individual runs were combined using LogCombiner 1.5.3. The maximum-clade credibility tree topology and mean node heights were calculated from the posterior distribution of the trees and posterior probabilities $0.95 were considered as statistically significant (i.e. ''strong'') clade support [36]. Final summary trees were calculated with TreeAnnotator 1.5.3 and viewed in FigTree 1.2.2 [37].
Historical Demographic Events
Using the full data set, we estimated major demographic changes using BEAST. The Bayesian Skyline Plot (BSP) model was applied to examine a number of different population sizes through time and to run a smoothing procedure to visualize historical population size changes [33]. Standard MCMC sampling is used by the BSP to estimate the posterior distribution of effective population size through time from a sample of gene sequences, given a specified nucleotide-substitution model. Compared with previous methods, the BSP includes credibility intervals for the estimated effective population size at every point in time, back to the most recent common ancestor of the gene sequences [33]. The prior settings were the same as described for the Bayesian analysis above.
To test the hypothesis of demographic expansion, the population parameter θ = 2N_ef µ, where N_ef is the female effective population size and µ is the mutation rate per site per generation, was estimated using π according to the relationship E(θ) = π [38] and using Watterson's [39] point estimator, θ_W. The estimator π uses the recent population as the inference population, whereas θ_W uses the historical population as the inference population. We also calculated the maximum likelihood estimates of θ for variable population sizes, denoted here as θ_var, jointly with the growth parameter g using the program FLUCTUATE 1.4 [40], which uses the genealogical information in the data, applying θ_W as a starting parameter for the MCMC simulations. Stability of the parameter estimation was ensured by conducting ten short MCMC runs of 4,000 steps each and five long chains of length 400,000, with a sampling increment of 20 and one independent rerun. Secondly, a mismatch analysis was conducted using ARLEQUIN 3.0 [41] under a model of population expansion. The overall validity of the estimated demographic model was evaluated by the raggedness index (Hri) [42] and the sum of squared differences (SSD) [43]. Significance of Hri and SSD was assessed by parametric bootstrapping (10,000 replicates), and a significant value was taken as evidence for departure from the estimated demographic model of sudden population expansion. Third, Tajima's D, Fu and Li's D* [44], Fu's Fs [45] and Ramos-Onsins and Rozas's R2 [46] tests for mutation/drift equilibrium were performed in DnaSP and ARLEQUIN with 10,000 simulations. If a population expansion was detected, we estimated its putative age according to the following equation, modified from Harpending et al. [47] and recently applied in a study of Japanese macaques (Macaca fuscata) [48]: τ = µlt (equation 1), where τ is the time after expansion in mutational units, µ is the mean divergence rate per nucleotide per year, l is the sequence length, and t is the number of years after the expansion episode.
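As an illustration of equation 1, the sketch below converts the mismatch-derived τ into an approximate expansion age, t = τ/(µl), using the τ value and sequence length from this study (τ = 2.33, l = 395 bp). The per-site divergence rates are assumptions chosen only to show the order of magnitude; they are not values used in the original analysis.

```python
# Sketch of the expansion-age calculation from equation 1, rearranged as t = tau / (mu * l).
# tau and the sequence length are taken from the text; the divergence rates are assumed.
tau, tau_lo, tau_hi = 2.33, 1.80, 3.07   # tau with its 95% confidence interval
seq_len = 395                            # bp of the HVI fragment

def expansion_age_years(tau, mu_per_site_per_year, length=seq_len):
    return tau / (mu_per_site_per_year * length)

for mu in (4e-8, 6e-8, 1e-7):            # hypothetical substitutions/site/year for the HVI
    t = expansion_age_years(tau, mu)
    t_lo, t_hi = expansion_age_years(tau_lo, mu), expansion_age_years(tau_hi, mu)
    print(f"mu = {mu:.0e}: t ~ {t/1e3:6.1f} kyr (range {t_lo/1e3:.1f}-{t_hi/1e3:.1f} kyr)")
```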
Population Spatial Structure Analysis
The spatial analysis of molecular variance conducted with SAMOVA 1.0 [49] was used to identify groups of sampling locations which are geographically and genetically homogeneous and maximally differentiated from each other. This approach relies on the analysis of molecular variance (AMOVA) technique [41]. However, in contrast to conventional AMOVA, SAMOVA does not require an a priori definition of groups, allowing instead the groups to emerge from the data. The most likely number of groups was identified by running SAMOVA with 2-16 groups and choosing the partition scheme with the highest Φ_CT value.
Analysis of Isolation by Distance (IBD) and Isolation by Barrier (IBB)
To visualize the spatial distribution of landscapes, we collected SPOT5 satellite imagery of the year 2005 (China Remote Sensing Satellite Ground Station) and developed a vegetation-mapping model with the software ARCGIS (Environmental Systems Research Institute). Karst scrub was identified as suitable habitat for langurs [50]. Because T. francoisi and T. leucocephalus are separated by the Ming River and Zuo River, both of which are wider than 100 m, rivers with a width of more than 100 m were outlined in the map and regarded as putative barriers for langurs. Mantel tests [51] were performed to test the significance of the regression of pairwise genetic distances, expressed as (πXY − (πX + πY)/2), against Euclidean geographical distance [52]. To estimate the effect of barriers (habitat gaps, rivers, open sea) on gene flow, a categorical matrix was generated describing the number of habitat gaps and rivers between the sampling lots. This matrix was then used in further Mantel tests to determine whether it co-varied with genetic distance [53]. Barriers and geographical distance were not independent, because langur groups separated by barriers were usually farther apart from each other. Thus, a partial Mantel test [54] was also performed to assess how much genetic differentiation could be attributed to a barrier after controlling for the effect of Euclidean geographical distance. Mantel and partial Mantel tests were performed in ARLEQUIN with 10,000 iterations to determine statistical significance.
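For illustration, a simple Mantel test can be implemented as a permutation procedure: the correlation between the off-diagonal elements of two distance matrices is compared with the correlations obtained after randomly permuting the rows and columns of one matrix. The sketch below uses small hypothetical matrices rather than the langur data and is not the ARLEQUIN implementation.

```python
import numpy as np

# Minimal sketch of a Mantel test by permutation, in the spirit of the IBD analysis
# described above. The genetic and geographic distance matrices are hypothetical.

def mantel(d1, d2, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)          # off-diagonal (upper-triangle) elements
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)                  # permute rows and columns of d1 together
        r_perm = np.corrcoef(d1[np.ix_(p, p)][iu], y)[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)    # one-tailed p-value

gen = np.array([[0, 2, 8, 9], [2, 0, 7, 8], [8, 7, 0, 3], [9, 8, 3, 0]], float)
geo = np.array([[0, 1, 6, 7], [1, 0, 5, 6], [6, 5, 0, 2], [7, 6, 2, 0]], float)
r, p = mantel(gen, geo)
print(f"Mantel r = {r:.3f}, P = {p:.3f}")
```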
Population Demographic History
For T. francoisi, the BSP suggested a sharp decrease in population size over the last 0.05 million years (Figure 5a), which is supported by the negative population growth parameter g (−7.74 ± 12.98) (Table 1). Similarly, Tajima's D and Fu's Fs were estimated at 0.656 (P = 0.802) and 2.872 (P = 0.810), respectively (Table 1), thus indicating no population expansion. Consistent with these results, the mismatch distribution revealed an atypical shape (Figure 5c).
For T. leucocephalus, a large population growth parameter g (524.99 ± 197.92) and a relatively high θ_var (0.10 ± 0.001) were found (Table 1). The bell-shaped mismatch distribution (Figure 5d) indicates a population expansion in the past, with τ = 2.33 (95% confidence interval = 1.80-3.07), and the time after expansion was estimated at 0.15-0.06 mya. A recent population expansion of T. leucocephalus was also suggested by the BSP (Figure 5b).
Population Spatial Structure, IBD and IBB Analysis
Haplotypes displayed very strong geographical specificity, consistent with population clustering based on geographic partitioning. Only one (B05) of the 29 T. francoisi haplotypes was shared among different sample locations, while all others were specific to their lots (Figure 3, Table S1). For T. leucocephalus, haplotypes also displayed strong local homogeneity and population structure. Haplotypes W01-W04 were confined to Lot 14, W05-W11 to Lot […]. These results suggested that this pattern is the most parsimonious geographical subdivision: 83.34% of the genetic diversity was found between groups, 7.39% among sampling lots within groups and only 9.27% within sampling lots.
Genetic distances among groups were calculated as pairwise (πXY − (πX + πY)/2) values, and Euclidean geographical distances were estimated by GIS analysis (Table S2). Pairwise (πXY − (πX + πY)/2) ranged from 0.536 to 31.700, and Euclidean geographical distance spanned 42.28-2492.28 km. A categorical matrix was generated to describe whether the sampling lots are connected by habitat or separated by barriers. Sampling lots in connected habitat fragments had a categorical distance of 0. For sampling lots that were isolated by one or more barriers, the categorical distance between them was equal to the number of barriers. Isolation-by-distance analyses using the Mantel test revealed that 24.5% of the genetic distance among sampling groups can be explained by Euclidean geographical distance when the complete study area is considered (r = 0.49, P < 0.01). The Mantel test for the effect of barriers on genetic distance confirmed a strong influence on gene flow and explained 59.3% of the genetic differentiation among sampling locations (r = 0.52, P < 0.01). The partial Mantel test revealed a significant positive correlation between genetic distance and the presence of barriers (r = 0.23, P = 0.01) after controlling for the effect of Euclidean geographical distance; 58.4% of the genetic distance was determined by the presence of barriers. Thus, we conclude that barriers such as habitat gaps, rivers or open sea form a stronger barrier to gene flow than Euclidean geographical distance alone.
Evolutionary History and Population Demography of the three Langurs
The distribution range of T. francoisi is much larger than those of the other two species, and we also found more haplotypes. T. leucocephalus is confined to a narrow triangular karst hill region of 200 km² in southern Guangxi Province (107-108°E, 22°06′-22°42′N), China, and is separated from T. francoisi by the Ming and Zuo Rivers (Figure 2). Haplotypes found in white-headed langurs form a monophyletic clade that diverged from T. francoisi 0.46-0.27 mya (Figure 3), and population demographic analyses of T. leucocephalus indicate historical expansion 0.15-0.06 mya. Hence, the relatively low genetic variation and recent population expansion suggest that T. leucocephalus emerged from a small founder population.
During phases of the Middle Pleistocene, between 0.5-0.4 mya, the current course of the Ming River was formed and the Zuo River became wider [55,56]. Accordingly, ancestral T. leucocephalus populations became physically separated from T. francoisi and a new phenotype emerged most likely via new mutations in a small population. Subsequently, T. leucocephalus experienced a population expansion, which might have contributed to the accumulation of the new pelage coloration.
Notably, T. francoisi and T. leucocephalus are able to hybridize. Hu et al. [11] and Que et al. [57] reported a female hybrid in Nanning Zoo, but it died because of a missing kidney, which could be the result of outbreeding depression [58]. Furthermore, in 2006 a confiscated T. francoisi female was accidentally released into the range of T. leucocephalus (Lot 14) and she hybridized with a T. leucocephalus male, producing several offspring (Deng, personal communication). All hybrid offspring showed pelage coloration characteristics of T. leucocephalus, which might indicate that white pelage on the head, crest, and tail tip is a dominant character.
As a result of this release, the T. francoisi haplotype B19 was introduced into Lot 14, which we were able to confirm in our study. However, to avoid confusion, we excluded these data from further analysis. Whether other cases of human-mediated gene flow among populations occurred in the past remains speculative. However, haplotypes in all three species show strong geographical specificity and only two haplotypes were found in more than one lot (T. francoisi haplotype B05 in Lots 5 and 6, T. leucocephalus haplotype W11 in Lots 15 and 16), suggesting that if human-mediated gene flow occurred at all (besides the case mentioned above), it was very limited.
[Figure 3 caption: Values above branches indicate support for each node based on ML/MP/Bayesian algorithms, respectively; bootstrap values <50% are not shown. Divergence age estimates for major nodes are depicted in circles along with their 95% credibility intervals (grey bars). Sampling lots are presented as colored rectangles. doi:10.1371/journal.pone.0061659.g003]
For the golden-headed langur, too, a relatively recent divergence from T. francoisi was estimated (0.50-0.25 mya, Figure 3). The separation of T. poliocephalus on Cat Ba Island in Halong Bay from all other taxa of the species group is most likely caused by a large gap of suitable karst habitat on the mainland and by open sea. In particular, the more than 200 km of missing suitable habitat between T. francoisi and T. poliocephalus might be the main reason for the interruption of gene flow between them. Open sea as a barrier to gene flow might also have been effective, although the sea between Cat Ba Island and the Vietnamese mainland is less than 20 m deep [59] and repeated connections between both landmasses emerged in the last 0.5 million years [60]. However, the repeated submersion and exposure of soils on the shelf may have significantly affected soil structure and fertility, and consequently the structure of forest communities [61]. As shown for the Sunda shelf, migration between islands was extremely limited, even though they were repeatedly connected during the Quaternary, most recently during the last glacial period [62][63][64].
In summary, T. leucocephalus and T. poliocephalus seem to have a similar evolutionary history. Both might be recent but independent descendants of T. francoisi, and both most likely result from small founder populations; these findings support our hypothesis 1. After the split from T. francoisi, both have evolved different pelage colorations within a relatively short time period, in particular on the head and on the shoulders. Since T. francoisi …
Population Structure and the Effects of Isolation
The distribution of haplotypes displays local homogeneity, implying strong population structure and genetic differentiation for all three species. In T. francoisi, all sampling lots are separated from each other by habitat gaps, rivers, or geographical distance. Haplotypes also display very strong geographical specificity, consistent with clustering of patches based on geographical partitioning. Fine-scale population structure is common in large mammals and is influenced by various factors [66][67]. Our study shows significant mitochondrial differentiation among all three species and within T. francoisi and T. leucocephalus populations. This suggests that habitat gaps, rivers, and open sea are major physical barriers to gene flow, structuring genetic variation at inter- and intraspecific levels. SAMOVA, median-joining networks, IBD, and IBB analyses clearly indicated that several major haplotype groups of T. francoisi and T. leucocephalus are restricted to habitats fragmented by river catchments. Accordingly, incomplete lineage sorting seems to be an unlikely explanation for the observed pattern, because lineage sorting should be random with respect to geography [65]. However, it remains unclear whether the geographic structuring of mitochondrial haplotypes observed in this study is generally true for the genome, because any single locus can give a non-representative result.
Besides ecological factors, social structure also strongly influences population genetic structure, mediated mainly by social behavior (reproductive skew, dispersal, fission and fusion patterns) [53,[68][69][70][71]. In most colobines, including species of the genus Trachypithecus, females tend to stay in their natal groups (female philopatry) and males migrate at the time of sexual maturity [15,16], leading to significant population differentiation when solely maternally inherited markers, such as the HVI region applied here, are studied [68,71]. Thus, male-mediated gene flow is not captured in our study and, accordingly, to fully understand the evolutionary history of these three species, further investigations should apply nuclear loci as well.
Conservation Implications
T. francoisi is classified as "Endangered" and T. poliocephalus and T. leucocephalus as "Critically Endangered" by the IUCN Red List [12][13][14]. Accordingly, conservation measures are urgently required to save these species from extinction. In past centuries, T. francoisi was widely distributed in karst forests of tropical and subtropical south-western China (Chongqing, Guizhou, and Guangxi provinces) and northern Vietnam [12]. However, T. francoisi has experienced a dramatic decline of 85% in population size and 70% in distribution. Prior to 1980, the species was found in 23 different counties in China and numbered 8,000-10,000 animals. In 2007, it was estimated that there were only 1,900-2,150 animals left in the wild [72]. The situation for T. leucocephalus and T. poliocephalus is even more dire, with estimates of only 580-620 and a maximum of 70 animals in the wild, respectively [12-14,50,73]. Recent attention to the conservation of T. leucocephalus in Lot 14 and Lot 15 has resulted in population increases [74,75], but in other forest patches populations have continued to decline or even become extinct. For the golden-headed langur, the population dropped from 2,500-2,800 individuals in the 1960s to 53 individuals in 2000 [14].
Our study shows that these langurs face a serious problem: habitat fragmentation and limited, if any, gene flow. We posit that the geographical structuring of the populations is a direct issue for conservation. Species characterized by limited mobility and strong population genetic structure, such as the three species studied here, are more prone to suffer substantial loss of genetic diversity as a result of local extinction [76][77][78]. Additionally, haplotypes in the different forest patches form isolated groups, and 83.3% of the genetic diversity within T. francoisi (including T. leucocephalus and T. poliocephalus) is found between these lots. Subpopulations that are highly divergent must be protected so as to safeguard their unique genetic diversity.
Conclusions
For all three langurs, habitat is fragmented by habitat gaps, river catchments, and sea barriers. Our study indicates that such barriers have played a key role in shaping the present-day population structure of these species and populations, and that gene flow between them appears to be strongly impeded by these barriers. Mutations causing pelage coloration changes might have occurred in ancestral T. leucocephalus and T. poliocephalus populations and later became fixed due to isolation. Thus, physical isolation could have provided the evolutionary potential for the divergence of different species, and even resulted in speciation and the significant diversity seen in Asian colobines today. However, since we analyzed only mitochondrial DNA, male-mediated gene flow is not captured by our study. Thus, to fully understand the evolutionary history of these and other species, nuclear loci should be included in future studies as well. | 2017-07-16T00:00:34.289Z | 2013-04-17T00:00:00.000 | {
"year": 2013,
"sha1": "530c9e9ac1d4d57e58d7564f6daadb77b0a6f255",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0061659&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "530c9e9ac1d4d57e58d7564f6daadb77b0a6f255",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
67863122 | pes2o/s2orc | v3-fos-license | Comparison of the effectiveness of video-assisted teaching program and traditional demonstration on nursing students learning skills of performing obstetrical palpation
Background: Teaching methods have failed to keep up with the pace of the changing curriculum. Clinical practice, an essential part of nursing education, links theory with practice, particularly in midwifery nursing. Thus, this study aimed to compare the effects of a video-assisted teaching program and traditional demonstration on nursing students' learning of obstetrical palpation skills. Materials and Methods: This is a quasi-experimental study with a pretest-posttest control group design in which 60 third-year Bachelor of Science in Nursing students were selected and randomly assigned, by lottery method, to an experimental group (video-assisted teaching program) and a control group (traditional demonstration) for obstetrical palpation. The data were collected through a self-designed rating scale. The validity of the rating scale was established by a panel of seven experts from the field of obstetrical and gynecological nursing, and the reliability was established through Cronbach's α (0.78), which showed that the tool was consistent for the study population. Results: The results showed a significant difference between the pretest and posttest skill scores of students who were exposed to the video-assisted teaching program and the traditional demonstration (t = 18.35, p < 0.001). Although both methods were equally effective in enhancing skill, the traditional demonstration scored much better than the video-assisted teaching program when the posttest skills were compared (t = 36.40, p = 0.001). Conclusions: The routine educational method, i.e., demonstration, is more effective in developing skills; this reinforces the need for academicians to enhance their teaching by adopting blended teaching techniques that support memory storage, retrieval, cognition, and learning.
Introduction
In the last few decades, nursing education in India has undergone tremendous change, from informal bedside hospital-based training to university-based graduate, postgraduate, and doctoral nursing education. Furthermore, the rapid growth and development in science and technology have largely influenced the need for improved teaching-learning methods for nurses in India. Nurses are actively involved in teaching patients, families, and communities as well as educating and training new nurses. India has observed rapid growth and development in nursing education, resulting in an increased sense of responsibility among Indian nursing faculty to educate future nurses using educational technology so that they efficiently handle patient teaching,
nursing education, and training. Therefore, the researcher felt a gap in the literature on pedagogy tailored to the peculiar needs of the teaching-learning process for nurses and nursing students in developing countries, in particular India. [1,2] Teaching is distinctively a human activity. Systematic attention to methods and materials of teaching and learning, as well as mastery of the subject matter, is essential for the development of artistic teaching. [1] The nursing curriculum includes various subjects to be taught in each year of the course. Obstetrical and midwifery nursing is one of the third-year subjects. The effective management of the antepartum and intrapartum periods is completely dependent on the accuracy of obstetrical assessment, including assessment of the woman's abdomen for fetal presentation, position, and wellbeing, which in turn helps in making early decisions regarding the place and mode of delivery. [2] The nursing curriculum is continuously changing, but teaching methods have failed to keep up with the changing curriculum. [3] Bandura stated that the style of teaching preferred by a student is a reflection of his or her learning style. [4,5] The teaching of different skills requires various techniques and contemporary methods along with the traditional lecture method. Video-based education can be a suitable substitute when the demonstration method is unavailable. [6] One of the advantages of video-based education is that the voice of the broadcaster can be heard. Moreover, the figures, movements, illustrations used, and demonstrations presented can be seen.
According to Doijad and Kamble, animals were sacrificed regularly to show experimental physiology practicals to first-year medical students. However, owing to ethical issues, ecosystem imbalance, and animal rights activism, there is a scarcity of animals for experimental use. Hence, they emphasized the need to introduce a new, effective alternative teaching method to replace these animal experiments. In their study, they planned to find the better of two methods, video demonstration and live experimentation, in a small group (40 students) of first-year medical students. The students were taught experimental physiology by both methods. The outcome was assessed in two ways: by comparing students' performance in a self-assessment question test and by comparing their perception of the two methods using a Likert scale. The result showed that the knowledge gained by both methods was the same, but the perception of students toward video demonstration was better than that toward live experimentation. Thus, the study concluded that students' response to video demonstration as a novel teaching-learning method was excellent, and video demonstration can be a useful alternative to live experimentation for teaching experimental physiology to first-year medical students. [7] One of the most important principles in education is adopting a teaching method in concordance with objectives, contents, and learners. Teaching and learning clinical skills are challenging aspects of education in the field of medicine and the allied health professions. Some recent research has shown that video-based instruction has many advantages in comparison with other methods. But in the domain of psychomotor learning, there is not enough evidence to show that video-based instruction is an effective teaching method. [8] Yoo et al. (2010) conducted an experimental study on video recording of Foley's catheterization to evaluate its effect on three outcomes: competency in the procedure, communication skills, and learning motivation. The study was conducted through self-evaluation using a video recording of the students' Foley's catheterization. The students in the experimental group (n = 20) evaluated their Foley's catheterization performance by reviewing video recordings, whereas students in the control group (n = 20) received written evaluation guidelines only. The results showed that the students in the experimental group had better scores on competency (p < 0.001), communication skills (p < 0.001), and learning motivation (p = 0.018) than the control group at the posttest, which was conducted 8 weeks after the pretest. The inference of the study is that self-awareness of one's performance, developed by reviewing a videotape, appears to increase the competency of clinical skills among nursing students. [9] Several methods can be used to identify fetal position, presentation, attitude, engagement, and wellbeing. Methods such as ultrasonography, vaginal examination, and abdominal palpation can be used to identify the position, presentation, attitude, engagement, and wellbeing of the fetus inside the mother's uterus. [10] However, ultrasonography is not cost-effective, as the equipment is costly and its use requires expertise. [11] Vaginal examination, on the contrary, is only reliable when women are in established labor, because its accuracy depends on dilatation and effacement of the cervix along with descent of the fetal presenting part. [9]
This emphasizes the need for accurate obstetrical palpation, which can be performed by student nurses or registered nurses. It is noninvasive, does not require any equipment, and can be performed by a trained nurse at any time of the day, establishing its popularity as the most feasible test of fetal wellbeing. [12,13] The importance of obstetrical palpation in antenatal assessment is a globally acclaimed fact. As stated by Credé and Leopold in 1982, the four maneuver techniques in obstetrical palpation ensure identification of the fetal presentation, lie, attitude, position of different parts of the body, and wellbeing. Mak and Wong conducted a study and found that midwives had a favorable attitude toward obstetrical palpation but that their confidence to practice it was inadequate. [12] Grant et al. evaluated the effect of videotape-facilitated human patient simulator (HPS) practice and guidance on clinical performance indicators among student nurses and anesthetists. The treatment group (n = 20) participated in HPS practice and guidance using videotape-facilitated debriefing, and the control group (n = 20) participated in HPS practice and guidance using oral debriefing alone. The result showed that students in the intervention group were significantly more likely to demonstrate desirable behaviors concerning patient identification, team communication, and vital signs. The role performed by the students in the simulation significantly impacted their performance. When scores of both the intervention and control groups were combined, team leaders, airway managers, and nurse anesthetists had higher mean total performance scores than crash cart managers, recorders, or medication nurses.
Thus video-facilitated simulation feedback is potentially a useful tool in increasing desirable clinical behaviors in a simulated environment. [13] In the pressured environment of a classroom, if tools are not intuitive and simplified for the educator and student, they won't be used. However, the right technology will be quickly adopted by all. Every educator knows that delivery in a stimulating fashion, including visual input, can be key to learning in terms of understanding, application, and retention. [14,15] The use of video in nursing education classes provides an easy, innovative, and user-friendly way to engage today's nursing students. Video presentations can be easily adapted into nursing courses at any level, whether a fundamental course for undergraduate students or a theoretical foundations course for graduate students. Increasingly, nursing students enter nursing programs experienced in the latest communication technologies and knowledgeable about various media offerings. Today, it is expected that nurse educators should use creative communication technologies to enrich the learning environment. Clinical practice is an essential part of nursing education which links theory with practice. Obstetrical palpation is one of the areas of clinical practice which demands accuracy and expertise that improve with the length of experience. [13,16,17] As today's student nurses are tomorrow's professional nurses who can contribute more in the field of treatment, educating these students and creating awareness, helping them to learn more about obstetrical palpation, will bring about positive outcomes in the future health indicators and quality of care. According to the famous saying, right practice is the safest investment toward hazard-free care, and right practice comes from right education. [15,16]
Materials and Methods
The study had a quasi-experimental design and was conducted with the objectives of assessing the effectiveness of the video-assisted teaching program and the traditional demonstration of obstetrical palpation among nursing students in the experimental and control groups, comparing their effectiveness in terms of gain-in-skill scores in both groups, and finding the association between the pretest skill scores and selected variables.
In 2016, this study was conducted in the College of Nursing at Sikkim through a quasi-experimental approach with a pretest-posttest, nonequivalent control group design in two phases. In the first phase, the video-assisted teaching program was developed and the research tools were prepared and tested for reliability and validity. In the second phase, the video-assisted teaching program and the traditional demonstration were implemented with reference to the provided rating scale, and the effects on the dependent variable of the study were reviewed. The Bachelor of Science (B.Sc.) in nursing program is a 4-year course with the obstetric and midwifery subject in the third year. These students had never been exposed to this subject before; thus, for them, obstetrical palpation is a very new skill to learn. At first, all nursing skills and procedures are demonstrated in a structured and well-equipped laboratory. Once the student is competent enough to perform the skill, they are posted to the clinical area. The sample included 60 nursing students who were newly enrolled in the third year of the B.Sc. nursing program. The duration of the study was 3 months.
Some concepts were defined in the study. Skill was defined as the ability to perform obstetrical palpation, including the abdominal examination, as measured by the structured rating scale and as evident from gain-in-skill scores. Obstetrical palpation refers to the antenatal examination of a pregnant woman and consists of the following components: fundal palpation, which determines whether the head or buttocks of the fetus occupy the fundus; lateral palpation, which locates the fetal back in order to determine position; pelvic palpation, in which pelvic grip 1 determines the pole of the fetus and pelvic grip 2 (Pawlik's maneuver) determines the engagement of the fetal head [16]; and auscultation, in which a Pinard fetal stethoscope is used to hear the fetal heart sound.
The sample size (n = 60) was calculated using the formula for comparing means, with a 95% confidence level and a statistical power of 80%. Among the 110 third-year B.Sc. nursing students, 60 students were randomly selected and assigned to experimental (n = 30) and control (n = 30) groups through random allocation (lottery). Two clinical teachers were selected randomly from the same college to select and assign the students to the different groups in order to maintain the objectivity and homogeneity of the study.
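The paper does not report the effect size assumed in this calculation, so the sketch below simply illustrates the standard two-group formula for comparing means, n per group = 2((z_{1−α/2} + z_{1−β})/d)², for a few assumed standardized effect sizes; none of these values are taken from the study.

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided, two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.7, 0.75, 0.8):              # assumed standardized effect sizes (Cohen's d)
    print(f"d = {d}: about {n_per_group(d):.0f} students per group")
```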
The inclusion criteria were students who were available at the time of data collection, willing to participate, and not previously exposed to any classes or demonstration on obstetrical palpation. The exclusion criteria consisted of nursing students who were repeaters in the same class. The nursing students' learning of the skill was evaluated with a structured rating scale on obstetrical palpation. The scale was given to five experts from obstetrics and gynecological nursing and community health nursing, chosen on the basis of their clinical experience, expertise, and interest in the problem area. The reliability of the structured rating scale was tested using the inter-rater method with two raters, ensuring that the instrument used for measuring the experimental variables gives consistent results and demonstrating the coefficient of equivalence among the test items.
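Inter-rater agreement of this kind is often summarized with Cohen's kappa; as a minimal illustration, the sketch below computes kappa for two raters scoring the same students on one item of the scale. The ratings are invented for illustration and are not the study's data.

```python
import numpy as np

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores to the same items."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_o = np.mean(a == b)                                    # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c)              # chance agreement
              for c in np.union1d(a, b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings (0 = not performed, 1 = somewhat performed, 2 = performed)
# given independently by two raters to the same 10 students.
rater_1 = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2]
rater_2 = [2, 1, 2, 1, 1, 2, 2, 1, 0, 2]
print(f"Cohen's kappa = {cohen_kappa(rater_1, rater_2):.2f}")
```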
The scale was divided into preparatory phase (eight items), abdominal palpation-fundal palpation (four items), lateral palpation (four items), first pelvic grip (eight items), second pelvic grip (six items), auscultation for fetal heart sound (three items), and termination phase (four items).
Each item in the scale is scored on a three-point rating scale with the scoring criteria Perform (2), Somewhat perform (1), and Not perform (0), giving a total score of 78.
After completion of the background data form and before the intervention, the pretest skill of the students from both groups was assessed in the maternal and child health nursing laboratory over one day by the researcher and one clinical instructor who had completed the training session on obstetrical palpation. Each student was asked to perform obstetrical palpation while competency was assessed with the rating scale. The pretest/posttest design was selected because, in this college, students are frequently posted to the OBG department for basic care in the first year of the nursing course; hence, to maintain homogeneity, the pretest was performed among these students.
Subsequently, only a traditional demonstration of obstetrical palpation was conducted for students in the comparison group (n = 30), lasting 30 min in the maternal and child health nursing laboratory. The traditional demonstration is a routine teaching method adopted by nursing colleges for teaching any skill; here, the investigator taught the correct steps of obstetrical palpation on a pregnancy simulator with adequate explanation. The video-assisted teaching program for the experimental group (n = 30) was a prerecorded 22-min video clip on the steps of obstetrical palpation prepared by the investigator. The clip was shown using a laptop and speakers in a classroom setting on the same day.
After the intervention, the posttest observation and assessment in both groups were done on the 8th day in the maternal and child health nursing laboratory, where each student performed the skill for 10-15 min. Data were analyzed using paired t-tests, independent t-tests, and Chi-square tests. The statistical package used for data analysis and interpretation was IBM SPSS Statistics for Windows, version 25.0 (IBM Corp., Armonk, New York).
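The analysis itself was carried out in SPSS; purely as an illustration of equivalent computations, the Python sketch below applies a paired t-test (pretest vs posttest within one group), an independent t-test (posttest scores between groups), and a chi-square test of association to simulated, hypothetical data rather than the study's dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated skill scores (0-78 scale) for 30 students per group; values are illustrative.
pre_control = rng.normal(2, 1.5, 30).clip(0)
post_control = rng.normal(55, 8, 30)
post_video = rng.normal(38, 10, 30)

# Paired t-test: pretest vs posttest within the control group.
t_paired, p_paired = stats.ttest_rel(post_control, pre_control)

# Independent t-test: posttest scores, control vs experimental group.
t_ind, p_ind = stats.ttest_ind(post_control, post_video)

# Chi-square test of association between group and a categorical background variable
# (e.g., previous experience caring for an antenatal mother: yes/no counts per group).
table = np.array([[12, 18],    # control: yes, no
                  [14, 16]])   # experimental: yes, no
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"paired t = {t_paired:.2f}, p = {p_paired:.3g}")
print(f"independent t = {t_ind:.2f}, p = {p_ind:.3g}")
print(f"chi-square = {chi2:.2f} (df = {dof}), p = {p_chi:.3g}")
```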
Ethical Considerations
This study was approved by the SMIMS Institutional Ethics Committee (IEC) of the university with registration no.: IEC/419/16-02 dated May 3, 2016. All enrolled students signed an informed consent containing the clear data about the study, its purpose, and methods.
Results
Data analysis showed that students in the traditional demonstration and video-assisted teaching program groups were comparable with respect to variables such as age, type of residence, family income, previous academic performance, and previous experience in taking care of an antenatal mother (p > 0.05) [Table 1].
Thus, based on the paired t-test results, there was a statistically significant difference within each group, indicating that both methods were equally effective in enhancing the skill of nursing students in performing obstetrical palpation (t = 3.66, p < 0.001). Moreover, based on the independent t-test results, there was no significant difference between the two groups in the overall mean pretest skill score (t = 0.41, p > 0.05) on obstetrical palpation before the intervention. This lack of difference was due to the random assignment of students to the two groups [Table 2].
However, the results [Tables 2 and 3] also showed that the overall mean posttest skill score of the nursing students in the control group (traditional demonstration) was higher than that of the students in the experimental group (video-assisted teaching program).
Discussion
The study findings show that the posttest intervention scores were higher in the control group than in the experimental group, indicating that traditional demonstration has more impact in improving the skill. The findings are consistent with the study of Karimi et al., [17] in which total learning skill with the demonstration method was greater than with the video-based method.
The study findings also show that, in the control group, the mean posttest skill score of 55.13 (7.78) was higher than the mean pretest skill score of 1.20 (1.54). The calculated paired t-test value (36.40) was also statistically significant. These findings are supported by Dash, [18] who conducted a randomized clinical trial with pre- and posttest designs to assess the effectiveness of a video-assisted teaching module on contraceptive methods among 977 couples in Pondicherry. Dash's study found a significant improvement in posttest knowledge, attitude, and practice of contraceptive methods as compared with the pretest, demonstrating the effectiveness of the video-assisted teaching program.
The study findings show that, in the experimental group, the mean posttest skill score of 37.70 (10.47) was higher than the mean pretest skill score of 2.23 (1.97), which was found significant by the paired t-test value (18.35). The result indicates a statistically significant increase in posttest skill. As already mentioned, this study shows that students learn more effectively using demonstration methods. A quasi-experimental study conducted by Gowri et al. [19] to compare and evaluate the effectiveness of web-based and traditional instructional methods to teach obstetrical palpation of antenatal mothers among second-year B.Sc. nursing students revealed that skill in obstetrical palpation was higher among students in the traditional group, with a mean score of 27.87 (5.95) and a standard error of the mean of 1.53.
Based on the findings of the analysis, there was no statistically significant association between the posttest knowledge scores of subjects and selected sociodemographic variables such as age, type of residence, family monthly income, previous academic performance, and previous experience in taking care of antenatal women (p > 0.05).
These findings were supported by Midhula and Balasubramanian [20] who conducted a pre-experimental study to evaluate the video-assisted teaching module on the care of dementia patients among B.Sc. nursing students at Mangalore. The result revealed that there was no association between pretest skill and sociodemographic variables such as age, gender, academic performance, and previous experience.
A maternity nurse is an experienced and qualified specialist in providing essential support, advice, care, and respite to parents and newborn babies. Today, the public is very much aware of their rights and of the consumer protection act that holds the maternity nurse accountable if any errors are made during the antepartum, intrapartum, and postpartum periods. Hence, nurse educators can use a variety of teaching-learning methods and styles in clinical settings to teach a nursing procedure in a way that suits the nature of the students, taking advantage of advances in technology for best adoption by the younger generation of nurses. Nursing curricula should focus on mixing traditional instruction with modern teaching methods in clinical settings so that students benefit from blended learning. Hence, teachers can use different teaching strategies to encourage critical thinking in students. Although the creation of video-based instructional materials takes time, the preparation of other teacher-made materials frequently used with students also takes time. Time-consuming editing of self-modeling tapes using a camera, laptops, computers, and VCR may become unnecessary as more professional video-editing equipment becomes readily available and affordable. Using this technology requires some practice.
This study has some limitations: it was conducted in only one college, so the findings cannot be generalized, and it was limited to third-year B.Sc. nursing students with a small number of subjects.
Conclusion
Videos help students memorize the steps of palpation but may not provide the sense of touch needed to identify the fetal parts. In a traditional demonstration, the teacher observed the fetal parts, which enabled the students to understand better. There is no substitute for clinical demonstration; however, video-assisted teaching can be used as a supplement to the traditionally used bedside demonstration. Further research with multiple teaching methods can be conducted. A combination of instructional methods can be used to provide a rich supply of learning opportunities. | 2019-03-11T17:23:12.718Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "fea7a6e9f31b2523931605723826f4115a9b22f6",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijnmr.ijnmr_35_18",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "389e0e378901a3ca59ca08c579b7fc46d5817d69",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
54772707 | pes2o/s2orc | v3-fos-license | Modeling Strategic Decisions in the Formation of the Early Neo-Assyrian Empire
Understanding patterns of conflict and pathways in which political history became established is critical to understanding how large states and empires ultimately develop and come to rule given regions and influence subsequent events. We employ a spatiotemporal Cox regression model to investigate possible causes as to why regions were attacked by the Neo-Assyrian (912-608 BCE) state. The model helps to explain how strategic benefits and costs lead to likely pathways of conflict and imperialism based on elite strategic decision-making. We apply this model to the early 9th century BCE, a time when historical texts allow us to trace yearly campaigns in specific regions, to understand how the Neo-Assyrian state began to re-emerge as a major political player, eventually going on to dominate much of the Near East and starting a process of imperialism that shaped the wider region for many centuries even after the fall of this state. The model demonstrates why specific locations become regions of conflict in given campaigns, emphasizing a degree of consistency with which choices were made by invading forces with respect to a number of factors. We find that elevation and population density deter Assyrian invasions. Moreover, costs were found to be more of a clear motivator for Assyrian invasions, with distance constraints being a significant driver in determining where to campaign. These outputs suggest that Assyria was mainly interested in attacking its weakest, based on population and/or organization, and nearest rivals as it began to expand. Results not only help to address the emergence of this empire, but enable a generalized understanding of how benefits and costs to conflict can lead to imperialism and pathways to political outcomes that can have major social relevance.
Introduction
The Neo-Assyrian period (912-608 BCE) was a time when the Assyrians politically dominated large parts of the Near East. By the early ninth century BCE, a series of campaigns by the new Assyrian king, Ashurnasirpal II (r. 883-859 BCE), began to shape the region that ultimately led to the establishment of a large-scale empire by the eighth century BCE that dominated much of the Near East until the end of the seventh century BCE (Cline and Graham 2011). The empire became the largest state in the ancient Near East and had direct influence on many cultural groups, but it also began a long process whereby empires and imperialism became the norm in the Near East as successive states and empires began to dominate even more territory. This makes the early ninth century BCE an important period to investigate if we are to understand how this long process of imperialism emerged and the strategic path dependencies in which later decisions were shaped by earlier outcomes.
Often, most ancient states' strategic decisions are difficult to evaluate and understand within contemporary contexts. Particularly, historical data are usually missing and the problem of identifying ancient toponyms makes key battles, alliances, and events difficult to place in time and space. Furthermore, multiple factors affect strategic circumstances of states at any given time, making historical contexts often unclear for researchers and important questions, such as why and how the process of imperialism in a given region began, hard to answer. In this paper, we propose a method that evaluates strategic military decision-making by elites that affected pathways in which the Neo-Assyrian Empire began to emerge. This model considers the likelihood and propensity of given states to be attacked by the Neo-Assyrians and addresses what factors could determine observed conflict. We demonstrate the utility of a spatiotemporal Cox regression model that investigates the determinants of strategic decisions and the context of relevant political dynamics and spatial scales. Our goal is to better understand why certain states were attacked by the Neo-Assyrians and uncover the underlying processes that could have shaped their strategic decision-making.
We begin this paper by providing background into the case study, specifically the early ninth century BCE when the Neo-Assyrian state began to more aggressively launch yearly campaigns against neighboring states and key decisions by its leadership shaped which states increasingly came into conflict with the Neo-Assyrians. Next, we articulate a series of six hypotheses concerning factors that might have influenced these decisions and which are testable via the proposed model. We then present our model, demonstrating its suitability for addressing questions concerning where and when given campaigns are fought. The model results are considered in the context of strategic decision-making undertaken by the elite. We see that elevation and population density appear to deter Assyrian invasion. The clearest results, however, show that deterrence in the form of distance to the Assyrian army is the largest driver affecting Assyrian invasion. Given these results, we conclude by considering the extent to which the presented approach answers our research goal and how it might be applicable to other cases.
Historical Background
Assyria's documented history stretches back to c. 2000 BCE. Originally a small state centered on the city of Ashur (modern Qal'at Sherqat) in northern Mesopotamia, it again rose to prominence during the Late Bronze Age (c. 1600-1200 BCE), when it gained independence from the neighboring kingdom of Mitanni, located in the Khabur region in upper Mesopotamia, and embarked on a program of territorial expansion (Radner 2014). The Assyrian king Ashur-uballit (r. 1365-1330 BCE) cemented Assyria's newfound status by becoming a latecomer to the so-called Great Powers Club, a group of powerful states that dominated the Near East in the Late Bronze Age (Moran 1992).
Although the collapse of the Late Bronze Age system around 1200 BCE dramatically redesigned the political landscape of the Near East (Radner 2014), leading to the disappearance of the Hittite empire and a weakening of the oncepowerful Egyptian and Babylonian states, Assyria emerged less affected, putting it in a relatively good political position. During the reign of Tiglath-pileser I (r. 1114-1076 BCE), Assyrian territory continued to encompass a significant part of northern Mesopotamia, and Tiglath-pileser sought to extend Assyria's boundaries further by campaigning repeatedly to the west of the Euphrates river. His immediate successors, however, were less successful and by the start of the first millennium BCE, Assyria's territorial holdings had been pushed back to a modest strip of land bordering the Tigris River. The end of the tenth century, however, saw the start of renewed efforts to regain Assyria's former status. This marked the beginning of the Neo-Assyrian period (934-610 BCE), during which Assyria would emerge as the most powerful empire to date, controlling most of the ancient Near East (Cline and Graham 2011).
The reign of king Ashurnasirpal II (r. 883-859 BCE) was instrumental in Assyria's rise to greatness, ultimately having an impact long after the fall of Assyria, as the ninth century ushered the beginning of a millennia-long period of large-scale empires and states dominating the region. From his ascension, Ashurnasirpal pursued the policy of restoration and conquest begun by his grandfather Adadnirari II (r. 911-891 BCE), but expanded it to a greater scale, campaigning vigorously and regularly: his inscriptions record no fewer than 14 military campaigns during his 24 years on the throne. He was particularly active in the first few years of his reign, sometimes conducting two separate campaigns in a single year (Grayson 1982:253). The primary focus of his exploits were the regions to the east, north, and west of the empire's heartland, which lay between the cities Ashur, Nineveh (modern Mosul), and Arbela (Erbil; Radner 2011). Figure 1 shows the map of the region at the beginning of Ashurnasipal's reign, including the principal Assyrian cities, and the states that existed during this time. Advances in military technology, including more efficient siege machines and reliance on cavalry rather than chariotry, improved the effectiveness of the Assyrian army (Fischer 1998:205). At the same time, Ashurnasirpal made contributions to the system of provincial administration, under which conquered regions were put under the control of an Assyrian governor and subjected to regular tribute (Grayson 1982:258). Mass deportations of local populations sought not only to distribute manpower where it was needed, but also to minimize the risk of future revolts (Oded 1979).
Ashurnasirpal's Campaigns
The region of Mazamua, located in parts of modern Iraqi Kurdistan and parts of western Iran (Figure 1: states 0,9,21,37,and 41), was the site of three Assyrian expeditions between 881 and 880 BCE. Located to the northeast of Assyria, it represented an important gateway to the Iranian plateau and its rich trade network. The region may have been under Assyrian control prior to Ashurnasirpal's reign but rebelled soon after his accession. Having put down the revolts, Ashurnasirpal stamped his authority on the region by renovating the city Atlila, likely located in Figure 1:9, and giving it an Assyrian name, Dur-Ashur ("fortress of the god Ashur"). For the next two centuries, the city served as a garrison from which armed expeditions in the Zagros Mountains could be launched easily and with minimal delay (Radner 2013:442).
The middle Euphrates region engaged Ashurnasirpal intermittently from 878 BCE. In that year, the new ruler of Suhu (Figure 1:35), aided by the Babylonians, rebelled against Assyria. The revolt was eventually joined by the neighboring states of Laqu (Figure 1:24) and Hindanu (Figure 1:16), and despite swift and harsh retaliation from Ashurnasirpal, rebellions continued to break out in the region (Grayson 1991).
Ashurnasirpal's expeditions to the west took him from Bit-Bahiani (Figure 1:6) and Hatti (Figure 1:15) in Syria as far as the Levantine coast, where the king performed the traditional ritual of washing his weapons in the Mediterranean Sea (Grayson 1991:298). He received tribute from local rulers and cut down trees in the cedar groves of Lebanon, whose wood had been highly prized by Mesopotamian kings since the third millennium BCE (Klein and Abraham 2000). Although no direct Assyrian control was established in this region, the expedition served an important political and ideological purpose, raising Assyria's visibility and status and extending the empire's symbolic presence to the Mediterranean Sea, a representation of one of the traditional boundaries of the "officially existing" world (Liverani 1990:59).
As a result of Ashurnasirpal's skilled leadership and military exploits, Assyria regained territories that had been lost centuries earlier and established itself as one of the leading powers of the ancient Near East. Ashurnasirpal's successors would capitalize on this momentum to further extend Assyria's territorial gains and political influence. At its height in the seventh century BCE, the Assyrian empire controlled a vast territory, which stretched from Egypt in the southwest to the mountainous regions of the Taurus and Zagros ranges in the north and east, and the Persian Gulf in the south. Even after its defeat at the hands of the Babylonians and the Medes by 610 BCE, Assyrian influence continued to be felt. Its imperial dominance set the tone for its successors, from the Neo-Babylonian, Achaemenid, Seleucid empires, and perhaps even to the Abbasid Caliphate, which ruled the Middle East from its Mesopotamian capital into the thirteenth century CE (Cline and Graham 2011). The Achaemenids, for example, replicated or were influenced by innovations made by the Assyrians in governing and military affairs.
Historical Sources
The reign of Ashurnasirpal is relatively well-documented. Like his predecessors, Ashurnasirpal commemorated his achievements in official inscriptions recorded on a variety of media, from palace walls and free-standing steles to clay prisms buried in the foundations of important buildings (Grayson 1991:189-393). A large number of such compositions survive from Ashurnasirpal's new royal residence, Kalhu (modern Nimrud; Figure 1), whose buildings and public areas were lavishly decorated with images and texts celebrating the king's deeds. Royal inscriptions invariably take the form of an autobiographical account and, in accordance with the demands of Assyrian royal ideology, tend to focus on the king's role as military leader. The result is a series of detailed accounts of military campaigns led by the king, narrated in chronological order or grouped thematically (or even a combination of the two). Campaign narratives may include information about the route followed by the army, enemy casualties, and tribute or loot (Tadmor 1997).
Despite their obvious appeal as historical records, royal inscriptions remain a problematic source whose very genre defies modern classification. The information included in them was carefully selected by the royal scribes in order to portray their royal masters in the best possible light, and edited as the need arose to accommodate additional material or constraints of space. The scribes who composed royal inscriptions availed themselves of a range of source materials, including contemporary records and itineraries, but also literary narratives and mythological accounts designed to convey an ideological message about the supremacy of the Assyrian king (Tadmor 1997).
One way of strengthening this message was to maintain the fiction of a single army led by the king, who alone is responsible for all victories; for rare exceptions to this convention, see Yamada (2000:26, 221-222) and Niederreiter (2005). In reality, the Assyrian empire must have relied on multiple armies with some military leadership delegated to generals, but this is almost never reflected in official texts. Although inscriptions often record quite detailed information about the army's itinerary, it is not always possible to reconstruct this as accurately as we might like, due to the difficulties of correlating ancient and modern toponyms. Our knowledge of the reality of Assyrian state governance in the eighth and seventh centuries BCE is enhanced immeasurably through surviving corpora of letters exchanged between the king and his officials (Parpola 1987; Lanfranchi and Parpola 1990; Fuchs and Parpola 2001; Luukko and Van Buylaere 2002; Dietrich 2003; Reynolds 2003; Luukko 2012). This state correspondence allows us to counterbalance the elevated rhetoric of royal inscriptions with more down-to-earth and mundane communiqués and to fill in important information gaps that facilitated Assyria's wars and expansion. However, no such archives survive from the reign of Ashurnasirpal, and we do not have complementary sources from outside Assyria to corroborate royal inscriptions or provide an alternative point of view.
Hypotheses
We investigate the factors that might have influenced the Assyrian army's decision to invade a particular state and compare this choice with other states that might have been chosen but were not. In this section, we derive a number of testable hypotheses concerning attributes of the states invaded that may have played a role in these decisions. Two types of attributes are considered: the costs that the invading Assyrian army would have to endure should they select that state and the potential opportunities associated with each state should their invasion be successful. Categorizing the attributes of each state into 'push' (i.e., costs such as elevation, organized defense, etc.) and 'pull' (e.g., benefits such as metals, distance to trade, etc.) factors from the perspective of the Assyrian army enables us to determine the balance between, on the one hand, whether the Assyrian army chose states to invade by minimizing the effort expended in invading new territory and expanding the empire in accordance with the principle of least effort (Zipf 1949) and, on the other, whether they sought to maximize the potential opportunities associated with those choices. In what follows, we derive six hypotheses that are used in the construction of the model, each of which can be considered as either a cost or a benefit associated with the target selection of the Assyrians.
The re-emergence of the Assyrian empire during the period under consideration occurred over a relatively short time-scale. As a consequence, there may have been time constraints on the decisions made by the army, with invasions taking place in certain states simply because they were en route to more desirable locations. In this way, some states that were a long distance from the location of the Assyrian army may have been seen as undesirable, given the time and effort it would have taken to travel there. Travel might have been viewed as a significant cost associated with selecting states to invade. As a consequence, our first hypothesis asserts: Hypothesis 1. The Assyrian army was more likely to invade states that were near to their previously recorded location.
As well as the distance to potential invasion sites, there are other factors that may have influenced the Assyrians' cost of travelling. We hypothesize that a significant cost may have arisen if the terrain led to difficulty in travelling through the state. More mountainous regions, for instance, would have likely led to higher perceived costs by the army. In addition, uneven or high terrain might have favored the population of that state, who may have turned to insurgent tactics to counter the threat posed by the Assyrians. There is evidence to suggest that mountainous regions are more likely to provide favorable conditions for insurgencies during modern conflicts (Fearon and Laitin 2003). Our second hypothesis states: Hypothesis 2. The Assyrian army was more likely to favor invading states with low mean elevation.
There is evidence to suggest that the invaded states had some resistance to the Assyrian army (Tadmor 1997) and it is likely that the Assyrian army encountered a number of militant groups of varying strengths during invasions. Our next hypothesis asserts that if the states were able to organize effective defense via these militant groups, the Assyrians would have been deterred from invading. The Assyrian army might have considered effective defense of a state to be a significant cost due to the potential for damaging the strength of the army via, for instance, Assyrian loss of life. In order to estimate the military potential of each state, we use a measure of population density, the operationalization of which is described in the sections that follow. In particular, we suppose that states with high population density will have had the organizational capacity to enable more effective defense. Thus, our third hypothesis states: Hypothesis 3. States with organizational capacity to enable effective defense, for which high population density serves as a proxy, were less likely to be invaded by the Assyrian army.
Certain states may have also made alliances with contiguous states in order to counter the threat posed by the Assyrian army. Indeed, the empirical record mentions three instances of alliance formation between the smaller states. Although the available data do not allow us to test a formal hypothesis regarding whether these alliances were successful in deterring the Assyrians, we consider whether other such alliances might have been made after Assyrian invasion. The specification of the model used to detect such effects is discussed in the results section.
As well as invading states in order to expand the empire, the Assyrians may have invaded some states because control over those states offered more tangible benefits and opportunities. In particular, the attributes of each of the states may have had a significant influence in selecting the states to invade. We hypothesize that states with more desirable attributes were more at risk to invasion. We consider three attributes of each state, which represent the associated opportunities that may have attracted invasion by the Assyrians.
First, we hypothesize that states with better conditions for agriculture are likely to be more desirable to the Assyrians. Although current rainfall conditions are likely to differ from those of the past, we consider the modern level of precipitation of each state (NOAA 2014) as a proxy for past agricultural conditions, leading to: Hypothesis 4. States with higher levels of precipitation were more likely to be invaded.
Next, we hypothesize that the Assyrians were attracted to a particular area for its level of natural resources, specifically iron deposits, which became increasingly desired for creating iron weapons and tools during the Iron Age (Maxwell-Hyslop 1974), leading to: Hypothesis 5. States with higher levels of metal resources were more likely to be invaded.
Our final hypothesis asserts that the Assyrians sought to seek out new trading opportunities to the west, which were likely to have been more prevalent towards the coast of the Near East (Sherratt and Sherratt 1993), leading to: Hypothesis 6. States that were closer to the Mediterranean coast were more likely to be invaded.
In what follows, we describe our analytical approach, which includes an overview of the dependent and independent data used in our analysis. We then present the results of our analysis, which enable us to evaluate each of the hypotheses stated above.
Methodology
The onset and evolution of conflict, particularly with regard to historical conflict, is traditionally discussed using anecdotal perspectives, rather than by employing mathematical or statistical models to seek out underlying mechanisms or patterns that might be exploited to obtain insights. More recently, there has been a dramatic increase in the quantity and quality of such models exploring the location and timings of various examples of conflict (Turchin 2003; Weidmann and Ward 2010; Zammit-Mangion et al. 2012; Turchin et al. 2013; Bhavnani et al. 2014). This is partly due to increased data availability, which is crucial for modeling because it enables the development of models that are empirically consistent, and partly due to an increased range of sophisticated modeling techniques, some of which are well-suited to scenarios involving only partial data. Indeed, historical conflicts pose an additional complication because available data are typically scarce, biased, or often interpreted from other sources. In many cases, the availability of data constrains the sophistication with which models can be constructed.
The dependent variable used in this study is given by the states that were invaded by the Assyrian army, as detailed through inscriptions made during the reign of Ashurnasirpal (Grayson 1991). The descriptions of the activities of the Assyrian army were collated and, where possible, georeferenced according to the state in which each activity occurred. The geographic data were obtained from a geographic study (Liverani 1992) of the Assyrian state at the time of Ashurnasirpal's rise. The data are made available as supplementary data to this work. In total, 65 separate invasion activities were identified between 883-865 BC and used in the analysis. Figure 2 shows a thematic map of the geographic area under consideration, with counts for the number of times each state was invaded as the dependent variable.
We propose a spatiotemporal Cox regression model to investigate the relationship between the invasions of the Assyrian army and the costs and benefits associated with those decisions. The model estimates the likelihood that each state will be selected as a target and has two components. The first captures inherent variation in the times at which Assyrian invasions took place, variation which is not dependent on the states themselves but is due to other factors that influence campaign times. Variation in attack frequency might, for example, arise due to external factors such as the weather or army weariness. We do not explicitly model this natural variation, but retain the term in the model to demonstrate that the states themselves are not the only factors influencing the timings of the attacks. The second component in the model considers the differences in each of the states at the times at which these campaigns took place, and supposes that the decisions made by the Assyrians with regards to where to invade depended on these differences. In an attempt to capture the variation across different spatial regions, a number of independent variables are incorporated into the model. We now outline our model before explaining how each of the independent variables is operationalized.
We suppose that the Assyrian army invades one of the surrounding states at times given by the random variables T_1, T_2, …, T_J and that, at each time T_j, the random variable S_j ∈ {1, 2, …, N} denotes the specific state that is invaded. Thus, the actions of the Assyrian army are summarized by the sequence of tuples (T_1, S_1), (T_2, S_2), …, (T_J, S_J).
Following previous studies that consider the time until events occur within a geographic area (e.g., Myers 1997; Raleigh and Hegre 2009), we model the hazard function λ_i(t) for state i, taken to be the instantaneous risk at time t that state i will be invaded. This is formally defined as
\[ \lambda_i(t) = \lim_{\delta \to 0} \frac{P\big(t \le T_J < t + \delta,\ S_J = i \mid T_1, \dots, T_{J-1}\big)}{\delta}, \tag{1} \]
where T_J is a random variable specifying the time of the next invasion, assuming that invasions at times T_1, T_2, …, T_{J−1} have already taken place. The Cox regression model, first described in Cox (1972) and further elaborated upon in several subsequent papers (Cox 1975; Anderson and Gill 1982; Gill 1984), assumes that pure temporal variation in the frequency of event occurrence can be separated from dependencies specific to each of the different areas i. Specifically, it supposes that the hazard function can be written as
\[ \lambda_i(t) = \alpha(t)\, \exp\big(z_i(t)^\top \beta\big), \tag{2} \]
for each area i, for a temporally dependent baseline hazard α(t) and temporally varying covariates z_i(t) with a vector of parameters β. An advantage of this form of the model is that to estimate the parameters β, representing the effect from the independent variables on the risk of invasion, an explicit form of the baseline hazard α(t) is not required, and purely time-varying actions of the Assyrian army can be neglected. As a consequence, the times T_1, T_2, …, T_J can be treated as given and the task is reduced to modelling the associated random variables S_1, S_2, …, S_J, which detail the invaded state. This highlights a particular strength of the Cox regression approach: the precise event times are, in fact, irrelevant and only the order in which states are attacked is required to estimate the parameters β. For the case of the Neo-Assyrian campaigns, this is particularly salient because accurate dates are not recorded by inscriptions, but the order in which the inscriptions are presented gives an indication of the ordering of invasions during any given campaign.
The conditional probability that state i is invaded at time T_J, given that we know that one state is invaded at that time, is given by
\[ P\big(S_J = i \mid T_J\big) = \frac{\exp\big(z_i(T_J)^\top \beta\big)}{\sum_{k=1}^{N} \exp\big(z_k(T_J)^\top \beta\big)}, \]
and the parameters β are found via maximization of the partial likelihood function given by
\[ L(\beta) = \prod_{j=1}^{J} \prod_{i=1}^{N} \left( \frac{\exp\big(z_i(T_j)^\top \beta\big)}{\sum_{k=1}^{N} \exp\big(z_k(T_j)^\top \beta\big)} \right)^{1(S_j = i)}, \]
where 1(S_j = i) is an indicator function, equal to one if S_j = i and equal to zero otherwise. Note that the model estimates a counting process rather than a survival process because it is possible for each state to be invaded more than once (see Anderson and Gill 1982; Gill 1984).
In what follows, the independent variables denoted by zi(t) that are used to populate the model are described. The first variable to be incorporated is required to evaluate hypothesis 1 and measures the distance between the Assyrian army's previous location and each state for each time period during the study. This is calculated as the geographic distance between the centroid of the state of the last invasion, and the centroid of the state in question (measured in units of 100 km).
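For concreteness, a rough sketch of this distance computation (not code from the study; the function and the example coordinates below are hypothetical placeholders) using the haversine great-circle formula and the 100 km units of the model:

centroid_distance <- function(lon1, lat1, lon2, lat2) {
  # Haversine great-circle distance, rescaled to units of 100 km.
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 + cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  d_km <- 2 * 6371 * asin(sqrt(a))   # Earth radius of about 6371 km
  d_km / 100
}
centroid_distance(43.3, 35.5, 38.3, 37.1)   # hypothetical centroid coordinates (lon, lat)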
The second variable is a measure of the mean elevation of each state, as obtained from a Level 1 digital elevation model (DEM; USGS 2014). The mean elevations are included in the model using units of 100 m for ease of interpretation of the parameter values.
The potential for defense within each state, and hence the danger posed to the invading Assyrian army, is proxied by the population density of each state, which is calculated from the relative size of the settlements found in the regions modeled (Liverani 1992; Wilkinson et al. 2005; Wilkinson et al. 2007; Radner 2014). Although these estimates are not precise, they allow us to determine areas where we expect relatively more or fewer people to be concentrated, based on historical and archaeological data in the region. Values are assigned from 1 to 4, with 4 indicating the highest population; the values indicate the relative degree to which given regions were more populated than others. These values are then divided by the area of each state, as measured in units of 100 km².
Average precipitation, obtained from NOAA (2014), is used to determine whether a state was suitable for agriculture. Although these are modern data, paleoclimate studies show that the current pattern is similar to Iron Age precipitation ranges (Issar and Zohar 2007), or at least indicate which regions are relatively wetter or drier (e.g., along the Mediterranean coast). The average precipitation is measured in units of 100 mm for ease of interpretation.
The amount of metal resources available in each state is estimated by relative distance from iron deposits known in the region (Maxwell-Hyslop 1974), and in Anatolia in particular. States that are closer to known deposits score higher (i.e., a value of 2 or 3), whereas states that are farther away score lower (i.e., 1). Figure 3 shows some of the relevant variables utilized in the model.
A further variable is added to incorporate a measure of the distance of each state to the coast. This is given by calculating the geographic distance between the centroid of each state and the nearest coastal location; it is taken in units of 100 km for ease of interpreting the corresponding parameter estimates. This variable acts as a proxy for access to beneficial trade routes that emerged along the coast (Sherratt and Sherratt 1993).
Finally, two control variables are also included in order to alleviate problems associated with unobserved heterogeneity and the varying geography of the study area. The first control, indicating the number of previous times that each state had been invaded by the Assyrians at each point in time, reduces potential bias resulting from unobserved heterogeneity. This arises when factors that are largely responsible for influencing the choices made are not included in the model. Unobserved heterogeneity may also influence parameter estimates of those variables that are included in the model. Incorporating the number of prior invasions for each state goes some way to incorporate some of the factors that influenced the Assyrians' choices that we have not (or could not) incorporate into the model. The area of each state is also included as a control variable because the states vary considerably in size. As a consequence, even if the invasions were made completely at random, it is likely that some states would be more invaded than others if they are larger. Including this variable reduces size as a source of bias.
The final expression for the hazard function λ_i(t) is given by
\[ \lambda_i(t) = \alpha(t)\, \exp\big( \beta_1 D_i(t) + \beta_2 E_i + \beta_3 P_i + \beta_4 R_i + \beta_5 M_i + \beta_6 C_i + \beta_7 A_i + \beta_8 I_i(t) \big), \tag{3} \]
where D_i(t) is the distance between state i and the previous recorded location of the Assyrian army; E_i is the elevation of state i; P_i is the population density of state i; R_i is the precipitation in state i; M_i is the amount of metal resources in state i; C_i is the distance from state i to the coast; A_i is the area of state i; and I_i(t) is the number of prior invasions in state i.
Results
The coxph function in the R survival package (Therneau 2013) was employed to estimate the parameters β. Fox (2002) provides an overview of its implementation with regards to time-varying covariates. Note that even though the independent variables are time-dependent, the estimated parameters are not: the regression is performed over multiple time-steps and leads to one estimate for the relative effect of each variable. Figure 4 summarizes the parameter estimates for two models: one containing only those variables which represent costs to the Assyrian army (Model 1) and one containing both costs and opportunities (Model 2). The exponentiated value of each parameter (the odds ratio) is plotted, together with a 95 percent confidence interval for that estimate. This value is chosen for ease of interpretation and gives the estimated change in the odds that a state will be invaded resulting from a one-unit increase in the associated independent variable. To explain, if the exponentiated coefficient e^β is equal to one, then any change in the associated variable has little effect on the risk of invasion. If e^β is greater than one, then a one-unit increase in the value of the associated variable increases the odds of invasion by a factor of e^β. Similarly, if e^β is less than one, then a one-unit increase in the associated variable decreases the odds of invasion multiplicatively by the same amount.
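To illustrate, a hedged sketch of how such a fit could be set up in R (the data frame assyria_long and its column names are hypothetical placeholders; the covariates mirror those listed in Equation 3):

library(survival)

# 'assyria_long' is a hypothetical data frame in counting-process format:
# one row per state and inter-invasion interval, with columns start, stop,
# invaded (1 if this state was the target at 'stop'), and the covariates.
fit_costs <- coxph(Surv(start, stop, invaded) ~ dist_army + elevation +
                     pop_density + area + prior_invasions,
                   data = assyria_long)                                      # Model 1: costs only
fit_full <- update(fit_costs, . ~ . + precipitation + metals + dist_coast)  # Model 2

summary(fit_full)        # exp(coef) and 95 percent confidence intervals
AIC(fit_costs, fit_full) # compare the two specifications

summary() reports the exponentiated coefficients and confidence intervals of the kind plotted in Figure 4.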
In keeping with Hypothesis 1, distance between the state and the Assyrian army is negatively associated with the risk of invasion in both models. Given that the exponentiated parameter estimates are around 0.3, if all else is equal, every 100 km between the state and the Assyrian army reduces the probability of invasion to around a third of what it otherwise would be.
An increase in elevation negatively impacts the likelihood of invasion, a finding in agreement with Hypothesis 2. This effect was significant in both models. With all other things being equal, every 100 m increase in the mean elevation of a state reduced the likelihood of invasion by 10-15 percent.
Population density, a proxy for the extent to which each state could coordinate effective defense, was significantly negatively associated with the probability of invasion in Model 1, in agreement with Hypothesis 3. In Model 2, in which both cost and opportunity variables were included, the effect was not significant at the 95 percent level.
Figure 4.
Parameter estimates and associated 95 percent confidence intervals for each of the independent variables incorporated in the model. The left hand side presents a model containing costs to the Assyrian army and the right hand side presents a model with both costs and opportunities. If a confidence interval crosses the value 1, then the associated parameter is not significant at the 95 percent level and the confidence interval is shaded grey.
Precipitation was the only opportunity variable found to be a statistically significant predictor of invasion for the Neo-Assyrians, supporting Hypothesis 4. Specifically, states with relatively higher levels of precipitation, and therefore states which are more likely to have conditions suitable for agriculture, were more likely to be invaded by the Assyrians, suggesting that the benefits associated with the potential for agriculture were a significant driver in their decision-making. The presence of metals (Hypothesis 5) and the distance to the coast (Hypothesis 6) were not significant predictors of invasion at the 95 percent level.
Finally, although the control variables incorporated did not test a specific hypothesis articulated in this section, it is interesting to note their direction. The point estimate for the area of the state was positively (but not significantly) associated with the probability of invasion, implying that larger states were more likely to be invaded.
The point estimate for the number of previous attacks was negatively associated with the probability of invasion and was significant for Model 2. This negative association indicates that the Assyrians might have favored invasion of states that they had not invaded previously, suggesting that expansion was one of their principal objectives.
Model 2 led to a slightly higher Akaike's Information Criterion (AIC; Akaike 1974; Anderson 2008) score than Model 1 (Model 1: 298.08; Model 2: 298.50) despite the added variables. This suggests that the three opportunity variables (precipitation, metals, and distance to coast) do little to improve the model. Omitting the variables for metals and distance to coast but retaining the variable for precipitation resulted in an AIC value of 295.36, meaning that the inclusion of just the opportunity variable associated with better agricultural conditions leads to a better model. Parameter estimates and significance levels of this model were consistent with those shown in Model 1. Table 1 presents an analysis of deviance table for Model 2. This table shows the results of a sequence of likelihood ratio tests used to establish the extent to which the inclusion of each variable improves the model fit. The table is constructed as follows: beginning with a null model containing none of the independent variables, the log-likelihood is calculated. Eight models are then specified, each of which includes just one of the eight variables in Model 2. For each of these models, a likelihood ratio test is performed against the null model to determine whether the inclusion of each variable significantly improves model fit. The variable that increases the log-likelihood by the largest amount is selected as the next variable in the table (which, in this case, is distance from the army). This process is then repeated with the remaining variables, but instead of adding each variable to a null model, we add each remaining variable to a model containing just those variables that have already been shown to cause the largest improvement in model fit and which have, therefore, already been added to the table. Thus, the improvement in model fit that is due to each variable is determined, whilst, at each stage, controlling for the variables that are found to have the largest influence. The order of the variables in the table provides some indication of the amount of explanatory power associated with each of them. To explain, the first variable is the distance from the Neo-Assyrian army; this variable increased the log-likelihood of a null model with no covariates from -241.38 to -154.08, which was the maximum available increase of all possible covariates. The model improvement as a result of including this variable was highly significant (p < 10^-5).
Considering the remaining variables, the next largest increase in the log-likelihood over the model that included only "distance from army" was found by including the population density variable. The null hypothesis, that the model is not improved by the addition of this variable, can be rejected because p = 0.0010. The next largest increase comes from the elevation variable, which was also the final variable that significantly improved the model. This finding supports the hypothesis that elevation played a significant role in deterring the Neo-Assyrian army. The inclusion of the precipitation variable did not appear to significantly improve model fit, suggesting that its apparent significance in Figure 4 and its improvement of the AIC may arise from a confounding factor.
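A minimal sketch of this greedy likelihood-ratio procedure (again assuming the hypothetical assyria_long data frame and variable names used above):

library(survival)

vars_left  <- c("dist_army", "elevation", "pop_density", "precipitation",
                "metals", "dist_coast", "area", "prior_invasions")
base_terms <- "1"
current_ll <- as.numeric(coxph(Surv(start, stop, invaded) ~ 1,
                               data = assyria_long)$loglik[1])   # null log-likelihood
while (length(vars_left) > 0) {
  # Fit one candidate model per remaining variable, added to the current terms.
  cand <- lapply(vars_left, function(v) {
    f <- as.formula(paste("Surv(start, stop, invaded) ~", base_terms, "+", v))
    coxph(f, data = assyria_long)
  })
  ll   <- sapply(cand, function(fit) max(fit$loglik))
  best <- which.max(ll)
  # Likelihood ratio test of the best candidate against the current model (1 df).
  p_val <- pchisq(2 * (ll[best] - current_ll), df = 1, lower.tail = FALSE)
  cat(vars_left[best], "log-likelihood:", round(ll[best], 2),
      "LR p-value:", signif(p_val, 3), "\n")
  base_terms <- paste(base_terms, "+", vars_left[best])
  current_ll <- ll[best]
  vars_left  <- vars_left[-best]
}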
In summary, our results suggest that regions closer to the last recorded army location, with lower population density, and with lower mean elevation were more likely to be invaded by the Neo-Assyrians. There is little evidence to suggest that the opportunity variables tested were capable of capturing these decisions. Figure 5 maps the relative hazards for each state in the region for four different starting locations of the Neo-Assyrian army, as specified by Equation 3. A large amount of the modeled hazard depends on the location of the Neo-Assyrian army; however, the effect from other variables can also be observed because the risk does not simply decay with distance from the army's location.
Figure 5. The probability that each state will be invaded next, as calculated by Equation 3, for four starting points of the Neo-Assyrian Empire, as indicated by the star. The location of the star indicates the location of the Assyrian army and the color of each state indicates the probability that the state will be the next one to be invaded. These probabilities span from 0 to 0.6. The state of Assyria is shaded white.
A model was also constructed in order to test whether there was any evidence of alliance formation amongst the invaded states and whether such alliances might then have had either a deterrent or an incentive effect for the Neo-Assyrians. In other words, were they more or less likely to invade states which had recently been attacked and were those recently attacked states, therefore, more likely to have formed alliances? To construct this model, a dummy variable was included equal to one if a state or any of its contiguous neighbors had been invaded previously in the same campaign. It is supposed that these states were the ones most likely to form alliances.
For this model, an additional control variable was incorporated to alleviate potential errors from confounding variables. This control was another dummy variable that indicated whether each state was contiguous with the previously invaded state. If alliances between neighbors did indeed affect the decision-making of the Neo-Assyrians, then an effect would be expected for the remainder of the campaign, and not just for the next attack. Once this was controlled for, no significant effects of neighbor alliances on likelihood of invasion were observed.
Discussion and Conclusion
We have presented a spatiotemporal Cox regression model to determine the attributes of states that made them more susceptible to invasion during the Neo-Assyrian campaigns of the early ninth century BCE. The case study of the Neo-Assyrian state is generally an ideal example of early empires because it is relatively well-documented, despite some limitations. Moreover, it is an important empire for understanding the long succession of empires that continued in the Near East long after the Neo-Assyrian Empire. In essence, our results provide an idea of what the impetus may have been that began this long process of empire formation in the region.
Our results suggest that the Neo-Assyrians were under constraints with regards to how far they could travel during any one campaign. Strategic decision-making by the king would, therefore, have played a key role in deciding where the campaigns were largely fought. We have investigated a series of hypotheses to determine whether, even within each of these campaigns, the decision-making of the Neo-Assyrians was informed by factors associated with the costs and opportunities of the choices that might have been made. We conclude that consideration of the costs associated with invasion had a larger impact on decision-making than opportunities. We find that distance of the army from potential states that could be attacked, elevation (where elevation would be a deterrent to invasion), and population density (which could be a proxy for the extent to which different regions were able to organize effective defense) were significant factors affecting invasion. Including the opportunity variables of precipitation, distance to coast, and the availability of metal deposits only marginally improved the fit of the model. Overall, it appears that human and geographic factors likely affected the onset of conflict during the reign of Ashurnasirpal, which subsequently shaped conflict in the region later in the ninth century BCE (Yamada 2000) and likely in later periods. In effect, it appears that Ashurnasirpal's campaigns did have a practical goal of attacking and defeating Assyria's nearest potential rivals, particularly those which appeared to be less populated or organized.
Our study is limited mainly by the lack of data associated with this time period. Nevertheless, we have demonstrated how systematically applying mathematical models can help to understand and discuss some of the key drivers during such historical periods, even when there are data constraints. The model employed in this study is particularly well-suited to understanding the features of locations that affect their risk of invasion because it can incorporate time-varying covariates without requiring detailed information on the specific times at which the invasions occurred.
The method we employed presents a simple and easy-to-use approach for studying different possibilities as to why early empires, such as the Neo-Assyrian state, undertook expeditions and warfare. Whereas most accounts of past empire formation are speculative or based on purely qualitative understandings of states' motivations and their expansion, the approach we provide allows a quantitative assessment of the impetus for expansion and the pathways by which states formed. Furthermore, this method is applicable to other states and empires because the data requirements are not a great burden to obtain, particularly for empires and states occurring after the Neo-Assyrian period. Well-dated archaeological data that signify attacks on states, for example, can be used in place of historical records of invasions. The order and temporal circumstances of attacks could be checked by the modeling approach to determine the plausibility of different motivations. A comparative approach among these states' invasions may allow the approach presented here to be used to determine whether similar patterns were also in effect. Overall, this approach is expandable to other cases, potentially even those with less data resolution.
Supplementary Data
See uploaded file.
"year": 2015,
"sha1": "cc6e701d86d5e005855b7adb362bcb0067f6e219",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt0415c0pj/qt0415c0pj.pdf?t=pfkj5n",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "544c4befab98f05f82dffb15f9eae6e762305ecc",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
Local convergence rates of the nonparametric least squares estimator with applications to transfer learning
Convergence properties of empirical risk minimizers can be conveniently expressed in terms of the associated population risk. To derive bounds for the performance of the estimator under covariate shift, however, pointwise convergence rates are required. Under weak assumptions on the design distribution, it is shown that least squares estimators (LSE) over 1-Lipschitz functions are also minimax rate optimal with respect to a weighted uniform norm, where the weighting accounts in a natural way for the non-uniformity of the design distribution. This implies that although least squares is a global criterion, the LSE adapts locally to the size of the design density. We develop a new indirect proof technique that establishes the local convergence behavior based on a carefully chosen local perturbation of the LSE. The obtained local rates are then applied to analyze the LSE for transfer learning under covariate shift.
Introduction
Consider the nonparametric regression model with random design supported on [0, 1]: we observe n i.i.d. pairs (X_1, Y_1), …, (X_n, Y_n) ∈ [0, 1] × ℝ, with
\[ Y_i = f_0(X_i) + \varepsilon_i, \qquad i = 1, \dots, n, \tag{1} \]
and independent measurement noise variables ε_1, …, ε_n ∼ N(0, 1). The design distribution is the marginal distribution of X_1 and is denoted by P_X. Throughout this paper, we assume that P_X has a Lebesgue density p. The least squares estimator (LSE) for the nonparametric regression function f_0 taken over a function class F is given by
\[ \widehat f_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \sum_{i=1}^n \big(Y_i - f(X_i)\big)^2. \]
If the class F is convex, computing the estimator f̂_n results in a convex optimization problem. For a fixed function f, the law of large numbers implies that the least squares objective Σ_i (Y_i − f(X_i))² is close to its expectation n E[(Y_1 − f(X_1))²] = n + n E[(f(X_1) − f_0(X_1))²]. It is therefore not surprising that the standard analysis of LSEs based on empirical process methods and metric entropy bounds for the function class F leads to convergence rates with respect to the empirical L²-loss
\[ \|\widehat f_n - f_0\|_n^2 := \frac{1}{n} \sum_{i=1}^n \big(\widehat f_n(X_i) - f_0(X_i)\big)^2 \]
and the associated population version E[∫_0^1 (f̂_n(x) − f_0(x))² p(x) dx], see for instance [38,24,16,34]. The latter risk is the expected squared loss that we suffer if a new X ∼ P_X arrives and f_0(X) is estimated by f̂_n(X).
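As an aside, the LSE over 1-Lipschitz functions can be computed in practice as a quadratic program; the following sketch (an illustration only, not the construction used in the proofs of this paper) fits the estimator at the sorted design points using the quadprog package in R:

library(quadprog)

# Minimize sum_i (Y_i - f_i)^2 subject to |f_{i+1} - f_i| <= X_{(i+1)} - X_{(i)}
# for consecutive sorted design points, i.e. a Lipschitz constraint with constant L.
lipschitz_lse <- function(x, y, L = 1) {
  o <- order(x); x <- x[o]; y <- y[o]
  n  <- length(x)
  dx <- diff(x)
  Dmat <- diag(2, n)          # objective (1/2) f' D f - d' f with D = 2 I, d = 2 y
  dvec <- 2 * y
  A <- matrix(0, n, 2 * (n - 1))
  for (i in seq_len(n - 1)) {
    A[i, i] <- -1; A[i + 1, i] <- 1                  # f_{i+1} - f_i >= -L dx_i
    A[i, n - 1 + i] <- 1; A[i + 1, n - 1 + i] <- -1  # f_i - f_{i+1} >= -L dx_i
  }
  bvec <- rep(-L * dx, 2)
  list(x = x, fit = solve.QP(Dmat, dvec, A, bvec)$solution)
}

set.seed(1)
x <- runif(200)
y <- abs(x - 0.5) + rnorm(200, sd = 0.3)   # f_0(x) = |x - 0.5| is 1-Lipschitz
fit <- lipschitz_lse(x, y)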
A widely observed phenomenon is that the distribution of the new X is different from the design distribution of the training data. As an example, assume that we want to predict the response Y of a patient to a drug based on a measurement X summarizing the health status of this patient. To learn such a relationship, data are collected in one hospital resulting in an estimator f n . Later f n will be applied to patients from a different hospital. It is conceivable that the distribution of X in the other hospital will be different. For instance, there could be a different age distribution or patients have a different socio-economic status due to variations in the imposed costs for treatments.
An important problem is therefore to evaluate the expected squared risk of the estimator f̂_n if a new observation X is sampled from a different design distribution Q_X with density q. The associated prediction error under the new distribution is then
\[ \int_0^1 \big(\widehat f_n(x) - f_0(x)\big)^2 q(x)\, dx. \tag{2} \]
If for some finite constant C and any x ∈ [0, 1], q(x) ≤ C p(x), then the prediction error under the design density q is of the same order as under p. However, in machine learning applications there are often subsets of the domain with very few datapoints. This motivates the relevance of the problematic case where the density q is large in a low-density region of p. Differently speaking, we are more likely to see a covariate X in a region for which we have very few training data based on the sample (X_1, Y_1), …, (X_n, Y_n). Since the lack of data in such a region means that the LSE will not fit the true regression function f_0 well, this could potentially lead to a very large prediction error under the new design distribution. An extension of this problem setting is transfer learning under covariate shift. Here we know the least-squares estimator f̂_n and the sample size n based on the sample (X_1, Y_1), …, (X_n, Y_n) with design density p. On top of that we have a second, smaller dataset with m ≪ n new i.i.d. datapoints (X_1, Y_1), …, (X_m, Y_m) with Y_i = f_0(X_i) + ε_i, i = 1, …, m, and design density X_1 ∼ q. In the framework of the hospital data above, this means that we also have data from a small study with m patients from the second hospital. In other words, the regression function f_0 remains unchanged, but the design distribution changes. Since the number of extra training data points m is small compared to the original sample size n, we want to quantify how well an estimator combining f̂_n and the new sample can predict under the new design distribution with associated prediction error (2).
Establishing convergence rates for the loss in (2), given a sample with design density p, is a hard problem and to the best of our knowledge no simple modification of the standard least squares analysis allows to obtain optimal rates for this loss.
To address this problem, we study the case where the LSE is selected within the function class F consisting of all 1-Lipschitz functions. For this setting, we prove under weak assumptions that, for a sufficiently large constant K and all x, with high probability,
\[ |\widehat f_n(x) - f_0(x)| \le K\, t_n(x), \tag{3} \]
where the local convergence rate t_n(x) is the solution to the equation t_n(x)^2 P_X([x ± t_n(x)]) = log n / n. [13] already showed, under slightly different assumptions on the design density, that t_n is locally the optimal estimation rate and that this rate is attained by a suitable wavelet thresholding estimator. What appears surprising to us is that the LSE also achieves this optimal local rate. Indeed, the LSE is based on minimization of the (global) empirical L²-distance, and convergence in L² is weaker than convergence in the weighted sup-norm loss underlying the statement in (3). To establish (3) we only assume a local doubling property of the design distribution. By imposing more regularity on the design density, we can prove that t_n(x) ≍ (log n/(n p(x)))^{1/3}. For this result, p is also allowed to depend on the sample size n, such that the p(x) in the denominator does not only change the constant but also the local convergence rate. This shows nicely how the local convergence rate varies depending on the density p and how small density regions inflate the local rate. Moreover, we show that kernel smoothing with a fixed bandwidth has a slower convergence rate than the LSE. Therefore, the least squares fit can better recover the regression function if the values of the density p range over different orders of magnitude. This property is particularly important for machine learning applications.
Based on (3), we can then obviously bound the prediction error in (2) by
\[ \int_0^1 \big(\widehat f_n(x) - f_0(x)\big)^2 q(x)\, dx \le K^2 \int_0^1 t_n(x)^2\, q(x)\, dx. \]
In many cases, simpler expressions for the convergence rate can be derived from the right hand side. For instance, in the case t_n(x) ≍ (log n/(n p(x)))^{1/3}, the convergence rate is (log n/n)^{2/3} if ∫_0^1 q(x)/p(x)^{2/3} dx is bounded by a finite constant. A major contribution of this paper is the new proof strategy to establish local convergence rates. For that we argue by contradiction, first assuming that the LSE has a slower local rate. Based on that, we then construct a local perturbation with smaller least squares loss. This implies that the original estimator cannot have been the LSE, which yields the desired contradiction. The construction of the local perturbation and the verification of a smaller least squares loss are both non-standard and involved. We believe that this strategy can be generalized to various extensions of the setup considered here.
The paper is structured as follows. In Section 2, we state the new upper and lower bounds on the local convergence rate. This section is followed by a discussion on the imposed doubling condition as well as few examples in Section 3. Section 4 gives a high-level overview of the new proof strategy to establish local convergence rates. Application to transfer learning is then discussed in Section 5 and, finally, a brief literature review and an outlook is provided in Section 6. Proofs are deferred to the Appendix.
Notation: For two real numbers a, b, we write a ∨ b = max(a, b) and a ∧ b = min(a, b). For any real number x, we denote by ⌈x⌉ the smallest integer m such that m ≥ x and by ⌊x⌋ the greatest integer m such that m ≤ x. Furthermore, for any set S, we denote by x ↦ 1(x ∈ S) the indicator function of the set S. To increase readability of the formulas, we define [a ± b] := [a − b, a + b]. For any two positive sequences {a_n}_n, {b_n}_n, we say that a_n ≲ b_n if there exists a constant 0 < c < ∞ and N ∈ ℕ such that for all n ≥ N, a_n ≤ c b_n. We write a_n ≍ b_n if a_n ≲ b_n and b_n ≲ a_n. Finally, if for all ε > 0 there exists a positive integer N such that for all n ≥ N, a_n ≤ ε b_n, then we write a_n ≪ b_n. For a random variable X and a (measurable) set A, P_X(A) stands for P(X ∈ A). For any function h for which the integral is finite, we set ‖h‖_{L²(P)} := (∫ h²(x) p(x) dx)^{1/2}. We also write ‖h‖_n := ((1/n) Σ_{i=1}^n h²(X_i))^{1/2}.
Main results
In this section, we state the local convergence results for the LSE. The local convergence rate t_n turns out to be the functional solution to an equation that depends on the design distribution P_X. Denote by M the set of distributions that are both supported on [0, 1] and absolutely continuous with respect to the Lebesgue measure.
Lemma 1. If P_X ∈ M, then, for any n > 1 and any x ∈ [0, 1], there exists a unique solution t_n(x) of the equation
\[ t_n(x)^2\, P_X\big([x ± t_n(x)]\big) = \frac{\log n}{n}. \tag{4} \]
Therefore the function x ↦ t_n(x) is well defined on [0, 1]. From now on, we refer to t_n as the spread function (associated to P_X).
Figure 1: The spread function associated to a distribution with density p : x ↦ 2x · 1(x ∈ [0, 1]) for increasing values of n. Smaller functions t_n correspond to larger n.
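A short numerical sketch of the spread function: since the left-hand side of (4) is nondecreasing in t_n(x), the solution can be found by one-dimensional root finding. The example density below is the one from Figure 1; none of this code is from the paper.

# Solve t^2 * P_X([x - t, x + t]) = log(n)/n numerically; 'cdf' is the CDF of P_X.
spread <- function(x, n, cdf) {
  target <- log(n) / n
  gap <- function(t) t^2 * (cdf(pmin(x + t, 1)) - cdf(pmax(x - t, 0))) - target
  uniroot(gap, lower = 1e-10, upper = 1)$root
}

cdf_lin <- function(u) u^2                       # CDF of the density p(x) = 2x on [0, 1]
sapply(c(0.05, 0.25, 0.5, 0.9), spread, n = 1e4, cdf = cdf_lin)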
The spread function can be viewed as a measure for the local mass of the distribution P X around x. The more mass P X has around x, the smaller t n (x) is, see Figure 1 for an illustration. Whenever necessary, the spread function associated to a probability distribution P is denoted by t P n . To derive a local convergence rate of the least-squares estimator taken over Lipschitz functions, one has to exclude the possibility that the design distribution P X is completely erratic. Interestingly, no Hölder smoothness has to be imposed on the design density and it is sufficient to consider design distributions satisfying the following weak regularity assumption.
Definition 2. For n ≥ 3 and D ≥ 2, define P_n(D) as the class of all design distributions P_X ∈ M such that, for any 0 < η ≤ √(log n) sup_{x∈[0,1]} t_n(x) and all x ∈ [0, 1],
\[ P_X\big([x ± 2η]\big) \le D\, P_X\big([x ± η]\big). \tag{LDP} \]
We call (LDP) the local D-doubling property, or local doubling property when the constant D is irrelevant or unambiguous.
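To make (LDP) concrete, a small sketch that evaluates the doubling ratio P_X([x ± 2η])/P_X([x ± η]) on a grid of points and interval sizes for a given CDF (an illustration only; the grid is a hypothetical choice and does not match the η-range of Definition 2 exactly):

# Largest observed value of P_X([x - 2*eta, x + 2*eta]) / P_X([x - eta, x + eta])
# over a grid of points x in [0, 1] and small interval sizes eta.
doubling_ratio <- function(cdf,
                           etas = 10^seq(-4, -1.5, length.out = 20),
                           xs   = seq(0, 1, length.out = 201)) {
  prob <- function(x, h) cdf(pmin(x + h, 1)) - cdf(pmax(x - h, 0))
  max(outer(xs, etas, function(x, h) prob(x, 2 * h) / prob(x, h)))
}
doubling_ratio(function(u) u)     # uniform design: ratio 2
doubling_ratio(function(u) u^2)   # density p(x) = 2x: ratio 4, attained near x = 0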
The restriction x ∈ [0, 1] allows us to include distributions with Lebesgue densities that are discontinuous at 0 or 1; for instance, the uniform distribution on [0, 1] is 2-doubling. Since the uniform distribution on [0, 1] is contained in P_n(2) ⊆ P_n(D) for D ≥ 2, these classes are non-empty. Inequality (LDP) states that doubling the size of a small interval cannot inflate its probability by more than a factor D. The next result shows that the maximum interval size √(log n) sup_{x∈[0,1]} t_n(x) tends to zero as n becomes large.
Lemma 3. If P_X ∈ P_n(D), then for any ε > 0 there exists an N = N(D, ε) such that for all n ≥ N,
\[ \sqrt{\log n}\; \sup_{x ∈ [0,1]} t_n(x) \le ε. \]
The local doubling condition is a relaxation of the well-known global doubling condition and allows us to consider sample-size-dependent design distributions. For a more in-depth discussion and some examples, see Section 3.
We now show that the spread function is indeed the minimax rate. Denote by Lip(κ) the set of functions supported on [0, 1] that are Lipschitz with Lipschitz constant at most κ, that is,
\[ \operatorname{Lip}(\kappa) := \big\{ f : [0,1] \to \mathbb{R} \,:\, |f(x) - f(y)| \le \kappa |x - y| \ \text{for all } x, y ∈ [0,1] \big\}. \]
If f ∈ Lip(κ), then we also say that f is κ-Lipschitz. Recall that P_{f_0} is the distribution of the data in the nonparametric regression model (1) if the true regression function is f_0 and that P_X denotes the distribution of the design X.
Theorem 4. Consider the nonparametric regression model (1). Let 0 < δ < 1 and D ≥ 2. If f̂_n denotes the LSE taken over the class of 1-Lipschitz functions Lip(1), then, for a sufficiently large constant K,
\[ \sup_{P_X ∈ \mathcal{P}_n(D)}\; \sup_{f_0 ∈ \operatorname{Lip}(1)} P_{f_0}\Big( \exists\, x ∈ [0,1] : |\widehat f_n(x) - f_0(x)| > K\, t_n(x) \Big) \le n^{-\delta}. \]
A close inspection of the proof shows that, as n → ∞, the right hand side converges to zero with a polynomial rate in n. Since the previous result is uniform over design distributions P_X ∈ P_n(D), we can also consider sequences P^n_X. While at first sight it might appear unnatural to consider for every sample size n a different design distribution, this constitutes a useful statistical concept to study the effect of low density regions on the convergence rate. Indeed, the influence of a small density region disappears in the constant for a fixed density, but the dependence on the sample size makes the effect visible in the convergence rate. Moreover, sample size dependent quantities are widely studied in mathematical statistics, most prominently in high-dimensional statistics, where the number of parameters typically grows with the sample size.
A key question is to identify conditions for which the local convergence rate t_n has a more explicit expression. One such instance is the case of Hölder-smooth design densities. Let ⌊β⌋ denote the largest integer that is strictly smaller than β. The Hölder-β seminorm of a function g : ℝ → ℝ is defined as
\[ |g|_\beta := \sup_{x \ne y} \frac{\big|g^{(\lfloor \beta \rfloor)}(x) - g^{(\lfloor \beta \rfloor)}(y)\big|}{|x - y|^{\beta - \lfloor \beta \rfloor}}. \]
For β = 1, |g|_β is the Lipschitz constant of g.
Corollary 5.
Consider the nonparametric regression model (1). Let 0 < δ < 1 and let f̂_n be the LSE taken over the class of 1-Lipschitz functions Lip(1). For β ∈ (0, 2], let P^n_X be a sequence of distributions with corresponding Lebesgue densities p_n. If for any n there exists a non-negative function h_n with p_n(x) = h_n(x) for all x ∈ [0, 1], max_n |h_n|_β ≤ κ, and min_{x∈[0,1]} p_n(x) ≥ n^{−β/(3+β)} log n, then, for all n ≥ exp(4κ) ∨ 9,
\[ t_n(x) \asymp \Big( \frac{\log n}{n\, p_n(x)} \Big)^{1/3}, \qquad P^n_X ∈ \mathcal{P}_n\big(2 + 2^{β/3}\, 3^{β} κ + 2^{1/3}\, 3\, κ^{1/β}\big), \]
and there exists a finite constant K independent of the sequence P^n_X, such that
\[ \sup_{f_0 ∈ \operatorname{Lip}(1)} P_{f_0}\Big( \exists\, x ∈ [0,1] : |\widehat f_n(x) - f_0(x)| > K \Big( \frac{\log n}{n\, p_n(x)} \Big)^{1/3} \Big) \le n^{-\delta}. \]
The rate (log n/(n p_n(x)))^{1/3} is natural, since n p_n(x) can be viewed as the local effective sample size around x.
For β ∈ (0, 1], we can always choose h n (x) = p n (0) for x < 0, h n (x) = p n (x) for x ∈ [0, 1], and h n (x) = p n (1) for x > 1. While the rate is independent of the smoothness index β, we can allow faster decaying low density regions if β gets larger. The fastest possible decay is n −2/5 log n if β = 2.
To extend the result to β > 2 and to allow for even smaller densities, it is widely believed that imposing Hölder smoothness is insufficient. One way around is to use Hölder smoothness plus some extra flatness constraint. For more on this topic, see [28,29].
A lower bound on the small density regions in Corollary 5 seems to be necessary. Indeed, p_n(x) ≪ log n/n would imply that the rate t_n(x) ≍ (log n/(n p_n(x)))^{1/3} diverges. The next lemma shows how the spread function behaves at a point with vanishing Lebesgue density p.
Then, there exists N > 0, depending only on U , such that for any n > N , .
We complement Theorem 4 with a matching minimax lower bound. A closely related result is Theorem 2 in [13].
Theorem 7. If C_∞ is a positive constant, then there exists a positive constant c such that, for any sufficiently large n and any sequence of design distributions P^n_X ∈ M with corresponding Lebesgue densities p_n all upper bounded by C_∞, we have
\[ \inf_{\widehat f}\; \sup_{f_0 ∈ \operatorname{Lip}(1)} P_{f_0}\Big( \exists\, x ∈ [0,1] : |\widehat f(x) - f_0(x)| \ge c\, t_n(x) \Big) \ge c, \]
where the infimum is taken over all estimators.
Corollary 5 states that t n (x) (log n/(np n (x))) 1/3 . Combined with the lower bound, this shows that the local minimax estimation rate in this framework is (log n/(np n (x))) 1/3 .
It is known that for Lipschitz functions and squared L 2 -loss, the LSE achieves the minimax estimation rate n −2/3 . Summarizing the statements on the convergence rates above shows that the LSE is also minimax rate optimal with respect to the stronger weighted sup-norm loss.
Next, we discuss how the derived local rates imply several advantages of the LSE compared to kernel smoothing estimators. In the case of uniform design p(x) = 1(x ∈ [0, 1]), the LSE achieves the convergence rate n^{−2/3} with respect to squared L²-loss and Corollary 5 gives the rate (log n/n)^{1/3} with respect to sup-norm loss. To the best of our knowledge, it is impossible to obtain these two rates simultaneously for kernel smoothing estimators. The squared L² rate n^{−2/3} can be achieved for a kernel bandwidth h ≍ n^{−1/3}, and the sup-norm rate (log n/n)^{1/3} requires more smoothing in the sense that the bandwidth should be of the order (log n/n)^{1/3}, see Corollary 1.2 and Theorem 1.8 in [36]. Any bandwidth choice in the range n^{−1/3} ≪ h ≪ (log n/n)^{1/3} will incur an additional log n-factor in at least one of these two convergence rates of the kernel smoothing estimator. Although the suboptimality in the rate is only a log n-factor, it is surprising to see that the LSE does not suffer from this issue.
Secondly, we argue that kernel smoothing estimators with a fixed global bandwidth cannot achieve the local convergence rate (log n/(n p_n(x)))^{1/3} in the setting of Corollary 5. Denote the bandwidth by h and the kernel smoothing estimator by f̂_{nh}. Below we show that the decomposition into stochastic error and bias yields that, for all x ∈ [0, 1], with high probability,
\[ |\widehat f_{nh}(x) - f_0(x)| \lesssim \sqrt{\frac{\log n}{n h\, p_n(x)}} + h. \tag{6} \]
To balance these two terms, one would have to choose h ≍ (log n/(n p_n(x)))^{1/3}. In this case, we would also obtain the local convergence rate (log n/(n p_n(x)))^{1/3}. But this requires choosing the bandwidth locally depending on x. From that one can deduce that any global choice for h in (6) leads to suboptimal local rates. It is surprising that although the LSE is based on a global criterion, it changes locally the amount of smoothing to adapt to the amount of datapoints in each regime. This is a clear advantage of the least squares method over smoothing procedures. It seems that this benefit is particularly advantageous for machine learning problems, which typically have high- and low-density regions in the design distribution. To see (6), consider the kernel smoothing estimator
\[ \widehat f_{nh}(x) = \frac{1}{n h\, p_n(x)} \sum_{i=1}^n Y_i\, K\Big(\frac{X_i - x}{h}\Big) \]
with positive bandwidth h and a kernel function K supported on [−1, 1]. This is a simplification of the Nadaraya-Watson estimator in which the kernel density estimator in the denominator is replaced by the true density p_n. Observe that, using Y_i = f_0(X_i) + ε_i,
\[ \widehat f_{nh}(x) - f_0(x) = \frac{1}{n h\, p_n(x)} \sum_{i=1}^n \varepsilon_i\, K\Big(\frac{X_i - x}{h}\Big) + \frac{1}{n h\, p_n(x)} \sum_{i=1}^n f_0(X_i)\, K\Big(\frac{X_i - x}{h}\Big) - f_0(x). \]
In the stochastic error term, the sum is over O(n h p_n(x)) many variables since K has support on [−1, 1]. By the central limit theorem, this means that this sum is of the order O(√(n h p_n(x))).
Obtaining a uniform statement in x yields an additional √(log n) factor and, together with the normalization 1/(n h p_n(x)), shows that the stochastic error is of the order √(log n/(n h p_n(x))).
Thus, to heuristically verify (6), it remains to bound the deterministic error term. This term is close to its expectation
\[ \frac{1}{h\, p_n(x)} \int f_0(u)\, K\Big(\frac{u - x}{h}\Big)\, p_n(u)\, du - f_0(x) = \int \Big( f_0(x + hv)\, \frac{p_n(x + hv)}{p_n(x)} - f_0(x) \Big) K(v)\, dv, \]
where we used the substitution v = (u − x)/h and ∫ K(v) dv = 1. If p_n is sufficiently smooth and K has enough vanishing moments, then the Lipschitz property of f_0 shows that this term can be bounded by h, completing the argument for (6).
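A toy implementation of this simplified kernel smoothing estimator (assuming, as an example, the design density p(x) = 2x from Figure 1 and an Epanechnikov kernel; none of this is taken from the paper):

kernel_est <- function(x0, x, y, h, p) {
  K <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)   # Epanechnikov kernel on [-1, 1]
  sum(y * K((x - x0) / h)) / (length(x) * h * p(x0))
}

set.seed(2)
n <- 5000
x <- sqrt(runif(n))                      # design density p(u) = 2u on [0, 1]
y <- abs(x - 0.5) + rnorm(n, sd = 0.3)
kernel_est(0.7, x, y, h = 0.05, p = function(u) 2 * u)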
To draw uniform confidence bands, but also for the application to transfer learning discussed later, it is important to estimate the spread function t_n from data. For P̂^n_X(A) := (1/n) Σ_{i=1}^n 1(X_i ∈ A) the empirical design distribution, a natural estimator is the plug-in version t̂_n(x) of t_n(x), obtained by replacing P_X with P̂^n_X in the defining equation (4).
Theorem 8. If P_X ∈ P_n(D) and ‖p‖_∞ < ∞, then
\[ \sup_{x ∈ [0,1]} \Big| \frac{\widehat t_n(x)}{t_n(x)} - 1 \Big| \longrightarrow 0 \quad \text{in probability}. \]
The result implies that, for any ε > 0 and all sufficiently large n,
\[ (1 - ε)\, t_n(x) \le \widehat t_n(x) \le (1 + ε)\, t_n(x) \quad \text{for all } x ∈ [0,1], \]
with high probability.
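A plug-in sketch of the estimated spread function (the empirical measure makes the left-hand side a step function in t, so the root finder returns an approximate crossing point; again an illustration, not the paper's code):

spread_hat <- function(x0, X) {
  n <- length(X)
  gap <- function(t) t^2 * mean(X >= x0 - t & X <= x0 + t) - log(n) / n
  uniroot(gap, lower = 1e-10, upper = 1)$root   # approximate root of a step function
}
spread_hat(0.7, sqrt(runif(5000)))   # same example design density p(u) = 2u as above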
Local doubling property and examples of local rates
A real-valued measure satisfies the (global) doubling property if (LDP) in Definition 2 holds for all η > 0. Denote by P G (D) the space of all globally doubling measures. It immediately follows from the definitions that P G (D) ⊆ P n (D). A converse statement is Lemma 9. If P X ∈ P n (D), then P X ∈ P G (D n (P X )) for a finite number D n (P X ).
This means that the distinction between local and global doubling only makes a difference in the case where we study sequences of design distributions, such as in the setup of Corollary 5. In Example 3, a sequence P n X is constructed such that P n X ∈ P n (D) for all n and P n X ∈ P G (D n ) necessarily requires D n → ∞ as n → ∞.
Doubling is known to be a weak regularity assumption and does not even imply that P X has a Lebesgue density [20,8]. It can moreover be easily verified for a wide range of distributions. All distributions with continuous Lebesgue density bounded away from zero and all densities of the form p(x) ∝ x α for α ≥ 0 are doubling.
We now derive explicit expressions for the local convergence rates and verify the doubling condition for different design distributions by proving that P X ∈ P G (D) or P X ∈ P n (D).
Example 1. Assume that the design density p is bounded from below and above, in the sense that
\[ 0 < p_{\min} \le p(x) \le p_{\max} < \infty \quad \text{for all } x ∈ [0,1]. \tag{8} \]
The following result shows that in this case Theorem 4 is applicable and the local convergence rate is t_n(x) ≍ (log n/n)^{1/3}.
Lemma 10.
Assume that the design distribution P X admits a Lebesgue density satisfying (8).
Then, P_X ∈ P_G(4 p_max/p_min) and, for any 0 ≤ x ≤ 1,
\[ \Big(\frac{\log n}{2 n\, p_{\max}}\Big)^{1/3} \le t_n(x) \le \Big(\frac{\log n}{n\, p_{\min}}\Big)^{1/3}. \]
As a second example, we consider densities that vanish at x = 0.
Example 2. Assume that, for some α > 0, the design distribution P X has Lebesgue density This means, there is a low-density regime near zero with rather few observed design points. In this regime, it is more difficult to estimate the regression function and this is reflected in a slower decrease of the local convergence rate.
By rewriting the expression for the spread function, we find that the local convergence rate is t_n(x) ≍ (log n/(n (x ∨ a_n)^α))^{1/3}. The behavior of t_n(0) can also directly be deduced from Lemma 6.
As a last example, we consider a sequence of design distributions with decreasing densities on [1/4, 3/4].
Example 3. For φ n = 1 ∧ n −1/4 log n, consider the sequence of distributions P n X with associated Lebesgue densities It is easy to check that this indeed defines Lebesgue densities on [0, 1], see Figure 2 for a plot. According to Lemma 10, these distributions are globally doubling. Since P n X ([0, 1])/ P n X ([1/4, 3/4]) = 1/φ n , the doubling constants are ≥ 1/φ n and hence tend to infinity as n grows. Therefore there is no D > 0 such that p n ∈ P G (D) for all n. On the contrary, for all n, p n ∈ Lip(16) and since φ n ≥ n −1/4 log n, the assumptions of Corollary 5 are satisfied with β = 1 and κ = 16. Therefore, p n ∈ P n (8) for all n large enough and the local convergence rate is (log n/(np n (x))) 1/3 . In particular, in the regime [1/4, 3/4], the local convergence rate becomes n −1/4 .
Proof strategy
As the new proof strategy to establish local rates for least squares estimation is the main mathematical contribution of this work, we outline it here. Consider the LSE The definition of the estimator ensures that for any g ∈ Lip(1), the so called basic inequality 2 holds. Assume that the function g satisfies which is the same as saying that g should always lie between f n and the true regression function f 0 . Together with the basic inequality and using We prove that t n (x) is the local convergence rate by contradiction. Assume that the LSE f n is more than Kt n (x * ) away from the true regression function f 0 for some x * ∈ [0, 1] and a sufficiently large constant K. Then, we choose g as a specific local perturbation of f n (in the sense that g differs from f n only on a small interval) such that the previous inequality (14) is violated, resulting in the desired contradiction. Denote the space of all possible functions f n − g by F * . Since f n ∈ Lip(1) and g ∈ Lip (1), we have f n − g ∈ Lip(2) and thus, F * ⊆ Lip(2). In fact, by choosing g as a local perturbation of f n , the function class F * will be much smaller than Lip (2). Due to the small support of f − g, we have f (X i ) − g(X i ) = 0 for most X i . It is conceivable that we can remove these indices from (14) and that the effective sample size m = m(X 1 , Y 1 , . . . , X n , Y n ) is the number of indices for which f (X i ) − g(X i ) = 0. Assume moreover that F * is star-shaped, that is, if h ∈ F * and α ∈ [0, 1], then also αh ∈ F * . We now argue similarly as in []. Replacing f * by g in their inequality (13.18) and then following exactly the same steps as in the proofs for their Theorem 13.1 and Corollary 13.1, one can now show that if there exists a sequence η n with 0 ≤ η n ≤ 1 satisfying then, To derive a contradiction assume that there exists a point Suppose moreover, that for all K large enough, we can find a function g ∈ Lip(1) satisfying (13), , and that the support of f − g has length of the order Kt n (x * ). That such a construction of a function g is possible is plausible due to f n − g ∈ Lip(2). Because we can also choose K ≥ 4, another consequence of The right hand side should be close to its expectation 1 4 where we used the definition of t n (x * ). Thus, up to approximation errors, we obtain the lower bound We now explain the choice of η n in (16), that leads to the upper bound for 1 (14). Since the perturbation is supported on an interval with length of the order Kt n (x * ), one can bound the metric entropy log N r, F * , · ∞ Kt n (x * )/r, with proportionality constant independent of K. Therefore, (15) holds for η n ∝ (Kt n (x * ) log n/m) 1/3 . The additional log n-factor in η n is necessary to obtain uniform statements in x. For this choice of η n , the probability in (16) converges to zero. Consequently, on an event with large probability, we have that 1 1 x yf f Figure 3: If the LSE f would not have locally slope= 1, then one could construct a perturbed versionf that better fits the data, implying that f cannot be a least squares fit.
Recall that the support of f − g is contained in [x * ± CKt n (x * )] for some constant C. Moreover, m is the number of observations in the support of f − g. Now m should be close to its expectation which can be upper bounded by nP X ([x * ±CKt n (x * )]). Invoking the local doubling property (LDP), m can also be upper bounded by Using the definition of t n (x), (18) can be further bounded by Comparing this with the lower bound (17) and dividing both sides by log n, we conclude that on an event with large probability, 1 4 K 2 (C K K 2 ) 1/3 , where the proportionality constant does not depend on K. A technical argument that links the upper and lower bound more tightly and that we do not explain here in detail shows that one can even avoid the dependence of C K on K, such that we finally obtain K 2 K 2/3 . Taking K large and since the proportionality constant is independent of K, we finally obtain a contradiction. This means that on an event with large probability and for all sufficiently large K, there cannot be a point There is still a major technical obstacle in the proof strategy, namely the choice of the local perturbation g. This construction appears to be one of the main difficulties of the proof. In fact, the empirical risk minimizer over 1-Lipschitz functions will typically lie somehow on the boundary of the space Lip(1) in the sense that on small neighborhoods the Lipschitz constant of the estimator is exactly one. To see this, assume the statement would be false. Then we could build tiny perturbations around the estimator that are 1-Lipschitz and lead to a smaller least squares loss, which contradicts the fact that the original estimator is a least squares minimizer (see Figure 3). This makes it tricky to construct a local perturbation g of f that also lies in Lip(1) and satisfies the required conditions. To find a suitable perturbation, our approach is to introduce first x * as above and then define another point x in the neighborhood of x * with some specific properties. The full construction is explained in Figure 5 and Lemma 23.
Applications to Transfer Learning
Transfer Learning (TL) aims to exploit the fact that an estimator achieving good performance on a certain task should also work well on similar tasks. This allows one to emulate a bigger dataset and to save computational time by relying on previously trained models. In the supervised learning framework, we have access to training data generated from a distribution Q_{X,Y}. Observing X from a pair (X, Y) ∼ Q_{X,Y}, we want to predict the corresponding value of Y. To do so, we compute an estimator based on observing m i.i.d. copies sampled from Q_{X,Y}. Assume now that we also have access to n > m i.i.d. copies sampled from another distribution P_{X,Y}. The transfer learning paradigm states that, depending on some similarity criterion between P_X and Q_X, fitting an estimator using both samples improves the predictive power. In other words, P_{X,Y} contains some information about Q_{X,Y} that can be transferred to improve the fit. Two standard settings within TL are posterior drift and covariate shift. For posterior drift, one assumes that the marginal distributions are the same, that is, P_X = Q_X, but P_{Y|X} and Q_{Y|X} may be different. On the contrary, TL with covariate shift assumes that P_{Y|X} = Q_{Y|X}, while P_X and Q_X can differ. Here, we address the covariate shift paradigm within the nonparametric regression framework. This means that we observe n + m independent pairs (X_1, Y_1), …, (X_{n+m}, Y_{n+m}) with
\[ Y_i = f_0(X_i) + \varepsilon_i, \qquad X_1, \dots, X_n \sim P_X, \qquad X_{n+1}, \dots, X_{n+m} \sim Q_X, \tag{19} \]
and independent noise variables ε_1, …, ε_{n+m} ∼ N(0, 1).
We now discuss estimation in this model, treating the cases m = 0 and m > 0, separately. In both cases, the risk is the prediction error under the target distribution. For the sake of readability, we omit the subscript X and write P and Q for P X and Q X respectively. Throughout the section, we assume global doubling, that is, P, Q ∈ P G (D) for some D ≥ 2.
Using LSE from source distribution to predict under target distribution
As before, let q denote the density of the target design Q. Recall that we are considering the covariate shift model (19) with m = 0 and Lipschitz continuous regression functions. If the n training data were generated from the target distribution Q_{X,Y} instead, the classical empirical risk theory would lead to the standard nonparametric rate n^{−2β/(2β+1)} with β = 1. More precisely, the statement would be that, with probability tending to one as n → ∞,
\[ \int_0^1 \big(\widehat f_n(x) - f_0(x)\big)^2 q(x)\, dx \lesssim n^{-2/3}. \]
For n training samples from the source distribution P_{X,Y}, Theorem 4 shows that the prediction risk under the target distribution is bounded by
\[ K^2 \int_0^1 t_n(x)^2\, q(x)\, dx \]
with probability converging to one as n → ∞. For a given source density p, the main question is whether the right hand side is of the order n^{−2/3}, up to log n-factors. This would imply that there is no loss in terms of convergence rate (ignoring log n-factors) due to the different sampling scheme. To get at least close to the n^{−2/3} rate, some conditions on p are needed. If p is for instance zero on [0, 1/2], we have no information about the regression function f_0 on this interval and any estimator will be inconsistent on [0, 1/2]. If we then try to predict with Q the uniform distribution, it is clear that the prediction error under Q cannot converge to zero. In the setting of sample size dependent densities p_n, Corollary 5 shows that under the imposed conditions, there exists a constant K that does not depend on n, such that
\[ \int_0^1 \big(\widehat f_n(x) - f_0(x)\big)^2 q(x)\, dx \le K^2 \int_0^1 \Big(\frac{\log n}{n\, p_n(x)}\Big)^{2/3} q(x)\, dx \]
with probability tending to one as n → ∞. For instance, for the sequence of densities p_n in (12), the right hand side in the previous display is of the order (log n/n)^{2/3} φ_n^{−2/3} ≤ n^{−1/2}. For distributions satisfying the conditions of Theorem 4, we need to bound the more abstract integral ∫_0^1 t_n(x)² q(x) dx. The next result provides a different formulation that is sometimes simpler to use. Lemma 12. In the same setting and under the same conditions as in Theorem 4, there exists a constant K′, such that with probability tending to one as n → ∞.
In [23], a pair (P, Q) is said to have transfer exponent γ if there exists a constant 0 < C ≤ 1 such that for all 0 ≤ x ≤ 1 and 0 < η ≤ 1, we have P([x ± η]) ≥ C η^γ Q([x ± η]). Combined with the previous lemma, we get for transfer exponent γ, with probability tending to one as n → ∞. Interestingly, the right hand side does not depend on the target distribution Q anymore. The next lemma provides an example of convergence rates.
with probability tending to one as n → ∞.
In the proof, we show that the result follows for 0 < α ≤ 1 by a direct application of Lemma 12. For general α > 0, we prove the lemma by a more sophisticated analysis based on the bounds derived in Example 2.
Combining both samples to predict under the target distribution
We now consider the nonparametric regression model under covariate shift (19) with a second sample, that is, m > 0.
In a first step, we construct an estimator combining the information from both samples. The main idea is to consider the LSEs for the first and second part of the sample and, for a given x, pick the LSE with the smaller estimated local rate. For a proper definition of the estimator, some notation is required. Restricting to the first and second part of the sample, let f̂^{(1)}_n and f̂^{(2)}_m denote the corresponding LSEs taken over 1-Lipschitz functions. Because the spread function is the local convergence rate, it is now natural to study
\[ \widetilde f_{n,m}(x) = \widehat f^{(1)}_n(x)\, 1\big(t^P_n(x) \le t^Q_m(x)\big) + \widehat f^{(2)}_m(x)\, 1\big(t^P_n(x) > t^Q_m(x)\big). \]
Since the spread functions t^P_n and t^Q_m depend on the unknown design distributions, f̃_{n,m} is not yet an estimator. Replacing t^P_n(x) and t^Q_m(x) by the estimators leads to the definition of our nonparametric regression estimator under covariate shift,
\[ \widehat f_{n,m}(x) = \widehat f^{(1)}_n(x)\, 1\big(\widehat t^P_n(x) \le \widehat t^Q_m(x)\big) + \widehat f^{(2)}_m(x)\, 1\big(\widehat t^P_n(x) > \widehat t^Q_m(x)\big). \tag{23} \]
Let p and q be, respectively, the Lebesgue densities of P and Q. We omit the dependence on n, m and denote by P_f the distribution of the data in model (19).
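In code, the selection rule in (23) is a simple pointwise switch; the sketch below assumes that the two Lipschitz LSE fits and the two estimated spread functions have already been constructed as R functions (for instance with the sketches given earlier):

combined_fit <- function(x, fhat1, fhat2, that_P, that_Q) {
  # Use the first-sample LSE where its estimated local rate is smaller, else the second.
  ifelse(that_P(x) <= that_Q(x), fhat1(x), fhat2(x))
}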
Consider the nonparametric regression model under covariate shift (19). Let 0 < δ < 1 and D > 0. If P, Q ∈ P G (D) and the estimator f n,m is as in (23), then, for a sufficiently large constant K, The proof shows that to achieve the rate t P n (x) ∧ t Q m (x) it is actually enough to estimate t P n (x) using N data points X 1 , . . . , X N ∼ P, where N is a sufficiently large number. Thus, instead of observing the full first dataset (X 1 , Y 1 ), . . . , (X n , Y n ) ∼ P, the estimator only needs the LSE f (1) n and N i.i.d. observations from the design distribution P .
In a next step we show that the rate t P n (x) ∧ t Q m (x) is the local minimax rate. The design distributions P n X , Q n X are allowed to depend on the sample size. The corresponding spread functions are denoted by t P n (x) and t Q m (x). Theorem 15. Consider the nonparametric regression model under covariate shift (19). If C ∞ is a positive constant, then there exists a positive constant c, such that for any sufficiently large n, and any sequences of design distribution P n X , Q n X ∈ M with corresponding Lebesgue densities p n , q n all upper bounded by where the infimum is taken over all estimators and P f0 is the distribution of the data in model (19).
Given the full dataset in model (19), an alternative procedure is to use the LSE over all n + m data points. Instead of analyzing this estimator in model (19), the risk can rather easily be controlled in the related model where we observe n + m i.i.d. observations (X_1, Y_1), . . . , (X_{n+m}, Y_{n+m}) with X_i drawn from the mixture distribution P̄ := (m/(m+n)) Q + (n/(m+n)) P and Y_i = f_0(X_i) + ε_i. In this model, we draw on average n observations from P and m observations from Q. Since P_G(D) is convex, Theorem 4 applies and, consequently, t^P̄_{n+m}(x) is a local convergence rate. The spread function can be bounded as follows.
Lemma 16. If P̄ := (n/(n+m)) P + (m/(n+m)) Q, then, If there are positive constants C, κ such that sup_x t^P_n(x) ≤ Cn^{-κ} for all n, then there exists a constant C', such that One can see that the rate is at most a log-factor larger than the local minimax rate t^P_n(x) ∧ t^Q_m(x). Moreover, this additional log-factor can be avoided in the relevant regime where the local rate t^{P_X}_n(x) decays with some polynomial rate uniformly over [0, 1].
We now return to our leading example with source density p(x) = (α + 1)x^α 1(x ∈ [0, 1]) and target density q(x) = 1(x ∈ [0, 1]). Lemma 10 and Lemma 11 show that if α > 0, then the assumptions of Theorem 14 are satisfied, so the combined estimator attains the local convergence rate t^P_n(x) ∧ t^Q_m(x). From (21), we see that as long as α < 3/2, the first sample is enough to achieve the rate (log n/n)^{2/3}. We therefore focus on the regime 3/2 < α.
A brief review of convergence results for the least squares estimator in nonparametric regression
The standard strategy to derive convergence rates with respect to (empirical) L 2 -type losses is based on empirical process theory and covering bounds. The field is well-developed, see e.g. [14,37,16,22,38]. At the same time, it remains a topic of active research. A recent advance is to establish convergence rates of the LSE under heavy-tailed noise distributions [18,24]. Some convergence results are with respect to the squared Hellinger distance, see for instance [15,5]. This is slightly weaker but essentially the same as convergence with respect to the prediction risk E[( f n (X) − f 0 (X)) 2 ]. To see this, recall that for two probability measures P, Q defined on the same measurable space, the squared Hellinger distance is defined as H 2 (P, Q) = 1 2 ( √ dP − √ dQ) 2 (some authors do not use the factor 1/2). Denote by Q f the distribution of (X 1 , Y 1 ) in the nonparametric regression model (1) with regression function f. It can be shown that . Thus, the squared Hellinger loss is weaker than the squared prediction loss.
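For completeness, one way to make this comparison explicit is the following standard calculation, stated here as our sketch (not quoted from the paper) under the assumption that the errors in the regression model are standard Gaussian:

```latex
H^2(Q_f, Q_g)
  = 1 - \mathbb{E}_X \int \sqrt{\varphi\bigl(y - f(X)\bigr)\,\varphi\bigl(y - g(X)\bigr)}\, dy
  = 1 - \mathbb{E}_X\Bigl[e^{-(f(X) - g(X))^2/8}\Bigr]
  \leq \tfrac{1}{8}\, \mathbb{E}\bigl[(f(X) - g(X))^2\bigr],
```

with φ the standard normal density; the last step uses 1 − e^{−u} ≤ u, so the squared Hellinger distance is bounded by a constant multiple of the prediction risk.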
Concerning estimation rates, the LSE achieves the rate n^{-2β/(2β+d)} ∨ n^{-β/d} over balls of β-smooth Hölder functions. To see this, observe that if F denotes a Hölder ball and ||g||_n := (n^{-1} Σ_{i=1}^n g(X_i)^2)^{1/2} is the empirical L_2 norm, the metric entropy satisfies log N(r, F, ||·||_n) ≲ r^{-d/β}, see Corollary 2.7.2 in [37]. Any solution δ^2 of the inequality ∫_{δ^2}^{δ} √(log N(r, F, ||·||_n)) dr ≲ δ^2 √n is then a rate for the LSE, see Corollary 13.1 in [38]. It is now straightforward to check that this yields the convergence rate δ^2 of the order n^{-2β/(2β+d)} ∨ n^{-β/d}. While n^{-2β/(2β+d)} is the optimal convergence rate, Theorem 4 in [5] shows that the LSE cannot achieve a faster rate than n^{-β/2} (up to a possibly non-optimal logarithmic factor in n) if d = 1 and the smoothness index is β < 1/2.
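A worked version of this computation, under the stated entropy bound (our sketch of the standard argument, not a quote from the references):

```latex
\frac{1}{\sqrt{n}}\int_{\delta^2}^{\delta} \sqrt{\log N\bigl(r, \mathcal{F}, \|\cdot\|_n\bigr)}\, dr
  \;\lesssim\; \frac{1}{\sqrt{n}}\int_{\delta^2}^{\delta} r^{-d/(2\beta)}\, dr
  \;\asymp\; \frac{1}{\sqrt{n}}
  \begin{cases}
    \delta^{\,1 - d/(2\beta)}, & d < 2\beta,\\[4pt]
    \delta^{\,2 - d/\beta}, & d > 2\beta.
  \end{cases}
```

Equating the right hand side with δ^2 and solving gives δ^2 ≍ n^{-2β/(2β+d)} in the first case and δ^2 ≍ n^{-β/d} in the second, which is the rate quoted above; the boundary case d = 2β picks up at most an extra logarithmic factor.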
To the best of our knowledge, the only sup-norm rate result for the LSE is [30]. In this work, the LSE is studied for F the linear space spanned by a nearly orthogonal function system. For this setting, the LSE has an explicit representation that can be exploited to prove sup-norm rates.
Isotonic regression refers to the setting where the regression function is non-decreasing. In this setup, it is even possible to characterize the pointwise distribution of the LSE and, surprisingly, the LSE can be computed explicitly. Let (X_(1), Y_(1)), . . . , (X_(n), Y_(n)) be a reordered version of the dataset such that X_(1) ≤ X_(2) ≤ · · · ≤ X_(n) and for all 0 ≤ k ≤ n, define the k-th partial sum of Y as S_k := Σ_{i≤k} Y_(i). The LSE for isotonic regression is then piecewise constant on [0, 1] and admits an explicit min-max representation in terms of these partial sums; see for instance [6,7,39,19] and Lemma 2.1 in [33], and the sketch below. Based on this explicit characterization one can derive the distributional properties of the estimator. For the purposes of this discussion, we provide the following simplified version of the main theorem in [39].
with Z a random variable distributed as the slope at zero of the greatest convex minorant of W_t + |t|^{α+1}, where W is a two-sided Brownian motion.
If f_0 is Lipschitz continuous, then α = 1 and Z is known to follow Chernoff's distribution. Assuming moreover that p(x_0) > 0 leads to |f̂_n(x_0) − f_0(x_0)| of order (np(x_0))^{-1/3}. This agrees with the local rate t_n(x_0) ≍ (log n/(np(x_0)))^{1/3} obtained in Corollary 5 up to the log n-factor that emerges due to the uniformity of the local rates.
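The explicit characterization referred to above is usually written via the partial sums S_k; a standard form (stated here as a reminder rather than quoted from [39]) is f̂_n(X_(k)) = max_{i ≤ k} min_{j ≥ k} (S_j − S_{i−1})/(j − i + 1), which is exactly what the pool-adjacent-violators algorithm computes. A minimal sketch:

```python
import numpy as np

def isotonic_lse(y_sorted):
    """Isotonic least squares fit of y_sorted (ordered by increasing design points)
    via the pool-adjacent-violators algorithm; returns the piecewise-constant LSE."""
    blocks = []  # each block stores [sum of y in block, number of points in block]
    for y in y_sorted:
        blocks.append([float(y), 1])
        # Merge adjacent blocks while their means violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    return np.concatenate([np.full(c, s / c) for s, c in blocks])

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = x**2 + 0.3 * rng.normal(size=x.size)        # non-decreasing signal plus noise
fhat = isotonic_lse(y)
print(fhat[:5], bool(np.all(np.diff(fhat) >= -1e-12)))
```

The returned fit is non-decreasing and piecewise constant, in line with the description above.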
For isotonic regression in d dimensions, the recent article [17] shows that the LSE achieves the minimax estimation rate n^{-min(2/(d+2),1/d)} up to log n-factors. For d ≥ 3, it is known that log N(r, F, ||·||_2) ≍ r^{-2(d-1)}. Since for uniform design the norms ||·||_2 and ||·||_n are close, the standard approach to derive convergence rates via the entropy integral is then expected to yield no convergence rate faster than n^{-1/(2d-2)}. Since this rate is slower than the actual rate of the LSE, this shows that the entropy integral approach can be suboptimal. Interestingly, [17] proves moreover that if the isotonic function is piecewise constant with k pieces, the LSE can adapt to the number of pieces and attains the optimal adaptive rate (k/n)^{min(1,2/d)} up to log n-factors. A general overview about the LSE under shape constraints is given in the survey article [33].
Related work on transfer learning
From a theory perspective, the key problem in TL is to quantify the information that can be carried over from one task to another [10,26,3,4]. Among the mathematical statistics articles, [35] proposes unbiased model selection procedures and [32] considers re-weighting to improve the predictive power of models based on likelihood maximization. The nonparametric TL literature mainly focuses on classification. Minimax rates are derived under posterior drift by [9] and under covariate shift by [23].
The closest related work is the recent preprint [27]. While we consider the LSE, this article proves minimax convergence rates for the Nadaraya-Watson estimator in nonparametric regression under covariate shift. The proofs are quite different, as one can make use of the closed-form formula for the Nadaraya-Watson estimator (NW). The rates are proven uniformly over two different sets of distribution pairs. Let ρ_η(P_X, Q_X) denote the comparison quantity introduced in [27], let γ, C ≥ 1, and denote by S(γ, C) the set of all pairs (P_X, Q_X) such that sup_{η∈(0,1]} η^γ ρ_η(P_X, Q_X) ≤ C.
To discuss the connection of this class to our approach, observe that, in our framework, bounding the prediction risk of the LSE with regard to some target distribution amounts to bounding the quantity Using the definition of the spread function and assuming (P_X, Q_X) ∈ S(γ, C), we obtain In some cases, faster rates for the prediction error can be obtained for the LSE using our results. For an example, consider again the case that the source density is p(x) = (α + 1)x^α 1(x ∈ [0, 1]) and the target distribution is uniform on [0, 1]. For the nonparametric regression model with covariate shift (19), Lemma 13 shows that for the LSE f_n, with probability tending to one as n → ∞. For the Nadaraya-Watson estimator, Lemma 30 shows that if α ≥ 1, there exists a C > 0 such that for any ε ∈ (0, α), (P_X, Q_X) ∈ S(α, C) \ S(α − ε, C). According to Corollary 1 in [27], for f_NW the Nadaraya-Watson estimator with a suitable bandwidth choice, we then have for any α ≥ 1, This is a slower rate than (25). Also, we see that the LSE's convergence rate stops being minimax optimal for α > 3/2, while for NW this happens for α > 1. We believe that the loss in the rate is due to the lack of local adaptivity of kernel smoothing with fixed bandwidth, as discussed in Section 2.
Extensions and open problems
For machine learning applications, we are of course interested in multivariate nonparametric regression with design vectors X i ∈ R d and arbitrary Hölder smoothness β. To extend the result, the definition of the spread function has to be adjusted. If β > d/2, the LSE converges with rate n −2β/(2β+d) (see the discussion above) and we believe that the local rate t n is now determined by the solution of the equation where |v| ∞ denotes the largest absolute value of the components of the vector v. In the case d = 1, this coincides with the minimax rate found in [13]. Observe moreover that for the uniform design distribution, P X (y : |x − y| ∞ ≤ t n (x) 1/β ) t n (x) d/β and we obtain t n (x) (log n/n) β/(2β+d) . To show that t n is a lower bound on the local convergence rate, we believe that the proof of Theorem 7 can be generalized without too much additional effort. But the upper bound is considerably harder than the case β = d = 1 that we considered in this work. The main reason is that the local perturbation in the proof also needs to be β > 1 smooth and thus a piecewise approach as in (32) does not work anymore.
In Theorem 4, we assume that the regression function is (1 − δ)-Lipschitz for some positive δ. Another interesting question is whether the local convergence result can be extended to a regression function that is itself 1-Lipschitz. Again, the main complication arises in the construction of the local perturbation in Lemma 23. One might also wonder whether the same rates can be achieved if instead of the global minimizer, we take any estimator f satisfying for a pre-defined rate τ n . In particular, it is of interest to determine the largest τ n such that the optimal local rates can still be obtained. A similar approach might be used to prove local convergence rates for deep ReLU networks. Viewing a neural network as a function, deep ReLU networks generate piecewise linear functions of the input [12]. Given now a fixed network architecture, that is, the number of hidden layers and the number of units in each layer are fixed, let N N denote the space of all ReLU networks with this architecture. For the empirical risk minimizer over neural networks in this class it is known how to prove convergence with respect to the prediction risk [31,2,21]. Because the ReLU functions are piecewise linear and therefore Lipschitz it seems conceivable that a similar strategy can be used to prove local convergence rates for the neural network based estimator. Another possible future direction is to use the refined analysis and the local convergence of the LSE to prove distributional properties similar to the ones that have been established for the least squares procedure under shape constraints, see also the discussion in Section 6.1.
A Proofs for the properties of the spread function t n Proof of Lemma 1. Let x ∈ [0, 1] and define h(t) := t 2 P X ([x − t, x + t]). We have h(0) = 0 and h(1) = 1. Also, since P X admits a Lebesgue density, h is continuous. As a consequence of the mean value theorem, there is at least one t ∈ [0, 1] such that h(t) = log n/n. For n > 1, log n/n > 0 and hence t > 0 as well as P X ([x − t, x + t]) > 0. Thus, if t 0 denotes the smallest solution, then it follows that h is strictly increasing on [t 0 , ∞). Thus, there cannot be a second solution for h(t) = log n/n, proving uniqueness.
Proof. Proof of (i): Assume there are points 0 ≤ x, y ≤ 1 such that t n (x) > t n (y) + |x − y|. Then, [y ± t n (y)] ⊆ [x ± t n (x)] and therefore log n n = t n (y) 2 P X [y ± t n (y)] < t n (x) 2 P X [x ± t n (x)] = log n n .
Since this is a contradiction, we have t n (x) ≤ t n (y) + |x − y| for all 0 ≤ x, y ≤ 1. Hence t n is 1-Lipschitz.
Proof of (ii): The inequality is an immediate consequence of the definition of the spread function.
Proof of (iii): We first show that, assuming one solution exists, it is unique. Then we prove that for all n > 9 one solution exists. Suppose that the equation t n (x) = x admits at least one solution x 1 and suppose that there is another solution y 1 such that 0 < x 1 < y 1 < 1. Then This is a contradiction, therefore x 1 = y 1 , and the solution must be unique. Next, we assume n ≥ 9 and we prove the existence of a solution x 1 ∈ (0, 1/2) of the equation t n (x) = x. The function h : x → x 2 P([0, 2x]) is continuous and increasing with h(0) = 0 and h(1/2) ≥ P ([0, 1])/4 ≥ 1/4. Since n ≥ 9 we have 0 < log n/n < 1/4 and the mean value theorem guarantees the existence of x 1 ∈ (0, 1/2) such that h(x 1 ) = log n/n, that is, For the solution x 2 one can apply the same reasoning to the distribution with density functioñ p : Proof of (iv): Consider the function f : (x, t) → t 2 P X ([x ± t]) − log n n . Denote by D 1 f (x, y) and D 2 f (x, y) the partial derivatives of f with respect to its first and second variable evaluated at (x, y). Since P X admits a continuous Lebesgue density p on (0, 1) and t → t 2 is smooth, f is continuously differentiable on S := (0, 1) For any x ∈ [0, 1], f (x, t n (x)) = 0. Therefore, if D 2 f (x, t n (x)) = 0, one can apply the implicit function theorem which states the existence of an open neighbourhood U of x such that there exists a unique and continuously differentiable function g : U → R which satisfies f (x, g(x)) = 0 for all x ∈ U . Moreover, for all x ∈ U , the derivative of g is given by ) .
Since t n (x) satisfies f (x, t n (x)) = 0 for all x ∈ U and g is unique, we must have t n (x) = g(x). Moreover, since t n (x) > 0 for all x ∈ S, we have D 2 f (x, t n (x)) > 0 for all x ∈ S. Using P X ([x ± t n (x)]) = log n/(nt n (x) 2 ), we have, for all x ∈ S, . Remark 1. In fact, we always have t n (0) > 0 and t n (1) > 0. This means that one can partition [0, 1] in three intervals I 1 := [0, x 1 ), I 2 := [x 1 , x 2 ] and I 3 := (x 2 , 1], such that on I 1 , t n (x) > x, on I 2 , t n (x) ≤ x ∧ (1 − x) and on I 3 , t n (x) ≥ 1 − x. From the expression of t n , it follows that t n is strictly decreasing on I 1 and strictly increasing on I 3 .
Proof of Lemma 3. Using the definition of the spread function, the inequality √ log n sup x∈[0,1] t n (x) < ε is equivalent to To prove the lemma, we now argue by contradiction. Assume existence of an x * ∈ [0, 1] such that P X ([x * ± t n (x * )]) ≤ log 2 n/(nε 2 ). We show that for all sufficiently large n, this implies that P X ([0, 1]) < 1 and thus P X is not a probability measure. In a first step, we prove by induction, that for k = 1, 2, . . .
For k = 1, using the local doubling condition, we have for any x ∈ [0, 1], If for some positive integer k, (28) holds, then we have proving the induction step. Thus (28) holds for all positive integers k. By symmetry, one can prove that for all positive integers k, Since t n (x * ) > ε/ √ log n, we can cover each of the intervals [0, x * ] and [x * , 1] by at most √ log n/ε intervals of length 2t n (x * ). This implies as n → ∞. Thus, there exists an N = N (D, ε), such that for any n ≥ N, P X ([0, 1]) < 1. This contradicts the fact that P X is a probability measure. The claim holds.
Remark 2.
Taking ε = 1 and using (27), the previous lemma implies that for all sufficiently large n, Proof of Lemma 6. Since p( By Lemma 3 and Remark 2, sup x∈[0,1] t n (x) < log −1/2 n. Hence, there is a positive integer N, such that for all n ≥ N, t n (x 0 ) ∈ U . Combining inequality (29) with t = t n (x 0 ), using the definitions of t n and some calculus yields (α + 1) log n An .
B.1 Concentration of histograms
For sufficiently large sample size n, we can find an integer sequence (N n ) n satisfying 1 ≤ 1 16 N n (log n/n) 1/2 ≤ 2. Define then the discretization ∆ n := 1 N n .
We now show that the histogram n −1 n i=1 1(X i ∈ [j∆ n , k∆ n ]) concentrates around its expectation k∆n j∆n p(u) du. For this purpose, we recall first the classical Bernstein inequality for Bernoulli random variables.
Lemma 20 (Bernstein inequality). Let p ∈ [0, 1] and V 1 , . . . , V n be n independent Bernoulli variables with success probability p, then, Define Γ n (α) as the event j,k=1,...,Nn Roughly speaking, this set consists of all samples for which the histogram does not concentrate well around its expectation.The next result shows that this event has, for large sample sizes, a vanishing probability.
Proof. We use the union bound and apply Lemma 20. Since N n ≤ 32 n/ log n, we obtain the inequality. The convergence to zero follows from α > 0.
The previous result allows to work on samples in the subset Γ n (α) c . In particular, this means that the random quantity n −1 n i=1 1(X i ∈ [j∆ n , k∆ n ]) is the same as its expectation k∆n j∆n p(u) du up to a factor 2. In particular, we will apply this to random integers j, k depending on the dataset (X 1 , Y 1 ), . . . , (X n , Y n ). We frequently use that for an X that is independent of the data,
B.2 Bound on covering number
We now state a result for covering numbers of subsets of Lip (1). For E an arbitrary set of real valued functions, define N (r, E, . ∞ ) as the covering number of E by balls of radius r with respect to the sup-norm. The following lemma is a slightly refined version of classical results such as Corollary 2.7.10 from [37].
Proof. The proof is rather standard. For the sake of completeness, all details are provided. As a, b are fixed, we simply write E instead of E [a,b] . Let g ∈ E. We construct 3 (b−a)/r functions serving as the centers for the · ∞ -norm covering balls. The idea of the proof is to construct a finite subset of E containing an element h that lies less than r away from g. The upper bound on the metric entropy is then given by the cardinality of this finite set. For k = b−a r and x i = ik, i = 0, . . . , k, define h(x) by induction as follows; and for any 1 ≤ i ≤ k and any Denote by H the set of all possible functions h constructed as above. Intuitively, if |g(x i ) − h(x i )| ≤ r and there is some y ∈ [x i , x i+1 ) such that g(y) is more than r away from h(x i ), say g(y) ≥ h(x i ) + r, then, by the Lipschitz property, g([x i , x i+1 )) will be a subset of [h(x i ), h(x i ) + 2r] and the function x → h(x i ) + |x − x i | is not more than r away from g on the interval [x i , x i+1 ), see Figure 4 for more details.
We prove by induction that for any i ∈ {0, . . . , k} and for any First, since g(0) = 0 and g ∈ Lip(1), it follows from the definition of h that sup x∈[0,r] |g(x)−h(x)| ≤ r, which proves the property for i = 0. Next, assume that for some We distinguish two cases. First, if there exists a y ∈ [x i , x i+1 ] such that |g(y) − h(x i )| > r, then, by symmetry, it suffices to prove the induction step assuming g(y) − h(x i ) > r. By construction of h, we then have that for any where ( ) applies the induction assumption. This proves the first case of the induction step. Next, assume that for any h(x i ) and |g(x) − h(x)| ≤ r, proving the induction step for the second case. By induction, we conclude that for any g ∈ E there exists a function h ∈ H such that g − h ∞ ≤ r. The set H has cardinal ≤ 3 k . Hence,
B.3 Construction of a local perturbation
Before proving the main theorem, we construct a specific perturbation of a Lipschitz function and state several of its properties. In the proof of the main result, the lemma will be applied with ψ the LSE.
and set Then there are two functions h n and g n and two real numbers 0 ≤ x ≤ x u ≤ 1, such that Proof. We construct a function g n satisfying the claimed properties. The construction requires several steps and can be understood best through the visualization in Figure 5.
Define the function Since f ∈ Lip(1 − δ) and δ| · | ∈ Lip(δ), we have that h n ∈ Lip(1). By construction, h n (x) = ψ(x) − s n /2 < ψ(x). Denote by x the largest x belowx satisfying h n (x) = ψ(x). If no such x exists, set x := 0. Similarly, define x u as the smallest x abovex satisfying h n (x) = ψ(x) and set x u := 1 if this does not exist. Define By construction, g n ∈ Lip(1) and supp(ψ − g n ) = [x , x u ]. Thus (i) holds. Also (ii) follows directly from the inequalities above. We now prove (iii). Applying triangle inequality yields From the last inequality, we deduce that for all x ∈ [0, 1] such that |x −x| ≥ s n /δ, we have proving the first and last inequality in (iii).
To prove the remaining inequalities in
The right hand side of this inequality is > 0 for all x ∈ [x ± s n /4] ∩ [0, 1]. The definition of x and x u implies then that We now establish (iv). For x ∈ I := [x ± s n /8] ∩ [0, 1], one can use the lower bound from Equation (34) to obtain that for any x ∈ I, where we used (ii) for the last inequality. This proves (iv). We pick x * andx as in Lemma 23. From the construction ofx, we know that the function f − f (plotted in blue) cannot lie above the green line which has slope δ/2. The yellow function is h n . Since this function has slope δ, it will hit the green curve in a neighborhood ofx. This also implies that h n intersects for the first time with f − f (blue curve) in this neighborhood and provides us with control for the hitting points x and x u . The perturbation f − g n is given by the red curve.
To prove the second claim, we once again lower bound cs n . We first consider the case s n = 2Kt n (x). If c ≥ 1, then, we get cs n ≥ s n ≥ 2Kt n (x) ≥ t n (x) and Otherwise, if c < 1, then, we can apply (LDP) in total k = log 2 (1/c) times so that 2 k c ≥ 1 and obtain We now consider the case cs n = 2Kct n (x * ) + δc|x * −x|/2. Suppose without loss of generality that x * ≤x. If c ≥ 2/δ > 1 then we get Otherwise, if c < 2/δ, we can apply (LDP) in total k = log 2 (1/(δc)) + 1 times, so that 2 k δc/2 ≥ 1 to obtain Since 2 k c ≥ 2/δ, we proceed as in the previous case to obtain Combining both cases, for any K > 1/2, any 0 < δ < 1 and any c > 0,
B.4 Proofs for upper bounds
Proof of Theorem 4. Because of |z| = max(z, −z) and since all arguments carry over to the other case, it is enough to show that We follow the proof strategy outlined in Section 4. Equation (14) shows that whenever ( f n (x) − g(x))(g(x) − f 0 (x)) ≥ 0 for all x ∈ [0, 1], then, Take for g the function g n constructed in Lemma 23 with ψ = f and f = f 0 . In particular, part (ii) of Lemma 23 ensures that ( f n (x) − g n (x))(g n (x) − f 0 (x)) ≥ 0 for all x ∈ [0, 1]. As indicated in the proof strategy section, we now derive a contradiction by obtaining a lower and upper bound for (35).
Lower bound for the left hand side of (35): Let s n be as defined in (31).
The interval I has length ≥ sn 8 ∧ 1 2 and for all x ∈ I, f n (x) − g n (x) ≥ 1 4 s n . By restriction of the sum to all {i : X i ∈ I} and using Lemma 23 (iv), we find Let K ≥ 1/2, implying s n ≥ t n (x) > log n/n and ∆ n ≤ 1 16 log n/n ≤ s n /16. This ensures the existence of two integers 0 ≤ 1 < k 1 ≤ N n , such that In particular, this implies (k 1 − 1 )∆ n ≥ sn 16 > 0 and [ 1 ∆ n , k 1 ∆ n ] ⊆ I. Notice that 1 , k 1 are random variables.
Observe that D − − log 2 (1/16) = D −4 . Applying the lower bound (i) in Lemma 24, we find with P X the conditional distribution defined in (30). As a final step, we use that for n ≥ exp(4K 2 ), s n = 2Kt n (x) ∧ (2Kt n (x * ) + δ|x * −x|/2) ≤ 2Kt n (x) ≤ √ log n sup x∈[0,1] t n (x), therefore we can apply the local doubling property of P X in total five times to obtain Upper bound for the right hand side in (35): We now derive an upper bound for 2 n i=1 i f (X i ) − g(X i ) . Since f − g is supported on a small subset of [0, 1], it is advantageous to study the sum over X i in the support. Define I k, (X) := {i ∈ {1, . . . , n} : X i ∈ [ ∆ n , k∆ n ]} and write m k, (X) for the cardinality of the set I k, (X). By a slight abuse of notation, define the variables Z 1 , . . . , Z m(X) to be the m(X) variables X i such that i ∈ I(X). Additionally, for 0 ≤ a < b ≤ 1 denote the class of 1-Lipschitz functions supported on the interval [a, b] by For a function h ∈ E [a,b] , we say that is the effective empirical semi-norm. The effective refers to the fact that the semi-norm is computed based on the 'effective' sample Z 1 , . . . , Z m(X) . From now on, we follow the same steps as in Chapter 13 from [38] and replace the sample size n by the effective sample size m(X). As we consider variance one in the regression noise, the standard deviation σ in [38] has to be set to one. The critical inequality with σ = 1 is Where G(η, E [a,b] ) is the Gaussian complexity of the set E [a,b] , that is, In this setting, observing that E [a,b] is star-shaped, one can prove a modified version of their Theorem 13.1 Theorem 25. If η n is a positive solution to the critical inequality, then for any t ≥ η n , it holds that Along with Theorem 25 comes a modified version of Corollary 13.1 [38] stating a sufficient condition for any η n to be solution of the critical inequality (38).
satisfies the critical inequality, and hence can be used in the conclusion of Theorem 25. Therefore η n satisfies (39) and, à fortiori, the critical inequality (38). Define On the set S k, , we have by Lemma 23 Using the first claim of Lemma 24 and the fact that D ≥ 2, we find that Observing that exp(−K 2/3 R log n) = n −K 2/3 R and choosing K large enough, we can achieve polynomial decay in n of any order. If (k, ) / ∈ T , then (42) implies P (D k, ) = 0. Define the random variables 0 ≤ < k ≤ N n such that With D k, as defined above, applying (44), N n ≤ 32 n/ log n and the union bound yields for any The convergence is uniform over f ∈ Lip(1) and P X ∈ P n (D).
In a next step of the proof, we provide a simple upper bound of the least squares distance on the set D k, . Using once again that for K ≥ 1/2, ∆ n ≤ s n /16 ≤ s n , Lemma 23 (iii) yields x − 2s n ≤x − s n − ∆ n ≤ ∆ n ≤ x < x u ≤ k∆ n ≤x + s n + ∆ n ≤x + 2s n , which allows to further upper bound the right most inequality in (43). Consequently, on D c k, ∩ Γ n (D −4 ) c , we have Combining the bounds for (35): Using the lower bound (37) and the upper bound derived above, (35) implies that on the event D c k, ∩ Γ n (D −4 ) c , s 2 n 32D 5 n P X x ± 2s n ≤ 16 · 17 2 2n P X [x ± 2s n ] (4s n ) 2 log 2 n Rearranging the terms in (46), and raising both sides to the power 3/2 gives Taking K large enough results in a contradiction. Hence, on D c k, ∩Γ n (D −4 ) c , and for all sufficiently large K, we must have The probability of the exceptional set tends to zero because by (45) and Lemma 21, We distinguish then two cases, either x ∈ I 2 or x ∈ I 1 ∪ I 3 .
x ∈ I 1 ∪ I 3 : By symmetry, it is sufficient to prove the inequality for x ∈ I 1 (we can apply the same to the density x → p(1 − x) and obtain the results on I 2 ). For x ∈ I 1 , we have 0 < β ≤ 1: In this case, |p n (u) − p n (x)| ≤ κ|u − x| β . Plugging this in (52), using t n (x) 2 P n X ([x ± t n (x)]) = log n/n and (48) leads to (50).
B.5 Estimation of spread function t n
The symmetric difference of two sets C, D is C D := (C \ D) ∪ (D \ C).
Remark 3. Note that in (4.3) of [1], nα n ↓ should be nα n ↑ and in (4.4) of [1], α n has to be replaced by γ n , see also the proof of Theorem 4.1 on p. 417 of [1]. A consequence of the previous result is that because of σ 2 (C D) ≥ γ n ≥ 1/n, for all sufficiently large n, we have ψ(σ(C D)) ≤ P(C D) log n and hence lim sup almost surely.
Proof of Theorem 8. It is enough to show that the statement holds for max n>1 replaced by lim sup n if we also can prove that for any finite N, To see this observe that by definition sup x t n (x) ≤ 1. By definition, the spread function solves t 2 n (x) P X ([x ± t n (x)]) = log n/n. Combining this with the inequality P Therefore, proving (57). It thus remains to prove the statement with max n>1 replaced by lim sup n . The class of all half intervals {(−∞, u] : u ∈ R} is VC. Thus, (56) with γ n = log 2 n/n and α n = n −1/4 applied to the class of half intervals gives almost surely. Remark 2 ensures that P X ([x ± t n (x)]) ≥ log 2 n/n. Due to (58) and the definition of the spread function, P X ([x±t n (x)]) = log n/(nt n (x) 2 ) ≤ (log n/n) 1/3 p 2/3 ∞ . Thus for all sufficiently large n, we have log 2 n/n ≤ P X ([x ± t n (x)]) ≤ n −1/4 . Applying (59) shows that there exist a n (x) and a constant A independent of n, such that for any x ∈ [0, 1], and sup x |a n (x)| ≤ A, almost surely. Suppose now that t n (x) > Q t n (x) with Q := (1 + A/ √ log n). By (3), t n (x) < 1/ √ log n. Using the definitions of t n (x) and t n (x), we have almost surely. This is a contradiction and hence t n (x) ≤ Q t n (x), almost surely.
Arguing similarly as above, we find that almost surely. Rewriting this gives almost surely. Since t(x) was arbitrary, applying the definition of the estimator t n (x) yields t n (x) ≥ R −1 t n (x), almost surely. Combined with t n (x) ≤ Q t n (x), this proves almost surely. For any positive number u, Without loss of generality, we can assume that n is sufficiently large such that √ log n ≥ 2A.
Proof. Suppose that for some x 0 ∈ [0, 1] and some t > 0, P X ([x 0 ± t]) = 0. We prove that this implies the contradiction 1 = P X ([0, 1]) = 0. By considering sub-intervals, one can without loss of generality assume that t ≤ √ log n sup x∈[0,1] t n (x). Therefore one can apply (LDP) to obtain that One can then repeat the previous step 1/t times to obtain that Since this is a contradiction, the proof is complete.
The following lemma provides an easily checkable condition for which a distribution P X ∈ M with monotone density p is doubling.
Lemma 29 (Lemma 3.2 from [11]). Let w be a locally integrable, monotonic function on R + and define F : x → x 0 w(u) du. Denote by µ the measure A ∈ B(R + ) → A w(u) du ∈ R + . If w is increasing, then µ is doubling if and only if there exists a constant γ ∈ (0, 1/2) such that for all x, If w is decreasing, then µ is doubling if and only if there exists a constant γ ∈ (1, 2) such that, for all x, Where doubling means that there exists a constant D ≥ 2 such that for all x ∈ R + and all η > Proof of Lemma 10. We first prove that P X ∈ P G (4p/p). To see that, for any x ∈ [0, 1], consider We now prove (9). For all n > 2 and x ∈ [0, 1], t n (x) < 1 and t n (x)p ≤ P X ([x ± t n (x)]) ≤ 2t n (x)p. By using the definition of the spread function t n , we obtain which, once rearranged, yields the desired inequality.
By assumption n ≥ 9. According to Lemma 19 and Remark 1, we can split [0, 1] into three intervals I 1 := [0, a n ), I 2 := [a n , b n ] and I 3 := (b n , 1] such that t n (a n ) = a n , t n (b n ) = 1 − b n , on I 1 , t n (x) > x, on I 2 , t n (x) ≤ x ∨ (1 − x) and on I 3 , t n (x) ≥ 1 − x. Moreover, from the expression for the derivative t n in Lemma 19 (iii) and from the fact that p is strictly increasing, we know that t n is strictly decreasing on [0, b n ) and t n is strictly increasing on (b n , 1]. We now derive expressions for the boundaries a n , b n . Derivation of a n : The solution a n of the equation t n (x) = x must satisfy 2 α+1 a α+3 n = log n n , which can be rewritten as a n = log n 2 α+1 n 1/(α+3) .
As mentioned before, the spread function t n is strictly decreasing on I 1 and for all x ∈ I 1 , t n (a n ) ≤ t n (x) ≤ t n (0). Hence, completing the proof of (10).
Using that in this regime t n (x) ≤ x implies Using the definition of t n and the previous inequalities, we get which, once rearranged, yields, for all x ∈ I 2 , log n 2 α+1 (α + 1)nx α Third regime, t n (x) ≥ 1 − x: As mentioned before, in this regime the spread function t n is strictly increasing. We already have suitable bounds for the value of t n (b n ) and need now an upper bound for the value of t n (1). We have P([1 − t n (1), 1]) = 1 − (1 − t n (1)) α+1 ≥ 1 − (1 − t n (1)) = t n (1). Therefore, t n (1) 3 ≤ log n/n and t n (1) ≤ (log n/n) 1/3 . Combining the bounds for t n (b n ) and t n (1) yields that for any x ∈ I 3 , log n 2(α + 1)n Using that b n > 1/2 and 2 α x α ≥ 1 for x ≥ 1/2, the bounds in the second and third regime can be combined into log n 2 α+1 (α + 1)nx α for a n ≤ x ≤ 1.
D Proofs for Section 5
Proof of Lemma 12. By Theorem 4, 1 0 ( f n (x) − f 0 (x)) 2 q(x) dx ≤ K 1 0 t P n (y) 2 q(y) dy with probability tending to one as n → ∞. To obtain the first inequality with K = 4K 2 , it is therefore enough to show that 1 0 t P n (y) 2 q(y) dy ≤ 4 To verify (65), observe that for any y, t P n (y) ≤ 1 and hence 1 0 t P n (y) 2 q(y) dy ≤ 2 dx t P n (y)q(y) dy.
By construction of the second integral, we have that |x − y| ≤ t P n (y)/2. From Lemma 19, we know that t P n is a 1-Lipschitz function and therefore, |t P n (x) − t P n (y)| ≤ |x − y| ≤ t P n (y)/2, implying t P n (y)/2 ≤ t P n (x) ≤ 3t P n (y)/2 so that x ∈ [y ± t P n (y)/2] implies y ∈ [x ± t P n ( Combined with (66) and t P n (y)/2 ≤ t P n (x), it follows that To prove the second inequality, observe that 1 t P n (x) = n log n P([x ± t P n (x)]) ≤ 2 n log n t P n (x) p ∞ , which can be rewritten into 1/t P n (x) ≤ (2 p ∞ n/log n) 1/3 .
Proof of Theorem 15. To shorten the notation, we suppress the dependence of P n X and Q n X on n, write N := n + m for the size of the combined samples, and set t n,m (x) := t P n (x) ∧ t Q m (x). In a first step of the proof, we construct a number of disjoint intervals on which the mixture distribution P = n N P X + m N Q X assigns sufficiently much mass. Let ψ N = (log N/N ) 1/3 and M N = 1/(2ψ N ) . Furthermore, let N 0 be the smallest positive integer, such that for all N ≥ N 0 , Clearly, N 0 only depends on C ∞ . We now prove the theorem for all N ≥ N 0 . The second constraint implies that M N ψ N ≤ (1/(2ψ N ) + 1)ψ N ≤ 3/4.
Thus, there exist at least b(N/ log N ) 1/3 intervals I k such that P(I k ) ≥ ψ N . By a slight abuse of notation, denote by k 1 , . . . , k s N the indexes for which P(I k ) ≥ ψ N . Together with (67), s N ≥ b(N/ log N ) 1/3 ≥ N 1/4 for all N ≥ N 0 . Write x j := (2k j − 1)ψ N for the center of the interval I kj and observe that using the definition of the spread function, If t n,m (x j ) > √ 2ψ N , then, t n,m (x j )P(I k ) > 2ψ 3 N = 2 log N/N, which is a contradiction. Therefore, t n,m (x j ) ≤ √ 2ψ N for all j = 1, . . . , s N . We now apply the multiple testing lower bound together with Theorem 2.5 from [36]. In order to do so we construct s N + 1 > 0 hypotheses f 0 , . . . , f s N ∈ Lip(1) such that, Together with the first inequality of the lemma, we then have for all n ≥ m ≥ M.
Proof of Lemma 17. Using Theorem 14, we have that with probability tending to one as n and m tend to infinity. Let a n = (log n/n2 α+1 ) 1/(α+3) . Rewriting the condition n 3/(3+α) log α/(3+α) n m shows that (m/n) 1/α > a n for all sufficiently large n. Using Lemma 10, Lemma 11 and m ≤ n, we find that | 2022-04-12T01:16:20.238Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "61d4ac570f823610cbacda66d6167a15c6077328",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8af9cb14c1d6bd052d4a8e270487da046adb0dfc",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
54014520 | pes2o/s2orc | v3-fos-license | Establishment of stable cell lines in which the HBV genome replicates episomally for evaluation of antivirals
Introduction Due to the increasing resistance to nucleot(s)ide analogs in patients with chronic hepatitis B, development of new antiviral drugs to eradicate hepatitis B virus is still urgently needed. Material and methods To date, most studies evaluating anti-HBV drugs have been performed using cell lines in which the HBV genomic DNA is chromosomally integrated, e.g. HepG2.2.15, whereas in HBV-infected livers the viral episomal genome replicates in the nucleus and covalently closed circular DNA (cccDNA) serves as the transcriptional template. Another option involves the use of HBV-infected HepaRG or NTCP-overexpressing cells. However, the development of such infection systems is expensive and laborious, and their HBV expression levels remain low. Results Compared to parental HuH7 cells, the established stable cell lines based on episomal-type pEB-Multi vectors expressed wild-type HBV, as shown by qRT-PCR and immunoblotting (p < 0.05). In the mutant cell lines, HBV replication remained sensitive to entecavir but resistant to the nucleoside analog lamivudine. Conclusions The established cell system is useful for evaluating antiviral agents and their mechanisms of action.
Introduction
Hepatitis B virus (HBV) is a member of the hepadnavirus family, which comprises unique DNA viruses that initiate reverse transcription during replication. About 350-400 million people are chronically infected with HBV and HBV-related liver diseases such as cirrhosis, liver failure and hepatocellular carcinoma, resulting in one million deaths annually worldwide [1]. Nucleoside analog inhibitors of HBV DNA polymerase are the current treatment options for chronic hepatitis B that has resistance to antiviral drugs; particularly lamivudine resistance is seen in 80% of patients treated for 5 years, and has a cumulative annual incidence of 14-32% [2,3]. Thus, screening new antiviral drugs remains essential for the cure of clinical patients.
HepG2.2.15 cells, which contain two integrated head-to-tail copies of a genotype D genome and stably replicate HBV [4], are currently used to evaluate the effect of antiviral compounds [5][6][7]. However, in HBV-infected liver cells the viral episomal genome cccDNA serves as the transcriptional template and is not integrated into the host genome. Other options, such as recombinant baculovirus infection and transient transfection systems, are used for evaluating the drug resistance of a given mutant archive and the cross-resistance profile of HBV mutants [8], owing to their capability and convenience in efficiently initiating viral DNA replication. However, reliable cell-based assays are required for detailed investigation of the mechanisms of action of lamivudine-resistant viral mutants and for high throughput screening (HTS) of compounds that inhibit lamivudine-resistant HBV [9]. To address this issue, several cell lines stably replicating lamivudine-resistant HBV mutants have been reported [10][11][12][13][14]. However, these cell lines show limited persistence of viral particle-associated DNA copies in the supernatant, which reduces the efficiency of evaluating antiviral agents in cell lines harboring the wild-type HBV genome and lamivudine-resistant virus mutants.
Hence, in the current study, we established a stable cell system based on the episomal-type pEB-Multi vector that can stably replicate the HBV genome of genotype Ce. Furthermore, we constructed lamivudine-resistant virus mutants by stable transfection of the mutated genome into hepatoma HuH7 cells. This cell line persistently produces hepatitis B surface antigen (HBsAg) for over 21 days, pregenomic RNA for over 60 days, and particle-associated DNA at stably high levels for at least 7 days. Therefore, this cell line is considered suitable for screening new antiviral agents against lamivudine-resistant HBV mutants.
Plasmid construction
The mutant plasmid pEB-HBV-puromycin (L180M + M204V) was constructed using the HBV genome (genotype C, subtype adw, GenBank Accession No. AY066028) isolated from FH4. The pEB-Multi vector carries the oriP-EBNA1 system and the puromycin resistance gene. A 1.3 × unit-length HBV genome was inserted downstream of the puromycin resistance gene (Figure 1 A). The pEB-HBV-puromycin (L180M + M204V) plasmid contains the L180M + M204V HBV genome, with pregenomic RNA expression under the control of the basal core promoter element, and was derived from pEB-HBCe-puromycin (wild-type) by site-directed mutagenesis.
Cell culture, transfection and selection of stable cell lines
HuH7 liver cancer cells of human origin were maintained in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum. Cells (2 × 10^6 cells/well in 10 cm plates) were transiently transfected with 5 µg of plasmid DNA mixed with Lipofectamine LTX (Thermo Fisher Scientific). After 24 h of culture, the medium was replaced with medium containing 5 µg/ml puromycin, and after a further 7 days of culture under puromycin selection the cells were prepared for assay.
Quantification of HBV DNA and RNA
Quantification of HBV DNA and RNA was performed as previously described [15]. The HBV DNA in the culture supernatant collected from the transfected cells was treated with PNE solution (8.45% PEG, 0.445 mole NaCl and 13 mmol EDTA) for 1 h on ice. The pellets were incubated with DNase I (TAKARA, Shiga, Japan) and RNase (TaKaRa) for 1 h at 37°C. The pellets were then treated with proteinase K for 12 h at 56°C, and HBV DNA was separated by phenol/chloroform extraction and ethanol precipitation. HBV DNA copies were determined by qPCR. For quantification of HBV 3.5 kb pgRNA, total RNA was extracted from HBV-transfected cells using TRI reagent (Molecular Research Center, Cincinnati, OH, USA). After treatment with DNase I and RNase inhibitor, cDNA templates were synthesized and HBV RNAs were quantified by qPCR using the SYBR qPCR Mix kit (Toyobo, Osaka, Japan) using 5′-TCCCTCGCCTC-GCAGACG-3′ and 5′-GTTTCCCACCTTATGAGTC-3′ for unspliced 3.5 kb RNA, β-actin mRNA primers (5′-TTCTACAATGAGCTGCGTGTG-3′ and 5′-GGG-GTGTTGAAGGTCTCAAA-3′). For semi-quantitative RT-PCR, cDNA templates were amplified with primers as previously reported [15].
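Quantification from qPCR runs of this kind is typically done either from a standard curve (for absolute HBV DNA copy numbers) or by normalization to β-actin (for relative pgRNA levels). The snippet below is a generic illustration of both calculations with made-up Ct values; it is not the analysis pipeline used in this paper, and the numbers, slope and intercept are placeholders.

```python
def copies_from_standard_curve(ct, slope, intercept):
    """Absolute quantification: log10(copies) = (ct - intercept) / slope,
    with slope and intercept fitted from a dilution series of an HBV DNA standard."""
    return 10 ** ((ct - intercept) / slope)

def relative_expression_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative pgRNA level by the 2^-ΔΔCt method, normalized to β-actin."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Placeholder values for illustration only.
print(copies_from_standard_curve(ct=24.1, slope=-3.32, intercept=38.0))
print(relative_expression_ddct(ct_target=22.4, ct_ref=17.0,
                               ct_target_ctrl=25.0, ct_ref_ctrl=17.2))
```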
Immunoblotting
Immunoblotting was performed as previously described [16]. Briefly, cell lysates were separated by SDS-PAGE and transferred onto PVDF membranes. The membrane was blocked for 1 h and then incubated with primary antibodies: a rabbit anti-HBc antibody raised against the HBc protein, an anti-HBs antibody (Immunology Institute, Tokyo, Japan), and an anti-GAPDH antibody (Santa Cruz Biotechnology). After washing, the membrane was incubated with HRP-conjugated secondary antibody (Cell Signaling Technology, Danvers, MA) for 0.5-1 h. Antigen-antibody complexes were detected using the ChemiDoc Imaging System (Bio-Rad Laboratories, Tokyo, Japan).
Construction of HBV wild-type and mutant plasmids
Firstly, we constructed episomal-type HBV wildtype and mutant plasmids pEB-HBCe (L180M + M204V) and pEB-HBCe. After that, we evaluated whether they were successfully constructed and their replication ability in hepatocarcinoma cells. Northern and western blotting under the control of basal core promoter of the virus after transfection into HuH7 cells were performed. The results revealed that the HBV expression plasmids pEB-HBV-puromycin (L180M + M204V) and pEB-HBCe-puromycin could produce authentic HBV RNA (Figure 1 B) and HBs protein in cells (Figure 1 C). These results confirmed the successful construction of HBV wild-type and mutant plasmids.
Establishment of pEB-Multi plasmid based cell lines by stably replicating HBV wild-type and virus mutant resistant to antiviral drugs
To confirm HBV replication activity in the pEB-Multi plasmid-based cell lines, HBV pregenomic RNA (pgRNA) and protein production were determined after transfection of HuH7 cells with pEB-HBV-puromycin (L180M + M204V) and pEB-HBCe-puromycin and selection under increasing concentrations of puromycin. The results showed that 5 µg/ml puromycin was the most suitable concentration, supporting cell growth together with high HBV pgRNA expression and protein production (Figures 2 A, B). These results indicated that the cell lines could stably express the wild-type HBV and mutant plasmids, and that the mutant type was resistant to antiviral drugs.
Time course of expression of viral antigens, viral RNAs and DNA formed in the supernatant
To determine the time course of expression of HBV wild-type and mutant, we measured the production of viral DNAs, RNAs, and antigens after culturing the novel stable cell line. Time-dependent expression of HBV RNAs (Figure 3 A) and antigens (Figure 3 B) was observed in cell lines stably replicating HBV wild-type and virus mutants resistant to lamivudine after 45 days in cell culture containing 5 µg/ml puromycin. We also observed time-dependent expression of HBV DNAs (Figure 3 C) in the supernatant. These results demonstrated that pEB-based cell lines can persistently replicate with HBV and virus mutants.
Effect of antiviral drugs on HBV replication of wild-type and virus mutants resistant to lamivudine
The antiviral activities in the presence of increasing concentrations of lamivudine and entecavir were screened to validate whether this cell line would be appropriate for screening antiviral agents. Transfected HuH7 cells were selected with puromycin for 7 days, and cell lines stably replicating wild-type and mutant HBV were treated with lamivudine and entecavir in a dose-dependent manner. Cells were further cultured under nucleoside analog treatment and harvested after 7 days. HBV DNA (Figure 4 A) and viral pgRNA (Figure 4 B) levels were detected by real-time PCR in the supernatant and cells. Entecavir was tested at 0.1, 1, and 10 µM and lamivudine at 1, 10, and 100 µM for effects on HBV DNA and pgRNA in the wild-type and lamivudine-resistant cell lines. We found that HBV DNA levels were suppressed by entecavir as well as lamivudine in the wild-type cell line, but were decreased only by entecavir treatment in the lamivudine-resistant cell line. No effect on pgRNA was observed in either cell line treated with lamivudine or entecavir. These results indicated that the mutant cell line based on episomal-type pEB-Multi vectors was resistant to lamivudine but remained sensitive to interferon or entecavir. This system is therefore useful for evaluating antiviral agents and investigating their mechanisms of action. Drug resistance has become a major problem of chronic HBV therapy, especially for lamivudine and adefovir. The resistance to these older drugs has seriously affected the efficacy of the newer drugs. HepG2.2.15 cells, which are still the most widely used for analysis of the HBV life cycle and for antiviral studies, are stable HBV-producing cell lines of genotype D that are used despite significant limitations, such as the absence of viral replication from cccDNA [10]. However, different methods have been used to study the sensitivity of HBV to antiviral drugs in vitro. Stably transfected HBV cell lines have become an important tool for HBV and anti-HBV drug research [4]. In our study, we established stable cell lines based on episomal-type pEB-Multi vectors that stably replicate wild-type HBV and lamivudine-resistant virus mutants for over 1 month. HBV production and particle-associated HBV DNA in culture supernatants were determined (Figure 3 C), and in the mutant cell line replication was sensitive to entecavir but resistant to lamivudine (Figure 4 A). Unlike techniques that artificially introduce a few particular mutations into a wild-type HBV background, the full-length amplification and transfection method allows determination of antiviral efficacy directly on samples and is more useful for screening antiviral agents. Lamivudine was approved for use in 1998, followed by adefovir, telbivudine, entecavir and tenofovir in 2003, 2005, 2006 and 2008, respectively. Since the number of HBV antiviral drugs is limited, transmission of mutant virus is of particular importance for HBV infection, as mutations that confer cross-resistance can leave patients with few therapeutic options. Indeed, there are recent reports of lamivudine- and adefovir-related mutations in acute HBV infection in both China and Japan [17].
Discussion
Increased use of antiviral drugs for chronic hepatitis B has led to increased antiviral resistance. Because of this, transmission of resistant HBV is a growing concern. A similar scenario has already been witnessed in acute HIV infection. Novel cell lines are not only useful for assessing new antiviral inhibitors, but also for investigating their mechanisms of actions. On the other hand, cell lines are convenient to determine the roles of host proteins that have DNA-and RNA-binding properties in HBV replication. There are a variety of liver-enriched and ubiquitous transcription factors that target the promoter and enhancer regions to regulate viral transcription and replication [18][19][20][21]. A number of host cytokines that have been identified to interact with the ENII/ BCP region to modulate HBV transcriptional activity are mostly active activators that stimulate cis-acting elements. For example, liver-enriched or ubiquitous transcription factors such as PPAR, HNF4, HNF3, C/EBP, RXR, FTF/LRH-1, TBP, FXR, PGC-1, SIRT1 and SP1 [22,23] bind to ENII, contributing to the up-regulation of the core promoter activity.
Regarding the transcriptional repression mechanism of HBV gene expression, it is known that a negative regulatory element (NRE) located immediately upstream of ENII participates in down-regulation of core promoter activity in a direction-independent manner. In addition, ENII has not been widely reported, but it involves negative regulation of activities. Prox 1, called FTF/LRH-1 co-repressor, inhibits FTF/LRH-1 mediated ENII activation [24]. Down-regulation of IL-4 expression by C/EBP may inhibit core promoter activity [25]. It also indicates that TRIM protein and COUP-TF1 potentially contribute to the inhibitory activity of ENII [26,27]. LUC7L3, a member of the SR protein family, is a novel ENII-mediated negative regulator of HBV replication [16,28,29].
In conclusion, we constructed novel stable HBV-producing cell lines harboring wild-type and lamivudine-resistant HBV. These cell lines can serve as valuable tools for screening antiviral agents and analyzing virus-host interactions in vitro. | 2018-11-29T01:30:01.559Z | 2018-11-20T00:00:00.000 | {
"year": 2018,
"sha1": "79bc3de22c40e7471cd47c191adf7a75354f8b5f",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.archivesofmedicalscience.com/pdf-95101-55092?filename=Establishment%20of%20stable.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4cfc294a2621703a33802762ba3f7a6ca939ccba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264935946 | pes2o/s2orc | v3-fos-license | BenzoHTag, a fluorogenic self-labeling protein developed using molecular evolution
Self-labeling proteins are powerful tools in chemical biology as they enable the precise cellular localization of a synthetic molecule, often a fluorescent dye, with the genetic specificity of a protein fusion. HaloTag7 is the most popular self-labeling protein due to its fast labeling kinetics and the simplicity of its chloroalkane ligand. Reaction rates of HaloTag7 with different chloroalkane-containing substrates are highly variable, and rates are only very fast for rhodamine-based dyes. This is a major limitation for the HaloTag system because fast labeling rates are critical for live-cell assays. Here, we report a molecular evolution system for HaloTag using yeast surface display that enables the screening of libraries of up to 10^8 variants to improve reaction rates with any substrate of interest. We applied this method to produce a HaloTag variant, BenzoHTag, which has improved performance with a fluorogenic benzothiadiazole dye. The resulting system has improved brightness and conjugation kinetics, allowing for robust, no-wash fluorescent labeling in live cells. The new BenzoHTag-benzothiadiazole system has improved performance in live-cell assays compared to the existing HaloTag7-silicon rhodamine system, including saturation of intracellular enzyme in under 100 seconds and robust labeling at dye concentrations as low as 7 nM. It was also found to be orthogonal to the HaloTag7-silicon rhodamine system, enabling multiplexed no-wash labeling in live cells. The BenzoHTag system, and the ability to optimize HaloTag for a broader collection of substrates using molecular evolution, will be very useful for the development of cell-based assays for chemical biology and drug development.
Introduction
Genetically encoded sensors have become essential in the biochemical sciences.1,2 Such biosensors were originally developed using fluorescent proteins, but advancements in both synthetic dye chemistry and protein engineering have enabled improved chemogenetic constructs for cellular imaging.[5][6][7][8][9] HaloTag7 reacts with a linear chloroalkane ligand which can be appended to any molecule of interest, most commonly fluorescent dyes (Fig. 1a) but also a wide variety of substrates including biomolecules.[12][13][14][15] HaloTag7 has proven to be very versatile given its genetic encodability and substrate modularity. However, its reaction rate with chloroalkane-tagged substrates is highly variable and substrate-specific.16 Specifically, HaloTag7 reacts with chloroalkane-tetramethylrhodamine (CA-TMR, Fig. 1) with second-order rate constants greater than 10^7 M^-1 s^-1, but this rate decreases substantially when non-rhodamine dyes are used. For example, the chloroalkane conjugate of AlexaFluor 488, a synthetic derivative of fluorescein that still bears a xanthene core, has a reaction rate 3 orders of magnitude slower than that of CA-TMR (2.5x10^4 M^-1 s^-1).16 When non-xanthene dyes are used, like benzothiadiazoles or stilbenes,[17][18][19] this rate plummets further, below 10^3 M^-1 s^-1. HaloTag7's preference for CA-TMR can be explained by the fact that CA-TMR was the substrate in the original HaloTag7 engineering efforts; HaloTag is no exception to the maxim "you get what you screen for."20,21 In cellular assays, slow reaction kinetics results in sluggish or incomplete HaloTag7 labeling22 and/or the need for higher concentrations of dye in cellular experiments, which leads to high background fluorescence. 119][30] However, many fluorogenic HaloTag7 ligands deviate from the rhodamine scaffold and thus suffer from slow reaction rates.17-19,31,32 Rhodamine-based fluorogenic dyes such as CA-JF635 (Fig. 1b) have also been developed, and their conjugation kinetics are faster than those of non-rhodamine dyes, but they do not approach the super-fast kinetics of CA-TMR.11,16,33,34 Given the broad utility of HaloTag7 in chemical biology, broadening its substrate scope to enable more rapid kinetics with non-xanthene dyes would permit a large expansion of the biosensor toolbox, including the use of more varied fluorogenic dyes.
While most work to date has focused on improving dyes as substrates for HaloTag7, recent efforts to alter HaloTag7 to improve performance with a specific chloroalkane substrate have been reported. Liang, Ward, and coworkers screened a library of 73 recombinantly expressed and purified single-mutant HaloTag variants for improved activity of a catalytic metal center and, in a separate report, a similar library was screened for improved fluorogenic and labeling properties of a styrylpyridium dye.35,36 Frei, Johnsson, and coworkers engineered HaloTag to modulate the fluorescence lifetimes of fluorogenic rhodamine dyes to enable multiplexed fluorescence lifetime imaging.37 They employed a HaloTag7 library generated by site-saturation mutagenesis of 10 pre-selected residues followed by screening of bacterial lysates. Subsequent rounds of screening utilized sub-libraries generated by combinations of the best-performing single mutants. While both strategies produced improved HaloTag7 variants for their given application, they were limited by their screening throughput. In this work, we develop a molecular evolution system for HaloTag that can screen 10^7 to 10^8 variants for optimal properties, including faster conjugation kinetics. The system was applied to produce an optimized HaloTag7 variant with improved kinetics for a fluorogenic benzothiadiazole dye. The new self-labeling protein, BenzoHTag, enables rapid wash-free intracellular labeling of live mammalian cells. Further, the new protein•dye system shows orthogonality to HaloTag7, which enabled both systems to be used simultaneously for multiplexed, wash-free labeling in live cells.
Evolving HaloTag7 Using Yeast Surface Display
In contrast to previous methods for HaloTag7 evolution, we sought to employ a method that would enable higher throughput.
We adapted yeast surface display for this purpose, [38][39][40][41][42] which provides several benefits for protein evolution, including the ability to sort using fluorescence-activated cell sorting and the inclusion of epitope tags that allow independent measurements of protein activity and expression level (Fig. 2a). 43 HaloTag7 was incorporated into a yeast display construct, and the activity of HaloTag7 on the yeast surface was verified by treating yeast with CA-TMR. Robust CA-TMR signal was observed for yeast cells expressing HaloTag7 but not for cells expressing the catalytically inactive D106A mutant (Fig. S1a). Immunostaining of HA and/or Myc epitopes produced a linear correlation with CA-TMR signal, demonstrating independent measurements of labeling activity and expression levels (Figs. 1a, S1b). This was important to avoid bias towards high-expressing variants in subsequent screens. We used error-prone PCR to generate four sub-libraries with 2 to 6 mutations per variant (Tables S3, S4). After verifying that each sub-library retained some activity (Fig. S2), they were pooled to yield an input library of 2.5x10^8 variants. We filtered the input library to remove catalytically dead variants by treating a pool of over 4x10^9 yeast with excess chloroalkane-biotin and then isolating biotinylated yeast using magnetic streptavidin beads. This pre-screen produced a filtered input library of functional HaloTag variants exceeding 5x10^7 unique members.
To validate the HaloTag yeast display system, we screened the HaloTag variant library against Bz-1, a benzothiadiazole dye that we recently developed as a fluorogenic HaloTag ligand (Fig. 1b). 19 Benzothiadiazoles are a class of fluorogenic dyes that are nonfluorescent in aqueous solution and fluorescent in non-polar environments, including the HaloTag7 active site channel, as originally demonstrated by Zhang and coworkers. 17,44 Bz-1 was cell-penetrant, it had low background in mammalian cells, and, when conjugated to HaloTag7, Bz-1 had spectral properties that align with GFP and AlexaFluor 488, allowing the use of common blue lasers and blue/green filter sets. 19,45,46 Further, Bz-1's small size, large Stokes shift of 70 nm (which limits self-absorption), and ease of derivatization render it a nearly ideal dye for turn-on fluorescence labeling in cells. Despite these favorable properties, the reaction rate of HaloTag7 conjugation to Bz-1 was slower than the rates of many commonly used HaloTag7 substrates. 19 Thus, we sought to improve the fluorogenic system by screening for HaloTag7 variants with improved reaction kinetics, and potentially improved fluorescence intensity, with Bz-1. We subjected the filtered library to iterative rounds of screening by fluorescence-activated cell sorting with the substrate Bz-1. In each round, we isolated the top 0.5% of cells with high green fluorescence relative to expression level (Fig. 2a). Stringency was increased after each round by decreasing the concentration of Bz-1 and decreasing the incubation time, with round 4 applying 40 nM Bz-1 for one minute (Table S6). After four rounds of screening, there was a clear increase in fluorescence of the sorted variants when treated with Bz-1 compared with HaloTag7 (Fig. 2b-c).
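To see why a 0.5% collection gate enriches improved variants quickly, the idealized sketch below propagates the abundance of a hypothetical improved variant through four sorting rounds. It assumes perfect gating and ignores expression noise, so it is only meant to illustrate the expected order of magnitude of enrichment per round; the initial abundance is an assumption.

```python
# Illustrative (idealized) enrichment model for iterative FACS sorting.
# Assumes an improved variant always falls inside the collection gate,
# while the rest of the library is collected at the stated 0.5% rate.

improved_fraction = 1e-6    # assumed initial abundance of an improved variant
gate_fraction = 0.005       # top 0.5% of cells collected per round

f = improved_fraction
for rnd in range(1, 5):
    # improved variant is retained; background shrinks ~200-fold per round
    f = f / (f + (1 - f) * gate_fraction)
    print(f"after round {rnd}: improved variant ~ {f:.2%} of the pool")
```

Even starting from one copy in a million, such a variant would dominate the pool within three to four rounds under these idealized assumptions, consistent with the clear fluorescence shift observed after round 4.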
HaloTag mutations enhance conjugation with Bz-1
We sequenced 110 colonies from rounds 3 and 4. There were few duplicate sequences, but numerous enriched mutations were observed. Fifteen HaloTag variants that encompassed most of the enriched mutations were identified for further analysis. Notably, most enriched mutations were at residues that were not altered in prior HaloTag engineering efforts (Table S2), 5 with the exception of V245A, which was identified in an earlier HaloTag7 evolution screen with a non-rhodamine, styrylpyridium dye. 36 After comparing the activities of these 15 variants on the surface of yeast (Figs. S4 and S5, Table S7), we selected six of the best-performing variants for recombinant expression and purification (Variants 1-6, Table 1). All six variants demonstrated faster Bz-1 labeling kinetics than HaloTag7 (5- to 27-fold; Fig. 3a,b, Fig. S6, Table S10). The identified mutations also modulated the fluorescence properties of the protein•Bz-1 complex (Fig. 3c). For example, Variant 1, which was the only variant to contain the L246F mutation, produced the largest enhancement in endpoint fluorescence intensity, 14% greater than HaloTag7. Further, all mutants bearing the V245A mutation produced a slightly red-shifted emission maximum (Table 1). Examination of a crystal structure of Bz-2 conjugated to HaloTag7 suggested that all six variants have mutations that alter the environment near the dye's benzothiadiazole core and/or donor amine group (Fig. 3d). 17,36 To explore whether these mutations altered interactions with Bz-1's benzothiadiazole core or its donor amine, we compared the kinetics of Variants 5 and 6 reacting with Bz-1, Bz-2, 17 and Bz-3; 19 these dyes have pyrrolidine, dimethylamine, and morpholine as their amine donors, respectively. The rates of Bz-2 and Bz-3 reacting with Variants 5 and 6 were approximately 10-fold slower than with Bz-1 but approximately 10-fold faster than their rates with HaloTag7 (Fig. S7). These results implied that the newly evolved HaloTag variants specifically recognize both the benzothiadiazole core and the pyrrolidine donor group of Bz-1.
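Second-order rate constants such as those in Fig. 3a and Table S10 are typically obtained by fitting turn-on traces recorded with the protein in excess. The sketch below shows one such fit on hypothetical data under a pseudo-first-order assumption matching the stated conditions (0.25 µM Bz-1 with 1.0 µM protein); the trace values themselves are invented for illustration, and the approximation is only rough at a four-fold protein excess.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fluorescence turn-on trace: time (s) vs. normalized signal.
# In practice these would be plate-reader data for 0.25 uM Bz-1 + 1.0 uM protein.
t = np.array([0, 10, 20, 40, 60, 90, 120, 180, 240, 360], dtype=float)
f = np.array([0.00, 0.28, 0.47, 0.71, 0.84, 0.93, 0.97, 0.99, 1.00, 1.00])

def turn_on(t, f_max, k_obs):
    """Single-exponential approach to plateau (pseudo-first-order approximation)."""
    return f_max * (1.0 - np.exp(-k_obs * t))

(f_max, k_obs), _ = curve_fit(turn_on, t, f, p0=(1.0, 0.01))

protein_conc_M = 1.0e-6        # protein assumed in excess over dye
k2 = k_obs / protein_conc_M    # apparent second-order rate constant
print(f"k_obs = {k_obs:.3g} s^-1, k2 ~ {k2:.3g} M^-1 s^-1")
```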
We next generated Variants 7-10, which combined different mutations observed to correlate with rate enhancements among Variants 1-6. We observed that combining all three of the mutations V245A, F144L, and L211V (Variants 8-10) led to over two-fold faster rates compared with variants carrying only two of these mutations (Variants 5-7). We also observed that variants with the adjacent mutations V245A and L246F showed an overall decrease in reaction rate (Variants 7 and 9, compared with 6 and 8). Lastly, L221S appeared to be a spectator mutation co-isolated with more beneficial mutations in Variant 6, as it is located distal to the active site channel and it decreases overall activity (Variant 10 compared with 8).
Bz-1 rapidly labels BenzoHTag in live cells with low background
We selected Variant 10, which we named BenzoHTag, for testing in live mammalian cells. BenzoHTag was cloned as a Histone 2B (H2B) fusion to localize it to the nucleus, and the fusion was transiently transfected into U-2 OS cells. 37 Cells expressing BenzoHTag or HaloTag7 were treated for 10 minutes with concentrations of Bz-1 between 7 and 1000 nM (Fig. 4a). Even at the highest concentration tested, Bz-1 showed minimal background fluorescence in non-expressing cells. BenzoHTag dramatically outperformed HaloTag7 in live-cell labeling with Bz-1, enabling robust fluorescence at 10- to 20-fold lower concentrations of Bz-1. Saturation of the turn-on signal was not observed for HaloTag7-expressing cells even at 1000 nM Bz-1, whereas the turn-on signal saturated for BenzoHTag at 250 nM. Between 30 and 250 nM, BenzoHTag-expressing cells had 5- to 8-fold higher fluorescence over background compared with HaloTag7-expressing cells. BenzoHTag•Bz-1 labeling was also very sensitive: after labeling for 10 minutes with only 7 nM Bz-1, BenzoHTag-expressing cells showed greater than 200-fold signal over background (Fig. 4a). These results highlight that the intrinsic properties of the Bz-1 dye, including the high cell permeability of the Bz-1 substrate and its very low background fluorescence, 19 synergize with the increased reaction rate to allow robust fluorescence detection at very low dye concentrations.
We next evaluated the performance of the BenzoHTag•Bz-1 system in no-wash, live-cell fluorescence microscopy. U-2 OS cells transfected with the H2B-BenzoHTag fusion were treated with 10 nM Bz-1 and imaged without exchanging media. Robust nuclear labeling was observed in BenzoHTag-expressing cells (Fig. 4b), while non-expressing cells within the same image had no observable background fluorescence. When treated with 125 nM Bz-1, cells also showed strong nuclear labeling with no detectable non-specific fluorescence (Fig. S11). We captured movies of Bz-1-treated cells (Movies S1, S2) and quantified the appearance of fluorescence over time. In BenzoHTag-expressing cells, fluorescence approached saturation within 60 seconds of Bz-1 addition, and BenzoHTag-expressing cells saturated at 100% higher fluorescence intensities than HaloTag7-expressing cells (Fig. 4c, Fig. S12). Notably, BenzoHTag-expressing cells treated with only 10 nM Bz-1 were also labeled within seconds and showed in-cell labeling kinetics similar to HaloTag7-expressing cells treated with 125 nM Bz-1. By contrast, no signal could be detected for HaloTag7-expressing cells when treated with only 10 nM Bz-1.
BenzoHTag and HaloTag7 can be used for simultaneous, multiplexed labeling in live cells
Given that BenzoHTag recognizes multiple parts of Bz-1, we wondered whether the BenzoHTag system had evolved away from HaloTag7's strong preference for rhodamine-based substrates. To test this, we measured the kinetics of recombinantly purified BenzoHTag with CA-TMR and its fluorogenic silicon rhodamine analog, CA-JF635. CA-TMR reacted with BenzoHTag with a second-order rate constant of 2.1x10^4 M^-1 s^-1 and CA-JF635 reacted with a rate constant of 1.1x10^2 M^-1 s^-1, which represent 900- and 9000-fold rate decreases, respectively, relative to their rates with HaloTag7 (Fig. 5a, Fig. S8, Table S11). Overall, our kinetic data indicated that Bz-1 reacts 65-fold faster with BenzoHTag than with HaloTag7, while CA-JF635 reacts 9000-fold faster with HaloTag7 than with BenzoHTag (see Supplemental Discussion). These results suggested that BenzoHTag•Bz-1 and HaloTag7•CA-JF635 might be orthogonal enough for multiplexed labeling in cells. We then compared dye fluorescence in cells expressing either nucleus-localized BenzoHTag or HaloTag7. We observed that CA-JF635 preferentially labeled cells expressing HaloTag7 with very little labeling in cells expressing BenzoHTag, and Bz-1 preferentially labeled cells expressing BenzoHTag with very little labeling in cells expressing HaloTag7 (Fig. 5b, S10). Encouraged by these results, we evaluated the ability to multiplex the BenzoHTag•Bz-1 and HaloTag7•CA-JF635 systems in wash-free, live-cell labeling experiments. Bz-1, CA-JF635, or both dyes were added to U-2 OS cells transiently expressing BenzoHTag localized to the nucleus, HaloTag7 localized to the outer mitochondrial membrane, or both. Using 125 nM dye and analyzing cell populations by flow cytometry after 10 minutes of incubation, we observed that Bz-1 predominantly labeled BenzoHTag while CA-JF635 predominantly labeled HaloTag7 (Fig. S13b). To verify orthogonality under no-wash conditions in individual cells, we cotransfected cells with both constructs, treated them with both dyes simultaneously, and observed the cells using confocal fluorescence microscopy with no washes (Fig. 5c, Fig. S13c). When 125 nM of each dye was used, robust labeling by Bz-1 was observed after 10 minutes, but no CA-JF635 labeling was evident by microscopy. Imaging after 60 minutes revealed localization of CA-JF635 labeling to the mitochondria, whereas Bz-1 labeling was observed at both the nucleus and mitochondria (Fig. S13c). We ascribe these observations to faster cell penetration of Bz-1 compared with CA-JF635. We optimized dye concentrations and found that co-treating cells with 50 nM Bz-1 and 125 nM CA-JF635 for 60 minutes resulted in robust multiplexed labeling (Fig. 5d). Bz-1 fluorescence was entirely localized to the nucleus and CA-JF635 fluorescence was entirely localized to the mitochondria, indicating excellent orthogonality between the BenzoHTag•Bz-1 and HaloTag7•CA-JF635 systems (Fig. 5e). Quantification by flow cytometry under these conditions confirmed orthogonal labeling between the systems (Fig. 5d).
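The fold-preferences summarized above already imply strong partitioning in a co-labeling experiment. The sketch below makes this explicit under an idealized assumption of purely kinetic competition between equal concentrations of the two proteins; it uses only the fold-differences quoted in the text, so absolute rate constants cancel.

```python
# Idealized partitioning of each dye between BenzoHTag and HaloTag7 when both
# proteins are present at equal concentrations and labeling is kinetically
# controlled. Fold-preferences are taken from the text.

fold_preference = {
    # dye: (preferred protein, fold faster with the preferred protein)
    "Bz-1":     ("BenzoHTag", 65),
    "CA-TMR":   ("HaloTag7", 900),
    "CA-JF635": ("HaloTag7", 9000),
}

for dye, (protein, fold) in fold_preference.items():
    frac_preferred = fold / (fold + 1)   # k_pref / (k_pref + k_other)
    print(f"{dye}: ~{frac_preferred:.2%} of conjugate expected on {protein}")
```

Under this simple model, more than 98% of Bz-1 conjugate ends up on BenzoHTag and essentially all CA-JF635 ends up on HaloTag7, consistent with the orthogonal labeling observed in cells.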
Discussion
Self-labeling proteins like HaloTag7 have become a mainstay in chemical biology research. HaloTag7 is used for applications with a large variety of chloroalkane-tagged compounds, but the enzyme's kinetics depend greatly on the nature of the substrate attached to the chloroalkane. Some prior work has sought to modify HaloTag7 to improve labeling rates with non-rhodamine substrates. An early example is the mutation of negatively charged residues around the entrance to the active site channel of HaloTag to promote faster conjugation of chloroalkane-tagged oligonucleotides. 47,48 In this work, we developed a yeast display system capable of screening 10^7-10^8 HaloTag7 variants at a time. We anticipate this system will greatly accelerate the development of HaloTag variants that work better with non-rhodamine substrates. Indeed, in this initial application, with a single round of diversification, the larger screening capability enabled the discovery of multiple cooperative mutations at unexpected positions; this result would have been highly unlikely using prior methods. 36,37 The yeast display format allows for a variety of positive and negative selections, rapid follow-up assays, and built-in controls for expression level.
In this first application, we provide ample evidence that yeast display produced HaloTag variants with improved conjugation kinetics and brighter fluorescent complexes. The optimized variant, BenzoHTag, had a 63-fold enhancement in reaction rate with Bz-1 compared with HaloTag7. Our prior work optimized the fluorescent properties of the benzothiadiazole dye by replacing the dimethylamine donor in Bz-2 with a pyrrolidine in Bz-1, which improved the fluorescence quantum yield of the dye when conjugated to HaloTag7 by 50% while decreasing the background in cells. 19 Thus, by combining dye engineering and protein engineering, we improved the original HaloTag7•Bz-2 system by 1.5-fold in terms of brightness and by 300-fold in terms of reaction rate. 17,19 We expect this roadmap, in which the substrate is first optimized for ideal functional properties and the self-labeling protein is then optimized for faster labeling, will enable additional systems to be developed with improved functionality compared with HaloTag7.
Prior to this work, the most well-developed fluorogenic substrate for HaloTag7 was a silicon rhodamine (SiR) dye, originally reported by Johnsson and colleagues 33 and later optimized to CA-JF635 by Lavis and colleagues. 23,34,49 We compared the performance of BenzoHTag•Bz-1 and HaloTag7•CA-JF635 in live-cell labeling using both flow cytometry and fluorescence microscopy (Fig. 5).
The reaction rates and relative brightness of the two systems were comparable in cells, but signal saturation was achieved much faster and at lower dye concentrations in the BenzoHTag•Bz-1 system (Fig. 5b, S8). Reported labeling conditions for CA-JF635 34,49 match the concentrations and incubation times that we observed were required for robust cellular labeling of HaloTag7•CA-JF635. By contrast, robust wash-free labeling was observed using only 10 nM Bz-1, with an 18-fold signal-over-background as measured by confocal microscopy (Fig. S14), and nucleus-localized BenzoHTag was saturated by 125 nM Bz-1 in under 100 seconds (Fig. 4c). When quantified by flow cytometry, Bz-1 showed greater than 200-fold signal-over-background when applied at only 7 nM (Fig. 4a, Fig. S8a-c). We interpret the differences in performance between HaloTag7•CA-JF635 and BenzoHTag•Bz-1 to reflect the superior cell permeability of the smaller, uncharged Bz-1 compared with CA-JF635 (Fig. 1b). This interpretation is further supported by the observation that the percentage of cells labeled was not dependent on the concentration of Bz-1 but was highly dependent on the concentration of CA-JF635 (Fig. S10c). The rapid labeling of the BenzoHTag•Bz-1 system suggests it is uniquely suited for monitoring fast cellular processes, such as endosomal recycling. 11,[52][53][54][55] There are several implementations of multiplexed fluorescent labeling using two different self-labeling proteins, like HaloTag7 and SNAP-tag, 23,37,56 and even some reports of multiplexed no-wash fluorescent labeling. 57 However, HaloTag7 is often preferred over SNAP-tag because SNAP-tag has slower reaction rates, its ligands have higher nonspecific interactions in the cell, and its complexes have weaker photophysical properties. 7,16,23,58 Therefore, it could be advantageous to use multiple HaloTag-derived self-labeling proteins in wash-free multiplexed labeling experiments using orthogonal chloroalkane substrates. We found that BenzoHTag•Bz-1 and HaloTag7•CA-JF635 support multiplexed no-wash labeling experiments in live cells (Fig. 5e). Given the ability to perform positive and negative selections using yeast display, we anticipate that additional HaloTag7 variants can be evolved for improved orthogonality and for specificity to other, spectrally orthogonal fluorogenic dyes, which would allow multiplexing with three or more colors. Moreover, this strategy could be interfaced with recent advances in protein/peptide tags [59][60][61] and fluorescence lifetime imaging 37,62 to offer even more degrees of multidimensional multiplexing.
Conclusion
We have introduced the commonly used self-labeling protein HaloTag7 into a yeast display system for directed evolution of improved variants. This display platform can produce novel systems for imaging, biosensing, and biocatalysis that were previously inaccessible. 4,5 We used this system to develop BenzoHTag, an evolved HaloTag7 triple mutant with improved conjugation kinetics to a fluorogenic benzothiadiazole dye, Bz-1. The BenzoHTag•Bz-1 system enables robust intracellular labeling of live cells at concentrations as low as 7 nM, within seconds and without washes. The BenzoHTag•Bz-1 system exhibits kinetics and maximum brightness similar to the previously reported HaloTag7•CA-JF635 system but provides faster and more sensitive in-cell labeling. The fast in-cell labeling rate will be especially useful for real-time monitoring of biological processes, especially intracellular processes, with fast time scales. 11,54,63 Finally, the BenzoHTag•Bz-1 system was found to be orthogonal to the HaloTag7•CA-JF635 system, allowing simultaneous application of both systems for wash-free multiplexed imaging in live cells.
Figure 1.
Figure 1. HaloTag7 and dye-containing substrates. (a) Crystal structure of HaloTag7 covalently reacted with substrate chloroalkane-tetramethylrhodamine (CA-TMR, PDB: 6Y7A). 16 The model highlights how the catalytic residue, D106, is at the bottom of a ~15 Å hydrophobic channel. This model also highlights how interactions between the rhodamine dye and the surface helices of HaloTag7 drive binding, illustrating why non-rhodamine substrates often have slower kinetics. (b) Structures of fluorogenic dyes Bz-1, Bz-2, and Bz-3, and structures of other common HaloTag7 ligands, CA-TMR and CA-JF635.
Figure 2.
Figure 2. Molecular evolution of a fluorogenic HaloTag system using benzothiadiazole dye Bz-1. (a) Yeast display construct and screening strategy. Cells within the green gate were isolated and used for subsequent rounds of sorting. (b) Histograms of green fluorescence of 10,000 yeast cells displaying the input library (green), HaloTag7 (blue), or round 4 of the screen (purple). (c) Median green fluorescence of 10,000 yeast cells displaying HaloTag7, the input library, the filtered input library, and the output pools of rounds 1 through 4. Background green fluorescence from unlabeled cells was subtracted. Cells were incubated with 40 nM Bz-1 for 1 minute.
Figure 3.
Figure 3. Characterization of recombinantly expressed HaloTag variants with improved fluorogenic properties. (a) Summary of second-order rate constants measured for Bz-1 conjugation to HaloTag7 and Variants 1-10. See supplementary information for experimental details. (b) Representative kinetic traces of 0.25 μM Bz-1 reacting with 1.0 μM HaloTag7 or selected variant. (c) Representative emission spectra of 2.5 μM Bz-1 when conjugated to 5.0 μM HaloTag7 or selected variant after one hour of incubation, normalized to the maximum fluorescence intensity of the HaloTag7•Bz-1 complex. (d) Crystal structure of Bz-2 with HaloTag7 (PDB: 5UXZ) 17 showing the locations of key mutations observed in variants with improved fluorogenic properties.
Figure 5.
Figure 5. Multiplexed labeling using the BenzoHTag and HaloTag7 systems. (a) Second-order rate constants for Bz-1, CA-TMR, and CA-JF635 with recombinantly expressed and purified HaloTag7 and BenzoHTag. The rate of CA-TMR with HaloTag7 was obtained and reported by Johnsson and coworkers. 16 The rate of CA-JF635 has not been reported, but rates of analogous Si rhodamines were reported in the range of 10^5-10^6 M^-1 s^-1. 11,16,23,33 See Supporting Information for more details. (b) Comparison of Bz-1 and CA-JF635 labeling for 10 minutes in live U-2 OS cells transiently expressing H2B-BenzoHTag or H2B-HaloTag7. (c) Schematic of multiplexed labeling experiments. U-2 OS cells were transiently transfected with H2B-BenzoHTag (nuclear) and Tomm20-HaloTag7 (cytosolic, outer mitochondrial membrane fusion) and treated simultaneously with Bz-1 and CA-JF635. (d) Flow cytometry data in orthogonal labeling experiments with 50 nM Bz-1 and 125 nM CA-JF635 for 60 minutes. U-2 OS cells expressing either Tomm20-HaloTag7, H2B-BenzoHTag, or both were treated with either dye or co-treated with both dyes. The 15% most fluorescent cells were analyzed in each experiment because transient transfection efficiencies were roughly 20% (Fig. S9). (e) Confocal microscopy images of U-2 OS cells transiently transfected with both H2B-BenzoHTag and Tomm20-HaloTag7. Cells were stained with nuclear Hoechst dye, washed, and then treated with 50 nM Bz-1 and 125 nM CA-JF635 for 60 minutes. No washing was performed prior to imaging.
| 2023-11-03T13:12:29.852Z | 2024-04-02T00:00:00.000 | {
"year": 2024,
"sha1": "dba874c58d61adf082474ff4ab49769c47c34284",
"oa_license": "CCBYNCND",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/10/29/2023.10.29.564634.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc3db464a875f444b3ccd4279cfe1f51b8f9d5d9",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
137417086 | pes2o/s2orc | v3-fos-license | Low-temperature deposition of ultrathin SiO2 films on Si substrates
We present a detailed investigation of the properties of silicon dioxide deposited at a low temperature. The advantages of this process include its low thermal requirements (about 200 °C), the absence of corrosive by-products, and the lack of need for vacuum equipment. Sol solutions were prepared for the deposition of ultrathin SiO2 films by spin-coating at the low annealing temperature of 200 °C. The layer thicknesses were 24 nm and 5 nm. We describe in detail the material properties of these novel low-temperature SiO2 layers obtained by extensive characterization using Fourier transform infrared spectroscopy (FTIR), atomic force microscopy (AFM), XPS spectroscopy, capacitance-voltage (C-V) and current-voltage (I-V) measurements. The ultrathin oxide layers on Si substrates show good dielectric properties.
Introduction
Silicon oxide is widely used for optical and electronic applications. SiO 2 of the highest quality can be formed by thermal oxidation of Si at temperatures over 800 ºC in dry O 2 . Thermal oxide is only grown on Si substrates at high temperatures, which limits its applications, as many of those require SiO 2 film deposition on different substrates [1]. SiO 2 nanolayers have been produced by plasma oxidation, atomic layer deposition, deposition from sol-solutions and oxidation by wet-chemical methods.
In this work we propose a technological approach consisting of liquid chemical deposition from a sol solution that has the following advantages over other deposition techniques: it is cost effective, no vacuum equipment is needed, and control of the film's coverage and thickness is possible. The preparation procedure of the sol solutions for ultrathin SiO2 layers of different thicknesses at 200 °C is also presented. Two sets of samples were studied, namely, SiO2 films with thicknesses of 24 nm and 5 nm. The film morphology was studied by AFM. The FTIR investigation confirmed the formation of SiO2. The chemical analysis was performed by XPS. The dielectric properties of the MOS structures were studied by capacitance-voltage (C-V) and current-voltage (I-V) measurements.
Experimental
The SiO2 layers were prepared by the sol-gel technique using tetraethyl orthosilicate (TEOS) as a precursor. The introduction of glacial acetic acid at the molar ratio TEOS:acetic acid 1:15 caused acetate modification, resulting in an exothermic reaction. Further, the solution was modified by a small amount of water. Acetylacetone acts as a stabilizer and was added in the molar ratio TEOS:acetylacetone 1:1. Two solutions were prepared with different molar content of TEOS in order to deposit SiO2 layers of different thickness. The films were obtained by spin coating at 8000 rpm for 30 s on Si substrates, the latter being cleaned beforehand by the RCA procedure, which is a standard procedure for wafer cleaning involving removal of the organic contaminants, of the oxide layer, and of the ionic contamination (for details see [2]). The samples were prepared on Si wafers (n- and p-type Si) with different surfaces: one side polished and one side etched (p/e), and both sides polished (p/p). The annealing procedure included heating at 200 °C for 30 min.
The AFM studies were conducted on a DiMultimode V scanning probe microscope (Veeco). The FTIR measurements were performed with an IRPrestige-21 Shimadzu FTIR spectrophotometer. The layers were deposited on Si substrates, with a bare Si wafer used as a background. The electrical properties of the layers were studied using MOS structures. The XPS studies were performed with a VG Escalab II system using AlKα radiation with an energy of 1486.6 eV. The chamber pressure was 10^-7 Pa. The binding energies (BE) were determined utilizing the C1s line (from adventitious carbon) as a reference with an energy of 285.0 eV. The accuracy of the BE measured was ±0.2 eV. The SiO2 film thickness was determined by ellipsometry; two sets of films with thicknesses of 24 and 5 nm were thus prepared.
Results and discussions
AFM images of ultrathin SiO2 films (5 nm thick) are shown in figures 1 and 2. R a is the arithmetic mean of the absolute values of the height of the surface profile Z(x), where Z(x) is a function describing the surface profile analyzed in terms of the height (Z) and position (x) of the sample over the evaluation length L. The root-mean-square roughness R q of a surface is similar to the roughness average, the only difference being that the absolute values of the surface roughness profile are mean squared. The values obtained for the surface roughness were R a = 0.74 nm and R q = 0.94 nm for L = 500 nm and, respectively, R a = 0.85 nm and R q = 1.07 nm for L = 1000 nm. For a Gaussian distribution of asperity heights, statistical theory yields that the ratio of R q to R a should be 1.25. Some authors note that the asperity height distribution of most engineering surfaces (tribology) may be approximated by a Gaussian distribution with R q /R a values of up to 1.31. For our sample, the values of R q /R a obtained from AFM imaging were 1.26 and 1.27, reasonably close to the value of 1.25 predicted by the theory. This result is significant since it indicates that, at the imaging scale, the asperity height distribution of these surfaces is approximately Gaussian and that the statistical relationships for the surface roughness are applicable. The height of the surface profile can be observed from the section analysis. The profile in one section is illustrated in figure 3. The surface smoothness of the layer is assessed by the parameters R p , R v , and RT. The maximum profile peak height (R p ) denotes the highest peak of the surface profile with respect to the baseline. Likewise, the maximum profile valley depth (R v ) is the measure of the deepest valley across the surface profile analyzed from the baseline. Thus, the maximum height of the profile (RT) is defined as the vertical distance between the deepest valley and the highest peak: RT = R p + R v . For the samples studied, these parameters were as follows: R p = 2.5 nm, R v = 1.2 nm, and RT = 3.7 nm. The results of the AFM measurements demonstrate that the technique used results in homogeneous layers with good surface coverage of the silicon substrate. The surface layer roughness was on the order of several nanometers and could only be observed by AFM.
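As an illustration of how R a, R q, and the peak/valley parameters are obtained from a measured profile, the short script below computes them for a synthetic Gaussian height profile. The standard deviation is set near the measured R q, and the script also shows that R q/R a approaches the theoretical Gaussian value of about 1.25; it is a sketch, not the processing applied to the actual AFM data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic surface profile with Gaussian-distributed asperity heights (nm),
# standing in for an AFM line scan; sigma chosen near the measured Rq.
z = rng.normal(loc=0.0, scale=0.94, size=4096)
z -= z.mean()                      # heights measured relative to the mean line

Ra = np.mean(np.abs(z))            # arithmetic mean roughness
Rq = np.sqrt(np.mean(z**2))        # root-mean-square roughness
Rp = z.max()                       # maximum profile peak height
Rv = -z.min()                      # maximum profile valley depth
RT = Rp + Rv                       # maximum height of the profile

print(f"Ra = {Ra:.2f} nm, Rq = {Rq:.2f} nm, Rq/Ra = {Rq/Ra:.2f}")
print(f"Rp = {Rp:.2f} nm, Rv = {Rv:.2f} nm, RT = {RT:.2f} nm")
# For a Gaussian height distribution, Rq/Ra = sqrt(pi/2) ~ 1.25
```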
XPS spectroscopy was used for determining whether SiO2 was formed (figure 4). The presence of carbon (C1s peak at 285.0 eV) was only registered on the film surfaces; it disappeared after a short-term sputtering (1 minute).
The binding energy of O1s is 532.8 eV, which corresponds to oxide. The Si2p signal is split into two peaks located at 99.31 eV and 103.5 eV. The first peak at 99.31 eV corresponds to a Si-Si bond; it originates from the Si substrate, as the sol-gel film is very thin (5 nm). The second peak at 103.5 eV is Si2p3/2, which is an indication of SiO2. Thus, the XPS analysis proved that we were able to deposit very thin SiO2 films at a relatively low temperature. [3]. The absorption at 670.4 cm^-1 can be attributed to Si-Si bonds due to oxygen vacancies [4,5]. The FTIR spectra reveal that ultrathin SiO2 films have been formed at these low annealing temperatures. These results are in agreement with the results obtained for low-temperature thermal-ALD SiO2 [6].
The electrical properties of the sol-gel SiO2 films were investigated by measuring their capacitance-voltage (C-V) and current-voltage (I-V) characteristics. For the capacitance-voltage measurements, MOS structures with a Hg dot contact were used. Figure 6 presents the capacitance-voltage characteristics of the two groups of samples. The dielectric permittivity k of sample 5 (thickness 24 nm), as determined from the capacitance measurements, was 3.89; it was lower for the ultrathin layers (5 nm). The hysteresis of the C-V curves was also studied. The measurement started at 0 V; the voltage was then swept to inversion, to accumulation, and was looped back. The flat-band voltage V fb1 was determined from the first C-V curve swept from inversion to accumulation, while V fb2 was determined from the C-V curve swept from accumulation to inversion. Using the difference V fb1 − V fb2, the density N t of the trapped charge was calculated using the formula N t = C ox (V fb1 − V fb2)/q, where C ox is the capacitance of the layer per cm2 and q is the elementary charge. Table 2 presents the results for N t calculated from the measurements at three points of the samples described in Table 1. For structures with a p-Si substrate, the hysteresis character of the C-V curves indicates trapping of holes in the dielectric. In the ultrathin SiO2 layer (sample 15), the trapped charge is positive and 2.4 times greater than that for sample 5. For structures with an n-Si substrate, the hysteresis character of the C-V curves indicates trapping of electrons in the dielectric (negative trapped charge). In the ultrathin SiO2 layer (sample 20), the trapped charge is negative and 4 times greater than that for sample 10 (d = 24 nm) [7].
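A minimal numerical sketch of the trapped-charge estimate is given below. Only the 24 nm thickness and k ≈ 3.89 come from the text; the flat-band voltage shift is an assumed illustrative value, so the resulting N t does not correspond to any of the reported table entries.

```python
# Rough estimate of trapped-charge density from C-V hysteresis, following
# N_t = C_ox * (V_fb1 - V_fb2) / q.

EPS0 = 8.854e-14      # vacuum permittivity, F/cm
Q_E  = 1.602e-19      # elementary charge, C

k_ox = 3.89           # relative permittivity reported for the 24 nm film
d_ox = 24e-7          # oxide thickness in cm (24 nm)
delta_vfb = -0.56     # assumed flat-band voltage shift V_fb1 - V_fb2, in V

c_ox = k_ox * EPS0 / d_ox        # oxide capacitance per unit area, F/cm^2
n_t = c_ox * delta_vfb / Q_E     # trapped-charge density, cm^-2 (signed)
print(f"C_ox = {c_ox:.3g} F/cm^2, N_t = {n_t:.3g} cm^-2")
```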
The I-V measurements were conducted at a negative voltage polarity with respect to the Hg dot for the p-Si substrate, and at a positive one for the n-Si substrate, to ensure that the Si substrate is in accumulation mode (avoiding the voltage drop across the depletion layer in the Si substrate). The I-V data are presented in figure 7. These characteristics demonstrate the good dielectric properties of the ultrathin SiO2 layers. In Table 2, the sign "−" denotes a negative trapped charge and "+" a positive one.
Conclusions
The proposed technological process allows one to successfully deposit ultrathin films (5 and 24 nm) of SiO2 on Si substrates at low temperatures. We present detailed material properties of the SiO2 layers obtained by extensive characterization using FTIR, AFM, XPS spectroscopy, capacitance-voltage (C-V), and current-voltage (I-V) measurements. Thus, we proved that the SiO2 films deposited at low temperatures are uniform, smooth, and possess good electrical properties. The results obtained indicate that these films can be applied as tunneling-type layers and passivation layers in advanced solar cells or in optoelectronics. | 2017-10-05T04:28:41.665Z | 2014-05-15T00:00:00.000 | {
"year": 2014,
"sha1": "e883bd332f5b85b18b252530ad831d619fff8ef4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/514/1/012010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "549ddcd72de93acf84bf7517afc5704cce6d64ce",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
236181456 | pes2o/s2orc | v3-fos-license | A two-stage amplified PZT sensor for monitoring lung and heart sounds in discharged pneumonia patients
Assessment of lung and heart states is of critical importance for patients with pneumonia. In this study, we present a small-sized and ultrasensitive accelerometer for continuous monitoring of lung and heart sounds to evaluate the lung and heart states of patients. Based on two-stage amplification, which consists of an asymmetric gapped cantilever and a charge amplifier, our accelerometer exhibits an extremely high ratio of sensitivity to noise compared with conventional structures. Our sensor achieves a high sensitivity of 9.2 V/g at frequencies less than 1000 Hz, making it suitable for monitoring weak physiological signals, including heart and lung sounds. For the first time, lung injury, heart injury, and both lung and heart injuries in discharged pneumonia patients were revealed by our sensor device. Our sound sensor also successfully tracked the recovery course of the discharged pneumonia patients. Over time, the lung and heart states of the patients gradually improved after discharge. Our observations were in good agreement with clinical reports. Compared with conventional medical instruments, our sensor device provides rapid and highly sensitive detection of lung and heart sounds, which greatly helps in the evaluation of lung and heart states of pneumonia patients. This sensor provides a cost-effective alternative approach to the diagnosis and prognosis of pneumonia and has the potential for clinical and home-use health monitoring.
Introduction
Assessment of lung and heart states is critical when evaluating the health condition of patients with pneumonia. Lung injury in patients can be revealed by abnormal findings based on chest CT images 1-3, PET/CT 4, and artificial intelligence (AI)-assisted diagnosis 5,6. Lung ultrasound also offers a quantitative method to assess the lung state in patients 7. Meanwhile, heart injury in patients can be revealed by echocardiography (ECG) 7 and cardiac magnetic resonance imaging (MRI) 8,9. However, these methods generally require large, sophisticated, and expensive instruments; highly trained personnel; complex procedures; and, in some cases, procedures that are not harmless (such as CT and MRI). Therefore, the development of novel sensing systems that are time-saving, low cost, highly sensitive, easy to read, instrument-free, and able to achieve on-site continuous monitoring 10,11 has great potential in the diagnosis and prognosis of pneumonia diseases.
Auscultation of chest wall sounds, including both heart and lung sounds, offers an easy but very effective approach for the clinical diagnosis of cardiovascular and respiratory systems. Conventional stethoscopy is widely used for intermittent auscultation; however, stethoscopy has a number of limitations, such as poor wearability due to its bulky size, friction noise during diagnosis, and difficulty in detecting weak acoustic signals including lung sounds. An alternative approach for detecting lung and heart sounds is based on accelerometer use 12,13 . Compared with stethoscopy, miniaturized accelerometers can be taped on a person's chest wall for more convenient and continuous cardiorespiratory monitoring. Previously, based on asymmetric gapped cantilever structures, we developed a series of small-sized and ultrasensitive sound sensors for continuous monitoring of heart and lung sounds in healthy subjects [14][15][16] . However, none of them have been systematically used to monitor patients with pneumonia.
Herein, we were motivated to explore further applications of our sound sensors in the assessment of lung and heart states of discharged pneumonia patients. Both theoretical simulations and mechanical tests show that our sensors have improved sensitivity compared with conventional sensors, making them suitable for monitoring weak heart and lung sounds. Moreover, the lung and heart sounds recorded by our sensors are in good agreement with previous clinical reports, suggesting that our sensor offers a potential alternative for the diagnosis and prognosis of pneumonia or other similar diseases.
Results and discussion
Sensor structure and working principles
In this study, we used a self-developed sound sensor with high sensitivity for continuous monitoring of lung and heart sounds (Fig. 1a, b). The sound sensor was based on a novel asymmetric gapped cantilever structure (Fig. 1c, d), which was composed of a piezoelectric beam made of the piezoelectric ceramic lead zirconium titanate (PZT) as the top layer, a bottom mechanical layer separated by a gap, and a movable proof mass made of aluminum (Table 1). The piezoelectric layer converts biomechanical energy (such as acoustic vibration) into electric energy through the piezoelectric effect 17,18. The mechanical beam increases the stiffness of the whole cantilever (Fig. 1d). Furthermore, the output of the sound signal was amplified using an amplifier circuit (Fig. 1e).
Theoretical simulation and characterization of sensor performance
We used theoretical simulation to show the advantage of our sensor with an asymmetric gapped cantilever structure compared with conventional structures (Fig. S1). Harmonic response analysis of the dynamic model was conducted under different excitation accelerations (from 0.01 g to 0.11 g). According to the theoretical simulation, the strain experienced by the piezoelectric beam in our structure (Fig. 2a) was much more significant than that in a conventional structure (Fig. 2b). In our asymmetric gapped cantilever structure, the amplitude-frequency response showed that the maximum strain on the piezoelectric beam was 1.38 × 10^-4 under 0.11 g excitation (Fig. 2c). In contrast, the maximum strain on the piezoelectric beam of the conventional structure was only 1.42 × 10^-5 under the same excitation (Fig. 2d). Therefore, according to the theoretical simulation, our structure produced a ten times higher strain on the piezoelectric beam than the conventional structure (Fig. 2). We then plotted the strain-excitation response of the different structures (Fig. 3a). The sensitivity of the accelerometer can be defined as the strain induced on the piezoelectric beam per unit excitation acceleration, S = ε/a, where ε is the strain on the piezoelectric beam and a is the excitation acceleration in units of g. Therefore, the sensitivity of the accelerometer can be calculated from the slope of the strain-excitation response (Fig. 3a), and the sensitivities of the accelerometer with our structure and with the conventional structure were calculated to be 1.25 × 10^-3 and 1.29 × 10^-4 (1/g), respectively. The theoretical simulation indicates that the sensitivity of our sensor structure was 9.7 times higher than that of the accelerometer with a conventional structure.
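The slope extraction described above can be illustrated with a short script. This is a sketch: only the two end-point strains quoted in the text are used, and intermediate points are interpolated purely for illustration.

```python
import numpy as np

# Recovering the sensitivity (strain per g) as the slope of the simulated
# strain-excitation response.
acc_g = np.linspace(0.01, 0.11, 6)                # excitation accelerations (g)

max_strain_gapped = 1.38e-4                        # at 0.11 g (this work)
max_strain_conventional = 1.42e-5                  # at 0.11 g (conventional beam)

strain_gapped = max_strain_gapped * acc_g / 0.11
strain_conventional = max_strain_conventional * acc_g / 0.11

s_gapped = np.polyfit(acc_g, strain_gapped, 1)[0]           # slope, 1/g
s_conventional = np.polyfit(acc_g, strain_conventional, 1)[0]

print(f"gapped cantilever:       S = {s_gapped:.2e} 1/g")
print(f"conventional cantilever: S = {s_conventional:.2e} 1/g")
print(f"improvement factor ~ {s_gapped / s_conventional:.1f}x")
```

Running this reproduces the 1.25 × 10^-3 and 1.29 × 10^-4 (1/g) values and the roughly 9.7-fold improvement quoted above.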
The strain experienced by the piezoelectric layer is proportional to the distance (H) between the top piezoelectric beam and the neutral plane (Fig. S1) 19 . This distance is much larger on our structure than that on a conventional cantilever-based accelerometer (Fig. S1).
This explained the significantly larger strain or higher sensitivity of our sensor compared with that of conventional accelerometers.
Moreover, due to the piezoelectric effect 17,18 of PZT materials, the strain on the piezoelectric beam can be transferred into an electric charge. As the charge produced by the piezoelectric beam was very weak and could not be directly collected, we used a charge amplifier to transfer the charge into voltage and further amplify the signal (Fig. 1c, e); therefore, the sensitivity of the accelerometer can be expressed as V/g.
To avoid using balanced dual supplies in the op-amp circuit, the op-amp LMP7721 (Texas Instruments), which enables a single supply, was selected for the charge amplifier design (Fig. 1e). The LMP7721 has an ultralow typical input bias current of 3 fA and a low voltage noise of 6.5 nV/√Hz, making it ideal for amplifying high-impedance signals. The average level of the op-amp input was biased to V S /2 by the R A -R B divider pair (Fig. 1e). The amplification rate of this circuit is inversely proportional to the feedback capacitance (C F). The signal-to-noise ratio (SNR) and the lower cutoff frequency are also inversely proportional to C F. The C F in this charge amplifier was set to 47 pF (Fig. 1e) as a tradeoff among charge amplification rate, SNR, and lower cutoff frequency. Since the input capacitance of the piezoelectric transducer is ~1 nF, the amplification rate of the circuit was calculated to be 21.3. In addition, the charge amplifier circuit was designed with a 1 GΩ feedback resistor (R F) (Fig. 1e). Together, this circuit yielded a low cutoff frequency of 3.4 Hz, making it satisfactory for heart and lung sound monitoring. Ultralow input bias current op-amp circuits also require precautions to achieve the best performance: the leakage current on the surface of the circuit board could exceed the input bias current of the amplifier and could even be 100 times higher. To minimize surface leakage, a guard trace was designed to completely surround the input terminals and other circuitry connecting to the inputs of the op-amp (Fig. 1e).
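As a quick numerical check of the circuit values quoted above, the sketch below reproduces the amplification rate and lower cutoff frequency from C F, R F, and the ~1 nF transducer capacitance; it is illustrative only and not part of the sensor firmware.

```python
import math

C_F = 47e-12        # feedback capacitance, F
R_F = 1e9           # feedback resistance, ohm
C_sensor = 1e-9     # piezoelectric transducer capacitance (~1 nF)

gain = C_sensor / C_F                       # charge-to-voltage amplification rate
f_low = 1.0 / (2 * math.pi * R_F * C_F)     # lower cutoff frequency, Hz

print(f"amplification rate ~ {gain:.1f}")          # ~21.3
print(f"lower cutoff frequency ~ {f_low:.1f} Hz")  # ~3.4 Hz
```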
Measurement of the sensitivity and noise of our sensor
We measured the sensitivity of our sensor on a mechanical shaker by setting a commercial accelerometer (752A13, Endevco) as the gold standard. The sensitivity-frequency response of our sensors is shown in Fig. 3b. Our sound sensor had a resonance frequency of 1600 Hz (Fig. 3b), which is higher than the frequency range of heart sounds (20-400 Hz) and lung sounds (60-1000 Hz).
As shown in Fig. 4a, within the sound frequency range from 20 to 1000 Hz, the output voltage of our sensor increased when the excitation acceleration increased. The sensitivity of our sensor at 0.01 g, 0.05 g, and 0.1 g under different excitation accelerations was calculated to be 9.1966 V/g, 9.1982 V/g, and 9.252 V/g, respectively (Fig. 4a). These results proved that the sensitivity of our sensor was consistent (~9.2 V/g) under different excitation accelerations within the heart and lung sound frequency ranges.
By using the novel asymmetric gapped cantilever structure, the sensitivity of our sound sensor reaches 9.2 V/g at frequencies less than 1000 Hz (Figs. 3b and 4a). The sensitivity of conventional piezoelectric accelerometers or MEMS-based accelerometers is generally less than 1 V/g (Table 2) [20][21][22]. By comparison, our sensor showed significantly improved sensitivity (9.2 V/g) at frequencies less than 1000 Hz. The enhanced sensitivity of our sensor makes it suitable for the detection of weak physiological sounds, such as lung and heart sounds, and especially for weak lung sound detection 23.
In addition, since sensor noise is another important characteristic, we measured the intrinsic noise of our sensor on a vibration-isolated mechanical shaker at midnight. As shown in Fig. 4b, the noise spectrum and the output voltage density demonstrated that the noise level of our sensor was 1 μV/√Hz within the frequency range of heart and lung sounds (from 20 to 1000 Hz). Therefore, the lower noise limit of our sensor was calculated to be 109 ng/√Hz.
From the lung sound signal spectrum (Fig. 4c), the output voltage density of the lung sound signal from 60 to 1000 Hz was~125 μV/√Hz (Fig. 4c), which was 125 times higher than the intrinsic noise level. Similarly, from the heart sound signal spectrum (Fig. 4d), the output voltage density of the heart sound signal was~890 μV/√Hz from 20 to 400 Hz (Fig. 4d), which was 890 times higher than the intrinsic noise level.
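The noise floor and the SNR values quoted in this subsection can be cross-checked from the voltage densities above, using the SNR definition given in the next paragraph; the short script below is only a consistency check of those figures, with all input values taken from the text.

```python
import math

sensitivity = 9.2        # V/g, measured sensor sensitivity below 1000 Hz
v_noise = 1e-6           # V/sqrt(Hz), intrinsic noise density
v_lung = 125e-6          # V/sqrt(Hz), lung-sound signal density (60-1000 Hz)
v_heart = 890e-6         # V/sqrt(Hz), heart-sound signal density (20-400 Hz)

noise_floor_g = v_noise / sensitivity             # acceleration noise floor, g/sqrt(Hz)
snr_lung_db = 20 * math.log10(v_lung / v_noise)   # SNR = 20*log10(Vs/Vn)
snr_heart_db = 20 * math.log10(v_heart / v_noise)

print(f"noise floor ~ {noise_floor_g * 1e9:.0f} ng/sqrt(Hz)")  # ~109 ng/sqrt(Hz)
print(f"lung-sound SNR ~ {snr_lung_db:.0f} dB")                # ~42 dB
print(f"heart-sound SNR ~ {snr_heart_db:.0f} dB")              # ~59 dB
```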
The SNR can be calculated according to the equation below:
SNR = 20 log10(Vs/Vn),
where Vs and Vn represent the signal voltage and the noise voltage, respectively. Therefore, the SNRs of the lung sound signal and the heart sound signal were 42 dB and 59 dB, respectively. The SNRs of our sensor are two times higher than those of commercial stethoscopes.
Sensor performance for lung and heart sound monitoring
Subsequently, we used our sound sensor device to monitor the lung and heart sounds of healthy volunteers in a regular laboratory environment to prove the auscultation ability of our sensor device.
Compared with a commercial high-end electronic stethoscope based on a conventional cantilever structure, our sensor exhibited much better performance for recording both lung and heart signals, especially for recording weak lung sounds (Fig. 5). Generally, lung sounds are much weaker than heart sounds during regular breathing 23; therefore, lung sounds, especially during gentle breathing, are difficult to detect. However, with the asymmetric gapped cantilever structure, our sensor can indeed detect weak lung sounds with a high SNR (Fig. 5a, Audio S1). In contrast, the commercial high-end electronic stethoscope, which is based on a conventional cantilever structure, can hardly distinguish lung sounds from the noise of the captured signal (Fig. 5b). The respiratory rate of the measured volunteer was 16.2 breaths per minute (hereafter "BPM") (Fig. 5a, Table 3), which was within the normal range of resting respiratory rates (12-20 BPM). Moreover, the lung sounds measured from different healthy volunteers were consistently within the normal respiratory rate range (Table 3). For heart sound monitoring, the SNR of the heart sounds detected by our sensor was two times higher than that of a commercial stethoscope (Fig. 5c, d). We could clearly distinguish two normal heart sounds in the obtained heart sound waveform, namely the first heart sound (S1) and the second heart sound (S2) (Fig. 5c, Audio S2), which correspond to the "lub" and "dub" sounds of a heartbeat, produced by the closure of the atrioventricular valves and the semilunar valves, respectively 24. The heart rate of the measured healthy volunteer was 77.9 beats per minute (hereafter "bpm") (Fig. 5c and Table 3), which was within the normal heart rate range of adults (60-88 bpm). Moreover, the heart sounds from different healthy volunteers were consistently within the normal range of heart rates (Table 3).
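Respiratory and heart rates such as those in Table 3 are extracted from the recorded waveforms. The sketch below illustrates one simple way to do this (peak counting on a synthetic heart-sound-like trace); it is not necessarily the processing pipeline used for the reported data, and the sampling rate and peak-detection settings are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 2000                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                # 10 s of synthetic data
beats_per_s = 78 / 60                       # ~78 bpm, beats per second

# Synthetic S1-like pulses plus a little noise, standing in for a real recording
signal = np.exp(-((t % (1 / beats_per_s)) * 40) ** 2)
signal += 0.02 * np.random.default_rng(1).normal(size=t.size)

# Peak picking with a 0.3 s refractory period between detected beats
peaks, _ = find_peaks(signal, height=0.5, distance=int(0.3 * fs))
bpm = 60 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
print(f"estimated heart rate ~ {bpm:.1f} bpm")
```

The same peak-counting idea, applied to the breathing envelope instead of the heart-sound transients, yields a respiratory rate in BPM.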
The measurements of healthy volunteers proved that our sound sensor could effectively detect lung and heart sounds in the human body (Table 3), especially relatively weak lung sounds (Fig. 5a). It is also worth noting that these measurements were carried out in a regular laboratory environment full of airborne noise. These results proved that our sensor was not very sensitive to airborne noise and can therefore be applied in medical applications.
Sensor monitoring of patients with pneumonia
Classification of lung and heart sounds of patients
We monitored the lung and heart sounds of discharged pneumonia patients during their follow-up visit to the hospital to evaluate their lung and heart states.
According to the sensor monitoring and based on the clinical diagnosis of the discharged pneumonia patients, we found four typical characteristics in the recorded sound signals (Table 4), independent of the patients' sex, age, preexisting conditions, severity of illness, and time from the original diagnosis. The four types were: type I, patients with a normal respiratory rate and a normal heart rate; type II, patients with shortness of breath but a normal heart rate; type III, patients with a normal respiratory rate but a high heart rate; and type IV, patients with shortness of breath and a high heart rate (Table 4). Generally, a decreased respiratory rate is a good sign for healthy adults 23, and the heart rates of some athletes or people who exercise often may be lower than those of ordinary adults 23, which may explain why a decreased respiratory rate or a decreased heart rate was hardly found in the discharged pneumonia patients in this study.
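A hypothetical implementation of this four-type classification is sketched below. The thresholds are assumptions taken from the normal ranges quoted earlier (resting respiratory rate 12-20 BPM, adult heart rate 60-88 bpm), and the example inputs are the average values reported for the four types in the following paragraphs.

```python
# Hypothetical classifier reproducing the four sound-signal types in Table 4.
RESP_HIGH = 20   # BPM; above this -> shortness of breath (assumed threshold)
HR_HIGH = 88     # bpm; above this -> high heart rate (assumed threshold)

def classify(resp_rate_bpm: float, heart_rate_bpm: float) -> str:
    short_of_breath = resp_rate_bpm > RESP_HIGH
    high_heart_rate = heart_rate_bpm > HR_HIGH
    if not short_of_breath and not high_heart_rate:
        return "type I"    # normal respiratory rate, normal heart rate
    if short_of_breath and not high_heart_rate:
        return "type II"   # shortness of breath, normal heart rate
    if not short_of_breath and high_heart_rate:
        return "type III"  # normal respiratory rate, high heart rate
    return "type IV"       # shortness of breath, high heart rate

# Example values from the patient measurements reported below
print(classify(16.5, 70.9))   # -> type I
print(classify(24.0, 68.2))   # -> type II
print(classify(13.3, 89.6))   # -> type III
print(classify(30.0, 113.2))  # -> type IV
```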
Characterization of lung and heart sounds of patients
We described our findings of these four types of lung and heart sounds in detail.
Type I patients exhibited both a normal respiratory rate and normal heart rate (Fig. 6a, Audio S3-S4). From our sensor monitoring, both the respiratory rate and heart rate of the patient were in normal ranges (Fig. 6a i , a ii , Table S1). We found that the lung and heart sounds of most discharged patients (29/41) exhibited type I characteristics, with an average respiratory rate of 16.5 ± 1.8 BPM and an average heart rate of 70.9 ± 6.5 bpm (Table 5), where the latter showed no significant difference from the ECG data (p > 0.05, Table 5). This result indicated that most discharged patients (70.7%) recovered from pneumonia and exhibited good lung and heart functions after discharge (Table 5).
Type II patients showed shortness of breath but normal heart rates (Fig. 6b, Audio S5-S6). From our sensor monitoring, the respiratory rate of the monitored patient increased to 24.0 BPM (Fig. 6b i), and the heart rate of the patient was 68.2 bpm (Fig. 6b ii, Table S1). We found 6/41 discharged patients with type II characteristics (Table 5), with an average respiratory rate of 22.9 ± 1.5 BPM and an average heart rate of 75.9 ± 6.4 bpm (Table 5).
The measured heart rates were in good accordance with the ECG data (p > 0.05, Table 5). This result showed that 14.6% of the discharged patients exhibited normal heart function but still suffered from impaired lung function due to pneumonia infection (Table 5).
Type III patients exhibited a normal respiratory rate but an increased heart rate (Fig. 6c, Audio S7-S8). Our sensor monitoring showed that these patients exhibited a normal respiratory rate (13.3 BPM) (Fig. 6c i ) but with an increased heart rate (89.6 bpm) (Fig. 6c ii , Table S1). From our sensor monitoring, we found 2/41 patients with type III characteristics (Table 5), with an average respiratory rate and heart rate of 12.9 ± 0.6 BPM and 88.9 ± 0.9 bpm, respectively ( Table 5). These observations indicate that 4.9% of the discharged patients exhibited recovered lung function but still faced a critical challenge of heart injury ( Table 5).
Type IV patients showed the worst recovery and exhibited both shortness of breath and an increased heart rate (Fig. 6d, Audio S9-S10). Our sensor monitoring showed that a patient's respiratory rate increased to 30.0 BPM (Fig. 6d i) and the patient's heart rate increased to as high as 113.2 bpm (Fig. 6d ii and Table S1). We found 4/41 patients with type IV characteristics (Table 5), with an average respiratory rate of 30.0 ± 1.2 BPM and an average heart rate of 98.8 ± 10.7 bpm (Table 5). These results indicated that 9.8% of discharged patients had very poor recovery and suffered from both heart injury and lung injury after pneumonia infection (Table 5).
Generally, lung injury during pneumonia infection is revealed by chest CT imaging 2,3 and lung ultrasound 7. Even after patients with pneumonia are discharged, they may suffer from lung injuries, such as lung fibrosis and changes in lung function 1. Compared with sophisticated CT or lung ultrasound instruments, our small-sized sound sensor provided a fast and effective evaluation of the lung function of the patients and revealed that 24.4% of the discharged pneumonia patients had lung injury in terms of shortness of breath (types II and IV, Table 5).
In addition, pneumonia prominently affects the cardiovascular system of patients 8 . The presence of cardiac injury and myocardial inflammation in patients recovered from pneumonia was revealed by ECG 7 and cardiovascular MRI 9 . From the heart sounds measured by our sound sensor, we also revealed heart injury in 14.7% of the discharged patients (type III and IV, Table 5). Compared with conventional ECG and cardiac MRI, our sensor provides a simple, easy but very effective approach to evaluate heart injury in discharged patients.
Time course tracking of the lung and heart state of a pneumonia patient
We next tracked the lung and heart sounds of a patient at different time points after discharge to evaluate the time evolution of the lung and heart states of the patient. Over time, from our sound sensor monitoring, we found that the lung and heart function of the monitored patient gradually improved (Fig. 7). From the lung sound waveforms, the respiratory rate of the monitored patient decreased from 23.1 to 18.8 and then to 16.2 BPM on different dates (Fig. 7a i , b i , c i , d), changing from shortness of breath to a normal respiratory rate. These observations suggested that the lung function of the patient gradually improved to a normal state. Moreover, from the monitoring of heart sounds, the heart rate of the patient was 83.3, 76.9, and 74.1 bpm on different dates, respectively (Fig. 7a ii , b ii , c ii , e and Table S2), indicating the improvement of the heart states of the patient.
Time evaluation of lung and heart states of 41 pneumonia patients
We then investigated the time evolution of lung and heart states of discharged pneumonia patients (n = 41). During the first monitoring session on 15 June 2020, the ratio of four types of patients was evenly distributed (Tables S3-S4 and Fig. S2a). Over time, the ratio of type I patients increased, whereas the ratio of type II, III, and IV patients decreased (Fig. S2a). As time went by, the accumulated ratio of type I patients gradually increased from 25.0% to 70.7%, whereas the accumulated ratios of type II, III, and IV patients gradually decreased (Tables S5-S6 and Fig. S2b).
From our sound sensor monitoring, we proved that the lung and heart injuries in pneumonia patients gradually decreased after discharge, and the lung and heart functions of the patients gradually improved over time. The results of our sensor monitoring were in agreement with clinical observations that pneumonia patients can suffer long-term lung and heart damage, but their condition tends to improve over time 25 .
Based on the above results from sound sensor monitoring, we found four typical characteristics in discharged pneumonia patients (Tables 4 and 5 and Fig. 6), and we found lung injury (14.6%), heart injury (4.9%), and both lung and heart injury (9.8%) in discharged patients (Table 5). Our results were consistent with ECG data (Table 5) and with clinical observations based on chest CT 2,3 and cardiac MRI 8,9. With our sensor device, we successfully tracked the recovery course of the pneumonia patients. Over time, the lung and heart states of the patients gradually improved after discharge (Figs. 7 and S2), and our sound sensor observations were in good agreement with the clinically reported tendency 25. Compared with conventional large, sophisticated, and expensive instruments, our small-sized sensor provides a rapid, simple, and highly sensitive approach to detect lung and heart sounds, which greatly helps the evaluation of lung and heart states of pneumonia patients and provides an alternative approach for the diagnosis and prognosis of pneumonia disease. Moreover, our sensor provides a robust approach to capture lung and heart sounds, with which patients can reliably obtain the same high-quality signals as trained medical personnel; therefore, our sensor has great potential for clinical use as well as home-use health monitoring, especially in the field of wearable electronics 26,27.
Fig. 6 Four types of lung and heart sounds recorded from discharged pneumonia patients. a Records of a patient with a normal respiratory rate and normal heart rate (#31). b Records of a patient with shortness of breath but a normal heart rate (#3). c Records of a patient with a normal respiratory rate but a high heart rate (#16). d Records of a patient with shortness of breath and a high heart rate (#4). a i, b i, c i, d i Lung sounds detected by our sensor; a ii, b ii, c ii, d ii Heart sounds recorded by our sensor.
Conclusions
In this study, we developed a two-stage amplified PZT sensor for lung and heart sound monitoring in discharged pneumonia patients. Benefiting from the asymmetric gapped cantilever structure and built-in charge amplifier circuit, our accelerometer exhibited an extremely high ratio of sensitivity to noise compared with commercial accelerometers. In addition, our sensor achieved a sensitivity of 9.2 V/g at frequencies below 1000 Hz, making it suitable for monitoring weak lung and heart sounds. We used our ultrasensitive sound sensor to study the lung and heart states of discharged pneumonia patients. According to our sensor monitoring, for the first time, we classified the discharged pneumonia patients into four types: patients with a normal respiratory rate and normal heart rate, patients with shortness of breath but a normal heart rate, patients with a normal respiratory rate but a high heart rate, and patients with shortness of breath and a high heart rate, which represented 70.7%, 14.6%, 4.9% and 9.8% of the discharged patients, respectively. With our sound sensor, we successfully tracked the recovery course of pneumonia patients. Over time, the lung and heart function of the patients gradually improved to normal performance after discharge. Compared with conventional medical instruments, our small-sized sensor provides rapid, simple, and highly sensitive detection of lung and heart sounds, which greatly helps the evaluation of the lung and heart states of pneumonia patients. Our sensor device provides a cost-effective alternative approach to the diagnosis and prognosis of pneumonia and similar diseases and has great potential for clinical use and home-use health monitoring.
Sensor design and working principles
We designed a two-stage amplified PZT sensor with high sensitivity for cardiorespiratory sound monitoring (Fig. 1a, b). First, the sound sensor was based on piezoelectric materials and an asymmetric gapped cantilever structure (Fig. 1c, d), which was composed of a bottom mechanical layer and a top piezoelectric layer separated by a gap (Fig. 1d and Table 1). The top piezoelectric layer was made from ceramic PZT (lead zirconate titanate, Pb(Zr_xTi_(1-x))O_3), a widely used piezoelectric material 28 . Due to the piezoelectric effect of PZT, the strain on the piezoelectric beam could be converted into electric charge 29 . Second, a built-in charge amplifier circuit was designed to further amplify the electric signal produced by the piezoelectric beam (Fig. 1e). A 1 GΩ resistor (R_F) and a 47 pF capacitor (C_F) were used as the feedback resistor and feedback capacitor, respectively. An ultralow-input-bias-current operational amplifier (LMP7721, Texas Instruments) was used to achieve the best performance of the sensor (Fig. 1e). To minimize surface current leakage, a guard trace was designed to completely surround the input terminal and the other circuitry connected to the inputs of the operational amplifier. In total, the fabricated prototype sensor weighed 13.5 g and measured 39 × 23 × 13.5 mm (l × w × h).
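For orientation, the feedback network quoted above (R_F = 1 GΩ, C_F = 47 pF) fixes the charge-to-voltage gain and the low-frequency corner of an ideal charge amplifier. These derived figures are not quoted in the text; they follow from standard charge-amplifier theory and are computed below only as a sketch.

```python
# Sketch: basic figures of merit for the charge-amplifier feedback network
# described above (R_F = 1 GOhm, C_F = 47 pF), assuming an ideal op-amp.
import math

R_F = 1e9      # feedback resistor, ohms
C_F = 47e-12   # feedback capacitor, farads

charge_gain = 1.0 / C_F                        # volts per coulomb (V = Q / C_F)
f_corner = 1.0 / (2.0 * math.pi * R_F * C_F)   # high-pass corner frequency, Hz

print(f"charge-to-voltage gain: {charge_gain:.3g} V/C")
print(f"low-frequency corner:   {f_corner:.2f} Hz")
# ~3.4 Hz, comfortably below the 20-1000 Hz band of lung and heart sounds.
```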
Theoretical simulation of sensor performance
We used theoretical simulation to estimate the sensor performance for different structures: one was the accelerometer with the asymmetric gapped cantilever structure used in the present study (Table 1), and the other was a conventional cantilever structure (Fig. S1). Harmonic analysis of the dynamic model of the different structures was performed in COMSOL Multiphysics® (COMSOL Inc.), with the parameters set as described in Table 1. The amplitude-frequency responses of the accelerometers under different excitation forces were simulated. The amplitude was then converted to strain, and the strain on the piezoelectric beam was plotted against frequency for varying excitation forces.
Measurement of sensitivity of our sensor
We characterized the frequency response of our sensor using a mechanical shaker (ET-126B, Labworks). We used a commercial accelerometer (752A13, Endevco) as the gold-standard reference sensor. The outputs of both sensors were voltages, which were recorded simultaneously by a 16-bit data acquisition board (NI USB 6210, National Instruments). The voltages were recorded with the shaker set to different frequencies (0 to 2000 Hz) and different accelerations. The sensitivities of the sensors were then calculated across these frequencies.
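The sensitivity calculation implied above can be sketched as follows. The reference accelerometer's nominal sensitivity (0.1 V/g) and the sweep values are placeholder numbers, not figures taken from the text; the device-under-test values are chosen only so the ratio lands near the 9.2 V/g sensitivity reported elsewhere in the paper.

```python
# Sketch: at each shaker frequency, infer the acceleration from the reference
# accelerometer's output and nominal sensitivity, then take the ratio of the
# sensor-under-test output to that acceleration.
S_REF = 0.1  # V/g, assumed nominal sensitivity of the reference accelerometer

def sensitivity_v_per_g(v_dut_rms, v_ref_rms, s_ref=S_REF):
    """Return device-under-test sensitivity (V/g) at one excitation frequency."""
    accel_g = v_ref_rms / s_ref   # acceleration amplitude in g
    return v_dut_rms / accel_g    # V/g of the sensor under test

# Example sweep data as (frequency Hz, DUT rms volts, reference rms volts).
sweep = [(50, 0.92, 0.010), (200, 0.95, 0.010), (800, 1.10, 0.010)]
for f, v_dut, v_ref in sweep:
    print(f, round(sensitivity_v_per_g(v_dut, v_ref), 2), "V/g")
```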
Collection of lung and heart signal data
We recorded the lung and heart sounds of healthy volunteers (n = 5) and discharged pneumonia patients during the follow-up visit in the hospital (n = 41). The pneumonia patients who met the discharge criteria were discharged from the hospital. At different times after discharge (in weeks), the patients visited the hospital for a follow-up examination, and we monitored the lung and heart sounds of patients during their follow-up visit.
During sound monitoring, we placed the device at the right anterior intercostal space above the level of the third rib for respiratory signal detection, and at the fifth intercostal space on the left, immediately lateral to the sternum, for cardiac signal detection (Fig. 1a). We recorded the lung and heart sounds for 60 s for each assay. In this study, data were recorded from 41 pneumonia patients who had recently been discharged, between 15 June and 2 September 2020. If the same patient was monitored several times on different dates, only the data from the first monitoring session were used for the analysis of type classification.
For comparison, we also monitored the lung and heart sounds using a commercial high-end electronic stethoscope (3M Littmann 3200), and the results of the two devices were compared.
Data processing
We transferred the collected data from the sound sensors to a computer through a data acquisition board (NI USB 6210) and further processed the data with LabVIEW® and MATLAB®. We fixed the sampling rate at 6 kHz. For data treatment, we applied a band-pass filter from 20 to 400 Hz to extract heart sounds and a band-pass filter from 60 to 1000 Hz to extract lung sounds.
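A minimal sketch of the filtering step described above, assuming zero-phase Butterworth band-pass filters; only the bandwidths and the 6 kHz sampling rate are stated in the text, so the filter type and order are assumptions.

```python
# Sketch of the stated data treatment: 20-400 Hz band-pass for heart sounds and
# 60-1000 Hz band-pass for lung sounds at a 6 kHz sampling rate.
from scipy.signal import butter, sosfiltfilt

FS = 6000  # Hz

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)   # zero-phase filtering (assumed choice)

def extract_heart_sound(signal):
    return bandpass(signal, 20, 400)

def extract_lung_sound(signal):
    return bandpass(signal, 60, 1000)
```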
Statistical analysis
No statistical methods were used to predetermine sample sizes. The experiments were not randomized, and investigators were not blinded during experiments and outcome assessment. Data are presented as the mean ± standard deviation (SD). Statistical analysis was performed using Student's t-test, and a p value less than 0.05 was considered statistically significant. | 2021-07-23T13:36:37.461Z | 2021-07-22T00:00:00.000 | {
"year": 2021,
"sha1": "628526fc4156ccf52d1c408a22c30a9e454804de",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41378-021-00274-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0af7c829451aef97d185074d15b925cdd56c8b97",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14221093 | pes2o/s2orc | v3-fos-license | Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil
Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation.
Introduction
If an airport or seaport is shut down by biological agent contamination, the economic loss for each missed day would be enormous; it is absolutely essential to restore operations as rapidly as possible. Improved decon methods, such as an electrochemical decon system (eClO2), produce 100% kill of anthrax spores in less than one minute [1]. To demonstrate that a large, complex area is clear requires taking and analyzing thousands of samples. In a crisis, decontamination equipment could potentially be assembled to treat an entire area in a matter of days, but using current sampling methods, many months to years would still be required to analyze samples and re-treat areas that show surviving spores.
The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. Consider anthrax spore contamination of a large U.S. airport with an area of 140 km2 (Denver International Airport), estimated to consist of 20% asphalt, 10% buildings and 70% open fields. Assume that one sample is taken for every 5000 m2 (roughly a football field) of open ground, every 500 m2 of asphalt, and every 100 m2 of buildings. Using these sampling densities, there will be 84,348 samples to evaluate. Based on traditional plating techniques, a single lab can process 40 samples in 48 hours, and so would require about 12 years to complete these samples. To complete the job in two weeks would require 302 labs. Using advanced detection methods (RV-PCR) with a throughput of 150 samples every 48 hours, it would take 3 years for one laboratory to complete the analysis; to get it done in two weeks would require 81 laboratories [2].
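The throughput figures above follow directly from the stated sample count and per-lab processing rates; a minimal sketch of the arithmetic, taking the 84,348-sample total as given, is:

```python
# Sketch of the throughput arithmetic behind the figures quoted above.
# Each analysis batch is assumed to take 48 h, as stated.
import math

TOTAL_SAMPLES = 84_348

def years_for_one_lab(samples_per_48h):
    batches = TOTAL_SAMPLES / samples_per_48h
    return batches * 2 / 365.25                      # 48 h = 2 days per batch

def labs_needed(samples_per_48h, deadline_days=14):
    per_lab = samples_per_48h * deadline_days / 2    # samples one lab can run
    return math.ceil(TOTAL_SAMPLES / per_lab)

print(round(years_for_one_lab(40), 1), labs_needed(40))    # ~11.5 years, 302 labs
print(round(years_for_one_lab(150), 1), labs_needed(150))  # ~3.1 years, 81 labs

# Pooling 20 samples per composite reduces the analytical load to about
print(TOTAL_SAMPLES // 20, "composite analyses")           # 4,217
```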
To address this sampling and analysis bottleneck, composite sampling was investigated to significantly decrease the number of samples that must be analyzed, thereby speeding the recovery process. In composite sampling, multiple samples are combined into a composite sample or pool, which is tested for contamination (live spores in this case) [3]. If the pool is clear, then the entire group has no contamination. If the pooled sample shows contamination, either the entire area can be re-treated, or the area can be sampled in detail to further assess where the contamination is located. In either case, this approach can reduce the number of analyses that must be run by an order of magnitude or more.
Background
In cases where a large number of samples must be analyzed, with a strong majority producing the same result, it is possible to dramatically reduce the number of analyses by pooling or grouping samples. This approach was described in 1943 by Dorfman [4] who proposed testing blood samples for syphilis by pooling them into groups rather than testing each sample individually. If a pool tests positive, the individuals in that pool will be retested so that the infected individuals can be identified; if the pool is negative, then a large amount of time is saved because only one test has to be run, rather than testing all samples in the pool (10, 100, or whatever pool size is selected). This pooling or composite sampling procedure can greatly reduce the analysis time and costs with no loss in accuracy [5].
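The savings from this two-stage scheme can be quantified with the standard textbook expression for Dorfman group testing; the formula below is not given in this paper and is included only to illustrate the scale of the reduction, with p the fraction of positive samples and k the pool size.

```python
# Standard two-stage (Dorfman) group-testing arithmetic: every pool gets one
# test, and each positive pool triggers retesting of all k members.
def expected_tests_per_sample(p, k):
    prob_pool_positive = 1.0 - (1.0 - p) ** k
    return 1.0 / k + prob_pool_positive

for k in (5, 10, 20):
    # With 1% of samples positive, pools of 10 need ~0.20 tests per sample,
    # i.e. roughly a five-fold reduction in analyses.
    print(k, round(expected_tests_per_sample(0.01, k), 3))
```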
The standard procedure is to test the composite; if it tests positive, then re-test the individual samples to identify all of the positive samples. In a wide area biological agent restoration event we could follow this procedure if desirable, but to save valuable time, the more likely course is simply to re-treat the area covered by the group that tests positive until decontamination is achieved.
Pooled or composite sampling has been well documented in the literature; see the monograph by Patil et al. [6]. It is commonly used in drug discovery, and has been applied to environmental sampling, including Superfund sites that were analyzed for the presence of polychlorinated biphenyls (PCBs) [6]. Piepel et al. [7] describe the plan for a proposed test to release B. atrophaeus spores in a building at Idaho National Laboratory, using composite sampling approaches to reduce sample numbers. Lancaster [8] describes a RCRA facility investigation at Los Alamos National Laboratory; the contamination included radionuclides plus mercury and other inorganics in a site approximately 40 feet long and 15 feet wide. These authors identify both some concerns and some promising approaches for environmental analysis. Drielak [9] describes composite sampling for the forensic investigation of a CBRN event. Emanuel et al. [10] describe sampling procedures in the event of a biological attack. These references document the validity of composite sampling for such situations, but tend to focus on statistical analysis, and none addresses the wide-area decon challenge.
The benefits of composite sampling techniques are obvious: pooling two samples halves the number of samples that must be analyzed. Pooling 20 samples reduces the number of samples by a factor of 20, accelerating data acquisition. Compositing 20 samples can reduce the number of analyses that must be performed in the above (DIA) example to 4,217 samples. Using the new RV-PCR technique, four labs could accomplish the task of analyzing the samples for an anthrax spore contamination event within 14 days, a viable and acceptable timeline. A recent paper by EPA researchers considers methods for composite sampling of an anthrax spore surrogate from a non-porous surface using a single swab on multiple sampling surfaces [11]; however, additional work is required to fill data gaps and validate the method through field demonstrations and development for implementation in an anthrax spore remediation event.
Traditional sampling and analysis procedures have been identified, tested and validated by the CDC. These sampling method procedures are available on the CDC website [12]. Much effort has gone into validating these procedures, and they should continue to be utilized. In addition, these described compositing techniques do not alter the approved analysis methods used to detect or quantify viable spores present in the samples. While new methods to improve this analysis are being developed, the tried and true standard is to plate the sample suspension on a growth medium so that colonies can be counted and the number of viable spores in the sample calculated. Our approach is to take samples that have been collected using established sampling methods, composite them to reduce the number of laboratory samples, and analyze them using established procedures. This work demonstrates that compositing methods can work in combination with accepted sampling and analysis methods.
One of the challenges of developing a composite sampling approach is defining exactly how it is best used in the field under real-world conditions. In other words, what is the best practice for compositing samples? Using the procedures described above, we will define two general methods: the single-medium compositing method and the post-sample compositing method. Further description of these techniques is provided below. It is also worth pointing out that a combination compositing approach using both of these methods would be possible.
In the single-medium compositing method, a single sampling medium is used to sample multiple locations. For example, a macrofoam swab has four sides and thus four sampling surfaces. In this compositing method, a single side of the swab could be used to sample a single location. Four different sample locations would then be sampled with a single swab, each using a new side of the swab. The swab is then analyzed using standard 'individual sample analysis' methods as though it were a traditional single sample. The significant advantage of this technique is that the number of sampling kits required to be prepared prior to the sampling process is reduced by a factor of four. This can represent a significant reduction in the cost and labor needed to prepare large numbers of sampling kits, and also reduces the number of samples that must be tracked through the sample handling and analysis system. Another advantage is that no modifications to laboratory analysis procedures are required, as the single sampling medium is tested using standard procedures [13]. The EPA has expressed interest in this approach because the Bio-Response Operational Testing and Evaluation (BOTE) project showed that the time and effort for sampling media preparation were significant [14]. The disadvantage of this approach is that only 4 locations can be composited, thus limiting the potential benefits (both time and analysis cost) that come with composites made from a large number of locations.
In the post-sample compositing method, a single sample is used to sample a single location; multiple samples from various locations are combined after the sampling process. These samples can be combined just prior to laboratory analysis, or in the field by placing together all samples to be composited for laboratory analysis. The advantage of this method is that it maximizes flexibility; numerous samples can be composited (from two samples to many) and multiple sampling media types can be composited. The disadvantage is that all sampling kits have to be prepared as if every individual sample were to be taken; however, laboratory sample preparation and analysis time would be greatly reduced.
While testing has been performed using either the single-medium compositing or post-sample compositing methods, there is no reason that both of these techniques cannot be used simultaneously. As will be described below, both of these techniques have been independently verified. Using these techniques, single-medium composite samples could be taken, providing the advantage of requiring fewer sample swabs to be prepared. The samples could then be combined prior to analysis using the post-sample compositing technique to allow the analysis of more than four samples at one time. Thus the compositing of two four-sided swabs would allow a single analysis of 8 sample locations. This type of technique would reduce the number of samples that have to be prepared by a factor of 4 and further reduce the number of samples that must be analyzed by a factor of 8, significantly reducing the time and cost required to prepare samples and the number of samples that must be analyzed in the laboratory.
The work described in this paper was undertaken to demonstrate the proof-of-concept of composite sampling as a tool that Federal On-Scene Coordinators or others responsible for directing a wide-area biodecon operation can use during the recovery process. The methods described here, in conjunction with a good statistical sampling plan, such as the Visual Sample Plan (VSP) software tool (developed and maintained by Pacific Northwest National Laboratory [15]), will be critical to a successful and timely recovery from an anthrax spore event.
Materials and Methods
For these experiments, commercially available biological indicator spore suspensions of B. subtilis (1.9 × 10^8 CFU/mL, NAMSA, SBS-08) were used as a surrogate for the biological agent B. anthracis.
Arizona test dust (Powder Technology, Inc.) was steam autoclaved in 250 mg portions in 20 mL glass vials, placed on their sides to maximize the soil surface area exposed to steam, at 250°F for 45 mins. Potting soil (Miracle-Gro Moisture Control Potting Mix) was steam autoclaved in 250 mg portions, in 20 mL glass vials placed on their sides to maximize the soil surface area exposed to steam, at 273°F for 2 hrs. One liter of Butterfield buffer (BBT) was prepared with 26.22 g potassium dihydrogen phosphate (Sigma-Aldrich), 7.78 g sodium carbonate (Sigma-Aldrich), 1 L distilled water (SpectraPure RO/DI) and 1 mL Tween 80 (Sigma-Aldrich). The buffer was sterilized by autoclaving at 250°F for 60 mins. Tryptic soy broth (TSB) agar plates were prepared using 30 g TSB, 15 g agar and 1 L distilled water. The TSB agar was autoclaved (Tuttnauer 2340M) and then poured into sterile Petri dishes.
The filter membrane is removed and placed in a sterile glass jar with a lid, along with 4-6 glass beads (VWR), a magnetic stir bar (Sigma-Aldrich) and ten milliliters of BBT dilution solution. The jar is then sonicated (SPT UC-0609) for 5 minutes and placed on a stir plate for at least 30 minutes, or until the filter is completely macerated. The resulting solution is diluted and plated on TSB plates to determine the number of CFU on the filter assembly.
Arizona Test Dust Compositing
Droplets of B. subtilis (0.1 mL, 2.1 × 10^5 CFU/0.1 mL dilution) were added to sterilized dust (250 mg) and then left undisturbed for 20 mins. Afterwards, 1.9 mL of BBT buffer was added to the mixture (theoretically 1.05 × 10^4 CFU/0.1 mL in the tube). The mixture was vigorously shaken by hand, then vortexed for 5 mins, before allowing the soil particles to settle. The somewhat cloudy supernatant was removed and transferred to a sterilized vial. Then 0.1 mL of the supernatant was diluted with 9.9 mL of buffer to form a 10X dilution, from which 0.1 mL was plated on TSB agar plates.
Composite sampling with 4, 10 and 20 individual samples was carried out by gravity filtration of the supernatant from each individual sample through a 0.2 micron filter funnel (VWR), followed by rinsing with BBT buffer. The membrane was removed and digested with the desired volume of buffer, from which 0.1 mL was plated on TSB agar plates (as described above).
Sterilized Potting Soil Compositing
Droplets of B. subtilis (0.1 mL, 2.1 × 10^3 CFU/0.1 mL dilution) were added to sterilized soil (250 mg) and then left undisturbed for 20 mins. Afterwards, 3.9 mL of buffer was added to the mixture (theoretically 52.5 CFU/0.1 mL in the tube). The mixture was vigorously shaken by hand, vortexed for 5 mins, and then left to "soak" for an additional 20-30 mins. The dark brown to black supernatant was removed and transferred to a sterilized vial, and 0.1 mL was plated on TSB agar plates.
Composite sampling with 4, 10 and 20 individual samples was carried out by gravity filtration of the supernatant recovered from individual samples through a 0.2 micron filter funnel, followed by rinsing with buffer. The membrane was removed and digested with the desired volume of buffer, from which 0.1 mL was plated on TSB agar plates (as described above).
Controls
Blank controls were performed to verify the B. subtilis CFU counts that were placed on each sample. The results of these control population counts were used to calculate the recovery efficiency for each sample, or are presented directly in the data results. All samples were plated in triplicate and the average population count is reported in terms of colony forming units (CFU).
Statistical Box Plots
The statistical box plots used in the figures are presented so that the data and box plot are overlapped. The box represents 25 to 75% of the data spread, and the whiskers are set to extend to two standard deviations of the data.
Statistical Analysis
Hypothesis testing was performed using one-sample t-tests, comparing a single data point with a larger population, and two-sample t-tests, comparing two sample data sets together. In most cases the null hypothesis states that the two-sample populations are equal or that the single sample result matches the comparative sample mean. The level of significance for all tests was set at 0.05. The Origin 9.1.0 software package was used to perform the analyses.
Proof-of-concept Compositing
The CDC has published sampling procedures (emergency response resources: surface sample procedures for Bacillus anthracis spores on smooth, non-porous surfaces, revised April 16, 2012) [5]. Based on those verified procedures, the macrofoam swab and cellulose sponge sampling methods use only a wetted swab or sponge for sampling. Once the material has been used to sample the contaminated surface, it is placed in a screw cap container prior to analysis. In the laboratory, the samples are treated to remove the spores from the sampling media, and 3 milliliters of spore suspension per sample remains [14]. From the three milliliters of spore suspension, samples are diluted and plated on growth media to determine the number of viable spores. Thus, when compositing samples, each sample represents a 3 milliliter volume, and so a combination of 100 samples means the volume to analyze is 300 milliliters.
The challenge for composite sampling is the dilution caused by the combination of these samples. For example, if only a single location is 'hot' and that sample is diluted with 99 other clean samples, instead of detecting the spores in 3 milliliters the same number of spores now have to be detected in 300. We tested whether concentrating the spores in suspension by filtration, prior to analysis, could overcome this dilution. A sterile 0.2 micron filter is used to filter the spore suspension and collect the individual spores. This filtration also allows any residual decontaminant or surfactants to be washed away. Once the filtration step has been completed the spores are recovered from the filter by washing and suspended in a small volume for analysis.
These tests used commercially available biological indicator spore suspensions (specifically, B. subtilis). TDA performed a population count on this spore suspension and determined that there were approximately 2.16 ± 0.07 × 10^8 CFU per 0.1 mL in the suspension.
Once this concentration of viable spores was identified, we assessed how the filtration process affects the recovery of spores. To accomplish this task, we prepared three-milliliter sample solutions that each contained ~2000 CFU of spores. Some of these solutions were simply plated and counted; others were filtered, recovered and then plated and counted. This allows for the determination of any systematic deviation in the spore count caused by the filtration process. Based on the results of 15 growth plates from both the filtered and non-filtered control, on average 2070 ± 486 CFU were recovered from the control and 1933 ± 443 CFU from the filtered samples. The averages from these measurements are well within one standard deviation of each other, and a two-sample t-test shows they are statistically equivalent (p = 0.36899; at the 0.05 level, the population mean is not significantly different from the test mean). A box plot of the data from the control and the filtered sample is shown in Fig 1.

Once it was established that the filtration process could successfully be accomplished and that there were no systematic losses of spores, we began to further dilute the samples as they would be during compositing, to ensure that the spores could still be recovered. In other words, we took a fixed number of spores and diluted them to greater and greater volumes and then filtered them to ensure we could recover those spores. One concern is that, as the volume of diluting solution becomes larger and larger, the spores may become lodged in the filter and become unrecoverable, or the filter might fail and some of the spores may be lost. We diluted a three milliliter spore suspension up to a volume of one liter, representing a compositing of 333 individual samples. This diluted solution was then filtered and the filter treated to recover the spores. A population count was then performed to determine the number of spores recovered. This count was then compared with the unfiltered control sample to determine if there was any deviation in the spore count: matched spore recovery is expected if the filtration process works correctly.
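The filtered-versus-control comparison described above can be sketched with scipy's two-sample t-test from summary statistics. The means, standard deviations and n = 15 plates per group are the rounded values reported in the text, so the p-value obtained here only approximates the published p = 0.36899, which was presumably computed from the raw plate counts.

```python
# Sketch: two-sample t-test on the rounded summary statistics reported above.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(mean1=2070, std1=486, nobs1=15,   # unfiltered control
                              mean2=1933, std2=443, nobs2=15)   # filtered samples
print(result)  # p >> 0.05: no significant loss of spores caused by filtration
```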
The results of these systematic spore suspension dilution and recovery experiments showed that even at a high dilution level, as would be expected after compositing numerous samples, the spores can be concentrated and recovered. A box plot of the colony counts in the control and the filtered sample is shown in Fig 2a. Composite samples comprising as few as 5 and as many as 333 individual samples have been filtered and the colony count determined in each case. Fig 2b provides the number of data points, average spore count, standard deviation and two-sample t-test p-statistic comparing each treated sample to the untreated population control. These data provide a proof of concept for the ability to combine samples, as would be required to analyze composite samples, and then concentrate them for analysis without losing the sensitivity to detect small numbers of spores. These test data confirm that the dilution associated with pooling many samples does not lead to a decrease in sensitivity because the spores can be reconcentrated by filtration. Therefore, composite sampling can be particularly advantageous in the detection of anthrax spores.

There is an additional situation where concentration of samples by filtration may be desirable. Assume that the analytical method used has some limit of detection, or that there is a tradeoff between sensitivity and the time for analysis. Consider a situation in which the effective limit of detection is 10 spores. Now consider a collection of 300 samples, in which 20 are positive, each containing 5 spores. If each of the 300 samples is analyzed individually, none will be identified as positive. However, if all 300 samples are pooled, the resulting composite contains 100 spores, and will easily test positive. In this case, we have reduced the geographic resolution, but have obtained a correct result (avoiding false negatives) while lowering the time and cost of analysis.
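The detection-limit argument above amounts to comparing per-sample and pooled spore counts against the assumed limit of detection; a minimal sketch of that arithmetic:

```python
# Arithmetic behind the detection-limit example above: 300 samples, 20 positive
# with 5 spores each, and an assumed effective limit of detection of 10 spores.
LIMIT_OF_DETECTION = 10
n_positive, spores_each = 20, 5

detected_individually = spores_each >= LIMIT_OF_DETECTION   # False: 5 < 10
pooled_spores = n_positive * spores_each                    # 100 spores in the pool
detected_pooled = pooled_spores >= LIMIT_OF_DETECTION       # True: 100 >= 10
print(detected_individually, pooled_spores, detected_pooled)
```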
Arizona Test Dust Compositing. In these tests, 2.2 × 10^5 CFU of B. subtilis spores were added to each sample of Arizona Test Dust. Twenty individual samples were prepared; colony counts were determined for each individual sample, and also for composites of 4, 10 and 20 samples. Initially, four individual samples were combined, giving five sets of composite samples. Both the individual samples and the composites were analyzed. A box plot of the population control, individual samples and corresponding composites is shown below in Fig 3a. An average recovery of 86.7 ± 12.7% from the individual samples was achieved. The experiments were performed in set order (1 through 5), which shows a general trend of improved recovery and reduced deviation.
The spore recovery from the five four-sample composites was 91.3 ± 11.1%. Fig 3 shows that there are no significant losses in the number of spores in the composited samples. One-sample t-tests were used to compare each of the five four-sample sets with its corresponding pooled composite. All sample sets and their corresponding composites were statistically matched, except set 3 (p = 0.03589; at the 0.05 level, the population mean is significantly different from the test mean). The composited sample for set 3 was higher than the individual samples; however, it was within two standard deviations of the individual samples that made up set 3. In addition, the composited sample for set 3 had a result (229,500 CFU) nearly identical to the average spore population count (219,556 CFU) used to make these samples. Fig 3b shows the number of data points, average spore count, standard deviation and one-sample t-test p-statistic comparing each individual sample set with its corresponding composite pooled sample.
Using the twenty individual Arizona test dust recovery samples described above, the same methods that were used to composite four samples were used to prepare a composite of 10 individual samples. The results of these tests are shown in Fig 4, where the first column is the population control data, followed by the result of the 10-sample composite and then the 10 individual samples used to produce the composite. The spore recovery for the 10-sample composite is 99.5% of the spores that were added to the individual samples. A one-sample t-test of the composited and individual samples shows that they are matched, with p = 0.22062 (at the 0.05 level, the population mean is not significantly different from the test mean). These data show that compositing ten individual samples does not cause a loss of spores, and nearly all the spores are recovered in a single analysis.
Using the twenty individual Arizona test dust recovery samples described above, the same methods that were used to composite 4 and 10 samples were used to composite all twenty samples. The results of these tests are shown in Fig 5, where the first column is the population control data, followed by the 20-sample composite and then the results of the 20 individual samples used to produce it. The spore recovery for the 20-sample composite is 82.9% of the spores that were added to the individual samples. These data show that compositing twenty individual samples does not cause a significant loss of spores, and nearly all the spores are recovered in a single analysis. A two-sample t-test between the population control and the 20 individual samples showed that they were statistically equivalent (p = 0.217; at the 0.05 level, the population mean is not significantly different from the test mean). A one-sample t-test between the 20-sample composite and the 20 samples used to make that composite showed that the composite was statistically low (p = 0.00214; at the 0.05 level, the population mean is significantly different from the test mean); however, the 20-sample composite result is within one standard deviation of the results of the 20 individual samples.
Based on these spore recovery results on Arizona test dust, we were able to show that the techniques and methods identified can be used to combine many samples into a single composite without loss of accuracy or dilution of spores in the sample. In theory, hundreds of samples could be composited using these techniques. Instead of continuing with larger numbers of composited samples, we next moved to a more challenging soil matrix.

Sterilized Potting Soil Compositing. In these tests we evaluated composite sampling of a more complicated soil matrix: steam-sterilized potting soil. Potting soil is much more complicated than Arizona Test Dust. Typical commercial potting soils contain peat, composted bark, sand and perlite, and can include fertilizers and slow-release nutrients in which plants can be grown. Potting soil is a next level of sophistication towards actual environmental sampling. To increase the challenge, in this test we also reduced the number of spores added to each individual sample, from the 2.1 × 10^5 CFU used on the Arizona test dust samples to under 2000 CFU. The more complicated soil matrix and reduced number of spores did not require a change in the testing protocols. Despite this increased challenge, the compositing techniques that were demonstrated performed as desired.
The same basic series of experiments that was performed on the Arizona test dust was also carried out on the sterilized potting soil. A single population control sample shows that 1870 CFU were added to each individual sterilized soil sample. A box plot of the population control and the results from the individual samples is shown in Fig 6. A one-sample t-test showed that spore recovery matched the population control (p = 0.168; at the 0.05 level, the population mean is not significantly different from the test mean). The average spore recovery for the individual samples was 96.6 ± 11.0%. The population control was within one standard deviation of the results of the 20 individual contaminated samples.
As with the Arizona test dust, we first used four-sample composites to test the compositing techniques. Two four-sample composites were prepared; colony counts were determined for the individual samples and the composites. One composite shows a result that is statistically low using a one-sample t-test (p = 0.01; at the 0.05 level, the population mean is significantly different from the test mean). The composite is outside two standard deviations of the individual samples; it is, however, within three standard deviations. This deviation was the largest observed in this study, and while a majority of the spores were recovered (88%), additional testing was carried out with larger composites without seeing any significant loss of spores. As with the Arizona test dust samples, composites of 10 and 20 were also completed with the contaminated potting soil. The results of the 20-sample composite (both individual results and the composite result) are shown on the left side of Fig 8, while the right side shows the individual results and composite for the 10-sample analysis. The twenty-sample composite is within one standard deviation of the individual results and shows good agreement; a one-sample t-test showed matched recovery of spores (p = 0.324; at the 0.05 level, the population mean is not significantly different from the test mean). The 10-sample composite is within two standard deviations of the individual results, but a one-sample t-test suggested a statistical difference (p = 0.001; at the 0.05 level, the population mean is significantly different from the test mean).
A summary box plot of all of the data generated to demonstrate composite sampling and recovery of bacterial spores from sterilized potting soil is presented in Fig 9. Composites of up to twenty samples worked well with the techniques described above and the results appear well correlated with the individual samples used to make up the composite. The data suggest that composite samples from a larger number of individual samples could be effectively concentrated and analyzed.
Conclusions
In the event of widespread anthrax spore contamination, the time required to obtain and analyze the samples needed to identify the location of contamination (hazard mapping) and verify decontamination (clearance sampling) constitutes a bottleneck in the recovery process. In this effort, a proof-of-concept study was performed to show that bacterial spore samples (surrogates for anthrax) can be composited without spore loss due to dilution. A compositing methodology was described and demonstrated by recovering spores from contaminated soil substrates.
Preliminary testing was performed and demonstrated that spore suspensions can be successfully concentrated during composite sampling analysis to mitigate dilution issues. Composites as large as 333 samples were analyzed without loss in sensitivity. In addition, it was shown how composite sampling can actually improve the detection of low concentrations of anthrax spores over the analysis of individual samples.
Spore recovery from dirt samples was used to mimic real-world environmental samples. The composite samples prepared from the spores recovered from dirt showed that the laboratory techniques are viable and that the results represented those from the individual samples. No loss in sensitivity or loss of spores was observed. | 2017-07-06T09:13:12.149Z | 2015-12-29T00:00:00.000 | {
"year": 2015,
"sha1": "6f0addc5cfc393418dea50fd61fff043cb51337d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0145799&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f0addc5cfc393418dea50fd61fff043cb51337d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
3065523 | pes2o/s2orc | v3-fos-license | PREVALENCE OF POTENTIALLY REVERSIBLE DEMENTIAS IN A DEMENTIA OUTPATIENT CLINIC OF A TERTIARY UNIVERSITY-AFFILIATED HOSPITAL IN BRAZIL
The importance of investigating the etiology of dementia lies in the possibility of treating potentially reversible dementias. The aims of this retrospective study are to determine the prevalence of potentially reversible dementias among 454 outpatients seen at the Cognitive and Behavioral Neurology Unit, Hospital das Clínicas, University of São Paulo School of Medicine, Brazil, between the years of 1991 and 2001, and to observe their evolution in follow-up. Among the initial 454 patients, 275 fulfilled the DSM-IV criteria for dementia. Alzheimer's disease was the most frequent diagnosis (164 cases; 59.6%). Twenty-two cases (8.0%) of potentially reversible dementia were observed, the most frequent diagnoses being neurosyphilis (nine cases) and hydrocephalus (six cases). Full recovery was observed in two patients and partial recovery in 10 patients. Two cases were not treated and eight cases were lost on follow-up. The prevalence found in the present study falls within the range reported in previous studies (0-30%).
According to Maletta 1 , the concept of reversible dementias, as it has been most frequently understood, covers three groups of distinct conditions. The first consists of depression with associated cognitive impairment, which is often referred to as "pseudodementia". The second group comprises conditions that more commonly cause acute confusional states or delirium, such as toxic and metabolic disturbances. The third group of reversible dementias is composed of conditions such as normal pressure hydrocephalus or neurosyphilis, also referred to as secondary dementias (as opposed to primary dementias such as Alzheimer's disease) or disease-specific dementias. The third group embodies reversible dementias as they are most widely accepted. The first problem encountered when trying to identify the prevalence of potentially reversible dementias is the lack of consistent definitions of potentially reversible causes of dementia across various studies 2 . For instance, depression may have been considered a treatable dementia in earlier studies [2][3][4][5] . However, it may not fulfill the current criteria for dementia 4 .
The prevalence of potentially reversible dementias has previously been reported by several authors, and the results have varied widely among studies [2][3][4][5][6][7][8][9][10] . One of the first studies on potentially reversible dementias is that of Marsden & Harrison 11 , in which 27 out of 108 patients presented potentially reversible dementias. Clarfield 3 , in a meta-analysis of 32 studies, found that the prevalence of potentially reversible dementias was 13.2% (ranging from 0 to 32.5%). The most frequent causes found were drugs, depression and metabolic disturbances. Barry and Moskowitz 2 , reviewing 10 studies published between 1972 and 1986, found the prevalence of treatable conditions to range from 1.3% to 30%. Weytingh 4 , in a quantitative review of 16 studies published between 1972 and 1994, found a prevalence of potentially reversible causes of dementia of 15.2%. The most frequent causes found were again depression and drug intoxication. The prevalence of potentially reversible dementias in Brazil has previously been reported in three studies. One 12 indicated a prevalence of potentially reversible dementias of 23.6%; of these, 2.7% had partially reversed dementia and 1.8% had fully reversed dementia after treatment of the underlying disease. The most frequent cause of potentially reversible dementia was a low serum vitamin B12 level. In a previous study from our outpatient unit 7 , eight out of 100 patients presented potentially reversible causes of dementia, namely hydrocephalic dementia (six cases, four of which had normal pressure hydrocephalus) and neurosyphilis (two cases). Vale and Miranda 9 found, among 186 patients, potentially reversible causes of dementia in 32 cases (16 cases of alcoholism, 10 cases of normal pressure hydrocephalus, 4 cases of neurosyphilis and two cases of depression).
The importance of investigating potentially reversible dementias and determining their prevalence lies not only in the obvious opportunity to lessen a patient's cognitive impairment by treating the underlying cause of such a condition, but also in deciding how to investigate such dementias. Depending on whether the pretest probability of potentially reversible dementia is high or low, one must consider the cost-benefit of each test and the burden those tests would impose on patients, should a broader diagnostic approach to dementia be used, as well as the probability of false positive tests 2,4 .
The actual reversibility of potentially reversible dementias has been reviewed in studies by Clarfield 3 and Weytingh et al. 4 . Clarfield 3 found that, in 11 studies, 11% of the cases of such dementias showed improvement after treatment (8% with partial recovery and 3% with complete recovery). Weytingh et al. 4 found partial reversal in 0 to 23% of dementia cases (average 9.3%) and full reversal in 0 to 10% (average 1.5%). It was also observed that the proportion of partially and fully reversed dementia cases has fallen over the past few years, thereby intensifying the discussion over an adequate approach to dementia.
The aim of this study is to ascertain the prevalence of potentially reversible dementias in the Behavioral and Cognitive Neurology Unit of the Division of Neurology, Hospital das Clínicas, University of São Paulo School of Medicine, Brazil (USP), a clinic dedicated to patients with cognitive impairment, by retrospectively studying the cases of outpatients seen over ten years (1991-2001), and to observe the evolution of potentially reversible dementia cases in follow-up.
METHOD
We retrospectively reviewed the files of 454 outpatients seen consecutively in the USP Cognitive and Behavioural Neurology Unit over the period 1991 to 2001.
The patients' clinical evaluation included a complete clinical history and physical/neurological examination, along with cognitive evaluation. A laboratory evaluation was also performed and included complete blood count; serum sodium and potassium, urea, creatinine, cholesterol, triglycerides, uric acid, calcium, phosphorus, total protein, albumin, globulin, bilirubin, alkaline phosphatase, γ-glutamyl transferase and transaminase concentrations; erythrocyte sedimentation rate; serum thyroxine, T3 and thyroid-stimulating hormone concentrations; serum VDRL and FTA-ABS; and computed tomography or magnetic resonance imaging of the head. Other tests were performed based on the diagnostic hypotheses considered for each case.
The diagnosis of dementia was based on the Diagnostic and Statistical Manual of Mental Disorders - Fourth Edition (DSM-IV) 13 criteria for dementia. The diagnosis of Alzheimer's disease (possible or probable) was made according to the National Institute of Neurological and Communicative Disorders and Stroke - Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) 14 criteria. The diagnosis of definite Alzheimer's disease was made post mortem (when an autopsy was performed). The diagnosis of vascular dementia followed the National Institute of Neurological Disorders and Stroke - Association Internationale pour la Recherche et l'Enseignement en Neurosciences (NINDS-AIREN) 15 criteria for probable and possible vascular dementia.
The criteria used in the diagnosis of dementia with Lewy bodies were those of the consortium on DLB consensus guidelines 16 . Frontotemporal dementia was diagnosed based on the modified Lund-Manchester criteria 17 . Parkinson's disease with associated dementia was diagnosed when the parkinsonian syndrome was as evident as, or more evident than, the dementia syndrome. The diagnosis of depression was based on the Diagnostic and Statistical Manual of Mental Disorders - Fourth Edition 13 criteria for depressive disorder. Other diagnoses were made based on usual criteria.
The observation of improvement (or lack thereof) in each case was based on clinical impression when data were available. When data were lacking, patients and/or families were contacted by telephone and the cognitive outcome was obtained from their impressions.
The analysis of the data was performed using the SPSS for Windows software package, version 10.0.1.
RESULTS
Two hundred and seventy-five patients fulfilled the criteria for the diagnosis of dementia. Of these, 79 had already been included in a former report 7 . Among the remaining 179 patients who did not have dementia, depression was diagnosed in 31 individuals.
Alzheimer's disease (AD) was the most frequently established diagnosis (Table 1). Among the 164 cases of AD, the diagnosis of definite AD was made in four cases. Probable AD was found in 95 cases, whereas 65 cases of possible AD were encountered. The second most frequent diagnosis was vascular dementia (VD). Probable VD was the final diagnosis in five cases, the other 32 individuals being considered possible VD cases (hydrocephalus was found as a comorbidity in one case of possible VD). In 20 cases, no specific etiology for dementia could be established. Other diagnoses are listed in Table 1.
In the present study, neurosyphilis, hydrocephalus, alcoholic dementia, Wernicke-Korsakoff syndrome, Wilson's disease and subdural hematoma were considered potentially reversible causes of dementia, thus totaling 22 patients (8.0% of all cases with dementia and 4.8% of all patients seen) in the potentially reversible dementia group. The two most frequent diagnoses in this group were neurosyphilis (nine cases) and hydrocephalus (six cases). Hydrocephalus was also found as a comorbidity in 2 cases (one with possible VD, previously described, and one with neurosyphilis).
The demographic characteristics (age and schooling years) of the potentially reversible dementia group are listed in Table 2. The gender distribution was as follows: three female and 22 male. Table 2 also lists the characteristics of the irreversible dementia group, which was composed of 147 females and 105 males. The groups differed with respect to gender (p~0, chi-square test), as there were more male patients in the reversible dementia group. No statistically significant difference was found between the groups for schooling years (p=0.065, Mann-Whitney U test), but a significant difference was found for age (p~0, Mann-Whitney U test); the potentially reversible dementia group had a lower mean age than the irreversible dementia group. Among the patients in the potentially reversible dementia group, full recovery was observed in two patients (one diagnosed with neurosyphilis and the other with hydrocephalus) after proper treatment. Partial recovery was the outcome of treatment in 10 patients (nine with neurosyphilis and one with subdural hematoma). In two individuals (one with alcoholic dementia and the other with hydrocephalus), specific treatment for the etiology of dementia was not indicated, and on follow-up they were cognitively stable. Eight cases were lost on follow-up.
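For reference, the gender comparison reported above can be reproduced approximately with a standard chi-square test of independence; the sketch below uses scipy and the counts as stated in the text (3 female / 22 male versus 147 female / 105 male), not the original raw data.

```python
# Sketch: chi-square test of independence on the reported gender counts.
from scipy.stats import chi2_contingency

table = [[3, 22],      # potentially reversible dementia group (F, M)
         [147, 105]]   # irreversible dementia group (F, M)
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), p)   # p is very small, consistent with the reported p ~ 0
```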
DISCUSSION
The prevalence of potentially reversible dementias in this sample, at 8%, is similar to that found in a previous study by our group 7 . A remark should be made regarding the fact that, in contrast to the present study, the previous study did not include one case of alcoholism (out of one hundred) as a reversible etiology for dementia. Alcoholic dementia has been a subject of discussion as to its actual reversibility 18 . In spite of this, it fits within the concept of potentially reversible causes of dementia, and therefore it was included in this study. The results obtained in this article are also consistent with the prevalence range for potentially reversible dementias reported in previous studies 18,19 , between 0 and 30%.
The most frequent diagnoses among the potentially reversible cases were neurosyphilis and hydrocephalus. In past studies [2][3][4] , depression and drug intoxication were the most frequent diagnoses for such cases. Nonetheless, depression was not considered a reversible dementia in this study. A prevalence study based on a tertiary hospital dementia outpatient clinic population is subject to a selection bias, which is basically a consequence of the local health system structure and the community from which the cases are drawn; therefore, this kind of study is a unique reflection of the group's experience. For instance, neurosyphilis was found to be the most frequent etiology for potentially reversible dementia in this particular study. However, this diagnosis has not been observed in previous series, including other Brazilian prevalence studies 9,12 . Notwithstanding, prevalence studies such as the present one are important not only for the reasons previously reported, but also for contributing towards establishing a ground of epidemiological data regarding potentially reversible dementias in developing countries, from which data are still scarce. It has been suggested that there is a higher prevalence of potentially reversible dementias among individuals aged less than 65 years 20 . Indeed, in the present study a significant difference in age distribution between the potentially reversible dementia group (which had a lower median age) and the irreversible dementia group was found. This finding could signify that some potentially reversible causes of dementia, such as neurosyphilis, are more prevalent among a younger population, while primary dementias such as AD have a progressively higher incidence with increasing age 21 .
Alzheimer's disease was the most frequent diagnosis for the dementia syndrome in the sample considered here. The prevalence found, of 59.6%, is also consistent with previous series. Marsden & Harrison 11 described a prevalence of presumed AD of 57.1%. Nitrini et al. 7 reported a diagnosis of AD in 54% of 100 outpatients, as well as a prevalence of 20% for vascular dementia. Clarfield 3 , reviewing 32 studies that investigated the prevalence of dementias, found that Alzheimer's disease was diagnosed in 56.8% of the cases, while Ames et al. 6 had 59 out of 100 cases diagnosed with AD.
Among patients with potentially reversible dementias, two had a full recovery and 10 had a partial recovery after treatment, representing 9% with full recovery and 45.4% with partial recovery. Considering the data discussed above, these findings do not lie within the range found in previous studies. It must be taken into account that in the present sample, unlike other studies, neurosyphilis was the most frequent diagnosis among potentially reversible causes of dementia (as well as the leading diagnosis among patients in whom full or partial recovery was observed), which could provide an explanation for the discrepancy found here.
The prevalence of neurosyphilis cases found here also points to the importance of syphilis serology as one of the tests that should be included in the laboratory evaluation of dementia. Although its prevalence has fallen over the past years, neurosyphilis is still a relevant cause of potentially reversible dementia, as can be observed from the results of this study, and should not be overlooked as a diagnosis, especially among populations of developing countries.
Among the patients who were evaluated in our outpatient clinic, the cognitive impairment was attributed to depression in 31 individuals (6.9%). The significance of ascertaining the prevalence of depression in a dementia outpatient clinic is enhanced by the fact that it has been proposed 22 that patients suffering from depression with a coexisting cognitive impairment ("reversible dementia") are more prone to developing irreversible dementia on follow-up than patients with depression alone. Accordingly, this population warrants attentive follow-up.
Table 2. Comparison between groups for age and schooling years. | 2017-06-22T06:42:04.976Z | 2003-12-01T00:00:00.000 | {
"year": 2003,
"sha1": "250b57154b2dffe4fadcf59a1871ecd8e7db2b20",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/anp/a/ZLbJYCTSDxHHdMpyqQZVtNy/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "250b57154b2dffe4fadcf59a1871ecd8e7db2b20",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210885191 | pes2o/s2orc | v3-fos-license | Rapid Least Concern: towards automating Red List assessments
Abstract Background The IUCN Red List of Threatened Species™ (hereafter the Red List) is an important global resource for conservation that supports conservation planning, safeguarding critical habitat and monitoring biodiversity change (Rodrigues et al. 2006). However, a major shortcoming of the Red List is that most of the world's described species have not yet been assessed and published on the Red List (Bachman et al. 2019, Eisenhauer et al. 2019). Conservation efforts can be better supported if the Red List is expanded to achieve greater coverage of mega-diverse groups of organisms such as plants, fungi and invertebrates. There is, therefore, an urgent need to speed up the Red List assessment and documentation workflow. One reason for this lack of species coverage is that a manual and relatively time-consuming procedure is usually employed to assess and document species. A recent update of Red List documentation standards (IUCN 2013) reduced the data requirements for publishing non-threatened or 'Least Concern' species on the Red List. The majority of the required fields for Least Concern plant species can be found in existing open-access data sources or can be easily calculated. There is an opportunity to consolidate these data and analyses into a simple application to fast-track the publication of Least Concern assessments for plants. There could be as many as 250,000 species of plants (60%) likely to be categorised as Least Concern (Bachman et al. 2019), for which automatically generated assessments could considerably reduce the outlay of time and valuable resources for Red Listing, allowing attention and resources to be dedicated to the assessment of those species most likely to be threatened. New information We present a web application, Rapid Least Concern, that addresses the challenge of accelerating the generation and documentation of Least Concern Red List assessments. Rapid Least Concern utilises open-source datasets, such as the Global Biodiversity Information Facility (GBIF) and Plants of the World Online (POWO), through a simple web interface. Initially, the application is intended for use on plants, but it could be extended to other groups, depending on the availability of equivalent datasets for these groups. Rapid Least Concern users can assess a single species or upload a list of species that are assessed in a batch operation. The batch operation can either utilise georeferenced occurrence data from GBIF or occurrence data provided by the user. The output includes a series of CSV files and a point map file that meet the minimum data requirements for a Least Concern Red List assessment (IUCN 2013). The CSV files are compliant with the IUCN Red List SIS Connect system that transfers the data files to the IUCN database and, pending quality control checks and review, enables publication on the Red List. We outline the knowledge gap this application aims to fill and describe how the application works. We demonstrate a use-case for Rapid Least Concern as part of an ongoing initiative to complete a global Red List assessment of all native species for the United Kingdom Overseas Territory of Bermuda.
What is Rapid Least Concern?
Harnessing open-source data provided by the Global Biodiversity Information Facility (GBIF) and Plants of the World Online (POWO), Rapid Least Concern analyses plant distributions to determine whether species are likely to be threatened or not. For now, threatened species will require further attention before publication on the Red List, but assessments for non-threatened species require considerably less documentation and can be generated automatically. Rapid Least Concern provides both the analysis to determine whether threat is likely and, for the non-threatened species, the option to download the data in a format compliant with the IUCN Red List.
Why create Rapid Least Concern?
Quite simply, we want to speed up the rate at which assessments are generated for the IUCN Red List of Threatened Species. To date, only ~9% of plants have been assessed and ~250,000 species are estimated to be in the Least Concern category. A workflow developed for the Global Tree Assessment has proven that automation is possible, and large volumes of Least Concern assessments of trees are already being transferred to the Red List. However, there are many more species to assess and there were no freely available tools that apply a similar automation procedure. The development of Rapid Least Concern will speed up the process of documenting the un-assessed Least Concern plants, will make a major contribution to the Red List, and will mean that valuable assessor resources can be targeted towards assessing the species most likely to be threatened.
Get involved:
You can help us improve Rapid Least Concern by letting us know of any bugs or by suggesting new features here:
How to use:
There are two options for generating LC assessments: Single and Batch. The single option first tests whether your species is likely to be Least Concern, and then allows you to download the data files needed to support publication of the assessment on the Red List. The batch option runs in the same way as the single option, but allows users to process multiple species at the same time by uploading a .csv file with a list of names. You may also have a list of names with clean point data already associated with the names; in this case the batch process will run using your points rather than searching GBIF for occurrence records.
Single
Try the quick start demo first: Step 1 - Enter a binomial (Genus species) into the 'Enter a species' search box. A table of results will appear in the main panel to the right. The results are from a search of the binomial against the Global Biodiversity Information Facility (GBIF) names backbone. The best matches are listed in order of confidence. The scientificName field includes the author and can be used to make sure you find the species you are looking for. Select a species from the table by clicking on a row; the row will be highlighted in blue. A match is then made to the Plants of the World Online (POWO) names backbone. Both the GBIF and POWO identifiers are reported in the left side bar. If there is no matching POWO identifier, the analysis cannot proceed.
Search results for Aloe zebrina
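To make the name-matching step concrete, below is a minimal Python sketch of the kind of lookup described above, using the public GBIF species-match endpoint. It is not the Rapid Least Concern source code; the function name and the printed fields are our own illustrative choices.

import requests

def match_gbif_backbone(binomial):
    # Query the GBIF names backbone; the best match comes with a confidence score.
    resp = requests.get("https://api.gbif.org/v1/species/match",
                        params={"name": binomial, "verbose": "true"}, timeout=30)
    resp.raise_for_status()
    match = resp.json()
    # 'usageKey' is the GBIF taxon key; 'scientificName' includes the author string.
    return match.get("usageKey"), match.get("scientificName"), match.get("confidence")

print(match_gbif_backbone("Aloe zebrina"))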
Step 2 -Set parameters A limit can be set to the number of occurrence records to be downloaded from GBIF. The value can be set with the slider widget. We allow a maximum of 10,000 occurrence points, a minimum of 1,000 and the default is set at 3,000.
Set the maximum limit for GBIF occurrences
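As a rough illustration of capping the GBIF occurrence download, the sketch below pages through the public GBIF occurrence API until the chosen limit is reached. The helper name and the default of 3,000 records mirror the slider described above, but are otherwise our own assumptions rather than the application's internals.

import requests

def fetch_occurrences(taxon_key, max_records=3000, page_size=300):
    # GBIF returns at most 300 records per page, so page until the cap is reached.
    coords, offset = [], 0
    while len(coords) < max_records:
        resp = requests.get("https://api.gbif.org/v1/occurrence/search",
                            params={"taxonKey": taxon_key, "hasCoordinate": "true",
                                    "limit": page_size, "offset": offset}, timeout=60)
        resp.raise_for_status()
        page = resp.json()
        for rec in page["results"]:
            coords.append((rec.get("decimalLatitude"), rec.get("decimalLongitude")))
        if page.get("endOfRecords") or not page["results"]:
            break
        offset += page_size
    return coords[:max_records]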
Step 3 - Click the Run Analysis button. The first output is a map of the georeferenced occurrence points derived from GBIF (green circle markers) and the native range according to Plants of the World Online, using the TDWG geographic distribution system (red polygons). The points and native range layers can be turned on or off, and the non-native points can be hidden. Note that non-native points are not used in the analysis.
Distribution of Aloe zebrina
Statistics
SIS Tables and point file
Below the statistics table and gauges are a series of tables that provide the minimum information required to support a Least Concern Red List assessment.
The first tab shows the occurrence point file.
Download SIS connect files and point file
Click the Clear form button if you wish to reset the analysis.
Additional data
There are additional fields that can be entered, e.g. habitat and plant growth form, as well as assessor information. However, these fields can also be entered directly into the SIS database. Use the multiple select options from the sidebar on the left to pick the relevant habitat and growth form.
Enter additional data
For many species, the plant growth form can be found by querying the World Checklist of Selected Plant Families. The results show how many species were searched, how many names could not be matched, and how many names matched to synonyms. Any names not matched, or matched to synonyms, are highlighted in red and omitted from further calculations.
2 As with the single option, a limit can be set to the number of occurrence records to be downloaded from GBIF. The value can be set with the slider widget. We allow a maximum of 10,000 occurrence points, a minimum of 1,000 and the default is set at 3,000.
3 Click the Run Analysis! button to generate the raw statistics. As with the single process, the results contain the original search results fields (IPNI identifier, author, accepted status and name_in) and several metrics relating to geographic range size.
In contrast to the single process, the batch process allows the user to adjust the thresholds to determine Least Concern using the slider widgets. Species that meet or exceed the LC thresholds are highlighted in green.
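The range-size screening behind the green highlighting can be pictured with the simplified sketch below: extent of occurrence (EOO) as a convex-hull area and area of occupancy (AOO) as occupied 2 km grid cells, compared against adjustable thresholds. This is a rough planar approximation with illustrative threshold values, not the geodesic calculation or the exact rule set the application uses.

import math
import numpy as np
from scipy.spatial import ConvexHull

def eoo_aoo(points, cell_km=2.0):
    # points: (lat, lon) pairs in decimal degrees; returns (EOO, AOO) in km^2
    xy = np.array([(lon * 111.32 * math.cos(math.radians(lat)), lat * 111.32)
                   for lat, lon in points])
    cells = {tuple(c) for c in np.floor(xy / cell_km).astype(int)}
    aoo = len(cells) * cell_km ** 2                        # occupied cells times cell area
    eoo = ConvexHull(xy).volume if len(xy) >= 3 else 0.0   # in 2-D, hull 'volume' is its area
    return eoo, aoo

def passes_lc_thresholds(eoo, aoo, min_eoo=30000.0, min_aoo=3000.0):
    # Adjustable thresholds, analogous to the batch-mode sliders
    return eoo >= min_eoo and aoo >= min_aoo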
Results of analysis
Below the table is a list of how many species were considered for the analysis and how many warnings there were, i.e. species that could not be processed. Finally, the number of LC species identified is reported. | 2020-01-24T15:42:20.948Z | 2020-01-23T00:00:00.000 | {
"year": 2020,
"sha1": "5c644c5bef8f0b7dbca5ae5e5241d0d84141a5ce",
"oa_license": "CCBY",
"oa_url": "https://bdj.pensoft.net/article/47018/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba37c2aa339194e3b4147d6161b472bfc0402a2c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
268263152 | pes2o/s2orc | v3-fos-license | The relationship between income, health insurance, and employment status as prognostic indicators of bladder cancer: A survival analysis
Background: Bladder cancer (BC) is a significant health problem. Socioeconomic status (SES) may correlate with patient treatment, possibly impacting patient prognosis. This study aimed to determine the relationship between income, health insurance, and employment status as prognostic indicators of BC. Methods: A retrospective observational study of patients diagnosed with BC in a hospital during the 5-year period between January 2019 and December 2023. Kaplan-Meier analysis was used to generate overall survival curves stratified by income, employment status, and health insurance. Multivariate Cox proportional-hazards regression was used to identify factors associated with worse overall survival. Results: The analysis of 219 patients showed no difference in patient survival based on income (p > 0.05), while employment status and health insurance showed significant differences in patient survival (p < 0.05). In total, 99 (45.2%) patients died; the average patient age was 58 years, with a predominance of male patients. Conclusions: Prevention of poor outcomes requires attention to certain patient characteristics, particularly for low-income patients without appropriate national health insurance coverage.
INTRODUCTION
Bladder cancer (BC) is a neoplasm that arises from the bladder and is the most common type of urinary tract neoplasm (1). This cancer is among the 10 most common cancers worldwide and has a high mortality rate (2). BC accounts for 3% of global cancer diagnoses and is particularly common in developed countries. The disease is mainly found in people aged 55 years and over, who account for about 90% of diagnoses, and it is four times more common in men than in women (3). The incidence rate is twice as high in developing countries as in developed countries (1). Treatment of bladder cancer tends to be extensive and expensive (4). Diagnosis relies mainly on cystoscopy, an invasive and costly procedure. Most BCs are diagnosed at an early stage when they can be treated. However, about 25% of BCs are diagnosed at an advanced stage (2). The prognosis depends on many factors (1). Survival varies significantly according to stage, in both non-invasive and invasive cases. The percentage of non-invasive cancers is relatively high. Stage, age, and histology are associated with survival (5). The probability of accumulated survival at the end of 1, 3, 5, and 10 years in patients with BC is 0.8989, 0.7132, 0.5752, and 0.2459, respectively. There are significant differences in survival rates between age groups and types of treatment (6). The stage and extent of the cancer are important factors in determining the best treatment for BC (7). Cancer survival is generally lower for residents of more socio-economically disadvantaged areas, and socio-economic inequality decreases survival due to certain factors (8). In addition, health insurance is a determinant of patient treatment. The burden of cancer survival also affects healthcare systems and society (9). In-hospital mortality can occur in patients with BC. The objective of this study was to determine the relationship between income, health insurance, and employment status as prognostic indicators of bladder cancer.
Study design
The largest tertiary referral hospital in East Java, Indonesia, Dr. Soetomo General Academic Hospital, carried out a retrospective observational study of patients with bladder cancer. Hospitalized BC patients were the subject of the research, which ran for five years, from January 2019 to December 2023. Adult BC patients were included, and patients with missing data met the exclusion criteria. The Dr. Soetomo General Academic Hospital's ethical review board granted approval for the research, which was carried out under the Declaration of Helsinki (approval number: 1527/LOE/301.4.2/XI/2023).
Data collection
The following socioeconomic data were extracted for analysis: income, employment status, and health insurance. Patients were divided by income below 4 million Rupiah or more than 4 million Rupiah, according to the basic salary in Indonesia. They were divided by type of health insurance into patients with National Health Insurance (Jaminan Kesehatan Nasional/JKN) or private insurance. Mortality in this study was defined as death during the hospital stay.
Statistical analysis
Survival analysis was done for patients whose income, employment status, and health insurance were known. Time in months from diagnosis to death from any cause was the primary outcome. For every variable, descriptive epidemiological and survival statistics were computed. Overall survival curves were stratified by income, employment status, and health insurance using Kaplan-Meier analysis. Log-rank tests were used to analyze survival differences. To find the variables linked to a lower overall survival rate, multivariate Cox proportional-hazards regression was used, with hazard ratios (HR) and accompanying 95% confidence intervals (CI). Sepsis and metastases were also analyzed as strata. The criterion for statistical significance was fixed at P < 0.05. The statistical analyses were conducted using SPSS 25 (IBM Corp., Armonk, NY).
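The analyses above were run in SPSS 25. Purely as an illustration of the same Kaplan-Meier, log-rank, and Cox workflow, a Python sketch with the lifelines package is shown below; the file name and column names are placeholders, not the study's actual variables.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("bladder_cancer_cohort.csv")  # hypothetical input file

# Kaplan-Meier curves stratified by health insurance
kmf = KaplanMeierFitter()
for group, sub in df.groupby("insurance"):
    kmf.fit(sub["months"], event_observed=sub["died"], label=str(group))
    kmf.plot_survival_function()

# Log-rank test between the two insurance groups
jkn, private = df[df.insurance == "JKN"], df[df.insurance == "private"]
print(logrank_test(jkn["months"], private["months"],
                   event_observed_A=jkn["died"],
                   event_observed_B=private["died"]).p_value)

# Multivariate Cox proportional-hazards model (HR with 95% CI)
cph = CoxPHFitter()
cph.fit(df[["months", "died", "income_low", "employed", "insured"]],
        duration_col="months", event_col="died")
cph.print_summary()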
RESULTS
In total, 99 (45.2%) patients died. The average patient age was 58 years, and male patients predominated. Our analysis shows that income below 4 million Rupiah and education level had a significant impact on mortality rates. Sociodemographic characteristics are shown in Table 1. The assessment of socio-economic status (SES) shows that most patients had an income of more than 4 million Rupiah per month, more than half of the respondents were employed, and most had health coverage. SES data are shown in Table 2. Based on the Kaplan-Meier (log-rank) survival analysis, there was no difference in patient survival based on income (p > 0.05) (Figure 1), while there were differences in patient survival based on employment status and health insurance (p < 0.05) (Figures 2, 3).
DISCUSSION
The results showed that there was no difference in patient survival based on income, while there were differences in patient survival based on employment status and health insurance. Previous research has found a relationship between socioeconomic status and survival, although socioeconomic assessments were carried out with different standards (10). Other studies have found that cancer survival is often poorer among people from more socioeconomically disadvantaged areas; for tumors of connective/soft tissue, bladder, and unknown primary origin, socioeconomic differences in survival decrease with increasing age at diagnosis (8). In addition, health insurance is a determinant of patient treatment. Finally, the burden of cancer survival also affects healthcare systems and society (9). Taylor et al. found that race, ethnicity, gender, insurance status, one or more comorbidities, and a median household income of less than $63,000 were characteristics associated with a greater chance of bladder cancer presenting at an advanced rather than an early stage (11). Other research has found that lower SES, Medicaid insurance, and no insurance all resulted in a higher tumor stage.
Regardless of tumor stage, poorer SES, Medicaid insurance, and no insurance were linked to worse overall survival (OS) and disease-specific survival (DSS) (12). Worse overall survival has been related to male gender, and gender is among the significant prognostic factors of overall survival (13). Other studies found that women's risk levels were significantly higher than men's for up to two years after a bladder cancer diagnosis, especially for muscle-invasive cancers; the common belief that the prognosis for bladder cancer is poorer in women than in men must therefore be reconsidered (14).
In Indonesia, National Health Insurance (NHI) significantly enhances public health and offers low-income households access to care. Nonetheless, NHI coverage below the national minimum or the government's guidelines may affect health at all phases and stages of development. The growth and development of stunted children, immunization rates, and the quality of life of those with non-communicable illnesses may all be negatively impacted by low NHI coverage. Moreover, health insurance is less common among rural households. The main criterion for eligibility for Indonesia's subsidized and contributory programs is that participants must be employed and live in Java or Bali. Low coverage may also be due to the cost of traveling to the health insurance office (15). This burden should be evenly distributed across the stakeholders considered in the evaluation of the cost-effectiveness of new anti-cancer drugs (9). Patient survival rates can be enhanced through strategic planning for early detection and screening, as well as proper access to appropriate diagnostic and treatment services, particularly in men, considering the significant influence on disease stage at diagnosis (16).
CONCLUSIONS
There was no difference in patient survival based on income, while there were differences in patient survival based on employment status and health insurance. Health insurance and employment status, specifically being a farmer, may affect mortality outcomes significantly.
ETHICAL APPROVAL
The Dr. Soetomo General Academic Hospital's ethical review board granted approval for this research, which was carried out under the Declaration of Helsinki (approval number: 1527/LOE/301.4.2/XI/2023).
Figure 2. Survival analysis of bladder cancer patients with different employment status.
Figure 3. Survival analysis of bladder cancer patients with different health insurance. | 2024-03-08T06:16:04.437Z | 2024-03-07T00:00:00.000 | {
"year": 2024,
"sha1": "839664ed69ced33859b192b45e3a6e8b2188e9d4",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepressjournals.org/index.php/aiua/article/download/12305/11751",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "11fc8ffe7d2f8b0f3d18808a82fc0940e414beb3",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235753085 | pes2o/s2orc | v3-fos-license | Quality Analysis of Coffee Bean Treated by Sunning and Water Washing processing
The quality of the coffee bean determines the evaluation of the coffee cup. Coffea arabica was selected for initial processing and cup quality analysis under sunning and water washing treatments. The results showed that the caffeic acid was bright in the cyclic washing treatment, the balance was best in the washing treatment, the mellowness was better in semi-washing, but the comprehensive quality of sunning was the best. This study provides a theoretical basis and practical reference for the production and processing of Yunnan coffee.
Introduction
Coffea arabica belongs to the genus Coffea in the family Rubiaceae. The quality of C. arabica is affected by many factors, but the altitude and latitude of planting and the initial processing technology have a particularly strong influence on its quality. It is also generally believed that coffee grown at high altitude has a better flavor than coffee grown at low altitude. Yunnan has a natural advantage: its three-dimensional climate is suitable for the growth of C. arabica, which there develops a strong flavor of roasted peanuts and roasted nuts with a slight fruity aroma, and is famous for being fragrant but not strong, not bitter, and slightly sour [1].
Sunning is the earliest and simplest processing method for traditional coffee beans. Because it does not involve a fermentation process, the pectin is preserved and dried, and the sweetness is relatively high. In the Middle East, Latin America, Northeast Africa, and other regions, as well as in most coffee production and processing areas in China, sunning is still the usual method. Sun-dried coffee beans are usually sweeter and more mellow than wet-processed coffee beans, with a better and more balanced taste, a mild bitterness, and a soft acidity. However, the drying time is long, the quality is easily affected by the weather, and quality control is difficult to guarantee. In the process of natural drying, the fresh fruit easily becomes mixed with foreign matter on the drying rack. Compared with the washing treatment, the proportion of inferior beans is also higher.
Water washing is the most popular way of processing coffee. Its fermentation process mainly depends on the action of enzymes and of microorganisms in the air to degrade and hydrolyze the pectin [2][3][4]. In Hawaii, Mexico, Kenya, Colombia, Guatemala, and Costa Rica, coffee is processed by water washing. It is generally believed that the quality of coffee beans processed by water washing is higher than that of beans processed by sunning or other methods [3][4][5]. After the fresh coffee cherry is water-processed, the sugar content in the raw bean decreases, because the raw bean is soaked and fermented in water, which causes some sugar to dissolve in the water, and other chemical reactions also occur. However, while the sweetness decreases during germination, the concentration of some amino acids in the coffee increases, and these contribute most of the aroma components in coffee. As a result, washed coffee beans have a unique aroma that exceeds that of other kinds of coffee beans. In addition, washed coffee beans have high purity, good appearance and color, high taste consistency, and a brighter acidity, so the overall quality is higher. However, the water washing process is complex, the water demand is large, the cost of equipment and production is relatively high, and the price is higher than that of sun-dried coffee. At the same time, if natural drying is used in the drying step, the requirements for sunshine are relatively high.
The semi-washed processing method originated in Indonesia and is widely used in the production and processing of coffee in Sumatra, Costa Rica, and Indonesia. It combines the characteristics of sunning and water washing, saves more water than water washing, preserves the pulp of the fresh coffee fruit, and produces coffee with a mild taste, mild acidity, and an aroma of Chinese herbal medicine. The coffee beans produced in this way taste more mellow than water-washed beans, their flavor is purer than that of sun-dried beans, and their flavor and taste lie between the two. Semi-washed coffee has the sweetness and consistency of sun-dried beans [6], together with the cleanness and softness of washed beans, which makes the taste of the coffee better.
Although coffee is one of the featured industries in Yunnan, the industry has developed slowly and its processing technology lags behind. Due to the lack of scientific and theoretical guidance and of a standardized production system, Katium has failed to give full play to its best flavor quality, which has seriously restricted the upgrading and improvement of Yunnan coffee product quality. At present, it is urgent to identify the processing method best suited to Katium coffee beans in Yunnan Province.
Research materials
Fresh coffee fruit (60 kg in this experiment) was obtained from Nanling Ma Li Village in Lancang County, Yunnan Province. The average rainfall there is 1200-1300 mm, the annual average temperature is 5 ℃-23 ℃, the shading is 35%-45%, and the plant spacing is 1.5 m x 2 m.
Instrument equipment
Color separator, dry peeling machine, GEMILAI -GRM9008 bean mill, MK-MFFT1 fermentation tank, MK-HPD1 dryer (dryer), pH meter, sugar meter, TDS detector, cup measuring appliance, high precision analysis electronic balance, international red water precision temperature and humidity meter.
Research methodology
Sunning (A1)
Three batches of fresh coffee fruit (5 kg per batch, 15 kg in total) were poured into a special cleaning pool and then into the fresh fruit color separator to remove semi-ripened fruit and black fruit, leaving the red fruit suitable for making fine coffee. The red fresh fruit remaining after color selection was spread out in the drying field for natural drying [7] until the moisture content of the coffee beans fell to about 10.5%-11.5% and they became dried fruits; the hard fruit shells were then removed mechanically (e.g. with a sheller) to obtain raw coffee. Finally, the seed coat (silver skin) of the coffee beans was removed, and the beans were packed in plastic bags and marked with sample information, ready for testing.
Water washing treatment (A2)
Three batches of fresh coffee fruit (5 kg per batch, 15 kg in total) were poured into a special cleaning pool and then into the fresh fruit color separator to remove semi-ripened fruit and black fruit, leaving the red fruit suitable for making fine coffee. After mechanically removing the peel and pulp of the coffee (e.g. with a peeling machine), the pectin-coated coffee beans were poured into a special clean fermentation tank, using water as the medium for fermentation [8] (at this stage the coffee seeds still retain their original pectin). The length of fermentation was determined according to the specific conditions of the fermentation tank and the coffee beans (such as water temperature and water cleanliness). In general, when the pectin no longer adheres tightly to the coffee shell, or when the pectin has all come off and a strong fermentation smell is present, the pectin is washed away with running water. The water content of the raw beans in shell was then reduced to 10.5%-11.5% by machine drying or sun drying. Finally, the seed coat (silver skin) of the coffee beans was removed, and the samples were bagged in plastic and marked with sample information, ready for testing.
Semi-washing treatment (A3)
Three batches of fresh coffee fruit (5 kg per batch, 15 kg in total) were poured into a special cleaning pool and then into the fresh fruit color separator to remove semi-ripened fruit and black fruit, leaving the red fruit suitable for making fine coffee. The peel and pulp of the coffee were removed with a peeling machine, and the coffee beans were then poured into a clean fermentation tank for anhydrous fermentation [9]. After cleaning with a large amount of water, some pectin remains on the coffee seed coat (bean shell) because of the short fermentation time, which helps to increase the flavor of the coffee. The coffee beans were then spread out in the drying field or dried with a dryer to further reduce the moisture content, which must fall to 10.5%-11.5% (the drying time depends on the outdoor temperature, the degree of heating, and the weather conditions; generally 14-28 days are needed to reduce the moisture content of the raw beans to 10.5%-11.5%). The silver skin (seed coat) was then removed from the coffee beans, dried mucilage and silver skin being removed in one step, and the beans were finally classified and bagged, ready for testing.
Cycle water washing treatment (A4)
Three batches of fresh coffee fruit (5 kg per batch, 15 kg in total) were poured into a special cleaning pool and then into the fresh fruit color separator to remove semi-ripened fruit and black fruit, leaving the red fruit suitable for making fine coffee. The peel and pulp of the coffee were removed with a peeling machine, and the pectin-coated coffee beans were poured into a special clean fermentation tank for 24 hours. The beans were then fermented in clean water for 24 hours and rinsed with water; this cycle was repeated three times, until the fermentation time reached 72 hours [10]. When the pectin had all come off and a strong, characteristic fermentation smell was present, the fermentation was complete and the pectin was washed away with running water. The water content of the raw beans in shell was then reduced to 10.5%-11.5% by machine drying or sun drying. Finally, the seed coat (silver skin) of the coffee beans was removed, and the samples were bagged in plastic and marked with sample information, ready for testing.
Sample baking and cup testing
Sample baking followed SCAA baking index #55, and 150 ml of water was brewed over 8.25 g of coffee. The evaluation indexes and grades of the coffee cup test in this experiment were: Fragrance/Aroma, Flavor, Sweetness, Acidity, Aftertaste, Body, Balance, Uniformity, Clean Cup, and Defects (including Taint and Fault). Each indicator was given one of five ratings, and each item was scored according to the coffee sensory evaluation criteria, with a full score of 10, using the fuzzy mathematics comprehensive evaluation method [11] and the sensory evaluation method for experimental bean samples [6]. (Note: sample baking was completed within 24 hours, and the cup test of the experimental samples was performed after 8 hours.)
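A minimal sketch of how a total cup score can be tallied from per-attribute marks of up to 10 points, with 80 points taken as the specialty (boutique) threshold, is given below. The attribute marks and the per-cup defect deductions are only illustrative; they are not measured values from this study, nor the exact SCAA formula.

def cup_score(scores, taint_cups=0, fault_cups=0):
    # scores: dict of attribute -> mark out of 10; subtract indicative defect penalties
    return sum(scores.values()) - 2 * taint_cups - 4 * fault_cups

sample = {"fragrance_aroma": 8.0, "flavor": 8.25, "aftertaste": 7.75, "acidity": 8.0,
          "body": 8.0, "balance": 8.0, "uniformity": 10.0, "clean_cup": 10.0,
          "sweetness": 10.0, "overall": 8.0}   # illustrative numbers only
total = cup_score(sample)
print(total, "specialty grade" if total >= 80 else "below specialty grade")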
Physiochemical Properties of Raw Bean
The water content of the raw beans (WRB) was significantly higher in the sunning treatment (A1) than in the washing (A2), semi-washing (A3), and cyclic washing (A4) treatments. There was no significant difference in WRB between A2 and A3 (Fig. 1).
A1 had the best quality and the fewest defective beans. A2 and A3 had 0.96% defective beans, which is lower than in the A4 treatment (1.16%) (Table 1).
Sample cup test results
After baking, the raw bean samples were evaluated by cup testing: the A1, A2, A3, and A4 cup scores were all above 80 points, reaching the grade of boutique coffee beans. A1 had the highest cup score and the best quality, followed by A2, then A3, and finally A4.
Conclusion
When the water content of coffee is insufficient, the aroma, flavor, aftertaste, and mellowness of the coffee decrease. Water content is therefore an important factor affecting the quality of coffee. For Lancang C. arabica, 10.5%-11.5% is a suitable drying degree (water content) range.
The degree of fermentation in the initial processing can improve the quality of coffee beans; in the washing treatment it has an important effect on the aroma, flavor, and acid quality of the coffee. Lightly fermented coffee is inclined to have green tea, red wine, and tropical berry flavors, whereas more fully fermented coffee is inclined to have nut, caramel, and chocolate flavors. Among the four coffee beans, the cup scores are ranked as follows: A1 is the best, followed by A2, then A3, and finally A4; all cup scores are above 80 points, reaching the grade of boutique coffee beans. Under the condition of the same variety, water content, and baking degree, the caffeic acid in the cyclic washing treatment is bright, the balance of the washing treatment is the best, the mellowness of the semi-washing treatment is the best, and the comprehensive quality of the sun treatment is the best. | 2021-07-07T20:01:44.612Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3626508eca3a353cec56fc896dad406662e765f1",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/792/1/012050/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3626508eca3a353cec56fc896dad406662e765f1",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
252966790 | pes2o/s2orc | v3-fos-license | Classroom Interaction in the EFL Speaking Class in Junior High School
The teaching and learning process depends heavily on the interactions that occur between teachers and students. Interaction occurs when the two subjects (teacher and student) speak. The purpose of this study is to analyze the types of interaction carried out during the speaking class, in terms of teacher talk and student talk. This study uses a qualitative design; the research subjects were an English teacher and 35 students in grade 8 at SMP Negeri 5 Singaraja. The data were obtained through observation and analyzed using the Foreign Language Interaction (FLINT) system. The analysis followed three concurrent flows: data reduction, data display, and conclusion drawing/verification. The results of the study reveal seven types of classroom interaction, each of which plays a role in supporting the success of the learning process. The highest percentage of teacher talk was teacher-whole class interaction, in which almost all classroom interaction was directed by the teacher to the students.
INTRODUCTION
Speaking is one of the most important of the four skills (listening, speaking, reading, and writing) through which language ability develops (Sadiku, 2015; Sharma & Puri, 2020). Moreover, to improve the language, learners can practice everywhere and anytime (Nishanthi, 2018; Prasetyo, 2018). Practice can also be done in the classroom; classroom practice is usually designed around conditions and situations that students face in the real world, where speaking is a means to strengthen relationships and social contexts (Elismawati, 2018; Hwang et al., 2016; Istri Aryani & Rahayuni, 2016). A previous study states that the focus of English in the classroom is to make students able to use English in communication and as a tool for furthering their studies (Oradee, 2013). Thus, English practice in the classroom aims to make students accustomed to communicating (Nair & Yunus, 2021; Praheto et al., 2020; Somdee & Suppasetseree, 2013).
In general, language skills are complex, so it takes a long time to develop them (Bayyurt et al., 2014; Ribeiro, 2015). EFL students' inability to communicate correctly arises because they do not use the language in authentic situations (Johansson, 2020; Oradee, 2013; Sharma & Puri, 2020). This affects students' self-confidence and leads them to avoid communication (Almazova et al., 2021; Asrial et al., 2019; Ferri et al., 2020). Students' problems in mastering English speaking are caused by several factors, such as inhibition, having nothing to say, low participation, mother-tongue use, low motivation, environmental factors, and lack of confidence (Efriana, 2021; Qrqez & Rashid, 2017; Yoandita, 2019). To improve English speaking skills, regular interaction in English is needed, because interaction is the heart of communication (Brown, 1994; Canale, 2014). Interaction can occur anywhere as long as people communicate with each other, giving actions and receiving reactions, including in classroom settings (Asrial et al., 2019; Madzlan & Mahmud, 2018). Classroom interaction occurs through a two-way process between teachers and students in which both influence each other (Ryve et al., 2013; Susilawati et al., 2019), and such interaction is part of the teaching and learning process.
Classroom interaction between teachers and students is essential for the continuity of the teaching and learning process (Elismawati, 2018; Moetia, 2018; Sukarni & Ulfah, 2015). Appropriate teacher talk can create a harmonious atmosphere in the classroom and build closer relationships with students, so that there is more room for interaction (Marchetti & Cullen, 2015; Ryve et al., 2013; Soucy McCrone, 2005). Interaction is the heart of communication, which means that interaction is necessary for communicating with each other (Meganingtyas et al., 2019; Trinova, 2012). In the current era, communicative language teaching is frequently and effectively carried out in the classroom, contributing to the success of language teaching. A previous study states that class interaction is an activity that produces a reciprocal effect between two or more people (teachers and students) through the exchange of thoughts, ideas, and feelings (Elismawati, 2018). In addition, other research has found that classroom management greatly influences class interaction; good classroom management can create good classroom interaction and also determines student learning outcomes (Kim, 2018).
This is in line with previous research stating that teacher talk is the special and crucial language used to address L2 students in the classroom (Sandra & Kurniawati, 2020), which means that teacher talk is important in classroom communication. It is also supported by another study stating that the effectiveness of "good teacher discourse" in the classroom should be measured by how well it facilitates learning and promotes communicative interaction. Moreover, students acquire more language from teacher talk, so it serves as language input that is both useful and applicable to the learner. Besides teacher talk, there is also student talk, since students are the most numerous participants in the classroom. Student talk is the talk of students when imitating the teacher, either in expressing ideas or in giving opinions and criticisms of something (Lei, 2009; Sukarni & Ulfah, 2015).
Based on preliminary data, the processes in student talk are imitating, recording, and expressing. Imitating is a process in which students imitate the teacher in expressing ideas or making comments. Students then record things that are important and imprinted in their memories, and finally they express what they have imitated and remembered. Therefore, the researcher was interested in analyzing the types of interaction carried out during the speaking class, including teacher talk and student talk.
METHOD
This study used a qualitative design to describe the phenomena of classroom interaction in an EFL speaking class. Qualitative research is an approach that allows researchers to examine people's experiences in detail, using a specific set of research methods such as in-depth interviews, focus group discussions, observation, content analysis, visual methods, and life histories or biographies (Hennink et al., 2020). This design was used to explore the phenomena that occurred in classroom interaction.
The research was conducted at SMP Negeri 5 Singaraja, located in Pengelatan Village, Buleleng Regency, Bali. The researcher chose this school because it is one of the well-known schools in Buleleng Regency. The subjects of this study were an English teacher and one class, VIII I, which consists of 35 students. The object of this research was classroom interaction in the EFL speaking class during the teaching and learning process.
This study used one instrument for data collection, namely observation. In addition, the researcher used a recorder to capture the activity and interaction in the classroom, and the recordings were then transcribed by the researcher. Data were analyzed following Miles and Huberman (1984), with three concurrent flows of analysis: data reduction, data display, and conclusion drawing/verification.
Result
The observations were carried out in two meetings, each lasting 40 minutes. The data represent the observation results used to determine the intensity of teacher and student talk or initiation in the EFL class, in the categories: teacher-whole class, teacher-an individual student, teacher-group members, student-whole class, student-teacher, student-student, student-group member, and other.
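The percentages reported for each category can be obtained by tallying coded interaction events, as in the short Python sketch below; the event list is invented purely to show the calculation and is not taken from the actual transcripts.

from collections import Counter

coded_events = ["teacher-whole class", "teacher-an individual student",
                "teacher-whole class", "student-teacher", "student-student",
                "teacher-group members", "other"]   # hypothetical coded events

def talk_percentages(events):
    counts = Counter(events)
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

print(talk_percentages(coded_events))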
The researcher presents the data from the observation, which show seven types of classroom interaction occurring in the EFL speaking class. Teacher-whole class: this first type of classroom interaction happened when the teacher greeted the students and checked their understanding. It also happened when the teacher gave feedback on students' performances. Based on the observation, teacher-whole class interaction played a significant role in which the teacher gave the students a stimulus about the material. The interaction began with the greeting from the teacher, which also gave the teacher a reference for checking the students' readiness for the lesson. The students often only said the words they usually say when the class begins, indicating that they did not yet fully understand the teacher's talk. The teacher therefore also translated what she said, because English is not used exclusively at SMP Negeri 5 Singaraja.
Teacher-an individual student: this type of interaction was applied when the teacher wanted to check attendance or ask a particular student a question. It also happened when the teacher corrected a student's statement or a mistake in pronunciation or grammar. Based on the observation, the teacher talked to individual students to check their attendance, often with code-mixing. The teacher also corrected the students' pronunciation, which gave the other students an example of the correct pronunciation so that they would not make the same mistakes.
Teacher-group member: this type was used by the teacher to divide the students into several groups, which made it easier for the students to do the homework or assignment. Based on the observation, the teacher asked the students to form groups of two to share their thoughts and produce one greeting-card topic. During the group work the teacher only controlled the flow of the learning, supporting the students' needs, for example by acting as a translator or giving comments and suggestions to the groups.
Student-teacher: this type refers to students' initiative to interact with the teacher, for example by asking something. It occurred in the second meeting when the students asked about the recount text material, such as the kinds of stories that can be used in a recount text. Based on the observation, the students tried to interact with the teacher by asking about the recount text. The students felt able to initiate their ideas and discuss them with the teacher; the transcript showed a student initiating by asking about the material that had been delivered by the teacher.
Student-student: this type of interaction occurred when students interacted and discussed with their friends, usually with their seatmate, about the topic that could be used. Students often did this rather than asking the teacher. Based on the observation, the students tried to interact with each other, which suggests that they thought they could solve their problems without the teacher or felt afraid to ask the teacher. They tried to give and ask for information from each other.
Student-group member: this type is the interaction of a student with a group, observed in the first meeting. It happened when a student communicated and shared their work with another group. Based on the observation data, a student tried to interact with the other group and share their work, which also gave the other students new knowledge from the group presenting in front of the class.
Student-whole class: this type is an interaction involving the student, the class, and the teacher. The difference from student-group member interaction is that it also involves the teacher, for example through comments, suggestions, or questions. Based on the observation, students presented their work in front of the class with the teacher and the other students as the audience. This helps students build their confidence and their interaction with others. Whole-class activities can keep students active in class; when they talk in front of the class, the interaction automatically improves.
Discussion
Based on the first observation, classroom interaction occurred in class VIII I between the teacher and the students. The types of classroom interaction in the teacher talk category, which measure the level of interaction in the classroom, yielded the following percentages, indicating who was more dominant in the classroom: teacher-whole class 46%, teacher-an individual student 42%, and teacher-group members 12%. For the types of classroom interaction in the student talk category, in which students interact with the teacher or with other students, the percentages were: student-whole class 40%, student-teacher 20%, student-student 16%, student-group member 10%, and other (silence and confusion) 14%. Furthermore, in the second observation the teacher-initiated categories were teacher-whole class 45%, teacher-an individual student 45%, and teacher-group members 10%, while the student-initiated types were student-whole class 32%, student-teacher 21%, student-student 20%, student-group members 15%, and other (silence or confusion) 12%.
From these two observations of the 8th grade students of class VIII I at SMP Negeri 5 Singaraja, the types of classroom interaction were found in the categories of teacher talk and student talk, following the FLINT (Foreign Language Interaction) model used for the observation categories (Brown, 1994; Sukarni & Ulfah, 2015). The teacher talk category in the two meetings shows that the dominant classroom interaction, with the highest rate, was teacher-whole class, followed by teacher-an individual student and teacher-group members. In student talk, the dominant classroom interaction was student-whole class, followed by student-teacher, student-student, other (silence or confusion), and student-group member.
This is in line with a previous study that aimed to develop a deep understanding of interaction in the language classroom in a foreign language context (Sundari, 2017). Effective interaction in the classroom can increase students' language performance. Not only do students benefit from good interaction, but the teacher can also improve the teaching and learning process in the classroom. This is also supported by another study that analyzed the talk types of an in-service teacher in EFL classroom interaction, involving an experienced female EFL teacher at the senior high school level (Winanta et al., 2020). The results disclosed that, of the 12 talk types in the FLINT system, 9 types were used by the teacher, with 'praises or encourages' occurring most frequently. This indicated that the teacher really appreciated the students' efforts in order to boost their learning motivation. Meanwhile, the type least used by the teacher was 'criticizes student behavior'. According to the interview results, the teacher rarely used criticism because she tried to protect the students' feelings and mental well-being.
It can be concluded that the students as a whole were active in class, as shown by the percentages of student-whole class and student-teacher interaction, although the students were sometimes silent or confused because they did not yet understand the teacher's language. In teacher talk, the teacher-whole class percentage shows that the teacher cared about the students, giving them stimuli and inviting them to talk and interact; in addition, the percentage of teacher-an individual student interaction was at almost the same rate as teacher-whole class, which shows that the teacher also cared about the self-improvement of individual students.
The implications of this study provide an overview of classroom interaction in the EFL speaking class in junior high school. This research is useful for teachers, especially EFL English teachers at the junior high school level, as a reference for how to interact in class. However, this research is still limited. One of the limitations lies in the research subject, which only involves students in one class as the research sample. Therefore, it is hoped that future research will deepen and broaden the scope of research related to classroom interaction in the EFL speaking class in junior high school.
CONCLUSION
This study describes the categories of talk spoken by the teacher and students in the classroom. Seven types of classroom interaction were found: teacher-whole class, teacher-an individual student, teacher-group members, student-student, student-teacher, student-group members, and student-whole class. These types of classroom interaction play a role in supporting the success of the learning process. The highest percentage of teacher talk was teacher-whole class interaction, in which almost all classroom interaction was directed by the teacher to the students. | 2022-10-18T15:57:18.337Z | 2022-08-09T00:00:00.000 | {
"year": 2022,
"sha1": "b9caa7e252a863ce1ab07a8c488ff15e431f8311",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.23887/jpbi.v10i1.47994",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f1aa6a07e349ac8ee376ee30b368a066d4e4c83a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
16381485 | pes2o/s2orc | v3-fos-license | New avenues for matter-wave-enhanced spectroscopy
We present matter-wave interferometry as a tool to advance spectroscopy for a wide class of nanoparticles, clusters and molecules. The high sensitivity of de Broglie interference fringes to external perturbations enables measurements in the limit of an individual particle absorbing only a single photon on average, or even no photon at all. The method allows one to extract structural and electronic information from the loss of the interference contrast. It is minimally invasive and works even for dilute ensembles.
Introduction
Our contribution to this special issue is dedicated to Theodor W. Hänsch, who has inspired generations of physicists as a role model for scientific creativity, genius and passion for precision. Seeing how many methods in laser physics, atomic and molecular physics, quantum optics, and high-level spectroscopy Ted Hänsch advanced to unprecedented precision, we are reminded of a remark by Whitehead about philosophy: the safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.
This article is part of the topical collection "Enlightening the World with the Laser" - Honoring T. W. Hänsch, guest edited by Tilman Esslinger, Nathalie Picqué, and Thomas Udem.
retro-reflected fluorine laser beams, at a vacuum ultraviolet wavelength of λ = 157.6 nm, yielding standing light waves with a period of d ≃ 79 nm. In the antinodes of the standing light waves, the molecular beam is depleted by ionization, dissociation, or any other mechanism that renders these molecules invisible to the detector further downstream. This way, the light field acts effectively as a periodic absorptive mask. The high laser photon energy of 7.9 eV allows manipulating a wide range of molecules or clusters in the same machine, largely independent of particle-specific narrow optical resonances.
Three gratings are combined to form a complete Talbot-Lau interferometer: the first grating G 1 establishes a periodic array of possible molecular locations, close to the nodes of the standing wave. The tight confinement of the wave function around these nodes then imposes a momentum uncertainty which ensures a rapid increase in transverse coherence behind the grating, even for an initially incoherent molecular beam. The second grating is positioned such that the incident molecular coherence extends at least over two nodes or antinodes of G 2 . This way, the propagating molecular wave covers two or more semiclassical paths on the way to the final state at G 3 , further downstream. Resonant near-field interference occurs around multiples of the Talbot time T_T = d^2 m/h, corresponding to a Talbot length L_T = v T_T = d^2/λ_dB, for particles of mass m. In time-domain interferometry, all particles within the grating area see the same pulse sequence for the same duration, independent of their own velocity v.
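As a quick numerical check of the Talbot-time expression, the snippet below evaluates T_T = d^2 m / h for the d = 79 nm grating; the particle mass used is an illustrative round number, not a value from the experiment.

h = 6.62607015e-34     # Planck constant, J s
amu = 1.66053907e-27   # atomic mass unit, kg
d = 79e-9              # grating period, m

def talbot_time(mass_amu):
    return d**2 * mass_amu * amu / h      # seconds

def talbot_length(mass_amu, v):
    return v * talbot_time(mass_amu)      # metres, L_T = v * T_T

print(talbot_time(1000) * 1e6)            # about 15.6 microseconds for 1000 amu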
The molecular fringe pattern can be visualized in various ways: the third grating G 3 acts as a spatially resolving mask with a resolution of well below /2 = 79 nm and a postionizing time-of-flight mass spectrometer allows recording all particles transmitted by this mask. If the clusters [3,18] or nanoparticles [21] in the beam have a broad mass distribution with fixed mass separation, and if they all have the same velocity, as often the case in supersonic beams, they realize a 'comb' of de Broglie waves. The particles remain, however, mutually incoherent since they are distinguishable. Recording the mass spectrum then corresponds to reading an interference pattern as a function of mass m or wavelength dB . One may also describe this phenomenon as a wave function rephasing in the time-domain [13], without reference to position and independent of the velocity distribution.
We exploit in particular the resonance in particle transmission behind grating G 3 as a function of the pulse delay between two subsequent gratings τ_ij = t(G_i) − t(G_j). This resonance occurs for a symmetric interferometer timing, T = τ_12 = τ_23, and we find a rapid decrease in the interference contrast when this balance is skewed by more than Δt = τ_23 − τ_12 ≃ τ_23/N, where N is the number of grating nodes illuminated by the incident molecular beam [22].
In principle, the matter-wave fringes could also be measured directly by plotting the particle transmission versus the lateral displacement of either grating. However, in our case, all three laser beams are retro-reflected by the same mirror to render the system as insensitive to mechanical vibrations as possible. The fringes are thus not affected by slow tilts or shifts of the mirror. Instead, in OTIMA interferometry, the interference contrast can be extracted from a comparison of the interferometer transmission for the case of resonant (symmetric) and nonresonant (slightly asymmetric) laser pulse delays [3]. For this setting, we here propose a variety of new spectroscopy tools and procedures.
Matter-wave-enhanced recoil spectroscopy (MERS)
A matter-wave interferometer can be used as a single-photon recoil spectrometer by adding a running laser wave L close to the central grating G 2 (Fig. 1a). Absorption of a single photon then imparts a recoil onto the molecule, without providing 'which-path information'. Subsequent spontaneous reemission of photons would introduce a random phase and decoherence [23], but most macromolecules dissipate the energy radiationless to many lower-lying electronic and vibrational states [20,24]. Heating of the internal molecular state does not destroy the center-of-mass coherence [25,26] as long as the internal and external degrees of freedom remain separable. Wavelets associated with the same internal state remain coherent to each other [24]. Absorption inside a matterwave interferometer thus creates shifted and unshifted molecular fringe patterns which are correlated with heated and unheated internal states. Even if the shifted and the unshifted fringes cannot be resolved, the loss of the total fringe visibility can be used for spectroscopy with high accuracy [19,20].
In OTIMA interferometry, the momentum imparted by each VUV grating exceeds the absorption recoil of a 0.3-100 μm spectroscopy photon by a factor up to 300. Visible (VIS) and near-infrared (NIR) spectroscopy will therefore work best in higher Talbot orders, when the grating pulse separation time amounts to about two or three Talbot times and the molecular state is delocalized over two or three periods of G 2 . Probing photons with wavelengths around 270-320 nm are for instance required to study the electronic states of aromatic amino acids and nucleotides, peptides and oligonucleotides. Comparing UV spectra of biomolecules in the gas phase with molecules in solution could later provide valuable information about structural changes in these different environments [27,28].
Fluorescence recoil spectroscopy (FRS)
If, contrary to the previous assumptions, absorption is followed by fluorescence, the emitted photon will add a recoil to the molecular motion, whose orientation varies randomly for each molecule. This leads to a reduction of the fringe contrast. One can use this loss of visibility to extract fluorescence quantum yields. When the exciting laser illuminates the molecular beam from the front, the absorption recoil does not blur the interference pattern and the timing of the laser pulse determines when and where the molecule is hit relative to the position and time of the second grating pulse. If the fluorescence wavelength distribution is known, the contrast reduction of the matter-wave interference pattern provides a measure for the product of the absorption cross section and the fluorescence yield. The absorption cross section can be extracted independently at low laser power and with the laser beam oriented parallel to the grating k-vector. When as little as 10% of all molecules are excited [20], the absorption measurement is only minimally affected by fluorescence.
Multi-photon recoil spectroscopy (MPRS)
If the probing laser wavelength exceeds the grating period substantially, a single photon cannot provide the recoil to shift the interference pattern sufficiently far. This is for instance the case for vibrational transitions, driven by near-infrared (NIR) or far-infrared (FIR) photons with wavelengths around 3-100 μm. Multi-photon absorption can then still be a viable option if the cumulated recoil of many absorbed photons has sufficient momentum.
Fig. 1 a UV-VIS spectroscopy in OTIMA: absorption of a single photon from a running laser wave imparts a recoil to the absorbing cluster or molecule. If the wavelength of the light is comparable to the semiclassical path separation of the delocalized particle, the interference fringe pattern experiences a measurable dephasing (Sect. 3) [19,20]. Because of the small grating period (79 nm), single-color visible or infrared (VIS/IR) spectroscopy requires the collective momentum transfer of several photons or operation of the matter-wave interferometer in higher Talbot orders. b VIS/IR spectroscopy: can also be realized by combining a single (VIS/IR) photon of laser beam L 1 (red arrow) with a single UV photon from beam L 2 (green arrow) which provides the required momentum transfer (Sects. 6 and 7). c Polarizability spectroscopy: is the least invasive of all three techniques. The off-resonant dipole interaction with the intense laser field G 4 deforms the matter-wave front-leading to a loss of fringe contrast even without any photo-absorption. This method may be particularly useful for weakly bound van der Waals clusters (Sect. 8)
Fig. 2 a The absorption of multiple photons from a monochromatic source is suppressed due to the anharmonicity bottleneck. b Internal vibrational relaxation (IVR) to other modes dissipates the energy and enables the repeated excitation of the same IR transition until sufficient momentum recoil has been accumulated to shift the fringe pattern measurably
Multi-photon recoil spectroscopy is conceptually similar to infrared multi-photon dissociation spectroscopy (IR-MPD) [29]. The anharmonicity of molecular potentials usually prevents the subsequent absorption of many monochromatic photons within the same vibrational energy ladder (anharmonicity bottleneck, Fig. 2a) [30]. On the other hand, couplings between the vibrational modes can dissipate the absorbed energy (Fig. 2b). In complex particles, vibrational excitations can relax on the picosecond time scale to many vibrational states, i.e., very fast compared to the duration of the nanosecond spectroscopy pulse. Even though multi-photon absorption will lead to internal heating, this is compatible with high-contrast interference as long as it does not provide which-path information by emission of thermal radiation [31]. Sequential absorption with a Poissonian photon number distribution will lead to a biased quantum random walk in momentum. In contrast to the single-photon case, extracting an absolute absorption cross section from the visibility loss is then less direct. However, the spectral line positions and widths will remain measurable.
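The biased momentum random walk mentioned above can be pictured with a very small Monte Carlo sketch; all numbers below (IR wavelength, mean photon number) are illustrative assumptions and the single-direction recoil simply reflects absorption from a running wave.

```python
# Illustrative sketch (assumptions, not from the paper): each particle absorbs a
# Poisson-distributed number of photons from a running IR wave, and every absorbed
# photon adds a recoil h/lambda in the same direction, giving a biased random walk
# in momentum with Poissonian shot noise.
import numpy as np
from scipy.constants import h

rng = np.random.default_rng(0)
lam_ir = 10e-6            # assumed IR wavelength in m
n_mean = 5.0              # assumed mean number of absorbed photons per particle
n_particles = 100_000

n_abs = rng.poisson(n_mean, n_particles)     # photons absorbed per particle
p_recoil = n_abs * h / lam_ir                # cumulated recoil momentum (kg m/s)

print(f"mean recoil   = {p_recoil.mean():.3e} kg m/s")
print(f"recoil spread = {p_recoil.std():.3e} kg m/s (Poisson shot noise)")
```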
Resonance-enhanced multi-photon recoil spectroscopy (REMPRS)
In order to avoid heating and the risk of spectroscopic shifts, conformation changes or even fragmentation, it is desirable to limit the number of photons required to retrieve information-even in the infrared regime. This challenge has been addressed in physical chemistry by action spectroscopy where the absorption of a few photons may lead to a detectable 'action', for instance the detachment of an additional messenger atom. Action spectroscopy has been very successful in cluster physics [29]. A prominent example is the spectroscopy of impurities in helium nanodroplets where the deposition of 1 eV of energy even suffices to boil off 2000 helium atoms [32]. However, the attached messenger atom or the environment, such as a liquid helium nanodroplet, may also influence the electronic structure of the host molecule [33]. We suggest that it is possible to avoid the need for messengers and artificial environments based on a recoil analog of resonance-enhanced multi(two)-photon ionization spectroscopy (REMPI/R2PI) [35]. In matter-wave-enhanced resonant multi-photon recoil spectroscopy (REMPRS/R2PRS), the spectroscopy photon from laser beam L 1 triggers the absorption of a photon of high momentum from laser beam L 2 . We illustrate the idea in Figs. 1b and 3a where the first photon from laser beam L 1 excites the molecule for instance from the electronic and vibrational ground state |g, 0⟩ to the higher-lying vibrational state |g, 1⟩ and a photon from the more energetic laser L 2 couples this state to the upper electronic state |e, 1⟩, imparting the required kick (see Fig. 3a). This method is appealing for particles where photo-ionization has been notoriously difficult and photodissociation channels are not available, as is the case for many massive biomolecules [36][37][38].
Matter-wave-enhanced recoil dip spectroscopy (RDS)
While in our previous examples the resonant reduction of matter-wave contrast was assumed to provide the spectroscopic signal, we illustrate in Figs. 1b and 3b how recoil dip spectroscopy can even restore and enhance this contrast on resonance. We assume that the absorption of a single (V)UV photon from |g, 0⟩ to |e, 1⟩ imparts sufficient recoil to reduce the matter-wave visibility. However, we can deplete the ground state |g, 0⟩ by coupling it resonantly to a neighboring vibrational state of the same electronic manifold |g, 1⟩. This reduces the UV absorption and raises the fringe contrast again. Dip spectroscopy may appear counterintuitive in comparison with earlier results from atom interferometry [34] where an increase in the number of absorbed quanta led to a decrease in fringe contrast. In contrast to that, reemission is suppressed in many molecules during their transit through the interferometer. OTIMA offers a suitable frame for this scheme since the nanosecond precise IR-UV dip spectroscopy requires that the UV photon couples efficiently to one particular vibrational ground state but substantially less to the IR excited vibrational mode. In many small- and medium-sized molecules, it is possible to excite electronic transitions with vibrational resolution. In these cases, recoil dip spectroscopy (RDS) is a realistic option. Even if the UV transitions are broadened when they couple to short-lived excited states, IR dip spectroscopy should provide resolution of the vibrational ground states, as seen in the modulation of the fringe visibility.
In VIS-UV dip spectroscopy the transitions couple electronic states and absorption of a visible spectroscopy photon is followed by a UV photon with higher momentum. As before, the method requires that the ground state and the excited state of the electronic transition couple differently to the UV photon.
Matter-wave-enhanced polarizability spectroscopy (MEPS)
Valuable spectroscopic information can be obtained even without exchanging a single real photon: The atomic or molecular polarizability provides important information about the particle composition and structure as well as their van der Waals interactions with molecules or surfaces. In atom interferometry, the optical polarizability has for instance been measured by imprinting a differential phase on two spatially separated parts of a cloud of ultracold atoms that were then recombined to interfere [39]. Even if the path separation of the matter-wave packets is smaller than the width of the spectroscopy laser beam, they accumulate state-selective phase shifts in the interference pattern, which may provide information about optical polarizabilities [40] or transition dipole matrix elements [41].
This can be generalized to high-mass particles, too. The optical polarizability of complex molecules at fixed wavelength (532 and 157 nm) can be extracted from the diffraction efficiency in the standing light wave in Kapitza-Dirac-Talbot-Lau [42] and OTIMA interferometry [43]. Here, we propose to measure it across a wide spectrum using OTIMA interferometry. By interaction with a tunable standing light-wave grating (G 4 ), close and parallel to G 2 (see Fig. 1c), the molecular matter-waves acquire a phase shift which reduces their interference contrast.
The effect of the additional grating can be understood in both a classical and a quantum picture: Quantum mechanically, the grating acts like a phase grating, whose period varies with wavelength and whose impact on the matter-wave is a function of the molecular optical polarizability. In a classical picture, the fluctuating array of dipole force microlenses in G 4 scrambles the molecular interferogram. Tuning the spectroscopy laser then allows one to modulate its fringe contrast (see below).
In contrast to the absorptive spectroscopy, which can be done already with running laser waves, we here rely on the presence of an optical grating to impose strong local dipole forces. They scale with the gradient of the dipole potential and are maximized in a standing light wave. It is favorable if the spectroscopy grating (G 4 ) phase is unstable since a fluctuating phase ensures that we can ignore residual effects of constructive matter-wave interference that might emerge when the spectroscopy grating G 4 and the diffraction grating G 2 have commensurate periods.
Theoretical description
In order to quantify these statements, we here discuss how the fringe visibility is affected in OTIMA interferometry by the presence of a spectroscopy beam directly after the second grating, G 2 . In general, the interference signal is calculated by combining the effect of each individual grating on the incoming matter wave with its free propagation between the gratings [2,22,44].
Exploiting that the transit through each individual laser grating can be described in the eikonal approximation [45], the interaction between the matter wave and grating G k , k = 1, 2, 3, is characterized by the eikonal phase shift φ_0^(k) = 4πE^(k)α(λ)/(hcε_0 A), and by the mean number of absorbed photons per molecule or cluster, n_0^(k) = 4E^(k)λσ_abs(λ)/(hcA) [2]. Here, E^(k) is the pulse energy, A denotes the laser spot area (flat top assumed), and α(λ) and σ_abs(λ) are the molecular polarizability and absorption cross section at the laser wavelength λ, respectively.
OTIMA contrast-In the absence of any additional laser, the sinusoidal visibility of the interferogram can be computed as a function of the laser grating pulse separation time T and all known laser parameters; the resulting expression, Eq. (1), involves the Bessel functions J n and I n . The parameter ζ_coh = φ_0^(2) sin(πT/T T ) describes the coherent evolution induced by the phase grating component in G 2 , and ζ_dep = n_0^(2) cos(πT/T T )/2 is related to the photo-depletion of the molecular beam in the anti-nodes of the standing light wave, also in G 2 . The visibility V_sin varies periodically as a function of the pulse separation T, and its period is determined by the Talbot time T T .
Recoil Spectroscopy-Absorption of photons from a pulsed running wave laser of wavelength λ_L in the instant after the second grating pulse will impart a recoil on the absorbing molecule [19]. In practice, one may even overlap G 2 and the spectroscopy laser on the same spot at the same time using dichroic optics. The resulting reduction of the signal visibility can then be used to extract the absolute absorption cross section of the molecule [20]. Assuming that the probability of absorbing n photons is described by a Poisson distribution with mean n_L(λ_L) = σ_abs(λ_L)E_Lλ_L/(A_L hc), the sinusoidal visibility V_sin in the presence of the spectroscopy beam can again be written in closed form. Thus, the logarithm of the ratio of the visibilities with and without the spectroscopy beam decreases linearly with the product of the total absorption cross section and the recoil laser energy, σ_abs(λ_L)E_L. One can therefore measure the molecular absorption spectrum by varying the laser power at λ_L and observing the fringe contrast. This idea can be extended in a straightforward way to recoil dip spectroscopy, where only the readout of the spectrum is modified.
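A small sketch of this readout is given below. It assumes, purely for illustration, the simplest reduction law V_L/V_0 = exp(−n_L) with n_L = σ_abs E_L λ_L/(A_L hc); the true prefactor follows from the full visibility formula and may differ by an order-unity factor, and the "measured" visibilities, spot area and wavelength are invented placeholder values.

```python
# Hedged sketch of the recoil-spectroscopy readout: fit ln(V_L / V_0) against the
# pulse energy E_L and convert the slope into an absorption cross section,
# assuming V_L/V_0 = exp(-n_L) with n_L = sigma_abs * E_L * lam_L / (A_L * h * c).
import numpy as np
from scipy.constants import h, c

lam_L = 300e-9                 # spectroscopy wavelength in m (assumed)
A_L = 1e-6                     # laser spot area in m^2 (assumed, 1 mm^2)

# assumed "measured" data: pulse energies (J) and fringe visibilities
E_L = np.array([0.0, 1e-6, 2e-6, 3e-6, 4e-6])
V = np.array([0.30, 0.25, 0.21, 0.175, 0.145])

slope, _ = np.polyfit(E_L, np.log(V / V[0]), 1)   # d ln(V_L/V_0) / dE_L, in 1/J
sigma_abs = -slope * A_L * h * c / lam_L          # cross section in m^2 under the assumption above

print(f"fitted slope = {slope:.3e} 1/J")
print(f"sigma_abs    = {sigma_abs*1e4:.3e} cm^2")
```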
Polarizability Spectroscopy-Replacing the running wave laser by a tunable standing light wave grating allows us to measure the molecular polarizability. In this case, the spectroscopy laser acts as a fourth grating with period λ_L/2. It is timed such that the free flight to the second grating is negligible. Hence, the interaction between the spectroscopy laser and the molecule is characterized by the eikonal phase φ_L(λ_L) = 4πE_Lα(λ_L)/(A_L hcε_0) and the mean photon number n_L(λ_L) = 4E_Lλ_Lσ_abs(λ_L)/(A_L hc). To avoid moiré-type effects, we propose to induce or maintain phase fluctuations between the spectroscopy grating and the three (phase stable) interferometer gratings. The signal visibility is then reduced accordingly. Varying the laser wavelength λ_L in a regime in which photon absorption can be neglected, n_L ≪ 1, the spectroscopy laser acts as a pure phase grating. One can then directly extract the spectral molecular polarizability by measuring the contrast reduction for different pulse energies E_L. In deriving the visibility (1), we have neglected additional contrast-reducing processes such as scattering with residual gas atoms [44,46], thermal decoherence [31] or phase averaging due to machine vibrations or internal molecular dynamics [47,48]. Such processes would affect the signal visibility with a common pre-factor which cancels in the ratio of the visibility with and without spectroscopy laser. This renders the measurement rather robust with respect to decoherence and dephasing.
Conclusion
Spectroscopy is an important field of atomic, molecular and optical physics with close ties to areas as diverse as physical chemistry and biochemistry, environmental science or laboratory astrophysics. It is therefore important to explore methods which are minimally invasive in the sense that they require the scattering of very few real photons, or eventually not even a single one.
Matter-wave interference offers an interesting option as it imposes a very narrow comb of molecular density fringes which serves as a nanoscale ruler, whose position can be read with a sensitivity and accuracy of 10 nm or less.
While a conceptual similarity with classical Moiré shadows is obvious [49], operating in the quantum regime allows one to prepare even narrower fringes and a substantially enhanced sensitivity to fringe displacements. Compared to classical deflectometers, which usually operate with position resolution on the order of tens of micrometers [50,51], quantum interferometry has the potential of improving the position sensitivity by three to four orders of magnitude. However, substantial future work still needs to be invested in generating sufficiently brilliant molecular beam sources to turn this idea into a generic and universal tool.
Matter-wave-enhanced spectroscopy is promising and useful for isolated molecules and clusters in the gas phase under diverse boundary conditions. It can be beneficial when the absorbed energy is dissipated in internal conversion processes and fluorescence or action spectroscopy fails. This applies to a large class of complex biomolecules and van der Waals clusters.
Interference-assisted absorption spectroscopy is also expected to be favorable for many gas phase neutral vitamins, peptides and proteins with a low vapor pressure, forming only very dilute molecular beams. While interferometry can operate eventually even with a single molecule per shot, direct absorption using Beer's law would require beam densities many orders of magnitude higher.
Matter-wave interferometry-assisted two-photon and polarizability spectroscopy is also favored over fluorescence methods, where one would usually want to scatter many photons per particle. Multi-photon scattering may lead to excessive heating, particle dissociation or modification. This is the case for weakly bound van der Waals clusters, whose quantum wave nature has been successfully demonstrated in OTIMA interferometry [3,18]. | 2018-04-03T01:02:39.781Z | 2016-12-09T00:00:00.000 | {
"year": 2016,
"sha1": "322f5e060246b4bf3a3c5a965003b22bd635fccc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00340-016-6573-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c05146183474a596c310c73c5a5126a5f2f28e4a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
236401601 | pes2o/s2orc | v3-fos-license | Remaining Useful Life Prediction of Cutting Tools Using an Inverse Gaussian Process Model
In manufacturing, cutting tools gradually wear out during the cutting process and decrease in cutting precision. A cutting tool has to be replaced if its degradation exceeds a certain threshold, which is determined by the required cutting precision. To effectively schedule production and maintenance actions, it is vital to model the wear process of cutting tools and predict their remaining useful life (RUL). However, it is difficult to determine the RUL of cutting tools with cutting precision as a failure criterion, as cutting precision is not directly measurable. This paper proposes a RUL prediction method for a cutting tool, developed based on a degradation model, with the roughness of the cutting surface as a failure criterion. The surface roughness is linked to the wearing process of a cutting tool through a random threshold, which accounts for the impact of the dynamic working environment and variable materials of working pieces. The wear process is modeled using a random-effects inverse Gaussian (IG) process. The degradation rate is assumed to be unit-specific, considering the dynamic wear mechanism and a heterogeneous population. To adaptively update the model parameters for online RUL prediction, an expectation-maximization (EM) algorithm has been developed. The proposed method is illustrated using an example study. The experiments were performed on specimens of 7109 aluminum alloy by milling in the normalized state. The results reveal that the proposed method effectively evaluates the RUL of cutting tools according to the specified surface roughness, therefore improving cutting quality and efficiency.
Introduction
Tool wear is widely considered to be stochastic and challenging to predict. This is primarily due to unit-to-unit performance variations and process variations. Efficient approaches that can predict remaining useful life (RUL) are necessary for improving cutting quality and saving costs. According to the input data used in the performance degradation model, RUL prediction methods can be classified into three categories: time series models (with working time as input), artificial intelligence models (with real-time working data as input), and stochastic process models (with degradation data as input).
The time series models analyze historical degeneration data to conduct RUL prediction using statistical approaches. Numerous time series models have been proposed and developed in tool RUL prediction in recent years, including the hidden Markov model [1,2], the autoregressive integrated moving average model [3], Kalman filtering [4,5], and particle filter [6][7][8][9]. Methods based on artificial intelligence take the extracted signal features or the original signal as input and RUL as output. Many advanced artificial intelligence models have been widely used in tool residual life prediction, such as neural network [10][11][12], support vector machines [13], and deep learning methods [14,15]. In stochastic process models, degradation over time is often modeled by a stochastic process {Y(t); t ≥ 0} to account for damage accumulation, with inherent randomness. The RUL is determined as the first passage time of the process with respect to some failure threshold. In this context, three types of degradation modelling technique are widely discussed in the literature, namely the Wiener process model [16][17][18], the gamma process model [19], and the inverse Gaussian process model [20,21]. Meanwhile, Pimenov and Mikołajczyk combined neural networks and image processing for tool life prediction [22].
The time series models are suitable for mass production, with abundant historical degeneration data, while the artificial intelligence methods are appropriate for dealing with massive and complicated process data. Both the time series methods and the artificial intelligence methods are based on the invariable degradation trajectory. However, the tool wear process is complicated by randomness and periodicity, which are related to friction speed, pressure, surface roughness, material properties, friction and wear types, lubrication status, surface coating, and individual differences between tools. All of these factors lead to uncertainty in the tool degradation process. Therefore, it is more reasonable to describe tool performance degradation using stochastic processes. Although tools of the same type have commonalities in design and material, there might be significant individual differences due to dynamic use conditions. To characterize individual differences, the random-effects model is introduced in the stochastic process model [23]. Lu and Meeker [24] introduced a random variable into the degradation model to describe individual differences. Peng and Tseng [25] imposed a random effect on the drift parameter of the Wiener process, where a normal distribution is assumed for the random drift across the population. Compared to the Wiener process model and the gamma process model, the IG process is flexible in incorporating random effects that account for heterogeneities commonly observed in degradation problems [26,27].
In the current cutting process study, precision is generally used to describe the precision level of machine tool [28]. In the actual cutting process, the precision of the product is not only related to the precision of the machine, but also related to the level of the operator, the state of the cutting tool, the state of the fixture, the material characteristics of the cutting workpiece, and the processing technology. The whole machining system will affect the precision. In this paper, the research object is the cutting tool. Thus, the precision here refers to the precision of the cutting process. The influence of the cutting tool on cutting precision is directly reflected on the surface of a workpiece, which mainly has a significant influence on surface roughness. Therefore, the roughness of cutting surface is regarded as the index of cutting precision in this study. The time that the tool can normally work while still meeting the surface roughness requirements is defined as the RUL.
Compared with the wear of a tool, cutting precision criteria, such as surface roughness, are of greater concern in the actual cutting process [29]. Therefore, the failure criterion of the tool is generally not a decrease in its strength or stiffness but a decrease in its cutting precision. Traditional tool RUL prediction models focus on the tool wear level. A conservative protection strategy could waste the RUL of the tool, increase unnecessary downtime and lead to a decrease in production efficiency. Different mechanism models for the prediction of surface roughness have been studied [30][31][32][33]. The limitation of mechanism models is that they require many strictly controlled experimental test conditions, which is time-consuming and costly. In addition, the application of artificial intelligence technology to surface roughness has been widely discussed [34,35]. However, these artificial intelligence models lack a discussion of tool wear degeneration, which is a very important factor related to surface quality. Thus, this paper proposes a dynamic evaluation method for RUL prediction that links the surface roughness to the wear of the tool; the surface roughness criterion is modeled by a random threshold on the degradation state of the tool. The degradation of the wear process is modeled by an inverse Gaussian process, which has been successfully applied in degradation modeling [36]. Considering the quality variation of the tool, the degradation rate of the inverse Gaussian process is modelled as a random effect to improve the performance of the model. The rest of this study is organized as follows: In Section 2, an inverse Gaussian process with a variable drift coefficient is formulated to characterize the degradation process considering the dynamic wear degradation mechanism and individual heterogeneity. The relationship between the surface roughness and degradation in terms of wearing is defined, and the RUL evaluation model with a random failure threshold is proposed. The parameter estimation procedure based on an EM algorithm is also developed. Section 3 provides the implementation and validation of the proposed approach through simulation experiments and real-data examples. The conclusions of the paper are drawn in Section 4.
Performance Degradation Modeling Based on Inverse Gaussian Process
In this paper, tool degradation Y(t) is assumed to follow an inverse Gaussian process, Y(t) ∼ IG(µΛ(t), λΛ²(t)), where µ is related to the degradation rate, λ represents the fluctuation of the degradation process, and Λ(t) is a monotone increasing function. If µ and λ are known, the tool degradation process Y(t) has independent increments. In addition, Y(t) is also monotonically increasing. For a given failure threshold ω, the failure time T of the tool can be defined as the first passage time at which Y(t) exceeds the threshold ω. Accordingly, the probability density function (PDF) f_T(t) and cumulative distribution function (CDF) F_T(t) of the failure time T can be obtained following [37]. Due to the variability in the raw materials and the dynamic working conditions, the degradation rate µ itself can vary from unit to unit. We assume that the degradation rate µ can be modeled as a random effect, which follows a certain distribution to account for this aspect. Typical models for the random effect on µ in the IG process include the truncated normal distribution and the gamma distribution. In this study, we assume that 1/µ follows a normal distribution N(α_µ, σ_µ^-2), considering that the two parameters correspond to the mean and variance of the degradation rate, respectively. The model parameters of the normal distribution have a definite physical meaning, so it is convenient to quantify subjective information such as expert information. Then, according to the total probability formula, the CDF of the residual life Tr, considering the random degradation rate, can be expressed as Equation (4) [38]. In order to simplify the derivation of the RUL distribution, two lemmas are given [16,39]: Lemma 1, stated for constants a, b ∈ R, and Lemma 2, stated for constants A, B, C ∈ R. Based on Lemmas 1 and 2, we can calculate (4) explicitly and formulate the CDF F_Tr(t) and PDF f_Tr(t) of the residual life Tr.
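A minimal simulation sketch of this degradation model is given below, assuming the common IG-process parameterisation in which the increment over [t_i, t_{i+1}] follows an IG law with mean µΔΛ and shape λΔΛ²; the parameter values, the linear Λ(t) = t and the fixed threshold are illustrative choices, not the fitted values reported later in the paper.

```python
# Monte Carlo sketch of an IG wear process with a fixed failure threshold:
# increments dY over [t_i, t_{i+1}] ~ IG(mean = mu*dLambda, shape = lam*dLambda^2),
# failure time = first crossing of omega.  All parameter values are illustrative.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(1)
mu, lam = 1.0, 2.0               # assumed degradation rate and fluctuation parameter
omega = 90.0                     # assumed fixed failure threshold
t = np.arange(0.0, 200.0, 1.0)   # measurement cycles; Lambda(t) = t assumed

def simulate_path(rng):
    d_lambda = np.diff(t)                          # increments of Lambda(t)
    mean, shape = mu * d_lambda, lam * d_lambda**2
    dy = invgauss.rvs(mean / shape, scale=shape, random_state=rng)
    return np.concatenate(([0.0], np.cumsum(dy)))  # wear path Y(t)

failures = []
for _ in range(2000):
    y = simulate_path(rng)
    idx = np.argmax(y >= omega)                    # first index where Y >= omega
    failures.append(t[idx] if y[idx] >= omega else np.inf)

failures = np.array(failures)
print(f"P(T <= 120 cycles)  = {np.mean(failures <= 120):.3f}")
print(f"median failure time = {np.median(failures[np.isfinite(failures)]):.1f} cycles")
```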
Remaining Useful Life Evaluation Model
In the actual cutting process, cutting precision is not only related to the precision of the machine tool but also to the level of the operator, the state of the cutting tool, the state of the fixture, the material characteristics of the cutting workpiece, and the processing technology. However, these factors in the manufacturing process are often steady, while the cutting tool is gradually worn. Therefore, it is often the case that cutting precision is mainly dependent on the wearing level of the cutting tool.
The influence of the cutting tool on cutting precision is directly reflected on the surface of a workpiece, which mainly has a significant influence on surface roughness. Geometric dimension precision is more related to the whole machining system, which can be improved by the adjustment of the operator. In addition, the mechanism affecting dimensional precision, such as the performance degradation of the spindle, will not change significantly in a short time. However, the tool wear will change significantly in a relatively short working time, resulting in abnormal surface roughness. Therefore, in this study, surface roughness is considered as the most significant precision index caused by tool wear in the short term.
The time that the tool can normally work while still meeting the surface roughness requirements is defined as the remaining useful life (RUL). Assume that R k is the RUL corresponding to the equipment at the current measurement time t k ; that is, the interval from time t k to the time of fault occurrence.
where Y 0:k is the degenerate historical dataset from start time t 0 to time t k .
In addition, the function Λ(t) should be updated at each measurement time. The function Λ(t) corresponding to the measurement time t k is denoted Λ^(t_k)(t), where Λ^(0)(t) is the function Λ(t) at the initial time.
According to Equations (7)-(10), the CDF F_Rk(r_k) and PDF f_Rk(r_k) of the remaining useful life at t k can be obtained. It is difficult or even impossible to predetermine a failure threshold in many scenarios. One possible method to tackle the above-mentioned problem is to assume that the failure threshold follows a specified distribution [40,41]. For a given surface roughness requirement, the wear failure threshold ω for the wearing process is a random variable in an interval [ω L , ω U ]. In this study, ω is assumed to obey a uniform distribution. Then, the CDF of the remaining useful life can be expressed by averaging over ω, and the average remaining useful life can be calculated from this CDF. The method to estimate the parameters θ = (α_µ, σ_µ², λ) in the above model will be introduced in Section 2.3.
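The averaging over a uniform random threshold can be carried out numerically as sketched below; the sketch uses the IG marginal Y(t) ∼ IG(µΛ(t), λΛ²(t)) and the monotonicity of the wear path, so P(T ≤ t | ω) = P(Y(t) ≥ ω), with illustrative parameter values and the threshold interval of the later simulation study.

```python
# Hedged sketch of the random-threshold RUL evaluation: average the exceedance
# probability P(Y(t) >= omega) over omega ~ Uniform[omega_L, omega_U], then read
# off the mean time to failure from the survival function.  Values are illustrative.
import numpy as np
from scipy.stats import invgauss

mu, lam = 1.0, 2.0                       # assumed IG-process parameters
omega_L, omega_U = 90.0, 100.0           # uniform threshold interval (as in the simulation study)
t = np.linspace(0.5, 200.0, 400)         # prediction horizon; Lambda(t) = t assumed

def failure_cdf(t, omega):
    mean, shape = mu * t, lam * t**2
    return invgauss.sf(omega, mean / shape, scale=shape)   # P(Y(t) >= omega) = P(T <= t)

omegas = np.random.default_rng(2).uniform(omega_L, omega_U, 500)
F = np.mean([failure_cdf(t, w) for w in omegas], axis=0)   # CDF averaged over omega

survival = 1.0 - F
mean_ttf = float(np.sum(0.5 * (survival[:-1] + survival[1:]) * np.diff(t)))
print(f"mean time to failure with random threshold = {mean_ttf:.1f} cycles")
```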
Parameter Estimation Based on Expectation-Maximization (EM)
As the random variable parameter 1/µ cannot be observed directly, an expectationmaximum (EM) algorithm is applied to estimate its value. The EM algorithm is an iterative optimization strategy that includes an E-step and an M-step in an iteration. In each iteration, the conditional distribution of the missing data and the expectation of the complete loglikelihood, with respect to the conditional distribution of the missing data, are derived in the E-step, with the model parameters estimated in the previous step. The estimates for model parameters are then updated by maximizing the expectation of the complete loglikelihood in the M-step. The iteration is repeated until the estimates for model parameters converge.
Since we have assumed that 1/µ ∼ N(α_µ, σ_µ^-2), the posterior distribution of 1/µ_k at t k can be obtained according to the Bayesian formula. Define θ_k^(j) = (α_µ,k^(j), σ_µ,k^2(j), λ_k^(j)) as the estimate of the parameter θ at time t k , where j denotes the iteration number. The complete log-likelihood function L(θ | Y 0:k , 1/µ) of {Y 0:k , 1/µ} can then be written down, and by maximizing it the parameter θ is updated, where E[1/µ] and D[1/µ] are the expectation and variance of 1/µ conditional on Y 0:k given in (16) and (17). The optimal parameters θ̂_k = θ_k^(j+1) can be obtained by iterating until the algorithm converges. A large portion of the existing literature on the EM algorithm has proven that the algorithm is not only simple in calculation but also guarantees convergence. With the increase in the number of iterations, the likelihood function also increases, so that the result improves in the sense of the maximum likelihood. Because Y 0:k is obtained through continuous measurement during processing, the algorithm can be used to estimate the model parameters at any time after obtaining the degradation data. Furthermore, as more data become available, the estimated model parameters become more accurate.
Simulation
To validate the performance of the algorithm in the proposed approach, the following simulation was conducted. One hundred simulated data points were generated, under the assumption that the parameters in the inverse Gaussian model were set as αµ = 1, σµ = 100, λ = 2. The data information, including the simulated degradation data, was recorded for each cycle. The degradation trajectory obtained by the simulation is shown in Figure 1. The first 10 data points were regarded as historical data. The parameters αµ, σµ, and λ were estimated at different times by the EM algorithm. The simulated data were used only for the performance analysis of the proposed model and carry no physical meaning. From Figure 5, we can see that, with more and more data available, the PDF of the RUL becomes narrower, which indicates that the uncertainty of the prediction results becomes smaller, and the corresponding point estimation is closer to the real RUL. In this simulation, the failure threshold was set to [90, 100], and it obeyed a uniform probability distribution. Subsequently, the prediction result is drawn in Figure 6.
Experiment
Real milling experiments were performed to verify the availability and validity of the proposed approach by detecting the tool wear condition. The experiment was performed on specimens of 7109 aluminum alloy by milling in the normalized state. The workpiece was cuboid with a 100-mm side length. The chemical compositions of the selected materials are specified in Table 1. The experiment was carried out using a flat-end milling cutter with coolant, and the characteristics of the tool are specified in Table 2. A five-axis DMG CTX gamma 2000TC (Hamburg, Germany) with Numeric Control Siemens 840D sl (Munich, Germany) was used in the experiment. The cutting was conducted with the spindle speed of the cutter at 6000 r/min, a feed rate value of 2000 mm/min, and a cutting depth value of 1 mm. The cutter wear was measured using a Dino-Lite AM3113 microscopy system (AnMo, Shenzhen, China). The roughness of the machined surface was measured by the Mahr M1 surface roughness meter (Mahr GmbH, Esslingen am Neckar, Germany) after the cutting. The proposed approach was run on a server with a 2.40 GHz processor and 64 GB RAM. Milling from the lower edge of the workpiece to the upper edge of the workpiece was recorded as a cycle. After each cutting cycle, we stopped and collected the roughness data of the specimens. The roughness was measured on the flank face of the cutting surface four times, and the four measurements were averaged as the true roughness. Meanwhile, the tool wear was measured using a Dino-Lite microscopy system. The wear of the tool and the roughness of the workpiece were monitored and recorded until the roughness deviated from the requirement value of 2 µm. The experimental environments and measurements are shown in Figure 7. Due to equipment and manual measurement errors, there may be fluctuation errors in the wear measurement results. To eliminate measurement errors, abnormal data were rejected, and the mean values of the normal data before and after were filled in. Then, the measurement data were smoothed. The tool-wearing curve is shown in Figure 8. Considering the high wear rate in the early stage and the severe wear in the later stage, the data from the stable wear period were used for the modeling. In this experiment, it was assumed that the failure criterion of the tool is whether the surface roughness of the workpiece meets the 2 µm requirement. The first 40 cutting cycles were regarded as the historical data. The prediction results were not ideal with a fixed failure threshold of 0.18 mm, as shown in Figure 9. Then, the failure threshold was set to [0.175 mm, 0.18 mm] and obeyed a uniform probability distribution. As shown in Figure 10, the prediction results improved significantly compared to Figure 3. The PDF of the RUL is drawn as shown in Figure 11.
Comparation of the RUL Predictive Model
Based on the estimated RUL, the mean absolute error (MAE) between the estimated RUL and the true RUL can be calculated. Consequently, the MAE from the 40th cycle to the 120th cycle can be used as a measure to quantify the prediction accuracy of the model. Figure 12 presents the MAE of the estimated RUL using the model with a variable degradation threshold and the model with a fixed degradation threshold. Apparently, the model with a variable degradation threshold gives a more precise RUL prediction. In order to verify the effect of the model, a comparison was made between the IG process model and a particle filtering method using deviation accuracy [8]. The deviation accuracy was set as 0.1, which means that the prediction distribution falling within 1 ± 0.1 of the true RUL is regarded as the performance indicator of the two models. Table 3 shows the result of the comparison, which demonstrates the better performance of the method proposed in this paper.
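For reference, the accuracy metric can be computed as in the short sketch below; the prediction and failure-cycle values are placeholders rather than the measured data of the experiment.

```python
# Minimal sketch of the MAE metric: mean absolute error between the RUL estimated
# at each inspection cycle and the true RUL (true failure cycle minus current cycle).
# The arrays below are placeholder values, not the experimental data.
import numpy as np

true_failure_cycle = 120
cycles = np.arange(40, 121)                    # evaluation window, 40th to 120th cycle
true_rul = true_failure_cycle - cycles

rng = np.random.default_rng(3)
estimated_rul = true_rul + rng.normal(0, 5, cycles.size)   # stand-in predictions

mae = np.mean(np.abs(estimated_rul - true_rul))
print(f"MAE over cycles 40-120: {mae:.2f} cycles")
```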
Conclusions
In this paper, we studied the degradation modeling and RUL prediction of cutting tools based on a cutting precision criterion. The cutting tool suffers continued wear in usage, which decreases the cutting precision of the machining process, i.e., the roughness of the surface in this study. Although cutting precision is of more practical value, it is indirectly measurable and its degradation pattern is more complex. On the other hand, the wear state of a tool is a more directly measurable characteristic in practice, and its degradation pattern is more traceable. Therefore, we proposed to model the degradation process of tool wear as a proxy for the degradation of cutting precision, and linked the roughness of the surface requirement to a random threshold for the wearing of a tool. A degradation model based on the IG process was proposed for the tool wearing process, and the RUL prediction method was also studied. The following conclusive remarks were reached in this study:
1. An IG process model with a variable drift coefficient was used to characterize the degradation of the tool wearing process subjected to individual heterogeneity in dynamic working environments;
2. The surface roughness requirement was linked to a random threshold for the wearing of the cutting tool, and the RUL prediction method was developed based on the proposed degradation model with a random failure threshold;
3. Finally, the applicability and effectiveness of the proposed method was validated using the wearing data of cutting tools in a milling experiment; the MAE was 4.33.
Further work is required to extend the proposed model's generalizability for handling the multiple cutting conditions observed in real cutting processes, such as turning, planing, and grinding. In addition, the distribution of the failure variable threshold is subject to confirmation by experimental and statistical analyses.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
RUL: remaining useful life
IG: inverse Gaussian
EM: expectation-maximization
PDF: probability density function
CDF: cumulative distribution function
Ra: surface roughness
Y(t): degradation process with a simple IG process model
Λ(t): monotone increasing function of Y(t)
ω: failure threshold
T: failure time
Tr: residual life
F_T(t): CDF of T
f_T(t): PDF of T
P(·): probability of an event
E(·): expectation operator
N(a, b): uniform distribution with boundary [a, b]
Φ(·): CDF of standard normal distribution
N(α_µ, σ_µ^-2): distribution of parameter 1/µ
Λ'(t): derivative function of Λ(t)
t_k: kth measurement time
R_k: RUL corresponding to the equipment at the current measurement time t_k
Y_0:k: historical degradation dataset from start time t_0 to time t_k
Λ^(t_k)(t): Λ(t) at the measurement time t_k
Λ^(0)(t): Λ(t) at the initial time
F_Tr(t): CDF of Tr
f_Tr(t): PDF of Tr
F_Rk(r_k): CDF of R_k at t_k
f_Rk(r_k): PDF of R_k at t_k
y_k: degradation value at time t_k
∆y: degradation increment
j: iteration times
θ: estimated parameters, θ = (α_µ, σ_µ^-2, λ) | 2021-07-27T00:05:19.942Z | 2021-05-28T00:00:00.000 | {
"year": 2021,
"sha1": "366737ebe5cb804964aea136bef0deb0ec548eb8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/11/5011/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ac70001f60f49ec7c69b8d48e952b201995099b4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
209460969 | pes2o/s2orc | v3-fos-license | Asymptotic Behavior of a Sequence of Conditional Probability Distributions and the Canonical Ensemble
The probability distribution of an additive function of a subsystem conditioned on the value of the function of the whole, in the limit of the ratio of their values goes to zero, has a limit law: It equals to the unconditioned marginal probability distribution weighted by an exponential factor whose exponent is uniquely determined by the condition. We apply this theorem to explain the canonical equilibrium ensemble of a system in contact with a heat reservoir. A corollary provides a precise formulation of what a temperature bath is in probabilistic terms.
with its surrounding heat bath at a fixed temperature, where the bath is usually considered much larger in comparison. The theory has wide applications from condensed matter physics to biophysical chemistry [8,4]. In textbooks, there are currently two heuristic justifications for the exponential factor. One is the original derivation by L. Boltzmann in 1877 based on an ideal gas [22], another is based on the notion of a large heat bath and a small system within, extensively discussed by J. W. Gibbs in his 1902 magnum opus [11].
After an extensive discussion of the properties of an invariant measure, including demonstrating that it has to be a function of the mechanical energy, however, Gibbs did not attempt to derive the canonical distribution; rather he simply stated that an exponential form "seems to represent the most simple case conceivable". Boltzmann's derivation was based on the idea of the most probable frequency under the constraint of a given total energy. In the process he recognized the entropy S = −N Σ_i f_i log f_i from the multinomial distribution, where N is the total number of gas molecules and i represents a distinct molecule state with kinetic energy e_i. This derivation preceded both the modern theory of large deviations [6,26] and the principles of maximum entropy (MaxEnt) championed by E. T. Jaynes [14,20]. In connection to the contraction principle in the former, Boltzmann computed the large-deviation rate function for a sample frequency conditioned on a given sample mean of energy instead of obtaining the rate function for the random variable. This approach has now been made rigorous under the heading of the Gibbs conditioning principle [24,6]. MaxEnt, on the other hand, plays a pivotal role in information theory and machine learning [13,1]. In the 1980s, Boltzmann's logic was also rigorously developed into providing a connection between maximum entropy and conditional probability [31,28].
Gibbs' theory for the canonical distribution was based on the idea of heat bath. In [11], he noted that distribution with the exponential form had "the property that when the system consists of parts with separate energies, the laws of the distribution in the phase of the separate parts are of the same nature".
Having energy E A for the microstate A of the small system and E B for the microstate B of the heat bath, Gibbs assumed that the phase-space distributions follow (i) additivity: P(A, B) = P(A + B); and (ii) independency: P(A, B) = P(A)P(B). Under those two assumptions, the only possible probability distribution for A is exponential: P(A) ∝ e^(λE_A). Furthermore, all small systems in contact with the same bath share the same parameter λ; this means they are of the "same nature". By assuming that every small system follows the conjugate distribution laws (a family of single parameter exponential priors), A. Ya. Khinchin [15] rigorously proved Gibbs' assertion of the common λ and further showed that it is determined by the given total energy.
As far as we know, a rigorous logical origin for the exponential weight itself in the canonical distribution, beyond an ideal gas, is still missing in the framework of modern probability. This has been noted by experts [24]. We were inspired by a very widely used derivation in standard statistical physics textbooks, based on Taylor's expansion of the entropy function of a heat bath [16,12,18]. The present work formulates this approach rigorously in probabilistic terms and then gives a proof. We indeed have obtained a rather general new mathematical theorem. The results can be applied back to particular scenarios in statistical physics under corresponding assumptions. Our theorems have clarified the notions of additivity, independency, and the vague "same natures of systems". The last is actually a corollary of the existence and uniqueness of a single parameter in the exponential form of the canonical distribution, and additivity is only required in order to preserve the exponential form during the map from a phase space to its corresponding energy space. Independency of two systems is a special case in which we shall show that the parameter depends only on fluctuations of the heat bath and is independent of the small system.
Our results are obtained based on two mathematical ideas: conditional probability and asymptotics. We use a Gedankenexperiment to illustrate the crucial role of the former, conditional probability, in our theorems: Let Z := X + Y, where X is a random variable for some quantity (e.g. energy) in a small system and Y is for the same quantity in the heat bath. If one is only interested in the static statistics of X, there is a way to set up an experiment: Let Z(t) be a fluctuating total mechanical energy as a function of time whose distribution has support on D ⊆ R+, but one selects only those measurements of X(t) that simultaneously have Z(t) ∈ I ⊆ D. In the language of mathematics, this thought experiment is about the conditional probability of X(t) conditioned on the event Z(t) ∈ I. Why is this thought experiment regarding conditional probability very much in line with the physicist's picture of a canonical ensemble? The answer is in the idea of time-scale separation, which involves three different time scales. The first time scale is for the small subsystem X(t) to reach its equilibrium, the second time scale is to restrict the total system Z(t) to be fluctuating inside a finite interval I, and the third time scale is for Z(t) to reach its equilibrium. And the first one is much shorter than the second one, which is much shorter than the third one. Based on this framework of time-scale separation, the canonical ensemble is the statistical ensemble that represents the possible outcomes of the system of interest on the second time scale, i.e., when the small subsystem has reached its equilibrium but the total system is still "constrained" in a certain interval.
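This Gedankenexperiment can be mimicked with a small Monte Carlo sketch. The choice of laws below is purely illustrative: X ∼ Exp(1) plays the small subsystem, Y ∼ Gamma(n, 1) plays the heat bath (a sum of n i.i.d. Exp(1) variables), and the conditioning window is centered at a per-variable mean α different from the unconditional mean; for this choice of bath law, the tilt parameter the limit law selects is λ = 1 − 1/α, so the conditional mean of X should approach 1/(1 − λ) = α rather than 1.

```python
# Monte Carlo sketch of conditioning a small subsystem X on Z = X + Y falling in a
# window I, and comparing the conditional mean of X with the exponentially tilted
# prediction of the limit law.  All distributional choices here are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, alpha = 20, 1.5
lam = 1.0 - 1.0 / alpha                   # tilt parameter fixed by the condition (Exp(1) case)

X = rng.exponential(1.0, 2_000_000)       # small subsystem
Y = rng.gamma(n, 1.0, 2_000_000)          # heat bath value (sum of n Exp(1) variables)
Z = X + Y

center = (n + 1) * alpha
keep = (Z > center - 0.5) & (Z < center + 0.5)     # conditioning event Z in I

print(f"accepted samples          : {keep.sum()}")
print(f"conditional mean of X     : {X[keep].mean():.3f}")
print(f"tilted-marginal prediction: {1.0 / (1.0 - lam):.3f}   (unconditional mean = 1)")
```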
In fact, having its own stationary distribution of the total system (if it evolves long enough) is very significant for the theory of conditional probability for two reasons: (1) knowing the fluctuation of the large system is necessary to define the conditional probability mathematically and (2) to perturb the given condition of the total system to see how it has effects on the small subsystem is the essence of our theory of the canonical distribution. In other words, even though the original problem is only about the behavior of X(t) when Z(t) ∈ I, if we have more information of Z(t) outside of I, we are able to seek a deeper understanding of the original problem. Not only for the canonical ensemble, this idea of treating a given constraint (parameter) as a variable with distribution has been also widely used in many of other fields, for example, in comparison of quenched and annealed invariance principles for random conductance model [3], and in studying of initial-condition naturalness in the case of statistical mechanics [30].
Mathematically, using conditional probability to understand the Gibbs measure has a long history; see O. E. Lanford [17], O. A. Vasicek [29], H. O. Georgii [10], and H. Touchette [27]. In particular, on the basis of Boltzmann's logic, using asymptotic conditional probability to describe the canonical ensemble has been well established through the Gibbs conditioning principle [24,6]. More discussion of this is provided in Section 2 for a contradistinction with our own work. In brief, the Gibbs conditioning principle addresses this question: Given a set A ⊆ R and a constraint of the type Z n ∈ A, what are the limit points of the conditional probability in (1.1), where the X i are independent and identically distributed random variables (the Gibbs conditioning principle itself holds beyond i.i.d. random variables)? We can identify that (1.1) is very similar to our setup for the canonical distribution if we consider Z n := Y n + X, where Y n = (1/n) Σ_{i=1}^{n−1} X i is the heat bath in our approach. However, the heat bath Y n in our setup could be defined in a much more general way: we only require that Y n converges to some random variable Y in distribution rather than having a special form as the sum of identical random variables (we also do not require X and Y n to be independent). Therefore, whether one uses the Gibbs conditioning principle or our approach to derive the canonical distribution, both sides are asking a very similar question: what is the asymptotic behavior of the conditional probability?
To answer this question, our approach is very different from the Gibbs conditioning principle, which transforms the original problem into a sampling problem: what are the limit points of the conditional law of the empirical measure in Equation (1.2)? In Equation (1.2), L n = (1/n) Σ_{i=1}^{n} δ_{X i} is the corresponding empirical measure for Z n and Γ = {γ : ∫ x γ(dx) ∈ A} is the corresponding constraint for Z n . In fact, even though this approach is named the "Gibbs" conditioning principle, its logic exactly follows Boltzmann's derivation of the canonical ensemble. As a consequence of the Gibbs conditioning principle, it provides a mathematical foundation for why using the maximum entropy principle with a certain constraint works to find the canonical distribution [31,28]. On the other hand, our approach directly finds the asymptotic behavior of the conditional probability (1.1) on the basis of two things: (i) the subsystem is asymptotically small relative to the total system and (ii) the distribution of the heat bath converges to a limiting distribution as n → ∞ with a proper scaling. Intuitively, under this framework, the distribution of the small subsystem should consist of its unconditional distribution and a weight from the "bias" as a function of a linear approximation of that limiting distribution of the heat bath. As we mentioned above, our approach follows Gibbs' theory for the canonical distribution, which involved the idea of a "heat bath" that contributes a "bias" to the system. In short, the common point of our approach and the Gibbs conditioning principle is that both sides started with a very similar question of fundamental importance in statistical mechanics and adopted the concept of conditional probability to describe that problem. However, the method of solving the problem on each side has a very different philosophy: the Gibbs conditioning principle is about counting statistics following Boltzmann's logic, and ours is inspired by the idea of the heat bath from Gibbs.
Besides conditional probability, we also adopt a very important and powerful mathematical technique in our theory: asymptotics. Indeed, asymptotics is not only a mathematical technique but also the essence of statistical mechanics. The purpose of statistical mechanics is to derive equilibrium properties of a macroscopic system with an enormous number of molecules N occupying a very large volume V; macroscopic equilibrium thermodynamics is then an emergent phenomenon in the limiting case when N → ∞ and V → ∞. Following on from this concept, we shall show that the emergence of an exponential factor in the canonical ensemble is also the result of a limit law of probability theory. To take an analogy, our limit theorem is to the exponential form of the canonical distribution what the central limit theorem is to a normal distribution. As with every limit theorem, we have to define carefully how our assumptions depend on n. In our work, as n increases, the subsystem becomes "relatively small" compared with the total system ("relatively small" has a rigorous definition in our theorems). Based on this main assumption, we obtain two significant results: (i) for a sufficiently large n, a conditional distribution can be well approximated by its unconditional distribution weighted by an exponential factor, and (ii) a sequence of conditional distributions converges to a limit given by its unconditional distribution weighted by a unique exponential factor.
We obtain two theorems regarding the first result in Section 3.2, and they provide the existence of the canonical distribution when a system is contained in a finitely large total system (n is sufficiently large). Furthermore, we obtain two limit theorems regarding the second result in Section 3.3, they provide the existence of a unique canonical distribution when the system is contained in an infinitely large total system (n → ∞). In comparison with Section 3.3, Section 3.2 only requires weaker conditions, but the exponential form in the canonical distribution may not be unique since there could be more than one sequence having the same asymptotic behavior. On the other hand, Section 3.3 requires stronger conditions, but it gives us a unique canonical distribution in the limit and this distribution can be applied back to approximate the conditional probabilities for all finitely large n. This result can be regarded as an example that the limit theorems from probability predict the laws of nature. Here, we would like to quote from P. W. Anderson [2] "Starting with the fundamental laws and a computer, we would have to do two impossible things -solve a problem with infinitely many bodies, and then apply the result to a finite system -before we synthesized this behavior." Our idea echos Anderson's view: To find the limiting behavior of a sequence of conditional probability distributions and apply it back to the distribution of a small subsystem contained in a finitely large total system with some fluctuations, and this is how it is used as a scientific theory.
1.1. Organization of the paper. We provide some useful theorems and definitions and explain our motivation in this problem in Section 2. In Section 3 we state and explain our main results. Proof of the main results are provided in Section 4. In Section 5 we present several applications of our main theorems.
Notations. Throughout the paper, we will adopt the notations a n = o(b n ) when lim n→∞ an bn = 0, and a n = O(b n ) when |a n /b n | is bounded by some constant C > 0. We sometimes use brief notations of probabilities in order to save space in our proofs, e.g., P Xn|Zn (x; I) = P (X n = x | Z n ∈ I). We always use X n , Y n , Z n to denote sequences of random variables, whose definitions might change in different theorems, but we will give their exact definitions before stating the theorems.
Preliminaries
2.1. Maximum entropy and conditional probability. We first recall the following classical results. Here we don't specify the regularity conditions in the statements of the two theorems below. For more details, see the original references.
where S_n := X_1 + X_2 + ··· + X_n and µ := E[X_1]. The maximum entropy distribution under the constraint α is the exponentially weighted distribution e^{λx} f(x) (suitably normalized), and λ is chosen such that the constraint (2.4) holds, i.e., the mean under this weighted distribution equals α. It is said that e^{λx} maximizes the entropy functional and that the parameter is determined by the constraint (2.4).
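As a concrete worked instance of this tilting (a standard example; the exponential law and the value of α below are chosen purely for illustration): if the X_i are nonnegative with unconditional density f(x) = e^{-x} on R_+ and we impose the first-moment constraint α = 2, the tilted density is
\[ f_\lambda(x) \;=\; \frac{f(x)\,e^{\lambda x}}{\int_0^\infty f(s)\,e^{\lambda s}\,ds} \;=\; (1-\lambda)\,e^{-(1-\lambda)x}, \qquad \lambda < 1, \]
whose mean is 1/(1-λ). The constraint (2.4) then forces 1/(1-λ) = 2, i.e., λ = 1/2, so the limiting conditional density is (1/2)e^{-x/2}: an exponential density whose mean matches the conditioning value α.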
We see that Theorem 2.1 implies the convergence of the conditional probability distribution of X_1 to its unconditional distribution. In this case, the sum of the X_i is conditioned on the scale of Gaussian fluctuations: S_n = nµ + c_n, where nµ is the mean of S_n and c_n is of the order of the standard deviation of S_n. On the other hand, we see that Theorem 2.2 implies the convergence of the conditional probability distribution of X_1 to the (normalized) product of its unconditional distribution and the maximal entropy distribution e^{λx}. The parameter λ is determined by the condition S_n = nα, which is on the scale of large deviations when α differs from µ. Theorem 2.2 is a particular case of the Gibbs conditioning principle, which is the meta-theorem [7] regarding the conditional probability of X_i given that the empirical measure of an i.i.d. sequence,
(1/n) ∑_{i=1}^{n} δ_{X_i} (2.5), belongs to some rare event such as the event that this empirical measure lies in a set Γ = {γ : ∫ x γ(dx) ∈ I} (2.6). Using the empirical measure defined in (2.5) conditioned on the rare event (2.6) to find the limit of the conditional probability in Theorem 2.2 turns out to be equivalent to finding the limiting measure γ*. By the Gibbs conditioning principle, under appropriate regularity conditions, γ* minimizes the relative entropy H(γ | µ_X), where γ ∈ Γ and µ_X is the law of X_1. In fact, this result implies the limit law derived in Theorem 2.2.
One of the most successful approaches to the Gibbs conditioning principle is through the theory of large deviations [24,7]. This approach involves Sanov's theorem [21], which provides the large-deviation rate function of the empirical measure induced by a sequence of i.i.d. random variables, and the contraction principle [9], which describes how continuous mappings preserve the large deviation principle from one space to another. In short, these theorems regarding counting and transformation in the theory of large deviations yield the Gibbs conditioning principle and provide the foundation for using the maximum entropy distribution under certain constraints to find the limit of a sequence of conditional probabilities. To study the question of how fast the probability of such a large deviation of the sample mean tends to zero, Harald Cramér obtained the following theorem in 1938: Theorem 2.3 (Cramér's theorem [5]). Assume that A(λ) := log E[e^{λ X_1}] < ∞ for all λ ∈ R; then, for y > µ, lim_{n→∞} (1/n) log P(S_n ≥ ny) = −φ(y), where φ(y) := sup_{λ∈R} {λy − A(λ)}. The function A is called the logarithmic moment generating function. In the applications of large deviation theory to statistical mechanics, A is also called the free energy function, and the function φ is called the rate function of large deviations [26]. We can recognize that φ(y) is the Legendre transform of A(λ) (A is a convex function). Therefore, φ = A* (the convex conjugate of A), and this leads to the following pair of reciprocal equations (2.12): dA(λ)/dλ = y if and only if dφ(y)/dy = λ. Therefore, this result (2.14) shows that λ can not only be determined implicitly by the free energy function A but can also be found explicitly by the rate function φ.
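As a worked instance of these reciprocal equations (a standard example; the exponential law is chosen here only for illustration): let X_1 have the Exp(1) distribution, so that
\[ A(\lambda) = \log E[e^{\lambda X_1}] = -\log(1-\lambda), \quad \lambda < 1, \qquad \phi(y) = \sup_{\lambda}\{\lambda y - A(\lambda)\} = y - 1 - \log y, \quad y > 0. \]
Then dA/dλ = 1/(1−λ) = y exactly when λ = 1 − 1/y, which coincides with dφ/dy = 1 − 1/y; the two derivatives are inverse maps of each other, as (2.12) asserts.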
One of our main theorems (Theorem 3.7) can be applied to a particular type of heat bath given by a sum of i.i.d. random variables (Theorem 5.7); there we show directly that λ is uniquely determined by the first derivative of the rate function φ evaluated at the conditioning value α. In this case, we apply the large deviation principle directly to the distribution of the heat bath, i.e., of the sum of the X_k. In fact, the former (our approach) follows Gibbs' logic for the canonical distribution through the heat bath method, while the latter (the Gibbs conditioning principle) follows Boltzmann's logic for the canonical distribution through counting statistics. The name "Gibbs" conditioning principle was chosen in order to comprehend Gibbs' prediction of the canonical distribution from a mathematical standpoint [24]; however, in our opinion, it is closer to the idea of Boltzmann's derivation of the canonical distribution. From our perspective, the fact that choosing the maximum entropy distribution to approximate the conditional probability works is a natural consequence of the emergence of e^{λx} f(x) when the finite subsystem is contained in an infinitely large system conditioned on a value far from its mean. In other words, (normalized) e^{λx} f(x) is the density of the limit of a sequence of conditional probabilities, and it maximizes the entropy functional as an inevitable corollary of the setup of the heat bath method. In comparison with the Gibbs conditioning principle, our logic provides a very different point of view on why the maximum entropy principle works for finding the limit of conditional probabilities. Even though these two approaches have very different philosophies, in terms of mathematics they are connected by the reciprocal equations (2.12) through the Legendre transform.
2.3. Asymptotic behavior of probabilities. In order to define how "good" an approximation of a conditional probability is, we first need to decide which metric we will use on the space of measures. In what follows, let Ω denote a measurable space with σ-algebra F, and let P, Q denote two probability measures on (Ω, F).
Definition 2.4 (KL-divergence). For two probability distributions P and Q of a continuous random variable, the KL-divergence is defined by D_KL(P ‖ Q) := ∫_Ω p(x) log [p(x)/q(x)] dx, where p, q are the density functions of P, Q, respectively. For two probability distributions P and Q of a discrete random variable, the Kullback-Leibler divergence between them can be written as D_KL(P ‖ Q) := ∑_{x∈Ω} P(x) log [P(x)/Q(x)], (2.16) where P, Q here denote the probability mass functions and Ω is a countable space. By continuity arguments, the convention is adopted that 0 log(0/q) = 0 for q ∈ R and p log(p/0) = ∞ for p ∈ R\{0}. Therefore, the KL-divergence can take values from zero to infinity. Definition 2.5 (total variation distance). The total variation distance between P and Q is δ(P, Q) := sup_{A∈F} |P(A) − Q(A)|. It is well known that the KL-divergence and the total variation distance are related by Pinsker's inequality [19]: δ(P, Q) ≤ sqrt{D_KL(P ‖ Q)/2}. (2.17) Definition 2.6 (convergence of measures in total variation). Given the above definition of the total variation distance, a sequence {P_n}_{n∈N} of measures on (Ω, F) is said to converge to a measure P on (Ω, F) in total variation distance if lim_{n→∞} δ(P_n, P) = 0; equivalently, lim_{n→∞} sup_{‖f‖_∞ ≤ 1} |∫ f dP_n − ∫ f dP| = 0.
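For a quick numerical check of these definitions and of (2.17) (the Bernoulli parameters below are chosen only for the example): let P = Bernoulli(1/2) and Q = Bernoulli(1/4). Then
\[ D_{KL}(P\|Q) = \tfrac12\log\tfrac{1/2}{1/4} + \tfrac12\log\tfrac{1/2}{3/4} = \tfrac12\log\tfrac{4}{3} \approx 0.144, \qquad \delta(P,Q) = \bigl|\tfrac12-\tfrac14\bigr| = \tfrac14, \]
and Pinsker's inequality is satisfied since 1/4 ≤ sqrt(0.144/2) ≈ 0.268.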
Definition 2.7 (weak convergence of measures). Let {P_n}_{n∈N} be a sequence of probability measures on (Ω, F). We say that P_n converges weakly to a probability measure P on (Ω, F) if lim_{n→∞} ∫ f dP_n = ∫ f dP for every bounded continuous function f. From the two definitions above, total variation convergence of measures always implies weak convergence of measures. Definition 2.8 (convergence in distribution). A sequence {X_n}_{n∈N} of random variables is said to converge in distribution to the random variable X if µ_{X_n} → µ_X weakly, in which µ_{X_n} is the law of X_n and µ_X is the law of X.
Even though the KL-divergence is not a metric, by the inequality (2.17), if the KL-divergence between two sequences of measures converges to zero, then their total variation distance converges to zero, so they must also converge to each other weakly. Following this line of implication, in the present work we start by defining the KL-divergence between two sequences of measures and then determine which conditions guarantee that it converges to zero. Once we have that, we obtain both strong (total variation) convergence and weak convergence of the two sequences of measures toward each other under those conditions.
We now state two classical theorems (see [23]) regarding the convergence of probability distributions, which we will use in our proofs. Theorem 2.9 (Berry-Esseen theorem). Let X_1, X_2, ... be i.i.d. random variables with E[X_1] = 0, E[X_1^2] = σ^2 > 0 and E|X_1|^3 < ∞. Then sup_x |P(S_n/(σ√n) ≤ x) − Φ(x)| ≤ C E|X_1|^3/(σ^3 √n) for a universal constant C > 0, where Φ is the standard normal distribution function.
Theorem 2.10 (Slutsky's theorem). Let {Z_n}_{n∈N}, {W_n}_{n∈N} be sequences of random variables. If Z_n converges in distribution to a random variable X and W_n converges in probability to a constant c, then Z_n + W_n converges in distribution to X + c and Z_n W_n converges in distribution to cX. Corollary 2.11, which combines these two results and is used in Section 5, follows from Theorem 2.9 and Theorem 2.10; the proof is provided in Appendix 6.2.
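For instance (a standard illustration of Theorem 2.10, with the particular laws chosen only for the example): if Z_n := S_n/(σ√n) converges in distribution to a standard normal G by the central limit theorem, and W_n is any sequence with W_n → 0 in probability, then
\[ Z_n \xrightarrow{d} G, \quad W_n \xrightarrow{P} 0 \ \Longrightarrow\ Z_n + W_n \xrightarrow{d} G. \]
This is the pattern used later in the proof of Theorem 5.4, where a vanishing term K_n is absorbed into the limit of the rescaled heat bath.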
Main results
3.1. Setup. In statistical mechanics, the canonical ensemble is considered as the probability distribution of an additive function of a subsystem in thermal equilibrium with its surrounding heat bath, which is much larger in comparison. In the introduction (Section 1), we have already presented our philosophy of adopting conditional probability to approach this problem. In this section on the main results, we are going to show that, when the subsystem is "small" relative to the whole system, the "canonical distribution" is a "good" approximation of the corresponding conditional distribution. Within this framework, we first need to define three things rigorously: (1) A relatively small subsystem. (2) The canonical probability distribution.
(3) Good approximations. For the definition of (1), in order to define a relatively small subsystem, we consider a sequence of conditional densities f_{X|Z̃_n}(x; E_n), E_n := µ_n + I/β_n, (3.1) where Z̃_n := X + Ỹ_n, X is a nonnegative continuous random variable, Ỹ_n is a sequence of continuous random variables, I is a finite interval, and µ_n, β_n are positive sequences. The formula for E_n represents two kinds of transformations that we can apply to the interval I: µ_n is the shifting parameter and β_n is the scaling parameter. Through different combinations of µ_n and β_n, the given condition on Z̃_n will lie on certain significant scales. Here are two examples. (1) Assume µ_n := E[Z̃_n] = nµ, where µ is a constant, and β_n = 1/√n; then Z̃_n is conditioned to be inside the interval E_n = nµ + √n I. The interval E_n is then around E[Z̃_n] on the scale of the Gaussian fluctuations in the central limit theorem. (2) Assume µ_n = nµ as before and β_n = 1/n; then Z̃_n is conditioned to be inside the interval E_n = nµ + nI. The interval E_n is then around E[Z̃_n] on the scale of large deviations. In our theorems, we will make the assumptions that E[X^j] < ∞ for a suitable j (3.2) and that β_n = o(1). Therefore, the definition (3.1) gives a sequence of densities for the nonnegative continuous random variable X with E[X^j] < ∞, conditioned on the event Z̃_n ∈ E_n with E_n → ∞ (β_n → 0). In this way, the positive sequence β_n quantifies how "small" the subsystem is relative to the given condition on the whole system.
Then we extend our definition of a "small" subsystem to the case of discrete random variables. Consider a sequence of conditional probability functions P(K = k | H̃_n ∈ E_n), (3.3) where H̃_n := K + L̃_n, K is a nonnegative discrete random variable for which we assume E[K^j] < ∞ (3.4), and L̃_n is a sequence of discrete random variables.
For the definition of (2), we introduce a general form of the canonical probability distribution as follows. Let I be the interval defined in (3.1) and let ζ_n : I × R → R be a sequence of functions; the density of the canonical probability distribution of a nonnegative continuous random variable X can be represented by f_X(x) e^{−ζ_n(I;x)x} / ∫ f_X(s) e^{−ζ_n(I;s)s} ds, with 0 ≤ ζ_n(I;x) < ∞ for all x ∈ R_+. (3.5) Similarly, let ζ̃_n : I × R → R be a sequence of functions; the canonical probability distribution of a nonnegative discrete random variable K can be represented by P(K = k) e^{−ζ̃_n(I;k)k} / ∑_{k∈S} P(K = k) e^{−ζ̃_n(I;k)k}, with 0 ≤ ζ̃_n(I;k) < ∞ for all k ∈ S, (3.6) where S is the support of P(K = k).
For the definition of (3), a "good" approximation is defined by a sufficiently small distance between the two distributions in total variation. In most of our results, we prove that the KL-divergence between two sequences of distributions converges to zero; by Pinsker's inequality (2.17), this implies that the two sequences converge to each other in total variation, i.e., one sequence is a good approximation of the other.
3.2.
Approximation of conditional probabilities. Based on the definitions of (1), (2), and (3) in the setup, we provide two approximation theorems showing the existence of canonical distributions as good approximations of conditional distributions when the subsystem is sufficiently small relative to the whole system.
Let X_n := β_n X be a sequence of nonnegative continuous random variables and take j = 2 for the assumption (3.2), i.e., E[X^2] < ∞. Therefore, we can define E[X_n^2] =: a_n, with a_n = o(1).
Let Y n := β n Ỹ n − µ n be a sequence of continuous random variables and Z n := X n + Y n . For a finite interval I = [h, h + δ], h, δ ∈ R and δ > 0, let P (n) I be a sequence of probability measures with density functions And let Q (n) I be a sequence of probability measures with density functions f X|Zn x; E n . Our first theorem for continuous random variables is as follows: (1), and an open interval D such that the following holds: (1) For all (x, y) ∈ R 2 , (2) For all x ∈ R + and every [y, y + δ] ⊂ D, there exist positive constants δ 1 , C 3 depending on y such that Given an interval I ⊂ D, then (3.14) and P Remark 3.3. Interpretations of Theorem 3.1 for statistical mechanics: the sequence a n = o(1) represents that the second moment of the function of the subsystem X scaled by the size of the given condition of the whole system asymptotically goes to zero. And the sequence b n = o(1) represents that the correlation of the subsystem and its surrounding is asymptotically independent. By our approximation theorem, using the canonical distribution to approximate the conditional distribution has a very small error O( √ a n + b n ) when n is sufficiently large, i.e., (1) The subsystem is small relative to the whole system.
(2) The subsystem has weak interaction with its surroundings. Note that these conditions (1) and (2) echo the physicist's setup of the canonical ensemble in statistical mechanics. Now we extend our approximation theorem to discrete random variables. Take j = 2 for the assumption (3.4), i.e., E[K^2] < ∞ (3.15), and by the definition (3.6) we have a set S such that S := {k ∈ R : P(K = k) > 0}. (3.16) Let K_n := β_n K be a sequence of nonnegative discrete random variables. By (3.15) and (3.16), we can define E[K_n^2] =: a_n, a_n = o(1), (3.17) and a sequence of sets S_n such that S_n := {β_n k ∈ R : P(K_n = β_n k) > 0}. (3.18) Let L_n := β_n (L̃_n − µ_n) be a sequence of discrete random variables and let Y_n be a sequence of continuous random variables. Let H_n := K_n + L_n and Z_n := K_n + Y_n.
Our second theorem for discrete random variables is as follows: Theorem 3.4. Assume the following conditions hold: (1) All conditions in Theorem 3.1 hold for Z n := K n + Y n on an open interval D.
(2) There exists a set D ′ ⊂ D and a positive sequence c n = o(1) such that for every interval Given an interval I ⊂ D ′ , then satisfies the definition of the canonical probability distributions in (3.6).
Remark 3.5. In Theorem 3.1 and Theorem 3.4, X and K are defined as nonnegative random variables. In the following two points, we extend our approximation theorem to the case when X (or K) is bounded from below (shifting property) and to the case when X (or K) is a nonpositive random variable (reflection property). (1) (Shifting property) Let X be a continuous random variable bounded below. By a change of variable, let X̃_n := β_n (X − C), where C is the finite lower bound; since β_n = o(1), we still have E[X̃_n^2] = o(1). In addition, if the conditional probability P(Y_n ∈ [y, y + δ] | X̃_n = x) satisfies all of the conditions in Theorem 3.1, then we can apply Theorem 3.1 to obtain the canonical distribution for X. We call this the shifting property of canonical distributions. For a discrete random variable K, its canonical probability distribution has this property as well. The shifting property can be interpreted as an extension of the cases restricted to nonnegative quantities (e.g., energy and number of molecules) for the canonical ensemble and the grand canonical ensemble in statistical mechanics: the canonical distribution can be generalized to represent the possible values of a function of the subsystem that is bounded from below, in thermal equilibrium with the heat bath at a positive temperature (in Theorem 3.1, we choose the condition I such that 0 ≤ ψ_n(I; β_n x) < ∞).
(2) (Reflection property) Let X be a nonpositive continuous random variable. Assume that the condition (3.11) in Theorem 3.1 holds in the analogous form for all x ∈ R_−, and assume all of the other conditions in Theorem 3.1 are satisfied; then Theorem 3.1 can be applied to an interval I = [h, h + δ] ⊂ D such that −∞ < ψ_n(I; β_n x) ≤ 0 for all x ∈ R_−. We call this the reflection property of canonical distributions. For a discrete random variable K, its canonical probability distribution has this property as well. Here is our interpretation of the reflection property for statistical mechanics: when a given condition I on the whole system gives rise to a negative parameter (−∞ < ψ_n(I; β_n x) ≤ 0) in the exponential weight of the canonical distribution, our approximation theorem can be applied to a nonpositive function of the subsystem. Combining this property with the shifting property, the canonical distribution can represent the possible values of a function of the subsystem that is bounded from above, in thermal equilibrium with the heat bath at a negative temperature (here we choose the condition I such that −∞ < ψ_n(I; β_n x) ≤ 0).
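To see why the shift does not affect the exponential weight (a small check; the lower bound C and the parameter λ are used generically): if X ≥ C and we tilt the shifted variable X − C by e^{−λ(x−C)}, the constant factor e^{λC} is absorbed into the normalization, so
\[ \frac{f_X(x)\,e^{-\lambda (x - C)}}{\int f_X(s)\,e^{-\lambda (s - C)}\,ds} \;=\; \frac{f_X(x)\,e^{-\lambda x}}{\int f_X(s)\,e^{-\lambda s}\,ds}; \]
the canonical form is therefore invariant under the shift, which is the content of the shifting property.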
3.3.
Limit theorems for conditional probabilities. In this section, we provide two limit theorems showing that a sequence of conditional distributions converges, under appropriate scaling, to a unique canonical distribution, where the convergence is also expressed through a correspondingly scaled KL-divergence of the sequence of conditional distributions from its limit distribution. In comparison with Section 3.2, here we obtain a unique canonical distribution, at the appropriate scale, when a system is conditioned on an infinitely large total system (n → ∞). This differs from Section 3.2, where we derived the canonical distribution directly for each finitely large n.
Recall from Section 3.2 that, for a sufficiently large n, the measure Q^{(n)}_I with density function f_{X|Z̃_n}(x; E_n) (3.23) can be well-approximated by P^{(n)}_I with a density function proportional to f_X(x) e^{−β_n ψ_n(I; β_n x) x}. (3.24) Note that the parameter ψ_n(I; β_n x) of the exponential function in (3.24) depends on n and x.
Through the limit theorems in this section, we show that the sequence of measures Q^{(n)}_I can be well-approximated by a unique (sequence of) canonical distribution(s) with density function(s) proportional to f_X(x) e^{−λ_n(I) x} (3.25) in two cases: (1) λ_n(I) = β_n ψ(I), where β_n = o(1), ψ : I → R and 0 < ψ(I) < ∞.
(2) λ_n(I) = ϕ(I), where ϕ : I → R and 0 < ϕ(I) < ∞. Note that ψ(I) and ϕ(I) are independent of x and n, in contrast with ψ_n(I; x) in (3.24). One of the main ideas behind the proof of our limit theorems is as follows. Let P̃^{(n)}_I be a sequence of probability measures with density functions (normalized) f_X(x) e^{−β_n ψ(I) x}, and let P_I be a probability measure with density function (normalized) f_X(x) e^{−ϕ(I) x}. With the distance D_KL defined as the KL-divergence, Case (1) can be viewed as the statement that D_KL(P̃^{(n)}_I ‖ Q^{(n)}_I), suitably rescaled by a power of β_n, converges to zero, and Case (2) can be viewed as the statement that D_KL(P_I ‖ Q^{(n)}_I) converges to zero. Note that in Case (1), since we have to scale x with β_n, we also have to scale the distance D_KL by some order of β_n to guarantee the existence of the limit ψ(I).
Furthermore, we require stronger conditions than those used for (3.24) in order to apply Lemma 4.2 and Lemma 4.3 in the proofs of our limit theorems. Here is the essence of those two lemmas: under appropriate regularity conditions, the sequence λ_n(I) in (3.25) is uniquely determined by a linear (in x) approximation of the sequence log [P(Y_n ∈ I − β_n x | X_n = β_n x) / P(Z_n ∈ I)]. (3.28) Therefore, most of the conditions in our limit theorems are required to guarantee that (3.28) is well-approximated by a linear function of x and that the remainder term converges to zero fast enough.
Our first limit theorem for Case (1), λ_n(I) = β_n ψ(I), is as follows. Theorem 3.6. Consider a function ψ : B(R) → R such that 0 < ψ(I) < ∞ for the given interval I. Let P^{(n)}_I be a sequence of probability measures with density functions (normalized) f_X(x) e^{−β_n ψ(I) x}. Assume the following conditions hold; then P^{(n)}_I satisfies the definition of the canonical probability distributions in (3.5).
Our second limit theorem for Case (2), λ_n(I) = ϕ(I), is as follows. Theorem 3.7. Let ϕ : B(R) → R be a function such that 0 < ϕ(I) < ∞ for the given interval I. Let P_I be a probability measure with density function (normalized) f_X(x) e^{−ϕ(I) x}. Assume the following conditions hold: (2) Y_n → µ in probability, for some constant µ ∉ I. The sequence of laws of Y_n satisfies a large deviation principle with speed 1/β_n and rate function φ ∈ C^2(D), where D is an open interval containing I, and −∞ < φ'(y) < 0 for all y ∈ I.
(3) There exists a sequence of functions r_n : R → R with r_n(x) e^{−ξx} uniformly bounded on R_+ for any ξ > 0; and P_I satisfies the definition of the canonical probability distributions in (3.5).
As with our approximation theorems in Section 3.2, we can extend our limit theorems to discrete random variables, to random variables bounded below, and to random variables bounded above, as follows. (1) Discrete random variables: Theorem 3.6 and Theorem 3.7 can also be applied to the case of a nonnegative discrete random variable K, a sequence of discrete random variables L̃_n, and H̃_n := K + L̃_n. The sequence of conditional probabilities P(K = k | H̃_n ∈ E_n) then has a limit (under the appropriate scaling) of the form (3.34). The case λ_n(I) = β_n ψ(I) follows from Theorem 3.6; the case λ_n(I) = ϕ(I) follows from Theorem 3.7. Furthermore, the probability function (3.34) satisfies the definition of the canonical probability distribution in (3.6). (2) Random variables bounded below: As in Remark 3.5, we can extend these limit theorems to the case when X is bounded below. By a change of variable, let X̃_n := β_n (X − C), where C is the finite lower bound; we still have E[X̃^j] < ∞. Note that j = 3 is required for Theorem 3.6 and j = 1 for Theorem 3.7. In addition, assume that the corresponding conditional probabilities satisfy the linear-approximation conditions in (3.31) and (3.32), for Theorem 3.6 and Theorem 3.7, respectively. Then we can apply these limit theorems to obtain a unique canonical distribution for X. Therefore, as in point (1) of Remark 3.5, a unique canonical distribution derived as the limit of a sequence of conditional distributions has the "shifting property". For a discrete random variable K, its unique canonical distribution has this property as well. (3) Random variables bounded above: Let X be a nonpositive continuous random variable and let the corresponding canonical distributions be the sequence of distributions with density functions proportional to f_X(x) e^{−λ_n(I) x} with a negative parameter. When λ_n(I) = β_n ψ(I), Theorem 3.6 can be applied to an interval I such that −∞ < ψ(I) < 0; when λ_n(I) = ϕ(I), Theorem 3.7 can be applied to an interval I such that −∞ < ϕ(I) < 0. Therefore, as in point (2) of Remark 3.5, a unique canonical distribution derived as the limit of a sequence of conditional distributions has the "reflection property". For a discrete random variable K, its unique canonical distribution has this property as well. This reflection property offers an explanation of the possibility of negative temperature: for a given condition on the whole system that gives rise to a negative parameter (−∞ < λ_n(I) < 0) in the exponential weight, a unique canonical distribution for a function of the subsystem bounded from above emerges as the limit of a sequence of conditional distributions.
4.1. Proofs of Theorem 3.1 and Theorem 3.4.
Proof of Theorem 3.1. We first prove the case {x : f_{X_n}(x) > 0} = R_+. In this case, P(Z_n ∈ I | X_n = x) is well-defined for all x ∈ R_+.
It implies that
where ψ n (I; x) = ∂ log P Y n ∈ [y, y + δ] | X n = x ∂y y=h , (4.3) and we apply Taylor's expansion e yn = 1 + y n + (y n ) 2 e γnyn 2 , for some γ n ∈ (0, 1) and y n := ψ n (I; x)x to the third equation in (4.2). Note that by Condition (3.11), 0 ≤ ψ n (I; x) ≤ C 3 , (4.6) and by Conditions (3.10) and (3.11), for all x ∈ R + , k n (x) is uniformly bounded. Therefore, by the results of (4.1) and (4.2), for all x ∈ R + , we obtain that In the following proof, we will use brief notations P Yn|Xn I; x := P (Y n ∈ I | X n = x), P Zn I := P (Z n ∈ I). 15 First, we let Since R + f Xn (x)dx = 1, from (4.6), we have R + f Xn (x)e −ψn(I;x)x dx ≤ 1, hence A n ≥ 1 for all n ≥ 1. By definition X n = β n X, β n → 0, we also have (4.10) (4.9) is by change of variables X n = β n X and the scale invariant property of KL-divergence. (4.10) is true because KL-divergence is nonnegative. With (4.1), the right hand side in (4.10) can be written as From the expression of f Xn|Zn x; I in (4.7), we have the following identity For the second term in (4.12), Condition (3.10) and (3.11) implies that P Yn|Xn I; x and k n (x) are uniformly bounded and Condition (3.13) implies that P Zn I is uniformly bounded below. Then by the assumption E[X 2 n ] = a n , the first term in (4.12) satisfies By Conditions (3.11) and (3.12): P Yn|Xn I; x − P Yn I ≤ b n P Yn I with b n → 0, therefore for some constant K 1 > 0. With (4.14) and (4.15) and recall the definition of A n in (4.8), we have By triangle inequality, from (4.13), (4.16) and (4.17), we have P Yn|Xn I; x P Zn I A n = 1 + O(a n + b n ).
Since log(1 + x) ≤ x for all x > −1, for sufficiently large n, we have log P Yn|Xn I; x P Zn I A n = O(a n + b n ). (4.18) Note that the term O(a n + b n ) in (4.18) is independent of x. Therefore, for the first term in (4.11) we have DefineĜ δ (y, x) := log P Y n ∈ [y, y + δ] | X n = x . Then by Taylor expansion and the conditions (3.10), (3.11), we can expandĜ δ (h − x, x) at (h, x) to get 2∂y 2 x 2 , for someα n ∈ (0, 1), where q n (x) := 1 2 ∂ 2 log P Y n ∈ [y, y + δ] | X n = x ∂y 2 y=h−αnx . Therefore, for the second term in (4.11), by (4.20), we can get And by Condition (3.10), for all x ∈ R + , there is a constant K 2 > 0 such that In the following proof, we use a brief notation P Yn|Xn E n − x; x = P Y n ∈ [y, y + δ] | X n = x . By (4.21), and the uniform boundedness of A n , and the assumption: E X 2 n = a n , the second term in (4.11) satisfies For the case S n := {x : f Xn (x) > 0} ⊂ R + , we can only define P (Z n ∈ I | X n = x) on S n . But we can still define the KL-divergence on R + since the part of KL-divergence on R + \S n is 0. Therefore, same as (4.9), Furthermore, let ζ n (I; x) := β n ψ n (I; β n x), by the condition (3.11), there is a constant C > 0 such that for all x ∈ R + , 0 ≤ ζ n (I; x) < C. Therefore, P We first state the following lemma. The proof follows from the proof of Theorem 3.1 with the Definition of KL-divergence for discrete probability distributions in (2.16). Proof of Theorem 3.4. All of the conditions in Theorem 3.1 hold for K n , Y n , Z n by the assumptions, hence Lemma 4.1 can be applied. Therefore, we obtain the following relation between total variation and KLdivergence from (2.17): for every I ⊆ D, sup βnk∈Sn P K n = β n k | Z n ∈ I − B n P (K n = β n k)e −ψn(I;βnk)βnk With (3.19) and (4.27), the conclusion (3.20) follows from change of variable K n = β n K and triangle inequality: sup k∈S P K = k |H n ∈ E n − B n P (K = k)e −βnψn(I;βnk)k = sup βnk∈Sn P K n = β n k | H n ∈ I − B n P (K n = β n k)e −ψn(I;βnk)βnk ≤ sup βnk∈Sn P K n = β n k | Z n ∈ I − B n P (K n = β n k)e −ψn(I;βnk)βnk + sup βnk∈Sn P K n = β n k | H n ∈ I − P K n = β n k | Z n ∈ I =O(c n + a n + b n ).
Furthermore, let ζ̃_n(I; k) := β_n ψ_n(I; β_n k). We can check that 0 ≤ ζ̃_n(I; k) < C for all k ∈ S and some constant C > 0. Therefore, B_n P(K = k) e^{−β_n ψ_n(I; β_n k) k} satisfies the definition of the canonical probability distributions in (3.6).
4.2.
Proofs of Theorem 3.6 and Theorem 3.7. Let X be a nonnegative continuous random variable and with E[X] < ∞ and let Z n be a sequence of real-valued continuous random variables. Given a Borel measurable set E ∈ B (R) and a function ψ : B (R) → R with 0 < ψ(E) < ∞, let P E be a probability measure with density function And let Q (n) E be a probability measure with density function f X|Zn (x; E). We obtain the following lemma for the case (1): Lemma 4.2. Assume the following conditions hold: , for any ξ > 0, are uniformly bounded on R + . (2) (Linear approximation) There exist constants b, c ∈ R, 0 < c < ∞, and a sequence of functions Furthermore, assume E[X 3 ] < ∞ and X is not a constant random variable, letP (n) E be a probability measure with density functioñ (4.29) in which β n > 0, β n = o(1). We obtain the following lemma for the case (2): Lemma 4.3. Assume the following conditions hold : , for any ξ > 0, are uniformly bounded on R + . (2) (Linear approximation) There exist constants b, c ∈ R, 0 < c < ∞, and a sequence of functions In particular, if we choose Z n = β n X +β n (Ỹ n −µ n ), whereỸ n , β n , µ n are given in the definitions in Section 3.2, and choose the Borel set E to be a finite interval I. By Equation (3.1), those general results of Lemma 4.2 and Lemma 4.3 for f X|Zn (x, E) can be applied to f X|Zn (x, E n ), which is the conditional density defined in Section 3.2.
Proof of Lemma 4.2.
Proof. Note that for any uniformly bounded function |b_n(x)| on R_+, the estimate (4.31) holds for a sequence ε_n → 0, since d_n → ∞ by Condition (2) and E[X] is bounded by assumption.
We first prove that c = ψ(E) implies D_KL(P_E ‖ Q^{(n)}_E) → 0.
By Condition (2), 35) 20 in which the last equality is from Equation (4.32), and the result of (4.31) applied to the uniformly bounded (1)). Multiplying by e −b on both side in (4.35), we have Then we can apply Taylor's expansion to e qn(x) to get for some sequence α n ∈ (0, 1). Note that we use a formula e y = 1 + y + e α(y)·y 2 y 2 , α(y) ∈ (0, 1) and let y = q n (x), α n = α(q n (x)). Combined with Equation (4.32) and Condition (1), it implies there exists a constant M > 1 independent of n such that e −ψ(E)x+qn(x) ≤ M for all x ∈ I n . Since α n ∈ (0, 1) and ψ(E) > 0 in the assumption, Hence e −ψ(E)x+αnqn(x) is uniformly bounded on I n . Then where O(γ n ) → 0 by Condition (2). By Equations (4.37) and (4.39), (4.40) where the last equation is from the result of (4.31). And since A is bounded by the result (4.34), we have Using the inequality log(1 + x) ≤ x for all x > −1, we find a bound is uniformly bounded on R + , so we can check that is uniformly bounded on R + as well. 33). With the result of (4.42), we can get where the O(ε n ) terms are from the result of (4.31) applied to the bounded function (4.43). Therefore, by (4.34) and (4.44), we get Next we prove By Condition (2), there exists a constantb and a sequence of functionsq n (x) such that Similar to the derivation of (4.33), we have which can be proved by a similar approach in (4.34). Then following the previous proof from (4.35) to (4.44), we can get whereP E is a probability measure with density functionÂf X (x)e −cx . By the assumption (4.45), we also know Hence f X (x)e −cx = Af X (x)e −ψ(E)x , for almost everywhere on R + . 22 Since and A are both independent of x and there exists an interval such that f X (x) > 0, we obtain c = ψ(E).
Proof of Lemma 4.3.
Proof. Note that for any uniformly bounded function |b_n(x)| on R_+, the corresponding uniform estimate holds with an O(β_n^3) error term; the existence of the O(β_n^3) term is due to d_n = O(1/β_n) by Condition (2) and E[X^3] < ∞ by assumption.
Following the proof of (4.34), for each n, we have the corresponding bound, and we can check the limit. We can apply a proof similar to that of Lemma 4.2 to Equation (4.54). Substituting b by β_n b, ψ(E)x by β_n ψ(E)x, q_n(x) by β_n q_n(x), and A by A_n, the remaining steps carry over. By the results of (4.60), Equation (4.59) can be rewritten so that the higher-order term in Equation (4.61) can be dropped, and we obtain (4.62). By the Dominated Convergence Theorem, and applying Taylor's expansion (4.60) again in the first equality, we obtain (4.63) and (4.64). By (4.63) and (4.64), we can evaluate lim_{n→∞} (Ã_n − A_n)/β_n. Therefore, we can apply the results of (4.65) and (4.66) to Equation (4.62) to get the identity (4.67) for all x ∈ R_+. Since X is not a constant random variable by our assumption, (4.67) holds only when c = ψ(E).
Proof of Theorem 3.6.
Proof. The proof follows from Lemma 4.3. By the condition (2) in Theorem 3.6, we have (4.68). We now check that all conditions in Lemma 4.3 are satisfied. (1) (Boundedness): the relevant quantity is uniformly bounded on R_+ by the condition (2) in Theorem 3.6. And from (4.68), for any ξ > 0, the first term is uniformly bounded on R_+, and the second term is uniformly bounded on R_+ by the condition (3) in Theorem 3.6.
(2) (Linear approximation): Following (4.68), we obtain the required linear approximation. Proof of Theorem 3.7. The proof follows from Lemma 4.2. By the condition (2) in Theorem 3.7, we can express log [P(Y_n ∈ I − β_n x | X_n = β_n x) / P(Z_n ∈ I)] via the large deviation principle; this is (4.69). To check that all conditions are satisfied: (1) (Boundedness): the relevant quantity is uniformly bounded on R_+ by the condition (1) in Theorem 3.7. And by (4.69), for any ξ > 0, the first term is uniformly bounded on R_+, and the second term is uniformly bounded on R_+ by the condition (3) in Theorem 3.7. Since 0 < −φ'(y*) < ∞, P_I satisfies the definition of the canonical probability distributions in (3.5).
5.1.
Gibbs measure on the phase space.
Since X + Y = Z and these correspond to X_n, Y_n, Z_n in Theorem 3.1, respectively, it suffices to show that all the conditions in Theorem 3.1 are satisfied for X, Y, and Z.
(3) Since X and Y are supported on R_+, there exists δ_2 > 0 such that the lower bound (3.13) holds. Therefore, all of the conditions hold with D = I in Theorem 3.1, and we can apply the theorem with a_n = ε_n^2, b_n = 0, together with Pinsker's inequality (2.17), to get the desired approximation. In statistical mechanics, the induced measure ν_1(du) on the phase space is often taken to be the Lebesgue measure du normalized by the total volume Λ of the phase space (here we assume it is finite). Therefore, for the random vector U, we have the density Â e^{−ψ(I) e_1(u)} with respect to du, (5.9) where Â = A/Λ is the corresponding normalization factor.
The assumption ν_1(du) = du/Λ for the phase space already implies that all microstates are equally probable when the system is unconstrained. It is a reasonable prior probability for U, justified by a symmetry of the physical system, when we do not have any previous information about it. For the random variable X (e.g., the energy), its density f_X(x) is referred to as the prior probability for X when the system is unconstrained. Based on the principle of equal a priori probabilities of microstates in the phase space, we can show that f_X(x) = γ(x)/Λ, where γ(x) is the Lebesgue measure of the surface of microstates with the energy fixed at x (i.e., e_1(U) = x). This can be verified by the computation in (5.10). Note that γ(x) is also known as the structure function of X. In Theorem 5.3, we also make the same assumption for Y: f_Y(y) = Γ(y)/Λ, where Γ(y) is the structure function of Y.
Therefore, the density of X can be written as Â e^{−ψ(I)x} γ(x) with respect to dx, (5.11) which can be interpreted as a uniform prior biased by an exponential weight e^{−ψ(I)x} when the system is conditioned on some extra information. Note that Equation (5.9) is known as the density of the Gibbs measure on the phase space and Equation (5.11) is known as the density of the Gibbs measure on the energy of the system [10].
In the work of A. Ya. Khinchin [15], conjugate distribution laws are assumed for all systems, that is, f_X(x) = e^{−αx} γ(x) / ∫ e^{−αs} γ(s) ds and f_Y(y) = e^{−αy} Γ(y) / ∫ e^{−αs} Γ(s) ds (5.12) for some constant α. These priors are more general than the uniform prior and have some nice properties; e.g., for a proper α, the factor may guarantee integrability of e^{−αs} γ(s) when γ(s) itself is not integrable. However, we can show that the choice of the e^{−αx} term has no influence on our results. By Equation (5.15), we can identify 1/ψ(I) as the temperature defined in statistical mechanics [12].
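As a schematic illustration of (5.11) (the polynomial form of the structure function below is assumed purely for illustration): if the structure function grows polynomially, γ(x) = c x^{ν−1} for some ν > 0, then the Gibbs density on the energy is
\[ \hat{A}\,\gamma(x)\,e^{-\psi(I)x} \;=\; \frac{\psi(I)^{\nu}}{\Gamma(\nu)}\,x^{\nu-1}e^{-\psi(I)x}, \]
where Γ(ν) here denotes the Gamma function (not the structure function of Y); this is a Gamma distribution with shape ν and rate ψ(I), whose mean energy is ν/ψ(I) = ν T with T := 1/ψ(I). The mean energy is therefore proportional to the temperature identified via (5.15), in the spirit of equipartition.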
5.2.
Integer-valued random variables and conditional Poisson distributions. In the following Theorem 5.4, we show the limiting behavior of a sequence of conditional probabilities for a nonnegative integer-valued random variable K, conditioned on K + L̃_n, where L̃_n is a sequence of sums of i.i.d. random variables ξ_i. This sequence of conditional probabilities has the same limiting behavior as the unconditional probability P(K = k) weighted by an exponential factor. The most important result of this theorem is that the parameter of this exponential factor is determined by a normal distribution rather than by the distribution of the ξ_i. By this result, we provide a very simple formula, with an explicit approximation error, for an otherwise intractable calculation of the conditional probability of an integer-valued random variable. In Example 5.5 we give an approximation formula for the conditional probability of a Poisson random variable conditioned on the sum of that Poisson random variable and another Poisson random variable. Theorem 5.4 concerns every fixed finite interval I = [−h, −h + δ], h, δ ∈ R_+, −h + δ ≤ 0, with 2δ/σ^2 < ψ(I); its conclusion is the approximation (5.16). Proof. Let K_n := K/√n, L_n := (L̃_n − nµ)/√n and H_n := (H̃_n − nµ)/√n. We have K_n + L_n = H_n. By the Central Limit Theorem, L_n converges in distribution to Y. Furthermore, since (ξ_i − µ) has finite second and third moments, the Berry-Esseen Theorem 2.9 gives a uniform O(1/√n) bound. Since E[K_n] → 0, K_n converges to 0 in probability. By Slutsky's Theorem 2.10, H_n converges to Y in distribution. By Corollary 2.11, we can also get (5.19), in which we use the fact that Y ∼ N(0, σ^2) and that P(−h ≤ Y ≤ −h + δ) is bounded below. Moreover, since P_K(k) ≤ 1, the term O(1/√n) in (5.19) is independent of k. Let Ỹ_n ∼ N(nµ, nσ^2) and Z̃_n := K + Ỹ_n. Then we have K_n + Y_n = Z_n, where Y_n := (Ỹ_n − nµ)/√n and Z_n := (Z̃_n − nµ)/√n. (5.20) Note that Y_n = Y ∼ N(0, σ^2) and Z_n converges in distribution to Y. Similarly to (5.19), we obtain (5.21). Applying the triangle inequality to (5.19) and (5.21) yields (5.22). It then suffices to show that all the conditions in Theorem 3.1 are satisfied for K_n, Y_n, Z_n, so that we can apply Theorem 3.4.
First, we can check that E[K_n^2] = a_n with a_n = o(1). Second, by a change of variables, we can define the set S in terms of the values of K. Below we follow every step in Theorem 3.1 with slight modifications: (1) For all y ∈ R, Y_n = Y ∼ N(0, σ^2); by the formula for the density of the normal distribution, we can check that the quantities in (5.26) exist and are uniformly bounded. For (5.27), we modify the boundedness argument slightly, and the details of the proof are provided in Appendix 6.3. Therefore, (3.10) holds with a slight modification.
Since K_n and Y_n are independent, we have b_n = 0, and therefore (3.12) holds. By a change of variable, we then obtain (5.30). Applying the triangle inequality to (5.22) and (5.30), we obtain (5.16) in the theorem.
Finally, we apply Theorem 5.4 to a concrete example. Proof of Example 5.5. By the properties of Poisson random variables, we can decompose L̃_n as L̃_n = ∑_{i=1}^{n} ξ_i, where {ξ_i, 1 ≤ i ≤ n} are independent Poisson random variables with mean µ and variance µ. We can check that all the conditions of Theorem 5.4 are satisfied; hence Theorem 5.4 can be applied.
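As an informal sanity check on Example 5.5 (the parameters λ_K, µ and the exact-sum conditioning below are chosen for illustration and are not part of the formal statement): let K ∼ Poisson(λ_K) and L̃_n ∼ Poisson(nµ) be independent. Conditioned on the exact value K + L̃_n = m, the law of K is Binomial(m, p) with p = λ_K/(λ_K + nµ), by the standard thinning property of Poisson variables. Taking m ≈ nµ − h√n, i.e., the sum lying on the Gaussian-fluctuation scale below its mean as in I = [−h, −h + δ], the conditional mean is
\[ m\,p \;=\; \frac{\lambda_K\,(n\mu - h\sqrt{n})}{\lambda_K + n\mu} \;\approx\; \lambda_K\Bigl(1 - \frac{h}{\mu\sqrt{n}}\Bigr) \;\approx\; \lambda_K\,e^{-h/(\sigma^2\sqrt{n})}, \qquad \sigma^2 = \mu. \]
On the other hand, tilting the Poisson(λ_K) prior by the exponential weight e^{−ψ(I)k/√n}, with ψ(I) ≈ h/σ² (the value dictated by the N(0, σ²) density over a short interval near −h), and normalizing yields a Poisson(λ_K e^{−ψ(I)/√n}) law, whose mean agrees with the binomial answer to leading order. This is consistent with the claim that the exponential parameter is governed by the normal distribution rather than by the detailed law of the ξ_i.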
5.3.
Emergence of temperature (conditioned on the scale of large deviations). In this section, we define the parameter 1/ϕ(I), where ϕ(I) appears in the exponential function e^{−ϕ(I)x}, as the temperature of the canonical distribution. Consider a sequence of conditional probabilities for a function of a subsystem, represented by X, in contact with its heat bath, represented by Ỹ_n = ∑_{i=1}^{n−1} X_i, where the X_i are i.i.d., each X_i has the same distribution as X, and the X_i and X are independent. Suppose that the total energy Z̃_n = X + Ỹ_n is conditioned on the scale of large deviations from its mean; we will show that the temperature 1/ϕ(I) is an emergent parameter uniquely determined by the rate function of Ỹ_n/n. Definition 5.6. Let X be a nonnegative, nonconstant continuous random variable with E[X^4] < ∞, and let Ỹ_n := ∑_{i=1}^{n−1} X_i. Given a finite interval I with µ ∉ I and a function ϕ : I → R such that 0 < ϕ(I) < ∞, let P_I be a probability measure with density function A f_X(x) e^{−ϕ(I)x}, where A is the normalization constant, and let Q^{(n)}_I be a sequence of probability measures with density functions f_{X|Z̃_n}(x; nI).
Theorem 5.7. Denote Y_n := Ỹ_n/n, X_n := X/n, and Z_n := X_n + Y_n, and let I − x/n := {y − x/n : y ∈ I}. Assume the following conditions hold: (1) (a uniform boundedness condition on R_+);
(2) |log P_{Y_n}(I) − log P_{Z_n}(I)| converges to a finite constant as n → ∞. Since y* and y* − x/n lie in D and φ ∈ C^2(D), Taylor's expansion gives (5.34) for all x ∈ [0, nd]. By Conditions (2) and (3), there exists a sequence ε_n → 0 and a constant k such that log P_{Z_n}(I) = log P_{Y_n}(I) + k + ε_n = −nφ(y*) + s_n(I) + k + ε_n. The following is our discussion of the connection between Theorem 5.7 and Van Campenhout and Cover's Theorem 2.2. In Theorem 5.7, if the condition is on the scale of large deviations, then the conditional density f_{X|Z̃_n}(x; nI), with nµ ∉ nI, can be approximated by the (normalized) product of its unconditional density f_X(x) and an exponential function e^{−λx}. This parameter λ = φ'(y*) is unique and is determined by the first derivative of the rate function evaluated at the minimizer y* of φ over I. It implies that we are able to find λ directly from the rate function, without using the maximum entropy principle. Furthermore, by the pair of reciprocal equations (2.12): φ'(y*) = λ if and only if A'(λ) = y*, (5.42) which means that the parameter λ we find from the derivative of the rate function (left side of (5.42)) is also the solution obtained from the derivative of the free energy function A under the constraint A'(λ) = y* (right side of (5.42)).
Therefore, using the maximum entropy principle under the first-moment constraint to find good approximations of the conditional density (Van Campenhout and Cover's approach) is a natural consequence of the emergent behavior of the heat bath Ỹ_n/n, and this emergent behavior gives rise to a large deviation rate function that uniquely determines the parameter of the exponential weight. As we discussed in Section 2, we apply the large deviation principle directly to the distribution of the heat bath Ỹ_n/n. On the other hand, the Gibbs conditioning principle uses the large deviation principle for empirical measures (1/n) ∑_i δ_{X_i}. Then the limit problem for the sequence of probability measures Q^{(n)}_I with density functions f_{X|Z̃_n}(x; nI) and the limit problem for the sequence of empirical measures conditioned on the set Γ = {γ : ∫ x γ(dx) ∈ I} are just two sides of the same coin. Eventually, they both give rise to a limit given by a canonical distribution with density f_X(x) e^{−λx}.
In conclusion, our approach obtains λ from the large deviation rate function of the heat bath Y_n, while the Gibbs conditioning principle solves for λ by minimizing the relative entropy, which is the large deviation rate function of the empirical measure. These two approaches are connected by the reciprocal equations (5.42) through the Legendre transform.
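A worked instance of this picture (the exponential law and the interval below are chosen purely for illustration): let X and the X_i be i.i.d. Exp(1), so µ = 1 and the rate function of Ỹ_n/n is φ(y) = y − 1 − log y. Condition the total energy on Z̃_n ∈ nI with I = [h, h + δ] and h + δ < 1. Since φ is decreasing on (0, 1), the minimizer over I is y* = h + δ, and
\[ \phi'(y^*) = 1 - \frac{1}{y^*} < 0, \qquad \varphi(I) = -\phi'(y^*) = \frac{1}{y^*} - 1 > 0, \]
so the limiting canonical density is proportional to e^{-x} e^{-\varphi(I)x} = e^{-x/y^*}, i.e., an exponential law with mean y*: the subsystem's energy distribution relaxes to an exponential whose mean matches the conditioned energy per particle, and the associated temperature parameter is 1/\varphi(I) = y^*/(1 - y^*).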
5.4.
Emergence of temperature (conditioned on the scale of Gaussian fluctuations). Similarly to Section 5.3, in this section we define the parameter 1/(β_n ψ(I)), where β_n ψ(I) appears in the exponential function e^{−β_n ψ(I)x}, as the temperature of the canonical distribution, and we consider a sequence of conditional probabilities for a function of a subsystem, represented by X, in contact with its heat bath, represented by Ỹ_n = ∑_{i=1}^{n−1} X_i, where the X_i are i.i.d., each with the same distribution as X, and X and the X_i are independent. In comparison with Section 5.3, here we suppose that the total energy Z̃_n := X + Ỹ_n is conditioned on the scale of Gaussian fluctuations. We will show that the temperature 1/(β_n ψ(I)) is an emergent parameter uniquely determined by a normal distribution N(0, σ^2), where σ^2 is the variance of X. Theorem 5.9. Denote Y_n = (Ỹ_n − (n−1)µ)/√n, X_n = X/√n, Z_n = X_n + Y_n, and let I − x/√n = {y − x/√n : y ∈ I}. Assume the following conditions hold: (1) (a uniform boundedness condition on R_+);
(3) There exists a sequence of functions g_n : R → R with g_n(x) e^{−ξ x/√n} uniformly bounded on R_+ for any ξ > 0, together with a moment bound on E[g_n(X)^2]. The proof is just an application of Theorem 3.6; we can check that all of the conditions in Theorem 3.6 are satisfied. Here we want to discuss the condition (5.44) further: as in the proof of Theorem 5.4, the required control follows from Corollary 2.11 of the Berry-Esseen theorem and Slutsky's theorem. We now discuss the connection between Theorem 5.9 and Zabell's Theorem 2.1. If the condition is on the scale of Gaussian fluctuations, Theorem 2.1 only tells us that the sequence of conditional distributions F_{X|Z̃_n}(x; nµ + √n I) should converge to its unconditional distribution F_X(x). By our Theorem 5.9, we have an explicit formula for the canonical distribution that approximates the conditional distribution well for a sufficiently large n, and it converges to F_X(x) as n → ∞, which is consistent with Zabell's Theorem 2.1. In addition, the parameter ψ(I)/√n of the canonical distribution is uniquely determined if we require the approximation to be "good" enough, i.e., that the KL-divergence of the conditional distribution from the canonical distribution converges to zero at the rate o(1/n). 5.5. Mathematical definitions of the heat bath. In Section 3, we provided two limit theorems for a sequence of conditional probabilities that derive a unique canonical distribution as an emergent phenomenon. In Theorem 3.6, the emergent parameter in the exponential weight is uniquely determined by the limiting distribution of the heat bath Y_n → Y (note that in Theorem 3.6, Y_n is obtained from the original heat bath Ỹ_n by the appropriate shifting and scaling) evaluated on the interval I = [h, h + δ]. Similarly, in Theorem 3.7, the emergent parameter in the exponential weight is uniquely determined by the large-deviation rate function of the heat bath Y_n → µ (note that in Theorem 3.7, Y_n is obtained from the original heat bath Ỹ_n by the appropriate shifting and scaling) evaluated on the interval I = [h, h + δ], where φ is the rate function of Y_n and y* is the minimizer of φ over I.
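To make the first of these determinations concrete (an illustrative computation of ours, using a short-interval approximation): suppose the rescaled heat bath satisfies Y_n → Y ∼ N(0, σ²) and the conditioning interval is I = [h, h + δ] with h < 0 and δ small. Differentiating the log-probability of the interval with respect to its left endpoint gives
\[ \psi(I) \;\approx\; \frac{d}{dy}\log f_Y(y)\Big|_{y=h} \;=\; -\frac{h}{\sigma^2} \;=\; \frac{|h|}{\sigma^2} \;>\; 0, \]
so the exponential weight on the Gaussian-fluctuation scale is e^{−β_n |h| x/σ²} and the associated temperature 1/(β_n ψ(I)) = σ²/(β_n |h|) diverges as n → ∞. The tilt therefore vanishes in the limit, which is exactly why the canonical approximation of Theorem 5.9 is compatible with Zabell-type convergence to the unconditional distribution F_X.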
Remark 5.13. The formula (5.51) for the third property (it is called the heat-bath property) in Theorem 5.11 provides the precise formulation of what a heat bath is in probabilistic terms when the heat bath Y n converges to Y on the scale corresponding to Theorem 3.6; Similarly, the formula (5.62) for the third property in Theorem 5.12 provides the precise formulation of what a heat bath is in probabilistic terms when the heat bath Y n converges to a constant µ on the scale corresponding to Theorem 3.7. Through these formulations and the equivalence of the three properties: (1) the subinterval invariant property (2) the invariant temperature property (3) the heat-bath property, we really define an invariant temperature bath mathematically.
Proof of Corollary 2.11.
Proof. (2.20) follows from Theorem 2.10, since Z_n → G in distribution and W_n → 0 in probability. (2.21) basically follows from the proof of the Berry-Esseen theorem (see, for example, Theorem 2.2.8 in [25]). We include a sketch of the proof here. Let φ_Y denote the characteristic function of a random variable Y and let ε = E|X|^3/√n. To prove (2.21), following every step of the proof given in [25], it suffices to show that ∫_{|t| < c/ε} |φ_{Z̃_n}(t) − φ_G(t)| / (1 + |t|) dt = O(ε). We can recognize that A(h − α̃_n x) = 2 q_n(x), in which the function q_n(x) is defined in Equation (4.20) in the proof of Theorem 3.1.
The second term on the right side of (6.4) can be written as in (6.5). When y + ŷ ∈ [0, 2h + δ], the right-hand side above is uniformly bounded. When y + ŷ < 0, from (6.8) we obtain (6.9). By plugging y = h − α̃_n x, α̃_n ∈ (0, 1), into (6.9), and since 2δ/σ^2 < ψ(I), we can check that the terms on the right-hand side are uniformly bounded for x ∈ R_+. Therefore, combining the estimates in the two parts, (6.5) is uniformly bounded for all x ∈ R_+.
"year": 2021,
"sha1": "8712963767469592cb8e235b961cc1a7b6fefddf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1912.11137",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f460fb58ec7239756fd7b3229462e353b303c00b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
An Epidemiological Study of Child Marriages in a Rural Community of Gujarat
Context: India has the largest number of child marriages (CMs; marriage below 18 years) in the world because of the size of its population, and in 47% of all marriages the bride is a child. Children who are married at a young age are exposed to multiple risks to their physical, mental, and social health. Aims: (i) To estimate the prevalence of CM in a rural population. (ii) To study the determinants and health effects of CM. (iii) To assess the awareness among married women regarding the health implications of CM. Settings and Design: Community-based cross-sectional study conducted in Ardi village of Anand district. Materials and Methods: All the married women of the village were surveyed to determine the prevalence of CM. For collection of other relevant information, only those women with a married life of less than 10 years were interviewed, using a semicoded and pretested questionnaire. The data collected were analyzed using Statistical Package for Social Sciences (SPSS) 17.0 software. Statistical Analysis Used: Proportions, ratios, χ2 test, and Fisher's exact test. Results: The prevalence of CM was found to be 71.5%. Caste and spouse's education emerged as important determinants of CM. CM was found to be significantly associated with mother's age at birth of first child, delayed antenatal care (ANC), spontaneous abortion, preterm delivery, low birth weight (LBW), health problems in the newborn baby, faulty feeding practices, and lack of knowledge regarding family welfare methods and the health implications of CM. Conclusion: The exceptionally high prevalence of CM in this rural community and its serious health consequences warrant stricter enforcement of legislation, better educational opportunities for girls, and easy access to quality health services.
Introduction
for younger girls. (5) Most importantly, early marriage deprives young girls of their childhood by overburdening them with domestic responsibilities, motherhood, and sexual relations rather than allowing them to play with friends, go to school, and dream about a career.
Considering the seriousness of the overarching impact of CM on the health status of young women and their children, and the lack of data regarding the prevalence of CM in Gujarat, this study was conducted with the following objectives: 1. To estimate the prevalence of CM in Ardi village. 2. To study the determinants and health effects of CM. 3. To assess the awareness among married women regarding the health implications of CM.
Materials and Methods
This is a community-based cross-sectional study conducted in Ardi village from 10th August 2012 to 20th January 2013. The village has a total population of about 3,400. All the married women of the village were surveyed to determine the prevalence of CM. For collection of other relevant information, only those women with a married life of less than 10 years were included in the study, to avoid the possibility of recall bias. The age at marriage was confirmed either orally or from the marriage certificate, if available.
Events related to pregnancy and childbirth were confirmed from the available records, such as the Mamta card. When these records were not available, the information was sought from the respondent.
A house-to-house survey was conducted by the author herself. A semicoded and pretested questionnaire was used to collect the information. It consisted of three parts. The first part was intended to collect sociodemographic information about the respondents. The second part contained questions related to pregnancy, childbirth, and child feeding practices, and was administered to women who had given birth to at least one child. The third part assessed the respondents' knowledge about the legal age of marriage, family welfare methods, and the health consequences of early marriage.
Results
A total of 755 couples were surveyed. Of these, CM was found in 540 couples, giving a prevalence of 71.5% in Ardi village. Detailed information was collected for the 158 couples who had a married life of less than 10 years. The prevalence of CM in this group was 60.1%.
The mean age at marriage of the female spouses was 18.2 ± 2.4 years, while that of the male spouses was 20.8 ± 3.2 years. All couples except one were Hindu. The majority of the couples belonged to the scheduled castes (SC)/scheduled tribes (ST)/other backward classes (OBC) and lived in joint families [Table 1]. Only 8.2% of couples were in class I and 1.3% in class V of socioeconomic status according to the modified Prasad classification. Table 2 depicts the educational status of the spouses and their parents. The difference in the education level of spouses with and without CM was found to be statistically significant (P < 0.05).
In our study, 98 women had at least one child. About half of them gave birth to their first child between the ages of 16 and 20 years [Table 3]. The age at birth of the first child was significantly different between mothers with and without CM (P < 0.001). All 98 women except one received antenatal care (ANC). About 80% of these women received their first ANC in the first trimester. The proportion of women without CM receiving their first ANC in the first trimester was significantly higher than that of women with CM (P < 0.05). As shown in Table 3, CM was found to be significantly associated with spontaneous abortion, preterm delivery (P < 0.05), delivery of low birth weight (LBW) babies (P < 0.05), and health problems in the newborn at birth. As far as feeding is concerned, infants born to women with CM were less likely to be exclusively breastfed and more likely to be fed poor-quality complementary food. We also assessed the knowledge of the women about the legal age of marriage, family welfare methods, and the health consequences of early marriage [Table 4]. Only 29% of respondents knew about at least one of the four family welfare methods, viz. condom, oral pills, Copper-T, and tubal ligation. Additionally, only about 14-18% of the respondents had knowledge of the health effects of early marriage such as preterm delivery, surgical delivery, abortion, LBW, and higher risk of illness in both mother and child. Women with CM were found to be less knowledgeable than those without CM in all three of these aspects (P < 0.05).
Discussion
The present study detected an alarmingly high prevalence of CM (71.5%) in Ardi village. It is considerably higher than the state average of 35.4% (6) and the national average of 47%. (3) A study by Raj et al., based on National Family Health Survey (NFHS)-3 data, found the prevalence of CM to be 67.2% in rural areas of India. (7) In contrast, its prevalence was quite low in urban India, ranging from 2.2% in large towns to 10.2% in small towns. (7) To avoid any recall bias, we studied in detail only those couples who had a married life of less than 10 years. The prevalence of CM in this group, comprising 158 couples, was 60.1%.
In our study, a religion-based comparison could not be made, as all couples except one were Hindu. However, the caste of the respondents was found to be a significant predictor of CM (P < 0.001). A UNICEF report suggests that girls from scheduled castes and tribes (SCs and STs) marry at a younger age than girls of other castes. (6) Previous studies conducted in different states of India reported similar results, with a prevalence of CM in SC/ST/other backward classes (OBC) ranging from 68.7% to 81% in these states. (8,9) To minimize CM in these groups, strong support and commitment from all stakeholders, particularly local leaders, are required so as to promote good practices that help establish a higher age at marriage at the community level.
Education has been delineated as the single most important protective factor against CM by many researchers. No or little education has been consistently related to a higher prevalence of CM in different regions of India as well as in other countries. (7,8,10) In an analysis of 42 countries by UNICEF, women in the age group of 20-24 years who had attended primary school were found to be less likely to be married by age 18 than those who had not. (11) We found that both the husband's (P < 0.01) and the wife's (P < 0.05) educational status were significantly associated with the occurrence of CM. However, our study did not find such an association of CM with the education of the husband's or wife's parents. School teachers can play a key role in preventing CM, not only by educating children about their rights and concerns related to CM, but also by providing assistance to the Child Marriage Prohibition Officer, as mentioned in section 16 of the Prohibition of Child Marriage Act, 2006. (2) When a girl is married at a very young age, she is more likely to become pregnant and deliver a baby at an age when she is physiologically and psychologically unprepared for childbirth. Maternal morbidity and mortality are noted to be very high in such young mothers. (12) In India, 26% of women gave birth by age 18, higher than in other Asian countries such as the Philippines (10%) and Thailand (8%). (12) In developed countries like Germany, France, and the USA, this percentage is as low as 1-10%. (12) Our study also detected a strong association between CM and the mother's age at the birth of her first child (P < 0.001). Kamal's study in Bangladesh also found that CM is significantly associated with a lower age at first birth. (13) For young married girls, reproductive and child health (RCH) services are either not accessible or not utilized because of a lack of information, knowledge, or decision-making power. (12) We studied the healthcare-seeking pattern of the respondents during their first pregnancy and childbirth. In our study, all the women except one received ANC at either a governmental or a private healthcare facility. The majority (57%) of the respondents were registered for ANC in the first trimester. However, a significantly higher proportion of pregnant women with CM than without CM were registered late, that is, in the second trimester (P < 0.001). This implies that awareness regarding the importance of early registration for ANC is lower among women with CM.
Various studies have shown that rates of spontaneous pregnancy termination are higher in young mothers who are married at an early age. (7,14) In our study, we found that women with CM were more likely to experience episodes of spontaneous abortion than women without CM (P < 0.05). Moreover, the risks of neonatal conditions such as preterm birth, LBW, and asphyxia are higher among babies born to adolescent mothers, resulting in higher rates of stillbirth and neonatal mortality. (12,15) In our study, women with CM were found to deliver preterm babies and LBW babies in a significantly higher proportion than women without CM (P < 0.05). Nour's report states that the risk of delivering an LBW baby is 35-55% higher for mothers under the age of 18 than for mothers older than 19 years. (16) A study by Raj et al. also found that mothers with CM are more likely to give birth to LBW infants than mothers married as adults. (5) In addition, 11 mothers reported some form of health problem in their newborn babies, such as convulsions, cyanosis, jaundice, difficulty in feeding, and stillbirth. All of these babies except one were born to mothers with CM (P < 0.05). This finding suggests that ending the practice of CM could prevent a considerable proportion of newborn morbidity.
Child rearing and feeding practices play a vital role in the growth and development of a child. The immaturity and lack of education of a young mother undermine her capacity to understand and realize the importance of such practices for the nurturing and upbringing of her child. In our study, a significantly lower proportion of mothers with CM practiced exclusive breastfeeding (P < 0.05). Similarly, complementary feeding was not started in a timely manner for most of the children of respondents with CM (P < 0.05). We assessed the appropriateness of the quantity and quality of complementary food given to the children as per Integrated Management of Neonatal and Childhood Illnesses (IMNCI) guidelines. (17) Though the quantitative adequacy of complementary food was similar in both groups, our study did find a significant difference between the two groups in the quality of complementary food given to the babies (P < 0.01).
It is anticipated that a young girl below 18 years of age may not have enough information or knowledge about currently available family welfare methods. Several studies have revealed that knowledge about family welfare methods is comparatively lower in young women married at an early age than in those married as adults. (7,10) A study conducted in Nepal found higher unmet needs for family planning in women with CM. (18) In our study, a lower proportion of women with CM had knowledge regarding any of the four major family welfare methods, compared to those without CM. This difference was statistically significant (P < 0.01).
Regarding knowledge about the legal age of marriage, both at the time of interview and at the time of marriage, a significantly higher proportion of women who had CM did not know the legal age of marriage compared to those without CM (P < 0.01). A study done among adolescents in Bangladesh revealed similar results, where the majority did not have correct knowledge about the legal age of marriage. (19) In our study, we also tried to assess the respondents' level of knowledge about the health consequences of pregnancy at a young age that may be seen in both the mother and the child. Women married at an appropriate age were found to be more aware of these aspects than those who were married early (P < 0.05). Prevention of CM is thus crucial for raising awareness about the health impact of early pregnancy.
Conclusion
CM, which has existed for centuries, is a complex issue, rooted deeply in gender inequality, tradition, and poverty. Despite national law, the practice is widely prevalent in rural areas, where prospects for girls can be limited. The exceptionally high prevalence of CM detected in our study clearly indicates that previous policies have been inadequate to curb the social evil of CM. The outcome of the study calls for stricter enforcement of legislation, better educational opportunities for girls, particularly after marriage, and easy access to quality health services. | 2018-04-03T04:04:31.549Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "fe194f924c2a7f6d11707c48f3b9a4ac306771e9",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0970-0218.164392",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "af0d4e8d26052d00b5885ab391b96c5e05191762",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
104309077 | pes2o/s2orc | v3-fos-license | The usage of carbon dioxide gas on pyrolysis process and its effect on biomass catalytic conversion
The use of CO2 gas in pyrolysis theoretically offers several advantages: improved thermal efficiency, elimination of oxygenated groups, and deeper decomposition of the biomass. The aim of this study was to use carbon dioxide as the carrier gas and to assess its effect on the catalytic product distribution from rice-husk biomass. The nickel/alumina catalyst was prepared by impregnation and calcination of a Ni(NO3)2·6H2O and Al2O3 mixture. Characterization was performed by XRD analysis and the BET adsorption method. The XRD analysis confirms the high crystallinity of the NiO and NiAl2O4 phases, the latter considered the result of interaction between NiO particles and the surface of γ-Al2O3. The BET results, however, suggest blockage of the pore mouths of the γ-Al2O3 catalyst support, most likely due to an excessive amount of NiO particles. A comparison of the pyrolysis carrier gases CO2 and N2 and a variation of the operating temperature were performed to determine differences in product distribution. The catalytic conversion was carried out at temperatures of 450, 475, 500, and 525°C. Experimental results show that varying the carrier gas of the pyrolysis process and the operating temperature resulted in different product distributions. The dominant compounds formed as the temperature increased were ketone and benzene compounds.
Introduction
Most agricultural commodities produce nearly 80% of their waste in the form of biomass that can be used as an alternative energy source. One biomass in abundant supply in Indonesia is rice husk, which has a high lignocellulose content. Given the high potential of this natural resource, transformation routes toward useful hydrocarbon fuels or chemicals should be explored. Commonly, this transformation of biomass is achieved through pyrolysis with nitrogen as the carrier gas, and carbon dioxide is still rarely used. Theoretically, the use of this gas offers many advantages: good thermal efficiency of biomass pyrolysis, elimination of oxygenated groups, deep decomposition of the biomass, and the Boudouard reaction for bio-char gasification and activation. The aim of this study is therefore to use carbon dioxide as the pyrolysis carrier gas and to observe the resulting product distribution. Biomass, as a renewable source, will undoubtedly play an important role in the future.
Various processes may be employed to convert biomass into useful fuels and chemicals. Among these, thermochemical conversion of biomass appears to be a promising alternative for many energy applications [1]. Pyrolysis is one of the most promising thermochemical processes for converting biomass to meet the need for biofuels and chemicals [2]. The biofuels can then be upgraded using a catalyst to refine the bio-oils into hydrocarbons or other intermediates [3]. Catalytic conversion is one of the proven processes that can be used to upgrade and improve the quality of the bio-oil. In this work, Ni/Al2O3 (nickel-alumina) is used as the catalyst for the pyrolysis process to convert the biomass into various compounds. The type and composition of the catalyst and the operating temperature can affect the conversion and the selectivity of the end products, making it important to know the right combination in order to produce useful hydrocarbons such as aromatics, paraffins (linear or branched alkanes), olefins, and cyclo-paraffins with maximum selectivity. The carrier gas was also varied from nitrogen to carbon dioxide in the hope of obtaining a variation in the products formed by the pyrolysis of the rice husks. This was motivated by the improved thermal efficiency of the pyrolysis process that carbon dioxide provides, as well as the deeper decomposition of the complex compounds of the rice husks. In addition to the varied carrier gas, the catalytic conversion was carried out at four different operating temperatures: 450, 475, 500, and 525°C. These variations were applied in order to analyse the final product distribution for each carrier gas and working temperature.
Experimental method
Material
The type of biomass used as the feed for this experiment was rice husks. At the beginning of the experiment, a grinding machine was used to reduce the size of the rice husks to 1-3 mm in length. To bring the water content below 10% of the total mass, the rice husks were dried in an electric heater for 5-6 h at a temperature of 60°C. For each experimental run, the fixed bed was composed of 1 g of treated rice husks, 1 g of catalyst, and 1 g of quartz sand.
Various loading compositions of NiO/γ-Al2O3 (nickel-alumina) catalysts were used for this catalytic pyrolysis experiment. The amount of catalyst used was 1 g for all samples, regardless of carrier gas and temperature. The bed arrangement, composed of a rice husk bed and a catalyst bed, was adjusted to an amount that allowed the experiment to operate under suitable conditions, with a low pressure drop through the bed, while still producing a detectable amount of product from the pyrolysis runs. The nickel-alumina catalyst was prepared by dissolving the crystalline precursor Ni(NO3)2·6H2O in 30 ml of deionized water in a beaker. The mixture was placed on an electric heater at 70-80°C equipped with a magnetic stirrer and stirred until the Ni(NO3)2 solution was fully prepared. This solution was then used to impregnate γ-Al2O3 powder by pouring the powder into the solution and mixing completely. The mixture was then kept drying at this heating temperature until the water solvent evaporated, leaving a solid. This solid was subsequently calcined in air at atmospheric pressure at temperatures of 300 and 600°C for 2 h, respectively.
Methods
The catalytic pyrolysis was performed at atmospheric pressure with an N2 or CO2 gas flow rate of around 40 ml/minute and operating temperatures of 450, 475, 500, and 525°C. The reactor tube was 300 mm in length and 12 mm in diameter, and was fabricated from materials that remain inert at the high temperatures of the experimental runs. The bed inside the reactor was arranged carefully. Quartz wool was used at the lower part of the reactor tube to hold the catalyst, preventing the bed from slipping down the tube. The amounts of rice husks, catalyst, and quartz sand were 1 gram each. The amount of bed material was adjusted so that the rice husk, catalyst, and quartz sand beds inside the reactor tube did not clog it. After the reactor tube had been completely filled with the catalyst and rice husk beds, the reactor was placed into a cylindrical electric furnace, taking care to position the rice husks and catalyst where the heating is strongest.
As shown in figure 1, the bed was arranged so that pyrolysis of the rice husks took place first, by placing the bed of rice husks above the catalyst bed, which was mixed with quartz sand to reduce the pressure drop through the bed and to homogenize the temperature. In this way, the pyrolytic vapour resulting from the thermal decomposition of the biomass bed came into direct contact with the catalyst positioned below, so that the vapour from the rice husks was upgraded as it flowed down through the catalyst layer. The carrier gas rate was 40 ml/minute of N2 or CO2; besides carrying the vapour products from the rice husk decomposition to the catalyst bed, the carrier gas also purged any air present inside the reactor tube. The product of the catalytic conversion of the rice husks over the catalyst was visually indicated by gas formation at the bottom of the reactor. The gas produced from the catalytic conversion was then dissolved by cold acetone absorption in a glass bottle submerged in an ice-water bath. The bath temperature was kept near 0°C by holding it inside a Dewar flask. The gas therefore underwent complete condensation and dissolution, and the resulting product mixture served as the liquid sample for chemical analysis by GC-MS. The GC-MS analysis was performed on a high-performance instrument, courtesy of the Polri Forensic Laboratory, using an Agilent 19091S-433 HP-5MS capillary column.
Product distribution
The carrier gas used for the pyrolysis process was either nitrogen or carbon dioxide. Besides the carrier gas variation, the catalytic conversion was conducted at different temperatures: 450°C, 475°C, 500°C, and 525°C. Figures 2 and 3 show the product distributions resulting from the catalytic conversion of rice husks using the different carrier gases and operating temperatures. For the catalytic conversion at 450°C, both the nitrogen sample and the CO2 sample yield large amounts of ketone and benzene compounds, with ketone compounds dominating the analysis results at more than 50% in both samples.
When the temperature was raised to 475°C, the trend observed was a decrease in the most abundant compounds, ketones and benzenes, and an increase in more complex compounds as well as other compounds such as furan and aldehyde compounds. Ketone production dropped significantly in both the nitrogen and carbon dioxide samples, while benzene increased in the nitrogen sample but decreased in the carbon dioxide sample. This is attributed to carbon dioxide preventing the formation of VOCs, thus suppressing benzene production.
The trend of increasingly diverse compound production continues as the temperature rises to 500 and 525°C. The trend of decreasing ketone production in both the nitrogen and carbon dioxide samples also continues, with ketones falling to as little as 15% in the 525°C nitrogen sample. The decreasing benzene formation trend in the carbon dioxide samples likewise continues as the temperature increases, falling incrementally from 25% in the 450°C sample to around 15% in the 525°C sample. Meanwhile, the production of other compounds, particularly acetic acid and other complex compounds, increases incrementally with temperature.
Rice husk conversion
This section presents the results of the pyrolysis of rice husks with the NiO/γ-Al2O3 catalyst using either N2 (nitrogen) or CO2 (carbon dioxide) as the carrier gas. CO2 was used as an experimental carrier gas because, in the case of integrated valorization of biomass by pyrolysis or gasification, CO2 can play a vital role at each stage, mainly including biomass pyrolysis, biomass/biochar gasification, biochar activation, and tar cracking/reforming. CO2 as a reaction medium can significantly improve the thermal efficiency of biomass pyrolysis. Pyrolysis in CO2 results in deeper decomposition of biomass compared to pyrolysis in N2. Also, CO2 has an affinity to react with hydrogenated and oxygenated groups, leading to biochar with a higher specific surface area (SBET). Thus, exploiting CO2 as a reaction medium in biomass pyrolysis provides an attractive option for enhanced generation of syngas and tuned adsorption capability of the bio-char. In addition, CO2 pyrolysis of biomass can enhance the thermal cracking of harmful organic compounds, thus suppressing the formation of benzene derivatives (e.g., volatile organic compounds) and polycyclic aromatic hydrocarbons. In general, CO2 gasification of biomass serves the dual purpose of reducing pollution and generating syngas. In addition, introducing CO2 with steam as a gasifying agent can enhance CO production.
Besides the varied carrier gas, the catalytic conversion was also performed at varied temperatures (450, 475, 500, and 525°C). This range was selected because at temperatures below 450°C the activity of the catalyst is low. The results are shown in table 1. As shown in the table, the rice husk conversion increased slightly with increasing pyrolysis temperature for both the N2 and CO2 carrier gases. The conversion difference was ca. 5%, with the higher conversion obtained using CO2 as the carrier gas at each pyrolysis temperature. This result is consistent with the earlier considerations that CO2 as a carrier gas enhances the thermal efficiency and produces a deeper decomposition of the rice husk compared to using N2 as the carrier gas. Figure 4 clearly shows that the peak intensities indicate high crystallinity for NiO, at positions 37° and 63°, and for NiAl2O4, at peaks matching the standard peak positions. The formation of NiO evidently originates from the Ni(NO3)2 precursor, which undergoes decomposition and oxidation during the calcination stage. This transformation of Ni nitrate into the oxide phase (NiO) by oxidation is bound to occur because the calcination in the electric furnace was operated at 600°C with the precursor in direct contact with atmospheric air.
XRD and FTIR characterizations
As the figure shows, no peak attributable to metallic Ni was observed, which logically suggests that such calcination conditions could not form a metallic-phase catalyst. The formation of the NiAl2O4 phase in the prepared catalyst is considered to take place through the reaction of NiO and Al2O3 at the contact surface between those particles. The formation of this phase may provide high stability for the NiO particles dispersed on the Al2O3 surface. It can be concluded that the model catalyst system NiO/γ-Al2O3 was successfully prepared by the impregnation method and could be a highly active catalyst in the pyrolysis.
As shown in Table 2, characterization of the NiO/γ-Al2O3 catalyst was carried out for the sample prepared by the impregnation method and calcination to convert the precursor into the oxide form of the catalyst. The BET measurement was conducted by the nitrogen adsorption technique at cryogenic temperature, with the sample submerged in liquid nitrogen kept in a Dewar flask. The data obtained from the measurement were the amounts of nitrogen adsorbed versus relative pressure. The physisorption data were analyzed using the Langmuir model, the Brunauer-Emmett-Teller (BET) method, the Barrett-Joyner-Halenda (BJH) method, and the de Boer and Halsey t-method. The calculated surface areas of the catalyst were 3.1774 m2/g by the single-point method, 3.1481 m2/g by the BET method, and 10.3295 m2/g by the Langmuir method. Moreover, information such as the pore size distribution, pore shape, monolayer volume, micropore volume, and thickness of the adsorption layer was also obtained. The surface area calculated by the single-point method was close to that of the BET method, which indicates that this result should be taken as the reference rather than that of the Langmuir method. The Langmuir calculation method assumes that the surface of the catalyst is uniform, that is, that all adsorption sites are equivalent. Such an assumption is unrealistic, which is why the Langmuir surface area is three times larger than that obtained by the other methods. However, this surface area is very small compared to the surface area of γ-Al2O3 itself. Most likely, the NiO particles dispersed on the surface have blocked the pore mouths of the γ-Al2O3, so that the area decreased drastically, by more than 90%, from the original surface area of γ-Al2O3 (ca. 250 m2/g). The phenomenon of NiO pore blocking is also apparent from the calculated t-plot micropore and t-plot external surface areas. Ideally, the micropore aperture has a maximum diameter of 2 Angstrom (0.2 nm) and 90% of the catalyst area should be dominated by the micropore, or internal, surface rather than the external area. The sum of the two areas is the total area, which should be similar to the result of the BET or single-point method. In this case, the nearly equal values of the micropore area (1.4912 m2/g) and the external area (1.6569 m2/g) lend stronger support to the conclusion that the pore mouths of the micropores of the γ-Al2O3 catalyst support are blocked. The low micropore volume (0.000748 cm3/g) calculated by the t-method, about one tenth of the volume for pores of 1.70-300.0 nm (0.007441 cm3/g) obtained by the BJH method, shows the insignificant amount of micropores compared to larger pores with aperture diameters of more than 0.5 nm. In this case, the impregnation method with an excessive amount of the catalytically active phase could transform the pore structure from a micropore-dominant structure to a macropore-dominant one, as indicated by the BET and BJH adsorption measurements and calculations, which give an average pore diameter of 20.0611 nm. It might be concluded that the impregnation method for the NiO/γ-Al2O3 catalyst preparation with an excessive amount of NiO changed the physical properties of the catalyst, decreasing the surface area and blocking the pore mouths of the catalyst support.
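For readers who wish to reproduce this type of analysis, the following is a minimal sketch of the BET linearization used to convert a nitrogen-adsorption isotherm into a specific surface area. The relative pressures and adsorbed volumes below are placeholder values, not the measured isotherm of this catalyst, and the script only illustrates the calculation behind the surface areas quoted above.

```python
import numpy as np

# Placeholder isotherm: relative pressure p/p0 and adsorbed N2 volume (cm^3 STP per g).
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v_ads = np.array([0.55, 0.62, 0.68, 0.73, 0.78, 0.83])

# BET linearization: 1/[v((p0/p)-1)] = (C-1)/(v_m*C) * (p/p0) + 1/(v_m*C)
y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)
v_m = 1.0 / (slope + intercept)   # monolayer capacity (cm^3 STP/g)
C = 1.0 + slope / intercept       # BET constant

# Specific surface area: S = v_m * N_A * sigma / V_molar
N_A = 6.022e23        # molecules per mol
sigma = 0.162e-18     # m^2, cross-sectional area of an adsorbed N2 molecule
V_molar = 22414.0     # cm^3 STP per mol of gas
S_BET = v_m * N_A * sigma / V_molar
print(f"v_m = {v_m:.2f} cm^3/g, C = {C:.1f}, S_BET = {S_BET:.2f} m^2/g")
```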
The catalytic conversion of rice husks with the Ni/Al2O3 catalyst using either nitrogen or carbon dioxide carrier gas yields different product distributions. The compounds produced the most in all the samples were ketone compounds and benzene compounds. The samples that used carbon dioxide as the carrier gas generally yielded a larger number of compounds than the samples that used nitrogen as the carrier gas. Ketone and benzene production shows a consistent trend (increasing or decreasing) across the board, while the production of other compound groups fluctuates. The NiO particles dispersed on the surface have blocked the pore mouths of the γ-Al2O3, so the area decreased drastically, by more than 90%, from the original surface area of γ-Al2O3. It might be concluded that the impregnation method for the NiO/γ-Al2O3 catalyst preparation with an excessive amount of NiO changed the physical properties of the catalyst, decreasing the surface area and blocking the pore mouths of the catalyst support | 2019-04-10T13:12:52.253Z | 2018-12-19T00:00:00.000 | {
"year": 2018,
"sha1": "5d89a07f67b4fb6154b588aff8e590f4ea247cef",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/209/1/012056",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "21755fe970340c70c1d4ef9e9a1db89ff8df0caf",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
118495806 | pes2o/s2orc | v3-fos-license | Band structure engineering of epitaxial graphene on SiC by molecular doping
Epitaxial graphene on SiC(0001) suffers from strong intrinsic n-type doping. We demonstrate that the excess negative charge can be fully compensated by non-covalently functionalizing graphene with the strong electron acceptor tetrafluorotetracyanoquinodimethane (F4-TCNQ). Charge neutrality can be reached in monolayer graphene as shown in electron dispersion spectra from angle-resolved photoemission spectroscopy (ARPES). In bilayer graphene the band gap that originates from the SiC/graphene interface dipole increases with increasing F4-TCNQ deposition and, as a consequence of the molecular doping, the Fermi level is shifted into the band gap. The reduction of the charge carrier density upon molecular deposition is quantified using electronic Fermi surfaces and Raman spectroscopy. The structural and electronic characteristics of the graphene/F4-TCNQ charge transfer complex are investigated by X-ray photoelectron spectroscopy (XPS) and ultraviolet photoelectron spectroscopy (UPS). The doping effect on graphene is preserved in air and is temperature resistant up to 200 °C. Furthermore, graphene non-covalent functionalization with F4-TCNQ can be implemented not only via evaporation in ultra-high vacuum but also by wet chemistry.
I. INTRODUCTION
The electronic properties of graphene, such as large room temperature mobilities, comparable conductivities for electrons and holes and the ability for charge carrier operation via the field effect, make it an excellent candidate for carbon based nanoelectronics [1][2][3] . However, the limited size of graphene flakes from conventional micromechanical cleaving 1 requires individual selection and handling which makes device fabrication cumbersome. In contrast, epitaxial graphene grown on silicon carbide (SiC) offers realistic prospects for large scale graphene samples [4][5][6] . Unfortunately, as-grown epitaxial graphene is electron doped as a result of the graphene/SiC interface properties [7][8][9][10][11] . This doping translates into a displacement of the Fermi energy, E F , away from the Dirac point energy E D where the π-bands cross, so that the ambipolar properties of graphene cannot be exploited. Several approaches can be used to remove or compensate this excess charge. One that has recently been introduced is the structural decoupling of the graphene layers from the substrate using hydrogen intercalation 12 . Also, chemical gating techniques are very promising to tune the carrier concentration as demonstrated recently in low temperature experiments on graphene flakes 13,14 . Analogously, a possibility to compensate the n-doping in epitaxial graphene is to extract the surplus negative carriers, i.e. -in the language of semiconductors -to accomplish a method of hole injection.
Similar to the case of carbon nanotubes 15,16 , injection of holes in graphene can be achieved via surface adsorption of gas molecules such as O 2 or the paramagnetic NO 2 17,18 . In contrast, NH 3 and alkali metals such as potassium are known to act as electron donors in carbon based materials 7,15,18,19 . However, the high reactivity of NO 2 , NH 3 and of alkali atoms makes those materi-als ill-suited as practical dopants. This is illustrated by the need of cryogenic temperatures and ultra high vacuum conditions to stably adsorb NO 2 and potassium on graphene surfaces 7,17 . An approach that promises to control the carrier type and concentration in graphene in a simple and reliable way is that of surface transfer doping via organic molecules 20 . A variety of aromatic and nonaromatic molecules and even organic free radicals can be used to control graphene doping [21][22][23][24][25] . Many of these molecules possess good thermal stability, have limited volatility after adsorption and can be easily applied via wet chemistry. An effective p-type dopant is the strong electron acceptor tetrafluoro-tetracyanoquinodimethane (F4-TCNQ). It has a very high electron affinity (i.e., E ea = 5.24 eV) and has been used successfully as a state of the art p-type dopant in organic light emitting diodes 20,26-28 , carbon nanotubes 29-31 and on other materials 32,33 . Recently, the existence of a p-doping effect of F4-TCNQ on graphene has been suggested theoretically 34 and experimentally 25 .
In the present paper we give direct evidence that the excess negative charge in epitaxial monolayer graphene can be fully compensated by functionalizing its surface with F4-TCNQ. Electron dispersion spectra and Fermi surface maps measured via angle resolved photoemission spectroscopy (ARPES) qualitatively and quantitatively evaluate the reduction in charge carrier density and show that charge neutral graphene can be ultimately obtained. X-ray photoelectron spectroscopy (XPS) and ultraviolet photoelectron spectroscopy (UPS) elucidate the structural and electronic characteristics of the graphene/F4-TCNQ charge transfer complex. Raman spectroscopy of the G phonon peak corroborates the doping reversal and shows that the carrier concentration can be trimmed by laser induced desorption of molecules. Moreover, we investigate the effects of F4-TCNQ on the band structure of clean bilayer graphene. By presenting a band gap 7-10 , bilayer graphene is particularly attractive for the implementation of electronic devices such as field effect transistors provided that the intrinsic doping can be compensated.
Here we demonstrate that F4-TCNQ not only renders bilayer graphene semiconducting thanks to the full compensation of the excess negative charged carriers but also increases the band gap size to more than double of its initial value. We show that the molecular layer is stable when exposed to air. The doping effect is preserved up to 200 • C and is totally reversible by annealing the sample at higher temperatures. The molecular coverage can be precisely controlled when using a molecular evaporator but the dopants can also be applied by wet chemistry, i.e. in a technologically convenient way.
In house ARPES measurements were carried out at room temperature (RT) using monochromatic He II radiation (hν = 40.8 eV) from a UV discharge source with a display analyzer oriented for momentum scans perpendicular to theΓK-direction of the graphene Brillouin zone. The Fermi surface data were extracted from ARPES experiments using synchrotron radiation from the Swiss Light Source (SLS) of the Paul Scherrer Institut (PSI), Switzerland, at the Surface and Interface Spectroscopy beamline (SIS). The endstation allows, using a display analyzer and a sample manipulator with three rotational degrees of freedom, for fast high-resolution two-dimensional electronic dispersion measurements. XPS measurements were performed using photons from a non-monochromatic Mg K α source (hν = 1253.6 eV). The stability of the molecular layers under UV and X-ray irradiation was verified by exposing 3 hours and well over 13 hours, respectively. The thickness of the deposited molecular layers was estimated from XPS spectra calibrated through a comparison to spectra for a well characterized surface phase of TCNQ on Cu(100) measured under identical conditions 37 . Different deposition rates ranging from 0.07 to 0.5Å/min and sample temperatures between -140 and 25 • C were tested for the sample preparation. No influence on the doping results was found when the same amount of molecules was deposited. Work function measurements and the analysis of molecular orbitals were performed via normal emission UPS using monochromatic He I radiation (hν = 21.21 eV) from our UV source. During the work function measurements a bias of -30V was applied to the sample in order to distinguish between the analyzer and the sample cut-off and to more efficiently collect the inelastically scattered low kinetic energy electrons into the analyzer. Raman spectra were measured under ambient conditions using an Argon ion laser with a wavelength of 488 nm at a power level of 12 mW and a laser spot size of ≈ 1 µm in diameter. In order to apply the molecular layer on graphene via wet chemistry F4-TCNQ was dissolved in either chloroform or dimethyl sulfoxide (DMSO) until saturation. Before ARPES characterization the sample was left immersed in the solution for 12 hours.
III. F4-TCNQ ON MONOLAYER GRAPHENE
The doping level of the graphene layers can be precisely monitored with ARPES measurements of the π-band dispersion around the K-point of the graphene Brillouin zone as previously established [7][8][9][10][11] . As shown in Fig. 1(a), for an as-grown monolayer of graphene on SiC(0001) the Fermi level E F is located about 0.42 eV above the Dirac point E D . This corresponds to the well established charge carrier concentration value of n ≈ 1 x 10 13 cm −2 for as-grown graphene. For increasing amounts of deposited F4-TCNQ, E F moves back towards E D , as illustrated in Fig. 1. Meanwhile, the bands remain sharp, which indicates that the integrity of the graphene layer is preserved. Evidently, deposition of F4-TCNQ activates electron transfer from graphene towards the molecule, thus neutralizing the excess doping induced by the substrate. As the figure shows, the electron concentration in the graphene layer can be tuned precisely by varying the amount of deposited molecules. When we deposit a 0.8 nm thick layer of molecules, charge neutrality is reached, i.e. E F = E D . For a nominal thickness of the molecular film above 0.8 nm no additional shift of the Fermi energy is observed as seen in Fig. 1(e), which indicates that the charge transfer saturates.
For a detailed quantitative determination of the carrier concentrations, high-resolution ARPES data acquired using synchrotron radiation were analyzed. Fig. 2 compares the π-band dispersion (a-c) and constant energy maps (d-f) at E F for a clean graphene monolayer (a,d), an intermediate F4-TCNQ coverage (b,e) and charge transfer saturation at full coverage (c,f). The charge carrier concentration can be derived precisely from the size of the Fermi surface pockets as n = (k F − kK) 2 /π, where kK denotes the wave vector at the boundary of the graphene Brillouin zone. The Fermi surface pocket radius is extracted by using Lorentzian fits of the maxima of the momentum distribution curves of the electronic dispersion spectra in panels (a-c). The corresponding carrier concentrations are 7.3·10 12 cm −2 , 9·10 11 cm −2 and 1.5·10 11 cm −2 for the clean graphene monolayer, the intermediate and the higher coverage, respectively. The error bar for the reported carrier concentrations is ± 2·10 11 cm −2 and was determined from the variance of the Lorentzian fits.
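As an illustration of the relation quoted above, the short script below inverts n = (k_F − k_K)^2/π to recover the Fermi-pocket radii implied by the three carrier concentrations quoted in the text; it is only a sketch of the arithmetic, not part of the actual analysis chain.

```python
import numpy as np

def density_from_radius(dk_inv_angstrom):
    """n = (k_F - k_K)^2 / pi, with the pocket radius in 1/Angstrom and n in cm^-2."""
    dk_cm = dk_inv_angstrom * 1.0e8          # 1/Angstrom -> 1/cm
    return dk_cm ** 2 / np.pi

# Invert the relation for the carrier concentrations quoted in the text.
for n in (7.3e12, 9.0e11, 1.5e11):           # cm^-2
    dk = np.sqrt(np.pi * n) / 1.0e8          # back to 1/Angstrom
    print(f"n = {n:.1e} cm^-2  <->  k_F - k_K = {dk:.4f} A^-1 "
          f"(check: {density_from_radius(dk):.1e} cm^-2)")
```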
IV. CHARACTERIZATION OF THE CHARGE TRANSFER COMPLEX
The location of the charge transfer process within the F4-TCNQ molecule can be elucidated by core level analysis using XPS. N 1s and F 1s core level emission spectra for different amounts of deposited F4-TCNQ are displayed in Fig. 3. For the N 1s spectra of panel (a) a line shape analysis reveals two main components centered at binding energies (BE) of 398.3 and 399.6 eV. This indicates that different N species exist in the deposited molecular film. In agreement with the literature 25,38 the peak at 398.3 eV is assigned to the anionic species N −1 while the 399.6 eV component is attributed to the neutral N 0 species. The additional broad component at 401.7 eV likely originates from shake-up processes in view of its energy location and the relative intensity (approximately 20%) as compared to the main peak 39 . The F 1s spectra in Fig. 3(b) are in contrast dominated by a single component. Only at low coverages a weak asymmetry develops. The appearance of the N −1 anion species indicates that the electron transfer takes place through the C≡N groups of the molecules while the fluorine atoms are largely inactive. A similar mechanism with electronically active cyano groups has been found for F4-TCNQ on other surfaces [39][40][41] . However, in the present case not all C≡N groups are involved in the charge transfer process. While for low molecular coverages the N −1 species dominate (71%), for coverages from 0.4 nm to 0.8 nm about 45% of the C≡N groups are uncharged (N 0 ) as determined from the peak areas (0th momentum) of the fitted components. This indicates that when the films are densely packed, most of the molecules are standing upright as sketched in Fig. 3(c) (apparently, in dilute layers not all molecules are arranged perpendicular to the surface). We note, that this result is only valid for the initial molecular layer and is different than what was recently proposed for multilayers (5 nm) of F4-TCNQ 20 . The energy position of the different core level peaks shifts with increasing molecular coverage as indicated by the blue dashed line in Fig. 3(a). For 0.8 nm nominal film thickness this shift is exactly the same as the shift of the π-bands with respect to the Fermi energy E F (i.e. 0.4 eV for saturation) in agreement with our working hypothesis of a strong electronic coupling between the F4-TCNQ molecule and the graphene surface. At coverages larger than 0.8 nm, the shift of both the N −1 peak and the band structure saturates. Only the N 0 peak continues to grow indicating the formation of a charge neutral second layer of molecules. The saturation effect at 0.8 nm nominal film thickness also supports the model of a dense layer of upright standing molecules since the size of an F4-TCNQ molecule along its axis is indeed about 0.8 nm.
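The line-shape analysis described above can be mimicked with a simple least-squares decomposition. The sketch below fits two components fixed near the quoted binding energies of 398.3 eV (N^-1) and 399.6 eV (N^0); it uses plain Gaussians and synthetic data purely for illustration, whereas the actual line shapes, background treatment and shake-up component used in the analysis are not specified here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def n1s_model(be, a_anion, a_neutral, width, bg):
    # two components near the binding energies quoted in the text
    return gauss(be, a_anion, 398.3, width) + gauss(be, a_neutral, 399.6, width) + bg

# synthetic spectrum standing in for measured N 1s data
be = np.linspace(395.0, 404.0, 200)
rng = np.random.default_rng(1)
counts = n1s_model(be, 900.0, 700.0, 0.6, 50.0) + rng.normal(0.0, 10.0, be.size)

popt, _ = curve_fit(n1s_model, be, counts, p0=[500.0, 500.0, 0.8, 0.0])
frac_anion = popt[0] / (popt[0] + popt[1])   # equal widths -> area ratio = amplitude ratio
print(f"charged (N^-1) fraction of the N 1s signal: {frac_anion:.0%}")
```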
A comparison of the experimental band shifts when using the non-fluorinated version of the F4-TCNQ molecule, i.e.
tetracyanoquinodimethane (TCNQ), shows that the charge transfer is greatly enhanced when the F species are present, even though they are not directly involved in the charge transfer process. With TCNQ, which has a much smaller electron affinity than F4-TCNQ (i.e., 2.8 eV for TCNQ compared to 5.24 eV for F4-TCNQ), the Fermi energy remains at least 0.25 eV above the Dirac point (see Fig. 4). The maximum shift of the band structure measured upon TCNQ deposition is obtained for a molecular coverage of 0.4 nm (see Fig. 4(d)) and no additional shift is observed for higher amounts of deposited molecules.
Additional evidence for the formation of charge transfer complexes in the case of F4-TCNQ is obtained from the work function measurements shown in Fig. 5(a). The kinetic energies are plotted after correction for the applied bias and the analyzer work function, so that the sample work function is directly obtained from the intersection between the base line of the spectrum and a An analysis of the position of the highest occupied (HOMO) and lowest unoccupied (LUMO) molecular orbitals of F4-TCNQ with respect to the Fermi level using normal emission UPS corroborates further that the molecule gets charged. The low BE portion of the UPS spectra of a graphene sample with a 0.8 nm molecular coverage exhibits two additional shoulders, which are not observed for pristine epitaxial graphene. They are located at 1.4 eV and 0.35 eV (see Fig. 5(b)). In agreement with the literature 25,33,42 , the higher BE peak is attributed to the HOMO and the lowest BE peak to the (now partially populated) LUMO of the molecule. Even though the HOMO of the pristine molecule is typically found at higher BE values 43 and the LUMO is expected for negative BE values, filling of the former LUMO of F4-TCNQ with one electron generates a negative polaron 42 . Hence, the LUMO is stabilized, i.e. the binding energy of the newly occupied state is increased. In contrast, the former HOMO is destabilized (lower BE).
V. RAMAN SPECTROSCOPY ANALYSIS
The influence of the F4-TCNQ coverage on the vibrational and electronic properties of the graphene layer was also studied under ambient conditions with Raman spectroscopy. Figure 6(a) compares Raman spectra for an as-grown epitaxial monolayer of graphene (bottom trace) and for a sample that has been covered with a 1.5 nm thick F4-TCNQ layer (top trace). Peaks related to the SiC substrate are marked by arrows. The 2D-peak of graphene is highlighted with grey shading. So is the G-peak. The latter is barely visible due to overwhelming contributions of the SiC substrate in this wavelength range 44 . The Raman spectrum for graphene covered with F4-TCNQ reveals numerous additional features that are marked by stars. By illuminating a sample that is covered with F4-TCNQ molecules with the Argon laser light it is possible to gradually remove the deposited molecules through evaporation. Features associated with the SiC substrate and graphene hardly change, while the peaks attributed to the F4-TCNQ molecules decrease in amplitude. Laser heating can therefore be used to trim the molecule coverage and hence tune the charge carrier concentration in graphene. In a confocal arrangement it is therefore possible to spatially modulate the doping level. The charge carrier concentration can be extracted from a detailed inspection of the G-peak. In order to eliminate the large contributions of the SiC substrate, it is instrumental to analyze differential spectra obtained by subtracting the Raman data of the clean hydrogenetched SiC substrate from the spectrum of the F4-TCNQ- modified graphene layer on top of SiC 44,45 . The evolution of the G-peak upon successive laser illumination, i.e. for successively reduced amounts of F4-TCNQ, is illustrated in Fig. 6(b). Only the spectral region from 1530 to 1700 cm −1 centered around the G-peak is shown. The spectra can be decomposed into three peaks. Two molecular peaks at ≈ 1602 and ≈ 1637 cm −1 decrease with the laser exposure. The molecular coverage before laser exposure was calibrated with XPS (top curve in panel (b)). The other molecular coverages marked in Fig. 6(b) are calculated from the relative intensity of the molecular peaks. The intensity of the remaining peak, which we attribute to the G phonons of graphene, is approximately constant and not influenced by laser exposure. The peak position shifts however from ≈ 1583 to ≈ 1591 cm −1 . In graphene the carrier density enters the electron phonon coupling and causes phonon stiffening when the carrier density increases. The G-peak position of the F4-TCNQ saturated sample (1583.3 ± 0.9 cm −1 ) is nearly the same as for charge neutral graphene flakes 46,47 . This is consistent with the ARPES data. As the molecules are successively removed, the G-peak blue-shifts and finally reaches 1591 cm −1 , the value for clean monolayer graphene on SiC exposed to air 44 . This G-peak position corresponds to a charge carrier concentration of ≈ 5 x 10 12 cm −2 46,47 or a band gap shift of E F -E D ≈ 0.3 eV. We note, that this value is less than measured by ARPES (E F -E D = 0.42 eV) due to the additional doping when the sample is exposed to air as reported previously 44 .
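The decomposition of the differential spectra into the two molecular peaks and the graphene G-peak can be carried out with a standard multi-Lorentzian fit. The snippet below is only a sketch of that step, assuming the SiC reference spectrum and the spectrum of the F4-TCNQ-covered sample are available as arrays on a common wavenumber axis; the function names are ours and the peak positions are seeded with the values quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def g_region_model(x, a1, w1, a2, x2, w2, a3, x3, w3, g_pos, bg):
    # graphene G-peak (position g_pos) plus two molecular peaks near 1602 and 1637 cm^-1
    return (lorentzian(x, a1, g_pos, w1) + lorentzian(x, a2, x2, w2)
            + lorentzian(x, a3, x3, w3) + bg)

def fit_g_peak(wavenumber, spectrum_sample, spectrum_sic_reference):
    """Subtract the SiC reference and fit the 1530-1700 cm^-1 window."""
    diff = spectrum_sample - spectrum_sic_reference
    sel = (wavenumber > 1530) & (wavenumber < 1700)
    peak = diff[sel].max()
    p0 = [peak, 8.0,              # G-peak amplitude, width
          peak / 2, 1602.0, 8.0,  # molecular peak 1
          peak / 2, 1637.0, 8.0,  # molecular peak 2
          1585.0, 0.0]            # G-peak position, background
    popt, _ = curve_fit(g_region_model, wavenumber[sel], diff[sel], p0=p0)
    return popt[8]                # fitted G-peak position (cm^-1)
```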
VI. F4-TCNQ ON BILAYER GRAPHENE
For bilayers the band shift caused by the intrinsic ndoping of epitaxial graphene on SiC is slightly lower than for epitaxial monolayers, namely about 0.3 eV. In addition, the electric dipole present at the graphene/SiC interface imposes an electrostatic asymmetry between the layers which causes a band gap to open by roughly 0.1 eV 7-10 as seen from the ARPES data in Fig. 7(a). In the figure bands obtained from tight-binding calculations are superimposed to the dispersion plot. This facilitates an analytical evaluation of the Dirac energy position and the size of the band gap. The calculations are based on a symmetric bilayer Hamiltonian as described by McCann and Fal'ko 48 . We note that, due to the inevitable inhomogeneity of UHV-prepared graphene samples and the beam spot size, the ARPES data contain contributions of film areas with different thickness. This can be seen by a comparison with data from a sample prepared at a slightly lower temperature in Fig. 7(f). Here, the contribution from monolayer patches is notably stronger and obstructs a clear view on the bilayer bands. The sketch in panel (g) identifies the band contributions stemming from different graphene thicknesses. In the sample used for panel (a) the bilayer bands are well isolated, although trilayer contributions are clearly present. Similar to the monolayer case, F4-TCNQ deposition onto this sample causes a progressive shift of the bilayer bands, i.e. a reduction of the intrinsic n-type doping. This is illustrated in the measured and calculated dispersion plots in Fig. 7(b)-(e). Concurrent with the drop of E F -E D , the size of the band gap increases as seen from the bands fitted with the tight binding simulations. The band fitting retrieves the energy at the bottom of the lowest conduction band E cond and at the top of the uppermost valence band E val . From these values the energy gap E g and the mid gap or Dirac energy E D are derived. The corresponding energies are marked in panel (c). The evolution of the characteristic energies of these fitted bands with the amount of deposited molecules is plotted in Fig. 7(h). The band gap E g increases from 116 meV for a clean as-grown bilayer to 275 meV when a 1.5 nm thick layer of F4-TCNQ molecules has been deposited. We verified that no further charge transfer occurs for higher amounts of deposited molecules. The Fermi energy moves into the band gap for a molecular layer thickness of 0.4 nm. Hence the bilayer is turned from a conducting system into a truly semiconducting layer. The increase of the band gap indicates that the molecular deposition increases the on-site Coulomb potential difference between both layers. From the tight binding calculations we get an increase in the on-site Coulomb interaction from 0.12 eV for a clean bilayer to 0.29 eV for a bilayer with a molecular coverage of 1.5 nm 49 . This increase can be attributed to an increased electrostatic field due to the additional dipole developing at the graphene/F4-TCNQ interface.
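A minimal numerical sketch of the symmetric bilayer Hamiltonian used for such fits is given below. The hopping parameters (the band velocity ħv_F and the interlayer coupling γ1) are generic, illustrative values and not the fit parameters of this work; the script simply diagonalizes the 4×4 Hamiltonian for a given on-site asymmetry U and reads off the gap.

```python
import numpy as np

HBAR_VF = 6.6   # eV*Angstrom, nominal graphene band velocity (illustrative value)
GAMMA1 = 0.40   # eV, interlayer coupling (illustrative value)

def bilayer_bands(k, U):
    """Eigenvalues of the 4x4 bilayer Hamiltonian (basis A1, B1, A2, B2),
    with on-site energy +U/2 on layer 1 and -U/2 on layer 2."""
    p = HBAR_VF * k
    H = np.array([[U / 2, p,      0.0,    0.0],
                  [p,     U / 2,  GAMMA1, 0.0],
                  [0.0,   GAMMA1, -U / 2, p  ],
                  [0.0,   0.0,    p,     -U / 2]])
    return np.linalg.eigvalsh(H)

def band_gap(U):
    ks = np.linspace(0.0, 0.1, 2000)               # 1/Angstrom, measured from the K point
    bands = np.array([bilayer_bands(k, U) for k in ks])
    return bands[:, 2].min() - bands[:, 1].max()   # lowest conduction minus highest valence

for U in (0.12, 0.29):   # on-site differences quoted in the text (eV)
    print(f"U = {U:.2f} eV  ->  gap ~ {1000 * band_gap(U):.0f} meV")
```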
VII. THERMAL STABILITY AND CHEMICAL APPLICATION OF THE MOLECULES
An important aspect of the F4-TCNQ/graphene system is the robustness of its preparation: the Raman experiments after transport through ambient environment already demonstrated that the charge transfer complex is stable in air. On a monolayer sample covered with a multilayer of F4-TCNQ molecules the band structure was measured with ARPES before and after several hours of air exposure. This experiment revealed no change in the band structure. XPS measurements also confirmed the inert nature of the graphene substrate. The experiment with laser light exposure suggests that the F4-TCNQ layer is sensitive to temperature. The volatility of F4-TCNQ was probed in UHV by stepwise annealing a sample with a molecular coverage of 1.5 nm. The sample was annealed repeatedly for 1 min at successively higher temperatures between 25 • C to 230 • C in steps of about 25 degrees. After each annealing step the shift of the Fermi level E F with respect to the Dirac energy E D was determined from ARPES spectra recorded at room temperature. As the annealing temperature increased the difference between the Dirac energy and the Fermi energy increased back to the value of a pristine graphene layer. This increase is considered direct evidence for molecular desorption from the graphene surface. As is evident from Fig. 8(a), desorption of the molecules is initiated at temperatures around 75 • C and completed at 230 • C. Since thermal desorption is amplified by UHV conditions we anticipate that even higher temperatures are needed under atmospheric pressure to remove the entire molecular layer. Finally, we demonstrate that the F4-TCNQ layer can also be applied by immersing the sample in a chemical F4-TCNQ solution. Two solvents were tested to apply the molecular layer on graphene via wet chemistry: chloroform and dimethyl sulfoxide (DMSO). ARPES spectra taken immediately after introduction into UHV show a considerable background due to contamination by residual chemicals from the solution as displayed in Fig. 8(b) and (c). Nevertheless, the shift of the band structure is clearly visible, and in the case of F4-TCNQ wet chemical application in DMSO (panel (c)) charge neutrality (i.e., E F = E D ) is achieved.
VIII. CONCLUSION
In conclusion, we have demonstrated that the band structure of epitaxial graphene on SiC(0001) can be precisely tailored by functionalizing the graphene surface with F4-TCNQ molecules. Charge neutrality can be achieved for mono-and bilayer graphene. A charge transfer complex is formed by the graphene film and the F4-TCNQ molecular overlayer. The electrons are removed from the graphene layer via the cyano groups of the molecule. Since the molecules remain stable under ambient conditions, at elevated temperatures and can be applied via wet chemistry this doping method is attractive as its incorporation into existing technological processes appears feasible. In bilayer graphene, the hole doping allows the Fermi level to shift into the energy band gap and the additional dipole developing at the interface with the F4-TCNQ overlayer causes the band gap magnitude to increase to more than double of its original value. Thus, the electronic structure of the graphene bilayer can be precisely tuned by varying the molecular coverage. | 2010-03-15T15:09:27.000Z | 2009-09-16T00:00:00.000 | {
"year": 2009,
"sha1": "a61795686f760ed572cfc9a9c26edeef999f13e4",
"oa_license": null,
"oa_url": "https://www.dora.lib4ri.ch/psi/islandora/object/psi:15628/datastream/PDF/Coletti-2010-Charge_neutrality_and_band-gap_tuning-(published_version).pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a61795686f760ed572cfc9a9c26edeef999f13e4",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
8773699 | pes2o/s2orc | v3-fos-license | Improvement of quark propagator estimation through domain decomposition
Applying domain decomposition to the lattice Dirac operator and the associated quark propagator, we arrive at expressions which, with the proper insertion of random sources therein, can provide improvement to the estimation of the propagator. Schemes are considered for both open and closed (or loop) propagators. In the end, our technique for improving open contributions is similar to the ``maximal variance reduction'' approach of Michael and Peisa, but contains the advantage, especially for improved actions, of dealing directly with the Dirac operator. Using these improved open propagators for the Chirally Improved operator, we present preliminary results for the static-light meson spectrum.
Introduction
We present a method [1] for improving the estimation of quark propagators between different domains of the lattice. Our method turns out to be similar to that of "maximal variance reduction" (MVR) [2]. However, it contains the advantage that one can work directly with the chosen lattice Dirac operator. In the following sections, we present our method and some first results for static-light mesons, where the Chirally Improved (CI) [3] light quark propagator is calculated with the improved estimator.
The method
Decomposing the lattice into two distinct regions, the full Dirac matrix can be written in terms of submatrices,

M = ( M_11  M_12 ; M_21  M_22 ) ,   (2.1)

where M_11 and M_22 connect sites within a region and M_12 and M_21 connect sites from the different regions. We can also write the propagator in this form:

P = M^{-1} = ( P_11  P_12 ; P_21  P_22 ) .   (2.2)

We consider a set of random sources, χ^n (n = 1, ..., N), and the corresponding resultant vectors, η^n = Pχ^n, to derive useful expressions for our technique. Reconstructing the sources in one region, χ^n_1, from the solution vectors everywhere, η^n, we may write

χ^n_1 = M_11 η^n_1 + M_12 η^n_2 .   (2.3)

If we now apply the inverse of the matrix within one region, we have

M_11^{-1} χ^n_1 = η^n_1 + M_11^{-1} M_12 η^n_2 .   (2.4)

This can be solved for η^n_1 and substituted back into the naive estimator of the propagator between the two regions (repeated source indices, n, are summed over):

P_12 ≈ (1/N) η^n_1 χ^{n†}_2 = (1/N) ( M_11^{-1} χ^n_1 − M_11^{-1} M_12 η^n_2 ) χ^{n†}_2 ≈ − M_11^{-1} M_12 (1/N) η^n_2 χ^{n†}_2 ,   (2.5)

where in the last line we eliminate the first term due to the fact that we expect lim_{N→∞} χ^n_1 χ^{n†}_2 = 0. Writing out the full expression, we obtain

P_12 ≈ − M_11^{-1} M_12 (1/N) (Pχ^n)_2 χ^{n†}_2 ,
P_12 = − M_11^{-1} M_12 P_22 ,   (2.6)

where the second line is an exact expression, showing that one can relate elements of different regions of P = M^{-1} via the inverse of a submatrix of M. This is nothing new. After all, P_22 is the inverse of the Schur complement of M_11. But the lesson learned up to this point is that we need no sources in one of the two regions.
Looking again at Eq. (2.6), one can see that we need not make the approximation P_22 ≈ (1/N) (Pχ^n)_2 χ^{n†}_2. Instead, we can place the approximate Kronecker delta between the M_12 and P_22:

P_12 ≈ − M_11^{-1} M_12 (1/N) χ^n_2 χ^{n†}_2 P_22
     = − (1/N) ψ^n_1 χ^{n†}_2 P_22 ,   ψ^n_1 ≡ M_11^{-1} M_12 χ^n_2 ,
     = − (1/N) ψ^n_1 ( γ_5 P_22 γ_5 χ^n_2 )^† ,   (2.7)

where we have used the γ_5-hermiticity of the propagator. One can see from the form of the vector ψ^n_1 in the next to last line that we only need sources which "reach" region 1 via one application of M. Also, one can use all points in one region for the source and all points in the other region for the sink.
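As a toy numerical check of the relations above (plain linear algebra on a small random matrix standing in for M, not lattice code), one can verify the exact block identity of Eq. (2.6) and watch the noise of the source average fall off like 1/√N. Here P_22 is taken directly from the exact inverse for brevity, whereas in practice it is handled through further inversions as in Eq. (2.7).

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, N = 8, 8, 20000
# a small, well-conditioned random matrix standing in for the Dirac operator M
M = np.eye(n1 + n2) + 0.1 * rng.standard_normal((n1 + n2, n1 + n2))
P = np.linalg.inv(M)
M11, M12 = M[:n1, :n1], M[:n1, n1:]
P12, P22 = P[:n1, n1:], P[n1:, n1:]

# exact relation of Eq. (2.6): P12 = -M11^{-1} M12 P22
print(np.allclose(P12, -np.linalg.solve(M11, M12 @ P22)))   # True

# stochastic version: insert (1/N) sum_n chi2 chi2^dagger ~ 1 between M12 and P22
est = np.zeros((n1, n2))
for _ in range(N):
    chi2 = rng.choice([-1.0, 1.0], size=n2)                  # Z2 noise in region 2
    est -= np.outer(np.linalg.solve(M11, M12 @ chi2), chi2 @ P22)
est /= N
print(np.max(np.abs(est - P12)))   # small; decreases like 1/sqrt(N)
```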
For our first attempt at using this method, we use equal volumes for the two regions and place sources next to the boundaries in both regions (see Fig. 1). Although this choice may not be ideal, we do perform inversions for each spin component separately (spin dilution) and, since our sources occupy all relevant time slices surrounding the boundaries, we actually obtain two independent estimates of the quark propagator between the two regions (Eq. (2.8)). Our method is thus very similar to that of MVR, except for the fact that we can work directly with M, instead of M†M. This is better since it is less problematic to invert M due to it having a better condition number than M†M [2]. Also, the sources need only occupy enough time slices to connect them with the other region via M, rather than M†M. These are the same number of sources for Wilson-like operators, where M†M, like M, only extends over one time slice. However, for many other improved operators (like CI) this can reduce the number of necessary source time slices by a factor of 2. On top of this, although the nature of the lattice Dirac operator may dictate the ideal domain decomposition, it does not otherwise restrict the choice of regions or the use of this method (e.g., it is even possible to use the Overlap operator).
For expressions and first results relevant to propagators which return to the same region (e.g., closed, or loop, propagators) we point the reader to our lengthier publication on the subject [1] and move on to our application for the open propagators.
Static-light mesons
For our meson source and sink operators, we use quark bilinears involving a gauge-covariant (Jacobi) smearing operator S [4] and the covariant derivative D. For our basis of light-quark spatial wavefunctions, we use three different amounts of smearing and apply 0, 1, and 2 covariant Laplacians to these; the subscript on the smearing operator denotes the number of smearing steps, and all are applied with the same weighting factor of κ_sm = 0.2. So we have a relatively narrow, approximately Gaussian distribution, along with wider versions which exhibit one and two radial nodes, due to the application of the Laplacians. We point out that, thus far, we have not altered the quantum numbers of the meson source since both the smearing and the Laplacians treat all spatial directions the same (i.e., they are scalar operations). In order to create mesons of different quantum numbers, we use these light-quark distributions together with the operators shown in Table 1 (see, e.g., Ref. [2]). Inserting the estimated and static propagators into the meson correlators, the static quark is propagated through products of links in the time direction and carries the fixed spin structure (1 + γ_4)/2. The estimated light-quark propagator P_{x+t4,x} is of the form of Eq. (2.8). Thus, all points within region 1 (N_s^3 N_t/2 of them) can act as the source location x, just so long as t is large enough to place the sink location x + t4 in region 2. Note that we now have subscripts on the source and sink operators to denote which light-quark distribution is being used. We create all such combinations and thus have a 3 × 3 matrix of correlators for each of the operators in Table 1.
Following the work of Michael [5] and Lüscher and Wolff [6], we use this cross-correlator matrix in a variational approach to separate the different mass eigenstates. We must therefore solve the generalized eigenvalue problem

C(t) ψ^(k) = λ^(k)(t, t_0) C(t_0) ψ^(k) ,   (3.4)

in order to obtain the following eigenvalues:

λ^(k)(t, t_0) ∝ e^{−t M_k} [ 1 + O(e^{−t ∆M_k}) ] ,   (3.5)

where M_k is the mass of the kth state and ∆M_k is the mass-difference to the next state. For large enough values of t, each eigenvalue should then correspond to a unique mass state, requiring only a single-exponential fit.
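For completeness, this is how the generalized eigenvalue problem of Eq. (3.4) and the resulting effective masses are typically handled numerically; the correlator array below is a hypothetical placeholder, and the function names are ours rather than those of the analysis code used for this work.

```python
import numpy as np
from scipy.linalg import eigh

def gevp_eigenvalues(C, t0):
    """Solve C(t) v = lambda(t, t0) C(t0) v for each t; C has shape (T, n, n)."""
    lams = []
    for t in range(C.shape[0]):
        # eigh solves the symmetric-definite generalized problem A v = lam B v
        lam, _ = eigh(C[t], C[t0])
        lams.append(np.sort(lam)[::-1])      # largest eigenvalue <-> lightest state
    return np.array(lams)                    # shape (T, n)

def effective_masses(lams):
    """a*M_eff(t) = ln[ lambda(t) / lambda(t+1) ] for each state."""
    return np.log(lams[:-1] / lams[1:])

# usage sketch with a hypothetical 3x3 correlator matrix C[t]
# C = np.load("cross_correlators.npy")      # shape (T, 3, 3), symmetrized
# lams = gevp_eigenvalues(C, t0=1)
# print(effective_masses(lams)[:, 0])       # ground-state effective mass vs t
```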
Variational approaches have seen much use recently in lattice QCD, especially for extracting excited hadron masses, and we point the reader to the relevant literature in [7]. We create our cross-correlator matrices on two sets of gluonic configurations: 100 quenched and 74 dynamical, each with 12^3 × 24 lattice sites. The quenched configurations have a lattice spacing of a ≈ 0.15 fm (a −1 ≈ 1330 MeV) and a spatial extent of L ≈ 1.8 fm. The dynamical set [8] has 2 flavors of CI sea quarks (with M π,sea ≈ 500 MeV), a ≈ 0.115 fm (a −1 ≈ 1710 MeV), and L ≈ 1.4 fm. We use 12 random spin-color vectors as sources for the light-quark propagator estimation. Spin-diluted, this gives us 48 separate sources for the inversions (one in the full volume, φ, and two in the subregions, ψ; see Eqs. (2.7) and (2.8)). We perform inversions for 4 different quark masses: am q = 0.02, 0.04, 0.08, 0.10.
After extracting the eigenvalues, we check for single mass states by creating effective masses. A representative sample of these, along with single-elimination jackknife errors, is plotted against time in Fig. 2. In each case, values obtained from the first three eigenvalues are shown. The horizontal lines signify the M ± σ M values which result from correlated fits over the corresponding range in time.
We require that at least three effective mass points display a plateau (within errors) and that the eigenvectors remain constant (again, within errors) over the same range before we perform said fits.
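A compact illustration of the effective-mass and single-elimination-jackknife procedure just described is given below; the data layout (an array of one GEVP eigenvalue per configuration and time slice) is assumed for the purpose of the sketch.

```python
# Effective mass and single-elimination jackknife errors for one GEVP eigenvalue.
# 'lam' is an (N_config, T) array of eigenvalue measurements (illustrative layout).
import numpy as np

def effective_mass(lam_mean):
    # a*m_eff(t) = ln[ lambda(t) / lambda(t+1) ] in lattice units
    return np.log(lam_mean[:-1] / lam_mean[1:])

def jackknife_effmass(lam):
    N = lam.shape[0]
    full = effective_mass(lam.mean(axis=0))
    # single-elimination jackknife samples
    samples = np.array([effective_mass(np.delete(lam, i, axis=0).mean(axis=0))
                        for i in range(N)])
    err = np.sqrt((N - 1) / N * ((samples - samples.mean(axis=0)) ** 2).sum(axis=0))
    return full, err
```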
Performing fits for all quark masses, we next examine the mass splittings (M − M 1S ) as a function of the quark mass. These are plotted in Fig. 3, along with the chirally extrapolated results (m q → 0). We use simple linear fits to perform these extrapolations.
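For concreteness, a linear extrapolation of this kind can be done as in the following sketch; the quark masses are those quoted above, while the splittings and errors are placeholder numbers rather than values from Table 2.

```python
# Illustrative linear chiral extrapolation of a mass splitting to m_q -> 0.
import numpy as np

am_q      = np.array([0.02, 0.04, 0.08, 0.10])       # quark masses used in the text
splitting = np.array([0.42, 0.45, 0.50, 0.53])        # placeholder a*(M - M_1S) values
errors    = np.array([0.03, 0.02, 0.02, 0.02])        # placeholder errors

coeffs, cov = np.polyfit(am_q, splitting, deg=1, w=1.0 / errors, cov=True)
slope, intercept = coeffs
print(f"chiral limit (m_q -> 0): {intercept:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
```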
In Table 2 we present the results for the chirally extrapolated (B mesons) and strange-quark-mass interpolated (B s mesons) mass splittings. We include the statistical errors from the fits in the first set of parentheses. For fits where the effective-mass plateaus are not immediately clear (e.g., fits represented with dashed lines in Fig. 2), we move the minimum time of the fit range out by one to two time slices and observe the subsequent changes in M ± σ M , as compared to the previous values. The differences from the old values are reported as systematic errors; these appear in the second set of parentheses. For a discussion of these results, we refer the reader to our lengthier report [1].
One thing is clear though: due to the improvement of the light-quark propagator estimation, and our subsequent ability to use half the points of the lattice as source locations, we have greatly improved our chances of isolating excited heavy-light states. In an earlier study [9] of heavy-light mesons using wall sources on the same quenched configurations, we were barely able to see the 2S state, let alone the excited states in any other operator channel. Also, there we used NRQCD for the heavy quark; this should only boost the signals, since the heavy quark can then "explore" more of the lattice through its kinetic term. It is obvious, however, that we have much better signals now, since we are able to see excited states in every channel (2S, 3S, 2P−, 2P+, and 2D±) on the quenched lattice. | 2014-10-01T00:00:00.000Z | 2006-09-07T00:00:00.000 | {
"year": 2006,
"sha1": "0fb9c16a29a13e775d2cfb3e0cbf02ad22817d4e",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/032/169/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "0fb9c16a29a13e775d2cfb3e0cbf02ad22817d4e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250683285 | pes2o/s2orc | v3-fos-license | Quantum Hall effect on Riemann surfaces
We study the family of Landau Hamiltonians compatible with a magnetic field on a Riemann surface S by means of Fourier-Mukai and Nahm transforms. Starting from the geometric formulation of adiabatic charge transport on Riemann surfaces, we prove that Hall conductivity is proportional to the intersection product on the first homology group of S and therefore it is quantized. Finally, by using the theory of determinant bundles developed by Bismut, Gillet and Soulé, we compute the adiabatic curvature of the spectral bundles defined by the holomorphic Landau levels. We prove that it is given by the polarization of the jacobian variety of the Riemann surface, plus a term depending on the relative analytic torsion.
Introduction
The classical Hall effect, giving rise to the so-called Hall conductivity, was observed in 1879 by E. Hall. Almost exactly a century later, in 1980, K. von Klitzing [1] discovered the integer quantum Hall effect, which implies the quantization of Hall conductivity. Soon afterwards, in 1981, R. Laughlin [2] gave the first physical explanation, based on a gauge argument on a cylinder. The remarkable exactness of this effect triggered worldwide interest in it, which eventually led to the discovery of the fractional quantum Hall effect by D. Tsui, H. Stormer and A. Gossard in 1982 [3]. The same year, D. Thouless, M. Kohmoto, P. Nightingale and M. den Nijs [4] gave the first topological explanation for the quantization of the Hall conductivity by means of the perturbative Kubo's formula applied to a flat torus configuration space. With the same geometry, J. Avron-R. Seiler [5], Q. Niu-D. J. Thouless [6], M. Kohmoto [7], and Q. Niu-D. J. Thouless-Y. S. Wu [8] showed in 1985 that Kubo's formula is given by the Chern number of a line bundle over the torus of magnetic fluxes. The next conceptual step was carried out by J. Avron, R. Seiler and L. Yaffe in 1987 [9], when they interpreted Kubo's formula in the framework of adiabatic transport theory and proved its validity to second order. This approach allowed them to show that Hall conductivity was given by the adiabatic curvature of spectral projectors associated with a family of Schrödinger operators parametrized by the Aharonov-Bohm potentials. Later, in 1990, M. Klein and R. Seiler [10] proved the validity of Kubo's formula to any perturbation order. Finally, J. Avron, R. Seiler and P. G. Zograf [11] outlined in 1994 the computation of the adiabatic conductance for the first Landau level for strong magnetic fields.
In this paper we give a geometric description of the spectral bundles defined by the family of Landau Hamiltonians on a Riemann surface S by means of holomorphic spectral geometry techniques. This will allow us to determine the Hall conductivity for holomorphic Landau levels in terms of Chern classes of spectral bundles.
In order to achieve this goal we first use a particular instance of Nahm's transform to describe the spectral bundles associated with the holomorphic Landau levels. Then we show that this Nahm transform is equivalent to an integral functor associated with the jacobian of the Riemann surface S. This identification allows us to determine the topological invariants of the spectral bundles and hence to prove the quantization of the Hall conductivity of holomorphic Landau levels on a Riemann surface.
After this we go one step further and compute the adiabatic conductance for holomorphic Landau levels. We show that this is equivalent to determining the adiabatic curvature of the spectral bundles and in turn this is the same as computing the curvature of the determinant bundles of the spectral bundles with respect to the natural Hermitian metrics induced on them. These metrics are conformal to the Quillen metrics and we can compute the curvature of the latter by means of the techniques developed by Bismut, Gillet and Soulé [12,13,14]. As a final result we prove that the adiabatic conductance of the holomorphic Landau levels is determined by the polarization of the jacobian of the Riemann surface plus a term which depends on the relative analytic torsion.
The results expounded here are based on the papers [15,16]; the reader is referred to them for further details.
Geometric quantization of the Landau-Hall problem
Let (S, g) be an oriented Riemannian surface and B ∈ Ω 2 (S) a covariantly constant magnetic field; thus B = B̄ Ω 2 , where B̄ ∈ R and Ω 2 is the Riemannian area element.
The Landau-Hall problem for particles of charge e and mass m is the Hamiltonian dynamical system (T * S, ω 2 , H), where ω 2 = dθ + eB and H is the Landau Hamiltonian, the fiberwise kinetic energy determined by g divided by 2m. It can be shown that the symplectic manifold (T * S, ω 2 ) is quantizable if and only if (S, eB) is quantizable, that is, if and only if [eB/h] ∈ H 2 (S, Z). Moreover, the Hamiltonian H is quantizable in the BKS scheme, and if L = (L, ⟨ , ⟩, ∇) → S is a prequantization bundle with Hermitian metric ⟨ , ⟩ and unitary connection ∇, then the Schrödinger operator associated with H is Ĥ = (ℏ 2 /2m)(∇ * ∇ + R/6), where ∇ * ∇ is the Bochner Laplacian of the connection ∇ and R is the scalar curvature of the Riemannian metric g. The different Schrödinger operators associated with the magnetic field B are parametrized by the set of equivalence classes of prequantization line bundles of (S, eB). This set is given by H 1 (S, U (1)), the space of Aharonov-Bohm potentials.
If we consider a prequantization bundle L = (L, , , ∇) → S then any other prequantization bundle L = (L , , , ∇ ) can be obtained by twisting L by a flat Hermitian line bundle L 0 .
Since L 0 is flat, it is trivializable, hence
Families of Landau operators
On a compact surface S there is an isomorphism H 1 (S, U (1)) ≅ J(S), where J(S) = H 1 (S, R)/H 1 (S, Z) is the Lazzeri model of the jacobian variety of S. Any flat line bundle on the Riemann surface (S, g) is endowed with a natural holomorphic structure. Thus, H 1 (S, U (1)) gets identified with the Picard group Pic 0 (S) of degree zero holomorphic line bundles.
Therefore there are isomorphisms that correspond to the identifications: jacobian variety of S ↔ Aharonov-Bohm potentials ↔ degree zero holomorphic line bundles. To each presentation of the space of Aharonov-Bohm potentials, either as J(S) or as Pic 0 (S), we can associate a family of Schrödinger operators parametrized by these spaces.
Family of Landau operators parametrized by J(S)
In this case we keep the line bundle fixed and parametrize the connections.
Let L = (L, ⟨ , ⟩, ∇) → S be a prequantization bundle on (S, eB). We have seen that the Landau Hamiltonian H is quantizable and that its Schrödinger operator Ĥ, acting on L 2 (S, L), is Ĥ = (ℏ 2 /2m)(∇ * ∇ + R/6). The spaces of eigensections of Ĥ are called Landau levels. We have also seen that any other prequantization bundle is of the form (L, ⟨ , ⟩, ∇' = ∇ − iA), where A ∈ Ω 1 (S) is closed. We denote by Ĥ(A) the corresponding Landau operator acting on L 2 (S, L). In this way we get a family of operators Ĥ(A) parametrized by J(S), whose elements act on the Hilbert space L 2 (S, L). We consider the trivial Hilbertian bundle H → J(S) with fiber L 2 (S, L); H is endowed with the flat connection ∇ associated with the trivialization. The family of operators Ĥ(A) can then be thought of as a section Ĥ ∈ Γ(J(S), End (H)).
Family of operators defined by a relative operator
From a geometrical point of view, a C ∞ family of differential operators parametrized by a differentiable manifold T is represented by a relative differential operator.
Definition 1 Let π : X → T be a submersion and let E → X, F → X be two vector bundles. A k-th order relative differential operator is a C ∞ (T )-linear map D : Γ(X, E) → Γ(X, F ) that factorizes through the k-th order relative jet extension j k X/T : Γ(X, E) → Γ(X, J k X/T (E)) by means of a C ∞ (X)-linear mapping D̄ : Γ(X, J k X/T (E)) → Γ(X, F ); that is, there is a commutative diagram expressing D = D̄ • j k X/T . For any t ∈ T , set X t = π −1 (t) and let E t → X t , F t → X t be the restrictions of the vector bundles E, F to the fiber X t .
A relative differential operator D : Γ(X, E) → Γ(X, F ) induces in a natural way differential operators D t : Γ(X t , E t ) → Γ(X t , F t ) on the fibers; in this way we obtain the family of differential operators defined by D and parametrized by T .
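Schematically, the factorization in Definition 1 can be displayed as follows (our rendering; D̄ denotes the C ∞ (X)-linear map of the definition):

```latex
% Commutative triangle for Definition 1: D factors through the relative jet bundle.
\[
\begin{array}{ccc}
\Gamma(X,E) & \stackrel{D}{\longrightarrow} & \Gamma(X,F) \\[4pt]
{\scriptstyle j^{k}_{X/T}}\ \searrow & & \nearrow\ {\scriptstyle \bar{D}} \\[4pt]
 & \Gamma\bigl(X,\, J^{k}_{X/T}(E)\bigr) &
\end{array}
\qquad\quad D \;=\; \bar{D}\circ j^{k}_{X/T}.
\]
```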
Family of Landau operators parametrized by Pic 0 (S)
Now we deform the line bundle.
It is well known that Pic 0 (S) parametrizes the gauge equivalence classes of flat Hermitian line bundles on S.
Let P S → S × Pic 0 (S) be a Poincaré line bundle. P S can be endowed with a unitary connection ∇ P , such that the restriction of (P S , ∇ P ) to every fiber S ξ ≡ S × {ξ} of the natural projection π Pic 0 (S) : S × Pic 0 (S) → Pic 0 (S) is a flat bundle in the gauge equivalence class defined by ξ ∈ Pic 0 (S).
Let L = (L, ⟨ , ⟩, ∇) → S be a prequantization bundle for (S, eB) and let π S : S × Pic 0 (S) → S be the natural projection.
Definition 2
The family of Landau operators parametrized by Pic 0 (S) is the family defined by the relative Schrödinger operator on the line bundle π * S L ⊗ P S → S × Pic 0 (S). Compared with the family parametrized by J(S), in this case we get a Hilbertian bundle whose fiber at ξ ∈ Pic 0 (S) is L 2 (S, L ⊗ P ξ ).
3.4. Equivalence of the families of Landau operators parametrized by J(S) and Pic 0 (S)
It can be proved that the Abel-Jacobi isomorphism J(S) ≅ Pic 0 (S) establishes an isomorphism between the families of Landau operators parametrized by J(S) and Pic 0 (S).
We call any of them the family of Landau operators on the Riemann surface S corresponding to the magnetic field B. Bearing in mind the isomorphism induced by the Abel-Jacobi map we can use these families in an equivalent way.
Adiabatic charge transport on Riemann surfaces
By means of an extension of Kato's Adiabatic Theorem, Avron, Seiler and Yaffe [9] interpreted Hall conductivity as the adiabatic curvature of a family of spectral projectors parametrized by the Aharonov-Bohm potentials. A time-dependent quantum system is described by a self-adjoint operator Ĥ(t) defined on a Hilbert space. Let τ be a parameter defining the temporal scale of the system. The dynamics of the system is determined by the time-dependent Schrödinger equation. The adiabatic limit of the system corresponds to the limit τ → ∞.
To formulate the Adiabatic Theorem one has to make several hypotheses on the uniparametric family of operators Ĥ(σ), with σ = t/τ : A1. The family Ĥ(σ) is smooth. A2. The spectrum of Ĥ(σ) has a gap whose size is uniformly bounded. Therefore, we can define a projector P (σ) onto the part of the spectrum separated by the gap. A3. The rank of P (σ) is finite.
Theorem 1 If Ĥ(σ) satisfies conditions A1-A3 and U τ (σ) is the evolution operator of the Schrödinger equation, then the true evolution carries the spectral projector P (0) into P (σ) up to an error that vanishes in the adiabatic limit. The magnitude of the error term depends on τ and the size of the spectral gap.
4.1. Charge transport on a Riemann surface S
Let {α 1 , . . . , α 2p } be a symplectic basis of H 1 (S, Z) and let {ω 1 , . . . , ω 2p } be the dual basis of H 1 (S, Z) formed by harmonic 1-forms, that is, the basis satisfying ∫ α j ω k = δ jk . We say that ω j is the Aharonov-Bohm potential associated with α j .
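A standard way of writing the conclusion of such an adiabatic theorem (quoted here in its textbook form, as a guide to the statement above rather than as the paper's exact display) is:

```latex
% Standard form of the adiabatic theorem: the true evolution intertwines the
% spectral projectors up to an error controlled by the adiabatic parameter tau.
\[
U_\tau(\sigma)\, P(0)\, U_\tau(\sigma)^{*} \;=\; P(\sigma) \;+\; O\!\left(\tfrac{1}{\tau}\right),
\qquad \sigma = \tfrac{t}{\tau}\in[0,1].
\]
```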
The variation in time of the Aharonov-Bohm potential A(t) is a vector field that by Faraday's law gets identified with an electromotive force.
Let ψ be a solution to the Schrödinger equation i ∂ψ/∂t = Ĥ(A(t))ψ. The intrinsic expression of the current 1-form associated with the state of the system described by ψ, and induced by the variation of the Aharonov-Bohm potentials given by A(t), is written in terms of the connection ∇ on the Hilbertian bundle H → J(S). The 2-form C = i e 2 ⟨∇ψ, ∇ψ⟩ relating currents with electromotive forces is called the conductance 2-form. The current induced along α j ∈ H 1 (S, Z) by the variation A(t) is given in terms of the vector field D j on J(S) associated with ω j .
The charge transport along the homology class α j due to the variation of the Aharonov-Bohm potentials A(t) is given by Q(α j , ψ) = ∫_0^τ I(α j , ψ) dt. Finally, the charge transport along α j while the Aharonov-Bohm potential associated with α k experiences a unit increment, that is D = D k , is denoted Q(α k , α j ).
Adiabatic charge transport
The idea behind adiabatic transport is to replace ψ by its adiabatic evolution and to estimate the difference by means of the Adiabatic Theorem. Let P be the spectral projector associated with ψ and define the spectral bundle P = Im P → J(S). This bundle is endowed with the connection ∇ = P • ∇, whose curvature is expressed in terms of ∇P , the covariant derivative of P ∈ End (H).
Definition 3 The ordinary 2-form
is called the adiabatic curvature associated with the spectral bundle P → J(S).
We get this way the adiabatic currents I Ad (ψ) and the adiabatic charge transports Q Ad (α k , α j ). One checks that the adiabatic conductance 2-form is proportional to the adiabatic curvature C Ad = i e 2 Ω P .
The averages of the adiabatic quantities are obtained by evaluating them on the 2-cycle α k ⋆ α j , where α k ⋆ α j is the Pontryagin product of the homology classes α k , α j ∈ H 1 (J(S), Z).
Theorem 2 Suppose that the family of Landau operators on a Riemann surface S satisfies conditions (A1-A3) of the Adiabatic Theorem; then the adiabatic charge transport is given by pairing the adiabatic curvature with the corresponding homology classes. The formula for the adiabatic transport is the geometric formulation of Kubo's formula. Therefore, on a surface S of genus p there are p(2p − 1) different transport coefficients, and any of them gives rise to a mean Hall conductivity; this is the most common form of Kubo's formula. Since (i/2π) Ω P = (i/2π) Tr(Ω ∇ ) represents the first Chern class c 1 ( P ) ∈ H 2 (J(S), Z) of the spectral bundle P → J(S), Q Ad and σ H depend only on the cohomology class c 1 ( P ). As a consequence, Theorem 2 implies that the charge transport, as well as the Hall conductivity, are quantized up to infinitesimal terms in the adiabatic parameter.
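Schematically, and leaving the universal prefactor (which depends on the conventions chosen for e and ℏ) unspecified, the quantization statement can be summarized as:

```latex
% Schematic quantization statement: the averaged adiabatic charge transport pairs the
% first Chern class of the spectral bundle with the 2-cycle defined by the Pontryagin
% product, and is therefore an integer up to the universal prefactor and adiabatic errors.
\[
\overline{Q}_{\mathrm{Ad}}(\alpha_k,\alpha_j)\ \propto\
\bigl\langle\, c_1(P),\, [\alpha_k \star \alpha_j] \,\bigr\rangle \in \mathbb{Z},
\qquad
c_1(P) \;=\; \Bigl[\tfrac{i}{2\pi}\,\Omega_{P}\Bigr].
\]
```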
Nahm transforms on Riemann surfaces
In order to be able to describe the spectral bundles associated with the Landau levels and to calculate their topological invariants, we introduce the Nahm transform associated to the jacobian variety J(S) of the Riemann surface S. We assume the identification J(S) ≅ Pic 0 (S). We have seen that J(S) parametrizes the gauge equivalence classes of flat line bundles and that the restriction of the Poincaré bundle P S → S × J(S), endowed with its unitary connection ∇ P , to any fiber S ξ = S × {ξ} is a flat line bundle (P ξ , ∇ P, ξ ) → S in the equivalence class defined by ξ ∈ J(S).
Let us consider the spinor bundle S = S + ⊕ S − of S as a spin c manifold, with S + = Λ 0,0 T * S, S − = Λ 0,1 T * S and let ∇ S be the spinorial connection of S.
We get a family of coupled Dirac operators D ξ . By the Atiyah-Singer theorem for families, their indices fit together into a difference bundle over J(S). The curvature F ∇ of (E, ∇) is of type (1, 1); therefore E is a holomorphic vector bundle. The Dirac operator D ξ coincides with the Dolbeault-Dirac operator D ξ = √2 (∂ * E⊗P ξ + ∂ E⊗P ξ ) of E ⊗ P ξ .
The transformed Hermitian metric
Let π S : S × J(S) → S be the natural projection and let H ∞ ± be the spaces of C ∞ -sections of the vector bundles π * S (S ± ⊗ E) ⊗ P S → S × J(S). H ∞ ± can be thought of as infinite dimensional vector bundles H ∞ ± → J(S) whose fibers H ∞ ±, ξ at ξ ∈ J(S) are the spaces of smooth sections Γ(S, S ± ⊗ E ⊗ P ξ ). On each fiber we define the L 2 Hermitian metric ⟨s 1 , s 2 ⟩ ξ = ∫ S ⟨s 1 , s 2 ⟩ ω, for s 1 , s 2 ∈ H ∞ ±, ξ . We introduce the Hilbertian bundles H ± whose fibers H ±, ξ are the L 2 -completion of H ∞ ±, ξ with respect to the previous metric.
If (E, ∇) is an IT i -pair, by the Regularity Theorem for elliptic operators E is a subbundle of H ∞ ± , hence we get an induced Hermitian metric on E.
The transformed connection
Following Bismut, we define a connection ∇ on H ∞ ± by lifting: the derivative of a section along a vector field D on J(S) is taken with the product connection ∇ 1 of π * S (S ± ⊗ E) ⊗ P S in the direction of D H , the natural lift of D to S × J(S). One checks that ∇ is a flat connection.
We endow E with the connection obtained by composing this flat connection with P , where P is the orthogonal projection onto E.
Definition 5 Let (E, ∇) be an IT-pair. The pair ( E, ∇) is called the Nahm transform of (E, ∇).
The curvature of the transformed connection
Let ( E, ∇) be the Nahm transform of an IT i -pair. Taking into account that the ambient connection is flat, one proves that the curvature of ∇ is F ∇ = P • (∇P ∧ ∇P ) • P .
Integral functors and Fourier-Mukai transforms
Now we introduce an integral functor on the Riemann surface S related to the Fourier-Mukai transform of its jacobian variety J(S). This is the holomorphic analogue of the C ∞ Nahm transform just described. The use of this integral functor has the following advantages: (i) Computation of the topological invariants of the spectral bundles.
(ii) Establish geometrical properties of these bundles.
We will see in Section 6.4 that this integral functor is compatible, in a precise sense, with the Nahm transform.
If X is a complex projective variety, D(X) denotes its bounded derived category of complexes of coherent O X -modules.
Let X, Y be two projective varieties and let P → X × Y be a vector bundle. We have the natural projections π X : X × Y → X and π Y : X × Y → Y . We define an integral functor with kernel P by pulling back along π X , twisting by P and pushing forward along π Y (see below).
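In the standard formulation of such integral functors, which the definition here presumably follows, the functor with kernel P is the derived push-pull:

```latex
% The usual push-pull form of an integral functor with kernel P on X x Y.
\[
\Phi^{P}_{X\to Y}(E^{\bullet}) \;=\; \mathbf{R}\pi_{Y*}\bigl(\pi_X^{*}E^{\bullet}\otimes P\bigr),
\qquad
\pi_X : X\times Y \to X, \quad \pi_Y : X\times Y \to Y .
\]
```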
WIT (Weak Index Theorem) and IT (Index Theorem) conditions.
We write Φ = Φ P X→Y and Φ i (E • ) = H i (Φ(E • )) is the i-th cohomology sheaf of the complex Φ(E • ). We denote by P y the restriction of P to X y = X × {y}.
A sheaf E on X is Φ-IT i if H j (X, E ⊗ P y ) = 0 for every j ≠ i and every y ∈ Y .
By the base change theorem of algebraic geometry one has that if E is Φ-IT i , then Φ i (E) is a vector bundle.
6.2. The integral functor associated to the jacobian
The first example of a Fourier-Mukai transform was given by Mukai. Let X be an abelian variety, X̂ its dual abelian variety and let P → X × X̂ be the Poincaré bundle; the Mukai transform is then S ≡ Φ P X→X̂ . There is a functorial isomorphism Φ J ≅ S • α * , where α : S → J(S) is the Abel-Jacobi immersion.
Change of topological invariants under Φ J
Later, in Section 7, we will see that the spectral bundles associated to the family of Landau operators can be expressed as P = Φ J (L) for a suitable line bundle L → S. Since we have seen that the Hall conductivity is determined by the first Chern class of P , a key problem is the computation of the topological invariants of the transformed sheaves Φ J (L).
Proposition 1 If E is a coherent sheaf on S with Chern character ch(E) = (r, d), then ch(Φ J (E)) ∈ H • (J(S), Q) is given by an expression depending only on r, d, the genus p of S and the cohomology class [Θ] defined by the theta divisor Θ ⊂ J(S).
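A plausible reconstruction of the content of Proposition 1, obtained from Grothendieck-Riemann-Roch with the usual conventions for the Poincaré bundle (so the overall signs should be read as indicative of the structure rather than as the paper's exact normalization), is:

```latex
% Indicative reconstruction via Grothendieck-Riemann-Roch for pi_J : S x J(S) -> J(S):
%   ch(Phi_J(E)) = pi_{J*}( ch(pi_S^* E) . ch(P_S) . Td(S) ),  with ch(E) = (r, d).
% With Td(S) = 1 + (1-p)[pt] and c_1(P_S)^2 = -2 [pt] x [Theta], this yields
\[
\operatorname{ch}\bigl(\Phi_J(E)\bigr) \;=\; \bigl(d + r(1-p)\bigr) \;-\; r\,[\Theta]
\;\in\; H^{0}(J(S),\mathbb{Q})\oplus H^{2}(J(S),\mathbb{Q}),
\]
% so that, in particular, c_1(Phi_J(E)) is an integer multiple of the theta class.
```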
Compatibility between Φ J and the Nahm transform
Let E → S be a Hermitian bundle with unitary connection ∇.
We have seen that the spin c Dirac operator D ξ gets identified with the Dolbeault-Dirac operator of E ⊗ P ξ .
Hodge theory and the Dolbeault isomorphism give isomorphisms between the kernels and cokernels of the Dirac operators and the corresponding sheaf cohomology groups. Thus, E is IT i with respect to Φ J if and only if (E, ∇) is an IT i -pair with respect to the Nahm transform, and there is a natural isomorphism of C ∞ vector bundles, induced by Hodge theory, between the two transforms.
Theorem 4 The connection ∇ is compatible with the holomorphic structure of Φ i J (E).
7. Spectral bundles of the family of Landau operators
Let (S, g) be a compact surface of genus p with constant curvature R, let L = (L, ⟨ , ⟩, ∇) → S be a prequantization bundle and let Ĥ = (ℏ 2 /2m)(∇ * ∇ + R/6) be the associated Landau operator. In order to describe the spectral bundles explicitly we must recall the relationship between the spectral geometry of Ĥ and the holomorphic structure of L. The results we need are contained in [15]; in particular, we need the following result.
Theorem 5 Suppose that | gr(L)| > gr(K S ), where K S is the canonical line bundle of S.
(i) If p = 1, the spectrum of Ĥ consists of an explicitly known set of eigenvalues E q ; (ii) if p > 1, this set is contained in the spectrum of Ĥ and its elements are the lowest eigenvalues.
In both cases the space of eigensections of eigenvalue E q gets identified with a cohomology group of the form H 0 (S, ·), built from L or from L −1 according to the sign of its degree.
Definition 7 The subset Spec hol (Ĥ) of Spec(Ĥ) defined in the previous theorem is called the holomorphic spectrum of Ĥ. The eigensections with eigenvalue E q ∈ Spec hol (Ĥ) form the q-th holomorphic Landau level.
According to Theorem 5, Spec hol (Ĥ) does not depend on the chosen prequantization bundle. Thus, Spec hol (Ĥ) is constant for the family of Landau operators parametrized by J(S) ≅ Pic 0 (S) and we denote it σ hol .
In particular, the eigensections of the Landau operator Ĥ(L 0 ) corresponding to the flat line bundle L 0 are described in terms of holomorphic sections of twists of K −q S ⊗ L by L 0 . We assume that gr L > 0, since if this were not the case it would be enough to replace L by L −1 in order to obtain the corresponding results. Hence, for any integer q ≥ 0 with gr(K −q S ⊗ L) > gr K S , there exists an eigenvalue E q ∈ σ hol corresponding to a holomorphic Landau level. Lemma 1 The family P q of spectral projections associated with an eigenvalue E q ∈ σ hol fulfills the hypotheses (A1-A3) of the Adiabatic Theorem.
Therefore, the q-th holomorphic Landau level defines a spectral bundle P q → J(S) and we endow it with the induced connection ∇ q = P q •∇. Let ∇ q be the connection on K −q S ⊗ L obtained by twisting the connection of L with the connection of K −q S induced by the Levi-Civita connection.
Theorem 6 Let P q → J(S) be the spectral bundle defined by the q-th holomorphic Landau level, we have:
Determination of adiabatic transport and Hall conductivity
The equivalence of the Nahm transform with the integral functor associated to the jacobian, allows us to study the spectral bundles by means of the machinery of integral functors. In particular by Proposition 1 we have the following result.
Corollary 1 The first Chern class of the spectral bundle P q → J(S) is expressed in terms of the class of L Θ , where L Θ is the principal polarization of J(S) defined by the theta divisor Θ.
Now we can determine the adiabatic transport and Hall conductivity.
Stability of spectral bundles
The vector bundle L d = Φ 0 J (L d ), where L d → S is a line bundle of degree d with d > deg K S , is called in the literature the d-th Picard sheaf on J(S). These sheaves have been studied by several authors, among them Ein and Lazarsfeld [17]. Using their results we can prove the following: the spectral bundles P q → J(S) associated with the holomorphic Landau levels are stable, as holomorphic vector bundles, with respect to the polarization of the jacobian J(S) defined by the theta divisor Θ.
The stability of the spectral bundles P q → J(S) is a remarkable fact, since in the description of the fractional quantum Hall effect proposed by Varnhagen [18] there also appear stable bundles. These facts seem to suggest a connection between the stability of spectral bundles and the interpretation of quantum Hall effect.
Analytic torsion and Quillen metrics on holomorphic determinant bundles
Although we have determined the Hall conductivity as the integral of the adiabatic conductance, our final aim is the computation of the adiabatic conductance itself, which controls the fluctuations of the Hall conductivity and is proportional to the adiabatic curvature. The adiabatic curvature Ω Pq of the spectral bundle ( P q , ∇ q ) → J(S) is the trace of its curvature, that is, Ω Pq = Tr Ω ∇q . Equivalently, Ω Pq is the curvature of the connection det ∇ q induced on the determinant bundle det P q → J(S). On the other hand, the connection ∇ q on the holomorphic vector bundle P q → J(S) gets identified with the Chern connection of the Hermitian metric ⟨ , ⟩ L 2 induced by the L 2 metric of the Hilbertian bundle H + → J(S).
Hence, the computation of the adiabatic curvature is reduced to finding the curvature of the Chern connection of the determinant bundle det P q → J(S) for the Hermitian metric , L 2 induced by the L 2 metric.
However, our strategy is to compute first the curvature of the Chern connection on the determinant bundle with respect to the Quillen metric, since: (i) The metric ⟨ , ⟩ L 2 is conformal to the Quillen metric.
(ii) It is the natural C ∞ metric defined on the determinant bundles in terms of the analytic torsion, or equivalently, in terms of the zeta-regularized determinants of certain elliptic operators.
In order to calculate the curvature of Quillen metric we use the techniques developed by Bismut, Gillet and Soulé [12,13,14].
Let π : X → T be a holomorphic submersion and let E → X be a holomorphic vector bundle. The direct image Rπ * E admits a determinant that is a holomorphic line bundle det Rπ * E → T whose dual λ KM (E) = (det Rπ * E) −1 → T is called the Knudsen-Mumford determinant.
For every t ∈ T , let X t = π −1 (t) be the fiber over t and let H i (X t , E t ) be the cohomology of the restriction E t of E to X t . For every fiber λ KM (E) t of the Knudsen-Mumford determinant there is a canonical identification with the alternating tensor product of the determinants of these cohomology groups. Let g X/T be a C ∞ relative Kähler metric on π : X → T , let H E be a Hermitian metric on E → X and let D = √2 (∂ E + ∂ * E ) be the Dolbeault-Dirac operator of E. One defines a family of vector spaces λ(E) → T whose fibers are λ(E) t = (det Ker D t ) −1 ⊗ det Coker D t .
The L 2 metric induces a Hermitian metric ⟨ , ⟩ L 2 on λ(E) which, in general, is not C ∞ , due to the jumps in the dimension of Ker D t . However, if det D * D denotes the function on the parameter space T whose value at t ∈ T is the regularized determinant det D * D(t) = det D * t D t of D t , then Quillen proved the following result.
Theorem 9 (Quillen) The Quillen metric, defined on the determinant bundle λ(E) as ⟨ , ⟩ Q = (det D * D) ⟨ , ⟩ L 2 , is a C ∞ Hermitian metric. The value of the function T (E, H E , g X/T ) = (det D * D) 1/2 at t ∈ T coincides with the Ray-Singer analytic torsion T (E t , H E t , g X t ) of the vector bundle E t → X t . We say that T (E) = T (E, H E , g X/T ) is the relative analytic torsion of E → X.
For every t ∈ T there is a canonical isomorphism between the fibers of the Bismut-Gillet-Soulé and Knudsen-Mumford determinants. One says that π : X → T is locally Kähler if there exists an open cover U of T such that for every U ∈ U there exists a Kähler metric on π −1 (U ).
Let R X/T , Ω E be the curvatures of (Λ 1,0 T * (X/T ), g X/T ) and (E, H E ), respectively. We have the following key result.
(ii) The curvature of the Quillen metric on λ(E) ≅ λ KM (E) is the degree 2 component of a differential form on T obtained, up to the factor 2πi, by integrating along the fibers (denoted ∫ X/T ) characteristic forms built from R X/T and Ω E .
Adiabatic conductance of spectral bundles
The natural projection π J(S) : S × J(S) → J(S) is locally Kähler. Therefore, we can apply the results of Bismut, Gillet and Soulé.
Theorem 11
The adiabatic curvature Ω Pq of P q → J(S) is given by iΩ Pq /2π = Im H Θ + (i/π) ∂ ∂̄ ln T (π * S (K −q S ⊗ L) ⊗ P S ), where H Θ is the principal polarization of J(S) and T (π * S (K −q S ⊗ L) ⊗ P S ) is the relative analytic torsion. | 2022-06-28T03:52:28.435Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "0e48922b10162cd30d3296e4968c6cf07c00893a",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/175/1/012014/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0e48922b10162cd30d3296e4968c6cf07c00893a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
5763380 | pes2o/s2orc | v3-fos-license | The Decuplet Revisited in $\chi$PT
The paper deals with two issues. First, we explore the quantitative importance of higher multiplets for properties of the $\Delta$ decuplet in chiral perturbation theory. In particular, it is found that the lowest order one--loop contributions from the Roper octet to the decuplet masses and magnetic moments are substantial. The relevance of these results to the chiral expansion in general is discussed. The exact values of the magnetic moments depend upon delicate cancellations involving ill--determined coupling constants. Second, we present new relations between the magnetic moments of the $\Delta$ decuplet that are independent of all couplings. They are exact at the order of the chiral expansion used in this paper.
I. INTRODUCTION
The success of chiral perturbation theory (χPT) [1] for understanding properties of the pseudoscalar mesons is now well established [2]. The approach is based on the existence of a systematic expansion in derivatives of the pion's field and the pion's mass, whereby m π divided by some large scale, generated by the theory itself and typically ∼ 4πf π , becomes the perturbative expansion parameter. For the purely mesonic sector this expansion is in fact quadratic in the pion's mass, so that even for the SU f (3) generalization, (m K /4πf π ) 2 is still a reasonably small parameter.
The application to the baryon sector has however from the outset been confronted with a variety of difficulties [3]. For example, how to handle the nucleon's mass was a problem only relatively recently solved [4,5]. One unavoidable complication when including baryons is that the chiral expansion is itself more complicated [3,4] than in the purely mesonic case, and the expansion parameter is now only linear in m π (m K ). A second pertinent complication involves the issue of resonances.* Originally it was conjectured [4] that all such resonances (and most notably the ∆) need not be included as an explicit degree of freedom, i.e. that they could be "integrated out". While at least one notably active group [6] has maintained this viewpoint, † most researchers have subsequently found it untenable [7]. The ∆ degree of freedom was first introduced into χPT by Jenkins and Manohar in Ref. [8]. Recently, the importance of the Λ * (1405) for understanding threshold kaon-nucleon scattering lengths has also been realized [10,11,12].
In this paper we discuss the role of higher multiplets for the properties of the ∆ decuplet at the one-loop level in χPT. We consider the O(p 3 ) correction to the decuplet masses and the O(p 2 ), one-loop correction to the magnetic moments of the decuplet (the electromagnetic vertex has chiral power −1, excluding whatever power may be assigned to the electric charge). Our criterion for which multiplets to consider is that the average mass splitting, δ h , between the multiplet and the ∆ decuplet be less than the mass of the kaon, δ h < m K . This criterion is based on the fact that when it is met, an expansion in m K /δ h is not justified, so that loop effects involving these higher-multiplet members as intermediate states cannot be absorbed into higher order terms in the chiral lagrangian. Such loop effects place a fundamental limitation on any formulation of χPT that omits the higher multiplet as an explicit degree of freedom, even if the chiral expansion were then executed to all orders. By the convention of Ref. [14], note that the chiral power of all δ is 1.
For the case of the nucleon octet, Eq. (1) is clearly met (M 10 = 1377 MeV and M 8 = 1151 MeV), and this is indeed the driving phenomenological reason for expecting that the ∆ cannot be ignored in descriptions of the nucleon.
We focus here on the properties of the ∆ decuplet as opposed to those of the nucleon octet because of the simple reason that the mass splitting δ h satisfies the criterion specified by Eq. (1) for at least two resonances.
We will show that the Roper has a nontrivial effect on both decuplet mass splittings and magnetic moments.‡ mη is obtained at this order in χPT using GMOR [13].
We should also note that these results lead to a more general statement.When adding a loop one must also add resonances whose masses are within a kaon mass of the resonances already included.
The rest of this paper is organized as follows. In Section (II) we enumerate the various multiplets considered and their interactions (and experimentally obtained couplings) with the ∆ decuplet, utilizing the heavy baryon formalism of Jenkins and Manohar [5]. In Section (III) we present the one-loop, O(p 3 ) contributions to the decuplet masses, focusing on the violation of the Decuplet Equal Spacing rule [15], the sole quantity for which χPT makes a prediction at this order in the chiral expansion [9,14]. In Section (IV) we consider the one-loop, O(p 2 ) results for the magnetic moments of the decuplet, a subject first discussed in χPT in Ref. [16], although there adherence to the order of the chiral expansion was not strictly maintained. We demonstrate later that strict adherence is crucial for renormalizability. The focus in both Sections (III) and (IV) is the quantitative importance of the higher multiplets. In addition, new relations for the magnetic moments at this order in χPT are derived that are independent of the intermediate multiplets considered. Their violation would be a clear measure of the importance of higher chiral power terms in the expansion. In Section (V) we conclude with a discussion of the consequence of these results for the loop expansion in general in χPT.
II. HIGHER MULTIPLETS: DEFINITIONS AND COUPLINGS
A number of multiplets § satisfy the criterion of Eq. (1). Fortunately most of these can be eliminated due to symmetry constraints. For example, flavor singlets such as the Λ * (1405) do not couple to a decuplet via an SU f (3) octet (the goldstone bosons). Only slightly less straightforward, a 1/2 − octet (e.g. the N(1535) multiplet) couples only through the lower components of the baryon spinors, which vanish to lowest order in the heavy baryon expansion [5]. (Such states would, in principle, need to be considered in higher-order calculations.) Coupling to the 5/2 − octet likewise vanishes at lowest order. With these eliminations, we obtain that only octets or decuplets of baryons with quantum numbers 1/2 + , 3/2 + , 3/2 − and 5/2 + need be considered. § See e.g. Table 30.4 in the Particle Data Group [17]. As we are at present only concerned with the average coupling of these multiplets with the ∆ decuplet, we ignore potentially interesting questions as to the exact SU f (3) composition of any particular excited state [18]. We also omit from consideration possible exotics.
The most important 1/2 + multiplet (beyond, of course, the nucleon's) is the octet containing the Roper, N(1440). A slight difficulty arises in determining its average mass because one member of the Roper multiplet, the excited Cascade, has not yet been identified. To get a reasonable approximate value for its mass, we use the corresponding GMO relation [19], by which one estimates that M Ξ * = 1790 MeV. The average value of the Roper multiplet, and hence its splitting δ R from the ∆ decuplet, then follows. For the ∆N * π interaction one has a coupling in complete analogy to the leading ∆N π interaction in the heavy fermion limit [8]. The coupling C 2 can be obtained from the observed decay N * (1440) → ∆π, which has a partial width Γ N * →∆π ≈ 90 MeV. Comparing this with the decay of the ∆, Γ ∆→N π ≈ 120 MeV, one obtains its value, Eq. (9). In principle, one other 1/2 + multiplet meets our criterion, the octet containing the N(1710), with δ h ≈ 470 MeV. However, the best estimate [17] for the partial width for N(1710) → ∆π is ≈ 30 MeV, which implies that the relevant coupling constant is significantly suppressed compared to that in Eq. (9). We therefore ignore this multiplet in our subsequent calculations, as it amounts to only a small correction to that of the Roper octet.
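The numbers quoted here can be checked with a few lines of arithmetic; the input masses assumed for the known Roper-octet members (N(1440), Λ(1600), Σ(1660)) and the isospin-multiplicity weighting used for the average are our illustrative choices, not values listed in the text.

```python
# Check of the Roper-octet estimates: GMO relation 2(M_N + M_Xi) = 3 M_Lambda + M_Sigma
# solved for the missing excited Cascade, then the multiplicity-weighted octet average.
M_N, M_Lam, M_Sig = 1440.0, 1600.0, 1660.0            # MeV, assumed input masses
M_Xi = (3 * M_Lam + M_Sig) / 2 - M_N                   # -> 1790 MeV, as quoted above
print("M_Xi* =", M_Xi)

# Isospin-multiplicity-weighted octet average and splitting from M_10 = 1377 MeV
M_R = (2 * M_N + M_Lam + 3 * M_Sig + 2 * M_Xi) / 8
delta_R = M_R - 1377.0
print("average Roper-octet mass =", M_R, " delta_R =", delta_R)   # ~1630 MeV, ~253 MeV < m_K
```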
For the lowest lying 3/2 − octet, we obtain its mass splitting directly from the experimentally measured masses [17].** The interaction with the ∆ decuplet is, to leading order in the chiral lagrangian, of the form of Eq. (11). The coupling C * can be determined from the observed decay of the N * (1520), Γ(N * (1520) → ∆π) ≈ 25 MeV, whence one finds that it is relatively suppressed. This relative suppression results both from the smaller branching width and from overall kinematic factors that otherwise tend to enhance N * (1520) → ∆π with respect to N * (1440) → ∆π.
There are two other 3/2 − multiplets, one octet and one decuplet, listed in the Particle Data Group that could potentially satisfy our criterion, Eq. (1). Each is very poorly determined, containing merely one identified member each, the N(1700) and the ∆(1700), respectively. In the case of the N(1700), its coupling to ∆π is experimentally negligible and hence can be safely ignored. On the other hand, the coupling to the ∆(1700) is not so readily ignored, having a decay width Γ(∆(1700) → ∆π) ≈ 120 MeV. We therefore explicitly keep this decuplet, assigning a value for its intermultiplet spacing with the ∆ decuplet. From the aforementioned decay width, and an interaction of the form of Eq. (11), we obtain the ∆ * ∆π coupling given in Eq. (14). Here we have used the convention of Ref. [9] for the SU f (3) algebra factors (whereby, for the ∆∆π coupling, H 2 ≈ 4 is typical). As in the case of the N(1520), overall kinematic factors, in addition to the available phase space, yield a rather suppressed value of the coupling. Indeed, as we will soon see, due to Eqs. (12) and (14) little would have been lost had we ignored the 3/2 − multiplets altogether. Nevertheless, they have been included for completeness. We come finally to the 5/2 + states. The lowest such multiplet is the N(1680) octet. It has an intermultiplet mass splitting with the ∆ decuplet of δ h = 496 MeV. We conclude that there is no 5/2 + multiplet that meets our criterion, Eq. (1).
This then concludes our examination of the relevant multiplets.By far the most important, as we will presently see, is the Roper octet.
III. DECUPLET EQUAL SPACING RULE
The one-loop, O(p 3 ) results for the masses of the decuplet involving intermediate ∆ decuplet and nucleon octet states have been published previously [14]. The contribution, δM 10 , from the 3/2 − multiplet, for the case m > δ 8 * , involves the SU(3) algebra factors β [9,14]. As discussed in Ref. [14], the counterterms necessary to renormalize these terms are of the form δ (for the δ 8 * m 2 divergences). As the DES rule is exact for all terms through m 2 , all counterterms (divergences) cancel at this order of the chiral expansion in the violation of the DES rule.
Including all multiplets, the one-loop, O(p 3 ) violation to the DES rule is expressed in terms of the functions V and V * , which in turn involve the function W (m, δ, µ) for the case m > |δ|. †† ‡‡
†† Note that, unlike our convention in [14], all δ h are now strictly positive, hence the explicit sign in the function V above.
‡‡ Note the correction from [14] regarding the arctangent term in the case m > |δ|.
As was already mentioned in Section (II), the contribution from the 3/2 − multiplets is essentially negligible due to their suppressed coupling constants, Eqs. (12) and (14). Explicitly, we find values which are indeed negligible. Hence we omit further consideration of the 3/2 − multiplets. This is not true of the Roper octet. Evaluating, one finds that the ratio of the Roper to nucleon multiplet contributions is fixed once result (9) is used. Taking C 2 = 2 one obtains a Roper contribution which is, in absolute magnitude, as large as the average experimental value of 6.8 MeV. It clearly cannot be ignored.
IV. DECUPLET MAGNETIC MOMENTS
The topic of the magnetic moments of the decuplet in the context of chiral perturbation theory was first discussed in the work of Ref. [16]. Apart from the inclusion of the Roper as an intermediate state, our work differs from Ref. [16] in the treatment of SU f (3) symmetry. In our calculation, the symmetry of the decuplet states is broken through the meson masses appearing in the one-loop calculation. The meson masses are taken to be proportional to the current quark masses, with the up and down quark masses being equal. The strangeness [9] and charge dependences [22] of the baryon masses are regarded as effects of chiral power 1 or more. The quantity f K − f π has chiral singularity ∼ m 2 π log m 2 π [23] and has chiral power 2. Our calculations of the decuplet mass splittings and magnetic moments are limited to chiral power O(p 3 ) and O(p 2 ), respectively. Hence, we set f K = f π and do not include in the one-loop calculation the sigma terms from L 1 which produce strangeness dependence of the baryon masses at the tree level. We ignore charge dependence of the masses altogether. The advantage of this strategy in the calculation of baryon masses is well known [9,14]. The counterterms which appear at the one-loop level simply renormalize the sigma terms. We find a similar result in the magnetic moment calculation at the one-loop level, namely, that the counterterms are strictly proportional to the baryon charge and hence renormalize the tree-level decuplet magnetic moment term, Eq. (24) below. These advantages are lost if f K is not set equal to f π [16].
The lowest order term in the chiral lagrangian for the magnetic moment of the ith member of the ∆ decuplet is given by Eq. (24) [24], where q i is the charge of the ith member. The one-loop, O(p 2 ) corrections to the magnetic moments result from vertex corrections in which the external photon attaches to the meson propagator [16] and receive contributions from intermediate states with either a 3/2 + or a 1/2 + baryon. Photon attachments to the intermediate baryon are further suppressed by m π /M N , as are the contributions from 3/2 − baryons. The latter are hence ignored, as they form part of the higher-order contribution in the chiral expansion. Note that the η meson, being electrically neutral, also does not contribute at the order being considered. Following the notation of [24], the magnetic moment of the decuplet members, µ 10 i , at the O(p 2 ), one-loop level of the chiral expansion is given, in nuclear magneton units (e/2M N ), by Eq. (25). The function F (δ, m, µ) is ultraviolet divergent and is given by Eq. (26). The expression for the nonanalytic terms in F (δ, m, µ) appeared in Ref. [16].
The coefficients α i j and β i j are simply related to the coefficients α ij and β ij of Ref. [16]. We multiply the coefficients α ij by 3 so that they add up to the charge of decuplet member i. Unlike Ref. [16], we use the same mass for all members of a baryon multiplet. Accordingly we add the contributions of π ± and of K ± . The sum over j in Eq. (25) runs over two terms, π and K. The resulting coefficients have a surprising simplicity. First, α i j = β i j . Second, they may be expressed in terms of any two of the following three: the charge q i , the isospin I i 3 , and the hypercharge Y i of decuplet member i. All three are traceless in any SU(3) multiplet space. We choose to use charge and isospin.
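The tracelessness statement is easy to verify explicitly; the following sketch (with quantum numbers taken from the standard decuplet assignments) checks that charge, isospin projection and hypercharge each sum to zero over the ten members.

```python
# Verify that q, I3 and Y are each traceless over the 3/2+ decuplet.
from fractions import Fraction as F

# (name, charge q, isospin projection I3, hypercharge Y)
decuplet = [
    ("Delta++", 2, F(3, 2), 1), ("Delta+", 1, F(1, 2), 1),
    ("Delta0",  0, F(-1, 2), 1), ("Delta-", -1, F(-3, 2), 1),
    ("Sigma*+", 1, 1, 0), ("Sigma*0", 0, 0, 0), ("Sigma*-", -1, -1, 0),
    ("Xi*0",    0, F(1, 2), -1), ("Xi*-", -1, F(-1, 2), -1),
    ("Omega-", -1, 0, -2),
]
for idx, label in [(1, "charge"), (2, "I3"), (3, "hypercharge")]:
    print(label, "sum =", sum(member[idx] for member in decuplet))
# each sum is 0, and q = I3 + Y/2 holds member by member (Gell-Mann-Nishijima)
```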
Eq. (29) is the key result for renormalizability. As a consequence of these relations, the counterterm for the ultraviolet divergences in F (δ, m, µ) (which are m independent) is simply proportional to δL M , Eq. (24), and is therefore absorbed into a redefinition of µ c . Note that this is precisely the same procedure as for the one-loop, δ-dependent contributions to the masses. We emphasize that this procedure, and hence renormalizability, is tightly wedded to the systematics of the chiral expansion (whereby δ and f π have fixed values in Eq. (25)).
The simplicity of the coefficients α i j and β i j allows a great simplification of Eq. (25). We introduce the combinations G π and G K and rewrite the decuplet magnetic moments in the form of Eq. (31).
We can express the magnetic moments of all other decuplets in terms of these two magnetic moments.Specifically, we derive the new relation that Explicit predictions for the eight other decuplet magnetic moments at the one-loop O(p 2 ) level are listed below.
Independent of the explicit mulitplets included as intermediate states, violations to these relations are strictly due to higher-order terms in the chiral expansion.Note that the magnetic moment of the Σ * 0 continues to be zero at this order in the expansion.Analogous relations follow for the quadropole moments [25].We note that these relations are not obeyed by quenched lattice QCD [26].This last result is perhaps not surprising as § § The Ω − , decaying only weakly, is sufficiently long lived to allow such measurements.Since all other members of the decuplet decay through the strong interaction, it is a challenge to extract their magnetic moments from experiment.
the quenched calculations do not contain disconnected, quark loop diagrams [27].
The explicit expression for µ ∆ 0 in terms of the functions F (δ, m, µ) is given below.
With the help of Eqs.(26) it is easy to verify that µ ∆ 0 is renormalization scale independent.
Since the magnetic moment of the ∆ 0 is given strictly by loop effects, it is an appropriate measure of the relative importance of the Roper at the one-loop level. Explicitly one sees from Eq. (35) that, as in the case of the masses [9], there is a strong cancellation between the ∆ decuplet and nucleon octet intermediate states. This implies that µ ∆ 0 is a very delicate function of H 2 and C 2 and therefore potentially very sensitive to the Roper contribution ( C 2 ). Ambiguity in this regard resides in the fact that H 2 and C 2 are not sufficiently well known for a reliable prediction of µ ∆ 0 to be made. To illustrate this point, we quote the results using two "representative" values for the couplings, both with and without the Roper included. For the coupling values used in Ref. [14], one obtains only a mild dependence on the Roper, while for the couplings used by Ref. [16], we find a much more dramatic dependence on the Roper. The difference in µ ∆ 0 between these two parameter sets is clearly sizable, as is the relative importance of the Roper.
If instead we choose to use as input the recent, model-dependent extraction [28] of the magnetic moment of the ∆ ++ , µ ∆ ++ = 4.5 ± 0.5, to infer µ ∆ 0 using the relations of Eq. (34), one then obtains µ ∆ 0 = −0.2 ± 0.2. If this is indeed the data, then by Eq. (36) the contribution of the Roper is, in absolute magnitude, significant. As in the case of the mass splittings, a formulation of χPT without the Roper as an explicit degree of freedom is intrinsically incapable of predicting such "data".
V. CONCLUSIONS
From the results of the last two sections a few points are worth discussing.
First, that while the magnetic moment of the ∆ 0 depends sensitively on the cancellation between terms depending on relatively ill-determined coupling constants, the relations between the magnetic moments of the ∆ decuplet given in Eq. (34) are rigorous predictions of χPT at O(p 2 ). We urge experimental activity to confront these predictions with data. A new measurement of µ Ω − with higher precision will be most useful. At least two other decuplet magnetic moments need to be measured, hopefully with a precision of ∼ 0.1 n.m. Second, that at the level of one-loop corrections in χPT, the contribution of the Roper octet to properties of the ∆ decuplet is as important as any other multiplet's contribution. We have seen this result quantitatively in the case of the DES rule and the magnetic moment of the ∆ 0 . Both of these quantities are good measures of the one-loop effects, as they are each zero at lower order in the chiral expansion. We expect that these results are illustrative and that they generalize to all one-loop calculations for the ∆ decuplet. Since the mass splitting, δ R , between the Roper octet and the ∆ decuplet is less than the kaon's mass, a Taylor expansion in m K /δ R is not permissible. Hence, these loop effects cannot be absorbed within higher-order terms of the chiral expansion of a theory not containing the Roper as an explicit degree of freedom. In such a theory it is indeed difficult to justify going to one loop or higher without inclusion of the Roper. Phenomenologically successful results would have to be considered merely fortuitous unless shown to be a result of more general considerations (as in the relations of Eq. (34) for the magnetic moments).
Third, that the above argument can be repeated in kind for the one-loop corrections to the Roper resonance. That is, even higher multiplets, separated in mass from the Roper octet by an amount δ h less than the kaon mass, will a priori be as important quantitatively as the ∆ decuplet for properties of the Roper at the one-loop level. Since such corrections are necessary for a two-loop calculation*** of the baryon masses, we are led to conclude that the loop expansion in general in the baryon sector of χPT is inevitably wedded to the necessity of including more and more multiplets in the theory as fundamental fields. While such a result may not be true in a particular limit of QCD (e.g., m u,d,s → 0 or N c → ∞), it is a consequence of the experimental fact that, on the average, m π ≈ δ h . | 2018-04-03T03:27:38.515Z | 1995-08-21T00:00:00.000 | {
"year": 1996,
"sha1": "d2313ce999e2989bedfc95f8bb2d3e3fa05dd4e3",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/hep-ph/9508340",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d2313ce999e2989bedfc95f8bb2d3e3fa05dd4e3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
267147321 | pes2o/s2orc | v3-fos-license | Revealing neuropilin expression patterns in pancreatic cancer: From single‑cell to therapeutic opportunities (Review)
Pancreatic cancer, one of the most fatal types of human cancer, includes several non-epithelial and stromal components, such as activated fibroblasts, vascular cells, neural cells and immune cells, which are also involved in other types of cancer. Vascular endothelial cell growth factor 165 receptors 1 [neuropilin-1 (NRP-1)] and 2 (NRP-2) play a role in the biological behaviors of pancreatic cancer and may represent potential therapeutic targets. The NRP family of proteins serve as co-receptors for vascular endothelial growth factor, transforming growth factor β, hepatocyte growth factor, fibroblast growth factor, semaphorin 3, epidermal growth factor, insulin-like growth factor and platelet-derived growth factor. Investigations of mechanisms that involve the NRP family of proteins may help develop novel approaches for overcoming therapy resistance in pancreatic cancer. The present review aimed to provide an in-depth exploration of the multifaceted roles of the NRP family of proteins in pancreatic cancer, including recent findings from single-cell analysis conducted within the context of pancreatic adenocarcinoma, which revealed the intricate involvement of NRP proteins at the cellular level. Through these efforts, the present study endeavored to further reveal their relationships with different biological processes and their potential as therapeutic targets in various treatment modalities, offering novel perspectives and directions for the treatment of pancreatic cancer.
Introduction
Vascular endothelial cell growth factor 165 receptor 1 [neuropilin-1 (NRP-1)] is a protein-coding gene on human chromosome 10p11.22 that encodes the 923-amino-acid NRP-1 protein (103 kDa), a cell-surface receptor containing protein domains that allow its participation in different types of signaling pathways that control cell migration (1). A related gene, NRP-2 (human chromosome 2q33.3), encoding a further member of the family, neuropilin-2 (NRP-2), which contains 931 amino acids (104 kDa), was identified as a high-affinity receptor for the Semaphorins (2). Previous studies revealed that NRP family proteins exert multiple functions as co-receptors for vascular endothelial growth factor (VEGF) (3), transforming growth factor beta (TGF-β) (4), hepatocyte growth factor (5), fibroblast growth factor (FGF) (6), and Semaphorin 3 (SEMA3) (7). Because NRP family proteins interact with multiple ligand receptors, they may be involved in cancer occurrence and development and might serve as therapeutic targets for gastric cancer (8), glioma (9), endometrial cancer (10), bladder cancer (11), thyroid cancer (12), breast cancer (13), gallbladder cancer (14), colorectal cancer (15), and pancreatic adenocarcinoma (PDAC) (16). Moreover, NRP signaling has been associated with several biological processes, including pro-tumorigenic cell proliferation, invasion, metastasis, and tumor growth in PDAC (17). NRP signaling can also confer resistance to chemotherapeutic agents in clinical settings by imitating therapy-resistant cancer stem-cell properties (18). Recent advances in single-cell analysis (SCA) have revealed multiple functional roles of cancer-related cellular proteins (19), but the roles of NRP remain poorly understood. Here, we discuss recent advances in NRP biology in PDAC based on SCA-based precision studies.
NRP signaling
NRP-1 contains a large N-terminal extracellular domain, including complement-binding, coagulation factors V/VIII (CF V/VIII), and meprin domains.The two NRP-1 (complement-binding) CUB domains and the amino-terminal CF V/VIII domain are crucial for SEMA3A binding.The amino-terminal NRP-1 CF V/VIII domain remains the only required for binding to VEGF-165.Therefore, NRP-1 exerts its biological functions by binding with Semaphorin ligands (20).A previous study revealed that SEMA3A can inhibit axonal growth and induce neuronal apoptosis after binding to NRP-1, with the membrane-proximal meprin, A5/NRP1, protein tyrosine-phosphatase µ (MAM) domain of NRP-1.NRP-1 is involved in regulating cell survival by mediating the effects of its ligands, such as VEGF.NRP-1 promotes survival in cancer cells by activating signaling pathways, such as the PI3K/AKT pathway (21).The activation of these pathways helps in tumor progression and therapy resistance.Additionally, NRP-1 plays an essential role in cell migration by mediating the effects of Semaphorins and VEGF.NRP-1 can enhance the invasive and migratory capabilities of cancer cells.NRP-1, by interacting with its ligands, can activate downstream signaling pathways, such as Src kinases, which modulate cytoskeletal dynamics and cell adhesion, thereby promoting cell migration and invasion (22).This causes tumor metastasis, where cancer cells spread to other body parts.These results indicate that the meprin domain is involved in forming a higher-order receptor complex.NRP may play a key role in cell-to-cell interaction via their responses to ligands (Fig. 1) (23).Moreover, a recent report indicated the involvement of the NRP-1 signal in the symmetric cell division to expand breast cancer stem-like cells (24).NRP-1 has been overexpressed in various cancer types, including lung, breast, pancreatic, and prostate cancers.NRP-1 affects cell survival, migration, and attraction by binding to ligands and various co-receptors and may serve as a cancer biomarker of refractory tumors (25).
Neuropilins (NRPs) are transmembrane glycoproteins that act as co-receptors for a variety of ligands, including vascular endothelial growth factor (VEGF), semaphorins (SEMA), and transforming growth factor-beta (TGF-β). These ligands bind to NRPs, which then interact with and enhance the signaling of their respective receptors, such as the VEGF receptor (VEGFR) and the TGF-β receptor (TGFBR). VEGF is a key regulator of angiogenesis, and its binding to NRP-1 enhances VEGFR-2 signaling, leading to endothelial cell proliferation and migration. SEMA3s are involved in axon guidance and immune regulation, and their binding to NRP-1 and NRP-2 can activate downstream signaling pathways such as RhoA/ROCK and PI3K/Akt. TGF-β is a multifunctional cytokine that plays a critical role in cell growth, differentiation, and immune regulation. Its binding to NRP-1 enhances TGFBR signaling, leading to downstream activation of Smad2/3 and other signaling pathways. NRPs have been reported to interact with various signaling pathways, including TGF-β, PDGF, FGF, c-Met, and others (Fig. 1). Despite some controversy surrounding these interactions, current knowledge suggests that NRP-1 is involved in cancer stem-cell maintenance and progression through the Wnt/β-catenin signaling pathway (30), whereas NRP-2 has been associated with lymphangiogenesis and lymphatic metastasis in certain cancer types (31).
Intractable PDAC
Pancreatic cancer, also known as PDAC, is one of the most aggressive cancers globally. A PDAC diagnosis carries a 5-year survival rate of <10% (32,33). PDAC's clinical aggressiveness has been attributed to i) the lack of PDAC-specific symptoms (rendering early-stage detection difficult) (34-36), ii) early metastases (typically spreading to marginal tissues and distant organs, including the liver) (34,35), and iii) chemo- and radiotherapy resistance (34,37). Importantly, many other factors, such as the topographical, vascular, and ductal pancreatic anatomy (38) and the complex involvement of the stromal components of PDAC (39), may contribute to the high disease recurrence rates.
Studies of six cohorts, comprising 136,000 cells from 71 cases of PDAC, were analyzed to understand the complexity of PDAC's cellular components and indicated that PDAC contains various cell types, including cancer-associated fibroblasts (CAFs). CAFs facilitate cell-to-cell communication and are involved in PDAC spread and therapeutic resistance (40,41). They have been classified into several subpopulations, including inflammatory CAFs (iCAFs), myofibroblastic CAFs (myCAFs), and antigen-presenting CAFs (apCAFs), based on gene expression (41). Diverse CAF subpopulations have been reported for nine cancer types (42). PDAC characterized by iCAFs, which express interleukin 6 (IL6), collagen type XIV alpha 1 chain (COL14A1), lymphocyte antigen 6 complex locus C1 (LY6C), etc., was classified as 'classic-type' with a strong inflammatory profile (41). PDAC characterized by myCAFs, which express actin alpha 2, smooth muscle (ACTA2/aSMA), transgelin (TAGLN), thrombospondin 2 (THBS2), leucine-rich repeat containing 15 (LRRC15), etc., was considered 'basal-type' with a strong myofibroblast profile (41). ApCAFs, which represent a distinct subset of CAFs expressing major histocompatibility complex class II (MHC II) and CD74, possess antigen-presenting capabilities. However, they notably lack the expression of co-stimulatory molecules, such as CD40, CD80, and CD86, and are therefore unable to initiate the typical activation response in CD4+ T cells. The specific role of apCAFs remains unclear, but a widely accepted hypothesis is that they might attract CD4+ T cells by expressing MHC II and subsequently interfere with their normal functionality. This interference causes CD4+ T cell inactivation or differentiation into regulatory T cells, thereby potentially contributing to the development of an immunosuppressive tumor microenvironment (43,44). Analysis of inter-cellular communication via ligands and their receptors indicated that sonic hedgehog (Shh)-mediated signals in CAFs suppressed cancer cell proliferation and progression in a PDAC model (45).
NRP expression in single PDAC cells
Few reports have focused on NRP expression at the single-cell level in PDAC; we therefore used a published single-cell database (https://zenodo.org/record/6024273#.Y7T3tNXP1D8) to examine 136,000 cells from 71 patients with PDAC (41). We found that human PDAC cells expressed both NRP-1 and NRP-2, whereas ductal cell type 1, another cell cluster, was positive for NRP-1 but not NRP-2. Thus, NRP-1 and NRP-2 appear to allow ductal cells in the pancreas to fulfill different functions. In stellate cells, NRP-1 expression was higher than that of NRP-2. Fibroblasts, macrophages, and endothelial cells expressed substantial amounts of both NRP-1 and NRP-2. Endocrine cells expressed very little NRP-1 or NRP-2, indicating that cases with aggressive phenotypes have fewer endocrine cells. MyCAF cells tended to express both NRP-1 and NRP-2 at high levels, while iCAF cells expressed only NRP-1 at high levels. Additionally, SEMA3A expression was similar in myCAFs and iCAFs, but the number of FGF1-expressing cells appeared slightly higher in myCAFs. Targeting NRP signaling may therefore represent a potential PDAC therapy approach, considering the high expression of NRP-1 and NRP-2 in ductal cells and fibroblasts.
Therapeutic targeting of NRP-1-positive cells in PDAC can regulate endothelial-to-mesenchymal transition (EndMT), an important source of fibroblasts in pathological disorders, thereby reducing tumor fibrosis and PDAC progression (46). A tumor-penetrating peptide has been reported to act via a transcytosis transport pathway that is regulated by NRP-1; this system enhances the transcytosis of silicasome-based chemotherapy for PDAC into NRP-1-positive cells (47). Chimeric antigen receptor T cell (CAR-T) immunotherapy allows T cells to recognize an antigen and attach to antigen-positive cells; thus, CAR-T cells targeting NRPs might be a potential PDAC therapy (8).
Figure 1.NRP-1 and NRP-2 and their related receptors.NRP-1 is a cell membrane-bound receptor that consists of three extracellular domains: i) a1/a2 domain, homologous to the complement proteins C1r/C1s, Uegf and Bmp-1 (referred to as the CUB domain); ii) b1/b2 domain, which is homologous to the coagulation factors V and VIII; and iii) c domain, which is homologous to meprin, A5 protein and protein tyrosine phosphatase µ, as well as TM and CP.NRP-1 contains an SEA sequence in the C-terminus that represents a consensus binding motif for proteins that contain the PDZ (PSD-95, Dlg, ZO-1) domain, such as synectin, which can act as the docking site for interacting partners.The homologies between NRP-1 and NRP-2 are 55% (in the a1/a2 domain), 48% (in the b1/b2 domain), 35% (in the c domain) and 49% (in the CP region) (75).TGF binds to cell membrane-bound serine/threonine kinase receptors that belong to the TGF-β receptor family.PDGFRs consist of extracellular five Ig-like domains and intracellular tyrosine kinase domains, whereas FGFRs consist of extracellular three Ig-like domains and intracellular tyrosine kinase domains.NRP-1 and NRP-2 interact with those receptors and modulate the biological function of cancer cells, vessel and lymphatic endothelial cells, fibroblasts and immune cells, which are components of architectures in tumor microenvironments.NRP, Neuropilin; TM, transmembrane domain; CP, cytoplasmic region; PDGFRs, platelet-derived growth factor receptors; FGFRs, fibroblast growth factor receptors; SEMA, Semaphorins; MRS, Met-related sequence; TK, tyrosine kinase; TGFBR, TGF-β receptor; EMT, epithelial-to-mesenchymal transition; EndMT, endothelial-to-mesenchymal transition; SEA, cytoplasmic domain.
Innovative therapeutic approaches against NRPs-positive PDAC cells
EndMT. Previous studies have described the epithelial-mesenchymal transition (EMT), a mechanism by which epithelial cells lose their polarity and cell-cell adhesion, acquire mesenchymal features, and obtain invasive phenotypes. These features characterize mesenchymal stem cells, chemotherapy-resistant cells, and cancer metastasis (48,49). Extensive transcriptional reprogramming occurs during the EMT process, and this mechanism is useful for determining the presence of metastases and circulating tumor cells, as well as for developing therapies against metastasizing cancer cells (50-52). In particular, high expression of zinc finger E-box binding homeobox 1, Yes-associated transcriptional regulator, FOS like 1, AP-1 transcription factor subunit (FOSL1), and the Jun proto-oncogene, AP-1 transcription factor subunit, indicates the presence of an aggressive breast cancer subtype. These findings confirm the translational importance of the EMT process (50).
PDAC is characterized by an intense fibrotic reaction (i.e., desmoplasia) that is partly responsible for its aggressiveness; thus, NRP-1 could be used to regulate TGF-β1-induced EndMT and fibrosis. Some researchers have promoted NRP-1 as a therapeutic target to reduce tumor fibrosis and slow disease progression in patients with PDAC (46). NRP interacts with many receptors and aggregates signals from other individual receptors, thereby executing EMT and EndMT (Fig. 2). Precision medicines that target NRP-1 and NRP-2 could be tailored to a patient's genetic profile. Precision PDAC medicine may use drugs that target genetic mutations, such as KRAS proto-oncogene, GTPase, and tumor protein P53, and drugs that target the pathways and processes that are altered in PDAC (e.g., cell death, survival, migration, adhesion) (40,56).
Cancer stem cells (CSCs).
CSCs contribute to therapeutic resistance and tumor heterogeneity (57,58) (Fig. 2). A study investigated the multipotent characteristics of CSCs in patients with PDAC (59). NRP signaling contributes to CSC maintenance and development (18). The VEGF/NRP signaling axis is a prime therapeutic target because of its ability to confer resistance to standard chemotherapies (18). NRP-1 interacts with the PDZ (also known as disks-large homologous regions) domain-containing protein GIPC1 and PH domain-containing family G member 5 to activate p38 mitogen-activated protein kinase signaling and CSC survival (60). Targeting either NRP-1 or NRP-2 can inhibit tumor initiation and decrease therapeutic resistance in patients with cancer (18).
The increasing evidence for NRP-1 involvement in cancer has led many studies to investigate its potential as a therapeutic target. Previous studies have focused on the anticancer effects of targeting NRP-1, but little is known about the potential adverse effects associated with such targeting. Further studies, including in vitro and in vivo experiments as well as clinical trials, are needed to understand the full spectrum of effects associated with targeting NRP-1 in patients with cancer, including potential adverse events. Such adverse effects are important because they influence the safety and efficacy of potential future therapeutic targets.
Co-receptor targeting.Cancer cells in the tumor microenvironment produce multiple growth factors that promote lymphangiogenesis from initially enlarged lymphatics to collection within lymphatic vessels (61).Lymphatic enlargement may involve the remodeling of lymphatic vessels with smooth muscle cells (61).Several lymphangiogenic factors, such as VEGF-C/VEGF-D, can promote tumor metastasis (Fig. 2) (61).
NRP-2 acts as an independent or co-receptor for tumor lymphangiogenesis and lymphatic metastasis (Fig. 2) (62).During tumor progression, NRP-2 binds to the ligands VEGF-C/VEGF-D and activates the VEGF-C/VEGF-D/NRP-2 signaling axis, which stimulates lymphangiogenesis regulation in lymphatic endothelial cells and tumor cells (62).A 131I-labeled monoclonal antibody targeting NRP-2 for single photon emission computed tomography imaging allows lymphangiogenesis and tumor angiogenesis visualization in clinical settings (63).Reportedly, mice lacking the transmembrane receptor NRP1, also known as NRP KO mice, exhibit reduced glioma volume and decreased neoangiogenesis, while showing an increased anti-tumorigenic macrophage infiltration (64).Recent studies revealed that NRP-2 may regulate tumor progression through multiple, concurrent mechanisms (i.e., angiogenesis, lymphangiogenesis, EMT, and metastasis).These results indicate that NRP could serve as a therapeutic target for innovative antitumor therapies (62,65).First, NRPs tend to promote cell adhesion, cell-matrix interactions, cell motility, tumor angiogenesis, cell proliferation, and invasion (62,65).Second, NRPs are expressed in a range of cancer cells, including PDAC, as discussed above.Third, NRPs are amenable to targeted inhibition by inhibiting co-receptors or downstream signaling pathways (18).NRPs-ligand interaction inhibitors render NRPs an attractive target for novel therapeutic strategies (66).Preclinical studies revealed NRPs-targeting strategies to be safe, thereby further strengthening the case for their use as innovative antitumor therapies (62,65).
NRP mRNA binding protein.
Recent studies have opened a new era of diagnostics and therapeutics that target the RNA-binding mechanisms of NRP transcripts. The RNA binding protein Lin28B can directly bind to the 3' untranslated region (UTR) of the NRP-1 transcript, thereby increasing NRP-1 mRNA stability and NRP-1 expression (67,68). This interaction has been suggested to activate downstream Wnt/β-catenin signaling, which is involved in CSC or CSC-like cell maintenance and progression in gastric cancer (Fig. 2) (68). It is worth noting that the regulation of Wnt/β-catenin signaling remains a subject of ongoing debate and investigation. While the existing literature suggests an association between Lin28B-bound NRPs and Wnt/β-catenin signaling, further research is needed to fully elucidate the complexities of this relationship. Lin28B can exert multiple functions in cancer development by suppressing the biogenesis of several microRNAs, including let-7 and (possibly) miR-107, miR-143, and miR-200c (69,70). Overexpressed Lin28B can recruit terminal uridylyl transferase 4 (TUT4/ZCCHC11) to pre-let-7 transcripts, leading to their terminal uridylation and degradation (71). Lin28B in cancer is thus linked to derepression of the let-7 family, which can facilitate cellular transformation with stemness. These insights contribute to the development of new strategies for cancer therapy (Fig. 3).
Another study of RNA immunoprecipitation and luciferase reporter analysis indicated that RNA binding protein PUM2 competitively bound to NRP-1 3'UTR with a microRNA, miR-376a, which can suppress breast cancer cell stemness and increase NRP-1 mRNA stability and expression in breast cancer (72).
Understanding of the role of RNA binding proteins (RBPs) in cancer stemness is improving. The NRP axis is crucial for regulating key pathways that are involved in cancer progression. First, NRP-1 helps regulate Wnt/β-catenin signaling (67,68), which is important for maintaining cancer stem-cell populations. Second, NRPs help regulate tumorigenesis and metastasis by modulating oncogenic and metastasis-associated gene expression; this is particularly true for NRP-2 and the mechanisms of tumor lymphangiogenesis and lymphatic metastasis (62). Third, NRP-1 promotes the expression of stem-cell-associated induced pluripotent stem cell genes, including the homeobox transcription factor Nanog and POU class 5 homeobox 1 (Oct-3/4) (73). RBPs help regulate pre- and post-transcriptional processes, such as splicing, mRNA stability, and translation (74); thus, they may contribute to cancer aggressiveness via gene expression regulation in the NRP axis.
Conclusions
Precision medicines that target the NRP axis might improve the diagnosis and treatment of patients with PDAC. The NRP axis contains potential therapeutic targets that could be used to develop new and individualized PDAC treatments.
Various approaches have been used to target the NRP-1 and NRP-2 axes, including gene editing, small-molecule inhibitors, and monoclonal antibodies. These approaches help identify novel therapeutic targets that may improve patient outcomes, as well as biomarkers for risk-based patient stratification and for the selection of the most effective treatment for each patient.
Precision medicines that target the NRP axis are leading the field in an exciting new direction that may revolutionize our ability to treat this deadly disease. | 2024-01-24T16:23:14.104Z | 2024-01-22T00:00:00.000 | {
"year": 2024,
"sha1": "a7064eb51d14d67c55d1f590f81925f5122f2874",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2024.14247/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0d28ef73c92f284c9c3c8ce3dfee3bf4ebeb4eb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2247141 | pes2o/s2orc | v3-fos-license | Gastric carcinosarcoma with rhabdomyosarcomatous differentiation: a case report and review
We report an unusual case of gastric carcinosarcoma with rhabdomyosarcomatous and neuroendocrinal differentiation in a 71-year-old Japanese female. Gastric carcinosarcoma with rhabdomyosarcomatous and neuroendocrinal differentiation is a rare tumor. The tumor developed in the body of the stomach and was exophytic in appearance. On immunohistochemical analysis, part of the tumor was positive for desmin and myoglobin, and part was positive for synaptophysin and vimentin. We conclude that, though rare, gastric carcinosarcoma with rhabdomyosarcomatous and neuroendocrinal differentiation does occur, and we review the condition in the English literature.
We reviewed the scientific literature pertaining to gastric rhabdomyosarcoma and identified several distinctive clinical features of this type of tumor.
Case presentation
A 71-year-old Japanese female was admitted to the National Kyushu Cancer Center in April, 2012. Anorexia and vomiting were not observed; the CEA and CA19-9 levels were below the cutoff levels. Endoscopic studies revealed a Bormann II type lesion in the middle stomach ( Fig. 1); rhabdomyosarcoma was confirmed on biopsy.
We performed a laparoscopic distal gastrectomy with D2 lymph node dissection in May 2012. On macroscopic examination, a 2.0 × 1.5 cm tumoral mass was identified in the gastric body. The tumor invaded as far as the subserosa, but no lymph node metastasis was found. As a result, the operation was considered curative. The patient was discharged on the 14th postoperative day. Adjuvant chemotherapy was not administered, in accordance with her own wishes. The patient has been doing well without any recurrence for 3 years.
Pathologically, the tumor was identified as carcinosarcoma with skeletal muscle and neuroendocrinal differentiation. In the submucosa, there was a proliferation of oval to polygonal cells with hyperchromatic nuclei, prominent nucleoli, and a small amount of eosinophilic cytoplasm, arranged in sheets and accompanied by thin fibro-vascular septa and prominent necrosis. Mitotic figures were frequently seen. Aggregates of histiocytes and granulation tissue were recognized in the surrounding gastric wall (Fig. 2a).
There is no indication that the rhabdomyomatous component of the gastric carcinosarcoma in our case, or in other case reports, represented a metastasis from some other site; indeed, metastases to the stomach by rhabdomyosarcomas are uncommon. De la Monte et al reported no instances of metastasis to the stomach in a review of 17 autopsies among 22 patients who died of embryonal and alveolar rhabdomyosarcoma at Johns Hopkins Hospital between 1929 and 1983 [29].
Gastric carcinosarcoma with rhabdomyosarcomatous differentiation has been reported in twelve cases (Table 1) [4-13]. In these twelve cases, no consistent clinical features with respect to age, sex, or location were identified. However, most cases presented as a polypoid lesion, and in three of the 12 cases the tumor arose in the remnant stomach. Some cases had a poor prognosis: of the eight cases for which survival could be confirmed, three patients died within 1 year. The tendency of gastric rhabdomyosarcoma to metastasize to lymph nodes and lungs is in agreement with previous observations of rhabdomyosarcomas arising at other sites [30].
The histogenesis of gastric carcinosarcoma remains controversial. Some authors have reported a biclonal origin, which supports the collision tumor theory [16,31]. Others have proposed that these tumors are monoclonal and that the sarcomatous elements originate from a common stem cell that has the ability to undergo both epithelial and mesenchymal differentiation [7,32]. In our patient, there were occasional transitions between the carcinomatous and sarcomatous components, and the immunohistochemical findings in the sarcomatous cells may suggest sarcomatous differentiation of the adenocarcinoma. The presence of stem cells with multi-differentiation ability during transdifferentiation can explain the variety of cell types observed in the present tumor.
Our experience with the present case emphasizes that gastric carcinosarcoma with rhabdomyosarcomatous differentiation exhibits aggressive behavior; the tumor, however, is extremely rare.
Conclusions
This report describes a very rare case of gastric carcinosarcoma with rhabdomyosarcomatous lesions. Our patient has survived without tumor recurrence, although most reported cases of gastric carcinosarcoma with rhabdomyosarcomatous differentiation have had a poor prognosis.
Consent for publication
Patient consent for publication of images has been given in writing. | 2018-04-03T03:15:38.604Z | 2016-06-02T00:00:00.000 | {
"year": 2016,
"sha1": "bfb93dd02b4b46926277b6e094401f8fba82f11e",
"oa_license": "CCBY",
"oa_url": "https://surgicalcasereports.springeropen.com/track/pdf/10.1186/s40792-016-0176-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bfb93dd02b4b46926277b6e094401f8fba82f11e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263692851 | pes2o/s2orc | v3-fos-license | Trypanosoma cruzi STIB980: A TcI Strain for Drug Discovery and Reverse Genetics
Since the first published genome sequence of Trypanosoma cruzi in 2005, there have been tremendous technological advances in genomics, reverse genetics, and assay development for this elusive pathogen. However, there is still an unmet need for new and better drugs to treat Chagas disease. Here, we introduce a T. cruzi assay strain that is useful for drug research and basic studies of host–pathogen interactions. T. cruzi STIB980 is a strain of discrete typing unit TcI that grows well in culture as axenic epimastigotes or intracellular amastigotes. We evaluated the optimal parameters for genetic transfection and constructed derivatives of T. cruzi STIB980 that express reporter genes for fluorescence- or bioluminescence-based drug efficacy testing, as well as a Cas9-expressing line for CRISPR/Cas9-mediated gene editing. The genome of T. cruzi STIB980 was sequenced by combining short-read Illumina with long-read Oxford Nanopore technologies. The latter served as the primary assembly and the former to correct mistakes. This resulted in a high-quality nuclear haplotype assembly of 28 Mb in 400 contigs, containing 10,043 open-reading frames with a median length of 1077 bp. We believe that T. cruzi STIB980 is a useful addition to the antichagasic toolbox and propose that it can serve as a DTU TcI reference strain for drug efficacy testing.
Introduction
Chagas disease is a neglected tropical disease and a most elusive one [1,2]. Given the chronic nature of Chagas disease, with an indeterminate phase that is asymptomatic and lasts for decades, the vast majority of carriers do not know that they are infected. For the same reason, there are no solid data on the prevalence of Chagas disease. The epidemiology of Chagas disease is further complicated by (i) the large zoonotic reservoir of Trypanosoma cruzi, which infects all kinds of mammals provided they are preyed upon by the triatomine vectors [3]; (ii) alternative transmission routes, including via the oral mucosa upon consumption of contaminated food [4], via blood or organ donation [5], and transplacentally to the unborn child [6]; and (iii) the genetic heterogeneity and genomic flexibility of T. cruzi, with its (at least) seven different discrete typing units (DTUs) [7,8].
These parasites are also elusive in the human body.Trypanosoma cruzi can infect any type of nucleated cell, and the parasites will replicate intracellularly in the cytosol of the host cell.Infected macrophages distribute the parasites throughout the body.Thus, they can access different tissues and niches to hide in, including the heart and the intestinal tract, the typical sites of chronic pathology [9,10].Trypomastigote T. cruzi do not proliferate but persist extracellularly in the blood thanks to their elaborate immune evasion strategies [11].The intracellular amastigotes, too, can enter a non-replicative state of dormancy [12,13].All this makes Chagas disease difficult to diagnose and even harder to cure, as became apparent in clinical trials with new antichagasic drug candidates [14,15].In the laboratory, research on T. cruzi is hampered by the fact that the disease-relevant stages, the amastigotes, are strictly intracellular and require host cells for in vitro culture.The infectious nature of T. cruzi renders all experimental investigation resource-intensive in terms of biosafety measures, assay time, and overall cost [16].
On a positive note, there has been tremendous technological progress in genomics and reverse genetics with T. cruzi, which has boosted basic research and drug discovery.Classical genetic manipulation based on homologous recombination [17,18] is being replaced by CRISPR/Cas9-mediated gene editing [19], which allows for functional genomics in spite of the fact that T. cruzi lacks RNA interference machinery [20].Genetically engineered reporter strains of T. cruzi have enabled assay formats that better predict the potential of antichagasic molecules for irreversible and cidal action, both in vitro and in vivo [21,22].Here, we present the reference strain T. cruzi STIB980, which is useful for all kinds of investigations including genomics, reverse genetics, and drug efficacy testing.
Cells and Cultivation
T. cruzi STIB980 was originally received in 1983 from Prof. Antonio Osuna, University of Granada.Epimastigotes were cultured at 27 • C in LIT medium supplemented with 2 µg/mL hemin and 10% heat-inactivated fetal calf serum (iFCS) [23].The cultures were diluted weekly.Metacyclogenesis was stimulated by keeping the epimastigotes for 3 to 4 weeks in the same medium.Mouse embryonic fibroblasts (MEFs) were cultured at 37 • C, 5% CO 2 in RPMI medium supplemented with 10% iFCS and >95% humidity.The MEFs were subpassaged weekly at a ratio of 1:10 after 5 min treatment with trypsin.Peritoneal mouse macrophages (PMMs) were obtained from female CD1 mice.A 2% starch solution in distilled water was injected i.p., and the macrophages were harvested 24 h later via peritoneal lavage.The cells were washed and resuspended in RPMI medium containing 1× antibiotic cocktail [24], 10% iFCS, and 15% medium conditioned by LADMAC cells (ATCC ® CRL2420™), which secrete colony-stimulating factor 1 (CSF-1).The macrophages were kept in this medium at 37 • C for 3 to 4 days and then detached with trypsin treatment and cell scrapers.The isolation of PMMs from mice was conducted in accordance with the strict guidelines set out by the Swiss Federal Veterinary Office, under the ethical approval of license number #2374.
Cloning of T. cruzi
The gilded paper clip method was used for cloning (Figure 1A).An exponentially growing epimastigote culture was diluted to 5 × 10 4 cells/mL.The outer wells of a 96-well plate were filled with 100 µL sterile water.15 µL of conditioned LIT medium supplemented with 10% filtered post-culture medium and 20% iFCS was placed at the edge of the other wells so that some space on the well remained dry.Using a gold-plated paperclip, a microdrop of approximately 0.1 µL was transferred from the diluted parasite suspension to the dry space of the well.Two people analyzed the droplet under an inverted microscope.Wells that contained only one parasite were supplemented with 35 µL of conditioned LIT.The plates were incubated at 27 • C and assessed regularly for the outgrowth of the clones.
Isolation of Genomic DNA
Genomic DNA for genome sequencing was isolated from 10 8 epimastigotes.The cells were washed and resuspended in 500 µL of NTE and lysed via the addition of 25 µL of 10% SDS.The lysate was treated with 50 µL of RNase A (10 mg/mL) and 25 µL of pronase (20 mg/mL) and incubated overnight at 37 • C. The lysate was extracted sequentially with phenol and chloroform:isoamyl alcohol (24:1).The DNA was precipitated via the addition of 1 mL of cold absolute ethanol.For Illumina sequencing, the DNA was pelleted via centrifugation; for Oxford Nanopore sequencing, the DNA was collected with a glass hook.The DNA was washed with 70% ethanol, air-dried, and resuspended in 80 µL of DNasefree water.For other purposes, genomic DNA was isolated with the QIAGEN DNeasy blood and tissue kit.DNA quality control was performed with Nanodrop (Mettler Toledo, Columbus, OH, USA), using OD 260 /OD 280 >1.8 and OD 260 /OD 230 between 2.0 and 2.2 as inclusion criteria and quantified fluorometrically with QuBit (Thermo Fisher, Waltham, MA, USA).For Illumina sequencing, the genomic DNA was fragmented via sonication to a medium insert size of 750 bp.For Oxford Nanopore sequencing, the genomic DNA was used as isolated.
Genome Sequencing and Assembly
Library preparation and sequencing on the Illumina platform were performed at the Quantitative Genomics Facility Basel (GFB) of the ETH Zürich.Sequencing libraries were prepared using the PCR-free KAPA HyperPrep kit (Illumina, San Diego, CA, USA).Pairedend sequencing of 125 nucleotides was performed with an Illumina HiSeq 2500 sequencer.For Nanopore sequencing, the library was prepared using the Ligation Sequencing kit 108 (SQK-LSK108, Oxford Nanopore Technology, Oxford, UK) and sequenced using the Min-ION (1D R9.3) platform.Basecalling was carried out using Albacore.Quality control for all reads was performed with FastQC (version 0.11.3)[25].The reads were trimmed stringently using the following Trimmomatic [26] parameters: SLIDINGWINDOW:4:30, LEADING:10, TRAILING:10, HEADCROP:6, and MINLEN:36.This left 38.60% of paired reads and an additional 23.71% and 5.18% of forward-and reverse-only reads, respectively.Thus, 32.51% of the original 67,187,531 read pairs were discarded.We benchmarked different assemblers available at the time: Velvet [27] and SOAPdenovo2 (version 2.04) [28] for the Illumina reads (with a range of different kmer sizes, from 17 to 73), Canu (version 1.7) [29], and Flye (release 2.3.3)[30] for the Nanopore reads.Velvet and SOAPdenovo2 assembled the genome in the lowest amount of contigs.For Velvet, we followed the tutorial by Thomas Otto [31].At kmer size 55, we had the best results in terms of contig number and N50.On this assembly, 95.07% of the trimmomatic-filtered single reads were mapped, as were 73.49% of the paired reads.The best mapping result with SOAPdenovo2 was at kmer size 17, with 79.66% and 32.34% mapping for single and paired reads, respectively.Illumina polishing of the Canu-assembled Nanopore reads was performed using Pilon (version 1.22) [32], followed by BWA-MEM [33] with default parameters.The Flye assembly was performed on the pore-chopped long reads, with an expected genome size of 53 Mb [30].Gene prediction was performed using GLIMMER [34] with the standard codon table.The genome of T. cruzi Dm28c [35] served as the training set.
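The assemblers above were compared mainly by contig number and N50. As a hedged illustration only (not the actual pipeline code used in this study), the following minimal Python sketch shows how such summary statistics can be computed from a FASTA file of assembled contigs; the file name "assembly.fasta" is a placeholder.

```python
# Minimal sketch (not the pipeline used in this study): compute contig count,
# total assembly length, and N50 from a FASTA file of assembled contigs.

def read_fasta_lengths(path):
    """Return a list of sequence lengths from a FASTA file."""
    lengths, current = [], 0
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    return lengths

def n50(lengths):
    """N50: length L such that contigs of length >= L cover half the assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

if __name__ == "__main__":
    contig_lengths = read_fasta_lengths("assembly.fasta")  # placeholder path
    print(f"contigs: {len(contig_lengths)}")
    print(f"total length: {sum(contig_lengths):,} bp")
    print(f"N50: {n50(contig_lengths):,} bp")
```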
Optimization of Electroporation
10 7 epimastigotes from a dense culture were centrifuged and resuspended in 100 µL of TbBSF buffer [36] containing 10 µg of circular (for transient transfection) or linearized (for stable transfection) plasmid DNA.The plasmid pTcRG was kindly provided by Santuza Teixeira (Federal University of Minas Gerais, Belo Horizonte, Brazil).The cells were electroporated with a nucleofector device (Lonza) in a 0.2 mm cuvette (BioRad, Hercules, CA, USA).After electroporation, the cells were transferred to 10 mL of LIT with a finetipped Pasteur pipette.The parasites transfected with circular plasmids were incubated for 24 h and then tested for GFP expression with flow cytometry with a FACSCalibur machine (Becton Dickinson and Company, Franklin Lakes, NJ, USA).The parasites transfected with linearized plasmid were incubated for 24 h, diluted 1:10 in medium containing 100 µg/mL G418 (Gibco, Billings, MT, USA), and further distributed in a fourfold dilution series in a 48-well plate under antibiotic pressure.Outgrowing epimastigotes were cloned by limiting dilution and assessed for correct integration of the transgene with PCR and Southern blot.
Drug Sensitivity Assay with Epimastigotes
In a 96-well microtiter plate, 100 µL of epimastigotes at a starting density of 5 × 10⁶/mL, 10⁵/mL, or 2 × 10⁴/mL was incubated with a test compound in a threefold serial dilution with 11 dilution steps. After 69 h or 165 h of incubation at 27 °C, 10 µL of resazurin (Sigma) solution (12.5 mg in 100 mL water) was added to each well. After another 3 h of incubation, the plates were read with a SpectraMAX GeminiXS fluorescence reader (Molecular Devices, San Jose, CA, USA), and 50% inhibitory concentration (IC50) values were determined in R version 3.5.1 (R Core Team 2018, Vienna, Austria) using the "drc" package [41].
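The dose-response fitting was done in R with the "drc" package; the sketch below is only an analogous four-parameter log-logistic fit in Python/SciPy, shown with an invented dilution series and viability values, and is not the analysis code used in this study.

```python
# Illustrative sketch only: analogous four-parameter log-logistic IC50 fit.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(log_conc, bottom, top, log_ic50, hill):
    """Four-parameter log-logistic model on log10-transformed concentrations."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (hill * (log_conc - log_ic50)))

# Hypothetical 11-step threefold dilution series (µg/mL) and normalized viability.
conc = 100.0 / 3.0 ** np.arange(11)
viability = np.array([0.05, 0.06, 0.08, 0.15, 0.35, 0.60, 0.82, 0.93, 0.97, 0.99, 1.00])

popt, _ = curve_fit(log_logistic, np.log10(conc), viability, p0=[0.0, 1.0, 0.0, 1.0])
print(f"estimated IC50: {10 ** popt[2]:.3g} µg/mL")
```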
Flow Cytometry
For flow cytometry, 10⁵ epimastigotes were fixed with 10% formalin (Sigma) for 15 min and then analyzed for their green fluorescence levels (FL1) with a BD FACSCalibur (Becton Dickinson and Company, Franklin Lakes, NJ, USA). The threshold for GFP expression was set above the autofluorescence level of 99.6% of the untransfected control cells. The proportion of GFP-expressing cells was defined as the proportion of cells exhibiting a higher level of fluorescence than the threshold.
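As an illustration of the gating rule described above (a sketch with simulated fluorescence values, not the actual cytometry analysis workflow), the threshold can be taken as the 99.6th percentile of the control autofluorescence and then applied to the transfected population:

```python
# Hedged sketch: threshold = 99.6th percentile of untransfected control FL1;
# cells above the threshold are scored GFP-positive. Values are simulated.
import numpy as np

rng = np.random.default_rng(0)
control_fl1 = rng.lognormal(mean=2.0, sigma=0.4, size=10_000)      # autofluorescence
transfected_fl1 = np.concatenate([
    rng.lognormal(mean=2.0, sigma=0.4, size=7_000),                # non-expressing cells
    rng.lognormal(mean=5.0, sigma=0.6, size=3_000),                # GFP-expressing cells
])

threshold = np.percentile(control_fl1, 99.6)
percent_positive = 100.0 * np.mean(transfected_fl1 > threshold)
print(f"threshold (a.u.): {threshold:.1f}")
print(f"GFP-positive cells: {percent_positive:.1f} %")
```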
High-Content Drug Efficacy Assay
Assays were performed with two technical and two biological replicates.For the standard assay, 10 4 PMMs were seeded into the central wells of a black 96-well plate (Greiner, uClear, black, REF 655090, Lot E1803364) in 100 µL of RPMI medium containing 1% antibiotic mix [24], 10% iFCS, and 15% RPMI containing LADMAC growth factors per well.The border wells were filled with 100 µL of water.After 48 h, the PMMs were infected with 10 4 trypomastigotes from either the wildtype or the transgenic STIB980 line.After 24 h, the remaining trypomastigotes were washed off twice with 200 µL of RPMI per well.The infected PMMs were kept in 100 µL RPMI containing 1% antibiotic mix and 10% iFCS.Drugs were added in threefold serial dilutions 24 h post-infection.At 96 h after the addition of drugs, the plates were fixed with 10% formalin for 15 min at room temperature.Subsequently, the plates were stained with 50 µL of 5 µM Draq5 (BioStatus, Leicester, UK) per well for 30 min at room temperature in the dark.The plates were stored at 4 • C for at least 24 h and then imaged using an ImageXpress Micro XLS microscope (Molecular Devices, San Jose, CA, USA) with a 20× Zeiss objective with a Cy5 filter cube for 300 ms per image on 9 sites per well.Image analysis was performed with the MetaXpress 6 software.Statistical analysis and graphs were performed in R version 3.5.1 (R Core Team 2018) using the packages "tidyverse" [42] and "readxl" [43].
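The two readouts used for IC50 calculation (the number of infected host cells and the total number of intracellular amastigotes) can be illustrated with a minimal sketch operating on a hypothetical per-host-cell parasite count table; this is not the MetaXpress image-analysis pipeline actually used in the study.

```python
# Hedged sketch of the two per-well readouts described above; counts are invented.
host_cell_parasite_counts = [0, 0, 3, 0, 7, 1, 0, 0, 2, 0]   # amastigotes per host cell

infected_cells = sum(1 for n in host_cell_parasite_counts if n > 0)
infection_rate = infected_cells / len(host_cell_parasite_counts)
total_amastigotes = sum(host_cell_parasite_counts)

print(f"infection rate: {infection_rate:.0%}")
print(f"amastigotes per well: {total_amastigotes}")
```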
Genotyping and Cloning of T. cruzi STIB980
T. cruzi STIB980 is one of the standard strains used for drug efficacy testing at the Parasite Chemotherapy Unit of the Swiss TPH.Amastigote and epimastigote forms are readily cultured as described in the Methods section.A fresh clone of T. cruzi STIB980 was made with epimastigotes, employing the gilded paperclip method (Figure 1A).
This clone of T. cruzi STIB980 was used for all further analyses.Genotyping based on the restriction fragment length polymorphisms of three target loci (the large ribosomal RNA subunit, heat-shock protein 60, and glucose-6-phosphate isomerase (PGI)) [44] placed T. cruzi STIB980 in DTU TcI (Figure 1B-F).This was confirmed constructing a phylogenetic tree of the PGI nucleotide sequences, in which STIB980 clustered with the DTU TcI branch (Figure 2).TcI is one of the DTUs that circulates most broadly among humans, and it correlates with cardiomyopathy [45,46].Therefore, a TcI strain is highly relevant as an assay strain.
Genome Sequence of T. cruzi STIB980
The genomic DNA of T. cruzi STIB980 was sequenced with the Illumina and Oxford Nanopore technologies. Illumina sequencing was performed with a 125 bp paired-end protocol and yielded 45,345,000 reads that passed quality control. With Nanopore sequencing, we obtained 250,005 reads and a median length of 1.4 kb (Figure 3A). Taking the actual read lengths and assuming a haploid genome size of 53.3 Mb, as reported for T. cruzi Dm28c [35], this provides a total coverage of 25.3-fold for the Nanopore sequencing alone. The coverage of the nuclear genome, i.e., excluding mini- and maxicircles, was 19.4-fold. The reads were categorized according to their size and GC content (Figure 3) into nuclear genome, maxicircle (assembled to a single contig and confirmed with blastn searches), minicircles, and sequences of unknown origin (Table 1). The best results for genome assembly (as judged by the mapping rate) were obtained by first assembling the long Nanopore reads (using Canu v1.7 [29]), followed by fixing errors with the short Illumina reads (using Pilon v1.22 [32]). This combination of Nanopore and Illumina reads led to drastic improvements compared with the assembly based on Illumina reads alone: the number of contigs was reduced 23-fold, the N50 increased 30-fold, and the number of gaps (n = 13,000) and undetermined nucleotides (5 Mb) were reduced to zero. The total assembly amounted to 28.2 Mb in 492 contigs (Figure 3B); the nuclear genome had a haploid size of 27.9 Mb in 397 contigs (Table 1). This is at the lower end of the range of published T. cruzi genome sizes, which vary from 27 Mb to 83 Mb [53]. The gene prediction was based on T. cruzi Dm28c [35] as the training set, and it resulted in 10,043 open-reading frames (ORFs) with a median length of 1077 bp. The amino acid sequences were queried against the UniProt KnowledgeBase [54] using blastp [50] with an expectancy (E-value) cut-off of 10⁻⁸. This allowed for the functional annotation of 3505 genes.
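As a rough illustration of the read classification shown in Figure 3 (a sketch under stated assumptions, not the classification code actually used), the following Python snippet bins reads by GC content and length and estimates nuclear coverage for an assumed haploid genome size of 53.3 Mb; the GC and length cut-offs below are assumptions loosely inspired by the figure.

```python
# Hedged sketch: classify reads into nuclear / maxicircle / minicircle bins and
# estimate nuclear coverage. Thresholds are illustrative assumptions only.
def gc_content(seq):
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def classify(seq):
    gc = gc_content(seq)
    if 30.0 <= gc <= 40.0 and 1_000 <= len(seq) <= 2_000:
        return "minicircle"          # ~1.4 kb reads in the 30-40% GC band
    if gc < 30.0:
        return "maxicircle"          # assumed very AT-rich mitochondrial reads
    return "nuclear"

def nuclear_coverage(reads, haploid_genome_bp=53_300_000):
    """Fold coverage of the nuclear genome contributed by nuclear-classified reads."""
    nuclear_bp = sum(len(s) for s in reads if classify(s) == "nuclear")
    return nuclear_bp / haploid_genome_bp

# Usage with a list of read sequences (strings), e.g. parsed from FASTQ:
# cov = nuclear_coverage(read_sequences)
```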
Antibiotic Sensitivity Profile of Epimastigote T. cruzi STIB980
In order to determine the best selection markers for use in genetic manipulation, we tested the sensitivity of T. cruzi STIB980 epimastigotes to commonly used antibiotics: blasticidin, G418, hygromycin, phleomycin, and puromycin. Benznidazole and nifurtimox were included as benchmark drugs, and DMSO was included as the most commonly used solvent of test compounds. Drug sensitivity was tested for 72 h and 168 h of incubation. For the latter, we used two different inocula: a lower starting density (2 × 10⁴ epimastigotes/mL) to assess the inhibition of proliferation and a higher density (10⁵ epimastigotes/mL) to measure cidality. However, the obtained IC50 values were similar across all the tested conditions (Table 2). The STIB980 epimastigotes had comparably high IC50 values for G418, which is in agreement with the high concentrations (100 to 500 µg/mL) of G418 that are generally used for epimastigote T. cruzi [39] and in stark contrast to the 1 to 5 µg/mL used in the genetic manipulation of procyclic T. brucei [36]. Besides the sensitivity of the untransfected trypanosomes, other factors will determine the optimal concentration of antibiotics for selecting positive transfectants. The expression level of the resistance gene will be affected by its copy number (especially in episomal transfections), the strength of the promoter, the RNA polymerase (RNAPolII, usually resulting in a lower level of transcription than RNAPolI), and, in the case of the ribosomal locus, the exact site of integration [55]. Overall, we recommend blasticidin or puromycin to select for T. cruzi STIB980 transfectants rather than G418, hygromycin, or phleomycin.
Optimal Transfection Protocol for T. cruzi STIB980
Lonza nucleofector 2b is a widely used electroporation device for genetic transfection.It also provides excellent results with trypanosomes but is a black box, as the provider does not disclose the characteristics of the electric discharge nor the composition of the buffers.Tests on nucleofector programs have already been published for T. brucei [36] and T. cruzi [56].We investigated which program is best suited for T. cruzi STIB980.Epimastigotes were transfected with a circular pTcRG plasmid that contained the green fluorescent protein (GFP) gene plus the 3' UTR of the GAPDH gene, which confers constitutive expression.In total, 4 × 10 7 epimastigotes in the exponential growth phase were transfected with 10 µg of plasmid DNA using nine different nucleofector programs.Immediately after transfection, we counted the surviving parasites.Then, we incubated them for 24 h in 10 mL of LIT medium at 27 • C. Finally, the proportion of GFP-expressing parasites was quantified via flow cytometry [36].The transfection efficiency was calculated as the product of cell survival and GFP positivity (Table 3).The programs U-033, X-001, and Z-001 had the best overall efficiencies.The lower survival rates with Z-001 and U-033 were compensated by higher fractions of GFP expression.Program X-001 was recommended for the transfection of Leishmania mexicana promastigotes [57].For subsequent transfections, we used the nucleofector programs U-033 or X-014 [56].The levels of cytosolic GFP obtained after the stable transfection of pTcRG were too low for high-content fluorescence microscopy.Most of the parasite signal was below three times the background level (i.e., the autofluorescence of untransfected epimastigotes).For better use of T. cruzi STIB980 in drug efficacy testing and molecular genetics, we generated stable transgenic lines expressing a LucNeon reporter gene, a Cas9 nuclease gene, or both [37].LucNeon is a chimeric gene that encodes a fusion protein of mNeonGreen, suitable for fluorescence-based in vitro imaging, plus a red-shifted luciferase that is suitable for bioluminescence-based in vivo imaging [37].Epimastigotes were transfected as described in the Methods section.The three resulting transgenic lines all had similar growth rates with population-doubling times around 20 h, slightly higher than the 17 h of the parental T. cruzi STIB980 (Figure 4).
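As a hedged illustration of how the population doubling times quoted above can be derived (linear regression on log-transformed growth data, as also described in the Figure 4 legend), the sketch below uses invented cell densities, not the measured values behind Figure 4.

```python
# Hedged sketch: doubling time from a linear fit to log2-transformed densities;
# the doubling time is the reciprocal of the fitted slope. Data are invented.
import numpy as np

time_h = np.array([0, 24, 48, 72, 96], dtype=float)          # hours
density = np.array([1e5, 2.3e5, 5.4e5, 1.2e6, 2.8e6])        # cells/mL (hypothetical)

slope, intercept = np.polyfit(time_h, np.log2(density), deg=1)
doubling_time = 1.0 / slope
print(f"population doubling time: {doubling_time:.1f} h")
```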
The sensitivity profiles to reference drugs (benznidazole and nifurtimox) and drug candidates (posaconazole, fexinidazole, and oxaborole DNDi-6148) of parental T. cruzi STIB980 and STIB980-LucNeon were determined using high-content imaging of intracellular amastigotes in expanded mouse peritoneal macrophages. The IC50 values were calculated with two different methods, either based on the number of infected host cells or the total number of intracellular amastigotes (Table 4).
The first method resulted in slightly higher IC50 values, which was to be expected, as the total number of parasites can be reduced more readily than host cells can be cured of the infection. Overall, the drug sensitivities of the parental STIB980 and the transgenic derivative were very similar with both methods (Table 4). The function of the Cas9 nuclease was validated via the CRISPR/Cas9-mediated deletion of the fluorescence reporter using a guide RNA specific for the LucNeon gene (Figure 5).
Conclusions
Trypanosoma cruzi STIB980 is a useful new assay strain in the toolbox of antichagasic drug discovery.It is a DTU TcI strain that is readily cultured in vitro and amenable to genetic manipulation.We provide optimized electroporation conditions and the antibiotic sensitivity profile of epimastigotes to facilitate genetic transfection.The genome sequence of T. cruzi STIB980 was assembled by combining short reads generated with Illumina sequencing and long reads generated with Oxford Nanopore sequencing, demonstrating the power of combining both technologies, in particular for a genome with a high degree of repetitive regions like that of T. cruzi.We further provide T. cruzi STIB980 derivatives that express reporter genes (eGFP, LucNeon) for imaging in vitro and in vivo.The reporter genes are stable in the absence of selective pressure in epimastigotes but much less so in amastigotes, underlining the importance of frequently resorting to a new stabilate, e.g., when running drug-testing campaigns against intracellular amastigotes.To facilitate CRISPR/Cas9-mediated gene editing, we also constructed a line of T. cruzi STIB980-LucNeon with a stably integrated Cas9 gene and validated that line by knocking out the LucNeon gene as a proof-of-principle.Thus, T. cruzi STIB980 can serve not only as a reference strain for drug efficacy testing but also as a tool for molecular genetics.
Figure 2 .
Figure 2. Neighbor-Joining phylogenetic tree of PGI coding sequences. The naming of the T. cruzi strains is that of TriTrypDB [47]; discrete typing units are color-labeled [48,49]. T. cruzi marinkellei and T. rangeli are included as outgroups. All nucleotide sequences were downloaded from tritrypdb.org after a blastn [50] search with STIB980 PGI as the query sequence. Multiple alignment was performed with MUSCLE [51] using default parameters, and the tree was drawn with MegaX [52]. Bootstrap values are percent positives of 1000 rounds; only values above 90 are shown. The scale bar indicates the number of base substitutions per site.
Figure 3 .
Figure 3. Distribution of the Nanopore reads (A, n = 250,005) and assembled contigs (B, n = 492) of T. cruzi STIB980 according to their GC content and length.This separates nuclear sequences from mitochondrial sequences.The majority of the reads were categorized as minicircles, with GC content between 30% and 40% and a length of about 1.4 kb.
Figure 4 .
Figure 4. Growth curves of epimastigote T. cruzi STIB980 wildtype (wt, blue) and transgenic derivative-expressing Cas9 nuclease (orange), LucNeon reporter gene (green), or both (bright green).The indicated population doubling times were calculated via linear regression to the log-transformed data.
Table 4 .
Drug sensitivity profiles of T. cruzi STIB980 wildtype (wt) and STIB980-LucNeon as determined using high-content imaging of intracellular amastigotes.IC50 values were calculated based on the number of infected host cells (infection rate, left) or the total number of intracellular amastigotes (no. of amastigotes, right).
Figure 5 .
Figure 5. Validation of the LucNeon reporter and Cas9 nuclease in T. cruzi STIB980-Cas9-LucNeon via flow cytometry.The x-axis represents the fluorescence level in arbitrary units, measured with the green fluorescence channel (excitation, 488 nm; emission, 525 nm; bandwidth, 50 nm); the y-axis is the normalized cell count.Only 23.7% of the cells still showed a green fluorescence signal after CRISPR-Cas9-mediated knockout of the LucNeon fusion gene.
Table 1 .
Summary statistics of the separate genome assemblies for T. cruzi STIB980.
Table 2 .
Antibiotic sensitivity profile of T. cruzi STIB980 epimastigotes.All values are µg/mL; the 95% CIs are provided in parentheses.
Table 3 .
Efficiency of transient transfection of different nucleofector programs, expressed as the fraction of surviving cells multiplied by the fraction of GFP-expressing cells. | 2023-09-15T13:03:20.344Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "792252f61bdcf3f4b08244b4b019b9a4d9093fdd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/12/10/1217/pdf?version=1696408351",
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "dfdd2bfecfc54dabf072fc25a27e5282bf8f45ad",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11148317 | pes2o/s2orc | v3-fos-license | Systemically administered PEDF against primary and secondary tumours in a clinically relevant osteosarcoma model
Background: Pigment epithelium-derived factor (PEDF) is an endogenous glycoprotein with a potential role as a therapeutic for osteosarcoma. Animal studies have demonstrated the biological effects of PEDF on osteosarcoma; however, these results are difficult to extrapolate for human use due to the chosen study design and drug delivery methods. Methods: In this study we have attempted to replicate the human presentation and treatment of osteosarcoma using a murine orthotopic model of osteosarcoma. The effects of PEDF on osteosarcoma cell lines were evaluated in vitro prior to animal experimentation. Orthotopic tumours were induced by intra-tibial injection of SaOS-2 osteosarcoma cells. Treatment with PEDF was delayed until after the macroscopic appearance of primary tumours. Pigment epithelium-derived factor was administered systemically via an implanted intraperitoneal micro-osmotic pump. Results: In vitro, PEDF inhibited proliferation, induced apoptosis and inhibited cell cycling of osteosarcoma cells. Pigment epithelium-derived factor promoted adhesion to Collagen I and inhibited invasion through Collagen I. In vivo, treatment with PEDF caused a reduction in both primary tumour volume and burden of pulmonary metastases. Systemic administration of PEDF did not cause toxic effects on normal tissues. Conclusion: Systemically delivered PEDF is effective in suppressing the size of primary and secondary tumours in an orthotopic murine model of osteosarcoma.
Osteosarcoma is an aggressive primary bone cancer that predominately affects adolescents and young adults. Neo-adjuvant chemotherapy, adjuvant chemotherapy and surgical resection are the mainstays of treatment for osteosarcoma. Advances in diagnostic imaging have provided a more complete evaluation of tumour anatomy and have allowed the treating surgeon to consider a variety of limb salvage techniques. Despite these advances, however, patient prognosis has not improved significantly since the 1970s when multiagent chemotherapy regimes were introduced (Guise et al, 2009). The 5-year survival rate for osteosarcoma remains steady at 60 -70% (Kumar et al, 2005). Novel approaches are desperately needed to improve the treatment of patients with osteosarcoma, particularly for those with chemoresistant or recurrent disease.
With this challenge in mind, research has focused on characterising the genetic basis of osteosarcoma. The molecular pathways that underlie tumourigenesis, proliferation, invasion and metastasis are being identified as targets for novel treatment agents (Broadhead et al, 2010). Targeting the deranged molecular signalling of osteosarcoma should enhance the effectiveness of conventional chemotherapeutics and possibly reduce patient morbidity.
Pigment epithelium-derived factor (PEDF) is a multifunctional molecule with a potential role as a therapeutic agent for osteosarcoma. Pigment epithelium-derived factor is an endogenous 50-kDa glycoprotein that was first shown to be capable of inducing differentiation of Y-79 retinoblastoma cells (Tombran-Tink and Johnson, 1989). Pigment epithelium-derived factor is expressed in a wide range of tissues including the eye, brain, spinal cord, plasma, bone, cartilage, heart, lung, prostate and pancreas (Broadhead et al, 2009). Pigment epithelium-derived factor has diverse roles in these tissues; however, it has attracted attention foremost as a potent anti-angiogenic agent. Pigment epithelium-derived factor is twice as potent as angiostatin and seven times as potent as endostatin (Dawson et al, 1999). It was the anti-angiogenic properties of PEDF that led to the study of its potential as an anti-tumour agent for various cancers (Broadhead et al, 2009). Anti-angiogenic agents such as bevacizumab have already been adopted as adjunctive treatments for cancers, including metastatic colon carcinoma, breast carcinoma, renal cell carcinoma, non-small-cell lung carcinoma and glioblastoma multiforme.
Previous studies have provided proof of principle for PEDF as an anti-osteosarcoma agent. Quan et al (2002) first examined the role of cartilage-derived anti-angiogenic factors at the growth plate of long bones. Using immunohistochemistry and in situ hybridisation, PEDF expression was shown to be largely restricted to the avascular resting, proliferative and upper hypertrophic layers of the growth plate. These are the regions that are consistently resistant to osteosarcoma invasion from the adjacent metaphysis. Ek et al (2007a) and Takenaka et al (2005) later showed that PEDF restricted osteosarcoma growth in vitro through both the induction of apoptosis and the inhibition of cell cycling. Pigment epithelium-derived factor also restricted the metastatic capacity of osteosarcoma cells by improving cellular adhesion and restricting invasion.
Pigment epithelium-derived factor has been tested in a number of compelling in vivo animal studies for osteosarcoma. Ek et al (2007a) applied PEDF to a spontaneously metastasising orthotopic model of osteosarcoma. SaOS-2 human osteosarcoma cells were first treated with PEDF and primary osteosarcoma was induced by intra-tibial injection of treated cells in Balb/c nude mice. Pigment epithelium-derived factor restricted the growth of primary tumours and the occurrence of pulmonary metastases. Ek et al (2007b) also showed that PEDF overexpression in an orthotopic model reduced microvessel density and osteolysis. Pigment epithelium-derived factor gene delivery in this model resulted in reduced tumour growth, both when used alone and in combination with doxorubicin therapy (Ta et al, 2009a).
All of these previous in vivo studies with PEDF have utilised a clinically relevant orthotopic model that allows an evaluation of both primary and secondary tumour progression. However, while showing proof of principle, the results of treating osteosarcoma cells with PEDF prior to inoculation (Ek et al, 2007a), and the use of a PEDF-expressing plasmid (Ek et al, 2007b;Ta et al, 2009a), are difficult to extrapolate for human use. In order to truly evaluate the therapeutic efficacy of PEDF in a clinically relevant model of disease, treatment with PEDF should be delayed until after the establishment of primary tumours, and preferably be performed with systemic recombinant protein. This would more accurately replicate the human condition where patients most commonly present for treatment with an established tumour.
In this study we aimed to evaluate systemically administered PEDF in a model optimised for clinical relevance. Using an orthotopic murine model of spontaneously metastasising osteosarcoma, we show for the first time that systemic delivery of PEDF is capable of restricting the size of established primary and secondary osteosarcoma.
Cells, culture conditions, reagents and mice
SaOS-2 human osteosarcoma cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). SJSA-1 human osteosarcoma cells were kindly provided by A/Professor David Thomas (Peter MacCallum Cancer Centre, East Melbourne, Australia). Cells were cultured in complete medium (CM) under standard conditions at 37 °C in humidified 5% CO2. Complete medium consisted of MEM-Alpha + GlutaMAX (Invitrogen, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Invitrogen) and 1% antibiotic-antimycotic (Invitrogen).
Five-week-old Balb/c nude mice were purchased from the Animal Resource Centre, Australia, and were housed at the St Vincent's Hospital BioResources Centre. The St Vincent's Hospital Melbourne Animal Ethics Committee approved all animal experimentation.
Pigment epithelium-derived factor was obtained from BioProducts MD (Middletown, MD, USA).
Terminal deoxynucleotidyl transferase dUTP nick end labelling assay
Cell lines SaOS-2 and SJSA-1 were seeded in 96-well plates at a density of 1 × 10³ cells per well, 100 µl per well, in quadruplicate. After 24 h, cells were treated with 0 or 100 nM PEDF. After 48 h of treatment with PEDF, apoptotic cells were stained with a terminal dUTP nick end labelling (TUNEL) assay kit (Promega), according to the manufacturer's instructions. A representative field of each well in the plate was observed and photographed under a ×10 objective. TUNEL-positive staining cells were counted. This experiment was repeated four times.
Ki-67 immunocytochemistry
Cell lines SaOS-2 and SJSA-1 were seeded and treated in 96-well plates as for the preceding TUNEL assay in quadruplicate. Ki-67 immunocytochemistry was performed after 48 h of treatment with PEDF. Cells were first fixed with 4% paraformaldehyde at room temperature, then permeabilised with 0.3% saponin. Primary blocking with 2% rabbit serum/0.25% bovine serum albumin/0.1% saponin was performed for 30 min. Cells were then incubated overnight with a 1:50 dilution of monoclonal mouse anti-human Ki-67 antibody (DakoCytomation, Glostrup, Denmark). The following day, cells were incubated for 1 h at room temperature with 1:2000 diluted biotinylated polyclonal rabbit anti-mouse secondary antibody (DakoCytomation). A Vectastain ABC kit was then used according to the manufacturer's instructions and developed with SIGMA FAST DAB. Three drops of 100% glycerol were added to each well prior to microscopy, photography and enumeration under a ×10 objective. This experiment was repeated four times.
Collagen I adhesion assay
Collagen I (0.2%, BD Biosciences) was applied to the base of a 24-well plate and allowed to set at 37 °C/5% CO2 for 60 min. Excess collagen was removed prior to seeding SaOS-2 and SJSA-1 cells at a density of 1 × 10⁵ per well in 500 µl of CM ± 100 nM PEDF in duplicate. After 60 min at 37 °C/5% CO2, each well was washed twice with PBS to remove loose cells and debris. Wells were observed under a ×10 objective and photographed. Adherent cells were counted. This experiment was repeated three times.
Collagen I invasion assay
Cell culture inserts with polyethylene terephthalate track-etched membranes, 8.0-µm pore size, were inserted into 24-well plates, then coated with 2 mg ml⁻¹ type I rat tail collagen (BD Biosciences). CM was placed in the wells beneath the cell culture insert. Cells SaOS-2 and SJSA-1 were seeded at 5 × 10⁴ cells per insert in serum-free medium in duplicate. Cells were incubated under standard conditions for 6 days. Polyethylene terephthalate membranes were then removed and prepared for microscopy using the QuickDip staining system. Membranes were imaged under a ×20 objective and adherent cells were enumerated. The experiment was repeated twice.
Electron microscopy
Cells (SaOS-2) were seeded and treated in 96-well plates according to the protocol used for TUNEL assay and Ki-67 immunocytochemistry. Cells were prepared for transmission electron microscopy (TEM) after 48 h of treatment with PEDF. Cells were fixed with 2.5% glutaraldehyde/0.1 M cacodylate buffer (pH 7.4) for 1 h and then post-fixed with 2.0% osmium tetroxide/deionised water for 1 h. Cells were dehydrated using a gradient of acetone, followed by infiltration with Spurr's resin and sectioning on an UltraCut-S microtome. Sections were stained with uranyl acetate/lead citrate solution. Transmission electron microscopy was performed on a Siemens 102 transmission microscope at 60 kV.
Orthotopic model of osteosarcoma
Cells (SaOS-2) were mixed with 50% Matrigel to a concentration of 2 × 10⁶ cells ml⁻¹. Mice were anaesthetised by intraperitoneal injection of 100 mg kg⁻¹ ketamine and 10 mg kg⁻¹ xylazine. A volume of 10 µl of SaOS-2/Matrigel solution was injected into the left tibiae of individual mice using a 27-gauge needle (Dass et al, 2006). The needle was inserted into the tibial tuberosity and advanced using a drilling motion to avoid fracture of the bone.
Mice were monitored thrice weekly for tumour growth and signs of distress. Tumours were measured in the anteroposterior (AP) and lateral (L) planes using digital callipers. Leg volume and tumour volume were calculated using the formula 4/3π(1/4(AP + L))³ (Ek et al, 2007a). The volume of the contralateral limb was subtracted from the tumour-bearing limb to calculate actual tumour volume. Mice were weighed using digital scales.
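For clarity, a minimal worked example of this volume calculation is sketched below in Python (the function and variable names are illustrative, not taken from the original study); it treats the limb as a sphere of radius (AP + L)/4 and subtracts the contralateral limb, as described above.

import math

def limb_volume(ap_mm, lat_mm):
    """Approximate limb volume (mm^3) from anteroposterior and lateral calliper readings,
    using V = 4/3 * pi * ((AP + L)/4)^3."""
    radius = (ap_mm + lat_mm) / 4.0
    return 4.0 / 3.0 * math.pi * radius ** 3

def tumour_volume(ap_tumour, lat_tumour, ap_control, lat_control):
    """Actual tumour volume: tumour-bearing limb minus contralateral (control) limb."""
    return limb_volume(ap_tumour, lat_tumour) - limb_volume(ap_control, lat_control)

# Hypothetical calliper readings (mm), for illustration only
print(round(tumour_volume(7.2, 6.8, 5.9, 5.7), 1))  # tumour volume in mm^3 (about 77.4 here)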
In this study, a total of 18 mice were injected with SaOS-2 cells initially. Variability of tumour take meant that of these 18, only 12 were suitable for randomisation to control and treatment groups. Tumours did not form in six mice. These groups, each consisting of four mice, received (1) sterile water as control, (2) PEDF 50 µg kg⁻¹ per day or (3) PEDF 500 µg kg⁻¹ per day. Sterile water was used as the PEDF diluent.
Sustained delivery of both sterile water and PEDF (BioProducts MD) was achieved by Alzet micro-osmotic pump (Durect Corp., Cupertino, CA, USA). The mean pumping rate for the Alzet micro-osmotic pump (model 1002) is 0.25 µl h⁻¹ over 14 days, as determined by the manufacturer. Pumps were implanted within the peritoneal cavities of mice at day 20 after SaOS-2 cell injection. The average tumour volume at this time was 21.1 mm³ (± 2.357 s.e.m., n = 12). Pumps remained in situ until the conclusion of the study at day 34.
Doses of PEDF (50 and 500 µg kg⁻¹) were selected based on published physiological and therapeutic concentrations. The physiological serum concentration of PEDF has previously been estimated at 100 nM (Petersen et al, 2003), while inhibition of vessel formation in ischaemia-induced retinopathy has been achieved at a 50 nM concentration (Stellmach et al, 2001). Reported serum PEDF concentrations for healthy human controls have since varied widely, ranging from 4 ng ml⁻¹ to 15 µg ml⁻¹ (Matsumoto et al, 2004; Wiercinska-Drapalo et al, 2007; Nakamura et al, 2009; Sabater et al, 2010; Sogawa et al, 2011; Yang et al, 2011). The 50 and 500 µg kg⁻¹ doses used in this study are equivalent to 1 µg ml⁻¹ (20 nM) and 10 µg ml⁻¹ (200 nM) concentrations of PEDF, respectively, when the average mouse weight is taken as 20 g and the average blood volume 1 ml.
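As a sanity check on this equivalence, the short calculation below (a sketch, assuming a PEDF molar mass of roughly 50 kDa as stated earlier and the 20 g / 1 ml mouse figures above) converts a per-weight dose into an approximate blood concentration in µg/ml and nM.

# Convert a dose in micrograms per kg body weight into an approximate blood concentration,
# assuming the whole daily dose is distributed in the blood volume.
PEDF_MOLAR_MASS_G_PER_MOL = 50_000  # ~50 kDa glycoprotein (assumption)

def dose_to_concentration(dose_ug_per_kg, body_weight_g=20.0, blood_volume_ml=1.0):
    total_ug = dose_ug_per_kg * body_weight_g / 1000.0            # micrograms delivered
    conc_ug_per_ml = total_ug / blood_volume_ml                   # µg/ml in blood
    conc_nM = (conc_ug_per_ml * 1e-3) / PEDF_MOLAR_MASS_G_PER_MOL * 1e9   # µg/ml == mg/L -> mol/L -> nM
    return conc_ug_per_ml, conc_nM

for dose in (50, 500):
    ug_ml, nM = dose_to_concentration(dose)
    print(f"{dose} ug/kg -> {ug_ml:.0f} ug/ml, ~{nM:.0f} nM")
# 50 ug/kg -> 1 ug/ml, ~20 nM; 500 ug/kg -> 10 ug/ml, ~200 nM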
When tumours had grown to a disabling size for control animals (day 34 after SaOS-2 inoculation), all animals were euthanised under anaesthesia by cervical dislocation. The tumour-affected limbs were removed along with lungs, heart, intestines and skin. All specimens were fixed in 4% paraformaldehyde on harvesting. Tissues were embedded in paraffin prior to histological preparation and analysis. Four-µm sections of lungs, heart, intestines, skin and primary tumours were cut by microtome. Both lungs and tumours were sectioned to achieve the greatest cross-sectional area for examination. The lungs, heart, intestine and skin sections were stained with haematoxylin and eosin. Apoptosis was assessed in sections of primary tumour using a terminal dUTP nick end-labelling (TUNEL) assay kit (Promega), according to the manufacturer's instructions (Ta et al, 2009a). Blood sampling was performed immediately after cervical dislocation and dissection through the thoracic cage. Affected limbs were X-rayed at 35 kV for 30 s using a cabinet system (Faxitron Corp., Wheeling, IL, USA).
Statistical and imaging software
GraphPad Prism 5 for Mac OS X (Version 5.0d) was used for all statistical tests. Student's t-test and ANOVA analysis with Bonferroni multiple comparisons test were used where appropriate. ImageJ (Version 1.45j, National Institutes of Health, USA) was used for all image analysis.
PEDF induces apoptosis and inhibits cell cycling of osteosarcoma cells in vitro
In vitro studies were first performed in order to characterise the biological effects of PEDF on the SaOS-2 and SJSA-1 osteosarcoma cell lines. Cell viability was assessed by MTS proliferation assay, apoptosis by TUNEL assay and cell cycling by Ki-67 immunocytochemistry. The SaOS-2 cell line was used for the orthotopic murine model of osteosarcoma.
For the SaOS-2 cell line, 4.18% of control cells were identified as undergoing apoptosis by TUNEL staining. With PEDF treatment, 8.57% were TUNEL positive, representing a two-fold increase in apoptotic cells (P < 0.05, two-tailed t-test, n = 4, four experiment repeats). Overall, 6.04% of SJSA-1 cells treated with PEDF were TUNEL positive, compared with 2.48% of cells that received the vehicle solution. This two-fold increase in apoptotic cells was also significant (P < 0.001, two-tailed t-test, n = 4, four experiment repeats; Figure 1C).
Transmission electron microscopy demonstrated chromatin condensation within the nuclei of PEDF-treated SaOS-2 cells. Treated cells also showed deranged mitochondrial architecture and prominent cell surface processes (Figure 1G). Chromatin condensation and deranged mitochondria are consistent with osteosarcoma cells undergoing apoptosis. The significance of the cell surface processes remains unknown; however, one might speculate as to a possible role in cell-matrix or cell-cell adhesion.
PEDF reduces the metastatic potential of osteosarcoma cells in vitro
The effect of PEDF on the metastatic potential of osteosarcoma cells was assessed in vitro by collagen I adhesion and invasion assays. Treatment with PEDF significantly promoted osteosarcoma cell adhesion to type I rat-tail collagen (Figure 1E). The result was most striking for the SJSA-1 cell line, which demonstrated an 83.9% enhancement in adhesion to the freshly set collagen (P < 0.001, two-tailed t-test, n = 2, three experiment repeats). For SaOS-2, treatment with PEDF improved adhesion by 23.9% (P < 0.05, two-tailed t-test, n = 2, three experiment repeats). While enhanced adhesion to collagen I was striking for the SJSA-1 cell line with PEDF treatment, both cell lines in this experiment showed similar degrees of inhibition of invasion through collagen I (Figure 1F). There was a 41.3% reduction in the ability of SaOS-2 cells to migrate through the membrane (P < 0.01, two-tailed t-test, n = 2, two experiment repeats), and a 33.4% reduction for SJSA-1 cells (P < 0.05, two-tailed t-test, n = 2, two experiment repeats).
Systemically administered PEDF inhibits growth of orthotopic osteosarcoma in vivo
Treatment with PEDF was delayed until day 20 after intra-tibial inoculation with the SaOS-2 human osteosarcoma cell line. Tumours were well established and macroscopically evident prior to initiating treatment protocols, thus replicating the human situation. The average tumour volume at this time was 21.1 mm³ (± 2.357 s.e.m., n = 12).
A surgically implanted intraperitoneal osmotic pump delivered PEDF. Sustained delivery of PEDF at both 50 and 500 µg kg⁻¹ per day doses caused a significant reduction in tumour volume by the study end point (Figures 2A and B). Animals treated with the 50 µg kg⁻¹ per day PEDF dose exhibited a mean reduction in tumour volume of 47.4% at day 34 (P < 0.05, two-way ANOVA with Bonferroni multiple comparisons test). The higher 500 µg kg⁻¹ per day PEDF dose caused a 53.0% reduction in tumour volume at day 34 (P < 0.01, two-way ANOVA with Bonferroni multiple comparisons test). Day 34 was the humane end point of the study as tumours had grown to a disabling size. There was no statistical difference between groups receiving these two doses of PEDF.
Orthotopic tumours were examined histologically for extent of invasion of surrounding structures, tumour necrosis and apoptosis. Treatment groups were unable to be differentiated based on these parameters. All animals showed extensive tumour invasion of soft tissue and bony structures. Specifically, tumour cells were seen within skeletal muscle, crossing the proximal physeal plate of the tibia and destroying normal bone architecture. In some cases tumours progressed to replace the distal femoral diaphysis. Plain radiographs of tumour-bearing limbs showed extensive soft tissue invasion and osteolysis for both treatment groups (Figure 3).
Orthotopic tumour tissue was sectioned to achieve a maximal en face surface for quantification of tumour necrosis and apoptosis. Haematoxylin- and eosin-stained sections were used to quantify tumour necrosis. Tumours treated with the 50 µg kg⁻¹ per day PEDF dose showed 57.5% mean tumour necrosis, whereas those treated with the 500 µg kg⁻¹ per day PEDF dose showed 31.5% mean tumour necrosis. Control tumours showed 28.2% tumour necrosis. There was no statistical significance between groups.
Adjacent sections of tumour were TUNEL stained and again there was no statistical significance between treatments based on % TUNEL-positive staining. Tumours from control, 50 and 500 µg kg⁻¹ per day PEDF dose groups demonstrated 20.2%, 45.9% and 21.2% TUNEL-positive tumour tissue, respectively.
PEDF restricts progression of pulmonary metastatic disease
The burden of pulmonary metastatic disease at the study end point was assessed histologically on haematoxylin- and eosin-stained sections of lung tissue. Lungs were sectioned in order to achieve maximal cross-sectional area for study. At ×20 magnification, there was no significant difference in the mean number of pulmonary micrometastases observed between treatment groups. Control animals showed 7.5 micrometastases per lung section, as compared with 5.25 and 7.25 micrometastases for animals treated with 50 and 500 µg kg⁻¹ per day PEDF doses, respectively.
The cross-sectional area of pulmonary micrometastases was measured. Ten micrometastatic lesions per treatment group were used in this analysis. Treatment with PEDF at 50 and 500 µg kg⁻¹ per day doses caused 79.8% (P < 0.01) and 68.1% (P < 0.05) reductions in mean cross-sectional area of micrometastatic lesions (one-way ANOVA analysis with Bonferroni comparison test). The mean area of pulmonary micrometastatic lesions was 0.90, 0.18 and 0.29 mm² for control, 50 and 500 µg kg⁻¹ per day PEDF dose groups, respectively (Figure 4).
Therapeutic safety of systemic PEDF
In addition to establishing the therapeutic efficacy of PEDF using the orthotopic model of osteosarcoma, we also sought to identify possible side effects associated with PEDF administration. No significant difference in animal weight was observed between PEDF-treated and control groups. Furthermore, the cachectic trend usually seen with this orthotopic model of disease was notably absent. At day 34, euthanasia was required, as tumours had grown to a disabling size in control animals.
Serum obtained at the study end point was analysed for renal and hepatic biochemical parameters. There was no significant difference between treatment groups, with serum creatinine, alkaline phosphatase and aspartate transaminase remaining within physiological limits.
Cardiac, small intestine and skin tissues were stained with haematoxylin and eosin and examined for signs of chemotherapy-associated toxicity. Treatment with conventional cytotoxic agents, such as doxorubicin, may be associated with the vacuolisation of myocardium and intestinal epithelium (Ta et al, 2009b). The lamina propria may become separated from the overlying intestinal and cutaneous epithelium, with loss of hair follicles (Tan et al, 2010). None of these changes were evident in either the PEDF-treated or control groups.
DISCUSSION
In this study we sought to evaluate the potential of PEDF as a sole treatment agent for advanced osteosarcoma. As an endogenous glycoprotein, PEDF is an attractive therapeutic agent in terms of potential chemoresistance and immunoreactivity. For the first time, we have successfully demonstrated a therapeutic effect for PEDF protein on both established primary osteosarcoma and pulmonary metastases. Additionally, we observed no adverse physiological effects associated with PEDF treatment.
Treatment with PEDF was delayed until orthotopic osteosarcoma was macroscopically evident, and despite this late stage of intervention, we observed 47.4% and 53.0% reductions in tumour volume by the study end point for 50 and 500 µg kg⁻¹ PEDF, respectively. Ek et al (2007a) showed an effect when 25 nM PEDF was co-administered at the time of orthotopic inoculation. Tumour volume and growth rates were reduced by 40%. We add clinical relevance to these findings by delaying treatment to better replicate the human presentation of disease. In another study, Ek et al (2007b) demonstrated a 51% reduction in tumour size when PEDF-overexpressing SaOS-2 cells were used for intra-tibial injection. Ta et al (2009a) tested a chitosan hydrogel delivery system for PEDF plasmid. Treatment with Chi/DPO7-pPEDF resulted in a 37% reduction in tumour volume. Similarly, Dass et al (2006) used chitosan microparticles encapsulating PEDF plasmid for a therapeutic effect. All of these studies used the same SaOS-2 orthotopic model of osteosarcoma and are thus comparable. By using recombinant protein, rather than gene therapies that have yet to be successfully adopted for human disease, we have gone one step closer to mirroring the human condition.
The molecular mechanisms that PEDF uses to inhibit growth of osteosarcoma are yet to be fully elucidated and represent an important area for further research if we are to fully understand the therapeutic effects that the aforementioned studies have demonstrated. Pigment epithelium-derived factor has been shown to inhibit osteosarcoma growth both directly and indirectly. As we have shown here in vitro, direct inhibition occurs by both the induction of apoptosis and the inhibition of cell-cycle progression. Takenaka et al (2005) showed increased caspase-3/7 activity and decreased DNA synthesis by thymidine incorporation studies when MG63 osteosarcoma cells were treated with 100 nM PEDF. Ek et al (2007a) showed PEDF to induce apoptosis using UMR 106-01 and SaOS-2 cell lines by TUNEL assay.
Intriguingly, in our study we were unable to show a differential effect on either tumour necrosis or apoptosis in vivo with PEDF treatment. By allowing tumours to advance to palpable proportions prior to initiating treatment, we have potentially allowed them to outgrow their vasculature and so undergo spontaneous necrosis and apoptosis. Additional work is needed to clarify these processes in vivo; however, this should not detract from the finding that tumour volumes were reduced in PEDF-treated mice.
Animals that received PEDF treatment had a reduced burden of pulmonary metastatic disease at the study end point and this is in keeping with the findings of previous studies. We found the cross-sectional area of pulmonary metastases to be 79.8% and 68.1% smaller in animals receiving 50 and 500 µg kg⁻¹ PEDF doses, respectively. Ek et al (2007a) showed a 70% reduction in the mean number of macroscopic metastases when PEDF was co-administered at the time of orthotopic inoculation. When PEDF-overexpressing SaOS-2 cells were used, no pulmonary metastases were observed (Ek et al, 2007b). Dass et al (2007) and Ta et al (2009a) demonstrated 2- and 8-fold reductions in the number of pulmonary metastases when applying chitosan microparticles and a hydrogel delivery system, respectively, to deliver PEDF plasmid.
Although the gross burden of pulmonary metastatic disease appeared to be reduced, we sought to further clarify the effect of PEDF on the metastatic process. Quan et al (2002) first showed that PEDF expression in the avascular zones of the growth plate was likely to inhibit invasion across epiphyseal cartilage. Ek et al (2007a) demonstrated dose-dependent reductions in adhesion and invasion of collagen I using the SaOS-2 cell line. We have replicated these findings and extended them to include the SJSA-1 cell line. Ek et al (2007a) also described a change in cellular morphology with PEDF treatment. With TEM we support these findings, as PEDF-treated SaOS-2 cells showed chromatin condensation, deranged mitochondrial architecture and prominent cell surface processes.
In our study, however, all tumours, irrespective of treatment, were found to be aggressively replacing local bony architecture and invading the surrounding soft tissues and musculature. When sections of lung tissue were examined under ×20 magnification, there was no difference between treatment groups in the number of observed micrometastases. When one considers the dramatic effect of PEDF on the size of pulmonary metastases, it is possible that systemic delivery of PEDF may not only inhibit the metastatic cascade, as demonstrated in previous studies, but may also have a direct effect on proliferation in pulmonary metastases. In order to further clarify the differential contributions of PEDF's anti-metastatic and anti-proliferative effects, an animal model that exploits real-time imaging of metastases would be of clear benefit.
Figure 3. Plain radiographs of tumour-bearing limbs (left) and haematoxylin- and eosin-stained sections of primary tumour involving bone (right); panels show the water control and the PEDF 50 and 500 µg kg⁻¹ per day groups (scale bars, 50 µm). Significant osteolysis and soft tissue invasion were seen for both control and treatment groups.
Further work is needed to clarify the mechanisms of metastasis inhibition in osteosarcoma. Guan et al (2004) showed that PEDF-decreased invasion of malignant U251 glioma cells was related to downregulation of matrix-metalloproteinase-9 (MMP-9), an important enzyme for matrix degradation. Kozaki et al (1998)
showed that PEDF secreted by colon cancer cells bound with high affinity to both collagen I and III and that expression of PEDF was inversely related to its metastatic capacity. The interactions between PEDF, MMPs and collagens, and their role in the metastasis of osteosarcoma have yet to be fully evaluated. The use of an endogenous glycoprotein such as PEDF offers a number of advantages, such as reduced immunoreactivity and chemoresistance. Pigment epithelium-derived factor may be considered a targeted therapy for osteosarcoma, interacting specifically with the deregulated pathways of malignant cells; however, as a physiological agent it is also critical for processes such as tissue healing and homeostasis. For the first time, our results not only confirm a therapeutic effect but also show PEDF to be a safe treatment. Serum and tissue analysis showed no evidence of toxicity, and animals remained well for the duration of the study.
In conclusion, this study provides evidence for PEDF as a therapeutic agent for osteosarcoma. Systemic PEDF restricts growth of both primary osteosarcoma and pulmonary metastases when treatment is delayed until after tumours become clinically palpable. We have optimised an established animal model and used recombinant PEDF to add clinical relevance to the findings. Our results also identify areas in need of further work. Studies are required to characterise the molecular mechanisms of PEDF's anti-osteosarcoma and anti-metastatic activity. Particularly, the use of real-time imaging would be beneficial in order to characterise the role of PEDF in the metastatic process. This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution-NonCommercial-Share Alike 3.0 Unported License. | 2017-11-08T17:26:57.351Z | 2011-10-06T00:00:00.000 | {
"year": 2011,
"sha1": "aa5d4a58e087d877a651bb02ea031b5ad643108d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/bjc2011410.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa5d4a58e087d877a651bb02ea031b5ad643108d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3474720 | pes2o/s2orc | v3-fos-license | Vehicle Counting and Moving Direction Identification Based on Small-Aperture Microphone Array
The varying trend of a moving vehicle's angles provides important intelligence for an unattended ground sensor (UGS) monitoring system. The present study investigates the capabilities of a small-aperture microphone array (SAMA) based system to identify the number and moving direction of vehicles travelling on a previously established route. In this paper, a SAMA-based acoustic monitoring system, including the system hardware architecture and algorithm mechanism, is designed as a single-node sensor for the application of UGS. The algorithm is built on the varying trend of a vehicle's bearing angles around the closest point of approach (CPA). We demonstrate the effectiveness of our proposed method with our designed SAMA-based monitoring system at various experimental sites. The experimental results in harsh conditions validate the usefulness of our proposed UGS monitoring system.
Introduction
The motion parameters of vehicles are important intelligence for an unattended ground sensor (UGS) system and Intelligent Transport System (ITS) [1][2][3][4][5][6]. The fixed-point observation technique is widely employed in UGS and ITS by installing an inductive loop sensor [7], ultrasonic sensor [8,9], seismic sensor [10] and camera [11]. However, these sensing systems suffer from complicated installation, expensive maintenance costs and high power consumption.
The microphone array (MA) sensor provides low cost, low power consumption, and non-line-of-sight measurement, and is widely used for acquiring military intelligence on intruding targets [1,3,6,12]. A network of MA-based surveillance sensors remotely deployed in conjunction with a command center can provide early warning and assessment of enemy threats, near real-time situational awareness to commanders, and may reduce potential hazards to soldiers [13]. Furthermore, the employment of a small-aperture microphone array (SAMA) as a monitoring sensor yields a covert system that does not rely on line of sight and is cheaper than most currently used systems.
The MA-based detection and classification of moving targets in UGS have received much attention [14][15][16][17][18]. However, relatively less attention has been paid to counting and moving direction estimation in UGS. This issue generally relates to the detection of moving targets, and more specifically to the identification of the number and moving direction of vehicles using one or more SAMA sensors. More particularly, this issue pertains to a system that uses acoustic sensors to acquire enough intelligence on moving targets in dynamic, noisy, and highly mobile environments. An 'acoustic trip line' counter was introduced in [6,13] to detect the harmonic components in the prescribed look-direction by an MA with a radius of 0.612 m. Moreover, a method for collaborative signal processing of a pair of MA sensors with 1 m spacing was designed in [19] to determine the number of targets. Nevertheless, such large-aperture systems compromise the stealth and ease of installation that make acoustic sensors attractive for UGS. On the other hand, traffic monitoring systems with MA applied in ITS are proposed in [1][2][3][20]. However, both of the methods proposed in [1,2] need channel synchronization and an ultra-high sampling frequency (48 kHz), which are not realistic in a UGS system. Furthermore, a system based on the amplitude or energy of vehicle-generated sound is easily spoofed or countermeasured in military applications.
The counting of vehicles and the estimation of their moving direction with small aperture and lower sampling rate are a knotty problem. Another challenge for the SAMA sensor in the wild environment is the wind-generated noise. The sound of the vehicle in the real-world environment has free-field characteristics, and the wind noise has noise-field characteristics [21]. It is well known that the wind noise is unavoidable since it cannot be totally removed by a wind shield. According to [21], spatial coherence could be used to distinguish between the noise of the wind and the sound of the vehicle for each frequency bin. Therefore, we employed a spatial coherence-based method to select the useful bands for determining the vehicle direction, counting vehicles and estimating their moving direction.
In this paper, a SAMA-based system for counting vehicles and estimating their moving direction is provided and its aperture is only 4 cm. We defined a decision zone (DZ) near to the closest point of approach (CPA) to identify the number and moving direction of vehicles. Since the direction of arrival (DOA) estimation error around the CPA is relatively small, we can achieve higher estimation accuracy. The interference of wind noise in the real-world environment is reduced through the estimation of the useful frequency bands by spatial coherence. This paper is organized as follows. Section 2 illustrates the design of the SAMA sensor system, including the system hardware architecture and the DOA estimation algorithm. Section 3 describes the vehicle counting and moving direction estimation scheme based on the calculated DOA. System verification and experimental results in different situations are given in Section 4 and conclusions are presented in Section 5.
SAMA System Architecture Design
In general, the uniform array can provide balanced space for circuit design and the uniform circular array (UCA) has the same resolution in all directions. The vehicle signal occupies the frequency bands from 100 Hz to 3000 Hz [22]. The aperture of the array has to satisfy the spatial sampling criterion in all the frequency bands to avoid performance degradation due to spatial aliasing. Therefore, to satisfy the spatial sampling criterion d ≤ 0.5λ, the array aperture should be no bigger than 5 cm, where d is the minimum distance between any two array microphones, and λ is the wavelength of the acoustic signal. Finally, uniform circular geometry with an aperture of 4 cm is employed to deploy the microphones [23].
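To make the aperture constraint concrete, the sketch below (plain Python; the speed of sound is taken as roughly 343 m/s, an assumed value not stated in the text) evaluates d ≤ 0.5λ at the highest signal frequency of 3000 Hz.

# Spatial sampling constraint d <= 0.5 * lambda at the highest frequency of interest.
SPEED_OF_SOUND = 343.0  # m/s, assumed value at about 20 degrees C

def max_mic_spacing(f_max_hz):
    """Largest allowed spacing (m) between any two microphones to avoid spatial aliasing."""
    wavelength = SPEED_OF_SOUND / f_max_hz
    return 0.5 * wavelength

print(f"{max_mic_spacing(3000.0) * 100:.1f} cm")  # ~5.7 cm, so a 4-5 cm aperture satisfies the criterion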
The block diagram of the prototype SAMA system is depicted in Figure 1. The system is divided into three modules according to their functions: MA module (Module 1: MA); preprocessing and sampling module (Module 2: P&S); and real-time processing or data acquisition module (Module 3: P/A). The acoustic signals from the MA module are sampled in the P&S module to obtain four simultaneous digital signals. Because the filters and amplifiers must be synchronized, a comparatively strict demand is placed on the consistency of the four channels. The function of the P/A module is configured by users, either for real-time processing by digital signal processing (DSP) or for storing the signals in the memory device for further analysis. As shown in Figure 2, the system consists of a mainboard as well as an extended board connected by a flexible printed circuit. The mainboard consists of a UCA system with four ADMP504 MEMS microphones (Analog Devices, Norwood, MA, USA), a DSP (ADSP21375, Analog Devices, Norwood, MA, USA) as the core processor, a MAXIM MAX11043 four-channel, 16-bit analog-to-digital converter (ADC) (Maxim Integrated Products, Sunnyvale, CA, USA) and supplemental hardware circuits. The MAX11043 contains one versatile filter block and programmable-gain amplifier per channel. The extended board contains a CSR BC6415 Bluetooth module (Cambridge Silicon Radio, Cambridge, UK), a data acquisition interface and a debug interface. The hardware components that make up the system are illustrated in Figure 2. In general, the aperture of our system is very small (4 cm), which is an advantage for portability and mobility, but a challenge for high-accuracy DOA estimation [23].
DOA Estimation with Spatial Coherence
DOA estimation using acoustic signals is inevitably contaminated by wind noise, which is the most common interference in an outdoor environment. The wind turbulence on the microphone is comparatively incoherent, and its speed is much slower than that of sound [24]. Spatial coherence is a similarity indicator for signals in the frequency domain. It describes the coherence between two measurements at two locations [21]. The spatial coherence function between two microphone signals, x1 and x2, is equal to the cross power spectrum G_x1x2(f) divided by the square root of the product of the two auto-power spectra. Specifically, the spatial coherence of x1 and x2 is defined by Equation (1): γ_x1x2(f) = G_x1x2(f) / √(G_x1x1(f) G_x2x2(f)), where f denotes the frequency of interest. The complex cross power spectrum defined in Equation (2), G_x1x2(f) = ∫ R_x1x2(τ) e^(−j2πfτ) dτ, is the Fourier transform of the cross correlation of x1 and x2 in Equation (3), R_x1x2(τ) = E[x1(t) x2(t + τ)].
Here, x1 and x2 are two different channel signals from the SAMA and E denotes the mathematical expectation (for ergodic random processes the ensemble average can be replaced by a time average). Carter [25] gives an analytical estimate of the bias E[|γ̂_x1x2|²] − |γ_x1x2|² as a function of the true spatial coherence |γ_x1x2|², the fast Fourier transform (FFT) time duration T and the time delay D.
In our case, T = 125 ms (1024 sampling points), D = 8.31 × 10⁻⁵ s (array aperture of 4 cm). Figure 3a shows the acoustic signal of a car passing the SAMA sensor; the wind scale [26] is 4. Spatial coherence is depicted in Figure 3b to show whether a frequency bin is dominated by vehicle sound or wind noise. To identify the useful frequency band of the signal, we check whether the spatial coherence is above a threshold in each frequency bin. In this paper, a threshold of 0.7 is chosen based on simulation and experiment. If the spatial coherence of a certain frequency bin is larger than 0.7, then this bin will be selected for direction finding and other uses; otherwise, it will be discarded.
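A minimal sketch of this per-bin band selection is given below, using SciPy's Welch-based magnitude-squared coherence as a stand-in for the paper's estimator. The frame length, sampling rate and 0.7 threshold follow the text; everything else, including the variable and function names and the synthetic signals, is illustrative.

import numpy as np
from scipy.signal import coherence

FS = 8192          # Hz, sampling rate used by the SAMA system
NPERSEG = 1024     # 1024 samples = 125 ms frames, as in the paper
THRESHOLD = 0.7    # coherence threshold for keeping a frequency bin

def select_useful_bins(x1, x2):
    """Return frequencies and a boolean mask of bins dominated by the (coherent)
    vehicle sound rather than the (incoherent) wind noise."""
    f, cxy = coherence(x1, x2, fs=FS, nperseg=NPERSEG)
    return f, cxy >= THRESHOLD

# Illustrative use with two synthetic channels: a shared 300 Hz tone plus independent noise
t = np.arange(FS * 4) / FS
tone = np.sin(2 * np.pi * 300 * t)
x1 = tone + 0.5 * np.random.randn(t.size)
x2 = tone + 0.5 * np.random.randn(t.size)
f, keep = select_useful_bins(x1, x2)
print(f[keep][:5])  # bins near the shared 300 Hz tone survive the threshold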
An improved multiple signal classification (MUSIC) algorithm is employed to DOA estimation associated with spatial coherence to discriminate between the wind noise and the acoustic signal of a vehicle. The algorithm first tests the spatial coherence for each frequency bin, then identifies the useful frequency bands for wind noise robust DOA estimation. Details of identifying spatial coherence and selecting useful frequency bands for DOA estimation are discussed in [23].
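As an illustration of the narrowband MUSIC step applied to one selected frequency bin, a compact sketch is given below. It is not the paper's improved algorithm: the UCA geometry (four microphones on a 4 cm-aperture circle) follows the text, while the far-field steering model, scan grid and function names are assumptions made for illustration.

import numpy as np

N_MICS, RADIUS, C = 4, 0.02, 343.0   # 4-element UCA, 2 cm radius (4 cm aperture), speed of sound

def uca_steering(freq_hz, az_deg):
    """Far-field steering vectors of the UCA for a grid of azimuths (degrees)."""
    mic_az = 2 * np.pi * np.arange(N_MICS) / N_MICS
    az = np.deg2rad(np.atleast_1d(az_deg))[None, :]
    # time advance of each microphone relative to the array centre for a source at azimuth az
    adv = RADIUS * np.cos(az - mic_az[:, None]) / C
    return np.exp(2j * np.pi * freq_hz * adv)            # shape (N_MICS, n_angles)

def music_doa(snapshots, freq_hz, n_sources=1, grid=np.arange(0.0, 360.0, 1.0)):
    """snapshots: (N_MICS, n_frames) complex STFT values of one frequency bin."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, eigvecs = np.linalg.eigh(R)                       # eigenvalues in ascending order
    noise_space = eigvecs[:, : N_MICS - n_sources]       # eigenvectors of the smallest eigenvalues
    A = uca_steering(freq_hz, grid)
    spectrum = 1.0 / np.sum(np.abs(noise_space.conj().T @ A) ** 2, axis=0)
    return grid[np.argmax(spectrum)], spectrum

In the system described above, such per-bin pseudospectra would be formed only on the coherence-selected bins and then combined before picking the bearing estimate for each 125 ms frame.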
In addition, inspired by [2], we designed an inaccurate angle regulation (IAR) method to adjust the estimated target angle. One angle value calculation is performed on a signal length of 125 ms (frame length) and the frame moving step is equal to the frame length. If the motion speed is 60 km/h, the vehicle moves about 2 m in this time interval, which corresponds to an angle deviation of 0°-11° (assuming that sensors are placed 10 m away from the road). This angle deviation depends upon the location of the vehicle in relation to the MA. An angle estimate is treated as incorrect, and probably falsely estimated, if it differs from its two closest neighbouring angle values by more than 11° (the deviation threshold). In that case, a value is linearly fitted from the neighbouring angles to replace it. Following the aforementioned principle, the deviation threshold is proportional to the speed of the vehicle.
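The arithmetic behind the roughly 11° threshold, and a simple version of the neighbour-based replacement, are sketched below. The frame length, speed and stand-off distance follow the text; the helper names and the sample angle track are illustrative.

import numpy as np

def deviation_threshold_deg(speed_kmh=60.0, frame_s=0.125, standoff_m=10.0):
    """Worst-case bearing change between consecutive frames, reached near the CPA."""
    step_m = speed_kmh / 3.6 * frame_s                   # ~2.08 m per 125 ms frame at 60 km/h
    return np.degrees(np.arctan(step_m / standoff_m))    # ~11.8 degrees

def regulate_angles(angles_deg, threshold_deg):
    """Replace an estimate by the linear fit (here the midpoint) of its two neighbours
    when it jumps away from both of them by more than the threshold."""
    out = list(angles_deg)
    for i in range(1, len(out) - 1):
        if (abs(out[i] - out[i - 1]) > threshold_deg and
                abs(out[i + 1] - out[i]) > threshold_deg):
            out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

print(round(deviation_threshold_deg(), 1))           # ~11.8
print(regulate_angles([40, 38, 95, 34, 32], 11.8))   # the 95-degree outlier is replaced by 36.0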
Vehicle Counting and Moving Direction Estimation
In this section, we describe the method for estimating the number and moving direction of vehicles travelling on a previously established route from the calculated DOA. Through statistical analysis, the varying trend of a vehicle's bearing angles can provide the number of vehicles, as shown in Figure 4a. Since the angle estimation error around the CPA is relatively small, we defined a DZ (−15°, +15°) around the CPA (0°) for subsequent processing. Then, we designed a vehicle counting and moving direction estimation method by analyzing the relationship of the angles with the DZ as described in Figure 5. The reference coordinate system is shown in Figure 6b. Angles that fall into the DZ three or more times indicate a vehicle passing the SAMA sensor. Subsequently, the number of vehicles is increased by 1 and the moving direction can be obtained by checking the varying trend of the angles. The vehicle is approaching from the left if the angles are gradually decreasing, and from the right otherwise. At this stage, the detection of a vehicle and its moving direction is finished. After those operations, we skip eight frames to avoid repeated counting of the same vehicle because, in such a short time, another vehicle will not appear.
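A minimal sketch of this counting-and-direction rule is given below; the ±15° decision zone, the three-hit requirement and the eight-frame skip follow the text, while the consecutive-frame grouping, data layout and function name are assumptions made for illustration.

def count_and_directions(angles_deg, dz=15.0, min_hits=3, skip_frames=8):
    """Scan a per-frame bearing sequence and return (vehicle_count, list_of_directions)."""
    count, directions = 0, []
    i = 0
    while i < len(angles_deg):
        # collect consecutive frames whose bearing falls inside the decision zone
        j = i
        while j < len(angles_deg) and abs(angles_deg[j]) <= dz:
            j += 1
        hits = j - i
        if hits >= min_hits:
            count += 1
            # decreasing angles -> approaching from the left, otherwise from the right
            directions.append("left" if angles_deg[j - 1] < angles_deg[i] else "right")
            i = j + skip_frames          # skip frames to avoid double-counting the same vehicle
        else:
            i += 1
    return count, directions

# Illustrative bearing track of one vehicle approaching from the left and passing the CPA
track = [60, 45, 30, 20, 12, 6, 0, -7, -14, -25, -40]
print(count_and_directions(track))       # (1, ['left'])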
Assuming that the velocity of the vehicle (denoted as v) is uniform, its DOA satisfies the inverse tangent law of Equation (5), θ(t) = arctan(v(t − t0)/l), where t0 represents the moment of CPA and l represents the distance between the sensors and the lane center, as depicted in Figure 6. Moreover, the ideal DOA curve of three vehicles, without mutual interference, is shown in Figure 7. However, due to the mutual interference of vehicles, the actual DOA curve of three vehicles passing a SAMA sensor is shown from 0 s to 30 s in Figure 4b. Hence, our method may produce false targets between two vehicles, as annotated in Figure 4b. Because the DZ is centred on 0°, angles inside it imply that the vehicle is close to the CPA. Therefore, as shown by the red dotted line in Figure 4a, the frequency energy of the frequency bands selected in Section 2.2 is employed to roughly judge whether those frames are in the vicinity of the CPA. Then, we can exclude false targets in order to achieve a highly accurate vehicle count. Since counting and moving direction estimation are based on the same vehicle, we should not analyze either of them individually. The estimation of moving direction is executed immediately once a vehicle is counted.
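A short sketch of this geometric model is given below; it simply evaluates the arctangent bearing law for a constant-speed pass (the speed, stand-off distance and CPA time are illustrative values, not taken from the paper's experiments).

import numpy as np

def ideal_doa_deg(t_s, speed_mps=13.9, cpa_time_s=15.0, standoff_m=10.0):
    """Ideal bearing (degrees, 0 at the CPA) of a vehicle moving at constant speed
    past a roadside sensor placed standoff_m from the lane centre."""
    return np.degrees(np.arctan(speed_mps * (t_s - cpa_time_s) / standoff_m))

t = np.arange(0.0, 30.0, 0.125)              # one bearing estimate per 125 ms frame
doa = ideal_doa_deg(t)
print(round(doa[0], 1), round(doa[-1], 1))   # sweeps from about -87 to +87 degrees, crossing 0 at the CPA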
DOA estimation is easily disturbed by other interference signals when the target is far away from the CPA, and the DOA estimation error is positively related to the distance from the CPA. The proposed method, based on a DZ that tightly surrounds the CPA, is therefore more effective.
Experimental Conditions and Datasets
Ground vehicles are moving targets of focal interest to UGS. Three types of vehicles were therefore employed in our experiment, and part of their specifications is listed in Table 1. The acoustic signal of a moving target is sampled by the SAMA sensor at an 8192 Hz sampling rate. In addition, each vehicle was equipped with GPS to obtain its velocity and its distance from the sensor. The sensor was located parallel to the road, about 5 to 15 m away from the lane center, and the experimental layout is shown in Figure 6. Meanwhile, wind scales were recorded by an ultrasonic anemometer at the same site during our field experiments. The noise emitted by a vehicle at low and medium speeds is composed of tyre/road noise and mechanically originated noise [27]. Considering the effect of road type and speed, experiments were conducted in different terrains at different speeds [28]. Experimental studies were performed from June 2013 to December 2016 on Chongming Island, Zhoushan Island, Nanjing, Anhui and a suburban district around Shanghai, where the wind scales are usually less than 6. The compositions of our sample set are shown in Table 2 and photographs of four experimental environments are shown in Figure 8. At those experimental sites, vehicles moved at predetermined velocities of 30 to 60 km/h while datasets were collected. Some of the measurement sites are military training grounds with some background activity at times. Samples at higher speeds were not available because of the poor road conditions in the real-world environment. Moreover, for safety reasons, experiments with tracked vehicles driving at 60 km/h on the concrete road were not implemented.
Results and Discussion
In this section, the experimental data are cropped and a total of 306 min of useful acoustic samples from different sites are analyzed. Figure 9 presents the counting accuracy for three types of vehicles in four terrains at different speeds. According to Figure 9, the influence of road type on counting accuracy is greater than that of vehicle speed. In addition, these two factors have the greatest impact on the counting accuracy of cars, a smaller impact on trucks and the least impact on tracked vehicles. The faster the speed, the greater the vehicle noise, and thus the higher the achievable counting accuracy. Moreover, regardless of the type of vehicle, the counting accuracy on the sand road is the highest among all the tested terrains. The average counting accuracy for the three types of vehicles is shown in Table 3 without considering the effect of terrain and speed. The accuracy for the tracked vehicle is as high as 96.42% due to its high level of noise. The counting mechanism ensures that it does not produce false positive detections, even in strong wind conditions. However, ultra-close-distance (less than 20 m) driving and overtaking are the main factors that result in false detection. Fortunately, both of these situations, commonly encountered in ITS, are rare when troops are moving. Therefore, the proposed method is suitable for monitoring military activity with UGS.
Since counting and moving direction estimation are based on the same vehicle, a one-to-one relationship exists between them. Across our experiments, the moving directions of all correctly counted vehicles were correctly estimated (accuracy: 100%). It is worth emphasizing that falsely counted vehicles are not involved in this calculation. Given the decision mechanism, which judges the increasing or decreasing trend of angles in the vicinity of the CPA, this result is reliable. Because of the 100% estimation accuracy of the moving direction, the overall performance of the system is dominated by counting accuracy.
In field applications, in contrast to ITS, vehicle speeds are low and vehicle flow is limited. Therefore, we can achieve satisfactory vehicle counting and moving direction estimation with a small aperture and a lower sampling rate. However, ITS is characterised by faster vehicle speeds, higher traffic intensity and multi-lane interference. Consequently, it needs large-aperture arrays, synchronous acquisition and a very high sampling rate. Those demands lead to high power consumption and make a UGS system difficult to implement. Hence, we did not compare the performance of the proposed method with the methods in [1,2]. We acknowledge that our proposed method performs poorly in some ITS application scenarios. The proposed method addresses the counting and moving direction estimation problem with an array aperture of only 4 cm. The method also introduces spatial coherence for wind noise suppression and IAR to overcome interference from unrelated targets. When different types of vehicles are closely spaced, however, the method fails because the acoustic signals are dominated by the noisiest vehicle. The problem of separating signals in multiple-target scenarios therefore remains to be solved and will be considered in future work.
Conclusions
In this paper, we proposed a SAMA-based single-node acoustic monitoring system with an aperture of only 4 cm. The proposed method includes vehicle counting and motion direction estimation. The method obtains the required intelligence by analysing the varying trend of a moving vehicle's angles within the vicinity of the CPA. Spatial coherence was assessed to select frequency bands for wind noise suppression and DOA estimation. We applied our proposed system in four different experimental environments, and assessed the accuracy of vehicle counting and motion direction estimation. The experimental results in harsh conditions confirmed the effectiveness of our proposed UGS monitoring system. | 2017-08-19T22:40:14.606Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "114b0ab889b3e5ecf9c21e90f83d0b1a93185d1e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/17/5/1089/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "114b0ab889b3e5ecf9c21e90f83d0b1a93185d1e",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Engineering"
]
} |
259302639 | pes2o/s2orc | v3-fos-license | The value of consumer neuroscience research for contemporary marketing knowledge
Introduction
Although consumer neuroscience offers great potential, research in the field is still scarce, particularly when compared to the application of other empirical methods. In this opinion article, we want to reflect upon the potential additional value of consumer neuroscience in selected areas of marketing, while also referencing other more recent approaches such as big data. We base our elaboration on qualitative insights gained from an exploratory look at consumer neuroscience papers with regard to the additional value for marketing. This will provide the basis for suggestions for future research in this field.
One reason why the number of consumer neuroscience papers in marketing is still lower than other empirical papers could lie in the potential uncertainty among researchers about whether consumer neuroscience can actually provide insights that are of significant relevance for marketing academics and practitioners, beyond conventional research methods (Plassmann and Karmarkar, 2015;Lee et al., 2018). Moreover, resource-intensity as well as methodological and ethical issues mentioned in the literature may be another potential explanation for the relatively slow development of consumer neuroscience (Javor et al., 2013). Consumer neuroscience papers have been published in multiple disciplines (Smidts et al., 2014), which may make it harder for marketers to appreciate their value if there is a lack of familiarity with journals from other disciplines.
In contrast, big data, as another alternative approach to self-reports, has already gained significant importance for applied marketing. It offers benefits for marketing over conventional research methods due to access to a high amount of real-time information in a natural environment (Erevelles et al., 2016).
The goal of this opinion article is to outline areas where consumer neuroscience can provide insights relevant for marketing academics and practitioners beyond conventional research methods. To do so, we took an exploratory look at the findings of consumer neuroscience papers, particularly in advertising, branding, and product management. By also touching briefly on big data, we are able to provide another outlook of the prospects of consumer neuroscience in marketing in conjunction with other methods.
Evolution of consumer neuroscience
A deep understanding of consumers is undoubtedly a significant component of successful marketing. Conventional research methods based on self-reports, such as questionnaires, interviews, focus groups, or behavioral experiments, offer the advantage of high acceptance, but provide only limited insights into the subconscious processes of
consumers, which play a significant role in decision-making processes (Plassmann and Karmarkar, 2015). Consumer neuroscience can generate insights into neural mechanisms, such as emotion, reward, memory, and attention, which are central to explaining consumer behavior and consumer decision making (Solnais et al., 2013; Camerer and Yoon, 2015; Wolf and Ueda, 2021). The application of neuroscientific methods can provide more objective insights into consumer preferences and decision making by eliminating socially desirable answers or strategic behavior, as well as recall and response biases (Camerer et al., 2005; Hubert and Kenning, 2008; Kenning and Plassmann, 2008; Reimann et al., 2011; Yoon et al., 2012; Balconi and Sansone, 2021; He et al., 2021). By looking at neuroscientific and psychophysiological processes, we can gain a deeper understanding of consumers and contribute to existing marketing knowledge (Venkatraman et al., 2012, 2015; Smidts et al., 2014). Hence, consumer neuroscience offers a lot of potential. To move the field forward, it is important to reflect upon the potential additional value of consumer neuroscience in more detail. There are three types of additional value, providing: (1) completely new insights that are not able to be obtained with conventional research methods, (2) complementary insights, in terms of explaining consumer behavior and the effectiveness of certain marketing actions not able to be explained with conventional methods, and (3) confirmatory insights, which means confirming knowledge that has been generated with traditional self-report methods by adding a neuroscientific or psychophysiological description.
Areas of additional value of consumer neuroscience
Based upon our qualitative exploration, we will now outline important areas of application for which consumer neuroscience could provide valuable insights.
Advertising stimuli and communications
The additional value of consumer neuroscience has been especially evident in the field of advertising, for example, when testing the effectiveness of different components of advertisements. It can help to gain a deeper understanding of the consequences of certain marketing actions, which would potentially remain unobserved when relying solely on conventional research methods. Marketers can benefit from the potential of neuroscientific methods during the creation phase of marketing activities by testing the effect of stimuli pre-launch (Rossiter et al., 2001). Plassmann et al. (2007) provide further insights into the additional benefits of applying neuroscience to gain a better understanding of how advertising works. At the same time, they discuss limitations and propose directions for further research in this area. Since then, consumer neuroscience has gained further attention in marketing research, especially through review articles on the emergence and development of the topic (e.g., Lee et al., 2018), special sessions at major conferences in marketing (e.g., at the European Academy of Marketing, Koller and Lee, 2016), and, for example, a special issue published in the Journal of Marketing Research (Camerer and Yoon, 2015).
Moreover, consumer neuroscience also supports attempts to explain the effectiveness of already implemented activities. The application of neuroscientific methods is especially valuable when the performance of certain marketing activities cannot be fully explained by conventional research methods. For instance, Guerrero Medina et al. (2021) looked at the effect of CSR messages on consumer behavior. Derived from previous literature, they argued that it was difficult for companies to translate their CSR communications into an increase in sales. By applying a neuroscientific method, they were able to identify possible reasons for that, which would have remained undetected if only traditional research techniques had been applied.
Consumer neuroscience can also be helpful when studying topics that are at high risk of being influenced by a social desirability bias. In a study by Vezich et al. (2017), consumer self-report data suggested a higher liking of green ads over controls, whereas fMRI data showed the opposite.
Branding and product attributes
The field of branding can also benefit from neuroscientific methods. Consumer brand perception as well as brand associations are highly influenced by implicit mechanisms, which are difficult to study with conventional research methods. Regarding brand associations, in a study by Camarrone and van Hulle (2019), conventional and neuroscientific methods produced divergent results for two brands. Neuroscientific methods revealed a difference in associations between the two brands, while selfreports did not. Research on attitudes toward brands can also benefit from applying consumer neuroscience techniques (Walla et al., 2011).
Consumer neuroscience can also help when there are contradictory insights on a specific topic. The methodological limitations of conventional methods could be one reason for contradictions in the literature (Wolf and Ueda, 2021). The application of neuroscientific methods can help to provide objective clarity for these findings. For example, there has been a debate in the literature on whether brands are perceived as human-like beings or rather like cultural objects; neuroscientific studies hint at the latter (Yoon et al., 2006;Javor et al., 2018).
The application of neuroscientific methods can also be useful in the field of product evaluation as it can reveal, for example, fine distinctions in the evaluation and perception of product attributes. For instance, Frost et al. (2015) observed, in contrast to expectations, a greater activation of areas responsible for taste intensity for wines with a low alcohol level than for wines with higher alcohol levels.
Prediction
Furthermore, neuroscientific methods have proven to be meaningful for testing or predicting the success of various marketing stimuli (Kühn et al., 2016), either by applying a combination of neuroscientific and conventional research methods or by applying neuroscientific methods alone. Venkatraman et al. (2015) found that the application of fMRI explains the highest level of variance of advertising elasticities, going beyond the capabilities of self-reports. Motoki et al. (2020) observed that a combination of self-report data and neuroscientific methods forecasts the viral success of advertisements on social media.
Emotions
The neuroscientific measurement of emotions plays a significant role in predicting behavior. Pozharliev et al. (2022) observed that physiological arousal could predict consumer behavior, while self-reported affect intensity could not. Consumer neuroscience can help to shed light on conscious vs. unconscious emotions. Bettiga et al. (2020) found that consumers are aware of their emotions regarding hedonic products but not functional products, implying that insights gained from conventional methods can be biased and incomplete. When marketers want to assess consumers' emotions about functional products, the application of neuroscientific methods is potentially more effective. Bettiga et al. (2017) found that conscious and unconscious arousal are two different emotional responses that influence attitudes toward products differently.
Consumer neuroscience vs. big data
We also explored the prospect of consumer neuroscience by contrasting it with the potentials of big data. The two methodological approaches differ in their characteristics but can also work as a meaningful complement. For instance, in the field of creating advertisements, neuroscientific methods can support marketers, especially in testing certain marketing stimuli prelaunch, while big data, which is dependent on existing data, can primarily provide insights into the actual effectiveness post-launch. Additionally, in the field of branding, marketers could benefit from a combination of both methods. Data mining can capture a large amount of, for example, brand data, and is therefore a valid method from a company's perspective (Culotta and Cutler, 2016). However, as implicit perceptions of a brand play a fundamental role (Walla et al., 2017), classic self-reports as well as data mining could fall short. Moreover, while big data is capable of collecting a large amount of data on consumers or market trends, and subsequently building the base for developing marketing actions (Seung-Pyo et al., 2018), consumer neuroscience can be a helpful tool when emotions or subconscious phenomena play a fundamental role (Ramsoy et al., 2019).
Conclusion and further research
The literature tells us that consumer neuroscience can be a powerful tool in the development phase of various marketing activities but also in explaining the effectiveness or ineffectiveness of already implemented activities. Also for making predictions, consumer neuroscience can provide value beyond the application of conventional research methods. Furthermore, consumer neuroscience can offer insights especially for activities where emotions are involved or social desirability biases exist. The application of neuroscientific tools can also help to bring clarity to contradictory findings. In contrast, for already well-established theories and models as well as topics where emotions play a subordinate role or where consumers can easily articulate their opinion, neuroscientific insights are more of a confirmatory character.
While big data is an emerging discipline, consumer neuroscience can also offer relevant insights for marketers that are not obtainable with either big data or conventional self-report-based research methodologies, giving consumer neuroscience the potential to develop alongside these strong methodologies.
The characteristics of consumer neuroscience and big data differ widely, and the two approaches can act as useful complements. Our opinion article is just a starting point with regard to the evaluation of the additional value of consumer neuroscience. We suggest conducting a more comprehensive analysis of papers applying neuroscientific methods or big data approaches in, for example, the areas of advertising, branding, product management, and pricing. Such a detailed analysis would enable the evaluation of potential overlaps and of opportunities that leverage the unique potential of the two approaches. Another interesting area of future research could lie in evaluating the potential of consumer neuroscience to help handle the major challenges of today's society (Walla et al., 2014). Moreover, the translational aspects of consumer neuroscience, such as gender or culture (Braeutigam and Kenning, 2022), as well as the clinical perspective (Javor et al., 2023), should be considered in more detail. We hope that our opinion article motivates researchers to look more closely at the potential that consumer neuroscience can offer. Moreover, we would welcome more research dealing with the advantages, challenges, and ethical and societal concerns related to consumer neuroscience as well as big data, both separately and in conjunction.
Author contributions
KH was the leading author responsible for the conceptual outline, the exploration of literature, and the first draft of the manuscript. MK contributed to the revision and writing up of the provided draft. Both authors contributed to the article and approved the submitted version.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated
organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2023-07-02T05:09:14.892Z | 2023-06-16T00:00:00.000 | {
"year": 2023,
"sha1": "9b9e2ab6a9deb61d929150272677ec926a178a37",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fnhum.2023.1214848",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b9e2ab6a9deb61d929150272677ec926a178a37",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259726578 | pes2o/s2orc | v3-fos-license | Theranostic Nanoparticles: Revolutionizing Cancer and Imaging
Diagnostic agents based on nanoparticles are frequently suggested. However, diagnostic nanoparticles, except for iron oxide nanoparticles, have not yet been extensively utilized in clinical settings. This is because of concerns about toxicity, biodegradation, and elimination as well as difficulties in achieving reproducible particle uniformity and acceptable pharmacokinetic properties. The biologic behavior of nanoparticles should be considered when identifying reasonable clinical applications. For instance, numerous nanoparticles are taken up by macrophages and accumulate in macrophage-rich tissues. As a result, they can be utilized to provide contrast in lymph nodes, the liver, the spleen, and inflammatory lesions. Nanoparticles can also effectively label cells, making it possible to study cell migration in vivo and locate implanted (stem) cells and tissue-engineered grafts. Because it is difficult to control their pharmacokinetic properties, the potential of using nanoparticles for molecular imaging is limited. While targeted nanoparticle delivery to extravascular structures is frequently limited and difficult to separate from an underlying enhanced permeability and retention (EPR) effect, ideal nanoparticle targets are located on the endothelial luminal surface. In conclusion, nanoparticles hold significant clinical potential for other diagnostic and theranostic applications, even though they are not always the best option for molecular imaging because smaller or larger molecules may provide more specific information. This literature review endeavored to portray theranostic nanoparticles and their utilization in the treatment and diagnosis of cancer. This paper also looked at nanoparticles and their potential to lower the radiation dose and improve the quality of computed tomography scans. This review also attempted to demonstrate the benefits and drawbacks of incorporating these nanoparticles into the modern healthcare system. To gather the pertinent data for this review, 18 scholarly sources were chosen and investigated. Every day, new nanotechnology prototypes and developments are made, created, and analyzed. Positive aspects of incorporating nanoparticles into cancer treatment have been documented. On the other hand, there are a lot of people who still do not like the idea of using this technology and think it is just a fantasy or novelty. These technologies have the potential to completely transform the health care industry, according to studies of their effects on the cancer industry. When all these factors are taken into consideration, it appears that this technology is being used more frequently than ever before; negative perceptions, fear of change, a lack of specific studies, cost, toxicity, and synthesis challenges have not substantially undermined this trend. The study and manipulation of matter in the range of one to one hundred nanometers is known as nanotechnology. Innovative therapeutic and diagnostic methods are at the heart of nanomedicine, or the use of precisely manufactured materials at this length scale for medical purposes. Due to their extremely small size, high surface area to mass ratio, and high reactivity, nanomaterials are physically and chemically distinct from bulk materials of the same composition. Using these features can alleviate some of the drawbacks of conventional medical and diagnostic drugs.
Introduction
Since the 2010s, the use of nanoparticle imaging has been emerging. Eighteen relevant articles were identified and referenced, giving an impactful idea of the futuristic usage of this imaging technique; of course, more thought and resources need to be invested to reach a conclusion on its pros and cons in comparison with dye imaging. The longer half-life of nanoparticles can cause renal consequences that are yet to be determined. Colloid-structured particles stick better to ligands, which gives better imaging (1). Reasonable indications for the clinical utilization of nanoparticles should consider their biologic behavior. For example, many nanoparticles are taken up by macrophages and accumulate in macrophage-rich tissues. Thus, they can be used to provide contrast in liver, spleen, lymph nodes, and inflammatory lesions (e.g., atherosclerotic plaques). Furthermore, cells can be efficiently labeled with nanoparticles, enabling the localization of implanted (stem) cells and tissue-engineered grafts as well as in vivo migration studies of cells. The potential of using nanoparticles for molecular imaging is compromised because their pharmacokinetic properties are difficult to control (1). Ideal targets for nanoparticles are localized on the endothelial luminal surface, whereas targeted nanoparticle delivery to extravascular structures is often limited and difficult to separate from an underlying enhanced permeability and retention (EPR) effect. The majority of clinically used nanoparticle-based drug delivery systems are based on the EPR effect, and, for their more personalized use, imaging markers can be incorporated to monitor biodistribution, target site accumulation, drug release, and treatment efficacy (1).
Theranostic nanoparticles: revolutionizing cancer and imaging
Nanotechnology, a technology once believed to be a figment of the imagination and part of the fictional world, is now revolutionizing cancer and diagnostic medical imaging. Featuring in many movies such as Avengers: Infinity War, Spiderman: Far from Home, and even children's movies such as Big Hero 6 and Ben 10: Alien Swarm, nanotechnology is now a reality. It has gone from mere fantasy to making a revolutionary breakthrough in diagnosing and treating cancer. Cancer has been known to humankind for centuries. Although computed tomography is the most common imaging modality used for the diagnosis and treatment of cancer, its high dosage is a pressing issue. Moreover, conventional drug treatment's non-specific bio-distribution and ineffectiveness are concerning. With continued innovation and human ingenuity, nanoparticles have the potential to be the long-awaited aid and tool for reversing cancer's unending destructive path.
Methodology
A comprehensive search of numerous scholarly databases was used to conduct a literature review. The search was restricted to articles from the year 2010 onward. Eighteen articles were chosen after thoroughly reading and evaluating numerous candidates; these were selected for their application of nanoparticles. Articles on the benefits of folic acid-modified gold nanoparticles closely matched the intended application, as well as the way their duration was measured, and were deemed satisfactory for this literature review. Cancer was one of the keywords used to find the right articles (1), along with nanoparticles in computed tomography and the future of nanotechnology in medicine. The selected articles covered modern applications in the medical sciences: nanoparticles lowering radiographic dose, nanoparticles improving image quality, and the limitations of nanoparticles. Articles were excluded when they did not share much resemblance with the aim of this review, and ambiguity was a major criterion for excluding articles irrelevant to our systematic review. Although the articles contain varied knowledge, the review could not proceed without introduction and topic selection, which is the most important part of beginning a study (2).
Figure 1: Nanoparticles in genetic material inoculation
Results of nanoparticle technology revolutionizing the modern world and imaging
1. Cancer
Cancer is a condition in which abnormal cells divide out of control and may spread to other tissues. Trillions of cells make up a human body. These cells multiply and grow to make new cells. Cellular apoptosis occurs when these cells become overworked or damaged, and new cells take their place. However, abnormal cancerous cells do not fall into this category: a cancer cell deviates from its designated function and overrides the signal for apoptosis (National Cancer Institute, 2015). Cancer cells continue to divide and form growths called tumors. Tumors may be non-cancerous (benign) or cancerous (malignant). There are diverse types of cancer; the difference among them is where the cancer occurs (3). Similarly, the National Cancer Institute (2015) states there are more than a hundred known types of cancer. However, Vizirianakis indicated that the cause of more than half of the cancers discovered is unknown, making it extremely difficult to prevent the occurrence and recurrence of specific cancers (4). In comparison, the National Cancer Institute (2015) asserts cancer is a genetic disease; it is caused by mutations and changes to the genes of cells, forcing the cells to perform functions improperly. Genetic changes may be inherited from parents or arise because of damage to DNA by certain environmental exposures such as radiation, alcohol, asbestos, arsenic, or chemicals in tobacco smoke (4). Moreover, depending on the type of cancer, one may feel and experience different signs and symptoms. Often, cancer does not cause pain. Based on research conducted by the National Cancer Institute (2019a), some of the notable symptoms cancer may cause include cough, eating problems, fatigue, fever, weight gain or loss, and skin changes such as the formation of new moles. Furthermore, cancer may also cause swelling or lumps in the neck, underarm, stomach or groin area, as well as neurological problems such as headaches and seizures. Cancer is one of the main causes of millions of deaths worldwide annually. If a patient is experiencing symptoms suggesting it may be cancer, a doctor may order a lab test such as an imaging test. The physician may also order a biopsy to confirm his or her diagnosis. Imaging tests provide physicians with images of the inside of a patient's body, thus allowing a physician to see whether a tumor is present. The most commonly used imaging modality for the diagnosis of cancer is computed tomography (CT).
Computed Tomography (CT Scan)
According to Mahan and Doiron (2018), CT scans offer images of high resolution at a low cost and a quick scan time.
The diagnosis, treatment, and monitoring of diseases all rely on medical imaging. The use of contrast agents in computed tomography (CT) enhances the contrast of soft tissues, revealing anatomical details. Barium suspensions and small, iodinated molecules are currently approved contrast agents for CT. Although iodine contrast agents provide excellent vascular imaging, their short blood half-life and non-targeted imaging applications are a pressing issue. Moreover, iodinated contrast agents are lethal for patients with compromised renal function and hypersensitivity towards iodine (4). Deviating from the previous assertions, the drawbacks associated with CT scans, such as increased ionizing radiation and patient dose, cannot be ignored, although CT provides superior spatial resolution (5). CT scans account for 70% of the radiation dose given to patients undergoing imaging tests. In the process of attaining optimal images with increased spatial resolution and density in computed tomography, the patient dose is compromised. Correspondingly, Do KH (2016) identified CT as one of the most important sources of ionizing radiation in diagnostic medical imaging. ALARA is a principle proposed and enforced by the International Commission on Radiological Protection (6). ALARA stands for As Low as Reasonably Achievable. Radiologists must use ionizing radiation only when its use is justified; the imaging test must be optimized specifically using a low dosage consistent with the diagnostic task. CT scans provide physicians with images essential for the diagnosis of tumors. However, patients' exposure to ionizing radiation must be considered. Many researchers have been working on a solution to eliminate the shortcomings associated with CT imaging.
Nanoparticle
Nanoparticles are colloidal particles, ranging in size between 10 and 100 nm. Nanoparticles have become a subject of current biomedical applications because of their unique properties and have been explored for the therapy of various forms of cancer (7). Khademi et al. described how several biological mechanisms in the body occur at the nanoparticle scale, giving nanoparticles an advantageous edge to pass through biological barriers and interact with biomolecules found at the cellular or tissue level (8). Theranostics is defined as the combination of therapy and diagnosis (9). Nanoparticles are multifunctional theranostic agents. The diagnosis, location, and staging of disease are all enhanced using theranostic nanoparticles, which also provide data on the response to treatment. Additionally, a therapeutic agent can be carried and delivered to a tumor by nanoparticles. Similarly, Medavenkata and Akshatha (2018) also addressed nanoparticles by the portmanteau nanotheranostics, indicating that nanoparticles integrate the modalities of diagnostic imaging and therapy for the treatment of oncology-related diseases (10). They reported that nanoparticles not only can deliver treatment but can simultaneously monitor therapy response in real time, potentially reducing over- or under-dosing of patients. Proteins that can be found in cancer patients' tissues, urine, stools, or blood are known as tumor biomarkers. Folate receptors (FR) are found in a variety of tumor types, most frequently in ovarian and endometrial cancers, and human epidermal growth factor receptor 2 (HER2) is linked to breast cancer. Microbiologists can detect, diagnose, and treat cancer by measuring the levels of tumor biomarkers (11). Folic acid also binds to tumor cells twenty times more strongly than to epithelial cells, making folate receptors a promising tumor biomarker, according to the authors. Nanoparticles can be combined with folic acid to help in tumor imaging. The effects of folic acid-modified gold nanoparticles (FA-modified AuNPs) and unmodified gold nanoparticles as contrast agents in CT imaging systems have been investigated (12). The researchers experimented with different tube current-time products for various concentrations of nanoparticles. Contrast enhancement was found to be significantly higher in the cells that were exposed to modified FA-AuNPs than in the cells that were not. In addition, they concluded from their experiments that increasing the tube current-time product from 60 to 250 mAs while maintaining a tube voltage of 130 kVp resulted in a radiation dose increase approximately 4.17 times greater than the increase in image contrast. In stark contrast, the simple incorporation of nanoparticles, whether modified or not, significantly increased the image contrast while keeping the mAs, and hence the radiation dose, as low as reasonably achievable. Similarly, Parvanian and Aghashiri (2017) used cysteamine (Cyst) linking of folic acid (FA) gold nanoparticles to detect human nasopharyngeal head and neck cancer using CT imaging in in vivo research (13). They discovered that CT imaging could not pick up a small tumor; however, when gold nanoparticles were added, the same tumor was at least 4.30 times more visible. In addition, active tumor-cell-targeting nanoparticles (FA-Cyst-AuNPs) produced images of the tumor that were at least 2.03 times more precise and effective than those produced by passively targeting gold nanoparticles.
As a result, nanoparticles have the potential to use a minimal mAs product to reduce dose and improve contrast in CT images. Additionally, a targeted CT imaging strategy for the specific recognition of cancerous cells is made possible using FA-modified AuNPs. The goal of achieving a favorable diagnostic or therapeutic outcome with minimal side effects is the fundamental basis for administering imaging contrast agents or therapeutic medications. Enzymatic degradation, inadequate bloodstream margination, or inability to overcome the vascular endothelium are some of the biological barriers that prevent therapeutic or diagnostic agents from reaching the affected or targeted sites when injected. According to Patra et al. (2018), only one out of 100,000 drug molecules reaches the targeted site, resulting in 99.99% of the drug being delivered to unintended sites and thus causing unwanted adverse reactions; however, nanoparticles are now being used as a diagnostic tool and therapeutic drug carrier which delivers drugs to intended target sites in a controlled manner (14). Nanoparticles are microscopic and able to easily penetrate through the blood vessels of tumor tissues, enabling them to gather within the cancerous cells (15). Taghavi et al. (2020) described that one of the most important characteristics nanoparticles possess is the ability to reduce side effects and damage to healthy tissues and organs compared with conventional cancer therapeutic drugs. Nanoparticles not only target cancerous cells with more accuracy and specificity but can also deliver therapeutic drugs to target organs that are difficult to reach, such as the brain and pancreas (16). The mode of operation of nanoparticles fulfills the requirements for becoming effective drug carriers by selectively killing cancerous cells without affecting normal cells. The size and surface characteristics of the nanoparticles can be altered so that they remain in the bloodstream and are not caught by the reticuloendothelial system, which includes the liver and spleen, during circulation. Nanoparticles' fate and lifespan are determined by their surface characteristics. Nanoparticles need to have a hydrophilic surface to avoid being taken up by macrophages. The surface can be coated with a hydrophilic polymer, or block copolymers with hydrophilic and hydrophobic domains can be formed into nanoparticles to accomplish this. Additionally, the nanoparticles' size can be altered. By simultaneously decreasing dose-limiting toxicities and increasing the concentration of drugs within the diseased cells, these features improve not only patient survival but also quality of life. According to Routley (2019), the nanoparticles should be large enough to prevent them from rapidly leaking out of blood vessels while being small enough to escape macrophages embedded in the reticuloendothelial system (17).
Nanoparticles deliver drugs in two ways: passive delivery and self-delivery. In passive delivery, the inner cavity of the nanostructure is filled with the drug to be transported. In contrast, in self-delivery, the drug is conjugated to the carrier nanoparticle. With self-delivery, however, timing is crucial, because if the drug is not released at the right time, it will not reach the intended site (18).
Figure: Imaging of nanoparticle uptake
Figure 4: Oncogene visibility after nanoparticle injection
Nanotechnology has tremendous potential and is constantly evolving. Trends are pointing towards field workers shifting their thinking to the smallest of technologies to solve the biggest problems in the healthcare field. These same field workers are prototyping new alternatives to perform tasks currently being executed by hand or equipment. They are currently developing programmable and controllable nano-assemblers and nanorobots with the capability to reverse the effects of atherosclerosis and cardiovascular disease. These innovative modern technologies are also demonstrating the capability to fix genetic errors in cells (19). Qadri and Tzika (2021) emphasized how nanotechnology is the future of medicine and that by 2024 the global market for nanotechnology will exceed 125 billion dollars (20). Specialized nanobots controlled by magnetic fields are being developed to perform a wide range of surgeries, such as eye surgeries, clearing blocked arteries, and collecting biopsies. Micromotors, microscopic beads of magnesium and titanium, are currently being developed for treating stomach ulcers (21). Very little is known about the long-term impacts of nanotechnology, and many more resources are required. A developing concern is whether nanoparticles will accumulate in living tissues, causing toxicity issues, and whether they can be affordably manufactured at commercial scale. Nonetheless, nanotechnology constantly proves to be promising in the field of medicine, and it will not be long before nanotechnology moves from science fiction into the real world.
Illustration of the uptake of nanoparticles in an oncogenic cell (21).
Results
Cancer has been an existential threat for centuries. Nanoparticles may be the long-sought solution to combating this deadly disease and finally reversing its trend. In a world where bigger is often considered to be better, nanoparticles are an exception to conventional wisdom in their potential to become a revolutionary step in personalized medicine. Thus, when it comes to nanoparticles and their mission to stop cancer, it is time to stop thinking so big and start thinking small; of course, more thought and resources need to be invested to reach a conclusive diagnostic tool. As for future applications, from our perspective this is a revolutionary tool that can open numerous doors in the field of modern science.
Discussion
This systematic review attempts to demonstrate the role of multifunctional nanoparticles as contrast agents and therapeutic drug carriers for diagnosing and treating cancer. This paper aims to provide basic information on how nanoparticles optimize computed tomography images by enhancing image contrast and lowering radiation dosage. Additionally, this paper elaborates on the process by which nanoparticles deliver therapeutic drugs to specific targeted sites. This systematic review concludes with the positives and limitations of this technology and briefly touches upon its future applications (22).
Top-down method: merits, demerits, and general remarks

Optical lithography. Merits: a well-established tool for micro/nanofabrication, particularly to produce chips, with sufficient resolution for high throughputs. Demerits: the tradeoff between resist process resolution and sensitivity necessitates high-tech, expensive, complex clean-room operations. General remarks: the approach could be extended to extreme ultraviolet (EUV) sources to reduce the dimension, and the 193 nm lithography infrastructure already possesses a certain level of maturity and sophistication; additionally, future advancements must address the rising cost of mask sets.

E-beam lithography. Merits: an extremely accurate and efficient nanofabrication tool for fabricating nanostructures up to 20 nm in the desired shape, popular in research environments. Demerits: difficult for 5 nm nanofabrication because of its inflated cost, slow (serial) writing process, and low throughput. General remarks: e-beam lithography can produce periodic nanostructure features and surpasses the light's diffraction limit; to increase parallelism and throughput in the future, multiple electron beam approaches to lithography will be required (24).

Soft and nanoimprint lithography. Merits: a pattern-transfer-based, easy-to-use, and effective nanofabrication tool for making features smaller than 10 nm. Demerits: unable to produce densely packed nanostructures on a large scale, requires additional lithography techniques to generate the template, and is typically not cost-effective. General remarks: for templates with periodic patterns of 10 nm, self-assembled nanostructures may be a viable solution to the difficult and expensive problem of template generation.

Block co-polymer lithography. Merits: a low-cost, high-throughput method that works well for large, densely packed nanostructures, including spheres, cylinders, and lamellae that can be made in parallel assembly. Demerits: block copolymer self-assembled patterns typically have high defect densities, making it difficult to produce the self-assembled nanopatterns with variable periodicity needed for many functional applications.

Atomic layer deposition. Merits: pin-hole-free nanostructured films over large areas, good reproducibility, and adhesion due to the formation of chemical bonds at the first atomic layer; allows digital thickness control at the atomic level with precision. Demerits: usually a slow process that is also costly because of the vacuum components; it is hard to deposit certain metals and multicomponent oxides. General remarks: atomic layer deposition can fulfill the stringent specifications for pure metal barriers (dense, conductive, conformal, and thin) found in contemporary Cu-based chips.

Sol-gel nanofabrication. Merits: a low-cost, process-based chemical synthesis method to produce a wide range of nanomaterials, including multicomponent materials like glass, ceramic, film, and composites. Demerits: it is usually difficult to control the synthesis and the subsequent drying steps, making it difficult to scale up. General remarks: a flexible nanofabrication technique that can be scaled up by improving the synthesis steps.

Molecular self-assembly. Merits: enables the self-assembly of deep molecular nanopatterns with a width of less than 20 nm and generates atomically precise nanosystems over large pattern stretches. Demerits: in contrast to mechanically directed assembly, such nanosystems are difficult to design and manufacture. General remarks: multi-material molecular self-assembly may be an effective strategy for creating multifunctional nanosystems and devices.

Physical and chemical vapor-phase deposition. Merits: high-purity nanofilms, a scalable process, the possibility to deposit porous nanofilms, and versatile, controlled simultaneous deposition of multiple materials such as metals, ceramics, semiconductors, insulators, and polymers (25). Demerits: not cost-effective because of the costly vacuum components, high-temperature process, and toxic and corrosive gases, particularly in the case of chemical vapor deposition. General remarks: it offers a one-of-a-kind opportunity to nanofabricate complex nanostructures made of distinct materials with distinct properties; new developments in chemical vapor deposition, such as "initiated chemical vapor deposition" (i-CVD), make it possible to deposit polymers without reducing their molecular weights for the first time.

DNA scaffolding. Merits: enables the assembly of nanoscale components with high precision into programmable arrangements with significantly smaller dimensions (less than 10 nm in half-pitch). Demerits: new unit and integration processes, compatibility with CMOS fabrication, line edge roughness, throughput, and cost are just a few of the many issues that need to be investigated; the technology is at its earliest stage. General remarks: the semiconductor industry's willingness to invest in infrastructure, yield, and manufacturing costs is critical to ultimate success.
Conclusion
The approach of eliminating the hepatitis C virus was successful in mice and cell culture. Physicians will be able to cut and blast with pressure alone, potentially even without pain, thanks to this breakthrough nanotechnology, since the focal point is so small that it can avoid nerve fibers. It is anticipated that this minimally invasive surgical method will work without causing harm to healthy tissue. In conclusion, nanoparticles will revolutionize medicine and human biotechnology. Most of us will probably feel nanomedicine's impact in the future. Nanotechnology is the future of medicine and health, judging by the volume of development and research now being conducted at numerous universities and research organizations. | 2023-07-12T08:33:35.832Z | 2023-06-26T00:00:00.000 | {
"year": 2023,
"sha1": "5b9ef506d8b2a4fe9c7a4e6d5c0f4680561a312f",
"oa_license": "CCBYNC",
"oa_url": "https://pjph.org/index.php/pjph/article/download/1127/291",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cc12827a9b1671a6618818215a9e6a148cc0dd9d",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": []
} |
248144966 | pes2o/s2orc | v3-fos-license | A chord-angle-based approach with expandable solution space to 1-degree-of-freedom (DOF) rehabilitation mechanism synthesis
Rehabilitation robots have been proven to be an effective tool for patient motor recovery in clinical medicine. Recently, few degrees of freedom (DOFs), especially 1-DOF, rehabilitation robots have drawn increasing attention as the complexity and cost of the control system would be significantly reduced. In this paper, the mechanism synthesis problem of 1-DOF rehabilitation robots is studied. Traditional synthesis methods usually aim at minimizing the trajectory error to generate a mathematically optimal solution, which may not be a practically feasible solution in terms of engineering constraints. Therefore, we propose a novel mechanism synthesis approach based on chord angle descriptor (CAD) and error tolerance expansion to generate a pool of mechanism solutions from which mathematically and practically optimal solutions can be selected. CAD is utilized for its capability to represent the same-shaped trajectories of different mechanisms in a unified way, and it is robust to the noise in the rehabilitation trajectory acquired by motion capture systems. Then a library of mechanism trajectories is established with compressed representations of CAD via an auto-encoder algorithm to speed up the matching between mechanism and rehabilitation trajectory, where the matching error tolerance can be adjusted according to practical rehabilitation specifications. Finally, a design example of a 1-DOF rehabilitation robot for upper-limb training is provided to demonstrate the efficacy of our novel approach.
Introduction
There is an increasing number of people who are suffering from sensorimotor disabilities due to the large aging population. Stroke, particularly, is a leading cause of disability worldwide (Defebvre and Krystkowiak, 2016). Scientific research and clinical experiments have shown that rehabilitation training can reduce the disabilities and improve motor function, allowing patients to regain much of their independence and quality of life (Narayan Arya et al., 2012;Caproni and Colosimo, 2017). In the traditional rehabilitation training process, the affected limbs of the patients are guided by the therapist to perform predefined movement patterns repetitively. This process is slow and labor-intensive, usually involving extensive interaction between multiple therapists and one patient. With the development of automation technology, robot-aided rehabilitation has become a useful tool to restore and reinforce motor functions for patients. A rehabilitation robot can provide patients with intensive and reproducible movement training in time-unlimited durations, which not only alleviates stress for therapists but also provides quantitative diagnosis and assessments of motor impairments (Aprile et al., 2020;Rodrigues et al., 2021;Bertani et al., 2017;Kan et al., 2011).
Most rehabilitation robots use multi-DOF (degree of freedom) mechanisms to adapt to different users. Kemna et al. (2009) developed an end traction mechanism robot called iPAM, which uses two 3-DOF mechanisms to pull the forearm and upper arm, respectively, to complete shoulderelbow coordination exercises (Kemna et al., 2009). Rosati et al. (2007) designed a 3-DOF upper-limb training mechanism powered by ropes to achieve upper-limb rehabilitation while compensating for arm weight (Rosati et al., 2007). Li et al. (2006) developed a 5-DOF wearable upper-limb rehabilitation training mechanism, which is controlled based on surface electromyography signals and can assist hemiplegic patients to carry out individual and cooperative exercise training of the shoulder, elbow, and wrist (Li et al., 2006). While these multi-DOF mechanisms can help patients perform various kinds of rehabilitation movements owing to redundant workspace, their motion dynamics are realized by the underlying control system composed of multiple actuators, sensors, and complex control algorithms, which greatly increases the purchase and maintenance cost. Moreover, the possible malfunction of the control system may result in secondary injuries to patients where the rehabilitation mechanism moves the patient's joint out of its range of motion (ROM).
To avoid those problems of multi-DOF mechanisms, 1-DOF rehabilitation mechanisms have been investigated in recent years. Zhao et al. (2021) designed a 1-DOF mechanism to guide patients' upper limbs through target points and developed a set of virtual reality (VR)-based rehabilitation systems using Unity3D software, which can provide users with an immersive experience and provide management personnel with rehabilitation movement data (Zhao et al., 2021). Theriault et al. (2014) developed a 1-DOF haptic robot for post-stroke arm rehabilitation for in-home and clinical use. The robot can apply proper assistive force when interacting with the patient, thereby extending the functionality of the system to accommodate low-functioning patients (Theriault et al., 2014). Zhu et al. (2020) proposed a 1-DOF rehabilitation robot based on the coupled-serial-chain mechanism to assist the sit-to-stand movement of patients with lower-limb disabilities . Those robots are affordable and simple to operate thanks to a single actuator, and their workspace is limited and must be adapted to the workspace of a particular rehabilitation task, usually the trajectory of a specific joint. To design a 1-DOF mechanism to generate the matching trajectory, motion capture systems are used to obtain the task trajectory as the input to the subsequent mechanism synthesis algorithm, which has been demonstrated in our recent paper (Zhao et al., 2021).
When patients of different body dimensions perform the same rehabilitation movement task, the trajectories of their joints may vary in position, orientation, and scale, except for shape; in terms of mechanism synthesis, curves of different position, orientation, and scale but identical shape will result in mechanisms with identical relative dimensions, i.e., identical link length ratios. Therefore, shape extraction is an essential step for rehabilitation mechanism synthesis towards a particular movement task. In the literature of mechanism synthesis, numerous types of curve descriptors have been proposed, such as the Fourier descriptor (FD) (Zhang and Lu, 2002) and the Haar wavelet descriptor (HWD) (Nabout and Tibken, 2008). However, these shape descriptors are subject to similar transformation (translation, rotation, and scaling) and susceptible to bias by the necessity of matching coordinates. Hence, shape-error-based descriptors are needed to evaluate the underlying differences in shape. On the other hand, there exist a few types of descriptors specifically measuring shape differences, like the curvature descriptor (CD) (Deshpande and Purwar, 2019) and the turning function descriptor (TFD) (Torres-Moreno et al., 2022), but they are highly sensitive to noise embedded in motion capture systems (Holden, 2018). As a result, a noise-robust and shape-error-based descriptor is required to facilitate the synthesis of a 1-DOF mechanism for rehabilitation purposes.
Upon obtaining the shape signature of the target trajectory, mechanism synthesis algorithms are applied to determine the optimal linkage parameters. Conventional approaches usually focus on finding an optimal solution, which is closest to the prescribed curve mathematically. However, such a solution may become infeasible from an engineering point of view. For example, such a solution may contain overlong links or fixed pivots outside of the required region. In this paper, an alternative way, following our previous work on motion synthesis (Zhao et al., 2016), is presented by outputting a pool of candidate mechanisms via adjusting the fitting error tolerance, from which both mathematically and practically optimal solutions can be obtained. The clinical motivation for tolerance adjustment is based on human movement variability (Sutter et al., 2021) -no movement can be exactly repeated due to inevitable noise in the nervous system (Faisal et al., 2008;Harris and Wolpert, 1998). Therefore, in the context of rehabilitation mechanism synthesis, this effect allows the approximation of the target rehabilitation trajectory by similar mechanism curves within a reasonable tolerance.
To this end, we present a path synthesis approach specifically towards a 1-DOF rehabilitation mechanism with expandable solution space whose extent is bounded by the degree of human movement variability and using a shapedifference-based descriptor called chord angle descriptor to handle the trajectory of a particular rehabilitation task without being affected by translation, rotation, or scale changes. In addition, a trajectory library with different kinds of mechanisms is established and an auto-encoder algorithm from the field of machine learning is adopted to adjust the error tolerance for expanding solutions. The rest of this paper is as follows. Sect. 2 gives the definition of chord angle descriptor. In Sect. 3, a mechanism curve library with compressed CAD is constructed to facilitate the comparison between approximate and target curves. Section 4 illustrates the CAD-based matching algorithm for mechanism and rehabilitation paths. Then, an example of circle-tracing trajectory generation for upper-limb rehabilitation is presented in Sect. 5 to demonstrate the validity of the proposed method. Finally, Sect. 6 contains the conclusions and future work.
CAD-based trajectory representation
When different patients are performing the same rehabilitation task, trajectories vary in size and direction but not shape. Although many descriptors can be used to represent the shape features (Cao et al., 2011; Adamek and Connor, 2004; Alajlan et al., 2007; Mokhtarian et al., 1998; Zhang and Lu, 2002), some descriptors such as CD and TFD are sensitive to noise and require the trajectory to be smoothed beforehand. In other cases, some descriptors such as FD and HWD are variant under similar transformation, which leads to time-consuming normalization of the trajectory. In this paper, we introduce the chord angle descriptor to extract the shape features of the trajectory for mechanism synthesis. This descriptor is independent of position, orientation, and size, and it has a certain anti-interference ability, which will be described later.
Definition of chord angle descriptor
The chord angle descriptor is based on the chord angles between the curve sampling points. Figure 1 shows a planar trajectory with n equal-interval contour points, namely {p_1, p_2, ..., p_k, ..., p_n}, where p_k = (x_k, y_k) denotes the two-dimensional (2D) plane coordinates. Here, we use the chord angle θ_ij to describe its shape characteristics based on the spatial position relation between the contour sampling points.
For any two points p_i and p_j on the contour, θ_ij is defined as the chord angle between the chord vector p_i p_j and the chord vector p_j p_m. The point p_m must always be different from p_i and p_j to ensure that p_i, p_j, and p_m can form a triangle. To achieve this, we can define the point p_m as

p_m = p_(j+Δ) for i > j,    p_m = p_(j-Δ) for i ≤ j,    (1)

where Δ is a parameter that is used to distinguish p_m from p_j. For a trajectory which has n sampling points, 1 ≤ m ≤ n.
From the definition of p_m in Eq. (1), it is obvious that in the case of i > j, 1 ≤ j + Δ ≤ n must be satisfied, so 1 - j ≤ Δ ≤ n - j. Similarly, in the case of i ≤ j, 1 ≤ j - Δ ≤ n is needed, so j - n ≤ Δ ≤ j - 1. For the convenience of calculation in both cases, p_m can be chosen between p_i and p_j, which means that a small positive integer can be selected for Δ to satisfy both conditions, for example Δ = 4. Based on the definition of p_m provided above, for any p_i, p_j belonging to the point set P, we can construct the formula of the chord angle θ_ij as

θ_ij = ∠(p_i p_j, p_j p_m),    (2)

where θ_ij is in the range of [0, π]. The angle ∠(p_i p_j, p_j p_m) is calculated by the following arccosine formula:

∠(p_i p_j, p_j p_m) = arccos[ (p_i p_j · p_j p_m) / (|p_i p_j| |p_j p_m|) ].    (3)

Next, to make the description of the path shape consistent with the sense of human vision, θ_ij is transformed into logarithmic space by Eq. (4). For any point p_i on the trajectory, the complete chord angle descriptor is [θ_i1, θ_i2, ..., θ_in]. By developing a chord angle descriptor for each sampling point on the path curve, we can obtain the n × n dimensional CAD matrix of the entire path as

CAD = [θ_ij], i, j = 1, ..., n.    (5)

For example, a trajectory is shown in Fig. 2a, and the gray image of its CAD matrix is shown in Fig. 2b. The image can be obtained by the MATLAB imshow function. As shown, the color of Fig. 2b on the diagonal is black since the value of θ_ij on the diagonal is 0, and colors close to black signify that θ_ij in those areas is close to 0. Conversely, white areas of the image mean that θ_ij equals π.
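To make the construction above concrete, the following MATLAB sketch assembles the CAD matrix of Eq. (5) directly from a set of sampled points. It is only an illustrative implementation of Eqs. (1)-(3) as reconstructed here: the function name, the offset symbol (written as delta, since the original symbol was lost in extraction), and the clamping of the index m into [1, n] near the ends of the trajectory are our own assumptions, and the logarithmic mapping of Eq. (4) is omitted because its exact form is not given in the text.

function CAD = chord_angle_descriptor(P, delta)
% P     : n-by-2 matrix of equally spaced trajectory points [x, y]
% delta : small positive integer offset used to select p_m (e.g., 4)
% CAD   : n-by-n matrix of chord angles theta_ij in [0, pi]
n   = size(P, 1);
CAD = zeros(n);
for i = 1:n
    for j = 1:n
        if i == j, continue; end            % theta_ij is 0 on the diagonal
        if i > j
            m = min(j + delta, n);          % Eq. (1) for i > j (clamping assumed)
        else
            m = max(j - delta, 1);          % Eq. (1) for i <= j (clamping assumed)
        end
        v1 = P(j, :) - P(i, :);             % chord vector p_i p_j
        v2 = P(m, :) - P(j, :);             % chord vector p_j p_m
        c  = dot(v1, v2) / (norm(v1) * norm(v2) + eps);
        CAD(i, j) = acos(max(min(c, 1), -1));   % Eqs. (2)-(3), clipped for round-off
    end
end
end

Because every entry depends only on the relative positions of the sampled points, translating, rotating, or uniformly scaling P leaves the matrix unchanged, which is exactly the invariance property discussed in the next subsection.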
Characteristics of CAD
One advantage of using CAD to extract trajectory features is that CAD is independent of the frame position, the frame deflection angle, and the overall scaling ratio of the mechanism. For example, Fig. 3a-c show three different trajectories which are transformed from the original trajectory in Fig. 2a with rotation, translation, and scaling, respectively; since the chord angle is invariant under such similar transformations, the shape of these three trajectories remains the same, so they can be represented by one set of CAD in a unified way. This is beneficial to the design of the rehabilitation robot, as the mechanism results will not contain pseudo-multiple solutions with the same link length ratios. Another advantage of the CAD descriptor is that it is robust to noise. For illustration, Fig. 4 shows two trajectories, in which Fig. 4a is the original trajectory with noise signals and Fig. 4b is the smoothed curve. Figures 5 and 6 show the shape features of these two curves extracted by the curvature and the CAD matrix, respectively.
As can be seen from Fig. 5, when the curvature is used as the shape descriptor, the interference signals "drown" the original curve, so it is difficult to identify the curvature of the smooth path from the curvature with the noise. Therefore, for the trajectory with noise signals, it is difficult to extract the shape features accurately by using the curvature descriptor.
When the CAD matrix is used as the descriptor of trajectory shape features, as shown in Fig. 6, the noise signal does not drown out the CAD feature of the smooth trajectory, although the CAD image with noise signals has some extra vertical lines when compared with the CAD image of the smooth trajectory. This means that the shape features of the smooth trajectory can still be recognized from the shape features of the noisy path. This advantage is particularly beneficial when acquiring the motion trajectory for designing rehabilitation robots. This is because, when using motion capture equipment to collect the rehabilitation training trajectory, the collected trajectory will unavoidably contain some noise due to the limitation of equipment accuracy. However, these noise signals have little effect on CAD when compared with other trajectory descriptors.
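The noise-robustness claim can also be checked numerically with a small experiment of the kind sketched below. This is our own illustration rather than the authors' procedure: the test curve, the noise level, the finite-difference curvature estimate, and the use of a simple correlation measure are all assumptions.

% Compare how curvature and CAD respond to additive coordinate noise
% (requires the chord_angle_descriptor function defined earlier; local
% functions in scripts need MATLAB R2016b or later).
t  = linspace(0, 2*pi, 50)';
P  = [2*cos(t) + 0.5*cos(3*t), sin(t)];        % a smooth closed test path
Pn = P + 0.02 * randn(size(P));                % the same path with noise

kappa  = fd_curvature(P);                      % curvature of the clean path
kappaN = fd_curvature(Pn);                     % curvature of the noisy path
C  = chord_angle_descriptor(P, 4);             % CAD of the clean path
Cn = chord_angle_descriptor(Pn, 4);            % CAD of the noisy path

r1 = corrcoef(kappa, kappaN);                  % agreement of the curvature features
r2 = corrcoef(C(:), Cn(:));                    % agreement of the CAD features
fprintf('curvature correlation: %.3f, CAD correlation: %.3f\n', r1(1,2), r2(1,2));

function k = fd_curvature(P)
% Signed curvature from first and second finite differences (noise-sensitive).
dx  = gradient(P(:,1));  dy  = gradient(P(:,2));
ddx = gradient(dx);      ddy = gradient(dy);
k   = (dx .* ddy - dy .* ddx) ./ ((dx.^2 + dy.^2).^1.5 + eps);
end

In runs of this kind, the CAD matrices of the clean and noisy paths typically remain highly correlated while the finite-difference curvature does not, which mirrors the qualitative comparison of Figs. 5 and 6.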
Mechanism library generation with compressed CAD
When acquiring the trajectory of rehabilitation training, one way to get the corresponding mechanism is the library method, that is, finding the mechanism trajectory that is similar to the rehabilitation trajectory in the database and taking the linkage parameters of the mechanism as the initial values of the design parameter of the 1-DOF rehabilitation robot. This approach is suitable for computer-aided solutions due to its high precision, and it is immune to circuit and branch problems. To design a rehabilitation mechanism that generates movement trajectory, we build a library and combine it with our CAD trajectory feature to obtain a diverse set of conceptual design solutions.
Range of mechanism design parameters
To design a mechanism with the library method, the first step is to establish a database containing the link parameters and the compressed CAD features of the trajectories of different mechanisms. When given a target movement trajectory, the target trajectory is compared with the trajectories in the database by CAD to find the mechanisms that match the target trajectory. The library in this paper includes three types of mechanisms: the four-bar mechanism, the Stephenson III mechanism, and the slider-crank mechanism, which are shown in Fig. 7. The coupler point Q in Fig. 7 is the generating point of the motion trajectory of the mechanism. Due to the nonlinearity of these 1-DOF mechanisms, small changes in linkage parameters can produce large changes in the generated paths. Deshpande and Purwar (2019) have demonstrated that whenever the link ratios of a four-bar linkage are close to one, the sensitivity of the shape of a coupler motion is higher than it would be otherwise.
In order to generate diverse types of coupler trajectories, the length parameters of the mechanisms in this paper are chosen according to the research results of Deshpande and Purwar (2019). To be specific, the length parameters of a mechanism in the library are stored in the form of proportional lengths, and the frame distance l_0 is taken as the benchmark (l_0 = 1). The ratios of the remaining link lengths to l_0 follow prescribed distributions: the l_1-l_3 to l_0 ratios follow the lognormal distribution (µ = 0, σ = 0.6), and the l_4-l_8 to l_0 ratios follow the normal distribution (µ = 0, σ = 2). The library comprises twenty thousand groups of different linkage parameters. The motion trajectory of the point Q is obtained by the kinematic method, and each trajectory has 50 sample points.
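The library generation step can be sketched in MATLAB as follows (lognrnd and normrnd require the Statistics and Machine Learning Toolbox). Only the distributions and the sample size are taken from the text; the variable names, the handling of non-positive draws, the storage layout, and the placeholder coupler_trajectory function are our assumptions.

nSamples = 20000;                           % size of the mechanism library
l0  = 1;                                    % frame length used as the benchmark
L13 = lognrnd(0, 0.6, nSamples, 3);         % l_1..l_3 / l_0 ~ lognormal(mu = 0, sigma = 0.6)
L48 = abs(normrnd(0, 2, nSamples, 5));      % l_4..l_8 / l_0 ~ normal(mu = 0, sigma = 2);
                                            % absolute value keeps lengths positive (assumption)
library = [l0 * ones(nSamples, 1), L13, L48];

% For each parameter set, solve the mechanism kinematics and sample the
% coupler point Q at 50 positions (solver not shown; the function name is hypothetical):
% traj_k = coupler_trajectory(library(k, :));   % 50-by-2 coupler path for sample k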
Auto-encoder for CAD of mechanism trajectory
After obtaining the trajectory of each mechanism in the library, the CAD feature of each trajectory is calculated from Eq. (5). If we directly store the CAD matrix of each mechanism trajectory, it will cause the problem of the "curse of dimensionality" (Marimont and Shapiro, 1979) due to the high dimensionality of the CAD matrix. Therefore, we borrow the auto-encoder algorithm (Ng, 2011) from the field of machine learning to reduce the dimensionality of CAD matrix. The auto-encoder is a neural network which attempts to replicate its input at its output and utilizes the compressed feature in the hidden layer or space to represent the input data, i.e., the CAD matrix, which is calculated by Eq. (5). Figure 8 shows an auto-encoder neural network architecture similar to that which we used in this study. The entire neural network consists of three simple auto-encoders stacked on top of each other. As the dimension of the CAD is 50×50, the number of neurons in the input layer and the output layer is 2500, and the numbers of neurons in five hidden layers are 250, 25, 2, 25, and 250, respectively. Besides, the neurons in the hidden layer of the auto-encoder are activated by the sigmoid function, and the neurons in the hidden layer of the decoder are activated by the linear activation function, which is defined by Eqs. (6) and (7), respectively. The loss function applied to evaluate the performance of the neural network is defined as Eq. (8).
In Eq. (8), the mean square error (MSE) is the loss function, w is the weight of the neural network, h is the node output, g is the expected output, and λ is a coefficient which is usually very small. Herein, we consider λ = 0.00001. The training process of the auto-encoder is carried out by using three trainAutoencoder functions in MATLAB 2019b, and the maximum number of epochs of each function is set to 3000 to avoid underfitting. The training algorithm is "Trainscg", which stands for scaled conjugate gradient descent. The computer hardware configuration used in this paper is as follows: the CPU is an Intel Core i9-9700, with a maximum computing frequency of 3.0 GHz; the computer memory is 32 GB; and the GPU is an NVIDIA Quadro P2200. For the 20 000 samples of the mechanism library, the training time of the auto-encoder on the MATLAB platform is about 2 h. Once the network is trained, the innermost code layer, indicated by hidden layer 3 in Fig. 8a, represents the two-dimensional compressed results of the CAD feature. With the encoder, each of the 50 × 50-dimensional CAD matrices can be compressed into a two-dimensional feature, represented by (s_1, s_2) in Fig. 8b. Figure 9 shows the compressed CAD features of all the trajectories in the library. Each point represents a trajectory of a mechanism. The distance between two points represents the similarity of the two trajectories. The closer the distance between the two points, the more similar the shapes of the trajectories of the corresponding two mechanisms. Conversely, the farther the distance between the two points, the greater the difference in shape between the mechanism trajectories.
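A compact version of this training procedure, using the trainAutoencoder and encode functions of MATLAB's Deep Learning Toolbox, is sketched below. The layer sizes, the sigmoid/linear activations, the 3000 epochs, and the small regularization coefficient are taken from the description above; the remaining option values and variable names are our assumptions.

% X is a 2500-by-N matrix; each column is one flattened 50-by-50 CAD matrix.
opts = {'MaxEpochs', 3000, ...
        'EncoderTransferFunction', 'logsig', ...    % sigmoid hidden units
        'DecoderTransferFunction', 'purelin', ...   % linear decoder units
        'L2WeightRegularization', 1e-5};
ae1 = trainAutoencoder(X,  250, opts{:});   % 2500 -> 250
f1  = encode(ae1, X);
ae2 = trainAutoencoder(f1,  25, opts{:});   % 250  -> 25
f2  = encode(ae2, f1);
ae3 = trainAutoencoder(f2,   2, opts{:});   % 25   -> 2
S   = encode(ae3, f2);                      % 2-by-N compressed features (s_1, s_2)

Stacking the three encoders reproduces the 2500-250-25-2 bottleneck of Fig. 8, and the matching step described next only needs the 2-by-N feature matrix S.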
For further illustration, three example points, denoted C1, C2, and C3, are selected in the compressed feature space, as shown in Fig. 10. It is obvious that C2 is much closer to C3 than to C1; correspondingly, the shape of the mechanism trajectory represented by C2 is similar to that represented by C3 and clearly different from the mechanism trajectory of C1. Therefore, the distance between points in the compressed feature space reflects the similarity of their corresponding mechanism trajectories.
Trajectory matching algorithm
With the auto-encoder presented in the previous section, when a rehabilitation movement trajectory is given, the target trajectory is first compressed by the auto-encoder to obtain its two-dimensional compressed feature. This compressed feature is then compared with the compressed features of the mechanism trajectories in the database. The designer only needs to search for available points near the target feature point, and the mechanisms corresponding to these points can be used as candidate mechanisms for the rehabilitation robot. Therefore, with the CAD and the library, the rehabilitation design problem is turned into a database search problem. In this paper, the Euclidean distance is used to calculate the similarity of two compressed features (a minimal search sketch based on this distance follows the step list below):

R = \sqrt{(s_{1i} - s_{1j})^2 + (s_{2i} - s_{2j})^2},

where R is the Euclidean distance between the two points, and (s_{1i}, s_{2i}) and (s_{1j}, s_{2j}) are the two-dimensional compressed features of point i and point j, respectively. The smaller the value of R, the closer the two points and hence the higher the similarity between the two trajectories. Based on the above, the steps of our approach to design a 1-DOF rehabilitation robot mechanism are given below: 1. Generate the library database of sample mechanism trajectories with known parameters and construct their CAD matrices.
2. Compress the dimension of the CAD of sample trajectories with the auto-encoder to obtain compressed features of CAD.
3. When given a target trajectory, calculate the compressed CAD feature through the auto-encoder.
4. Choose some points near the target feature point and output the corresponding mechanism linkage parameters.
It is worth pointing out that the time investment for steps 1 and 2 is a one-time cost. Once a target trajectory is given, the search time is within 2 min. The flowchart of this design process is shown in Fig. 11.
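The retrieval step can be summarized by the short sketch below, which ranks library mechanisms by the Euclidean distance R between two-dimensional compressed features. The names (library_features, library_params, k) are illustrative, not from the paper, and the library contents are random placeholders.

```python
import numpy as np

def retrieve_mechanisms(target_feature, library_features, library_params, k=6):
    """Return the k library mechanisms whose compressed CAD features are closest
    to the target feature (s1, s2), ranked by Euclidean distance R."""
    target = np.asarray(target_feature)            # shape (2,)
    feats = np.asarray(library_features)           # shape (N, 2)
    R = np.sqrt(np.sum((feats - target) ** 2, axis=1))
    order = np.argsort(R)[:k]
    return [(library_params[i], R[i]) for i in order]

# Example with dummy data: 20 000 random feature points and parameter sets
rng = np.random.default_rng(1)
library_features = rng.random((20_000, 2))
library_params = rng.random((20_000, 9))           # [l0, l1, ..., l8] per mechanism
matches = retrieve_mechanisms([0.4, 0.6], library_features, library_params, k=6)
for params, dist in matches:
    print(f"R = {dist:.4f}")
```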
Example
To verify the effectiveness of the proposed method, a typical rehabilitation task, circle-tracing training (Sanchez et al., 2006), is taken as an example to design a 1-DOF end-effector rehabilitation robot. As discussed in Sect. 2, the shape of the rehabilitation trajectory remains nearly the same when subjects with different body dimensions perform the same rehabilitation task; hence, our synthesis algorithm is task specific rather than subject specific. The circle-tracing training trajectory is planar; the one used here was taken from a specific subject and is shown in Fig. 12a.
So that subjects can collect their trajectories at home, portable visual devices can be used to capture the target trajectory; their precision has been shown to be adequate for rehabilitation trajectory acquisition in our previous work (Chen et al., 2020). The portable device is a Surface Pro 7, whose rear camera is used to capture the motion trajectory. The camera has 8 megapixels and provides a sampling frequency of 30 fps (frames per second) and a tracking accuracy of 1 mm. To obtain the wrist trajectory, a marker is attached to the user's wrist joint for stable and precise motion tracking, as shown in Fig. 12b. More details on the tracking processing can be found in our previous work (Chen et al., 2020). When capturing the hand's motion track, the user's wrist should be kept parallel to the camera to ensure that the trajectory is planar. The trajectory of the wrist marker is shown as the red line in Fig. 12c.
By using the design method proposed in this paper, six groups of different mechanisms with trajectories similar to the training motion are presented in Fig. 13; the linkage parameters of these mechanisms are listed in Table 1. The R in Table 1 is the compressed-CAD distance between the mechanism curve and the wrist motion trajectory, which represents the similarity of the two trajectories. To further verify the tracking performance in the circle-tracing training process, the fifth mechanism in Fig. 13 is taken as an example to build a three-dimensional simulation model. The kinematic simulation results are shown in Fig. 14. It can be seen that the motion path of the mechanism tracks the position of the human wrist joint well during circle-tracing training. These results demonstrate that the approach proposed in this paper is applicable to designing a 1-DOF rehabilitation robot, and that multiple groups of solutions of different dimensional types can be obtained.
Conclusions and future work
In this paper, we propose a novel mechanical synthesis approach for a 1-DOF rehabilitation robot mechanism. First, the chord angle descriptor is proposed to eliminate the effect of noise and of similarity transformations such as rotation, translation, and scaling of the trajectory, thereby avoiding pseudo mechanism solutions that have identical relative link dimensions but differ merely in pose and size. Next, an auto-encoder algorithm is employed to build a clustered library of mechanism path shapes by varying the relative link parameters, which transforms the mechanism design problem into a shape retrieval problem. Finally, an example is provided to demonstrate that the approach can be applied to upper-limb rehabilitation mechanism design.
While this paper focuses on the kinematic synthesis of the rehabilitation mechanism and demonstrates that the synthesized path naturally fits the target rehabilitation trajectory, in future work we will take mechanism dynamics into consideration to guarantee the quality of the interaction force through the implementation of a control algorithm, and we will perform clinical trials to validate the therapeutic effect of the synthesized mechanism.
Data availability. Our code and data cannot be linked here; however, they can be made available upon request to the corresponding author.
Author contributions. The formulas in this paper were mainly derived by WW. XS and PC provided the experimental facilities and were responsible for data processing. The project was supervised by XL, who organized the overall structure of this paper and provided the funding.
Competing interests. The contact author has declared that neither they nor their co-authors have any competing interests.
Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Financial support. The work has been financially supported by the National Natural Science Foundation of China (grant nos. 51805449 and 62103291), Sichuan Science and Technology Programs (grant nos. 2021ZHYZ0019 and 2022YFS0021), and 1 3 5 project for 45 disciplines of excellence, West China Hospital, Sichuan University (grant nos. ZYYC21004 and ZYJC21081).
Review statement. This paper was edited by Zi Bin and reviewed by four anonymous referees. | 2022-04-14T15:23:32.507Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "c72aeaaf677d8375f0429230f133036e8b003792",
"oa_license": "CCBY",
"oa_url": "https://ms.copernicus.org/articles/13/341/2022/ms-13-341-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b06f052f770c343bcb2bea19d3918e62b249d3b7",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": []
} |
235583308 | pes2o/s2orc | v3-fos-license | ANALYSIS OF CONSUMER PREFERENCES FOR CONSTRUCTION AND REPAIR OF RESIDENTIAL BUILDINGS AND APARTMENTS IN THE CONTEXT OF THE COVID-19 PANDEMIC
ISSN: 2446-6220

ANALYSIS OF CONSUMER PREFERENCES FOR CONSTRUCTION AND REPAIR OF RESIDENTIAL BUILDINGS AND APARTMENTS IN THE CONTEXT OF THE COVID-19 PANDEMIC

Pavel Ilich Samarin, Vladislav Alekseevich Komarov, Andrey Andreevich Tkachenko, Denis Vladimirovich Semenov

ABSTRACT

The article analyzes the dynamics of consumer spending on the construction and repair of residential buildings and apartments during the COVID-19 pandemic. The authors conclude that, in the context of the pandemic, the population of the country needs both to invest money and to solve urgent problems related to planned repairs of their own apartments or the construction of houses or summer cottages on their land plots. It can therefore be concluded that the owners of apartments and houses plan to spend significant funds for this purpose. The analysis showed that, with the growing demand for the services of construction teams and the significant spending of the population on repairs and building during the pandemic, the expenditures of Russian citizens in the construction industry tend to increase.
INTRODUCTION
In modern conditions, there is a rather serious epidemiological situation due to the spread of a serious infectious disease called COVID-19. The disease started in China and spread around the world within a short period of time; its rapid course often leads to death. In March 2020 the World Health Organization recognized the disease as a pandemic, and governments around the world began to impose various restrictions on their citizens in order to preserve their lives and health. The measures taken to counteract the spread of COVID-19 included the introduction of a self-isolation regime in most countries for citizens over 65 years of age, the temporary closure of enterprises, the introduction of distance learning in schools and universities, the transfer of employees to remote forms of work, etc.
In Russia, these restrictive measures were also taken. Most citizens were isolated in their apartments and houses for several weeks, and some citizens, especially residents of megacities such as Moscow and St. Petersburg, preferred to spend the self-isolation period in their summer cottages and country houses. The self-isolation regime significantly changed the lifestyle of many Russians, influenced their preferences, and made them think about the future: forecasts of an easing in the spread of COVID-19 were not optimistic in the short term, and it was assumed that the disease would leave a significant number of citizens unable to work for quite a long period. For this reason, many residents of the country began to think about what they need: on the one hand, to occupy themselves with something useful, and on the other hand, to prepare the ground for being able to provide themselves with food produced on a personal subsidiary farm.
In addition, living in an individual home substantially protects citizens from the possibility of contracting infectious diseases, and this is another reason why so many people during the pandemic began to think about the need to build or repair their personal private homes and cottages. The purpose of the study is to determine the dynamics of demand for construction materials and services in the building sector among Russian citizens during the COVID-19 pandemic, based on an analysis of their expenditures in this area.
MATERIALS AND METHODS
The study was conducted with the involvement of respondents, residents of Belgorod and the Belgorod region who own apartments, country houses or summer cottages and who carried out construction and repair work on these properties in the period from 15.03.2020 to 01.09.2020. A total of 1,568 people took part in the survey. Data for the survey were obtained by:

• a personal survey of research participants in large construction hypermarkets in Belgorod;

• a telephone survey of the authors of advertisements requesting services for the construction and repair of real estate, published in periodicals and on Internet resources of regional significance.
The data obtained were summarized, systematized and analyzed, and the corresponding conclusions were made. A graphical method was also used to demonstrate the data obtained.
RESULTS
Recently, researchers have come to the conclusion that the construction industry in Russia is developing in difficult conditions. There is an opinion that a large-scale, systemic structural modernization of the construction industry is now necessary, which calls for anti-crisis measures and mobilization mechanisms (DORZHIEVA, 2020). There was also a month-on-month drop in the pace of residential construction from March to August 2020, by an average of 0.1% (GOLOVNIN & NIKITINA, 2020). Another view is that the construction market was negatively affected by the decline in consumer activity (VOLOVIK et al., 2020), which caused some instability in the mortgage lending market (TRAVKINA, 2020).
According to some authors, the pandemic has led to stagnation in the national construction industry, which is already reducing the volume of housing commissioning and the entry of new sites to the market and, together with a decrease in the purchasing power of the population, will significantly reduce the profitability of the construction business (BADUSHEVA & PALAGIN, 2020). The monetary income of the population has decreased and, according to some authors, a rapid recovery of the construction industry should not be expected (VASILYEVA et al., 2020). For this reason, companies working in the construction industry will not be able to fully implement the construction projects they started earlier (VASILYEVA et al., 2020). The socio-economic situation of the population has also raised many questions for researchers (Maleva et al., 2020), since a large number of residents of the country were left out of work at such a difficult time (MUKHINA & SINDYASHKINA, 2020). This situation can have significant consequences for the economy of the country (SHIROV, 2020).
However, the market for housing construction and communal services, as well as the volume of sales of construction materials, tends to grow. It was therefore decided to study the trends in this market and draw the appropriate conclusions. The starting point for this research was a review of the construction services market in Belgorod and the Belgorod region. The pandemic began in early spring, when traditionally all owners of suburban real estate and apartments in multi-story buildings carry out repair and construction work on their own properties. However, with the closure of the borders of neighboring countries, such as Ukraine, Uzbekistan, and Tajikistan, the city and region experienced a shortage of personnel in the field of construction: most of the teams that had carried out construction work were unable to enter Russia. Accordingly, the demand for construction services provided by local specialists, including legal companies, individual entrepreneurs, and individuals, increased significantly. The employment of this category of workers therefore increased several times during the period under review, and owners of apartments and houses wishing to use their services had to wait a considerable time for construction teams to become available.
Having identified a deficit in the construction services market, we assumed that the volume of construction and repair work in the Belgorod market still tended to grow, despite the absence of construction teams from neighboring countries. Considering that officially registered labor migrants involved in construction work accounted, in general, for only about 10% of the total number of specialists providing construction services in the city of Belgorod, we could speak of an increase in the volume of construction and repair work in the studied market.
Taking the above into account, it was decided to analyze the amounts, broken down by individual items, that residents of Belgorod and the region spent on the repair and construction of houses, apartments and suburban plots. A total of 1,568 people took part in the survey, each of whom was the owner or co-owner of a separate property that was being renovated or under construction. Most of the respondents were men (80%, or 1,254 people), and 314 were women. Accordingly, it can be said that men took the most active position in this area, and for them this activity was important during the pandemic. In total, 1,280 people were interviewed on the premises of large construction hypermarkets and 288 people by phone.
RESPONDENTS WERE ASKED TO ANSWER A NUMBER OF QUESTIONS
1. Are you building a new house or cottage, completing existing buildings, or planning to make repairs in a finished house or apartment?
2. Why did you decide to carry out construction or repair work during this period?
3. Do you plan to carry out construction work by yourself or with the involvement of construction teams?
4. What is the average amount that you plan to pay for the services of builders?
5. How much money do you plan to spend on the purchase of construction materials in general?
6. Do you plan to purchase construction materials using your own savings or make a purchase on credit?
7. Do you plan to ask for help in purchasing construction materials from the specialists with whom you sign a contract for construction or repair work?
8. Are you planning to purchase economy-class building materials, or do you intend to purchase premium products from well-known manufacturers?
The respondents' answers were summarized in tables and analyzed. Let us look at the respondents' replies in more detail. The results of the answers to the first question are shown in Table 1.
The survey results are shown in Chart 1.
According to the data obtained in response to this question, 36% of men and 20% of women planned to build new houses or cottages. At the same time, 54% of men and 65% of women planned to carry out work to complete construction that had already been started, while 10% of men and 15% of women planned to spend money on repairs. Thus, the majority of respondents have an unfinished house or cottage and planned to carry out the work needed to finish the property and put it into operation. The respondents' reasons for starting construction or repair work are shown in Table 2. Based on the results of the study, the following conclusions were made. The main purpose of the greater number of respondents was the investment of funds; this answer was chosen by 60% of men and 45% of women. Fearing the consequences of the pandemic, which had a negative impact on currency exchange rates and devalued the Russian ruble, these respondents decided that the best investment of available funds was to carry out repairs or construction. Full-fledged household management was the second option by number of positive responses; the respondents who chose it considered that providing their family with crop and livestock products from their own plot would help preserve the health of family members and reduce food costs. Among their responses were statements such as: "We have been planning for a long …". The results of the answers to the third question are shown in Table 3. According to the respondents' answers, more than half of both men and women plan to hire specialists, and only 25% of women and 45% of men plan to perform the work independently. This figure reflects the increased demand for construction professionals in Belgorod and the Belgorod region. The respondents who planned to hire labor for repair and construction work were also asked about the average cost of the services provided by the construction teams involved. The results of the responses to this question are shown in Table 4 and, graphically, in Chart 4.
Thus, we can conclude the following: most of the respondents planned to spend between 100 and 200 thousand rubles to pay for construction services; this was indicated by 45% of men and 30% of women. The smallest share of respondents, 15% of men and 5% of women, stated that they would spend more than 200 thousand rubles on construction services. The next question was: "How much money do you plan to spend on purchasing construction materials in general?" The answers to this question are shown in Table 5 and Chart 5.
According to the survey data, most of the respondents (55% of men and 50% of women) plan to spend from 200 to 300 thousand rubles on the purchase of construction materials. During the survey, we also found out whether respondents plan to purchase construction materials with their own savings or on credit. The results are shown in Table 6 and Chart 6.
Accordingly, most of the respondents plan to purchase construction materials using their personal savings. We also found out whether respondents plan to ask for help in purchasing construction materials from the specialists with whom they sign a contract for construction or repair. The results are shown in Table 7 and Chart 7.
The data indicate that the majority of respondents plan to seek help in purchasing materials from the specialists with whom they conclude a contract for construction or repair. This is because such specialists usually have the opportunity to obtain a discount in the stores where they regularly purchase construction materials, so the materials cost less. Consumers were also asked: "Are you planning to purchase economy-class construction materials, or are you planning to purchase premium-class products from well-known manufacturers?" The results are presented in Table 8. According to the data obtained, a significant share of the respondents plans to purchase premium construction materials for construction or repair.
DISCUSSION
The results of the survey showed that most of the residents of Belgorod and the Belgorod region who plan to carry out repair or construction work are men. Most of the work will be financed from the respondents' own savings, which indicates that the population has free funds and a desire to invest them in real estate. The amount that respondents plan to spend is, on average, from 100 to 200 thousand rubles for the work of builders and from 200 to 300 thousand rubles for the purchase of construction materials. Judging by this volume of expenditure, respondents do not plan a small amount of repair or construction work but intend to make full-fledged investments in the construction or repair of a real estate object. This is confirmed by the fact that the majority of respondents choose premium rather than economy construction materials, as well as by the fact that most consumers plan to engage professionals to perform the construction work.
However, the respondents' approach to purchasing construction materials has its own specifics: most of them plan to turn to the construction teams involved in the construction or repair in order to obtain a discount on the purchase, which indicates that consumers still want to save money. All of the above confirms the respondents' answer that their main goal in carrying out repairs or construction is to invest money that is constantly being devalued by inflation and currency exchange rate growth.
CONCLUSION
In the context of the pandemic, the population of the country needs both to invest money and to solve urgent problems related to planned repairs of their own apartments or the construction of houses or country houses on their land plots. It can therefore be concluded that the owners of apartments and houses plan to spend significant funds to achieve this goal. This is driven by the desire to protect savings from inflation and currency exchange rate growth, as well as by the wish to improve their own land plots in order to obtain plant and animal products, thereby reducing food costs. Accordingly, the analysis showed that, given the growing demand for the services of construction teams and the significant spending of the population on repairs and construction during the pandemic, the expenditures of citizens in the construction industry tend to increase.
"year": 2021,
"sha1": "9f6001cb9ef7c163b78581b96e23e002cdae46b2",
"oa_license": "CCBYNCSA",
"oa_url": "https://laplageemrevista.editorialaar.com/index.php/lpg1/article/download/781/719",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a3fa776940e9070ac9be34044c86fd6d878a4aa8",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
221358366 | pes2o/s2orc | v3-fos-license | Auranofin Enhances Sulforaphane-Mediated Apoptosis in Hepatocellular Carcinoma Hep3B Cells through Inactivation of the PI3K/Akt Signaling Pathway
The thioredoxin (Trx) system plays critical roles in regulating intracellular redox levels and defending organisms against oxidative stress. Recent studies indicated that Trx reductase (TrxR) is overexpressed in various types of human cancer cells, indicating that the Trx-TrxR system may be a potential target for anti-cancer drug development. This study investigated the synergistic effect of auranofin, a TrxR-specific inhibitor, on sulforaphane-mediated apoptotic cell death in Hep3B cells. The results showed that sulforaphane significantly enhanced auranofin-induced apoptosis by inhibiting TrxR activity and cell proliferation compared to either single treatment. The synergistic effect of sulforaphane and auranofin on apoptosis was evidenced by an increase in annexin-V-positive cells and sub-G1 cells. The induction of apoptosis by the combined treatment was accompanied by loss of mitochondrial membrane potential (ΔΨm) and upregulation of Bax. In addition, the proteolytic activities of caspases (-3, -8, and -9) and the degradation of poly (ADP-ribose) polymerase, a substrate protein of activated caspase-3, were also higher in the combined treatment. Moreover, the combined treatment induced excessive generation of reactive oxygen species (ROS), whereas treatment with N-acetyl-L-cysteine (NAC), a ROS scavenger, reduced the combined treatment-induced ROS production and apoptosis, indicating that ROS play a pivotal role in the apoptosis induced by auranofin and sulforaphane. Furthermore, apoptosis induced by auranofin and sulforaphane was significantly increased through inhibition of the phosphoinositide 3-kinase (PI3K)/Akt pathway. Taken together, the present study demonstrates that down-regulation of TrxR activity contributed to the synergistic effect of auranofin and sulforaphane on apoptosis through ROS production and inhibition of the PI3K/Akt signaling pathway.
INTRODUCTION
Liver cancer, including hepatocellular carcinoma (HCC), is the sixth most commonly diagnosed cancer and ranks third among cancer deaths worldwide (Ferlay et al., 2018). While most cases of HCC are caused by infection with hepatitis B or C virus (HBV or HCV) or by excessive alcohol consumption, recent studies predict that the increasing prevalence of non-alcoholic fatty liver disease (NAFLD), which raises the risk of HCC, together with metabolic syndrome and obesity, will sooner or later become a major cause of HCC (Baffy et al., 2012; Kulik and El-Serag, 2019). Current options for the treatment of HCC include radiation therapy, surgical resection, and, for advanced-stage HCC, chemotherapy (Llovet et al.; Likhitsup et al., 2019).
The thioredoxin (Trx) and Trx reductase (TrxR) system is composed of Trx and nicotinamide adenine dinucleotide phosphate (NADPH)-dependent TrxR and is functionally involved in several processes, including anti-oxidation, redox regulation and cell proliferation (Lu and Holmgren, 2014). Several previous studies reported that Trx or TrxR is overexpressed in acute lymphocytic leukemia and in lung, breast, colorectal, pancreatic, hepatocellular and gastric cancers, and that the sensitivity of melanoma, colon, and breast cancer to radiotherapy and chemotherapy is further increased by TrxR suppression (Lincoln et al., 2003; Urig and Becker, 2006). TrxR has a redox-active center consisting of a cysteine-selenocysteine redox pair, and metal complexes can bind to the active site to inhibit its activity (Zhong et al., 2000; Ren et al., 2018). Consequently, TrxR is expected to be a pharmacological target for metallodrugs (Becker et al., 2000; Cheng and Qi, 2017).
Auranofin is a gold phosphine complex that has been used as a medication for rheumatoid arthritis but is more recently known as a TrxR inhibitor (Isab and Shaw, 1990; Madeira et al., 2012). Auranofin can inactivate TrxR by forming diselenide bridges with the Sec 498 residue of human TrxR, reducing the NADPH-dependent reduction of oxidized thioredoxin and thus affecting intracellular redox regulation, cell proliferation and antioxidant defense (Becker et al., 2000; Fang and Holmgren, 2006). Auranofin induces apoptosis of tumor cells and excessive reactive oxygen species (ROS) production by modulating the cellular redox status (Marzano et al., 2007; Cox et al., 2008). Based on evidence that TrxR inhibition and ROS accumulation inhibit cancer cell growth, auranofin has been considered as an anti-cancer agent for leukemia, lung cancer and epithelial ovarian cancer (Madeira et al., 2012; Ralph et al., 2019; U.S. National Library of Medicine, ClinicalTrials.gov).
Phytochemicals, natural plant-derived bioactive components, are helpful compounds with few side effects and a variety of potential roles as chemical and biological functional agents (Phan et al., 2018). One such phytochemical, sulforaphane (1-isothiocyanato-4-(methanesulfinyl)-butane), is an isothiocyanate that is abundant in cruciferous vegetables such as broccoli, cabbage and cauliflower (Robbins et al., 2005). Sulforaphane has been reported to exert anti-cancer effects through cell cycle arrest and apoptosis in various cancer cells, such as prostate, lung, breast, and colon cancers (Gamet-Payrastre et al., 2000; Herman-Antosiewicz et al., 2006; Mi et al., 2007; Li et al., 2010). Although our previous studies confirmed the anticancer effects of sulforaphane or auranofin in Hep3B cells (Moon et al., 2010; Hwang-Bo et al., 2017), the combined treatment of sulforaphane with auranofin has not been evaluated. In the present study, sulforaphane and auranofin were used to evaluate the synergistic effect of combination therapy on apoptosis to effectively increase anti-cancer activity in Hep3B cells.
Cell culture and chemicals
HCC Hep3B and HepG2 cells were obtained from the American Type Culture Collection (Manassas, VA, USA). The cells were cultured in DMEM medium supplemented with 10% (v/v) FBS and 1% penicillin/streptomycin and incubated in a humidified atmosphere containing 5% CO2 at 37°C. Auranofin and sulforaphane were dissolved in DMSO at stock concentrations of 10 mM and 20 mM, respectively, and stored at -20°C until use. The culture media used for cell treatment contained a final DMSO concentration of 0.04% or less, a concentration at which no cytotoxicity was observed.
Primary hepatocytes isolation
Primary hepatocytes were isolated from 6-week-old male C57BL/6 mice and used immediately after hepatic portal perfusion and isolation, as previously described (Hwang-Bo et al., 2019). In brief, the portal vein of the liver was continuously injected with ethylene glycol-bis(2-aminoethylether)-N,N,N′,N′-tetraacetic acid (EGTA) buffer (5.4 mM KCl, 0.44 mM KH2PO4, 140 mM NaCl, 0.34 mM Na2HPO4, 0.5 mM EGTA, and 25 mM Tricine) at a rate of 5 mL/min, and the injected buffer and blood were drained by cutting the infrahepatic inferior vena cava. To disperse the liver tissue, 0.075% collagenase was additionally perfused. The digested liver tissue was washed and filtered through a 40 µm cell strainer. The hepatocyte pellets were collected, and a Percoll cushion (45%) was used to perform gradient-based hepatocyte isolation. The cells were cultured in phenol red-free Williams E medium supplemented with primary hepatocyte maintenance supplements and incubated overnight at 37°C in 5% CO2.
Cell viability assay
To investigate cell viability, the cells were seeded in 6-well plates at 1.5×10⁵ cells per well and incubated at 37°C for 24 h. The cells were treated with auranofin (0.5, 1, 1.5, and 2 µM) or sulforaphane (2.5, 5, 7.5, and 10 µM) for 24 h, and then 200 µL of MTT at 5 mg/mL was added, as previously described (Hasan et al., 2019). After 2 h, the medium was removed and 2 mL of DMSO was added to each well for 10 min. Cell viability was measured with an enzyme-linked immunosorbent assay (ELISA) reader (Molecular Devices, Sunnyvale, CA, USA) at 540 nm. The results are expressed as percentages of the treated group compared to the control group.
TrxR enzymatic activity assay
TrxR activity was measured using a TrxR colorimetric assay kit (Cayman Chemical, Ann Arbor, MI, USA), based on the NADPH-dependent reduction of 5,5'-dithio-bis-(2-nitrobenzoic) acid (DTNB) to 5-thio-2-nitrobenzoic acid. In brief, cells were seeded in a 100 mm dish at a plating density of 7.5×10⁵ cells/dish and treated with the indicated concentrations of auranofin and sulforaphane for 24 h. The cells were then harvested and homogenized in a buffer containing 50 mM potassium phosphate, pH 7.4, and 1 mM EDTA. The samples (20 µL) were added to 96-well plates, and then 180 µL of the reaction mix (140 µL assay buffer, 20 µL DTNB and 20 µL NADPH) was added. The linear increase in absorbance at 412 nm was measured over 15 min using an ELISA plate reader. TrxR activity was calculated as a percentage of the enzyme activity of the control group.
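For illustration, the percent-of-control calculation described above can be reproduced from raw kinetic readings as in the sketch below, which fits the linear portion of each A412 time course and normalizes the slope to the vehicle control. This is a generic calculation, not the kit manufacturer's exact formula; the time points and readings are hypothetical placeholders.

```python
import numpy as np

def a412_slope(times_min, absorbances):
    """Least-squares slope (delta A412 per minute) of the linear kinetic phase."""
    slope, _intercept = np.polyfit(times_min, absorbances, deg=1)
    return slope

def percent_of_control(sample_slopes, control_slope):
    """TrxR activity of each sample expressed as % of the untreated control."""
    return 100.0 * np.asarray(sample_slopes) / control_slope

# Hypothetical readings taken every minute for 15 min
t = np.arange(0, 16)  # minutes
control = 0.020 * t + 0.05 + np.random.normal(0, 1e-3, t.size)
combined = 0.006 * t + 0.05 + np.random.normal(0, 1e-3, t.size)

ctrl_slope = a412_slope(t, control)
print(percent_of_control([a412_slope(t, combined)], ctrl_slope))  # roughly 30% of control
```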
Flow cytometry analysis of apoptosis
To determine apoptotic cell death, the proportions of sub-G1 and annexin-V-positive cells were analyzed by flow cytometry according to a previously described method (Zhang et al., 2020). The cells were seeded and stabilized in 6-well plates (1.5×10⁵ cells/well) and then incubated with the indicated concentrations of auranofin and sulforaphane for 24 h. To measure the sub-G1 DNA population and apoptotic cell death, the cells were stained with PI solution and FITC-annexin-V, respectively, and analyzed with an Accuri C6 flow cytometer (BD Sciences, Franklin Lakes, NJ, USA) at the Core-Facility Center for Tissue Regeneration, Dong-eui University (Busan, Korea). For each experiment, 10,000 events per sample were recorded.
Caspase-3, -8 and -9 activity
To quantify caspase activity, the cells were seeded in 100 mm dishes at 7.5×10⁵ cells and stabilized for 24 h. The cells were treated with or without 1 µM auranofin and the indicated concentration of sulforaphane for 24 h. Caspase activities were determined using caspase-3, -8 and -9 colorimetric assay kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. After the treatment period, the cells were harvested and lysed with lysis buffer, and the protein content was adjusted to 3 µg/µL. Cell lysates (50 µL) were dispensed into each reaction well; 2× reaction buffer (50 µL) and the substrates DEVD (Asp-Glu-Val-Asp), IETD (Ile-Glu-Thr-Asp), and LEHD (Leu-Glu-His-Asp) for caspase-3, -8 and -9, respectively, were added; and the plates were incubated at 37°C for 1-2 h. The samples were read with an ELISA reader (Molecular Devices) at a wavelength of 405 nm.
Western blot analysis
The cells were treated with or without 1 µM auranofin and the indicated concentration of sulforaphane for 24 h, and then the cells were harvested, washed in phosphate-buffered saline (PBS), and lysed in lysis buffer [250 mM NaCl, 25 mM Tris-Cl (pH 7.5), 5 mM EDTA (pH 8.0), 1% NP-40, 1 mM 4-(2-aminoethyl)benzenesulfonyl fluoride hydrochloride, 5 mM dithiothreitol, and protease inhibitor cocktail], followed by centrifugation at 14,000 rpm for 30 min at 4°C. The supernatants were collected and the protein concentrations were estimated with an ELISA reader (Molecular Devices) at 595 nm using a protein assay dye. After adjusting each sample to 3 µg/µL protein and mixing 1:1 with Laemmli sample buffer, the samples were heated at 95°C for 5 min to denature the proteins and stored at -80°C until use. For sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), 15 µg of protein per sample was loaded in each lane of a 12% SDS-polyacrylamide gel, electrophoresed, and transferred to membranes. The membranes were blocked with 5% skim milk in PBS containing 0.1% Tween 20 (PBST) for 1 h and then probed with the appropriate concentrations of primary antibodies overnight at 4°C. After washing three times with PBST, the membranes were incubated with secondary antibody (anti-mouse or anti-rabbit) for 2 h at room temperature, and the proteins were visualized using ECL.
Evaluation of mitochondrial membrane potential (MMP)
JC-1 dye, an MMP (Δψm) indicator, selectively enters mitochondria and reversibly changes color from red to green fluorescence as MMP decreases. In healthy cells with high MMP, JC-1 is present as aggregates that fluoresce red, whereas in apoptotic cells it takes a monomeric form that fluoresces green. After auranofin and sulforaphane administration, the cells were harvested and stained with 10 µg/mL JC-1 for 20 min in the dark. The changes in MMP were analyzed with an Accuri C6 flow cytometer and a fluorescence imaging system (EVOS FL Auto 2, Thermo Fisher Scientific).
Detection of intracellular ROS and mitochondrial superoxide
In brief, the cells were pre-treated with NAC for 1 h and then further incubated with auranofin (1 µM) and sulforaphane (7.5 µM) for 1 h. After the incubation, the cells were exposed to DCFH-DA (10 µM) and MitoSOX (10 µM) for 20 min at 37°C. The cells were then harvested, and the intracellular ROS levels were measured by flow cytometry. The samples were further stained with DAPI and Mitotracker and visualized using a fluorescence imaging system (EVOS FL Auto 2, Thermo Fisher Scientific).
Molecular docking
The molecular docking of the enzyme-compound complexes was evaluated in terms of binding affinity and binding sites using the PyRx virtual screening program (https://pyrx.sourceforge.io). The 3D structure of TrxR was obtained from the Protein Data Bank (PDB); its PDB ID code is 2CFY. The two-dimensional structures of auranofin and sulforaphane were obtained from the National Center for Biotechnology Information (NCBI) PubChem compound database; each compound ID (CID) is given in Table 1. The virtual screening results from PyRx were analyzed and visualized with PyMOL (https://pymol.org).
Statistical analysis
All statistical analyses were performed with GraphPad Prism (GraphPad Software, Inc., La Jolla, CA, USA) using one-way analysis of variance (ANOVA) for multiple comparisons, followed by Tukey's post hoc test. Each experiment was repeated at least three times, and all numerical data are expressed as means ± standard deviation (SD). Results with a p value < 0.05 were considered statistically significant.
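The same one-way ANOVA with Tukey's post hoc comparison can be reproduced outside GraphPad Prism, for example with the Python sketch below (scipy and statsmodels). The group labels and values are made-up placeholders, not data from this study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical cell-viability replicates (% of control) for three groups
control   = np.array([100.0, 98.5, 101.2])
auranofin = np.array([92.1, 90.4, 93.5])
combined  = np.array([55.3, 58.9, 53.0])

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, auranofin, combined)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD post hoc test for pairwise comparisons
values = np.concatenate([control, auranofin, combined])
groups = ["control"] * 3 + ["auranofin"] * 3 + ["combined"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```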
Synergistic suppression of TrxR1 activity and cell viability by auranofin and sulforaphane in Hep3B cells
As shown in Fig. 1, treatment with either sulforaphane or auranofin alone at low concentrations only weakly inhibited TrxR activity and decreased cell viability in Hep3B cells. After establishing the auranofin and sulforaphane conditions that did not affect Hep3B cells, the combined treatment was applied and TrxR activity and cell viability were measured. The combined treatment significantly reduced TrxR activity and cell viability in Hep3B cells compared with the single treatments with auranofin or sulforaphane (Fig. 2A, 2B). Next, the synergistic effect of auranofin and sulforaphane on growth inhibition was quantified by isobologram analysis based on the half-maximal inhibitory concentrations (IC50) of the two agents, in which values falling below the straight line connecting the individual IC50 values indicate synergism (Fig. 2C). The basal TrxR activity of the two HCC lines (Hep3B and HepG2 cells) and of normal hepatocytes without chemical treatment was also measured; as shown in Fig. 2D, TrxR activity in the HCC lines was higher than in normal hepatocytes. However, the combined treatment did not alter TrxR activity or cell viability in HepG2 cells and normal hepatocytes (Fig. 2E-2H).
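As a rough numerical companion to the isobologram analysis, the sketch below fits a four-parameter logistic dose-response curve to estimate IC50 values and computes a Loewe-additivity combination index (CI = d1/IC50,1 + d2/IC50,2, with CI < 1 indicating synergism). This is a generic approach rather than the exact procedure used by the authors, and the dose-response numbers are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

def fit_ic50(doses, responses):
    p0 = [responses.min(), responses.max(), np.median(doses), 1.0]
    popt, _ = curve_fit(four_pl, doses, responses, p0=p0, maxfev=10000)
    return popt[2]  # fitted IC50

# Hypothetical single-agent viability data (% of control)
af_doses = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
af_resp  = np.array([95.0, 88.0, 75.0, 60.0, 40.0, 25.0])
sfn_doses = np.array([2.5, 5.0, 7.5, 10.0, 15.0, 20.0])
sfn_resp  = np.array([97.0, 90.0, 80.0, 65.0, 45.0, 30.0])

ic50_af = fit_ic50(af_doses, af_resp)
ic50_sfn = fit_ic50(sfn_doses, sfn_resp)

# Combination index for one hypothetical dose pair used together
d_af, d_sfn = 1.0, 7.5
ci = d_af / ic50_af + d_sfn / ic50_sfn
print(f"IC50(AF) = {ic50_af:.2f} uM, IC50(SFN) = {ic50_sfn:.2f} uM, CI = {ci:.2f}")
```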
Sulforaphane has synergistic effects with auranofin on the induction of apoptosis and activity of caspases in Hep3B cells
Apoptosis was quantified and visualized in several ways to determine whether the reduction in cell viability caused by the combined treatment was due to apoptosis. First, the percentage of sub-G1 cells was determined by measuring the cellular DNA content; the results showed that the percentage of sub-G1 cells was increased by auranofin and sulforaphane treatment (Fig. 3A, top, 3C). Annexin-V/PI double staining, a second method of quantifying apoptosis, also confirmed that apoptosis was increased by the combined treatment with auranofin and sulforaphane (Fig. 3B, top, 3D). However, auranofin and sulforaphane did not increase the number of sub-G1 cells (Fig. 3A, bottom, 3C) or annexin-V-positive cells (Fig. 3B, bottom, 3D) in HepG2 cells.
To determine which regulators affected the combined treatment-induced apoptosis, caspase activity assays and Western blot experiments were conducted. In the caspase activity assays, the combined treatment led to increased caspase-3 and -9 activity, whereas caspase-8 activity was not significantly changed (Fig. 4A). The results of Western blotting showed that the combined treatment increased cleavage of PARP, a downstream target of activated caspase-3, and decreased the expression of inhibitor of apoptosis protein (IAP) members including XIAP and cIAP-1 (Fig. 4B).
Mitochondrial dysfunction in apoptosis induced by auranofin and sulforaphane in Hep3B cells
The expression of Bax, a mitochondrial protein involved in apoptosis, was increased, whereas the expression of Bcl-2 and Bid was unchanged (Fig. 4C). Therefore, the loss of MMP (Δψm), one of the events of apoptosis, was evaluated to determine whether the combined treatment affected mitochondrial function. As shown in Fig. 5, JC-1 aggregates emitted orange fluorescence at the control level, whereas JC-1 monomers emitted green fluorescence specifically upon MMP loss, which was induced by the combined treatment in Hep3B cells but not in HepG2 cells. These results demonstrate that the combined treatment induced apoptosis and that the pathway proceeded via mitochondrial dysfunction.
Elevated cellular ROS and mitochondrial superoxide by combined treatment with auranofin and sulforaphane in Hep3B cells
To measure excess intracellular ROS, flow cytometry analysis and fluorescence imaging were performed using DCFH-DA, a cell-permeable ROS probe. As shown in Fig. 6A, after combined treatment for 1 h, the amount of ROS was increased about 3-fold (9.6%) compared to the control group (3.1%). To confirm whether the ROS generated by auranofin and sulforaphane were of mitochondrial origin, mitochondrial superoxide was measured by staining with MitoSOX. As with the DCFH-DA staining results, the percentage of mitochondrial superoxide increased with auranofin and sulforaphane treatment but was restored by NAC (Fig. 6B). Furthermore, the fluorescence of DCFH-DA (green) and Mitotracker mitochondrial staining (red) was observed under a fluorescence microscope to confirm these results visually. As shown in Fig. 6C, the DCFH-DA signal was not only increased intracellularly but also co-localized with the Mitotracker staining.
Moreover, an ROS scavenger was used to confirm the association between the induction of apoptosis by auranofin plus sulforaphane and ROS production. As shown in Fig. 7A and 7F, the TrxR activity and cell viability inhibited by the combined treatment were restored to control levels by pretreatment with NAC. Likewise, the increases in annexin-V-positive cells and in the loss of MMP (Δψm) caused by auranofin and sulforaphane were reduced by NAC (Fig. 7C, 7D), and the expression of apoptosis-related proteins, including PARP and XIAP, returned to control levels (Fig. 7E). Considering that the induction of apoptosis and mitochondria-mediated ROS by auranofin and sulforaphane was decreased by NAC, these results suggest that ROS production was responsible for the combined treatment-induced apoptosis. However, cell viability, apoptotic cell death and MMP loss were not altered in HepG2 cells by the combined treatment (Fig. 7B-7D, bottom).
Suppression of PI3K/AKT signaling pathway by auranofin and sulforaphane
To determine whether sulforaphane and auranofin affected the PI3K/Akt pathway, the cells were exposed to the combined treatment for varying incubation times (0.5, 1, 3, 6, 12, and 24 h), and Western blotting was used to examine the expression of phosphorylated PI3K and Akt. As shown in Fig. 8A, with increasing incubation time, the expression levels of p-PI3K and its downstream protein p-Akt decreased. Additionally, LY294002, an inhibitor of PI3K/Akt signaling, was used to determine its effect on combined treatment-induced apoptosis. Pretreatment with the PI3K/Akt inhibitor further decreased cell viability (Fig. 8C) and enhanced apoptotic cell death (Fig. 8B, top) and the cleaved form of PARP (Fig. 8E) in Hep3B cells compared with the combined treatment without pretreatment. However, the combined treatment plus LY294002 did not change cell viability or apoptosis in HepG2 cells (Fig. 8B, bottom, 8D).
Regulation of ROS-mediated PI3K/Akt signaling by auranofin and sulforaphane
Furthermore, whether ROS or the PI3K/Akt pathway acted upstream in combined treatment-induced apoptosis was examined using ROS and PI3K/Akt inhibitors (NAC and LY294002, respectively). The cells were pretreated with NAC and LY294002 for 1 h and then incubated with auranofin plus sulforaphane for 24 h. After pretreatment with NAC/LY294002 and combined treatment, cell viability (Fig. 9A) and TrxR activity (Fig. 9F) were restored to control levels; the annexin-V-positive cells (Fig. 9C, top), loss of MMP (Fig. 9D, top) and cleaved form of PARP were reduced; and the expression of XIAP increased to control levels (Fig. 9E) in Hep3B cells but not in HepG2 cells (Fig. 9B-9D, bottom). These results indicate that combined treatment-induced apoptosis suppressed PI3K/Akt signaling via an ROS-dependent pathway.
Molecular modeling of auranofin and sulforaphane docking to TrxR1
To support the finding that auranofin and sulforaphane inhibited TrxR activity, molecular modeling of the binding interaction of TrxR1 with auranofin and sulforaphane was conducted using PyRx (The Scripps Research Institute, CA, USA), and the enzyme-compound complexes were visualized with PyMOL (Schrodinger, Inc., NY, USA). As shown in Table 1 and Fig. 10, auranofin and sulforaphane were predicted to bind covalently to TrxR1 at different surface pockets. Auranofin bound to TrxR1 with high affinity (-5.5 kcal/mol) and interacted with the Cys 498 residue, which is essential for the catalytic activity of TrxR1 (Fig. 10C, 10D, left panel). To confirm the critical role of the Cys 498 residue in the TrxR1-auranofin complex, the Cys 498 residue was mutated to alanine (Ala), which eliminated the binding of auranofin to this essential residue of TrxR1 (Fig. 10C, 10D, right panel). Sulforaphane was predicted to interact covalently with Asp 334 of TrxR1, although not as strongly as auranofin (Fig. 10C, 10D, middle panel). Therefore, these results demonstrate that TrxR1 interacts structurally and electrochemically with auranofin and sulforaphane.
DISCUSSION
Chemotherapy is associated with cytotoxicity, which leads to the death not only of tumor cells but also of normal dividing cells. Many previous studies have suggested that the additive or synergistic effects of combining two or more drugs may be beneficial in chemotherapy (Emens and Middleton, 2015; Niedzwiecki et al., 2016). The approach of combination therapy was conceived in the treatment of tuberculosis with antibiotic combinations in the 1960s and has been successfully applied in the treatment of cancers such as acute lymphocytic leukemia and lymphoma (McKelvey et al., 1976; Robak et al., 2016; Kerantzas and Jacobs, 2017). Combination chemotherapy, which uses drugs acting through different molecular mechanisms, can reduce drug resistance and cytotoxicity to normal cells by using two or more low-dose drugs instead of one high-dose drug while increasing cancer cell death (Pritchard et al., 2012). TrxR is a pivotal enzyme that maintains and regulates the intracellular redox system and is highly sensitive to gold compounds, including auranofin (Omata et al., 2006; Ouyang et al., 2018). The overexpression of TrxR has been selected as a defensive mechanism against external stimuli in various types of cancer cells (Jia et al., 2019). Hence, the dysfunction of TrxR or the inhibition of TrxR activity represents a novel strategy for human cancer therapy, and TrxR is emerging as a potential target for anti-cancer drug design. We predicted that auranofin and sulforaphane could bind to the active site of TrxR and investigated whether they could inhibit TrxR activity. According to the results of the three-dimensional (3D) structural protein-chemical complex prediction, auranofin could bind to the active cysteine residue of TrxR; sulforaphane bound more weakly than auranofin but still had the potential to interact with TrxR (Fig. 10). Binding to TrxR was confirmed to affect its activity, as expected: TrxR activity decreased with increasing auranofin concentration, whereas treatment with sulforaphane alone did not change TrxR activity. Interestingly, the TrxR activity measured after the combined treatment was lower than after the single treatments, demonstrating a synergistic effect (Fig. 2A-2C). In this study, the combined treatment is therefore proposed as a candidate chemotherapy for the effective treatment of HCC. However, Hep3B cells were more sensitive than HepG2 cells to the effects of sulforaphane and auranofin on TrxR activity, indicating that the TrxR system plays an important role in maintaining Hep3B cells as cancer cells.
In this study, the cells were treated with auranofin or sulforaphane separately under conditions that did not affect cell viability, in order to maximize the effect of the combination treatment. Combined treatment with auranofin and sulforaphane did not have a significant effect on normal hepatocytes, whereas in Hep3B cells the combination synergistically decreased cell viability (Fig. 2). The cell death induced by the combined treatment showed the features of apoptosis: the population of sub-G1 cells and the percentage of annexin-V-positive cells both increased (Fig. 3). The key proteins in the execution of apoptosis are caspases, which act at different apoptotic stages and are divided into initiator caspases, such as caspase-8 and -9, and effector caspases, including caspase-3 and -7 (Green and Llambi, 2015). The combined treatment activated caspase-9 and caspase-3, which increased the cleaved form of substrates such as PARP, whereas the expression of XIAP and cIAP-1 was reduced by treatment with auranofin plus sulforaphane (Fig. 4). In addition, the combined treatment increased the expression of the mitochondrial permeabilization regulator Bax and increased the Bax/Bcl-2 ratio, even though the expression of Bcl-2 was unchanged. Since the activity of caspase-8 and the expression of Bid were unchanged compared to the controls, the extrinsic pathway was not associated with combined treatment-induced apoptosis. The alteration in the expression of mitochondrial proteins by the combined treatment suggests mitochondrial dysfunction. MMP (Δψm) plays an important role in mitochondrial homeostasis and is a driving force for the transport of ions and proteins required for mitochondrial function (Zorova et al., 2018), and it was reduced by the combined treatment. Therefore, combined treatment-induced apoptosis may be a potential approach for inhibiting cancer by targeting mitochondria.
Apoptosis induced by oxidative stress is a much-discussed paradigm in cancer treatment strategies (Gerl and Vaux, 2005).

Fig. 9. ROS-mediated PI3K/Akt signaling modulates combined treatment-induced apoptosis in Hep3B cells. The cells were pretreated with NAC (10 mM) and LY294002 (5 µM) for 1 h prior to combined treatment with auranofin (AF) (1 µM) and sulforaphane (SFN) (7.5 µM). (A, B) Cell viability and TrxR activity were measured by MTT assay and DTNB reduction assay, respectively. (C) Following treatment, the cells were stained with annexin-V/PI and analyzed by flow cytometry. (D) MMP was measured by JC-1 staining and analyzed by flow cytometry. (E) Alterations in apoptosis-related protein expression were analyzed by Western blotting of total cell lysates separated by SDS-PAGE and transferred to membranes, with actin as a loading control. (F) TrxR activity was measured by DTNB reduction assay. Statistical analysis was performed using ANOVA with Tukey's post-hoc test. The error bars represent the standard deviation of three independent experiments; ***p<0.001 vs. the control group, ###p<0.001 vs. the auranofin and sulforaphane-treated group.
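As a rough illustration of the statistical comparison quoted in these legends (one-way ANOVA followed by Tukey's post-hoc test across three independent experiments), the Python sketch below shows how such a test is computed; the viability values, group names, and group sizes are hypothetical placeholders, not data from this study.

# Sketch only: one-way ANOVA + Tukey's post-hoc test on hypothetical MTT viability values.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([100.0, 98.5, 101.2])      # hypothetical % viability, n = 3
af_sfn = np.array([55.3, 58.1, 52.7])          # auranofin + sulforaphane (hypothetical)
af_sfn_nac = np.array([88.9, 91.4, 86.0])      # + NAC pretreatment (hypothetical)

f_stat, p_value = f_oneway(control, af_sfn, af_sfn_nac)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

values = np.concatenate([control, af_sfn, af_sfn_nac])
groups = ["control"] * 3 + ["AF+SFN"] * 3 + ["AF+SFN+NAC"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise post-hoc comparisons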
In particular, the combined treatment inhibited the activity of TrxR, a component of the oxidative defense system, and promoted the generation of ROS (Fig. 6, 7). In this case, DCFH-DA staining of intracellular ROS was observed to co-localize with a mitochondrial indicator. Additionally, the occurrence of mitochondrial superoxide was measured with MitoSOX, and the results showed that auranofin and sulforaphane treatment increased the proportion of MitoSOX-stained cells. However, the inhibition of TrxR activity and the loss of cell viability caused by the combined treatment were reversed by a free radical scavenger (NAC). Moreover, apoptotic cell death and the loss of MMP were rescued by NAC pretreatment, with a corresponding reduction of cleaved PARP and an increase of XIAP. Consequently, mitochondria-mediated ROS generation was induced by the combined treatment, and the generated ROS regulated auranofin and sulforaphane-induced apoptosis.
Activation of the PI3K/Akt signaling pathway is a representative anticancer target in many cancer types because it regulates various cellular functions, including cell proliferation, inhibition of apoptosis, tumor growth, and angiogenesis (Yu and Cui, 2016; Cheng et al., 2019). Considering that PI3K/Akt signaling plays an important role in cancer cells, we investigated whether the combined treatment influenced the expression and phosphorylation of PI3K and Akt. The levels of p-PI3K and p-Akt were decreased by treatment with auranofin plus sulforaphane in a time-dependent manner, suggesting that the PI3K/Akt signaling pathway should be considered in auranofin and sulforaphane-induced apoptosis. Furthermore, pretreatment with a PI3K inhibitor (LY294002) was used to confirm the alteration in combined treatment-induced apoptosis. Pretreatment with the PI3K inhibitor induced more cell death than the combined treatment alone, indicating that the apoptosis induced by auranofin and sulforaphane was due to inhibition of the PI3K/Akt signaling pathway. Because ROS generation occurred earlier than inhibition of the PI3K/Akt pathway, ROS were expected to act upstream in combined treatment-induced apoptosis. To determine whether ROS and the PI3K/Akt signaling pathway were independent or related to each other, Hep3B cells were pretreated with an ROS scavenger and a PI3K inhibitor. As shown in Fig. 9, despite the presence of the PI3K inhibitor, cell viability and TrxR activity were recovered by NAC pretreatment, and apoptotic cell death and loss of MMP decreased. Thus, the combined treatment induced apoptosis by inhibiting the PI3K/Akt signaling pathway in a manner dependent on ROS generation.
These results demonstrated that auranofin and sulforaphane synergistically induce apoptosis in Hep3B cells, and that the combined treatment enhances mitochondrial dysfunction and ROS accumulation and decreases TrxR activity in the process of inducing apoptosis. | 2020-08-29T13:01:48.528Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "bc1bd0d66df9252f98a94bce487b63869d1a080f",
"oa_license": "CCBYNC",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7457169",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "7026592125f931d9f1109d25dc7ea8492f4b81e0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
264143440 | pes2o/s2orc | v3-fos-license | Mutation in Chek2 triggers von Hippel-Lindau hemangioblastoma growth
Purpose Von Hippel-Lindau (VHL) is a rare inherited disease mainly characterized by the growth of tumours, predominantly hemangioblastomas (Hbs) in the CNS and retina, and renal carcinomas. The natural history of VHL disease is variable, differing in the age of onset and its penetrance, even among relatives. Unfortunately, VHL is sometimes more severe than average: the onset starts in adolescence, and surgeries are required almost every year. In these cases, the factor that triggers the appearance and growth of Hbs usually remains unknown, although additional mutations are suspected. Methods We present the case of a VHL patient whose first surgery was at 13 years of age. Over the following 8 years, he underwent 5 surgeries for resection of 10 CNS Hbs. To clarify this severe VHL condition, DNA from a CNS Hb and from white blood cells (WBC) was sequenced using next-generation sequencing technology. Results Massive DNA sequencing of the WBC (germ line) revealed a pathogenic mutation in CHEK2 and the complete loss of a VHL allele (both tumour suppressors). Moreover, in the tumour sample, additional mutations, in BRAF and PTPN11, were found. Familial segregation studies showed that the CHEK2 mutation was in the maternal lineage, while the VHL mutation was inherited through the paternal lineage. Conclusions Finally, the clinical history correlated with the different genotypes in the family, leading to the conclusion that the severity of these VHL manifestations is due to both the VHL and CHEK2 mutations. This case report highlights the importance of deeper genetic analyses in inherited rare diseases, to uncover non-expected mutations.
The natural history of VHL shows that the average onset is at 33 years of age and that 80% of patients will bear at least 1 CNS Hb in their lifetime [22,24]. Furthermore, visceral lesions such as clear-cell renal cell carcinoma (ccRCC), pheochromocytomas, pancreatic neuroendocrine tumours, and benign cystadenomas of the adnexal organs appear in these patients. The CNS tumours include Hbs of the retina and the cranio-spinal axis (cerebellum, brain stem, and spinal cord) as well as endolymphatic sac tumours (ELSTs) [7,18]. CcRCC and its metastases constitute the most frequent cause of mortality associated with VHL disease, while the development and progression of retinal Hbs and CNS Hbs cause high morbidity and loss of quality of life, such as blindness due to retinal detachment and the need for surgeries to control the associated symptoms [9].
Without an effective treatment for VHL tumours, repeated surgeries remain the first-line approach to tackle the disease [5]. Unfortunately, repeated surgeries decrease the quality of life of the patient [6]. Inhibitors of VEGF/VEGFR and mTOR have been tested in clinical trials, showing limited success [14]. More precisely, in advanced ccRCC, tyrosine kinase inhibitors such as pazopanib have been used, but resistance appears as an adaptation to the tumour environment [26]. Recently, belzutifan, a selective HIF-2α inhibitor for VHL RCC, showed a limited response in a trial of 61 VHL patients suffering from early-stage non-metastatic ccRCCs [17,26].
Clinical diagnosis of VHL is made as follows: when there is an affected relative, the presence of a single CNS Hb, pheochromocytoma, or ccRCC confirms the diagnosis; in the absence of familial history, the presence of 2 CNS Hbs, or 1 CNS Hb plus another visceral tumour, is required. Obviously, the presence of a pathogenic mutation in the VHL gene found by genetic testing is a definite criterion for the diagnosis [18].
Lonser et al. [21] and Dornbos et al. [11] described the natural history of VHL disease by means of a study including 225 VHL patients, who developed more than 2500 CNS Hbs, over a mean follow-up of 6.9 years (observation ranging from 2.1 to 9.0 years). Most tumours (72%) grew following a saltatory pattern (periods of quiescence followed by periods of growth). In a minor proportion, tumours progressed following exponential and linear patterns (22% and 6%, respectively) [11]. Finally, only 159 (6.3%) of all Hbs became symptomatic, requiring surgery during the follow-up (6.9 years).
Zhang et al. aimed to elucidate the genotype-phenotype correlations and clinical outcomes in VHL patients with large deletions (LDs). They concluded that VHL patients with a deletion in exon 2 had an earlier age of onset of ccRCC and pancreatic lesions, but that the risk of ccRCC was lower in VHL patients with LDs and a BRK1 deletion. In addition, the group with an earlier age of onset had a poorer prognosis [19].
The histological origin of Hbs was unknown until 2003 [18]. More recent investigations have shown that they may originate from mesoderm-derived hemangioblasts arrested during embryonic development [15]. Like normal embryologic hemangioblasts, Hb cells express hemangioblast markers, among them the TBXT gene, called brachyury, a transcription factor within the T-box family of genes; VEGF receptor 2 (Flk-1); and the stem cell leukaemia gene (SCL), which encodes a tissue-specific basic helix-loop-helix (bHLH) protein with a pivotal role in hemopoiesis and vasculogenesis [2]. The VHL tumour-derived stromal cells (hemangioblasts) can differentiate into hematopoietic and endothelial progenitors, which may explain the anatomic distribution and variability of Hbs in VHL patients, mirroring the SCL axis of expression during embryogenesis [2].
A higher tumour burden has recently been associated with male sex and germline deletions. Patients with missense VHL mutations harbour fewer tumours than those with nonsense mutations and deletions [21]. The rate of Hb progression is significantly more rapid in symptomatic tumours and in tumours with associated cysts [19,21]. Patients younger than 20 years are significantly more likely to develop new tumours than those older than 40 years [21]. Consequently, surgical resection between 20 and 40 years could result in clinical stability, since the risk of new tumour development decreases with age. Development of new tumours is also associated with the tumour burden at presentation. A greater tumour burden may be an indicator/consequence of other aggressive underlying pathological factors.
Knowledge and understanding of tumour pathophysiological features may help prognosis and improve the effectiveness of surgical and nonsurgical treatments and provide avenues for potential new therapies.
In this context, we studied the case of a VHL family with a member presenting a very early onset and a dramatic rate of Hb growth, requiring 8 surgeries in less than 10 years. This evolution of the disease contrasted with a more classical evolution in his relatives, who bear the same VHL familial mutation. To find a putative triggering factor, massive DNA sequencing of germline and tumour cells was performed. An additional pathogenic mutation in a tumour suppressor gene was found, possibly constituting a modifying factor of the natural course of the disease.
With this work, we want to outline the importance of personalized analysis in medicine, especially in inherited rare diseases and oncology, to uncover non-expected mutations.
Human samples
Blood samples and surgical surplus from members of a VHL family were processed in our lab. Informed consent from donors was obtained. The Ethical Committee of CSIC (Spanish National Research Council) approved all the procedures (references 075/2017 and 228/2020).
DNA total extraction and sequencing
Total DNA was extracted from PBL (peripheral blood lymphocytes) and fresh tumour pieces using the QIAamp Mini Kit (Qiagen, Düsseldorf, Germany), following the manufacturer's instructions. In the case of buccal swabs, DNA extraction was also performed with the QIAamp Mini Kit (Qiagen) but with a longer lysis time in the first step. The panel used for the preparation of the library was designed using SureSelectXT technology (Agilent Technologies, CA, USA), aimed at capturing the exons of the genes of clinical interest and the flanking splicing regions (5-20 bp). Sequencing of the library was performed on a next-generation mass sequencer, NovaSeq 6000 System (Illumina, CA, USA). The tumour sample was sequenced with a reading depth of 500×. The sequences obtained were aligned against the reference genome (GRCh38/hg38) and filtered according to specific quality criteria. Subsequently, they were analysed for the identification of genetic variants located in exonic regions or splicing regions (at least 5 bp), including missense or nonsense mutations, synonymous mutations, indels, and small insertions or deletions, found above a minimum allele frequency (> 30% of germline reads and > 5% of tumour sample reads). Both processes were carried out using the DRAGEN BioIT Platform software (Illumina, version 07.021.510.3.5.7). The identified variants were filtered and narrowed down to the study genes using the bcftools view tool (developed by Li et al., version 1.15.1) [27].
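As a rough illustration of the allele-frequency filtering step described above, the Python sketch below scans a VCF with pysam and keeps variants whose variant allele fraction (VAF), computed from the AD (allelic depth) FORMAT field, exceeds a chosen threshold. The file names, sample names, and the use of pysam are illustrative assumptions and do not reproduce the actual DRAGEN/bcftools workflow.

# Minimal sketch (assumed inputs): retain variants above a VAF threshold.
import pysam

def filter_by_vaf(vcf_path, out_path, sample, min_vaf):
    vcf_in = pysam.VariantFile(vcf_path)
    vcf_out = pysam.VariantFile(out_path, "w", header=vcf_in.header)
    for rec in vcf_in:
        try:
            ad = rec.samples[sample]["AD"]  # (ref_depth, alt_depth, ...)
        except KeyError:
            continue
        if ad is None or any(d is None for d in ad) or sum(ad) == 0:
            continue
        vaf = sum(ad[1:]) / float(sum(ad))  # fraction of reads supporting ALT alleles
        if vaf >= min_vaf:
            vcf_out.write(rec)
    vcf_out.close()

# Hypothetical file and sample names; thresholds follow the text:
# >30% of germline reads and >5% of tumour-sample reads.
filter_by_vaf("germline.vcf.gz", "germline.pass.vcf", "SAMPLE_GERMLINE", 0.30)
filter_by_vaf("tumour.vcf.gz", "tumour.pass.vcf", "SAMPLE_TUMOUR", 0.05)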
Variant annotation was performed using the freely available online platform wANNOVAR (WGLAB, PA, USA), which compiles the major databases, such as ClinVar (with specific information on variants associated with a known genotype), and population frequency databases: dbSNP, gnomAD (Genome Aggregation Database), the 1000 Genomes Project, and NHLBI-ESP 6500 exomes.
The pathogenicity of the variants was also estimated using CADD together with a set of selected prediction systems included in the dbNSFP database (SIFT, PolyPhen2, MutationTaster, MutationAssessor, LRT, FATHMM, and MetaSVM) for missense mutations. For mutations identified in splicing regions (including synonymous mutations), the effect on mRNA processing was assessed using the SpliceAI [13], SpliceSiteFinder, and MaxEntScan prediction systems, included in the SPiCE algorithm. The conservation of the nucleotide position was evaluated according to the UCSC score ranges for the PhyloP tool.
Finally, the association of the identified variants with OMIM syndromes has been evaluated (date updated to 24 April 2022).
The nomenclature and classification of the variants are based on the guidelines of the Human Genome Variation Society (HGVS) (http://varnomen.hgvs.org/) and the American College of Medical Genetics and Genomics (ACMG).
The analysis of CNVs (copy number variations) is a screening performed with parameters established from a set of control samples, using the DRAGEN software (version 07.021.572.3.6.3). This algorithm allows the identification of non-recurrent CNVs associated with the patient's phenotype, following the quality criteria.
Genetic analysis of VHL gene in a family
The main VHL case of study is a 23-year-old male, the third generation with VHL disease, with genetic diagnosis at 11 years of age. The family mutation consists of a complete deletion of one VHL allele, the patient being hemizygous for the VHL gene in the germ line. The loss occurred on chromosome 3 at the 3p25.3 cytoband, in the genomic region chr3:10141778-10149995 (GRCh38). This mutation was inherited through the paternal lineage and was also present in his sister. The VHL disease segregation among the members of the family is shown in Fig. 1A (pedigree).
Patient III-1, the case of this study (Fig. 1A, black arrow), bears a complete gene deletion. In Fig. 1B, the interactive tool Integrative Genomics Viewer (IGV) for the visual exploration of genomic data was used. He was initially attended at another centre and then referred to our VHL unit at FJD for follow-up and treatment of CNS tumours derived from VHL disease.
Clinical manifestations of the VHL affected members
Table 1 shows the different surgeries undergone by the VHL-affected members of the family, including the age at VHL disease onset, the date of each surgery, and the type of tumours resected, for the father of the patient (II-1); the patient himself, the main case of study (III-1); and his sister (III-2). In Fig. 1C, a timeline of the surgeries is shown for easier comparison and understanding of the clinical courses of these 3 patients.
Figure 2 shows MRI images of different tumours present in the main case of study prior to surgeries.
Patient I-1
The index case of VHL disease. He was clinically diagnosed late in life, and no clinical data from this patient are available.
Patient II-1
A 55-year-old male VHL patient with onset of the disease at 42 years of age, genetically diagnosed with a complete allelic loss of VHL. This patient underwent his first surgery at another centre, with removal of 2 hemispheric cerebellar Hbs in 2009, requiring a ventriculo-peritoneal shunt due to hydrocephalus. In 2010, he underwent a second surgery at FJD for removal of two Hbs located on the posterior medulla oblongata and the upper cervical spinal cord. At the end of 2010, an additional surgery took place for removal of two other Hbs at the brain stem (nested in the obex and the floor of the IV ventricle) and two cerebellar Hbs (paravermal and right hemisphere). In 2014, a new surgical intervention was performed with removal of three tumour nodules, the largest in a posterior medullary location and two smaller ones in a left medullary location. Then, under the same anaesthetic procedure, a second surgical intervention was performed with a left retro-sigmoid approach, for removal of a fourth tumour in the cerebellar hemisphere. The postoperative period followed without neurological deficits. From 2014 until now, no further neurosurgeries have been needed.
Patient III-2
A 21-year-old woman, genetically diagnosed with VHL disease at the age of 9 and carrier of a pathogenic mutation inherited from her father (patient II-1), was referred in 2014 for MRI evaluation at the VHL unit of the FJD, at the age of 12, being asymptomatic at that moment. In 2017, MRI showed 2 Hbs, in the obex (medulla oblongata) and the right para-medullary region, both of 5 mm, without associated cysts. A full-spine MRI showed an absence of Hbs in the spinal cord. Conservative management was decided, with programmed MRI follow-up after 1 year. In 2018, the patient started with episodes of hiccups, coughing, and choking, presenting enlargement of the lesion located in the obex (8 mm in size), while the right 5-mm Hb in the para-medullary region remained unchanged. Surgical intervention was decided to avoid the progression of symptoms and a permanent neurological deficit. The patient underwent surgery, achieving the removal of the two medullary tumours without complications. No further neurological surgeries have been necessary since then.
Patient III-1
Son of patient II-1 and genetically diagnosed with VHL at 11 years old, asymptomatic and without tumours at the time of diagnosis. During his follow-up in 2012, an MRI showed for the first time the presence of a small left lateral medullary nodule, as well as a 7.5-mm nodule in the posterior medulla oblongata. The latter grew quickly, reaching 11 mm in a year. Given the rapid growth and location of the lesion, he underwent a first surgery in 2013 to resect 2 medullary and 1 cervico-medullary Hbs. After surgery, the patient presented some transitory sequelae, with slight loss of finger sensitivity in the right hand and slight dysmetria in the left upper limb, with complete recovery at consecutive revisions. Three months after the intervention, the patient presented a surgical wound infection and post-laminoplasty cervical kyphosis, requiring reoperation for cleaning, with anterior cervical arthrodesis C4-C5 and posterior instrumentation.
The patient remained stable for 18 months. Then, in 2015, a brain and spine MRI exam revealed the presence of a subcentimetric Hb on the right lateral medulla, without an associated cyst; two more lesions were detected at C3-C4 and C6, without associated cysts, as well as a nodular Hb at the posterior conus medullaris, with associated edema and thickening of the cone. In August 2016, while the cervical and cone lesions remained stable, the lesion in the right margin of the medulla showed millimetric growth. The lack of significant symptoms, plus the millimetric growth, led to conservative management for the next 2 years of follow-up. However, in 2018, an MRI of the whole spinal cord showed an increase of the conus medullaris lesion, with significant growth of the cyst and of the edema at that level and in the surrounding spinal cord, respectively (Fig. 3). These findings conditioned a new surgery, with a complete resection of the tumour and an adequate post-surgical evolution. Nevertheless, 6 months later, in 2019, the patient presented paraesthesia in the left upper limb with radiological stability, and consequently conservative management was maintained. In January 2020, the patient presented a worsening of the neurological symptoms in the extremities. Imaging tests showed an increase in the size of the medullary lesion, as well as oedema surrounding the associated cyst. Likewise, an increase in the size of the cyst over the C4-C5 lesion was apparent, with medullary bulging and perilesional edema. Given these symptoms, in March 2020, surgery was performed on the medullary and cervical lesions, resecting a total of 3 Hbs (1 medullary and 2 cervical), with no complications. Postoperatively, the patient presented alterations in proprioceptive sensitivity and coordination in the right upper limb, which improved during hospitalization.
The follow-up of the patient showed progressive improvement of the sensory deficit in the right hand, but a persistent decrease in manual ability. The brain lesions remained stable, but significant growth of the cystic area associated with the C5 anterior Hb was apparent on MRI of the cervical spine. Conservative management was decided on this occasion. However, the evolution was towards continuous cervicalgia with pain radiating to both trapezii. Given the growth of the cervical lesion cyst, the patient's symptoms, and the surgical accessibility by a posterior approach, a new intervention was decided. In October 2021, surgery was performed, and the lesion was resected without associated complications. In October 2022, the patient presented a rapid clinical worsening with uncontrollable pain at the level of the left costal grid. An MRI of the dorsal spine was performed, showing significant growth of a previously known dorsal intramedullary Hb, with a large cystic cavity and associated myelopathy, which had remained stable until that moment (Fig. 2). Given the radiological and clinical myelopathy, a new surgery was performed in December 2022, achieving the removal of the lesion at the D3 level, without complications and with adequate post-surgical evolution.
In the last MRI evaluation, in February 2023, a growth of the right cerebellar lesion was evident, with probable future surgical management, given the progressive growth of the lesion and the patient's coordination alterations.
Genetic analysis from tumoral and blood DNA by NGS
Looking at the clinical manifestations of this family (Table 1), especially the onset age and the number of surgeries/tumours, the case of patient III-1 is striking for its early and severe VHL presentation.
In certain cases of VHL disease, the fast growth of Hbs may occur due to the combination of the already present loss of function (LOF) of the VHL gene with somatic mutation(s) in other tumour-related gene(s). To investigate the possibility of additional mutations in these genes, a massive sequencing analysis of total DNA from a piece of Hb coming from the last surgery (Table 1) in the cerebellum of patient III-1 was performed. The massive sequencing, with a high depth of reads, yielded three different mutated genes present in this tumour, in addition to the already known mutation in the inherited VHL gene.
The variant c.1259+1G>C (NM_007194.4, rs121908707) was detected in heterozygosis in the CHEK2 gene, affecting the first nucleotide of intron 10, interfering with the splice site and leading to skipping of exon 10, loss of the reading frame, and premature termination of the protein sequence (p.I336Pfs*2). The in silico CADD (combined annotation dependent depletion) score was 34, pointing to a high probability of pathogenicity.
Analysis of the DNA from peripheral blood indicated the presence of the same mutation in the germ line, as shown in Fig. 4B (III.1).
To determine whether the mutation was inherited from the parents or was a de novo mutation in patient III-1, segregation analysis of the CHEK2 mutation in the family was performed. Figure 4A shows the familial pedigree and the Sanger sequencing of CHEK2 DNA from saliva in both parents and his sister. As seen in the DNA sequencing chromatogram, the CHEK2 mutation was found in the mother but absent in the father and the sister, both VHL patients. Of note, his mother had been diagnosed with an in situ breast carcinoma, resected 5 years ago, and his maternal grandfather was diagnosed with prostate cancer. In both cases, the presence of tumours correlated with the finding of the CHEK2 mutation.
In addition to CHEK2 mutation found in the tumour and inherited through the germ line, 2 more somatic mutations were detected only in the tumour sample.
The analysis of CNVs in the tumour sample showed the presence of a deletion in exon 1 of BRAF. The loss occurs on chromosome 7, at cytoband 7q34, in the genomic region chr7:140924459-140924753 (GRCh38). The BRAF proto-oncogene encodes a protein of the RAF family, composed of serine/threonine kinases that mediate cellular responses to growth signals by activating the mitogen-activated protein kinase (MAPK) pathway. Additionally, the analysis also showed the presence of a deletion in exon 1 of PTPN11, a member of the protein tyrosine phosphatase (PTP) family. The loss occurs on chromosome 12, at cytoband 12q24.13, in the genomic region chr12:112419022-112419200 (GRCh38). The potential consequences of these mutations will be explained in the discussion section.

Fig. 3 Contrast-enhanced T1 and T2 MRI sagittal images obtained from patient III-1, showing a hemangioblastoma with associated cyst located in the conus medullaris that required surgery
Discussion
Rare diseases are characterized by the low number of patients suffering from each individual disease, but also by the scarce knowledge at the clinical and research levels, due to their rarity. In addition to their condition, it must be considered that these patients are not free from other diseases or mutations that affect the general population. Particularly, in the case of VHL disease, we have published the case of a 76-year-old woman in whom a mutation in CLN5 (ceroid lipofuscinosis, neuronal, 5) offered a protective effect, preventing VHL-related tumour development [4]. Here, we present the other side of the coin: a mutation that can dramatically affect the appearance and development of CNS Hb tumours.
In rare diseases (RDs), there are two main goals: early diagnosis and knowledge of the natural history of each disease, which contribute to better clinical management and to finding therapies that improve quality of life. Knowledge of the natural history of the disease is a key point for its clinical management. For this purpose, the medical records of patients, and review studies based on them, allow the expected evolution of a rare pathology to be known.
In the case of VHL disease, this makes a follow-up of patients by experienced clinicians mandatory.
As mentioned in the Introduction section, surgery arises as the only way to resolve the symptoms of VHL disease. Since patients have multiple asymptomatic/symptomatic tumours in the context of the disease, surgeons must follow a conservative approach, and only remove those leading to life-threatening symptoms, deciding the proper timing in each case.
To describe and publish the natural history of CNS Hb development in VHL patients is extremely useful. Lonser et al. indicated that in 70-80% of cases, the common pattern is an evolution of growth periods followed by quiescence phases in tumour development [21].
Several clinical trials have been carried out with drugs, most of them used in cancer chemotherapy, to stop and space out the need for surgeries by extending the periods of tumour quiescence (angiogenesis inhibitors, HIF-2 dimerization inhibition, β-blockers, etc.). However, no clearly positive results are available so far [14].
The present work describes the particularly severe case of a VHL patient, with a very early disease onset, at the age of 11 years, and with the need for a first surgery at the age of 13 years. Since then, a conservative surgical approach has been applied, removing only those tumours with a severe clinical impact on the patient. A total of 8 neurosurgeries have been necessary, and he is currently 23 years old. The patient inherited the complete deletion of a VHL allele paternally, but both his father and grandfather had the disease with a much later onset, presenting their first symptoms after the age of 40. Regarding the father, his first CNS surgery was at the age of 42 and, as shown in Table 1 and Fig. 1, in a period of 4 years he underwent several surgeries. Notably, however, since 2014 he has remained without the need for new interventions.

Fig. 4 Familial pedigree and Sanger sequencing of CHEK2 DNA. As seen in the DNA sequencing chromatogram, the CHEK2 mutation was found in the mother but absent in the father and the sister, both VHL patients
It could be hypothesized that in patient III-1 (Fig. 1) there is a phenomenon of genetic anticipation, as published in VHL [25]. However, the present work offers another explanation. In fact, this is a paradigmatic case of how useful a personalized medicine approach can be. By massive genetic sequencing analysis of DNA from both the tumour and the germ line (peripheral blood), we discovered the presence of two germline mutations in the patient. He inherited an allelic loss of VHL from the paternal side and a pathogenic mutation in the tumour suppressor gene CHEK2 from the maternal side.
CHEK2 encodes the Chek2 protein kinase, which is activated in response to DNA damage and is involved in cell cycle arrest. Mutations in CHEK2 are responsible for an increased predisposition to breast, prostate, colon, stomach, and brain cancer [20]. In relation to VHL, it has been shown that CHEK2 binds to the β-domain of pVHL and phosphorylates it upon DNA damage. This modification enhances pVHL-mediated transactivation of p53, recruiting p300 and Tip60 to the chromatin of p53 target genes [23]. Moreover, CHEK2 functions as a DNA damage checkpoint kinase by phosphorylating p53 [16]. Bell et al. [20] described heterozygous germline mutations in the CHEK2 gene in patients with Li-Fraumeni syndrome, suggesting that CHEK2 is a tumour suppressor gene whose loss of function confers predisposition to develop sarcoma, breast cancer, and brain tumours. The CHEK2 variant found in our case had been previously identified and described in patients with non-Hodgkin's lymphoma by Havranek et al. [1].
Upon finding the CHEK2 mutation, it was hypothesized that this mutation would not be present in the rest of the family members carrying the VHL mutation. Thus, segregation analysis of CHEK2 in the family germline was carried out. As expected, it was observed that the mutation came from the maternal line (pedigree II-2). It is worth mentioning that the mother underwent breast cancer surgery several years ago. In the family, the only case in which pathogenic mutations in both genes, VHL and CHEK2, are concomitant is patient III-1, who shows an unusual and very severe clinical history of VHL disease. We therefore conclude that the mutation in CHEK2 is the factor triggering the rapid growth of CNS Hbs in this patient. Furthermore, at the time of writing this manuscript, Zhang et al. reported two VHL patients with LDs also carrying CHEK2 and FLCN germline mutations, respectively [19].
Moreover, in other pathologies such as familial cavernomatosis, it has recently been described that somatic mutations in PIK3CA trigger the growth and bleeding of cavernomas that lead to surgeries [3]. Considering the evolution of the patient and the differences with respect to first-degree relatives (bearing a VHL mutation but lacking the CHEK2 mutation), it is very likely that the combination of mutations in two different tumour suppressor genes is responsible for the more aggressive behaviour of the disease in patient III-1. Following this clinical-genetic finding, the family has been referred to familial oncology, to study the possible benefit derived from CHEK2 mutation-targeted therapy, which could stop the continuous triggering of tumour growth.
In addition, a complete genetic analysis of the tumour revealed not only the presence of the CHEK2 and VHL mutations, but also the presence of deletions in exon 1 of BRAF and PTPN11, genes involved in tumour development through LOF. It is worth remarking that these mutations were not present in the germline.
In the case of BRAF, a key intermediate in the RAS pathway and in the transmission of signals that regulate cell proliferation, differentiation, and survival, exon 1 corresponds to the N-terminal regulatory domain that precedes the Ras-binding domain of BRAF. In the work of Martínez-Fiesco et al. [12], it is suggested that this region may represent an additional level of regulation of the RAS-BRAF interaction. Moreover, Terrell et al. [23] reported a high affinity of BRAF for KRAS; however, they observed that BRAF proteins lacking the N-terminal domain had an increased affinity for RAS family proteins other than KRAS, such as HRAS and NRAS.
PTPN11, as a PTP protein, regulates a variety of cellular processes, including cell growth, differentiation, the mitotic cycle, and oncogenic transformation. Mutations in PTPN11 lead to activation of the RAS-MAPK pathway. Exon 1 is included within the N-terminal SH2 domain, which is involved in switching the protein between its inactive and active conformations. Therefore, alterations in this domain can cause a significant shift in this balance, favouring the active conformation and, therefore, malignancy.
As mentioned above, another interaction in the germline was recently described by our group [4]. Fortunately, in that case the effect was the opposite of the present one, since the tumour development due to a VHL mutation was counteracted by a germline mutation in heterozygous condition in the CLN5 gene [4]. These findings of gene interactions between VHL and other genes in the germline underline the need to study the whole exome in those cases in which the disease does not follow its natural course. Therefore, complementing the two main points indicated above (diagnosis and natural history of the disease), a massive germline and tumour NGS study should be done for all VHL patients, starting from their first surgery.
Advances in genetics and molecular biology techniques make it advisable to apply personalized medicine to monitor and treat each patient according to their genetics and associated symptoms.
Thus, by presenting this case, we propose deeper genetic studies, searching for the origin of abnormal tumour growth that falls outside the natural history. In many cases, if we were able to find the cause, the quiescence of the disease could perhaps be reached by treating patients according to the affected gene. The study, through massive sequencing, of genetic factors that can trigger the development of tumours can help to modify the natural evolution of genetic tumoral diseases like VHL in a personalized way. On the other hand, we cannot rule out that there may also be environmental or epigenetic causes triggering Hb growth, and these should also be considered.
Conclusions
The present work shows the importance of personalized medicine in the case of a VHL patient bearing a second inherited mutation, which we propose as the trigger for the growth of his CNS hemangioblastomas. VHL is a tumour suppressor gene, and we postulate that the cause of the abnormal severity in the course of his disease is an additional mutation in CHEK2, another tumour suppressor gene. Consequently, the onset of the disease was brought forward to 11 years of age and, over the following 12 years, he underwent 5 surgeries (10 hemangioblastomas), while his VHL mutation-bearing relatives (father and sister) had a much later onset and underwent 4 and 1 surgeries, respectively, in their lives.
Thus, our objective is to provide an explanation why some VHL patients have an accelerated growth in their hemangioblastomas and to highlight the importance of exhaustive genetic analyses to uncover mutations in additional genes that can modulate the development of the disease and establish therapeutic targets.
Fig. 1
Fig. 1 Familial pedigree for the VHL mutation. Read-ratio representation from the Integrative Genomics Viewer (IGV) of VHL-III-1. Timeline of surgeries of the VHL patients in the pedigree
Table 1 CNS
Fig. 2 Contrast-enhanced T1 and T2 MRI sagittal cervical and dorsal images obtained from patient III-1, showing some of the hemangioblastomas with associated cyst that the patient presented | 2023-10-17T06:17:37.698Z | 2023-10-16T00:00:00.000 | {
"year": 2023,
"sha1": "f702e50ed8d44bda704b86b45f6c4c0b2f185356",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00701-023-05825-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c9d137569c5315f5f3b60cbd39282d216107d1a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251741574 | pes2o/s2orc | v3-fos-license | Origin of nonsymmorphic bosonization formulas in generalized antiferromagnetic Kitaev spin-$\frac{1}{2}$ chains from a renormalization-group perspective
Recently, in the Luttinger liquid phase of the one-dimensional generalized antiferromagnetic Kitaev spin-1/2 model, it has been found that the abelian bosonization formulas of the local spin operators only respect the exact discrete nonsymmorphic symmetry group of the model, not the emergent U(1) symmetry. In this work, we perform a renormalization group (RG) study to provide explanations for the origin of the U(1) breaking terms in the bosonization formulas. We find that the lack of U(1) symmetry originates from the wavefunction renormalization effects in the spin operators along the RG flow induced by the U(1) breaking interactions in the microscopic Hamiltonian. In addition, the RG analysis can give predictions to the signs and order of magnitudes of the coefficients in the bosonization formulas. Our work is helpful to understand the rich nonsymmorphic physics in one-dimensional Kitaev spin models.
I. INTRODUCTION
Kitaev materials have attracted intense research attention in the past decade 1-24 , since they not only provide potential experimental platforms for realizing the Kitaev spin-1/2 model on the honeycomb lattice -- a prototypical strongly correlated model for topological quantum computations 25,26 -- but also are representatives of frustrated magnetic systems, having close relations to the fields of strongly correlated quantum magnetism 27,28 and quantum spin liquids [29][30][31][32][33][34] . Theoretical and experimental studies have established the fact that Kitaev materials can be described by generalized Kitaev spin models 2, [16][17][18][19][20] which -- in addition to the Kitaev interaction -- contain other types of couplings, including the Heisenberg interaction, the off-diagonal Γ and Γ' terms, and beyond-nearest-neighbor interactions. One of the central themes in the field of Kitaev materials is to understand the effects of such additional interactions, which are inevitable in real materials.
Indeed, it has been demonstrated in Ref. 44 that the zigzag phase in 2D Kitaev-Heisenberg-Gamma model can be obtained by weakly coupling an infinite number of 1D chains, thereby providing a controllable approach to the 2D zigzag order. In addition, 1D studies also have their independent merits, since there have been proposals on realizing 1D generalized Kitaev spin models in real materials 24 .
As shown in Ref. 40, the system has an emergent U(1) symmetry at low energies in the gapless Luttinger liquid phase in the generalized Kitaev spin-1/2 chain with an antiferromagnetic (AFM) Kitaev coupling. At first sight, it seems that the discrete nature of the nonsymmorphic symmetry group is lost in the long wavelength limit. However, as discussed in detail in Ref. 44, the discreteness of the nonsymmorphic symmetry group still has notable influence on the low energy properties, reflected by the constraints on the abelian bosonization formulas for the spin operators. The abelian bosonization formulas build the connections between the lattice spin operators on one side and the low energy field theory degrees of freedom on the other side, and the two sides have to be covariant under symmetry transformations.
One typical type of the nonsymmorphic symmetry operations is the screw operation, where a spatial translation followed by a spin rotation is a symmetry of the system, whereas neither the translation nor the spin rotation alone leaves the system invariant. Unlike the on-site spin rotational symmetry in a translationally invariant system, a screw symmetry relates the spin operators on different sites. Hence it is expected that the constraint imposed by a screw symmetry is much looser than the constraints imposed by translation plus global spin rotation.
Indeed, it was found in Ref. 44 that the bosonization formulas for the spin-1/2 Kitaev-Heisenberg-Gamma chain contain a large number (equal to ten) of nonuniversal bosonization coefficients, which are only compatible with the exact nonsymmorphic symmetry group, not respecting the emergent U(1) symmetry. The ten bosonization coefficients are determined by density matrix renormalization group (DMRG) numerical simulations to a high degree of accuracy 44 . However, although a symmetry analysis is able to determine the constraints on the relations among the bosonization coefficients, it cannot give any prediction on the magnitudes or signs of the coefficients, neither can it provide explanations for the mechanism of how these coefficients arise.
In this work, in view of the aforementioned incapability of the symmetry analysis, we perform a renormalization group (RG) study in the Luttinger liquid phase of the Kitaev-Heisenberg-Gamma spin-1/2 chain in the AFM Kitaev region. The basic idea is that the U(1) breaking terms in the microscopic Hamiltonian renormalize the spin operators along the RG flow, and the nonsymmorphic bosonization coefficients are remnants of such renormalization effects in the low energy physics. Our RG study is able to explain the origin of the U(1) breaking bosonization coefficients. In addition, it can also give predictions on the signs and order of magnitude of the bosonization coefficients. We note that, as revealed by this RG study, the U(1) breaking effects in the bosonization coefficients arise at the "Planck scale" of the lattice, before the lattice sites within a unit cell get smeared and lose distinguishability. Therefore, we emphasize that our RG treatment is applied in the ultraviolet (UV) high energy region, unlike the usual cases where an RG analysis is typically performed in the low energy limit. This RG study cannot produce quantitative predictions, though it does correctly capture the qualitative features of the related physics.
The rest of the paper is organized as follows. In Sec. II, we introduce the model Hamiltonian, discuss the phase diagram of the model, and give a review of the nonsymmorphic bosonization formulas in the Luttinger liquid phase under interest. In Sec. III, the general framework of the RG treatment in this work is formulated. Sec. IV derives and solves the RG flow equations for the scaling fields which are coupled to the spin operators. In Sec. V, the bosonization coefficients are derived by solving the flow equations. Finally, in Sec. VI, we briefly summarize the main results of the paper.

We consider a spin-1/2 Kitaev-Heisenberg-Gamma chain in zero magnetic field, defined by the Hamiltonian in Eq. (1), in which i, j are two nearest-neighbor sites; γ = x, y is the spin direction associated with the γ bond shown in Fig. 1 (a); α ≠ β are the two remaining spin directions other than γ; and K, J, and Γ are the Kitaev, Heisenberg, and Gamma couplings, respectively. The coupling constants K, Γ can be parametrized as K = cos(ψ), Γ = sin(ψ), in which ψ ∈ [0, π]. The phase diagram of the model in terms of J, ψ is shown in Fig. 2.
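For orientation, with the couplings defined above, the nearest-neighbor Kitaev-Heisenberg-Gamma chain Hamiltonian is conventionally written as follows; this is a sketch of the standard form, and the precise sign and bond conventions of Eq. (1) may differ in detail:

H = \sum_{\langle i,j\rangle \in \gamma\text{-bond}} \Big[ K\, S_i^{\gamma} S_j^{\gamma} + J\, \vec{S}_i \cdot \vec{S}_j + \Gamma \big( S_i^{\alpha} S_j^{\beta} + S_i^{\beta} S_j^{\alpha} \big) \Big], \qquad K = \cos\psi, \quad \Gamma = \sin\psi,

where the bond type γ alternates between x and y on successive bonds of the chain, and (α, β) are the two spin directions other than γ.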
FIG. 2: Phase diagram of the spin-1/2 Kitaev-Heisenberg-Gamma chain in the region K > 0, J < 0, in which the vertical axis is J and the horizontal axis is ψ where K = cos(ψ) and Γ = sin(ψ). In the figure, "LL" and "FM" denote the Luttinger liquid and FM phases, respectively 40,44. The phase boundary between LL and FM phases is described by an emergent SU(2)1 conformal symmetry at low energies 44.
From here on, we will stick to the four-sublattice rotated frame unless otherwise stated.
B. Phase diagram in the antiferromagnetic Kitaev region

The phase diagram in the region K > 0, J < 0 is shown in Fig. 2. Since a global spin rotation around the z-axis by π changes the sign of Γ but leaves K and J invariant, it is enough to consider the Γ > 0 region.
As can be seen from Fig. 2, there are two phases close to the Γ = 0 line (i.e., the vertical axis), including a Luttinger liquid phase (denoted as LL in Fig. 2), and a ferromagnetically ordered phase (denoted as FM). It has been shown in Ref. 44 that in the sense of low energy field theory, the phase boundary between the LL and FM is essentially a phase transition between planar and axial spin-1/2 XXZ chains. Hence, the low energy physics of this phase boundary is described by the SU(2) 1 Wess-Zumino-Witten (WZW) model.
In this paper, we will focus on the Luttinger liquid phase in Fig. 2.
C. Nonsymmorphic abelian bosonization formulas
In this subsection, we briefly review the nonsymmorphic bosonization formulas in the Luttinger liquid phase in Fig. 2, which are proposed in Ref. 44 based on a symmetry analysis.
The system in the four-sublattice rotated frame is invariant under the symmetry operations listed in Eq. (4), including time reversal T; the composite operation R(ŷ, π)I, where I is the spatial inversion with inversion center located at the middle of the bond connecting sites 2 and 3; and the screw operation R(ẑ, −π/2)T_a, where T_{na} is the spatial translation by n sites and R(n̂, θ) represents a global spin rotation around the n̂-axis by an angle θ. It has been proved in Refs. 40,44 that the symmetry group G = ⟨T, R(ŷ, π)I, R(ẑ, −π/2)T_a⟩ is nonsymmorphic and satisfies G/⟨T_{4a}⟩ ≅ D_{4d}, in which ⟨...⟩ represents the group generated by the elements within the brackets.

In the Luttinger liquid phase, the low energy theory is described by the Luttinger liquid Hamiltonian, in which v is the velocity, κ is the Luttinger parameter, and the fields θ, ϕ satisfy [ϕ(x), θ(x′)] = (i/2) sgn(x′ − x).
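For reference, one common convention for the Luttinger liquid Hamiltonian is reproduced below; the overall normalization and the assignment of κ versus 1/κ to the two dual fields are convention dependent and may differ from the original equation:

H_{LL} = \frac{v}{2} \int dx \, \Big[ \kappa \, (\partial_x \theta)^2 + \frac{1}{\kappa} \, (\partial_x \varphi)^2 \Big],

which, together with the commutator quoted above, makes ∂_x θ the field canonically conjugate to ϕ.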
For later convenience, it is useful to define the fields J^α and N^α, with J^± = J^x ± iJ^y and N^± = N^x ± iN^y. Since ∫dx J^z(x) is the generator of the global spin rotation around the z-axis, J^α and N^α transform under R(ẑ, β) as A^± → A^± e^{±iβ} and A^z → A^z, where A = J, N. Clearly, the low energy field theory has an emergent U(1) symmetry corresponding to rotations around the z-axis, even though the microscopic Hamiltonian only has a discrete nonsymmorphic symmetry group.
On the other hand, the discrete and nonsymmorphic nature of the symmetry group still has significant effects on the low energy properties of the system. We note that when the microscopic Hamiltonian is U(1) invariant (for example, the planar XXZ model), the bosonization formulas of the spin operators are given by S^α_j = λ J^α + µ(−1)^j N^α, in which λ, µ are constants. However, these relations cease to apply in the Kitaev-Heisenberg-Gamma chain. In Ref. 44, the nonsymmorphic bosonization formulas of Eq. (7) are proposed, in which n is the index of the unit cell, j (1 ≤ j ≤ 4) represents the site within the four-site unit cell, x = j + 4n is the spatial coordinate in the continuum limit, and α, β = x, y, z. Two comments are in order. First, Eq. (7) was obtained in Ref. 44 from the covariance of the two sides under symmetry transformations. Notice that for nonsymmetry transformations, the two sides in Eq. (7) are not covariant, since the transformed J^β and N^β operators are driven out of the low energy subspace of the Hilbert space in such situations. Second, Eq. (7) equally applies to nonabelian bosonization formulas in the nonsymmorphic case, except that the J^β and N^β operators should be replaced by the WZW current operators and primary fields, respectively. As shown in Fig. 2, the line separating the LL and FM phases has an emergent SU(2)_1 conformal symmetry at low energies (see Ref. 44 for details). Therefore, a nonabelian bosonization version of Eq. (7) should be used along this phase transition line. Defining 3 × 3 matrices D_j and C_j whose matrix elements at position (α, β) are D^{αβ}_j and C^{αβ}_j, the coefficients in Eq. (7) can be compactly expressed in terms of these matrices for j = 2, 3, 4. It can be seen that there are ten non-universal coefficients in Eq. (7), which spoil the emergent U(1) symmetry and only respect the exact nonsymmorphic symmetries of the system. Explicit expressions of the nonsymmorphic bosonization formulas are included in Appendix B.
On the other hand, although the symmetry analysis is able to determine the form of the bosonization formulas, it has no predictive power over the order of magnitude or the signs of the ten bosonization coefficients a_Λ, b_Λ, c_Λ, h_Λ, i_Λ (Λ = C, D). In addition, the symmetry analysis gives no explanation for the origin of the bosonization coefficients, i.e., there is no information on how they arise microscopically. In view of these issues, it is the purpose of this work to derive the ten bosonization coefficients using an RG approach.
III. SETUP FOR RG FLOWS
In this section, we set up the method for deriving the RG flow equations which can provide explanations for the microscopic origin of the nonsymmorphic bosonization coefficients.
The low energy physics of the 1D spin-1/2 repulsive Hubbard model at half filling is known to be described by the SU(2)_1 Wess-Zumino-Witten (WZW) model, which is the same as the low energy theory of the spin-1/2 AFM Heisenberg model (for details, see Ref. 49 and Appendix C). Hence, the weak coupling repulsive Hubbard model can be used to mimic the low energy physics of the Kitaev-Heisenberg-Gamma model at the hidden AFM point (i.e., K + 2J = 0, Γ = 0) in the four-sublattice rotated frame. Then K + 2J and Γ can be treated as perturbations to the repulsive Hubbard model.
Here we make some comments on the reasons why a fermion model has to be introduced for an RG treatment, and the limitations of the method. We first emphasize that the bosonization coefficients arise from the microscopic lattice structures. Hence a perturbation in the low energy sector cannot capture these bosonization coefficients, and the physics at the "Planck scale" of the lattice has to be involved. It seems that there is still hope since the spin-1/2 Heisenberg model is an integrable system solvable by the Bethe ansatz method, which is applicable to any energy scale. However, a perturbation on the Heisenberg model is analytically intractable since Bethe ansatz is a very intricate method, not suitable for perturbative calculations.
On the other hand, it is standard to perform perturbative calculations based on free fermion models. Therefore, in the weak coupling limit, i.e., when the Hubbard interaction, the combination K + 2J, and the Gamma interaction are all small, an RG analysis can be applied in the vicinity of the free fermion fixed point. Notice that this directly implies the limitation of the method: our RG analysis is only qualitative, since the model is changed from a pure spin model to a fermion model. However, this RG analysis is able to provide explanations for the origin of the bosonization coefficients, justifying the proposed nonsymmorphic bosonization formulas in Eq. (7). It is also able to give predictions on the signs and order of magnitude of the bosonization coefficients.
We start from a fermion model H_F in the four-sublattice rotated frame, composed of a free fermion term H_0, a Hubbard term H_U, and an additional term H_4. At half-filling and for a repulsive U, H_0 + H_U reproduces the SU(2)_1 WZW model in the low energy limit (see Ref. 49 and Appendix C). Then, by adding H_4, the low energy physics of the Kitaev-Heisenberg-Gamma model is recovered. The partition function for H_F follows in the standard way. The goal is to compute the spin correlation functions G_{iα,jβ}(τ), in which τ is the imaginary time, i, j ∈ {1, 2, 3, 4}, and a, b = ↑, ↓ label the fermion spin. The outgoing arrow, ingoing arrow, and wavy lines represent the fermion creation operator, the fermion annihilation operator, and the external magnetic field, respectively.
We note that upon integrating over the fast modes in a momentum shell, H_int renormalizes the spin operators. In fact, by separating the fast and slow modes, we obtain Eq. (18), in which H_{int,>,<} represents the mixing term between the fast and slow modes, ⟨...⟩_> denotes the average over the fast modes, and only first order renormalization is taken into account for the spin operators. Eq. (18) leads to a set of coupled Callan-Symanzik equations 50 , which can be solved to determine the behaviors of the correlation functions. The interactions and the spin operators are represented by the diagrams in Fig. 3 and Fig. 4, respectively. There are two diagrams which contribute to the contractions between H_{int,>,<} and the spin operators, as shown in Fig. 5 and Fig. 6. It is clear that Fig. 5 introduces a renormalization of the spin operators, whereas, on the other hand, Fig. 6 produces new terms along the RG flow, which are of the form c^†_i σ^λ c_j with i = j ± 1. Although c^†_i σ^λ c_j is not of the form of an on-site spin operator, it becomes indistinguishable from a spin operator in the low energy limit, when the difference between adjacent sites is smeared out. Later, in Sec. IV B, we will see that the value of the diagram in Fig. 6 vanishes. However, we still include it here for conceptual reasons and, in addition, if the Hamiltonian in Eq. (1) contains beyond-next-nearest-neighbor terms (which is always the case in real materials), the diagram in Fig. 6 does contribute. We note that the set of coupled Callan-Symanzik equations for the correlation functions to one-loop level can be obtained from the two diagrams in Fig. 5 and Fig. 6. Here we take an alternative route for later convenience. Instead of considering the Callan-Symanzik equations, we introduce a set of magnetic fields coupled to the spin operators into the action, in which n is the index of the unit cell, i and j are site indices within a unit cell, and the h^α_{ij}(τ, n) terms are inserted since they can be generated along the RG flow as a result of the diagram in Fig. 6. The spin correlation functions can be obtained from functional derivatives of the free energy F = −ln Z with respect to these fields. We will determine the RG flow equations for the scaling fields h^α_j(τ, n) and h^α_{ij}(τ, n).

In Eq. (12), the free fermion Hamiltonian H_0 is gapless at ±k_F = ±π/(2a), giving rise to a left mover c_{La} and a right mover c_{Ra} (a = ↑, ↓) at low energies, where c_{La} and c_{Ra} are the fermion annihilation operators for the left and right movers, containing Fourier components with wavevectors close to −k_F and k_F, respectively. Then, in the low energy limit, the wavevectors in the spin operators are either close to zero or to π, corresponding to intra-mover and inter-mover contributions. Keeping only the low energy modes, the spin operator S̃^α_r(τ) at smeared position r and time τ can be written as the sum of a uniform component S^α_u(τ, r) and a staggered component S^α_s(τ, r), where c_λ = (c_{λ↑}, c_{λ↓})^T (λ = L, R), and both S^α_u and S^α_s are smooth functions of r (i.e., with no Fourier components with a wavevector far from zero). Here we note that since h^α_l(τ, n) is defined every four sites, the zero- and π-wavevector components cannot be distinguished in h^α_l(τ, n) or h^α_{ij}(τ, n), since both components are smooth in n.
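In standard notation, the uniform/staggered decomposition described above takes the following form (a sketch of the conventional expressions; the normalization prefactors may differ from the original equations):

\tilde{S}^{\alpha}_{r}(\tau) \simeq S^{\alpha}_{u}(\tau, r) + (-1)^{r/a}\, S^{\alpha}_{s}(\tau, r), \qquad
S^{\alpha}_{u} = \tfrac{1}{2} \big( c_L^{\dagger} \sigma^{\alpha} c_L + c_R^{\dagger} \sigma^{\alpha} c_R \big), \qquad
S^{\alpha}_{s} = \tfrac{1}{2} \big( c_L^{\dagger} \sigma^{\alpha} c_R + c_R^{\dagger} \sigma^{\alpha} c_L \big),

so that S^α_u carries wavevectors near zero and S^α_s carries wavevectors near π.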
Finally we make a comment on the energy scales in the problem. There are five characteristic energy scales Λ_0, Λ_s, Λ_L, m_c, and E, where Λ_0 ∼ 1/a is the UV cutoff of the lattice structure, Λ_s ∼ 1/(4a) is the energy scale at which the four sites within a unit cell are smeared and can no longer be clearly distinguished, Λ_L is the energy scale at which a linearization of the free fermion spectrum around ±k_F can be performed, m_c ∼ e^{−const·t/U} is the charge gap due to the repulsive Hubbard term, and E is the energy scale of the correlation functions in which we are eventually interested. The hierarchy of these energy scales follows from the definitions above. We note that below Λ_L, the fermion has an emergent Lorentz symmetry, and is fractionalized into a U(1) charge boson and an SU(2)_1 spin boson 49. When the energy is further lowered below m_c, the charge boson is gapped, and we are left with only the spin degrees of freedom J^α and N^α. We also note that since the microscopic lattice structure is lost at Λ_s, our RG analysis stops at an energy scale ∼ Λ_s.
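For orientation, the ordering implied by these definitions (our reconstruction; the explicit inequality chain is not reproduced verbatim from the source) can be written in LaTeX as
\Lambda_0 > \Lambda_s > \Lambda_L \gg m_c > E ,
since Λ_s ∼ Λ_0/4, the linearization scale Λ_L lies below Λ_s (as also stated later in the text), the charge gap m_c is exponentially small in t/U, and E is the lowest scale probed by the correlation functions.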
IV. RG FLOW EQUATIONS
In this section, we derive the RG flow equations for the scaling fields h^α_l(τ, n) and h^α_ij(τ, n) from the diagrams in Figs. 5 and 6. Let us first consider the diagram in Fig. 5. It is nonvanishing when ν = α, and renormalizes h^µ_i. Suppose we lower the cutoff from Λ_0/b to Λ_0/(b + ∆b). The perturbation process in Fig. 5 gives rise to a term in the action leading to a renormalization of h^µ_i by h^α_l, where ∆ ln b = ∆b/b. Define the free fermion Green's function G(k), in which k = (iω, k), where ω is the Matsubara frequency and k is the wavevector in space (we define the spatial wavevector as a vector to distinguish it from the spacetime combined index k, even though the system is 1D and the wavevector is in essence a scalar), and ε(k) is the free fermion dispersion, which includes the chemical potential term. The coefficient λ_jl can then be derived, where a is the lattice spacing and x̂ is the unit vector in the spatial direction. We note that because of the translation and inversion symmetries of the free fermion theory, λ_jl satisfies a set of symmetry relations, with l ∈ Z. We briefly describe the derivation of λ_jl; detailed derivations are included in Appendix D 1. The Fourier transforms of the fermion operator and the scaling field are defined in Eqs. (29) and (30), in which N is the system size, β is the inverse of the temperature, j in Eq. (29) is summed over all sites in the chain, and n in Eq. (30) is summed over the unit cells.
Integrating over the fast modes in the momentum shell and using momentum conservations in the free fermion model, the expression of the diagram in Fig. 5 is given by in which q in h α l (−q ) satisfies | q | ∼ 0, since h α l (n) is a smooth function of n. In Eq. (31), the factor e −i q ·(j−l)ax can be set as 1 since it is a slowly varying variable. Comparing with the following Fourier representation of the magnetic field term in the action it can be seen that Eq. (31) is of the form in Eq. (25) which renormalizes h µ i , in which λ jl is given by Eq. (27). Eq. (27) is the desired expression for the coefficients λ jl 's in the RG flow equations. An analytic expression of λ jl is difficult, so we will turn to numerical calculations. The numerical value of λ jl relies on the underlying free fermion band structure (k), but the essential physics does not depend on the details of the band structure.
Hence, the precise form of the band structure is not essential in our RG treatment. The free fermion term H_0 in Eq. (12) has a −t cos(ka)-type dispersion. For simplicity, we modify the dispersion to the linear form in Eq. (33), where v = t/Λ_0 and Λ_0 = π/(2a). The spectrum in Eq. (33) is shown in Fig. 7(a). The dispersion is essentially that of a Dirac fermion, as shown in Fig. 7(b), in which the positions of the two gapless Fermi points are combined.
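A plausible reconstruction of the linearized dispersion in Eq. (33) (our reading of the surrounding text, not a verbatim reproduction) is, in LaTeX,
\epsilon(\vec{k}) = v\,\left(|\vec{k}| - \Lambda_0\right), \qquad v = t/\Lambda_0, \quad \Lambda_0 = \pi/(2a),
which vanishes at the two Fermi points |k| = Λ_0 = π/(2a), reproduces the band edges ∓t at k = 0 and |k| = π/a, and is consistent with the parametrization |k| − Λ_0 ∈ [−Λ_0, Λ_0] used below.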
Next we evaluate the value of λ jl along RG flow. Although the Dirac fermion has a cutoff Λ 0 in momentum space as shown in Fig. 7 (b), the value of the Matsubara frequency at zero temperature is continuous and can extend to infinity. Therefore, RG starts with an initial cutoff Λ i ∼ ∞ in the frequency-momentum space, and stops at Λ s ∼ Λ 0 /4 as explained before. The values of λ jl in general depend on the cutoff Λ = Λ 0 /b where Λ 0 = π/(2a). We note that b can be smaller than 1 since the value of the Matsubara frequency can take large values.
According to Fig. 7 (b), the modes satisfying b are integrated over. The frequency and wavevector in the momentum shell can be parametrized as where for each value of | k| − Λ 0 ∈ [−Λ 0 , Λ 0 ], we have both the left mover and the right mover. We note that when b < 1, θ cannot take all values in [0, 2π], since −Λ 0 ≤ | k| − Λ 0 ≤ Λ 0 . On the other hand, when b > 1, k cannot take all values in [−π/a, π/a], since some of the k's have been integrated over.
Then λ jl (b) as a function of b can be obtained from Eq. (27) as in which ν = 1 and −1 corresponds to the right and left movers in Fig. 7 (b), respectively; f (ν, m, θ, b) is defined as imposing the condition that the magnitude of the spatial wavevector cannot exceed the cutoff; and¯ is defined as where −2 ≤ mod(x, 4) ≤ 2.
Next we write down the flow equations for h α l (b), in which b is the flow parameter, defined as Λ(b) = Λ 0 /b where Λ(b) is the cutoff at the stage of the flow in consideration. Since we are only interested in the U (1) breaking effects, we neglect the renormalizations of the scaling fields due to the Hubbard term. Although the Hubbard term also renormalizes the scaling fields, such renormalizations are SU(2) symmetric, which does not affect the conclusions on U(1) breaking effects in the bosonization coefficients on a qualitative level. We will also neglect the flows of the coupling constants K +2J and Γ. The reason is as follows. As will be discussed in Sec. V C, the contributions to the bosonization coefficients from the b ∼ 0 region (i.e., the Λ(b) ∼ ∞ region) are negligible. Hence it is enough to consider the RG flows within the range [b i , b s ] where b i ∼ O(1) and b s ∼ 4. Since K + 2J and Γ have scaling dimensions equal to zero and thereby are marginal operators, their flows can be safely neglected between the scales b i and b s . On the other hand, in the high energy region b ∼ 0 (i.e., Λ(b) ∼ ∞), there is no singularity in the perturbations, and as a result, K(b i ) + 2J(b i ) and Γ(b i ) are analytic functions of the bare couplings K + 2J and J. Hence, in the weak coupling limit, it is enough to keep the leading order terms in K(b i ) + 2J(b i ) and Γ(b i ), which are exactly given by K + 2J and J. To summarize, according to the above arguments, K(b) + 2J(b) and Γ(b) can be just taken as K + 2J and J throughout the RG process in consideration.
The flow equation of h µ l (1 ≤ l ≤ 4, µ = x, y, z) up to one-loop level derived from the diagram in Fig. 5 is given by in which the conventions are: γ = x, y,x,ȳ; α = β = γ; the spin direction indexx (andȳ) is identified with x (and y) in the Kronecker delta and the scaling fields; <ij> = γ; i < j; 1 ≤ i, j ≤ 4; 5 is identified with 1. The first term in Eq. (38) arises from the tree level scaling of the field h µ l (the dimension of the scaling field h µ 2 is the dimension of the fermion operator at the free fermion fixed point), whereas the second term is the one-loop correction. Explicit expressions of the flow equations are included in Appendix E.
We note that Eq. (38) is invariant under the nonsymmorphic symmetry operations of the system, as proved in Appendix F. Next we consider the diagram in Fig. 6. This diagram gives rise to the term in Eq. (40).
Details of the derivation of Eq. (40) are included in Appendix D 2.
We demonstrate that the integration in Eq. (40) vanishes when j = i ± 1, which applies to our case. Notice that for j = i ± 1, Then performing change of variables (ω → −ω, k → k + π ax ) (the change of variable for ω is legitimate since −ω also lies in the momentum shell) and using ( k + π ax ) = − ( k ), it can be seen that the integration in Eq. (40) changes sign, hence λ ilj = −λ ilj = 0. We note that when the Hamiltonian contains beyond nearest neighbor terms (e.g., |j − i| = 2), the integration in Eq. (40) no longer vanishes, and the diagram in Fig. 6 will contribute.
Because of the vanishing of λ_ilj, the RG flow equations of h^µ_ij (where j = i ± 1) contain only the tree-level scaling term. Notice that initially h^(0)µ_ij = 0 at the beginning of the RG flow; hence the solution of Eq. (42) is simply h^µ_ij(b) = 0, as recorded in Eq. (43).
C. Solving the RG flow equations
The RG flow equations for h^µ_ij have already been solved in Eq. (43). To obtain h^α_j(b), the coupled RG flow equations in Eq. (38) need to be solved, which is a difficult problem. Here we make the assumption that both K + 2J and Γ are very small and keep only up to their first order terms. With this approximation, all the terms on the right hand side of the flow equations proportional to K + 2J or Γ can be replaced with b h^(0), where h^(0) is the initial value (i.e., the bare field) at the beginning of the RG flow b_0. Here we note that b_0 in principle should be taken as b_0 = 0, since the Matsubara frequency can take infinite values.
Within the first order approximation, we obtain the following typical flow equation, where x = ln b, and λ is on order of K + 2J or Γ. Let h = ye x , Eq. (44) can be rewritten as which can be easily solved as Hence the solution of Eq. (44) is Using Eq. (47), Eq. (38) can be solved as in which γ = x, y,x,ȳ; α = β = γ; the spin direction indexx (andȳ) is identified with x (and y) in the Kronecker delta and the scaling field; <ij> = γ; i < j; 1 ≤ i, j ≤ 4; 5 is identified with 1; and h (0)µ lm = 0 is used.
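The intermediate algebra leading to Eqs. (44)–(47) is not reproduced above; a minimal reconstruction, assuming the first-order flow equation has the form described in the text (tree-level scaling plus a source term b h^(0) multiplied by a coefficient λ of order K + 2J or Γ), is as follows in LaTeX:
\frac{dh}{dx} = h + \lambda(x)\, e^{x}\, h^{(0)}, \qquad x = \ln b .
Substituting h = y\,e^{x} gives \frac{dy}{dx} = \lambda(x)\, h^{(0)}, so y(x) = y(x_0) + h^{(0)} \int_{x_0}^{x} dx'\, \lambda(x'), and hence, with the tree-level initial condition h(b_0) = b_0 h^{(0)} (i.e., y(x_0) = h^{(0)}),
h(b) = b\, h^{(0)} \left[\, 1 + \int_{\ln b_0}^{\ln b} d\ln b'\, \lambda(b') \,\right].
This has the same structure as the first-order solutions quoted in Eq. (48) and later in Eqs. (60) and (61).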
V. BOSONIZATION COEFFICIENTS FROM RG FLOW EQUATIONS
In this section, we derive the nonsymmorphic bosonization coefficients from the solutions of the RG flow equations. Since h µ i,i±1 's are all zero as shown in Eq. (43), we will focus on the terms involving h µ i .
A. The uniform and staggered scaling fields at low energies Below the energy scale Λ s , the differences among the sites within a unit cell are smeared out and our RG analysis stops. At this stage, the coupling to the scaling fields is in which b > Λ 0 /Λ s . In addition, the K + 2J term becomes indistinguishable from the U(1) symmetric inter- , and the Γ interaction cancels due to the (γ) factor in Eq. (3). Therefore, below the scale Λ s , although the RG flow continues to renormalize the scaling fields, such renormalizations respect the U(1) symmetry and there is no further U(1) breaking effect. In view of this, for the purpose of a qualitative understanding of the U(1) breaking effects in the bosonization coefficients, we will not discuss the flow equations below Λ s , bearing in mind that they only give rise to some overall U(1) preserving factors.
When the cutoff is further lowered below Λ L , the only spin degrees of freedom are S α u and S α s defined in Eq. (22), since the wavevectors far from 0 and π have all been integrated out. We make a comment on the low energy field theory at the scale Λ L . As explained in Eq. (24), below Λ L , the fermion model can be approximated as a 1+1-dimensional Dirac fermion, and the spin-charge separation is applicable. The low energy field theory contains a spin part and a charge part. The charge Hamiltonian has a cos( √ 8πφ) term due to the repulsive Hubbard interaction where φ is the charge boson, which eventually opens a charge gap at the energy scale m c (where the mass acquires the same order of magnitude as the cutoff). The spin Hamiltonian is of the XXZ type, since the smeared K + 2J interaction lowers the symmetry of the low energy Hamiltonian from SU(2) to U(1). Clearly, the low energy theory has an emergent U(1) symmetry below the energy scale Λ L . It is worth to mention that although U(1) breaking renormalizations have already stopped at the scale Λ s , it is not legitimate to talk about a low energy theory at Λ s , since Λ s ∼ Λ 0 /4 is still in the high energy region.
To express Eq. (49) in terms of S α u and S α s when the energy scale is below Λ L , we should first project Eq. (49) to left and right movers of the fermions, and then rewrite the expression using S α u and S α s . Clearly, the projection of S α j+4n is given by Plugging Eqs. (23,50) into Eq. (49), we arrive at in which the uniform and staggered scaling fields h α u , h α s are given by in which D αν l and C αν l are some numerical factors. Notice that the low energy fields J α and N α live at an energy scale below m c where the charge sector has been gapped out, leaving only the spin degrees of freedom. When the energy scale is further lowered from Λ L to below m c , S α u and S α s become just J α and N α , respectively, since S α u (S α s ) and J α (N α ) both correspond to the zero-(π-) wavevector component of the low energy spin operator S α r (τ ) defined in Eq. (22). As mentioned earlier, the RG flow between Λ L and m c respects the emergent U(1) symmetry. Hence, up to some additional U(1) symmetric renormalization factors, the coupling to scaling fields in Eq. (51) becomes below the scale m c . Recall that performing functional derivatives ∂/∂h α j , ∂/∂h α u , and ∂/∂h α s on the free energy can give the correlation functions involving S α j , J α , and N α , respectively. Using Next we establish the precise relations between the bosonization coefficients and the solutions of the RG flow equations. Notice that in Eq. (48) which give the bosonization coefficients via Eq. (57). From Eq. (59), the explicit expressions of the ten bosonization coefficients up to first orders in K + 2J and Γ can be derived as and in which λ ij 's are functions of b as determined by Eq. (35). It can be seen from Eqs. (60,61) that up to first order in K + 2J and Γ, the coefficients b D and b C vanish. In fact, they start to appear at second order. Take b D as an example. It can be observed from the flow equations that h y j contributes to the flow of h z j , and h z j contributes to the flow of h x 1 . As a result, h x 1 is affected by h y j , eventually leading to a nonzero b D . However, this is clearly a second order effect. Also notice that in Eqs. (60,61), there are the relations c D = h D , c C = h C . However, these equalities are not expected to hold when higher order terms are included.
We make some comments on the effects of the RG flow below the energy scale Λ L . The scaling fields h α η (b) (η = u, s and α = x, y, z) are related to h α in which the matrix M (η) is a function of b (for fixed b L ) and has U(1) symmetry since the U(1) breaking renormalization along the RG flow has already stopped at the scale Λ s (which is greater than Λ L ). Using the chain rule of partial derivatives we see that the matrices When b satisfies Λ 0 /b < m c , Eq. (63) produces the bosonization formulas, and D l (b), C l (b) become the matrices of bosonization coefficients in Eq. (7). It is clear from Eq. (64) that all the bosonization coefficients D αβ l , C αβ l are affected by the RG flow below Λ L , though in a U(1) invariant manner. For example, in the special SU(2) case (i.e., the matrices M (η) have SU(2) symmetry, not just U(1) symmetry, which applies to the SU(2) 1 line in Fig. 2 Eqs. (60,61), it can be observed that a D , a C ∼ O(1) and c D , h D , c C , h C ∼ O(Γ), whereas b D , b C are second order in K + 2J and Γ. Therefore, in the weak coupling limit (i.e., |(K + 2J)/J|, |Γ/J| 1), we have even though the value of K + 2J is already large (equal to 1). Next we define λ D and λ C as Using λ 41 = λ 45 , as well as the inversion and translation symmetries, we obtain λ 21 = λ 41 , which demonstrates that up to one-loop level there is the relation Hence it enough to consider λ C (b). Fig. 8 shows λ C (b) as a function of ln b obtained by numerically calculating the integral in Eq. (35), and it can be seen that λ C (b) is always negative. We note that the integral d ln b · λ C (b) converges when b is integrated from 0 to b s ∼ 4. When b 1, the integration in Eq. (35) is restricted within a narrow range θ ∼ b due to the factor f (ν, m, θ, b). Let x = ln b, and split xs −∞ dxλ C (x) as y −∞ dxλ C (x) + xs y dxλ C (x) where x s ∼ ln 4 and y 1. Since y −∞ dxλ C (x) goes like y −∞ dxe x which converges, we see that xs −∞ dxλ C (x) is a converging integral. As a result, we conclude from Eq. (60) and Eq. (61) that RG predicts
D. Comparison with numerics
Next we check if the predictions in Eq. (68) are consistent with the numerical results. The method for numerically determining the signs of the bosonization coefficients has been discussed in detail in Supplementary Materials in Ref. 44. In this subsection, we follow the method in Ref. 44. We will focus on the "C" coefficients, since they correspond to N α (α = x, y, z) which are relevant operators and open a spin gap at low energies. Appendix G discusses the numerical determinations of the "D" coefficients, which are not successful, and the reasons remain not clear.
Throughout this subsection, we work in the four-sublattice rotated frame and take the parameters as K + 2J = 1, J = −1, Γ = 0.35 (the same values quoted in Appendix G). Applying a small staggered magnetic field h^z_π along the z-direction, the low energy Hamiltonian can be derived as −h^z_π i_C ∫ dx N^z. Since N^z is a relevant operator, a spin gap opens and a nonzero expectation value ⟨N^z⟩ develops in the low energy theory. Using the nonsymmorphic bosonization formulas, the spin expectation values follow the pattern in Eq. (69). Taking h^z_π = 10^{−3}, DMRG simulations are able to verify this pattern, with the extracted values leading to the ratio in Eq. (71). Notice that i_C is the dominant coefficient, and we expect that it does not change sign compared with the U(1) symmetric case of the microscopic Hamiltonian (i.e., when K + 2J = 0 and Γ = 0). Therefore, h_C < 0 as determined from Eq. (71), which is consistent with the prediction in Eq. (68). In addition, Table I gives a ratio |h_C/i_C| equal to 0.0380, where the values are obtained from studying spin correlation functions 44. It can be seen that the two approaches (magnetic field response vs. correlation functions) are fully consistent with each other.
Next, applying a small staggered magnetic field h^x_π along the x-direction, the low energy Hamiltonian can be derived as −h^x_π (a_C ∫ dx N^x − b_D ∫ dx J^y). Since the scaling dimension of N^x is smaller than that of J^y, we expect that a nonzero expectation value ⟨N^x⟩ develops in the low energy theory. The spin expectation values can then be determined from the nonsymmorphic bosonization formulas, as given in Eq. (72). Since a_C is the dominant coefficient, again a_C is expected to be positive. The patterns in Eq. (72) are verified by DMRG numerics, with the extracted values giving the ratios in Eq. (74). Hence the sign of c_C is consistent with the prediction in Eq. (68). However, Table I gives a ratio |c_C/a_C| = 0.189, which is not consistent with the result in Eq. (74). The reason for this discrepancy is unclear; one possibility may be the neglect of the J^y term in the analysis.
VI. SUMMARY
In summary, we have performed an RG study on the origin of the U(1) breaking terms in the bosonization formulas in the Luttinger liquid phase of the one-dimensional spin-1/2 Kitaev-Heisenberg-Gamma model with an antiferromagnetic Kitaev interaction. The RG analysis provides explanations for the origin of the ten non-universal bosonization coefficients in the abelian bosonization formulas of the spin operators. It can also give predictions on the signs and orders of magnitude of these bosonization coefficients. Our work is helpful for understanding the rich physics related to nonsymmorphic symmetries in the gapless Luttinger liquid phases of one-dimensional Kitaev spin models.
Appendix C: Nonabelian bosonization of 1D repulsive Hubbard model at half filling Here we give a quick review of the nonabelian bosonization method (for details, see Ref. 49). The 1D spin-1/2 Dirac fermion exhibits the phenomenon of spin-charge separation and can be decomposed into an SU(2) 1 spin boson g and a U (1) charge boson φ, where the actions in real time for the SU(2) matrix g and the real scalar φ are given by in whichg is an extension of g from two-dimensional spacetime to three-dimension, and the velocities in S g and S φ have been absorbed into a redefinition of time. We note that because of the topological nature of the second term (i.e., WZW term) in S g , the partition function does not depend on the way of extension. In terms of g and φ, the hopping term between the left and right movers can be bosonized as follows where const. is a real constant. When a repulsive Hubbard interaction U > 0 is introduced, S g and S φ are changed into in which λ φ > 0 is a constant, and the WZW current operators J L and J R are defined as where ∂ ± = ∂ t ± ∂ x . It can be shown that the spin sector remains gapless since J L · J R is marginally irrelevant. However, a gap opens in the charge sector since the cos( √ 8πφ) term is relevant at low energies. The scaling of the charge gap can be solved as m c ∼ e −U/(πt) .
The above analysis shows that in the weak-U limit, the low energy physics of the 1D repulsive Hubbard model is described by the SU(2) 1 WZW theory. On the other hand, we know that according to the standard second order perturbation, the large-U limit reduces to the SU(2) AFM Heisenberg model. Since there is no phase transition between the weak-U and large-U limits, the low energy physics of the SU(2) AFM Heisenberg model is also described by the SU(2) 1 WZW theory. This provides a nonabelian bosonization description for the low energy physics of the AFM Heisenberg model in 1D.
Appendix D: Evaluation of Feynman diagrams 1. Evaluation of diagram in Fig. 5 We need to express the interactions and the spin operators in the frequency and momentum space. The interaction term is dτ n S α i+4n (τ )S β j+4n (τ ) = 1 N β dτ n k1,k2,k3,k4 where ω is Matsubara frequency and k is the wavevector in space (we define the spatial wavevector as a vector to distinguish it from the spacetime combined index k, even though the system is 1D and k is in essence a scalar), a is the lattice spacing, N is the system size, β is the inverse of the temperature, n is summed over the unit cells, andx is the unit vector in the spatial direction. Using the identity Eq. (D1) can be written as i.e., where By defining the Fourier transform h α l (q) as the coupling to the magnetic field becomes Notice that q ∈ [0, π 2a ) in h α l (q) since h α l (n) is defined every four sites. We set the momentum transfer q in h α l (q ) as | q | ∼ 0 (both S α u and S α s correspond to | q | ∼ 0, which is the reason why they are not separated above the energy scale Λ s ). The expression corresponding to the diagram in Fig. 5 is given by Since the free fermion propagator is diagonal in the frequency-momentum space, there are the following constraints (m = 1, 2, 3, 4) Plugging Eq. (D9) into Eq. (D8) and rearranging the terms, we obtain the following alternative expression for Eq. (D8), In Eq. (D10), the factor e −i q ·(j−l)ax can be set as 1 since q is a slowly varying variable. In fact, if we expand the exponential e −i q ·(j−l)ax , | q | n becomes gradients in the real space, which renders the n = 0 terms less relevant than the leading n = 0 term in the RG sense. This justifies in a more rigorous way why e −i q ·(j−l)ax can be taken as 1.
Then by using Eq. (D7), it can be checked that Eq. (D10) reduces to the form quoted in the main text, in which the coefficient λ_jl involves the free fermion Green's function G(k) defined in Eq. (D13). In Eq. (D13), ε(k) is the free fermion dispersion, which includes the chemical potential term.
2. Evaluation of diagram in Fig. 6
The expression corresponding to Fig. 6 is given in Eq. (D14).
Momentum conservation requires Then it can be shown that Eq. (D14) becomes in which the factor of two before the Green's function comes from the sum over the spin degree of freedom, and k is restricted within the momentum shell, i.e., the fast modes. Notice that Plugging Eq. (D17) into Eq. (D16) and neglecting the e i q ·(−j+l)ax factor in Eq. (D16) since q is a very small wavevector, we obtain where the coefficient λ ilj is Notice that shifting l by a multiple of 4 does not affect the result in Eq. (D19), hence we can impose the condition l ≥ min{i, j}. Apparently, Eq. (D19) is invariant under spatial translation (i, j, l → i + t, j + t, l + t) and inversion (i, j, l → −i, −l, −j), as it must be.
On the other hand, using R z (S x , S y , S z ) → (−S y , S x , S z ), it can be seen that the invariance of the RG flow equations under the symmetry operation R z T a exactly requires Eq. (F5). Hence we conclude that the flow equations have the symmetry imposed by R z T a .
On the other hand, using R_y: (S^x, S^y, S^z) → (−S^x, S^y, −S^z), it can be seen that the invariance of the RG flow equations under the symmetry operation R_y I exactly requires Eq. (F9). Hence we conclude that the flow equations have the symmetry imposed by R_y I.
Appendix G: Numerical determination for the signs of the "D" coefficients
In this appendix, we study the signs of the five "D" coefficients. As in Sec. V D, we work in the four-sublattice rotated frame and take the parameters as K + 2J = 1, J = −1, Γ = 0.35. DMRG numerical simulations are performed on a system of L = 144 sites with periodic boundary conditions. The bond dimension and truncation error in the DMRG simulations are taken as m = 1400 and 10^{−9}, respectively.
The low energy Hamiltonian can be derived as −h z 0 i D dxJ z . Using the nonsymmorphic bosonization formulas, the spin expectation values are expected to be Choosing h z 0 = 10 −3 , DMRG numerical simulations give | 2022-04-13T01:16:17.433Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "5387f9866f30e77770ee60bff6cde18291a8ed08",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5387f9866f30e77770ee60bff6cde18291a8ed08",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
267128465 | pes2o/s2orc | v3-fos-license | Comparison of Endoscopic and Artificial Intelligence Diagnoses for Predicting the Histological Healing of Ulcerative Colitis in a Real-World Clinical Setting
Abstract Background Artificial intelligence (AI)-assisted colonoscopy systems with contact microscopy capabilities have been reported previously; however, no studies regarding the clinical use of a commercially available system in patients with ulcerative colitis (UC) have been reported. In this study, the diagnostic performance of an AI-assisted ultra-magnifying colonoscopy system for histological healing was compared with that of conventional light non-magnifying endoscopic evaluation in patients with UC. Methods The data of 52 patients with UC were retrospectively analyzed. The Mayo endoscopic score (MES) was determined by 3 endoscopists. Using the AI system, healing of the same spot assessed via MES was defined as a predicted Geboes score (GS) < 3.1. The GS was then determined using pathology specimens from the same site. Results A total of 191 sites were evaluated, including 159 with a GS < 3.1. The MES diagnosis identified 130 sites as MES0. A total of 120 sites were determined to have healed based on AI. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of MES0 for the diagnosis of GS < 3.1 were 79.2%, 90.6%, 97.7%, 46.8%, and 81.2%, respectively. The AI system performed similarly to MES for the diagnosis of GS < 3.1: sensitivity, 74.2%; specificity, 93.8%; PPV, 98.3%; NPV, 42.3%; and accuracy, 77.5%. The AI system also significantly identified a GS < 3.1 in the setting of MES1 (P = .0169). Conclusions The histological diagnostic yield of the MES- and AI-assisted diagnoses was comparable. Healing decisions using AI may avoid the need for histological examinations.
Introduction
Ulcerative colitis (UC) is a refractory intestinal disorder caused by a combination of mechanisms, including immunological mechanisms. 1 The number of patients with UC in Japan is increasing. 2 Several patients with UC have recurrent or chronic persistent intestinal inflammation. 3-5 Achieving endoscopic mucosal remission is a therapeutic goal for UC that can help avoid these complications. The Selecting Therapeutic Targets in Inflammatory Bowel Disease-II initiative, proposed by the International Organization for the Study of Inflammatory Bowel Disease, recently recommended that endoscopic remission be a therapeutic goal to achieve the higher goals of improved quality of life and disappearance of disability. 6 The Mayo endoscopic score (MES) is used in the endoscopic evaluation of UC. 7,8 Endoscopic mucosal remission is often defined by an MES of 0 or 1. 9,10 However, endoscopic assessment involves a subjective component and is prone to variability. 8,11,12 Histologic healing has been reported as a more advanced therapeutic goal for UC 13-16 and has the potential to demonstrate mucosal healing more objectively than an endoscopic evaluation. However, the determination of histologic healing requires invasive biopsies.
The EndoBRAIN-UC system (Cybernet Systems) is a fully automated diagnostic system with artificial intelligence (AI) that uses endocytoscopy to identify the presence of histologic inflammation associated with UC.The AI system analyzes features such as invisibility, dilation, and hyperplasia of capillaries in the colonic mucosa via ultra-magnification endoscopic observation using narrow band imaging (NBI) 17 and enables the histological evaluation of UC via the determination of a Geboes score (GS). 18An AI-assisted diagnosis of histological healing, based on a GS < 3.1, may reduce unnecessary biopsies; however, there are no published reports regarding the clinical use of the AI system in patients with UC.This study compared the diagnostic performance for histological healing of the AI-assisted EndoBRAIN-UC system with that of conventional light non-magnifying endoscopic evaluations in patients with UC.
Materials and Methods
This retrospective study was conducted at Tokyo Women's Medical University from June to November 2021.Consecutive patients who met the diagnostic criteria for UC in Japan 1 and underwent a total colonoscopy in a laboratory equipped with an ultra-magnifying endoscope were included in this study.Therefore, patients in the non-remitting phase were included.However, patients with severe symptomatic UC were excluded due to the potential physical burden of total colonoscope observation and the extended examination time.Patients who did not undergo a biopsy at the time of AI-assisted diagnosis were also excluded from the study.Nonmagnified observations using white light were used to determine an MES diagnosis.Simultaneously, the AI system was used to diagnose the same site at which the MES diagnosis was obtained, and a biopsy was also performed.The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the MES-and AI-assisted diagnoses were calculated using the biopsy pathology results as the gold standard.The results of the AI-assisted diagnosis for each MES value (0, 1, 2, and 3) were compared with the biopsy pathology results.
MES Diagnoses
All patients underwent colonoscopy using the usual preparation of bowel-cleansing agents and an Olympus CF-H290ECI (Olympus) colonoscopy device. The colonoscopies and MES diagnoses were performed by 3 physicians with at least 10 years of endoscopic experience in patients with UC. The MES was agreed upon by the endoscopists using images of the biopsy site after colonoscopy. Discrepancies were resolved via discussion. MES0 was defined as mucosa in endoscopic remission.
AI-Assisted Diagnoses
Each AI-assisted diagnosis was performed at the same site as the MES diagnosis using conventional endoscopy. The AI system was connected directly to an endoscopic system (EVIS LUCERA ELITE, Olympus). The latest version of the AI system (EndoBRAIN-UC) was recently approved in Japan, and each AI-assisted diagnosis was performed using the ultra-magnification function of the Olympus CF-H290ECI, a scope with a 12.8-mm-diameter tip, which provides a maximum magnification of 520×.
The NBI observation mode was used for the colonic mucosa in which the inflammatory activity was evaluated. The endoscope was then set to the maximum magnification (520×), and an ultra-magnified endoscopic image was acquired. When the image was captured, the program automatically analyzed the image, and the results of the analysis were displayed on a computer screen. The predicted output of the AI system was reported as either "Healing" (predicted GS < 3.1) or "Active."
Pathological Diagnoses
Each pathological diagnosis was made using biopsies of the same mucosal surface imaged for the MES-and AI-assisted diagnoses.The pathological diagnosis was made by a single pathologist using prepared hematoxylin and eosin-stained specimens, with a final agreement with a second pathologist.The GS was used for pathological diagnoses (Supplementary Table S1).The GS subdivides grades according to morphological changes in the mucosal tissue and inflammatory cell infiltration.In this study, histological healing was defined as a GS < 3.1 with no histological erosions, ulcers, or crypt neutrophil infiltration.
Statistical Analyses
All data are expressed as median and interquartile range (IQR). The sensitivity, specificity, PPV, NPV, accuracy, and precision of the diagnostic methods were determined using Fisher's test and a 2 × 2 table. JMP statistical analysis software (version 16; SAS) was used for all analyses.
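As a rough illustration of how such 2 × 2-table metrics can be computed (this is our sketch in Python, not the authors' JMP workflow; the counts used below are not reported in the paper and are inferred only for illustration), consider:

def diagnostic_metrics(tp, fp, fn, tn):
    # tp/fn refer to sites with GS < 3.1 (gold standard); tn/fp to GS >= 3.1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical cell counts, chosen only because they reproduce the AI percentages
# reported in the Results (118/159 = 74.2%, 30/32 = 93.8%, 118/120 = 98.3%,
# 30/71 = 42.3%, 148/191 = 77.5%); the paper itself does not give the 2 x 2 cells.
print(diagnostic_metrics(tp=118, fp=2, fn=41, tn=30))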
Ethical Considerations
The study protocol was approved by the Institutional Ethics Review Committee of our hospital on January 7, 2023 (IRB number 2022-0123). Based on the retrospective nature of the study, all patients were offered the opportunity to refuse treatment. A public announcement was posted on our website on January 7, 2023, as approved by the ethics committee.
Diagnostic Yields of MES and AI for Pathology
MES had a sensitivity of 79.2%, specificity of 90.6%, PPV of 97.7%, NPV of 46.8%, and accuracy of 81.2% for the diagnosis of GS < 3.1. The AI system had a sensitivity of 74.2%, specificity of 93.8%, PPV of 98.3%, NPV of 42.3%, and accuracy of 77.5% for the diagnosis of GS < 3.1 (Tables 3 and 4).
Comparison of AI-Assisted Diagnosis and Pathology for Each MES Value
For all MES values, there were both "Healing" and "Active" decisions based on AI-assisted diagnosis. Among the MES0 lesions, the AI system diagnosed 83.7% as the Healing decision. In MES0, there was no significant difference in the percentage of GS < 3.1 between the Healing and Active decisions based on AI-assisted diagnosis. Among the MES2 lesions, the AI system diagnosed 92.9% as the Active decision. In MES2, there was also no significant difference in the percentage of GS < 3.1 regardless of the result of AI-assisted diagnosis. Among the MES1 lesions, 29.4% were classified as the Healing decision and 70.6% as the Active decision. In MES1, the Healing decision with AI-assisted diagnosis identified significantly more GS < 3.1 than did the Active decision with AI-assisted diagnosis (P = .0169) (Table 5 and Figure 1).
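The paper reports only percentages and P = .0169 for the MES1 subgroup, not the underlying counts. A hedged sketch of how such a subgroup comparison could be run with Fisher's exact test (the counts below are invented purely for illustration and do not come from the study) is:

from scipy.stats import fisher_exact

# Rows: AI decision within MES1; columns: [GS < 3.1, GS >= 3.1]. Hypothetical counts.
mes1_table = [[10, 0],   # AI "Healing"
              [15, 9]]   # AI "Active"
odds_ratio, p_value = fisher_exact(mes1_table)
print(p_value)  # a small value would indicate more GS < 3.1 among "Healing" calls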
Discussion
The EndoBRAIN-UC system became commercially available following a study by Maeda et al. 18 regarding the use of AI in the management of UC.This system detects the GS score via a histological evaluation of UC based on endoscopy that is capable of ultra-magnified observation.Maeda et al. reported different relapse rates at 1 year for lesions reported as active or healing by the AI system. 19However, the previous study was conducted in a research and development facility; no reports of the real-world clinical utility of ultra-magnified endoscopic observation of UC using commercially available AI systems have been reported.
Histological evaluations based on the capillary structure of the mucosa obtained using automated evaluation systems have been reported, 20 though no studies have demonstrated the usefulness of such systems in actual clinical practice. 21-23 This study compares the diagnostic yield of the commercially available EndoBRAIN-UC system with that of MES in a real-world clinical setting.
In this study, the diagnostic yield for a GS < 3.1 was similar between MES (using white-light observation) and the AI.The diagnostic yield of the AI-assisted diagnosis was equivalent to that in the report by Maeda et al. 18 with the exception of the NPV, confirming the high reproducibility of the AI-assisted diagnosis in clinical settings.
The NPV in the current study was lower than previously reported NPVs as this investigation was conducted in a realworld setting and fewer MES ≥ 2 specimens were endoscopically classified as inflammatory.Additionally, the results of the AI system used in this study do not fully reflect the pathological results as the GS is based on various factors including inflammatory cell infiltration and crypt destruction. 13mong the present 52 cases, 83.7% of the MES0 cases were judged to be Healing by the AI-assisted diagnosis, and 97.7% of the MES0 cases were GS < 3.1 by histological diagnosis.On the other hand, 92.9% of the MES2 cases were also judged as Active in the AI-assisted diagnosis, and only 32.1% of the MES2 cases were judged as GS < 3.1 by histological diagnosis.Therefore, an AI-assisted diagnosis may not be necessary when MES0 or MES2 can be clearly determined using unmagnified white light observation.Unnecessary ultramagnification should be avoided as it is a more technical procedure that may increase the examination time compared to conventional endoscopy. 19Although an endoscopic diagnosis involves subjectivity and may lead to divided judgment, MES0 and MES2 are unlikely to be confused. 8,11,12In addition, an AI system that provides MES classifications using unmagnified white light may be commercially available in the future. 22,23ifferent prognoses for subsequent relapses have been reported for MES0 and MES1. 9,24,25However, MES1 lesions often do not relapse.Although the presence of histological inflammation plays a role in relapse, [13][14][15][16] there may be intervening histological differences in the mucosa classified as MES1 (Figure 2).In the present study, the AI system reported a GS < 3.1 in MES1 lesions as healing, suggesting that this AI-based diagnosis may help determine differences in histological inflammation and the subsequent risk of relapse.
Histological determinations are conducted to evaluate inflammation and to diagnose neoplastic lesions.However, in cases where the purpose of biopsy is to evaluate inflammation, tissue sampling can be reduced if inactive mucosa can be identified without biopsy.The use of AI-assisted diagnosis may reduce unnecessary biopsies for the diagnosis of MES1, although prospective studies with relapse as an outcome are required to test this possibility.This study has several limitations.This was a single-center, retrospective analysis of a small number of patients.This approach reveals the degree of inflammation but does not contribute to the detection of dysplasia.It is also unclear whether treatment interventions affect AI-assisted diagnoses, and further studies are required.In addition, ultra-magnification requires technical proficiency, which may have affected the results.However, the AI-assisted diagnoses with ultramagnification were all performed by the same experienced endoscopists.The AI-assisted diagnostic protocol also included obtaining multiple images from the same site, and the most reproducible results were used.This reduces the technical influence as much as possible.
Conclusion
In conclusion, MES and AI-assisted diagnoses have similar diagnostic yields for a GS < 3.1.An AI-based diagnosis of MES1 may reduce the need for biopsies for histologic examination.
Figure 1 .
Figure 1. Comparison of Geboes score based on artificial intelligence (AI) diagnosis and Mayo endoscopic score (MES). The percentage of lesions with a Geboes score (GS) < 3.1 was significantly higher when the AI-assisted diagnosis was healing in MES1 lesions. In MES0 and MES2, the percentage of GS < 3.1 was not significantly different between the AI-assisted diagnoses.
Table 3 .
Comparison of Geboes score based on AI diagnosis and MES diagnosis.
Table 5 .
AI diagnostic performance and proportion of Geboes <3.1 in each endoscopic assessment score.
Data are presented as the number (%) of biopsy points.Abbreviations: AI, artificial intelligence; GS, Geboes score; MES, Mayo endoscopic score. | 2024-01-24T16:29:10.622Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "f57754581657d8c53ead083471419550868f5a73",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/crohnscolitis360/advance-article-pdf/doi/10.1093/crocol/otae005/56320693/otae005.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "194c46de093bbfaf1df6db3331a64e4f5a2a4f08",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
} |
216273131 | pes2o/s2orc | v3-fos-license | The Role of Entrepreneur Characteristic and Financial Literacy in Developing Business Success
One important contribution of Small-and-Medium Enterprises (SMEs) to economic development is the absorption of workers. Absorbing more workers and reducing unemployment are achievable only if SMEs have been successful in managing the sustainability of their business. The success of SMEs' business is determined by a few variables, which are examined in this research. Therefore, this research aims to empirically test the effect of entrepreneur characteristic and financial literacy on business performance. The data are primary, acquired through the distribution of questionnaires to respondents. The research sample includes Small-and-Medium Enterprises (SMEs) in Tarakan City. The analysis technique is Partial Least Squares-Structural Equation Modelling (PLS-SEM). The results show that personal characteristic, psychological characteristic, entrepreneur competency, and financial literacy are determinants of SMEs' performance. The theoretical implication of this research is that its findings support upper echelon theory and RBV theory in explaining factors that determine performance.
Introduction
It is undeniable that Small-and-Medium Enterprises (or SMEs) always have important roles in economic growth of Indonesia. Its contribution to Gross Domestic Income (GDI) is huge, which reaches around IDR 7,005,950,000 or 62.57% of Total GDI (LEI, 2018). Besides having position that determines economic growth, SMEs become front runner in alleviating unemployment. Sudarno (2011) found that SMEs in Depok City can absorb 534,500 workers or 73% of total workforces. Small-and-Medium Enterprises are potentially capable to grow the spirit of entrepreneurship (Sari, Suwarsinah, & Baga, 2016). One reason is because entrepreneurship is one of indirect methods to alleviate unemployment (Sukidjo, 2005). One incentive behind the development of SMEs' volume is high entrepreneurship motivation.
In general, entrepreneur characteristic is the representation of personal or psychological attributes, which are made up of attitude and interest (Sari et al., 2016). It should be noted that entrepreneur characteristics are many and varied. The diversity of entrepreneur characteristics is therefore arranged into three dimensions, namely the personal dimension, the entrepreneurial dimension (related to innovation), and the managerial/organizational dimension (Abood & Aboyasin, 2014).
On the other hand, SMEs have weaknesses that can become factors of business failure. Among these factors are low financial literacy, poor access to banks, bad financial control, and inappropriate investment strategy (Arasti, Zandi, & Bahmani, 2014). Most SME entrepreneurs do not use advanced financial analysis instruments to manage their business, and thus they find themselves weak in financial literacy (Plakalovic, 2015). An empirical study by Lusardi and Mitchell (2007) indicated not only that financial literacy is low in both developing and developed countries, but also that only a few people understand the basic concepts of finance. In 2016, the Financial Literacy Index of Indonesia was low, at around 29.7%. In other words, 67.8% of Indonesians use financial products and services, but only 29.7% of them are well financially literate (OJK, 2016).
That business owners or entrepreneurs are required to have financial literacy is a crucial issue in both developing and developed countries due to the change of their financial landscape (Hilgert & Hogarth, 2003). Several studies already showed that financial literacy can substantiate the opportunity of business success and business sustainability because entrepreneurs are given flexibility to access financial facility (Adomako & Dans, 2014;Aribawa, 2016;Dahmen & Rodríguez, 2014). Therefore, the understanding about financial literacy is surely important to SMEs' entrepreneurs when they are determined to develop their business success.
The objective of this research is to elaborate the relationship between entrepreneur characteristic and financial literacy, and to examine the role of both variables on SMEs' success in developing their business. Research takes place at SMEs in Tarakan City. This research location is selected because there is a limited number of researches that examine the effect of entrepreneur characteristic and financial literacy on business performance of Tarakan SMEs. Ariani and Utomo (2017) found that the weaknesses of SMEs in Tarakan City are: 1) Capital limitation; 2) Lack of understanding about business management, and strategy, system, and process of marketing; 3) Not yet enlisted into business association in Tarakan City; 4) Lack of marketing network and supporting information technology; and 5) Lack of availability for human resources with the required skills and experiences. Through these findings, Ariani and Utomo (2017) declared that some of these weaknesses are indicators that explain entrepreneur characteristic, such as low managerial competency, and lack of skills and experiences, while others are indicators that explain financial literacy, such as capital limitation and lack of information technology network. Considering this explanation as background, the current research attempts to understand how is the effect of entrepreneur characteristic and financial literacy on business performance of SMEs in Tarakan City. Abood and Aboyasin (2014) described that entrepreneurs can be identified through three (3) characteristics, respectively: 1) Personality, explained by indicators of sense of capability and diligence, self-reliant, personal enthusiasm, self-confident anhd optimism, courageous and responsible, and highly motivated toward achievement; 2) Innovative, measured by indicators of having future vision as the motivation of current act, risk taking, thinking out of the box, capable to capture opportunity, being flexible, and thinking openly; 3) Managerial and Organizational Competencies, which is described by indicators of managerial and organizational experiences, dislike the routines (orr traditions), sense of authority and control on what is doing (internal control), capable to invest the resources at proper place, efficient self-management, and social competency (building relationship with others). Sari et al. (2016) divided entrepreneur characteristic into three (3) variables, respectively : 1) Personal (Individual) Characteristic, which is explained by indicators of age, education, experience (related with entrepreneurship) and cosmopolite; 2) Psychological Characteristic, which is measured by indicators of hard working, self-confident, discipline, dare to take the risk, tolerance to uncertainty, innovative, self-reliant, and responsible; and 3) Entrepreneurship Competency, which is decribed by indicators of managerial competency, conceptual competency, social competency, decision-making competency, and timing competency.
Financial Literacy
Entrepreneurs must face a reality that they make various complex financial decisions to improve their business. For example, entrepreneurs must make financial decisions in the
form of savings, investments, and pensions. Financial literacy, therefore, is a very important feature of entrepreneurs' financing decisions, which then affects their performance (Adomako & Dans, 2014).
Financial literacy is the understanding and knowledge about financial principles that must be used in the making of financial decisions and products in order to give an impact of improving welfare (Basu, 2005). Financial literacy is also a discipline of personal financial facts and a key for personal financial management (Garman & Forgue, 2002).
Entrepreneurs' understanding and knowledge about finance (or financial literacy) can be estimated through indicators. Dahmen and Rodríguez (2014) assess financial literacy of SMEs' entrepreneurs through four (4) indicators, respectively: (i) the preparation of monthly financial statement (earning/loss statement and balance sheet); (ii) the review on monthly financial statement; (iii) the financial analysis over monthly financial statement; and (iv) the understanding about gross earning ratio and its contribution to total earnings. These indicators are arranged in Likert Scale at 7 points starting from very agree to very disagree. Chen and Volpe (1998) assess financial literacy level also with four (4) indicators, which include basic knowledge about how to manage finance, credit, savings & investment, and risk. Aribawa (2016) estimates financial literacy level through some indicators, such as: 1) Opening bank account on behalf of enterprise; 2) Enterprise identification during account opening; 3) Minimum fund deposit during account opening; 4) Knowledge about surety of savings; 5) Understandings about potential returns of savings in a year; 6) Understandings about potential returns of savings in multi-years; 7) Understandings about annual credit interest; 8) Knowledge about premium of two optional products; 9) Knowledge about the effect of inflation on currency; 10) Knowledge about value of money over times; and 11) Understandings about the effect of inflation on firm growth.
Performance of Small-and-Medium Enterprises
Small-and-Medium Enterprises (SMEs) are said to be successfully managed if they have good performance. Their performance is affected by many factors, either in positive or negative ways. Mostly, the success comes from how entrepreneurs think about how they should plan their business strategy (Singh & Pathak, 2013). Entrepreneurs' behavior can give a distinguishing effect on business performance (Davis, Marino, & Vecchiarini, 2013). Entrepreneurs always play significant role in the success and sustainability of business.
The performance of SMEs can be measured with indicators. Adomako and Dans (2014) used indicators, such as: Return on Asset (ROA), Return on Equity (ROE) and Tobin's Q market value to assess SMEs' performance . Aribawa (2016) measured SMEs' performance with several indicators, such as: job is done on plan and schedule; job mistakes are too often and causing repetition; the sale is growing; fixed cost is declining; anticipation of production on demand is improving; and there is a surety of punctuality for customers and for compatibility of product and specification. Sari et al. (2016) defined business performance through indicators of earnings and sale.
The Effect of Entrepreneur Characteristic on Business Performance
Upper Echelon Theory holds that there is a relationship between entrepreneur characteristic and business performance. This theory also says that the organization and everything inside it are a reflection of its top management's characteristics (Hambrick & Mason, 1984). Entrepreneur characteristic intended by this research is the observable characteristic, such as age, tenure, functional track, career experience, formal education, heterogeneity, managerial process, and organizational performance (Sambu & Kihara, 2015).
Some empirical reviews explain that entrepreneur characteristic is affecting business performance. Abdulwahab and Al-Damen (2015) examined the impact of entrepreneur characteristic on business success by observing Jordanian small enterprises that have business in medical equipments and devices. They found that entrepreneur characteristic has positive effect on the success of these enterprises. By this finding, it can also be said that the success of small business is closely related with entrepreneur characteristic. Mothibi (2015) conducted an empirical study to analyze the effect of entrepreneur characteristic on business performance at Small-and-Medium Enterprises (SMEs) in Pretoria. Structured questionnaire is used to collect the data about the characteristics of entrepreneurs and enterprises, which are perceived as affecting performance of the enterprises. Based on the result of multiple regression analysis, it is found that managerial competency, education qualification, work experience, location, firm size, business duration, and business sector, have significant and positive effect on SMEs' performance. Isaga (2017) has studied three hundreds (300) Small-and-Medium Enterprises (SMEs) at furniture sector in four different regions of Tanzania. Structural Equation Modelling (SEM) is used as the approach to the simultaneous tests over direct and indirect effects of entrepreneur characteristic on SMEs' performance. The finding shows that personal characteristic of entrepreneurs, represented by cognitive characteristic, has significant effect on SMEs' performance.
Given the explanations above, three (3) hypotheses are formulated. These hypotheses are written as follows:
H1: Personal Characteristic has a positive effect on Business Performance.
H2: Psychological Characteristic has a positive effect on Business Performance.
H3: Entrepreneur Competency has a positive effect on Business Performance.
According to the resource-based view, a firm's valuable resources are the source of its competitive advantage and performance (Barney, 1991). High financial literacy helps enterprises to access financial sources (as resources) so that they can be used optimally to create firm value (Adomako & Dans, 2014).
The positive effect of financial literacy is already stressed by several empirical reviews. Dahmen and Rodríguez (2014) in their empirical study have found a strong relationship between financial literacy and financial performance of the firms. Based on their survey on small entrepreneurs in United States, it is found that 50% business owners (7/14) do not regularly monitor their financial statement and as a consequence, they find most of their business (86%, or 6/7) suffering from financial difficulty. It can be said that poor financial literacy causes entrepreneurs to suffer from financial trouble. Aribawa (2016) confirmed that financial literacy affects business performance and business sustainability of Small-and-Medium Enterprises at creative sector in Central Java. It has implication that by having good financial literacy, thus, SMEs must be able to make proper decisions on managerial and financial issues to ensure the improvement of business performance and business sustainability. Eniola and Entebang (2016) reviewed the effect of financial literacy on performance of Small-and-Medium Enterprises in Nigeria. Result of this review confirmed that the knowledge about finance should help entrepreneurs to get better business performance. This result clarifies the importance of financial literacy for all SMEs owners when they manage their business.
Taking into account all studies above, a hypothesis is developed:
H4: Financial Literacy has a positive effect on Business Performance.
Research Variable and Research Indicator
There are five (5) latent variables observed and measured in this research: personal characteristic, psychological characteristic, entrepreneur competency, financial literacy, and business performance. The first four are independent/exogenous variables, while business performance is the dependent/endogenous variable.
All these variables are latent/unobserved (unmeasured) variables, and these are proxied by using perception of respondents on predetermined indicators. Variables and indicators of this research are described in the following Table 1.
Table 1. Research variables and indicators (Aribawa, 2016; Sari et al., 2016). Source: Empirical theories and studies that are relevant to this research.
Population and Sample of Research
Population of this research includes all Small-and-Medium Enterprises (SMEs) in Tarakan City. The sampling technique is area probability sampling. By this technique, the sample from which data are collected is determined based on area. The area intended by this research includes several districts, namely Central Tarakan, West Tarakan, East Tarakan, and North Tarakan. The sample is obtained by subjecting the research population to financial literacy criteria, which include having savings in public banks and having received business credit from banking or non-banking financial institutions. One hundred (100) respondents constitute the sample, each coming from a different business background. The data collection technique involves the compilation of primary data. A field study is conducted at the research location and questionnaires are distributed to respondents (SMEs' entrepreneurs). The answers to the questionnaires provide the needed data, which are then classified by time dimension. Therefore, the data of this research are cross-sectional.
Analysis Model
Research hypotheses are tested using the combination of Partial Least Squares (PLS) and Structural Equation Modelling (SEM), operated with WarpPLS version 6.0. The research model to be estimated is written as follows:
P = β1·PerC + β2·PsyC + β3·EC + β4·FL + ε,
where P = business performance, PerC = personal characteristic, PsyC = psychological characteristic, EC = entrepreneur competency, and FL = financial literacy.
Result and Discussion
The research model is evaluated in two stages: evaluation of the measurement model and evaluation of the structural model. The evaluation is conducted using PLS-SEM in WarpPLS version 6.0. Two algorithms are used in this research: the outer (measurement) model is estimated with PLS Regression mode, while the inner (structural) model is estimated with Warp2 (non-linear). Both methods are chosen because they produce p-values with the best significance level (Sholihin & Ratmono, 2013). The resampling method is the Stable method, which is the default in WarpPLS 6.0. In the initial estimation, some indicators have factor loading values below 0.6. Based on the rule of thumb for measuring reliability and validity, indicators with factor loadings of less than 0.6 are eliminated from the measurement of the research variables. The factor loadings of the retained indicators, together with the composite reliability and AVE of the variables, are shown in Table 3 (source: primary data, processed 2018). As shown in that table, all indicators that explain the variables Personal Characteristic, Psychological Characteristic, Entrepreneur Competency, Financial Literacy, and Business Performance are valid because all have factor loadings above 0.6; the indicators therefore achieve indicator reliability. Moreover, the AVE values of the variables are > 0.5, so the variables satisfy the condition of convergent validity, and the composite reliability values are > 0.7, satisfying the condition of internal consistency reliability.
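To make the reliability and validity criteria above concrete, the short Python sketch below computes AVE and composite reliability from a set of standardized factor loadings using the standard formulas (AVE as the mean squared loading; composite reliability from summed loadings and error variances). The example loadings are illustrative only and are not the values reported in the paper's Table 3.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """Composite reliability from standardized loadings: (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = np.sum(lam) ** 2
    return num / (num + np.sum(1.0 - lam ** 2))

# Illustrative loadings for one latent variable (hypothetical, not taken from Table 3).
example_loadings = [0.72, 0.81, 0.68, 0.75]
print(f"AVE = {ave(example_loadings):.3f} (convergent validity requires > 0.5)")
print(f"CR  = {composite_reliability(example_loadings):.3f} (internal consistency requires > 0.7)")
```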
The square root of the AVE of each variable is compared with the correlations across the variables/constructs. The result of this comparison is shown in Table 4.
The square root of the AVE of each variable is higher than its correlations with the other variables, and therefore the variables show good discriminant validity (Table 4; source: primary data, processed 2018).
Evaluation of Structural Model
Evaluation of the structural model (inner model) is conducted to predict the relationships across variables by examining how much variance is explained and whether the p-values are significant (Latan & Ghozali, 2016). This evaluation facilitates the hypothesis tests.
Before evaluating the relationships across variables, the goodness of fit of the model must be evaluated. The output of this evaluation is shown in Table 5. Based on this output, the research model has good fit because the p-values for APC, ARS, and AARS are < 0.05, with APC = 0.247, ARS = 0.350, and AARS = 0.323. Both AVIF and AFVIF are < 3.3, so there is no multicollinearity problem across indicators or across exogenous variables. The goodness of fit (GoF) is 0.470 > 0.36, which signifies that the research model has very good fit. The SPR, RSCR, and SSR parameters equal 1 and NLBCDR is 0.875, indicating that there is no causality problem in the research model (Latan & Ghozali, 2016).
The estimated relationships across variables and the explained variance are displayed in Table 6 and illustrated in Figure 1 (source: primary data, processed 2018).
As indicated in Table 6, the R-squared (R 2 ) for Business Performance is 0.35. This means that Personal Characteristic, Psychological Characteristic, Entrepreneur Competency, and Financial Literacy together explain 35% of the variance in Business Performance, while the remaining 65% is explained by variables outside the research model. This R 2 falls into the moderate category (R 2 > 0.25). Moreover, the Q-squared value for Business Performance is 0.299 (> 0), which indicates that the research model has predictive relevance (Latan & Ghozali, 2016).
Taking into account the outputs of Table 6 and the path diagram in Figure 1, the findings can be explained as follows. Personal characteristic has a positive and significant effect on business performance, with a path coefficient of 0.181 and a p-value of < 0.01. This supports Hypothesis 1, which is therefore accepted. The result corresponds with Upper Echelon Theory, which states that an organization and everything inside it are a reflection of the characteristics of its top management (Hambrick & Mason, 1984). The finding is in line with previous studies by Sambu and Kihara (2015) and Mothibi (2015), who found that personal characteristic has a positive impact on business performance. If entrepreneurs have better personal characteristics, their business performance should improve. Entrepreneurs who are of productive age and have more entrepreneurial experience should be more motivated to manage their business efficiently and attain better business performance.
Psychological characteristic has a positive and significant effect on business performance, with a path coefficient of 0.371 and a p-value of < 0.01. This supports Hypothesis 2, which is therefore accepted. Previous studies that corroborate this finding include Sari et al. (2016), Abdulwahab and Al-Damen (2015) and Isaga (2017). In general, these studies assert that the personal characteristics of entrepreneurs have a significant effect on the performance of small and medium enterprises. The positive sign of the effect can be read as follows: better psychological characteristics motivate entrepreneurs to improve business performance. The current research confirms that psychological characteristic (represented by the indicators hard-working, self-confident, disciplined, innovative, self-reliant and responsible, having a future vision, and flexible and open-minded) determines whether entrepreneurs are successful in developing their business. This finding contributes to Upper Echelon Theory, which explains the relationship between entrepreneur characteristics and business performance (Hambrick & Mason, 1984).
Moreover, the outputs of Table 6 and the description in Figure 1 also show that entrepreneur competency has a positive and significant effect on business performance, with a path coefficient of 0.136 and a p-value of 0.04. This supports Hypothesis 3, which is therefore accepted. The result confirms previous studies by Camuffo, Gerli, and Gubitta (2012), Barazandeh, Parvizian, Alizadeh, and Khosravi (2015) and Pamela, Pambudy, and Winandi (2016), which, in general, found that entrepreneur competency has a positive effect on business performance. In other words, if entrepreneurs are more competent in their business, they are more likely to succeed in improving their business performance. The current research shows that entrepreneurs with good managerial and conceptual competencies are those who can map their business properly, and these competencies have a positive impact on performance. Making appropriate decisions, good internal control, and efficient self-management are determinant factors for successful business performance.
Another result shows that financial literacy has a positive and significant effect on business performance, with a path coefficient of 0.3 and a p-value of < 0.01. Hypothesis 4 is supported and therefore accepted. This result is consistent with Resource-Based View Theory, which states that if firms are able to manage their existing resources into valuable, rare, inimitable and unsubstitutable products, they will be able to improve their performance and obtain sustainable competitive advantage (Barney, 1991). The finding is also in accord with previous studies by Dahmen and Rodríguez (2014), Aribawa (2016) and Eniola and Entebang (2016), which found that financial literacy has a positive impact on business performance. It can also be said that high financial literacy, especially concerning financial management, allows entrepreneurs to access financial sources (as resources) and then manage these resources to improve business performance. Entrepreneurs who can manage savings and investments efficiently support the effort to attain high business performance.
Conclusion
The current research attempts to answer the question of whether entrepreneurs' characteristics and competencies, as well as financial literacy, are factors that determine the successful performance of small and medium enterprises in Tarakan City. Four hypotheses are proposed to answer this question. The test of Hypothesis 1 shows that better personal characteristics improve business performance. The test of Hypothesis 2 shows that better psychological characteristics also improve business performance. The test of Hypothesis 3 shows that entrepreneurs who are more competent in entrepreneurship find it easier to improve business performance. The test of Hypothesis 4 shows that high financial literacy has a positive effect on business performance. High financial literacy, which is indicated by financial management and savings and investment management, is indeed a valuable, rare and inimitable resource.
Moreover, there are further implications for how to improve business performance. Psychological characteristic has the largest positive effect on business performance; its path has the highest path coefficient value, so psychological characteristic plays the most important role in improving business performance. Financial literacy ranks second in improving business performance, and is therefore very important to entrepreneurs. | 2020-04-09T09:20:02.242Z | 2020-03-06T00:00:00.000 | {
"year": 2020,
"sha1": "1ef8307d868da9079a78d9ace32f20a919d7cfee",
"oa_license": "CCBYSA",
"oa_url": "https://journal.umy.ac.id/index.php/mb/article/download/6950/5068",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b2f1ac1371f40098c3447501a66cebef4649fd3e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
55824718 | pes2o/s2orc | v3-fos-license | Finite-Difference Time Domain Techniques Applied to Electromagnetic Wave Interactions with Inhomogeneous Plasma Structures
Motivated by the emerging field of plasma antennas, electromagnetic wave propagation in and scattering by inhomogeneous plasma structures are studied through finite-difference time domain (FDTD) techniques. These techniques have been widely used in the past to study propagation near or through the ionosphere, and their extension to plasma devices such as antenna elements is a natural development. Simulation results in this work are validated with comparisons to solutions obtained by eigenfunction expansion techniques well supported by the literature and are shown to have an excellent agreement. The advantages of using FDTD simulations for this type of investigation are also outlined; in particular, FDTD simulations allow for field solutions to be developed at lower computational cost and greater resolution than equivalent eigenfunction methods for inhomogeneous plasmas and are applicable to arbitrary plasma properties such as spatially or time-varying inhomogeneities and collision frequencies, as well as allowing transient effects to be studied as the field solutions are obtained in the time domain.
Introduction
A plasma dipole is an antenna with a radiating structure based on a plasma element instead of a metallic conductor [1][2][3][4][5]. The plasma, kept activated by an ionizing source, is the conducting material that acts as the source for electromagnetic fields, which can then be modulated and used to carry information in telecommunication links. This is achieved by applying a secondary signal source to the plasma, which will then reemit the signal as electromagnetic radiation. Plasma antennas have several technological advantages, such as their radiation properties being electronically controlled, which results in an antenna that is not restricted to its original fabrication characteristics and can be reconfigured with simplicity and in real time [6,7]. If the ionizing source is turned off, the plasma antenna is deactivated, becoming an inert element that is invisible to electromagnetic radiation, eliminating coupling problems with other antenna elements.
These advantages can be employed in solutions to one of the great challenges in the synthesis of antennas for telecommunication links, which is the lack of flexibility in changing parameters, such as the radiation pattern, once the antenna is deployed. This problem can be avoided by the application of antenna arrays, where control over the excitation of each element allows the radiation pattern to be conformed. These antenna arrays are particularly interesting for use in reconfigurable antennas in mobile communications and satellite link applications [8][9][10], but the presence of metallic elements creates additional complexity in the synthesis process due to parasitic interactions. Plasma antennas provide an alternative to metallic elements in these arrays, as they remain inert when deactivated and therefore parasitic interactions are minimized.
Despite the advantages of utilizing plasmas as conducting elements in antennas, there are still several difficulties and obstacles for this technology. Obtaining the real characteristics of a plasma antenna, in a generic situation, requires the complete description of the plasma configuration, which creates theoretical and numerical difficulties when trying to analyze such systems, as outlined in [7]. In particular, the behavior of electromagnetic waves within a plasma structure is not always trivial to analyze.
The theoretical investigation of electromagnetic field behavior within a cylindrical inhomogeneous plasma structure is usually carried out through eigenfunction expansions [11,12], which consist of expanding the electromagnetic field in Bessel functions, or other eigenfunctions appropriate to the problem's geometry, and then finding the unknown expansion coefficients by application of boundary conditions within the plasma and at the plasma container's boundaries. There are limitations to this method, however, such as the computational cost of developing the field solutions, the comparatively low spatial resolution obtained for the fields internal to the plasma structure, and the requirement that the inhomogeneities be of special cases so that an analytical solution can be found.
In this paper, a simulation scheme based on FDTD techniques is proposed to investigate the problem of electromagnetic propagation within arbitrary inhomogeneous plasma structures appropriate for telecommunication applications and the waves scattered by such structures. Such simulations have already been successfully applied to the study of electromagnetic propagation through or near the ionosphere [13][14][15][16][17][18]. Some aspects of plasma antennas have already been treated with FDTD techniques [19][20][21][22], but, to the authors' knowledge, simulated results for electromagnetic fields pertaining to waves propagating within the plasma structure or scattered by it have not been presented, and the aforementioned works explore only homogeneous plasma.
For ease of comparison with results from the literature, the plasma under consideration is inhomogeneous, cold, unmagnetized, and collisional. The plasma is also assumed to be confined to a cylindrical structure and under illumination from a transverse magnetic (TM) plane electromagnetic wave with electric field parallel to the cylinder's axis and in a situation of normal incidence, as shown in Figure 1. The method described herein is more efficient than equivalent eigenfunction expansions and can also provide field solutions with greater resolution and accuracy.
The remainder of this work is organized as follows. In Section 2, the theory utilized to describe the plasma structure in both the eigenfunction solution and the FDTD simulation is briefly outlined. In Section 3, the particulars of simulating the plasma behavior with the FDTD algorithm are presented. Section 4 provides validating results for homogeneous plasmas through comparisons with the usual eigenfunction methods, as well as results that explore the characteristics of electromagnetic propagation through the inhomogeneous plasma. Section 5 provides the concluding remarks.
Theory
The electromagnetic wave propagation in a cold, collisional, isotropic plasma with no background magnetic field can be characterized by the relative permittivity
$$\varepsilon_r(\omega) = 1 - \frac{\omega_p^2}{\omega^2 - j\nu\omega}, \qquad (1)$$
with
$$\omega_p^2 = \frac{n e^2}{m \varepsilon_0}, \qquad (2)$$
where ω p is the plasma frequency, ν is the electron collision frequency, ω is the angular frequency of the propagating electromagnetic wave, n is the electron density, e is the electron's charge, m is its mass, and ε 0 is the permittivity of free space.
Equating (1) with the relative permittivity of an arbitrary lossy material allows an effective conductivity σ ef and associated plasma current density in the frequency domain to be found as
$$\sigma_{ef} = \frac{\varepsilon_0\,\omega_p^2}{\nu + j\omega}, \qquad \tilde{J}(\omega) = \sigma_{ef}\,\tilde{E}(\omega). \qquad (3)$$
Applying an inverse Fourier transform and some algebraic manipulations allows (3) to be written, in the time domain, as
$$\frac{\partial \vec{J}}{\partial t} + \nu \vec{J} = \varepsilon_0\,\omega_p^2\,\vec{E}. \qquad (4)$$
The relevant equations that govern the evolution of the electromagnetic fields within the plasma are then given, in the time domain, by
$$\nabla \times \vec{E} = -\mu_0 \frac{\partial \vec{H}}{\partial t}, \qquad \nabla \times \vec{H} = \varepsilon_0 \frac{\partial \vec{E}}{\partial t} + \vec{J}. \qquad (5)$$
FDTD Technique
The FDTD technique consists of discretizing Maxwell's equations through the use of finite difference approximations for the derivatives. This section will provide the basic characteristics of the FDTD method utilized in this work.
3.1. Computational Domain and Update Equations. The basic scheme laid out by Yee in his seminal paper [23] is used to create the two-dimensional computational domain with N x × N y spatial cells for a simulation run for N t steps. Assuming the time step is given by Δ t and the spatial steps are given by Δ s = Δ x = Δ y , a function f of space and time evaluated at a discrete point in space-time (nΔ t , iΔ x , jΔ y ), where n, i, and j are integers or half-integers, is denoted as
$$f\big|^{\,n}_{i,j} = f(n\Delta_t,\, i\Delta_x,\, j\Delta_y).$$
Electromagnetic fields are spatially discretized over a finite Cartesian grid as per Figure 2, and the cylinder's boundary is realized with a staircase Cartesian approximation. Of note is that electric and magnetic fields are shifted by half steps from each other both spatially and in time, that is, electric fields are stored at integer times and positions, while magnetic fields are stored at half-integer times and positions. This allows for Maxwell's equations to be readily discretized as
$$H_x\big|^{\,n+1/2}_{i,j+1/2} = H_x\big|^{\,n-1/2}_{i,j+1/2} - \frac{\Delta_t}{\mu_0 \Delta_s}\left(E_z\big|^{\,n}_{i,j+1} - E_z\big|^{\,n}_{i,j}\right), \qquad H_y\big|^{\,n+1/2}_{i+1/2,j} = H_y\big|^{\,n-1/2}_{i+1/2,j} + \frac{\Delta_t}{\mu_0 \Delta_s}\left(E_z\big|^{\,n}_{i+1,j} - E_z\big|^{\,n}_{i,j}\right), \qquad (6)$$
$$E_z\big|^{\,n+1}_{i,j} = E_z\big|^{\,n}_{i,j} + \frac{\Delta_t}{\varepsilon_0 \Delta_s}\left(H_y\big|^{\,n+1/2}_{i+1/2,j} - H_y\big|^{\,n+1/2}_{i-1/2,j} - H_x\big|^{\,n+1/2}_{i,j+1/2} + H_x\big|^{\,n+1/2}_{i,j-1/2}\right), \qquad (7)$$
which can be used to propagate electromagnetic waves through a discretized free space within the computational domain. The equations for propagation within the plasma remain unchanged for the magnetic fields, but the plasma current that arises in (3) has to be addressed in the update equation for the electric field.
This term can be efficiently handled by means of an auxiliary differential equation (ADE) formulation [24]. Discretizing the ADE in time yields an update equation for the current term of the form
$$J_z\big|^{\,n+1}_{i,j} = k_J\, J_z\big|^{\,n}_{i,j} + \beta_J\, E_z\big|^{\,n}_{i,j},$$
where the coefficients k J and β J depend on the local values of ν and ω p and on Δ t . The update equation for the electric field requires knowledge of the plasma current at time step n + 1/2, which is obtained by a time average,
$$J_z\big|^{\,n+1/2}_{i,j} = \tfrac{1}{2}\left(J_z\big|^{\,n+1}_{i,j} + J_z\big|^{\,n}_{i,j}\right),$$
such that the update equation for the electric field within the plasma is given by the discretized Ampère law, that is, (7) with the additional term −(Δ t /ε 0 ) J z | n+1/2 on the right-hand side. These update equations naturally take into account any spatial variations in both the electron density and the electron collision frequency.
Figure 4: (a) Fictitious boundary for the problem at hand; fields inside and outside the boundary are continuous across it. (b) Equivalent problem with fields inside the fictitious boundary set to zero and material properties set to that of free space.
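As a structural illustration of the update sequence just described, the following Python sketch implements a minimal 2D TMz Yee loop with a semi-implicit ADE plasma current. It is not the paper's code: the specific ADE coefficients are one plausible discretization of (4), the TFSF plane-wave injection is replaced by a soft point source, and no absorbing boundaries are applied (the domain edges simply reflect), so it should be read only as a sketch of the algorithm's structure.

```python
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4e-7 * np.pi
e, m = 1.602176634e-19, 9.1093837015e-31

nx, ny = 200, 200
ds = 1e-3                      # spatial step [m]
dt = ds / (2.0 * c0)           # time step satisfying the 2D Courant limit
nt = 600

# Material maps: here a homogeneous plasma disc, but any spatial profile could be used.
n_e = np.zeros((nx, ny)); nu = np.zeros((nx, ny))
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
disc = (x - nx // 2) ** 2 + (y - ny // 2) ** 2 < 40 ** 2
n_e[disc], nu[disc] = 5e17, 500e6
wp2 = n_e * e**2 / (m * eps0)

# Semi-implicit ADE coefficients for dJ/dt + nu*J = eps0*wp^2*E (assumed discretization).
kj = (1.0 - nu * dt / 2.0) / (1.0 + nu * dt / 2.0)
bj = eps0 * wp2 * dt / (1.0 + nu * dt / 2.0)

Ez = np.zeros((nx, ny)); Jz = np.zeros((nx, ny))
Hx = np.zeros((nx, ny - 1)); Hy = np.zeros((nx - 1, ny))

f_in = 10e9
for n in range(nt):
    # Magnetic field updates (free space permeability everywhere).
    Hx -= dt / (mu0 * ds) * (Ez[:, 1:] - Ez[:, :-1])
    Hy += dt / (mu0 * ds) * (Ez[1:, :] - Ez[:-1, :])

    # Plasma current update and time average to the half step.
    Jz_new = kj * Jz + bj * Ez
    Jz_half = 0.5 * (Jz_new + Jz)
    Jz = Jz_new

    # Electric field update from Ampere's law including the plasma current.
    curlH = np.zeros((nx, ny))
    curlH[1:-1, 1:-1] = (Hy[1:, 1:-1] - Hy[:-1, 1:-1]
                         - Hx[1:-1, 1:] + Hx[1:-1, :-1]) / ds
    Ez += dt / eps0 * (curlH - Jz_half)

    # Soft point source standing in for the paper's TFSF plane-wave injection.
    Ez[20, ny // 2] += np.sin(2 * np.pi * f_in * n * dt)
```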
3.2. Incident Wave Generation. The incident wave under consideration for the FDTD technique is a plane wave propagating towards the right (positive x-axis) of the computational domain. It is generated by means of a total-field scattered-field (TFSF) formulation [25], as shown in Figure 3. This region separation will also be exploited to calculate the scattered far field from the structure, as discussed in Section 3.5. The computational domain is divided into two regions, and an auxiliary one-dimensional FDTD simulation is concurrently run to account for the propagation of the incident wave. The field values from the incident wave are then directly added and subtracted in the field update equations in computational cells surrounding the region separation.
3.3. Grid Termination. Absorbing boundary conditions (ABCs) based on one-way wave equations [26,27] are used to analytically absorb incoming waves at the computational boundaries and simulate propagation towards infinity, preventing unphysical backscattered waves from polluting the numerical results. Mur's ABC is only valid in a vacuum, but since the TFSF technique used to generate the incident wave is implemented in vacuum cells surrounding the plasma cylinder, that condition is automatically fulfilled.
3.4. Steady-State Fields. The steady-state (frequency-domain) fields at the frequency of interest are obtained from the time-domain solution by a discrete Fourier transform, which can be readily implemented concurrently with the leapfrogging algorithm at all grid points by means of a running summation, where Simpson's rule was employed to numerically evaluate the Fourier integral [28].
3.5. Near-to-Far-Field Transformation. With the steady-state fields calculated as described in Section 3.4 from the nodes in region 2 of the TFSF technique, a near-to-far-field transformation can be employed to numerically obtain the scattering amplitude A(φ) of the structure of interest. The strategy consists of constructing a fictitious boundary within region 2 and taking advantage of the equivalence principle, as shown in Figure 4.
With the equivalent electric and magnetic currents at the fictitious boundary, the electromagnetic potentials for a two-dimensional problem can be written in terms of Green's functions as closed path integrals over the boundary, where primed coordinates denote points upon the fictitious boundary. With the electromagnetic potentials, the scattered fields can be found. Additionally, by considering an observation point in the far field, the asymptotic expression for the Hankel function can be used, resulting in a far-field expression for the z-component of the electric field in which the angle ψ is given by cos ψ = ρ̂ · ρ̂ 0 . The numerical scattering amplitude then follows from this expression and can be readily calculated with the application of Simpson's rule, as all necessary quantities are known at the end of the FDTD run.
Validation Results with Homogeneous Plasma.
To validate the FDTD technique, it is applied to a homogeneous plasma and the solution compared with solutions obtained from eigenfunction expansions, which can be efficiently calculated for homogeneous cases. The simulation parameters are as follows: time discretization Δ t = 2.35702 × 10⁻¹² seconds, spatial discretization Δ s = 1 × 10⁻³ meters, maximum temporal step N t = 1200, and number of spatial cells N = N x × N y = 63001. Figure 5 shows the time evolution of the intensity of the electric field across the computational domain for a homogeneous plasma, showing the transient effects of the plasma on the electromagnetic wave, such as refraction at the cylinder boundaries as well as internal reflections within the plasma.
Figures 6-8 show comparisons between the magnitudes of the electric field within the homogeneous plasma cylinder obtained by the FDTD simulation and by the eigenfunction method for different incident frequencies, electron densities, and collision frequencies, respectively. Figures 9-11 show the same, but for linear cuts along the x- and y-axes, which allows for more precise comparisons between the two methods than the two-dimensional colour plots.
Figure 12 shows a comparison between the resulting scattering amplitude for the electric field obtained by the eigenfunction expansion method and the FDTD simulation for varying values of the homogeneous plasma parameters. Excellent agreement is found between the two approaches.
These results show that, in most cases, the FDTD simulations provide solutions that differ by less than 1% from those obtained by the eigenfunction method widely used in the literature, and by less than 5% in the worst case. It can also be seen that the FDTD technique allows the study of the time-domain evolution of the wave propagation, so transient effects can be analyzed. Additionally, the algorithm remains precise even for a plasma that is overly dense in relation to the incident frequency (Figures 6(b) and 7(f)) or that displays large losses (Figure 8(f)), so that spatial accuracy is only dependent on the mesh being fine enough to resolve the wave propagation, which is a well-known condition in finite-difference algorithms.
Investigation of Inhomogeneous Plasma
An inhomogeneous plasma is now investigated, with the same simulation parameters as the homogeneous case. The plasma inhomogeneity is defined by a quadratic density profile in which n 0 is the plasma density at the center of the plasma, ρ is the distance from the center, and r is the cylinder radius.
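Since the exact functional form of the quadratic profile is not reproduced here, the following Python sketch assumes a parabolic profile that peaks at n 0 in the centre and falls to zero at the cylinder wall, and shows how such a profile is turned into the cell-constant (staircase) density map used on the FDTD grid, together with its point-wise error. The radius value and the profile itself are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

n0 = 5e17        # central electron density [m^-3]
r  = 0.04        # cylinder radius [m] (illustrative value)
ds = 1e-3        # FDTD spatial step [m]

def density_profile(rho):
    """Assumed parabolic profile peaked at the centre: n(rho) = n0 * (1 - (rho/r)^2) inside the cylinder."""
    return np.where(rho <= r, n0 * (1.0 - (rho / r) ** 2), 0.0)

# Staircase approximation: density held constant inside each grid cell, sampled at the cell centre.
cells = int(np.ceil(2 * r / ds))
x_centres = (np.arange(cells) + 0.5) * ds - r     # cell-centre coordinates along a diameter
n_staircase = density_profile(np.abs(x_centres))

# Point-wise error of the staircase value against the profile at the cell edges, relative to n0.
x_edges = np.arange(cells + 1) * ds - r
err = np.abs(density_profile(np.abs(x_edges[:-1])) - n_staircase) / n0
print(f"max point-wise error along the diameter: {err.max():.3%}")
```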
Figure 13 shows the time evolution of the intensity of the electric field across the computational domain for an inhomogeneous plasma, showing the transient effects of the plasma on the electromagnetic wave, such as the deflection in the propagation direction of the wave. Figures 14-16 show the magnitude of the electric field within the inhomogeneous plasma cylinder obtained by the FDTD simulation for different incident frequencies, central plasma densities, and electron collision frequencies, respectively. Figures 17-19 show the same, but for linear cuts along the x- and y-axes. Figure 20 shows the resulting scattering amplitude for the electric field obtained by the FDTD simulation for varying values of the inhomogeneous plasma parameters.
One qualitative analysis is straightforward: as expected from inspecting (1), variations in the plasma frequency ω p (which depends on the plasma density) change the behavior of electromagnetic propagation inversely to variations in both the wave frequency ω and the electron collision frequency ν. This effect can be seen, for example, by comparing the results for f in = 5 GHz in Figure 14 with the results for n 0 = 5 × 10¹⁸ m⁻³ in Figure 15.
Additionally, the results show a phenomenon of wave path deflection when the propagation is through inhomogeneous plasmas. This is due to the spatial variation in the electron density, which in turn causes a spatial variation in the refractive index of the plasma medium. Continuous spatial variations in the refractive index, in turn, are well known to cause ray deflection.
In broad terms, two different behaviors can be observed from the presented results: (1) electromagnetic waves penetrating the plasma and propagating while being conditioned by the plasma, that is, suffering dispersion, attenuation, and deflection, when appropriate to each situation's characteristics, and (2) electromagnetic waves being reflected from the plasma and exhibiting very low penetration (or, for the inhomogeneous cases, very low penetration after a certain point in the inhomogeneous cylinder). These two different types of behavior are related to the real part of the plasma's dielectric permittivity, with penetration possible for ℜ(ε r ) > 0 and reflection occurring for ℜ(ε r ) < 0. This behavior shift is shown in Figure 21. These results are consistent with previous qualitative results in the literature obtained by the application of the eigenfunction expansion technique, but solutions are obtained at lower computational cost and with greater resolution when applying the FDTD technique.
A limitation that comes with the FDTD technique for inhomogeneous plasmas, however, is the staircasing effect on the local plasma density n. Even with an overall spatially dependent profile, the density is taken to be constant within each grid cell, so there is an additional constraint that the spatial step must be small enough to properly approximate the desired profile up to a tolerance threshold that will depend on the application and the desired error measure for the solution.
For the cases presented herein, the difference between the actual plasma density function and its staircase approximation in the FDTD grid is shown in Figure 22. The maximum absolute point-wise error between the approximation and the density function is less than 2.5%.
Conclusions
FDTD simulations are of great value in identifying different behaviors within the plasma structure and exploring the effects of the plasma parameters on the electromagnetic propagation, as well as studying transient effects. In particular, for inhomogeneous cases or any case with other kinds of spatial complexity (e.g., complicated geometries for the structure), the computational cost of running analytical methods based on eigenfunction expansions is prohibitively high, and the results obtained lack good enough resolution to be precisely analyzed, two limitations that do not exist for FDTD simulations. The FDTD technique described herein allows for arbitrary time or spatially varying parameters to be incorporated in the simulation, as well as providing step-by-step transient solutions, two features that eigenfunction expansions lack.
Understanding the effects of plasma inhomogeneities on electromagnetic wave propagation and scattering, and being able to correctly simulate those effects, are important steps in the design of plasma devices such as telecommunication antennas, especially when the inhomogeneities are time-dependent or present sharp spatial variations. The algorithm presented herein is an efficient solution that, to the authors' knowledge, has not been previously applied to 2D plasma systems in the context of telecommunication devices.
Future perspectives for this work include extending the numerical algorithm to a TEz-polarized incident wave; due to the nature of TMz-polarized waves, the electric field was restricted to having only a z-component. Another perspective is including ionization processes in the algorithm, which would allow the simulation of the start-up and turn-off of a device; so far, the plasma has been considered to be in a steady state of ionization, that is, the source responsible for ionization is considered to be active for a long time and recombination processes are ignored.
With these extensions, the algorithm would be able to simulate fully self-consistent plasma systems in three spatial dimensions, thus allowing for the full simulation of an entire device like a plasma antenna or even the interaction between multiple devices operating simultaneously.
Figure 1 :
Figure 1: Plane wave incidence on the plasma cylinder.
Figure 2: Generic Cartesian spatial cell (i,j) and surrounding cells in the 2D computational domain. Stored values for each cell are the z-component of the electric field at time n and position (i,j), the x-component of the magnetic field at time n + 1/2 and position (i,j + 1/2), and the y-component of the magnetic field at time n + 1/2 and position (i + 1/2,j).
Figure 3: Total-field scattered-field technique used to generate the incident fields and calculate the scattered far fields. (a) Region separation for the TFSF technique: region 1 consists of total fields, while region 2 consists only of scattered fields. (b) Detailed view of the field components adjacent to the TFSF boundary; a computational cell at the corner of the TFSF boundary is shown.
Figure 5: Time evolution of the intensity of the electric field, in volts/meter, across the computational domain, for the homogeneous case where incident wave frequency is set to f in = 10 GHz, electron density is set to n = 5 × 10 17 m −3 , and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 6: Comparison between the magnitude of the electric field, in volts/meter, within the cylinder obtained by the eigenfunction method and the FDTD simulation for different incident frequencies; electron density is set to n 0 = 5 × 10 17 m −3 , and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 7 :
Figure 7: Comparison between the magnitude of the electric field, in volts/meter, within the cylinder obtained by the eigenfunction method and the FDTD simulation for different electron densities; incident wave frequency is set to f in = 10 GHz, and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 8 :
Figure 8: Comparison between the magnitude of the electric field, in volts/meter, within the cylinder obtained by the eigenfunction method and the FDTD simulation for different collision frequencies; incident wave frequency is set to f in = 10 GHz, and electron density is set to n 0 = 5 × 10 17 m −3 .
Figure 9 :
Figure9: Comparison between the magnitude of the electric field, in volts/meter, inside the plasma for linear cuts through the cylinder obtained by the eigenfunction method and the FDTD simulation for different incident frequencies; electron density is set to n 0 = 5 × 10 17 m −3 , and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 10 :
Figure 10: Comparison between the magnitude of the electric field, in volts/meter, inside the plasma for linear cuts through the cylinder obtained by the eigenfunction method and the FDTD simulation for different electron densities; incident wave frequency is set to f in = 10 GHz, and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 11: Comparison between the magnitude of the electric field, in volts/meter, inside the plasma for linear cuts through the cylinder obtained by the eigenfunction method and the FDTD simulation for different collision frequencies; incident wave frequency is set to f in = 10 GHz, and electron density is set to n 0 = 5 × 10 17 m −3 .
Figure 13 :
Figure 13: Time evolution of the intensity of the electric field, in volts/meter, across the computational domain, for the inhomogeneous case where incident wave frequency is set to f in = 10 GHz, central electron density is set to n 0 = 5 × 10 17 m −3 , and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 14: Magnitude of the electric field, in volts/meter, within the inhomogeneous plasma cylinder obtained by the FDTD simulation for different incident frequencies f in .Central electron density is set to n 0 = 5 × 10 17 m −3 , and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 15 :
Figure 15: Magnitude of the electric field, in volts/meter, within the inhomogeneous plasma cylinder obtained by the FDTD simulation for different central electron densities n 0 .Incident wave frequency is set to f in = 10 GHz, and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 16: Magnitude of the electric field, in volts/meter, within the inhomogeneous plasma cylinder obtained by the FDTD simulation for different collision frequencies ν.Incident wave frequency is set to f in = 10 GHz, and central electron density is set to n 0 = 5 × 10 17 m −3 .
Figure17: Magnitude of the electric field, in volts/meter, inside the plasma for linear cuts through the cylinder obtained by the FDTD simulation for different incident frequencies.Central electron density is set to n 0 = 5 × 10 17 m −3 , and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 18 :
Figure18: Magnitude of the electric field, in volts/meter, inside the plasma for linear cuts through the cylinder obtained by the FDTD simulation for different central electron densities n 0 .Incident wave frequency is set to f in = 10 GHz, and electron collision frequency is set to ν = 500 × 10 6 Hz.
Figure 19 :
Figure 19: Magnitude of the electric field, in volts/meter, inside the plasma for linear cuts through the cylinder obtained by the FDTD simulation for different collision frequencies ν.Incident wave frequency is set to f in = 10 GHz, and central electron density is set to n 0 = 5 × 10 17 m −3 .
Figure 21 :
Figure 21: Behavior shift for the plasma dielectric permittivity as a function of the local plasma density n, the plasma collision frequency ν, and the incident wave frequency f in .Red region represents ℜ ε r > 0, and blue region represents ℜ ε r < 0.
Figure 22 :
Figure 22: Comparison between the actual inhomogeneous density function for the plasmas considered herein and its staircase approximation in the FDTD grid. | 2018-12-11T21:49:47.279Z | 2018-03-29T00:00:00.000 | {
"year": 2018,
"sha1": "4b5cf9c2347842bf52219fbd1bae5ad7db2b900d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijap/2018/3476462.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4b5cf9c2347842bf52219fbd1bae5ad7db2b900d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
132790818 | pes2o/s2orc | v3-fos-license | Understanding Spatial Variability of Air Quality in Sydney : Part 1 — A Suburban Balcony Case Study
There is increasing awareness in Australia of the health impacts of poor air quality. A common public concern raised at a number of “roadshow” events as part of the federally funded Clean Air and Urban Landscapes Hub (CAUL) project was whether or not the air quality monitoring network around Sydney was sampling air representative of typical suburban settings. In order to investigate this concern, ambient air quality measurements were made on the roof of a two-storey building in the Sydney suburb of Auburn, to simulate a typical suburban balcony site. Measurements were also taken at a busy roadside and these are discussed in a companion paper (Part 2). Measurements made at the balcony site were compared to data from three proximate regulatory air quality monitoring stations: Chullora, Liverpool and Prospect. During the 16-month measurement campaign, observations of carbon monoxide, oxides of nitrogen, ozone and particulate matter less than 2.5-μm diameter at the simulated urban balcony site were comparable to those at the closest permanent air quality stations. Despite the Auburn site experiencing 10% higher average carbon monoxide amounts than any of the permanent air quality monitoring sites, the oxides of nitrogen were within the range of the permanent sites and the pollutants of greatest concern within Sydney (PM2.5 and ozone) were both lowest at Auburn. Similar diurnal and seasonal cycles were observed between all sites, suggesting common pollutant sources and mechanisms. Therefore, it is concluded that the existing air quality network provides a good representation of typical pollution levels at the Auburn “balcony” site.
Introduction
Air quality in Sydney is relatively good compared to other large industrialised cities [1]. Background ozone concentrations in Sydney are comparatively low: the annual mean ozone concentration for Sydney was 18.5 ppb in 2017 [2]. In comparison, the 2017 mean ozone concentration for urban sites in the UK was 27.9 ppb [3]. In New South Wales (NSW), measured particulate matter (<2.5 µm diameter; PM 2.5 ) concentrations are generally <15 µg m⁻³, but occasionally exceed the national daily standard (25 µg m⁻³), particularly during wildfire events [4]. Despite the relatively low levels of harmful air pollutants measured in Sydney, air quality constitutes a health risk in the city. It has been established that exposure to some pollutants, including ozone and PM 2.5 , at concentrations considered generally safe by the US EPA is nevertheless associated with negative health outcomes [5,6]. Furthermore, approximately 2% of deaths in Sydney have been attributed to ozone and particulate pollution [1]. These species dominate exceedances of national air quality standards in Sydney [2,7]. Ozone and particulate matter have therefore been identified as the pollutants of most concern in Sydney. Ozone exceedances in Sydney are associated with very high summer temperatures, with influence from both synoptic [8] and mesoscale [9] meteorological variables. The Sydney region predominantly experiences a NO x -limited regime during ozone events, with the influence of biogenic emissions highlighted in recent literature [10]. Despite the strong influence of bushfires and dust storms on PM 2.5 exceedances [11,12], traffic emissions have been shown to be the largest single source of PM 2.5 within the Sydney basin [13].
Methods used to gather ambient air quality data largely utilise fixed-site ground-level monitoring stations, often located in local parks, that measure background pollutant concentrations. For example, the New South Wales Office of Environment and Heritage (OEH) maintains a network of permanent, stationary air quality monitoring stations throughout Sydney [2].
Increasing population in Sydney is driving increased construction of (and residence in) apartment buildings. Apartment buildings accounted for one-third of all new residential building approvals in Australia in 2015, with more than 30% of these in Sydney [14]. As of the 2016 census, 20.7% of residences in New South Wales were apartments, with greater than 85% of these apartments located in Sydney [15]. Therefore, it is reasonable to assume that urban balconies are a site of possible exposure to poor air quality for a significant proportion of the population of Sydney.
A small body of research exists discussing the effect of balconies on ventilation impacting indoor air quality in high-rise buildings [16,17] and regarding pollutant mixing in urban street canyons [18]. However, limited research exists regarding air quality measurements at balcony sites. Nevertheless, research into vertical changes in urban air quality more generally has been more thorough. Ozone has been modelled to vary with height above street level: the presence of other ozone-destroying compounds in urban areas is modelled to deplete surface ozone up to altitudes of 20 m [19]. This phenomenon has been measured in Beijing [20], with the effect of shears in wind speed and wind direction highlighted. Several studies have found that PM 2.5 concentrations decrease with increased height in an urban environment [21,22]. Contrastingly, a study on multi-storey buildings in Singapore showed that the mean PM 2.5 concentration was highest at the mid-floors in comparison to the upper and lower floors, and the upper floors had the lowest fine particulate matter mass concentration [23]. It was noted, however, that this may have been the result of particle interception by surrounding tree leaves and the inflow of cleaner air from higher altitudes. Han et al. [24] found that measurement sites at near-ground height (5-10 m) were most influenced by human emission activities compared to measurements at higher altitudes. It has been noted that a number of factors influence the vertical profiles of PM 2.5 concentrations, including vehicle emissions and new particle formation [25]. Urban street canyons and the presence of neighbouring buildings have also been shown to play a role by altering vertical mixing and flow fields [18].
The Clean Air and Urban Landscapes (CAUL) hub is a project of the National Environmental Science Program, which is funded by Australia's Department of the Environment and Energy. CAUL focuses on cross-disciplinary research on the sustainability and liveability of Australian urban environments [26]. Air quality is an important part of this investigation. CAUL research is partially driven by public concerns expressed at a number of "roadshow" events. A recurring question posed by members of the public was "how does the background air quality reported for my area relate to my likely exposure when I am outside?" Although we acknowledge that we cannot answer this question for any particular individual, we nevertheless set about trying to address this problem via the use of two separate case studies:
1. WASPSS-Auburn (Western Air-Shed Particulate Study for Sydney in Auburn) provides an assessment of whether the local air quality monitoring stations give a good representation of pollutant concentrations at a site representative of a suburban balcony setting.
2. The RAPS campaign (Roadside Atmospheric Particulates in Sydney) provides an assessment of PM 2.5 concentrations near a busy road in the Sydney City metropolitan area, and how these compare to reported air quality levels from nearby statutory monitoring stations. The spatial and temporal variability of PM 2.5 , relevant to members of the public seeking to minimise their exposure to fine particulate matter, are also explored. The campaign also provided an opportunity for the first calibration of a microscopic traffic emissions simulation.
The first case study is discussed in the present paper, and the second is covered in a companion paper, also in this issue [27].
The WASPSS-Auburn campaign incorporated a mobile air quality monitoring station and an open-path infrared Fourier transform spectrometer (OP-FTIR). The OP-FTIR system, which can measure infrared-active gases such as CO, NH 3 , N 2 O and CH 4 , has previously been deployed for agricultural [28][29][30] and biomass burning [31,32] emissions estimates. Details of the main findings from the OP-FTIR during WASPSS, which relate to vehicle ammonia emissions and episodes of significant smoke pollution, are presented in two separate papers [33,34].
In this paper we present the results from the mobile air quality monitoring station during the 16-month WASPSS-Auburn campaign at the simulated suburban balcony site and compare the pollution levels observed to those measured at the closest permanent air quality monitoring stations. Local-scale phenomena, including the built environment and meteorological processes, dominate vertical variability in urban pollutant concentrations, especially particulates and ozone, making it very difficult to generalise findings at any one site to a broader region; however, this is not the purpose of this study. Instead, we aim to observe the similarities and differences between pollutant concentrations at a simulated urban balcony site and regional air quality monitoring stations and to test the assumption that regional background measurements provide a reasonable representation of pollutant concentrations to which local residents may be exposed on an urban balcony.
The Mobile Air Quality Station
The Mobile Air Quality station (MAQ) (inset, Figure 1) is a mobile, compact air quality station that complies with the Australian/New Zealand Standards for the measurement of ambient air quality, the National Environmental Protection (Ambient Air Quality) Measure (NEPM) [35]. The MAQ is fitted with the following instruments. Calibration and communications equipment was also installed, allowing quality control and monitoring of the instrumentation. Measurements were taken at one-minute time resolution and averaged to one-hour mean values. Maintenance and calibration were performed in accordance with Climate and Atmospheric Science Standard of Operation Procedures [36]. Note that all observations described in this paper are from the MAQ station unless explicitly stated otherwise. Further details on the measurements are publicly available [37].
Auburn Balcony Measurement Site
Auburn is a suburb in Western Sydney located 16 km west of downtown Sydney (measured from Sydney Harbour Bridge), containing residential, business and industrial areas, as well as numerous parks and sporting complexes. On 23 May 2016 the MAQ station was placed on site on the roof of the second storey of a commercial business at 2 Percy Street, Auburn (33.854690° S, 151.037400° E, 6.72 m above ground level, 20.6 m above sea level). This site was chosen purely for pragmatic reasons, as we had connections to allow us access to the roof (and gained permission to locate the retroreflectors for the open-path measurements on the council building 400 m away across the town centre). However, Auburn is a good representative suburban centre with a good mixture of land uses including residential, industrial and transport. The MAQ inlet height was 3.3 m above the rooftop. The Auburn site (Figure 1, main image) is adjacent to a major rail passageway, with several industrial sites in the vicinity. There is a major intra-urban road (A6) situated 330 m to the east of the site, with the Great Western Highway (A44) and the M4 motorway, which run from north to east, at distances of just over 2 km from the site. Measurements are available from 26 May 2016 until 18 September 2017. Data from the MAQ and from the Open Path FTIR and associated instruments are available at the Pangaea Data Publisher [37].
Chullora, Prospect and Liverpool Air Quality Monitoring Stations
The OEH monitors air quality in urban areas of New South Wales using strategically placed air quality monitoring stations. These sites are equipped with instrumentation sufficient to meet the Ambient Air Quality NEPM [38], with measurements taken using a standard sampling protocol and undergoing rigorous quality assurance [39]. Instrumentation at permanent sites is different to that used in the MAQ station, with specifications for each site being publicly available [40]. This allows the OEH to provide current data online publicly and allows for comparison between network sites. Measurements taken at the Auburn balcony site were compared with the following three air quality monitoring stations in western Sydney: Chullora, Liverpool and Prospect.
1. Chullora air quality monitoring station (33°53′38″ S, 151°02′43″ E, 32 m above sea level) is located in the grounds of the Southern Sydney TAFE, Worth St, Chullora, in a mixed residential and commercial area. Nearby traffic influences include the A6 and Hume Highway, both within 0.5 km of the site, with a major road joining the two approximately 150 m to the south. These sites were selected as they are the most proximate stations (within 15 km) to the Auburn site, located to the southwest, southeast and northwest, respectively (Figure 2). Each of these stations is located in or adjacent to reserves of parkland. Publicly available hourly measurements from each air quality monitoring station were downloaded from the OEH website [41].
Traffic Counters
Measurements from two traffic counters, located on Olympic Drive (station 7153, 1.25 km SSE of the Auburn balcony) and on Silverwater Road (station 7112, 1.40 km NNE of the Auburn balcony), were used to assist with pollutant analysis (see Figure 1 for locations). Traffic counters are maintained by the New South Wales Roads and Maritime Services. Hourly measurements of total vehicle count from 26 May 2016 to 13 September 2017 were downloaded for each site and processed to mean hourly counts during the measurement period. Both cameras count traffic travelling northbound on the A6. Data are publicly available from the Roads and Maritime Services website [42].
Data Analysis
Variables selected for analysis were wind speed, temperature, carbon monoxide (CO), oxides of nitrogen (NO x ), ozone (O 3 ) and PM 2.5 . Meteorological variables were chosen to account for the effect of temperature and wind speed on pollutant concentrations. Ozone and PM 2.5 have been identified as the pollutants of most concern in Sydney and were therefore critical to the project. CO and NO x were analysed as they are associated with traffic emissions and interact with and influence concentrations of the pollutants of most concern.
Data were analysed using the software "R" (version 3.4.0) [43], making extensive use of the "openair" package (version 2.6.1) [44]. Mean statistics reported refer to the entire measurement campaign. Mean bias values were calculated using the openair "modStats" function [44] and are expressed as a percentage of the overall mean value for the variable at the Auburn balcony site.
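The actual analysis was performed in R with openair; the short Python sketch below is only an illustrative equivalent of the mean-bias statistic as described here, i.e., the mean of the hourly station-minus-Auburn differences expressed as a percentage of the Auburn mean. The column names and the four example values are hypothetical.

```python
import pandas as pd

def mean_bias_percent(df, station_col, reference_col="auburn"):
    """Mean bias of a station relative to the Auburn balcony site,
    expressed as a percentage of the Auburn mean over pairwise-complete hours."""
    paired = df[[station_col, reference_col]].dropna()
    bias = (paired[station_col] - paired[reference_col]).mean()
    return 100.0 * bias / paired[reference_col].mean()

# Hypothetical hourly PM2.5 values; the real analysis used the full 16-month hourly record.
df = pd.DataFrame({
    "auburn":   [6.1, 7.4, 8.0, 5.5],
    "chullora": [6.8, 7.9, 8.6, 6.0],
})
print(f"Chullora vs Auburn mean bias: {mean_bias_percent(df, 'chullora'):.1f}%")
```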
Traffic Counters
Measurements from two traffic counters, located on Olympic Drive (station 7153, 1.25 km SSE of Auburn balcony) and on Silverwater Road (station 7112, 1.40 km NNE of Auburn balcony), were used to assist with pollutant analysis (see Figure 1 for locations).Traffic counters are maintained by the New South Wales Roads and Maritime Services.Hourly measurements of total vehicle count from 26 May 2016 to 13 September 2017 were downloaded for each site and processed to mean hourly counts during the measurement period.Both cameras count traffic travelling northbound on the A6.Data is publicly available from the Roads and Maritime Services website [42].
Data Analysis
Variables selected for analysis were wind speed, temperature, carbon monoxide (CO), oxides of nitrogen (NO x ), ozone (O 3 ) and PM 2.5 .Meteorological variables were chosen to account for the effect of temperature and wind speed on pollutant concentrations.Ozone and PM 2.5 have been identified as pollutants of most concern in Sydney and were therefore critical to the project.CO and NO x were analysed as they are associated with traffic emissions and interact with and influence concentrations of the pollutants of most concern.
Data were analysed using the software "R" (version 3.4.0) [43], making extensive use of the "openair" package (version 2.6.1) [44]. Mean statistics reported refer to the entire measurement campaign. Mean bias values were calculated using the "modStats" function of openair [44] and are expressed as a percentage of the overall mean value for the variable at the Auburn balcony site.
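The analysis itself was carried out in R with openair; as a rough illustration of the two statistics used throughout this section (diurnal hourly means, and mean bias expressed as a percentage of the Auburn mean), an equivalent computation in Python/pandas could look as follows. The data-frame layout, column names and file names are assumptions for illustration, not part of the original analysis.

```python
import pandas as pd

def diurnal_mean(df: pd.DataFrame, column: str) -> pd.Series:
    """Mean value of one variable for each hour of the day over the whole campaign."""
    return df[column].groupby(df.index.hour).mean()

def mean_bias_percent(auburn: pd.Series, station: pd.Series) -> float:
    """Mean bias (Auburn minus station) as a percentage of the Auburn mean."""
    joined = pd.concat([auburn, station], axis=1, keys=["auburn", "station"]).dropna()
    bias = (joined["auburn"] - joined["station"]).mean()
    return 100.0 * bias / joined["auburn"].mean()

# Hypothetical usage with hourly CSV exports indexed by timestamp:
# auburn = pd.read_csv("auburn_hourly.csv", index_col=0, parse_dates=True)
# chullora = pd.read_csv("chullora_hourly.csv", index_col=0, parse_dates=True)
# print(diurnal_mean(auburn, "pm25"))
# print(mean_bias_percent(auburn["ws"], chullora["ws"]))
```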
The percentage of valid data collected over the measurement period was as follows for the analysed variables: temperature and wind speed: 100%; CO: 94%; NOx: 78%; O3: 60.1%; PM2.5 and PM10: 99%.
Results and Discussion
Measurements from the Auburn balcony site were compared to the regional background concentrations as measured by three proximal air quality monitoring stations located at Chullora, Liverpool and Prospect.
Wind and Temperature
Examination of wind speed measurements revealed that the Auburn balcony has a diurnal cycle similar to the background sites (Figure 3A). At each location, the wind speed is lowest overnight and into the morning (23:00-06:00). Wind speed then increases through the morning, peaking between 12:00 and 18:00, before dropping back to a minimum late in the evening. All permanent air quality monitoring sites display higher wind speeds throughout the 24-h cycle than the balcony site. This is reflected by a lower mean wind speed at Auburn (1.3 m s−1) compared to Chullora, Liverpool and Prospect (1.7, 2.0 and 1.9 m s−1, respectively). The average mean bias between Auburn and the air quality monitoring stations is −47%. This is likely to be due to the positioning close to buildings dampening the measured wind speed at Auburn, compared to the measurements in more open areas at the permanent air quality stations.
Plotting monthly mean wind speeds demonstrates that all sites follow the same annual trend (Figure 3B), containing an autumn minimum in May and a spring maximum in October. The similarity in seasonal cycle between sites is unsurprising considering their relatively close proximity. Lower wind speeds at Auburn are again evident throughout the cycle.
Temporal variations in wind direction were also examined. Measurements were binned into four periods (00:00-06:00, 06:00-12:00, 12:00-18:00, and 18:00-00:00) for each site and plotted as wind roses (Appendix A, Figure A1). Winds until 12:00 were dominated by SW flow at all sites, with a northerly flow also evident at Prospect. Winds were more variable in direction in the afternoon and evening. The variability observed between sites is unsurprising given the sensitivity of local surface winds to the environment immediately surrounding the measurement site.
Seasonality is observed in wind direction at all sites. Site-specific wind roses, binned by season, are presented in Appendix A, Figure A2. Spring and summer winds are the most variable at all sites, with noted northerly airflows observed at Chullora, and easterly flows at Liverpool during summer. Southerly and westerly flows dominate winter winds at Auburn, Chullora and Liverpool, with a distinctive NW flow evident during winter at Prospect.
Analysis of mean hourly temperature (Figure 3C) again reveals a similar cycle at all sites, with minimum temperatures experienced just prior to sunrise, and building to a maximum in the early afternoon. Auburn is slightly warmer overnight and during early hours of the morning than other sites. The average mean temperature bias of the Auburn balcony compared with the air quality monitoring stations is close to 1 °C warmer, perhaps due to the thermal retention properties of the concrete Auburn rooftop as compared to the parkland sites of the other air quality monitoring stations.
A plot of mean monthly temperatures (Figure 3D) displays the expected seasonal cycle at all sites. Temperature is at its highest (close to 25 °C) during summer, with the winter minimum temperature experienced in July. Geographical proximity is again responsible for similarity in the seasonal temperature cycle. Again, slightly warmer temperatures are observed at Auburn compared to other sites. This is particularly noticeable in the cooler months.
Carbon Monoxide
Diurnal and seasonal cycles of CO, NOx and ozone are presented in Figure 4. A plot of hourly mean CO mole fraction (Figure 4A) displays a bimodal distribution at all sites. CO pollution begins growing at 05:00, reaching a first peak between 07:00 and 08:00. A decrease from this peak gives diurnal minimum concentrations in the early afternoon, coincident with the timing of peak wind speeds (Figure 3A). The evening peak grows from near 17:00 to a maximum near 22:00. The mean CO mole fraction measured at Auburn (0.38 ppm) is similar to, albeit slightly higher than, that measured at Chullora (0.27 ppm) and Liverpool (0.33 ppm). The mean CO mole fraction at Prospect is significantly lower (0.11 ppm), with lower amounts evident throughout the diurnal cycle. This is reflected in the mean bias between Auburn and the air quality monitoring stations, which ranges from +10% (compared to Liverpool) to +68% (compared to Prospect). The significantly lower CO mole fractions at Prospect (which is in a residential area) suggest that the commercial activities near the other sites contribute a substantial fraction of the CO pollution at Auburn, Chullora and Liverpool (which are all mixed residential and commercial areas).
Oxides of Nitrogen
Plotting hourly mean mole fractions of oxides of nitrogen (NOx, Figure 4C) reveals a similar bimodal distribution at all locations. The first peak occurs between 07:00 and 08:00, with the broader second peak occurring between 19:00 and 22:00. Morning maximum NOx pollution varies between sites from an average of 60 ppb at Liverpool to an average of almost 30 ppb at Chullora. During the evening peak, NOx pollution levels at the different sites are more similar to each other, especially at Auburn, Liverpool and Chullora. The broadening observed in the evening peak is likely to be caused by a coupling of the evening traffic peak and the collapse of the daytime boundary layer. Again, hourly NOx mole fractions are consistently lower at Prospect than at the other sites. This is reflected in a significantly greater mean bias between Auburn and Prospect (+32%) compared to Auburn and Chullora (+6.9%), and Auburn and Liverpool (−5.3%). Mid-afternoon minimum NOx pollution is observed at all sites in a similar manner to minimum CO pollution.
Traffic as a Major Source of Carbon Monoxide and Oxides of Nitrogen
The similarity in diurnal cycles implies that a common source of CO and NOx dominates at all measured locations. It is suggested that the diurnal cycle is related to morning and evening rush hours at all sites, with high overnight concentrations attributable to low boundary layer conditions [45,46]. Examining traffic counts along this road supports this attribution. Northbound traffic along the A6 (300 m east of the balcony site) shows a morning peak at 06:00-08:00 at sites both north and south of the Auburn balcony (Figure 5). This aligns with the morning peak in CO and NOx. This suggestion is further supported by examining polar bivariate plots of the Auburn balcony site. The slightly elevated pollution levels of CO and NOx associated with relatively strong easterly winds (Figure 6A,B) are likely attributable to the A6 highway. An explanation for mid-afternoon minimum pollution levels is found when examining diurnal wind speed patterns. Wind speed and CO/NOx amounts are anticorrelated. During the mid-afternoon, turbulence and local wind speed are at a maximum (Figure 3A). This leads to a deep boundary layer and hence more pollutant dilution.
High pollutant mole fractions at low wind speeds for CO and NOx in Figure 6 indicate local pollutant sources, because during periods of very low wind speed pollutant transport is suppressed. High pollution levels during low wind speeds also suggest accumulation during periods of atmospheric stability. In addition to the local traffic and domestic emissions mentioned, there are industrial facilities proximate to the Auburn site contributing to the observed CO and NOx levels, as documented in Australia's National Pollutant Inventory. Five hundred metres to the NE of the Auburn balcony is a printery, emitting 2.8 T yr−1 CO and 4.2 T yr−1 NOx [47]. Nine hundred metres to the NE is a major brewery, emitting 31 T yr−1 CO and 150 T yr−1 NOx [48]. The higher pollution associated with wind speeds between 1 and 2 m s−1 from the west is likely the result of katabatic drainage associated with highly stable conditions and low atmospheric mixing that traps urban pollution close to ground level [45,46].
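The polar bivariate plots referenced here (Figure 6) were generated with openair; the underlying aggregation is simply the mean pollutant value in each wind-direction sector and wind-speed bin. A sketch of that aggregation in Python/pandas is given below; the column names ("wd" in degrees, "ws" in m s−1) and bin widths are assumptions for illustration, not the settings used for the published figures.

```python
import pandas as pd

def polar_bivariate_table(df: pd.DataFrame, pollutant: str,
                          sector_width: float = 30.0,
                          ws_bin: float = 1.0) -> pd.DataFrame:
    """Mean pollutant value per (wind-direction sector, wind-speed bin)."""
    sector = (df["wd"] // sector_width) * sector_width   # 0, 30, ..., 330 degrees
    speed = (df["ws"] // ws_bin) * ws_bin                # 0, 1, 2, ... m/s
    return (df.assign(sector=sector, speed=speed)
              .groupby(["sector", "speed"])[pollutant]
              .mean()
              .unstack("speed"))

# High values in the low-speed bins point to local sources (suppressed transport),
# while elevated values in the easterly sectors are consistent with the A6:
# table = polar_bivariate_table(auburn, "nox")
```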
Ozone
Typical diurnal cycles of ozone are observed when plotting hourly means across a 24-h period (Figure 4E) at the Auburn balcony site, and at the Chullora, Liverpool and Prospect air quality monitoring stations. There is a pre-dawn minimum of less than 10 ppb at all sites growing to a mid-afternoon maximum greater than 20 ppb, due to the daytime photochemical production of ozone followed by titration of ozone by NO overnight. The relationship between NOx and ozone is evident in the anticorrelation between the species (correlation coefficient, r = −0.57, number of points, n = 38,869). This is also evident when comparing the polar bivariate plot of NOx (Figure 6B) to that of ozone (Figure 6C), and also in the diurnal cycles of the species (Figure 4C,E, respectively). The relationship between ozone and temperature is also clearly expressed in these measurements (r = 0.63, n = 39,539). Ozone levels are similar between sites, except during summer when ozone is significantly lower at the Auburn balcony site than at the proximate air quality monitoring stations. However, it must be noted that a reduced number of ozone measurements were taken during the summer maximum at the Auburn site. This may contribute to the lowered average concentration. Calibration of the ozone instrument occurred at 14:00 or 15:00 each day, creating a lack of measurements during the daily ozone peak. This also contributes to the lower concentrations (average mean bias: −24%) reported at the Auburn balcony compared to the other sites.
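The correlation coefficients quoted above are simple pairwise Pearson correlations of the hourly series; an illustrative computation (again in Python/pandas rather than the R workflow actually used, and with hypothetical column names) is:

```python
import pandas as pd

def pearson_r(a: pd.Series, b: pd.Series) -> tuple[float, int]:
    """Pearson correlation and the number of paired points after dropping gaps."""
    joined = pd.concat([a, b], axis=1).dropna()
    return joined.iloc[:, 0].corr(joined.iloc[:, 1]), len(joined)

# r_nox_o3, n = pearson_r(auburn["nox"], auburn["o3"])   # anticorrelated, r ~ -0.57
# r_t_o3, m = pearson_r(auburn["temp"], auburn["o3"])    # positively correlated, r ~ 0.63
```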
Annual Cycles of Carbon Monoxide, Oxides of Nitrogen and Ozone
Monthly means of CO and NOx (Figure 4B,D, respectively) reveal similar seasonal cycles for these species. Highest pollution levels for both pollutants at all sites are observed in May, June or July. Higher winter mole fractions are attributed to a combination of factors: slower photochemical removal, lower boundary layer mixing heights and additional contributions to CO and NOx from combustion heating. CO and NOx are lower at Prospect than at other sites throughout the year. Again, a dependence on wind speed is evident, with the seasonal cycle of wind speed (Figure 3B) anticorrelated with those of CO and NOx (Figure 4A,C).
Monthly mean ozone mole fractions (Figure 4F) also align with the expected seasonal cycle at all sites. A December maximum is observed at all sites, coinciding with the period of peak solar irradiance and high summer temperatures. This agrees with previous studies regarding ozone in the Sydney basin, with the importance of extreme heat and biogenic emissions noted [10,49,50]. The May-June minimum is aligned with fewer daylight hours and cooler temperatures, giving rise to slower photochemistry and less ozone production. Mean mole fractions throughout the year are similar for the three air quality monitoring stations. Ozone measurements are lower at Auburn throughout the year, with the greatest difference observed in late summer, partly due to the reasons previously stated.
PM2.5
Plotting hourly mean PM2.5 concentration (Figure 7A) reveals, as with other pollutants, a similar diurnal cycle between measurement locations. The four sites follow a similar bimodal trend, where the PM2.5 concentration is at a maximum near 06:00, and again in the evening. The higher concentration of PM2.5 during the evening and into the night is most likely due to particulates being trapped within the low nocturnal boundary layer.
A likely source of particulate pollution at all sites is local traffic, similar to CO and NOx. The trough present in early afternoon PM2.5 concentration is likely due to the growth of a turbulent boundary layer. Evidence for particulate dilution is provided by the maximum wind speed coinciding with minimum particle concentrations in the mid-afternoon (Figure 3A), and wind speed minima in the morning and evening during the high observed concentrations of PM2.5. Auburn shows mean concentrations comparable to the other sites, although the morning peak is later than at the other sites, suggesting a possible influence of local traffic that could be associated with school drop-off times. The mean bias between the Auburn balcony and each site is Chullora −1.29 µg m−3, Liverpool −1.42 µg m−3 and Prospect −0.640 µg m−3. Negative values in each instance show that the Auburn balcony sees slightly lower mean PM2.5 than each permanent site. The polar bivariate plot for Auburn (Figure 6D) shows the impact of the nearby A6 motorway when winds are from the east. Secondary particulate formation is driven by regional-scale processes within the Sydney basin, since the precursors are dominated by biogenic sources from the surrounding forested regions. For this reason, photochemically driven particle formation processes are not expected to contribute significantly to differences between PM2.5 concentrations at the different sites.
Annual trends in PM2.5, which can be examined by plotting monthly mean concentrations (Figure 7B), are also similar between all sites. The most notable exception is a very high March mean at Chullora, which is not present at other sites. This localised increase in monthly mean concentration is attributable to a fire that occurred on 22 February 2017 at a recycling plant less than one kilometre from the AQMS [51]. Reduced February and March concentrations at Auburn may be an artefact of a period of reduced measurements due to technical difficulties. All sites show a winter maximum, likely attributable to combustion heating emissions [33]. A smaller, secondary maximum in December-January is due to more active secondary photochemical particle formation processes (partially temperature and oxidant driven) [52]. The influence of wind speed on PM2.5 is also clear when comparing the annual cycle of wind speed (Figure 3A) to that of PM2.5 (Figure 7B), with the two annual cycles anticorrelated.
Diurnal and seasonal cycles of PM10 were also plotted. The diurnal cycle (provided in Appendix A, Figure A3A) is very similar to that of PM2.5 at all sites. The seasonal cycle (Appendix A, Figure A3B) varies significantly, showing summer maxima at all sites and no winter peak. Similar to PM2.5, PM10 concentrations at Auburn are lower than those at all permanent sites as indicated by mean bias (Chullora: −2.26 µg m−3; Liverpool: −3.24 µg m−3; Prospect: −1.53 µg m−3).
Discussion-Comparison of Balcony Site to Regional Background
The Auburn balcony site demonstrates similar diurnal and seasonal cycles to nearby permanent air quality monitoring stations for all analysed variables. This implies that similar mechanisms are driving the variability in pollutant concentrations observed at the balcony site and at regional background sites. There is some difference in pollution levels (amplitude of diurnal and seasonal cycles) between sites. Auburn recorded lower wind speeds than those recorded at the air quality monitoring stations and slightly higher temperatures. Higher levels of carbon monoxide were observed at Auburn than at any of the other three sites.
Diurnal and seasonal cycles of the pollutants of most concern, ozone and PM2.5, were similar at the balcony site and at the other air quality monitoring stations. Interestingly, lower levels of O3 and PM2.5 were observed at Auburn compared to other sites. This reflects previous research in urban areas in the case of PM2.5 that showed a decrease in concentration with height above ground level [21,22]. This finding provides a rare air quality benefit to increased urbanisation and higher population densities in Australian cities, at least for residents of high-level apartments. The significance and causes of the lower O3 amounts measured at the balcony site are less clear. Whilst missing data may contribute to the lower mole fractions observed, the low bias is consistent through much of the day and all of the year. Titration by NOx cannot explain the differences in O3 since the NOx values are consistent with those measured at the other sites. However, despite a seemingly significant bias, outside of the summer months the difference is typically less than 2 ppb, which is within the range of calibration accuracy of the method. An inverse relationship with wind speed confirms the importance of atmospheric stability on local air pollutant concentrations as noted by Chambers et al. [46]. Some variability is expected between sites as the pollutants measured are highly variable over small spatial scales. The permanent monitoring stations aim to measure regional background levels of pollution; however, the measurements at each site do reflect local sources, with Chullora and Liverpool showing higher levels of pollution than Prospect. Differences between the permanent air quality monitoring stations are greater in all cases than differences between the Auburn balcony site and the air quality monitoring stations. Therefore, we conclude that the regional air quality monitoring stations provide a good representation of pollutant concentrations at the Auburn balcony site, including for the pollutants of most concern in Sydney.
Summary and Conclusions
The WASPPS-Auburn campaign provided evidence that the diurnal and seasonal cycles of all pollutants at the balcony site were similar to those at the permanent air quality monitoring sites, suggesting common pollutant sources and mechanisms. Traffic signals and the influence of wind speed (as a proxy for surface turbulence and atmospheric stability) dominate diurnal cycles of CO, NOx and PM2.5. Low winter boundary layer heights and the influence of combustion heating provide these species with a winter peak. Ozone follows the opposite trend to NOx, with a photochemically driven summer mid-afternoon peak.
During the 16-month campaign, differences in pollution levels between the sites were within the expected range, given the high spatial variability of air quality. CO was highest at the Auburn balcony site, but nitrogen oxides were within the range measured at the other sites and the pollutants of most concern (O3 and PM2.5) were lowest at Auburn. Therefore, we conclude that the existing air quality network provides a good representation of typical pollution levels at the Auburn "balcony" site selected for this study. Although this result cannot be generalised to all suburban balconies in Sydney, it demonstrates the effectiveness of the regional air quality monitoring network in western Sydney at providing an indication of personal exposure to outdoor air quality pollutants at a simulated balcony site.
Figure 1. Inset: the mobile air quality station (MAQ) in position at the Auburn balcony site. Main image: map indicating the position of the MAQ (blue pin) and the surrounding roads. Locations of the Silverwater Road (north) and Olympic Drive (south) traffic counters are indicated by grey pins. Note the presence of the major A6 road 330 m east of the site. Generated in OpenStreetMap®.
2. Liverpool air quality monitoring station (33°55′58″ S, 150°54′21″ E, 22 m above sea level) is located in the Council depot, off Rose Street, Liverpool, in a mixed residential and commercial area. The Hume Highway and M5 motorway are both within approximately 1 km of the site.
3. Prospect air quality monitoring station (33°47′41″ S, 150°54′45″ E, 66 m above sea level) is located in William Lawson Park, Myrtle Street, Prospect, in a residential area. The Great Western Highway lies approximately 1 km to the south, with the M4 motorway a further 400 m south.
Figure 2. Map showing the area around the WASSPS-Auburn campaign site (indicated by the blue pin), including surrounding air quality monitoring stations (yellow pins), generated using Google Earth®.
Figure 3. Diurnal and seasonal cycles of mean wind speed (A and B, respectively) and temperature (C and D, respectively) at the Auburn balcony site and three surrounding air quality monitoring stations; 95% confidence intervals in the mean are shaded.
Figure 4. Diurnal and seasonal cycles of mean carbon monoxide mole fractions (A and B, respectively), oxides of nitrogen (C and D, respectively), and ozone (E and F, respectively) at the Auburn balcony site and three surrounding air quality monitoring stations. Calibration of the ozone monitor occurred at 13:00-14:00 daily, and hence those measurements have been removed.
Figure 6. Polar bivariate plots of CO, NOx, O3 and PM2.5 ((A-D), respectively) at the Auburn balcony location. Concentric circles from the origin indicate increasing wind speed, direction is indicated by quadrant and warmer colours indicate increasing pollutant concentration.
Figure 7. Hourly (A) and monthly (B) mean PM2.5 concentration at the Auburn balcony site and the three surrounding air quality monitoring station sites; 95% confidence intervals in the mean are shaded.
Figure A2. Site-specific wind roses binned by season. Colour represents wind speed, while distance from the origin represents the proportion of total wind direction measurements captured within each 30° segment. | 2019-04-08T12:04:42.999Z | 2019-04-04T00:00:00.000 | {
"year": 2019,
"sha1": "4a02c4f38d41391a0d4e6263119ad033cc8e35eb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/10/4/181/pdf?version=1554373363",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "580c3557fe5c872aba1b22268543e0116879b5f7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
233597980 | pes2o/s2orc | v3-fos-license | Small-Signal Stability Analysis for Power System Frequency Regulation with Renewable Energy Participation
With the increasing penetration of wind and photovoltaic (PV) energy, maintaining the small-signal stability of the power system has become a key problem. Therefore, this paper analyzes the small-signal stability of a power system integrated with wind and solar energy. First, a mathematical model for small-signal stability analysis of power systems including a wind farm and a PV station is established, and the characteristic roots of the New England power system integrated with wind and PV energy are obtained to study its small-signal stability. In addition, the validity of the theory is verified by voltage drops at different nodes, which shows that a power system in which wind-solar renewable energy participates in frequency regulation can restore the system to the rated frequency in the shortest time and, at the same time, can enhance the robustness of each unit.
Introduction
Recently, with the exhaustion of fossil energy and the deterioration of the natural environment, renewable energy has attracted wide attention [1]. Wind energy and solar energy are the most widely used intermittent clean energy, and they are highly complementary in terms of resource and time distribution [2]. If wind and solar energy are integrated to form a wind-solar complementary energy system and participate in the frequency regulation of the power system, the utilization efficiency of intermittent energy can be improved to a certain extent and the global energy shortage can be alleviated [3,4].
However, the random fluctuation of the output of wind and solar energy causes huge peak-regulation pressure on the power balance of the power system [5,6]; on the other hand, the power system is subject to small-signal disturbances all the time during operation [7,8]. An unstable system is difficult to operate properly in practice [9,10]. Thus, the analysis of small-signal stability becomes one of the important tasks in power system operation [11,12]. Literature [13] establishes a small-signal model of PV generation connected to a weak AC grid. The stability of PV power generation under different power grid strengths and control parameters is studied by means of eigenvalue analysis. Literature [14] studies the influence of a large amount of wind power generation on small-signal stability and corresponding control strategies to alleviate this negative influence. In [15], the Lyapunov stability criterion is used to analyze the stability of the integrated hybrid system. Stability research can be carried out for different renewable energy sources, such as the wind power generation system, photovoltaic system, and micro hydropower system. However, the above analyses regard wind, solar and other renewable energy sources as a perturbation of the power system and do not consider their participation in the frequency regulation of the power system. Therefore, they are not effective in analyzing the stability of a power system in which wind-solar renewable energy participates in frequency regulation.
Thus, this paper studies an integrated energy system including wind power and a PV system with the method of eigenvalue analysis, and studies the oscillation modes of the power system when wind and solar power are connected separately, and when the wind farm is connected first and then the PV system is connected. The simulation model of the system is established and the New England power system is used to verify the correctness of the small-signal stability analysis. The remainder of this paper is organized as follows: Section 2 develops the system modelling. In Section 3, small-signal stability analysis is described. Comprehensive case studies are undertaken in Section 4. The different systems are discussed in Section 5 and Section 6 summarizes the main contributions of the paper.
Multimachine Power System Modelling.
The third-order model of the ith generator in a multimachine power system can be expressed by the following formula, where subscript i denotes the variables of the ith machine; δ i is the relative rotor angle; ω i is the generator rotor speed; ω 0 is the system speed; E qi and E qi ′ are the voltage and transient voltage on the q-axis; P mi is the constant mechanical power input; P ei is the electric power output; V ti is the generator terminal voltage; V di and V qi are the d-axis and q-axis generator terminal voltages; x di and x di ′ are the d-axis synchronous and transient impedances; x qi is the q-axis synchronous impedance; H i is the rotor inertia; T d0i is the d-axis transient short-circuit time constant; I di and I qi are the d-axis and q-axis generator currents; Y ij is the equivalent admittance between the ith and jth nodes; B ij is the susceptance between nodes i and j; G ij is the conductance between nodes i and j; and u fdi and E fdi are the excitation voltage and the initial excitation voltage, respectively.
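The displayed equations of this model are not reproduced in the text above; for orientation, a standard third-order (flux-decay) formulation consistent with the symbols just defined is sketched below. This is a reconstruction for readability and is not necessarily the exact formulation used by the authors.

```latex
\begin{aligned}
\dot{\delta}_i &= \omega_i - \omega_0, \\
\dot{\omega}_i &= \frac{\omega_0}{2H_i}\left(P_{mi} - P_{ei}\right), \\
T'_{d0i}\,\dot{E}'_{qi} &= E_{fdi} + u_{fdi} - E_{qi},
\qquad E_{qi} = E'_{qi} + \left(x_{di} - x'_{di}\right) I_{di}.
\end{aligned}
```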
System Modelling of DFIG Based Wind Turbine.
DFIG is connected to the power system through the voltage source converter, as shown in Figure 1 [16]. The aerodynamic mathematical model of the wind turbine is described in [16], where ρ is the air density, R denotes the radius of the wind turbine, and v wind means the wind speed. C P (λ, β) is a function of tip-speed-ratio λ and blade pitch angle β representing the power coefficient. A specific wind speed corresponds to a wind turbine rotational speed that obtains C Pmax, namely the maximum power coefficient, and therefore tracks the maximum mechanical (wind) power. ω m denotes the wind turbine rotational speed [17]. The 4th-order mathematical model of DFIG is described in terms of the following quantities: ω b represents the electrical base speed, ω s denotes the synchronous angle speed, and ω r means the rotor angle speed; e ds ′ and e qs ′ denote the equivalent d-axis and q-axis (dq-) internal voltages; i ds and i qs are the dq-stator currents; υ ds and υ qs represent the dq-stator terminal voltages; and υ dr and υ qr are the dq-rotor voltages. L m means mutual inductance. The pitch angle control system is designed to improve wind energy conversion efficiency and make the wind turbine output stable. Its model involves β ref, the reference value of the pitch angle, and T β, the inertia time constant of the pitch control system. The grid-side converter, which is directly connected with the power system, has the main function of maintaining a constant capacitor voltage under the control of the DC regulating system, as well as adjusting the power factor. The DC sides of both converters are supported by a common capacitor. The power equation of the converter can be described as [18] P r = P g + P DC and P r = v dr i dr + v qr i qr, where P r is the active power of the AC terminal of the machine-side converter and P g is the active power of the AC terminal of the grid-side converter. P DC is the active power of the capacitor tie line; i dr and i qr are the d-q axis components of the rotor current, respectively; i dg and i qg are the d-q axis components of the system-side converter current, respectively; v dg and v qg are the d-q axis components of the system-side converter voltage, respectively; v DC and i DC are the current and voltage of the DC link in the converter; C is the capacity of the capacitor. Equation (6) can be rewritten accordingly.
2.3. Modelling of PV System. The control structure of the PV system is shown in Figure 2 [19].
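The aerodynamic and pitch-actuator equations referred to above are not reproduced in the text; in the usual formulation, and using the symbols defined in this subsection, they read approximately as follows (a standard-form sketch, not necessarily the authors' exact equations):

```latex
P_m = \tfrac{1}{2}\,\rho\,\pi R^2\, C_P(\lambda,\beta)\, v_{wind}^{3},
\qquad \lambda = \frac{\omega_m R}{v_{wind}},
\qquad T_\beta\,\dot{\beta} = \beta_{ref} - \beta .
```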
According to Kirchhoff's law, the U-I equation of a PV cell can be described in terms of the following quantities: I sc is the short-circuit current; U oc is the open-circuit voltage; U m is the voltage at maximum power; I m is the current at maximum power; S ref is the illumination intensity under the standard environment, which is 1 kW/m 2; T ref is the temperature in the standard environment, which is 25°C; I sc ′, U oc ′, I m ′, and U m ′ are, respectively, the correction values of I sc, U oc, I m, and U m under different environments; α and c are temperature compensation coefficients; and β is the compensation coefficient of PV irradiation. In addition, the DC/DC converter mainly plays the role of boosting and power transformation. The DC link is the intermediate link connecting the DC side and the AC side, namely, the DC bus capacitance model. The DC link model follows from the relationship between the capacitor energy and voltage.
In this model, P PV2 is the DC side input power of the DC link; P pve is the output power of the DC link inverter side; C is the capacitance value of the DC link capacitor; V D is the voltage of the DC link; and E C is the amount of energy stored in the capacitor. The wind-solar complementary energy system has three operating states: first, independent generation by the wind turbine; second, independent generation by the PV array; and third, wind-solar complementary power generation. Wind speed, solar radiation, load power consumption, and the charging and discharging capacity of the energy storage device all determine the operating state of the wind-solar complementary energy system. Due to the randomness of these factors, the stability of the power system is bound to be affected to some extent. Therefore, it is necessary to analyze the small-signal stability of the power system integrated with renewable energy. The control structure diagram of the wind-solar energy system is given in Figure 3.
Small-Signal Stability Analysis
The Lyapunov linearization method is related to the local stability of nonlinear systems. The basic idea is to obtain the local stability of a nonlinear system near its equilibrium operating point from the stability properties of its linear approximation [20][21][22].
For the differential-algebraic equations describing the dynamic characteristics of the power system, linearization at the steady-state operating point (x 0 , y 0 ) can be obtained as follows [23][24][25]: Δx represents the state variables that describe the dynamic characteristics of the power system in the differential equations and Δy represents the operating parameters of the system in the algebraic equations. A, B, C, D are, respectively, the partial derivatives evaluated at the steady-state operating point (x 0 , y 0 ). Eliminating the operating parameters y, a reduced state equation is obtained, whose coefficient matrix A is usually called the state matrix of the system. The stability of the analyzed system at the steady-state operating point (x, y) can be judged from the eigenvalues of matrix A [26][27][28]:
(a) When the real parts of all eigenvalues of A are negative, the actual power system can maintain stability when the equilibrium point encounters a small-signal disturbance.
(b) When at least one eigenvalue of A has a positive real part, the actual power system will lose stability when it encounters a small-signal disturbance at the equilibrium point.
(c) When no eigenvalue of A has a positive real part, but at least one eigenvalue has a zero real part, the linearized system is in a critically stable state, and this cannot be used to judge whether the actual power system is stable at the equilibrium point.
(d) A real characteristic root corresponds to a non-oscillating mode. The modes represented by negative real characteristic roots are attenuated, and the greater the absolute value, the faster the corresponding modes decay.
(e) Complex characteristic roots always appear as conjugate pairs, λ = σ ± jω. A negative real part represents a damped oscillation mode [29][30][31], and a positive real part represents a growing oscillation; the real part of the eigenvalue represents the damping of the system oscillation, while the imaginary part represents the frequency of the system oscillation [32]. The frequency of oscillation can be expressed as [33] f = ω/(2π).
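The linearized equations referred to above are not reproduced in the text; in the usual notation for a power-system DAE model, and consistent with the matrices A, B, C, D defined there, the linearization and the reduction to the state matrix take the following standard form (a reconstruction for the reader's convenience):

```latex
\Delta\dot{x} = A\,\Delta x + B\,\Delta y, \qquad
0 = C\,\Delta x + D\,\Delta y
\;\;\Longrightarrow\;\;
\Delta\dot{x} = \left(A - B D^{-1} C\right)\Delta x \equiv \tilde{A}\,\Delta x .
```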
The damping ratio is defined as ζ = −σ/√(σ^2 + ω^2); it represents the attenuation characteristic of the oscillation amplitude.
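As a compact numerical illustration of this eigenvalue-based assessment (the authors worked in MATLAB; the sketch below uses Python/NumPy and placeholder matrices), the reduced state matrix, its eigenvalues, the oscillation frequencies f = ω/(2π) and the damping ratios can be computed as follows:

```python
import numpy as np

def small_signal_modes(A, B, C, D):
    """Eigen-analysis of the reduced state matrix A_tilde = A - B * D^-1 * C."""
    A_tilde = A - B @ np.linalg.solve(D, C)
    eig = np.linalg.eigvals(A_tilde)
    sigma, omega = eig.real, eig.imag
    freq = omega / (2.0 * np.pi)                  # oscillation frequency of each mode
    zeta = -sigma / np.sqrt(sigma**2 + omega**2)  # damping ratio of each mode
    stable = bool(np.all(sigma < 0.0))            # all real parts negative?
    return eig, freq, zeta, stable

# Hypothetical usage with small random Jacobian blocks (dimensions arbitrary):
# rng = np.random.default_rng(0)
# A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 2))
# C, D = rng.normal(size=(2, 4)), np.eye(2)
# eigvals, f, zeta, is_stable = small_signal_modes(A, B, C, D)
```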
Case Studies
The proposed methodology is tested on the New England power system, as shown in Figure 4. It consists of 39 buses and 10 generators, and the New York grid connected to the New England power system is represented by the first generator. Detailed system parameters are given in the literature [16]. The proposed methodology has been developed in the MATLAB 2017b environment. In order to analyze the damping characteristics of the interconnected system when the wind farm and PV system are connected to the power system, the eigenvalue analysis is carried out for the following four working conditions: (a) the initial system; (b) only wind farms connected at bus #1 with an output of 5 MW. Table 1 shows partial eigenvalues of the system in the four cases. It can be seen that when the wind farm and PV system are connected separately, their characteristic roots are all far away from the imaginary axis. In particular, after the addition of wind and solar energy, the characteristic root distribution is well improved, which indicates that independent access of wind power and of the PV system can both significantly improve the stability and, at the same time, are complementary to each other. The root loci distribution for the different conditions is given in Figure 5. It can be seen that the system that does not involve wind-solar renewable energy in frequency regulation has the worst recovery ability after a small-signal disturbance, while, with the connection of wind and solar energy, the recovery ability of the system is improved. In particular, the combination of wind and solar energy has the best recovery from a small-signal disturbance and the ability to adjust the system frequency to near the rated frequency in the shortest amount of time. From Figure 7 it can be found that, with the combination of wind and PV, the rotor angle difference regulation capacity of generator G 1 is significantly improved, its oscillation amplitude is significantly reduced, and it is restored to the rated value in the shortest time. In addition, it has the best regulation ability for active power and reactive power and will adjust the system to the steady state in the shortest period, so that the system subject to a small-signal disturbance has the strongest frequency regulation ability.
Bus #3 Voltage Drop.
In order to further study the positive effect of the energy storage system on the PV station, based on the above case, this paper considers that an energy storage system is configured in the PV station connected to bus #2. In addition, a 0.8 p.u. voltage drop occurred at bus #3 at 5 s and recovered after 0.1 s; the system response is shown in Figure 8. It can be seen that after the PV system is configured with the energy storage system, the small-signal stability is better. Compared with the initial PV system, it can restore the system frequency in a relatively short time.
Bus
The corresponding system response is shown in Figure 9. It can be found that the frequency regulation ability of the PV station equipped with the energy storage system is greatly improved, which can well suppress the frequency fluctuation of the power system subjected to a small-signal disturbance. In addition, it can help the synchronous generator to recover to the stable state in a short time. Quantitative results are summarized in Table 2, in which IAE x = ∫_0^T |x − x*| dt and x* denotes the reference of variable x. In particular, IAE δ12 of the system combining wind and PV is merely 61.90%, 74.79%, 78.49%, 82.84%, and 88.27% of that without wind and PV, with only wind, with only PV, with the PV station with energy storage system, and with PV followed by wind, respectively, acquired in the bus #3 voltage drop case (bold colour indicates the best results in Table 2).
Conclusions
More and more large- and medium-sized renewable energy power stations have been built and connected to the power system, and they account for an increasing proportion of the power system. This affects the stability and damping characteristics of the traditional power system. In this paper, the influence of wind power and photovoltaic energy on the stability of the power system is studied, and the main conclusions are as follows:
(a) Based on the calculation of characteristic roots, it is shown that a power system integrated with wind and solar energy participating in frequency regulation has better stability.
(b) Based on the New England power system, the damping characteristics of the system can be effectively improved and the system can be more stable after wind-solar renewable energy is incorporated into the power system.
(c) Based on the New England power system test, it is verified that the photovoltaic power station can improve its stability to a certain extent after installing the energy storage system. In particular, IAE f acquired by the PV station with energy storage system is merely 88.30% and 95.40% of that without wind or PV and with only PV, respectively, in the case of the bus #3 voltage drop.
Data Availability
The data that support the findings of this study are available upon request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Conflicts of Interest
The authors declare no conflicts of interest. | 2021-05-04T22:05:29.591Z | 2021-04-05T00:00:00.000 | {
"year": 2021,
"sha1": "ae7360d596b17f645fcd813f201b2385e4a1a3db",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2021/5556062.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "03e02a375c8196eddffe1f65f014a7d9360f7114",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245669004 | pes2o/s2orc | v3-fos-license | Towards Unsupervised Open World Semantic Segmentation
For the semantic segmentation of images, state-of-the-art deep neural networks (DNNs) achieve high segmentation accuracy if the task is restricted to a closed set of classes. However, as of now DNNs have limited ability to operate in an open world, where they are tasked to identify pixels belonging to unknown objects and eventually to learn novel classes, incrementally. Humans have the capability to say: I don't know what that is, but I've already seen something like that. Therefore, it is desirable to perform such an incremental learning task in an unsupervised fashion. We introduce a method where unknown objects are clustered based on visual similarity. Those clusters are utilized to define new classes and serve as training data for unsupervised incremental learning. More precisely, the connected components of a predicted semantic segmentation are assessed by a segmentation quality estimate. Connected components with a low estimated prediction quality are candidates for a subsequent clustering. Additionally, the component-wise quality assessment allows for obtaining predicted segmentation masks for the image regions potentially containing unknown objects. The respective pixels of such masks are pseudo-labeled and afterwards used for re-training the DNN, i.e., without the use of ground truth generated by humans. In our experiments we demonstrate that, without access to ground truth and even with few data, a DNN's class space can be extended by a novel class, achieving considerable segmentation accuracy.
INTRODUCTION
Semantic segmentation is a computer vision task that refers to the classification of image data on pixel level.
Figure 1: Comparison of the semantic segmentation predictions of an initial DNN (bottom left), whose semantic space does not include the category bus, and a DNN which is incrementally extended by this novel class (bottom right, novel class in orange), for an image from the Cityscapes dataset. The novel class is highlighted in orange (top left). Further, the initial prediction exhibits a low prediction quality (top right) on pixels belonging to the novel objects, which is indicated by red color. Panels: image & novelty annotation; prediction quality estimation; prediction of the initial DNN; prediction of our extended DNN.
State-of-the-art approaches are based on deep convolutional neural networks (DNNs) [Chen et al., 2018b, Wang et al., 2021, Zhao et al., 2017], benefiting from finely annotated datasets, e.g., for automated driving [Cordts et al., 2016, Geyer et al., 2020, Neuhold et al., 2017, Yu et al., 2020]. However, DNNs for semantic segmentation are usually trained on a predefined, closed set of classes. This closed world setting assumes that all classes present during testing were already included in the training set. In an open world setting, this assumption does not hold. In particular for safety-critical open-world applications like perception systems for automated driving, it is indispensable that neural networks recognize previously unseen objects instead of wrongly assigning them to one of the known classes. In addition, they must constantly adapt to evolving environments.
Some terms often used interchangeably for anomaly are outlier, out-of-distribution (OoD) object and novelty. As there is no clear convention on how to distinguish these terms, we define them as subcategories of anomalies: outliers and OoD objects denote noise or samples drawn from another distribution than the model was trained on, respectively. In this work, we are seeking novelties, which we define as previously-unseen objects that constitute a new concept, i.e., objects of the same category appear frequently. In automated driving, detecting and learning those novel classes becomes necessary, e.g., due to new appearances like e-scooters or due to local specialities like boat trailers near the sea. The concept of detecting and learning novelties was first introduced in Bendale and Boult [2015] as open world recognition. Open world recognition for different computer vision tasks is an emerging research area [Bendale and Boult, 2015, Joseph et al., 2021, Cen et al., 2021, Shu et al., 2018], but still only little explored for unsupervised methods [He and Zhu, 2021, Nakajima et al., 2019].
We propose a new and modular procedure for learning new classes of novel objects without any handcrafted annotation:
1. Anomaly segmentation to detect suspicious objects,
2. clustering of potentially novel objects,
3. creation of so-called pseudo labels, and
4. incremental learning of novel classes.
In the following, we will outline each of these four steps in more detail.
For the first step, we post-process the predictions of an underlying semantic segmentation DNN via a meta regressor that estimates the quality of the predicted segments, similar as proposed in Rottmann and Schubert [2019], Rottmann et al. [2020], Maag et al. [2020]. In the following, the term segment will always refer to connected components of pixels in the semantic segmentation prediction. The segment-wise quality score is obtained on the basis of aggregated dispersion measures and geometrical information, i.e., without requiring ground truth. The output of the semantic segmentation DNN on anomalous objects is often split into several segments. Therefore, we first aggregate neighboring segments, i.e., segments that have at least one adjacent pixel each, with quality estimates below some threshold, into (potentially) anomalous objects, termed suspicious objects.
For the second step, we adapt the idea introduced in Oberdiek et al. [2020] to gather segments with poor prediction quality and to cluster them into visually related neighborhoods. Therefore, all suspicious objects (of sufficient size) are cropped out in the RGB images and the resulting image patches are fed into a convolutional neural network (CNN), e.g., for image classification. Whether an image patch is sufficiently large depends on the minimum input size required by this CNN. To obtain comparable information about the suspicious objects, we then extract the features provided by the penultimate layer of the CNN, i.e., right before the final classification layer. By reducing the dimensionality of these features up to two, we enable the use of low-dimensional, unsupervised clustering techniques, such as Ester et al. [1996], MacQueen [1967].
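A minimal sketch of this second step is given below, assuming a torchvision classifier used purely as a feature extractor, PCA for the dimensionality reduction and DBSCAN [Ester et al., 1996] for the clustering. The backbone choice, preprocessing and hyperparameters are illustrative assumptions, not the exact configuration used in the paper.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the penultimate-layer features (512-dim)
backbone.eval()

@torch.no_grad()
def embed(crops):
    """crops: list of HxWx3 uint8 arrays, i.e., suspicious objects cropped from images."""
    batch = torch.stack([preprocess(c) for c in crops])
    return backbone(batch).cpu().numpy()

def discover_candidate_classes(crops, eps=0.5, min_samples=10):
    """Cluster suspicious objects by visual similarity in a 2D feature space."""
    features = embed(crops)
    low_dim = PCA(n_components=2).fit_transform(features)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(low_dim)
    return labels   # -1 marks noise; 0, 1, ... are candidate novel classes
```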
Third, we obtain pseudo labels for novel classes in an automated manner: each (sufficiently large and dense) cluster constitutes a novel category, and each pixel belonging to a clustered object is assigned to the appropriate (not necessarily named) class. More precisely, the prediction of the segmentation model is updated at those pixel positions to the next "free" label ID.
Finally, the segmentation network is incrementally extended by these novel classes (see Fig. 1 for an example). To this end, we apply established incremental learning methods [Hinton et al., 2015, Robins, 1995]. However, these are mainly examined for supervised learning tasks, while we do not include any hand-labeled new data. These last two steps have not been done in the literature so far.
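For illustration, the two mechanisms against catastrophic forgetting mentioned here can be combined into a single training objective roughly as sketched below (PyTorch-style; the number of old classes, loss weighting and ignore index are assumptions): a cross-entropy term on the automatically generated pseudo labels plus a knowledge-distillation term [Hinton et al., 2015] that keeps the extended network's outputs on the old classes close to those of the frozen initial network, while old training images are replayed in the batches [Robins, 1995].

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, temperature=2.0, num_old_classes=19):
    """KL divergence between the frozen network's soft outputs and the
    extended network's outputs restricted to the old classes."""
    student = F.log_softmax(new_logits[:, :num_old_classes] / temperature, dim=1)
    teacher = F.softmax(old_logits / temperature, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

def incremental_loss(new_logits, old_logits, pseudo_labels, lam=1.0):
    """Cross-entropy on pseudo-labeled pixels plus the distillation term."""
    ce = F.cross_entropy(new_logits, pseudo_labels, ignore_index=255)
    return ce + lam * distillation_loss(new_logits, old_logits)
```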
We perform five experiments, following a hierarchical structure of complexity. For the first three experiments, the initial segmentation network is trained on the Cityscapes dataset, but on different subsets of the available training classes.
Here, we do not change the data itself, but the training IDs of the Cityscapes classes. For the other experiments, we start with an initial segmentation network that is trained on Cityscapes and test our method on the A2D2 dataset. For those, we have a mapping between the Cityscapes and the A2D2 classes. For most Cityscapes classes, there is a matching class in A2D2. In some cases, A2D2 has coarser classes, e.g., we map the Cityscapes classes vegetation and terrain to the A2D2 class nature.
To outline our contributions, we demonstrate in our experiments that our method is able to incrementally extend a neural network by novel classes without collecting or annotating novelties manually. To the best of our knowledge, we are the first to introduce an unsupervised approach for open world semantic segmentation with DNNs. Fine-tuning neural networks on automatically created pseudo-labels instead of human-made annotations is economically valuable. We observe in all experiments that even a poor labeling quality is sufficient to learn novel classes, achieving IoU values around 40%. Further, the amount of new data was mostly less than 100 images in each case. Unsupervised open world semantic segmentation therefore is a powerful tool for open world applications, which provides an enormous potential for future improvement.
RELATED WORK
In this section, we first review anomaly detection methods and briefly go into class discovery approaches. Then we turn to class-incremental learning approaches.

Novelty Detection. The detection of anomalous objects in general is a key task in many machine learning applications. Early works estimate the prediction uncertainty, e.g., by uncertainty measures derived from the softmax probability [Hendrycks and Gimpel, 2017, Liang et al., 2018]. Uncertainty-based approaches can be further improved by integrating anomalous data into the training procedure [Devries and Taylor, 2018, Chan et al., 2021b]. Another line of work employs generative models such as autoencoders (AEs) or generative adversarial networks (GANs) to reconstruct or synthesise images and measure the reconstruction quality. Various such novelty detection methods (not only reconstruction-based, but also density- or distance-based ones) are described in Vasilev et al. [2018]. A benchmark for anomaly segmentation, i.e., anomaly detection methods for semantic segmentation, was recently published in Chan et al. [2021a], providing a cleaner comparison of proposed methods. Given a set of anomalies, the prevailing approach for class discovery is to form clusters based on some similarity measure or intrinsic features with traditional clustering methods. A detailed survey of image clustering has been published in Liu et al. [2021].
Class-Incremental Learning. Class-incremental learning refers to the extension of a neural network's semantic space by further, previously unknown, classes. This extension is achieved by fine-tuning a model on additional, usually human-annotated data [Jung et al., 2018, Li and Hoiem, 2018, Klingner et al., 2020, Michieli and Zanuttigh, 2019], whereas in this work we only provide pseudo labels for these new images. The primary issue to tackle when re-training a neural network is to mitigate the performance loss on previously learned classes, commonly known as catastrophic forgetting [McCloskey and Cohen, 1989]. To this end, we employ two different strategies: first, we penalize large variations of the softmax output (compared to the one of the original network) [Hinton et al., 2015], and second, we utilize a subset of the previously-seen training data [Robins, 1995].
The first strategy belongs to the category of regularization-based approaches, or more specifically to knowledge distillation methods. These were originally developed to distill knowledge from sophisticated into simpler models [Hinton et al., 2015], i.e., for model compression. Thereupon, distillation methods have evolved for incremental learning in image classification [Li and Hoiem, 2018, Yao et al., 2019, Kim et al., 2019, Jung et al., 2018, Lee et al., 2019], some of which were later adapted to semantic segmentation [Klingner et al., 2020, Michieli and Zanuttigh, 2019, Tasar et al., 2019].
The second approach belongs to so-called rehearsal methods [Robins, 1995], where old training data is included in the re-training process [Rebuffi et al., 2017, Castro et al., 2018]. Our work introduces an open world semantic segmentation framework, where a neural network is incrementally extended by novel classes. These classes are discovered and labeled without any human effort. Therefore, our work goes beyond all existing approaches in this research area.
DISCOVERY OF UNKNOWN SEMANTIC CLASSES
Whether a class is novel or not depends on the neural network's underlying set of known classes C = {1, . . . , C}.
Let f : X → (0, 1) |H|×|W|×|C| be a semantic segmentation DNN which is trained on the classes in C, mapping an image x ∈ X ⊆ [0, 1] |H|×|W|×3 onto its softmax probabilities for each pixel z ∈ H × W. Then, f z,c (x) ∈ (0, 1) denotes the probability with which the model f assigns some pixel z to a class c ∈ C. As decision rule, we apply the arg max function, i.e., we obtain the semantic segmentation mask m(x) ∈ C |H|×|W| with m z (x) = arg max c∈C f z,c (x). In the following, we will estimate the prediction quality on a segment-level instead of pixel-wise, employing a meta regression approach that was first introduced in Rottmann et al. [2020]. On that account, we denote a segment, i.e., a connected component of pixels that share the same class in m(x), as k ∈ K(x).
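To make this notation concrete, the following minimal sketch (a toy illustration; array and function names are not taken from any released code) derives the predicted mask m(x) and the segments K(x) from the softmax output:

    import numpy as np
    from scipy.ndimage import label  # connected component labeling

    def predicted_segments(softmax_probs):
        """softmax_probs: (H, W, C) array of pixel-wise class probabilities f_z,c(x)."""
        # Decision rule: every pixel is assigned the class with the highest probability.
        mask = np.argmax(softmax_probs, axis=-1)            # m_z(x)
        # Segments K(x): connected components of pixels sharing the same predicted class.
        segments = np.zeros(mask.shape, dtype=int)
        next_id = 1
        for c in np.unique(mask):
            components, n = label(mask == c)
            segments[components > 0] = components[components > 0] + next_id - 1
            next_id += n
        return mask, segments

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(5), size=(8, 8))          # toy softmax output, 5 classes
    mask, segments = predicted_segments(probs)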
Meta Regressor. As model for the meta regressor we apply gradient boosting from the scikit-learn v0.24.2 library using the standard settings. The training datasets contain from 67 to 75 uncertainty metrics depending on the number of classes. We train on 313,720 to 946,318 segments. Further details on the definition of the segment-wise metrics, the exact size of the training data and the tree models obtained are provided in the Appendix. For any predicted segment k, the gradient boosting regressor, via clipping, outputs a value between 0 and 1, where a value close to 0 expresses low and a value close to 1 high prediction quality.
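A sketch of how such a meta regressor could be set up with scikit-learn as described above; the data here are random placeholders standing in for the real segment-wise metrics and IoU targets:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X_meta = rng.random((1000, 70))   # segment-wise metrics (segments x metrics)
    y_iou = rng.random(1000)          # true IoU per segment, the regression target

    regressor = GradientBoostingRegressor()     # scikit-learn standard settings
    regressor.fit(X_meta, y_iou)

    # On unseen segments, predictions are clipped to [0, 1] to obtain quality scores s(k).
    X_new = rng.random((10, 70))
    s = np.clip(regressor.predict(X_new), 0.0, 1.0)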
The motivation to use a segment-wise meta regression framework is to identify segments with low predicted IoU as candidate segments that potentially stem from OoD objects.
Uncertainty Metrics and Prediction Quality Estimation.
We consider novelties as none-of-the-known objects, i.e., they differ semantically from the model's training data. Assuming that the segmentation DNN produces unstable predictions on these unexplored entities, various measurable phenomena occur. For instance, the model exhibits a high prediction uncertainty. This is quantified by dispersion measures such as the softmax entropy, the probability margin, and the variation ratio, which we compute pixel-wise. These are then averaged over the segments k ∈ K(x) or over the segment boundary. Moreover, we examine some geometrical properties of the segments, such as their size, i.e., the number of pixels |k| contained in k, their shape, or their position in the image. For in-depth details on the constructed metrics, we refer to Rottmann et al. [2020] and the appendix.
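For reference, the three pixel-wise dispersion measures can be written as follows; these are the standard definitions used in this meta regression line of work, and details such as the normalization of the entropy are assumptions here, since the original equations are not reproduced in this text:

    E_z(x) = -\frac{1}{\log |\mathcal{C}|} \sum_{c \in \mathcal{C}} f_{z,c}(x) \log f_{z,c}(x)    (softmax entropy),
    V_z(x) = 1 - \max_{c \in \mathcal{C}} f_{z,c}(x)    (variation ratio),
    M_z(x) = 1 - f_{z,\hat{c}}(x) + \max_{c \in \mathcal{C} \setminus \{\hat{c}\}} f_{z,c}(x), \quad \hat{c} = m_z(x)    (probability margin).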
By feeding these metrics into a meta regression model, we obtain prediction quality estimates for each segment k ∈ K(x), which we denote by s(k) ∈ [0, 1]. These quality estimates approach the true segment-wise Intersection over Union (IoU) with reasonably high accuracy [Rottmann et al., 2020]. To fit the meta regressor, we compute the metrics plus the true IoU values of all segments included in the training data of the segmentation network. This meta model is then applied to unseen data, i.e., data that was not included in the training of f, for the purpose of anomaly segmentation. Here, we consider a segment k to be anomalous if its quality score is below some predefined threshold τ ∈ [0, 1], i.e., if s(k) < τ. By that, we identify individual segments as unknown; however, the semantic segmentation of an unknown object usually consists of several segments, i.e., of different predicted classes. As we can uniquely assign each pixel z to a segment k(z), we obtain a binary pixel-wise classification mask a ∈ {0, 1} |H|×|W| via a z = 1 {s(k(z))<τ}, where the class label a z = 1 indicates anomalous pixels. Finally, the connected components in the anomaly mask a merge adjacent anomalous segments into suspicious objects. Ideally, 1. the semantic segmentation network performs perfectly on in-distribution data, 2. the meta model detects all (but only) unknowns, and 3. novel objects of different classes are separable.
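A minimal sketch of how the anomaly mask and the suspicious objects could be computed; the per-pixel segment IDs, the quality scores and the threshold are illustrative inputs:

    import numpy as np
    from scipy.ndimage import label

    def suspicious_objects(segment_ids, quality, tau=0.5):
        """segment_ids: (H, W) int array of predicted segment IDs;
        quality: dict mapping segment ID -> estimated quality s(k) in [0, 1]."""
        # Anomaly mask a: a pixel is anomalous if its segment's quality is below tau.
        anomalous = np.zeros(segment_ids.shape, dtype=bool)
        for seg_id, q in quality.items():
            if q < tau:
                anomalous |= (segment_ids == seg_id)
        # Connected components of the mask fuse adjacent anomalous segments
        # into (potentially) anomalous, i.e., suspicious objects.
        objects, n_objects = label(anomalous)
        return objects, n_objects

    seg_ids = np.array([[1, 1, 2], [3, 2, 2]])
    print(suspicious_objects(seg_ids, {1: 0.9, 2: 0.2, 3: 0.3})[1])   # one suspicious object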
Figure 3 (panels: image from A2D2; semantic segmentation prediction; prediction quality estimation from 0 (red) to 1 (green); pseudo ground truth): Novelty segmentation: example for obtaining pseudo ground truth with regard to some image patch (outlined in red) of image x. If segments inside the red box exhibit quality estimates below some predefined threshold, they are "re-labeled" in the segmentation mask m(x).
Embedding and Clustering of Image Patches. Image clustering usually takes place in a lower dimensional latent space due to the curse of dimensionality. To this end, we feed image patches tailored to the suspicious objects into an image classification CNN, a DenseNet201 [Huang et al., 2017], which is trained on the ImageNet dataset [Deng et al., 2009] with 1000 classes. The patches are not equally sized. That the DenseNet feature extractor nevertheless returns features of equal size (1,920) for each patch is a consequence of the AdaptiveAvgPool2d layer that is applied as the last layer after the fully convolutional and densely interconnected layers of the DenseNet. Put shortly, this last layer pools over both spatial dimensions of the feature maps, and thereby the output does not depend on the size of the input that is passed through the fully convolutional layers. The feature representations are further compressed, resulting in a two-dimensional embedding space as illustrated in Fig. 2. This procedure for image embedding is adopted from Oberdiek et al. [2020], where the authors evaluated several feature extractors, distance metrics and feature dimensions. We employ the best performing setup in this quantitative analysis to obtain clusters of visually related image patches. Beyond that, we identify these clusters using the DBSCAN [Ester et al., 1996] algorithm. This clustering method requires two hyperparameters, namely the radius ε ∈ R that defines a neighborhood B ε (·) and a threshold N min ∈ N regarding the number of data points within this ε-neighborhood. Let E = {e 1 , e 2 , . . .} ⊂ R 2 denote the set of the embedded features. Then, an embedding e i is considered a core point if and only if it has at least N min neighbors, i.e., if |B ε (e i ) ∩ E| ≥ N min. The algorithm further distinguishes between border points, i.e., embeddings that are not core points themselves but belong to a core point's neighborhood, and noise otherwise. To mitigate the risk of failures, i.e., objects from a different category in the novel clusters, we only consider the core points. We further reject embeddings representing image patches that are smaller than some predefined size. The cluster with the most remaining core points (or all clusters that involve "enough" core points) will be used to extend the segmentation network by new classes (Fig. 2, bottom).
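The following sketch illustrates the embedding and clustering pipeline under the assumptions stated above; the dimensionality reduction (PCA here) and the DBSCAN hyperparameters are stand-ins, since the exact choices follow Oberdiek et al. [2020]:

    import numpy as np
    import torch
    import torchvision.models as models
    from sklearn.decomposition import PCA
    from sklearn.cluster import DBSCAN

    # DenseNet201 pretrained on ImageNet; only the convolutional part is used.
    densenet = models.densenet201(pretrained=True).eval()

    def embed(patch):
        """patch: (3, H, W) float tensor; H and W may differ between patches."""
        with torch.no_grad():
            fmap = torch.nn.functional.relu(densenet.features(patch.unsqueeze(0)))
            # Adaptive average pooling makes the output size-independent: 1,920 features.
            feat = torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1)
        return feat.squeeze(0).numpy()

    # Toy patches standing in for cropped suspicious objects of varying size.
    patches = [torch.rand(3, 120, 90), torch.rand(3, 64, 200), torch.rand(3, 150, 150)]
    features = np.stack([embed(p) for p in patches])

    embedding = PCA(n_components=2).fit_transform(features)     # 2D embedding space
    clusters = DBSCAN(eps=0.5, min_samples=5).fit(embedding)    # eps and N_min must be tuned
    core_points = clusters.core_sample_indices_                 # only core points are kept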
Novelty Segmentation. Using pseudo labels instead of manually annotated targets is a cost-efficient (in the sense of human effort) method of training neural networks on unlabeled data. For the sake of simplicity we assume that exactly one cluster is returned by the aforementioned procedure. For some image x ∈ X, we denote the predicted segmentation mask by m(x) and the respective segments by K(x). Let K novel (x) ⊆ K(x) describe the set of segments k ∈ K(x) that are also included in the considered cluster. If K novel (x) ≠ ∅, i.e., image x (probably) contains the novel class, we include the tuple (x, ỹ(x)) ∈ X × {1, . . . , C + 1} |H|×|W| into the re-training data D C+1 for learning the novel class C + 1. Here, ỹ(x) denotes the pseudo label, where ỹ z (x) = C + 1 if k(z) ∈ K novel (x) and ỹ z (x) = m z (x) otherwise, i.e., a pixel z is either assigned to the novel class ID C + 1, or to the class c ∈ C that was predicted by the initial model f. An example for acquiring pseudo ground truth for one image is given in Fig. 3. In the following section we extend the segmentation DNN f by fine-tuning it on D C+1.
EXTENSION OF THE MODEL'S SEMANTIC SPACE
In this section we describe our approach to semantic incremental learning with the pseudo ground truth acquired by novelty segmentation. Starting from our initial segmentation model f, we are seeking an extended model g : X → (0, 1) |H|×|W|×(C+1) that retains the knowledge of f while additionally learning the novel class C + 1. Denote the extended semantic space by C + = C ∪ {C + 1}. In more detail, we replace the ultimate layer of f and reinitialize only the affected weights to obtain the initial model g for re-training, i.e., the model we train on the newly collected data D C+1. As loss function we apply a weighted cross entropy loss [Yi-de et al., 2004], denoted by l ce,ω. The class-wise weights ω c ∈ (0, 1], c ∈ C +, are recalculated for each batch based on the inverse class frequency to alleviate class imbalances.

(Figure: novelty pseudo ground truth; classes predicted by initial DNN, IDs 0-17; underlying classes c ∈ C of initial DNN f.)
Knowledge distillation in class-incremental learning aims at minimizing variations of the softmax output restricted to only the old classes c ∈ C. This is realized by an additional distillation loss function l d [Michieli and Zanuttigh, 2021]. Overall, we aim at minimizing an objective that combines the weighted cross entropy loss l ce,ω and the distillation loss l d, with λ regulating the impact of the distillation loss.
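A sketch of the loss terms in one common form; the exact instantiation of l d follows Michieli and Zanuttigh [2021], and the convex combination below is an assumption consistent with λ = 0.5 meaning equal weighting:

    \ell_{ce,\omega}(x, \tilde{y}) = -\frac{1}{|\mathcal{H} \times \mathcal{W}|} \sum_{z} \omega_{\tilde{y}_z(x)} \log g_{z,\tilde{y}_z(x)}(x),
    \ell_{d}(x) = -\frac{1}{|\mathcal{H} \times \mathcal{W}|} \sum_{z} \sum_{c \in \mathcal{C}} f_{z,c}(x) \log g_{z,c}(x),
    \ell(x) = (1 - \lambda)\, \ell_{ce,\omega}(x, \tilde{y}) + \lambda\, \ell_{d}(x).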
Rehearsal methods propose to replay (some of) the data D train ⊂ X × C |H|×|W| seen during the training of the initial model f. We select a subset D known ⊆ D train that contains as much data as D C+1. This subset is chosen largely at random, but in such a way that it involves classes that are 1. not or rarely present in D C+1 (class frequency), or 2. similar or related to the novel class.
As there is no measure for the second case, we identify those classes by considering the frequency with which a class is predicted by f on pixels assigned to the novel class. That is, for all data (x, ỹ(x)) ∈ D C+1 and classes c ∈ C, we sum up the number of pixels z ∈ H × W where ỹ z (x) = C + 1 ∧ m z (x) = c. An example is given in Fig. 4, where the classes truck, train and car are the most frequently predicted classes for instances of the novel class bus.
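A minimal sketch of this counting procedure; array names and the class ID convention are illustrative:

    import numpy as np

    def co_prediction_counts(pseudo_labels, predictions, novel_id, num_known):
        """Count how often each known class is predicted by f on pixels that the
        pseudo ground truth assigns to the novel class C+1."""
        counts = np.zeros(num_known, dtype=np.int64)
        for y_tilde, m in zip(pseudo_labels, predictions):
            novel_pixels = (y_tilde == novel_id)
            for c in range(1, num_known + 1):
                counts[c - 1] += np.count_nonzero(novel_pixels & (m == c))
        return counts   # classes with high counts are "related" to the novel class

    y_tilde = np.array([[20, 20], [1, 2]])    # toy pseudo labels, novel class ID 20
    m = np.array([[3, 15], [1, 2]])           # toy predictions of the initial model f
    print(co_prediction_counts([y_tilde], [m], novel_id=20, num_known=19))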
EXPERIMENTAL SETUP & EVALUATION
We evaluate our approach on the task of detecting and incrementally learning novel classes in traffic scenes, for which there exist large datasets such as Cityscapes [Cordts et al., 2016] and A2D2 [Geyer et al., 2020]. To this end, all evaluated segmentation DNNs were trained on a training split and only on a subset of all available classes. We then perform our experiments on a test split of the same dataset on which the DNN was trained in order to extend it by exactly one or even multiple novel classes. We measure the performance of the extended models by computing the evaluation metrics intersection over union (IoU), precision and recall on a validation set. Each of those initial DNNs is employed to predict the semantic segmentation masks for the images contained in the respective test set. For the segment-wise prediction quality estimation introduced in Sec. 3, we apply a gradient boosting model to obtain the quality scores s(k) ∈ [0, 1] for each segment k ∈ K(x) and image x in the test set. The threshold in Eq. (4) is set to τ = 0.5, i.e., a segment k ∈ K is considered as anomalous if s(k) < 0.5. For the class-incremental extension of an initial DNN f, we replace its final layer to obtain a larger DNN g (see Sec. 4).
Only the decoder of this model is trained for 70 epochs on the newly collected data D C+1 together with the replayed data D known . We use random crops of size 1000 × 1000 pixels, the Adam optimizer with a learning rate of 5 · 10 −5 and a weight decay of 10 −4 . Further, the learning rate is adjusted after every iteration via a polynomial learning rate policy [Chen et al., 2018a]. The distillation loss and the cross-entropy loss are weighted equally in the overall loss function defined in Eq. (8), i.e., λ = 0.5 (analogously to Michieli and Zanuttigh [2019]).
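The optimizer and learning rate schedule described above could be set up as follows; the decoder module and the data are toy stand-ins, and the polynomial exponent of 0.9 (the value commonly used with DeepLabV3+) is an assumption:

    import torch
    import torch.nn as nn

    decoder = nn.Conv2d(256, 21, kernel_size=1)   # stand-in for the decoder of g
    optimizer = torch.optim.Adam(decoder.parameters(), lr=5e-5, weight_decay=1e-4)

    # Polynomial learning rate policy, applied after every iteration.
    epochs, iters_per_epoch, power = 70, 50, 0.9
    max_iter = epochs * iters_per_epoch
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda it: (1.0 - it / max_iter) ** power)

    for it in range(max_iter):
        x = torch.rand(2, 256, 32, 32)               # placeholder random crops (features)
        target = torch.randint(0, 21, (2, 32, 32))   # placeholder pseudo labels
        loss = nn.functional.cross_entropy(decoder(x), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()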
As performance metrics, we provide the mean IoU over the old and new classes, denoted by mIoU C and mIoU C +, respectively, and the IoU value of the novel class(es), IoU novelty.

As the five experiments struggle with different issues, the experimental setup differs slightly. For the first case, we construct the novel category human, which is "well" separable from all known classes, to enhance the purity of the "human cluster" and to simplify the learning of novel objects. However, we observe that the DNN tends to "overlook" many humans, i.e., they are assigned to the class predicted in the background, e.g., to the road class. As a consequence, the segment-wise anomaly detection fails to detect such persons, which is why these will be assigned to other classes in our acquired pseudo ground truth. To not distract the extended segmentation network, we modify the pseudo labels by ignoring all known classes c ∈ C during the incremental training procedure. The bus class added in the second experiment is closely related to other classes in the vehicle category, such as truck, train and car, which complicates the construction of pure clusters. We mitigate the impact of objects from similar classes by discarding all objects from the cluster that consist of only one segment in the predicted segmentation. Experiment three extends the previous ones by facing multiple unknown classes, namely human, bus and car. The last two experiments deal with an additional domain shift from urban street scenes in Cityscapes to countryside and highway scenes in A2D2. To bridge this gap, we fine-tune the initial DNN on our A2D2 training set, which, however, requires A2D2 ground truth for the known classes. Without fine-tuning, the prediction quality and thereby the quality of our pseudo ground truth suffers.
On that account, we discard images that are generally rated as badly predicted, i.e., where the relative amount of pixels with a low quality estimate exceeds 1/3 of the image in total. Moreover, we renounce the replay of previously-seen data, since this prevents the DNN from adapting to the new domain.
Evaluation of Results. In the following, all evaluation values belonging to our extended models are averaged over five runs of the respective experiment. For in-depth details we refer to the appendix. We provide a qualitative comparison of different models for all conducted experiments in Tab. 1, reporting the mean IoU over the known classes and over the extended class set, denoted as mIoU C and mIoU C +, respectively, as well as the IoU value of the novel classes (IoU novelty). The models considered in this comparison are the initial and the extended DNN, where the class space is extended via our method. For the first and second experiment we further compare our approach with a baseline, where a DNN is extended using a self-training approach. That is, we employ a so-called teacher network, which is already trained on the extended semantic space C +, to produce pseudo labels for some student network. Thereby, we obtain a high-quality pseudo ground truth. Apart from this, the baseline DNN is extended analogously to ours. In addition, for the first four experiments we provide results of an oracle, i.e., a DNN that is initially trained on the extended class set C + and only with human-annotated ground truth. In the fifth experiment, we extend the initial DNN by a novel class derived from a different dataset. To some extent, the oracle from experiment four (a) can serve as a coarse reference for experiment five. In Tab. 2 we give a more detailed overview about all experiments, reporting not only the IoU, but also the precision and recall values of the novel class as well as averaged over C and C +. Note that the fourth experiment is evaluated twice, once for (a) the DeepLabV3+ and once for (b) the PSPNet. For class-wise evaluation results and visualizations, we refer to Appendix A.

Table 2: Direct comparison of the initial and the extended DNNs for all conducted experiments. We report the IoU, precision and recall values for the novel class (highlighted with gray rows), respectively, as well as averaged over the previously-known and the extended class spaces C and C +.
In general, we observe that our approach succeeds in incrementally extending a DNN by a novel class, while the performance on previously-known classes remains stable. On Cityscapes, we achieve IoU values for the novel classes human and bus of IoU human = 39.80 ± 0.73% and IoU bus = 44.73 ± 1.46%, respectively. For the third experiment with two novel classes, we obtain similar results for the human class with IoU human = 40.22 ± 1.77% and for the car class even IoU car = 81.27 ± 1.16%. While these IoU values are a considerable achievement for a method working without ground truth, the distinct gaps to the oracle's IoU values still leave room for further improvement. Compared to the baseline DNN, we do not achieve competitive performance in the first experiment, while in the second experiment, our approach actually performs slightly better. This is explained by the fact that the pseudo ground truth for the human class incorporates much more noise than that for the bus class. In the fourth experiment we mitigate the domain shift from Cityscapes to A2D2 by prior fine-tuning of the networks, using A2D2 ground truth. By that, we obtain IoU values of IoU guardrail = 46.10 ± 4.8% for the DeepLabV3+ and IoU guardrail = 32.79 ± 3.48% for the PSPNet. We conclude that our approach achieves better results for models which are initially better-performing. Without fine-tuning the DeepLabV3+ on A2D2, we obtain IoU guardrail = 20.90 ± 1.73%, while the mean IoU over the previously-known classes C slightly increases from 59.38% to 60.48 ± 0.47%.
CONCLUSION & OUTLOOK
In this work, we have introduced a new and modular procedure for the class-incremental extension of a semantic segmentation network, where novel classes are detected, annotated and learned in an unsupervised fashion. While there already exists an unsupervised open world approach for semantic segmentation [Nakajima et al., 2019], we are the first in this field to extend a neural network's semantic space by robust novel classes. We performed five hierarchically structured experiments with an increasing level of difficulty. We demonstrated that our approach can deal with novelties that are either "well" separated or related to known categories, and that it is even applicable when the test data is sampled from a slightly different distribution than the DNN was trained on. Moreover, we applied two different models in the fourth experiment, where the initial DeepLabV3+ already outperformed the initial PSPNet. This performance gap is also reflected in the model's ability to learn the novel class, thus we conclude that our method benefits significantly from high performance networks.
For future work, we plan to improve the extension of a neural network by multiple classes at once. On that account, suitable datasets are in demand. Two datasets for the task of anomaly segmentation were recently published in Chan et al.
[2021a]; however, these show a wide variety of anomalous objects. Advancing the research in class-incremental learning requires datasets where novel objects, i.e., objects that do not appear in the training data, appear frequently in the test data.
We are currently working on a synthetic dataset tailored to our approach. This data is generated using the CARLA 0.9.12 simulator [Dosovitskiy et al., 2017], similarly to the procedure extensively described in Kowol et al. [2022]. The data include annotated street scene images, generated on the same maps for training and testing. Since we aim at detecting novel classes in the test data, these images are enriched by several never-seen object classes, e.g., deer, construction vehicle or portable toilet (examples provided in Appendix B).
Besides, we plan to adapt our approach to video instead of image data, where anomaly detection includes anomaly tracking over multiple frames.
LIMITATIONS & NEGATIVE IMPACT
With the procedure presented in this work, we are taking a first step towards a new machine learning problem. This first step is highly experimental and our method does not have the technology readiness level to be applied to real-world problems in a fully automated fashion. Especially from the safety point of view, a neural network should not be modified without any supervision, since we cannot guarantee that significant performance drops are avoided.
A EVALUATED MODELS
We performed six experiments that differ in terms of underlying datasets, network architectures and novelties. In this section we provide a class-wise evaluation of each initial and extended DNN, as well as example images for all evaluated models, i.e., also for the baseline and the oracle DNNs. For the extended models, we report the mean and standard deviation of the evaluation metrics for five runs, respectively, using the random seeds 14, 123, 666, 375 and 693.
A.1 EXPERIMENT 1
For the first experiment, we trained a DeepLabV3+ on the Cityscapes dataset, excluding the classes pedestrian and rider, both together constituting the class human. This novelty is well separable from all the known classes as these belong to different, non-organic categories. As there are no similar classes, humans are either totally "overlooked" by the segmentation DNN, i.e., assigned to the class predicted in their background, or predicted as related classes, e.g., as bicycle, motorcycle or car (cf. Fig. 5). Since our anomaly detection method fails to spot overlooked persons, these remain mislabeled even in the pseudo ground truth, thus negatively affecting the incremental training procedure. For an example we refer to Fig. 6, where a cyclist is assigned to the background classes road and car. To prevent this issue, we ignore all known classes c ∈ C present in the pseudo labels. Our newly collected data D C+1 contains 76 pseudo-labeled images. The replayed training data is selected such that at least 25% -35% of the images contain cars, motorcycles and bicycles, respectively.
We evaluated the initial and the extended DNN on the Cityscapes validation data. Class-wise results are provided in Tab. 3. Besides the novel class, which achieves an IoU value of nearly 40% with approximately 50-60% precision and recall, the incremental training has only little impact on previously-known classes. For many classes, however, we observe an improvement in precision at the expense of the corresponding recall values, e.g., for the classes fence, truck and train. This is also reflected in the mean precision and recall values over C, i.e., while precision increases by 3.53%, recall decreases by 3.77%. Especially the classes motorcycle and bicycle gain performance regarding the IoU and precision, which is mainly due to human pixels initially assigned to those classes, while the proportion of bikes (motor- or bicycles) that are predicted correctly drops significantly.

Figure 6 (panels: image patch, predicted segmentation, quality estimation): Image patch, semantic segmentation and prediction quality estimation for a scene where a cyclist is overlooked by the initial DNN.
A comparison of all evaluated models in the first experiment is illustrated for an example image in Fig. 7. We observe a reduction of noise in the model's predictions, starting from the initial DNN, to the extended DNN, the baseline and the oracle. Nonetheless, the predicted segmentation of our extended DNN comes close to those predicted by the comparative models that both require ground truth for the novel class.
A.2 EXPERIMENT 2
The setup of the second experiment is the same as in the first one (DeepLabV3+, Cityscapes dataset), but excluding busses from the set of known classes instead of humans. This novelty belongs to the vehicle category, thus being akin to other vehicle classes such as train or truck. These are also the classes as which the objects declared as novel were predicted for the most part, as illustrated in Fig. 4. On that account, at least 50% of the 55 images in D C+1 contain trucks and 30% contain trains. As a consequence of the visual relatedness, trucks and trains that exhibit a low prediction quality, i.e., that are treated as anomalies, contaminate the cluster of busses in the two-dimensional embedding space. We observed that the segmentation network predicts most of these "detected" trucks and trains correctly, while it assigns multiple classes, i.e., multiple segments in the semantic segmentation prediction, to a bus. Thus, we delete those anomalies from the embedding space whose predicted segmentation consists of only one segment (ignoring segments with less than 500 pixels).
Again, we provide a class-wise evaluation on the Cityscapes validation split in Tab. 4 and present a comparison of different models for one exemplary street scene in Fig. 8, where large parts of the bus in the foreground are predicted correctly by our extended DNN. The bus in the background is even better recognized by our network than by the baseline and oracle. Analogous to the first experiment, the most similar classes truck and train show increasing IoU and precision, but decreasing recall values. Averaged over the known classes c ∈ C, we again observe improvement in IoU and precision with a concurrent drop in recall. Averaged over the extended class set C +, all three performance measures increase after class-incremental learning.

Figure 7: Comparison of the semantic segmentation predictions of all DNNs evaluated in the first experiment for an exemplary scene from the Cityscapes validation data.

Table 3: In-depth evaluation on the Cityscapes validation data for the first experiment, where we incrementally extend a DeepLabV3+ by the novel class human on the Cityscapes dataset. We provide IoU, precision and recall values obtained for both the initial and the extended DNN, on a class level as well as averaged over the classes in C and C +, respectively.
A.3 EXPERIMENT 3
In the next experiment we extend the previous ones by enlarging the set of novel classes, withholding the classes pedestrian & rider, bus and car. Again, we trained a DeepLabV3+ network on the Cityscapes dataset to learn the remaining, non-novel classes. We reconsidered our approach to reject possibly known objects from the embedding space to improve the purity of novel object clusters. Instead of rejecting anomalous segments that consist of only one predicted segment in the semantic segmentation mask, we include a random choice of objects / segments from each known class into the embedding space. If an anomalous object can be assigned to an existing class, it is no longer taken into account in the further procedure. To decide whether an object is novel or known, we consider its 2.75-neighborhood. If this contains at least 10 known objects of which at least 80% belong to the most frequent class, we assume the anomaly belongs to that very class, i.e., we reject it. Consequently, we discard the detected bus segments since these are closely related to the classes truck and train. However, we obtain two clusters, one for the class car (1375 segments) and one for the class human (135 segments). We incrementally expand the model by these classes, achieving a similar IoU value (around 40%) for the human class as in experiment 1, where we only learned a single class. For the car class, we even get an IoU value of more than 80%. Detailed results are provided in Tab. 5.

Table 4: In-depth evaluation on the Cityscapes validation data for the second experiment, where we incrementally extend a DeepLabV3+ by the novel class bus on the Cityscapes dataset. We provide IoU, precision and recall values obtained for both the initial and the extended DNN, on a class level as well as averaged over the classes in C and C +, respectively.

Table 5: In-depth evaluation on the Cityscapes validation data for the third experiment, where we incrementally extend a DeepLabV3+ by the novel classes human and car on the Cityscapes dataset. We provide IoU, precision and recall values obtained for both the initial and the extended DNN, on a class level as well as averaged over the classes in C and C +, respectively.
A.4 EXPERIMENT 4(A)
The fourth experiment involves two different network architectures. Results for the first one are shown in experiment 4(a), results for the other one in 4(b). We start with a DeepLabV3+ network trained on the Cityscapes dataset and aim to detect and learn the guardrail class using images taken from the A2D2 dataset. To mitigate a performance drop caused by the domain shift from Cityscapes to A2D2, we first fine-tune the decoder for 70 epochs on our A2D2 training split, applying the same hyperparameters we used for the incremental training (see Sec. 5). By that, we improve the mean IoU of the initial network from 59.38% to 75.77%. The classes which suffer the most are person, motorcycle and bicycle, which is presumably due to their rare occurrence on country roads and highways, and therefore, low frequency in the re-training data, which involves only 30 pseudo-labeled and 30 replayed images. Further details are provided in Tab. 6.

Table 6: In-depth evaluation on the A2D2 validation data for the fourth experiment, where we first fine-tune and then incrementally extend a DeepLabV3+ by the novel class guardrail on the A2D2 dataset. We provide IoU, precision and recall values obtained for both the initial and the extended DNN, on a class level as well as averaged over the classes in C and C +, respectively.
A.5 EXPERIMENT 4(B)
In experiment 4(b), we employ a PSPNet instead of a DeepLabV3+; for the rest, we proceed as in the previous subsection. Again, the training data consists of 30 images with pseudo ground truth and 30 labeled, replayed images (containing only old classes) from the A2D2 training split. Note that these 30 images are not the same as in experiment 4(a), since the different network provides predictions of estimated low quality on different images. In total, the initial and the extended PSPNet are outperformed by the DeepLabV3+; however, both architectures show similar patterns:
• the extended DNN exhibits a high precision guardrail and a low recall guardrail,
• the classes that are mostly affected by re-training are person, motorcycle and bicycle,
• averaged over C and C +, respectively, IoU and recall values decrease, while precision values increase.
For more detailed information we refer to Tab. 7.

Figure 10: Comparison of the semantic segmentation predictions of all models incrementally extended by the guardrail class for an example image from the A2D2 validation split.

Figure 11 (panels: predicted segmentation, quality estimation): Illustration of prediction quality differences (green color indicates high, red color low prediction quality), caused by the domain shift from Cityscapes to A2D2, mainly due to weather conditions.

Table 7: In-depth evaluation on the A2D2 validation data for the fourth experiment, where we first fine-tune and then incrementally extend a PSPNet by the novel class guardrail on the A2D2 dataset. We provide IoU, precision and recall values obtained for both the initial and the extended DNN, on a class level as well as averaged over the classes in C and C +, respectively.
A.6 EXPERIMENT 5
Finally, we perform the same experiment as in 4(a) without prior fine-tuning of the initial DNN on A2D2. Consequently, the domain shift causes many noisy predictions, exhibiting low prediction quality estimates. We exclude such images from the further process based on two criteria:
1. mean quality score (averaged over pixels) less than 0.7,
2. more than 1/3 of all pixels with a quality estimate less than 0.9.
If at least one criterion holds, we reject the image, as illustrated in the bottom row of Fig. 11.

Table 8: In-depth evaluation on the A2D2 validation data for the fifth experiment, where we incrementally extend a DeepLabV3+ (trained on Cityscapes) by the novel class guardrail on the A2D2 dataset. We provide IoU, precision and recall values obtained for both the initial and the extended DNN, on a class level as well as averaged over the classes in C and C +, respectively.
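A minimal sketch of this rejection rule; the quality map is assumed to hold, for every pixel, the quality estimate of the segment it belongs to:

    import numpy as np

    def reject_image(quality_map, mean_thresh=0.7, frac_thresh=1/3, pixel_thresh=0.9):
        """quality_map: (H, W) array of per-pixel quality estimates in [0, 1]."""
        low_fraction = np.mean(quality_map < pixel_thresh)
        return quality_map.mean() < mean_thresh or low_fraction > frac_thresh

    toy = np.random.default_rng(0).random((512, 512))   # noisy, low-quality toy map
    print(reject_image(toy))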
Applying our method, we obtain 70 pseudo-labeled images. The incorporation of data seen during training of the initial DNN, i.e., the Cityscapes training data, restrains the network from adapting to the new domain. We therefore decided to extend the model only on D C+1.
Class-wise evaluation results are reported in Tab. 8. Even with a domain shift, we achieve an IoU of 20.90 ± 1.73% for the novel class. This is less than the value obtained with prior fine-tuning. However, this DNN still outperforms the PSPNet from the previous experiment considering only the precision. The low recall values are tolerable since many guardrails are still assigned to the "supercategory" fence.
For most other classes, the IoU values increase or remain roughly the same. In contrast to the other experiments, the motorcycle class improves in IoU, precision and recall values. Only classes that are rare in rural street scenes, e.g., sidewalk or bicycle, suffer from the incremental training.
A visual comparison of the experiments 4(a), 4(b) and 5 is provided in Fig. 10. All three extended DNNs have learned to predict the novel class to some extent. The prior fine-tuned networks show similar predictions, though the DeepLabV3+ is much more precise than the PSPNet and better recognizes the guardrail on the right. The model from the fifth experiment predicts the left guardrail as fence (which is not totally mistaken), though it performs better on the right-hand guardrail than the others. Both oracles illustrate that the guardrail class is learnable with high accuracy, still leaving room for improvement of unsupervised methods.

Table 9: Overview about the training data of the meta regressor for each experiment. We report the number of metrics per segment k (which depends on the number of classes |C|) as well as the number of segments produced by the initial network during inference on the training data.
B SYNTHETIC DATASET
We generated a synthetic dataset with the CARLA simulator that contains novel classes such as deer in the test data. Two examples are provided in Fig. 12. All classes considered as novel are never seen before, i.e., they are not contained in the training data. Besides that, the street scenes for training and testing are recorded under identical conditions, i.e., on the same maps, with the same weather conditions, camera angles etc., so that the segmentation network is not distracted by anything other than the novel objects.
C MODULES
We present a modular procedure, that is, the individual modules can be modified or exchanged. In this section, we provide a deeper insight into the meta regressor and feature extractor modules.
C.1 UNCERTAINTY METRICS & META REGRESSION
For every segment k ∈ K(D train ) we compute the following metrics:
• the size of the segment k, of its interior k o and of its boundary ∂k,
• several dispersion measures D ∈ {E, M, V }, i.e., softmax entropy E, probability margin M and variation ratio V, aggregated over k, k o and ∂k, respectively,
• the relative dispersion measures,
• the variance of the dispersion measures,
• the predicted class c ∈ C,
• the mean softmax probabilities for each class c ∈ C,
• the pixel position of the segment's geometric center,
• the ratio of the number of pixels in the neighborhood of segment k predicted to belong to class c ∈ C to the neighborhood size, for each class c ∈ C.
Further, we compute the IoU (averaged over each segment), which is the only metric that requires ground truth and serves as target value for the meta regressor. The number of training metrics, i.e., explanatory variables, is reported in Tab. 9 for each experiment. That is, the training data for the meta regressor has a dimension of |K(D train )| × #metrics.

Figure 13: Coarse illustration of the feature extraction process. Detected unknown objects (here: human and guardrail) are cropped out (indicated by the red box). The image patches are fed into an encoder; the resulting feature vectors are then projected into a two-dimensional space.

Table 10: Ablation study for the feature extractor: we provide the IoU, precision and recall values for the first experiment, where we incrementally extend a DeepLabV3+ by the novel class human on the Cityscapes dataset, using three different architectures for the feature extraction. For each feature extractor, we report the mean and standard deviation over five runs, respectively.
C.2 FEATURE EXTRACTOR
We apply an image classification CNN, pre-trained on ImageNet, without the final classification layer to extract features of image patches, as illustrated in Fig. 13. This feature extraction CNN can be exchanged arbitrarily, as long as the resulting feature vectors are equally sized for different input dimensions. In Tab. 10 we compare the results for experiment 1, using three different feature extractors, namely DenseNet201, ResNet18 and ResNet152.
D RESULTS -VISUALIZATION
In Fig. 14 we provide an overall visualization of all conducted experiments. Our approach predicts the novel objects with adequate accuracy while the predictions of the initial and the extended DNNs remain similar on previously-known objects. Note that in the fifth experiment, the A2D2 ground truth consists of coarser classes than those of the segmentation DNN, which is trained on Cityscapes. Further, Fig. 15 illustrates the mean and standard deviation of the main evaluation metrics for each experiment, respectively. We observe that the standard deviation values regarding the mean over C are at most 1.20%, and otherwise ≤ 1%. That is, our method is robust with respect to the initially known classes. In experiment 4 (a) and (b), we observe the highest standard deviations for the IoU values of the novel class, with 4.80% and 3.48%, respectively, while they are < 2% for all other experiments. | 2022-01-05T02:16:24.673Z | 2022-01-04T00:00:00.000 |
"year": 2022,
"sha1": "b341abc8d6a7f6735fd85751d4a9c8a24bde7a32",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b341abc8d6a7f6735fd85751d4a9c8a24bde7a32",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
226300866 | pes2o/s2orc | v3-fos-license | Does It Matter What I Say? Using Language to Examine Reactions to Ostracism as It Occurs
Most of our knowledge related to how social exclusion affects those who ostracize and those who are being ostracized is based on questionnaires administered after the ostracism situation is over. In this research, we strived to further our understanding of the internal dynamics of an ostracism situation. We therefore examined individuals’ language—specifically, function words—as a behavior indicative of psychological processes and emergent states that can be unobtrusively recorded right in the situation. In online chats, 128 participants talked about a personal topic in groups of three. In the experimental group (n = 79), two conversation partners ignored every contribution by the third. We found that, compared to the control group, these targets of ostracism used language indicative of a self-focus and worsened mood, but not of social focus or positivity, although positivity was related to a writer’s likeability. Sources of ostracism used language suggesting that they were distancing themselves from the situation, and they further engaged in victim derogation. We discuss how our results highlight the severity and potential self-sustainability of ostracism.
INTRODUCTION
Being ignored and excluded is an intensely painful experience (DeWall et al., 2010;Eisenberger, 2012;Ferris et al., 2019) and strongly motivates its targets to achieve re-affiliation (Williams, 2007). Ostracized individuals, thus, tend to try to connect with individual interaction partners (Maner et al., 2007), are motivated to work with others, join social clubs (Baumeister and Leary, 1995), or even join extreme groups (Hales and Williams, 2018).
In the long run, social isolation can have detrimental consequences for the individuals excluded, spanning physical and mental health issues such as depression, aggression, eating disorders, and higher mortality (Williams, 2001), but even seemingly trivial episodes of ostracism can be distressing (Nezlek et al., 2012;Hartgerink et al., 2015). Given the severity of the consequences ostracism has on the ostracized (henceforth called targets), imposing ostracism on others is perceived as a harsh violation of a general inclusion norm (Rudert and Greifeneder, 2016). Thus, it is strenuous for the individuals doing the ostracism (henceforth called sources) as well: They report emotional distress (Poulsen and Kashy, 2011;Legate et al., 2013), and find themselves in need for justification for their behavior (Nezlek et al., 2015).
Most previous research investigating ostracism has looked at participants' judgments and behavior only after the ostracism situation has been concluded. Studies have tended to rely on questionnaires to ask participants how they felt while being excluded in an online ball-tossing game, but usually administer these questionnaires after the game itself is finished (Williams et al., 2000; Williams, 2009). This can be problematic since there is evidence that people's recollection of their reactions to and ability to cope with negative events are biased (Todd et al., 2004), and this is particularly relevant for situations that conflict with one's self-concept and are perceived as shameful or threatening to self-esteem, like ostracism (Williams, 2009; Wesselmann et al., 2016): Such shameful situations can prompt self-protective, defensive behavior (Barrett et al., 2002), and recollection of such events might be affected by various cognitive or motivational biases: self-serving biases, social desirability, or, more generally speaking, self- or other-deception. Thus, explicit measures about such socially sensitive topics are not always considered thorough or accurate (Barrett et al., 2002; Hofmann et al., 2005).
Taken together, while the temporal need threat model of ostracism suggests that the immediate, reflexive reaction to the situation differs from the more controlled, reflective reaction (Williams, 2009), only a few studies have directly measured truly reflexive reactions to ostracism. These studies have made use of physiological measures such as fMRI (Eisenberger et al., 2003), or specific mood dials to obtain continuous self-reports during experiments . Aside from this handful of studies, what we currently know about ostracism may more accurately reflect whatever sense the involved individuals make of the situation afterward than what they experience in the situation itself. So what do sources of ostracism do to justify their actions in the very situation? What do targets immediately do to try and end the exclusion and achieve re-affiliation? How do they feel? Avoiding expensive and lab-locked technology, or biased and intrusive live self-reports, we look at individuals' language when ostracism occurs. More specifically, we investigate individuals' use of function words in ostracism situations. While content words-such as nouns and verbs-convey meaning, the use of function words such as pronouns or articles, has been linked to several psychological states and processes (Tackman et al., 2019;Tausczik and Pennebaker, 2010). For example, the use of personal pronouns can indicate where the speaker or writer puts his or her social focus (Zimmermann et al., 2013). To the best of our knowledge, the analysis of language has only been used to investigate reports of ostracism once (Klauke et al., 2020). In this study, it was found that when reporting ostracism, participants used language that indicated a stronger self-focus, lower connectedness, and higher complexity than when they were reporting an instance of social inclusion. However, like other previous studies on social exclusion, this study did not look at ostracism right as it was happening. Thus, in the present study, we aim to contribute to the existing literature in three ways: first, we plan to replicate previous findings on language use and ostracism, and extend it by applying language analysis to live ostracism situations, capturing a truly reflexive reaction to social exclusion. Second, we want to further our understanding of how and whether targets focus on themselves and others, and how they try to immediately achieve re-affiliation, by assessing their language. Third, we aim to examine the sources of ostracism: We want to know whether they engage in victim derogation, and what their language use could tell us about how they engage in dissonance reduction when actively ignoring someone.
TARGETS AND SOURCES OF OSTRACISM
Social ostracism involves at least two parties: one party being ostracized, that is, ignored and excluded, and one party doing the exclusion, i.e., ignoring and excluding the former. Targets of ostracism are arguably more severely affected. They feel pain, suffer from negative affect, and their basic social needs (belongingness, self-esteem, control, and meaningful existence) are threatened (Baumeister and Leary, 1995; Eisenberger and Lieberman, 2004; Williams, 2009). Consequences of ostracism are often suggested to be more severe than other forms of (social) pain, as its targets are often not provided with a reason for their behavior (Sommer et al., 2001; Williams, 2009). This provokes rumination, causing the targets to introspect and come up with all kinds of self-related reasons for their treatment (Wesselmann et al., 2013a; Hales et al., 2016b). To re-fulfill their needs, targets first strive for reintegration, and, to that end, fine-tune their social perception. They become more attuned to social cues (Gardner et al., 2000; Pickett et al., 2004) and remember them better (Gardner et al., 2000). Furthermore, they get better at decoding these cues, e.g., distinguishing fake smiles from genuine smiles (Bernstein et al., 2008). On a behavioral level, this often leads to higher social servility (Williams, 2009): targets of ostracism cooperate more (Maier-Rigaud et al., 2010; Sheremeta et al., 2011), mimic potential interaction partners more (Lakin et al., 2008), express a greater desire to make friends (Maner et al., 2007), and are more willing to join even extreme groups (Hales and Williams, 2018). Taken together, this literature suggests that targets of ostracism are in a state of both self-focus and heightened attention to social cues, searching for connections with others.
As ostracism is so painful to those who experience it, people usually hesitate to harm others in such a way (Legate et al., 2013;Wesselmann et al., 2013b). In most situations, ostracism constitutes a violation of a general inclusion norm (Rudert and Greifeneder, 2016). Particularly the exclusion of likeable individuals is mentally straining to the sources of exclusion as well (Sommer and Yoon, 2013), and is considered immoral (Rudert et al., 2017). Ostracizing others, thus, does not only lead to feelings of guilt, shame, and even pain (Legate et al., 2013;Gooley et al., 2015;Nezlek et al., 2015), but also to the experience of cognitive dissonance (Festinger, 1957;Wirth and Wesselmann, 2018). When compensating the target for the inflicted pain (as in Wesselmann et al., 2013b) is not possible, research indicates several options for reducing dissonance. One way is victim derogation: perpetrators come up with reasons why committing such an offense is justified by devaluing the target (Festinger, 1957;Gawronski, 2012;Wesselmann et al., 2014). Another possibility is self-deception (von Hippel and Trivers, 2011): people can, for example, refuse to take full responsibility for their behavior (Schober and Glick, 2011) or play down the severity of one's actions, decreasing their estimation of the pain they inflicted in others (Brock and Buss, 1962). Taken together, sources of ostracism are in need for justification of their actions.
In a first step, we thus expect that sources will try to reduce cognitive dissonance by devaluing the target of ostracism on basic, universal dimensions of social perceptions, i.e., warmth and competence (Fiske et al., 2002;Cuddy et al., 2008). Research so far has often hinted at the possibility that sources of ostracism engage in such victim derogation, but that assumption has, to our knowledge, not been systematically tested before (cf. Wirth and Wesselmann, 2018). Thus, we first want to establish whether: Hypothesis 1: Sources will perceive targets as less warm (H1a) and less competent (H1b) than individuals not in an ostracism situation perceive each other.
The main goal of this study is to assess the effects of an ostracism situation on targets and sources as the situation unfolds. To that end, we employ an online chat paradigm where two participants were made confederates and ignored a third participant's messages. There, we can record individuals' language as immediate behavioral responses to ostracism.
Language and Exclusion
Language is central to the coordination of groups and can signal several processes and emergent states (Van Swol and Kane, 2019). The words we use can be roughly differentiated into two categories: content words (e.g., nouns and verbs) and function words (e.g., pronouns and articles). Content words are words that carry a meaning which can, generally, be understood without further context or explanation (cf. Pennebaker, 2011). These meaning-bearing and relatively consciously chosen words are the traditional subject of content analyses and explore the ideas that people want to express (Boyd, 2017). Function words, however, have three advantages over content words that make them useful for the assessment of psychological states and traits: first, they are used independently of the topic that is communicated about. They are thus less reflective of the topic, but more of the author's mindset (Pennebaker and King, 1999; Tausczik and Pennebaker, 2010). For example, the suicidality of poets could be linked to how they used function words regardless of what their poems were about (Wiltsey Stirman and Pennebaker, 2001), and Twitter users' personality could be inferred by the way they tweet, regardless of what they tweeted about (Qiu et al., 2012). Second, they are used frequently, providing plenty of material for analysis: on average they make up more than half of the words used in a given text, although only making up a small percentage (about 1-2%) of the overall vocabulary (Pennebaker et al., 2015; Meier et al., 2018). Additionally, their use is almost automatic, and therefore, hard to control or manipulate (Chung and Pennebaker, 2007). Consequently, function words are minimally prone to motivational biases (Chung and Pennebaker, 2007; Cohen, 2012), making them particularly useful in assessing unpleasant or shameful memories and events like ostracism (Barrett et al., 2002). Function words have been found to signal how people relate to themselves and others (Zimmermann et al., 2013), and differ in reports of inclusion versus exclusion (Klauke et al., 2020). However, to the best of our knowledge, the language individuals use in ongoing situations of social exclusion has not been studied yet.
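As a simple illustration of how proportions of function word categories are typically computed from a text (analyses in this line of research rely on validated dictionaries such as LIWC; the tiny word lists below are toy examples, not the dictionaries actually used):

    import re

    CATEGORIES = {  # toy category word lists
        "first_person_singular": {"i", "me", "my", "mine", "myself"},
        "first_person_plural": {"we", "us", "our", "ours", "ourselves"},
        "articles": {"a", "an", "the"},
    }

    def category_proportions(text):
        """Share of words in `text` that fall into each function word category."""
        words = re.findall(r"[a-z']+", text.lower())
        total = max(len(words), 1)
        return {cat: sum(w in vocab for w in words) / total
                for cat, vocab in CATEGORIES.items()}

    print(category_proportions("I think we should talk about the game they played."))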
As laid out above, the state of targets of ostracism is one of disconnection, self-focus, and low status, while sources have to cope with cognitive dissonance and feelings of guilt. While targets want to achieve re-integration, sources need to reduce their cognitive dissonances, possibly via victim derogation and engagement in distancing behavior. These states and behaviors can find their representations in the language that individuals use, making language style (the use of pronouns, articles, and other function words) a useful and unobtrusive tool to study human interaction.
Use of Personal Pronouns
The way in which an individual refers to itself-via a collective "we" or an individualizing "I"-strongly relates to this person's current state in relation to others (Zimmermann et al., 2013). The use of first-person singular pronouns (such as I, or me) seems to broadly relate to self-focus (Ireland and Mehl, 2014). It is connected to negative affect (Pennebaker and Lay, 2002;Tackman et al., 2019), depression (Edwards and Holtzman, 2017;Tackman et al., 2019), low status (Chung and Pennebaker, 2007;Kacewicz et al., 2014), and has been found to be used more in reports of social exclusion than in reports of social inclusion (Klauke et al., 2020). Furthermore, first-person singular pronouns are used less when individuals are distancing themselves from their behavior, or when they are deceiving themselves or others (Newman et al., 2003;Schober and Glick, 2011).
On the other hand, the use of first-person plural pronouns, like we, can reflect a collective identity (Brewer and Gardner, 1996; Sexton and Helmreich, 2000; Boals and Klein, 2005). Manipulating pronoun use leads participants to perceive relationships with friends as well as confederates on a task as closer and higher in quality when using "we" rather than "she and I" (Fitzsimons and Kay, 2004). Furthermore, the use of we has been shown to relate to a stronger perceived self-other-overlap in romantic relationships (Agnew et al., 1998). As another group of pronouns relevant to social interactions, third-person pronouns (e.g., she, they) have occasionally been linked to self-monitoring and general social awareness (Hoover et al., 1983; Ickes et al., 1986; Pennebaker et al., 2003).
Targets of ostracism suffer from a sense of lowered self-worth and disconnection, while also focusing on their social surroundings. Sources, on the other hand, try to distance themselves from the situation. We expect individuals' pronoun use to reflect these states, and we examine the following hypothesis: Hypothesis 2: While sources of ostracism will use fewer first-person singular pronouns (H2a), targets will use more first-person singular pronouns (H2b), fewer first-person plural pronouns (H2c), and more third-person pronouns (H2d), than individuals not in an ostracism situation.
Use of Articles
The use of articles (a, the) has often been found to be connected to a more formal or more distanced and abstract way of writing or talking, as compared to a more narrative, personal style (Pennebaker and King, 1999; Heylighen and Dewaele, 2002). Individuals using a more article-heavy style are more likely to be of higher status: their article use is positively related to both parental education and individual academic success, regardless of the academic subject. Further, individuals low in article use tend to be more neurotic and agreeable (Pennebaker and King, 1999). Consequently, when reporting past experiences of social exclusion, people have been found to use fewer articles than when writing about inclusion (Klauke et al., 2020).
Summing up, articles are used more by individuals of higher status, when talking in a distanced, formal way, whereas agreeable, neurotic people use them less. Since we expect sources to try and distance themselves from the situation while targets are immediately put in a relatively low-status position, we hypothesize: Hypothesis 3: Targets use fewer articles (H3a), while sources use more articles (H3b) than individuals not in an ostracism situation.
Use of Language to Increase Likeability
In essence, language is a tool to communicate. Its nature is therefore inherently social, and it is hardly surprising that some aspects of language, such as the use of positive emotionality, asking questions, or engaging in language mimicry, have been found to increase liking by others and foster relationship building. Since targets of ostracism feel disconnected and are in search of re-connection, these aspects are of particular relevance to this research.
Positive emotionality
The expression of positive emotion has been connected to (low-status) individuals seeking approval: positive emotion words (such as happy and nice) were used more often by low-status members in online forums (Reysen et al., 2010) and in e-mail negotiations (Belkin et al., 2013). Regardless of status, using positive emotion words increases the chance of reaching an agreement in online negotiations (Hine et al., 2009), and such words are used more by candidates before their election than after it (Danescu-Niculescu-Mizil et al., 2013). Taken together, these results suggest that the use of positive emotionality cues a warm self-image and increases an individual's likeability and popularity.
Asking questions
Another conversational behavior linked to increased likeability is asking questions. In long-term relationships, people who draw out more information from their partners are rated as more likeable by those partners (Miller et al., 1983). Huang et al. (2017) found that asking questions signals responsiveness and increases liking in conversational partners, both in a natural environment and in an experimental setting in which the number of questions asked was manipulated. There is also tentative evidence that low-status individuals ask more questions (Dino et al., 2009). This behavior makes sense, particularly in an ostracism context: if individuals want responses, a viable course of action would be to provoke those responses directly by asking.
Language style matching
Linguistic mimicry, that is, mimicking the way one's conversation partners are speaking, can be another way to affiliate with said partners, as Communication Accommodation Theory (Giles et al., 1973) posits. Language divergence, on the other hand, can be used to express disaffiliation, to increase or emphasize social distance, and usually leads to less liking (Gasiorek, 2016). A meta-analysis found that accommodation is consistently associated with positive evaluations of the communication, while divergence or non-accommodation is related to negative evaluations (Soliz and Giles, 2014). These findings have been extended to function words: mirroring an interaction partner's linguistic style increases liking and can even positively predict mutual romantic interest and relationship stability (Ireland et al., 2011), and particularly low-status individuals are evaluated as more empathetic when matching the language style of their conversation partners (Muir et al., 2016).
To sum up, positive emotion words, questions, and language style matching are used by individuals who are reaching out, trying to connect with others, while the literature reviewed above shows that ostracized individuals strive for connection and re-integration. We assume that targets of ostracism use language to achieve their goals and hypothesize: Hypothesis 4: Targets use more positive emotion words (H4a), more question marks (H4b), and engage more strongly in language style matching (H4c) than the control group.
As laid out above, we assume that sources will distance themselves from the situation and therefore emphasize social distance. This distancing can be reflected in language divergence, so we postulate: Hypothesis 5: Sources use more language divergence than the control group.
We assume that the use of positive emotionality, questions, and language style matching are viable tools to convey warmth, trustworthiness, and friendliness, so we examined: Hypothesis 6: The use of positive emotionality (H6a), question marks (H6b), and Language Style Matching (H6c) increases warmth perceptions in conversational partners.
Participants
Participants were recruited via the psychological faculty's mailing list and online social networks. They were grouped in teams of three for an online experiment involving a chat in which they were all asked to talk about their favorite holiday destination. A total of N = 141 participants took part in our study for course credit and/or to participate in a raffle to win 2 × €25. No-shows in the registered groups of three were substituted by confederates. This was necessary in five experimental groups, in each of which one source of ostracism had to be replaced, and for one participant in each of two control groups. The data of those confederates were excluded from analysis. To ensure that participants followed instructions, two judges checked all chat logs for sources' replies to the targets. In two of the experimental groups, such replies were found, and these groups were excluded from the analysis.
The remaining N = 128 participants 1 were, on average, 24.16 years old (SD = 5.36). A total of 80 identified as female, 47 as male, and one person did not answer. In the experimental condition, 51 participants were instructed to act as sources of ostracism, excluding another 28 participants, who served as targets of ostracism. The control group consisted of 49 participants, none of whom received any further instructions.
Procedure
Participants signed up for the experiment via an online calendar with their e-mail address, which was anonymous to other participants. When three participants signed up for a time slot, the group was randomly assigned to either the experimental condition or the control condition at a ratio of 2:1. On the designated date, the group was sent an e-mail with a link to an online survey. This survey contained a short demographic questionnaire as well as login credentials and instructions for a subsequent group chat. In these instructions, all participants were asked to write about their favorite holiday destination and to convince the others that their destination was the best. In the experimental condition, two of the three participants were individually instructed to ignore and exclude the third participant, and not to respond to any of their utterances.
After all participants were online, they were invited to a group chat and instructed by the investigator to start the discussion. After 15 min, the discussion was stopped by the investigator, who then sent a link to every participant in a private chat room. This link started the second part of their questionnaire, containing questions about their chatroom experience and their fellow participants.
This study's procedure was reviewed and approved by the Ethics Committee of the Faculty of Life Sciences of the Technische Universität Braunschweig. The participants provided explicit informed consent to participate in this study both at the beginning of the experiment, and after the debriefing.
Need Threat Questionnaire
Social need threat (Williams, 2009) was assessed via the German version of a semantic differential (Rudert and Greifeneder, 2016). It consists of one item for each basic need, judged on a nine-point scale [e.g., rejected (1) to accepted (9) for belongingness]. The total scale's internal consistency in our study was α = 0.925. This scale served as a manipulation check.
Stereotype Content Model
The social perception of the other participants was assessed based on the stereotype content model, using four items each for warmth and competence (Fiske et al., 2002). Similar constructs have been used to assess the effects of victim derogation before (e.g., Hafer, 2000; Correia et al., 2012; Oldmeadow, 2018; Tepe et al., 2020). The items (e.g., able for competence, friendly for warmth) were translated to German and rated on a five-point scale. In our study, the internal consistency was α = 0.873 for the warmth scale and α = 0.848 for the competence scale. For our analyses, we used the mean of the scores given by both other participants: targets were rated by the two sources, and participants in the control condition were rated by their two conversation partners. The rating of sources was not relevant to our analysis.
1 Following a power analysis, to achieve 80% power to discover medium effect sizes (f = 0.25) at p < 0.05 in an ANOVA, we aimed to recruit N = 159 participants. Due to time constraints and exclusions of participants not following our instructions, we ended up with N = 128. A power sensitivity analysis indicates that our tests thus yield a power of 80% to discover effect sizes down to f = 0.28.
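The power figures in the footnote above can be reproduced approximately with a standard ANOVA power routine; the sketch below uses Python's statsmodels as one possible tool, which is an assumption, since the paper does not state which software was used.

```python
# Hedged sketch: approximate reproduction of the a-priori and sensitivity power analyses
# described in the footnote above. The library choice is an assumption; any ANOVA power
# routine (e.g., G*Power) should give similar figures.
from statsmodels.stats.power import FTestAnovaPower

anova_power = FTestAnovaPower()

# A-priori analysis: total N for a one-way ANOVA with 3 groups,
# medium effect size f = 0.25, alpha = .05, power = .80 (reported target: N = 159).
n_total = anova_power.solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=3)
print(round(n_total))        # approximately 158-159 participants in total

# Sensitivity analysis: smallest effect size detectable with the achieved N = 128.
f_min = anova_power.solve_power(nobs=128, alpha=0.05, power=0.80, k_groups=3)
print(round(f_min, 2))       # approximately f = 0.28
```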
Language Analysis
The analysis of the linguistic style of participants' chat protocols was conducted using the software LIWC 2015 (Pennebaker et al., 2015) with the German dictionary (Meier et al., 2018). This software counts the words of any given text and classifies them into several linguistically and psychologically meaningful categories. It then reports each category's share of the overall word count. Before entering the texts into LIWC, corrections for typographical errors were made. The word recognition rate over all messages ranged from 78% to 94% per participant, with an average of 88%. Word recognition rate did not differ between conditions (all p > 0.394, MD < 0.606). Participants wrote an average of 184.91 words (SD = 98.61). Word count differed by condition.
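To make the counting logic concrete, a minimal sketch is shown below; the tiny three-category dictionary is purely illustrative and is not the LIWC 2015 or German DE-LIWC2015 dictionary, which relies on far larger, validated word lists and word stems.

```python
# Minimal, illustrative sketch of dictionary-based word counting as described above.
# The tiny category lists are invented examples, not the actual (DE-)LIWC dictionary.
import re

CATEGORY_DICT = {
    "i_words":  {"ich", "mir", "mich", "mein"},   # first-person singular (German examples)
    "we_words": {"wir", "uns", "unser"},          # first-person plural
    "articles": {"der", "die", "das", "ein", "eine"},
}

def category_shares(text: str) -> dict:
    """Return each category's share (%) of the overall word count."""
    words = re.findall(r"\w+", text.lower())
    total = len(words) or 1
    return {name: 100 * sum(word in vocab for word in words) / total
            for name, vocab in CATEGORY_DICT.items()}

print(category_shares("Ich glaube, das ist ein guter Ort für uns."))
```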
Language Style Matching
To assess coordination of language style, we used the reciprocal LSM (rLSM) metrics for conversations by Müller-Frommeyer et al. (2019), computing individual and dyadic rLSM scores. We used the individual scores for targets and for participants in the control group, assessing their matching with both other conversation partners. For sources, we used the dyadic scores assessing the language style matching between sources and targets, as our hypotheses were not concerned with the language style matching of sources with each other.
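For orientation, the classic category-wise LSM formula is sketched below; the rLSM scores actually used in the study are computed turn by turn following Müller-Frommeyer et al. (2019), so this static version is only a simplified illustration, and the example values are invented.

```python
# Simplified illustration of language style matching across function-word categories
# (static LSM, 1 - |p1 - p2| / (p1 + p2 + 0.0001), averaged over categories).
# This is NOT the turn-by-turn rLSM metric used in the study.
def lsm(p1: dict, p2: dict) -> float:
    """p1, p2: percentage of words per function-word category for two speakers."""
    per_category = [1 - abs(p1[c] - p2[c]) / (p1[c] + p2[c] + 0.0001) for c in p1]
    return sum(per_category) / len(per_category)

speaker_a = {"i_words": 5.2, "we_words": 1.1, "articles": 9.8}    # invented values
speaker_b = {"i_words": 3.9, "we_words": 0.8, "articles": 11.2}
print(round(lsm(speaker_a, speaker_b), 2))   # closer to 1 = stronger matching
```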
Suspicion Check
We further gave participants the opportunity to comment on the experiment in a text box. We checked their entries for signs of suspicion or improper adherence to the instructions. Two targets directly indicated suspecting a manipulation, one of whom specifically stated that he still felt awkward not being acknowledged. Two more participants mentioned they were wondering why they were excluded, and suspected experimental manipulation among other reasons. This insecurity about the reason for one's treatment is typical for targets of ostracism and is theorized to make them "consider a laundry list of bad things they have done or said" (Williams, 2009, p. 289). On the sources' side, one participant indicated that the situation felt unnatural; another commented that the instructions made it easier to strike up a conversation with foreigners (though not specifically referring to the exclusion instructions). A total of 12 sources indicated that they felt regret and/or that it was difficult for them not to reply to the target.
Analysis
Group differences in warmth, competence, and Language Style Matching were tested using an ANOVA and pairwise comparisons (one-sided) between the groups mentioned in the hypotheses. Data on word use frequency are, in essence, count data and are often noticeably non-normally distributed (Karlgren, 1999). They rather follow a Poisson or binomial shape and are prone to zero-inflation, particularly in less frequently used categories. This also holds true for the data presented in this research, as examination of Q-Q plots and Shapiro-Wilk statistics showed: except for the articles category, all language category data deviated significantly from a normal distribution in at least one condition, and three categories (we, other, and question marks) did so across all conditions. F-tests and t-tests tend to handle non-normal, zero-inflated data poorly, and the use of rank-based tests is suggested instead (Šimkovic and Träuble, 2019). Thus, we examined group differences in language data using a Kruskal-Wallis test. Pairwise comparisons between the groups mentioned in the hypotheses were assessed using Dunn's test. Dependence between variables was assessed using Kendall's correlation.
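A sketch of this rank-based pipeline is given below, assuming Python with scipy and the scikit-posthocs package; the data frame, column names, and values are illustrative only, and the paper does not specify which software performed these tests.

```python
# Illustrative sketch of the non-parametric analysis described above (invented data).
import pandas as pd
from scipy.stats import kruskal, kendalltau
import scikit_posthocs as sp  # assumed tooling for Dunn's test

df = pd.DataFrame({
    "condition":   ["target"] * 4 + ["source"] * 4 + ["control"] * 4,
    "i_words":     [9.1, 8.7, 10.2, 9.5, 5.2, 4.9, 5.5, 4.4, 7.3, 6.8, 7.0, 7.6],
    "pos_emotion": [2.1, 1.8, 2.5, 2.0, 2.7, 3.0, 2.4, 2.9, 2.6, 2.2, 3.1, 2.8],
    "warmth":      [3.2, 3.0, 3.6, 3.1, 3.9, 4.2, 3.5, 4.0, 3.8, 3.4, 4.3, 4.1],
})

# Omnibus test of the condition effect on a word category (here: first-person singular)
groups = [g["i_words"].to_numpy() for _, g in df.groupby("condition")]
h_statistic, p_value = kruskal(*groups)

# Pairwise post-hoc comparisons between conditions (Dunn's test)
pairwise_p = sp.posthoc_dunn(df, val_col="i_words", group_col="condition")

# Rank-based dependence between two variables (e.g., positive emotion and warmth rating)
tau, p_tau = kendalltau(df["pos_emotion"], df["warmth"])
```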
Hypotheses 1 through 5 were tested between either sources or targets and the control group. Hypothesis 6, the hypothesis that positive emotionality, question marks, and language style matching positively affect warmth perceptions, was tested using data from the control group. Our reasoning for this is that sources of ostracism were instructed to ostracize targets, and we expected them to engage in victim derogation. Therefore, the relationship between target behavior and source judgment could be different in these participants. Consequently, we tested this hypothesis on the control group data, which were not affected by our manipulation.
For all directional hypotheses, p-values of direct comparisons or correlations are reported one-sided (cf. Cho and Abe, 2013;Lakens, 2016). Two-sided p-values are indicated as such.
RESULTS
To assess the quality of our paradigm, we checked whether the manipulation caused the assumed threat to the four basic social needs. We found that needs were substantially affected by condition, F (2,124) = 89.89, p < 0.001, and that the targets' needs were less satisfied (M = 3.04, SD = 1.24) than those of participants in the control group [M = 6.91, SD = 1.68; t(76) = −10.86, p < 0.001], as a pairwise comparison showed. The need satisfaction of sources was even higher (M = 7.66, SD = 1.46) than in the control group, t(98) = 2.45, p = 0.016 (two-tailed). Investigating each need separately, we found that the targets had significantly lower need satisfaction on all four needs [all b ≤ −3.14, all t(75) ≤ −7.19, all p < 0.001, two-tailed], while sources scored slightly higher on all needs [b ≥ 0.71, t(75) ≥ 2.12, all p ≤ 0.036, two-tailed] but self-esteem [t(75) = 0.84, p = 0.403, two-tailed].
Ostracism's Effect on Language Use
Overall, we found various differences in the language that targets and sources of ostracism use when compared to the control group. Medians, quartiles, and the mean ranks for the assessed linguistic categories can be found in Table 1.
Use of Personal Pronouns
As expected, condition did significantly affect the use of first-person singular pronouns, H(2) = 17.00, p < 0.001. According to our hypotheses H2a and H2b, we found that targets used more first-person singular pronouns than the control group (z = 2.07, p = 0.019), while sources of ostracism used fewer such I-words (z = −2.32, p = 0.010).
While the overall effect of condition on the use of first-person plural pronouns was not significant, H(2) = 5.02, p = 0.081, planned comparisons revealed that targets used "we" less frequently than the control group (z = −2.20, p = 0.014), confirming our hypothesis (H2c). However, no differences by condition could be found regarding the use of third-person pronouns; H(2) = 0.98, p = 0.613.
Use of Articles
The use of articles was affected by condition, H(2) = 6.94, p = 0.031. Pairwise comparisons revealed that neither targets (z = −1.59, p = 0.056) nor sources (z = 1.21, p = 0.112) significantly differed from the control group in their use of articles as predicted, but exploratory analysis showed that the targets used significantly fewer articles than the sources; z = −2.63, p = 0.009 (two-tailed), with the control group ranking in between (see Table 1).
Use of Likeable Language
We hypothesized that targets would use more positive emotion words. Differences in the use of positive emotion words between conditions were not significant, though, H(2) = 2.93, p = 0.231. The same was true for language style matching, which was not significantly affected by condition, F(2,125) = 1.89, p = 0.156. Although we did not find a significant overall difference between conditions regarding the use of question marks, H(2) = 4.75, p = 0.093, targets used more question marks than participants in the control group did (z = −2.13, p = 0.017), lending support to our prediction (H4b).
Language and Judgment of Warmth
In partial support of our hypothesis (H6a), we found that using more positive emotion words is related to others perceiving the speaker as warmer.
Note to Table 1: Median and quartiles represent the percentage of words used from each respective category. First and third quartiles appear in brackets under the medians. Mean rankings were obtained from the Dunn's tests for comparisons between conditions.
DISCUSSION
The present study investigated how language use is affected by ostracism as it occurs. We employed a chat paradigm where two participants were asked not to respond to a third participant. Sources readily followed orders to exclude the targets, and the manipulation effectively threatened the targets' social needs. This paradigm then allowed us to investigate how linguistic style is affected by an ongoing ostracism situation. We found that both targets and sources of ostracism considerably differed from participants in a control group in their use of language. As predicted, we found that targets of ostracism used more first-person singular pronouns but fewer first-person plural pronouns than participants in a control group with no ostracism. We did not find targets to use more third-person pronouns nor more "likeable" language: neither did they use more positive emotion words, nor did they match the linguistic style of sources more. However, targets did make greater use of question marks than the control group.
Sources, on the other hand, rated targets' warmth and competence lower than participants in a control group rated each other. Sources also used significantly fewer first-person singular pronouns than individuals in the control group. Furthermore, we found that sources used more articles than targets.
Targets' Use of Language
Targets' use of first-person pronouns fits well with the empirical results presented in our theory section, combining the ostracism and language literatures. By using more "I" and fewer "we" pronouns, the targets' language use reflects their inclusionary status. The increased use of I-talk indicates that their attentional focus shifts toward themselves. This shift has previously been linked to neuroticism (Yarkoni, 2010; Holtgraves, 2011; Qiu et al., 2012), which is characterized by a ruminative self-focus and negative thoughts (Teasdale and Green, 2004). Furthermore, the use of "I" has been positively linked to self-oriented impression management, i.e., Machiavellianism, but negatively related to a more other-oriented, accommodative impression management (Ickes et al., 1986). Accordingly, this might explain why our ostracized participants in the reflexive stage of ostracism did not use more other-referencing pronouns (such as they or she), which are thought to signal social awareness and self-monitoring (Hoover et al., 1983; Mehl and Pennebaker, 2003).
Further, we found no evidence of targets making an effort to come across as particularly friendly via the use of positive emotion words or engagement in language style matching. So why do targets of ostracism not use strategies readily (and presumably unintentionally) used by individuals before an election, as well as by low-status online community members seeking approval (Dino et al., 2009; Danescu-Niculescu-Mizil et al., 2013)?
A possible interpretation lies in the temporal need-threat model of ostracism (Williams, 2009): targets of ostracism first enter a reflexive stage feeling pain and negative affect, and suffer from threatened social needs. They only begin to focus on re-fortifying these needs in the ensuing reflective stage. It is possible that targets are so stupefied by the unexpected exclusion that they only really react after a prolonged period of time. However, it was found that targets do adjust their behavior in compliance with group norms when threatened with exclusion (Kerr et al., 2009;Sheremeta et al., 2011), so in an ongoing ostracism situation, individuals have been found to try and achieve re-inclusion. Furthermore, we found that targets asked more questions than the control group, suggesting a prevailing interest in social interaction. We, therefore, offer a different, albeit speculative, explanation: individuals previously found to be using more positive emotion words were at least members of their respective communities. Targets of ostracism, on the other hand, are unsure about their status on a much more fundamental level and feel threatened-they might, thus, simply not consider it a good idea to present themselves as warm and open, particularly since high warmth perception tends to come at the expense of seeming low in competence, and therefore, vulnerable (Fiske et al., 2015). This would fit with a finding that ostracized individuals tend to become more disagreeable over time, and disagreeable individuals also tend to be ostracized more readily (Hales et al., 2016a).
Taken together, these findings hint at the gravity of ostracism, as the potential chain reaction of disagreeableness and ostracism could begin earlier than expected: targets' focus shifts away from others to themselves. At the same time, they refrain from signaling agreeableness not only after a prolonged time but right in the moment of their exclusion. As ostracism is likely to be overdetected (Williams, 2009), this could not only increase the likelihood of ostracism persisting but could potentially turn trivial episodes of neglect into vicious cycles of ostracism. Our findings highlight the necessity of further investigation of the internal dynamics of an ostracism situation to substantiate these interpretations.
Sources' Behavior
Sources rated targets' warmth and competence as lower than participants in a control group rated each other. As sources were instructed to ostracize the target, they had no a priori reason to assume lower warmth or competence in the targets. We interpret this as victim derogation: complying with unfairly treating others for no justified reason is known to cause cognitive dissonance (Festinger, 1957). A biased perception of others as more unfavorable is well suited to reduce such dissonance (Gawronski, 2012): cold and incompetent individuals or groups are readily met with contempt and rejection (Cuddy et al., 2008), and ostracizing colder and less competent people is regarded as comparatively acceptable and less morally disgusting (Rudert et al., 2017). Thus, we argue that convincing oneself that one's victims are cold and incompetent reduces cognitive dissonance and makes it more morally acceptable to exclude them.
Another way to reduce cognitive dissonance is to distance oneself from the behavior perceived as shameful or immoral (Schober and Glick, 2011;von Hippel and Trivers, 2011). Consequently, language use of the sources of ostracism hints at sources trying to distance themselves from their behavior: low amounts of self-references have been found to be associated with deceit both of others (Newman et al., 2003) and of the self (Schober and Glick, 2011). Although our finding that sources use more articles than targets was not predicted and should therefore be considered exploratory, it still lends tentative support to our hypothesis that sources try to distance themselves from the situation as article use is linked to a more factual, less narrative, and emotional linguistic style (Pennebaker and King, 1999;Heylighen and Dewaele, 2002).
It seems at odds with this interpretation that we did not find sources to use language divergence toward the target. We assume that this null finding could be due to our manipulation: sources were not ostracizing the target of their own volition but were complying with the experiment's instructions. Therefore, they might be motivated to distance themselves from the situation but not from the target, as such behavior might further increase cognitive dissonance.
To summarize, by linking research on ostracism and language style, we were able to show how currently being a target or source of ostracism is represented in an individual's language use. We found support for our hypothesis that targets focus on themselves right in the moment of the exclusion. Their language, however, did not indicate that they are particularly sensitive to their social surroundings or that they make any effort to come across as especially warm and friendly to achieve re-integration. We were further able to show that sources of ostracism devalue their victims, and that they show linguistic signs of distancing themselves from the situation.
Limitations and Future Directions
Although our study extends work on both language and ostracism research, our findings need to be contextualized within their limitations. Automated word count analysis is a coarse measure of language, ignorant of both context and content. Furthermore, the interpretation of language use as a signal for processes, e.g., the use of "I" as a sign of self-focus, is solely based on theoretical considerations, and therefore, a case of reverse inference. Thus, although such reverse inferences can have substantial predictive power (Hutzler, 2014), we can only assume that, e.g., it is actually self-focus that causes the use of first-person singular language. It is therefore particularly necessary to strictly differentiate between empirical findings and interpretations with regards to the current research.
Another limitation concerns the external validity of our paradigm. In general, the chat paradigm we employed is closer to ostracism as seen in real life than very abstract paradigms such as Cyberball (Williams et al., 2000). In our case, however, the sources were put in a forced-compliance situation, asked to inflict (social) pain on another individual by excluding them, without any additional motivation to do so. This is important to keep in mind when interpreting our results on sources of ostracism: when having an actual motivation to exclude others, processes of reducing cognitive dissonance might be different. Nevertheless, our findings could lay the foundation for the analysis of inclusion and rejection in online communication such as group chats and online social networks, or in transcripts of face-to-face conversations. Future studies could back this up by analyzing transcripts of the language used in real-world ostracism situations.
Furthermore, our research left some questions unanswered. Contrary to our hypothesis, we found no evidence of targets of ostracism showing linguistic signs of a focus on others, nor of their using more positive emotionality, although the use of positive emotionality was tied to the speaker being rated as warmer. Future research should investigate under which circumstances targets become more socially attuned and friendly when under the threat of ostracism, and how the words they use can help them reconnect and put an end to the exclusion. Understanding the internal dynamics of an ostracism situation and the actual behavior of both sources and targets can have important implications for helping targets reconnect and stopping sources from causing psychological harm. Our research provides a first step toward such solutions.
Conclusion
We found that both sources and targets of ostracism change their language in response to the different situations, signaling introspection and self-focus on the side of the targets, and distancing of the self from the situation on the sources' side. Our findings suggest that the targets' initial reaction to ostracism is not one of other-focus or attention to social cues, but a potentially detrimental self-focus, which has previously been associated with rumination, neuroticism, and depressive symptoms. Sources seem to avoid involvement in the situation. Together, this behavior could potentially turn a short episode of ostracism into a vicious circle.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the Faculty of Life Sciences of the Technische Universität Braunschweig. The participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
FK conceived the original idea for this research, performed all analyses, interpreted the data, and drafted the manuscript. SK gave constructive feedback during the conceptualization phase of the study, helped in interpreting the results, and assisted with drafting the manuscript. Both authors approved the manuscript to be published.
FUNDING
We acknowledge support by the German Research Foundation and the Open Access Publication Funds of the Technische Universität Braunschweig. | 2020-11-12T14:16:46.482Z | 2020-11-12T00:00:00.000 | {
"year": 2020,
"sha1": "c1d9960e444a056e8dc7e376d6ef86fbb4887a9f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2020.558069/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1d9960e444a056e8dc7e376d6ef86fbb4887a9f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
246894857 | pes2o/s2orc | v3-fos-license | The effect of knowledge on attitude of pregnant women in prevention of worm infections
The effect of knowledge on attitude of pregnant women in prevention of worm infections Sulastri1*, Diah Ayu Agus Triana2 Introduction: Helminthiasis is an endemic and chronic disease caused by parasitic worms with a high prevalence rate; it is non-fatal, but it affects the health of the human body by reducing the absorption of nutrients and proteins in infected individuals and reducing blood levels, so that if it occurs in pregnant women it can affect pregnancy and childbirth. This study aimed to examine the influence of knowledge on the attitudes of pregnant women towards the prevention of helminthiasis in the work area of the Gatak Health Center, Sukoharjo. Methods: This study used a descriptive observational research method with a cross-sectional approach, with 144 samples taken with a total sampling technique. Results: The statistical test shows a p-value of 0.000, which is less than alpha 0.05, meaning that there is a relationship between the knowledge of pregnant women and their attitude towards preventing the occurrence of worm infections. Conclusion: The pregnant woman's attitude is important because it influences the mother in the prevention and treatment of helminthiasis and affects her state of health.
INTRODUCTION
Worm infections, especially intestinal worm infections, are environmental diseases that remain a problem in Indonesia. The infecting worms can inhibit nutrient uptake and cause bleeding, reducing patient productivity. Prevalence across all age groups ranges from 40% to 60%, and up to 195 million people live in endemic areas of the world. 1 Studies in several other countries showed that, at Kasoa Polyclinic, Ghana, of 300 pregnant women whose stool samples were examined with the direct wet preparation and formol-ether concentration techniques, 43 (14.3%) had intestinal parasites. 2 A study in Ethiopia found that the prevalence of intestinal helminth infections among pregnant women in the 5 areas studied was 277 (37.3%). 3 Infection with intestinal worms has negative effects. This disease can weaken the patient's health, nutrition, intelligence, and performance, causing considerable economic losses, as it leads to loss of carbohydrates and proteins and to blood loss (anemia). 4 Worm infection is one factor that worsens anemia because, as the number of worms in the intestine increases, so does blood loss, which disrupts the iron balance as more iron is lost than supplied. 5 Worm species from the group of soil-transmitted helminths are still a health problem, namely Ascaris lumbricoides, Trichuris trichiura, Strongyloides stercoralis, and hookworms (Necator americanus and Ancylostoma sp). 6 Hookworms attach by the mouth to the mucous membrane of the small intestine, ingest blood, and travel from site to site in the lining of the intestine, leaving minimal bleeding and injury. 7 Worm infections can be transmitted through food contaminated with worm eggs that is not washed properly, or through ingested water containing worm eggs.
Infections are also caused by economic and environmental factors and poor personal hygiene. 2 Based on the health profile data for Sukoharjo in 2019 regarding environmental conditions, it was found that some houses still had dirty floors, especially in the kitchen, and lacked sewage drainage, so there were still puddles in the house, and household garbage had not been properly managed. Access to safe drinking water was also incomplete: some water did not meet the requirements for consumption because Coli bacteria were found, indicating that the water source may have been contaminated with feces or that there was a leak in the mains. 8 Worm infections can be controlled by regularly administering anthelmintics, improving environmental and personal hygiene, and providing health education to vulnerable groups. 9 One of the factors that cause helminthiasis is the knowledge of pregnant women about helminthiasis. The better the pregnant woman's knowledge, the better the mother will behave in preventing helminthiasis. Based on the problem described above, this study aimed to investigate pregnant women's knowledge of and attitudes towards helminthiasis in the work area of the Gatak Health Center, Sukoharjo.
METHODS
The present study is a cross-sectional descriptive observational study conducted in January-February 2021 on pregnant women in the work area of the Gatak Health Center, Sukoharjo Regency, Central Java, Indonesia. The study sample consisted of 144 pregnant women in the first to third trimesters of pregnancy, recruited using a total sampling technique. The inclusion criteria were pregnant women who could communicate well and were willing to become participants, while the exclusion criterion was pregnant women who did not complete the research questionnaires. The variables in this study were the characteristics of the pregnant women (age, gestational age, education, occupation, and when they last took antiparasitic drugs) as independent variables, and the level of knowledge and attitudes of the pregnant women as dependent variables.
The data collection technique used primary data obtained from questionnaires completed by the pregnant women. The questionnaire consisted of 12 questions about the mother's knowledge of helminthiasis and 17 questions about the mother's attitude towards the prevention of helminthiasis. The data were then evaluated using univariate and bivariate analyses. The univariate analysis describes the frequency distribution of the characteristics of the pregnant women, presented in a frequency distribution table. The bivariate analysis was conducted to determine the relationship between the characteristics and the knowledge and attitudes of pregnant women about helminthiasis, using chi-square analysis in the SPSS application.
Characteristics of Respondents
The results for the 144 respondents give the distribution of respondent characteristics shown in Table 1: 114 respondents (79.2%) were of productive age (20-35 years) and 30 respondents (20.8%) were of risk age. Most of the respondents were in the second trimester of pregnancy, up to 76 respondents (52.7%); 48 respondents (33.3%) were in the third trimester and 20 respondents (14.0%) in the first trimester. Ninety-two respondents (64%) already
Level of Knowledge and Attitude of Respondents
Of the 144 samples tested, the levels of knowledge and attitude are shown in Table 2; regarding the pregnant women's level of knowledge about infection with intestinal parasites, 90 respondents (63%) had a good level of knowledge. Regarding the effect of characteristics on knowledge, there is no influence between work and knowledge, and the variable of medication intake has a p-value of 0.320, which means that the time of the last use of medication has no effect on knowledge.
The Effect of Characteristics on Attitude
The following table shows the results of the statistical tests of the influence of the characteristic variables of pregnant women on attitudes. The age sub-variable has a p-value of 0.120 > alpha 0.05, so there is no influence of the mother's age on attitudes. The gestational age variable has a p-value of 0.062 > alpha 0.05, meaning that maternal gestational age does not influence attitudes. The education variable has a p-value of 0.00 < alpha 0.05, which means that there is an influence of education on the attitudes of the pregnant women. The occupation variable has a p-value of 0.000 < alpha 0.05, which means that there is an influence of occupation on the attitude of the pregnant women, and the medication intake variable has a p-value of 0.440, which means that there is no effect of the last medication intake on the attitude of pregnant women towards the prevention of worms. Table 5 shows that up to 68 of the pregnant women with a good level of knowledge also had a good attitude towards preventing helminthiasis. The statistical test results show a p-value of 0.000, less than alpha 0.05, indicating a relationship between the knowledge of pregnant women and the mothers' attitude towards preventing the occurrence of worms.
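As an illustration of how the knowledge-attitude association can be tested, the sketch below runs a chi-square test of independence in Python. The marginals (90 respondents with good knowledge, 86 with a good attitude, 68 with both, N = 144) follow the figures reported here and in the discussion, but the dichotomized two-by-two table itself is an assumption, since the published tables may use more categories.

```python
# Hedged sketch: chi-square test of independence between knowledge and attitude.
# Cell counts are derived from the reported marginals under a good / not-good split;
# the actual published cross-tabulation may differ.
import numpy as np
from scipy.stats import chi2_contingency

#                   attitude good   attitude not good
crosstab = np.array([[68,            22],    # knowledge good  (row total 90)
                     [18,            36]])   # knowledge other (row total 54)

chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```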
DISCUSSION
Of the 144 pregnant women in the work area of the Gatak Health Center, Sukoharjo, who were willing to fill out the questionnaire, most were mothers of productive age (20-35 years). Most of them were housewives and had, on average, a high school education. The results of study 10 showed that hookworm prevalence was higher among younger age groups, those with less than seven years of education, and farmers. The present study also found that the majority, 118 respondents (81.9%), had never taken antiparasitic drugs before and did not know that, as adults, they also had to take antiparasitic medicines. To combat intestinal parasites in pregnant women, the Indonesian government has implemented a program for the administration of Fe tablets and for detection and treatment from the second and third trimesters under medical supervision. 1 This is an attempt to control and eradicate helminth infections to prevent anemia, low birth weight, and the risk of infant death. 11 Table 2 shows that 90 respondents (63%) had a good level of knowledge, and 86 respondents (59%) had a good attitude towards helminthiasis prevention in pregnant women. Knowledge can be influenced by various factors, such as education, information/media, socio-cultural and economic conditions, environment, experience, and age, while attitudes are influenced by factors such as personal or other people's experiences, the influence of culture and media, education and religion, and emotional factors. 12 Although the results in Tables 3 and 4 show that respondent characteristics such as age, gestational age, education, occupation, and time of the last drug intake did not influence the level of knowledge (statistical test p-values > 0.05), this may be due to other factors not examined in this study, such as information, socio-cultural and economic conditions, environment, and the experience of the pregnant woman; however, similar results were found in a study in which education and work influenced the attitudes of pregnant women.
Good knowledge will lead mothers to understand helminth infections, and mothers' understanding of helminth infections will influence their attitudes. Based on the research results reported in 13, the better the knowledge, the better the behavior to avoid helminthiasis. Consistent with this, in the results of the present study, up to 68 of the respondents with a good level of knowledge also have a good attitude. Still, not all of them have a good attitude. This can be influenced by several factors, such as personal or other people's experience and a lack of information, given that many had never taken anti-worm drugs.
A lack of information can affect health, and poor personal hygiene can increase the risk of worm infection. 11 Other related factors include exposure to the same sources as animals, 10 open defecation, the habit of washing hands with soap before eating and after defecation, the habit of cutting nails, and washing vegetables before processing. 3,14,15 Worm infections can cause malnutrition and bleeding leading to anemia. Hookworms, in particular, can absorb nutrients from the food intake of the host (humans), so that the host experiences malabsorption and loses nutrients. Hookworms also swallow blood by attaching to the lining of the upper small intestine, which can lead to digestive tract bleeding and chronic anemia during pregnancy. 16 Many studies 17-20 have shown that helminth infections affect pregnancy and childbirth. Worm screening of pregnant women during prenatal care visits is therefore necessary, together with training on personal hygiene and environmental cleanliness, household waste disposal, and housekeeping, in order to change attitudes in prenatal care and towards the prevention of helminthiasis in pregnant women.
The limitation of this study is that not all pregnant women in the working area of the Gatak Health Center could participate. Some pregnant women did not dare to attend antenatal care at health services for fear of contracting COVID-19. As a result, the number of respondents was not maximal.
CONCLUSION
The level of knowledge and attitudes of pregnant women about helminthiasis in the work area of Gatak Health Center, Sukoharjo, are mostly good. However, there are still pregnant women whose level of knowledge and attitude is only fair, or even poor. What accounts for differences in mothers' knowledge and attitudes is the personal experience that creates behavioral habits. There is a need to improve pregnant women's health education to prevent and control intestinal worms and their effects on pregnancy and childbirth. | 2022-02-17T16:12:39.842Z | 2021-12-30T00:00:00.000 | {
"year": 2021,
"sha1": "e47a521dcfdfb507f278633987a998b8a446ba50",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15562/bmj.v10i3.2850",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "664b713bf77a0297c6257934152c9cf76fd836bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
227158742 | pes2o/s2orc | v3-fos-license | Continuous ultrafiltration during extracorporeal circulation and its effect on lactatemia: A randomized controlled trial
Introduction Hyperlactatemia occurs during or after extracorporeal circulation in the form of lactic acidosis, increasing the risk of postoperative complications and the mortality rate. The aim of this study was to evaluate whether continuous high-volume hemofiltration with volume replacement through a polyethersulfone filter during the extracorporeal circulation procedure decreases postoperative lactatemia and its consequences. Materials and methods This was a randomized controlled trial. Patients were randomly divided into two groups of 32: with or without continuous high-volume hemofiltration through a polyethersulfone membrane. Five patients were excluded from each group during the study period. The sociodemographic characteristics, filter effects, and blood lactate levels at different times during the procedure were evaluated. Secondary endpoints were studied, such as the reduction in the intubation time and time spent in ICU. Results Lactatemia measurements performed during the preoperative and intraoperative phases were not significantly different between the two groups. However, the blood lactate levels in the postoperative period and at 24 hours in the intensive care unit showed a significant reduction and a possible clinical benefit in the hemofiltered group. Following extracorporeal circulation, the mean lactate level was higher (difference: 0.77 mmol/L; CI 0.95: 0.01–1.53) in the nonhemofiltered group than in the hemofiltered group (p<0.05). This effect was greater at 24 hours (p = 0.019) in the nonhemofiltered group (difference: 1.06 mmol/L; CI 0.95: 0.18–1.93) than in the hemofiltered group. The reduction of lactatemia is associated with a reduction of inflammatory mediators and intubation time, with an improvement in liver function. Conclusions The use and control of continuous high-volume hemofiltration through a polyethersulfone membrane during heart-lung surgery could potentially prevent postoperative complications. The reduction of lactatemia implied a reduction in intubation time, a decrease in morbidity and mortality in the intensive care unit and a shorter hospital stay.
Introduction
Lactate is a biomarker whose increase or decrease can serve as a predictor of morbidity and mortality in intensive care units (ICUs) [1]. Arterial lactatemia results from the production and elimination of lactate molecules. The concentration of lactate in the body is generally less than 2 mmol/L [2]. When there is a decrease in the supply of oxygen, such as in anemia or low cardiac output, there is an increase in anaerobic metabolism with the conversion of pyruvate to lactate, increasing its concentration in the blood [3].
Lactate can also be synthesized in critical patients, especially in cases of cardiogenic shock, acute respiratory failure, pneumonia, or sepsis [4,5]. These pathologies, together with a decrease in clearance due to renal or hepatic failure, contribute to an increase in lactate levels in patients.
Cardiac surgery is relevant due to its relationship with cardiac biochemical processes during extracorporeal circulation (ECC), in which the presence of elevated lactatemia is a predictor of postoperative outcomes [6]. In fact, levels higher than 4.4 mmol/L are related to an increased stay in the ICU and general ward [7].
During ECC, an increase in myocardial and peripheral tissue lactate has been demonstrated, associated with impaired tissue oxygenation and early-onset hyperlactatemia, probably due to accelerated anaerobic metabolism as a result of increased circulating epinephrine and inflammatory proteins [3].
Hyperlactatemia is considered present when the average blood lactate value exceeds 2 mmol/L [8]. Its occurrence during or after ECC increases postoperative complications, such as infections, while its decrease in the first 24 hours is associated with a decreased mortality rate [9]. However, the occurrence of lactic acidosis during ECC is a complex phenomenon, depending on factors such as hemodilution and the duration of ECC [10,11]. It is an additional independent risk factor that leads to poor postoperative outcomes. Without the possibility of adequate elimination from the bloodstream, lactate can reach levels above 4 mmol/L, which are associated with an increased risk of postoperative morbidity, including a higher rate of 30-day mortality after cardiac surgery [8,[12][13][14][15].
Intraoperative lactate measurement is a reliable assessment performed by perfusionists to monitor tissue perfusion during surgical procedures [16]. The objective of ultrafiltration during ECC, in addition to eliminating excess liquid, is to eliminate toxic and pro-inflammatory substances. It is a technique that improves hemodynamics, lung function, and hemostasis [16]. Although this combined practice (conventional and modified ultrafiltration) is considered a safe technique during ECC [17], some authors have associated conventional ultrafiltration with the occurrence of intraoperative hyperlactatemia during the ECC procedure and recommend its use only in situations where the patient suffers from renal failure, a positive fluid balance, poor response to diuretics, or prolonged ECC (more than 120 minutes) [18].
Continuous high-volume hemofiltration with volume replacement is used throughout the ECC procedure to achieve the benefits of both techniques. A polyethersulfone membrane is used for the transfer of solutes by convective drag, according to the pore size of the membrane, to achieve electrolyte and lactate purification. It is important to use solutions with a low lactate content to fill the circuits of the ECC pump. Acetate-containing solutions produce supraphysiological plasma acetate concentrations throughout the ECC process [19,20], and even small concentrations of acetate produce pro-inflammatory and cardiotoxic effects [21]. However, they are widely used as priming solutions in ECC [22]. Therefore, the objective of our study was to evaluate whether continuous high-volume hemofiltration with volume replacement through a polyethersulfone filter during the ECC procedure decreases postoperative lactatemia and its consequences.
Primary hypothesis
Continuous high-volume hemofiltration with volume replacement by the use of a polyethersulfone membrane during the ECC procedure in patients undergoing cardiac surgery decreases intraoperative lactatemia.
Design
A randomized controlled trial was conducted between June 2017 and February 2018 at the Puerta del Mar University Hospital (Spain). No variations were made to the trial design or outcomes after trial commencement. This paper uses a trial protocol and the guidelines for reporting parallel group randomized trials (CONSORT); see S1 Protocol and S1 Checklist. The authors confirm that all ongoing and related trials for this drug/intervention are registered.
Sample size calculation
Consecutive sampling was performed; as patients met the inclusion criteria, they were selected to participate in the study. To determine the sample size, the variance of the response variable was calculated in a reference group [18]. The basic response variable in our study was the lactate elimination rate, expressed as amount per unit of time. To determine the sample size, we assumed an alpha risk of 0.05, a power of 0.80, a minimum clinically important difference of 0.5, and a standard deviation of the outcome variable of 0.7. It was finally concluded that a total of 64 participants was needed.
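The reported figure of 64 participants can be checked with a standard two-sample power calculation; the sketch below uses Python's statsmodels, which is an assumption about tooling, since the paper does not name the software used for this calculation.

```python
# Hedged sketch: sample size for a two-group comparison with the stated assumptions
# (alpha = 0.05, power = 0.80, minimum important difference 0.5, SD 0.7).
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5 / 0.7          # standardized difference (Cohen's d) of about 0.71
n_per_group = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
print(round(n_per_group))        # about 32 per group, i.e., 64 participants in total
```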
Ethics
This research conformed to the principles described in the Declaration of Helsinki and was approved prior to its initiation by the ethics committee (IRB) of the Puerta del Mar University Hospital on December 2nd, 2016. All patients who participated in the study signed an informed consent form.
Participants, recruitment, randomization, and treatment allocation
The inclusion criteria were patients not requiring urgent clinical intervention, patients undergoing normothermic surgical procedures with extracorporeal circulation, and patients with a minimum time before decannulation of more than 60 minutes (myocardial reperfusion completed, aorta unclamped, and ECC completed). The exclusion criteria were patients who did not sign the informed consent form, patients with previous renal or hepatic failure, and procedures without ECC. Although some oral antidiabetic agents, such as metformin, may alter lactate levels [23], patients with diabetes were not excluded from the study because the preoperative lactate values of these patients were within the limits of normality.
For patient recruitment, a preliminary interview was conducted 24 hours before surgery, after hospital admission; in this interview, the patients were informed about the components of extracorporeal circulation, the hemofiltration technique, and the study overview. They were also informed that, although they had signed the informed consent form, they could refuse to participate in the study at any time. Recruitment began on September 1st, 2017 and ended on February 28th, 2018.
The study was blinded to the patients, data analysts, and ICU staff. The allocator randomly divided the patients into a control group (CG) or a hemofiltered group (HG). In the HG, a polyethersulfone filter was used throughout the ECC, while in the CG, conventional procedures without hemofiltration were used. The surgical procedures performed fell into four groups: 44 cases of valvular surgery (68.75%), 9 cases of coronary surgery (14.06%), 6 cases of combined valvular and coronary surgery (9.38%), and 5 cases of aortic and ascending aortic replacement surgery (with the Bentall technique) (7.81%).
The allocator was assigned by the head of the hospital's ethics committee. He randomly assigned patients into eight blocks, all being equivalent in all procedures except in the treatment maneuvers; there were no notable differences between the possible confounders measured in the two analyzed groups. There was no stratification.
Procedure
In the present study, all patients were operated on under propofol-induced general anesthesia and maintained during the procedure with the volatile anesthetic agent sevoflurane, including during the ECC period. All procedures were performed through a median sternotomy, and normothermia (309.15 K; 36˚C; 96.8˚F) was maintained in all patients throughout the procedure.
An ECC open circuit consisting of a set of polyvinyl chloride tubes and a polypropylene membrane oxygenator with an integrated arterial filter with a coating based on phosphorylcholine molecules was used. The ECC device consisted of a biopump, and all procedures were performed with a centrifugal pump. In HG, the hemofiltration membrane used was a membrane made of polyethersulfone, with a surface area of 1.35 m 2 .
The cardioplegia solution and procedure used in the surgical procedure were a modification of those developed by Calafiore et al. [24], and the cardioplegia solution was administered antegrade through the aorta and retrograde through the coronary sinus. For this purpose, 80 mEq of KCl and 1.5 grams of MgSO4 were mixed in a 50 cm3 syringe and delivered using a volumetric infusion pump connected to the circuit through a three-way stopcock.
The data were collected with a CONNECT system (LivaNova Deutschland, Munich, Germany). To measure lactate levels, a GEM Premier 4000 analyser was used, with amperometric biosensors connected to the ECC pump and a recording system for further statistical analysis [25,26]. The blood flowed from the oxygenator to the hemofilter through a recirculation line with a flow of 100 to 500 mL/min, depending on the stage of surgery, without exceeding the maximum transmembrane pressure of 500 mmHg recommended by the manufacturer. The effluent rate was 110 mL/min, corresponding to an average of 80 mL/kg/h, similar to the HERO-ICS study [27].
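The two effluent figures quoted above are mutually consistent for a patient of roughly 80 kg, as the small check below shows; the implied body weight is an inference, not a value reported in the text.

```python
# Quick consistency check of the quoted effluent rate and dose (illustrative only).
effluent_ml_per_min = 110
dose_ml_per_kg_per_h = 80

effluent_ml_per_h = effluent_ml_per_min * 60                  # 6,600 mL/h
implied_weight_kg = effluent_ml_per_h / dose_ml_per_kg_per_h  # about 82.5 kg
print(effluent_ml_per_h, round(implied_weight_kg, 1))
```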
The HG was subjected to high-volume hemofiltration together with replacement of the hemofiltered liquid with a solution used in extrarenal purification techniques comprising 2 mmol/L of K+, 32 mmol/L of HCO3-, 111.5 mmol/L of Cl-, 140 mmol/L of Na+, 0.5 mmol/L of Mg2+, 1.75 mmol/L of Ca2+, 6.1 mmol/L of glucose, and 3 mmol/L of lactate. Pursuing a zero balance, this solution allows for the replacement of 3,000 cm3 per hour with a maximum of 60,000 cm3/day.
Data analysis
The studied variables included biometric and analytical data that were analyzed preoperatively (age, sex, height, weight, and EuroSCORE), intraoperatively (diuresis, time of clamping, time of ECC, and attendance time between unclamping of the aorta and the completion of ECC), and post-operatively at 24 hours after surgery or thereafter (time of use of inotropic agents, C-reactive protein, intubation time, and time spent in ICU). Lactate and hemoglobin levels were measured at all stages up to 24 hours after surgery. In the HG group, ultrafiltration was performed from the beginning of ECC to the end. Blood samples were collected prior to initiation of ECC; every 20 minutes during the procedure, with the highest value recorded; at the end of the ECC; and 24 hours after admission to the ICU.
The parameters of oxygen supply, oxygen consumption, venous oxygen saturation and oxygen extraction were continuously recorded throughout the procedure [28]. After hemodynamic and respiratory stability with effective cough and neurological stability, spontaneous T-tube ventilation was initiated. If the patient's stability persisted, extubation was performed [29]. These variables and procedures allowed us to study secondary endpoints, including possible benefits in patients with renal failure and reductions in intubation time, time spent in the ICU and C-reactive protein levels.
Quantitative variables were expressed as arithmetic means and standard deviations, or medians and interquartile ranges. Qualitative variables were expressed as frequencies and percentages. The normality of continuous variables was evaluated with the Kolmogorov-Smirnov test. For the analysis of intergroup changes, the following analyses were performed: the ANOVA-RM test with a post hoc least significant difference (LSD) test, Student's t-test to evaluate the difference between the means of two independent groups, and the Mann-Whitney U test for non-normally distributed variables. To evaluate the statistical independence of the categorical variables, the chi-square test was used. Statistical significance was considered at p<0.05.
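For illustration, the test-selection logic described above can be sketched as follows on synthetic data; this is not the trial dataset, and the repeated-measures ANOVA step would require an additional package.

```python
# Illustrative sketch (synthetic data, not the trial data) of the test
# selection described above: Kolmogorov-Smirnov for normality, Student's
# t-test or Mann-Whitney U for two independent groups, chi-square for
# categorical variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cg = rng.normal(2.5, 0.8, 27)        # control-group values (synthetic)
hg = rng.normal(2.0, 0.7, 27)        # hemofiltered-group values (synthetic)

def compare_groups(a, b, alpha=0.05):
    # Normality of each group via a KS test against a fitted normal
    normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
        for x in (a, b)
    )
    if normal:
        return "Student t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

name, p = compare_groups(cg, hg)
print(name, f"p = {p:.3f}")

# Categorical outcome (e.g. renal failure yes/no per group): chi-square test
chi2, p_cat, dof, _ = stats.chi2_contingency(np.array([[3, 24], [1, 26]]))
print(f"chi-square p = {p_cat:.3f}")
# The repeated-measures ANOVA with LSD post hoc used for intragroup changes
# is not in scipy; statsmodels' AnovaRM is one way to reproduce that step.
```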
Results
Of the 64 patients enrolled in this study, 32 were randomly assigned to each group (CG and HG). During the assignment process, the following patients were excluded from the CG: 1 due to the need for ventricular assistance at the end of ECC [30] and 4 who required hemofiltration due to preoperative anemia. In the HG, 3 patients were excluded from the study due to bleeding that required reintervention within 24 hours after surgery, 1 was excluded due to mediastinitis requiring a prolonged stay in the ICU, and 1 was excluded due to death from intraoperative vasoplegia without response to vasoactive drugs. Fig 1 shows the flow of participants throughout the trial. Table 1 shows the descriptive results of the sociodemographic, anthropometric and biochemical variables analyzed in both groups; with respect to the sex of the patients studied, there were no significant differences between the groups. When segmenting according to the patients' EuroSCORE risk grades, the results indicate that in the group of low-risk patients the difference in the time of use of inotropics is large. In this case, even with the high variability in the CG and the reduced number of low-risk patients on the EuroSCORE, the difference is statistically significant (p = 0.02). Furthermore, the effect size (very large: 51.5%) supports the existence of this relationship, according to which the time of use of inotropics is higher in the CG (Table 2). In addition, the attendance time between unclamping of the aorta and the completion of ECC is less variable and reduced in low-risk EuroSCORE HG patients, so hemodynamic parameters remain more stable than in CG patients with the same EuroSCORE (Z_U = 2.32; p = 0.02). There were no significant differences among the remaining patients with medium and high EuroSCORE risk for the variables time of use of inotropic agents and attendance time between unclamping of the aorta and the completion of ECC.
Lactate levels
Lactate measurements were performed by perfusionists at baseline, during ECC (the maximum value was considered), post-ECC, and at 24 hours in each group. In both groups, there was a clear elevation in the mean values obtained during and after ECC as well as at 24 hours with respect to the reference values. The differences between the measurements were significant (p<0.001), with an effect size of 30.6% in the CG and 37.1% in the HG.
When the initial lactate measurement was excluded and the comparison was made with only the latter three measurements (maximum during ECC, post-ECC, and at 24 hours), the analysis showed two results. First, the CG retained statistical significance, but the effect size was reduced to 12.7%, which is moderate-high. Second, there were no significant overall differences in the HG (mild effect of 4.1%). As such, in both groups, these results clearly demonstrated that the differences between the previous value and the latter three were significant (p<0.001), with the previous value being lower than all the others. Likewise, in the HG the values of these three measures did not differ significantly from each other, whereas in the CG lactate increased significantly from the maximum ECC value to the post ECC (p<0.01) and 24-hour values (only p<0.05 due to the high variability in this measurement). Between these last two measurements (after ECC and 24 hours), the difference was no longer significant (Table 3).
Regarding the analysis between groups (Table 4), the results show that there is also no statistically significant difference between the mean values of maximum lactate during ECC. However, from that point on, significant differences appear. At the post-ECC time point, the mean lactate value is higher (difference: 0.77 mmol/L; CI 0.95: 0.01-1.53) in the CG (p = 0.047), although with a moderate effect size (7.4%). The 24-hour mean lactate value is even higher (p = 0.019) in the CG (difference: 1.06 mmol/L; CI 0.95: 0.18-1.93).
Finally, an intergroup analysis was performed on different parameters. Statistical significance was found for intubation time, and there was a noticeable difference in the length of stay in the ICU between the two groups. The length of stay in the ICU was lower in the HG than in the CG. The relationship between this time factor and the lactate levels was not the same in each group. In the CG, the only possible relationship between these two variables was a direct linear relationship (p<0.001; R 2 = 55.8%). In the HG, although the most likely relationship was also linear (p<0.001; R 2 = 65.5%), the decrease in the last measurement indicated a quadratic-type association (p<0.01; R 2 = 30.9%).
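The linear-versus-quadratic comparison mentioned above can be illustrated with a minimal fitting sketch on synthetic data (the R² values quoted in the text are those of the study, not of this example):

```python
# Sketch of a linear-vs-quadratic comparison between lactate and ICU stay
# on synthetic data; not the study data.
import numpy as np

rng = np.random.default_rng(1)
lactate = rng.uniform(1.0, 6.0, 30)                   # mmol/L, illustrative
icu_days = 1.5 + 0.8 * lactate + rng.normal(0, 1, 30)

def r_squared(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

print(f"linear fit    R^2 = {r_squared(lactate, icu_days, 1):.2f}")
print(f"quadratic fit R^2 = {r_squared(lactate, icu_days, 2):.2f}")
# Comparing the two R^2 values (or an F-test on the extra term) is how a
# quadratic-type association, as reported for the HG, would be assessed.
```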
The mean values of the two arms are similar, with a small difference of 8.5 minutes; the time in the CG is higher, although the difference does not reach statistical significance. There is no difference in the use of blood transfusions between the two groups.
The intubation time variable is not normally distributed (p<0.001 in the Kolmogorov-Smirnov test) due to a large asymmetry, with an accumulation of cases at low values versus very few at high values. The overall mean is 6.48 hours (CI 0.95: 5.13-7.84), with a median of 5 hours and a range of 1 to 31 hours. Using the Mann-Whitney U test, statistically significant differences were found (p<0.05): according to both the mean and the median, intubation time is somewhat higher (by about 3 hours) in the CG. The effect size corresponding to this significance is moderate (8.6%).
The time spent in the ICU is also not normally distributed (p<0.01 in the Kolmogorov-Smirnov test) due to the concentration of cases at low values, as with the previous variable. The mean time of the total group is 4 days (CI 0.95: 3.33-4.74), and the central values (mean and median) of both groups are very similar to each other.
There is no statistically significant difference between groups in renal failure. In fact, the number of cases with renal failure is so small that, in spite of the observed difference, this result must be interpreted with caution and does not constitute sufficient statistical evidence of an effect.
C-reactive protein is normally distributed, with an average value of 81.69 mg/dL (CI 0.95: 71.53-91.85). The mean value of the CG is slightly higher than that of the HG, but this difference does not reach statistical significance, so our data do not allow us to conclude that filter use is statistically related to C-reactive protein values (Table 5).
Discussion
Several authors have identified preoperative factors that favor the onset of hyperlactatemia during ECC [13]. Patients with type II diabetes mellitus often show increased lactate, associated with reduced aerobic oxidative capacity and restricted lactate transport. There are also factors that associate the increase in lactate concentration with age, sex, and comorbidities. Anemia is another factor associated with hyperlactatemia, by decreasing oxygen supply and producing tissue hypoxia even with normal intravascular volume [31,32].
However, there are other mechanisms responsible for the appearance of hyperlactatemia in the intraoperative period of cardiac surgery with ECC: (1) the duration of surgery with ECC, with a relationship directly proportional to the time of surgery [10,33], and (2) a deficit in oxygen intake and consumption as well as an increase in its extraction [28] as they affect the morbidity of these patients, especially when lactate levels are higher than 4 mmol/L [7]. Nonetheless, the results obtained show that high-volume hemofiltration pursues a zero balance at the end of ECC, managing to mitigate the presence of high amounts of lactate regardless of sex and previous pathologies in patients during the preoperative period as well as the duration of ECC.
In the context of this background, it was important to assess the difference between the two studied groups; in the preoperative period, the elevation of lactate in the HG from ECC until the last analysis at 24 hours had no significance (p>0.05). However, there was a significant increase in the CG (p<0.01), which was independent of the time of ECC.
We must emphasize that in our study at all temporal points the optimal values of oxygen contribution, consumption, and extraction were maintained because gases and the lactate levels were measured to control pH and oxygenation. Even so, hyperlactatemia appeared during and after ECC, probably due to alterations in microcirculation [3]. However, the level of hyperlactatemia was lower in the HG than in the CG, contrary to what Soliman et al. reported [18]; this is possibly attributable to the low concentration of lactate in the extracorporeal circuit priming solution used in this study (3 mmol/L) compared with that in Ringer's solution (27 mmol/L).
It is important to highlight the time period from ECC departure until 24 hours after surgery. In the CG, lactate continued to rise; however, in the HG, the reverse occurred. This quadratic association can be attributed to a reduced need for clearance of lactate by the kidneys and liver, thus improving liver function [34,35]. This theory is also supported by the fact that in postoperative patients who underwent cardiac surgery with ECC, increased lactate in the absence of dysoxia can be caused by an exacerbated inflammatory response, mitigated by the use of continuous high-volume hemofiltration with a polyethersulfone filter [36], thus resulting in a zero balance at the end of the procedure [37].
The results obtained in this study showed a decreased lactate level in the HG during ECC, at the end of ECC, and 24 hours after surgery. The HG did not show significant changes in contrast to the CG, possibly due to the permeability of the polyethersulfone membrane. This membrane has a lactate screening coefficient equal to 1; thus, the concentration of lactate obtained in the effluent is equal to that existing in the plasma.
Lactate elevation in the intra- and postoperative periods is due to complex mechanisms rather than a single cause [38,39], just as not all patients develop it in the same way. A number of mechanisms can be involved during ECC, such as low oxygen supply [40] and prolonged ECC time [41]. Currently, deficient lactate clearance in postoperative patients has been demonstrated to be an independent risk factor for poor outcomes after cardiac surgery with ECC [42].
On the other hand, lactate levels greater than 3 mmol/L at 6 hours after surgery increase the probability of major complications [14,43]. Therefore, these results suggest that in both groups, lactate varied significantly depending on the condition/time at which the measurement was performed.
As for secondary outcomes, the use of inotropic agents in the postoperative period in relation to the EuroSCORE indicates that low-risk patients (0-2 points) benefit more from the technique of continuous hemofiltration with volume replacement, as non-hemofiltered patients require a longer time of use of inotropes. In fact, according to a recent study, an increased time of use of inotropes implies an increased risk of morbidity and mortality after cardiac surgery [44]. In the same way, the study shows that there is an improvement in the hemodynamic stability of hemofiltered patients at low EuroSCORE risk due to a shorter attendance time. However, these results should be interpreted with caution due to the low number of patients included in this category of the EuroSCORE.
Lactate reduction decreases mechanical ventilation time of patients undergoing cardiac surgery with ECC. The length of stay in the ICU was also significantly reduced as a postoperative intubation time greater than 12 hours is directly related to an increased length of stay [45]. Moreover, it prevents the appearance of high lactate levels caused by long stays in the ICU [46,47]. This result is due to the use of hemofiltration and extracorporeal circuit priming solution [48].
The use of intraoperative hemofiltration may be beneficial to the patient in the short term as well as to patients with preoperative renal dysfunction in the long term [49]. Additionally, it may also be recommended for patients with previous liver disease [36].
Strengths and limitations
In some countries, perfusionists collaborate with surgeons and anesthetists to control and maintain ECC in patients before, during, and after surgery. The findings of this study involve a series of interventions in clinical perfusionist practice to eliminate the increase in serum lactate in surgical interventions with ECC and thereby reduce intubation time and morbidity and mortality in the ICU as well as improved liver function.
The main limitation of this study was the lack of previous research on continuous high-volume hemofiltration with volume replenishment during ECC, preventing proper comparisons with other studies. Moreover, further research focusing on these results with an equal or larger sample of participants is needed to obtain more consistent results.
Other limitations were the variability in the surgical procedures, the interventions by different surgeons during the procedure that could have affected the time of surgery, the time of ECC and the different medical and nursing teams during the postoperative period in the ICU [11]. In addition, there are currently no analytical determinations that can discern the type of hyperlactatemia that patients develop. However, this study aimed to reduce hyperlactatemia regardless of its origin.
Conclusions
Although the occurrence of hyperlactatemia is common at the end of the ECC procedure, protocols aimed at reducing intra-and postoperative lactate levels through continuous high-volume hemofiltration with volume replacement and a zero balance favor the elimination of lactate during the ECC procedure. This is reflected in the lactate levels at the end of ECC and at 24 hours. The serum lactate levels in patients after ECC are decreased when continuous high-volume hemofiltration with a polyethersulfone membrane is used. The reduction in lactate levels within 24 hours after ECC in the CG was related to a decreased concentration of serum lactate after ECC, allowing improved purification. Moreover, these results could potentially stabilize the hemodynamics in low cardiac risk patients. This study could be fundamental to establishing specific protocols for its use in cardiac surgery with ECC, which, in combination with postoperative nursing care, could shorten the duration of care of individuals undergoing this type of intervention. | 2020-11-25T14:06:50.736Z | 2020-11-23T00:00:00.000 | {
"year": 2020,
"sha1": "715193a17591bc10facdd575eba5dcd757980895",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0242411&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5b4ad0bfd42471ff21eb864da2add17967b8ec8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266205516 | pes2o/s2orc | v3-fos-license | Emphysematous Gastritis: A Case Series on a Rare but Critical Gastrointestinal Condition
Emphysematous gastritis (EG) is a rare and life-threatening condition characterized by gas-forming microorganisms causing gas to accumulate within the stomach wall. It has a high mortality rate and is associated with risk factors like gastroenteritis, alcohol use disorder, diabetes mellitus, renal failure, recent abdominal surgery, long-term corticosteroid use, and ingestion of corrosive agents. Diagnosis is challenging due to its rarity and nonspecific symptoms, including severe abdominal pain, coffee-ground emesis, fever, and signs of systemic infection. We present two cases of patients with signs and symptoms of EG, where prompt diagnosis and treatment were achieved, avoiding further complications. Surgical intervention was avoided due to the successful response to conservative treatment. These cases highlight the importance of early detection and intervention in improving patient outcomes and preventing complications associated with EG.
Introduction
Emphysematous gastritis (EG) is an uncommon and life-threatening medical condition characterized by gas within the stomach wall. Fraenkel described the first documented case of EG in 1889 [1], and it is associated with a high mortality rate ranging from 55% to 61% [2,3]. Emphysematous gastritis represents a severe variant of gastritis primarily caused by gas-forming bacteria, with Streptococcus species, Escherichia coli, Enterobacter species, Clostridium species, Pseudomonas aeruginosa, Staphylococcus aureus, Candida species, and Mucor species being prominent culprits [4]. In typical cases, a computed tomography scan reveals gas appearing non-linearly along the inner surface of the stomach wall [5]. Several EG risk factors have been identified, including gastroenteritis, alcohol use disorder, diabetes mellitus, end-stage renal disease, prolonged use of corticosteroids or nonsteroidal anti-inflammatory drugs (NSAIDs), previous abdominal surgery, and caustic ingestion [2]. Given its critical nature, EG necessitates immediate medical attention and management. Nevertheless, its rarity and the limited number of reported cases make it challenging for clinicians to diagnose and treat it effectively. Therefore, timely identification and appropriate intervention are essential to enhance patient outcomes and reduce the mortality associated with this condition.
Case Presentation
Case 1
A 35-year-old man with a medical history of asthma and hearing loss due to a prior head injury presented to the emergency room with complaints of abdominal pain, non-bilious vomiting with specks of blood, and five episodes of diarrhea over the past two days. He also reported a fever with chills, a dry cough, and nasal congestion. The abdominal pain was diffuse but mainly localized to the lower and middle abdomen, described as sharp, and scored 7/10 in intensity with no radiation. His home medication included only albuterol (as needed), and he denied any use of NSAIDs. On physical examination, the patient was febrile with a temperature of 101.8°F, tachycardic with a heart rate of 125 beats per minute, and hypertensive with a blood pressure of 150/100 mmHg. Abdominal examination revealed tenderness in the middle and lower abdomen with guarding but preserved bowel sounds. Initial laboratory investigations showed a hemoglobin level of 14 g/dl and a leukocyte count of 11.7 x 10^9 cells/L, with a neutrophilic predominance of 79%. The patient had elevated liver enzymes, with aspartate aminotransferase (AST) and alanine transaminase (ALT) levels of 88/102 U/L and alkaline phosphatase of 114 U/L. His lipase level was mildly elevated at 56 U/L. A respiratory viral panel was positive for rhinoenterovirus and negative for COVID-19. The chest X-ray was unremarkable. CT imaging of the abdomen and pelvis revealed circumferential thickening of the gastric wall with multiple small collections of air within the gastric wall and mild peri-gastric fat stranding, leading to the favored diagnosis of EG based on the clinical presentation and radiological findings (Figures 1-2).
FIGURE 2: Coronal view of CT abdomen and pelvis without contrast
Circumferential thickening of the gastric wall with multiple small collections of air within the gastric wall and mild peri-gastric fat stranding (blue arrow points to intramural gas in the stomach).
A nasogastric tube was placed for gastric decompression. He received IV fluids and once-daily IV pantoprazole, along with a combination of IV antibiotics (piperacillin-tazobactam, vancomycin, and metronidazole) and IV metoclopramide for antiemetic management. Blood cultures were negative. The patient's symptoms gradually improved during his hospital stay, and he tolerated oral feeding well. Follow-up laboratory tests showed improved liver enzyme levels and normalization of leukocyte counts. After completing five days of conservative management, the patient was discharged home.
Case 2
An 81-year-old man with a history of hypertension, dementia, chronic kidney disease, anemia, and benign prostatic hyperplasia with a chronic indwelling catheter was brought from a nursing home to the hospital with complaints of abdominal pain for three days. He reported one day of nausea and vomiting and an episode of coffee-ground emesis. A previous upper GI endoscopy one month earlier revealed grade C esophagitis and chronic mild gastritis. The patient had recently been admitted for abdominal pain and managed for a large fecal ball with ileus, receiving an aggressive bowel regimen. His home medications included metoprolol tartrate, multivitamins, ascorbic acid, and ferrous sulfate; he denied any intake of corticosteroids or NSAIDs. On arrival, the patient had one more episode of coffee-ground emesis but remained hemodynamically stable. Physical examination revealed a distended, tympanic abdomen with rebound tenderness involving the left lower quadrant and normoactive bowel sounds. Initial investigations showed a hemoglobin level of 11.4 g/dl and a leukocyte count of 16.4 x 10^9 cells/L. Elevated lactic acid, blood urea nitrogen (BUN), and creatinine levels were noted (3.0 mmol/L, 72 mg/dl, and 1.6 mg/dl, respectively). Ammonia levels were elevated to 85 umol/L, while liver enzymes and bilirubin were within normal limits. The chest X-ray showed no acute cardiopulmonary abnormality. Non-contrast CT of the abdomen and pelvis demonstrated a distended stomach with questionable pneumatosis within the wall of the fundus, portal venous gas within the liver, and thickening of the wall of the ascending colon, suggestive of bowel ischemia, as well as severe fecal impaction involving the rectum (Figures 3-4). Emphysematous gastritis was favored based on his presentation and the radiological findings. The patient was admitted to the ICU, and a nasogastric tube was placed for decompression, revealing coffee-ground aspirate. Intravenous fluids and a combination of IV antibiotics (piperacillin-tazobactam, vancomycin, and metronidazole) were initiated. Gastroenterology was consulted for the coffee-ground emesis and recommended a bolus of IV pantoprazole followed by twice-daily IV pantoprazole. The patient's constipation was managed with an aggressive bowel regimen involving enemas, lactulose, senna, docusate, and manual disimpaction. Follow-up labs showed normalization of ammonia levels. Urinalysis indicated pyuria, positive leukocyte esterase, bacteriuria, and crystals in the setting of a chronic indwelling catheter, but urine culture was negative; this was subsequently managed by replacing the Foley catheter. During his hospital stay, the patient showed clinical improvement, started oral feeds, and was transferred to the medicine floor. His symptoms resolved, and he tolerated feeding better. Follow-up labs revealed two sets of negative blood cultures and labs normalized to his baseline. The patient was discharged back to the nursing home on oral proton pump inhibitors after successful conservative management.
Discussion
Emphysematous gastritis is a rare, life-threatening condition characterized by stomach inflammation and gas produced by microorganisms within the gastric wall [6]. It is a severe form of gastritis that can rapidly progress to necrosis, perforation, and sepsis. The clinicopathological description of this condition was initially documented by Fraenkel in 1889, and Weens first established its radiological diagnosis in 1946 [7]. It exhibits a striking clinical picture of abdominal pain, sepsis, and shock and is associated with a high mortality rate of 60% [8]. The most common causative organisms, including Streptococci, E. coli, Enterobacter species, Clostridium welchii, S. aureus, and P. aeruginosa, account for most cases. Other causative organisms, such as Proteus species and Candida, are also implicated [9]. Mucor species, a spore-forming fungus that ferments carbohydrates, producing gas and toxic substances, have also been recognized as potential infectious agents associated with EG [10][11]. The risk factors associated with EG [12][13] are presented in Figure 5. Numerous underlying factors that can damage the gastric mucosal barrier have been associated with the development of this condition. The precise pathophysiology of EG remains uncertain; however, it is believed that pre-existing gastric ulcers or ischemic lesions create a favorable environment for bacterial infection, leading to bacterial proliferation and infiltration into the gastric wall. Moreover, reduced acidity or the absence of lesions in the gastric mucosa might enable bacteria to colonize the stomach lining. Alternatively, the bacteria can spread through the bloodstream from a remote septic source, leading to EG [14,15]. The pathogenesis is presented in Figure 6. The clinical presentation of EG can be nonspecific, making early diagnosis challenging. Common symptoms include severe abdominal pain, nausea, vomiting, hematemesis, and signs of systemic infection such as fever and tachycardia. The clinical signs in our first patient included abdominal pain, non-bilious vomiting with specks of blood, and five episodes of diarrhea. These symptoms were also associated with fever with chills, dry cough, and nasal congestion. Our second patient presented with more severe symptoms, including abdominal pain, nausea, vomiting, and an episode of coffee-ground emesis. Patients may appear critically ill with signs of sepsis, including hypotension and altered mental status [15]. Emphysematous gastritis is a critical medical condition that should be taken into account when evaluating acute abdominal cases, especially in the presence of risk factors. Timely diagnosis and intervention are essential, and urgent abdominal CT scanning is required to aid early detection and management [16].
The definitive diagnosis of EG can be achieved through radiological evidence of gas within the stomach wall. The CT scan is the preferred imaging method, which shows the characteristic changes in the stomach, such as thickened folds in the inner lining of the stomach and swelling, as well as pockets of air trapped within the gastric wall. In some instances, air may also be observed in the veins that drain blood from the stomach and even in the portal vein [17]. Both of our patients underwent a CT scan immediately without any delay, which is crucial for optimizing patient outcomes in EG. If necrotic tissue is present, nasogastric intubation can provide valuable insights to aid the diagnosis. Moreover, CT scanning of the abdomen helps differentiate EG from other forms of acute abdomen. Intramural gas can also be seen on plain abdominal radiographs as linear, thin lucencies along the stomach wall [18]. Other imaging modalities, like ultrasound, may also aid in the diagnosis. Esophagogastroduodenoscopy (EGD) plays a crucial role in EG by assessing the extent of the condition, detecting signs of gastric tissue damage, and ruling out other potential gastrointestinal disorders [19].
The treatment of EG typically involves a combination of medical and supportive measures. It is a serious condition, and immediate medical attention is essential. The primary treatment approach involves initiating early antibiotic therapy that targets anaerobic bacteria and gram-negative bacilli, along with administering IV fluids for hydration [20]. Both individuals in our study were started on a comprehensive range of antibiotics, a crucial step that ensured timely and suitable intervention. This approach is vital for reducing the potential for unfavorable consequences.
Adequate nutrition should also be initiated. Surgical management of EG is usually reserved for severe cases or when there are complications that cannot be adequately addressed with medical therapy alone. The decision for surgical intervention depends on the patient's overall condition, the extent of gastric necrosis, the presence of perforation or abscess formation, and the response to initial medical treatment [1]. It is crucial to tailor the treatment to each patient's unique situation, and management should be carried out in close coordination with a team of healthcare professionals, including gastroenterologists, surgeons, infectious disease specialists, and critical care specialists. Early diagnosis and prompt, appropriate treatment are essential to improve the patient's outcome and reduce the risk of complications.
It is important to differentiate this condition from other causes of gas within the stomach wall, in particular gastric emphysema. It is crucial to distinguish between these two conditions as they exhibit distinct clinical symptoms, radiological features, treatment approaches, and prognoses. The differences are listed below in Table 1 [21]. Typically, when EG is detected early and immediate, proactive medical intervention is administered, such as prompt antibiotic therapy and comprehensive supportive care, the outlook is more positive. Effective infection control and addressing the root causes contribute to increased chances of patient recovery.
Conclusions
Emphysematous gastritis is an unusual but critical diagnosis to consider in patients with abdominal pain and suggestive radiological findings. Early identification and prompt initiation of appropriate treatment are vital in managing this potentially life-threatening condition. Advances in imaging technology, such as CT scans, have significantly improved our ability to diagnose this condition, leading to better outcomes. Our case series highlights that timely and appropriate management is essential to minimize the risk of adverse outcomes, especially as delayed diagnosis or inadequate treatment can result in rapid disease progression and increased morbidity and mortality. Further research and the accumulation of clinical evidence are necessary to enhance the recognition and treatment of this rare condition.
FIGURE 4 :
FIGURE 4: Coronal view of CT abdomen and pelvis without contrast White arrows: Distended stomach with questionable pneumatosis within the wall of the fundus; Yellow arrows: Portal venous gas within the liver
FIGURE 6 :
FIGURE 6: Pathogenesis of EG EG: Emphysematous gastritis Image created by author Qasim.
TABLE 1 : Difference between EG and gastric emphysema
EG: Emphysematous gastritis | 2023-12-14T16:05:16.970Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "4e0790c593539732cb3083b800ba6bc26794b797",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/208395/20231212-16768-17cfz38.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05755b6e2bfbc65bdbb8851118cbb2668961fcec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
118030689 | pes2o/s2orc | v3-fos-license | Investigation of Coronal Mass Ejections Related to Solar Flare Event and The Formation of The Small Geomagnetic Storm
This paper highlights the time taken for Coronal Mass Ejections (CMEs) to occur in relation to a solar flare event and the class of type III solar burst present within the two phenomena. It is important to understand the evolution of a solar flare up to the appearance of CMEs and to know the basic characterization of type III solar radio bursts. It can be observed that a CME is even larger than the Sun itself. When, at certain times, the Sun launches billions of tonnes of electrically conducting gas plasma into space at millions of miles per hour, a CME is said to be launched. The data of 23rd April were selected, whereby a type III solar radio burst was detected (about 17:36 UT - 17:44 UT). At 17:40 UT a solar flare with a radio burst and CMEs were produced by the Sun. Associated with this event, the prevailing solar wind speed was 359.5 km/sec with a density of 6.0 protons/cm³, and the sunspot number was 118. Those at high latitudes have a chance of seeing aurorae due to the small geomagnetic storm.
INTRODUCTION
Coronal Mass Ejections (CMEs) are enormous eruptions of plasma and magnetic fields ejected from the Sun into interplanetary space, seen by coronagraphs as they move out of their field of view over the course of minutes to hours. CMEs can only be observed by blocking the intense glare of the photosphere because their brightness is of the order of magnitude of the solar corona. Meanwhile, solar flares are the most energetic phenomena that occur within our solar system. A flare is characterized by the huge amount of energy released by the magnetic fields of active regions. During a flare, the solar corona and chromosphere are accelerated and electromagnetic radiation covering the entire spectrum is emitted. Besides thermal conduction, non-thermal particle beams, radiation transport and mass motions cause large amounts of energy to be transferred between the corona and the chromosphere. Coronal structure is mainly controlled by the magnetic field due to the stronger magnetic force in the corona.
In the radio region, the type III burst is an indicator of the formation of solar activity from an active region [1,2,3]. It reveals wave-particle and wave-wave interactions in magnetic traps in the solar corona [4]. At meter wavelengths the type III burst is usually, though not invariably, preceded by other types of burst. This burst remains one of the bursts of greatest interest for understanding flare plasma diagnostics in the low corona [5]. Interestingly, the motion follows the predominant magnetic field direction, and the apparent speed is a significant fraction of the speed of light. These radio burst emissions are rather frequently observed, especially a few days before solar flare and Coronal Mass Ejection phenomena [6,7,8].
A metric radio burst is normally produced by non-thermal particles accelerated and trapped during those events. The type III solar radio burst, the burst most dominantly associated with the solar flare phenomenon, was first introduced by Wild in 1963 [9] in the frequency range 500 − 10 MHz [10,11,12]. There are three sub-types of type III burst that originate in the interplanetary (IP) medium: (i) isolated type III bursts from energy systems and small-scale energy releases, (ii) complex type III bursts during CMEs, and (iii) type III storms. This stage can be considered as a pre-flare stage that could be a signature of electron acceleration [13]. It is found that 60% of fast-drift (type III) solar radio bursts are synchronized in time with solar flares [14]. Some evidence that type III bursts are generated in a weak-field region comes from the absence or low degree of circular polarization of the bursts [15]. Nevertheless, most importantly, the nonlinear wave-wave interaction involving electrostatic electron plasma oscillations, called Langmuir waves, is believed to be the main mechanism relevant to type III burst radio emission [16,17,18,19,20]. It is believed that a beam-plasma system is unstable to the generation of Langmuir waves, which are high-frequency plasma waves at the local plasma frequency [21,22]. Type III bursts early in the rise of impulsive solar flares may indicate that open field lines are an essential part of models for energy release by magnetic fields in such flares [23,24]. Nevertheless, it is important to analyze the radio and X-ray regions to understand the distribution of high and low energy [25,26,27,28]. The next section will highlight the solar flare and solar bursts in the X-ray and radio regions.
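Because plasma emission ties the observed frequency to the local electron plasma frequency, the density of the emitting layer can be estimated with the standard textbook relation f_pe ≈ 8.98 kHz × √(n_e/cm⁻³); the short sketch below applies it to the band edges quoted in this paper (the relation itself is not taken from this paper).

```python
# Plasma-emission picture: observed frequency ~ local electron plasma
# frequency, f_pe [kHz] ~= 8.98 * sqrt(n_e [cm^-3]) (textbook relation).
import numpy as np

def plasma_frequency_mhz(n_e_cm3):
    """Electron plasma frequency in MHz for a density in cm^-3."""
    return 8.98e-3 * np.sqrt(n_e_cm3)

def density_from_frequency(f_mhz):
    """Invert the relation: density (cm^-3) probed at frequency f (MHz)."""
    return (f_mhz / 8.98e-3) ** 2

for f in (45.0, 150.0, 870.0):   # CALLISTO band edges quoted in the text
    print(f"{f:6.0f} MHz  ->  n_e ~ {density_from_frequency(f):.1e} cm^-3")
# As the type III electron beam moves outward into lower densities, the
# emission drifts from high to low frequency - the fast-drift signature.
```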
SOLAR FLARE OBSERVATION AND e-CALLISTO SOLAR SPECTROMETER NETWORK
The solar flare is one of the main events on the Sun that affect space weather and climate change [29,30,31]. The observation of solar radio bursts was done using the Compact Astronomical Low-cost, Low-frequency Instrument for Spectroscopy and Transportable Observatory (CALLISTO) with the Bleien 7-meter dish telescope of ETH Zurich, in the frequency range of 45 to 870 MHz [32,33]. The signal from the feed is fed into the receivers. After that, the signal is converted to a first intermediate frequency of 37.7 MHz by two local oscillators [31,34,35,36,37]. This antenna covers 45 - 870 MHz [38,39,40,41].
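A minimal sketch of the heterodyne tuning step implied by the 37.7 MHz first intermediate frequency is given below; high-side local-oscillator injection is assumed here for illustration and is not stated in the text.

```python
# Superheterodyne tuning sketch for the first down-conversion described
# above, assuming high-side LO injection (f_LO = f_sky + f_IF); the actual
# CALLISTO mixing scheme may differ.
F_IF_MHZ = 37.7   # first intermediate frequency quoted in the text

def lo_frequency(f_sky_mhz, f_if_mhz=F_IF_MHZ):
    """LO frequency needed to bring a sky frequency down to the first IF."""
    return f_sky_mhz + f_if_mhz

for f_sky in (45.0, 245.0, 870.0):
    print(f"sky {f_sky:6.1f} MHz  ->  LO {lo_frequency(f_sky):6.1f} MHz")
```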
The CALLISTO spectrometer is a low-cost radio spectrometer used to monitor metric and decametric radio bursts, and is named CALLISTO, inspired by the name of one of Jupiter's larger moons [42,43,44,45,46]. In this case, we focused on the range of 150 MHz to 900 MHz [47,48,49]. CALLISTO consists of three main components: the receiver, a linearly polarized antenna, and control/logging software [50,51]. We have selected the data from the 150 MHz to 900 MHz region since this is the best range with a very minimum of Radio Frequency Interference (RFI) [51,52,53,54,55]. In this paper, we have focused on the study of solar flares in the X-ray and radio regions to evaluate the distribution of high and low energy [38]. At present, more than 66 instruments have been installed at more than 35 locations, with users from more than 92 countries in the e-CALLISTO network. Figure 1 shows the schematic diagram of the CALLISTO system.
RESULTS AND ANALYSIS
At certain periods of time, when the Sun launches billions of tonnes of electrically conducting gas plasma into space at millions of miles per hour, CMEs are launched. It is critical when CMEs and the magnetic field laced through the CME cloud smash into Earth's magnetic field. This is because they will dump energy into Earth's magnetic field, which can cause magnetic storms. Widespread blackouts can happen due to the storms overloading power-line equipment. The images in Figure 2 on the left and right show a bright solar flare and an exploding CME, respectively. From the images it can be observed that the CME is even larger than the Sun itself. Meanwhile, flares only erupt in an active region on the Sun.
Both solar flares and CMEs are energetic events that occur on the Sun and are associated with high-energy particles. Both of them also depend on magnetic fields on the Sun. However, CMEs are ejected into space at high speeds and sometimes in the direction of the Earth. Besides, CMEs are also larger eruptions compared to flares, which are local events. The obvious difference that can
be highlighted is the spatial scale on which the two events occur. Solar flares and CMEs can take place in the absence of each other, but they often occur together. An energetic explosion in the low solar atmosphere is called a solar flare; it can heat the surrounding material to millions of degrees in just a few seconds or minutes. Besides, it typically occurs near sunspots due to the concentrated magnetic field in the active region on the photosphere. Radiation in several bands of the electromagnetic spectrum (white light, ultraviolet, X-rays, gamma rays) is also emitted and is observed by ground-based and space-based telescopes. Solar flares also accelerate particles which are ejected into space and emit large amounts of radiation. The image above shows a solar flare with a radio burst and CMEs. NASA SWC is focused on providing critical space weather notifications for NASA robotic missions. They predicted that the CME would reach the Earth on 27th April 2012 at 5:49 UT with only minor impact. Those at high latitudes have a chance of seeing aurorae due to the small geomagnetic storm. Geomagnetic storms induced by CMEs affect human activity the most. Aurorae occur only near the poles when the solar wind is quiet and the magnetosphere is undisturbed. However, the aurora will expand and brighten and move to lower latitudes when the solar wind and the magnetosphere are disturbed.
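Using only the times quoted in this paper (the 23 April event at about 17:40 UT and the predicted arrival at 5:49 UT on 27 April 2012), a back-of-envelope transit estimate is:

```python
# Back-of-envelope Sun-Earth transit estimate from the times quoted in the
# text, assuming a mean Sun-Earth distance of 1 AU.
from datetime import datetime

AU_KM = 1.496e8
launch = datetime(2012, 4, 23, 17, 40)    # flare/CME onset quoted in the text
arrival = datetime(2012, 4, 27, 5, 49)    # predicted arrival quoted in the text

transit_h = (arrival - launch).total_seconds() / 3600.0
mean_speed = AU_KM / (transit_h * 3600.0)
print(f"transit time ~ {transit_h:.0f} h, mean speed ~ {mean_speed:.0f} km/s")
# -> about 84 h at roughly 500 km/s: a moderate CME, consistent with the
# "only minor impact" / small geomagnetic storm described in the text.
```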
CONCLUDING REMARKS
Magnetized plasma is hurled into space, interrupting the steady solar wind, during the eruption of a CME from the Sun. A disturbance is created when the ejected coronal material moves through the solar wind. This disturbance may include a shock wave that moves ahead of the CME, accelerating some solar wind particles to high energies as it moves. As mentioned earlier, once the CME reaches the Earth there can be significant consequences for communications, satellite operations and power generation. Solar flares and CMEs often seem to occur together but can also take place in the absence of each other. The type III solar burst also appears within these events, indicating that big eruptions occur. CMEs affect the magnetic field of the Earth and cause disturbances to communications, satellite operations and power generation due to geomagnetic storms. Fortunately, if the geomagnetic storm is small it only causes minor effects such as beautiful aurorae.
Acknowledgement
We are grateful to CALLISTO network; STEREO, LASCO, SDO/AIA, NOAA and SWPC make their data available online. This work was partially supported by the 600-RMI/FRGS 5 | 2019-01-22T19:35:48.355Z | 2015-05-03T00:00:00.000 | {
"year": 2015,
"sha1": "5fd72485ad3e70fa5c727d582037c6a61f7a14e0",
"oa_license": "CCBY",
"oa_url": "https://www.academicoa.com/ILCPA.50.26.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0b875bcedac2ace5394f2404ddf0c31fd5c86dc2",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
16257904 | pes2o/s2orc | v3-fos-license | CP symmetry and fermion masses in O(10) grand unification models
O(10) grand unification models which do not necessarily have an extra global symmetry are discussed, taking the model with one 10-plet in the Yukawa sector as an example. A strong correlation between mass ratios and CP is found. The mass relation $m_t/m_b=v_u/v_d$ is recovered when $G_W=0$; and another special relation $m_t/m_b=G_E/G_W$ appears when $v_d=0$, where $G_{E,W}$ are Yukawa coupling constants and $v_{u,d}$ are VEVs. To facilitate this discussion, a set of $O(10)$ $\gamma$- matrices is offered based on a physical representation of the spinors and that of the vector of the $SO(10)$ group. Flavor changing neutral currents in such models are also discussed.
There will be no FCNC at low energies for MOTM with three generations of fermions, if only one of the two sets contributes. On the other hand, if only one of the 5-plets contribute, but both couple to fermions, as they must do because they are in the same multiplet, there can still be FCNC.
Indeed, one will see that the top-bottom mass ratio can be written as in Eq. (1), where v_u and v_d are the VEVs of 5_1 and 5*_2 respectively of the 10-plet Higgs, and g_e and g_o are respectively the coupling constants of the O(10) parity even and odd terms. The 10-d space reflection is not an element of the SO(10) group.
Therefore O(10) is also of interest. One can see from this formula that in order to have |m_t/m_b| ≠ 1, one needs not only both vacua v_1 and v_6 but also both couplings g_e and g_o.
When there is a maximal CP mixing, g o = ±ig e , the mass ratios will be adversely affected in the MOTM.
Two special cases are worth noting: 1) v_1 = v_6: This means that there is only one nonzero VEV, v_u ≠ 0, v_d = 0, which is typical of how a vector (the 10-plet) develops a VEV.
In this case, one obtains m t /m b = (g e + g o )/(g e − g o ).
2) The Yukawa couplings satisfy the condition for being self-dual, g o = g e . The definition of dual (denoted by E for convenience) and anti-dual (denoted by W) in O(10) is similar to that of left-handed (L) and right-handed (R) in the Lorentz group. In this case one obtains a two Higgs doublet model with either supersymmetry or a global U(1) symmetry [2] at low energies.
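For orientation, the two special cases can be restated in terms of the dual-basis couplings G_{E,W} = g_e ± g_o introduced in Section III; the following lines only collect relations already given in the abstract and in the text.

```latex
% Relations collected from the text: G_{E,W} = g_e \pm g_o (Section III),
% the v_d = 0 case above, and the G_W = 0 case quoted in the abstract.
\[
  \left.\frac{m_t}{m_b}\right|_{v_d=0}
    = \frac{g_e+g_o}{g_e-g_o}
    = \frac{G_E}{G_W},
  \qquad
  \left.\frac{m_t}{m_b}\right|_{G_W=0\;(g_o=g_e)}
    = \frac{v_u}{v_d}.
\]
```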
In general, the counterpart of a MOTM at low energies is a general two Higgs doublet model [3] with FCNC and a complicated relation between m t /m b and v u /v d . The second special case, particularly when it is resulted from supersymmetry, is widely applied for discussions of SO(10) mass relations [4]. In this note we will analyze the O(10) models without any constraints.
Therefore a one 10-plet Higgs O(10) model (MOTM) may in general correspond to a two Higgs doublet model with FCNC. If more Higgs multiplets are involved in the Yukawa sector, then there will be more FCNCs. In either case, the Yukawa coupling constants can be complex, which may cause explicit CP violation. In addition to this, spontaneous CP violation due to a relative phase of the VEVs may appear in all the electro-weak interactions.
In general, one may need eight U-matrices in order to diagonalize the mass matrices of the up-, down-, neutrino-and lepton-mass matrices: U U L , U U R ; U D L , U D R ; U ν L , U ν R ; and U l L , U l R . All of them can be physically relevant; in other words, in addition to the CKM matrix R appears in the right-handed charged current gauge interactions; the matrixG U = U U L G U Y U U R leads to the scalar mediated FCNC interactions among up-type quarks, where G U Y is the matrix of Yukawa couplings of this scalar. Unless G U Y of a Higgs doublet is proportional to the corresponding mass matrix, FCNC mediated by this Higgs field is in general nontrivial. CP violation can in principle appear in any of above interactions. V ′ andG U and the alike represent physics beyond the CKM matrix [3,5].
This work is devoted to the general relations among mass ratios, FCNC, and CP violation in an arbitrary O(10) model. The MOTM will be taken as an explicit example. However, the results can be applied to any non-minimal O(10) models. Before such a discussion, a set of explicit γ-matrices will be provided in Section II. The properties of mass operators will be discussed in detail in Section III. The special cases will be reviewed in terms of the dual (E) and anti-dual (W) coupling constants. In Section IV, symmetries beyond O(10) are discussed which may help to forbid the W term or the E term.
II. The SO(10) Gamma Matrices and Mass Operators
There have been discussions on the group of SO(10), since it was recognized as a potential candidate group for grand unification theories (GUT) [1,6]. A different approach will be taken in this work. The components of the fundamental spinor and vector representations will be assigned first, in terms of the familiar quantum numbers, such as color, flavor, and B − L. Then the γ-matrices will be built up on this specific basis.
A fermion field can be seen as a sum of the left-handed and the right-handed parts where ψ c = Cψ T , and C = iγ 2 γ 0 is the C-matrix of the Lorentz group, From here on, all Lorentz group matrices will be underlined, in order to distinguish them from the SO(10) matrices.
The present task is to find ten 32 × 32 matrices which satisfy the Clifford algebra: The anti-commuting relation (3) keeps its validity under orthogonal transformations γ ′ M = a M N γ N . In addition one has a freedom to choose the components of the reducible spinor The two irreducible spinors of SO(10) are represented here by ψ and ψ c . They are used to represent respectively Lorentzan left-handed and right-handed Weyl fields in one family of fermions and the 16 * -plet is just its charge conjugate. The color indices (from 1 to 3) for the quarks are suppressed. The arrangement of the components in (4) It is convenient to first define 10 symmetric 16 × 16 matrices in two groups: α i , β p , (i = 1, 2, 3, 4, 5; p = 6, 7, 8, 9, 10), where αs and βs mutually commute, while each groups make separate Clifford algebras, The gamma matrices are then simply As mentioned before, the Clifford algebra is also invariant under orthogonal transformations, which changes the components of a 10-plet, for example. Therefore, the specific form of gamma-matrices depends on how we choose the components of a 10-plet. Using the fermion symbols, we represent quantum numbers of the chosen basis components for 10-plet as the following 4 3 The SU (5) × U (1) decomposition can be reached by rearranging the components to the following form: The fermion symbols used here are for their quantum numbers. For example, the quantum numbers of a term ν 1L ν c 2R are: electric charge Q = 0, lepton charge L = 0, and (T 3L , T 3R ) = ( 1 2 , − 1 2 ). The attached subscripts (1 ro 2) are useful to avoid the 10-plet to be self-conjugated. Readers who do not prefer this basis for 10-plet, which mixes components with opposite quantum numbers, may read the next section for other representations in which no Clifford algebra can be found though.
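As a cross-check of this algebraic setup, the Clifford relation (3) can be verified numerically. The sketch below uses a generic Kronecker-product basis built from Pauli matrices, not the physical quantum-number basis adopted here, so it illustrates only the algebra and not the specific component assignment.

```python
# Numerical check of the SO(10) Clifford algebra {gamma_M, gamma_N} = 2 delta_MN
# using a standard Kronecker-product construction from Pauli matrices. This is
# an illustrative basis only, not the physical (quantum-number) basis chosen in
# the text.
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

# gamma_{2k-1} = s3^(k-1) x s1 x 1^(5-k),  gamma_{2k} = s3^(k-1) x s2 x 1^(5-k)
gammas = []
for k in range(1, 6):
    head, tail = [s3] * (k - 1), [I2] * (5 - k)
    gammas.append(kron_all(head + [s1] + tail))
    gammas.append(kron_all(head + [s2] + tail))

eye32, delta = np.eye(32), np.eye(10)
violation = max(
    np.abs(gammas[m] @ gammas[n] + gammas[n] @ gammas[m] - 2 * delta[m, n] * eye32).max()
    for m in range(10) for n in range(10)
)
print("max violation of the Clifford algebra:", violation)   # 0 up to round-off

gamma11 = -1j * np.linalg.multi_dot(gammas)   # gamma_11 as defined below in the text
print("gamma_11 squares to 1:", np.allclose(gamma11 @ gamma11, eye32))
print("trace of gamma_11:", np.trace(gamma11).real)   # 0 -> eigenvalues +1 and -1, 16 each
# In this basis gamma_11 is diagonal with entries +-1; a reordering of the
# basis brings it to the diag(I_16, -I_16) form used in the text.
```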
the α-and β-matrices on the basis (7) are where and In all of these matrices, empty fields correspond to zeros.
There are some additional matrices which will be useful for the discussion of discrete symmetries. First, γ 11 is defined as the product of all the ten gamma matrices, γ 11 = −iγ 1 γ 2 · · · γ 10 = diag I 16 , −I 16 .
Secondly the C-matrix is the product of the first five γ-matrices γ 11 can be used to construct derived γ-matricesγ N = γ 11 γ N , which satisfy the same conditions for a Clifford algebra, except for a sign difference in normalization.
The SO(10) anti-symmetric tensor representations are simplyΨ 1 Γ (a) Ψ 2 , where a C of the Lorentz group is implied as is shown in Eq (2).
All non-zero elements of these operator matrices in (17) and (18) There are four mass operators in 126 also. Two of them are 5 However, when one of the ten dimensions (say, the 10th) is time-like, part of Γ MN are Hermitian due to the following definition: γ 0 = iγ 10 which is anti-Hermitian. One then has This is the main difference between SO(9, 1) and SO(10) groups. 6 The naturalness of developing VEVs at two specific positions of a 10-plet is a question subject to study [7].
which enjoy the same property as described in (19). The other two, from its quantum number analysis, are found all to have zero elements except those with quantum numbers (1, 0, ±1, ∓2), or (1, ±1, 0, ∓2) and the first one is normally used to give right-handed neutrinos huge Majorana masses in order that a "see-saw" mechanism may take place to render a tiny left-handed neutrino mass [8].
A linear combination of the operators in (17) and (18) or (20) can provide flexibility to produce desired quark-lepton Dirac mass relations, as applied in all previous works. Their properties are listed in Table 1, where subindices i, j represent generation (or family) numbers.
The method used here to produce all the necessary matrices on a physical basis can be used for other SO groups.
III. Masses and CP Violation
The most general Yukawa term involving one 10 is Note that the expressions (Ψ i γ N Ψ j +Ψ j γ N Ψ i ) and (Ψ i iγ 11 γ N Ψ j +Ψ j iγ 11 γ N Ψ i ) are real.
Both terms in (21) can be CP even if Img e g * o = 0.
According to Eq(21), all fermions may get masses and mix together. For the purpose of illustrating how up-down relation goes in O(10) models, we write down explicitly the relevant terms for the third family with H ′ 6 = −iH 6 and g e,o = g 33 e,o . It is easy to check that when g e and g o are real, the Yukawa term is indeed CP even if ReH 1 and ReH ′ 6 are assigned CP even while ImH 1 and ImH ′ 6 are CP odd. The mass relation in (1) is obtained, with a substitution of To discuss the special cases, it is more convenient to use the dual representation. This is done by introducing the following two sets of reduced γ-matrices, which do not belong to any Clifford algebra, The E and W project operators (1 ± γ 11 )/2 are SO(10) invariant. They separate 16 from 16 * in a 32 reduced representation. The Yukawa terms expressed in the E-W basis is where H u,d = H 1 ± iH 6 , G E,W = g e ± g o . H u and H d are respectively in 5 and 5 * of SU (5) which are respectively up and down components of 10 (please compare with Eq (7)).
Let us now return to the special cases discussed in Section I.
It can give the phenomenological mass ratio with only one VEV. But if Reg e g * o = 0 (which corresponds to a maximal CP phase when g e g * o = 0) one is forced to have the trivial O(10) mass relation |m t | = |m b |.
The mass matrix for the up-type quarks and down-type quarks are, in the case of three families of quarks and leptons, while the Yukawa couplings for the H d field, which does not develop VEV, are The P even (G E = G W ) Yukawa interaction is then This representation is convenient for discussions of charged currents and gauge interactions.
IV. Discussions
It has been explicitly shown that a general O(10) grand unification model with one 10-plet Higgs provides a natural motivation for the most general two Higgs doublet model (2HDM) at low energies as discussed in detail in Ref. 3. In a general MOTM, there must be FCNC, if trivial up-down O(10) mass relations is to be avoided. It is easy to see that this behavior also appears when one Higgs 120 or 126 contributes to Dirac fermion masses, except that 120 contributes only inter-family masses.
In addition to self-duality, one can also rule out the W-current-H coupling by the use of: a) supersymmetry; b) an extra U(1) quantum number; c) a discrete symmetry of an order higher than five; d) a complex nonabelian group.
Actually, certain amount of FCNC is tolerable within the accuracy of the present experimental data, as discussed in Ref. 3. While self-duality can make m b = m t , the most general When FCNC is allowed there are possibilities to realize the desired up-down mass ratio by a combination of two VEVs and two coupling constants.
It is very interesting that within the realm of the explicit CP invariant MOTM, one can adjust the up-down mass ratio in one family by adjusting g o /g e and v u /v d . Therefore it is possible to find an O(10) model with a CP invariant Lagrangian. In such a model, CP violation all will be spontaneous.
In conclusion, there is a correlation between explicit CP violation and fermion mass rela- Except for the O(10) group, other O(2n) (n ≥ 2) and E 6 models may also have similar correlation between CP violation and masses.
One of the authors (DD) sincerely thanks H.J. He, for very useful discussions. Comments from R. Arnowitt is appreciated. The CERN theory group and the DESY theory group | 2014-10-01T00:00:00.000Z | 1996-03-26T00:00:00.000 | {
"year": 1996,
"sha1": "1a065bef2927f463b81b68204b31858555da8424",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9603418",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f3a1f9a431d13ec01af730ebc6ce33665cab7c26",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257940417 | pes2o/s2orc | v3-fos-license | Global governance for pandemic prevention and the wildlife trade
Although ideas about preventive actions for pandemics have been advanced during the COVID-19 crisis, there has been little consideration for how they can be operationalised through governance structures within the context of the wildlife trade for human consumption. To date, pandemic governance has mostly focused on outbreak surveillance, containment, and response rather than on avoiding zoonotic spillovers in the first place. However, given the acceleration of globalisation, a paradigm shift towards prevention of zoonotic spillovers is warranted as containment of outbreaks becomes unfeasible. Here, we consider the current institutional landscape for pandemic prevention in light of ongoing negotiations of a so-called pandemic treaty and how prevention of zoonotic spillovers from the wildlife trade for human consumption could be incorporated. We argue that such an institutional arrangement should be explicit about zoonotic spillover prevention and focus on improving coordination across four policy domains, namely public health, biodiversity conservation, food security, and trade. We posit that this pandemic treaty should include four interacting goals in relation to prevention of zoonotic spillovers from the wildlife trade for human consumption: risk understanding, risk assessment, risk reduction, and enabling funding. Despite the need to keep political attention on addressing the current pandemic, society cannot afford to miss the opportunity of the current crisis to encourage institution building for preventing future pandemics.
Introduction
A paradigm shift for pandemic governance is required in the context of wildlife trade for human consumption (panel). International and domestic regulatory frameworks for addressing pandemics have focused more on outbreak detection, containment, and response (known as downstream prevention) than on prevention of zoonotic spillovers (known as upstream, deep prevention, or prevention at source; figure 1, panel). 2,6 However, increased human mobility through transport infrastructure, larger population centres, and expanding wildlife markets with complex supply chains (panel) reduce the feasibility of containment even when early detection occurs. 7 Thus, the risk of another pandemic, should an outbreak emerge, remains latent. 8 With signals of support from the international community for negotiating a so-called pandemic treaty, 9,10 in this Personal View we argue that such an international institutional arrangement should be explicit about prevention of zoonoses emerging from the wildlife trade for human consumption.
Despite international cooperation efforts, crucial governance gaps for addressing pandemics persist. Countries have typically developed institutional arrangements to advance specific collective action goals, causing a silo problem whereby system-wide interactions among interdependent sectors are seldom considered. 11 The origin of some pandemics reveals problems of sectoral isolation of public health, biodiversity conservation, food security, and trade within a global governance context. As an approach to break down some of those silos, One Health emerged as a policy paradigm for addressing public and environmental health that explicitly recognises the need to work across sectors. 12 Specifically, the wildlife trade for human consumption, both domestic and international (including markets and associated supply chains), is a driver of zoonoses, which can lead to pandemics. 13 As a consequence, calls for changes to the wildlife trade have been made during pandemic events, 14 such as severe acute respiratory syndrome and COVID-19, even though the exact spillover origin of COVID-19 remains debated. 15, 16 Many ideas have been advanced about what should be done to prevent future pandemics 17,18 but with less consideration
given to the governance mechanisms required to operationalise such a goal, let alone within the specific context of the wildlife trade for human consumption. 19 As the drivers and negative effects of the wildlife trade and potential zoonoses emerging from it can extend beyond single countries, addressing these requires governance mechanisms that are international and multisectoral. For instance, the wildlife trade for human consumption, driven by domestic and international demand, can lead to population declines of species, 20 even to extinction, 21 whereas zoonotic spillovers can lead to pandemics. 22 Within this context, we argue that global health governance, global biodiversity governance, global food governance, and global trade governance should be more effectively coordinated if pandemics are to be prevented (figure 2). Here, we consider the current landscape of institutional arrangements and mechanisms for pandemic prevention in light of calls to potentially negotiate a so-called pandemic treaty, 9,10 and propose institutional design principles that could play a central role in fostering coordination across those four policy domains through specific goals for preventing zoonotic spillovers from the wildlife trade for human consumption.
We chose to focus on the wildlife trade for human consumption as it is a plausible cause of the COVID-19 pandemic 16 and other zoonotic outbreaks over the past couple of decades, such as severe acute respiratory syndrome. 13 Consequently, we exclude here other zoonotic drivers of pandemics that are also important, 23 such as land-use change, domestic animal production, and the wildlife trade for purposes other than for human consumption (eg, pet trade or traditional medicine). Although our insights of institutional design could also be applied to these other drivers of pandemics, they would need to be tailored and, hence, considered in their own right due to variation in their biological and socioeconomic mechanisms as well as institutional frameworks. The exclusion of other zoonotic drivers in this Personal View reflects an analytical approach rather than empirical reality, as some of those can interact with the wildlife trade for human consumption.
Conceptualising zoonotic disease emergence from the wildlife trade as a collective action problem
Public health is a public good, wildlife is a common-pool resource in most parts of the world, and zoonoses are a negative externality that can stem from the wildlife trade for human consumption, compromising public health and, in turn, economic activity. One challenge arising from the causal linkage between wildlife trade and zoonoses is the disconnect in how incentives are structured, because the wildlife trade is a collective action problem in its own right but can generate a problem that spills well beyond resource users. In turn, zoonotic diseases can be conceptualised as a negative externality in economic terms, which requires institutional responses to be corrected. What makes this problem of collective action different is that environmental or collective action problems usually stem precisely from the cumulative effects of the individual choices of many actors, as is the case with marine debris and climate change. Conversely, pandemics of zoonotic origin are not the result of cumulative effects per se but rather can be conceptualised as punctuated effects enabled by wildlife trade driving health risk transfer. Furthermore, in the case of pandemics driven by zoonoses emerging from the wildlife trade, it is a problem that can spread internationally but that originates in the individual choices of a small subset of people or actors in some particular regions of the world. Although zoonoses pose an imminent risk to individuals along the supply chain, their likelihood of emerging from the wildlife trade is usually low (but can be catastrophic) and as a result individual risk perception might not be enough to induce behaviour change. 24 Uncertainty plays a key role, since it is usually not certain when wildlife trade will result in a pandemic event should a zoonotic outbreak occur. 25 Many such outbreaks might remain localised and contained although others might not, thus becoming a pandemic. 26
Global governance of public health, biodiversity conservation, food security, and trade
Global health governance, global biodiversity governance, global food governance, and global trade governance present similarities in their practice and scholarship insofar as each of them focuses on their role in addressing collective action problems that countries cannot solve unilaterally. [27][28][29][30] The systems of governance across these four policy domains have emerged since, at least, the early 1900s and became cemented with the creation of the UN after World War 2. At the heart of these four governance systems are institutional arrangements dominated by
Evidence of silos in the current institutional landscape for pandemic prevention
The creation of separate silos for the global governance for public health, biodiversity conservation, food security, and trade has resulted in gaps regarding zoonosis prevention emerging from the wildlife trade. The gaps are evident from the absence of international institutional arrangements that straddle both human health and biodiversity conservation in their mandate; 31 public health prescriptions (ie, International Health Regulations) under WHO that are exclusively focused on the containment of zoonotic outbreaks, not on prevention at source; 32 no inter-institutional arrangements between CITES and WHO; 33 and limited mandate of CITES at the outset of the COVID-19 pandemic meaning that zoonoses were not only not considered but explicitly deferred to other institutions that belong to public health (ie, WOAH) and food security (ie, FAO). 34
Looking ahead for pandemic prevention
The road to an international institutional arrangement for pandemics
Despite some international institutional arrangements being in place to address pandemics (including downstream and upstream prevention through various mechanisms, such as the International Health Regulations 35 and the Quadripartite Partnership on One Health 36 ), a new coordinating institutional arrangement, the so-called pandemic treaty, is under consideration by the international community but is not without challenges. Although negotiating new international institutional arrangements can be costly and lengthy, 37 there is also precedent for relatively rapid negotiations. 38 Furthermore, the potential negative consequences of another pandemic are probably too great to abandon the possibility of developing a new institutional arrangement. Like other policy domains with systems of multiple institutions, such as climate change and refugees, 39,40 a new pandemic instrument could become the core institutional arrangement of the pandemic governance system. A pandemic treaty was first proposed by the Government of Chile in April, 2020, and, after over a year of consideration at various policy forums (figure 3, appendix pp 2-4), garnered support from 61 countries, the European Council, and WHO (figure 4, appendix pp 5-7). This initiative was subsequently endorsed by the World Health Assembly at a special session held between Nov 29 and Dec 1, 2021, through a consensus decision among WHO's 194 member states, whereby a global process was launched to draft and negotiate a convention, agreement, or other instrument on pandemic prevention, preparedness, and response under the WHO aegis, referred to as a pandemic treaty. 42 The negotiation and drafting process for this pandemic treaty has now officially been launched and is underway with the leadership and purview of the Intergovernmental Negotiating Body, with a target for final consideration by the World Health Assembly in May, 2024 (figure 3). 43 The drafting and negotiation process has not started without challenges, as tensions between globalism and state-centrism have emerged whereby an international instrument for pandemics is perceived as a much-needed solution but also as potentially undermining national sovereignty. [44][45][46] Notably, the Global North and Global South divide has also emerged, as high-income countries continue to push for inclusion of comprehensive surveillance, reporting, and pathogen sharing by low-income and middle-income countries but with little commitment to equity in the sharing of tools and resources. 47 Additionally, the Russian invasion of Ukraine could reshape the geopolitical landscape as Russia grows isolated from the west due to ongoing sanctions, including a WHO resolution that could strip Russia of membership rights, and recalcitrance from Russia as it considers withdrawing itself from WHO. 48,49 Several options are being considered under the aegis of WHO as the negotiations are underway.
50 To assist with the Intergovernmental Negotiating Body's decision, the WHO Secretariat prepared an information paper outlining the three main types of possible outcomes from an institutional arrangement perspective: the World Health Assembly can adopt conventions or agreements as per WHO's Article 19, similar to the Framework Convention on Tobacco Control; the World Health Assembly can adopt regulations as per WHO's Article 21, similar to the International Health Regulations; and the World Health Assembly can make recommendations as per WHO's Article 23, similar to the Pandemic Influenza Preparedness Framework. Although the first two instrument types would be legally binding, the third one would not. The selection of one instrument type is not necessarily exclusive of others, which means that more than one instrument can be developed, invoking more than one WHO article. Likewise, there is an option for more than one institutional arrangement being developed under a single WHO article. For instance, if following the framework convention type as per Article 19, its mandate could provide for developing additional protocols with more strict and targeted prescriptions and, in turn, a protocol specifically focused on prevention of zoonosis emergence could be negotiated once the framework convention enters into force. This protocol for pandemic prevention could potentially address all drivers of zoonosis emergence, although our focus here is only on design principles as it pertains to the wildlife trade for human consumption.
Importantly, the Intergovernmental Negotiating Body decided at its second meeting (held in July, 2022) that the pandemic instrument should be legally binding and developed under WHO's Article 19. 51 This architecture would potentially allow for a framework convention with attention to a wide range of issues through a more detailed focus on substantive areas requiring specific negotiations, such as prevention and response. 52,53 Subsequently, a conceptual zero draft of the pandemic treaty was released in November, 2022, by the Bureau of the Intergovernmental Negotiating Body, 54 which includes an article focused on One Health and the importance of prevention of health threats at the interface of the environment, animals, and humans, such as the wildlife trade. Although this conceptual zero draft's article recognises the need to work across sectors, it does not include the institutional design we propose here.
Institutional design principles for zoonotic spillover prevention with a focus on the wildlife trade for human consumption
With this background of potential avenues for the development of an international institutional arrangement for the prevention of zoonotic spillovers, we do not necessarily advocate for one outcome over another one. Instead, we present design principles that any given institutional arrangement on pandemics should include for upstream prevention within the context of the wildlife trade for human consumption. These principles are codified in four goals (figure 5), interweaving governance mechanisms already in place or in progress that could enable operationalisation. 55
Goal 1: risk understanding
Improving knowledge of risk of zoonoses emerging from the wildlife trade, and how to manage them, is pivotal for pandemic prevention. Despite the understanding of the wildlife trade as a driver of emerging zoonoses, 13 uncertainty remains regarding more specific attributes of such a process, both biophysical (eg, pathogen pressure) and sociocultural (eg, exposure through human behaviour), that could inform prevention strategies at domestic and international levels. 56,57 Research should be conducted to reduce the uncertainty about the relative risk of zoonotic spillover events potentially resulting in pandemics from domestic compared with international wildlife trade. 58 Within this context, a policy-relevant science platform has already been proposed by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. 6 This platform should be tasked with, among other things, four primary objectives: improve knowledge on specific risks of zoonoses emerging from the wildlife trade for human consumption both from a biological and sociocultural perspective; develop a framework (including indicators) for risk evaluation and monitoring; conduct impact evaluation of interventions for risk reduction; and reach consensus on risk perception and acceptance.
Operationally, mechanisms might already be in progress to advance this goal. WHO and the Convention on Biological Diversity developed a Joint Work Programme on Biodiversity and Health in 2012 and subsequently a Memorandum of Cooperation in 2015, which established the Interagency Liaison Group on Biodiversity and Health in 2017 with ten additional members, including other sectors such as food governance (ie, FAO). 59 This group aims at, among other things, addressing trade-offs, and fostering synergies, between public health and biodiversity conservation goals through a cross-sectoral approach. This group has focused on four themes: capacity building; developing databases, metrics, and indicators; implementing research, case studies, and exchange of best practices; and communication, awareness-raising, and advocacy. Building and expanding on the Interagency Liaison Group on Biodiversity and Health, a new Expert Working Group on Biodiversity, Climate, One Health and Nature-based Solutions was formed by WHO, the International Union for the Conservation of Nature, and the Friends of Ecosystem-based Adaptation network in April, 2021. 60
Independent of the previous mechanisms, a new One Health High-Level Expert Panel was formed by WHO, WOAH, FAO, and the UN Environment Programme in May, 2021, to advance policy-relevant science, focusing on the drivers of zoonotic disease emergence. 61 Still in the making, the Convention on the Conservation of Migratory Species of Wild Animals Scientific Council agreed in July, 2021, to create an expert working group on migratory species and public health, including zoonoses linked to the wildlife trade. 62 These initiatives combined could potentially be used as a starting point to launch the policy-relevant science platform for pandemic prevention, which could be similar to the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services and the Intergovernmental Panel on Climate Change.
Goal 2: risk assessment
Reducing the risk of emerging zoonoses from the wildlife trade, including from markets and associated supply chains, will require baseline data of current risk levels in each member country, as well as longitudinal data. To that end, we propose that the resulting international institutional arrangement should consider a combined system of self-reporting from parties and third-party audits focusing on two matters: characterisation of the entire supply chains and networks of the wildlife trade from a biophysical, legal, and sociocultural standpoint; and characterisation of the corresponding regulatory frameworks and funding available for their implementation. The characterisation of supply chains and trade networks should consider the entire process, from harvest or capture to point of sale to the end consumer, accounting for both legal and illegal trade, including key variables-eg, the stage at which killing takes place, shipping conditions, market size, traded taxa, animal density and interspecies mixing (both wild and domestic), and supply chain length and breadth. 63 This self-reporting and third-party audit process could be devised by the policy-relevant science platform for pandemic prevention on the basis of the most up to date knowledge of risk of emerging zoonoses from the wildlife trade for human consumption. As knowledge will accumulate over time, we suggest an adaptive framework, so that periodic reporting can be adjusted according to the best available evidence. The baseline information on the characteristics of the wildlife trade in each country would allow a risk assessment using the best available evidence on risk according to the policy-relevant science platform for pandemic prevention. Explicit assessment of current risks could be conducted and reported through the use, emulation, or expansion of already existing governance mechanisms. For instance, risks stemming from the wildlife trade for human consumption could be assessed using the Global Health Security Index through the inclusion of more specific metrics with input from the policy-relevant science platform for pandemic prevention, including data on legal (eg, CITES reporting) and illegal wildlife trade (eg, Wildlife Trade Portal). With this framework, 195 countries, corresponding to the parties of WHO's International Health Regulations, were quantitatively assessed for the first time in 2019. The index attempts to evaluate the baseline of where countries are at in relation to pandemic prevention, detection, and response, including risk factors, so that gaps can be identified and progress tracked over time. This index, however, is not without pitfalls, as analyses have already identified the need for a more holistic set of indicators beyond technical capacities. [64][65][66] This recommendation should be included for risk assessment within the context of the wildlife trade for human consumption. Likewise, risk assessment could be done by adopting the Joint Risk Assessment Operation Tool prepared by WHO, FAO, and WOAH. This tool provides the blueprint to set up domestic governance structures to assess zoonotic risk across sectors. 67 In turn, the characterisation, including reporting, of regulatory frameworks and funding available for pandemic prevention in relation to the wildlife trade for human consumption could be devised on the basis of the WHO Joint External Evaluation Tool.
68 This framework was initially developed to support the implementation of WHO's International Health Regulations in 2016, with a focus on appraising parties' capacity for surveillance, containment, and mitigation.
Goal 3: risk reduction
Prevention of pandemics driven by the wildlife trade ultimately hinges on reducing the risk of zoonotic disease emergence in the first place. Risk of zoonotic disease emergence can be present along the entire supply chain to various degrees, depending on context, from harvest or capture, through transport and distribution, to point of sale to the end consumer, and including slaughtering. 56,63 Within this context, we argue that prescriptions for pandemic prevention will likely require improved governance frameworks for legal wildlife trade 69 and strategies to reduce illegal wildlife trade, which is intrinsically unregulated, through sanctions and incentives. 70 These prescriptions could include, but not be limited to, a reduction in demand and supply, particularly of those taxa bearing high zoonotic risk (eg, rodents and primates), 71 and improved management of supply chains, including markets, through chains of custody, food safety standards, and considerations for interspecies mixing. 58,72,73 Importantly, specific decisions on bans of markets trading wild meat for human consumption, although suggested and even already implemented, 74 should be informed by the best available evidence from the policy-relevant science platform to ensure effectiveness and avoid unintended consequences. 75 After all, access to meat from wild animals is deeply intertwined with livelihoods and culture in some regions around the world. 76,77 Hence, developing substitutes to wild meat use (eg, by promoting locally acceptable alternative livelihoods) will likely be necessary. 78 Risk reduction should not be approached as a single universal solution, but rather as an adaptive, context-dependent, evidence-informed systems approach with careful targeting, considering pandemics are not the result of cumulative effects but rather punctuated events. For instance, wildlife markets in large cities with highly interconnected transport infrastructure should receive special attention due to the high risk of zoonotic outbreaks becoming a pandemic. 7 A governance approach that considers the balance between multiple goals (ie, public health, biodiversity conservation, food security, and economic exchange), and between local context as well as global effects, will be paramount. Some governance mechanisms that are already in place and others under development could serve as models to operationalise this goal, as well as to strengthen coordination and cooperation through existing institutional arrangements. Reducing public health risk stemming from the animal-human-environment interface, on the premise that zoonotic outbreaks can only be prevented and addressed through a multisectoral approach, is an objective of the Tripartite Partnership on One Health, launched in 2010 between WHO, FAO, and WOAH. 79,80 The UN Environment Programme joined this effort in March, 2022, so this initiative is now known as the Quadripartite Partnership on One Health, to contribute expertise on the environmental determinants of zoonoses and antimicrobial resistance. 36,61 Additionally, WOAH released a Wildlife Health Framework in March, 2021, reinforcing a One Health strategy. 81 One of its objectives entails improving WOAH members' capacities to manage the risk of pathogen emergence in wildlife and transmission at the human-animal-ecosystem interface while observing biodiversity conservation goals.
Considering CITES does not include public health prescriptions as part of its mandate but some CITES-listed species are zoonotic vectors and subject to trade for human consumption, 82 a working group has been established to better understand what role this convention could play in pandemic prevention. 83 The outcomes of discussions and recommendations of that working group were considered at the 19th Conference of the Parties in Panama City (Panama) in November, 2022, and a decision was adopted accordingly. 84 Specific actions from such a decision include, among others, improved cross-sectoral coordination and establishment of a baseline of actions taken by parties to reduce the risk of zoonotic spillover associated with the wildlife trade. As not all wildlife trade requiring attention is international, the Post-2020 Global Biodiversity Framework adopted in December, 2022, known as the Kunming-Montreal Global Biodiversity Framework, could help address cross-sectoral integration domestically, as one of its considerations for implementation includes the interlinkages between health and biodiversity. 85 Strategies for risk reduction of zoonotic spillover devised by parties to the resulting international institutional arrangement for addressing pandemics could be incorporated and reported as part of the already existing National Action Plans for Health Security. 86 These documents are currently voluntary, multiyear planning processes that use a One Health approach and aim to, among other things, implement WHO's International Health Regulations and contribute to achieving the Sustainable Development Goals. 87
Goal 4: enabling funding
Analyses have revealed the insufficiency, inadequacy, and fragility of current funding for addressing pandemics, warranting the development of new financial mechanisms. 41 Funding will be needed for advancing each of the three previously presented goals (ie, risk understanding, risk assessment, and risk reduction). Additionally, funding is required to support the development and implementation of governance structures for the accomplishment of such goals and to cover the overhead costs associated with managing the funds. Importantly, two key initiatives were created in early 2021 for analysing financing gaps and scoping potential means for addressing pandemics, namely the WHO Working Group on Sustainable Financing and the G20 High-Level Independent Panel on Financing the Global Commons for Pandemic Preparedness and Response. 88 These two processes create opportunities to craft a funding strategy for the proposed international institutional arrangement for addressing pandemics with specific reference to prevention of zoonotic spillover from the wildlife trade for human consumption, as both incorporate forums with high-level political engagement that include national governments and international financing institutions.
We propose a two-pronged strategy for meeting the funding needs of pandemic prevention in line with the Working Group on Sustainable Financing, the High-Level Independent Panel, and the Independent Panel for Pandemic Preparedness and Response. Negotiations for an international institutional arrangement that accounts explicitly for prevention of zoonotic spillover from the wildlife trade for human consumption should include considerations for financing, leading to stipulations for the development of specific mechanisms enshrined in the final document. Governance functions and core programmatic activities, such as risk understanding (ie, Goal 1) and risk assessment (ie, Goal 2), could be financed through a mix of assessed and voluntary contributions from member countries. More specifically, assessed contributions should follow an incremental structure over time accounting for economic recovery of countries in the aftermath of the COVID-19 pandemic. In addition, a Global Pandemic Financing Facility, with contributions from select donor countries, could be established drawing on lessons from the Global Environmental Facility. 89 This could be used as a mechanism to mobilise resources for the Global South, where countries generally have lower financial and technical capacity, in this case with a focus on risk reduction following structured decision making on the basis of risk assessments. (For more on the WHO Working Group on Sustainable Financing see https://apps.who.int/gb/wgsf/.) In terms of concrete figures, it has been estimated that governments should commit to an increased international financing pool for addressing pandemics by US$5-15 billion annually, which spans prevention, preparedness, and response. 88,89 Although these figures are now available, much work remains to be done in terms of deciding allocation across those three areas of work. Importantly, these considerations supplement existing mechanisms, which should not be rolled back in light of additional contributions from the private sector, non-governmental organisations, and international financial institutions, such as the World Bank's Health Emergency Preparedness and Response Multi-Donor Fund and the Financial Intermediary Fund for Pandemic Prevention, Preparedness and Response. 90,91 This strategy would allow for a robust financing base with predictability, agility, adaptability, and leverage to attract additional funds.
Conclusion
If the role of governance includes supplying institutional arrangements in response to demand of societal problems, then pandemics reveal a probable institutional failure requiring a strong governance response. Public health, biodiversity conservation, food security, and trade are intertwined and their causal pathways for the emergence of zoonotic diseases spilling over into pandemics are more connected than ever due to increased exploitation of biodiversity, intensified interconnectivity of the world, and a larger human population. Pandemics require collective action not only across countries but also across sectors. Addressing this causal link is now paramount, but the acceleration of such a causal pathway has so far outpaced the development of institutional responses to address it. 89 With increased globalisation and urbanisation, containment of zoonotic outbreaks and prevention of spillovers into pandemics will likely become more difficult, hence the imperative for prevention at source to take centre stage in future strategies. 7,8 As a potential response to this issue, we have argued how an international institutional arrangement that addresses pandemics, accounting explicitly for the prevention of zoonotic spillovers from the wildlife trade for human consumption, could be built institutionally upon many mechanisms already in place or under development that foster accountability, transparency, coordination, and resource mobilisation. Importantly, a holistic and coordinated approach to zoonotic spillover prevention across all drivers is imperative. As institution building seems to be at the agenda formation and negotiation stages, 55 our recommendations for institutional design could also be applied and tailored to additional zoonotic drivers in the context of a potential WHO instrument for pandemic prevention, as well as to all zoonotic drivers within an international institutional arrangement negotiated outside the WHO framework. 52 For instance, the Convention on Biological Diversity's Subsidiary Body on Scientific, Technical, and Technological Advice is working on the issue of Biodiversity and Health, including (but not limited to) the prevention of zoonotic spillover from the wildlife trade. Indeed, the Subsidiary Body on Scientific, Technical, and Technological Advice discussed a possible Action Plan on Biodiversity and Health at its meeting in Geneva (Switzerland) in March, 2022. 92 Despite the paradox between timing and urgency for treaty negotiations, there is a need to act while the effects of a pandemic are still tangible as they can stimulate institution building. Times of crises might not be perceived as most appropriate for institution building as all efforts are deployed in dealing with the current problems as they unfold. Conversely, although periods between crises could enable more political bandwidth for institution building, the sense of urgency to do so could wane as crises are overcome. Acknowledging this conundrum, we recommend the impetus given by the COVID-19 crisis is used catalytically to develop the macrostructure of an international system for pandemic prevention without necessarily developing all details in the immediate future.
Contributors EG-C, ND, AP, and MW conceived and framed the initial idea. EG-C wrote the first draft, and all authors contributed equally to subsequent iterations, revisions, and the final manuscript.
Declaration of interests
We declare no competing interests. | 2023-04-05T15:09:44.150Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "fc97137f00aef65bb919cb68e64de2e5df0084a5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "146db019d8ce062071431ab1abd1a416cdb95a80",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
74045441 | pes2o/s2orc | v3-fos-license | Protein S Deficiency-An Uncommon Cause with Common Presentation
Correction: The correct PDF for this article was loaded on 9th March 2017. We offer our sincere apologies for having the wrong PDF loaded for this article. Stroke in children poses a major health problem. Thrombophilic factors have been implicated in 4-8% of young strokes worldwide. Protein S deficiency is a very rare cause of stroke. A few cases have been reported in the literature. We report a rare case of protein S deficiency causing stroke in a two-year-old child. J Nepal Paediatr Soc 2015;35(2):192-194
Introduction
Stroke or cerebro-vascular accident poses a major health problem. Thrombophilic factors have been implicated in 4-8% of young strokes worldwide 1. Protein S is a naturally occurring vitamin K-dependent protein, which in conjunction with active protein C inhibits the clotting cascade. Protein S deficiency is known to be of clinical significance in patients with deep venous thrombosis or pulmonary emboli. The incidence of deep vein thrombosis is one episode for every 1000 persons. Protein S deficiency is found to be associated with cerebro-vascular occlusion, although its exact role is controversial. Till now, no case has been reported of protein S deficiency presenting as proptosis followed by hemiplegia, so we want to bring it to the notice of every clinician.
The Case
Dipali Tudu, a two-year-old female child with a body weight of 8 kg, was admitted to our institution with high-grade fever for three days and unilateral proptosis of the right eye. She had a history of a furuncle developing over the right side of the nose two days before the onset of fever. There was no contact history of TB.
On examination, surface temperature was raised (103°F). Proptosis was noticed in the right eye; the pupil was normal in size and reacting normally to light, while the left eye was normal on examination. Heart rate was 110/min, respiratory rate 28/min, and blood pressure 90/60 mmHg. On auscultation the chest was clear. There were no associated abnormal neurological signs. For orbital cellulitis of the right eye and high-grade fever, the patient was put on intravenous antibiotics, antipyretics, and other supportive management. On the second day of admission, the patient developed left-sided complete hemiparesis with UMN-type facial nerve palsy (left).
IV antibiotics were continued and aspirin was started along with physiotherapy. Workup for thrombophilias revealed reduced protein S function at 32 (normal range 50-140), whereas protein C at 84 (70-140) and antithrombin III at 120 (80-120) were within normal limits. Factor V Leiden mutation, MTHFR gene mutation, and prothrombin gene mutation were not detected. Anticardiolipin antibody and lupus anticoagulant were within normal limits.
During the first two weeks of treatment, the patient gradually became afebrile and began to walk with support, regaining lost power, and was ultimately discharged after about three weeks, when she was able to perform her normal daily activities; she was asked to come for follow-up.
Discussion
Stroke in the young population has a high incidence of approximately 25-35%, according to some studies in India. Abraham et al 2 from Vellore reported an incidence of 25% in the population less than 40 years of age. Munts et al 3 reported that idiopathic coagulation disorders were found in about a quarter of young stroke patients, though there were no clear-cut data from India. Carod-Artal et al 4 studied ischemic stroke subtypes and the prevalence of thrombophilia in Brazilian stroke patients. They examined 130 consecutive young and 200 elderly patients. The prevalence of thrombophilia was, respectively: protein S deficiency (11.5% versus 5.5%) and protein C deficiency (0.76% versus 1%). They concluded that prothrombotic conditions were more frequent in strokes of undetermined cause.
The importance of thrombophilic disorders in arterial stroke has been debatable. Ischemic stroke has been reported as a rare manifestation of protein S deficiency. Girolami et al 5 and Sie et al 6 were among the first to report the association of familial deficiency of protein S as a cause of ischemic stroke in the young. Wiesel et al 7 studied 105 patients with protein S deficiency, out of which 14 had arterial thrombotic accidents involving the central nervous system or the myocardium, while most studies revealed a weaker association between the two 8,9,10. Douay et al 9 reported that hereditary deficiencies of coagulation inhibitors are rare in ischemic stroke patients under 45 years and their systematic detection seems to be of poor interest. Mayer et al 8 also supported the fact that acquired deficiency of free protein S is not a major factor for ischemic stroke. There were only a few case reports showing an association with arterial thrombosis, as reported by Ok E J et al 11. Pantam M et al 12 reported a 20-year-old case of protein S deficiency, who presented with homonymous hemianopia and decreased sensation in the right side of the body.
In this two-year-old patient without any risk factors, protein S deficiency possibly played a role in the internal carotid artery thrombosis. Protein S deficiency should be considered in venous stroke, recurrent pulmonary embolism, venous occlusion at unusual sites, a family history of vascular events, and stroke in the young population. The aetiology of such vascular events in the young must be thoroughly investigated so as to guide prevention and treatment of this devastating disease. Measurement of total and free protein S levels should be a part of the evaluation for any young adult who has had a stroke.
Conclusion
Therefore, when dealing with a case of stroke in children, protein S deficiency could also be thought of before making the proper diagnosis. As protein S deficiency predisposes to recurrent thrombophilic accidents, long-term follow-up is required after diagnosis. Early diagnosis and a targeted approach can help such patients prevent recurrent thrombotic episodes.
Table 1 :
Showing results for thrombophilias | 2018-12-21T18:22:16.457Z | 2016-01-20T00:00:00.000 | {
"year": 2016,
"sha1": "bbcb8fc86625305a87b4f854df28bb5896875b69",
"oa_license": "CCBY",
"oa_url": "https://www.nepjol.info/index.php/JNPS/article/download/13616/11690",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bbcb8fc86625305a87b4f854df28bb5896875b69",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119064497 | pes2o/s2orc | v3-fos-license | Recent Cross Section Work From NOvA
The NOvA experiment is an off-axis long-baseline neutrino oscillation experiment seeking to measure $\nu_{\mu}$ disappearance and $\nu_{e}$ appearance in a $\nu_{\mu}$ beam originating at Fermilab. In addition to measuring the unoscillated neutrino spectra for the purposes of predicting the oscillated neutrino spectrum in the far detector, the 293-ton near detector also enables high-statistics investigation into neutrino scattering in numerous reaction channels. We discuss the various near detector analyses currently in progress, including inclusive measurements of both electron and muon neutrino charged-current interactions and efforts to constrain the off-axis NuMI flux using the elastic scattering of neutrinos from atomic electrons.
Introduction
Over the course of the last three decades, neutrino oscillation experiments have sought to use the quantum-mechanical properties of the neutrino as a probe of the fundamental nature of the lepton family. Since the weak-force coupling of neutrinos to other particles is extremely small, terrestrial neutrino oscillation experiments, such as NOvA, typically construct large detectors from materials composed of heavy nuclei in an effort to maximize the neutrino interaction rate. But the intractability of calculating the dynamics of nucleons within the nucleus in the low-energy limit of the strong force introduces significant uncertainties into the reaction predictions used in measurements made with these detectors. Even in the two-detector paradigm used by NOvA and other experiments, in which a detector close to the neutrino source (the near detector, ND) is used to constrain the product of interaction cross section models and the flux prediction (which is then extrapolated to the far detector, FD, where oscillations are observed), direct measurements of neutrino interaction cross sections on the target materials are extremely valuable for constraining and choosing between models.
The 293-ton NOvA near detector is an ideal instrument to use for this sort of cross section measurement for several reasons. First, its location 14.6 mrad off-axis in the Fermilab NuMI neutrino beam it samples yields a narrow neutrino energy spectrum centered on 2 GeV, producing an event sample rich in interaction types (including copious examples of quasielastic scattering, baryon resonance production, and deep inelastic scattering) and exhibiting multiple kinds of nuclear effects (including coherent meson production, multi-nucleon scattering, and final-state hadron rescattering). Second, the detector itself is a mostly-active, fine-grained, segmented tracking calorimeter constructed of PVC cells filled with liquid scintillator with excellent spatial and energy resolution. We present status reports on a number of measurements currently in progress using the NOvA ND.
Figure 1: Predicted distribution of muon particle identification classifier described in the text for tracks (ν µ CC signal, red line; other predicted reactions, blue line) compared to ND data (black points). Events with Muon ID > 0.3 are retained as candidate ν µ CC events.
2 ν µ charged-current inclusive scattering
During its lifetime the NOvA ND is expected to record an immense sample of charged-current (CC) interactions of muon neutrinos on the liquid scintillator (ν µ CH 2 → µ − X) ultimately numbering in the millions. The statistical power of this sample offers an unprecedented opportunity both to verify the basic nucleon-level models for CC reactions in detail and to examine the relevant nuclear effects near E ν = 2 GeV; this energy range has previously been explored mostly in light bubble chamber experiments in measurements reporting only total cross sections.
The lepton system
The comparatively long lifetime and clean ionization profile of muons make the lepton kinematics in CC reactions particularly amenable to precise measurement. NOvA reconstructs muons as tracks and separates them from the hadronic background using a k-nearest neighbors (kNN) algorithm trained with four variables: the track length, the longitudinal energy profile (dE/dx), the scattering along the track, and the fraction of energy in the neutrino event associated with the track. The distribution of the resulting classifier is shown in figure 1; the observed data distribution is well-described by the prediction. Events which have Muon ID > 0.3 and whose energy is contained inside a fiducial volume buffered from the edges of the detector by two cells are retained as candidate ν µ CC events. The predicted resolutions in both muon energy and angle for this sample are very good (averaged over the sample, 50 MeV → 3.8% and 4° → 1.6%, respectively), as indicated in figure 2. A doubly-differential cross section measurement in these variables is currently in progress; the influence of systematic effects (such as energy scales and the flux prediction) is currently under investigation.
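A minimal sketch of how such a kNN-based muon discriminant could be built is shown below; the four input variables follow the description above, but the library, the neighbour count, the preprocessing, and the way the 0.3 working point is applied downstream are illustrative assumptions rather than the actual NOvA implementation.

```python
# Illustrative sketch of a kNN-based muon PID; not the actual NOvA code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_muon_id(features_train, is_muon_train, k=80):
    """features_train: (n_tracks, 4) array of
    [track length, mean dE/dx, scattering measure, track energy fraction];
    is_muon_train: boolean truth labels from simulation.
    k=80 is an assumed neighbour count."""
    pid = make_pipeline(StandardScaler(),
                        KNeighborsClassifier(n_neighbors=k))
    pid.fit(features_train, is_muon_train)
    return pid

def select_numu_cc(pid, features, contained, threshold=0.3):
    """Return a mask of candidate nu_mu CC events: Muon ID > threshold and
    event energy contained inside the fiducial volume (boolean mask)."""
    muon_id = pid.predict_proba(features)[:, 1]  # fraction of muon-like neighbours
    return (muon_id > threshold) & contained
```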
The hadronic system
Because NOvA is a tracking calorimeter, it offers detailed reconstruction of the hadronic part of ν µ CC interactions as well. Here the effect of the nucleus on neutrino interactions takes center stage: we observe clear evidence for an extra reaction type, beyond those predicted by default GENIE 2.10.4, lying in between the quasielastic (QE) and baryon resonance (RES) channels in momentum transfer variables built from E µ and E had (the reconstructed muon and non-muon energies in the system); this is illustrated in figure 3. Inspired by recent work in neutrino scattering 2 , we interpret this discrepancy as the lack of a model for a two-particle, two-hole (2p2h) process, in which the neutrino scatters from a nucleus and ejects two of the nucleons (which were previously in some kind of correlated state) together. GENIE 2.10.4 does ship with an "optional" (not enabled by default), mostly empirical model for 2p2h reactions 3 , "Empirical MEC a " (previously called "Dytman MEC," after its author). Because it is unclear whether the kinematic assumptions built into this model, which were constructed largely from observations at lower E ν , should extrapolate correctly to NOvA's neutrino energy range, we further modify this model as follows:
1. We reverse the linear turn-off of the cross-section between 1 and 5 GeV (so that the Empirical MEC cross section becomes a constant fraction of the QE one), since there are recent indications that 2p2h exists with similar strength at energies above 5 GeV. 2
2. We reverse the fraction of scattering from neutron-neutron and neutron-proton pairs in the model to 20% and 80%, respectively, based on indications from electron scattering 5 and expectations from theory in neutrino scattering 6 . b
3. We apply a momentum-transfer-dependent weight derived from our ND data as described in the next paragraph.
a Meson Exchange Currents (MEC) are one predicted class of 2p2h which have generated intense theoretical interest in recent years. Good summaries of the various strategies can be found elsewhere. 4
b The typo that led to the need for this correction has been corrected in GENIE 2.12.
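The expressions defining these momentum transfer variables did not survive extraction here, so the sketch below reconstructs them under conventional calorimetric assumptions (neutrino energy taken as the sum of the muon and hadronic energies, energy transfer q0 equal to E had, and |q| built from the reconstructed Q²); whether these are exactly the definitions used in figure 3 is an assumption.

```python
# Reconstruction of the (q0, |q|) variables under conventional assumptions.
import numpy as np

M_MU = 0.105658  # muon mass in GeV

def momentum_transfer(e_mu, p_mu, cos_theta_mu, e_had):
    """e_mu, p_mu: reconstructed muon energy and momentum (GeV);
    cos_theta_mu: cosine of the muon angle w.r.t. the beam;
    e_had: summed non-muon (hadronic) energy (GeV)."""
    e_nu = e_mu + e_had                                  # calorimetric neutrino energy
    q0 = e_had                                           # energy transfer
    q2 = 2.0 * e_nu * (e_mu - p_mu * cos_theta_mu) - M_MU**2  # reconstructed Q^2
    q3 = np.sqrt(np.maximum(q2 + q0**2, 0.0))            # three-momentum transfer |q|
    return q0, q3
```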
To construct weights that constrain the Empirical MEC to better fit our observed data, we first examine the data excess in |q| (effectively the difference of the integrals of data and simulation c in each panel of figure 3). We reweight the Empirical MEC such that it agrees with the data excess in this variable. To set the fourth component of the four-momentum transfer, q0, we fix it to the shape of the predicted q0 distribution in each bin of |q| taken from the GENIE quasielastic channel. This somewhat underestimates the E had in the observed distribution, as illustrated in figure 4b, but the overall agreement relative to the untuned version (figure 4a) is substantially improved. The GENIE 2.10.4 prediction with tuned Empirical MEC is the base prediction for current oscillation analysis efforts, including those discussed elsewhere in this volume.
c After applying the correction to non-resonant 1π production from neutrons suggested by Rodrigues et al.
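The |q|-dependent weight derivation can be sketched as follows; the binning, the normalisation bookkeeping, and the choice to fill the full data-minus-(non-MEC) excess with the Empirical MEC component are assumptions about the procedure, not the actual NOvA tuning code.

```python
# Illustrative sketch of deriving |q|-binned weights for the Empirical MEC
# component from the data excess; inputs and bookkeeping are assumed.
import numpy as np

def mec_weights(q3_data, q3_mc, q3_mec, mc_norm=1.0, bins=np.linspace(0.0, 2.0, 41)):
    """q3_data: reconstructed |q| for data events;
    q3_mc: |q| for all simulated events (scaled to data exposure by mc_norm);
    q3_mec: |q| for the simulated Empirical MEC subset.
    Returns bin edges and per-bin weights scaling the MEC prediction."""
    n_data, _ = np.histogram(q3_data, bins=bins)
    n_mc, _ = np.histogram(q3_mc, bins=bins)
    n_mec, edges = np.histogram(q3_mec, bins=bins)

    excess = n_data - mc_norm * (n_mc - n_mec)   # data minus non-MEC prediction
    denom = mc_norm * n_mec
    w = np.ones(len(denom), dtype=float)
    ok = denom > 0
    w[ok] = np.clip(excess[ok], 0, None) / denom[ok]
    return edges, w

# Applying the weights to simulated MEC events with reconstructed |q| values q3:
# event_w = w[np.clip(np.digitize(q3, edges) - 1, 0, len(w) - 1)]
```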
3 ν e charged-current inclusive scattering
Electron neutrinos are expected to undergo the same types of reactions and their interactions are expected to experience the same types of nuclear effects as ν µ , up to the influence of the difference in the charged lepton masses. Understanding whether this is actually the case is very important for oscillation experiments like NOvA, for which the interactions of ν e appearing via oscillation from a ν µ beam comprise a critical signal channel. However, at energies around several GeV, until recently it has been challenging to accumulate enough ν e interactions to make statistically significant measurements. The very intense NuMI beam used by NOvA, on the other hand, has about a 1% admixture of ν e , opening the door for high-statistics investigation. For a cross section analysis, NOvA begins selecting ν e interactions using a likelihood classifier constructed from the longitudinal energy profiles of various particle templates; the performance of this classifier (after a baseline selection requiring containment and rejecting especially minimum-ionizing tracks to reject ν µ CC), and the selection cut made on it, is illustrated in figure 5a. Once this electromagnetic cascade-enhanced sample is obtained, further purification is accomplished using a boosted decision tree using shower shape variables (both longitudinal and transverse); its performance is shown in fig. 5b. Studies of sideband regions in these variables are underway in order to better constrain the predicted backgrounds and understand what the dominant uncertainties will be for a cross section.
4 Constraining neutrino flux with ν − e elastic scattering
The neutrino flux prediction is an essential ingredient to any cross section measurement because it represents the normalization coefficient as a function of neutrino energy; traditionally flux uncertainties comprise the largest source of error for extracted cross sections. This owes primarily to the fact that ab initio calculations of horn-focused neutrino beams like NuMI depend on predictions of the strong-force dynamics of protons colliding (and re-interacting) with complex molecular targets like graphite, which are difficult. However, it is in principle possible to constrain the flux prediction using an in situ measurement of a neutrino scattering process with a well-understood cross section. Because of the complexities of neutrino interactions with nuclei, however, purely leptonic processes like ν + e → ν + e scattering (neutrinos with atomic electrons) are the reactions most amenable to use in this fashion. Unfortunately, the cross section of ν + e → ν + e scattering is suppressed relative to nucleon scattering by the ratio of the electron to nucleon masses and other kinematic factors, resulting in σ ν−e /σ ν−N ∼ 10⁻⁴. Therefore statistics are typically low in this channel.
As in the ν e CC case, NOvA uses two PID classifiers to identify candidate electron showers for this analysis: one that distinguishes between electromagnetic showers and other backgrounds, and one that specifically distinguishes between electron-induced and photon- or neutral-pion-induced showers. After selections on these variables, we employ a cut at 0.005 GeV × rad² on the kinematic variable E e θ e ², which is limited to very small values by the kinematics of the interaction itself, to further enrich the signal; this is illustrated in figure 6a. The resulting electron energy spectrum, which will be used to constrain the flux, is shown in figure 6b. Currently efforts are being devoted to quantifying the size of uncertainty in the signal efficiency and background cross section and flux predictions. It is expected that this technique will constrain the flux normalization to around 10% uncertainty. | 2016-11-21T14:08:17.000Z | 2016-11-08T00:00:00.000 | {
"year": 2016,
"sha1": "cd2f455e33e5e1dca47a0ef90ed2c25a8bed1f3f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f56c4c7a72033e3b5800a39b7345d9e2f10ac37b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233297025 | pes2o/s2orc | v3-fos-license | Quick Learner Automated Vehicle Adapting its Roadmanship to Varying Traffic Cultures with Meta Reinforcement Learning
It is essential for an automated vehicle in the field to perform discretionary lane changes with appropriate roadmanship - driving safely and efficiently without annoying or endangering other road users - under a wide range of traffic cultures and driving conditions. While deep reinforcement learning methods have excelled in recent years and been applied to automated vehicle driving policy, there are concerns about their capability to quickly adapt to unseen traffic with new environment dynamics. We formulate this challenge as a multi-Markov Decision Processes (MDPs) adaptation problem and develop Meta Reinforcement Learning (MRL) driving policies to showcase their quick learning capability. Two types of distribution variation in environments were designed and simulated to validate the fast adaptation capability of the resulting MRL driving policies, which significantly outperform a baseline RL policy.
I. INTRODUCTION
In the last decade, Reinforcement Learning (RL) methods have excelled and been applied to many problems, including video games [1], robotic manipulations [2], natural language processing [3], business management [4], healthcare [5] and intelligent transportation systems [6]. Amongst the recent work in the field of RL, there are two inspiring success stories. The first one is a system that plays video games at a superhuman level [7]. The second well-known success story is AlphaGo, which combines supervised learning and reinforcement learning [8] and defeated one of the top rank human Go masters. A wide variety of RL applications can be found in the comprehensive survey by Li [9].
However, RL in real-world applications has not been as successful due to several factors. 1) Real-world applications usually do not have a unique and clearly-defined reward function. Unlike playing Atari video games or Go/Chess, most real-world cases neither have a single goal, nor have a unique reward function. Instead, designing the reward function to balance multiple sub-goals, or discovering an underlying reward function from demonstrations [10], or learning without a reward function [11] are all possible first tasks to be solved. 2) For real-world safety-critical problems such as automated driving systems and robot manipulators, learning from mistakes is not practical due to the high penalty (e.g. crashes) that comes with real-life blunders. Recently, several works have formulated RL safety problems to train policies with guard rails, including [12], which utilized Constrained MDPs (CMDPs), and [13], a short-horizon control embedding safety rules or safety agents [14]. Another approach uses Lyapunov functions [15], following safe exploration strategies [16]. 3) Video games, board games, and other closed-world environments have fixed environment dynamics in how they transition, albeit highly complex. Most real-world challenges, on the other hand, are ever evolving and will inevitably contain environments with distributions not present in the training data. Thus an RL policy trained in a single simulation environment may find it difficult to generalize towards the real-world distribution. More challenges need to be addressed to apply RL to real-world problems, and we refer readers to the work by Gabriel [17].
In this paper, we will address the third gap in AV applications. Many challenges remain in capturing the full distribution of the real-world environment transition function in automated driving simulations. In addition to sensor and actuator uncertainties, environmental agents such as other road users are especially difficult to characterize, and all of them can drift over time. In this paper, we will focus on the changing distribution of the traffic environment, since it can be considered the most challenging issue, with human behaviors and driving styles being diverse and known to change over time [18].
Amongst all decision-making problems in autonomous driving, the lane change is an important feature for automated vehicles to maintain mobility. A lane change that is not urgent but desirable (e.g. overtaking a slower vehicle) is considered a Discretionary Lane Change (DLC), as opposed to a Mandatory Lane Change (e.g. required upon lane closing). In this work, we formulate the discretionary lane change problem as a quasi-static MDP, and the automated vehicle is supposed to learn to adapt to a range of different MDPs.
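To make the multi-MDP formulation concrete, the sketch below shows one way such a family of lane-change MDPs could be parameterised, with every task sharing the state and action spaces while differing in the surrounding-traffic parameters that shape the transition dynamics; the specific fields, ranges, and sampling scheme are illustrative assumptions, not the configuration used in this paper.

```python
# Illustrative parameterisation of a family of lane-change MDPs; the fields,
# ranges, and sampling scheme are assumptions for exposition only.
import random
from dataclasses import dataclass

@dataclass
class TrafficTask:
    """One MDP in the family: same states/actions, different dynamics."""
    n_adversarial: int          # number of adversarial surrounding vehicles
    politeness: float           # 0 = egoistic ... 1 = altruistic drivers
    desired_speed_mps: float    # mean desired speed of surrounding traffic
    headway_s: float            # mean desired time headway

def sample_task(rng: random.Random) -> TrafficTask:
    return TrafficTask(
        n_adversarial=rng.randint(0, 3),
        politeness=rng.uniform(0.0, 1.0),
        desired_speed_mps=rng.uniform(20.0, 33.0),
        headway_s=rng.uniform(1.0, 2.5),
    )

# Meta-training would repeatedly draw tasks from this distribution and let the
# policy adapt to each; at test time, unseen tasks probe fast adaptation.
rng = random.Random(0)
train_tasks = [sample_task(rng) for _ in range(20)]
```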
Meta-RL-based policies for discretionary lane change applications were developed in this paper. To validate our method, we experiment with two types of varying environment distributions. The first considers a varying distribution/number of adversarial vehicles, and the second contains a varying distribution of surrounding driving behaviors, ranging from egoistic to altruistic, obtained by assigning different parameters in the driver model. We want to validate that the trained policy can handle both hostile and normal (albeit varying) surrounding vehicles. The rest of this paper is organized as follows. The problem definition and literature are presented in Section II. The background knowledge is described in Section III. In Section IV the simulation environments are described, followed by the training and testing setup in Section V. Simulation results are shown and analyzed in Section VI. Finally, the paper is concluded in Section VII.
II. RELATED WORKS
Different methods have been used to design fast-learning agents that adapt to unseen environments. The Markov Decision Process (MDP) is a popular approach for such design problems. Approaches using multi-MDPs adopt different optimization and adaptation methods. Frequently, the agent trains an identifier using supervised learning. Subsequently, for each identified model, Model Predictive Control (MPC) or Dynamic Programming (DP) methods can be used to learn the policy. During adaptation, we can switch to the corresponding controller based on the identified model. For instance, in [19], Nagabandi et al. use meta learning to train a dynamics model prior, and this prior can rapidly adapt to the local context when combined with recent data. The controller is extracted using model predictive path integral control. However, the models need to be enumerated with their structure, limiting the agent's generalization ability.
In other studies, researchers use behavior cloning for the adaptation step. For example, in [20], [21], the authors present Domain-Adaptive Meta-Learning, a system that allows robots to learn from a single video of a human via prior meta-training data collected from related tasks. During training, the agent is provided with demonstration data and is taught how to infer a policy from just one demonstration. During testing, only one expert demonstration is provided, and the agent runs behavior cloning. The performance after adaptation can be outstanding; however, it requires expert demonstrations, which we do not assume to have in our work.
Model-free Meta Reinforcement Learning (MRL) can also solve adaptation problems. In [22]- [24] the authors use a Recurrent Neural Network (RNN) and encode the MDP's information as the hidden memory of the RNN, so the policy contains in its weights the information needed to adapt to different environments. However, there is no mathematical convergence proof, and we cannot guarantee that the RNN-based MRL methods adapt well or converge at all. Therefore, a more consistent MRL method is needed.
Another class of MRL methods uses the policy gradient approach for both the meta training and the adaptation steps [25]- [27]. In [25], Finn et al. developed the Model Agnostic Meta-Learning (MAML) method. The idea is that the agent tries to find a parameter θ such that, when the agent takes a few gradient steps from that θ, it reaches a θ*_i which is optimal for a given MDP M_i. However, policy gradient methods suffer in sparse-reward environments: the agent cannot update its policy using trajectories with no reward. Also, if the reward functions are the same for different environments, the MAML agent may not capture the environments' features.
Both the RNN-based and gradient-based approaches use on-policy RL methods for both the meta training and the adaptation steps and thus are data inefficient. The adaptation step is inherently on-policy learning since, given a new environment, the agent needs to collect new data using the current policy. On the other hand, the meta training step does not have to be on-policy. Leveraging a stochastic encoder to capture the context of adaptation data, Rakelly et al. [28] developed an off-policy MRL method with its meta training step based on the Soft Actor-Critic (SAC) [29]. The developed method is called Probabilistic Embedding for Actor-critic RL (PEARL). The PEARL MRL method is consistent, data-efficient, and has an advanced exploration strategy.
In this work, we will implement the gradient based MAML method and the stochastic encoder based method PEARL in our DLC application and compare their performance.
III. META REINFORCEMENT LEARNING BASICS
In this section, we introduce the Meta Reinforcement Learning (MRL) basics and notation. Unlike traditional Reinforcement Learning (RL) methods, in which a single Markov decision process (MDP) problem M is solved, MRL tries to solve the multi-MDP adaptation problem. The training and testing tasks of MRL are different but drawn from the same task distribution p(M), where each task is an MDP, consisting of a set of states, actions, a transition function, and a bounded reward function. Each task M_i consists of the state space, the action space, the transition probability, the reward function and the discount factor, i.e., M_i = {S_i, A_i, P_i, r_i, γ_i}. In traditional RL, we solve:

α* = argmax_α E_{π_α} [ Σ_t γ^t r(s_t, a_t) ],    (1)

where α is the parameter vector of policy π, r is the reward function, and γ is the discount factor. In MRL, we instead solve:

θ* = argmax_θ Σ_{M_i ∼ p(M)} E_{π_{α_i}} [ Σ_t γ^t r_i(s_t, a_t) ],    (2)

α_i = f_θ(M_i),    (3)

where θ is the parameter of the adaptation function f_θ(M_i), and M_i is the i-th MDP environment. The meta-learner of Equation (2) adapts its parameter based on the sum of expected returns collected from performing each task with the adapted policy of Equation (3). The meta learning step of Equation (2) is understood as the outer loop, whereas the adaptation step of Equation (3) is the inner loop that fine-tunes its parameter based on trajectories collected with the initial policy associated with the meta-learner parameter.
In this work, we implement two state-of-the-art MRL methods, the MAML [25] and PEARL [28] methods. In the MAML approach, both the meta training and adaptation loops are performed using policy gradient, for which the updating equation of the meta training step, integrated with the parameter update of the inner loops, is:

θ ← θ + α_2 ∇_θ Σ_{M_i ∼ p(M)} E_{π_{θ'_i}} [ Σ_t γ^t r_i(s_t, a_t) ],  with  θ'_i = θ + α_1 ∇_θ E_{π_θ} [ Σ_t γ^t r_i(s_t, a_t) ],    (4)

where α_1 and α_2 are the learning rates, and r_i is the reward function of the i-th MDP. After the meta training step, the agent will have learned a θ that is sensitive to all given reward functions, so that, starting from that θ, with only a few steps of adaptation, the agent can find a better policy.
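As an illustration of the inner/outer structure of Equation (4), the following is a minimal numpy sketch of a first-order MAML update on a toy task family; the quadratic "return", the learning rates, and all function names are illustrative assumptions, not the implementation used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy task: the 'return' R_i(theta) = -||theta - theta_i*||^2 has a task-specific optimum."""
    theta_star = rng.normal(size=2)
    return lambda th: -2.0 * (th - theta_star)   # exact gradient of the toy return

def maml_step(theta, task_grads, a1=0.05, a2=0.01):
    """One meta-training step: per-task inner adaptation, then a first-order outer update."""
    outer_grad = np.zeros_like(theta)
    for grad in task_grads:
        theta_i = theta + a1 * grad(theta)       # inner loop: one adaptation step on task i
        outer_grad += grad(theta_i)              # first-order approximation of the meta-gradient
    return theta + a2 * outer_grad

theta = np.zeros(2)
for _ in range(500):
    tasks = [sample_task() for _ in range(8)]    # e.g. 8 training tasks per iteration, as in Section V
    theta = maml_step(theta, tasks)
```

In the paper's setting, the per-task gradients would come from policy-gradient estimates on sampled trajectories rather than from a closed-form objective.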
In the PEARL method [28], shown in Figure 1, the agent consists of a stochastic encoder for adaptation and an off-policy RL SAC algorithm for meta learning. During meta training, the encoder characterizes different environments with a latent variable z and forms its belief p(z|c), the distribution of z given the context c, i.e., batches of (s, a, s', r) adaptation data collected from the environment; the SAC algorithm improves its policy given this belief (specifically, given the latent variable z sampled from p(z|c)) and feeds back its critic loss to train the encoder. At inference, the agent collects recent data from the current environment with the initial policy to infer the belief, samples from it, and feeds the sample into the trained off-policy SAC. Fig. 1: PEARL method illustration. Image is from [28].
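To illustrate how PEARL forms its task belief, the snippet below sketches the product-of-Gaussians combination of per-transition encodings that PEARL uses to obtain p(z|c); the encoder outputs here are random placeholders rather than a learned network, and the SAC actor/critic that consume (s, z) are omitted.

```python
import numpy as np

def product_of_gaussians(mus, vars_):
    """Combine independent Gaussian factors q(z|c_k), one per context transition, into one posterior."""
    prec = 1.0 / np.clip(vars_, 1e-7, None)
    post_var = 1.0 / prec.sum(axis=0)
    post_mu = post_var * (prec * mus).sum(axis=0)
    return post_mu, post_var

# placeholder per-transition encodings of context tuples (s, a, s', r);
# in PEARL these come from a learned inference network
context_mu = np.random.randn(40, 5)             # 40 transitions, 5-dimensional latent z
context_var = np.abs(np.random.randn(40, 5)) + 0.1

mu, var = product_of_gaussians(context_mu, context_var)
z = mu + np.sqrt(var) * np.random.randn(5)      # sampled task belief; the SAC policy is conditioned on (s, z)
```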
In this work, we will implement both PEARL and MAML in our application.
IV. VARYING TRAFFIC CULTURES
In order for an automated vehicle to minimize traveling time and avoid lanes with traffic shockwaves, performing discretionary lane changes is necessary. For an automated vehicle, we further require it to drive with roadmanship, i.e., to make efficient lane changes without annoying or endangering other drivers, and to respond to crash threats safely - swiftly yet appropriately - without creating hazards for others. It is worth noting that while driving cautiously is usually safe, being overcautious is not acceptable [30].
The rest of this section describes the different traffic cultures that we introduce on a simulated highway, where we expect an MRL automated vehicle to quickly adapt to the culture and exhibit appropriate roadmanship accordingly. The different traffic cultures correspond to different MDPs, i.e., different statistical distributions of how the environment may transition.
A. Varying adversarial vehicles (w.r.t. trained distribution)

In [31], we generated "socially acceptable" attacks by training an adversarial attacker to explore the weaknesses of an AV with a fixed policy. The attacker attempted to invoke out-of-distribution traffic behaviors to confuse the AV. It showed that the trained attacker was capable of exploiting the fixed policy and inducing collisions for which the AV was largely to blame. In this paper, we want to show that by implementing MRL, the trained AV agent can adapt to different environments and reduce the crash rate. To demonstrate this, we characterize the environments by a distribution over three variables in Equation (5): 1) the traffic density variable α_den, which is a scale on the average distance between vehicles; 2) the number of total vehicles n_car, which can be sampled from 10 to 30; and 3) the number of attackers n_att, which is from 0 to 3. The attackers are randomly positioned around the AV.
M_i = {α_den, n_car, n_att},  α_den ∼ U(0.5, 1.5),  n_car ∈ {10, . . . , 30},  n_att ∈ {0, . . . , 3}.    (5)

These variables determine the initial conditions of the simulation environment, as shown in Figure 2. The reward functions for different environments are the same as in [13]. In Figure 2, the blue car is the agent, the red cars are the attackers designed in [31], and the white cars are regular drivers designed in [13].
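A minimal sketch of sampling one such environment configuration is shown below; treating n_car and n_att as uniformly distributed integers over the stated ranges is an assumption, since Equation (5) only specifies the distribution of α_den explicitly.

```python
import numpy as np

rng = np.random.default_rng()

def sample_attacker_env():
    """Draw one environment M_i = {alpha_den, n_car, n_att} following Equation (5)."""
    alpha_den = rng.uniform(0.5, 1.5)      # scale on the average inter-vehicle distance
    n_car = int(rng.integers(10, 31))      # total number of surrounding vehicles
    n_att = int(rng.integers(0, 4))        # adversarial vehicles, placed randomly around the AV
    return {"alpha_den": alpha_den, "n_car": n_car, "n_att": n_att}

envs = [sample_attacker_env() for _ in range(8)]   # e.g. 8 training tasks per meta-iteration
```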
B. Varying social behaviors from egoistic to altruistic
In this experiment, we build the distribution of environments based on the highway-env [32] environment.
The state-space S ⊆ R^n of the learning agent (the green box in Figure 3) includes the host vehicle's lateral position y and longitudinal velocity v_x, the relative longitudinal position of the i-th surrounding vehicle ∆x_i, the relative lateral position of the i-th surrounding vehicle ∆y_i, and its relative longitudinal velocity ∆v_x,i. In total, we have a continuous state space of 2 + 3 × 6 = 20 dimensions, i.e., S ⊆ R^20. The actions of the learning agent are the steering angle and acceleration, which are both continuous. The steering angle's range is [−π/4, π/4], and the acceleration's range is [−6 m/s^2, 6 m/s^2].
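For concreteness, a small sketch of how such a 20-dimensional observation could be assembled is given below; the dictionary field names and the ordering of the six neighbours are illustrative choices, not details taken from this paper or from highway-env.

```python
import numpy as np

def build_observation(ego, neighbours):
    """Assemble (y, v_x) of the host plus (dx_i, dy_i, dv_x_i) for six surrounding vehicles."""
    obs = [ego["y"], ego["vx"]]
    for nb in neighbours[:6]:                       # six nearest surrounding vehicles
        obs += [nb["x"] - ego["x"], nb["y"] - ego["y"], nb["vx"] - ego["vx"]]
    return np.asarray(obs, dtype=np.float32)        # shape (20,)

def clip_action(steer, accel):
    """Keep continuous actions inside the stated bounds."""
    return float(np.clip(steer, -np.pi / 4, np.pi / 4)), float(np.clip(accel, -6.0, 6.0))
```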
In the highway-env [32] environment, the surrounding vehicles are controlled by the IDM-Mobil model, and a vehicle will change lane when:

ã_c − a_c + p (ã_n − a_n + ã_o − a_o) > ∆a_th,    (6)

ã_n ≥ −b_safe,    (7)

where a_c is the ego vehicle's acceleration in the current lane and ã_c is the potential ego vehicle's acceleration if it changes lane. New and old successors are denoted as n and o, and the corresponding a is the current acceleration and ã the potential acceleration if the ego vehicle changes lane. p is the politeness factor and ∆a_th is the switching threshold. Therefore, the social behavior and aggressiveness of a surrounding vehicle can be represented by the parameters p and ∆a_th. Equation (7) is the safety criterion that guarantees that, after the lane change, the deceleration of the successor in the target lane does not exceed a given safe limit b_safe. Since the politeness factor and the switching threshold are correlated for a given kind of driver behavior, we do not sample them separately. Instead, we designed three different kinds of driver behavior: the aggressive driver, the normal driver, and the conservative driver [33]. The corresponding parameters are listed in Table I. From the table, we can see that aggressive drivers do not consider other surrounding vehicles and may change lanes with a small acceleration advantage, while conservative drivers do consider other surrounding vehicles and will change lanes only when there is a large acceleration advantage. The normal drivers are in between. Each environment is determined by the following variables: the traffic density variable α_den, which is a scale on the average distance between vehicles, and the total number of vehicles n, which is the sum of the number of aggressive drivers n_agg, the number of normal drivers n_nor, and the number of conservative drivers n_con. To sample an environment, we first uniformly sample the traffic density variable α_den from 0.5 to 1.5 and the total number of vehicles n from 10 to 30. Then the numbers of the different driver behaviors (i.e., n_agg, n_nor and n_con) are sampled from the multinomial distribution M(n, k), where n is the total number of vehicles and k = 1/3. By sampling from M(n, k), we have n_agg + n_nor + n_con = n and the probability of sampling each category is the same.

Fig. 3: The highway-env environment [32]. The white box is the agent, the yellow boxes are aggressive drivers, the blue boxes are normal drivers and the green boxes are conservative drivers.
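The following is a small sketch of the lane-change test in Equations (6) and (7) as it might be applied to one surrounding vehicle; the parameter values, including the default b_safe and the driver-type dictionary, are placeholders rather than the values in Table I (which is not reproduced in this text).

```python
def mobil_lane_change(a_c, a_c_new, a_n, a_n_new, a_o, a_o_new,
                      politeness, delta_a_th, b_safe=4.0):
    """Return True if a lane change satisfies both the incentive (6) and safety (7) criteria.

    a_c / a_c_new : ego acceleration in the current lane / after a hypothetical change
    a_n / a_n_new : new successor's acceleration before / after the change
    a_o / a_o_new : old successor's acceleration before / after the change
    """
    incentive = (a_c_new - a_c) + politeness * ((a_n_new - a_n) + (a_o_new - a_o)) > delta_a_th
    safety = a_n_new >= -b_safe
    return incentive and safety

# illustrative driver types; the actual Table I values are not reproduced here
driver_params = {
    "aggressive":   {"politeness": 0.0, "delta_a_th": 0.1},
    "normal":       {"politeness": 0.3, "delta_a_th": 0.3},
    "conservative": {"politeness": 1.0, "delta_a_th": 0.8},
}
```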
The reward functions [32] for the different environments are the same and are composed of a velocity term and a collision term:

r = α (v − v_min) / (v_max − v_min) − β · 1{collision},    (8)

where v, v_min, and v_max are the current, minimum, and maximum speed of the agent, respectively, and α, β are weighting coefficients. For details, please refer to [32].
V. TRAINING AND TESTING SETUP

We have implemented the PEARL and MAML MRL methods in both environments with varying distributions and compare their meta training processes in Figure 4a and Figure 5a. Both algorithms are trained on 8 different tasks sampled from each environment distribution. Then, at the end of each iteration, the meta agent is tested on 4 unseen tasks sampled from each environment distribution to assess its adaptation. The average returns over the 4 unseen tasks during meta training are compared in Figure 4a and Figure 5a, with the x-axis being the total number of environment transition steps, which represents the amount of data used to train the meta learner. The returns of each algorithm are averaged across five random runs. The hyper-parameters of both PEARL and MAML are tuned using the optuna package [34], an open-source hyper-parameter optimization framework that automates the hyper-parameter search.
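A minimal sketch of such an optuna search is shown below; the search space, the number of trials, and the train_and_evaluate_meta_agent routine are hypothetical placeholders, as the tuned hyper-parameters are not listed in this text.

```python
import optuna

def train_and_evaluate_meta_agent(lr, latent_dim, batch_size):
    """Placeholder: in practice this would run meta-training and return the average meta-test return."""
    return -((lr - 3e-4) ** 2) - 0.01 * abs(latent_dim - 5)   # dummy surrogate so the sketch runs

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)          # hypothetical search space
    latent_dim = trial.suggest_int("latent_dim", 3, 10)
    batch_size = trial.suggest_categorical("batch_size", [64, 128, 256])
    return train_and_evaluate_meta_agent(lr, latent_dim, batch_size)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```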
For evaluating the performance of the trained meta agents, we compare the PEARL-trained agent and the MAML-trained agent with a fine-tuning method based on Trust Region Policy Optimization (TRPO) [35] with the safety check implemented from [36]. The fine-tuning method simply keeps updating the initial policy in a new environment given the collected data. The evaluation results are compared with the x-axis being the amount of data collected in the new environment. We sample 10^4 different environments and evaluate all three approaches.
For the DLC application, the crash rate of the trained agent is another important metric. Therefore, we will also report the crash rate of the fine-tune agent, MAML agent, and the PEARL agent in Section VI. We will compare the crash rate with the benchmark policy trained in the original environment using the method designed in [36].
VI. SIMULATION RESULTS

A. Training Results
This section shows the meta-testing returns of the PEARL and MAML methods during meta training. Results for the attacker environments described in Section IV-A are shown in Figure 4a and Figure 4b. In Figure 4a, we show the before- and after-adaptation returns of PEARL and MAML on a logarithmic axis. The x-axis is the total number of environment steps (s, a, s', r), representing how much data each method uses for training. As can be seen from the figure, the PEARL method converges after collecting 10^5 data points, while MAML converges after collecting 10^7 data points. PEARL is one hundred times more data-efficient than MAML, which is consistent with the conclusion in [28]. Moreover, looking at the before- and after-adaptation curves of each approach, the agent trained by the PEARL method shows good adaptation, while for the MAML method there is almost no adaptation. Zooming in on the last ten iterations of MAML and PEARL and putting them together gives Figure 4b. The red dashed line is the crash line; an average reward below this line indicates that there were crashes in that iteration. As can be seen, MAML not only shows no adaptation, but there are also still many crashes at the end of training, while for PEARL there is no crash after adaptation at the end of training.
Results for the IDM-Mobil environment described in Section IV-B are shown in Figure 5a and Figure 5b. In Figure 5a, we show the before- and after-adaptation returns of PEARL and MAML on a logarithmic axis, and in Figure 5b, we show the last ten iterations of the MAML and PEARL training curves.
We reach a similar conclusion: the PEARL method is much more data-efficient than the MAML method. Moreover, from Figure 5b, we can see that the PEARL agent shows good adaptation, in that the after-adaptation reward is much higher than the before-adaptation reward. Since the reward design of the IDM-Mobil environment is different from that of the attacker environment, there is no intuitive crash line. Therefore, we only summarize the crash rate in Table III in Section VI-B.
As can be seen from the training curves, the trained MAML agent shows almost no adaptation while the PEARL agent can adapt efficiently. The reason is that, in PEARL, the stochastic encoder takes full path data as input (specifically, (s, a, s', r) tuples), so the trained PEARL agent is able to capture the features of different environments with different transition probabilities. In comparison, the trained MAML agent in our experiments is not sensitive to varying transition probabilities of different environments/tasks, which seems logical since the integrated inner/outer-loop MAML policy update of Equation (4) involves only the reward function.
B. Evaluation Results
In this section, we evaluate the trained agents on random tasks sampled from each distribution of environments. The evaluation results for the attacker environments are shown in Figure 4c and the results for the IDM-Mobil environments are shown in Figure 5c. We compare the PEARL approach and MAML with the fine-tuning approach, in which we keep training the policy in a new environment. The x-axis is how much data we provide for the adaptation step. Each trajectory is at most 200 time steps if no crash occurs. As can be seen, after collecting two trajectories of data (at most 400 data points), PEARL can adapt well to new environments in both distributions of environments. However, the MAML and fine-tuning methods do not show improvement even with ten trajectories of data. This is because the data collected in the new environment are not useful for the MAML agent and the fine-tuning agent to update their policies.
Next, we report the different agents' crash rates during evaluation in Table II and Table III for the attacker environments and IDM-Mobil environments, respectively. All the methods are evaluated in random environments. On the leftmost column, we have the benchmark policy from Section V. The crash rate of the trained agent in the original environment is very low. However, when we test it in random environments, the crash rate increases significantly in both setups. For the fine-tune approach, the result shows that the agent cannot adapt to new environments with limited data, so the crash rate in new environments is around the same level for both setups.
In the attacker environments, as shown in Table II, the MAML agent keeps getting worse as more data are given. This is due to insufficient exploration during adaptation. Meanwhile, the PEARL agent can adapt to a new environment quickly with limited data. The crash rate of the PEARL agent reaches a very small value given 10 trajectories of data, which is comparable to the benchmark's crash rate in the original environment. In the IDM-Mobil environments, the MAML agent's crash rate improves with more data; however, the improvement is still not significant compared to the PEARL agent. As can be seen from Table III, PEARL can adapt to a new environment quickly with limited data. The crash rate of PEARL is comparable to the benchmark's crash rate in the original environment. Since in the IDM-Mobil environments there is no short-horizon safety check, the benchmark crash rate is higher than in the attacker environment. Moreover, in the IDM-Mobil environments, the agent controls the steering angle and the acceleration directly without any robust lower-level controller, which also causes a higher crash rate compared to the attacker environment. The crash rate results show that the PEARL-trained agent can achieve the benchmark-level crash rate with only ten trajectories of data in both setups.
VII. CONCLUSIONS
In this work we showed that, with all other things being equal, solving the multi-MDP problem offers significant adaptability for the resulting discretionary lane change policy under varying traffic cultures. This is important since surrounding drivers can behave differently at different times of day or in different weather conditions, and a policy trained under normal traffic behavior can be brittle when experiencing adversarial vehicles that can exploit the weaknesses of a fixed automated driving policy [31].
To observe how well an AV can adapt to an unseen traffic environment, two types of distribution variation in environments were designed, i.e., varying density of adversarial vehicles and varying mixture of surrounding vehicles' social behaviors. We observed that both of our MRL approaches (MAML and PEARL) enabled the AV to be a quick learner of the newly encountered traffic behavior when compared to the baseline of a classic RL agent that fine-tuned its policy with the same amount of trajectory data. Within ten trajectories of data, the MRL driving policy quickly adapted to the unseen traffic culture with the new MDP transition probability, and reduced the crash rate significantly when compared to the baseline RL driving policy. | 2021-04-20T01:15:53.552Z | 2021-04-18T00:00:00.000 | {
"year": 2021,
"sha1": "449c4f166f2856029687c6e9ba2e8b7de119a33c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.08876",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "449c4f166f2856029687c6e9ba2e8b7de119a33c",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
231579778 | pes2o/s2orc | v3-fos-license | Wesley LifeForce Suicide Prevention Gatekeeper Training in Australia: 6 Month Follow-Up Evaluation of Full and Half Day Community Programs
Background and Objective: Wesley Mission LifeForce training is an Australian suicide prevention gatekeeper program which has not been formally evaluated. The aims of this evaluation were to (1) determine the short- and medium- term impacts of the training on worker capabilities (perceived and declarative knowledge), attitudes, and reluctance to intervene measures; and (2) compare the impact of the half and full day workshops on these measures. Method: 1,079 Australian community workers of diverse professional backgrounds completed a pre-workshop questionnaire as part of registration for the Wesley LifeForce suicide prevention training between 2017 and 2019. Of these, 299 participants also completed the post workshop questionnaires (matched sample). They attended either half day (n = 97) or full day workshops (n = 202) and completed also a 3- and 6- month follow-up questionnaire. We used linear mixed-effect modeling for repeated measures to analyze data. Results: LifeForce training participants experienced an increase in perceived capability, declarative knowledge, more positive attitudes and reduced reluctance to intervene, at least in the short term. The program is particularly well targeted for community gatekeepers with no prior training, albeit those with prior training in this study also experienced positive and significant gains on most measured constructs. Conclusions: We found evidence of effectiveness of the Wesley LifeForce training over time, without difference between the short (half day) and longer (full day) formats of delivery. Nevertheless, the latter format offers skills-based and skills rehearsal opportunities, the impacts of which we were unable to measure in this evaluation and should be estimated in the future.
INTRODUCTION
There are several definitions of gatekeeper (GK), a concept that has evolved over time from being simply "a person to whom troubled people are turning for help" (1, p. 39) to those in a position to recognize a crisis and the warning signs that someone may be contemplating suicide (1), or a community member who has some face to face contact with numerous community members as part of their standard role (and who may be trained to identify at risk persons and refer them to appropriate support services) (2). The role of GK can be informally denoted, such as parents, friends, neighbors, sports coach or, formally designated such as teachers, doctors, nurses, police officers, and others who may, as a function of their work role, come into contact with suicidal persons (3).
There is some evidence for gatekeeper training (GKT) as a promising suicide prevention initiative (4). For example, GKT has been found to increase perceived knowledge and declarative knowledge about suicide (5-7); enhance self-efficacy for intervening (8,9); reduce reluctance to intervene (10,11); reduce stigma associated with suicide (12) and improve attitudes toward suicide/suicide prevention (13). However, while there is some evidence for the short-term efficacy of GKT, there is less evidence for long-term effects on constructs other than knowledge and self-efficacy (14). Interestingly, there is no evidence for retention of attitudinal change over time (14), which, according to Burnette et al. (15), represents a particularly critical outcome for GKT.
The Wesley LifeForce community suicide prevention training program is part of Wesley Mission's national suicide prevention program, funded by the Commonwealth Department of Health as part of Australia's National Suicide Prevention Strategy. The three main activities of Wesley LifeForce include: a) Suicide Prevention Training, b) Suicide Prevention Networks, and c) Memorial Services. The first of these, Wesley LifeForce Suicide Prevention Training, was the focus of the current evaluation.
GKT programs, such as Wesley LifeForce training, aim at educating volunteers or designated individuals in the community to be able to identify people who may be at-risk of suicide. They are designed specifically to enhance knowledge, attitudes, and skills of the GK in order to enable competency to identify those at risk, determine appropriate action for optimal safety of the person, and make appropriate referrals as necessary (15).
Evaluation of Wesley LifeForce training included Phase 1-review of the appropriateness of the training in terms of alignment with minimum training competencies in content and structure; and Phase 2-evaluation of the short to medium term impacts of the training on GK knowledge, attitudes and skills. Phase 1 evaluation findings can be reviewed in a separate report provided to Wesley Mission (see 2). In brief, the evaluation found that the Wesley LifeForce training complied with nearly all minimum standards and competencies for GKT as defined in the study. Recommendations were made for minor improvement of content-related competencies (associated with key learning outcomes of the program) and more significant modifications to the delivery/structural competencies of the training. All recommendations were subsequently implemented. The current study presents Phase 2 findings of the Wesley LifeForce Suicide Prevention Training Evaluation (an updated edition following implementation of the recommended changes).
The Wesley LifeForce training package was designed to meet the needs of both informal and formal GKs, with the former addressed by community training and the latter via more targeted specialized training (e.g., for aged care nurses and relationship counselors). The aim of the current paper is to evaluate the effects of Wesley LifeForce suicide prevention training program targeted at informal GKs (the general community). Specifically, we aimed to compare and determine impacts of the half day and full day general community training programs on perceptions of capability, declarative knowledge, attitudes toward suicide prevention and reluctance to intervene from before to after training, and at three and six-month follow-up periods.
METHOD Intervention
The general community training's target audience are persons with moderate to no suicide prevention training and/or those requiring contemporary refresher training. LifeForce community workshops are offered as half day (4 h) or full day (6 h) options, with the latter including more skills-based learning mechanisms using video and role-play activities. The training goals for community training include: Identify people who may be at risk of suicide; Communicate appropriately with a suicidal person; Ask a person if they are considering suicide; Conduct a suicide intervention. Three sessions are covered in the training: Session 1 covers the scope of suicide in Australia (statistics, terminology, definitions, theoretical models); Session 2 examines personal/professional beliefs and attitudes as well as barriers to suicide prevention; and, risk and protective factors, and warning signs and 'triggers' for suicidality/suicide; and Session 3 bridges understanding to skills-based responses using the S.A.L.T (See, Ask, Listen, and Take the person to help) intervention model to guide knowledge application. This intervention model is unique to Wesley LifeForce training, and therefore any gains in measures of declarative knowledge testing this specific model of intervention is less likely to have been gained from general exposure to suicide prevention education or awareness.
Study Design and Data Collection
Recruitment of participants to the training was via the Wesley Mission website and related news articles and online community networks' newsletter. More specifically areas of high suicide rate around the country were identified and local organizations were approached to reach local networks. Training is hosted at multiple local community venues within each jurisdiction of Australia (all states and territories), with offerings of community training occurring roughly 4 times per month nationally. Participant numbers at workshops were 10-20 per delivery.
A prospective study design was used, with online questionnaires distributed at four time-points to all community training participants. Registration required completion of the pre-workshop online questionnaire, while responding to the subsequent questionnaires relied on participants' willingness to continue participation in the study, which ran from January 2017 until December 2019. The post-workshop questionnaire was sent soon after the workshop, and the follow-up questionnaires were emailed to attendees at 3 and 6 months after the workshops. Two reminders were sent to participants within 2-3 weeks of each wave of the study. The attrition rates were 72.3% from pre- to post-workshop, 72.9% from post-workshop to 3-month follow-up, and 44.4% from 3- to 6-month follow-up. All procedures were approved by the Griffith University Human Research Ethics Committee (2017/241).
Measures
Background information included participants' age, gender, Indigenous status, Culturally and Linguistically Diverse background (CALD), professional role, work status, education, years in suicide prevention role, prior training, and expectation of using training in future.
Outcome measures included reluctance to intervene, perceived capability in suicide prevention, declarative knowledge about LifeForce training learning outcomes, and attitudes toward suicide and suicide prevention. The specific measures were as follows: Reluctance to Intervene is a 9-item scale measuring reluctance to intervene with a suicidal individual (10). Participants rated their level of agreement on a 5-point Likert scale from "strongly disagree" to "strongly agree," with two items reverse-scored. Each item value is summed for a total score ranging from 9 to 45, where higher values mean less reluctance. This scale had poor internal consistency (α = 0.45) as compared to the original testing results by the authors of the scale (α = 0.68) (10).
Perceived Capability Scale is a 15-item scale measuring perceived suicide prevention capabilities on skills and/or knowledge items that may be relevant when acting as a 'gatekeeper' and assisting someone at risk of suicide, and which are covered in the LifeForce training content (16). Participants are asked to rate their current level of capability on a 5-point Likert scale ranging from "not at all capable" to "highly capable." A total score ranged from 15 to 75, where higher scores mean higher capability. This scale presented an excellent internal consistency (α = 0.95).
Declarative Knowledge Scale was developed to align with the LifeForce learning objectives and outcomes of all training modules (16). It includes 17-items in True/False/Do not know answer format. Correct answers to these questions were ascertained by referring to the workshop training material developed by Wesley Mission. Score equals the percentage of correct answers. This scale showed a good internal consistency (α = 0.73).
Attitudes to Suicide Prevention scale (ASP) is a 14-item self-report scale measuring attitudes toward suicide and suicide prevention (17). Thirteen items use a Likert scale from "strongly agree" to "strongly disagree" and the final item response ranging from "none" to "all." The responses to these items are scored from one (strongly disagree/none) to five (strongly agree/all) and summed, resulting in a total score ranging from 14 to 70, with higher scores indicating more negative attitudes. This scale had a poor internal consistency (α = 0.47) as compared to the original testing results by the authors of the scale (α = 0.77) (17).
Statistical Analysis
The outcome measures presented above were used as dependent variables. All scales had a normal distribution (the range for skewness or kurtosis between +1.5 and −1.5). We used linear mixed-effect modeling for repeated measures, which accounts for the correlation between the repeated measures for each individual (18). Moreover, this method also deals with unbalanced data with the assumption that missing data are missing at random and they are not dropped from the analyses.
For the linear mixed-effect regression models, workshop type (full day and half day), time (pre, post, 3-and 6-month follow-up), age group (<35 years; 35+ years), working in suicide prevention (never, 0-12 months, 1-5 years, 5-10 years, 10+ years), gender (male, female, other gender identity), work discipline (community support, health, other), and the workshop type × time interaction, and group were entered as fixed effects. The participant ID variable was included in the random intercept to model for within-person factors at baseline. To reduce multicollinearity, all variables included as fixed effects were centered (19). Time (pre, post, 3-month, and 6-month follow-up) was included as a repeated effect. A First-Order Autoregressive (AR1) and Unstructured (UN) covariance structures were examined using −2 Res Log Likelihood and Akaike's information criterion (AIC). Both structures were applied to the levels of group (workshop group) * person (as workshops were delivered in groups and participants were therefore nested within these groups). Random intercepts for participants were included to model for the correlation of within-person factors at the baseline. The AR1 structure was identified as the model with the best fit with all dependent variables. Post hoc analyses for the linear mixed models were conducted with Sidak adjustment. Statistical analysis was conducted in the IBM SPSS 25.0.
RESULTS
Of the 1,079 participants who completed the pre-workshop questionnaire, 299 (27.7%) participants completed the postworkshop questionnaire and were thus included in the analyses. Of the 299, 81 participants also completed the 3-month and 45 completed the 6-month follow-up survey. There were significant differences between those who completed the post-workshop questionnaire and those who did not by gender (χ2(1) = 0.23, p < 0.05), age (χ2(1) = 4.11, p < 0.05), and expected training use (χ2(1) = 6.02, p < 0.01; Supplementary Table 1).
A total of 202 participants in the full day and 97 half day workshops were included in the analyses. Demographic information for these participants are presented in Table 1. The only significant differences between the two workshop types are that those who participated in the full day more frequently indicated that they would use the training in the future compared to those in the half-day workshop (χ2(1) = 6.77, p < 0.01). Changes in the main outcome measures over the study period are presented in Figure 1.
Reluctance to Intervene: Mixed-effects regression analysis (Table 2) showed that time was a significant predictor of the change in mean score of reluctance to intervene (i.e., less reluctance) (F (3,55.3) = 9.74, p < 0.001), but not workshop type, nor the interaction of time and workshop type. Post-hoc analyses (ST 2) indicated that there was a significant increase in scores from pre-to post-intervention (Mdif = 1.46, 95%CI: 0.71, 2.22; p < 0.001), but not from pre to 3-month follow-up (Mdif = 1.49, 95%CI: 0.33, 3.87; p = 0.15). There was some decline in scores observed after 3-month follow-up.
DISCUSSION
The main aims of the current study were to evaluate the effects of Wesley LifeForce suicide prevention training targeted at the general community by analyzing the endurance of its impacts on a number of measures, and to compare the impacts of the full day and half day programs. The results support the effectiveness of the Wesley LifeForce Suicide Prevention training, for both the full and half day training packages for community GKs. All outcome measures, including perceptions of capability, declarative knowledge, attitudes toward suicide prevention and reluctance to intervene, showed immediate improvements from pre- to post-training. Moreover, these gains were all maintained from post-training to the 3-month follow-up, and from the 3- to the 6-month follow-up, with the exception of perceived capability, for which scores decreased after the 3-month follow-up.
We did not identify any significant differences in outcomes between participants attending full day or half day workshops. Although there was a significant interaction between workshop types and time for declarative knowledge, post hoc analyses indicated there were no significant differences between workshop types at any time. Similarly, Cross et al. (20) compared brief GKT vs. GKT plus behavioral skills training to determine their impacts on skills and use of training and found significant increases for both workshops in attitudes and knowledge at post training as well as follow-up (20). However, those who received skills training via role play and behavioral rehearsal showed higher total skills scores (20). It is well established in the GKT literature that knowledge does not necessarily translate to practice (21). A recent systematic review of school-based GKT revealed that only three studies (out of 14) had measured GK behavior/skills changes, which showed generally significant positive effects from pre to post training. However, upon closer examination of these findings, no studies reported maintenance of positive changes and the combined findings implied that the knowledge and skills-based changes may not translate to behavior change (22). However, it was also suggested that this finding may be a result of short follow-up periods during which it is difficult to identify any changes (particularly based on lesser opportunities to apply the skills) (22). Nevertheless, as the application of skills to the real world is the least measured outcome in GKT studies, it is important that such outcomes are included in future investigations.
Reluctance to Intervene
Related to one's motivation to intervene, this study found less reluctance to intervene with a suicidal person post-training, with this difference sustained until the 6-month follow-up. This aligns with other studies that have shown reduced reluctance levels post-training, maintained at 5 months post-training (10), even when using a randomized control group design (11). However, in the few studies that have looked at the translation of intentions or motivations to intervene following training, there seems to be no association with putting this into practice as measured by self-reported behavioral change (11). As discussed above, while we were unable to measure the behavioral change implications, it would seem important to place additional emphasis within both types of LifeForce training workshops on discussing and practicing ways to overcome potential obstacles to utilization of skills in real-life situations. As stated, only the LifeForce full day workshop includes skills-based activities, so while we did not find differences between the different formats of delivery, it would be worthwhile measuring specific skills-based outcomes between them in the future to better understand their impact on skill utilization. Further, the LifeForce training should place specific additional emphasis on the role-playing elements of intervention in the context of discussions about the influence of skills rehearsal on willingness to intervene and on reducing the discomfort that can often accompany intervention behaviors (23)(24)(25). Additionally, some type of professional support or booster training is recommended, at least within the 3 months post-training, to sustain a willingness to respond to and intervene with suicidal persons.
Perceived Capabilities and Declarative Knowledge
Assessment of perceived capabilities in suicide prevention included examination of a suite of minimum competencies aligned directly with the LifeForce training packages in the form of a self-report measure. This is an important measure as it has been shown previously that confidence in personal abilities can have positive effects on motivating and encouraging participation in suicide prevention activities (15,26). We found that perceived capability increased post-training and was sustained up to 3 months but decreased at 6 months. This attrition could be related to the lack of opportunity to utilize knowledge and skills over time, although we were unable to report on the opportunities participants had to engage with suicidal persons during the study period. Nevertheless, it seems fair to assume that informal GKs have much less frequent contact with persons at risk of suicide compared to formally designated GKs, whose work necessitates the ongoing use of GK capabilities as part of their role (6).
Our examination of participants' declarative knowledge (a more objective account of assessing suicide prevention facts, directly aligned with LifeForce training learning objectives) showed significantly enhanced knowledge post-training which was maintained over the follow-up period. This is consistent with other findings where GK training has improved suicide-related knowledge in diverse community populations (5)(6)(7)27).
We also found that prior training in suicide prevention and more experience in suicide prevention predicted higher scores on perceived capability and declarative knowledge. Other studies have reported similar associations between prior training and experience with more enhanced training outcomes. For example, GK studies on health professionals (28), and workers from diverse behavioral and health fields (29) have found prior suicide training to be related to greater knowledge and confidence in GKT outcomes. Increased practice and rehearsal of acquired capabilities is known to maintain skills, which may in turn maintain both actual knowledge and perceptions of capabilities (20). Provision of booster training and other supportive education may enhance capability and reinforce acquired skills in the absence of opportunities for intervention. This may be particularly important for informally denoted GKs who are not regularly in contact with suicidal persons.
Attitudes to Suicide Prevention
Regarding the attitudes outcome, we found that negative attitudes to suicide prevention decreased from pre- to post-workshop and from pre-workshop to the 3-month follow-up, but not to the 6-month follow-up. Positive attitudinal change toward suicide prevention is one of the most difficult GKT outcomes to sustain long-term, as demonstrated in a recent review by Yonemoto et al. (30), which identified only one RCT study that found attitude changes sustained to 6 months post-training among youth helpers (13). We observed that participants of younger age and those with prior training had more positive attitudes, compared to those with no prior training. This demonstrates that, regardless of the impacts of LifeForce training, the individual's pre-training experience arguably plays a role in current attitudes toward suicide and suicide prevention. Consistent with the extant literature on attitudes and GKT (15), our results indicate that training can generally result in more positive attitudes; however, this outcome cannot be solely attributed to the impacts of the LifeForce workshop.
LIMITATIONS
Our study has several limitations and results should be interpreted against this background. Firstly, in light of the fact that there were some significant differences between completers and non-completers, it is possible that the study suffers from a self-selection bias, which may have impacted results. Further, other methodological limitations may prevent causal links being made between LifeForce Training and the enduring participant gains. We did not use a control group to compare different training program effects, so we were unable to conclude whether competency gains were the result of LifeForce training per se, or whether such impacts might be gained from a multitude of other influences. Moreover, not all training attendees participated in the research, and attrition rates were quite high over all time periods; similar experiences were reported by other studies including heterogeneous community samples (11). We attempted to address this limitation through the use of mixed linear modeling, as this method accounts for within- and between-participant variance and for correlations between repeated measures for each participant. Finally, the scales measuring reluctance to intervene and attitudes to suicide prevention had low internal consistency, in both the original scale development studies and in the current study. Thus, it is possible that results obtained on these scales are not robust enough to be conclusive.
CONCLUSION
We found evidence for effective impacts of the Wesley LifeForce training over time, for both the short (half day) and longer (full day) formats of delivery. The latter format offers skills-based and skills rehearsal opportunities which we were unable to measure in this evaluation, but which we recommend be emphasized in future evaluation studies of this program. Specifically, findings revealed that participants exposed to LifeForce training are likely to experience increased perceived capability, declarative knowledge and positive attitudes, and reduced reluctance associated with intervening, at least in the short term. In particular, the program is well targeted for those with no prior training, although those with prior training also experienced positive and significant gains on nearly all measured constructs. Community members and organizations with different professional backgrounds undertaking this training can expect to gain significant learnings and gains in key factors known to impact intervention behaviors.
DATA AVAILABILITY STATEMENT
We have not provided this statement. The raw data can be made available upon a reasonable request.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Griffith University Human Research Ethics Committee. The participants provided consent by progressing past the information sheet informing them that continuation into the online survey will represent their consent to participate. | 2021-01-12T14:19:28.775Z | 2021-01-12T00:00:00.000 | {
"year": 2020,
"sha1": "b1b78ba7a2d8ebd792fb023a84e6f75c681143a9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2020.614191/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1b78ba7a2d8ebd792fb023a84e6f75c681143a9",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270010812 | pes2o/s2orc | v3-fos-license | Ovine KRT81 Variants and Their Influence on Selected Wool Traits of Commercial Value
Keratins are the main structural protein components of wool fibres, and variation in them and their genes (KRTs) is thought to influence wool structure and characteristics. The PCR–single strand conformation polymorphism technique has been used previously to investigate genetic variation in selected coding and intron regions of the type II sheep keratin gene KRT81, but no variation was identified. In this study, we used the same technique to explore the 5′ untranslated region of KRT81 and detected three sequence variants (A, B and C) that contain four single nucleotide polymorphisms. Among the 389 Merino × Southdown cross sheep investigated, variant B was linked to a reduction in clean fleece weight, while C was associated with an increase in both greasy fleece weight and clean fleece weight. No discernible effects on staple length or mean-fibre-diameter-related traits were observed. These findings suggest that variation in ovine KRT81 might influence wool growth by changing the density of wool follicles in the skin, the density of individual fibres, or the area of the skin producing fibre, as opposed to changing the rate of extrusion of fibres or their diameter.
Introduction
Wool is primarily made up of two types of protein: keratins and keratin-associated proteins (KAPs). Wool keratins are the principal structural components of the fibre and form heterodimeric pairs that are then assembled into structures called intermediate filaments (IFs). These are embedded in, and covalently linked to, a diverse protein matrix composed of KAPs [1]. Two types of wool keratin have been defined: type I (acidic) and type II (basic-neutral) keratins. In sheep, a total of seventeen wool keratin genes (KRTs) have been identified, and these encompass ten type I wool keratin genes (KRT31, KRT32, KRT33A, KRT33B, KRT34-KRT36, KRT38-KRT40) and seven type II wool keratin genes (KRT81-KRT87) [2][3][4][5].
In this study, an investigation was conducted of the 5′ untranslated region (UTR) of ovine KRT81 in Merino × Southdown cross sheep to ascertain if genetic variation existed, and if identified, to explore whether it affected selected wool traits that determine the commercial value of wool. This cross was being developed to obtain lower mean fibre diameter (MFD) and higher mean fibre curvature (MFC) wool in sheep that have faster liveweight gains, earlier maturation and better carcass meat yield. The overall aim was to obtain further insight into the genetic basis of variation in wool characteristics and potentially lay a foundation for the selective breeding of sheep to improve wool quality.
Sheep Blood and Wool Samples
Three hundred and eighty-nine Merino × Southdown cross sheep, these being the offspring of six sires, were investigated. These sheep were produced over several years. The sheep were of a similar age, and they were managed as part of a single mob on improved pasture. For each sheep, a venous blood sample from the ear was collected onto TFN paper (Munktell Filter AB, Falun, Sweden), and the genomic DNA bound to the paper was purified using a procedure described by Zhou et al. [12].
PCR-Single Strand Conformational Polymorphism (PCR-SSCP) Analysis
Two PCR primers were designed, based on an ovine KRT81 gene sequence X62509 [3], to amplify a 427 bp fragment of the 5′ UTR region. The sequences of these primers were 5′-TGCACACACACAGGTCACC-3′ (forward primer) and 5′-GAATCCTGATCCGCAGGTC-3′ (reverse primer), and they were synthesised by Integrated DNA Technologies (Coralville, IA, USA). PCR amplification was conducted in a 15 µL reaction comprising the purified genomic DNA on a 1.2 mm punch of TFN paper, 0.25 µM of each primer, 150 µM of each dNTP (Bioline, London, UK), 2.5 mM of Mg2+, 0.5 U of Taq DNA polymerase (Qiagen, Hilden, Germany) and the 1× reaction buffer provided with the enzyme. The thermal profile consisted of an initial denaturation for 2 min at 94 °C, followed by 35 cycles of 30 s at 94 °C, 30 s at 62 °C and 30 s at 72 °C, with a final extension step of 5 min at 72 °C. The thermal cycling was performed in S1000 thermal cyclers (Bio-Rad, Hercules, CA, USA).
DNA Sequencing and Sequence Analyses
Representative selections of the PCR amplicons that displayed apparent homozygosity for different variants upon PCR-SSCP analysis were subjected to direct sequencing in both the forward and reverse directions at the Lincoln University DNA Sequencing Facility, NZ. In the situation where a variant was only observed in heterozygous sheep, a different sequencing method described by Gong et al. [14] was employed. In this approach, a PCR-SSCP band corresponding to the variant was removed as a gel slice from the polyacrylamide gel, crushed, and then used as a template for re-amplification. The resulting 'homozygous' amplicon was then subject to DNA sequencing.
Statistical Analyses
Statistical analyses were conducted using Minitab version 16 (Minitab Inc., State College, PA, USA). General linear models (GLMs) were used to assess the impact of the presence or absence of the KRT81 variants on the various wool traits that were measured. Genotypes with a frequency greater than 5% were used in GLMs to compare the various wool traits in sheep with those genotypes. To address the issue of undertaking multiple comparisons and reduce the chances of obtaining false positive results, a Bonferroni correction was applied and a post hoc Benjamini-Hochberg procedure was used to ascertain the potential for type I errors (false positives).
The models incorporated sire, gender and birth rank as fixed effects. Sire was identified to have an influence on all the wool traits, while gender and birth rank (whether the sheep was born as a single, twin or triplet) were identified as factors impacting only some wool traits. While the year of wool sample collection was also recorded, sire and year were absolutely confounded, with sire being chosen as the explanatory factor for the models as it explained more variation in the traits. The presence/absence model was: Y_jklm = µ + V_j + G_k + S_l + B_m + e_jklm; where Y_jklm is the observed trait in the jklm-th animal, µ is the group raw mean for the trait, V_j is the effect of the j-th variant (presence or absence), G_k is the effect of gender, S_l is the effect of the l-th sire, B_m is the effect of birth rank, and e_jklm is the random residual effect. The genotype model was: Y_jklm = µ + GT_j + G_k + S_l + B_m + e_jklm; where Y_jklm is the observed trait in the jklm-th animal, µ is the group raw mean for the trait, GT_j is the fixed effect of the j-th genotype, G_k is the effect of gender, S_l is the effect of the l-th sire, B_m is the effect of birth rank, and e_jklm is the random residual effect.
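Although the analyses reported here were run in Minitab, an equivalent presence/absence model with a Benjamini-Hochberg correction could be sketched in Python as below; the file name, column names and the list of traits are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("wool_traits.csv")   # hypothetical data: one row per sheep

traits = ["GFW", "CFW", "wool_yield", "MSL", "MFD", "FDSD", "CVFD", "MFC"]
pvals = []
for trait in traits:
    # presence/absence model: trait ~ variant-B presence + gender + sire + birth rank
    fit = smf.ols(f"{trait} ~ C(has_B) + C(gender) + C(sire) + C(birth_rank)", data=df).fit()
    pvals.append(fit.pvalues["C(has_B)[T.1]"])   # p-value for carrying variant B (coded 0/1)

# Benjamini-Hochberg control of the false discovery rate across the traits tested
reject, p_adj, _, _ = multipletests(pvals, alpha=0.25, method="fdr_bh")
for trait, p, p_c, rej in zip(traits, pvals, p_adj, reject):
    print(f"{trait}: p={p:.4f}, BH-adjusted p={p_c:.4f}, significant={rej}")
```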
Results
Three different SSCP banding patterns were identified for the 5′ UTR amplicon of ovine KRT81 (Figure 1). Sequencing of selected amplicons revealed three sequence variants that were named A, B, and C. Upon comparing these sequence variants, four SNPs were identified as c.-309G/A, c.-295G/A, c.-226T/C and c.-178T/A (Figure 2). The sequence of variant B was identical to the reference gene sequence X62509. Four genotypes out of the six that might be expected were detected in the 389 sheep investigated. These genotypes and their frequencies were: AA (31.9%), AB (40.6%), AC (10.0%) and BB (17.5%). Consequently, the frequencies of variants A, B, and C in this population were 57.2%, 37.8%, and 5.0%, respectively.
The variant presence/absence models revealed that the variation in KRT81 was associated with two wool traits, GFW and CFW. Specifically, the presence of B was associated with a decrease in CFW, while the presence of variant C was linked to an increase in both GFW and CFW (Table 1). As might possibly be expected given the relationship between CFW and GFW, there was a trend suggesting an association between KRT81 variation and yield. No associations were observed with other wool traits. The corrected genotype models also revealed a difference in GFW and CFW between genotypes. These two associations persisted upon post hoc Benjamini-Hochberg analysis at a false discovery rate of 25%. Genotype AC was found to be associated with higher GFW and CFW, whereas genotype AB exhibited lower GFW and CFW (Table 2). Once again, there was a trend suggesting a relationship between genotype and yield, while no associations were observed with the other wool traits.
Discussion
The identification of four SNPs defining three sequence variants in the 5′ UTR of ovine KRT81 is noteworthy given the absence of sequence variation in the coding and intron regions described in a previous study [8]. This 5′ UTR region putatively contains several sequence motifs identified by Powell et al. [3], such as HK1, AP-1, AP-2, TATA, CAAT and CAP sites (Figure 2). While Powell et al. [3] used a primer extension assay and suggested the presence of two putative CAP sites at c.-65 and c.-63, an online tool (https://www.fruitfly.org/cgi-bin/seq_tools/promoter.pl; accessed on 22 February 2024) predicts a putative CAP site at c.-67, which differs from those proposed by Powell et al. [3]. Interestingly, Powell et al. [3] identified a CAAT sequence motif (5′-CAAGCCCATAAA-3′) that differs considerably from the consensus sequence 5′-GG(T/C)CAATCT-3′ [15]. However, we found no sequence resembling the CAAT consensus sequence in the region analysed.
These sequence motifs may play a role in regulating wool keratin gene expression, and although most of the SNPs revealed in this study are not located within these identifiable sequence motifs, variation in these regions could still influence gene expression by altering promoter structure. Regardless, the variation revealed here may have a functional consequence and affect the structural and functional characteristics of wool fibres.
In this respect, two type I wool keratin genes, KRT31 and KRT34, have also been reported to be polymorphic in their 5′ UTR regions, and this variation has also been associated with variation in key wool traits [6,7], although another study failed to find variation in the type II wool keratin gene KRT83 promoter [9]. This variation, along with other reports of variation in the wool keratins [7,8,17,18] and the KAP genes [19][20][21][22], suggests that genetic variation exists in nearly all wool protein genes. The polymorphic nature of these genes, combined with the large number of genes that have been identified, suggests considerable complexity underpinning the variation in wool fibres and wool traits.
The finding that variation in ovine KRT81 affects the two related fleece weight traits without affecting staple length and fibre diameter traits has not been observed for other KRTs and KRTAPs. For example, variation in KRT31 [6], KRTAP1-2 [23] and KRTAP20-1 [24] affects fleece weight traits, but also other traits such as MSL and/or the fibre diameter traits MFD, FDSD and CVFD. The absence of an effect on MSL or fibre diameter traits leaves three factors that may affect fleece weight: variation in the density of the individual fibres, variation in the number of wool follicles per unit area of skin, or variation in the amount of skin that contains follicles, with the latter suggesting that KRT81 in some way affects the skin, not just the wool follicles therein. Given that Yu et al. [10] showed that KRT81 is expressed in the cortex of the wool follicle, the latter two seem less likely, though not impossible; thus, fibre density appears the most likely trait affected by variation in the promoter of KRT81. The level of expression of the gene may play a role in determining the quantity of heterodimers produced for assembly into intermediate filaments, in turn influencing the ratio of intermediate filaments to matrix and potentially affecting fibre density. This is speculative, and further research is certainly needed to better understand whether the 5′ UTR variation revealed here affects transcription and gene expression, and subsequently wool traits.
Conclusions
This study identified three sequence variants of ovine KRT81 and reported four SNPs in the 5′ untranslated region. The variation in this region was found to be associated with wool fleece weights but not with staple length or mean-fibre-diameter-related traits, suggesting that the gene influences wool growth, likely by affecting the density of individual fibres.

Institutional Review Board Statement: Ethical review and approval were waived for this study because the collection of sheep blood drops by nicking their ears was covered by Section 7.5 Animal Identification, in: Code of Welfare: Sheep and Beef Cattle (2016); a code of welfare issued under the Animal Welfare Act 1999 (New Zealand Government).
Informed Consent Statement: Not applicable.
Figure 1. PCR-SSCP gel electrophoresis patterns for a fragment of the 5′ UTR of ovine KRT81. Three different patterns (A, B and C) are observed in either homozygous or heterozygous forms.
Figure 2. Alignment of the ovine KRT81 sequences. The three variant sequences (A, B and C) identified in this study are aligned with the GenBank sequence X62509. The putative HK1, AP-1, AP-2, CAAT, TATA and two CAP sites identified by Powell et al. [3] are marked, and the start codon is highlighted in bold. Nucleotides identical to the top sequence are denoted by dashes. Grey shaded regions indicate the PCR primer binding sites. The positions of the SNPs identified are indicated above the sequences.
Table 1. Association of KRT81 variants with various wool traits.
Table 2. The effect of KRT81 genotypes on various wool traits 1. 1 GFW-greasy fleece weight; CFW-clean fleece weight; MFD-mean fibre diameter; FDSD-fibre diameter standard deviation; CVFD-coefficient of variation of fibre diameter; MSL-mean staple length; MSS-mean staple strength; MFC-mean fibre curvature; PF-prickle factor. 2 Estimated marginal means, standard errors and p values derived from GLMs. Means within rows that do not share a superscript letter (e.g., a) were different at p < 0.05. 3 p < 0.05 are highlighted in bold. | 2024-05-26T15:07:11.650Z | 2024-05-24T00:00:00.000 | {
"year": 2024,
"sha1": "fd49b05c3843b4724693ec5f4874d081b182f8a3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/15/6/681/pdf?version=1716561309",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc845cb6a0bc4519130ee286153848b286e18994",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260414823 | pes2o/s2orc | v3-fos-license | Nanomembranes-Affiliated Water Remediation: Chronology, Properties, Classification, Challenges and Future Prospects
Water contamination has become a global crisis, affecting millions of people worldwide and causing diseases and illnesses, including cholera, typhoid, and hepatitis A. Conventional water remediation methods have several challenges, including their inability to remove emerging contaminants and their high cost and environmental impact. Nanomembranes offer a promising solution to these challenges. Nanomembranes are thin, selectively permeable membranes that can remove contaminants from water based on size, charge, and other properties. They offer several advantages over conventional methods, including their ability to remove emerging pollutants, low operating cost, and reduced environmental impact. However, there are several limitations associated with the application of nanomembranes in water remediation, including fouling and scaling, cost-effectiveness, and potential environmental impact. Researchers are working to reduce the cost of nanomembranes through the development of more cost-effective manufacturing methods and the use of alternative materials such as graphene. Additionally, there are concerns about the release of nanomaterials into the environment during the manufacturing and disposal of the membranes, and further research is needed to understand their potential impact. Despite these challenges, nanomembranes offer a promising solution for the global water crisis and could have a significant impact on public health and the environment. The current article delivers an overview on the exploitation of various engineered nanoscale substances, encompassing the carbonaceous nanomaterials, metallic, metal oxide and metal–organic frameworks, polymeric nano-adsorbents and nanomembranes, for water remediation. The article emphasizes the mechanisms involved in adsorption and nanomembrane filtration. Additionally, the authors aim to deliver an all-inclusive review on the chronology, technical execution, challenges, restrictions, reusability, and future prospects of these nanomaterials.
Introduction
Water is a precious resource essential for life, and access to clean water is a basic human right.Unfortunately, water contamination has become a global crisis, affecting the health and well-being of millions of people worldwide.Polluted water can cause a range of diseases and illnesses, including cholera, typhoid, hepatitis A, and dysentery, among others.The World Health Organization (WHO) estimates that contaminated water and poor sanitation are responsible for the deaths of approximately 3.4 million people annually, mostly children under the age of five.Conventional methods of water remediation, such as chemical treatment, flocculation, sedimentation, filtration and disinfection that generally involve chlorination, ozonation, and ultraviolet radiation have been exploited for many years to remove contaminants from water.However, these methods have several challenges, including limitations in removing specific contaminants, high cost, and environmental impact [1,2].
One significant challenge with conventional water remediation methods is their inability to remove emerging contaminants, such as pharmaceuticals, pesticides, and endocrine disruptors, which can have adverse health effects on humans and wildlife.These contaminants are often present in low concentrations, making their removal difficult with conventional methods.As a result, they may persist in water and accumulate over time, posing long-term risks to public health and the environment.Another challenge with conventional methods is their high cost.Chemical treatment methods, such as coagulation and flocculation, require large amounts of chemicals, energy, and equipment, which can be expensive and resource-intensive.Similarly, sedimentation and filtration methods require high maintenance and operational costs, which can be prohibitive in many settings.Conventional water remediation methods can have adverse environmental impacts.Chemical treatment methods, for instance, can produce toxic by-products, which can harm aquatic ecosystems and wildlife.Sedimentation and filtration methods can also generate large amounts of waste, which can be difficult to manage and dispose of safely [3].
Nanomembranes offer a promising solution to the challenges associated with conventional water remediation methods.Nanomembranes are thin, selectively permeable membranes, typically less than 100 nanometers in thickness, that can remove contaminants from water based on size, charge, and other properties.Nanomembranes offer several advantages over conventional methods, including their ability to eradicate emerging contaminants, their low operational costs, and their reduced environmental impact.Nanomembranes can remove incipient contaminants, such as pharmaceuticals and pesticides, with high efficiency.The small pore size of nanomembranes allows them to filter out even the smallest particles, including viruses and bacteria.This makes them highly effective in removing emerging contaminants, which are often present in low concentrations and difficult to remove with conventional methods.
Nanomembranes also have low operational costs.Unlike chemical treatment methods, which require large amounts of chemicals, energy, and equipment, nanomembranes require minimal energy and equipment.They also have low maintenance costs, making them an attractive option for water remediation in resource-limited settings.Nanomembranes have a reduced environmental impact compared to conventional methods.They produce less waste and require fewer chemicals, reducing their impact on aquatic ecosystems and wildlife.They also have a smaller carbon footprint, as they require less energy to operate [4].
In spite of having countless latent advantages, the field of nanomembranes still has several challenges that need to be addressed.There are also several challenges associated with the use of nanomembranes in water remediation [5].
One of the main challenges is fouling and scaling, which can occur when contaminants accumulate on the surface of the membrane, reducing its efficiency over time.The fouling of membranes takes place due to the presence of suspended solids (generally >0.01 microns), whereas scaling occurs because of the dissolved solids especially salts, when exceeded from their solubility.Fouling and scaling can be caused by several factors, including the quality of the feed water, the membrane material, and the operating conditions.Addressing fouling and scaling is critical for maintaining the long-term performance of nanomembranes.Fouling occurs when contaminants accumulate on the surface of the membrane, reducing its efficiency over time.Scaling occurs when mineral deposits accumulate on the surface of the membrane, leading to reduced water flux and enhanced pressure drop.Fouling and scaling can be caused by several factors, including the quality of the feed water, the membrane material, and the operating conditions.Fouling and scaling can be mitigated through several strategies, including cleaning and maintenance of the membrane, optimization of operating conditions, and the use of antifouling coatings.However, these strategies can be costly and time-consuming, and their effectiveness may depend on the specific application and membrane material [6][7][8][9][10].
Another challenge associated with the nanomembranes is cost-effectiveness.While nanomembranes have low operational costs, they can be expensive to produce, making them inaccessible in many settings.The cost of nanomembranes is mainly due to the high cost of raw materials, such as polymers and ceramics, and the complex manufacturing process required to produce membranes with nanoscale features [11].
Membranes 2023, 13, 713 3 of 32 Scientists and researchers are making continuous efforts to reduce the cost of nanomembranes through the development of more cost-effective manufacturing methods and the use of alternative materials.Exploration of the use of graphene and other 2D materials as alternative membrane materials is one of the recent examples, which may offer advantages in terms of cost, scalability, and performance [12].
An additional factor with nanomembranes is their potential impact on the environment.While nanomembranes have a reduced environmental impact compared to conventional methods, there are concerns about the release of nanomaterials into the environment during the manufacturing and disposal of the membranes.Nanomaterials may have unknown effects on aquatic ecosystems and wildlife, and there is a need for further research to understand their potential impact [13].
There are various challenges associated with the scale-up and implementation of nanomembranes in real-world applications.While nanomembranes have shown promise in laboratory settings, their performance and durability in real-world conditions are not well understood.Additionally, there may be regulatory and policy barriers that need to be addressed before nanomembranes can be widely adopted.
However, while nanomembranes offer several advantages over conventional methods in water remediation, there are also several challenges that need to be addressed.These challenges include fouling and scaling, cost-effectiveness, environmental impact, and implementation in real-world applications.Addressing these challenges will be critical for the successful implementation of nanomembranes in water remediation and for achieving the goal of providing access to clean water for all.
This review article delivers an overview of the exploitation of various engineered nanoscale materials, including carbon-based nanomaterials, metallic nanomaterials, metal oxide-based nanomaterials and polymeric nano-adsorbents, MOFs (metal-organic frameworks) and nanomembranes for water remediation.The article also focuses on the mechanisms involved in adsorption and nanomembrane filtration.Additionally, the authors aim to provide a comprehensive review on the chronology, technical execution, challenges, restrictions, reusability and future prospects of these nanomembranes.
Chronology of Nanomembranes
The study of nanomembranes can be traced back to the work of Langmuir and Blodgett, who created monolayers of amphiphilic molecules on the surface of water, which were then transferred onto solid surfaces or grids [14,15]. In spite of noteworthy research, Langmuir-Blodgett (LB) membranes never became mainstream commercial products, possibly because of the challenging synthetic procedure involving single-layer development on liquid surfaces and transfer to solids. During the late 1970s, Sagiv created alkylsilane monolayers on silicon surfaces, giving birth to the concept of "self-assembled monolayers" (SAMs), an arrangement of molecules on solid surfaces. This was a significant breakthrough, as it allowed for the formation of molecular films through controlled means. Sagiv also delved into the electrical conductivity of these SAMs, which is now seen as one of the first experiments in molecular electronics [16,17].
The formation of bifunctional organic disulfide monolayers on gold surfaces was first reported by Nuzzo and Allara in the 1980s [18].They observed that by immersing a gold surface in a disulfide solution, a tightly packed molecular monolayer could be formed spontaneously within a few hours under normal conditions.The process for creating self-assembled monolayers (SAMs) was detailed in their study.The mechanism of thiol SAM formation on gold has been extensively researched and described in various publications and texts, including its structure, dynamics, and kinetics.When a gold surface is immersed in a thiol solution, the S-H bond in the thiol dissociates, releasing hydrogen and forming covalent Au-S bonds.The molecular backbones are then ordered laterally through intermolecular interactions, leading to the creation of well-ordered monolayers.This process is well documented and has been the subject of numerous reviews and books.SAMs offer several benefits over Langmuir-Blodgett (LB) films, including ease of preparation and the ability to directly coat solid surfaces with regimented SAMs without the need for transmission procedures.Whitesides and his colleagues acknowledged the easiness of SAM fabrication as a significant benefit and extensively explored their potential applications.One prominent example is soft lithography, a surface-patterning technique that uses various printing methods to deposit SAM patterns on surfaces.This approach bridged the gap between physical chemistry and nanolithography, providing a simple and reachable approach to construct nanostructures on a chemical laboratory bench [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37].
The layer-by-layer (LbL) technique for fabricating nanomembranes was introduced by Decher in the 1990s.To create a precise polymeric film with a thickness range of 15 nm to several hundred nm, a process called layer-by-layer assembly is utilized.This technique involves immersing a surface that carries an electrical charge into oppositely charged polyelectrolyte solutions, which leads to the formation of a defined polymeric film.This is achieved by the repeated deposition of alternating layers of positively and negatively charged polyelectrolytes, resulting in a well-controlled and highly ordered film.LbL membranes have found use in various applications, including surface protection and drug delivery.Another approach to generate carbon nanosheets is through the use of amphiphilic monolayers on a water surface.This method involves compressing floating films to create various molecular arrangements, leading to the formation of nanosheets.Stable nanosheets can be created by connecting the molecules in the films using covalent bonds, organometallic linkers, or crosslinking with UV exposure.To customize the properties of the nanosheets, factors such as the molecules exploited, surface tension, surface area, and linker or radiation exposure circumstances can be varied.By combining the Langmuir and Blodgett's recognized procedures with new methods of linking molecules at the liquidgas interface, promising possibilities have emerged for integrating these nanosheets into devices [38,39].
Geim and Novoselov made a significant discovery in the 2000s by exploring graphene, which is a nanomembrane only one carbon atom thick (0.35 nm).This material exhibited exceptional mechanical, electronic, and optical properties and has become the new benchmark for two-dimensional systems.Researchers have intensively studied graphene with the aim of introducing that into innovative technologies, including electronics, sensing, and medicine.However, the homogeneous and chemically inert surface of graphene posed a challenge to its efficient functionalization with other functional groups or biomolecules, which hindered its fast market entry [40][41][42][43].
Nanomembrane Production Methods
Nanomembrane production procedures encompass two distinct approaches.The first one is the additive/subtractive procedure of manufacturing the nanomembranes and another is the top-down/bottom-up tactic.The additive/subtractive fabrication is the procedure related with the membrane material.As the name indicates, in additive, new material has been incorporated to the membrane, whereas in subtractive, the material is detached from the membrane.The top-down/bottom-up approach explains in what way the addition and subtraction are achieved.Top-down or bottom-up approaches can be of either type, the additive or the subtractive.Details classification is provided in Figure 1 [44].
Top-Down Approach
Most of the ultrathin film deposition procedures involve the exploitation of microelectronics and microsystem (MEMS) technologies to produce biomimetic nanomembranes, in combination with the sacrificial layer etching procedure. The top-down approaches are further divided into physical and chemical types. Physical methods involve the deposition of nanomembranes in an optimized way and cover evaporation, radio-frequency sputtering, epitaxial growth, physical vapor deposition (PVD), spin coating, electrospray deposition, dip-coating, drop-coating, molecular beam epitaxy, atomic layer deposition, ion beam deposition, electron beam deposition, cathodic arc deposition, molecular layer deposition, pulsed laser deposition, etc.
Bottom-Up Approach:
The following are the common types of bottom-up approaches [45].

Self-Assembly Approach
Self-assembly is a natural phenomenon through which structures organize themselves into larger units, with the properties of the larger units governed by the characteristics of their smallest components.
Langmuir-Blodgett Method
This method exploits the self-assembly of surfactant molecules, which are specified to have a lipophilic tail and a lipophobic head moiety.Fatty acids, glycolipids and phospholipids are the common examples of this type.
Layer-by-Layer Self-Assembly
The layer-by-layer deposition adsorption procedure includes the deposition of alternate macromolecular layers with opposite charge over and over to provide 5 to 500 nm thick multilayers.
Block Copolymer Self-Assembly
This procedure involves the concurrent polymerization of two or more initial monomers to form a block copolymer with two or more dissimilar varieties of blocks, each consisting of a different homopolymer, and they are chemically dissimilar and immiscible.
Sol-Gel Process
In this deposition process, a sol accumulates on a substrate and gradually sets into a gel containing a continuous, integrated solid network of nanoparticles and/or polymer within the liquid.
Dip-Coating
In dip-coating, membranes form on an object after it is dipped in a solution containing the substance to be deposited; alternatively, a nanoparticle suspension may be used, as shown in Figure 2. In drop-coating, droplets of the suspension or solution containing the substance to be deposited are dispensed onto the object's surface in precisely measured amounts. Figure 3 demonstrates the drop-coating procedure.
Types of Nanomembranes and Their Water Remediation Applications
Nanomembranes are broadly classified into inorganic, organic and hybrid types and synthetic biomembrane types (Figure 4) based on the materials they are made from.
Organic Nanomembranes
Organic nanomembranes are made up entirely of one or more organic constituents that epitomize a significant group of existing self-supporting nanomembranes [45].
There is an enormous number of organic complexes and their blends that could potentially be exploited, although it is important to note that not all organic compounds are suitable for creating nanomembranes.Membranous structures cannot be made from certain organic compounds due to their gaseous or liquid state.Organic materials are composed of carbon compounds, except for pure carbon in either of its allotropic forms, as well as basic carbon compounds like carbides, oxides of carbon, carbonates, and cyanides.Many macromolecular/polymeric structures are well-suited for the production of freestanding nanomembranes, including polysaccharides, synthetic polymers, synthetic lipids, proteins, RNA, and DNA-based membranes [46][47][48][49][50][51].
Despite the potential advantages of macromolecular nanomembranes, these materials tend to be highly sensitive to temperature changes, and their beneficial characteristics are typically only maintained within a limited temperature range. They can also be attacked and dissolved by organic solvents and may be damaged by elevated humidity. Furthermore, their mechanical properties tend to deteriorate as membrane thickness is reduced, and they typically have a low Young's modulus. Most macromolecular nanomembranes begin to creep and become permanently plastically deformed under sustained pressure. Pyroxylin (nitrocellulose, collodion) was likely the first organic nanomembrane produced, created by Bechhold in 1907 [52]. Organic nanomembranes can be further classified into CNMs, pure (single polymer) and blended (copolymer type) membranes.
Carbon Nanomembranes
The use of carbonaceous nanomaterials, encompassing graphene, carbon nanotubes (CNTs), carbon nanofibers (CNFs) and fullerenes, has gained widespread popularity in the field of environmental remediation.These nanomaterials possess distinct properties that make them ideal for numerous applications associated with air and water decontamination.Specifically, graphene and its oxide have been utilized as adsorbents for heavy metal ion removal.Furthermore, the occurrence of hydroxyl and carboxyl functional groups present on the graphene oxide surface enhances its sorption capacity.In summary, carbonbased nanomaterials have proved to be effective and versatile tools in the fight against environmental contamination [53,54].
Carbon nanomembranes (CNMs) are synthetic two-dimensional panes of carbon with unique physical and chemical characteristics that rely on their molecular composition, structure, and environment.With their molecular thinness, they act as "bulkless" interfaces that segregate various solid, liquid, or gaseous components while regulating the exchange of atoms and molecules among them.The thin and film-like nature of CNMs is evident in Figure 5, which depicts a He ion microscopy image of a CNM straddling a hexagonal gold mesh.To summarize, CNMs are ultrathin carbon-based sheets that can function as selective barriers for molecular exchange.The first CNMs were created around the time that graphene was discovered, but they did not receive as much attention initially.But as of now, it is very much clear that CNMs could offer a path to overcome some of the challenges associated with adapting graphene to various applications.CNMs have several advantages, including their thinness, chemical surface functionality, and ease of fabrication, which is similar to self-assembled monolayers (SAMs) and layer-by-layer (LbL) techniques.To create CNMs, molecules are arranged on a solid surface and crosslinked to produce a two-dimensional film.The resulting nanomembrane is then detached and released.The thickness, homogeneity, nanopore presence, and surface chemistry of the CNM depend on the original molecular monolayer.Due to their versatile fabrication and applicability, CNMs provide a robust foundation for applied research and new product development [55].
Turchanin and Gölzhäuser (2016) [55] discussed various examples of the unique behavior of CNMs and their potential applications in nanotechnology, including filtration and sensorics, in their review. In that work, they described a more straightforward and universal approach to creating membranes that involves exposing a self-assembled monolayer on a solid surface to radiation, such as low-energy electrons or photons. This radiation exposure causes intramolecular bonds to break, resulting in the crosslinking of the remaining molecular cores into a 2D framework. The result is a carbon nanomembrane (CNM) with a precise area, thickness, and surface functionality, as shown in Figure 6. This method of fabrication allows controlled functionalization by using self-assembled monolayers with specific functional moieties. A monomolecular SAM produces a CNM with uniform surface chemical groups, while a mixed molecular SAM generates a CNM with mixed surface characteristics. The mass density and defect density of the CNM are determined by the packing density and lateral order of the parenting SAM. The properties of CNMs were found to be highly diverse and can be tailored to specific applications through the conditions of their manufacture, such as the molecular building blocks, SAM development, surface ordering, crosslinking, and release. These procedures can be modified to adjust the characteristics of the CNM. A CNM is essentially a molecular membrane without long-range order but with well-defined mechanical and electrical properties and surface functionalities. CNMs were found to be ideal for customization to meet specific application requirements. For instance, using molecular precursors with specific end groups leads to SAMs with uniform surface end groups, which, after crosslinking, result in a CNM with a specific surface functionality, which might not be the same as that of the SAM. In addition, amino-functionalized CNMs made from nitro-functionalized aromatic SAMs have been exploited to attach different molecules such as polymers, dyes, and biomolecules to membrane surfaces, leading to broader, fluorescent, or bio-functional CNMs. It can thus be concluded that carbon nanomembranes (CNMs) are versatile objects that can be easily fabricated and adapted to various
environments.They possess exceptional durability against heat, electron irradiation, chemical substances, and pressure variations.By altering their precursors and treatment, CNMs can be made conductive or insulating.These materials are exploited in a broad range of applications, including microscopy support, pressure sensor diaphragms, gas and liquid filtration, protective coatings, and in conjunction with other 2D substances.CNMs possess such immense potential due to their 2D structure and effortless surface customization, making them ideal for crafting ultrathin membranes exploited in material separation and filtration.Conventional filtration membranes have a thickness of over 100 nm and operate through either a diffusion-solution process or via narrow pores.These membranes selectively separate a mixture of materials based on varying atomic or molecular species' diffusion coefficients, resulting in the ideal infusion of specific species.This selectivity allows for the separation of gas or liquid mixtures into their individual constituents.
The diffusion process involved in materials separation through membranes can be slow, requiring large pressure differences between the two sides of the membrane to generate sufficient flux (ϕ). The flux is given by ϕ = DA∆p/l, where D represents the diffusivity, A the membrane area, ∆p the pressure difference between the two sides, and l the membrane thickness. Thin membranes therefore need lower pressure differences than thicker ones to maintain the same level of flux. Two-dimensional membranes such as CNMs and graphene are considered the thinnest membranes and function like "sieves" in materials separation. The molecules permeate through these membranes in a "ballistic" process, where they either translocate through a pore or bounce off the solid material between the pores. The unique potential of CNMs lies in their two-dimensional geometry and ease of surface modification. The primary application of CNMs is in the production of ultrathin membranes for materials isolation and filtration. To achieve this, the CNMs are tailored to suit the specific separation or filtration requirements. Traditional membranes exploited for gas and liquid filtration are relatively thick (>100 nm) and require high pressure differences to generate a sufficient flux. In contrast, thin CNMs require much lower pressure differences to maintain the same flux due to their "ballistic" permeation process. The emergence of mass-produced two-dimensional membranes may pave the way for CNMs to revolutionize the field of materials separation technology through disruptive innovation.
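A minimal numerical illustration of this scaling (with assumed values, not data from the review) is given below: for a fixed target flux, the required pressure difference falls in proportion to membrane thickness, which is why a roughly 1 nm CNM needs about a hundredth of the driving pressure of a 100 nm membrane.

```python
# Sketch of the flux relation phi = D * A * dp / l.
# All numbers below are assumed for illustration only.

def required_pressure(phi: float, diffusivity: float, area: float, thickness: float) -> float:
    """Pressure difference dp = phi * l / (D * A) needed to sustain a given flux."""
    return phi * thickness / (diffusivity * area)

phi = 1.0e-6      # target flux (arbitrary but consistent units)
D = 1.0e-9        # diffusivity
A = 1.0e-4        # membrane area

for thickness in (100e-9, 1e-9):  # conventional ~100 nm membrane vs ~1 nm CNM
    dp = required_pressure(phi, D, A, thickness)
    print(f"l = {thickness:.0e} m -> dp = {dp:.2e} (units of phi*l/(D*A))")
```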
In their study, Ai and co-workers (2018) [50] investigated latent carbon nanomembranes (CNMs) for gas separation.They exploited a poly (dimethyl siloxane) (PDMS) composite membrane as a support to enhance the mechanical stability and reduce the roughness-induced strain.The scientists investigated the transportation of gases across PDMS membranes that were either left bare or covered with CNMs across a range of different gases.The results of this study revealed that while using a PDMS support with a single-layer CNM, the permeance values reduced to a range of 70% to 20% in comparison to PDMS without the CNM layer.In contrast, values between 4% and 2% are observed for three-layer CNMs, except for He and H 2 .The transport mechanisms for single-layer CNMs and multilayer CNMs are different.For single-layer CNMs, gas molecules flow through intrinsic pores or via direct transport through the 1 nm thick film, resulting in higher selectivity for H 2 /N 2 , He/N 2 , and CO 2 /N 2 .In contrast, it was suggested that the transport mechanism for CNM multilayers involves lateral diffusion between individual CNMs, resulting in higher permeation for He and H 2 compared to larger gas molecules.They found that depositing a single layer of BPTCNM on PDMS enhanced gas selectivity for CO 2 /N 2 , indicating that the high selectivity was due to the great permeance of vertical channels for small molecules together with CO 2 .However, multilayers that had extra "tunneling" connecting the various layers exhibited reduced permeance and selectivity (see Figure 7).
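Ideal selectivity in such single-gas measurements is simply the ratio of the two permeances; the sketch below shows that arithmetic with hypothetical permeance values, not the values measured by Ai et al.

```python
# Sketch: ideal selectivity as a ratio of single-gas permeances.
# Permeance values are hypothetical placeholders, not data from Ai et al. (2018).
permeance = {"H2": 1200.0, "He": 1000.0, "CO2": 300.0, "N2": 20.0}  # e.g. in GPU

def ideal_selectivity(gas_a: str, gas_b: str) -> float:
    return permeance[gas_a] / permeance[gas_b]

for pair in (("H2", "N2"), ("He", "N2"), ("CO2", "N2")):
    print(f"{pair[0]}/{pair[1]} selectivity = {ideal_selectivity(*pair):.0f}")
```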
Pure (single polymer) and blended (copolymer type) nanomembranes: Nanomembranes of this class consist entirely of organic materials and are typically made up of large macromolecules or polymers such as synthetic lipids, proteins, polysaccharides, RNA, DNA-based membranes, and synthetic polymers. They may be made up of a single polymer, in which case they are known as pure (single polymer)-based nanomembranes, or of two or more different polymer types, in which case they are called blended (copolymer-type) nanomembranes. The creation of these nanomembranes involves various types of polymers, including epoxy resins, polysulfone, polycarbonate, polyethersulfone, nylon, polyacrylate, polystyrene, cellulose, nitrocellulose, polyamide, polyimide, polypropylene, polydopamine, polyurethane, polyvinylchloride, poly(methyl methacrylate), polyester, poly(vinylidene fluoride), polytetrafluoroethylene (PTFE, Teflon), poly(lactic acid), polyacrylonitrile and polydimethylsiloxane (PDMS) [56].
Watanabe and Kunitake (2007) [57] successfully prepared a freestanding epoxy nanomembrane that was 20 nm thick and could be transferred intact onto a wire frame measuring 1 cm in diameter.Although the nanomembrane was too thin to be directly visible from a perpendicular position, light reflection was observed at an angle of 15.The high-quality epoxy nanomembranes did not show any visible defects or cracks.This study explored the feasibility of employing epoxy resins as a material for nanomembranes, which were subsequently transported onto AAO (anodized aluminum oxide) films for scanning electron microscopy (SEM) analysis.The thinnest freestanding nanomembranes demonstrated a consistent thickness of (23 ± 2) nm, as determined by averaging thickness values obtained from varied locations on the same sampling and dissimilar SEM specimens.These films exhibited flexibility on the uneven contour of the AAO support, which was only observed in films thinner than 40 nm.On the other hand, thicker films (80 and 200 nm) appeared stiff on the SEM support, but all membranes demonstrated flexibility on the macroscopic scale without any observable cracks or defects on the membrane surface.As such, the study concluded that the epoxy membranes were uniform and devoid of defects over their entire area.To evaluate whether the outstanding properties of epoxy resins remained consistent even at nanometer thicknesses, the mechanical strength of the thinnest membrane was examined through a bulging test.The tensile strength (r) and the ultimate elongation (e) were found to be 30 MPa and 0.2%, respectively.The value for r lies within the range of 1-100 MPa that has been established for conventional thick epoxy resins of various compositions.However, the eventual extension ratio was one order of magnitude smaller than the values for bulk epoxy resins.
Polysulfone (PSU) is a popular membrane material due to its excellent mechanical, thermal, and chemical stability, making it suitable for manufacturing porous membranes for microfiltration (MF) to nanofiltration (NF).However, its low lipophobic properties make it susceptible to fouling during water purification.To overcome this issue, Ahmadipouya (2020) developed nanofiltration membranes by incorporating a metal-organic framework (MOF) into the polysulfone (PSf) matrix to remove organic dyes like methylene blue (MB), malachite green (MG), methyl red (MR), and methyl orange (MO) from water.
UiO-66 particles were exploited as lipophobic fillers to enhance the lipophobicity of the PSf membrane. These particles possess exceptional water stability, high thermal stability, chemical stability against organic solvents, and affinity toward organic dyes. The aperture size of UiO-66 MOF is about 6 Å, which is between the kinetic diameter of water molecules (≈2.6 Å) and that of most organic contaminants, making it a promising candidate for fabricating mixed matrix membranes (MMMs) for water purification. UiO-66 particles were synthesized using a solvothermal method and activated using two methods: Soxhlet extraction and centrifugation. The adsorption performance of UiO-66 MOF for the selected organic dyes was investigated, and the optimized UiO-66 MOF was incorporated into the PSf matrix to prepare the MMMs using the phase inversion technique (Figure 8). The water remediation performance of the prepared MMMs was evaluated using pure water flux and organic dye rejection, and their antifouling ability was assessed using a Bovine Serum Albumin (BSA) solution as a sample. The results showed that the prepared MMMs exhibited improved lipophobicity, leading to reduced fouling and better permeation characteristics. Additionally, the MMMs' organic dye rejection efficiency was significantly higher than that of the pure PSf membrane. The prepared MMMs also exhibited excellent antifouling characteristics when tested against the BSA solution. The study concluded that incorporating UiO-66 particles into the PSf matrix to fabricate MMMs improved the lipophobicity, water remediation performance, and antifouling ability of the membranes. The study's findings suggest that MMMs incorporating UiO-66 MOF could be a promising approach for developing high-performance membranes for water purification [58]. In another study by Abdelhamid and his co-workers [59], PSU has been investigated for ultrafiltration applications in the presence of clay nanoparticles, which enhanced the antifouling and flux recovery of the prepared membranes. Moreover, PSU has been modified with eugenol and zinc oxide to improve its ultrafiltration characteristics, antifouling, and antibacterial abilities. Quaternized graphene has been employed to reinforce PSU-based membranes for alkaline fuel cells. However, some polymer-based membranes are lipophilic, which makes them vulnerable to contamination by dyes and contaminants during wastewater
remediation.The effectiveness and lifespan of membranes can be compromised by the obstruction of their pores, which can be caused by various factors including fouling.To address this issue, alternative materials are being explored to improve the lipophilicity and antifouling characteristics of membranes.Inorganic nanoparticles are a promising solution, with studies indicating that their incorporation into the membrane matrix during the phase inversion process can boost the membrane's lipophobicity and antifouling characteristics.One such example is the use of titanium dioxide nanoparticles, which have been shown to enable the production of membranes with photocatalytic characteristics.
Graphene is a two-dimensional material composed of a monolayer of atoms arranged in a honeycomb sp 2 carbon lattice.Graphene boasts remarkable mechanical characteristics, chemical stability, and a significant surface area.Graphene-based membranes demonstrate characteristics akin to ceramic membranes and can be fashioned into films through graphene/graphene oxide fluid phase dispersions, mimicking polymers.Current research endeavors aim to elevate the transport characteristics of graphene-based membranes, particularly high permeability and selectivity for gases and liquids.Utilizing GO has shown to enhance forward osmosis and PSU anion exchange membranes' performance.PSU nanofibrous membranes have also demonstrated additional antibacterial activity that make them more advantageous over others.
Graphene and/or GO can be functionalized to extend their efficiencies by grafting functional groups onto their surface.GO can attract functional groups and act as a potential adsorbent for removing heavy metal ions and organic contaminants.Fused heterocyclic compounds enriched with nitrogen, such as pyrano-pyrazoles, pyrazolo-pyridines, and pyrazolo-pyrido-pyrimidines, have shown expedient biological characteristics and can be exploited for functionalization.Some multicomponent reactions have been developed for synthesizing these heterocycles.
The performance of the filtration membranes is determined by their ability to reject salt or dye and allow water to pass through, posing a challenge to improve both characteristics without compromising the other.In this study, polymeric membranes based on PSU are prepared with the addition of f-GO.A heterocyclic compound loaded with nitrogen is attached to the surface of GO to functionalize it.The resulting membranes can be exploited for nanofiltration in water remediation applications and will be tested for their capacity to remove dyes with different surface charges, both anionic and cationic.
Amini and co-workers (2011) [60] investigated the efficacy of an ultraviolet-radiation-treated acrylic-grafted polysulfone nanomembrane for removing dyes from colored textile wastewater. In this study, the acrylic acid modification of the polysulfone ultrafiltration membrane was investigated, and the impact of various operating parameters on the modified membrane's performance was evaluated. The outcomes of the study indicated that the photo-grafted membrane demonstrated remarkable efficiency in terms of both its flux and rejection capabilities. Specifically, the membrane exhibited dye rejection rates between 86% and 99.9% and a hydraulic permeability of 7.6 L m−2 h−1 bar−1. However, the researchers observed that the dye rejection rates decreased as the salt concentration increased. This was attributed to a decline in the Donnan effect, which had a greater impact on low-molecular-weight and highly ionic dyes than on other types of dyes. When 80 mM Na2SO4 was added to the dye solution, there was a reduction in dye rejection of over 15%. However, increasing the driving pressure from 1 to 4 bar did not significantly enhance rejection. The results of the study indicated that dyes possessing lower charges are more susceptible to operating pressure than those with higher charges. This suggests that the acrylic-grafted nanomembrane could be a viable solution for removing dyes from colored textile effluent.
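To make the reported figures concrete, the short sketch below computes observed dye rejection and permeate flux from feed/permeate concentrations and a hydraulic permeability; the concentration values are illustrative only and are not data from the study, while the permeability and pressure match the ranges quoted above.

```python
# Sketch: observed rejection and permeate flux for a pressure-driven membrane.
# Concentration values are illustrative, not measurements from the cited study.

def rejection(c_feed: float, c_permeate: float) -> float:
    """Observed rejection R = (1 - Cp/Cf) * 100, in percent."""
    return (1.0 - c_permeate / c_feed) * 100.0

def permeate_flux(permeability: float, delta_p: float) -> float:
    """Flux J = Lp * dP, with Lp in L m^-2 h^-1 bar^-1 and dP in bar."""
    return permeability * delta_p

print(f"Dye rejection: {rejection(50.0, 1.5):.1f} %")               # e.g. 50 -> 1.5 mg/L
print(f"Permeate flux: {permeate_flux(7.6, 4.0):.1f} L m^-2 h^-1")  # Lp = 7.6, dP = 4 bar
```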
Numerous polymers, including polyethylene terephthalate (PET), polycarbonate (PC), cellulose nitrate, and allyl-diglycol-carbonate, have been thoroughly investigated for the preparation of ion-track membranes.These micro/nanofilters with nuclear tracks have found widespread application in diverse fields, such as microelectronics, biotechnology, fuel cells, air stream filtration, the pharmaceutical industry, biological cell separation, and wastewater recycling.By modifying the parameters of the irradiation process and chemical etching process, such as pore size, shape, and density, it is possible to create track-etched membranes with desired transport and retention characteristics.The crucial factors in pore creation are the etching temperature, pH, and time, which must be determined through experimentation.As ionizing particles traverse through polymers, they displace electrons and cause localized chemical alterations along their path.Permanent physical or chemical damage along the trajectory of a particle enables the chemically etched disrupted areas surrounding the particle path to erode faster than the undamaged material, resulting in visible tracks.This phenomenon is possible due to the higher track etching rate (Vt) in irradiated polymers compared to the bulk etching rate (Vb) [61][62][63][64][65].
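As a rough illustration of how these etch parameters set the pore size, the sketch below uses a simplified geometric model with assumed rates (not values from the cited works): etching proceeds from both faces of the foil at the track etch rate Vt until breakthrough, after which the bulk etch rate Vb widens the pore radially.

```python
# Simplified track-etching geometry, assuming constant etch rates.
# Vt: etch rate along the ion track; Vb: bulk etch rate of undamaged polymer.
# Rates, thickness and time are assumed for illustration only.

def pore_diameter(thickness_um: float, vt_um_per_min: float,
                  vb_um_per_min: float, time_min: float) -> float:
    """Approximate pore diameter after etching a foil from both faces."""
    breakthrough = thickness_um / (2.0 * vt_um_per_min)  # time to open the pore
    if time_min <= breakthrough:
        return 0.0
    # After breakthrough the pore radius grows at roughly Vb on each side.
    return 2.0 * vb_um_per_min * (time_min - breakthrough)

print(pore_diameter(thickness_um=10.0, vt_um_per_min=2.0,
                    vb_um_per_min=0.02, time_min=30.0))  # ~1.1 um pore
```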
Ziaie [66] disclosed the creation of nuclear track micro/nanofilters using polycarbonate (PC) films that are exposed to α particles from 241 Am, which are trailed by chemical etching with an alkaline solution.This study also revealed that by increasing the etching time to a certain point, the pore diameters will also enhance, but for most cases, further increasing the etching time will cause the pore diameters to reduce.Eventually, at longer etching times, the pore diameters tend to become relatively constant.It has been observed that after 1 h of etching time at the same temperature, etching and annealing processes reach an equilibrium state.Therefore, to achieve the maximum pore diameter in the PC film, an etching solution with a normality of 4 N can be exploited for an etching time of 30 min.Increasing the temperature up to around 80 • C does not significantly affect the pore size.The standard deviation calculation of pore size measurement yielded an uncertainty of about 30%.It is possible to control the pore diameter by adjusting the etching parameters such as temperature, time and etchant solution concentration.
Wang and co-workers (2021) [67] discussed the utilization of the polystyrene nanomembrane photonic crystals for the detection of tetracycline through low triggered potential electrochemiluminescence and signal amplification.This application showcases the potential of utilizing photonic crystals in various fields such as medical diagnostics and environmental monitoring.This work claimed a novel electrochemiluminescence (ECL) strategy to detect the tetracycline antibiotic by exploiting gold-filled photonic crystals (GPCs) electrodes.The electrodes of GPCs are composed of photonic crystals formed by the self-assembly of polystyrene spheres and gold nanoparticles within the gaps of the crystals.These GPCs electrodes serve as a detection platform to bind antigens and label antibodies with Ru(bpy) 3 2+ -COOH, which is a luminophore.The immobilized antigen on the surface of the photonic crystals is linked to Ru(bpy) 3 2+ -COOH/Ab via immunoreaction to avoid direct contact with the gold nanoparticle surface.Electrochemiluminescence (ECL) emission is initiated by the electrochemical oxidation of tripropylamine (TPrA), since Ru(bpy) 3 2+ -COOH cannot be directly oxidized on the electrode surface.TPrA + cation and TPrA • radicals, which are produced by TPrA oxidation, interact with Ru(bpy) 3 2+ -COOH close to the electrode surface, leading to ECL emission.The electrodes exploited in GPCs consist of photonic crystals that form through the self-assembly of polystyrene spheres and gold nanoparticles in the gaps between the crystals.These electrodes are exploited as a detection platform to bind antigens and label antibodies with Ru(bpy) 3 2+ -COOH, which is a type of luminophore.To prevent direct contact between the antigen and the gold nanoparticle surface, the immobilized antigen on the photonic crystal surface is linked to Ru(bpy) 3 2+ -COOH/Ab through an immunoreaction.The electrochemical oxidation of tripropylamine (TPrA) initiates electrochemiluminescence (ECL) emission, since Ru(bpy) 3 2+ -COOH cannot be directly oxidized on the electrode surface.When TPrA is oxidized, it produces TPrA+ cation and TPrA• radicals that interact with Ru(bpy) 3 2+ -COOH near the electrode surface, resulting in ECL emission.The oxidation potential of TPrA (0.95 V vs. SCE) is lower than that of Ru(bpy) 3 2+ -COOH (1.25 V vs. SCE), which results in a 300 mV lower ECL potential.By using nanomembranes made of photonic crystals, the electrochemiluminescence can be enhanced.To detect tetracycline antibiotic, a competitive immunoassay was conducted on GPCs electrodes using this technique, achieving a detection limit of 0.075 pg/mL (S/N = 3).
This demonstrates the potential for widespread application in the field of analysis and detection. The GPC electrodes, comprising photonic crystals, polystyrene spheres, and gold nanoparticles, provide a sensitive detection platform that enables the detection of minute amounts of analytes.
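The detection limit above is quoted at S/N = 3; a common way to estimate such a limit is the 3σ criterion, LOD = 3·σ_blank/m, where σ_blank is the standard deviation of repeated blank signals and m is the calibration slope. The sketch below only illustrates this rule; the blank readings and slope are hypothetical placeholders, not values from the cited study.

```python
# Illustrative estimate of a detection limit using the 3-sigma criterion
# (LOD = 3 * sigma_blank / slope). All input numbers are hypothetical and
# are NOT taken from the tetracycline ECL study discussed above.

import statistics

blank_signals = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9]   # repeated blank ECL readings (a.u.), assumed
calibration_slope = 420.0                             # ECL intensity per (pg/mL), assumed

sigma_blank = statistics.stdev(blank_signals)
lod = 3.0 * sigma_blank / calibration_slope           # detection limit in pg/mL

print(f"sigma_blank = {sigma_blank:.3f} a.u.")
print(f"Estimated LOD = {lod:.4f} pg/mL")
```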
In 2020, Nizam Uddin [68] conducted a study on sustainable freshwater harvesting from the atmosphere using nanocomposite fibers made of recycled polystyrene foams. The aim of this study was to develop cost-effective and efficient water-collection materials to address the water crisis in arid and semi-arid regions of the world. Plastic waste poses a growing environmental concern, and a sustainable solution to this issue is to recycle such waste into value-added materials. To this end, the study employed the electrospinning technique to transform recycled expanded polystyrene (REPS) foam into superhydrophobic nanocomposite fibers by incorporating titanium dioxide (TiO2) nanoparticles and aluminum (Al) microparticles. These nanocomposite fibers demonstrated exceptional superhydrophobic characteristics, with a water contact angle of 152.03° and an effective fog-harvesting capacity of 561 mg/cm2/h. They have diverse industrial applications, such as water collection, filtration, tissue engineering, and composites, and the recovered water can be used for drinking, agricultural, industrial, and other purposes.
In a research work by Yang in 2011 [69], electrospun polystyrene nanomembranes were used to remove contaminants such as methylene blue (10 mg/L), Cr6+ (5 mg/L), and Cu2+ (5 mg/L) from simulated dyeing wastewater. A polystyrene solution (8% (m/m) dissolved in chloroform) was processed into a nanofibrous membrane with fiber diameters ranging from 250 nm to 15 µm, a measured pore size ranging from 3 nm to 0.5 µm, and a membrane thickness of 170 µm. A plate membrane system was used to examine the nanofiltration characteristics of the contaminants. The work demonstrated that the interception (rejection) rates for the contaminants were over 91% and that the water flux ranged from 5.8 to 15.4 mL/(cm2 h).
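The rejection rate and water flux quoted above follow directly from the feed and permeate concentrations and from the permeate volume collected per membrane area and time. A minimal sketch of the two formulas is shown below; the test values fed into it are hypothetical and only illustrate the calculation.

```python
# Minimal sketch of how rejection (%) and water flux are computed for a
# filtration test: R = (1 - Cp/Cf) * 100 and J = V / (A * t).
# All input values below are hypothetical.

def rejection_percent(c_feed, c_permeate):
    """Rejection R(%) = (1 - Cp/Cf) * 100."""
    return (1.0 - c_permeate / c_feed) * 100.0

def water_flux(volume_ml, area_cm2, time_h):
    """Water flux in mL/(cm^2 h)."""
    return volume_ml / (area_cm2 * time_h)

# Hypothetical test with a 10 mg/L methylene blue feed
print(rejection_percent(c_feed=10.0, c_permeate=0.8))          # ~92% rejection
print(water_flux(volume_ml=46.2, area_cm2=3.0, time_h=1.0))    # ~15.4 mL/(cm^2 h)
```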
Highly efficient membranes for the desalination of seawater in various technologies, including FO, CDI, RO, MD, solar distillation, and electrodialysis, have been successfully fabricated using nanocellulose and cellulose derivatives. Cellulose is a polymer consisting of multiple D-glucose moieties, and it is derived from renewable sources such as plants, wood, bacteria, and algae. This bio-polymer has been exploited in various forms, including macro and nano, and it can be treated with different chemicals to produce cellulose derivatives. These forms of cellulose have been utilized in the creation of water desalination membranes. Cellulose and its derivatives are preferred materials for membrane fabrication due to attractive characteristics such as biocompatibility, biodegradability, low cost, hydrophilicity, and mechanical toughness (Figure 9). Cellulosic/cellulose-based membranes have been widely exploited as a sustainable solution for seawater desalination in the last decade. Meanwhile, commercial cellulosic membranes can achieve better permeability and a high level of salt or small-molecule rejection with precise functionalities, porosity, and pore size. Surface-tailored transport channels in cellulosic membranes have the potential to provide opportunities for the recovery and separation of valuable products. Various types of cellulosic materials, including nanocelluloses (CNCs, CNFs, or BNC) and cellulose derivatives, have been utilized in the development of membranes for seawater desalination. These pristine or functionalized cellulosic membranes have been successfully applied in various membrane-based technologies, such as RO, FO, and MD, either as the membrane material itself or as a reinforcing agent to improve the performance of other membranes. While there are advantages to using such materials, including high biocompatibility and low toxicity, there are also drawbacks that must be considered. One of these limitations is their sensitivity to pH, which can impact their effectiveness. Additionally, these materials may exhibit reduced activity at higher temperatures, and they may not possess adequate thermal or mechanical strength. Furthermore, they may be prone to fouling and may not have sufficient resistance to chlorine [70].
A dual-scaled porous nitrocellulose (NC) membrane with underwater superoleophobicity for highly efficient oil/water separation was fabricated by a facile perforating method (Figure 10). The NC membrane is a commonly available material that has been functionalized and widely exploited in microfluidic technology, immunoassays, and biochemical analyses due to its excellent wetting characteristics and high protein-binding capability. In this study, researchers developed perforated nitrocellulose (p-NC) membranes with dual-scaled pores consisting of intrinsic nanopores and an array of perforated micropores. The p-NC membranes exhibited exceptional underwater superoleophobicity and high efficiency in separating oil and water. The micropores facilitated faster and easier water penetration through the membrane, with a water penetration time of only 8 min for 40 mL of water. In contrast, the NC membrane with only overlapped nanopores showed much slower water penetration, taking 103 min for 40 mL of water to pass through. The p-NC membranes were able to selectively and efficiently separate water from various oil/water mixtures, including gasoline, diesel, hexane, petroleum ether, and high-viscosity crude oil/water mixtures, without the need for external power. The separation efficiency was greater than 99%. The separation time and intrusion pressure of the p-NC membrane for different oil/water mixtures could be easily adjusted by controlling the size of the perforated micropores. Additionally, the p-NC membranes demonstrated excellent underwater superoleophobicity in corrosive liquids, indicating excellent environmental stability and promising applications in practical oil spill cleanup and oily wastewater remediation. The p-NC membranes with dual-scaled pores and an array of perforated micropores have superior separation efficiency and stability, making them ideal for oil/water separation in various applications. The membrane's perforated micropores can be tailored to achieve desired separation times and intrusion pressures, while the underwater superoleophobicity ensures high efficiency in separating oil and water [71,72].
Qi and co-workers conducted a recent study in which they incorporated polydopamine (PDA) into ionic liquid-capped polyimide (IL-PI) membranes using an in situ growth method, resulting in a membrane with strong PDA adsorption characteristics. The IL-PI membranes were hydrophilically modified with an IL containing hydrophilic groups and with PDA. The polymerization time was controlled to create a composite membrane that effectively separated oil-water emulsions. Scanning electron microscopy revealed an increase in PDA content within the composite membrane fibers and on the surface with longer PDA coating times. The PDA coating reduced the surface contact angle of the membrane from 72.87° to 12.06° and improved its wettability. The PDA-modified fibrous membranes exhibited excellent separation of emulsified oil-water mixtures, achieving a maximum membrane flux of 280 L·m−2·h−1 and a separation efficiency of >99%. After ten repeated cycles, the separation efficiency remained >92%. This study presents a promising approach for designing future wastewater remediation solutions [73].
In another research work, researchers investigated the water permeability of a polyimide/GO thin film with a multilayer structure having an interlayer spacing of about 0.83 nm. The GO concentration was varied between 0 and 0.02 wt.%, and the hydrophilicity of the film increased with increasing GO concentration. The permeate water flux increased from 39.0 ± 1.6 to 59.4 ± 0.4 L/m2 h under 300 psi with increasing GO concentration, while the rejections of NaCl and Na2SO4 decreased only slightly, from 95.7 ± 0.6% to 93.8 ± 0.6% and from 98.1 ± 0.4% to 97.3 ± 0.3%, respectively. The interlayer spacing of the GO nanosheets acted as a water channel and significantly impacted the water permeability [74].
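Because the fluxes above were measured at a fixed applied pressure of 300 psi, they can be normalized to a pressure-independent permeance (flux per unit driving pressure), which makes comparison with membranes tested at other pressures easier. The sketch below does this conversion under the simplifying assumption that the osmotic-pressure contribution is neglected; it uses only the fluxes and pressure quoted in the text.

```python
# Pressure-normalised water permeance (L m^-2 h^-1 bar^-1) for the fluxes
# quoted above. The osmotic-pressure term is neglected for simplicity,
# so these are only rough, illustrative values. 300 psi ~= 20.7 bar.

PSI_TO_BAR = 0.0689476

def permeance(flux_lmh, pressure_psi):
    """Water permeance in L m^-2 h^-1 bar^-1 (osmotic pressure neglected)."""
    return flux_lmh / (pressure_psi * PSI_TO_BAR)

for label, flux in [("0 wt.% GO", 39.0), ("0.02 wt.% GO", 59.4)]:
    print(f"{label}: {permeance(flux, 300):.2f} L m^-2 h^-1 bar^-1")
```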
A nanocomposite membrane of silver-doped fly ash/polyurethane (Ag-FA/PU) was successfully fabricated in a one-step electrospinning process, incorporating fly ash particles (FAPs).The process involved using a colloidal solution of polyurethane (PU) with FAPs and an Ag metal precursor, which was electrospun to create a spider-web-like nanocomposite membrane.The presence of N, N-dimethylformamide, a solvent of PU, reduced silver nitrate to Ag nanoparticles (NPs).The incorporation of Ag NPs and FAPs into the electrospun PU fibers was verified through electron microscopy and spectroscopic techniques.The addition of these NPs onto the PU nanofibers resulted in a spider-web-like nano-netting for NPs separation, enhanced absorption capacity for the removal of carcinogenic arsenic and toxic organic dyes, and antibacterial properties with reduced bio-fouling for membrane filter application.The Ag-FA/PU nanocomposite membrane exhibited promising potential for water remediation, demonstrating its cost-effectiveness and environmentally friendly nonwoven matrix for water purification.This approach offered a new opportunity for using one pollutant material to control other contaminants in a scalable and cost-efficient manner.Preliminary observations suggest that the Ag-FA/PU nanocomposite membrane is suitable for water remediation, making it a promising candidate for addressing water contamination issues [75].
In a study conducted by Asman [76], poly(vinyl pyrrolidone) (PVP) and dextran, which are water-soluble complexing polymers, were utilized for the ultrafiltration (UF) of aqueous Fe3+ solutions using poly(methyl methacrylate-co-methacrylic acid) (PMMA-co-MA) membranes. The study examined the effects of polymer concentration and pH on the filtration of Fe3+ solutions, as well as on the volume collected and the percentage retention (R%). The findings revealed that increasing the polymer concentration reduced the permeability of the PMMA-co-MA membrane, while increasing the pH enhanced Fe3+ retention. The retention rates for Fe3+ solutions with PVP and dextran were found to be 62% and 48%, respectively, at pH 3.0 for an 80 min filtration period, whereas the retention for the Fe3+ solution without any complex-forming polymer was just 14%. The membranes were examined by AFM analysis and contact angle measurements [77].
Polytetrafluoroethylene (PTFE) is a highly desirable polymer for creating porous membranes to filter aggressive streams, even under severe temperature conditions, due to its exceptional chemical resistance and thermal stability. The hydrophobic nature of PTFE membranes also suggests their potential application in membrane distillation, which is increasingly being explored as an advantageous alternative to reverse osmosis for the treatment of concentrated and preferably warm solutions. Commercial PTFE porous membranes are typically produced using a complex process that involves mixing PTFE powder with a lubricant liquid. The resulting paste is extruded in the form of a flat sheet or tube, which is then stretched and sintered to create a porous structure consisting of nodes and tiny interconnected fibrils. The pore size generally ranges from 0.1 to 2-3 µm, depending on the preparation conditions. However, flat-sheet expanded membranes are usually thin and require bonding to a polyethylene or polypropylene support, such as a woven or nonwoven fabric, to improve handling and mechanical characteristics. Unfortunately, this also lowers the heat and chemical resistance of the final product. In addition, PTFE membranes are generally expensive due to the complex and time-consuming production process. Despite this, their unique properties make them a highly desirable material for various filtration applications, including aggressive streams and high-temperature environments. Researchers continue to explore new methods of producing PTFE membranes that offer improved properties and lower costs, with the aim of making them more accessible for use in a wider range of applications [78][79][80].
The concept of inducing roughness to achieve superhydrophobic surfaces through nanoparticle inclusion is well established, but challenges with consistency and secondary contaminants need to be addressed. To potentially solve these issues, a superhydrophobic nanofibrous membrane was proposed, obtained by electrospinning a blended solution of polyacrylonitrile and hydrophobic polydimethylsiloxane (PAN/H-PDMS) followed by a post-heat-treatment process. The carbonization step creates a hierarchically nano-rough surface on the electrospun nanofibers owing to the differential shrinkage between PAN and H-PDMS. This micro-nanoscale roughness significantly improves the superhydrophobicity of the material, giving a water contact angle (WCA) of 163.48° and a sliding angle (SA) of 4.2°. The resulting composite superhydrophobic nanofibrous membrane (CSN-M) exhibits excellent robustness against tape peel, abrasion, and bending cycles, maintaining a WCA higher than 158° and an SA of less than 6.5°. Additionally, the membrane displays a self-healing feature, which restores the WCA to 162.25° and reduces the SA to 5.0° after heat treatment at 60 °C. The CSN-M has a tensile modulus of 12.11 MPa, withstands a hydrostatic pressure of 39.18 cmH2O, and shows excellent breathability. It is highly permeable, durable, and strong, making it ideal for applications such as water/oil separation and self-cleaning [81].
Hybrid (Inorganic/Organic) Nanomembranes
In addition to traditional inorganic carbon-based nanomaterials, transition metal-based nanomaterials, and organic nanomaterials, other nanomaterials have been proposed for water remediation purposes. One such example is the use of magnetic halloysite nanotube (MHNT) composites modified with molecularly imprinted polymers (MIPs) to selectively recognize and adsorb 2,4,6-trichlorophenol (TCP) for the remediation of wastewater. It has been suggested that this approach has potential for the development of commercially available products. In this work, the researchers developed a magnetic molecularly imprinted polymer (MMIP) for the selective recognition of 2,4,6-trichlorophenol (TCP) using magnetic halloysite nanotube particles (MHNTs) as the base. The MHNTs were produced by attaching magnetic nanoparticles to carboxylic acid-functionalized halloysite nanotubes (HNTs−COOH) through a high-temperature reaction of ferric triacetylacetonate in 1-methyl-2-pyrrolidone (Figure 11). The researchers utilized the MHNTs to create molecularly imprinted polymers (MMIPs) and applied several characterization techniques, such as X-ray diffraction, Fourier transform infrared analysis, thermogravimetric analysis, vibrating sample magnetometry, transmission electron microscopy, elemental analysis, and Raman spectroscopy. The MMIPs had a 5.0–15.0 nm imprinted polymer film and demonstrated magnetic properties and thermal stability. Batch-mode adsorption studies revealed the MMIPs' specific adsorption equilibrium, kinetics, and selective recognition, with the Langmuir isotherm model fitting better than the Freundlich model. The MMIPs' monolayer adsorption capacity was determined to be 246.73 mg g−1 at 298 K, and they exhibited high affinity and selectivity toward TCP over other phenolic compounds. Furthermore, the researchers found that the MMIPs were regenerable, with only an 11.0% loss in pure TCP solution and a 16.1% loss in a coexisting phenolic compound solution after the fifth use. They also successfully used the MMIPs to remove TCP from environmental samples, highlighting the potential of MMIPs for the efficient removal of target contaminants from complex matrices. This study shows that the development of MHNTs and their utilization in MMIPs can lead to the effective removal of specific contaminants in complex samples [82,83].
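The Langmuir and Freundlich models mentioned above are simple two-parameter isotherms relating the adsorbed amount q_e to the equilibrium concentration C_e. The sketch below evaluates both forms over an assumed concentration range, using the reported q_max of 246.73 mg g−1; the Langmuir affinity constant and the Freundlich parameters are hypothetical and are not taken from the study.

```python
# Sketch of the two isotherm models compared in the batch adsorption study:
#   Langmuir:   q_e = q_max * K_L * C_e / (1 + K_L * C_e)
#   Freundlich: q_e = K_F * C_e ** (1/n)
# q_max = 246.73 mg/g is taken from the text; K_L, K_F and n are assumed.

def langmuir(c_e, q_max=246.73, k_l=0.05):
    """Langmuir isotherm, q_e in mg/g for C_e in mg/L."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

def freundlich(c_e, k_f=25.0, n=2.0):
    """Freundlich isotherm, q_e in mg/g for C_e in mg/L."""
    return k_f * c_e ** (1.0 / n)

for c_e in (10, 50, 100, 200):   # equilibrium TCP concentration, mg/L (illustrative)
    print(c_e, round(langmuir(c_e), 1), round(freundlich(c_e), 1))
```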
García-Torres et al., 2022 [84] developed a straightforward method for creating flexible electronic hybrid materials featuring nanostructured surfaces, using free-standing perforated two-dimensional nanomembranes that host ordered 1D metal-based nanostructures. The fabrication process involves depositing alternating layers of perforated poly(lactic acid) (PLA) and poly(3,4-ethylenedioxythiophene), into which copper metallic nanowires (NWs) are then incorporated via electrodeposition. The top PLA layer with nanoperforations is then coated with silver through a transmetallation reaction (Figure 12). This approach combines the conformability and flexibility of the ultrathin, soft polymeric nanomembranes with the excellent electrical properties of metals, making it well suited for bio-integrated electronic devices. By tailoring the nanomembrane surface chemistry, this work demonstrated sensing capabilities toward H2O2, with a good linear concentration range (0.35-10 mM), limit of detection (7 µM), and sensitivity (120 µA cm−2 mM−1). The hybrid nanomembranes produced were flexible and conformable, with selectivity toward H2O2 and good stability and reproducibility, and these characteristics were confirmed by EDX, SEM, XPS, EIS, CV, and
contact angle analyses.
A simple fabrication procedure for epoxy resin and silica-based hybrid nanomembranes was also described by Watanabe 2009 [85]. In this study, the reaction of poly[(o-cresyl glycidyl ether)-co-formaldehyde] (PCGF) and 3-aminopropyl triethoxysilane (APS) at room temperature was followed by spin-coating and baking at 120 °C to produce uniform nanomembranes with a thickness range of 20-50 nm. The epoxy and amine groups were homogeneously mixed through chemical linking, resulting in robust and defect-free nanomembranes. The nanomembrane had a hydrophobic nature and remained intact on the water surface. It exhibited exceptional chemical stability without swelling in most organic solvents, and its membrane morphology was preserved even after heating at 600 °C for 3 h, despite the complete removal of the organic component. The mechanical properties of the nanomembrane were not significantly altered from those of the epoxy-only nanomembrane described in an earlier study. The authors also discussed the significance of cross-linking density and hybridization in relation to the stability of
various giant nanomembranes. This simple and straightforward method for fabricating hybrid nanomembranes can be applied to a wide range of precursor materials, and it is expected to be highly effective for creating durable nanomembranes designed for specific applications. By carefully selecting the appropriate precursor materials and combining them in a hybridization process, a wide range of functional nanomembranes can be produced. The combination of macroscopic size and sub-100 nm thickness makes these nanomembranes suitable for highly efficient material transport and for large-area, single-molecule-governed devices for fundamental research. Additionally, these nanomembranes have potential practical applications in materials separation and specific ion transport. The hybridization process elaborated in the article can also help these nanomembranes perform well at higher temperatures [86].
Inorganic Nanomembranes
Inorganic nanomembranes have recently become increasingly widespread owing to their potential use in water remediation applications. These membranes are typically composed of inorganic substances such as metals, metal oxides, and ceramics. One of the most promising inorganic nanomembrane materials for water remediation is graphene oxide. This material possesses excellent mechanical and thermal properties, as well as a high surface area, which allows for the efficient adsorption of contaminants. Other inorganic nanomaterials that have been investigated for water remediation include metal-organic frameworks (MOFs), zeolites, and mesoporous silica. MOFs, for example, are highly porous materials with tunable characteristics that can be exploited for the adsorption and degradation of organic contaminants. Inorganic nanomembranes have also been exploited to remove heavy metals from water. For example, iron oxide nanoparticles have been introduced into polymeric membranes to create nanocomposite membranes with improved performance in the removal of lead and cadmium from water. Inorganic nanomembranes therefore show great promise in water remediation applications. With their unique characteristics and tunability, these materials have the potential to revolutionize the way we treat polluted water and to ensure the availability of clean and safe potable water for future generations.
Shayesteh and co-workers (2016) [87] described the synthesis and performance evaluation of titania–gamma-alumina multilayer nanomembranes, investigating the rejection ratio of the multilayered membrane in both acidic and alkaline solutions. The support for the nanomembrane was prepared from alpha-alumina tubes using the slip-casting method. A gamma-alumina sub-layer and a titania top layer were sequentially coated on the support, and the water flux and permeability of the nanomembrane were characterized. The study also examined the nanomembrane's ability to reject microorganisms and several ions in a model wastewater at different pH levels. The water permeability through the nanomembrane decreased when pressure in the range of 1-10 bar was applied, but it became almost constant at higher pressures, while the water flux increased. Rejection tests conducted on a model wastewater containing ions indicated that the nanomembrane produced on the slip-cast support could partially reject ions and successfully separate all microorganisms, and adjusting the pH was found to enhance ion rejection. Several characterization techniques were employed to analyze the nanomembrane's properties, including water flux, permeability, and ion rejection.
In a review, Cavallo and his co-authors [88] mentioned a method for creating inorganic nanomembranes (NMs): the two-step process involves depositing a thin active layer on a sacrificial layer, which is typically on a bulk substrate, followed by selective etching of the sacrificial layer to free the active membrane from the substrate.The functional layer can be deposited intentionally with strain or without, and its shape can be flat or conformed to different shapes.To create thin membranes, a multilayered structure is usually exploited, which involves a sacrificial layer between the functional membrane and the handle substrate.Group-IV NMs were observed to form thin sheets, rippled structures, and rolled-up structures through scanning electron micrographs.A trilayer configuration was exploited to balance strain in the growth direction and maintain the 2D geometry of the NM upon release.If anchor points remain, uniformly and compressively strained membranes relax by lateral expansion, resulting in periodic wrinkles.On the other hand, a high strain gradient in the growth direction causes the functional membrane to curl up and eventually form a tube [83].
Silicon-on-insulator (SOI) is a composite material that has gained widespread use in the semiconductor device manufacturing industry.It consists of a thin crystalline layer of Si, known as the template layer, that is separated from a bulk wafer by a SiO 2 film.This technology was developed approximately 15 years ago, and it rapidly gained acceptance in the industry due to its ability to produce a thin Si template layer quickly and reliably.SOI has numerous advantages over bulk Si crystal, particularly in low-power circuit applications exploited in portable electronic devices.The use of a thin Si layer on top of an oxide has significantly improved semiconductor device performance.In addition to its applications in semiconductor device fabrication, SOI is the most commonly exploited platform for the development of micro-and/or nanoelectromechanical systems (MEMS and/or NEMS) and Si nanomembranes.In another case, the SiO 2 layer that is buried acts as a layer that can be sacrificed, and it is removed through a selective etching process.Currently, commercially available SOI wafers have a top layer of Si that can be as thin as 20 nm.In the context of nanomembrane applications, these upper layers are often entirely detached and moved onto different host materials, although there are instances where contact points are retained.Hence, SOI has become an essential material in the semiconductor industry and has enabled significant advancements in low-power circuit applications and the development of micro-and/or nanoelectromechanical systems [89][90][91][92].
Yin and co-workers (2012) [93] proposed a method for enhancing the hydrophilicity of thin-film nanocomposite (TFN) membranes by integrating SiO2 nanoparticles (NPs) using an in situ interfacial polymerization procedure. They found that the improved solubilization and diffusion of water through the membrane contributed to the enhanced hydrophilicity. A greater decrease in the liquid-vapor interfacial contact angle was suggested as the critical factor controlling the membrane surface hydrophilicity, as previously shown in studies by Wang and co-workers (2011) [94]. Wang and colleagues suggested that depositing a polymer electrolyte membrane (PEM) on a rough substrate coated with sub-micrometer-scale silica spheres could result in a Wenzel state of membrane wetting, allowing high hysteresis contact angles of the liquid-vapor interface to be achieved. Sabir and co-workers (2016) [95] exploited the thermally induced phase inversion separation (TIPS) procedure to synthesize a polymer-matrix SiO2 NP (PM-SNP)-conjugated membrane. Incorporating 0.4 wt% SiO2 NPs (PM-S4) into the PM membrane led to a marked improvement in salt rejection during reverse osmosis (RO) (flux of 2.39 L/m2 h) compared to the unmodified membrane (flux of 2.1 L/m2 h), owing to the improved surface roughness conditions that facilitated water transport. Ahmad and colleagues (2015) [96] investigated the effect of SiO2 nanoparticles (NPs) at different weight percentages (1-5 wt%) on cellulose acetate and cellulose acetate/polyethylene glycol (CA/PEG) membranes. They observed that the inclusion of SiO2 NPs improved the thermal and mechanical stability of the CA/PEG membrane, resulting in an enhancement in flux from 0.35 to 2.46 L/m2 h and an 11.41% improvement in salt rejection. Among the tested membranes, CPS-5, containing 5 wt.% silica, was the most effective and the most resistant to fouling during RO. Pang and Zhang (2018) [97] developed a hydrophobic fluorinated SiO2 NP-based thin-film nanocomposite (TFN) membrane for treating high-salt-content samples (2000 ppm). They observed an increase in desalination from 96% to 98.6%, with a decrease in flux from 0.99 to 0.93 m3/m2/day. Incorporating SiO2 nanoparticles into membranes can thus significantly enhance their performance in salt rejection and water transport. The improvements in thermal and mechanical stability, fouling resistance, and salt rejection suggest that SiO2 NP-based membranes are a promising technology for various applications, including water remediation and desalination.
In one more study, researchers enhanced a glass fiber membrane by incorporating SiO2 nanoparticles and then subjected it to surface fluorination and polymer coating to develop an omniphobic membrane that can be exploited for the direct contact membrane distillation (DCMD) of a sodium lauryl sulfate (SLS) solution.The omniphobic membrane was pitted against a commercial polytetrafluoroethylene (PTFE) membrane, and comparisons were made based on contact angle and DCMD applicability.The results showed that the omniphobic membrane performed better when tested with various types of feed solutions like humic acid, kerosene oil, diiodomethane and detergent-for example, sodium lauryl benzene sulfonate-as compared to the PTFE membrane.Additionally, the omniphobic membrane exhibited anti-wetting characteristics toward water, ethanol, mineral oil and decane, while the PTFE membrane only displayed effectiveness when dealing with water [98].
In a study by Huang and co-workers, 2017 [99], a super-amphiphobic membrane was created for membrane distillation (MD) using electrospinning, calcination, and fluorination. The researchers found that this membrane performed better than a commercially available polyvinylidene fluoride (PVDF) membrane in treating concentrated feed solutions containing surfactants, owing to its super-amphiphobic character. Efome and co-workers (2015) also prepared a SiO2-based anti-wetting super-amphiphobic membrane, using a phase inversion immersion precipitation procedure to prepare PVDF/SiO2 flat-sheet composite membranes for vacuum membrane distillation (VMD). The researchers studied the blending of superhydrophobic SiO2 nanoparticles into a PVDF dope solution, and the modified membrane showed an enhanced flux, from 0.7 to 2.9 kg/m2 h, with a desalination rate of 99.98% in the VMD procedure. The addition of SiO2 nanoparticles and surface modification with fluorination and polymer coating thus show promise for creating anti-wetting and super-amphiphobic membranes for DCMD and MD processes. These modified membranes displayed better performance than traditional commercial membranes in terms of contact angle and treatment efficiency against various feed solutions [100].
TiO2 is another coating material with favorable characteristics, such as nontoxicity, stability, low cost, and photocatalytic activity, as noted by several authors [100]. Recently, CA/PEG membranes were modified with various TiO2 loadings and exploited in RO and MD. Shafiq and co-workers (2018) [101] found that CA/PEG membranes loaded with 5, 10, 15, 20, and 25 wt% TiO2 showed maximum desalination rates of 80, 90, 95.4, 85, and 80%, respectively; these results confirmed that 15 wt% was the optimal loading for maximum desalination. Membranes coated with TiO2 and exposed to UV radiation exhibited enhanced hydrophilicity and self-cleaning characteristics. However, too much TiO2 blocked the membrane pores and reduced membrane performance. Emami and co-workers (2018) and Stan and co-workers (2019) [102,103] observed that TiO2 NP-coated membranes had exceptional self-cleaning characteristics under ultraviolet irradiation. Kwak and co-workers (2001) [104] conducted a study revealing that a TFC membrane modified with TiO2, consisting of organic/inorganic hybrids, was less susceptible to fouling than the pure PA membrane when exploited in RO. Safarpour and co-workers (2015) established a TFN-RO film using interfacial polymerization and coating with reduced graphene oxide/TiO2; the altered membrane demonstrated improved hydrophilicity and anti-fouling characteristics compared to the unmodified membrane [105]. Ren and co-workers (2017) created a TiO2-coated PVDF electrospun nanofiber membrane (ENM) that exhibited high flux (73.4 L/m2 h) and salt rejection (99.99%) [106]. Various methods have been developed for producing superhydrophilic membranes, including nanomaterial-based membrane surface coating, dip-coating, and post-modification of virgin membranes [107].
Zinc oxide (ZnO) nanoparticles (NPs) have gained popularity as an additive due to their low price, high stability (physical, chemical, mechanical, and thermal), high surface area, surface functionalization, and remarkable antimicrobial and anti-corrosive characteristics. In membrane modification, ZnO has been shown to enhance the hydrophilicity of blended membranes, which improves permeability and fouling resistance [103]. For example, ZnO NPs were incorporated into a cellulose acetate (CA) membrane via electrospinning to improve its antibacterial property for reverse osmosis (RO) [104]. Similarly, in order to reduce biofouling in membrane distillation (MD), ZnO was incorporated into cellulose acetate (CA), which proved to be an effective superhydrophobic/omniphobic membrane modification. Researchers prepared a composite membrane of polytetrafluoroethylene (PTFE)/poly(vinyl alcohol)/ZnO through electrospinning, which enhanced the contact area between the surface and microbes without agglomerating the nanoparticles. They also exploited PTFE/ZnO films for the self-cleaning of a fouled film during vacuum membrane distillation (VMD). The membrane demonstrated high chemical and thermal stability, with efficient salt rejection (99.9%) and dye removal (45%), as confirmed by photodegradation experiments. ZnO nanoparticles were also exploited to modify a glass fiber membrane via chemical bath deposition to produce an omniphobic film for DCMD (direct contact membrane distillation). This omniphobic film was highly resilient to wetting by low-surface-tension liquids during DCMD, and it maintained a contact angle of 152.8° throughout the operation with a flux of 30 L/m2 h and a salt rejection of 99.99% [108][109][110].
Synthetic Biological Nanomembranes
Model lipid bilayers are examples of this class of synthetic organic nanomembranes, which represent replicas of living nanomembranes [44]. The first model lipid bilayers were successfully synthesized in 1962 [111]. Initially known as "black lipid membranes" or "painted bilayers", they were created as platforms for studying membrane processes in vitro, aiming to facilitate the analysis of transmembrane mechanisms and ion channel function. Among the early achievements in synthetic ion channels, tetrasubstituted β-cyclodextrin was the first fully synthetically produced ion channel, reported as early as 1982 [112]. This marked a significant advancement in the field, showcasing the potential to artificially create functional channels that mimic the natural ion channels found in biological membranes. Subsequently, research on synthetic lipid bilayers and ion channels has continued to progress, leading to further insights into membrane biophysics and to applications in drug delivery, biosensing, and the understanding of cellular processes. The study of model lipid bilayers and synthetic ion channels remains a critical area of research in biophysics and nanotechnology.
Characteristics of Nanomembranes Relevant to Water Purification
Compared to larger-scale materials, nanoscale materials have a significantly larger surface area and demonstrate unique magnetic, optical, and electrical characteristics.When incorporated into membranes, they create structures with refined filtration mechanisms and diverse physical, chemical and biological characteristics.Some of the characteristics are as follows [113]:
Electrical Properties
The electrical characteristics of nanomembranes were investigated using a potentiostat/galvanostat system, which measured an output leakage current of approximately 90 µA at 0.5 V for a 30 nm thick nanomembrane transferred onto a substrate. The electrical resistivity was calculated to be 0.5 × 10^11 Ωcm (Figure 13). This value is only about seven times smaller than that measured for a PCGF-PEI film fabricated directly on a substrate (3.79 × 10^11 Ωcm), indicating that the insulating behavior was not lost upon detachment from the substrate. The highly insulating character of the nanomembrane suggests defect-free behavior. The resistivity of the nanomembrane is essentially the same as that of a conventional bisphenol-A-type epoxy resin (10^10-10^12 Ωcm), a material claimed to be highly compatible with many chemical substances and exploited to develop superior functional and structural composites; the resistivity thus remained essentially unchanged when the material was prepared as a nanomembrane. These findings have important implications for the development of functional and structural composites using nanomembranes, and it has been concluded that epoxy resins can be utilized as a material for nanomembranes. The resulting membranes were shown to be uniform, defect-free, and flexible, with a consistent thickness of (23 ± 2) nm. Moreover, the thinnest membrane exhibited a tensile strength comparable to that of conventional thick epoxy resins, while its ultimate elongation was substantially lower. These findings provide valuable insights for the development of new applications for epoxy resins in the field of nanotechnology.
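The resistivity quoted above follows from Ohm's law and the membrane geometry, ρ = (V/I)·(A/t). The electrode contact area is not given in the text, so the value used in the sketch below is a placeholder chosen only to reproduce the reported order of magnitude.

```python
# Through-thickness resistivity of a nanomembrane from a leakage-current
# measurement: rho = (V / I) * (A / t). V, I and t are taken from the text;
# the electrode contact area A is NOT given there, so the value below is a
# hypothetical placeholder.

V = 0.5          # applied voltage, V
I = 90e-6        # leakage current, A
t = 30e-7        # membrane thickness: 30 nm = 30e-7 cm
A = 27.0         # assumed electrode contact area, cm^2 (hypothetical)

R = V / I                  # resistance, ohm
rho = R * A / t            # resistivity, ohm*cm

print(f"R   = {R:.0f} ohm")
print(f"rho = {rho:.2e} ohm*cm")   # ~5e10 ohm*cm, i.e. ~0.5e11 as reported
```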
Adsorption
Certain nanomaterials (NMs) grounded on nanostructured graphene, metal oxides, carbon nanotubes (CNT), zeolite, porous BN and electrospun nanofibers are capable of serving two functions, namely adsorption and the membrane filtration of heavy metal ions, phosphates and nitrates.These NMs possess active sites and high porosity, making them ideal for adsorbing contaminants.Electrospun nanofiber-based membranes that contain NMs exhibit intriguing characteristics for removing trace quantities of contaminants from water through filtration and adsorption, which is due to their porosity and large surface area.The adsorption of contaminants from an aqueous solution by these materials can occur through chemical binding, physical adsorption (caused by porosity, van der Waals attraction and the large surface area of NMs), or electrostatic attraction [114].
Photocatalysis
The photocatalytic characteristics of TiO 2 NP-based NEMs are distinctive and include photodegradation and photoinduced super-lipophobicity.These characteristics provide the membrane surface with fouling resistant, antimicrobial and self-cleaning characteristics.The excitation of valence electrons of the photocatalyst occurs under UV light, causing their migration and resulting in the observed effects [115].
Antimicrobial Activity
The use of silver NPs as antimicrobial agents in NEMs is widespread because of their highly effective biocidal characteristics. Silver interacts with biochemical compounds containing thiol (S-H) groups, such as cysteine, as well as with phosphorus- and sulfur-containing compounds. Through the formation of S-Ag or disulfide bonds, silver degrades microbial proteins, denatures DNA, and interrupts the electron transport system, leading to its biocidal effect. In addition to silver, other nanoparticles such as Cu, CNTs, and graphene have also been utilized for modifying commercial membranes to enhance their biocidal efficiency and application duration. These modified membranes deliver effective water disinfection while maintaining a high flux recovery ratio [116].
Chlorine Resistance
Functional nanomaterials such as zeolite and silica are being researched for use in NF and RO membranes due to their promising characteristics. Incorporating GO, SiO2, CNT, and zeolite nanoparticles (NPs) into the barrier layer of thin-film nanocomposite membranes (TFNCMs) has been shown to enhance the membranes' resistance to chlorine. MWCNTs, in particular, act as a protective coating for PA against free chlorine attack. Meanwhile, the improved chlorine resistance of GO-based membranes is principally attributed to the formation of hydrogen bonds between GO and PA that hinder the interaction of chlorine with the active N-H bonds in PA. Zeolite-based membranes have also been explored for desalination, as they are chemically robust and favorable. Zhu et al. [80] conducted a study on the stability of MFY-type zeolite films against chlorine cleaning and found that they maintained high chlorine stability. The zeolite membranes retained their high rejection of ions such as Mg2+ (90%) and Ca2+ (82%), and they also showed good rejections for Na+ (70%) and K+ (78%) after exposure to a hypochlorite cleaning solution (1000 ppm) for 7 days at an applied pressure of 3 MPa, with no noteworthy change in water flux or salt rejection [117].
Challenges with Nanomembrane-Enhanced Water Remediation
Nanomembranes have emerged as a potential solution for wastewater remediation due to their unique characteristics, but there are several challenges that need to be addressed.These challenges include the lack of information about the nanomaterials, their potential adverse effects on human health and the environment, and the need for effective and sustainable wastewater remediation methods.The rapid commercialization of nanomembranes has led to an enhancement in their production globally, but they also face various challenges that must be addressed to realize their full potential.This section outlines some of the key challenges in the application of nanomembranes for water treatment.Nanomembranes are susceptible to fouling, where contaminants and particles accumulate on the membrane surface or within its pores over time.Fouling can reduce filtration efficiency and increase operating costs, necessitating frequent cleaning or replacement.Scaling up nanomembrane production to meet large-scale water treatment demands can be challenging and costly.Developing cost-effective manufacturing methods without compromising performance remains a significant obstacle.They are vulnerable to mechanical and chemical degradation, impacting their stability and lifespan.Ensuring durability and longevity under varying water conditions is crucial for practical applications.Achieving high selectivity for target pollutants while avoiding interference from other compounds can be challenging.Fine-tuning membrane properties to selectively capture specific contaminants requires careful material design and engineering.The use of nanomaterials in water treatment raises concerns about potential environmental and health risks.Meeting stringent regulatory requirements and demonstrating the safety of nanomembrane technology is essential for widespread adoption.
Moreover, the performance of nanomembranes can be influenced by the complex and diverse composition of water sources.Variations in water chemistry may impact membrane stability, fouling rates, and contaminant removal efficiency.Some nanomembrane processes may require significant energy inputs for effective water remediation.Developing energyefficient systems to minimize operational costs and reduce the environmental footprint is a critical challenge.Developing scalable and reproducible manufacturing techniques for nanomembranes is essential for cost-effective production and large-scale implementation.Integrating nanomembranes into existing water treatment systems and infrastructure can present compatibility challenges and require adaptations to ensure seamless operation.
Despite these challenges, ongoing research and technological advancements hold the promise of overcoming these obstacles.Collaborative efforts between researchers, industries, and regulatory bodies are essential to address these challenges and unlock the full potential of nanomembranes in providing sustainable and efficient solutions for water remediation.By addressing these concerns, nanomembranes can contribute significantly to mitigating water scarcity and ensuring access to clean water resources for a sustainable future [118][119][120][121].
Conclusions
Nanomembranes have emerged as a highly promising technology in the field of water remediation, offering innovative solutions to tackle the growing global water crisis.These ultrathin synthetic structures have demonstrated remarkable potential in various water treatment applications, including filtration, desalination, and contaminant removal.The development of nanomembranes has provided a breakthrough in water purification processes.Their nanoscale porosity allows for precise filtration, effectively removing impurities, particles, bacteria, and even viruses from water sources.Additionally, nanomembranes can selectively target specific pollutants, making them ideal for removing heavy metals, organic contaminants, and emerging pollutants that pose significant environmental and health risks.In desalination, nanomembranes play a crucial role in efficiently removing salts and minerals from seawater or brackish water, addressing freshwater scarcity challenges in coastal regions.This breakthrough technology has the potential to revolutionize desalination processes and provide a sustainable source of freshwater.The integration of nanomembranes with advanced technologies, such as nanocatalysts and sensors, enables real-time monitoring and targeted pollutant degradation, further enhancing the efficiency and effectiveness of water remediation processes.The environmental benefits of nanomembranes are evident, as their implementation can reduce the energy consumption and environmental footprint associated with conventional water treatment methods.Their high selectivity and efficiency translate into decreased chemical and energy usage, making them an environmentally friendly choice for water purification.Despite the significant progress made in nanomembrane research, challenges remain.The scalability and cost-effectiveness of large-scale production require further attention.Continuous efforts are needed to optimize manufacturing processes, reduce production costs, and make nanomembrane technology economically viable for widespread implementation.Additionally, as nanomaterials are involved in their fabrication, it is crucial to assess and address any potential environmental and health impacts associated with their use.The responsible disposal of used nanomembranes and the development of sustainable, recyclable materials are essential aspects to be considered in the future.Nanomembranes represent a promising frontier in water remediation.Their exceptional capabilities in selective filtration, desalination, and contaminant removal make them valuable assets in ensuring access to clean and safe water.Continued research and development, along with collaborations between academia, industry, and regulatory bodies, will be pivotal in harnessing the full potential of nanomembranes for addressing global water challenges.Embracing this cutting-edge technology can contribute significantly to achieving water sustainability and safeguarding water resources for generations to come.
Figure 5 .
Figure 5. Image of CNM placed on hexagonal supporting group (reprint with copyright permission) [55].
Figure 6 .
Figure 6.Fabrication route for CNMs and graphene: construction of self-assembled monolayers on a substrate (reprinted with copyright permission) [55].
Figure 9 .
Figure 9. Water desalination using nanocellulose/cellulose-derivative-based membranes for a sustainable future (reprinted with copyright permission) [70].
Figure 10 .
Figure 10. Programmed perforating process to fabricate a dual-scaled porous NC membrane for oil/water separation (reprinted with copyright permission) [71].
Figure 10.Programmed perforating process to fabricate dual-scaled porous NC membrane for oil/water separation Reprinted with copyright permission [71]. | 2023-08-03T15:13:50.098Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "f83fffb77d81280ba849db6f690c2e8a01922e2b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/13/8/713/pdf?version=1690851775",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16e490eada493d13a7d80f1dd84f85ac698e995e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
265308527 | pes2o/s2orc | v3-fos-license | Radio survey of the stellar population in the infrared dark cloud G14.225-0.506
Context. The infrared dark cloud (IRDC) G14.225-0.506 is part of the extended and massive molecular cloud located to the southwest of the H ii region M17. The cloud is associated with a network of filaments, which result in two different dense hubs, as well as with several signposts of star formation activity and a rich population of protostars and YSOs. Aims. The aim of this work is to study the centimeter continuum emission in order to characterize the stellar population in both regions, as well as to study the evolutionary sequence across the IRDC G14.225-0.506. Methods. We performed deep (∼1.5-3 µJy beam^-1) radio continuum observations at 6 and 3.6 cm toward the IRDC G14.225-0.506 using the Karl G. Jansky Very Large Array (VLA) in its most extended A configuration (∼0.3′′). Data at both C and X bands were imaged using the same (u,v) range in order to derive spectral indices. We have also made use of observations taken during different days to study the presence of variability at short timescales towards the detected sources.
Introduction
The formation of intermediate and high-mass stars is a complex process that involves several evolutionary stages, and it is usually found associated with clusters of lower-mass stars (e.g., Pudritz 2002; Lada & Lada 2003). However, when and in what stage massive stars form relative to their low-mass cluster members remains an open question (e.g., Vázquez-Semadeni et al. 2017; Motte et al. 2018). Moreover, it is interesting to probe the earliest stages at which ionization might be present to understand the implications of stellar feedback that could soon disrupt the star-forming cores and limit their further growth. The lack of observational data characterizing these phenomena is due to the distances involved, typically larger than 2 kpc, and the clustered nature of high-mass star-forming regions, which call for high angular resolution and high sensitivity observations. The radio continuum emission at centimeter wavelengths is found in association with young stellar objects (YSOs) in all stages of the star formation process (from Class 0 to Class III). The origin of radio continuum emission can be distinguished through the spectral index α, defined as S_ν ∝ ν^α. A non-thermal origin for the radio continuum emission results in a spectral index α < −0.1 (in the frequency range 4-12 GHz, of interest for the current work), and it is generally the result of electrons in the presence of magnetic fields. In star-forming regions, this type of emission is commonly detected towards YSOs with an active magnetosphere (gyrosynchrotron radiation) corresponding to Class II/III YSOs (e.g., Feigelson & Montmerle 1985; Güdel 2002; Deller et al. 2013). Low-mass Class 0/I objects also have an active magnetosphere and sometimes present synchrotron flares due to magnetic reconnections at the protostellar surface (Liu et al. 2014). Synchrotron emission can also be generated in very strongly magnetized shock spots within jet lobes interacting with the ambient medium (e.g., Carrasco-González et al. 2010; Ainsworth et al. 2014; Rodríguez-Kamenetzky et al. 2016, 2017, 2019; Osorio et al. 2017), towards high-mass binary stars producing synchrotron radiation in the region where their winds collide (e.g., Rodríguez et al. 2012), towards some H ii regions (e.g., Padovani et al. 2019; Meng et al. 2019), and finally, as contaminating background extragalactic sources.
On the other hand, the radio continuum emission can have a thermal origin (thermal bremsstrahlung), which is characterized by spectral indices between −0.1 and +2, and it is interpreted as emission from free-free electron encounters. This emission can arise from shocks in jets powered by low, intermediate and high-mass YSOs (see Anglada et al. 2018, for a review), with typical values for the spectral index of α ∼ +0.6. Thermal emission is also commonly detected in the surroundings of massive stars, whose UV photons can ionize the gas and generate an H ii region. Depending on the evolutionary phase and the surrounding environment, these H ii regions can be small (< 0.1 pc) and dense (> 10^5 cm^-3), referred to as hypercompact and ultracompact H ii regions, and be associated with both optically thin (α = −0.1) and partially thick (α ∼ +0.6 to +2) emission, or they can be large (> 1 pc) and more diffuse (< 10^3 cm^-3), referred to as classical or giant H ii regions, and preferentially associated with optically thin emission (e.g., Kurtz et al. 1994; Sánchez-Monge et al. 2013a,b). Other regions, such as the Orion Nebula Cluster (ONC), present protoplanetary disks under the influence of external photoevaporation by the cluster's intense UV field. These objects, known as proplyds, consist of a disk surrounded by an ionization front and present strong thermal radio emission (Ballering et al. 2023). Finally, Class 0/I objects may exhibit strong winds that result in optically thick thermal emission, particularly in the dense region surrounding the protostar (e.g., Rodríguez 1999). Consequently, even if the protostar were to emit non-thermal radiation, it would likely be hidden by the optically thick free-free emission from the surrounding material and remain undetectable to an observer (e.g., Dzib et al. 2013, 2015).
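To make the spectral-index diagnostic described above concrete, the following minimal Python sketch computes α between the C and X bands and applies the α = −0.1 boundary quoted in the text; the flux densities and frequencies used here are placeholders, not measurements from this work.

import numpy as np

def spectral_index(s_c, s_x, nu_c=6.0, nu_x=10.0):
    # Spectral index alpha from S_nu ~ nu**alpha, using two frequencies in GHz.
    return np.log(s_x / s_c) / np.log(nu_x / nu_c)

# Placeholder flux densities (uJy) at C band (6 GHz) and X band (10 GHz).
alpha = spectral_index(60.0, 45.0)
origin = "non-thermal" if alpha < -0.1 else "thermal (free-free)"
print(f"alpha = {alpha:+.2f} -> {origin}")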
Previous radio continuum studies conducted toward massive star-forming regions and infrared dark clouds have been constrained by sensitivity limitations, since the noise level is typically on the order of mJy beam^-1. This restricts the detection to only the most massive objects (e.g., Kurtz et al. 1994; Sánchez-Monge et al. 2013a; Purcell et al. 2013; De Pree et al. 2014; Moscadelli et al. 2016; Rosero et al. 2016, 2019; Hofner et al. 2017; Medina et al. 2018; Kavak et al. 2021; Purser et al. 2021; Irabor et al. 2023; Dzib et al. 2023), thereby missing a significant fraction of the stellar population. The new capabilities of the Karl G. Jansky Very Large Array (VLA), reaching ≈ µJy beam^-1 sensitivities, offer a unique opportunity to extend radio continuum studies from nearby molecular clouds to more distant regions, providing insights into the formation of massive stars, their associated clusters and their implications for the surrounding medium.
Deep radio continuum surveys toward star-forming complexes in the solar neighbourhood, such as Ophiuchus, Taurus-Auriga, Serpens, Perseus, R Coronae Australis and Orion, have shown the potential to characterize the population of YSOs within the radio frequency range along with their characteristics at other wavelengths (e.g., Dzib et al. 2013, 2015; Liu et al. 2014; Kounkel et al. 2014; Ortiz-León et al. 2015; Pech et al. 2016; Forbrich et al. 2016; Coutens et al. 2019; Vargas-González et al. 2021). Excluding Orion, the mean flux density at 7.5 GHz of the low-mass YSOs in the Gould's Belt VLA survey ranges from ∼0.15-0.8 mJy in Class 0/I protostars to ∼0.2-4 mJy in T Tauri stars (Dzib et al. 2015; Pech et al. 2016). Detecting such a low-mass stellar population in regions lying more than 10 times further away is challenging, since a 1 mJy source translates into a flux density of 10 µJy, requiring, hence, a sensitivity of ∼2 µJy beam^-1 to be detectable at a 5σ level. The only high-mass star-forming complex that has been studied with such a sensitivity is the Orion Nebula Cluster (Forbrich et al. 2016; Vargas-González et al. 2021), showing an increase in the number of known compact radio sources compared to previous, shallower surveys (Zapata et al. 2004; Rivilla et al. 2015). We aim to extend this kind of study to other massive star-forming complexes.
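The sensitivity requirement quoted above follows directly from the inverse-square scaling of flux density with distance; the short sketch below (with an illustrative Gould's Belt cloud distance, which is an assumption and not a value from this work) reproduces the numbers in the text.

d_nearby_pc = 200.0    # illustrative Gould's Belt cloud distance (assumption)
d_far_pc = 2000.0      # a region roughly 10 times further away, as for G14.2
s_nearby_mjy = 1.0     # a 1 mJy source in the nearby cloud

# Flux density scales as d**-2, so the same source appears 100 times fainter.
s_far_ujy = s_nearby_mjy * 1000.0 * (d_nearby_pc / d_far_pc) ** 2
sigma_ujy = s_far_ujy / 5.0    # rms needed for a 5-sigma detection
print(s_far_ujy, sigma_ujy)    # -> 10.0 uJy and 2.0 uJy per beam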
The infrared dark cloud (IRDC) G14.225−0.506 (hereafter G14.2), also called M17 SWex, is part of the extended (77 × 15 pc) and massive (> 10^5 M⊙) molecular cloud first reported by Elmegreen & Lada (1976), and located to the southwest of the H ii region M17, which contains the rich cluster NGC 6618 with at least 16 O-type stars and over 100 B-type stars (Chini et al. 1980; Hoffmeister et al. 2008). Based on parallax and proper motions of 12 GHz CH3OH masers, the distance to the cloud is estimated to be 1.98 (+0.14/−0.12) kpc (Xu et al. 2011; Wu et al. 2014). More recently, Zucker et al. (2020) obtained a distance range of 1488-1574 pc by combining stellar photometric data with Gaia DR2 parallax measurements. For the present work, we use 1.6 (+0.3/−0.1) kpc as the distance to the cloud.
High angular resolution observations of the dense gas (NH3 and N2H+) and submillimeter dust continuum emission (Lin et al. 2017) unveil a network of filaments comprising two hub-filament systems (see Fig. 1; Busquet et al. 2013; Chen et al. 2019). The cloud is associated with several signposts of star formation activity, such as H2O and CH3OH masers (Jaffe et al. 1981; Palagi et al. 1993; Wang et al. 2006; Green et al. 2010; Sugiyama et al. 2017), a rich population of protostars and YSOs detected with Spitzer, and a population of intermediate-mass pre-main-sequence stars emitting X-rays detected with the Chandra X-ray Observatory, some of them lacking infrared excess emission from circumstellar disks (Povich & Whitney 2010; Povich et al. 2016). The cloud has also been observed with the Atacama Large Millimeter/submillimeter Array (ALMA) at 3 mm (Ohashi et al. 2016) and with the Submillimeter Array (SMA) at 1.2 mm (Busquet et al. 2016). The embedded population consists of 48 dust cores, with masses ranging from 0.7 M⊙ up to 78 M⊙.
One of the most prominent results found by Povich et al. (2016) is that, despite the mass of the cloud (> 10^5 M⊙) and its high star formation rate (⩾ 0.007 M⊙ yr^-1), there is a lack of O-type protostars. The brightest IRAS source in the field, IRAS 18153−1651, is associated with an H ii region hosting two stars with spectral types B1 and B3 (Gvaramadze et al. 2017). This absence suggests that either the IRDC G14.2 is only producing up to intermediate-mass stars but does not form massive O-type stars, or the massive clumps are still in the process of accreting enough material to form the high-mass stars later. Interestingly, Povich et al. (2016) observed a large-scale 'filament-halo' age gradient and mass segregation of the stellar population. The less-obscured population, which corresponds to diskless stars, is distributed across an extended halo of lower-density molecular gas surrounding the IRDC filaments. In contrast, in the more obscured core regions of the filaments (A_V > 50 mag), the more-obscured objects cluster together, containing all the youngest and most massive YSOs. Thus, the spatial distribution is accompanied by an apparent age spread. The diskless X-ray population is more evolved, less obscured, and less clustered with respect to the filaments in comparison to the YSOs, exhibiting an actual evolutionary effect. These findings suggest that G14.2 is a complex and dynamic environment with ongoing star formation activity. However, infrared and X-ray data suffer from extinction limitations. In order to overcome these limitations, sensitive and high-angular-resolution centimeter continuum observations can fill the missing piece of information by helping to identify which dust cores may actually be associated with centimeter continuum emission. This will provide a more representative sample of the protostellar population that might not be detected at other wavelengths.
In this work, we present deep, large-scale radio continuum observations obtained with the VLA toward the IRDC G14.2.The paper is structured as follows.In Sect. 2 we describe the VLA observations, the data reduction and imaging processes.The results are presented in Sect.3. We analyze the radio properties (thermal/non-thermal emission) of the detected sources in Sect. 4 and discuss the characteristics of the stellar population in G14.2 in Sect. 5. Finally, in Sect.6, we present the summary and main conclusions of this work.
The observations were conducted in two different epochs. First, X-band observations toward G14.2-S were performed during February 2018 (project 17B-236). In the second epoch, we observed G14.2-S in the C-band in two runs (2019 September 24 and 27). For G14.2-N, we followed the same strategy (i.e., two runs in C-band during 2019 September 25 and 26, and six runs in X-band during 2019 August 26, 28, and September 3, 6, 9, and 16). The duration of these individual runs was 1.7 hours for C-band and 1.8 hours for X-band, yielding a total observing time of 3.4 hours and 10.8 hours at C- and X-bands, respectively. A summary of the VLA observational parameters is given in Table 1.
Data at both C-and X-bands were taken using two 2 GHz wide basebands (3-bit samplers) and in full polarization mode.The total 4 GHz bandwidth was split into 48 spectral windows, each with a bandwidth of 128 MHz, which were divided into 64 channels with a channel width of 2 MHz.3C286 was used as the primary flux density and bandpass calibrator, and J1820−2528 was observed to calibrate the complex gains.The FWHM of the primary beam (i.e., the field of view) of the VLA has a diameter of 7 ′ at 6 GHz and 4.2 ′ at 10 GHz.
The data were processed using the VLA Calibration Pipeline within the Common Astronomy Software Applications (CASA) environment, specifically the CASA 5.4.2 release. Once the data were calibrated, they were imaged using the CASA task tclean at each frequency band. Each epoch was analyzed separately to look for potential source variability. During this analysis, two bright and highly variable sources that interfered in the final image were found outside the field of view, one at C-band in G14.2-N and another one at X-band in G14.2-S. The peak of the G14.2-N source showed approximately a factor of 6 variability, and the peak intensity of the G14.2-S source has a variability of more than one order of magnitude. In order to facilitate the cleaning process, these two variable sources were subtracted (see Appendix A). Additionally, there was a discrepancy in the positions and fluxes for all the sources in the X-band, which required the recentering of the data for the different observed days (see Appendix B).
Fig. 2. ALMA image (grey) at 3 mm (Ohashi et al. 2016) of G14.2-hub-N overlaid on the NH3 (1,1) integrated intensity (black dashed contours) from Busquet et al. (2013). The left panel corresponds to the grey rectangle marked in Figure 1 while the right panel shows a close-up of the central region around G14.2-hub-N. In both panels contour levels of the grayscale image start at 3σ and increase in steps of 15σ, where σ is the rms of the map (0.2 mJy beam^-1). Red dots depict the dust continuum sources detected with ALMA at 1.3 mm (Zhang et al., private communication). The synthesized beam is shown in the bottom left corner of both images. Symbols are the same as in Figure 1.
Notes. Columns marked with † next to their names correspond to the parameters of the observations using the common (u, v) range. (a) 3.6 cm observations were performed in six runs while 6 cm observations were performed in two runs, with the array in the A configuration. (b) During the first three runs the array was in the BnA configuration while the array was being re-configured to its high-resolution A configuration for the last three runs. (c) Units of right ascension (α) are hours, minutes, and seconds, and units of declination (δ) are degrees, arcminutes, and arcseconds.
For the purpose of creating the final images, once these variables sources were subtracted and the recentering of the X-band was done, the visibilities of all observations were inspected to establish the (u, v) plane coverage.For each region, images were created with the common (u, v) range between the C and the X band (10.8 to 969.7 kλ for G14.2-N and 6.8 to 791.1 kλ for G14.2-S) in order to ensure that similar spatial-scale structures are recovered and detected in the images at both frequencies.Finally, we performed the imaging including all epochs.All images have been corrected for the primary beam attenuation.In Table 1 we list the robust parameter used for the imaging, the synthesized beam, the position angle (P.A.) and the rms noise level of the combined image.As we can see, the beams in Table 1 are slightly different at each frequency.In order to ensure a proper comparison between the images, we also created another set of images for each field, frequency and day of observation with a common beam of 0 ′′ .6 × 0 ′′ .4 for G14.2-N and 1 ′′ .0 × 0 ′′ .9 for G14.2-S.The position angle for the common beam images was set to zero.These images have been used when a comparison of sources between different frequencies was needed, e. g., when studying the variability or estimating the spectral indices.
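A schematic CASA tclean call consistent with the imaging strategy just described (common (u,v) range, common restoring beam, primary-beam correction) might look as follows; the measurement-set name, deconvolver choice, robust value, cell size, image size and cleaning threshold are placeholders and assumptions, not the actual survey parameters.

# To be run inside a CASA session; all file names and numeric values are illustrative.
tclean(vis='G14N_Cband.ms',
       imagename='G14N_Cband_commonuv',
       specmode='mfs',                                     # continuum imaging
       deconvolver='mtmfs', nterms=2,                      # wide-band clean (assumed choice)
       uvrange='10.8~969.7klambda',                        # common (u,v) range quoted for G14.2-N
       weighting='briggs', robust=0.5,                     # robust value is a placeholder
       restoringbeam=['0.6arcsec', '0.4arcsec', '0deg'],   # common beam with PA = 0
       cell='0.06arcsec', imsize=8192,                     # placeholders
       niter=10000, threshold='0.005mJy',                  # placeholders
       pbcor=True)                                         # correct for primary beam attenuation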
Source identification
Fig. 3. ALMA image (grey) at 3 mm (Ohashi et al. 2016) of G14.2-hub-S overlaid on the NH3 (1,1) integrated intensity (black dashed contours) from Busquet et al. (2013). The top panel corresponds to the grey rectangle marked in Figure 1 while the bottom panel shows a close-up of the central region around G14.2-hub-S. In both panels contour levels of the grayscale image start at 3σ and increase in steps of 6σ, where σ is the rms of the map (0.2 mJy beam^-1). Red dots depict the dust continuum sources detected with ALMA at 1.3 mm (Zhang et al., private communication). The synthesized beam is shown in the bottom left corner of both images. Symbols are the same as in Figure 1.
The 3.6 and 6 cm continuum images towards the IRDC G14.2 reveal a rich population of compact radio continuum sources, whilst no extended emission is detected. This is likely due to the interferometric filtering which resolves out structures with sizes greater than 3′′. In order to identify compact sources we used the Python Blob Detector and Source Finder package (PyBDSF), which is a tool designed to decompose radio interferometry images into sources. The identification of the radio sources has been done by using the images without the primary beam correction. By default, the PyBDSF module recognizes as a source those emission peaks with an intensity larger than a certain threshold above the rms of the image (σ). This tool allows us to obtain the position, integrated flux, peak intensity, size from an elliptical fit, and position angle of each identified source. Nevertheless, for homogeneity, we calculated the fluxes for each source by defining a polygon at the 3σ level based on the positions identified as sources by PyBDSF. The same defined region has been used to obtain the fluxes on the images for the individual days when studying the variability (see Sect. 3.3) and when estimating the spectral index (see Sect. 3.4), using the images with the same (u,v) range and synthesized beam.
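A minimal PyBDSF call along these lines might look as follows; the image name and the rms-box setting are placeholders, and the thresholds simply mirror the 3σ island and firm-detection levels discussed in the text rather than the exact parameters used here.

import bdsf

# Identify compact sources on the non-primary-beam-corrected image (placeholder name).
img = bdsf.process_image('G14N_Cband_commonuv.image.fits',
                         thresh_isl=3.0,    # island boundary at 3 sigma
                         thresh_pix=6.0,    # peak threshold for firm detections
                         rms_box=(60, 20))  # local rms estimation box (assumption)

# Write positions, peak intensities, integrated fluxes and fitted sizes to a catalog.
img.write_catalog(outfile='G14N_Cband_sources.fits',
                  format='fits', catalog_type='srl', clobber=True)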
In our study, we adopted two different criteria to consider a firm detection: (i) sources with a peak intensity larger than 6σ, where σ is the rms of the image, or (ii) sources with a reported counterpart at other wavelengths and a peak intensity larger than 3σ. In order to find the counterparts at other wavelengths, we used the list of millimeter sources identified by Busquet et al. (2016) and Ohashi et al. (2016), and the catalog of infrared and X-ray sources from Povich et al. (2016). We established a radius of 2′′ (∼3200 au) around every source and considered as counterpart the closest source inside that radius. This search radius is reasonable given that many of the VLA sources appear to be jets that ought to be offset from the driving stars.
Fig. 4 (caption fragment). Cyan crosses depict radio sources with an X-ray and IR counterpart (Povich et al. 2016). Green diamonds depict radio sources with no IR or X-ray counterpart.
By combining these two criteria, a total of 66 sources were detected in the IRDC complex G14.2.Only ∼ 10% of the sources present a peak intensity between 3 and 6σ, and only 18 out of the 66 sources do not present a reported counterpart at other wavelengths.A total of 52 sources were detected at 6 cm and 36 at 3.6 cm.Of all these, 22 sources were detected at both bands.Regarding their spatial distribution, 32 sources were located at the G14.2-N field and 34 sources at the G14.2-S.Two sources were detected in both regions because of the overlap in the field of view of the two pointings.Fig. 1 presents the location of the centimeter continuum sources detected in this work overlaid on dense gas emission traced by the NH 3 (1,1) from Busquet et al. (2013), while a close-up view of the two hubs is presented in Figs. 2 and 3, showing also the ALMA 3 mm image from Ohashi et al. (2016).Fig. 4 present the MIPSGAL 24 µm image (Carey et al. 2009) overlaid on the centimeter sources detected in this work with their IR and/or X-ray counterparts.The parameters of the radio sources detected can be found in Appendix C, where the primary beam correction has been applied.Appendix D presents some of the sources that have been studied in more depth.The individual images of each source are presented in Appendix E. In Fig. 5 we show the distribution of sizes and intensities for the identified radio sources.We split the sample into sources detected at different frequency bands (top panels) as well as sources detected in the two fields (middle and bottom panels).As can be seen in the left column of Fig. 5, most of the radio sources detected in the IRDC G14.2 have flux densities between 30 to 70 µJy.This kind of sources would have remained undetected in typical previous surveys of star-forming regions, which typically reach sensitivities of 0.1-1 mJy.In G14.2-N, 15 sources (10 detected at C-band and 5 at X-band) are above 50 µJy.Only one of them, detected at X-band, has a flux density larger than 1 mJy.Regarding G14.2-S, there are three sources above 1 mJy (two at C-band and one at Xband).We have 25 sources with flux densities larger than 50 µJy, mostly detected at C-band.G14.2-S presents a wider range of fluxes, although there are more differences between the values detected at the different frequencies.At the X-band, most of the sources are weaker than for the C-band, with 10 of them having flux densities below 30 µJy.The median fluxes per field and band are reported in Table 2.
The right column of Fig. 5 shows the distribution of source sizes.Most of the radio continuum sources in G14.2 detected in this work are compact (< 200 mas, corresponding to ≈ 300 au at the G14.2 distance), with 35 sources (19 in G14.2-N and 16 in G14.2-S) remaining unresolved at our current angular resolution.Interestingly, the sources in G14.2-N seem to be slightly more compact than in G14.2-S.There are only 4 sources with sizes above 500 mas in G14.2-N, including a very extended and clumpy source (∼ 1100 mas, or ≈ 1700 au) in VLA-19 (see Fig. D.2), whereas for G14.2-S there are 13 sources over 500 mas.The median sizes of the sources per field and frequency are listed in Table 2.
Background sources
After identifying all the compact radio continuum sources in both fields (i.e., in G14.2-N and G14.2-S), an estimate of the number of extragalactic background sources expected in the VLA images can be obtained using the formula from Anglada et al. (1998), which gives the expected number of background sources as a function of the field of view θ_F, the observing frequency ν, and the limiting flux density S_0.
We used this expression and considered two different field of view sizes for each region and a flux density of 6σ to estimate the number of background sources in our observations (see Table 3).When considering only a region of 0.4 pc around the center of each hub (∼ 42 ′′ at a distance of 1.6 kpc), we obtain values below 1 for the number of background sources.Therefore, the probability of detecting an object not associated with the star-forming hub is small and we can assume that all the sources detected within the 0.4 pc inner region of each hub are indeed associated with G14.2-N and G14.2-S.Although the level of background contamination is low in the inner region of the cluster-hubs, this may be an important factor when considering the whole field of view at both frequency bands, since we have about 6 to 15 sources being potential background sources (see Table 3).Identifying counterparts at other wavelengths (see Sect. 3.5) will ensure the membership of the object to the G14.2 complex.
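For reference, the expected background-source counts can be evaluated with a short function; the numerical coefficients below correspond to the form of the Anglada et al. (1998) relation as commonly quoted in the literature (an assumption here, since the equation itself is not reproduced above), and the example values are only illustrative.

import numpy as np

def n_background(theta_f_arcmin, nu_ghz, s0_mjy):
    # Expected number of background sources brighter than S0 within a field theta_F,
    # following the commonly quoted form of the Anglada et al. (1998) relation.
    area_term = 1.0 - np.exp(-0.0066 * theta_f_arcmin**2 * (nu_ghz / 5.0)**2)
    return 1.4 * area_term * s0_mjy**(-0.75)

# Illustrative example: a ~0.7 arcmin region around a hub at 6 GHz with a 6-sigma
# threshold of ~10 uJy (0.010 mJy); the result is well below one source.
print(n_background(0.7, 6.0, 0.010))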
Variability
We searched for variability in the radio continuum emission of the detected sources by extracting the flux of each source on each of the different observing days (see Sect. 2). We note that the synthesized beam of these images varies slightly from day to day. In order to avoid possible biases, we convolved all the images to a common beam of 0′′.6 × 0′′.4 for G14.2-N and 1′′.0 × 0′′.9 for G14.2-S, and then evaluated the flux for each source and day. After that, we calculated the difference in flux between the maximum and minimum value, establishing a cutoff at the 3σ level for variability detection. This cutoff was computed as 3 √(σ_max^2 + σ_min^2), where σ_max and σ_min are the uncertainties of the maximum and minimum flux, respectively. As indicated in Appendix C, this uncertainty takes into account the uncertainty in the rms and the uncertainty in the flux calibration. Sources whose flux difference exceeded this cutoff were considered as variable.
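The variability criterion described above can be summarized in a few lines of Python; the per-epoch fluxes and uncertainties below are placeholders, not measurements from Tables 4 or 5.

import numpy as np

def is_variable(fluxes_ujy, errors_ujy):
    # 3-sigma criterion on the difference between the maximum and minimum epoch fluxes.
    i_max, i_min = int(np.argmax(fluxes_ujy)), int(np.argmin(fluxes_ujy))
    diff = fluxes_ujy[i_max] - fluxes_ujy[i_min]
    cutoff = 3.0 * np.sqrt(errors_ujy[i_max]**2 + errors_ujy[i_min]**2)
    return diff > cutoff, diff, cutoff

# Placeholder per-epoch fluxes and uncertainties (uJy) for one source.
fluxes = np.array([55.0, 48.0, 90.0, 60.0, 52.0, 58.0])
errors = np.array([6.0, 5.5, 7.0, 6.0, 5.8, 6.2])
print(is_variable(fluxes, errors))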
Tables 4 and 5 list the variable sources that have been detected at C-band and X-band, respectively.Notably, certain sources exhibit variability in one band but not in the other.It is important to note that this disparity does not necessarily indicate exclusive variability in a specific band.It may be due to variations in the observing days and in the duration of each observation, potentially preventing some sources from achieving the established cutoff.
Figs. 6 and 7 show in more detail the evolution of the variable sources at X-band over the six days of observation. The sources vary on short timescales, as the difference between consecutive observations ranges from hours to weeks. In G14.2-S, all sources present higher fluxes during the last three days of observation, although their behavior differs from source to source. For the rest of them, we cannot see any specific trend in the variability of the sources. Since at C-band we only have two days of observations, we cannot infer any specific trend in the variability from the fluxes reported in Table 4.
As discussed in Sect. 2, two sources in the outer parts of the observing fields were found to be very bright and highly variable, and were subtracted to produce cleaner images. These sources are not shown in Tables 4 and 5, and their details can be found in Appendix A.
Spectral indices
In order to determine the origin of the radio continuum emission, we have calculated the spectral indices for the 66 sources detected in G14.2. To do this, we have used the set of images created with the common (u,v) range and convolved to the same beam (see Sect. 2). Since the field of view differs between the C- and X-bands, the calculation is limited to those sources within the common field of view. For example, source VLA-17 is located outside the field of view at X-band, which prevents us from deriving a reliable spectral index. For the sources that have been detected in only one of the bands, we assumed a 6σ upper limit for the flux density at the non-detected band, since the in-band spectral indices present very large uncertainties. Tables 6 and 7 report the spectral indices for sources in G14.2-N and G14.2-S, respectively. After that, and taking into account the uncertainties, we classify the sources as thermal (if α > −0.1) or non-thermal (if α < −0.1) radio emitters based on their spectral index (see Sect. 4).
It is worth noting that there are some factors that may affect the accuracy of the spectral index estimation.First, the C-and X-band observations were not carried out simultaneously, so any variation in the emission could have led to an inaccurate spectral index.We also have to take into account that the fluxes have been calculated selecting the same region, and the differences in the spatial emission of the two bands may have introduced errors in the calculation.It should be remarked that the absence of detection in one of the frequency bands does not necessarily imply that the continuum radio source has a featureless spectrum.This can be due to the source being faint at that frequency, or its signal may be masked by background noise.This highlights the importance of having observations at multiple frequencies to infer the origin of the emission of the sources.
As listed in Table 6, we found 14 centimeter sources (corresponding to ≈44% of the radio sources) in G14.2-N with spectral indices clearly smaller than −0.1. There is one source (VLA-33) which has a positive spectral index. For some sources, the spectral index is very close to the −0.1 limit but we cannot classify them unambiguously, due to the uncertainty in the measurements. With this in mind, we consider that sources with a spectral index between −0.3 and +0.1 are expected to show a nearly flat spectrum, indicating emission that remains relatively constant or varies only slightly with frequency. This behavior is also commonly associated with thermal emission. Accordingly, in G14.2-N there are two sources, corresponding to ≈6% of the radio sources, with a flat spectrum that can be considered as thermal candidates. On the other hand, and as listed in Table 7, in G14.2-S we found 8 sources (≈24%) with a spectral index smaller than −0.1. There are 12 sources (≈35%) that have a spectral index larger than −0.1. There is one source (VLA-40) with a nearly flat spectrum, therefore considered as a thermal candidate. The rest of the sources are either variable, and therefore have been excluded because the spectral index is considered unreliable, or the origin of the emission could not be determined due to the uncertainty or derived limits.
Notes. The columns list: S_C,max and S_C,min: maximum and minimum integrated fluxes of the variable sources detected at the C-band; S_C,diff: difference between those two fluxes in µJy; S_C,cutoff: cutoff at which the flux difference is considered as variability; and the relative difference with respect to the maximum value.
Notes. The columns list: S_X,max and S_X,min: maximum and minimum integrated fluxes of the variable sources detected at the X-band; S_X,diff: difference between those two fluxes in µJy; S_X,cutoff: cutoff at which the flux difference is considered as variability; and the relative difference with respect to the maximum value.
Fig. 8 shows the spectral indices and limits obtained for the sources in G14.2.In G14.2-N there are more sources, compared to G14.2-S, whose uncertainty has not allowed us to classify them as thermal or non-thermal emitters.Interestingly, for those sources for which we can unambiguously determine the origin of the radio emission, we find a vast majority of non-thermal objects in G14.2-N (≈70%) compared to G14.2-S (≈40%).Fig. 9 displays the probability density of the spectral index of the radio sources for which it has been possible to determine the origin of the radio emission, corresponding to the black dots shown in Fig. 8.As we can see, G14.2-N is dominated by non-thermal sources.In G14.2-S, we see a wider range of spectral indices, although with a tendency towards positive values.We note that in this figure, the values of the upper and lower limits have been taken as true values.Since in G14.2-N we have mainly upper limits, while in G14.2-S we have more lower limits, the difference between non-thermal and thermal populations in the two regions would be more pronounced if accurate spectral indices, instead of limits, could be derived for all objects.
Counterparts at other wavelengths
The study of the counterparts at other wavelengths can give us more information about the evolutionary stage and properties of the radio continuum sources detected in G14.2.We have searched for counterparts at millimeter, infrared and X-ray, as well as presence of dense gas (Busquet et al. 2013) and maser emission (Palagi et al. 1993;Wang et al. 2006;Green et al. 2010;Sugiyama et al. 2017).For this, we have used the millimeter source catalogues from Busquet et al. (2016) and Ohashi et al. (2016), which have been completed by new high-resolution data at 1.3 mm from ALMA (Q.Zhang, priv.communication); as well as the catalogue of infrared and X-ray sources from Povich et al. (2016).We established a radius of 2 ′′ around every source to consider sources at different wavelengths to be counterparts.It should be taken into account that we have different fields of view for the observations at different wavelengths.We note that while the IR and X-ray observations cover the whole area of G14.2 (∼ 17 ′ ), the mm observations focus only on smaller regions around the central hubs.The radio observations presented in this work cover two large pointings (∼ 7 ′ and 4.2 ′ for C and X-band, respectively).Therefore, there may be additional counterparts with mm sources that cannot be identified with the current catalogues.Table 8 lists all the identified counterparts for the radio sources detected in this work.The stage-system classification taken from Povich et al. (2016) and introduced by Robitaille et al. ( 2006) is based on the physical parameters of the spectral energy distribution (SED) models, with Stage 0/I objects modeled as an SED with an infalling envelope and Stage II objects modeled as an SED with only circumstellar disks.
In Povich et al. (2016), some of the objects were classified as diskless, referring to IR point sources detected in X-rays but with no infrared excess emission above a normally-reddened stellar photosphere. Most of these are intermediate-mass pre-main-sequence stars with strong magneto-coronal X-ray emission but lacking inner dust disks. Therefore, YSOs established by Povich et al. (2016) as diskless are most likely sources in the process of clearing up the circumstellar disk material. As shown in Table 8, most of these sources present variability, which is usually found in sources in a more advanced evolutionary stage. Moreover, 4 out of the 5 diskless sources detected in our observations present non-thermal emission. Thus, our results confirm that the objects classified as diskless by Povich et al. (2016) could be equivalent to Stage III YSOs.
Notes. VLA-03, VLA-08 and VLA-32 are variable sources and therefore their spectral index may be inaccurate.
Busquet et al. (2016) found that the ratio between the number of infrared sources without a millimeter counterpart and the total number of sources, within a region of about 0.4 pc in diameter around the center of each hub, is 4 times larger in G14.2-N than in G14.2-S, suggesting a more evolved population in the northern hub. We have expanded this analysis to include the radio continuum emission reported in this work as well as the X-ray sources. When evaluating the inner 0.4 pc region, we have 5 and 9 radio sources in the G14.2-N and G14.2-S hubs, respectively. Fig. 10 summarizes the number of sources detected at each wavelength in each hub. Similar to the study by Busquet et al. (2016), we list in Table 9 the number of sources at each wavelength. Computing the ratio of IR sources without a millimeter and/or centimeter counterpart in each hub, we obtained N_IR/N_radio = 0.2 in G14.2-N and N_IR/N_radio ≃ 0.05 in G14.2-S. Thus, the relative number of infrared sources with respect to the radio sources is larger in the northern hub by a factor of approximately 4, similar to the results found by Busquet et al. (2016) using observations at millimeter wavelengths.
Analysis
As explained in Sect. 1, the radio continuum emission from YSOs can have a thermal or non-thermal nature and can originate in different processes (e.g., free-free emission from thermal radio jets or young H ii regions, non-thermal gyrosynchrotron and synchrotron emission in magnetically active YSOs). In this section, we analyze in detail the origin of the radio continuum emission for the 37 sources with well-constrained spectral indices (see Sect. 3.4) by studying the well-known correlation between the radio luminosity and the bolometric luminosity for thermal radio jets (see Anglada et al. 2018, for a review) and the connection between the radio and X-ray luminosities, expected for non-thermal radio sources with active coronae.
4.1. Thermal free-free emission: radio jets or H ii regions?
In this section we investigate whether the thermal radio emitters detected in G14.2 can be explained in terms of photoionization (i.e., H ii regions) or ionization through shocks associated with outflows and jets. For this, we computed the number of Lyman-continuum photons per second, N_Ly (see Sánchez-Monge et al. 2013a), using the flux densities at 3.6 cm. For the 19 thermal emitters, the radio luminosities are in the range of 0.04-0.45 mJy kpc^2, with a mean value of ∼0.16 mJy kpc^2. Adopting an electron temperature of T_e = 10^4 K, we obtained values N_Ly ∼ 3 × 10^42 to 3 × 10^43 s^-1, which translate to spectral types B3-B4 assuming as ionization source a single zero-age main sequence (ZAMS) star (Panagia 1973; Thompson 1984; Vacca et al. 1996; Diaz-Miller et al. 1998; Martins et al. 2005). In G14.2, the YSO population detected in the IR presents much lower luminosities, 10-100 L⊙, so we expect N_Ly ≪ 10^42 s^-1. Therefore, the emission is likely due to shock-induced ionization for most of the sources. Fig. 11 presents the relation between the radio luminosity and the bolometric luminosity for 9 out of the 19 thermal sources, including flat-spectrum sources, identified in G14.2. We compare them with the sample of radio jets compiled by Anglada et al. (2018) as reference. Although our sample comprises a relatively narrow range of luminosities (∼10-800 L⊙), there is an excess of radio emission compared to what is expected for an H ii region, and hence the radio emission is compatible with the well-known correlation for thermal radio jets. Hence, we can discard that these sources are H ii regions. The morphology of these sources appears, in most cases, elongated, indicating that they are potential thermal radio jets. In fact, 6 out of the 9 thermal radio sources in G14.2 with measured bolometric luminosities have been classified by Povich et al. (2016) as Stage 0/I YSOs, 3 of them are Stage II and 1 source is classified as Ambiguous. For the Stage III YSOs we do not have measured bolometric luminosities. Moreover, the centimeter sources found in association with H2O and CH3OH masers (Palagi et al. 1993; Wang et al. 2006)
We additionally explored whether the non-thermal radio sources, as well as the unclassified radio sources, with an infrared counterpart, and hence with measured bolometric luminosities, follow the empirical correlation for radio jets (see Fig. 11). For those sources only detected at 6 cm, we estimated the 3.6 cm radio luminosity assuming a spectral index of +0.5 (following the approach of Anglada et al. 2018). Since we know, however, that most of these radio sources present a negative spectral index, for well-classified non-thermal sources we adopted α = −0.7 to extrapolate the flux density at 3.6 cm. Our sample contains 12 radio sources: 6 of them have been classified as Stage 0/I, 5 correspond to Stage II objects and only 1 is classified as Ambiguous according to the classification of Povich et al. (2016).
In order to discard that the sources detected in G14.2 could be H ii regions, we also calculated the expected flux density, and thus the luminosity, from the number of Lyman-continuum photons per second that are expected for H ii regions (Panagia 1973; Thompson 1984). As can be seen in Fig. 11, with the exception of some sources, the rest of them do not follow the expected relation. In contrast, our sample follows the expected relation between the radio luminosity and the bolometric luminosity found by Anglada et al. (2018), suggesting that these sources are also potential radio jets. In fact, several works find that radio jets can present both thermal and non-thermal emission. The central and powering source is usually associated with thermal emission whereas the jet lobes/knots are associated with non-thermal synchrotron emission from relativistic electrons accelerated in strong shocks (e.g., Carrasco-González et al. 2010; Marti et al. 1993; Rodriguez et al. 1989; Rodríguez et al. 2005; Sanna et al. 2019). However, there are some cases in which the radio emission from jets, at the current resolution, seems to be dominated by a non-thermal origin (e.g., Reid et al. 1995; Moscadelli et al. 2016; Kavak et al. 2021). Therefore, the sample of radio sources in G14.2 with negative spectral indices, as well as the unclassified sources, is compatible with radio emission arising from radio jets, although further observations spanning a wider range of frequencies and in polarization mode would be necessary to fully confirm their nature.
The radio-X-ray relation
Previous VLA surveys of nearby star-forming regions have reported a correlation between the radio emission of YSOs and their associated X-ray emission (see e.g., Pech et al. 2016). Several findings suggest that YSOs adhere to the empirical Güdel-Benz relation (Guedel & Benz 1993; Benz & Guedel 1994) for magnetically active stars, L_X / L_R ≈ κ × 10^15.5 Hz, with κ ≤ 1 depending on the type of stars. From our VLA observations, 25 out of 66 sources (i.e., ∼38% of our sample) present a reported X-ray counterpart, but only 9 of them have measured luminosities (Povich et al. 2016) and do not present variability.
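As a rough illustration of this scaling (not a fit from this work), the relation can be evaluated as follows; the 10^15.5 Hz normalization is the commonly quoted Güdel-Benz value and, like the example luminosity, is an assumption here.

def expected_radio_luminosity(l_x_erg_s, kappa=1.0):
    # Radio spectral luminosity (erg/s/Hz) implied by L_X ~ kappa * 10**15.5 Hz * L_R.
    return l_x_erg_s / (kappa * 10**15.5)

l_x = 1.0e30    # placeholder hard-band X-ray luminosity (erg/s)
print(expected_radio_luminosity(l_x))    # ~3e14 erg/s/Hz for kappa = 1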
The absorption-corrected luminosities measure the total X-ray band (0.5-8 keV) and the hard X-ray band (2-8 keV).For this work, we use the hard X-ray band since it is less affected by absorption.Fig. 12 shows the X-ray luminosities and our derived radio luminosities for thermal, non-thermal, and unclassified radio sources.For simplicity in the representation, we considered flat sources as thermal emitters.Particularly for the unclassified sources, our data is poorly correlated to what we expected and presents a large dispersion, similar to the results found in M17 (Yanza et al. 2022) and in the Orion Nebula Cluster (Forbrich et al. 2016).As explained in Yanza et al. (2022), the lack of correlation between X-ray and radio observations can be due to the presence of potential thermal sources in the data sample.Moreover, the different timescales and high intrinsic variability of gyrocoronal flares may affect the results since simultaneous X-ray and radio observations are needed to properly study this relation.In fact, for most of the radio sources in G14.2 that present an X-ray counterpart it has not been possible to determine the origin of the emission from the spectral index.The Güdel-Benz relation is valid for non-thermal sources, so it might not apply to most of the sources.Moreover, our sample size is small and thus we cannot infer robust conclusions from the results obtained.
However, if only non-thermal radio sources are considered (i.e., filled black dots in Fig. 12), our observations seem to reproduce the Güdel-Benz relation with κ = 0.03.This suggests that the radio emission in those sources is probably produced by gyrosynchrotron radiation from the mildly relativistic electrons that are responsible for the X-ray emission.The only thermal source presented in Fig. 12 is VLA-22, which was originally classified as flat source.This source is therefore likely to be more compatible with non-thermal emission and also produced by gyrosynchrotron radiation.
Similar results were found in nearby regions such as Ophiuchus (Dzib et al. 2013), Taurus-Auriga (Dzib et al. 2015) and Perseus (Pech et al. 2016), while in Orion and Serpens it was found that the X-ray emission of YSOs was underluminous compared to the Güdel-Benz relation with κ = 1 (Kounkel et al. 2014; Ortiz-León et al. 2015; Forbrich et al. 2016). Despite these promising similarities between G14.2 and other nearby star-forming complexes, with only 5 sources in G14.2 we cannot draw firm conclusions regarding the expected radio-X-ray correlation.
Finally, in G14.2 we have identified 22 non-thermal radio emitters, 36% of them remain unresolved with our angular resolution (∼ 0 ′′ .3) and, with the exception of VLA-19 whose emission is very extended, the remaining sources have sizes < 0 ′′ .7. Therefore, based on the compactness of these radio sources, we suggest that the radio emission of most of the non-thermal radio population in G14.2 is most likely associated with gyrosynchrotron radiation from the very active stellar magnetosphere, typically found in Class II/III YSOs (Feigelson & Montmerle 1985).However, in order to fully confirm the gyrosynchrotron origin, follow-up polarization studies are needed to investigate whether these radio sources present some degree of circular polarization.
Levels of fragmentation in the G14.2 hubs
Previous observations of the two hubs in G14.2 with the SMA, at an angular resolution of ∼ 1 ′′ .5 revealed different levels of fragmentation, with G14.2-S being more fragmented than G14.2-N (see Busquet et al. 2016).Despite these differences in fragmentation, the physical properties of both hubs such as the density and temperature profiles, the level of turbulence (Mach number ∼ 5.6-6.4), the Alfvén Mach number (∼ 0.4-0.3), the rotationalto-gravitational energy ratio (β rot ∼ 0.016-0.015),the mass (979-717 M ⊙ ), and the luminosity (995-531 L ⊙ ) are remarkable similar (see Tables 5 and 6 in Busquet et al. 2016, for further details).As explained in Busquet et al. (2016), the different levels of fragmentation may be due to different reasons.The first one is the difference in the magnetic field strength, with G14.2-N having a stronger magnetic field compared to G14.2-S (see Añez-López et al. 2020).The second potential cause is the presence of the luminous IRAS 18153−1651 source, with a luminosity of ∼ 1.1 × 10 4 L ⊙ and strong UV radiation, in G14.2-N.This suggests that the UV radiation from IRAS 18153−1651, as well as from the larger number of IR sources in the northern hub compared to the southern sibling, might be suppressing fragmentation.However, with our VLA data we do not see significant differences in the number of sources (or level of fragmentation) in the two hubs as previously studied in Busquet et al. (2016).In the current work, 32 centimeter sources were detected in G14.2-N and 34 in G14.2-S.While the detection or non-detection of radio continuum sources might be related to evolutionary effects, interestingly, the latest ALMA data at 1.3 mm, with an angular resolution comparable to the VLA observations (Q.Zhang, priv.communication, see also Figs. 2 and 3), do not reveal statistical differences in terms of fragmentation: with 25 millimeter sources without a centimeter and/or IR counterpart in G14.2-N and 30 millimeter sources in G14.2-S.Therefore, we conclude that both hubs show similar levels of fragmentation based on the observations with the VLA and ALMA.Hence, it seems that the different fragmentation levels reported in Busquet et al. (2016) may have been due to poor sensitivity in previous SMA observations, or to different effects controlling fragmentation at different scales.Therefore, although the magnetic field and UV radiation (from the bright IRAS source) could determine the level of fragmentation at intermediate scales (i.e.0.03 pc scale), the fragmentation at smaller scales (i.e.0.005 pc) does not seem to be affected anymore by these effects.Thus, thanks to the new results at high-angular resolutions in the cm and mm regimes, it is very feasible that G14.2-N and G14.2-S are twin hubs in terms of fragmentation, as proposed in Busquet et al. (2016) regarding their large-scale physical properties.
Radio properties of the stellar population
The high sensitivity VLA observations allowed us to detect 66 radio sources in the IRDC G14.225.Our analysis of the spectral index in the 6-3.6 cm range reveals that in G14.2 there are 22 sources (≈33%) that clearly present non-thermal emission and 13 (≈20%) are thermal emitters.There are also 3 sources (≈5%) presenting a nearly flat emission spectrum, most likely associated with thermal emission.
One aspect that should be taken into account when examining the origin of the radio continuum emission based on the spectral index analysis is the variability of the sources. As mentioned in previous sections, we found ten sources that are clearly variable at short timescales (see Tables 4 and 5 and Figs. 6 and 7), but our observations were not designed to carefully characterize radio variability, and therefore other sources may also be variable even if not detected as such in the current observations. Follow-up simultaneous multi-frequency observations with the VLA, similar to those of Liu et al. (2014) and Coutens et al. (2019), might provide a more detailed insight into the variability of the radio sources in G14.2 and thus a better estimation of their spectral indices and the origin of their radio emission.
Despite the high sensitivity of the VLA observations, the fraction of radio detections is low in comparison with the IR and X-ray stellar population (Povich et al. 2016). Fig. 13 shows the location of the four different populations in G14.2-N (top panel) and G14.2-S (bottom panel). In each region, there are between 300 and 400 sources detected in the IR and/or X-rays, and only 44 have a radio counterpart. The IR/X-ray sources with no radio counterpart could be rather evolved objects (Class II/III) with quiescent coronal activity, and hence with no thermal radio jet and no gyrosynchrotron emission.
Regarding the millimeter population (Ohashi et al. 2016; Busquet et al. 2016; Zhang et al., private communication), Fig. 13 makes more noticeable the differences in the fields of view, since the millimeter observations are centered on smaller regions around the central hubs. From our study of the counterparts, we found that four radio sources were only associated with mm emission without any other counterpart at another wavelength. We have proposed these four millimeter sources associated with centimeter emission as new YSO candidates, and it is very likely that these objects are Class 0 or deeply embedded Class I objects. Since our study of the mm counterparts is limited only to the central region, we are likely to have more mm sources outside the hubs.
Comparison with other nearby star-forming regions
We compare now the properties of the radio sources in G14.2 to other star-forming complexes from the Gould's Belt VLA survey where their radio-source population has been studied in detail, reaching similar sensitivities and spatial resolutions as for G14.2.In particular, by comparing the radio spectral indices, which serve as indicators of the emission characteristics, we can investigate their properties across the different evolutionary stages of the YSOs.A comparative study of G14.2 with other complexes may unveil potential differences and shed light on the main characteristics of G14.2.
As previously discussed, we adopted the stage categorization used by Povich et al. (2016) (see also Robitaille et al. 2006), in which YSOs were classified as Stage 0/I (SED modelled with infalling envelopes), Stage II (SED modelled with only circumstellar disks) or Stage III (X-ray sources with no mid-IR excess from circumstellar disks). However, the standard classification of YSOs is the class categorization based on the spectral index at infrared wavelengths (Lada 1987; Andre et al. 1993; Gutermuth et al. 2009). For comparison with other nearby regions we equate Stage 0/I to Class 0/I, Stage II to Class II and Stage III to Class III. Figure 14 shows the spectral index for YSOs in different star-forming complexes, with the YSOs classified according to their evolutionary stage. We compare the results of G14.2 with Ophiuchus (Dzib et al. 2013), Serpens (Ortiz-León et al. 2015), Taurus-Auriga (Dzib et al. 2015) and Perseus (Pech et al. 2016).
We find that for the detected YSOs in Taurus-Auriga, Ophiuchus and Perseus, the more evolved objects have a more negative spectral index. Based on this, it has been proposed that the radio emission towards Class 0/I objects, with spectral indices between +0.3 and +0.5, is likely dominated by partially optically thick free-free emission (from thermal radio jets). On the other hand, Class II and III objects present radio emission consistent with either optically thin free-free emission or (gyro-)synchrotron radiation (see e.g. Dzib et al. 2013). This is in agreement with the idea that, for more evolved sources, we are no longer able to detect the thermal emission from the surrounding material, since they have already expelled most of it (e.g., Forbrich et al. 2007; Dzib et al. 2010).
Nevertheless, the Serpens star-forming region (Ortiz-León et al. 2015) and the IRDC G14.2 (this work) do not follow this trend, since younger objects are associated with non-thermal spectral indices. This might be due to the fact that these regions are composed of more massive YSOs in which non-thermal emission may be more dominant (e.g., Carrasco-González et al. 2010; Rodríguez-Kamenetzky et al. 2017; Kavak et al. 2021). An alternative explanation for the detection of non-thermal emission in the less evolved objects might be geometrical effects rather than the mass of the YSOs (e.g., Ortiz-León et al. 2015). According to this scenario, if the star is seen nearly pole-on or nearly edge-on, the non-thermal emission originating in the corona might be less absorbed by the surrounding material and can be more easily observed (Forbrich et al. 2007). This effect could also be obtained through tidal clearing of circumstellar material in a tight binary system (Dzib et al. 2010).
Fig. 13 (partial caption). Millimeter sources are from Ohashi et al. (2016), Busquet et al. (2016), and Zhang et al. (private communication). Red symbols indicate infrared sources (Povich et al. 2016). Black symbols indicate X-ray sources (Povich et al. 2016). The symbol sizes do not correspond to the respective angular resolution. The outer and inner dashed circles represent the field of view at 6 cm (∼7′ at 6 GHz) and 3.6 cm (∼4.2′ at 10 GHz), respectively.
Considering that, statistically, one does not expect a preferential orientation for YSOs, the trend found for the spectral index towards the YSOs of G14.2 might be explained by the presence of more massive YSOs compared to regions such as Ophiuchus, Taurus-Auriga or Perseus. This scenario is plausible for both Serpens and G14.2, since recent studies have confirmed mass segregation effects for both regions (see Povich et al. 2016; Plunkett et al. 2018), with more massive YSOs located in the central regions of the star-forming complex and corresponding to those preferentially studied in the radio observations. Note also that more massive YSOs evolve more quickly, which could explain why Stage III objects present more negative spectral indices in G14.2 in comparison with other regions, since they should have stronger magnetic flaring activity. As shown in Fig. 14, the dominating population of non-thermal emitters within the less evolved objects is likely to come mainly from the northern region G14.2-N. By examining Table 8, we can see that in G14.2-S only one Stage 0/I object clearly shows non-thermal emission. Thus, our results point to G14.2-N likely containing more massive objects.
Evolution and development of G14.2
The molecular cloud environment of G14.2 extends more than 1° to the southwest of the H ii region, parallel to the Galactic midplane (Elmegreen & Lada 1976; Elmegreen et al. 1979). These authors suggested sequential massive star formation from the north-eastern side, with OB stars in NGC 6618, to the M17 southwest extension, or M17 SWex (Povich et al. 2009; Povich & Whitney 2010). We now discuss the possible evolutionary stage of the IRDC G14.2 in relation to the more developed M17 star-forming complex. For this, we highlight different aspects regarding the stellar population and physical properties across the IRDC G14.2.
First, the already-developed and large H ii region associated with the bright IRAS 18153−1651 source is located to the northeast of G14.2 (see Fig. 1), while the southern region of the cloud appears more quiescent.This suggests a certain evolutionary gradient from southwest (less evolved) to northeast (more evolved), in agreement with the large scale age evolution proposed by Elmegreen & Lada (1976).
A second aspect refers to the stellar population across the IRDC G14.2. The counterparts of the radio sources at other wavelengths (see Table 9) provide valuable information on the stellar population in both regions, their properties and evolutionary stages. The number of infrared sources relative to radio sources in G14.2-N suggests that the northern hub harbours a stellar population in a more advanced evolutionary stage, while still hosting a deeply embedded population of protostellar cores (e.g., the case of VLA-14/MM1, see Appendix D.1). On the other hand, in G14.2-S, we have identified more millimeter sources without an infrared counterpart, suggesting that there is a larger population of objects at an earlier evolutionary stage. The large fraction of non-thermal emitters in G14.2-N could be due to the presence of relatively evolved YSOs (Class II and/or Class III), consistent with the ratio of IR versus mm sources. We note however that G14.2-N harbours several sources at an early evolutionary stage (i.e., classified as Stage 0/I by Povich et al. 2016; see Table 8), and thus the non-thermal radio emission might also result from strong shocks produced by radio jets powered by intermediate-/high-mass objects (Carrasco-González et al. 2010; Rodríguez-Kamenetzky et al. 2017; Kavak et al. 2021). Both analyses lead us to the conclusion that there are differences in the evolutionary stages of the two regions in the IRDC G14.2, and hint towards G14.2-N being more evolved compared to G14.2-S, and likely containing more massive objects.
The differences in age and mass seem to be in agreement with the 'filament-halo' gradient observed by Povich et al. (2016). The proposed scenario to explain the gradient combines two interrelated star formation processes: filament-driven star formation with dynamical relaxation (e.g., Bate et al. 2003) coupled with global hierarchical filament collapse (Vázquez-Semadeni et al. 2019). However, the importance of each process in producing the observed distribution is still unclear (see Povich et al. 2016, for further discussion).
One possible cause of these differences could be the proximity of G14.2-N to the brightest source in the field, IRAS 18153−1651, located at about 1.5′ south-east of G14.2-N (Busquet et al. 2013; Gvaramadze et al. 2017). As mentioned previously, it hosts two B-type stars (B1 and B3). The discovery by Gvaramadze et al. (2017) of an optical arc near the centre of the nebula associated with the IRAS source led to the hypothesis that it might represent a bubble blown by the wind of a young massive star. This could be evidence that the northern region is somewhat more evolved compared to Hub-S, which lacks similar structures.
Finally, we compared our results to the previous work by Yanza et al. (2022) towards M17 (located to the north of G14.2 and associated with a bright and well-developed H ii region). In Yanza et al. (2022), the M17 region is studied using VLA observations in X-band in the most extended A configuration. They find a median source size of ≈ 200 mas. In G14.2, our observations at X-band result in a median size of ≈ 320 mas for G14.2-S and ≈ 230 mas for G14.2-N. Therefore, sources tend to become smaller from G14.2-S to G14.2-N, and from G14.2-N to M17. This is consistent with a sequence where centimeter sources are progressively more compact towards the north. When connected with evolution, this would point to more evolved objects having more compact radio continuum emission compared to early-stage objects. Early-stage Class 0/I sources are usually dominated by radio jets, which are elongated and often resolved at sub-arcsecond resolution, whereas more evolved Class II/III YSOs are typically associated with very compact and unresolved radio emission (see Anglada et al. 2018).
Moreover, the compact radio continuum sources in the M17 region are mainly dominated by non-thermal emission. For the sources for which the in-band spectral index could be obtained (see Table 4 from Yanza et al. 2022 for further information), more than 75% present spectral indices lower than −0.1. This is in agreement with the results found for G14.2-S and G14.2-N, where most of the non-thermal radio emitters are found in the more evolved G14.2-N region. Altogether, our results combined with those of Yanza et al. (2022) confirm an evolutionary sequence starting with G14.2-S, continuing with G14.2-N and ending in M17, as first proposed by Elmegreen & Lada (1976). However, in our work we did not find evidence to support their claim that star formation in M17 SWex is triggered by the presence of the M17 H ii region.

Povich et al. (2016) have pointed out the remarkable star formation activity in the IRDC G14.2, characterized by a high star formation rate (SFR) of Ṁ = 0.0072 M⊙ yr⁻¹. This value is even higher than that of the Orion Nebula Cluster (ONC) and NGC 6618, the cluster ionizing the bright M17 H ii region (Ṁ = 0.005 M⊙ yr⁻¹). Interestingly, despite the high SFR, G14.2 lacks O-type stars (M > 20 M⊙), which is difficult to explain in the context of a standard Initial Mass Function (IMF; Salpeter 1955; Kroupa 2001). Using the N(H₂) column density map of G14.2 (Lin et al. 2017) and adopting a distance of d ∼ 1.6 kpc to the cloud, we estimate the total mass of G14.2 to be approximately 12000 M⊙. This differs from the previous estimation of ∼20000 M⊙ (Lin et al. 2017) due to the different distances adopted. Taking this updated mass estimate into account, and incorporating it into the analytical model for the cloud's evolution presented in Camacho et al. (2020) (see their Figure 13 for further details), G14.2 would be located closer to the track of an initial accretion rate of 2.9 × 10³ M⊙ Myr⁻¹ and would thus correspond to an age of 6-7 Myr, younger than previously estimated.
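A minimal sketch of this kind of mass estimate is given below, using plain Python/NumPy. The pixel size, map dimensions and uniform column density are hypothetical stand-ins rather than the Lin et al. (2017) map, so the printed value only illustrates the arithmetic M = mu_H2 * m_H * sum(N_H2) * A_pixel and does not reproduce the 12000 M⊙ estimate quoted above.

import numpy as np

# Physical constants and unit conversions (cgs)
M_H = 1.6726e-24          # hydrogen (proton) mass [g]
MU_H2 = 2.8               # mean molecular weight per H2 molecule, including helium
PC_CM = 3.0857e18         # 1 pc in cm
MSUN_G = 1.989e33         # 1 solar mass in g
ARCSEC_RAD = np.pi / (180.0 * 3600.0)

# Assumptions of this illustration (not the values behind the published estimate)
d_pc = 1.6e3                         # adopted distance: 1.6 kpc
pixel_arcsec = 10.0                  # hypothetical pixel size of the column density map
N_H2 = np.full((100, 100), 1e22)     # toy uniform N(H2) map [cm^-2]

# Physical size subtended by one pixel at the adopted distance
pixel_cm = d_pc * PC_CM * pixel_arcsec * ARCSEC_RAD
pixel_area_cm2 = pixel_cm ** 2

# Total mass: M = mu_H2 * m_H * sum(N_H2) * A_pixel
mass_g = MU_H2 * M_H * N_H2.sum() * pixel_area_cm2
print(f"M ~ {mass_g / MSUN_G:.3g} Msun")

Replacing the toy array with an actual column density map and its pixel grid is the only change needed to turn this sketch into a real estimate.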
These findings shed light on the question raised by Povich et al. (2016) regarding the late birth of massive stars in the IRDC G14.2. The cloud's slightly younger age suggests that it could continue to evolve and potentially form massive stars in the future. Furthermore, the estimated mass reservoir is lower than previously thought, challenging our previous understanding of the cloud's star-forming potential. Assuming a total mass of 12000 M⊙ and a global star formation efficiency (SFE) of 30% (Bontemps et al. 2010), we would have a total of 3600 M⊙ available to form the stellar cluster. Considering the IMF described by Kroupa (2001) and 3600 M⊙, the typical total number of stars in the cluster would be about 8300 ± 300, with a maximum of 84 ± 22 M⊙ for the most massive star in the cluster. These values come from 1000 different realizations of clusters drawn following the IMF from Kroupa (2001), and they correspond to the median number of stars and the median maximum stellar mass. The uncertainties correspond to the standard deviation of the 1000 runs. Therefore, our results suggest that, even considering that the total mass reservoir estimate is lower, G14.2 could end up forming massive stars in the future.
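To illustrate how such cluster statistics can be obtained, the sketch below draws stellar masses from a two-segment Kroupa (2001) IMF until the 3600 M⊙ budget is filled and repeats the experiment 1000 times, reporting the median number of stars and the median mass of the most massive star. The mass limits (0.08-120 M⊙), the two-segment approximation, the stopping rule and the random seed are assumptions of this example, so its output will not exactly reproduce the 8300 ± 300 stars and 84 ± 22 M⊙ quoted above (in particular, the adopted lower mass limit strongly affects the star count).

import numpy as np

# Two-segment Kroupa (2001) IMF above the hydrogen-burning limit:
# dN/dm ∝ m^-1.3 for 0.08 <= m < 0.5 Msun and m^-2.3 for m >= 0.5 Msun.
A1, A2 = 1.3, 2.3
M_LO, M_BREAK, M_HI = 0.08, 0.5, 120.0   # assumed stellar mass limits [Msun]
M_CLUSTER = 3600.0                        # stellar mass budget [Msun]
N_RUNS = 1000

def sample_segment(rng, alpha, m_lo, m_hi, size):
    """Inverse-transform sampling of dN/dm ∝ m^-alpha on [m_lo, m_hi]."""
    p = 1.0 - alpha
    u = rng.random(size)
    return (m_lo**p + u * (m_hi**p - m_lo**p)) ** (1.0 / p)

def segment_number_fraction():
    """Fraction of stars in the low-mass segment, imposing continuity at M_BREAK."""
    n1 = (M_LO**(1 - A1) - M_BREAK**(1 - A1)) / (A1 - 1)
    c = M_BREAK**(A2 - A1)               # matches the two power laws at M_BREAK
    n2 = c * (M_BREAK**(1 - A2) - M_HI**(1 - A2)) / (A2 - 1)
    return n1 / (n1 + n2)

def draw_cluster(rng, f1):
    """Draw stellar masses until the cumulative mass reaches M_CLUSTER."""
    masses, total = [], 0.0
    while total < M_CLUSTER:
        in_low = rng.random(1000) < f1
        batch = np.where(in_low,
                         sample_segment(rng, A1, M_LO, M_BREAK, 1000),
                         sample_segment(rng, A2, M_BREAK, M_HI, 1000))
        cum = total + np.cumsum(batch)
        stop = np.searchsorted(cum, M_CLUSTER)   # first star that crosses the budget
        masses.append(batch[:stop + 1] if stop < 1000 else batch)
        total = cum[min(stop, 999)]
    return np.concatenate(masses)

rng = np.random.default_rng(42)
f1 = segment_number_fraction()
n_stars, m_max = [], []
for _ in range(N_RUNS):
    m = draw_cluster(rng, f1)
    n_stars.append(m.size)
    m_max.append(m.max())

print(f"median N_stars  = {np.median(n_stars):.0f} (std {np.std(n_stars):.0f})")
print(f"median max mass = {np.median(m_max):.1f} Msun (std {np.std(m_max):.1f})")

The stop-when-exceeded convention slightly overshoots the 3600 M⊙ target; discarding the final star instead would lower the counts by a fraction of a percent.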
Summary and Conclusions
We have carried out VLA observations, in its most extended configurations, of the radio continuum emission at 6 cm (C-band: 4-8 GHz) and 3.6 cm (X-band: 8-12 GHz) towards the IRDC G14.225-0.506, an infrared dark cloud that is forming stars in two main hubs (G14.2-N and G14.2-S) with similar masses and luminosities. Our study allowed us to identify a hidden radio-continuum population of compact sources and relate their properties to G14.2 as a whole. The main findings obtained in this work can be summarized as follows:
- We detected 66 sources, 32 of which are located in the G14.2-N region and 34 in the G14.2-S region, with two sources detected in both fields. Most of the detected sources in the IRDC have flux densities around 50 µJy and are compact (≈ 200-300 mas, or 320-480 au), especially in G14.2-N.
- The number of radio continuum sources in both hubs is similar, suggesting similar levels of fragmentation and consistent with the latest mm data obtained with ALMA, which suggests that the two regions are twin hubs.
- We have identified 10 sources (3 located in G14.2-N and 7 in G14.2-S) as having significantly variable flux at radio wavelengths over periods of a few days. We note that variability may be an important factor to consider when detecting and characterizing these faint and compact objects in future studies.
- We looked for the counterparts at other wavelengths of the detected centimeter sources. We found that 5 of the radio sources are associated with H₂O and CH₃OH maser emission. 23 of the sources were already known YSOs and we classified 25 sources as YSO candidates. In the inner 0.4 pc region around the two main hubs, the number of IR sources relative to radio sources is larger in G14.2-N by a factor of 4, suggesting that the northern part is in a more advanced evolutionary stage.
- By examining the spectral index, when possible, we determined the origin of the radio continuum emission. In G14.2-N we found 14 sources with a non-thermal origin and only one thermal emitter. Two sources present a flat spectrum, most likely associated with thermal emission. In G14.2-S we found 8 non-thermal emitters, 11 thermal emitters and 2 sources with a flat spectrum. The dominance of non-thermal emission in G14.2-N could be evidence of the formation of more massive YSOs, resulting in non-thermal emission due to strong shocks.
- By comparing the bolometric luminosity with the radio luminosity, we found that the studied sources are compatible with thermal radio jets, and we excluded the presence of embedded H ii regions in G14.2 by comparing our observations with the expected relation for H ii regions.
- When comparing the radio sources with their counterparts in X-rays, we found that most of the sources are underluminous with respect to the Güdel-Benz relation with κ = 1. When examining only the sources classified as non-thermal emitters, we find that they follow the Güdel-Benz relation with κ = 0.03, similarly to other star-forming regions. This suggests that radio and X-ray emission are probably caused by magnetic reconnection in the stellar coronae.
- We compared the radio properties of the stellar population in G14.2 with other nearby star-forming complexes, such as Taurus-Auriga, Perseus, Ophiuchus and Serpens. The objects in G14.2 follow a similar trend as found in Serpens, with Stage 0/I objects being associated with more non-thermal emission than Stage II YSOs. Similar to Serpens, this may point to our region being composed of more massive objects compared to other low-mass star-forming complexes.
- A comparison of our results for G14.2 with M17, a more evolved star-forming region to the north-east of G14.2, confirms a wider evolutionary sequence starting in G14.2-S and continuing onwards to the most evolved region, M17.
- Based on the new distance estimations, G14.2 is slightly younger and harbors a lower mass reservoir than previously thought. Our analysis points out that the complex has the potential to form massive stars in the future.
Currently, conducting deep radio surveys is highly time-consuming, taking ∼11 hours for a single pointing at X-band, and hence they are limited to relatively nearby (d < 2 kpc) star-forming regions. However, the next generation of radio interferometers, such as the Square Kilometre Array (SKA) and the Next Generation Very Large Array (ngVLA), are expected to significantly improve the observations. These advanced radio telescopes will reach the same sensitivity achieved in G14.2 with just 1 hour of telescope time, revolutionizing the study of young stellar clusters at radio wavelengths by performing systematic surveys across the Milky Way. In addition, these new instruments will allow for systematic studies of short- and long-term variability in radio emission. This capability is essential for disentangling the nature of the radio emission and, when combined with deep X-ray observations, for investigating coronal-type magnetic activity across a wide range of stellar masses and evolutionary stages. Finally, it is crucial to conduct polarization observations to better understand the radio properties of YSOs. Gyrosynchrotron emission exhibits circular polarization, while synchrotron emission is linearly polarized. Although there are a few instances where linear polarization has been detected in radio jets, such as in the case of HH80-81 with a relatively high degree of polarization (Carrasco-González et al. 2010), in most cases the polarization degree is lower than in HH80-81 and will only be accessible with the next generation of radio interferometers.
Fig. 1 .
Fig. 1. Spitzer image at 8 µm (color scale) overlaid on the NH₃(1,1) integrated intensity (green contours) from Busquet et al. (2013). The contour levels range from 3 to 27 in steps of 6, and from 27 to 67 in steps of 20 times the RMS noise of the map, 9 mJy beam⁻¹ km s⁻¹. The NH₃(1,1) synthesized beam is shown in the bottom left corner. The black star depicts the position of IRAS 18153−1651. Blue crosses and cyan four-point stars depict radio sources detected in this study only at 6 cm and 3.6 cm, respectively. Pink three-point stars indicate radio sources detected at both frequency bands. The black and yellow dashed circles represent the field of view at 6 cm (∼7′ at 6 GHz) and 3.6 cm (∼4.2′ at 10 GHz), respectively. The two observed fields in this work, G14.2-N and G14.2-S, are labeled. The grey rectangles indicate the close-up images presented in Figs. 2 and 3.
Fig. 4 .
Fig. 4. MIPSGAL image (Carey et al. 2009) at 24 µm overlaid with the centimeter sources detected in this work in G14.2-N (top panel) and G14.2-S (bottom panel). Red crosses depict radio sources with an IR counterpart (Povich et al. 2016). Cyan crosses depict radio sources with an X-ray and IR counterpart (Povich et al. 2016). Green diamonds depict radio sources with no IR or X-ray counterpart.
Fig. 5 .
Fig. 5. Kernel density estimation (KDE) of integrated fluxes (left) and sizes (right) of the detected sources in G14.2. The middle and bottom panels show the distribution in G14.2-N and G14.2-S, respectively, at C- (blue) and X-band (orange). Vertical lines in the left panels show the 6σ value without primary beam correction for the C- (black) and X-band (grey), while vertical lines in the right panels show the FWHM of the synthesized beams for the C- (black) and X-band (grey).
Table 2 .
Median fluxes and sizes for the radio sources in G14.2
Fig. 6 .
Fig. 6. Integrated flux of the variable sources detected in G14.2-N at X-band during the observed days. The sources were observed during 2019 August 26, 28, and September 3, 6, 9 and 16. The black dashed line corresponds to the established cutoff for each source shown in Table 5.
Fig. 7 .
Fig. 7. Integrated flux of the variable sources detected in G14.2-S at X-band during the observed days. The sources were observed during 2018 February. The black dashed line corresponds to the established cutoff for each source shown in Table 5.
Fig. 8.
Fig. 8. Spectral index of the sources detected in G14.2. Filled and open symbols denote sources located within the 0.4 pc inner region (i.e., hubs) or outside the 0.4 pc inner region, respectively. Black symbols represent those sources for which it has been possible to determine the origin of the radio continuum emission. Grey symbols represent those sources for which it has not been possible to infer their nature. Arrows denote upper or lower limits for those sources only detected in one frequency band. Variable sources have been excluded from this representation. The black dashed line at −0.1 marks the boundary between thermal emission (α > −0.1) and non-thermal emission (α < −0.1). The grey dashed lines trace the limits where sources show a nearly flat spectrum, probably associated with thermal emission. The blue- and orange-shaded regions represent G14.2-N and G14.2-S, respectively. The black panels on the right indicate whether the sources have a reported counterpart at mm, IR and/or X-ray wavelengths, associated maser emission and/or dense gas emission.
Fig. 10 .
Fig. 10. Schematic representation of the sources detected in the 0.4 pc region around each hub and their counterparts at different wavelengths. Millimeter sources correspond to the ones detected in Busquet et al. (2016), Ohashi et al. (2016) and Zhang et al. (private communication). Infrared sources correspond to those detected in Povich et al. (2016). Centimeter sources correspond to those detected in this work.
Fig. 11 .
Fig. 11. Radio luminosity as a function of bolometric luminosity for the sources with an IR counterpart and a measured bolometric luminosity (Povich & Whitney 2010; Povich et al. 2016). Triangles represent thermal sources, filled dots depict non-thermal sources, and open circles represent radio sources with an unclassified origin of the radio continuum emission in G14.2. For unclassified radio sources not detected at 3.6 cm we extrapolated the 6 cm flux density to 3.6 cm adopting two values of the spectral index, α = −0.7 and α = +0.5 (see main text). Grey dots show the thermal radio jets compiled by Anglada et al. (2018). The solid black line corresponds to the fit to all radio jets in Anglada et al. (2018). The red dotted line depicts the radio luminosity at 3.6 cm associated with the Lyman continuum flux of H ii regions powered by stars of different luminosities (see Fig. 4 from Sánchez-Monge et al. 2013a). The black dashed line depicts our 5σ sensitivity limit in radio luminosity (∼0.02 mJy kpc²).
Fig. 12 .
Fig. 12. X-ray luminosity as a function of radio luminosity for the sources in G14.2 with a measured X-ray luminosity (Povich et al. 2016). The black line corresponds to the Güdel-Benz relation with κ = 1. The red dashed line corresponds to the Güdel-Benz relation with κ = 0.03. Black dots depict non-thermal sources, black triangles represent thermal sources, and unfilled black dots represent radio sources with an unclassified origin of radio continuum emission. Variable sources have been excluded from this representation.
Fig. E.2.
Fig. E.2. VLA continuum image at X-band (blue contours and grey image) of the source detected in G14.2-N only at X-band. Contour levels are ±5, ±3, 10, and 20 times the rms of the maps (1.5 µJy beam⁻¹). The synthesized beam of the X-band image is shown in the bottom-left corner of the panel.
Fig. E.3.
Fig. E.3. VLA continuum images at C-band (grey image) and X-band (blue contours) of the sources detected at both frequency bands in G14.2-N. Contour levels of the grey image range from 2 to 30 times the rms of the map (2.2 µJy beam⁻¹). Blue contours are ±5, ±3, 10, and 20 times the rms of the map (1.5 µJy beam⁻¹). The synthesized beams of the two bands (grey and blue for C- and X-band) are shown in the bottom-left corner of the bottom-left panel.
Fig. E.4.
Fig. E.4. VLA continuum images at C-band (blue contours and grey image) of the sources detected in G14.2-S only at C-band. Contour levels are ±5, ±3, 10, and 20 times the rms of the maps (2.9 µJy beam⁻¹). The synthesized beam of the C-band is shown in the bottom-left corner of the bottom-left panel.
Fig. E.6.
Fig. E.6. VLA continuum images at C-band (grey image) and X-band (blue contours) of the sources detected at both frequency bands in G14.2-S. Contour levels of the grey image range from 2 to 30 times the rms of the map (2.9 µJy beam⁻¹). Blue contours are ±5, ±3, 10, and 20 times the rms of the map (1.4 µJy beam⁻¹). The synthesized beams of the two bands (grey and blue for C- and X-band) are shown in the bottom-left corner of the bottom-left panel.
Table 1 .
Parameters of the observations at 6 cm and 3.6 cm with the VLA.
Table 3 .
Number of background sources
Table 5 .
Variability of radio sources at X-band.
Table 8 .
Counterparts of the radio continuum sources detected in G14.2.
Table 9 .
Number of sources detected at different wavelengths.
Region   N mm   N cm   N IR   N IR+   N IR /N radio   N IR+ /N
Notes. Number of sources within the inner 0.4 pc around each hub. N mm is the number of millimeter sources without a centimeter and/or IR counterpart. N cm is the number of centimeter sources detected in this work. N IR is the number of IR sources without a millimeter and/or centimeter counterpart. N IR+ is the total number of IR sources. N radio is the number of radio sources, that is, N mm + N cm. | 2023-11-22T06:43:19.431Z | 2023-11-21T00:00:00.000 | {
"year": 2023,
"sha1": "aaecf81f493f6f8408e8375a88cc0af9d844d654",
"oa_license": "CCBY",
"oa_url": "https://www.edpsciences.org/images/stories/librarians/EDP-AA-S2O.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "aaecf81f493f6f8408e8375a88cc0af9d844d654",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14490171 | pes2o/s2orc | v3-fos-license | Toward a Psychology of Social Change: A Typology of Social Change
Millions of people worldwide are affected by dramatic social change (DSC). While sociological theory aims to understand its precipitants, the psychological consequences remain poorly understood. A large-scale literature review pointed to the desperate need for a typology of social change that might guide theory and research toward a better understanding of the psychology of social change. Over 5,000 abstracts from peer-reviewed articles were assessed from sociological and psychological publications. Based on stringent inclusion criteria, a final 325 articles were used to construct a novel, multi-level typology designed to conceptualize and categorize social change in terms of its psychological threat to psychological well-being. The typology of social change includes four social contexts: Stability, Inertia, Incremental Social Change and, finally, DSC. Four characteristics of DSC were further identified: the pace of social change, rupture to the social structure, rupture to the normative structure, and the level of threat to one's cultural identity. A theoretical model that links the characteristics of social change together and with the social contexts is also suggested. The typology of social change as well as our theoretical proposition may serve as a foundation for future investigations and increase our understanding of the psychologically adaptive mechanisms used in the wake of DSC.
Zoia is a lively 75-year-old Baboushka. Her eventful life has seen her experience some less-than-welcome adventures, but she has always managed to adapt to unfamiliar circumstances. After completing her studies in Moscow, she was, like many other young educated Russians, deported by USSR authorities to another state. Her destination was Frunze (later renamed Bishkek), a land in Central Asia warmer than hers and made slightly cooler by its unfamiliarity. Despite the diversity of Frunze, with ethnic Kyrgyz, Ukrainians, and other Slavic groups forming sizeable minorities, the Russian population remained a majority. During the Soviet era, Zoia was told that she lived in one of the most powerful countries in the world, where crime rates were low and the population enjoyed decent education and food supply, as well as the opportunity to save money for retirement.
The diversity of ethnicities eventually bred great tension, and the collapse of the Soviet Union in the early 1990s deeply affected Zoia's life. At the age of 54, she learned that her country was in ruins, that her rights as a Russian were diminished and that her language was widely frowned upon within the newly formed Kyrgyz Republic, Kyrgyzstan. Meanwhile, the disorganized authority allowed for an explosion in crime rates and increasing scarcity of resources. Zoia lost all of her life savings. The money she earned was no longer sufficient to cover basic necessities. Despite her position as a chief engineer, Zoia was forced to work a second job selling newspapers at the corner of her street just to make ends meet.
Although Zoia's story may seem uniquely dramatic, it is only one among over one billion (Sun and Ryder, 2016). Social change is indiscriminately pervasive and global, restricted to neither the developing nor the western world (e.g., Ponsioen, 1962; Smith, 1973; Chirot and Merton, 1986; Zuck, 1997; Sztompka, 1998; Fukuyama, 1999; Weinstein, 2010; Nolan and Lenski, 2011; Greenfield, 2016). Dramatic social change (DSC) is the new normal and can presently be witnessed across a multitude of contexts, from political and economic upheaval to desperate mass migration, and from natural or human disasters to technological advances.
Social change has always been a field of great interest for the social sciences, especially among sociologists, since it seems that "all sociology is about change" (Sztompka, 1993, p. xiii; see also Sztompka, 2004). Many sociology texts have entire sections devoted to social change (e.g., Bauman, 2003; Latour, 2005; Hewitt et al., 2008; Giddens et al., 2011), all aimed at addressing one main question: What leads to social change? Many sociological theories have been suggested to explain the different "macro" processes associated with the onset of revolutions, social movements, or important technological changes. A "macro" theory focuses on the structural factors or defining events that contribute to DSC and is useful when considering how social changes are brought upon an entire group, community, institution, nation, or indeed society as a whole. The macro approach, however, is seriously limited when it comes to "micro" processes, which focus on the equally important question of the consequences of social change, or, in other words, how individual group members are impacted by social change (e.g., Rogers, 2003). Thus, the exclusive research focus on macro processes has left unanswered the pivotal question: What are the psychological consequences of social change?
Given the potentially dire consequences of DSC, it is surprising that psychologists have neglected it as a topic of rigorous academic pursuit, particularly given the current reality of vast globalization and massive immigration. To date, research focusing on the impact of social change on the well-being of individuals has not been clearly established (Kim, 2008). Moreover, the adaptation mechanisms that people develop when coping with such contexts remain largely unknown. The goal of the present paper is to argue that psychology needs to focus on the psychology of social change (de la Sablonnière and Usborne, 2014). I argue that the bridge between the "macro" processes of social change and the "micro" processes of its psychological impacts has yet to be built. I suggest that social scientists must first focus on conceptualizing social change in a manner that includes both macro and micro processes in order to understand individuals' adaptation to social change. Thus, as the first step in moving toward a psychology of social change, I target what is considered the most difficult challenge: conceptualizing social change.
First and foremost, conceptualizing social change requires untangling the complexity of the topic by formulating a typology of social change (see Table 1). To that end, a large-scale meta-review that assembled original perspectives, theories and definitions of social change within both the sociological and psychological literature was performed. The typology of social change that emerged distinguishes four separate social contexts associated with social change: stability, inertia, incremental social change, and DSC. DSC, because of its frequency in today's world, and because it is threatening to people, requires special attention. Thus, the proposed typology of social change drills deeper and articulates four necessary characteristics for a change or an event to be labeled as "dramatic social change": rapid pace of change, rupture in social structure, rupture in normative structure, and threat to cultural identity. Finally, I come full circle by proposing a theoretical model that links together the four characteristics of DSC within the proposed typology of social change (see Figure 1). In sum, the typology of social change I am suggesting can be useful in creating a theoretical consensus among researchers about what social change is, which may in turn allow for a coordinated, evidence-based strategy to address the psychology of social change.
SOCIAL CHANGE IN SOCIOLOGY AND PSYCHOLOGY
Today, the field of sociology is at the forefront of social change theory and research, with a particular focus upon the factors that constitute and are prerequisites to social change. Within the sociological literature, three main theories have been championed for their attempt to explain social change: Evolutionary Theory, Conflict Theory, and Functionalist Theory. Each theory is characterized by its own perspective on social change (see Table 2, where a global overview of the conceptualization of social change is offered).

Table 1. Typology of social change: the four social contexts.
Stability: A situation where an event, regardless of its pace, does not affect the equilibrium of a society's social and normative structures nor the cultural identity of group members. The event may, however, impact an isolated number of individuals.
Inertia: A situation where an event, regardless of its pace, does not either reinstate the equilibrium of a society's social and normative structures or clarify the cultural identity of group members.
Incremental social change: A situation where a slow event leads to a gradual but profound societal transformation and slowly changes the social and/or the normative structure or changes/threatens the cultural identity of group members.
Dramatic social change: A situation where a rapid event leads to a profound societal transformation and produces a rupture in the equilibrium of the social and normative structures and changes/threatens the cultural identity of group members.

Despite the first appearance of "social change" in the psychological literature more than 70 years ago, only a few isolated psychologists have focused on social change per se and even fewer have offered a clear definition or conceptualization of the concept. The first paper that defined social change was published in the Academy of Political and Social Science and was entitled Psychology of Social Change. Social change was defined as "always a slow and gradual process" (Marquis, 1947, p. 75). From that point in time to the dissolution of the Soviet Union in 1991, there have been very few attempts to reintroduce social change into the field of psychology (e.g., Pizer and Travers, 1975; Schneiderman, 1988). However, after the dissolution of the Soviet Union and the fall of the Berlin Wall, there has been a small surge of research on social change in psychology. For example, several edited books (e.g., Thomas and Veno, 1992; Breakwell and Lyons, 1996; Crockett and Silbereisen, 2000) and special issues of journals (Blackwood et al., 2013) have focused exclusively on social change and on people's reactions to it. For clarity purposes, Table 3 attempts to summarize the various theories or perspectives in different subfields of cultural and social psychology while Table 4 attempts to do so in subfields of psychology.
Table 2. Theories, perspective on social change, and key authors.
Evolutionary theory: Society moves in a linear direction from a simple to a more complex structure. Key authors: Comte, 1853/1929; Spencer, 1898; Pareto, 1901/1968.
Conflict theory: Individuals and their groups fight to maximize their benefits. Society is in a constant state of disequilibrium. Key authors: Marx and Engels, 1848.
Functionalist theory: Society is in a constant state of equilibrium. When a change occurs in one part of society, adjustments are made. Social change occurs when the equilibrium is compromised due to the rapidity with which events occur. Key authors: Durkheim, 1893/1967; Parsons, 1951.
LIMITATIONS OF CURRENT RESEARCH AND CONCEPTUALIZATION OF SOCIAL CHANGE IN SOCIOLOGY AND PSYCHOLOGY
As indicated in the summary tables, both contemporary and traditional theorists in sociology and psychology have addressed social change through a variety of macro sociological or societal lenses, and equally from a plethora of micro, psychological, or individual perspectives. Theory and research thus far have demonstrated that social change is a complex entity (e.g., McGrath, 1983; Buchanan et al., 2005; Subašić et al., 2012) that can be conceptualized in many diverging (and confusing) ways.

Table 3. Theories or perspectives on social change in subfields of cultural and social psychology, with key authors.
Social Identity Theory (SIT): Social identity relies on two aspects that may be associated with social change. First, SIT is a theory of social structure that is based on perceptions of legitimacy, stability, and permeability. Second, SIT proposes identity management strategies such as collective action whereby minority groups aim to maintain or acquire a positive and distinctive social identity. Key authors: Tajfel and Turner, 1986.
Social Dominance Orientation (SDO): In terms of SDO, social change can be interpreted as the opposition of hierarchy-enhancing attitudes in individuals with high SDO and hierarchy-attenuating ones in individuals with low SDO. Key authors: Sidanius and Pratto, 1999.
Relative Deprivation Theory (RDT): RDT can be applied to social change in two distinct ways. First, collective relative deprivation occurs when people compare their group to other groups and feel that their group is worse off, which will motivate them to improve their status by means of collective action. Second, in times of DSC, people are usually confronted with a unique situation that results in confusion and the loss of social cues. It is therefore easier and more relevant for them to compare their group's present situation to their group's status at another well-defined time period, than to compare their group with another group. Recent research proposes the use of a historical trajectory when assessing one's group's collective relative deprivation. Key authors: Runciman, 1966; de la Sablonnière et al., 2009a, 2010.
Immigration and Identity Integration (III): Immigration is a form of social change that requires human adaptation. Research in this field has demonstrated that individuals who simultaneously identify with their culture of origin and with the receiving group's culture and also desire contact with both cultures experience the highest levels of well-being. Key authors: Benet-Martínez and Haritatos, 2005; Berry, 2005; Amiot et al., 2007.
Identity Process Theory (IPT): IPT explores the structure of an individual's identity and the coping strategies used when facing an identity threat or change that results from social change. Key author: Breakwell, 1986.
System Justification Theory (SJT): SJT is a theory that explains how the status quo is preserved. It is more a theory of stability than of social change. Both advantaged and disadvantaged individuals endorse system-justifying ideologies to preserve the existing social structure. Key authors: Jost et al., 2004.
Identity Threat Theory (ITT): In ITT, when a threat to identity occurs as a result of social change, individuals will regulate the structure of their identity by restoring the imbalance and modifying their identity through different processes that include integrating the new elements into their identity and assigning a positive or negative valence to them. Key authors: Steele et al., 2002.
Adjustment to Change Theory (ACT): ACT considers how individuals adjust to social change and argues that factors such as social support and the nature of the event predict the way individuals and groups evaluate social change. Key author: Goodwin, 2006.

The challenge associated with defining social change may well explain why it is an understudied phenomenon (de la Sablonnière) and highlights the challenge of moving forward in studying its psychological impact on ordinary people. The typology of social change presented here offers an initial attempt at clarifying the meaning of social change from a psychological perspective. That is, I focus on an individualistic perspective, but attempt to address the role that macro processes play in terms of our more micro or psychological focus. Here, I discuss three main issues that point to the necessity to properly conceptualize DSC. First, and most importantly, the conceptualization and understanding of social change does not reach a consensus within the scientific literature (e.g., Coughlin and Khinduka, 1976). Furthermore, few scientists define precisely what they mean when using the concept (e.g., Saran, 1963). For example, when social change is studied from a social identity theory perspective (Tajfel and Turner, 1986), or a sociological conflict theory perspective, social change is conceptualized almost exclusively in the context of collective action (Krznaric, 2007). In light of this, collective action is defined as a means for group members to achieve an improved social position for their group in the social hierarchy (Taylor and McKirnan, 1984; Batel and Castro, 2015; de Lemus and Stroebe, 2015). In contrast, cultural psychology and developmental psychology conceptualize social change in a broader manner (e.g., societal transformations such as the fall of the Soviet Union; immigration) where change is not limited to the context of intergroup conflict (Sun and Ryder, 2016). The fact that there is divergence in conceptualizing social change is preventing coordinated research on social change, because not all types of social change are considered. With some theories (e.g., relative deprivation theory, social identity theory, evolutionary theory, conflict theory), social change is conceived mostly as an autonomously controlled and unidirectional process toward group change; these conceptualizations do not account for social changes that are outside of human control, such as natural disasters (e.g., Coughlin and Khinduka, 1976).
Equating social change with collective action (see Stroebe et al., 2015), for example, neglects uncontrollable social transformations such as socio-political reforms and natural disasters over which individuals or groups exert no control. Indeed, the majority of individuals who experience DSC have little control over such events. Since previous classifications can only explain some instances of social change, a theory that would clarify the characteristics required in conceptualizing DSC for all types of change has become a necessity.

Table 4. Theories or perspectives on social change in subfields of psychology, with key authors.
Key authors: Feldman and Laland, 1996; Laland et al., 2000.
Developmental psychology: Research in this field has demonstrated that social change has the potential to impact developmental stages for children and adolescents as well as their identities and well-being. Key author: Greenfield, 2009, 2016.
Industrial/organizational psychology: Focuses on organizational change as a form of social change. Three main themes emerge from this field: how to successfully implement organizational change, how to limit the negative impact of organizational change, and how to understand the psychological processes of people who are confronting organizational change. Key authors: Kanter, 1991; Burke and Litwin, 1992; Sanzgiri and Gottlieb, 1992; Meyer and Allen, 1997; Reichers et al., 1997.
Finally, the third issue that pushes me to develop a typology of social change is that, mainly in sociology, a specific event that can be characterized as social change can be interpreted in light of different theories of social change. Let us take the 2005 Tulip Revolution in Kyrgyzstan as an example. Evolutionary theorists may argue that this revolution followed the natural evolution of Kyrgyz society. On the other hand, functionalist theorists may argue that there was disequilibrium in Kyrgyzstan at the time of the revolution. However, it would be beneficial to conceptualize social change the same way in order to be able to assess its impact on individuals. What is needed is a conceptualization of social change that can be interpreted in light of all the theories and processes that have been developed thus far. When an in-depth analysis of the literature is performed, the essential characteristics that define social change across theories may be ascertained. For example, one of the characteristics that was identified in conceptualizing DSC was the rapid pace of social change. The rapid vs. slow pace of social change is important, for instance, to distinguish a DSC from an incremental social change where transformations in the social structure take place without major disruptions. Whether one conceptualizes social change from a functionalist theory, a social identity theory, or a developmental theory perspective, most researchers from these distinctive fields point to the pace of change as one pivotal and essential element that characterizes DSC. Thus, when I base the typology of social change upon such characteristics, garnered from previous research in both sociology and psychology, an all-encompassing conceptualization of social change may be obtained, and later used to guide empirical research independently of the diverging theoretical perspectives.
My observations on the limitations of sociology and psychology should not detract from the insightful contributions these disciplines have made to our understanding of social change. Indeed, these social scientists have tapped into very important issues. For example, although collective action is not the only type of social change, the research on this topic has successfully identified factors that lead individuals and groups to be dissatisfied with their conditions and engage in collective action. However, as Sampson (1989) pointed out: "we have not gone far enough in connecting our theories of the person with social change, in particular, with major historic transformation in the social world" (p. 417). Since our contemporary social world is characterized by social change (Weinstein, 2010), like Sampson (1989), I argue that "a psychology for tomorrow is a psychology that begins actively to chart out a theory of the person that is no longer rooted in the liberal individualistic assumptions, but is reframed in terms more suitable to resolving the issues of a global era" (p. 431).
In sum, social change needs to be clearly examined because future research is limited without an all-inclusive typology of social change; one that can bridge the epistemological differences between theories from various fields of research and diverging theoretical perspectives. What is needed is a clear conceptualization of social change that considers, and includes, the different characteristics that compose DSC and that were suggested by researchers from all these diverging areas and theoretical orientations.
CONSTRUCTING A TYPOLOGY OF SOCIAL CHANGE: THE CHARACTERISTICS OF DSC
Two separate databases from sociology and psychology were targeted to collate relevant peer-reviewed publications: Sociology Abstracts and PsycInfo. Including the year 2016, a total of 5,676 abstracts were carefully analyzed (90% inter-judge reliability; Table 5). Two inclusion criteria were used to determine if a manuscript was relevant to our typology of social change. First, the selected abstract, and then the articles, needed to a) focus on social change by including a relevant original definition or providing an original perspective on the concept (originality), or b) focus on one's perspective of social change at either the individual or group level (perceptions).
When reviewing the literature, I had one main goal: selecting and identifying the necessary characteristics of DSC that could either be present or not in other social contexts (i.e., stability, inertia, and incremental social change). Scientists refer to the characteristics in two different ways: (1) formally, when defining or describing DSC, incremental social change, stability, or inertia, and (2) informally, when introducing their research on social change 2 . I made sure that the included articles sufficiently addressed one or more of the four selected characteristics (i.e., rapid pace of change, rupture in social structure, rupture in normative structure, and threat to cultural identity, see Table 6). These four characteristics were chosen after a first reading of each of the articles (up to October 2013). They emerged most consistently and were singled out more often for their importance. From prior knowledge, I anticipated that "pace of change" and "social structure" would surface. The other two emerged naturally. From prior knowledge, I also expected the term "valence of change" (i.e., negative change) to emerge (e.g., Slone et al., 2002;de la Sablonnière and Tougas, 2008;de la Sablonnière et al., 2009c;Kim, 2008). However, that characteristic did not appear in a significant number of papers. The fact that some authors report "positive" change as having negative consequences (e.g., Prislin and Christensen, 2005;Bruscella, 2015) and "negative" change as having positive consequence (e.g., Yakushko, 2008;Abrams and Vasiljevic, 2014) may explain why the valence did not emerge as an important characteristic of DSC.
To conceptualize an event as DSC, all four characteristics must be present. For example, if an event is affecting only the normative structure in a gradual manner, it would not be possible to label that event as DSC. As for the other three social contexts (stability, inertia, and incremental social change), each has its own unique configuration of characteristics (see Figure 1) 3 .
The Pace of Change
The first characteristic that emerged regards the pace, which could either be slow or rapid, and is defined as the speed at which an event impacts a collectivity. When defining social change, researchers from both sociology and psychology distinguish two types of social change based on the pace of change: incremental (e.g., first-order change, beta change, decline, gradual, small-scale) and dramatic (e.g., second-order, gamma, abrupt, collapse, large-scale).
Theories of social change have explicitly and/or implicitly acknowledged the pace of social change as a central determining factor toward its characterization. For example, in one of the earliest versions of their seminal book, Lenski and Lenski (1974) state: "The most striking feature of contemporary life is the revolutionary pace of social change. Never before have things changed so fast for so much of mankind" (Lenski and Lenski, 1974, p. 3, see also Fried, 1964;Rudel and Hooper, 2005). In their new edition entitled Human Societies: An Introduction to Macrosociology, Nolan and Lenski (2011) describe how slowly human evolution has progressed for thousands of years until about 100 years ago, when humans began to evolve at an accelerated pace. Similarly, Weinstein (2010) suggests that for the last few decades, there has been "rapid and accelerating rates of change in human relations, from the interpersonal to the international level" (p. xvii).
It is worthwhile to note that a few key authors refer to pace when distinguishing different types of social change. For example, in organizational psychology, Nadler and Tushman (1995) distinguish slow "incremental" change from fast "discontinuous" change, where the latter would be characterized as DSC in the typology of social change. According to these authors, incremental changes are intended to continually improve the fit among the components of an organization. These changes can either be small or large; nonetheless, there is a succession of manageable changes and adaptation processes. In contrast, discontinuous changes are often linked to major changes in the global scope of the industry and involve a complete break with the past as well as a major reconstruction of almost all elements of the organization. These changes are more traumatic, painful, and demanding as individuals are required to acquire a whole new set of behaviors and discard old patterns. These dramatic changes are not made to improve the fit, but to construct a new collectivity, be it a nation-state, institution or sub-group of the larger collectivity. Newman (2000) also distinguishes between first-order change and second-order change in the context of organizations. According to him, a first-order change, which is equivalent to incremental social change, "is most likely during times of relative environmental stability and is likely to take place over extended periods of time" (Newman, 2000, p. 604). In other words, this type of change occurs slowly and allows the organization and its members to adapt to the changes gradually. However, a second-order change, or DSC, is radical, and transforms the core of the organization (Newman, 2000). In this case, the change is so sudden that it does not necessarily allow individuals to adapt to the process (Buchanan et al., 2005). Similarly, Rogers (2003) defines social change as abrupt, arising when the entire system is modified and jeopardized because changes are too fast for the system to adjust. In his book, Diamond (2005) contrasts "decline" (where minor ups and downs do not restructure the society) with "collapse" (an extreme form of several milder types of decline), which makes it a DSC. An example of collapse is when most of the inhabitants of a population vanish as a result of ecological disasters, starvation, war, or disease. Examples of this are genocides such as Rwanda's, which claimed around 800,000 lives, destroyed much of the country's infrastructure and displaced four million people (Des Forges, 1999; Zorbas, 2004; Pham et al., 2004; Staub et al., 2005; Schaal and Elbert, 2006; Prunier, 2010; Yanagizawa-Drott, 2014), the Armenian Massacres, which saw the systematic extermination of about 1.5 million minority Armenians in Turkey (Dadrian, 1989, 1998), or Cambodia's genocide, which involved the death of almost two million people through the Khmer Rouge's policies of relocation, mass executions, torture, forced labor, malnutrition, and disease (Hannum, 1989). All these events led to an inordinate number of deaths and population movements in a short, restricted period of time.
To be considered dramatic, a social change needs to be quick and must involve a "break with the past" (Nadler and Tushman, 1995; see also Armenakis et al., 1986). The example most often used in the literature is the breakdown of the communist system in Eastern Europe and the Soviet Union (e.g., Kollontai, 1999; Pinquart et al., 2009; Round and Williams, 2010; Walker and Stephenson, 2010; Chen, 2015). For example, when Pinquart et al. (2004, p. 341) introduced their research on social change, they made a distinction between "gradual" change, such as ideological change in many Western societies, and "abrupt social change," which represents a form of social change that may be spurred by a sudden, dramatic transformation of economic, political, and social institutions.
Rupture in the Social Structure
The second characteristic of DSC that emerges from my review regards a rupture in the social structure of a collectivity or a group. Social structure is a term that has several different uses in the sociological literature and this is, in part, because of the lack of agreement on how the term social structure should be defined (Porpora, 1989; López and Scott, 2000). One main dispute centers on the dualism of "action" (or agency) vs. "structure" in mainstream sociological work (for a discussion see López and Scott, 2000). Consequently, many of the definitions describe behaviors rather than the role of social institutions (e.g., Cortina et al., 2012; Tanner and Jackson, 2012; Wilson, 2012). For example, Tanner and Jackson (2012) define social structure as "the formation of groups via connections among individuals" (p. 260), which focuses on meso-level interactions among individuals. Similarly, Macionis et al. (2008) define social structure as "any relatively stable pattern of social behavior" (p. 13).
The social structure being discussed in the present paper refers to macro-level elements of society such as institutions that facilitate and structure collective interactions, roles or behaviors. Thus, directly inspired by the most prominent definitions of social structure in the literature (Marx, 1859/1970; Giddens, 1979; Porpora, 1989; López and Scott, 2000; Stinchcombe, 2000), social structure is defined here as a system of socio-economic stratification, social institutions, organizations, national policies and laws that help structure the norms, roles, behaviors, and values of community members 4.

4 Defining social structure represents a challenge that goes beyond the scope of the present paper. From my understanding of the literature, there are as many conceptions of social structure as there are scientists working on that concept. The most important issue that demonstrates how hard it is to define social structure is the fact that one of the most prominent sociologists, Giddens (1979), refers to a "duality of structure" when defining social structure (structure vs. agency). On the one hand, social structure represents institutions or more specifically "collective rules and resources that structure behavior" (Porpora, 1989, p. 195). Here, scientists refer to "groups, institutions, laws, population characteristics, and set of social relations that form the environment of the organization" (Stinchcombe, 2000, p. 142), or to "Lawlike regularities that govern the behavior of social facts" (Porpora, 1989, p. 195). On the other hand, social structure represents "the underlying regularities or patterns in how people behave and in their relationships with one another" (i.e., agency; Giddens et al., 2011, p. 3). Here, the definitions often describe normative behaviors or the roles of individuals rather than the role played by social institutions (e.g., Cortina et al., 2012; Homans, 1951; Mayhew, 1980; Tanner and Jackson, 2012; Wilson, 2012). This duality launched a debate in sociology that was reflected not only in Giddens' work but also in other sociologists who have devoted their writings to defining social structure (e.g., Parsons, 1964; Mayhew, 1980). For example, Porpora (1989) reports four principal ways of conceptualizing social structure that reflect either of these conceptions. More recently, expanding on the work of Bourdieu (1975) and of Goffman (1983), López and Scott (2000) proposed that there is another aspect of social structure that must also be considered in addition to the institutional and relational structures: the embodied structure, described as the "habits and skills that are inscribed in human bodies and minds" (p. 4). To add to that complexity, some researchers (e.g., Bronfenbrenner, 1979, 1994; for other "system views" see for example Marx, 1859/1970; Habermas, 1987) describe the possible "systems" that are, like Russian dolls, embedded in each other. These systems include the ecological environments "conceived as a set of nested structures" (Bronfenbrenner, 1994, p. 39): the microsystems, the mesosystems, the exosystems, the macrosystems, and the chronosystems. This "ecological model" illustrates the complexity of social structure as a sociological term. Because of the lack of clarity, or maybe because the definition of social structure points to different aspects of the social structure, scientists often avoid defining social structure in their papers, and thereby contribute to the general confusion. Not that the other aspects or levels of social structure are not important (e.g., meso, micro), but the social structure being discussed in the present paper refers exclusively to macro-level elements of society such as institutions and other environmental factors that help facilitate and structure collective interactions, norms, roles, and behaviors.

In both sociology and psychology, a rupture in the social structure is at the heart of definitions of social change. For example, for Breakwell and Lyons (1996), changes involve the disintegration of previous national and international order and set in motion a process of re-definition and re-evaluation of societal norms, belief systems, and power structures. While the communal sense of continuity and permanence is challenged, social change often represents a period of massive transformations in political, social, and economic structures (e.g., Goodwin, 1998; Kim and Ng, 2008; Chen, 2012). This conceptualization is similar to the definition inspired by sociologists and provided by Silbereisen and Tomasik (2010, p. 243) where "social change is understood as a more or less rapid and comprehensive change of societal structures and institutions, including changes to the economic, technological, and cultural frameworks of a society (Calhoun, 1992)" or to Kohn's definition of radical social change: "we refer not to the pace of change but to the nature of the change-the transformation of one political and economic system into a quite different system" (Kohn et al., 1997, p. 615).

When research focuses on collective action, social structure is placed at the root of its definition. For example, "Breakdown Theories" in sociology argue that social movements result from the disruption or breakdown of previously integrative social structures. This theory regards collective action as a form of social imbalance that results from the improper functioning of social institutions (Tilly et al., 1975). Macionis et al. (2008) also suggest that, "revolutionary social movements attempt to target the whole collectivity by radically changing social institutions" (p. 452). Put differently, for social movements and collective action to occur, social institutions, and consequently the social structure of society, need to be altered. In other words, social change "is the sudden shifting of power from group to group" (Schrickel, 1945, p. 188). To many authors, DSC involves a rupture in the social structure (e.g., Prilleltensky, 1990) where people need to "negotiate their way through or around social structures" (May, 2011, p. 367).
The third characteristic of DSC that emerged from the literature is the rupture in the normative structure of society. While reading on the subject, I noticed an important distinction between social structure and normative structure. As mentioned in the previous section, that distinction pointed to a duality that is also observed by theorists in sociology who attempt to define social structure (e.g., Giddens, 1979;Mayhew, 1980;Porpora, 1989;López and Scott, 2000). Although both the social and normative structures refer to the functioning of a society, they each point to two different aspects of communities and groups. As discussed earlier, the social structure is associated with macro processes such as social institutions (e.g., Government), whereas the normative structure is related to micro processes as they principally refer to community members' habitual behaviors and norms.
Based on the work of Taylor and de la Sablonnière (2013, 2014), the normative structure is defined here as the behaviors of most community members whose aim is achieving collective goals. In other words, when the normative structure is clear, people know what to do and when to engage in specific behaviors in order to meet the overarching goals of the collectivity. The definition of normative structure also takes its inspiration from an array of different domains in the scientific literature. Mainly, it comes from definitions of social change, which most often involve behaviors and habits that are disrupted by the event of a dramatic and rapid social change. For example, Bishop (1998, p. 406) clearly states that social change in its transformational form refers to "the ability of a group to behave differently, even to creating brand-new elements, within the same social identity." This definition concurs with those of many other authors, such as Delanty's (2012) concept of "normative culture" or May's (2011) account, where mundane "ordinary" activities take a central place in social change.
Research and theories on social change have placed the normative structure among their central tenets. For example, Tomasik et al. (2010) argue that social change involves "changes of the macro-context that disturb habits, interrupt routines, or require novel behaviors relevant for a successful mastery" (p. 247). These authors also assert that when a gradual social change occurs, "old options of thinking and behaving are usually still available whereas abrupt social change is often associated with an immediate blocking of old options" (Pinquart and Silbereisen, 2004, p. 295). Therefore, in the latter case, it will be necessary to develop new ways of doing things. Jerneić and Šverko (2001) argue that "major political and socioeconomic changes may strongly influence people's life role priorities, which are otherwise relatively stable behavioral dispositions" (p. 46). In fact, the normative structure of a society comprises not only norms and behaviors, but also the roles that people have in their everyday lives. When a DSC occurs, these normative elements of people's lives are all greatly affected, to the point where they need to be redefined. Similarly, McDade and Worthman (2004) refer to "socialization ambiguity," a state present in the context of DSC where "inconsistent messages or conflicting expectations regarding appropriate beliefs and social behavior during the course of socialization may be a substantial source of stress for the developing individual" (p. 52; see also Arnett, 1995; Tonkens, 2012).
This rupture in the normative structure of society is present not only when radical changes such as natural disasters occur, but also when social change is the result of collective actions within a society. Subašić et al. (2012) acknowledge that "what we do is evidently shaped by social norms, by institutional possibilities, and institutional constraints. But equally, we can act-act together that is-to alter norms, institutions, and even whole social systems" (p. 66). Therefore, when members of a society come together and engage in collective actions, an important aspect of society they aim to change deals with the norms and normative structure.
The importance of the normative component involved in DSC is in accordance with the Normative Theory of Social Change developed by Taylor and colleagues (Taylor and de la Sablonnière, 2013, 2014; see also de la Sablonnière et al., 2009b). According to their theory, any group-whether it be at the collective, community or country level-functions along the basic 80-20 principle in times of stability. According to this principle, most of the citizens in a functioning society (i.e., 80% of them) will exhibit normative behaviors that agree with the normative structure of the society in order to accomplish collective goals such as achieving a healthy society, and by extension, personal goals such as maintaining a healthy lifestyle. It is the 80% that provide social support, when necessary, to the 20% of citizens who do not function successfully in the society. In theory, as long as there is a decent majority of people who conform to the normative structure, a society should function relatively smoothly. Unfortunately, this is not always the case. Sometimes, when a society is confronted with DSC, its normative structure is ruptured, which may lead to societal dysfunction or important disruptions in the "usual" behavior of group members. In such a situation, the number of group members exhibiting behaviors that are in agreement with the collective goals of the group will be lower than usual. Therefore, it is possible that instead of having 80% of group members acting according to the normative rules of the society, only 30 or 40% of individuals will follow these rules. In this case, it becomes very difficult for people to restore the functional equilibrium of the normative structure, as only a few group members are in a position to provide the necessary social support for the entire society to function properly (Taylor and de la Sablonnière, 2014). What is suggested here is consistent with the work of Albert and Sabini (1974). These authors refer to the importance of a supportive environment, or social support, which has a sufficient presence in "slow change," but not when the context is one of rapid change.
Threat to Cultural Identity
The fourth characteristic of social change is a threat to the cultural identity of a group. This characteristic is a difficult one to label since different authors use different terms to describe a threat to cultural identity (i.e., lack of clarity, identity conflict, identity crisis, lowered identification, identity confusion). As opposed to terms such as identity conflict, identity crisis, lack of identity clarity, and identity change, "threat to cultural identity" was chosen for its capacity to suggest a potential modification in identity. To be considered DSC, the cultural identity in its current form must somehow be jeopardized, challenged, or lowered. Values and beliefs are, per se, questioned, and the individual may sense a general lack of clarity and feel threatened to the core of his or her group identity, value system, or beliefs.
Many scientists have defined and researched collective and/or cultural identity. Recently, Ashmore et al. (2004) have defined collective identity as "first and foremost a statement about categorical membership. A collective identity is one that is shared with a group of others who have (or are believed to have) some characteristic(s) in common" (p. 81). This definition is similar to the one from Taylor (1997), in which cultural identity is referred to as the beliefs about shared rules and behaviors (Taylor, 1997, 2002; Usborne and de la Sablonnière, 2014).
When a social change occurs, it threatens the cultural identity of all community members. In the present paper, inspired by previous work on cultural identity, I define threat to cultural identity as a serious threat to identification and to the clarity of the shared beliefs, values, attitudes, and behavioral scripts associated with one's group. Throughout the literature I reviewed, cultural identity threat was manifested according to three main themes. The first theme that stood out is that threats to identity are associated with a loss of identity or an identity change (e.g., subtractive identification pattern; de la Sablonnière et al., 2016). Some authors directly mention the threat to cultural identity within the context of major social change (e.g., Vaughan, 1986; Smelser and Swedberg, 1994; Sztompka, 2000; Wyn and White, 2000; Van Binh, 2002; Terry and Jimmieson, 2003). For example, in his paper on how cultures change as a function of mass immigration, Moghaddam (2012) argues that globalization results in sudden contact among different groups of people from different countries. This form of sudden contact has often resulted in the extinction of many cultures and languages, such as those of Indigenous peoples around the world. Therefore, globalization makes people feel that their collective identity is threatened. Specifically, they experience a loss in many components of their cultural identity, including their values and their language (see also Van Binh, 2002). The process described by Moghaddam is similar to the one proposed by Lapuz (1976), who argues that when social change occurs rapidly, people's beliefs and values are threatened since the old guidelines are no longer available. One consequence of this threat is that people become confused, as values and beliefs contribute to the emotional security and psychological survival of individuals (Lapuz, 1976; Varnum, 2008). This is in agreement with Albert's (1977) proposition: "Rapid change constitutes a major threat to self-identity" (p. 499). Similarly, in their book entitled Changing European Identities, Breakwell and Lyons (1996) discuss the mechanisms associated with change in identities in the context of the development of the European Union and refer to a loss of national identity. This change in cultural identity is similar to what Wall and Louchakova (2002) describe as a "shift in the cultural collective consciousness" (p. 253). This consists of a change in the American self and the emergence of new selves, more independent and alive in the context of change (see also Neves and Caetano, 2009; May, 2011).
The second theme is associated with the lack of identity clarity in the event of DSC. This lack of clarity is due to uncertainties or inconsistencies in the definition of one's identity. A clear cultural identity is defined as "the extent to which beliefs about one's group are clearly and confidently defined" (Usborne and Taylor, 2010, p. 883; see also Taylor, 2002). It has been theorized and demonstrated that an unclear cultural identity can result in lower self-esteem (Usborne and Taylor, 2010). Thus, if the entire collective is experiencing an unclear cultural identity, it may affect people's ability to function effectively in their society. Similarly, Macionis et al. (2008) refer to inconsistencies in the context of socialization in times of important change. People try to seek out new roles, try new "selves" (Macionis et al., 2008, p.461). They need to adapt to the inconsistent model their societies are projecting, which leads to "socialization ambiguity" (McDade and Worthman, 2004, p. 49). Because social change brings uncertainty in society, it can affect many aspects of individuals' lives such as family relations (Noak et al., 2001), and aspects associated with the self such as "emotions, values, perceptions, identity" (Wall and Louchakova, 2002, p. 266).
Finally, as a third theme, authors refer to conflicting identities within the context of dramatic contextual change. For example, Becker conducted a study to find out how rapid social change, such as introducing television in a community that had never owned televisions before, would impact body images of girls and women in that community (Becker, 2004). She found that television caused confusion and conflicts about ideal body images, and consequently "reshap[ed] [their] personal and cultural identities" (Becker, 2004, p. 551). In some cases, it even led to eating disorders (Becker, 2004), which has a direct link with the way people evaluate and perceive themselves. In other words, this DSC altered their identity. In fact, severe contextual changes can challenge the meaning of identity and threaten its existence (Ethier and Deaux, 1994; Macek et al., 2013). Similarly, Hoffman and Medlock-Klyukovski (2004) argue that contemporary organizations are "typically marked by conflicting interests and contradictory demands on individuals" (p. 389). This is similar to Chen (2012), who refers to the need for a transformation and the need to create new cultural norms and values when confronted with the context of social change.
As many different concepts surround each of the four social contexts, it was necessary to choose a meaningful label for each. For "stability" and "inertia," the choice was relatively easy because these two labels are commonly used and applied consistently. The term "status quo" was also considered rather than "stability" (e.g., Prilleltensky, 1990; Diekman and Goodfriend, 2007; Mucchi-Faina et al., 2010). However, because there could also be "status quo" in the context of inertia (e.g., Subašić et al., 2008), the term "stability" was preferred.
When it came to "incremental" and "dramatic" social change, the decision was more arduous as authors from different research fields use different labels. For example, instead of referring to "DSC, " Golembiewski et al. (1976) refers to "gamma changes"; Nadler and Tushman (1995), to "discontinuous change." Others refer to "second-order change" (Watzlawick et al., 1974;Bartunek and Moch, 1987;Bate, 1994;Newman, 2000), to "abrupt" (e.g., Back, 1971; or even to "rapid" change (e.g., Becker, 2004;McDade and Worthman, 2004). The term "dramatic" social change was chosen for its ability to clearly and distinctively define the situation confronting ordinary people. In a similar fashion, the term "incremental" social change was preferred over the labels: "first-order change, " "beta change, " and "continuous change."
Stability
When there is stability, the actual state of a society is maintained and the majority of group members are actively attempting to attain society's goals. As Weinstein (2010) describes it, it is a state in which "the established order appears to be operating effectively, and disturbing influences from within or from other societies are insignificant" (p. 9; see also Bess (2015), where no change is equated with stability). Indeed, none of the four characteristics of social change are present. For example, the social and normative structures fluctuate little, and changes do not affect what is defined as normal behavior in a community (Harmon et al., 2015). Of course, personal change, such as bereavement or divorce, still occurs for some members of society. However, in the event of a personal change, the social or normative structures are not disrupted, mainly because the collective social support system remains functional and people can rely on that support in case they experience changes in their individual lives. This is also consistent with the findings of Albert and Sabini (1974), who argue that changes occurring in a supportive environment or in a peripheral element of society are perceived as less disruptive than those occurring in a nonsupportive environment because the strain upon society is attenuated.
Consistent with previous research, stability can be defined as a situation where an event, regardless of its pace, does not affect either the equilibrium of a society's social and normative structures or the cultural identity of group members. The event may, however, impact an isolated number of individuals. An example that might clarify this definition of stability is the event of an election. Although many people can get excited and seem to be affected by this event, an election does not necessarily bring about a rupture in a society, even if it involves a change of political party. The core elements of society remain stable and citizens resume their activities without feeling their lives have been overly disrupted by the election and its outcome. If, for instance, supporters of the defeated party feel sad and hopeless about the defeat, plenty of other citizens will be available to help them cope since most of them will not be affected by the change of government. However, in a different context, the event of an election may trigger DSC; for example, when it leads to a social revolution.
Inertia
In contrast with stability, a context of inertia involves a situation that does affect a large number of people, if not most of the people composing a society. Inertia is defined as a situation where an event, regardless of its pace, neither reinstates the equilibrium of a society's social and normative structures nor clarifies the cultural identity of group members.
In times of inertia, if a "positive" event occurs, there is no sustainability to maintain its positive impact. Here, the example of Belarus is used, a country where the population has been in a state of inertia since the fall of the Soviet Union. Lukashenko has been the president of the country since 1994. Under his autocratic rule, Belarus is known as the last dictatorship in Europe. Many Belarusians are longing for a more democratic and open society, yet the country remains in inertia. Buchanan et al. (2005) describe a situation of inertia as an "absence of appropriate activity, a lack of capability, a failure to pay attention to signals, and thus as an impediment rather than a desired condition" (p. 190). Inertia is seen as an undesirable situation where constructive change is not possible because the organization (or the group) does not have the capacity (e.g., lack of resources or will) to carry out the needed change. These authors also argue that when a change is implemented, its sustainability requires managers and staff (or community members) to share the same objectives. Uncertainty about the future must be minimal.
Accordingly, one can assume that the criteria underpinning sustainability in the event of a change are already absent in a society that has stagnated due to inertia. Therefore, inertia in a society such as Belarus constitutes a context where the population is uncertain about the future and does not share the same long-term goals as its government. There is a desire for positive social change, but the actual structure of the society makes it difficult for any change to be implemented and sustained. Indeed, for a positive change to be maintained, it must have the support of individuals in power since they have the appropriate resources to address society's problems. Unsurprisingly, the sustainability of such a change is threatened by an autocratic style of governing (Buchanan et al., 2005).
In sum, inertia differs from stability. In the case of inertia, most members of society desire a change from the actual state of their group, but are unable to properly sustain change due to a lack of collective social support and an unclear cultural identity. In contrast, in the case of "stability, " the society functions in an efficient manner when meeting the collective goals.
Incremental Social Change
Incremental social change is defined as a situation where a slow event leads to a gradual but profound societal transformation and slowly changes the social and/or the normative structure or changes/threatens the cultural identity of group members. The slow pace is necessary for incremental social change to occur. Moreover, at least one of the other three characteristics needs to occur. In their recent paper, Abrams and Vasiljevic (2014) speak of "growth, " which could represent one form of incremental social change that involves "wider acceptance of shared values and tolerance of different values" and of "recession" where "disidentification" with current groups can occur (p. 328).
One of the most cited examples of incremental social change is technological innovation (e.g., Rieger, 2003; Weinstein, 2010; May, 2011; Hansen et al., 2012). Often, there is no social structural rupture associated with the wide use of technology, and the normative structure as well as social support remain intact. Given its incremental nature, this type of social change does not instantly produce conflict between old and new behaviors. For instance, when television was introduced, people bought it without knowing the consequences of the implementation of this new technology in their life (Becker, 2004; Macionis et al., 2008; Weinstein, 2010). Today, in retrospect, we know that buying a television set entailed a plethora of new behaviors that altered our society and our way of living. Indeed, some changes in society seem to be a "by-product of our pursuit of other goals and interests" (Subašić et al., 2012, p. 62). The long time span that is typical for incremental social change makes its outcomes unpredictable and unintentional. For instance, as Weinstein (2010) states, "It would be impossible to assess exactly what role electronic telecommunication has played in our global revolution, in part because its effects continue to reverberate and magnify as you read this" (p. 4).
The cell phone is a particularly good example of incremental social change. When it came onto the buyer's market, only a select few possessed one. However, over the years, it became increasingly normative to have a cell phone and, today, it is almost inconceivable not to have one. Furthermore, when cell phones were first marketed, they were used mainly for business rather than for social purposes, which is the current primary use (Aoki and Downes, 2003). In the same vein, other technological changes, such as the emergence of personal computers (Kiesler et al., 1984; Robinson et al., 1997), the Internet (DiMaggio et al., 2001; Brignall III and Van Valey, 2005), and social media (Robinson et al., 1997; O'Keeffe and Clarke-Pearson, 2011; Oh et al., 2015), will, in the future, be recognized as key events in the historical transformation of social structures and social norms. Such technology does not represent a DSC, but it is a social change nonetheless, as it has modified the way people interact with one another in an incremental manner. As the change occurs over a relatively long period of time, there is consistency in the pattern of change, which allows social structures to adapt and, thus, to remain intact (Nadler and Tushman, 1995). Individuals experiencing incremental social change are therefore able to adapt, given that the collective social support is not altered. For example, there is support for people who have yet to possess a cell phone; if they want to buy one but do not understand how it functions, there are plenty of people who can help them adapt to this new technology. Even if technological change is conceptualized here as an incremental change, it is possible that technology is used to provoke a DSC, for example by instigating an important social revolution (Rodriguez, 2013).
Although technology provides the most fitting examples, incremental changes can also be observed in other aspects of society, such as medicine. Indeed, an advance in medicine such as effective birth control (Goldin and Katz, 2002) has also been the cause of a profound incremental social change. The example of contraception is crucial, as the pill deeply affected gender roles in society by giving women the capacity to control their own sexuality. The pill had direct positive effects not only on women's career investments, but also on their opportunity to attend school longer. The pill forever changed women's involvement in our societies, and the repercussions of this incremental social change still echo today through struggles for gender equality, but also in the form of women actively involved at every level of the modern workplace, including higher management and governmental positions. In other words, the gradual nature of incremental social change makes it a profound change in society that disturbs neither the social structure nor the collective social support system.
Dramatic Social Change
DSC has been defined as "profound societal transformations that produce a complete rupture in the equilibrium of social structures because their adaptive capacities are surpassed" (de la Sablonnière et al., 2009a, p. 325). Although this definition is based on previous sociological work (Parsons, 1964; Rocher, 1992), it is adapted here according to the four characteristics of DSC. Specifically, I suggest that DSC be defined as a situation where a rapid event leads to a profound societal transformation and produces a rupture in the equilibrium of the social and normative structures and changes/threatens the cultural identity of group members.
As with incremental change, DSC induces fundamental transformations in society. However, the shift occurs at a much more rapid pace, provoking a break with the past. Some authors have highlighted this sense of discontinuity by referring to DSC as the disintegration of a previous social order or as the break in a frame of reference (Golembiewski et al., 1976;Nadler and Tushman, 1995;Breakwell and Lyons, 1996). They also use terms such as the "construction of something new, " a "reconceptualization, " or a "re-definition." Indeed, the breakdown of a social structure conveys the need for the reconstruction of core elements in a society. Accordingly, DSC can be conceptualized as a complete rupture in the social structure that marks the end of one period and the beginning of another one, or where a type of society is transformed into another (Tushman and Romanelli, 1985;Kohn et al., 2000;Weinstein, 2010). Other researchers, such as Rogers (2003), also see rapid social change as intertwined with the social structure. More specifically, Rogers (2003) states that rapid social change can threaten social structure by surpassing the adaptive capacities of individuals. Unsurprisingly, DSC is the most disruptive type of change not only for the social structure but also for the majority of society members experiencing it, i.e., the normative structure as well as cultural identities are challenged. As DSC entails a re-definition of values, norms and relations, individuals can no longer rely on their habits and routine; they need to learn new skills and new definitions and more challengingly, unlearn the old ways of doing things (Nadler and Tushman, 1995;Tomasik et al., 2010). Consequently, DSC is described as a painful and confusing experience for individuals (Hinkle, 1952;Lapuz, 1976;Nadler and Tushman, 1995;Kohn et al., 2000;Wall and Louchakova, 2002;Rioufol, 2004;Hegmon et al., 2008).
A good example of DSC is the breakdown of the Soviet Union. If I return to Zoia's example, it is clear that all the people in Kyrgyzstan and in the former Soviet Union were affected by the breakdown of the Soviet Union. Zoia is not the only one who lost all her savings: the vast majority of people lost their savings within a matter of days. In terms of social support, whom could she have relied on if all of her friends were also in the same situation? Regarding the fall of the former Soviet Union, Goodwin (2006) argues that older people were inclined to receive less social support, in part because the majority of the population, including family members, were struggling with several jobs just to provide themselves with basic needs. Furthermore, elderly citizens could not even rely on formal social services, because the collapse of the former Soviet Union caused a decline in formal state support, which left them no time to rebuild their retirement income. This illustrates the rupture in the structure of society that can be found when a DSC occurs, as well as the effect on the majority of ordinary group members who cannot rely on collective social support.
COMING FULL CIRCLE: THEORETICAL IMPLICATIONS
Heraclitus, an ancient Greek philosopher, is credited for saying that "the only thing constant is change." Gradually or within an instant, civilizations, societies, communities or organizations that often seem immutable face multiple DSCs. Social scientists agree that social changes are not only intensifying but also defining today's world. In fact, Weinstein (2010) has underscored that "rapid change, both peaceful and violent, is a fact of life that virtually everyone on Earth today has come to expect, if not unconditionally accept" (p. 3).
For the present paper, my aim was to initiate a conversation about the psychology of social change. Thus, I briefly reviewed the major perspectives on social change in both sociology and psychology. Research conducted in these fields and their subfields has remained in distinct silos, with no effort made toward aggregating their findings. This has unfortunately resulted in the absence of an encompassing approach in the current literature on social change: social change has never been integrated into a single perspective that would define or contextualize DSC within the spectrum of different social contexts. More importantly, social change has not been conceptualized so that micro processes, macro processes, and the important relations between them are addressed. As a result, the typology of social change introduces different social contexts (e.g., stability) that can serve as a basis of comparison for DSC. Based on my review of the literature, I suggest four necessary characteristics of DSC (Table 6).
The present paper then offers a first step toward unifying the variety of theories of social change, which are currently isolated from each other. Indeed, this approach aims at addressing the challenge raised by Sun and Ryder (2016) concerning our need for "a more nuanced understanding of rapid sociocultural change combined with sophisticated research methods designed to address change in a multilevel way" (p. 9). The typology of social change I am suggesting is an emerging concept; thus, I invite debate with the hope that the views presented here will stimulate others to contribute to a needed understanding of DSC within an individual perspective. More importantly, based on such a typology of social change, theoretical models could be suggested, as they might offer a guide to understanding the consequences of social change. For instance, such theoretical models could answer these three questions: Are the different social contexts associated with one another? What makes a society move from one social context to another (e.g., from stability to DSC)? What is the role of the different characteristics of DSC? So far, answers to these three questions have been left lingering, and the different characteristics of DSC have not been arranged in a sequential way, nor have they been identified as key movers of society from one state to another. In Figure 1, I offer a theoretical model that integrates the social contexts and the characteristics of DSC as a first step toward a psychology of social change.
As seen in Figure 1, neither a slow- nor a fast-paced event will influence the status quo in either stability or inertia. There will therefore be no break with the past and thus no rupture in the social and normative structures. In these two social contexts, if an event were to occur rapidly, the current situation of a group or society would remain unaffected by it; that is why pace is not the only characteristic needed to define DSC. For example, if a plane crashes, which is a rapidly occurring dramatic event, it does not necessarily affect an entire community. Also, in a state of stability, when a fast- or slow-paced event takes place, because the normative and the social structures are unaffected, there is no direct threat to the group's cultural identity. Similarly, when an event occurs in a state of inertia, there is no additional threat to the society's cultural identity, because the normative and social structures are unaffected.
In contrast, in a state of incremental social change, slow-occurring events, if profound enough, will gradually change the social and normative structures, as well as threaten or change cultural identity. For a DSC to occur, a fast event needs to take place. If that event has enough impact (i.e., the society is not in a state of stability or inertia), it will rupture the social and normative structures. As shown by many different DSC contexts, there are three possible scenarios when it comes to the rupture of these two structures: (1) the social structure ruptures first, which later leads to the rupture of the normative structure (e.g., Zhang and Hwang, 2007), (2) the normative structure ruptures first, which later leads to the rupture of the social structure (e.g., Centola and Baronchelli, 2015), or (3) both the social and normative structures rupture simultaneously and influence each other.
An example of the first scenario would be the latest presidential election in the United States. The recent proclamation of Donald Trump as president carries the potential for political transformations as well as changes in the United States' economic structure (rupture of the social structure). The leadership of Trump's administration could bring about major structural change that would then lead to a rupture of the normative structure. At this point, there are indications that this new governance (social structure) may very well affect the normative structure. Some members of the population have become more "open" to expressing their reluctance to have more immigrants come to the USA, which could eventually lead to a rupture in the normative structure where different ethnic groups overtly fight each other within America. A second example was the loss of the French Canadians to the English Canadians at the Battle of the Plains of Abraham in 1759. This battle was a pivotal moment in the Seven Years' War and gave power to the British troops (Veyssière, 2013). The battle culminated in the French losing most of their economic structural power to the English and marked the start of a decline in education. Consequently, the French mentality and behaviors were modified. The norms had to be adapted to new rules and to the loss of economic power (Veyssière, 2013).
The normative structure can rupture before the social structure in situations such as the African-American Civil Rights Movement in the United States, the fall of Apartheid in South Africa, or the Quiet Revolution in Québec. If in the past African-Americans were afflicted by a sense of resignation, leaders such as Martin Luther King Jr. and Rosa Parks gave them the will they needed to fight for a better future for themselves. This rupture in the normative structure led to the African-American Civil Rights Movement which, in turn, brought about changes to the social structure (e.g., school desegregation). This movement against racial inequality, segregation, and discrimination instigated the Civil Rights Act of 1964, which banned any type of segregation based on race, color, religion, or sex, as well as other changes in federal legislation.
The breakdown of the Soviet Union is an example that can be used to illustrate a simultaneous rupture of the social and normative structures. This event caused major transformations in the economic, political, and social structures (rupture to social structure). Simultaneously, a large proportion of the population found themselves in a great economic crisis, which led to disruptions in their usual behaviors and habits, such as working multiple jobs instead of just one (rupture of normative structures).
When the normative and the social structures are ruptured (regardless of the order in which this occurs), cultural identity will be threatened. There will be a global sense of confusion, ambiguity, and lack of clarity that might motivate individual group members to change their identification with their group.
Depending on the society's and individuals' abilities to cope, there are two possible outcomes: stability or inertia. If the society in which DSC has taken place is able to develop coping and adaptation mechanisms, both at the individual and societal levels, stability might be restored. Stability would then be achieved when the social and normative structures, however different, are brought back to functionality and when cultural identity is clear and no longer under threat. In contrast, if the society and individuals are not able to develop coping mechanisms, society might enter a state of inertia. Even though a society in a state of inertia is no longer going through major social changes, the need or desire for change still lingers (Sloutsky and Searle-White, 1993). This can be due to a DSC that did not, in the end, really change the way a collectivity is ruled or how its citizens are treated (Moghaddam and Crystal, 1997; Moghaddam and Lvina, 2002).
CONSEQUENCES OF DSC
Knowing about the range of different social contexts such as stability, inertia, incremental change, and DSC as well as the specific characteristics of DSC, has the potential to guide researchers in terms of assessing DSC and its impact on the psychological well-being of ordinary group members. Specifically, after establishing a clear typology of social change, including potential theoretical models, it is now possible to move on to the second step of the psychology of social change. In this second step, we need to address whether and how different coping mechanisms determine (mediate, moderate) the influence of DSC on psychological well-being. This question goes hand in hand with the work of Norris et al. (2002) who reviewed 160 studies involving natural disasters, mass violence, and technological disasters. They concluded from more than 60,000 participants that such events have negative repercussions on participants' lives. In most of the research they report, social support, economic status, and age were the identified factors that may be associated with a better adaptation to social change. Although diverse factors were suggested, the research they reported was "atheoretical and little of it is programmatic" (Norris et al., 2002, p. 249). In accordance with Norris et al. (2002), I argue that the mediators or moderators involved in adaptation mechanisms should become the focus of future studies. The four characteristics I have identified have the potential to become pivotal in meeting this objective. In sum, the link between social change and well-being is still unclear (e.g., Sun and Ryder, 2016). Such an investigation could eventually guide us in designing concrete interventions to help people adapt to the challenges of DSC (Rogers, 2003;Vago, 2004).
The concept of resilience emerges from the literature as potentially useful for understanding people's coping mechanisms. Resilience is defined as the act of bouncing back in the face of adversity (Bonanno, 2004). For the specific example of DSC, resilient individuals would be those who have been able to maintain their normal functioning and adapt themselves to adverse situations (Masten, 2001; Curtis and Cicchetti, 2003; Luthar, 2003; Masten and Powell, 2003). Research has shown that a significant number of people are able to adapt to challenging personal situations (e.g., Bonanno, 2004). However, resilience has mostly been studied within the context of personal changes such as the death of a loved one or a personal trauma (Bonanno, 2004). As with personal changes, variation in people's reactions to DSC may be due to individual differences in resilience. This highlights the need to consider this variable within the psychology of social change. More concretely, the literature on resilience may prove to be important when linking people's perceptions of the characteristics of DSC to the various paths of recovery (e.g., resilience, recovery, chronic distress, and delayed reactions; Bonanno, 2004).
While most research on resilience focuses on "personal events, " there is, however, another type of resilience known as "collective resilience" or "community resilience" (e.g., Landau and Saul, 2004;Kirmayer et al., 2011) which may be more relevant in the context of DSC as the concept hints that the majority of society is affected by the change. To illustrate collective resilience, let us consider the case where the normative structure of a society is dissolved and its cultural identity is threatened. Individuals in this situation would no longer have guidelines and values to individually cope with DSC. Moreover, every individual affected by the change would be in the same negative situation. Consequently, individuals might need to find ways to collectively adapt to the transformations. The processes associated with resilience may thus differ in situations of personal vs. social change. I therefore believe it is important to explore whether the adaptation mechanisms are the same in a context of DSC where social support is not readily available.
CONDUCTING RESEARCH ON SOCIAL CHANGE
In order to speak of a real psychology of social change, we must be able to actually study social change and its consequences. The use of a mix of methodologies that would include large correlational or longitudinal surveys conducted in the field as well as laboratory experiments (de la Sablonnière et al., 2013; see also Liu and Bernardo, 2014; Sun and Ryder, 2016) might prove to be the only way to truly study social change and its consequences. On the one hand, correlational designs conducted in the field are necessary to capture people's firsthand experience with DSC. They are, however, limited by their design, which prevents claims of causality. They are also known to be demanding in terms of both human and financial resources, and may well be dangerous at times for researchers. Moreover, they require an intimate knowledge of the culture, such as the language, as well as contacts within the community to facilitate the research and collaboration process.
On the other hand, laboratory experiments are necessary to establish the controlled conditions needed to understand associations between the characteristics of social change and their consequences. Laboratory experiments, however, are difficult to design, because it is a challenge to reproduce the actual characteristics of social change in the laboratory, which limits their ecological validity (de la Sablonnière et al., 2013). Indeed, social change typically entails various elements such as historical processes, a collective perspective, and associated cultural elements (Moghaddam and Crystal, 1997), which must be taken into consideration in order to replicate their impact in an artificial setting. For example, the impact of the Tohoku tsunami in Japan or the Syrian conflict cannot be recreated in their entirety in a laboratory; nor can all the characteristics of social change be taken into consideration in a laboratory study designed to assess the impact(s) of social change. However, if an array of studies using different characteristics of DSC were to be conducted (or a combination of multiple characteristics), the convergence of the results would enable us to better understand and thereby predict the impact of DSC on individuals and communities. At the very least, in a laboratory, researchers can expose participants to imagined changes through a scenario or a video that would include, in the experimental condition, one or more of the four characteristics of DSC (Pelletier-Dumas et al., submitted). If the scientific community accepts that experimental studies will not exactly mirror DSC, but instead test some of its characteristics in a large number of experiments, there is potential for laboratory experiments to bring an important contribution that would eventually allow a generalization to the real world (for examples see Betsch et al., 2015; Caldwell et al., 2016; Pelletier-Dumas et al., submitted).
The difficulties of conducting research on social change are, however, amplified by the challenge of obtaining ethical consent in a manner that allows for timely research. In terms of experimental manipulations of DSC, obtaining the ethics board's consent can be tedious. Indeed, according to some authors (Kelman, 1967; Bok, 1999; Clarke, 1999; Herrera, 1999; Pittenger, 2002), deceiving participants is difficult to justify ethically. This objection to the use of deception can undermine any attempt to seriously study DSC, as deception can be a valuable methodological asset (Bortolotti and Mameli, 2006), especially with such an elusive subject. Furthermore, research on new ground requires new techniques and methods, on which ethicists can put limits to ensure that they do not cause harm to participants (Root Wolpe, 2006). As with any new technology, methods focused on inducing dramatic-like changes can be perceived as having unsuspected risks.
CONCLUSION
In order to truly understand the interplay between individuals and their context, social psychological theories must take into account that we live in a constantly changing world. Unfortunately, although social psychology was rooted in understanding social change, most modern psychological theories refrain from addressing a "true" psychology of social change and prefer relegating social change to the field of sociology.
Through increasing the focus on social change, we could combine, on the one hand, sociology's emphasis on the importance of social change with, on the other hand, psychology's emphasis on the importance of complex individual processes. As a result, my theoretical proposal aims at bringing together sociology, where social change is central, and psychology, where rigorous scientific methods allow us to study the psychological processes of individuals living in changing social contexts.
In general, more research on the concept of social change is needed so that we can help predict, prevent, and minimize the negative impact of social change. If psychologists and sociologists work together to move toward developing a psychology of social change, perhaps we could come to better understand and help people, like Zoia, who lost almost everything they had, consequently improving the quality of millions of lives experiencing DSC.
AUTHOR CONTRIBUTIONS
RdlS conceived and developed the ideas and wrote the article as sole author. Research assistants were paid to find and read the abstracts of all articles reviewed in this manuscript.
ACKNOWLEDGMENTS
RdlS Department of Psychology, Université de Montréal. I wish to thank all my colleagues and the members of the Social Change and Identity Lab for their comments and help. They have heard me talk about social change for the last 10 years and have never stopped encouraging me to pursue these ideas. I am also grateful to all the "Baboushkas" and the people I have met in contexts of DSC. These people continue to inspire me every day. I am grateful to the editor and the three evaluators for their insightful comments. I would also like to thank Matthew Davidson, Saltanat Sadykova, Lily Trudeau-Guévin, Alexie Gendron, Jérémie Dupuis, Raphaël Froment, and Donald M. Taylor for their help during different steps of the preparation of this manuscript. Finally, I want to thank Nada Kadhim who was patient enough to coordinate the material and the team-including me-at all stages. | 2017-05-05T09:20:26.624Z | 2017-03-28T00:00:00.000 | {
"year": 2017,
"sha1": "831934a86885f9f5f2b6d03fc96b13f01491effc",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00397/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "831934a86885f9f5f2b6d03fc96b13f01491effc",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18846693 | pes2o/s2orc | v3-fos-license | Sphingolipids contribute to acetic acid resistance in Zygosaccharomyces bailii
ABSTRACT Lignocellulosic raw material plays a crucial role in the development of sustainable processes for the production of fuels and chemicals. Weak acids such as acetic acid and formic acid are troublesome inhibitors restricting efficient microbial conversion of the biomass to desired products. To improve our understanding of weak acid inhibition and to identify engineering strategies to reduce acetic acid toxicity, the highly acetic‐acid‐tolerant yeast Zygosaccharomyces bailii was studied. The impact of acetic acid membrane permeability on acetic acid tolerance in Z. bailii was investigated with particular focus on how the previously demonstrated high sphingolipid content in the plasma membrane influences acetic acid tolerance and membrane permeability. Through molecular dynamics simulations, we concluded that membranes with a high content of sphingolipids are thicker and more dense, increasing the free energy barrier for the permeation of acetic acid through the membrane. Z. bailii cultured with the drug myriocin, known to decrease cellular sphingolipid levels, exhibited significant growth inhibition in the presence of acetic acid, while growth in medium without acetic acid was unaffected by the myriocin addition. Furthermore, following an acetic acid pulse, the intracellular pH decreased more in myriocin‐treated cells than in control cells. This indicates a higher inflow rate of acetic acid and confirms that the reduction in growth of cells cultured with myriocin in the medium with acetic acid was due to an increase in membrane permeability, thereby demonstrating the importance of a high fraction of sphingolipids in the membrane of Z. bailii to facilitate acetic acid resistance; a property potentially transferable to desired production organisms suffering from weak acid stress. Biotechnol. Bioeng. 2016;113: 744–753. © 2015 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
Introduction
The yeast Zygosaccharomyces bailii is considered to be one of the most troublesome food spoilage organisms due to its ability to withstand food preservatives (Zuehlke et al., 2013). Its tolerance to weak organic acids has been extensively studied, as reviewed by Piper et al. (2001), although the fundamental mechanisms underlying its exceptional resistance have yet to be elucidated. Apart from the development of methods to prevent food spoilage, understanding and harnessing the mechanisms behind Z. bailii's robustness is of the utmost importance if we are to identify the characteristics that can improve the performance of other industrial microorganisms grown under acid stress (Dato et al., 2010). For example, organic acids such as acetic acid and formic acid are released during the pretreatment of lignocellulosic raw material, prior to the production of fuels and chemicals in a biorefinery (Koppram et al., 2014). These acids represent a major obstacle to the fermenting microorganism, commonly Saccharomyces cerevisiae (Parachin et al., 2011). Inhibition occurs mainly by the undissociated form of weak acids, due to their ability to enter the cell in an uncontrolled fashion by passive diffusion across the plasma membrane (Warth, 1989). If the mechanisms and the genetic bases underlying the high tolerance of Z. bailii to organic acids were to be understood, it might be possible to transfer key characteristics to S. cerevisiae, or to other production organisms, through genetic engineering.
The high acetic acid tolerance of Z. bailii has previously been linked to three different factors. First, co-consumption of glucose and acetic acid gives Z. bailii the ability to efficiently remove acetic acid from the intracellular environment (Sousa et al., 1996). This ability is unique, as acetic acid consumption is repressed in the presence of glucose in most other yeast species (Rodrigues et al., 2012). Second, Z. bailii exhibits population heterogeneity, with a small subpopulation of cells exhibiting lower intracellular pH, which limits the acetic acid stress in these cells by reducing the accumulation of intracellular acetic acid (Stratford et al., 2013). Third, it has low acetic acid membrane permeability, as indicated by experiments in which Z. bailii retained its intracellular pH better than S. cerevisiae during short-term (Arneborg et al., 2000) and long-term (Fernandes et al., 1999) exposure to acetic acid. No direct comparison has been made of the acetic acid membrane permeability in Z. bailii and S. cerevisiae, but measurements of propionic acid uptake have shown that it is more than ten times faster in S. cerevisiae than in Z. bailii (Warth, 1989). In our previous study, we investigated the plasma membrane lipid profile of S. cerevisiae and Z. bailii, showing a strong difference in lipid profile between the two yeasts, with sphingolipid levels several times higher in Z. bailii than in S. cerevisiae, supporting a potential difference in membrane permeability (Lindberg et al., 2013). In addition, Z. bailii showed a unique ability to remodel the composition of its plasma membrane upon acetic acid stress, so as to greatly increase the fraction of sphingolipids (a two- to nine-fold increase depending on sphingolipid class) at the expense of glycerophospholipids (the overall level was reduced by half, and phosphatidylinositol, which is required for sphingolipid synthesis, increased from 40 to 88% of the total glycerophospholipids in the membrane).
Based on the qualitative evidence discussed in the above section, we formulated the model illustrated in Figure 1 to provide a quantitative theoretical description of the effect of the rate of acetic acid translocation across the plasma membrane on the intracellular concentration of acetic acid in Z. bailii. Intracellular pH is the first critical determinant of intracellular acetic acid concentration (Stratford et al., 2013). Upon exposure of the cells to the acid, undissociated acetic acid will diffuse across the membrane until equilibrium is reached between the intracellular and extracellular sides of the membrane. The difference in pH between the intracellular and extracellular spaces will determine the difference in total concentration of the acid, whereby the commonly found higher intracellular pH will lead to an accumulation of acetic acid inside the cell. To counteract accumulation of acetic acid, Z. bailii has a great advantage over other yeasts, namely the ability to consume acetic acid (Fig. 1, v_Cons) in the presence of other carbon sources (Sousa et al., 1996). Z. bailii has also been shown to have proteins that remove acetic acid by active extrusion of anions and protons (Fig. 1, v_Ext), but their significance in this context is unclear. Entry of acetic acid into the cell occurs by passive diffusion across the plasma membrane (Fig. 1, v_Diff) and, if required, by a facilitated uptake mechanism (Fig. 1, v_Uptake), probably induced to allow faster acetic acid consumption when diffusion into the cell is not sufficiently high (Sousa et al., 1996, 1998). Accumulation of intracellular acetic acid, and consequently acetic acid stress, can thereby be avoided when the sum of v_Diff and, if relevant, v_Uptake is less than the sum of v_Cons and v_Ext. Indeed, a comparison of the acetic acid uptake rate (v_Diff and v_Uptake) measured by Stratford et al. (2013) and the acetic acid consumption rate (v_Cons) determined in our previous study (Lindberg et al., 2013) reveals that these rates are of the same order of magnitude, supporting our hypothesis that the diffusion rate is important in intracellular acetic acid accumulation and, hence, tolerance.
Figure 1. Model of the processes determining the intracellular acetic acid concentration in Z. bailii: passive diffusion of the undissociated acid (v_Diff), facilitated uptake (v_Uptake), consumption of the anion (v_Cons), and active extrusion of anions and protons (v_Ext), together with the intracellular pH, which sets the distribution between the undissociated and dissociated forms.
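To make the accumulation argument concrete, the following minimal Python sketch (not taken from the paper) computes the expected ratio of total intracellular to extracellular acetic acid when only the undissociated form equilibrates across the membrane and v_Cons, v_Ext, and v_Uptake are ignored; the pKa of 4.76 and the example pH values are illustrative assumptions, not measurements from this study.

```python
# Minimal sketch (not from the paper): expected accumulation of total acetic
# acid when only the undissociated form equilibrates across the membrane and
# consumption (v_Cons), extrusion (v_Ext), and facilitated uptake (v_Uptake)
# are ignored. The pKa and the example pH values are illustrative assumptions.

PKA_ACETIC = 4.76  # pKa of acetic acid at ~25 degrees C


def undissociated_fraction(pH, pKa=PKA_ACETIC):
    """Fraction of total acid present in the undissociated, membrane-permeant form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))


def accumulation_factor(pH_in, pH_out, pKa=PKA_ACETIC):
    """Ratio of total intracellular to extracellular acid at diffusion equilibrium.

    At equilibrium the undissociated concentration is equal on both sides, so
    C_total_in / C_total_out = f_undiss(pH_out) / f_undiss(pH_in).
    """
    return undissociated_fraction(pH_out, pKa) / undissociated_fraction(pH_in, pKa)


if __name__ == "__main__":
    for pH_in in (7.0, 6.0, 5.5):
        factor = accumulation_factor(pH_in=pH_in, pH_out=4.5)
        print(f"pH_in = {pH_in:.1f}, pH_out = 4.5 -> ~{factor:.0f}-fold accumulation")
```

With an external pH of 4.5, this simple equilibrium model predicts that lowering the intracellular pH from 7.0 to 5.5 reduces the accumulation from over 100-fold to roughly 4-fold, which is consistent with the advantage attributed above to the low-pH subpopulation and to acetic acid removal by consumption.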
In this study, we hypothesize that the high fraction of sphingolipids in Z. bailii, and the membrane remodeling toward more sphingolipids upon acetic acid exposure could be linked to reduced permeability to acetic acid. In the light of this information, we have here investigated the importance of a high fraction of sphingolipids in the plasma membrane of Z. bailii in maintaining low acetic acid membrane permeability and high acetic acid tolerance by combining in silico molecular dynamics simulations with in vivo techniques.
In Silico Membrane Construction
Model membranes were constructed and subjected to molecular dynamics simulations to study their structural and dynamic properties. The membranes were made using inositol phosphorylceramide (IPC) as the representative sphingolipid, 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) and 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-(1'-myo-inositol) (POPI) as representatives of glycerophospholipids, and ergosterol. Seven membranes were simulated, with the sphingolipid content varying from 10 to 60% by varying the glycerophospholipid content correspondingly while keeping the ergosterol content constant at 15%, as outlined in Table I.
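As a simple illustration of how such compositions translate into lipid counts, the sketch below (hypothetical, not the authors' setup script) distributes the 64 lipids of one leaflet over IPC, ergosterol, and the combined glycerophospholipid pool for a chosen IPC fraction; how DOPC and POPI share that pool, and the exact rounding used in Table I, are assumptions.

```python
# Hypothetical helper (not from the paper): per-leaflet lipid counts for a
# 128-lipid bilayer (64 per leaflet) with ~15% ergosterol and a chosen IPC
# (sphingolipid) fraction; the remainder is the glycerophospholipid pool
# (DOPC + POPI combined). Rounding to whole lipids is an assumption.

def leaflet_composition(ipc_fraction, lipids_per_leaflet=64, ergosterol_fraction=0.15):
    n_erg = round(ergosterol_fraction * lipids_per_leaflet)
    n_ipc = round(ipc_fraction * lipids_per_leaflet)
    n_gpl = lipids_per_leaflet - n_erg - n_ipc  # DOPC + POPI combined
    return {"IPC": n_ipc, "ergosterol": n_erg, "glycerophospholipid": n_gpl}


if __name__ == "__main__":
    for fraction in (0.10, 0.20, 0.30, 0.40, 0.50, 0.60):
        print(f"{int(fraction * 100):2d}% IPC ->", leaflet_composition(fraction))
```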
All membranes consisted of a total of 128 lipid molecules, 64 in each leaflet. The leaflets were symmetric with respect to lipid content. The Slipid force field (Jämbeck and Lyubartsev, 2012a, 2012b, 2013) was used to describe all lipids. The DOPC force field was already available (Jämbeck and Lyubartsev, 2012b), and force fields for IPC, POPI, and ergosterol were developed based on established protocols (Jämbeck and Lyubartsev, 2012a, 2012b, 2013). Bonded and van der Waals parameters were taken from the CHARMM lipid force field (Klauda et al., 2010), and the charges were derived as follows. The charges on the lipid tails were taken from the Slipid force field, as the alkyl chains of all lipids have the same charges. The charges on the inositol/phosphate head group and the ceramide backbone of IPC were determined by calculations using the restrained electrostatic potential algorithm (RESPA) (Bayly et al., 1993), in which the conformations for the inositol/phosphate head group were taken from the CHARMM membrane builder (Wu et al., 2014a), while the conformations for the ceramide backbone were generated through a LowModeMD conformational search (Labute, 2010) with the MOE software (Chemical Computing Group Inc., 2015). The electrostatic potential (ESP) was then calculated at the B3LYP/cc-pVTZ level of theory (Becke, 1993; Kendall et al., 1992; Lee et al., 1988), using a polarized continuum model (IEF-PCM) of water (Tomasi et al., 1999) to mimic the effect of solvent. Finally, the charges were fitted to the ESP using the RESPA method. The charges on the ergosterol molecule were determined using RESPA on a single conformation optimized at the B3LYP/cc-pVTZ level. The ESP was calculated at the B3LYP/cc-pVTZ level of theory using the IEF-PCM of hexadecane to mimic the interior of the membrane. All quantum mechanical calculations were performed with the Gaussian09 software (Frisch et al., 2009). All force fields developed herein can be downloaded from the Supporting Information, Dataset S1.
All bilayers were solvated with 5120 TIP3P water molecules (Jorgensen et al., 1983) and neutralized with Na+ ions. Initial bilayer structures were assembled using the CHARMM Membrane Builder (Wu et al., 2014a). Unfortunately, this tool does not support the building of IPC, so all bilayers were initially built with POPI in place of IPC; the POPI molecules were then replaced by IPC using an in-house script.
Force fields created for the IPC, POPI and ergosterol molecules are provided in supporting information in Gromacs file format.
In Silico Simulation of Membrane Properties
Model membranes were simulated with molecular dynamics methodology using the Gromacs software (version 4.6) (Hess et al., 2008). The membranes were equilibrated for 100-200 ns in the NPT ensemble (constant pressure and temperature), followed by a 100 ns production run during which data were collected every 10 ps. The time step was 2 fs, and all covalent bonds were constrained with the LINCS algorithm (Hess et al., 1997). Water molecules were constrained with the SETTLE algorithm (Miyamoto and Kollman, 1992). The pressure was maintained at 1 atm using a Parrinello-Rahman barostat (Parrinello and Rahman, 1981) with a 10 ps coupling constant and a compressibility of 4.5 × 10⁻⁵ bar⁻¹. The pressure in the membrane plane was coupled independently of the pressure along the membrane normal. The temperature was kept at 298 K using a Nosé-Hoover thermostat (Nosé, 1984) with a 0.5 ps time constant. Electrostatic interactions were treated with the particle-mesh Ewald summation (Darden et al., 1993) with a 1 nm real-space cut-off, and van der Waals interactions were subjected to a 1 nm cut-off with a long-range continuum correction (Allen, 1987).
Membrane thickness was measured as the average distance between the phosphate groups in the two leaflets. Lipid tail order was calculated from the deuterium order parameter (S_CD) according to Equation (1):

S_CD = ⟨(3 cos²θ − 1)/2⟩   (1)

where θ is the angle between a carbon-deuterium bond in the given acyl chain and the bilayer normal, and the brackets indicate an average over the MD simulation. Averaging over identical molecules was also performed.
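For illustration, a minimal Python/NumPy sketch of how Equation (1) and the thickness measure can be evaluated from trajectory-derived quantities is shown below; the input arrays and example values are hypothetical and are not part of the original analysis.

```python
import numpy as np

def deuterium_order_parameter(cos_theta):
    """Equation (1): S_CD = <(3*cos^2(theta) - 1)/2>, averaged over
    simulation frames and over identical molecules.
    cos_theta: cosines of the angle between each C-D bond and the
    bilayer normal, e.g. with shape (n_frames, n_molecules)."""
    cos_theta = np.asarray(cos_theta, dtype=float)
    return np.mean((3.0 * cos_theta**2 - 1.0) / 2.0)

def bilayer_thickness(z_phosphate_upper, z_phosphate_lower):
    """Average phosphate-phosphate distance between the two leaflets."""
    return np.mean(z_phosphate_upper) - np.mean(z_phosphate_lower)

# Hypothetical inputs: fully ordered bonds give S_CD = 1.0, while an
# isotropic orientation distribution gives S_CD close to 0.
rng = np.random.default_rng(1)
print(deuterium_order_parameter(np.ones((1000, 64))))                 # 1.0
print(deuterium_order_parameter(rng.uniform(-1.0, 1.0, (1000, 64))))  # ~0
print(bilayer_thickness([2.05, 2.10], [-2.00, -1.95]))                # ~4.05 (nm)
```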
In Silico Simulation of Acetic Acid Membrane Permeability
The potential of mean force (PMF) of undissociated acetic acid along the membrane normal was calculated with umbrella sampling simulations, as described below (Torrie and Valleau, 1977). These calculations give the free energy of transferring acetic acid to different depths in the membrane. The acetic acid molecule was described by the general Amber force field (Wang et al., 2004) with AM1-BCC charges (Jakalian et al., 2002). Two acetic acid molecules were inserted at specific distances from the center of the bilayer. For the 20% IPC system, acetic acid was inserted at distances between 0 and 3.6 nm from the bilayer center, and for the 40% IPC system between 0 and 4.3 nm. The two acetic acid molecules were always placed in different leaflets, separated by 3.6 or 4.3 nm along the membrane normal, and by at least 0.5 nm in the membrane plane. The position in the membrane plane was randomized. Two independent sets of simulations were initiated by placing the acetic acid molecules at different positions in the membrane plane. Initially, the acetic acid molecules were assumed not to interact with the surrounding membrane, and the interactions were turned on gradually during a 5 ns simulation. This procedure allows the membrane to relax naturally around the acetic acid molecule. During these simulations, the two acetic acid molecules were fixed at their initial positions. The simulation parameters were otherwise identical to the procedure described above.
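The window setup described above (restraint centers between 0 and 3.6 or 4.3 nm, two molecules in opposite leaflets, randomized and well-separated in-plane positions) can be sketched as follows. The window spacing and patch size used in this sketch are assumptions chosen only for illustration; they are not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def umbrella_windows(z_max, spacing, box_xy, min_xy_sep=0.5):
    """Starting positions for the two acetic acid molecules in each
    umbrella window. One molecule sits at +z from the bilayer center,
    the other at z - z_max (opposite leaflet, separated by z_max along
    the normal). In-plane positions are drawn at random and re-drawn
    until the molecules are at least min_xy_sep (nm) apart."""
    windows = []
    for z in np.arange(0.0, z_max + 1e-9, spacing):
        while True:
            xy1 = rng.uniform(0.0, box_xy, size=2)
            xy2 = rng.uniform(0.0, box_xy, size=2)
            if np.linalg.norm(xy1 - xy2) >= min_xy_sep:
                break
        windows.append({"z1": z, "z2": z - z_max, "xy1": xy1, "xy2": xy2})
    return windows

# Hypothetical 20% IPC setup: centers from 0 to 3.6 nm, 0.1 nm apart,
# in a ~6 x 6 nm membrane patch (spacing and box size are assumptions).
for w in umbrella_windows(z_max=3.6, spacing=0.1, box_xy=6.0)[:3]:
    print(w)
```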
The systems prepared using this procedure were then subjected to umbrella sampling with the PLUMED plug-in (Tribello et al., 2014) (version 2.1) of Gromacs (Hess et al., 2008). The distance between the acetic acid molecule and the center of the bilayer was restrained using a harmonic spring with a 1000 kJ mol⁻¹ nm⁻² force constant, and the instantaneous distance was recorded every 10 ps. The simulations with acetic acid molecules at different restrained distances were coupled, and exchange between neighbors was attempted every 4 ps to improve sampling (Neale et al., 2013). The umbrella sampling simulations were 30 ns in length at each point (simulation parameters as described above), and data from the initial 5 ns were discarded as equilibration throughout. The records of the instantaneous distances from the different simulations were combined into a PMF with WHAM (weighted histogram analysis method) (Grossfield; Kumar et al., 1992). Records from the acetic acid molecules in both leaflets, as well as from the two independent simulations, were combined to give a single WHAM estimate. Thus, the WHAM estimate was based on a combined 100 ns of sampling. The uncertainty in the PMF was estimated by block averaging with 20 blocks of 5 ns each. The permeability coefficient, P, of acetic acid upon transport from the water phase to the center of the bilayer was calculated using the expression in Equation (2) (Marrink and Berendsen, 1994):

1/P = ∫ exp(ΔG(z)/RT) / D(z) dz   (2)

where ΔG(z) is the free energy of transferring the acetic acid to depth z in the membrane, calculated by umbrella sampling, D(z) is the local diffusion along the membrane normal at depth z, R is the gas constant, and T is the absolute temperature. The integration runs from the water phase to the center of the bilayer. An approximate estimate of the diffusion was calculated from the expression in Equation (3) (Issack and Peslherbe, 2015; Woolf and Roux, 1994):

D(⟨z⟩) = var(z)/τ_z   (3)

where ⟨z⟩ is the center of restraint in the umbrella sampling simulations, var(z) is the variance of z during the simulation, and τ_z is the correlation time of the time series of z. The largest error in the estimation of D(z) lies in the calculation of τ_z (Hummer, 2005; Issack and Peslherbe, 2015). To reduce the noise in the estimate of D(z), the diffusion was estimated from the full simulation without any block averaging, and thus the uncertainty was calculated only from the series in the two leaflets. The uncertainty in P was estimated by computing the standard error from the two independent series of simulations.
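A minimal numerical sketch of Equations (2) and (3) is given below; the free energy and diffusion profiles in the example are toy values chosen only to show how the quantities combine, not results from this work.

```python
import numpy as np

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 298.0      # absolute temperature, K

def local_diffusion(z_series, dt):
    """Equation (3): D(<z>) = var(z) / tau_z. Here tau_z is estimated by
    integrating the normalized autocorrelation of z up to its first zero
    crossing; other estimators of the correlation time are possible."""
    z = np.asarray(z_series, dtype=float)
    z = z - z.mean()
    var = z.var()
    acf = np.correlate(z, z, mode="full")[z.size - 1:] / (var * z.size)
    cut = np.argmax(acf <= 0.0) if np.any(acf <= 0.0) else acf.size
    tau = np.trapz(acf[:cut], dx=dt)
    return var / tau

def permeability(z_cm, dG_kJ_mol, D_cm2_s):
    """Equation (2): 1/P = integral of exp(dG(z)/RT) / D(z) dz, taken
    between the water phase and the bilayer center."""
    resistance = np.trapz(np.exp(dG_kJ_mol / (R * T)) / D_cm2_s, z_cm)
    return 1.0 / abs(resistance)

# Toy profiles (hypothetical, for illustration only): a ~15 kJ/mol barrier
# peaking at the bilayer center and a constant local diffusion coefficient.
z = np.linspace(0.0, 3.6e-7, 37)            # cm (0 = bilayer center)
dG = 15.0 * np.exp(-(z / 1.0e-7) ** 2)      # kJ/mol
D = np.full_like(z, 1.0e-5)                 # cm^2 s^-1
print(f"P = {permeability(z, dG, D):.2e} cm s^-1")
```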
Strains and Cultivation Media
Z. bailii strain CBS 7555 (Centraalbureau voor Schimmelcultures (CBS) Fungal Biodiversity Centre strain collection, the Netherlands) was used in this study. Cells were cultured in mineral medium (20 g L⁻¹ glucose, 5 g L⁻¹ (NH₄)₂SO₄, 0.5 g L⁻¹ MgSO₄·7H₂O, 3 g L⁻¹ KH₂PO₄, 1 mL L⁻¹ vitamin solution, 1 mL L⁻¹ trace element solution). Vitamin solution and trace element solution were prepared as described previously (Verduyn et al., 1992). Potassium hydrogen phthalate buffer, 100 mM, was used to maintain the culture at pH 5, except in the case of the cultivation medium supplemented with L-lactic acid, which had to be adjusted to pH 4 to increase the fraction of inhibitory undissociated acid in the total supplemented acid. It was not possible to achieve an inhibitory concentration of undissociated acid at pH 5 due to the low solubility of L-lactic acid.
Media Supplements
The effect of myriocin on cell growth was evaluated at 0.8, 1.2, and 1.6 µM myriocin (from Mycelia sterilia, Sigma-Aldrich). A 1 mM myriocin stock solution prepared in 40% ethanol was used. Acetic acid, formic acid, L-lactic acid, sorbic acid, and benzoic acid were added to the medium as concentrated stock solutions adjusted to pH 5 with KOH, except for the L-lactic acid stock solution, which was adjusted to pH 4.
Inoculum
Inoculum was prepared in Erlenmeyer flasks in which the culture occupied a maximum of 10% of the flask volume. Cultures were grown overnight under continuous shaking at 180 rpm at 30°C. Exponentially growing cells were harvested by centrifugation at 3,000 × g for 3 min at room temperature, resuspended in fresh medium, and added to the Erlenmeyer flask cultures or microscale cultures at an initial optical density at 600 nm (OD600) of 0.2.
Screening of Cell Growth
Cell growth was automatically monitored at 30°C in 150 µL aerobic microscale cultures using Bioscreen C MBR equipment (Oy Growth Curves Ab, Ltd, Finland), with 5-10 replicates per experimental condition, to evaluate the effects of weak acids and myriocin. The cultures were shaken continuously, and cell density was measured optically every 15 min using a wideband 450-580 nm wavelength filter. Measured cell density values were converted to equivalent OD600 values using Equation (4).
The nonlinear correlation between optical density and cell density at high cell concentrations was corrected for using Equation (5) (Warringer and Blomberg, 2003).
Intracellular pH Response After Acetic Acid Pulse

Cells were cultured in triplicate in 250 mL baffled Erlenmeyer flasks with a 25 mL culture volume. To evaluate the effect of myriocin, it was added to the cell suspension to a final concentration of 1.6 µM at the start of cultivation, and the intracellular pH of myriocin-treated cells was compared to that in control cultures with no added myriocin. Cells were harvested at an OD600 of 2 by centrifugation at 21,100 × g for 3 min at room temperature. Carboxyfluorescein diacetate succinimidyl ester (CFDA-SE) (Vybrant CFDA SE Cell Tracer Kit, Life Technologies, Thermo Fisher Scientific) was used as a probe to detect changes in intracellular pH as an indirect measure of acetic acid inflow. The non-fluorescent CFDA-SE enters the cell by passive diffusion; a highly fluorescent molecule is formed when intracellular esterases cleave off the acetate groups, and the dye is retained within the cell by conjugation with intracellular amines. A stock solution of 10 mM CFDA-SE was prepared in DMSO, aliquoted under nitrogen gas, and stored at −20°C. A 20 µM CFDA-SE solution was prepared in McIlvaine buffer (0.2 M K₂HPO₄/0.1 M citric acid) at pH 7 immediately before use. Harvested cells were resuspended in 1 mL CFDA-SE solution to obtain an OD600 of 0.2 and then incubated at 30°C and 800 rpm for 20 min. Stained cells were centrifuged at 21,100 × g for 3 min at room temperature and then diluted tenfold with McIlvaine buffer at pH 5.
Intracellular pH of cells was analyzed using a Guava easyCyte 8HT flow cytometry system (Merck Millipore) equipped with a 75 mW, 488 nm wavelength excitation laser. The emitted fluorescence was measured using a green 525/30 nm bandpass filter. As myriocin-treated cells exhibited a higher staining efficiency, variations in dye loading were minimized by considering only relative changes in fluorescence between cell populations with comparable staining efficiency. More specifically, the reduction in green 525 nm fluorescence was detected 30 s after pulsing the stained cells with 50, 100, and 200 mM acetic acid (final concentration), and compared to the signal from cells not subjected to an acetic acid pulse. To evaluate the possible influence of staining efficiency on the reduction in green fluorescence, cells with 30-40% reduced staining efficiency were obtained using CFDA-SE buffer solution prepared two hours before use, containing a lower amount of CFDA-SE due to hydrolysis of the probe in the aqueous environment. Weaker stained cells only affected the percentage of 525 nm fluorescence reduction after the 50 mM acetic acid pulse to a minor extent, compared to the more strongly stained cells, indicating that the staining efficiency did not influence the reduction in pH after the acetic acid pulse (data not shown). The small amount of background fluorescence detected from nonstained cells was subtracted from each measurement.
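The relative fluorescence change described above reduces to a background-subtracted percentage; a minimal sketch with hypothetical intensity values is shown below.

```python
def fluorescence_reduction(f_no_pulse, f_pulse, f_background):
    """Percentage reduction in the 525 nm signal after an acetic acid
    pulse, relative to unpulsed cells, after subtracting the background
    fluorescence measured from non-stained cells."""
    baseline = f_no_pulse - f_background
    pulsed = f_pulse - f_background
    return 100.0 * (baseline - pulsed) / baseline

# Hypothetical median intensities (arbitrary units), for illustration only.
print(fluorescence_reduction(f_no_pulse=1000.0, f_pulse=400.0, f_background=50.0))  # ~63 %
```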
Results and Discussion
In this study, in silico techniques were used to investigate how sphingolipids affect the physicochemical properties of the membrane and the acetic acid membrane permeability. In vivo investigations were also performed to evaluate the effect of reduced sphingolipid content on acetic acid tolerance and acetic acid membrane permeability in Z. bailii.
A Higher Fraction of Sphingolipids Gives Thicker and More Dense Membranes
To understand how sphingolipids influence the physicochemical properties of the plasma membrane, seven different model membranes were simulated in silico (Table I). As it is not possible to recreate an exact copy of the yeast membrane, we constructed simplified membranes composed of IPC as the only sphingolipid, DOPC and POPI as the glycerophospholipids, and ergosterol. The choice and composition of lipids reflect the characteristics of Z. bailii and S. cerevisiae membranes in terms of composition, chain length, and bond saturation (Lindberg et al., 2013). Absolute quantification of the lipids was not possible; however, based on the observed trends and on comparison with levels reported in a previous study (Klose et al., 2012), the model membranes containing 10-20% sphingolipids were designed to represent the plasma membrane lipid composition of S. cerevisiae, while the membranes with 40-60% IPC better correspond to the membrane composition of Z. bailii cultured with acetic acid.
The simulations predicted that a higher fraction of sphingolipids would give thicker and denser membranes, i.e., a 26% increase in bilayer thickness and a 17% decrease in the area occupied by the simulated membrane when comparing membranes with 10% and 60% sphingolipids (Fig. 2A). An increase in lipid tail order, which is a measure of the rigidity of the carbon bonds in the fatty acyl chains, provided further evidence of the condensation of membranes containing a higher fraction of sphingolipids. The lipid tail order increased on average by 55% for the saturated acyl chain of POPI when the sphingolipid fraction was increased from 10% to 60% (Fig. 2B). Similar increases in lipid tail order were also observed for the DOPC and IPC lipid tails (Fig. S1A-F). A role for sphingolipids in creating thicker and denser bilayers has previously been predicted based on their molecular structure, specifically the long, saturated acyl chain in combination with amide carbonyls and hydroxyls capable of hydrogen bonding (Levine et al., 2000). However, in the present study, we performed simulations that allowed us to study the structure and dynamics of the membranes directly at atomic resolution, giving us additional information. For instance, the condensation effect can only be accurately predicted by observing interactions between lipids in the membrane. Similar modeling approaches have been used previously to study a range of phenomena, such as the role of ergosterol in mitigating the effect of ethanol on the membrane structure (Dickey et al., 2009), the effect of cholesterol on the permeability of hypericin derivatives (Eriksson and Eriksson, 2011), and the orientation of different phosphoinositides (Wu et al., 2014b). A snapshot from the simulation of a membrane with 40% sphingolipids visualizes the dynamic interactions that occur between lipids in the membrane (Fig. 3). Upon closer examination of the very long fatty acyl chain of the sphingolipid molecule, the simulations predicted the tail to adopt a range of positions, from protruding into the opposite leaflet to bending so as to occupy the space between the two leaflets.
A Higher Fraction of Sphingolipids Reduces the Permeability Coefficient of Acetic Acid
Umbrella sampling was used to quantify the free energy barrier for the transport of undissociated acetic acid through the membrane. Simulations were performed on membranes containing 20% and 40% sphingolipids. In the membrane with 20% sphingolipids, the free energy barrier for transport of undissociated acetic acid from the water phase to the middle of the membrane was approximately 15 kJ/mol, and in the membrane with 40% sphingolipids it was approximately 20 kJ/mol, i.e., a difference of approximately 5 kJ/mol. Although there is a sizeable uncertainty in the PMF, as can be seen in Figure 4, it is clear that the difference between the PMFs is large at the center of the membrane. A two-sided t-test gave a P-value of 0.025, indicating that the difference is statistically significant at the 95% confidence level.
Using solubility-diffusion theory (Marrink and Berendsen, 1994), we calculated a rough estimate of the permeability coefficient, P. The local diffusion coefficient, D(z), was particularly difficult to estimate in the 40% sphingolipid simulations due to the very dense bilayer; however, as the free energy is the dominating factor, the noise in D(z) should have only a minor effect on the relative value of P. The permeability coefficient of acetic acid was found to be 5.4 ± 1.0 × 10⁻⁹ cm s⁻¹ in the membrane with 20% sphingolipids and 4.1 ± 1.9 × 10⁻¹⁰ cm s⁻¹ in the membrane with 40% sphingolipids, a reduction by an order of magnitude. A two-sided t-test gave a P-value of 0.035, indicating that the difference is statistically significant at the 95% confidence level. This suggests that a higher level of sphingolipids in the membrane reduces the acetic acid membrane permeability.
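A comparison of this kind can be reproduced with a standard two-sample t-test; the sketch below uses hypothetical per-series permeability estimates, since the individual values from the two independent simulation series are not listed in the text, and Welch's variant of the test is used here as one possible convention.

```python
from scipy import stats

# Hypothetical per-series permeability estimates (cm/s), placed around the
# reported means; these are placeholders, not the published values.
p_20_ipc = [4.7e-9, 6.1e-9]
p_40_ipc = [2.8e-10, 5.4e-10]

# Two-sided, two-sample t-test (Welch's variant; the original analysis
# may have followed a different convention).
t_stat, p_value = stats.ttest_ind(p_20_ipc, p_40_ipc, equal_var=False)
print(f"t = {t_stat:.2f}, two-sided P = {p_value:.3f}")
```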
Inhibition of Sphingolipid Synthesis Reduces Acetic Acid Tolerance
To investigate a possible correlation between a high sphingolipid fraction in Z. bailii and its high tolerance to acetic acid, sphingolipid synthesis was decreased by in vivo treatment with the drug myriocin, which binds irreversibly to serine palmitoyltransferase, inhibiting the first step of sphingolipid synthesis (Wadsworth et al., 2013). The use of myriocin is well established, and previous studies have demonstrated its ability to decrease the fraction of sphingolipids in various cell systems, including yeast (Breslow et al., 2010;Huang et al., 2012;Shimobayashi et al., 2013).
The addition of up to 1.6 µM myriocin in the absence of acetic acid had little or no effect on the growth rate of Z. bailii in mineral medium (Fig. 5A). However, in the presence of 200 to 400 mM acetic acid, the specific growth rate of the cells was significantly reduced. Myriocin addition had a detrimental effect on cell growth at these acetic acid concentrations, and the effect increased with concentration. This demonstrates that a high sphingolipid fraction in the membrane is required to cope with acetic acid stress.
Inhibition of Sphingolipid Synthesis Reduces Tolerance to Other Weak Organic Acids
Passive diffusion across the plasma membrane is a major entry route for many weak organic acids (Piper et al., 2001). To investigate whether the high fraction of sphingolipids in Z. bailii is also involved in the mechanisms offering resistance to other weak organic acids, the effect of myriocin on the growth of Z. bailii in the presence of formic acid, L-lactic acid, sorbic acid, and benzoic acid was investigated. The acid concentrations were chosen so as to give approximately 70% growth inhibition with each acid together with 1.6 µM myriocin. The chemical properties and experimental conditions are given in Table II. Cells cultured with myriocin and either formic acid or L-lactic acid showed a reduction in growth comparable to that in the case of acetic acid (Figure 5B). However, cells cultured with myriocin and either sorbic acid or benzoic acid showed no reduction in growth (Fig. 5B). This apparent difference in the effect of myriocin in the presence of different weak acids could be explained by their difference in hydrophobicity, commonly expressed as log P, the partition coefficient between octanol and water (Table II). Higher hydrophobicity facilitates diffusion across the membrane, resulting in a higher diffusion rate. Formic acid and L-lactic acid have hydrophobicity values relatively similar to that of acetic acid, whereas sorbic acid and benzoic acid are both much more hydrophobic. Sorbic acid and benzoic acid were indeed found to inhibit cell growth at concentrations two orders of magnitude lower than acetic acid, formic acid, and L-lactic acid. Therefore, the reduction of sphingolipid content by myriocin probably causes a larger relative increase in the diffusion rate (v_Diff in Fig. 1) of acetic acid, formic acid, and L-lactic acid, which leads to intracellular accumulation of these acids. Sorbic acid and benzoic acid, on the other hand, already have a high diffusion rate, and the reduction in sphingolipids caused by myriocin is probably not large enough to influence the overall balance of rates determining the intracellular acid concentration. However, it cannot be excluded that a higher myriocin concentration, causing larger membrane rearrangements, would have affected sorbic acid and benzoic acid tolerance in a way comparable to that observed for the less hydrophobic weak acids. The tolerance of Z. bailii and S. cerevisiae to weak acids of different hydrophobicity has recently been investigated, showing that Z. bailii was approximately three times more tolerant than S. cerevisiae to the majority of the investigated acids, independently of their degree of hydrophobicity (Stratford et al., 2013). The authors argued that if membrane permeability were a resistance mechanism of Z. bailii, there should be a larger difference in tolerance between the two yeasts for the more hydrophobic acids, and therefore rejected membrane permeability as a factor contributing to its tolerance.

Figure 5. Growth of Z. bailii in the presence of acetic acid, formic acid, sorbic acid, benzoic acid (pH 5), and L-lactic acid (pH 4) at 0-1.6 µM myriocin. The results are expressed as growth rates relative to the growth rate of cells without added acid and myriocin. The acid concentrations were chosen to give approximately 70% growth inhibition with the specific acid and 1.6 µM myriocin. The results were calculated from five to ten biological replicates. The bars represent the mean ± standard deviation. * Significant decrease compared to no myriocin addition, according to the t-test (P < 0.05).
Taking our findings into account, we do not consider that the data presented by Stratford et al. rule out an involvement of membrane permeability in acetic acid tolerance, since for molecules with higher hydrophobicity the inflow rate is already high, so a difference in membrane permeability will only slightly affect the inflow rate and will consequently influence the overall intracellular acid concentration to a lesser extent (Fig. 1).
Inhibition of Sphingolipid Synthesis Increases Acetic Acid Membrane Permeability
To verify that the observed reduction in growth after the addition of myriocin to Z. bailii cultured with acetic acid was due to a difference in membrane permeability, the rate of inflow of acetic acid was measured indirectly using flow cytometry to monitor the change in intracellular pH shortly after an acetic acid pulse, by measuring the fluorescence of the pH-dependent dye CFDA-SE.
A first indication of increased membrane permeability in myriocin-treated cells was the increase in fluorescence intensity. Cells cultured with myriocin displayed almost tenfold higher average emission than control cells at 525 nm (Fig. 6A). The increased emission probably originates from an increase in dye loading of the cells treated with myriocin, which is credible since CFDA-SE enters the cell by passive diffusion, and the staining of Z. bailii without myriocin was relatively poor. The higher signal intensity could also have been due to a higher intracellular pH, but this appears unlikely as the difference in the signal corresponds to an increase of approximately two pH units, which is very high, and the increased emission was also observed at 586 nm, where the emission is less pH-dependent (data not shown) (Stratford et al., 2013).
A second indication of increased membrane permeability in cells treated with myriocin was observed in the decrease in intracellular pH after acetic acid pulses. Pulses of 50-200 mM acetic acid led to an immediate decrease in the fluorescence emission at the pH-dependent wavelength of 525 nm, corresponding to a decrease in intracellular pH, both for cells cultured with myriocin and for control cells cultured without myriocin, indicating an inflow of acetic acid into the cell (Fig. 6B). In addition, after pulsing cells with 200 mM acetic acid, the fluorescence emission in myriocin-treated cells decreased by 62%, while it fell by only 43% in the control cells, indicating a faster inflow of acetic acid in myriocin-treated cells. A similar, although less marked, trend was also seen following pulses of 50 and 100 mM acetic acid. Microscopic examination of cells cultured with myriocin alone showed no effect on morphology or viability (determined using methylene blue staining, data not shown) compared to control cells, further supporting the hypothesis that the observed reduction in acetic acid tolerance of Z. bailii cultured with myriocin was due to changes in membrane permeability caused by the sphingolipid reduction, rather than to a general cellular response to myriocin.
Conclusions
Low acetic acid membrane permeability, due to a high fraction of sphingolipids in the membrane, has been found in this study to be a key characteristic contributing to acetic acid resistance in Z. bailii. In silico molecular dynamics simulations showed the role of sphingolipids in increasing bilayer thickness and density, and suggested a reduction in the permeability coefficient for acetic acid diffusion. In vivo reduction of the fraction of sphingolipids in the plasma membrane increased acetic acid membrane permeability, resulting in reduced acetic acid tolerance, further strengthening our in silico predictions. In this work, we also placed acetic acid diffusion in relation to other factors influencing acetic acid tolerance, and concluded that the rate of diffusion into the cell is critical only when the counteracting rates removing acetic acid are larger. Yet, the diffusion rate is concentration dependent; therefore, cell growth will occur at acetic acid concentrations at which the diffusion rate is lower than the rates removing acetic acid from the cell, thereby avoiding toxic accumulation of acetic acid. A specific plasma membrane lipid composition is probably crucial for tolerance to many lipid-soluble molecules that currently stress microbial cell factories. In this work, we provide sphingolipids as an example, and demonstrate the predictive power of molecular dynamics simulations in designing an optimal plasma membrane lipid composition for a specific purpose.

(Table II notes: a Obtained from ChemBioDraw Ultra 14.0. b Adjusted to pH 4 in order to ensure a sufficiently high concentration of undissociated acid to cause cell inhibition within the solubility range of the acid. c The most inhibitory form of the acid, due to its ability to enter the cell in an uncontrollable fashion by passive diffusion.)
Financial support from the Swedish Energy Agency, the Swedish Research Council, the Chalmers Energy Area of Advance, and the Wenner-Gren foundations are gratefully acknowledged. We also acknowledge generous grants of computing time at the C3SE Supercomputing Center in Gothenburg, as provided via the Swedish National Infrastructure Committee (SNIC). | 2018-04-03T01:08:53.278Z | 2015-12-10T00:00:00.000 | {
"year": 2015,
"sha1": "92b62c0065e0e0d9a593497c6da1da262153e5cb",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/bit.25845",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "92b62c0065e0e0d9a593497c6da1da262153e5cb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
214138269 | pes2o/s2orc | v3-fos-license | Investigation of formulation preparation of two plant extracts and determination of their effectiveness on Tetranychus urticae Koch (Arachnida: Tetranychidae)
In this study, fruit extracts of Melia azedarach L. (Meliaceae) and Allium sativum L. (Amaryllidaceae) were prepared, and formulation studies of these extracts were carried out with several inert ingredients. Quality tests of the formulations were performed and, based on the results, the preparations found to be successful were selected for effectiveness studies on Tetranychus urticae Koch. The acaricidal effect of the formulations on T. urticae was evaluated using the leaf-dipping method under laboratory conditions. The effects of the prepared formulations on T. urticae were determined at three different concentrations (5, 7, 10 ml/L). Based on the results of the laboratory studies, the highest effective dose and the two doses above it (10, 15, 20 ml/L) were taken forward to examine the effect on mites under greenhouse conditions. Each plot consisted of 10 plants. The plants were sprayed when the mite population density reached 1-3 live individuals per leaf. Counts were carried out before the application of the formulations and 1, 3, and 7 days after application. Neem Azal T/S was applied as the standard in the greenhouse trials. The highest effect was obtained at the highest dose of the M. azedarach extract (88.42%). Similar results were obtained in the greenhouse trials.
Introduction
Vegetables are grown under both greenhouse and field conditions in almost all regions of Turkey, and they account for about 95% of total greenhouse production. 1 The two-spotted spider mite, Tetranychus urticae Koch (Acarina: Tetranychidae), is one of the most important pests responsible for yield losses in many horticultural, ornamental, and agronomic crops. The mite has been reported to attack about 1200 species of plants, of which more than 150 are economically significant. 2 T. urticae can reach high population densities and cause considerable economic losses. 3 T. urticae feeds by puncturing plant cells and draining their contents, producing a characteristic yellow speckling on the leaf surface. The mites also produce silk webbing, which is clearly visible at high infestation levels. 4 Mite feeding causes graying or yellowing of the leaves and 40-60% loss of product; in addition, it promotes the spread of various virus diseases. 5 Synthetic pesticides are generally used against the two-spotted spider mite, as they are easy to apply and effective. However, long-term use of pesticides causes ecological imbalance, side effects on natural enemies, and environmental pollution. 6 The mite is also difficult to control because of its resistance to many commonly used pesticides. 7,8 Because of these adverse effects of pesticide use, alternative control methods are being investigated for T. urticae. Some of these alternative control methods, including the acaricidal effects of plant essential oils, plant preparations, and microbial secondary metabolites on T. urticae, are currently being researched. 9-11 There are many studies on the effects of extracts of M. azedarach and A. sativum. Extracts of M. azedarach were found to show strong antifeedant and ovicidal effects on Leptinotarsa decemlineata Say (Col.: Chrysomelidae). 12,13 Erdogan et al. 14 reported that extracts of A. sativum caused the highest mortality rate on T. urticae.
Many studies searching for alternatives to chemical pesticides have been conducted in our country, and such studies are continuing. However, most of this work has remained at the research stage, and no biopesticides have been registered for practical application. Biopesticides are especially needed to control pests in organic agriculture, where chemical pesticides are not allowed. All registered biopesticides originate from abroad, and biopesticides are not available for every crop. In our country, which has a very rich flora, it is important to study biopesticides of plant origin and to evaluate the research results in a way that allows them to be put into practice. The application of biopesticides obtained from local sources and plant-derived substances is expected to fill the gap in organic agriculture, integrated production, and good agricultural practices, and will contribute significantly to the national economy.
The aim of this study was to determine the insecticidal effects of the extracts obtained from A. sativum and M. azedarach, and of the formulations prepared from them, against T. urticae.
Plant material
The plants used in the study were collected from different provinces during 2016: M. azedarach was collected from Adana province and A. sativum from Kastamonu. The fruits of both plants were used to prepare the extracts.
One hundred grams of dried fruit was weighed into a glass flask, and ethanol (99.9% purity) was added at a 1:8 (w/v) ratio. The samples were extracted with the solvent under reflux in a water bath set at 60°C for two hours. At the end of the two hours, the extract was filtered through filter paper and transferred out of the glass flask. Ethanol was added to the remaining material at the same 1:8 (w/v) ratio, and extraction was continued for another two hours in the water bath at 60°C to ensure complete extraction of the phenolic components. After two hours, the second extract was filtered through filter paper into the same glass flask. The solvent was evaporated to dryness in a vacuum rotary evaporator at 60°C. 15,16 Three to five grams of extract were obtained from each 100 g of dry matter.
Formulation preparation of extracts
The solubility of the extract in appropriate solvents was determined based on the physicochemical properties of the components contained in the extract. The appropriate solvent was determined using the method proposed by Flanagan. 17 According to this method, 1.20 g of extract was weighed into a test tube, and oil, water, or another appropriate solvent was added in 2-ml portions, up to a maximum of 10 ml. After each 2-ml addition of solvent, the test tube was heated and stirred on a magnetic stirrer. If the amount of added solvent reached 10 ml and dissolution had not occurred, that solvent was removed from the experiments and another solvent was tried. In this way, by testing different vegetable oils as solvents, the most suitable solvent for dissolving the extract and the most suitable formulation type were selected. Taking into account the physical and chemical properties of the extract, the extract was stirred at 800 rpm in a vertical mixer together with the suitable solvent and co-formulants, and then stirred at 4500 rpm for 1.5 hours in a high-speed vertical mixer until the fineness reached 10-20 microns and a homogeneous distribution was obtained; in this way, a homogeneous distribution of the insoluble components of the extract was achieved. The products were transferred to a resting tank, left for 24 hours, and then subjected to quality control analyses.
Quality control analysis
Quality control tests recommended by the Food and Agriculture Organization of the United Nations (FAO) and the World Health Organization (WHO) for suspension concentrate (SC) formulations were conducted. 18
Physical analyses
Appropriate analyses of the obtained suspension concentrate (SC) formulations were carried out in the Institute's laboratories using CIPAC analysis methods, taking into account the FAO (Food and Agriculture Organization) criteria.
Aspect:
The suspension concentrate (SC) formulations prepared here were determined to be heterogeneous suspensions (viscous liquids) with a uniform color and a homogeneous structure when shaken. After the product had been appropriately agitated, the bottom of the container was checked with a stick, and it was observed that it did not contain any compacted layer or precipitate. This assessment was carried out visually. 18

Specific gravity (density):

The specific gravity of the prepared formulation was determined with a digital densimeter in our laboratory. After the device was switched on, it was allowed to complete its self-calibration and become ready for measurement. The sample was drawn into a syringe, air was removed, and the sample was injected into the measuring chamber through the sample inlet. The measurement was then started, the result message on the screen was awaited, and the results were recorded.
Wet sieve test:
The wet sieve test was performed according to CIPAC MT 185.
Suspension capability:
The suspension capability test was conducted according to CIPAC MT 184.
T. urticae culture
T. urticae was reared in the laboratory on potted bean plants at 25±1°C, under a long-day photoperiod (18 h light : 6 h dark) and 65-70% relative humidity. The bean plants (Phaseolus vulgaris L.) used in the experiments were all grown in a greenhouse.
Dose-mortality tests
Leaf-dipping method: discs 3 cm in diameter were punched out of untreated bean leaves. These discs were then dipped for one minute into the test solutions (formulations prepared from the extracts at 1, 3, 5, 7, and 10 ml/L). The control discs were dipped in 0.01% Triton X-100 solution. The discs were then left to dry for 30 minutes. The treated leaf discs were placed into petri dishes lined with moistened filter paper, and 10 adults or larvae of T. urticae were introduced onto the treated discs in separate petri dishes. The same procedure was used for the control. 19 The experiment was replicated 10 times, including the control. Each petri dish contained 10 adults or 3-day-old first-instar larvae. Data collection started 1, 3, and 6 days after treatment by counting the number of living larvae and adults. The experiments were conducted in a climate chamber at 25±1°C under a long-day photoperiod (18:6 light:dark).
The experiments of greenhouse
The effective dose (10 ml/L) determined under laboratory conditions and the two higher doses (15 and 20 ml/L) of the extracts obtained from M. azedarach and A. sativum were tested in a greenhouse (100 m²) in the Institute garden. Bean seeds were planted in the greenhouse on August 3, 2015, and when the bean plants reached the 5-6 leaf stage, leaves infested with T. urticae were placed on each plant to infest the plants with the pest.
The experiment was set up according to a randomized plot design and was replicated 5 times, including the control. Each plot was 5 m² in size and included 10 bean plants. Applications were carried out on September 14, 2015. The plants were sprayed with a hand-held sprayer when the density reached at least approximately 3 larvae, nymphs, or adults per leaf. Neem Azal T/S (1% azadirachtin, 500 ml/100 L of water) was applied as the positive control. The amount of water required for each plot was determined by calibration before application; about 100-125 ml of water was used for each plant. The applications were made so as to wet all sides of the plants.
Counts were made directly on a 4 cm² area of each leaf (2 cm² + 2 cm²). Before sampling, the plot was inspected, and lower, middle, and upper leaves of the plants representing the pest population were selected in advance. At least 10 leaves were taken from each plot, brought to the laboratory, and counted under a binocular microscope. Counts were made before application and 1, 3, and 7 days after application. 20
Statistical analysis
The effect was calculated according to Abbott. 21 The results were subjected to analysis of variance, and the mean values were compared by Duncan's test (P=0.05) using the SPSS 20.6 software. For the greenhouse study, efficacy was calculated using the Henderson-Tilton formula. 22
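As a reference for how these two corrections are usually applied, a minimal Python sketch is given below; the count values in the example are hypothetical and serve only to illustrate the calculations.

```python
def abbott_corrected_mortality(treated_mortality_pct, control_mortality_pct):
    """Abbott's formula: laboratory mortality corrected for the natural
    mortality observed in the untreated control."""
    return 100.0 * (treated_mortality_pct - control_mortality_pct) / (100.0 - control_mortality_pct)

def henderson_tilton_efficacy(t_before, t_after, c_before, c_after):
    """Henderson-Tilton formula: efficacy (%) for greenhouse/field counts
    where pre-treatment infestation levels may differ between plots.
    t_*: live mites per leaf in the treated plot before/after spraying;
    c_*: live mites per leaf in the control plot before/after spraying."""
    return 100.0 * (1.0 - (t_after * c_before) / (t_before * c_after))

# Hypothetical counts, for illustration only.
print(abbott_corrected_mortality(80.0, 5.0))          # ~78.9 %
print(henderson_tilton_efficacy(3.0, 0.5, 3.0, 4.0))  # 87.5 %
```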
Quality control analysis
Suspension concentrate (SC) formulations prepared from the extracts were subjected to quality control analyses such as appearance, specific gravity, suspension ability, and fineness. Viscosity tests could not be carried out because samples could not be obtained in sufficient quantities.
The formulation obtained from the M. azedarach extract was of the SC (suspension concentrate) type and had a brownish-black liquid appearance. Its specific gravity (density) was 1.050 g/ml, its suspension ability (CIPAC MT 184) was 101%, and in the fineness test (WHO) it passed completely through the 75-micron sieve. The formulation obtained from the A. sativum extract was also of the SC (suspension concentrate) type, with a yellow to light-brown appearance and a specific gravity (density) of 1.020 g/ml. Its suspension ability (CIPAC MT 184) was 103%, and in the fineness test (WHO) it passed completely through the 75-micron sieve.
Dose-mortality tests
The leaf-dipping method was used to determine the acaricidal effect of the formulations on T. urticae under laboratory conditions. The data are given in Table 1. The highest effect was obtained at the highest concentration (10 ml/L), while the lowest effect was obtained at the lowest concentration (1 ml/L) for the extracts of both plants. The effect increased with dose. The highest effect was obtained with the M. azedarach extract at the highest concentration. Statistical analysis showed significant differences between the concentrations (F=57.14; P=0.00).
The experiments of greenhouse
The doses of the M. azedarach and A. sativum extracts that produced an effect of more than 75% under laboratory conditions, together with the two higher doses, were tested under greenhouse conditions. The results are given in Table 2.
According to Table 2, the M. azedarach extract showed its highest effect at the highest dose on day 7. The lowest effect was determined on day 1 at the lowest dose. According to the statistical analysis, all doses were in the same group at the last count. The extract obtained from A. sativum also had its highest effect at the highest dose in the count on the 7th day, and according to the statistical analysis, all doses formed the same group in the last counts. Neem Azal T/S showed higher efficacy than all extracts (F=4.173; P=0.00). Several plants have been found to contain bioactive compounds with a variety of biological actions against insects, including repellent, antifeedant, anti-ovipositional, toxic, chemosterilant, and growth-regulatory activities. 23,24 The use of plant derivatives as an alternative to chemical insecticides has been studied throughout the world. Over 2,000 plant species have been reported to possess pest control properties. 25 The most important of these plants is Azadirachta indica (A. Juss). 26 Schmutterer 27 determined that extracts obtained from A. indica had various effects on many pests. Extracts of A. indica contain compounds such as melianone, melianol, 14-epoxyazadiradione, azadiradone, azadirone, and gedunin, which have antifeedant, repellent, and oviposition-deterrent effects. As a result of our work, it was revealed that the extract of M. azedarach showed the highest effect on T. urticae. Similar results have been reported in other studies. For example, Dimetry et al. 28 determined that the commercial products Margosan-O and Neem Azal S, obtained from neem seed extracts, caused a high mortality rate and a reduction in the number of eggs laid on T. urticae. Additionally, Margosan-O and Neem Azal S were reported to cause 50% mortality in T. urticae. 29 Pure azadirachtin was observed to have an antifeedant effect and to decrease the number of eggs laid by T. urticae. 30 Currently, preparations named Margosan-O, Azatin, Bioneem, Neemguard, and Neem Azal T/S have been developed from extracts of A. indica. In addition, extracts of M. azedarach and A. sativum have been shown to have acaricidal and oviposition-deterrent effects on T. urticae. 14 Similarly, Dobrowski and Seredynska 31 reported that extracts of A. sativum caused 48-57% mortality in T. urticae. M. azedarach belongs to the same family as the neem tree and contains the same active ingredients. 32 Azadirachtin obtained from A. indica has been formulated into commercial preparations worldwide and in our country, and is recommended against many pests.
Conclusion
In conclusion, the formulations prepared from the fruit extracts of M. azedarach and A. sativum showed an acaricidal effect on T. urticae under greenhouse conditions. The extraction and formulation preparation studies were carried out at pilot scale under laboratory conditions. It is recommended that production of the formulations be carried out in large-scale production facilities and that the products be subjected to quality control tests. More research is required to develop this initial study further. | 2020-01-30T09:09:13.557Z | 2019-06-10T00:00:00.000 | {
"year": 2019,
"sha1": "1d7fa2c5510393eacc6104b14d3be15d12b61807",
"oa_license": null,
"oa_url": "https://medcraveonline.com/HIJ/HIJ-03-00124.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "097ea4c36e7649b78f5368beea0932b51404c452",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": []
} |
14241128 | pes2o/s2orc | v3-fos-license | Electrophysiological properties of isolated photoreceptors from the eye of Lima scabra.
Photoreceptor cells were enzymatically dissociated from the eye of the file clam, Lima scabra. Micrographs of solitary cells reveal a villous rhabdomeric lobe, a smooth soma, and a heavily pigmented intermediate region. Membrane voltage recordings using patch electrodes show resting potentials around -60 mV. Input resistance ranges from 300 MΩ to greater than 1 GΩ, while membrane capacitance is of the order of 50-70 pF. In darkness, quantum bumps occur spontaneously and their frequency can be increased by dim continuous illumination in a fashion graded with light intensity. Stimulation with flashes of light produces a depolarizing photoresponse which is usually followed by a transient hyperpolarization if the stimulus is sufficiently intense. Changing the membrane potential with current-clamp causes the early phase to invert around +10 mV, while the hyperpolarizing dip disappears around -80 mV. With bright light, the biphasic response is followed by an additional depolarizing wave, often accompanied by a burst of action potentials. Both Na and Ca ions are required in the extracellular solution for normal photoexcitation: the response to flashes of moderate intensity is greatly degraded either when Na is replaced with Tris, or when Ca is substituted with Mg. By contrast, quantum bumps elicited by dim, sustained light are not affected by Ca removal, but they are markedly suppressed in a reversible way in 0 Na sea water. It was concluded that the generation of the receptor potential is primarily dependent on Na ions, whereas Ca is probably involved in a voltage-dependent process that shapes the photoresponse. Light adaptation by repetitive flashes leads to a decrease of the depolarizing phase and a concomitant enhancement of the hyperpolarizing dip, eventually resulting in a purely hyperpolarizing photoresponse. Dark adaptation restores the original biphasic shape of the photoresponse.
INTRODUCTION
During the past few years the application of patch-clamp techniques to photoreceptor physiology has led to important advances in the study of cellular mechanisms of visual excitation. On-cell recording has allowed the demonstration of light-dependent, single-channel currents in both invertebrate (Bacigalupo and Lisman, 1983) and vertebrate cells (Matthews, 1987), while convincing evidence that cGMP is the final link in the transduction chain has been provided by measurements in excised membrane patches from rod outer segments (Fesenko et al., 1985). Traditional invertebrate preparations such as the Limulus ventral eye, which have provided a wealth of information on the visual process, including the first measurements of light-activated ionic channels, are not ideally suited for tight-seal recording because the presence of a surrounding layer of glial cells requires a laborious and delicate mechanical stripping procedure to expose the surface of the plasma membrane (Stern et al., 1982). An isolated invertebrate photoreceptor preparation would provide a convenient and useful model system to address many remaining questions in visual physiology.
In the course of a search for such a preparation, the mollusk Lima scabra appeared as a promising candidate because its eyes contain a large number of photoreceptors (in contrast, for example, with only five in each eye of Hermissenda, and three in Balanus). In addition, the cells appear to be relatively exposed, rather than embedded in thick layers of connective tissue. The eyes of Lima (usually ~20-50) are located along the outer edge of the mantle, beneath a transparent layer of the mantle epithelium. Their basic morphology has been briefly described by Bell and Mpitsos (1968). According to these authors, the eye cup consists primarily of rhabdomeric photoreceptors surrounded by supporting cells containing screening pigment. A transparent structure, which had originally been classified as a lens, lies in front of the cup. Anatomical observations of sections through this structure revealed the presence of bundles of cilia and processes, presumably of neural origin. Mpitsos (1973) and McReynolds (1976) pointed out that this may actually be a second, distal retina, consisting of photoreceptors of the ciliary type. Axons from the photoreceptors in the two retinas give rise to separate branches of the afferent circumpallial nerve, which projects to the central ganglia of the animal.
Only a few electrophysiological investigations on the visual excitation process of this organism have appeared (Mpitsos, 1973; McReynolds, 1976; Cornwall and Gorman, 1983). Gross extracellular recordings made from bundles of fibers of the pallial nerve display both transient excitatory responses as well as "off" responses, i.e., inhibition of basal neural activity during light stimulation, followed by a burst of action potentials at the termination of the stimulus (Mpitsos, 1973). On the other hand, nerve fibers from the isolated distal retina seemingly show only the "off" response. Similar "off" discharges were also seen upon stimulation of extraocular portions of the mantle, suggesting the presence of dermal photoreceptors; however, they could never be identified morphologically. Intracellular measurements in the intact eye revealed both depolarizing and hyperpolarizing responses to light stimulation (Mpitsos, 1973), presumably arising from cells in the two retinas. Cornwall and Gorman (1983) used intracellular microelectrodes to record from the distal portion of the intact eye of Lima, and observed resting potentials around -45 mV and hyperpolarizing photoresponses with a saturating peak reaching about -70 mV. Experiments with current clamp and ionic substitutions revealed that the light response was accompanied by an increase of membrane conductance involving primarily K ions.
Depolarizing and hyperpolarizing photoresponses have also been documented in the eye of the scallop Pecten irradians (Gorman and McReynolds, 1969; McReynolds and Gorman, 1970), making these organisms of considerable interest for physiological studies of visual transduction. However, there are inherent limitations in whole-eye preparations; these include the difficulty of positively identifying the cell type from which recordings are obtained (especially if no intracellular injection of marking dyes and subsequent histological examinations are performed), uncertainty about the extent of control of the extracellular environment, and the possible confounding that may result from synaptic interactions and/or electrotonic coupling between neighboring cells.
This report demonstrates that physiologically viable solitary photoreceptors can be obtained from Lima eyes by enzymatic dissociation, and shows a basic characterization of their photoresponse. A detailed study of voltage-dependent conductances is presented in the following paper, while the third article of this series will examine the photocurrent under voltage clamp.
Subjects
Specimens of Lima scabra were obtained through Carolina Biological Supply Co. (Burlington, NC). Animals can be maintained for several weeks in an artificial sea water (ASW) aquarium (Instant Ocean; Aquarium Systems, Mentor, OH) at a temperature of 24°C.
Dissociation Procedure
The following protocol has been successfully used to obtain viable isolated photoreceptors: Eyes are dissected from the animal under dim red light illumination and enzymatically treated with 0.5% collagenase (type IA; Sigma Chemical Co., St. Louis, MO) for 30-45 min and then with 0.2% trypsin (type III; Sigma Chemical Co.) for 20-30 min at 26°C. Subsequently, they are rinsed in cold ASW for 5-10 min. Dissociation is accomplished by gentle, repetitive suction and expulsion using a fire-polished Pasteur pipette. Sometimes trituration is performed in nominally 0 Ca ASW, which seems to facilitate cell dispersion. In such cases, exposure to low extracellular Ca is limited to no more than 1-2 min; otherwise, harmful effects can result (see below). An aliquot of the cell suspension is transferred to the recording chamber, which is mounted onto the stage of an inverted microscope (Nikon Diaphot). To increase cell adhesion, the coverslip bottom of the chamber is treated overnight with a 0.1% solution of collagen in distilled water, then with a 0.5% solution of concanavalin A (Sigma Chemical Co.) in 1 M NaCl for 2 h (Bader et al., 1979), and rinsed with distilled water. Within 10-15 min after being transferred to the chamber most of the cells in the suspension are plated to the bottom, and the flow from a perfusion system that allows rapid solution changes is turned on at a rate of ~0.5-1.0 ml/min.
Scanning Electron Microscopy
Dissociated cells plated onto coverslips were fixed for 2 h in sea water containing 1% glutaraldehyde, diluted with distilled water to compensate for changes in osmolarity. After fixation the cells were dehydrated by immersion in ethyl alcohol solutions of increasing concentration (10, 25, 50, 75, 90, 95, and 100%, 5 min in each). Subsequently, they were critical-point dried, sputtered with gold palladium, and viewed with a JEOL-JSM 840 scanning electron microscope (SEM) at 15,000 V.
Electrophysiological Recording
Initial intracellular measurements were performed with conventional fine-tipped microelectrodes pulled from omega dot capillary glass (type 27-32-1; Frederick Haer & Co., Brunswick, ME) on a Brown-Flaming horizontal puller (Sutter Instrument Company, San Francisco, CA) to a tip resistance of 100-150 MΩ when filled with 4 M KAc. In all subsequent recordings patch electrodes made from fiberless borosilicate glass were used instead. These were pulled in two stages on a vertical puller (model 700; David Kopf Instruments, Tujunga, CA) to a tip o.d. of 1-1.5 µm, and fire polished using the method described by Hamill et al. (1981). Electrodes were filled with a solution compatible with the cytosol, since exchange with intracellular constituents can be expected given the size of the tip orifice (Fenwick et al., 1982). The intracellular solution contained 300 mM KCl, 12 mM NaCl, 10 mM MgCl₂, 1 mM EGTA, 300 mM sucrose, and 10 mM HEPES buffered to pH 7.3. The electrode resistance measured in ASW ranged between 8 and 15 MΩ.
The recording electrode was connected to a capacity-compensated, high-impedance differential amplifier (Thomas, 1977) equipped with a bridge circuit for injection of constant current. The reference electrode was an agar bridge (1% agar in 3 M KCl). A Huxley-type micromanipulator (Custom Medical Research Equipment, Glendora, NJ) was used to position the microelectrode while the cells were visualized with a TV camera (model 1350A; Panasonic) through the side port of the inverted microscope. A long-pass filter (50% transmission cut-off at 650 nm; Ditric Optics, Hudson, MA) was used to provide deep red, dim illumination. Small constant-current pulses (20-100 pA) were repetitively administered to monitor the resistance in series with the electrode. Upon making contact with the cell surface, gentle suction was applied through the side-port of the electrode holder (World Precision Instruments, New Haven, CT). When a high resistance seal was obtained (a criterion of 10 GΩ or more was usually adopted), the patch was broken by a brief, intense pulse of suction. Access to the interior of the cell was indicated by (a) a reduction of the measured resistance from a value > 10 GΩ to several hundred megaohms (mainly the series combination of access resistance and input resistance of the cell); (b) a much slower rise time of the voltage signal induced by the current pulses, due to the charging of the cell capacitance; and (c) a dc shift in voltage (the cell's membrane potential). Photoreceptors were allowed to dark adapt for ~10 min before starting an experiment. Data were either directly recorded with a two-channel strip-chart recorder (model 2400; Gould Inc., Cleveland, OH), or fed to a tape recorder (Racal Recorders, Southampton, UK) for subsequent play-back and analysis.
Optical Stimulation
Light stimulation was delivered through an optical bench consisting of a 100-W tungstenhalogen lamp (GTE Sylvania, Winchester, KY), a condenser lens, a heat-absorbing filter, an electromechanic shutter (Uniblitz; Vincent Associates, Rochester, NY), a set of calibrated neutral density filters, an adjustable pin-hole, and a field lens. The stimulating beam was brought into the light path of the microscope illuminator with a cube beam-splitter placed above the microscope condenser; this focused an image of the pinhole onto the preparation. The "full field" illumination therefore consisted of a circular region ~ 100-150 ~m in diameter, such that only the target cell was illuminated. This precaution avoided exposing all the photoreceptors in the chamber to repetitive stimulation during the course of an experiment. The intensity of the unattenuated beam of light, measured with a calibrated radiometer (United Detector Technology, Hawthorne, CA), was 240 I~W/cm 2. White light was used for photostimulation. The shutter was operated through a driver circuit described previously (Cornwall and Thomas, 1979). Fig. 1 A shows a typical photoreceptor, as viewed under Nomarski optics. Cells of this type are usually rather numerous and exhibit a smooth soma, an intermediate, heavily pigmented region, and a rhabdomeric lobe covered with microvilli. Typically the total length is ~ 25 I~m and the width is ~ 10 Ixm. In addition to the rhabdomeric photoreceptors, clumps of small spherical cells are also frequently seen. These are lightly pigmented and are likely to be glia.
Morphology of Dissociated Cells

Fig. 1 A shows a typical photoreceptor, as viewed under Nomarski optics. Cells of this type are usually rather numerous and exhibit a smooth soma, an intermediate, heavily pigmented region, and a rhabdomeric lobe covered with microvilli. Typically the total length is ~25 μm and the width is ~10 μm. In addition to the rhabdomeric photoreceptors, clumps of small spherical cells are also frequently seen. These are lightly pigmented and are likely to be glia.
Scanning electron micrographs reveal finer morphological details of the photoreceptors (Fig. 1 B). In particular, the microvilli in the distal lobe of the cell can be appreciated. It is likely that their natural length is greater, and that during processing they are broken off. The stump of the axon, which in the intact animal projects along the circumpallial nerve, is clearly visible. Axons are almost invariably severed at the hillock during the dissociation process, and in over 20 cells examined in the SEM only one axon stump longer than 5 μm was seen. It is thus likely that all aspects of the responses recorded from the isolated cells originate in the soma or the rhabdomere.
Resting Potentials and Passive Properties
Early attempts to record membrane voltage with fine microelectrodes were only marginally successful: impalements rarely lasted for more than a few minutes, and resting potentials were modest, seldom reaching -50 mV. The use of patch electrodes in the whole-cell configuration proved much more fruitful (see Methods). With this technique the average resting potential was -62 mV (SD = 12.8, n = 27), and stable recordings could be made for periods of up to 2 h. Evidence supporting the claim that such methods can indeed maintain the cell in better health, as compared with conventional microelectrode impalements, has been presented by Pelzer et al. (1984). Resting potential often declined somewhat (usually 5-10 mV) during the period of dark adaptation that preceded the beginning of the experiment, but then stabilized.
The input resistance of dissociated photoreceptors is in the range 3-10 × 10^8 Ω, as measured from the steady-state voltage change in response to injection of small pulses of constant current. The membrane capacitance was determined in a few cells from the time constant of the exponential relaxation after a current step, and is of the order of 50-70 pF. This value is approximately a factor of 6 higher than would be predicted from the surface area of a prolate spheroid of dimensions comparable to the cells, assuming a specific membrane capacitance of 1 μF/cm². Given the infoldings of the plasma membrane in the rhabdomeric region, such a discrepancy is readily accounted for. In the dark, injection of pulses of constant current depolarizing the cell membrane above -20 mV can trigger a regenerative response (not shown). A few attempts were made to record from clumps of the smaller, round cells. Resting potentials were found to be significantly more negative (around -80 mV), but no active responses could be elicited by depolarizing current injection or light stimulation. These cells were not studied further.
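As a rough consistency check on the figures above, the capacitance predicted for a smooth-surfaced cell of the stated dimensions (~25 μm by ~10 μm) can be worked out from the surface area of a prolate spheroid at 1 μF/cm². The short Python sketch below is only one way of doing the arithmetic and is not part of the original study; the semi-axis values are taken from the cell dimensions quoted earlier.

import math

# Semi-axes of a prolate spheroid approximating the cell body
# (total length ~25 um, width ~10 um, as quoted in the text).
a = 12.5e-4   # semi-major axis, cm
b = 5.0e-4    # semi-minor axis, cm

# Surface area of a prolate spheroid.
e = math.sqrt(1.0 - (b / a) ** 2)                                     # eccentricity
area = 2.0 * math.pi * b ** 2 * (1.0 + (a / (b * e)) * math.asin(e))  # cm^2

c_specific = 1.0e-6                  # 1 uF/cm^2, expressed in F/cm^2
c_predicted = c_specific * area      # farads

print(f"surface area  = {area * 1e8:.0f} um^2")        # ~650 um^2
print(f"predicted C_m = {c_predicted * 1e12:.1f} pF")  # a few pF
# The measured 50-70 pF is several-fold larger, consistent with the extra
# membrane area contributed by the rhabdomeric infoldings.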
Light Response
After a few minutes of dark adaptation most cells begin to produce discrete waves ("quantum bumps"), usually between 2 and 25 mV in amplitude, with a rate that is typically ~ 1-2/s. In Fig. 2 A a representative record is displayed. The amplitude distribution of the quantum bumps is skewed, as shown in the histogram in the bottom part of the figure, and for this cell the mean value is 11 mV. Small waves (< 20 mV) relax smoothly back to baseline, whereas larger ones usually have a spikelike appearance and are followed by a brief after-hyperpolarization, as though they triggered a regenerative response (arrows). The time intervals between waves follow an approximately exponential distribution (not shown). The rate of quantum bumps can be increased by dim background light stimulation in a way that is graded with stimulus intensity (Fig. 2 B, top). At low intensities the frequency is an approximately linear function of the rate of incident photons, but with brighter lights it becomes markedly sublinear (Fig. 2 B, bottom). Similar results were obtained in two other cells. Fig. 3 shows the effect of presenting steps of dim light of increasing intensity and illustrates the way light-evoked bursts of discrete waves gradually give rise to a phasic depolarizing receptor potential. At higher stimulus intensities not only does the receptor potential acquire a smoother time course, but a hyperpolarizing dip (clearly overshooting the resting potential) also becomes evident. The voltage traces in Fig. 4 A, recorded in a different cell, reveal additional features of the photoresponse elicited by brighter flashes of light. The amplitude of the early spikelike depolarization reaches saturation, and the hyperpolarizing dip is followed by a second, more sluggish depolarizing wave accompanied by a train of action potentials. Very bright flashes are always followed, in addition, by a third depolarization, which develops slowly (over the course of many seconds) and can last for more than 1 min. This slow component was not examined further in this study. All of the features of the complex light response evoked by a brief, intense stimulus can be appreciated in Fig. 4 B, including the initial phase of the prolonged after-depolarization. In most experiments reported below light intensity was adjusted to elicit only the early phases of the response, because after stimulation with brighter stimuli a long period of dark adaptation is required to recover sensitivity.
Ionic Basis of the Photoresponse
A number of ion substitution experiments were performed to elucidate the nature of the conductance changes underlying the light response. Replacement of extracellular Na with Tris on an equimolar basis nearly abolished the light response (first two records in Fig. 5), the effect being fully reversible in most cases (last record).

FIGURE 5. Effects of removal of extracellular Na on the photoresponse. A brief flash of light was administered as the cell was superfused with normal ASW. After Na was replaced with Tris, a test flash of the same intensity failed to evoke any response, although increasing the stimulus intensity still produced a substantial receptor potential. Upon returning to normal sea water, the cell recovered its initial responsiveness.
Increasing the test stimulus intensity while the cell was bathed in 0 Na ASW, however, still resulted in the production of a sizable photoresponse (Fig. 5, third trace). This inability to completely suppress the light response in 0 Na sea water was confirmed in four other cells. A substantial reduction of the light response was also observed when photoreceptors were superfused with nominally 0 Ca ASW, replacing Ca2+ with Mg2+ (Fig. 6 A). Recovery of the photoresponse upon returning to normal ASW could be obtained, provided that the duration of exposure to 0 Ca ASW did not exceed ~5 min (n = 3). Otherwise, an irreversible deterioration usually resulted (n = 5), as in the example shown in Fig. 6 B. 0 Ca ASW not only reduced the amplitude of the response, but also resulted in a characteristic "bumpy" appearance, reminiscent of the effect of 0 Ca ASW observed in Limulus ventral photoreceptors (Lisman, 1976). The integrity of the light response in Lima photoreceptors appears to depend on the presence of both Na and Ca ions in the extracellular bathing medium. The contribution of these ions to the photoresponse, however, may concern fundamentally different processes: for example, light-activated vs. voltage-dependent conductance changes. Since these factors are confounded in the experiments described above, a paradigm was sought that would minimize the contamination of the photoresponse by voltage-dependent mechanisms. To this end, ion substitution experiments were performed using not only discrete flashes, but also prolonged, dim steps of light capable of inducing an increase in the frequency of quantum bumps without causing a net depolarizing shift in the cell membrane potential. In 0 Na ASW both the light-induced quantum bumps and the response to a brighter flash were markedly attenuated in a reversible way (Fig. 7, A-C). By contrast, discrete waves did not seem to be significantly altered by superfusion with 0 Ca ASW for a short time (Fig. 7 D), whereas the response to a flash was markedly degraded under those conditions. This suggests that some other aspects of the full-fledged photoresponse--including perhaps a Ca spike--might have been affected by the removal of extracellular Ca. Superfusion with 0 Na ASW nearly abolished quantum bumps in another cell tested under similar conditions. The resistance of quantum bumps to removal of Ca from the external solution was confirmed in two other photoreceptors. A supranormal production of quantum bumps, as in Fig. 7, C and D, sometimes developed during prolonged experiments and could be related to a similar phenomenon described in Limulus ventral photoreceptors internally dialyzed with solutions lacking nucleotides (Stern et al., 1985).

FIGURE 7. Production of discrete waves in response to dim, steady illumination, and responses evoked by moderately bright test flashes under different ionic conditions. Since the membrane potential changed somewhat during various solution changes, the brief test flashes were presented while constant current was injected in order to recover the initial level of membrane potential. (A) Cell bathed in normal sea water; (B) Na replaced by Tris; (C) return to normal sea water; (D) Ca replaced by magnesium. While removal of either Na+ or Ca2+ has a deleterious effect on the photoresponse evoked by a brief flash, only elimination of Na+ severely affects light-induced quantum bumps.
Membrane depolarization by constant current injection reduces the size of the early component of the light response and concomitantly increases the size of the dip. Fig. 8 A shows one of the two instances in which it was possible to reverse the depolarizing phase of the response. Most attempts were unsuccessful because a region of marked instability was encountered when the membrane was depolarized above -20 mV. The first component appears to have reversed sign when the cell was depolarized to +14 mV. The dip, on the other hand, vanished with membrane hyperpolarization to -80 mV. Clear reversal of the dip could not be obtained, in part because larger hyperpolarizations usually proved detrimental to the cells and were avoided. Fig. 8 B plots the amplitude of the two phases of the response as a function of membrane potential. The lines were fitted to the two sets of data points by the method of least squares.

FIGURE 8. Effect of membrane potential changes on the light response. (A) A photoreceptor cell was hyperpolarized or depolarized by injecting steady current through the recording electrode (resting potential -53 mV), and stimulated with a 100-ms flash (-2.4 log). As the voltage is made more positive, the amplitude of the early depolarization decreases, and the size of the dip increases. At +14 mV the early phase of the response appears to reverse sign. (B) Amplitude of the light response as a function of initial membrane potential, plotted separately for the two components.

The amplitude of the hyperpolarizing phase of the light response was found to be related to light intensity, growing larger well beyond the point at which the early depolarization reached saturation (n = 5). Fig. 9 A shows a typical example (see also Fig. 3). In addition, this component appeared to be prominent if a cell was stimulated while still light-adapted from a previous flash. In Fig. 9 B, bright stimuli were presented in close succession, one every 15 s. Under these conditions the depolarizing component of the response grew progressively smaller, while the hyperpolarizing dip increased, eventually resulting in a purely hyperpolarizing receptor potential. The induction of the hyperpolarizing response is not simply a consequence of the depolarizing shift of membrane potential shortly after stimulation: if constant current was injected in order to recover the prestimulus resting potential, the response to subsequent test flashes was still hyperpolarizing (Fig. 9 B). If a longer interval between flashes was allowed to elapse, the biphasic response was recovered (last trace in Fig. 9 B). Similar results were obtained in three other cells.

FIGURE 9 B. A dark-adapted cell was stimulated with repeated flashes of moderate intensity every 15 s. The first one (-1.8 log) elicits a normal-looking biphasic response, while with subsequent stimuli (0 log) the early depolarization selectively decreases in such a way that the photoresponse eventually becomes a pure hyperpolarization. The effect is not a consequence of the changes in membrane potential: if baseline membrane voltage is restored by injection of hyperpolarizing current, the light response still remains hyperpolarizing. Interposing a 1-min interval between flashes (last two traces) leads to a recovery of the original shape of the photoresponse.
DISCUSSION
The results obtained demonstrate that physiologically viable isolated photoreceptor cells from the eye of the mollusk Lima scabra can be obtained by an enzymatic dissociation procedure. These cells remain healthy for several hours and are suitable for electrophysiological recording with fire-polished patch electrodes. Under conditions of dark adaptation they produce discrete waves in the absence of photostimulation. Their shape, amplitude, duration, and frequency are similar to the quantum bumps that have been described in other invertebrates, such as Limulus lateral and ventral photoreceptors (Yeandle, 1958;Millecchia and Mauro, 1969), as well as Hermissenda photoreceptors (Takeda, 1982). The skewed shape of the amplitude histogram is reminiscent of the distribution observed in Hermissenda (Takeda, 1982), and may reflect the presence of two distinct classes of waves (Adolph, 1964;Yeandle and Spiegler, 1973). In such a case they probably overlap considerably, since no discrete peaks were evident in the histograms, and separation may thus be difficult. Dim continuous illumination causes an increase in the rate of quantum bumps, proportional to the light flux (Yeandle, 1958;Adolph, 1964;Lillywhite, 1977). At higher light intensities, however, the frequency of discrete waves falls significantly below such a linear relation, as occurs in Limulus retinular cells (Adolph, 1964). It is possible, however, that temporal overlap of quantum bumps (and the consequent difficulty of accurately counting them) may in part account for the apparent departure from linearity. The effect of dim light is not only to increase the rate of quantum bumps, but also to increase their mean amplitude. In darkness, large amplitude waves are usually infrequent, whereas in the presence of background illumination they tend to become predominant. A similar phenomenon was examined by Yeandle and Spiegler (1973) in Limulus photoreceptors. Flashes of more intense stimulating light evoke a complex pattern of voltage changes, consisting of a phasic depolarization followed by a hyperpolarizing dip. If the light is sufficiently intense, a second depolarizing wave is produced, which is accompanied by action potentials. The almost all-or-none appearance of the initial depolarizing phase of the response at higher stimulus intensities probably reflects the triggering of a regenerative response once a sufficient number of overlapping quantum bumps depolarize the membrane to threshold. This conjecture receives support both from the observation that passive depolarization produced by current injection is capable of eliciting an action potential in darkness, and also from the fact that spikes often accompany the late depolarizing wave of the response. The presence of voltage-dependent conductances in Lima photoreceptors is consistent with the fact that they encode information about light in the form of action potentials which propagate along the circumpallial nerve without the mediation of any second-order cell (Mpitsos, 1973). It is nevertheless noteworthy to find such mechanisms functional in these cells, which are nearly always axotomized as a result of the dissociation procedure.
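As a purely illustrative aside on the temporal-overlap conjecture mentioned above, a simple dead-time model shows how counting losses alone can bend an otherwise linear bump-rate curve. Neither the model nor the assumed effective bump duration is taken from the recordings; both are assumptions chosen only to make the qualitative point.

def counted_rate(true_rate, dead_time):
    # Apparent event rate for Poisson arrivals when events closer together
    # than `dead_time` merge into a single counted wave
    # (non-paralyzable dead-time approximation).
    return true_rate / (1.0 + true_rate * dead_time)

dead_time = 0.3  # s; assumed effective bump duration, for illustration only
for true_rate in (0.5, 1, 2, 5, 10, 20):  # bumps per second
    print(f"true {true_rate:5.1f}/s -> counted {counted_rate(true_rate, dead_time):5.2f}/s")
# At low rates the counted rate tracks the true rate almost linearly;
# at higher rates it saturates, mimicking the sublinearity seen in Fig. 2 B.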
On the basis of ion substitution experiments it appears that the primary ion involved in the generation of the photoresponse is Na (but certainly not the only one, since Na removal never abolishes the light response completely). Unambiguous determination of the ionic selectivity of the light-sensitive conductance would entail measuring the reversal potential under different ionic conditions (see Brown and Mote, 1974). Such a goal proved elusive with current-clamp techniques because of the instability of the membrane potential above -20 mV, and the detrimental effects of maintaining the cells depolarized for more than a few seconds. Reversal potential measurements under voltage clamp are presented in the third paper of this series.
The elimination of Ca ions from the bath also has an inhibitory effect on the photoresponse, although probably through a less direct mechanism, since it does not adversely affect the light-induced production of quantum bumps. Possible explanations include the reduction of voltage-dependent conductances that may normally contribute to the shape of the photoresponse. This issue is addressed in a more systematic way in the following article, using the whole-cell patch-clamp technique (Hamill et al., 1981;Marty and Neher, 1983). Other mechanisms are also likely to be implicated in the effects of Ca removal: for example, in Limulus ventral photoreceptors 0 Ca solutions markedly degrade the time course of the light response by introducing a large variability in the latency of the underlying discrete waves (Lisman, 1976). In addition, repeated photostimulation in 0 Ca ASW leads to a progressive decline in the response amplitude and sensitivity (Bolsover and Brown, 1985).
A prominent feature of the light response of Lima photoreceptor cells is the transient hyperpolarizing phase, which is reminiscent of the dip that has been observed in the photoresponse of Hermissenda (Detwiler, 1976) and Balanus (Hanani and Shaw, 1977). The dip is usually small in dark-adapted cells stimulated with dim or moderately bright light, but becomes conspicuous at higher stimulus intensities, growing larger and faster well beyond the point where the depolarizing potential reaches saturation. Since the hyperpolarizing transient is reduced (but usually not entirely abolished) in 0 Ca ASW (e.g., Fig. 4 D), it can be hypothesized that it is due to the activation of a Ca-dependent K conductance (Meech and Strumwasser, 1970;Meech and Standen, 1975). If such is the case, the source of the Ca ought to be in part extracellular, with influx occurring either through a voltage-dependent Ca conductance (see the following paper), or perhaps through the light-sensitive conductance. Stimulation under light-adapted conditions progressively increases the magnitude of the dip, relative to the early depolarizing wave, to the point that light stimulation will evoke only a hyperpolarization. A similar phenomenon has been observed in the eye of Hermissenda (Dennis, 1967;Detwiler, 1976) even when axons were severed in order to reduce the potential confounding of cell--cell interactions (Detwiler, 1976).
The response of visual cells has been known to undergo profound changes as a function of the intensity of stimulating light (Fuortes and Hodgkin, 1964;Penn and Hagins, 1974) and the state of adaptation (Fuortes and Hodgkin, 1964;Baylor and Hodgkin, 1974). The effects concern both the amplitude and the time course of the photoresponse. Accounts have been proposed in terms of the cascade of biochemical reactions that intervene between quantum absorption by photopigment molecules and conductance changes at the plasma membrane (for example, Baylor et al., 1974a, b). Within such a context, however, it is usually assumed that a single conductance is implicated that is directly controlled by light-activated mechanisms. Such a simplification in the present case may be unwarranted, and the possibility of parallel light-dependent effector mechanisms suggests itself. Multiple light-controlled processes were first suggested by Lisman and Brown (1971) in Limulus ventral photoreceptors. The observations reported by Detwiler (1976) in Hermissenda also provide evidence for the existence of separate conductance changes controlled by light. Rigorous evaluation of such a possibility requires (a) characterizing those ionic mechanisms that can be activated independently of photostimulation, and (b) analyzing the photocurrent under voltage clamp. These topics will be the subject of the two reports that follow.
A puzzling feature that has been apparent throughout the course of this investigation is the uniformity (both in terms of morphology and responsiveness to light stimulation) of the numerous cells examined. In particular, no dark-adapted photoreceptor was found to respond to light with a hyperpolarizing receptor potential. Clearly, none of the photoreceptors studied fits the description of the distal cells reported by others (Bell and Mpitsos, 1968;McReynolds, 1976;Cornwall and Gorman, 1983). A tentative explanation could be that since the ciliary hyperpolarizing photoreceptors are presumably located in a different supporting structure, conditions for dissociating them may be quite different. For example, one could speculate that such cells are more fragile and easily destroyed during mechanical trituration, or that a more extensive enzymatic incubation is required to free them from surrounding tissue. Alternative dissociation protocols will be explored in an attempt to obtain solitary ciliary photoreceptors.

This work was supported by NSF grant BNS-8418842 and NIH grant EY-07559.
Original version received 21 June 1988 and accepted version received 25 July 1990. | 2014-10-01T00:00:00.000Z | 1991-01-01T00:00:00.000 | {
"year": 1991,
"sha1": "66dc0f5b337d6c91672ca9d69e14c33d95f8624b",
"oa_license": "CCBYNCSA",
"oa_url": "http://jgp.rupress.org/content/97/1/17.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed5074579995c8d693b4e778aaab2c0cc74d49b6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
3176028 | pes2o/s2orc | v3-fos-license | Discourse-Oriented Anaphora Resolution in Natural Language Understanding: A Review
Recent research in anaphora resolution has emphasized the effects of discourse structure and cohesion in determining what concepts are available as possible referents, and how discourse cohesion can aid reference resolution. Five approaches, all within this paradigm and yet all distinctly different, are presented, and their strengths and weaknesses evaluated.
Introduction
To resolve various forms of definite reference -- anaphora in particular -- early natural language understanding systems (reviewed in Hirst 1981) typically used a simple kind of history list of concepts previously mentioned in the input, with heuristics for selecting from this list. The history list was usually just a shift register containing the noun phrases from the last sentence or two, and the heuristics would take into account (among other things) selectional restrictions and syntactic constraints on pronominalization. SHRDLU (Winograd 1972) exemplifies this approach. Although able to resolve some types of reference, these systems were not able to handle reference in general, primarily because they did not take into account the effects of discourse structure on reference and pronominalization. This failure motivated work in computational discourse understanding that attempted to exploit discourse structure, especially the relationship between reference and discourse theme, to resolve definite reference.
The present paper 1 is a review of recent work in this area. Five principal approaches are surveyed: 1. Concept activatedness (Kantor) --an examination of the factors affecting the pronominalizability of a concept; 2. Task-oriented dialogues (Grosz) --using a priori knowledge of discourse structure to resolve references; 3. Frames as focus (Sidner) --using discourse cues to choose a frame from a knowledge structure to act as focus; 4. Logical formalism (Webber) --choosing a predicate calculus-like representation to handle problems such as quantification in reference resolution; 5. Discourse cohesion (Hobbs, Lockman, and others) --building a focus and resolving reference by discovering the cohesive ties in a text.
Some preliminary definitions: By focus we mean the set containing exactly those concepts available for anaphoric or other definite reference at a point in a text, a set which may conveniently be divided into parts for nominal concepts, temporal concepts, verbal concepts and so forth.2 The focus is closely related to, but not necessarily identical to, the theme of a discourse --what the discourse is about --and since the latter is also sometimes termed focus, there is some terminological confusion. (See Section 2.6 and Chapter 4 of Hirst 1981 for further discussion of the distinction between theme and focus.) Strictly speaking, we mean by the referent of an anaphor or reference the real-world entity that it specifies, while by antecedent we mean the textual item through which the reference is made. In (1-1): (1-1) The Queen splutters a little when she speaks. 3 the antecedent of she is the text The Queen and the referent is the person who is queen. Generally, however, the two words can be (and are) used interchangeably without confusion.
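For concreteness, the focus set as defined here could be held in a structure along the following lines; the partition into nominal, temporal, and verbal concepts follows the definition above, but the layout itself (and the use of Python) is merely an illustrative assumption, not something proposed in the work reviewed.

from dataclasses import dataclass, field

@dataclass
class Focus:
    # Concepts currently available for anaphoric or other definite
    # reference, partitioned by concept type.
    nominal: set = field(default_factory=set)    # e.g. the person who is queen
    temporal: set = field(default_factory=set)   # e.g. the time being spoken of
    verbal: set = field(default_factory=set)     # e.g. the spluttering event

    def all_concepts(self):
        return self.nominal | self.temporal | self.verbal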
Concept activatedness
Robert Kantor (1977) has investigated the problem of why some pronouns in discourse are more comprehensible than others, even when there is no ambiguity or anomaly. In Kantor's terms, a hard-to-understand pronoun is an example of inconsiderate discourse, and speakers (or, more usually, writers) who produce such pronouns lack secondary [linguistic] competence. In our terms, an inconsiderate pronoun is one that is not properly in focus.
I will first summarize Kantor's work, and then discuss what we can learn about focus from it.
Kantor's thesis
Kantor's main exhibit is the following text: (2-1) A good share of the amazing revival of commerce must be credited to the ease and security of communications within the empire. 'The Imperial fleet kept the Mediterranean Sea cleared of pirates. In each province, the Roman emperor repaired or constructed a number of skillfully designed roads. They were built for the army but served the merchant class as well. Over them, messengers of the Imperial service, equipped with relays of horses, could average fifty miles a day.
He claims that the they in the penultimate sentence is hard to comprehend, and that most informants need to reread the previous text to find its referent. Yet the sentence is neither semantically anomalous nor ambiguous --the roads is the only plural NP available as a referent, and it occurs immediately before the pronoun with only a full-stop intervening. To explain this paradox is the task Kantor set himself.
Kantor's explanation is based on discourse topic and the listener's expectations. In (2-1), the discourse topic of the first three sentences is ease and security of communication in the Roman empire. In the fourth sentence, there is an improper shift to the roads as the topic: improper, because it is unexpected, and there is no discourse cue to signal it. Had the demonstrative these roads been used, the shift would have been okay.
3 Underlining is used in this and subsequent examples to indicate the anaphor(s) of interest. It does not indicate stress.
(Note that a definite NP such as the roads is not enough.) Alternatively, the writer could have clarified the text by combining the last three sentences with semicolons, indicating that the last two main clauses were to be construed as relating only to the preceding one rather than to the discourse as a whole.
Kantor identifies a continuum of factors affecting the comprehension of pronouns. At one end is unrestricted expectation and at the other negative expectation. What this says in effect is that a pronoun is easy to understand if its referent is expected, and difficult if it is unexpected. This is not as vacuous as it at first sounds; Kantor provides an analysis of some subtle factors which affect expectation.
The most expected pronominalizations are those whose referent is the discourse topic, or something associated with it (though note the qualifications to this below). Consider: (2-2) The final years of Henry's reign, as recorded by the admiring Hall, were given over to sport and gaiety, though there was little of the licentiousness that characterized the French court. The athletic contests were serious but very popular. Masques, jousts and spectacles followed one another in endless pageantry. He brought to Greenwich a tremendously vital court life, a central importance in the country's affairs, and above all, a great naval connection. 4 In the last sentence, he is quite comprehensible, despite the distance back to its referent, because the discourse topic in all the sentences is Henry's reign.
An example of the converse --an unexpected pronoun which is difficult despite recency --can be seen in (2-1) above. Between these two extremes are other cases involving references to aspects of the local topic, changes in topic, syntactic parallelism, and, in topicless instances, recency (though the effect of recency decays very fast). I will not describe these here; the interested reader is referred to Section 2.6.5 of Kantor's dissertation (1977).
Kantor then defines the notion of the activatedness of a concept. This provides a continuum of concept givenness, which contrasts with the simple binary given-new distinction usually accepted in linguistics (for example, Chafe 1970). Kantor also distinguishes activatedness from the similar "communicative dynamism" of the Prague school (Firbas 1964). Activatedness is defined in terms of the comprehensibility phenomena described above: the more activated a concept is, the easier it is to understand an anaphoric reference to it. Thus activatedness depends upon discourse topic, context, and so forth.

4 From: Hamilton, Olive and Hamilton, Nigel. Royal Greenwich. Greenwich: The Greenwich Bookshop, 1969. Quoted by Halliday and Hasan (1976:14), quoted by Kantor (1977).
The implications of Kantor's work
What are the ramifications of Kantor's thesis for focus? Clearly, the notions of activatedness and focus are very similar, though the latter has not generally been thought of as a continuum. It follows that the factors Kantor finds relevant for activatedness and comprehensibility of pronouns are also important for those of us who would maintain focus in computer-based natural language understanding (NLU) systems; we will have to discover discourse topic and topic shifts, generate pronominalization expectations, and so forth.
In other words, if we could dynamically compute (and maintain) the activatedness of each concept floating around, we would have a measure for the ordering of the focus set by preferability as referent; the referent for any given anaphor would be the most highly activated element which passes basic tests for number, gender and semantic reasonableness. And to find the activatedness of the concepts, we follow Kantor's pointers (which he himself concedes are very tenuous and difficult) to extract and identify the relevant factors from the text.
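A minimal sketch of what such a procedure might look like is given below, assuming that an activatedness score is somehow available for every concept in focus; Kantor gives no algorithm, and the candidate interface and agreement tests here are invented purely for illustration.

def resolve(anaphor, focus_concepts, activatedness):
    # Pick the most highly activated concept that passes the basic
    # number/gender/semantic-reasonableness tests.
    candidates = [
        c for c in focus_concepts
        if c.number == anaphor.number
        and c.gender == anaphor.gender
        and anaphor.selectionally_compatible(c)
    ]
    if not candidates:
        return None   # referent apparently not in focus; look elsewhere
    return max(candidates, key=activatedness)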
It may be objected that by applying Kantor's insights all we have done is produce a mere notational variant of our original problem. This is partly true. One should not gainsay the power of a good notation, however, and what we can buy here even with mere notational variance is the power of Kantor's investigations. And there is more. Previously, it has been suggested that items either are in focus or they aren't, and that at each separate anaphor we need to compute a preference ranking of the focus elements for that anaphor. What Kantor tells us is that such a ranking exists independently of the actual use of anaphors in the text, and that we can find the ranking by looking at things like discourse topic. Some miscellaneous comments on Kantor's work: 1. It can be seen as a generalization albeit a weakening of Grosz's (1977a, 1977b, 1978) findings on focus in task-oriented dialogues (where each sub-task becomes the new discourse topic, opening up a new set of possible referents), which are discussed below in Section 3. (Kantor and Grosz were apparently unaware of each other's work; neither cites the other.) 2. It provides an explanation for focus problems that have previously baffled us. For example, in Hirst (1977a) I contemplated the problem of the ill-formedness of this text: (2-3) *John left the window and drank the wine on the table. It was brown and round.
I had previously thought this to be due to a syntactic factor --that cross-sentence pronominal reference to an NP in a relative clause or adjectival phrase qualifying an NP was not possible. However, it can also be explained as a grossly inconsiderate pronoun which does not refer to the topic properly --the table occurs only as a descriptor for the wine, and not as a concept "in its own right". This would be a major restriction on possible reference to sub-aspects of topics.
3. Like too many other researchers, Kantor makes many claims about comprehensibility and the degree of well-formedness of sentences which others (as he concedes) may not agree with. He uses only himself (and his friends, sometimes) as an informant, and then only at an intuitive level.5 Claims as strong and subtle as Kantor's cry out for empirical testing.6

5 For a discussion of the problem of idiosyncratic well-formedness judgments, and a suggested solution, see Sections 4.2 and 7.3 of Hirst (1981). 6 Kantor tells me that he hopes to test some of his assertions by observing the eye movements of readers of considerate and inconsiderate texts, to find out if inconsiderate texts actually make readers physically search back for a referent.

Focus of attention in task-oriented dialogues

Barbara Grosz (1977a, 1977b) studied the maintenance of the focus of attention in task-oriented dialogues and its effect on the resolution of definite reference, as part of SRI's speech understanding system project (Walker 1978). By a task-oriented dialogue is meant one which has some single major well-defined task as its goal. For example, Grosz collected and studied dialogues in which an expert guides an apprentice in the assembly of an air compressor. She found that the structure of such dialogues parallels the structure of the task. That is, just as the major task is divided into several well-defined sub-tasks, and these perhaps into sub-sub-tasks and so on, the dialogue is likewise divided into sub-dialogues, sub-sub-dialogues, etc.,7 each corresponding to a task component, much as a well-structured Algol program is composed of blocks within blocks within blocks. As the dialogue progresses, each sub-dialogue in turn is performed in a strict depth-first order corresponding to the order of sub-task performance in the task goal (though note that some sub-tasks may not be ordered with respect to others). As we will see, this dialogue structure can be exploited in reference resolution.

7 Below I will use the prefix sub- generically to include sub-sub-sub-... to an indefinite level.
Grosz's aim was to find ways of determining and representing the focus of attention of a discourse -that is, roughly speaking, its global theme and the things associated therewith --as a means for constraining the knowledge an NLU system needs to bring to bear in understanding discourse. In other words, the focus of attention is that knowledge which is relevant at a given point in a text for comprehension of the text. 8 Grosz claims that antecedents for definite reference can be found in the focus of attention. That is, the focus of attention is a superset of focus in our sense, the set of referable concepts (in this case definite reference, not just anaphoric reference). Moreover, no element in the focus of attention is excluded from being a candidate antecedent for a definite NP. Grosz thereby implies that all items in the focus of attention can be referred to, and that hence the two senses of the word focus are actually identical.
Representing and searching focus
In Grosz's representation, which uses a partitioned semantic net formalism (Hendrix 1975, 1978), an explicit focus corresponds to a sub-dialogue, and includes, for each concept in it, type information about that concept and any situation in which that concept participates. For each item in the explicit focus, there is an associated implicit focus, which includes subparts of objects in explicit focus, subevents of events in explicit focus, and participants in those subevents. The implicit focus attempts to account for reference to items that have a close semantic distance to items in focus, or which have a close enough relationship to items in focus to be able to be referred to. The implicit focus is also used in detecting focus shifts (discussed below).
Then, at any given point in a text, antecedents of definite non-pronominal NPs can be found by searching through the explicit and implicit focus for a match for the reference.
After checking the other nonpronominal NPs in the same sentence to see if the reference is intrasentential, the currently active explicit focus (the focus corresponding to the present subdialogue) is searched, and then if that search is not successful, the other currently open focus spaces (that is, those corresponding to sub-dialogues that the present sub-dialogue is contained in) are searched in order, back up to the top of the tree. As part of the search the implicit focus associated with each explicit focus is checked, as are subset relations, so that if a novel, say, is in focus, it could be referred to as the book. If there is still no success after this, one then checks whether the NP refers to a single unique concept (such as the sun), contains new information (such as the red coat, when a coat is in focus, but not yet known to be red), or refers to an item in implicit focus.
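The search order just described can be summarized roughly as follows; the helper predicates (match_in_space and the rest) stand in for the partitioned-net matching machinery and are assumptions of this sketch, not part of Grosz's implementation.

def find_antecedent(definite_np, sentence_nps, open_focus_spaces,
                    match_in_space, is_unique_concept, carries_new_information):
    # 1. Intrasentential reference: other non-pronominal NPs in the sentence.
    for np in sentence_nps:
        if np is not definite_np and np.matches(definite_np):
            return np
    # 2. The active explicit focus, then the other open focus spaces up the
    #    tree; implicit focus and subset relations are checked inside
    #    match_in_space.
    for space in open_focus_spaces:          # ordered from active space upward
        referent = match_in_space(definite_np, space)
        if referent is not None:
            return referent
    # 3. Fall-back cases: unique concepts ("the sun") or NPs carrying new
    #    information introduce a new entity rather than finding an old one.
    if is_unique_concept(definite_np) or carries_new_information(definite_np):
        return definite_np.new_entity()
    return None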
A similar search method could be used for pronouns. However, since pronouns carry much less information than other definite NPs, more inference is required by the reference matching process to disambiguate many syntactically ambiguous pronouns, and it would be necessary to search focus exhaustively, comparing the reasonableness of candidate referents, rather than stopping at the first plausible one. In addition, other constraints on pronoun reference, such as local (rather than global) theme, and default referent, would also need to be taken into account; Grosz's mechanisms do not do this. However, Grosz does show how a partitioned network structure can be used to resolve certain types of ellipsis by means of syntactic and semantic pattern matching against the immediately preceding utterance, which may itself have been expanded from an elliptical expression. She leaves open for future research most of the problems in relating pronouns to focus.
Maintaining focus
Given this approach, one is then faced with the problem of deciding what the focus is at a given point in the discourse. For highly constrained task-oriented dialogues such as those Grosz considered, the question of an initial focus does not arise; it is, by definition, the overall task in question. The other component of the problem, handling changes and shifts in the focus, is attacked by Grosz in a top-down manner using the task structure as a guide.
A shift in focus can be indicated explicitly by an utterance, such as: (3-1) Well, the reciprocating afterburner nozzle speed control is assembled. Next, it must be fitted above the preburner swivel hose cover guard cooling fin mounting rack.
In this case, the reciprocating afterburner nozzle speed control assembly sub-task and its corresponding subdialogue and focus are closed, and new ones are opened for the reciprocating afterburner nozzle speed control fitting, dominated by the same open subtasks/sub-dialogues/focuses in their respective trees that dominated the old ones. If however the new subtask were a sub-task of the old one, then the old one would not be closed, but the new one added to the hierarchy below it as the new active focus space. The newly created focus space initially contains only those items referred to in the utterance, and those objects associated with the current sub-task. (Being able to bring in the associated objects at this time is, of course, the crucial point on which the whole system relies.) As subsequent non-shift-causing utterances come in, their new information is added to the active focus space.
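The bookkeeping this implies might be sketched as a stack of open focus spaces mirroring the sub-task tree; the class and method names below are invented for illustration and are far simpler than the partitioned-network representation actually used.

class FocusSpace:
    def __init__(self, task, items=None):
        self.task = task
        self.items = set(items or [])   # explicit focus for this sub-dialogue

class FocusStack:
    # Open focus spaces, innermost (active) last, mirroring the sub-task tree.
    def __init__(self, top_task):
        self.open_spaces = [FocusSpace(top_task)]

    def open_subtask(self, task, mentioned, associated_objects):
        # A new sub-task opens a space seeded with the items referred to in
        # the shift-causing utterance plus the objects associated with the task.
        self.open_spaces.append(FocusSpace(task, mentioned | associated_objects))

    def close_subtask(self):
        # A completed sub-task closes (pops) its focus space.
        return self.open_spaces.pop()

    def add_to_active(self, new_items):
        # Non-shift-causing utterances add their new information here.
        self.open_spaces[-1].items |= new_items

On this sketch, a shift to a sibling sub-task, as in (3-1), would be a close_subtask followed by an open_subtask under the same parent, whereas a shift to a sub-task of the current one would be an open_subtask alone.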
Usually, of course, speakers are not as helpful as in (3-1), and it is necessary to look for various clues to shifts in focus. For Grosz, the clues are definite NPs. If a definite NP from an utterance cannot be matched in focus, then this is a clue that the focus has shifted, and it is necessary to search for the new focus. If the antecedent of a definite NP is in the current implicit focus, this is a clue that a sub-task associated with this item is being opened. If the task structure is being followed, then the new focus will reflect the opening or closing of a sub-task.
Shifting cannot be done until a whole utterance is considered, because clues may conflict, or the meaning of the utterance may contraindicate the posited shift. In particular, recall that the task structure is only a guide, and does not define the dialogue structure absolutely. For example, the focus may shift to a problem associated with the current sub-task with a question like this: (3-2) Should I use the box-end ratchet wrench to do that?
This does not imply a shift to the next sub-task requiring a box-end ratchet wrench (assuming that the current task doesn't require one) (cf Grosz 1977b:105).
We can see here that the problem of the circularity of language comprehension looms dangerously: to determine the focus one must resolve the references, and to resolve the references, one must know the focus. In Grosz's work, the strong constraints of the structure of task-oriented dialogues provide a toehold. Whether generalization to the case of discourse with other structures, or with no particular structure, is possible is unclear, as it may not be possible to determine so nicely what the knowledge associated with any new focus is. (See however my remarks in Section 2.2 above on the relationship between Grosz's work and that of Kantor, and Section 6 on approaches which attempt to exploit local discourse structure.) In addition, Grosz's mechanisms are limited in their ability to resolve anaphora that require inference or are intersentential (or both).
The assumption that global focus of attention equals all and only possible referents (except where the focus shifts), while perhaps not unreasonable in task-oriented domains, is probably untrue in general. For example, it is unclear that such mechanisms could handle the effects of local as opposed to global theme that exclude the table from the focus for almost all speakers in (2-3). Similarly, could the level of world knowledge and inference required to resolve the different referents of she in (3-3) and (3-4) be integrated into the partitioned semantic net formalism?
Could entities evoked by, but not explicit in, a text of only moderate structure be identified and instantiated in focus? Grosz did not address these issues (nor did she need to for her immediate goals), but they would need to be resolved in any attempt to generalize her approach.
(Some other related problems, including those of focus shifting, are discussed in Grosz 1978.) Grosz's contribution was to demonstrate the role of discourse structure in the identification of theme, relevant world knowledge and the resolution of reference. We now turn to another system which aspires to similar goals, but in a more general context.
Focus in the PAL system
The PAL personal assistant program (Bullwinkle 1977a) is a system designed to accept natural language requests for scheduling activities. A typical request (from Bullwinkle 1977b:44) is: (4-1) I want to schedule a meeting with Ira. It should be at 3 pm tomorrow. We can meet in Bruce's office.
The section of PAL that deals with discourse pragmatics and reference was developed by Candace Sidner [Bullwinkle] (Bullwinkle 1977b;Sidner 1978a). Like Grosz's system, PAL attempts to find a focus of attention in its knowledge structures to use as a focus for reference resolution.
Sidner sees the focus as equivalent to the discourse topic; in fact in Bullwinkle (1977b) the word topic is used instead of focus.
There are three major differences from Grosz's system: 1. PAL does not rely heavily on discourse structures.
2. Knowledge is represented in frames.
3. Focus selection and shifting are handled at a more superficial level.
I will discuss each difference in turn.
PAL's approach to discourse
Because a request to PAL need not have the rigid structure of one of Grosz's task-oriented dialogues, PAL does not use discourse structure to the same extent, instead relying on more general local cues. However, as we shall see below, in focus selection and shifting, Sidner was forced to use ad hoc rules based on observations of typical requests to PAL.
The frame as focus
The representation of knowledge in PAL is based on frames, and its implementation uses the FRL frame representation language (actually a dialect of LISP) developed by Goldstein (1977a, 1977b).
In PAL, the frame corresponds to Grosz's focus space. Following Rosenberg's (1976, 1977) work on discourse structure and frames, the antecedent for a definite NP is first assumed to be either the frame itself, or one of its slots. So, for example, in (4-2): (4-2) I want to have a meeting with Ross (1). It should be at three pm. The location will be the department lounge. Please tell Ross (2). it refers to the MEETING frame (not to the text a meeting) which provides the context for the whole discourse; the location refers to the LOCATION slot that the MEETING frame presumably has (thus the CLOSELY ASSOCIATED WITH relation (Hirst 1981) is handled), and Ross (2) to the contents9 of the CO-MEETER slot, previously given as Ross.
If the antecedent cannot be found in the frame, it is assumed to be either outside the discourse or inferred. In (4-2), PAL would search its database to find referents for Ross (1) and the department lounge. Personal names are resolved with a special module that knows about the semantics of names (Bullwinkle 1977b:48).
PAL carries out database searches for references like the department lounge apparently by searching a hierarchy of frames, looking at the frames in the slots of the current focus, and then in the slots of these frames, and so on (Sidner 1978a:211), though it is not apparent why this should usefully constrain the search in the above example. 10 9 Sidner only speaks of reference to slots (1978a:211), without saying whether she means the slot itself or its contents; it seems reasonable to assume, as I have done here, that she actually means both.
10 In fact there is no need in this particular example for a referent at all. The personal assistant need only treat the department lounge as a piece of text, presumably meaningful to both the speaker and Ross, denoting the meeting location. A human might do this when passing on a message he or she didn't understand: (i) Ross asked me to tell you to meet him in the arboretum, whatever the beck that is. On the other hand, an explicit antecedent would be needed if PAL had been asked, say, to deliver coffee to the meeting in the department lounge. Knowing when to be satisfied with ignorance is a difficult problem which Sidner does not consider, preferring the safe course of always requiring an antecedent.
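A sketch of the frame-as-focus lookup described above, assuming frames are simple objects with a dictionary of slots; the matching predicates and the bounded walk through the frame hierarchy are illustrative assumptions, not FRL or PAL code.

def find_in_focus_frame(description, frame):
    # Antecedent search within the focus frame: the frame itself, then its
    # slots (either the slot or its filler can serve as the antecedent).
    if description.matches(frame):
        return frame
    for slot_name, filler in frame.slots.items():
        if description.matches(filler):
            return filler
        if description.matches_slot(slot_name):
            return slot_name
    return None

def find_in_frame_hierarchy(description, focus_frame, max_depth=3):
    # If the focus frame fails, look at frames reachable through its slots,
    # then their slots, and so on (a bounded breadth-first walk).
    frontier = [focus_frame]
    for _ in range(max_depth):
        next_frontier = []
        for frame in frontier:
            found = find_in_focus_frame(description, frame)
            if found is not None:
                return found
            next_frontier.extend(f for f in frame.slots.values() if hasattr(f, "slots"))
        frontier = next_frontier
    return None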
Focus selection
In PAL, the initial focus is the first NP following the main verb of the first sentence of the discourse --usually, the object of the sentence --or, if there is no such NP, then the subject of that sentence. This is a short-cut method, which seems to be sufficient for requests to PAL, but which Sidner readily admits is inadequate for the general case (Sidner 1978a:209). I will briefly review some of the problems. Charniak (1978) has shown that the frame-selection problem (which is here identical to the initial focus selection problem, since the focus is just the frame representing the theme of the discourse) is in fact extremely difficult, and is not in the most general case amenable to solution by either strictly top-down or bottom-up methods.
Sidner's assumption that the relevant frame is given by an explicitly mentioned NP is also a source of trouble, even in the examples she quotes, such as these two (Sidner 1978b:92): (4-3) I was driving along the freeway the other day. Suddenly the engine began to make a funny noise.
(4-4) I went to a new restaurant with Sam. The waitress was nasty. The food was great.
(Underlining indicates what Sidner claims is the focus.) In (4-3), Sidner posits a chain of inferences to get from the engine to the focus, the FREEWAY frame.
This is more complex than is necessary; if the frame/focus were DRIVING (with its LOCATION slot containing the FREEWAY frame), then the path from the frame to the engine is shorter and the whole arrangement seems more natural. Thus we see that focus need not be based on an NP at all.
In (4-4), our problem is what to do with Sam, who could be referenced in a subsequent sentence.
It is necessary to integrate Sam into the RESTAURANT frame/focus, since clearly he should not be considered external to the discourse and sought in the database. While the RESTAURANT frame may indeed contain a COMPANION slot for Sam to sit in, it is clear that the first sentence could have been I went <anywhere at all> with Sam, requiring that any frame referring to something occupying a location must have a COMPANION slot. This is clearly undesirable.
But the RESTAURANT frame is involved in (4-4); otherwise the waitress and the food would be external to the discourse. A natural solution is that the frame/focus of (4-4) is actually the GOING-SOMEWHERE frame (with Sam in its COMPANION slot), containing the RESTAURANT frame in its PLACE slot, with both frames together taken as the focus. Sidner does not consider mechanisms for a multi-frame focus.
It is, of course, not always true that the frame/focus is explicit. Charniak (1978) points out that (4-5) is somehow sufficient to invoke the MAGICIAN frame: (4-5) The woman waved as the man on stage sawed her in half.
(See also Charniak (1981) for more on frame invocation problems.) Focus shifting in PAL is restricted: the only shifts permitted are to and from sub-aspects of the present focus (Sidner 1978a:209). Old topics are stacked for possible later return. This is very similar to Grosz's open-focus hierarchy. It is unclear whether there is a predictive aspect to PAL's focus-shift mechanism, 11 but the basic idea seems to be that any new phrase in a sentence is picked as a potential new focus. If in a subsequent sentence an anaphoric reference is a semantically acceptable coreferent for that potential focus, then a shift to that focus is ipso facto indicated (Sidner 1978a:209).
Presumably this check is done after a check of focus has failed, but before any database search. A potential focus has a limited life span, and is dropped if not shifted to by the end of the second sentence following the one in which it occurred.
An example (Sidner 1978a:209): (4-6) I want to schedule a meeting with George, Jim, Steve and Mike. We can meet in my office. It's kind of small, but the meeting won't last long anyway.
(4-7) I want to schedule a meeting with George, Jim, Steve and Mike. We can meet in my office. It won't take more than 20 minutes.
In the second sentence my office is identified as a potential focus, and it, in the first reading of the third sentence, as an acceptable coreferent to my office confirms the shift. In the second reading, it couldn't be my office, so no shift occurs. The acceptability decision is based on selectional and case-like restrictions.
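Roughly, the shift machinery amounts to the following, with the two-sentence lifetime and the coreference check made explicit; the data layout and the acceptability test are placeholders for PAL's selectional and case-like restrictions, not Sidner's code.

class PotentialFocus:
    def __init__(self, phrase, sentence_index):
        self.phrase = phrase
        self.born = sentence_index   # sentence in which the phrase occurred

def update_focus(focus, potentials, anaphors, sentence_index, acceptable):
    # Confirm a shift if some anaphor in the new sentence is an acceptable
    # coreferent of a potential focus; otherwise expire stale potentials.
    for anaphor in anaphors:
        for p in potentials:
            if acceptable(anaphor, p.phrase):   # selectional / case-like checks
                return p.phrase, []             # shift confirmed
    live = [p for p in potentials if sentence_index - p.born <= 2]
    return focus, live

Note that this sketch simply takes the first acceptable potential focus it finds, which is exactly the kind of arbitrariness discussed below.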
While perhaps adequate for PAL, this mechanism is, of course, not sufficient for the general case, where a true shift, as opposed to an expansion upon a previously mentioned point, may occur. This is exemplified by many of the shifts in Grosz's task-oriented dialogues.

11 On page 209 of Sidner (1978a) we are told: "Focus shifts cannot be predicted; they are detectable only after they occur". Yet on the following page, Sidner says: "Sentences appearing in mid-discourse are assumed to be about the focus until the coreference module predicts a focus shift .... Once an implicit focus relation is established, the module can go onto [sic] predictions of focus shift". My interpretation of these remarks is that one cannot be certain that the next sentence will shift focus, but one can note when a shift might happen, requiring later checking to confirm or disconfirm the shift.
Another problem arising from this shift mechanism is that two different focus shifts may be indicated at the same time, but the mechanism has no way to choose between them. For example: (4-8) Schedule a meeting of the Experimental Theology Research Group, and tell Ross Andrews about it too. I'd like him to hear about the deocommunication work that they're doing.
Each of the two underlined NPs in the first sentence would be picked as a potential focus. Since each is pronominally referenced in the second sentence, the mechanism would be confused as to where to shift the focus. (Presumably Ross Andrews would be the correct choice here.)
Conclusions
The shortcomings of Sidner's work are mainly attributable to two causes: her avoidance of relying on the highly constrained discourse structures that Grosz used, and the limited connectivity of frame systems, compared to Grosz's semantic nets.12 With respect to the former point, perhaps Sidner's main contribution has been to show the difficulties and pitfalls that lie in wait for anyone attempting to generalize Grosz's work, even to the extent that PAL does.
Webber's formalism
In the preceding sections of this paper, we saw approaches to anaphor resolution that were mainly top-down in that they relied on a notion of theme and/or focus of attention to guide the selection of focus (although theme determination may have been bottom-up).
An alternative approach has been suggested by Bonnie [Nash-]Webber (Nash-Webber and Reiter 1977;Webber 1978a, 1978b), wherein a set of rules is applied to a logical-form representation of the text to derive the set of entities that that text makes available for subsequent reference. Webber's formalism attacks some problems caused by quantification that have not otherwise been considered by workers in NLU. I can only give the flavor of Webber's formalism here, and I shall have to assume some familiarity with logical forms. Readers who want more details should see her thesis (1978a); readers who find my exposition mystifying should not worry unduly --the fault is probably mine --but should turn to the thesis for illumination.

12 In her thesis (1979) [which was not available to me when this paper was first written], Sidner subsequently proposed the use of an association network instead of frames, and presented more sophisticated focus selection and shifting algorithms. I have emphasized her earlier work here, as it has received much wider circulation.
In Webber's formalism, it is assumed that an input sentence is first converted to a parse tree, and then, by some semantic interpretation process, to an extended restricted-quantification predicate calculus representation. It is during this second conversion that anaphor resolution takes place. When the final representation, which we shall simply call a logical form, is complete, certain rules are applied to it to generate the set of referable entities and descriptions that the sentence evokes. Webber considers three types of antecedents: those for definite pronouns, those for one-anaphora,13 and those for verb phrase ellipsis. Each type has its own set of rules; we will briefly look at the first.
(The others are discussed in Sections 5.4.2 and 5.4.3 of Hirst 1981.)
Definite pronouns
The antecedents for definite pronouns are invoking descriptions (IDs); these are in effect focus elements that are explicit in the text. IDs are derived from the logical form representation of a sentence by a set of rules that attempt to take into account factors, such as NP definiteness or references to sets, that affect what antecedents are evoked by a text. There are six of these ID-rules;14 which one applies depends on the structural description of the logical form.
Here is one of Webber's examples (1978a:64): (5-1) Wendy bought a crayon. This has this representation: (5-2) (∃x: Crayon) . Bought Wendy,x Now, one of the ID-rules says that any sentence S whose representation is of this form: (5-3) (∃x: C) . Fx where C is an arbitrary predicate on individuals and Fx an arbitrary open sentence in which x is free, evokes an entity whose representation is of this form:

13 One-anaphors are those such as those, one, and some uses of it that refer to a description rather than a specific entity. An example: (i) Wendy didn't give either boy a green tie-dyed T-shirt, but she gave Sue a red one.
14 Webber regards her rules only as a preliminary step towards a complete set that considers all relevant factors. She discusses some of the remaining problems, such as negation, in Webber (1978a:81-88).
(5-4) ej ix: Cx & Fx & evoke S,x where ej is an arbitrary label assigned to the entity and is the definite operator. Hence, starting at the left of (5-2), we obtain this representation for the crayon of (5-1): (5-5) e 1 ,x: Crayon x & Bought Wendy,x & evoke (5-1),x which may be interpreted as e I is the crayon mentioned in sentence (5-1) that Wendy bought. Similarly we will obtain a representation of e 2, Wendy, which is then substituted for Wendy in (5-5) after some matching process has determined the identity of the two.
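To make the mechanics of this ID-rule concrete, the following is a minimal, hypothetical sketch (not Webber's own implementation; the names ExistentialForm and make_invoking_description are invented for this illustration) of how rule (5-3)/(5-4) could be applied to the logical form of (5-1):

```python
# Minimal illustrative sketch of the ID-rule (5-3)/(5-4): a sentence whose
# representation is (∃x: Cx) . Fx evokes an entity described by
# "the x such that Cx & Fx & evoke(S, x)".  All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ExistentialForm:
    var: str          # the restricted variable, e.g. "x"
    restriction: str  # the class predicate C, e.g. "Crayon"
    body: str         # the open sentence Fx, e.g. "Bought(Wendy, x)"

def make_invoking_description(sentence_id: str, form: ExistentialForm, label: str) -> str:
    """Apply the ID-rule: build the invoking description evoked by the sentence."""
    x = form.var
    return (f"{label} = iota {x}: {form.restriction}({x}) "
            f"& {form.body} & evoke({sentence_id}, {x})")

# Example (5-1)/(5-2): "Wendy bought a crayon."
lf = ExistentialForm(var="x", restriction="Crayon", body="Bought(Wendy, x)")
print(make_invoking_description("(5-1)", lf, "e1"))
# -> e1 = iota x: Crayon(x) & Bought(Wendy, x) & evoke((5-1), x)
```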
In this next, more complex example (Webber 1978a:73), we see how quantification is handled. (For any one-place predicate P, maxset(P)y is true if and only if y is the set of all items u such that Pu holds.) Another rule has already given us:

(5-9) e1 ιx: maxset(Boy) x "the set of all boys"
      e2 ιx: maxset(Girl) x "the set of all girls"

and so (5-8) is instantiated as:

(5-10) e3 ιz: maxset(λ(u:Peach) [(∀x ∈ e1) (∃y ∈ e2) . Gave x,y,u & evoke (5-6),y]) z
       "the set of peaches, each one of which is linked to (5-6) by virtue of some member of e1 giving it to some member of e2"

Although such rules could (in principle) be used to generate all IDs (explicit focus elements) that a sentence evokes, Webber does not commit herself to such an approach, instead allowing for the possibility of generating IDs only when they are needed, depending on subsequent information such as speaker's perspective. She also suggests the possibility of "vague, temporary" IDs for interim use (1978a:67).
There is a problem here with intrasentential anaphora, since it is assumed that a sentence's anaphors are resolved before ID rules are applied to find what may be the antecedents necessary for that resolution. Webber proposes that known syntactic and selectional constraints may help in this conflict, but this is not always sufficient. For example:

(5-11) Mary bought each girl a cotton T-shirt, but none of them were the style de rigueur in high schools.
The IDs for both the set of girls and the set of T-shirts are needed to resolve them, but them needs to be resolved before the IDs are generated. In this particular example, the clear solution is to work a clause at a time rather than at a sentence level. However, this is not always an adequate solution, as (5-12) shows: (5-12) The rebel students annoyed the teachers greatly, and by the end of the week none of the faculty were willing to go to their classes.
In this ambiguous sentence, one possible antecedent for their, the faculty, occurs in the same clause as the anaphor. Thus neither strictly intraclausal nor strictly interclausal methods are appropriate. Webber is aware of this problem (1978a:48), and believes that it suffices that such information as is available be used to rule out impossible choices; the use of vague temporary IDs then allows the anaphor to be resolved.
Conclusions
It remains to discuss the strengths and weaknesses of Webber's approach, and she herself (in contradistinction to some other workers) is as quick to point out the latter as the former. The reader is therefore referred to her thesis (1978a) for this. However, I will make some global comments on the important aspects relevant here.
Webber's main contributions, as I see them, are as follows: 1. The anaphor resolution problem is approached from the point of view of determining what an adequate representation would be, rather than trying to fit (to straitjacket?) a resolution mechanism into some pre-existing and perhaps arbitrarily chosen representation; and the criteria of adequacy for the representation are rigorously enumerated.
2. A formalism in which it is possible to compute focus elements as they are needed, rather than having them sitting round in advance (as in Grosz's system), perhaps never to be used, is provided (but compare my further remarks below).
3. Webber brings to NLU anaphora research the formality and rigor of logic, something that has been previously almost unseen.
4. Previously ignored problems of quantification are dealt with.
5. The formalism itself is an important contribution.
The shortcomings, as I see them, are as follows: 1. The formalism relies very much on antecedents being in the text. Entities evoked by, but not explicit in, the text cannot in general be adequately handled (in contrast to Grosz's system).
2. The formalism is not related to discourse structure. So, for example, it contains nothing to discourage the use of the table as the antecedent in (2-3). It remains to be seen if discourse pragmatics can be adequately integrated with the formalism or otherwise accounted for in a system using the formalism.
3. Intrasentential and intraclausal anaphora are not adequately dealt with.
4. Webber does not relate her discussions of representational adequacy to currently popular knowledge representations. If frames, for example, are truly inadequate we would like to have some watertight proof of this before abandoning current NLU projects attempting to use frames.
It will be noticed that contribution 2 and shortcoming 1 are actually two sides of the same coin --it is static pre-available knowledge that allows non-textual entities to be easily found --and clearly a synthesis will be necessary here.
Discourse-cohesion approaches to anaphora resolution
Another approach to coreference resolution attempts to exploit local discourse cohesion, building a representation of the discourse with which references can be resolved. This approach has been taken by (inter alia) Klappholz and Lockman (1977; Lockman 1978). By using only cues to the discourse structure at the sentence level or lower, one avoids the need to search for referents in pre-determined dialogue models such as those of Grosz's task-oriented dialogues, or rigidly predefined knowledge structures such as scripts (Schank and Abelson 1977) and frames (Minsky 1975), which Klappholz and Lockman, for example, call overweight structures that inflexibly dominate processing of text. Klappholz and Lockman emphasize that the structure through which reference is resolved must be dynamically built up as the text is processed; frames or scripts could assist in this building, but cannot, however, be reliably used for reference resolution, because deviations by the text from the pre-defined structure will cause errors.
The basis of this approach is that there is a strong interrelationship between coreference and the cohesive ties in a discourse that make it coherent. By determining what the cohesive ties in a discourse are, one can put each new sentence or clause, as it comes in, into the appropriate place in a growing structure that represents the discourse. This structure can then be used as a focus to search for coreference antecedents, since not only do coherently connected sentences tend to refer to the same things, but knowledge of the cohesion relation can provide additional reference resolution restraints. Hobbs (1979) in particular sees the problem of coreference resolution as being automatically solved in the process of discovering the coherence relations in a text. (An example of this will be given in Section 6.2.) Conversely, it is frequently helpful or necessary to resolve coreference relations in order to discover the coherence relations. This is not a vicious circle, claims Hobbs, but a spiral staircase.
In our discussion below, we will cover four issues: 1. deciding on a set of possible coherence relations; 2. detecting them when they occur in a text; 3. using the coherence relations to build a focus structure; and 4. searching for referents in the structure.
Coherence relations
The first thing required by this approach is a complete and computable set of the coherence relations that may obtain between sentences and/or clauses. Various sets have been suggested by many people, including Eisenstadt (1976), Phillips (1977), Pitkin (1977a, 1977b), Hirst (1977b, 1978), Lockman (1978), and Reichman (1978). 15 None of these sets fulfill all desiderata; while Halliday and Hasan (1976) provide an extensive analysis of cohesion, it does not fit within our computational framework of coherence relations, and those, such as Hobbs, Lockman, Eisenstadt and Hirst, who emphasize computability, provide sets insufficient, I believe, to capture all the semantic subtleties of discourse cohesion. Nevertheless, the works cited above undoubtedly serve as a useful starting point for development of this area.
To illustrate what a very preliminary set of cohesion relations could look like, I will briefly present a set abstracted from the various sets of Eisenstadt, Hirst, Hobbs, Lockman and Phillips (but not faithful to any one of these).
The set contains two basic classes of coherence relations: expansion or elaboration on an entity, concept or event in the discourse, and temporal continuation or time flow. Expansion includes relations like EFFECT, CAUSE, SYLLOGISM, ELABORATION, CONTRAST, PARALLEL and EXEMPLIFICATION. In the following examples, "•" is used to indicate the point where the cohesive tie illustrated is acting. (One may disagree with my classification of some of the relations above; the boundaries between categories are yet ill-defined, and it is to be expected that some people's intuitions will differ from mine.) Temporal flow relations involve some continuation forwards or backwards over time:

(6-8) VICTORIA --A suntanned Prince Charles arrived here Sunday afternoon, • and was greeted with a big kiss by a pretty English au pair girl. 17

(6-9) SAN JUAN, Puerto Rico --Travel officials tackled a major job here Sunday to find new accommodations for 650 passengers from the burned Italian cruise liner Angelina Lauro.
• The vessel caught fire Friday while docked at Charlotte Amalie in the Virgin Islands, but most passengers were ashore at the time. 18

Temporal flow may be treated as a single relation, as Phillips, for example, does, or it may be subdivided, as by Eisenstadt and Hirst, into categories like TIME STEP, FLASHBACK, FLASHFORWARD, TIME EDIT, and so on. Certainly, time flow in a text may be quite contorted, as in (6-10) (from Hirst 1978); "•" indicates a point where the direction of the time flow changes:

(6-10) Slowly, hesitantly, Ross approached Nadia. • He had waited for this moment for many days. • Now he was going to say the words • which he had agonized over • and in the very room • he had often dreamed about. • He gazed lovingly at her soft green eyes.
It is not clear, however, to what extent an analysis of time flow is necessary for anaphor resolution. I suspect that relatively little is necessary --less than is required for other aspects of discourse understanding.
I see relations like those exemplified above as primitives from which more complex relations could be built. For example, the relation between the two sentences of (6-3) above clearly involves FORWARD TIME STEP as well as EFFECT. I have hypothesized elsewhere (Hirst 1978) the possibility of constructing a small set of discourse relations (with cardinality about twenty or less) from which more complex relations may be built up by simple combination, and, one hopes, in such a way that the effects of relation R1+R2 would be the sum of the individual effects of relations R1 and R2. Rules for permitted combinations would be needed; for example, FORWARD TIME STEP could combine with EFFECT, but not with BACKWARD TIME STEP. What would the formal definition of a coherence relation be like? Here is Hobbs's (1979:73) definition of ELABORATION: Sentence S1 is an ELABORATION of sentence S0 if some proposition P follows from the assertions of both S0 and S1, but S1 contains a property of one of the elements of P that is not in S0. The example in the next section will clarify this.
An example of anaphor resolution using a coherence relation
It is appropriate at this stage to give an example of the use of coherence relations in the resolution of anaphors. I will present an outline of one of Hobbs's; for the fine details I have omitted, see Hobbs (1979:78-80). The text is this: (6-11) John can open Bill's safe. He knows the combination.
We want an NLU system to recognize the cohesion relation operating here, namely ELABORATION, and identify he as John and the combination as that of Bill's safe. We assume that in the world knowledge that the system has are various axioms and rules of inference dealing with such matters as what combinations of safes are and knowledge about doing things. Then, from the first sentence of (6-11), which we represent as (6-12):

(6-12) can (John, open (Bill's-safe))

(we omit the details of the representation of Bill's safe), we can infer:

(6-13) know (John, cause (do (John, a), open (Bill's-safe)))
       "John knows that he can perform an action a that will cause Bill's-safe to be open"

From the second sentence of (6-11), namely:

(6-14) know (he, combination (comb, y))
       "someone, he, knows the combination comb to something, y"

we can infer, using knowledge about combinations:

(6-15) know (he, cause (dial (comb,y), open (y)))
       "he knows that by causing the dialing of comb on y, the state in which y is open will be brought about"

Recognizing that (6-13) and (6-15) are nearly identical, and assuming that some coherence relation does hold, we can identify he with John, y with Bill's-safe, and the definition of the ELABORATION relation is satisfied. In the process, the required referents were found.
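The final step --binding the unresolved anaphors of (6-15) to the constants of (6-13) --can be illustrated with a small, hypothetical sketch. It is not Hobbs's system: the tuple encoding, the helper name match, and the way differing predicates are glossed over (standing in for the world knowledge that identifies the unspecified action a with the dialing) are all assumptions made purely for this example.

```python
# Toy structural matcher: bind the variables of the inferred proposition (6-15)
# to the constants of the previously derived one (6-13).  Hypothetical encoding.

# (6-13): John knows that some action a he can do will cause Bill's safe to be open.
fact = ("know", "John", ("cause", ("do", "John", "a"), ("open", "Bills-safe")))

# (6-15): "he" knows that dialing comb on y will cause y to be open;
# 'he' and 'y' are the unresolved anaphors, treated as variables below.
hypothesis = ("know", "he", ("cause", ("dial", "comb", "y"), ("open", "y")))

def match(pattern, fact, variables, bindings):
    """Align `pattern` with `fact`, binding any symbols listed in `variables`."""
    if isinstance(pattern, str):
        if pattern in variables:
            if bindings.get(pattern, fact) != fact:
                return None                    # clashing binding
            bindings[pattern] = fact
            return bindings
        return bindings                        # non-variable atom: accept
    if not isinstance(fact, tuple) or pattern[0] != fact[0]:
        # Differing predicates (dial(...) vs. the unspecified action do(John, a)):
        # real world knowledge would identify these; this toy matcher skips inside.
        return bindings
    for p, f in zip(pattern, fact):
        bindings = match(p, f, variables, bindings)
        if bindings is None:
            return None
    return bindings

print(match(hypothesis, fact, variables={"he", "y"}, bindings={}))
# -> {'he': 'John', 'y': 'Bills-safe'}
```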
Lockman's contextual reference resolution algorithm
Given a set of discourse cohesion relations, how may their use in a text be computationally recognized and employed to build a structure that represents the discourse and can be used as a focus for reference resolution? Only Klappholz and Lockman (1977; Lockman 1978) seem to have considered these aspects of the problem, though Eisenstadt (1976) discusses some of the requirements in world knowledge and inference that would be required. In this section we look at Lockman's work.
Lockman does not separate the three processes of recognizing cohesion, resolving references and building the representation of the discourse. Rather, as befits such interrelated processes, all three are carried out at the same time. His contextual reference resolution algorithm (CRRA) works as follows: The structure to be built is a tree, initially null, of which each node is a sentence and each edge a coherence relation. As each new sentence comes in, the CRRA tries to find the right node of the tree to attach it to, starting at the leaf that is the previous sentence and working back up the tree in a specified search order (discussed below) until a connection is indicated. Lockman assumes the existence of a judgment mechanism that generates and tests hypotheses as to how the new sentence may be feasibly connected to the node being tested.
The first hypothesis whose likelihood exceeds a certain threshold is chosen.
The hypotheses consider both the coherence and the coreference relations that may obtain. Each member of the set of coherence relations is hypothesized, and for each one, all possible coreference relations between the conceptual tokens of the new sentence and tokens in the node under consideration (or nearby it in the tree) are posited. (The search for tokens goes back as far as necessary in the tree until suitable tokens are found for all unfulfilled definite noun phrases.) The hypotheses are considered in parallel; if none are judged sufficiently likely, the next node or set of nodes will be considered for feasible connection to the current sentence.
The search order is as follows: First the immediate context, the previous sentence, is tried. If no feasible connection is found, then the immediate ancestor of this node, and all its other descendants, are tried in parallel. If the algorithm is still unsuccessful, the immediate ancestor of the immediate ancestor, and the descendants thereof, are tried, and so on up the tree. If a test of several nodes in parallel yields more than one acceptable node, the one nearest the immediate context is chosen.
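A schematic sketch of this search order follows. It is only an illustration of the order in which candidate nodes are examined: the names (Node, candidate_groups, attach, feasible) are invented, the judgment mechanism is treated as an opaque predicate, and the tie-breaking rule ("nearest the immediate context") is simplified.

```python
# Schematic sketch of the CRRA search order: try the previous sentence first,
# then each successive ancestor together with its other descendants.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Node:
    sentence: str
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

def descendants(node: Node):
    for child in node.children:
        yield child
        yield from descendants(child)

def candidate_groups(immediate_context: Node):
    """Yield groups of nodes in CRRA search order."""
    yield [immediate_context]
    node = immediate_context
    while node.parent is not None:
        ancestor = node.parent
        yield [ancestor] + [d for d in descendants(ancestor) if d is not node]
        node = ancestor

def attach(new_sentence: str, immediate_context: Node, feasible) -> Optional[Node]:
    """Attach the new sentence at the first node for which the (black-box)
    judgment mechanism finds a sufficiently likely connection."""
    for group in candidate_groups(immediate_context):
        acceptable = [n for n in group if feasible(new_sentence, n)]
        if acceptable:
            chosen = acceptable[0]   # tie-breaking by closeness is simplified here
            chosen.children.append(Node(new_sentence, parent=chosen))
            return chosen
    return None
```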
If the current sentence is not a simple sentence, it is not broken into clauses dealt with individually, but rather converted to a small sub-tree, reflecting the semantic relationship between the clauses. The conversion is based simply upon a table look-up indexed on the structure of the parse tree of the sentence. One of the nodes is designated by the table look-up as the head node, and the sub-tree is attached to the pre-existing context tree, using the procedure described above, with the connection occurring at this node. Similarly one (or more) of the nodes is designated as the immediate context, the starting point for the next search.
(The search will be conducted in parallel if there is more than one immediate context node.) There are some possible problems with Lockman's approach. The first lies in the fact that the structure built grows without limit, and therefore a search in it could, in theory, run right through an enormous tree. Normally, of course, a feasible connection or desired referent will be found fairly quickly, close to the immediate context.
However, should the judgment mechanism fail to spot the correct one, the algorithm may run a little wild, searching large areas of the structure needlessly and expensively, possibly lighting on a wrong referent or wrong node for attachment, with no indication that an error has occurred. In other words, Lockman's CRRA places much greater trust in the judgment mechanism than a system like Grosz's that constrains the referent search area --more trust than perhaps should be put in what will necessarily be the most tentative and unreliable part of the system. Secondly, I am worried about the syntax-based table look-up for sub-trees for complex sentences. On the one hand, it would be nice if it were correct, simplifying processing. On the other hand, I cannot but feel that it is an over-simplification, and that effects of discourse theme cannot reliably be handled in this way. However, I have no counterexamples to give, and suggest that this question needs more investigation.
The third possible problem, and perhaps the most serious, concerns the order in which the search for a feasible connection takes place.
Because the first hypothesis whose likelihood exceeds the threshold is selected, it is possible to miss an even better hypothesis further up the tree. In theory, this could be avoided by doing all tests in parallel, the winning hypothesis being judged on both likelihood and closeness to the immediate context.
In practice, given the evergrowing context tree as discussed above, this would not be feasible, and some way to limit the search area would be needed.
The fourth problem lies in the judgment mechanism itself. Lockman frankly admits that the mechanism, incorporated as a black box in his algorithm, must have abilities far beyond those of present state-of-the-art inference and judgment systems. The problem is that it is unwise to predicate too much on the nature of this unbuilt black box, as we do not know yet if its input-output behavior could be as Lockman posits. It may well be that to perform as required, the mechanism will need access to information such as the sentence following the current one (in effect, the ability to delay a decision), or more information about the previous context than the CRRA retains or ever determines; in fact, it may need an entirely different discourse structure representation from the tree being built. In other words, while it is fine in theory to design a reference resolver around a black box, in practice it may be computationally more economical to design the reference resolver around a knowledge of how the black box actually works, exploiting that mechanism, rather than straitjacketing the judgment module into its pre-defined cabinet; thus Lockman's work may be premature.
None of these problems are insurmountable. However it is perhaps a little unfortunate that Lockman's work offers little of immediate use for NLU systems of the present day.
Conclusions
Clearly, much work remains to be done if the coherence/cohesion paradigm of NLU is to be viable. Almost all aspects need refinement. However, it is an intuitively appealing paradigm, and it will be interesting to see if it can be developed into functioning NLU systems.
Epilogue
Each approach examined offers a different insight into some aspect or aspects of the use of discourse structure to resolve anaphora. So far there has been no attempt to integrate these insights into a single cohesive system or model; indeed this will be an extremely difficult task. It should, however, be a most fruitful one, and is the logical next step in computational anaphora resolution. | 2014-07-01T00:00:00.000Z | 1981-04-01T00:00:00.000 | {
"year": 1981,
"sha1": "fc49711dd6f46de428c785a540c1c11d43c7caa9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "fc49711dd6f46de428c785a540c1c11d43c7caa9",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
231980644 | pes2o/s2orc | v3-fos-license | Inside the Thrombus: Association of Hemostatic Parameters With Outcomes in Large Vessel Stroke Patients
Background: Actual clinical management of ischemic stroke (IS) is based on restoring cerebral blood flow using tissue plasminogen activator (tPA) and/or endovascular treatment (EVT). Mechanical thrombectomy has permitted the analysis of thrombus structural and cellular classic components. Nevertheless, histological assessment of hemostatic parameters such as thrombin-activatable fibrinolysis inhibitor (TAFI) and matrix metalloproteinase 10 (MMP-10) remains unknown, although their presence could determine thrombus stability and its response to thrombolytic treatment, improving patient's outcome. Methods: We collected thrombi (n = 45) from large vessel occlusion (LVO) stroke patients (n = 53) and performed a histological analysis of different hemostatic parameters [TAFI, MMP-10, von Willebrand factor (VWF), and fibrin] and cellular components (erythrocytes, leukocytes, macrophages, lymphocytes, and platelets). Additionally, we evaluated the association of these parameters with plasma levels of MMP-10, TAFI and VWF activity and recorded clinical variables. Results: In this study, we report for the first time the presence of MMP-10 and TAFI in all thrombi collected from LVO patients. Both proteins were localized in regions of inflammatory cells, surrounded by erythrocyte and platelet-rich areas, and their content was significantly associated (r = 0.41, p < 0.01). Thrombus TAFI was lower in patients who died during the first 3 months after stroke onset [odds ratio (OR) (95%CI); 0.59 (0.36–0.98), p = 0.043]. Likewise, we observed that thrombus MMP-10 was inversely correlated with the amount of VWF (r = −0.30, p < 0.05). Besides, VWF was associated with the presence of leukocytes (r = 0.37, p < 0.05), platelets (r = 0.32, p < 0.05), and 3 months mortality [OR (95%CI); 4.5 (1.2–17.1), p = 0.029]. Finally, plasma levels of TAFI correlated with circulating and thrombus platelets, while plasma MMP-10 was associated with cardiovascular risk factors and functional dependence at 3 months. Conclusions: The present study suggests that the composition and distribution of thrombus hemostatic components might have clinical impact by influencing the response to pharmacological and mechanical therapies as well as guiding the development of new therapeutic strategies.
INTRODUCTION
Stroke is the primary neurovascular disease, being the second leading cause of death and disability worldwide (5.5 million deaths each year and 176.4 million stroke-related disabled people), with almost 14 million new cases around the world every year (1). Stroke severely hampers the normal daily activities of survivors, affecting health and social-care resources (2). Moreover, by 2047, the number of stroke events is expected to increase by almost 40,000 incident strokes and 2.58 million prevalent cases in Europe, in part as a consequence of the aging of the population (3).
Ischemic stroke (IS) accounts for the majority of strokes and is caused by the presence of a thrombus or an embolus in brain vessels. The current goal for the management of IS is based on the restoration of the cerebral blood flow achieved by the use of the thrombolytic drug, tissue plasminogen activator (tPA), and/or endovascular treatment (EVT) to remove thrombi (4). The successful introduction of endovascular thrombectomy procedures within the last decade has allowed thrombus retrieval and its detailed analysis. The study of thrombi is crucial to understand the diagnosis, treatment, and secondary prevention of acute IS and to design safe and efficient thrombolytic strategies to improve recanalization and prognosis of IS patients.
Several studies of IS thrombi have focused on their structural and cellular components (5). Among them, platelets and von Willebrand factor (VWF) are important factors in thrombus formation and have previously been shown to be key components of acute IS thrombo-emboli (6). Erythrocyte dominance in thrombi has been associated with arterial thrombi from a noncardiac source, whereas fibrin/platelet dominance has been described as related to cardiac thrombi (7-9). Leukocytes are often present in thrombi and seem to be more dominant in cardiac thrombi (7, 8). However, when T cells were analyzed separately by CD3+ immunostaining, the number of T cells was significantly higher in atherothrombotic thrombi than in thrombi from patients with cardioembolic stroke or stroke of other causes (10).
In search of new pharmacological alternatives for patients who do not benefit from current therapies, preclinical studies are exploring the potential of new thrombolytic compounds in different models of IS. Specific inhibitors of antifibrinolytic proteins are under development, such as the diabody against plasminogen activator inhibitor-1 (PAI-1) and thrombin-activatable fibrinolysis inhibitor (TAFI) (11). This simultaneous inhibition of TAFI and PAI-1 showed increased profibrinolytic effects without adverse bleeding (12). Moreover, already approved drugs, such as the mucolytic drug N-acetylcysteine, which dissolves the disulfide bonds of large VWF multimers, have been proven to accelerate thrombus dissolution and prevent rethrombosis in rodent models of IS resistant to tPA (13). In line with these results, a disintegrin and metalloproteinase with a thrombospondin type 1 motif member 13 (ADAMTS13), which cleaves VWF, dissolves tPA-resistant thrombi. Consequently, it reduces cerebral infarct size, showing potent thrombolytic activity in experimental models of stroke (14). Finally, matrix metalloproteinases (MMPs) could also play a role in thrombolysis, since the fibrinolytic and MMP systems cooperate in thrombus dissolution by acting on fibrin(ogen) directly or by collaborating with plasmin. Indeed, plasmin is able to cleave and activate several MMPs (MMP-1, MMP-3, and MMP-9) that can take part in the dissolution of the fibrin clot directly or by interacting with other elements of the fibrinolytic system (15, 16). Specifically, our group has shown the fibrinolytic role of MMP-10 by preventing the activation of TAFI (17). We have reported that the administration of MMP-10 is as efficient as tPA in reducing infarct size and demonstrated that a combination of MMP-10 with tPA achieves further reduction in brain damage by blocking tPA-induced neuronal excitotoxicity in IS experimental models (18).
The histological location of TAFI and MMP-10 in stroke thrombi still remains unknown, and their presence could determine thrombus stability and the response to thrombolytic therapy. In this study, we therefore collected thrombi retrieved from large vessel occlusion (LVO) stroke patients and subjected them to histological assessment of different hemostatic parameters with a specific focus on TAFI and MMP-10. Furthermore, we investigated their association with clinical outcomes.
Study Population
A total of 53 serial acute LVO IS patients admitted to the Complejo Hospitalario de Navarra Stroke Unit who underwent EVT between November 2015 and November 2017 were recruited. Adequate and correctly processed histological material was available only from 45 patients. Depending on the degree of fragmentation, it was either collected in one piece or in multiple pieces. All collected material from the same patient was processed together as one. The decision to perform EVT, associated or not with intravenous tPA, was made according to guidelines at the time of patient admission as the standard of care for acute IS (19). Endovascular procedure was performed using a stent-retriever [pRESET (Phenox, Germany); Catch (Balt, France); Tigertriever and Comaneci (Rapid Medical, Israel)] or an aspiration device (Penumbra, Penumbra, USA) according to interventionalist's criteria.
Clinical Information
Demographics (age, sex) and other baseline characteristics of the patients, including previous cardiovascular disease, vascular risk factors, systolic and diastolic blood pressure (SBP and DBP, respectively) at admission, serum glucose, stroke severity assessed by the National Institutes of Health Stroke Scale (NIHSS), previous use of antithrombotic agents (antiplatelet agents and anticoagulants), and treatment with tPA, were recorded. The main vascular risk factors documented were the following: type-2 diabetes mellitus (use of antidiabetic drugs, a casual plasma glucose >200 mg/dl, or fasting blood sugar ≥126 mg/dl or HbA1c ≥6.5%), hypertension (patients taking antihypertensive drugs or with blood pressure >140/90 mmHg on repeated measurements), hypercholesterolemia [patients receiving lipid-lowering agents or with triglycerides ≥200 mg/dl, an overnight fasting cholesterol level ≥240 mg/dl, or low-density lipoprotein (LDL) cholesterol ≥160 mg/dl], and current cigarette smoking. Based on the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification (20), etiological subtypes of ischemic stroke were assessed. C-reactive protein (CRP) and plasma creatinine were measured with autoanalyzers (Architect i2000SR, USA, and Cobas C311, Roche, Germany, respectively).
The Alberta Stroke Program Early CT Score (ASPECTS) was assessed by two independent radiologists on a CT scan obtained at admission for all patients. A second CT scan was performed at 24-48 h in all patients to identify hemorrhagic transformation and evaluate the infarct area. A 1.5 T MRI scan was obtained within 1 week of stroke onset if not contraindicated (when MRI was contraindicated, a delayed CT scan was elective) to confirm IS. Recanalization after thrombectomy was evaluated by angiography during the endovascular procedure using the modified treatment in cerebral infarction (mTICI) score.
Deparaffined and hydrated slides were incubated with citrated antigen retrieval solution (pH 6.10, Dako) at 95 °C for 20 min or with Tris-ethylenediaminetetraacetic acid (EDTA) pH 9 for CD3 immunostaining (Master Diagnostica). Then, endogenous peroxidases were blocked with 5% hydrogen peroxide for 20 min at room temperature (RT) in the dark. Slides were then washed in Tris saline buffer (TBS, pH 7.36, 25 mM Tris). Sections were blocked using normal goat serum (Dako) for 1 h at RT. Sections were then incubated overnight at 4 °C with the primary antibodies. After washing, slides were incubated with the required secondary antibodies using the anti-rabbit or anti-mouse Dako Envision System-HRP (Dako) for 30 min and developed with diaminobenzidine (DAB, Dako), followed by counterstaining with Harris' hematoxylin. Slides were then mounted with distyrene plasticizer and xylene mixture (DPX, VWR Chemicals).
Double immunofluorescence was performed to localize TAFI and MMP-10 with specific cell types in thrombus tissue. Briefly, slides were incubated with a mix of primary antibodies overnight at 4 °C. After washing, slides were incubated with the corresponding secondary antibodies for 30 min, using a goat anti-rabbit Alexa Fluor 488 antibody (Invitrogen) or a biotinylated goat anti-mouse antibody (Dako) that was amplified with the Cy3 NEL 704 kit (PerkinElmer). Finally, slides were mounted with VECTASHIELD® Antifade Mounting Medium with DAPI (Novus Biological). Double immunofluorescence for TAFI and MMP-10 was performed with the rabbit anti-TAFI antibody described above and a monoclonal anti-MMP-10 (MAB9101, R&D Systems).
Immunostained slides were subsequently scanned (Aperio ImageScope, Leica Biosystems, Germany, and Vectra Polaris, PerkinElmer, USA) and quantified with ImageJ software (21). The percentage of positively stained area over the total tissue area is presented as representative of thrombus content for TAFI, MMP-10, VWF, fibrin, RBC, and platelets (CD42b), whereas the positive cell number per square millimeter is given for nucleated cells (leukocytes, lymphocytes, and macrophages).
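As an illustration only (the measurements in this study were made with ImageJ), the two quantities just described could be computed as follows; the array names, the pixel size, and the toy values are hypothetical.

```python
# Hypothetical sketch: percent positively stained area and positive cells per mm^2,
# computed from a boolean stain mask, a boolean tissue mask, a cell count, and the
# pixel size of the scanned image.  Not the pipeline used in the paper.
import numpy as np

def percent_positive_area(positive_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Positively stained area as a percentage of total tissue area."""
    return 100.0 * positive_mask[tissue_mask].sum() / tissue_mask.sum()

def cells_per_mm2(n_positive_cells: int, tissue_mask: np.ndarray,
                  pixel_size_um: float) -> float:
    """Positive nucleated-cell count per square millimetre of tissue."""
    tissue_area_mm2 = tissue_mask.sum() * (pixel_size_um / 1000.0) ** 2
    return n_positive_cells / tissue_area_mm2

# Toy example: a 1000 x 1000 px field scanned at 0.5 um/px
tissue = np.ones((1000, 1000), dtype=bool)
stain = np.zeros_like(tissue)
stain[:100, :] = True                                        # 10% of the field stained
print(percent_positive_area(stain, tissue))                  # 10.0
print(cells_per_mm2(60, tissue, pixel_size_um=0.5))          # 240.0 cells/mm^2
```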
Outcome Measures
Individual scores on the modified Rankin Scale (mRS) at 90 days, established by face-to-face interview with a stroke-specialized neurologist, were the main clinical outcome. Other clinical outcomes included were as follows: (a) 3-month all-cause mortality; (b) 3-month functional independence (FI), categorized as 90-day mRS <3; (c) successful recanalization, defined as mTICI 2b or 3; and (d) hemorrhagic transformation after ischemic stroke according to the European Cooperative Acute Stroke Study III (ECASS III) classification (22), including hemorrhagic infarcts (HI type 1 or 2), parenchymal hematomas (PH type 1 or 2), and remote hematomas or subarachnoid hemorrhages.

Plasma Levels of VWF, MMP-10, and TAFI

Within the following 24 h after admission, venous blood samples were drawn from all patients and centrifuged at 1200 × g for 15 min within 2 h of collection and subsequently stored at −80 °C for further analysis. VWF activity (Innovance VWFAc, Siemens, Spain), MMP-10 levels (R&D Systems, USA), and TAFI activity (TAFIa, STA STACHROM TAFI, Stago, France) were measured with an automated ELISA analyzer TRITURUS (Grifols, Spain) in citrated plasma samples after being thawed on ice and thoroughly vortexed. The detection limit of the assays was 2.2%, 15.1 pg/ml, and 5% for VWFAc, MMP-10, and TAFIa, respectively. All experiments were performed and analyzed in a blinded manner.
Statistical Analysis
Normality of distributions was assessed graphically and with the Shapiro-Wilk test. Non-normally distributed variables were presented as median with interquartile range (IQR), while continuous variables with normal distributions were presented as mean with standard deviation (SD). Logarithmic transformation was applied for continuous variables with skewed distributions. An unpaired t-test or the Wilcoxon rank-sum test was applied to compare continuous variables between groups depending on their distribution. The chi-square test or, in the case of small expected frequencies, Fisher's exact test was performed to compare the distribution of binary categorical variables between groups. Correlation between continuous variables was evaluated by pairwise Spearman correlation test. Association between MMP-10 and TAFI thrombi content was assessed by linear regression analysis. Based on TOAST criteria, stroke subtype classification was assessed, and dichotomized etiological groups were created. Three groups of stroke severity by NIHSS score were categorized [(0-7), (7-14), and (>14)], and analysis of variance and trend analysis were performed.
Selected multivariate binary logistic regression models were performed to evaluate associations between thrombi histological parameters and circulating measurements with clinical outcomes. Results were expressed as odds ratios (ORs) with 95% confidence intervals (95% CIs).
Statistical significance was considered for all analyses if p < 0.05. STATA software (version 16, StataCorp LLC, Texas, USA) was the statistic software for this study.
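The analysis above was run in Stata; the following is only a schematic Python illustration of the kinds of tests described, applied to a hypothetical data frame with invented column names (tafi_thrombus, vwf_thrombus, died_90d, age, sbp) and simulated values.

```python
# Schematic illustration of the statistical workflow described above:
# normality check, group comparison, Spearman correlation, and a multivariate
# logistic regression whose exponentiated coefficients give odds ratios.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tafi_thrombus": rng.lognormal(0.7, 0.6, 45),   # % stained area (simulated)
    "vwf_thrombus": rng.lognormal(2.3, 0.5, 45),
    "died_90d": rng.integers(0, 2, 45),
    "age": rng.normal(72, 10, 45),
    "sbp": rng.normal(145, 20, 45),
})

# Normality, then an unpaired t-test or Wilcoxon rank-sum test as appropriate
_, p_norm = stats.shapiro(df["vwf_thrombus"])
alive = df.loc[df.died_90d == 0, "vwf_thrombus"]
dead = df.loc[df.died_90d == 1, "vwf_thrombus"]
compare = stats.ttest_ind if p_norm > 0.05 else stats.ranksums
print(compare(alive, dead))

# Pairwise Spearman correlation between continuous variables
print(stats.spearmanr(df["tafi_thrombus"], df["vwf_thrombus"]))

# Multivariate logistic regression adjusted for age and SBP
model = smf.logit("died_90d ~ np.log(vwf_thrombus) + age + sbp", data=df).fit(disp=0)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```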
Patients Clinical Characteristics
Fifty-three patients were finally included in the study. Clinical characteristics of the patients are shown in Table 1.
Histological Characteristics of Thrombi
Only 45 IS thrombi properly retrieved after thrombectomy were analyzed. According to the usual description of thrombus microscopic distribution (23, 24), two different patterns are interspersed within the analyzed thrombi (Figure 1A): on the one hand, the RBC-rich areas, composed of packed RBC within a meshwork of fibrin and little or no nucleated cells; on the other hand, the platelet-rich areas, with fibrin staining throughout the platelet region. Of the 45 analyzed thrombi, this pattern could be identified in 23, with a wide heterogeneity in quantity and distribution of those regions. Some thrombi, however, mainly consisted of an RBC-rich core that was surrounded by a platelet-rich matrix (18/45) (Figure 1B). As shown in Figure 2, leukocytes were mainly found at the interface between RBC- and platelet-rich areas but also within platelet-rich zones. Moreover, VWF staining was scattered in platelet-rich areas and throughout fibrin-positive regions.
To assess the relative contribution of each thrombus element, we quantified the stained area of RBC and fibrin (MSB), platelets (CD42b), and VWF. In addition, the numbers of leukocytes (CD45), macrophages (CD68), and T lymphocytes (CD3) were assessed for all thrombi, as shown in Table 3. Interestingly, MMP-10 and TAFI proteins were present in all thrombi [median (IQR): 2.9% (0.15-8.1) for MMP-10 and 2.1% (0.9-3.8) for TAFI], related to leukocyte distribution and primarily found at the interface between RBC- and platelet-rich areas (Table 2 and Figure 2). As shown in Figure 3, MMP-10 colocalized with CD68 and with some CD45- and CD42b-positive cells, while the TAFI signal was observed in some leukocytes and in platelets. Double immunostaining for TAFI and MMP-10 confirmed the colocalization of both proteins in thrombi (Figure 4).
Association of Thrombi Components With Clinical Outcomes
We further analyzed the association of thrombi components with clinical data. We found that pharmacological intervention with tPA was associated with higher thrombus platelet content. None of the analyzed thrombus components were associated with complete recanalization after the endovascular procedure, but the number of patients without recanalization was small (n = 8). Nevertheless, a higher frequency of recanalization after the first pass of the device was associated with reduced macrophage content in thrombi [48.9 macrophages/mm2 (29.0-173.0) for first pass vs. 189.2 macrophages/mm2 (74.3-305.6) for more than one pass, p = 0.04] and remained associated after multivariate analysis adjusted by age and sex [OR (95% CI): 0.44 (0.20-0.97), p < 0.05]. Other studied components (platelets, leukocytes, T lymphocytes, fibrin, RBCs, VWF, TAFI, or MMP-10) were not associated with recanalization or device passes. No significant association between functional independence (FI) 3 months after stroke and thrombus content of any of the studied components was observed (data not shown). Regarding 3-month mortality, patients who died within 3 months had higher VWF staining in thrombi [12.3% (8.9-21.7) vs. 10.6% (4.3-14.6), p < 0.05, Figure 5B]. Multivariate analysis adjusting for confounding factors (age and SBP) showed that thrombus VWF remained statistically significantly associated with mortality [OR (95% CI): 4.5 (1.2-17.1), p = 0.029]. Stroke etiological subtypes according to TOAST criteria and hemorrhagic transformation were also assessed in our cohort, and the associations with thrombi components were evaluated, but no association was found.
Association of Circulating Hemostatic Parameters and Clinical Outcomes
When evaluating circulating levels of VWFAc, MMP-10, and TAFIa, no correlation with their thrombus content was found (Table 4), and only an association between VWFAc and thrombus lymphocytes was observed (r = 0.44, p < 0.01). Higher levels of circulating VWFAc were found in patients treated with tPA. Finally, circulating TAFIa was associated with circulating platelets (r = 0.31, p < 0.05), and a trend between blood and thrombus platelets (r = 0.31, p = 0.061) was also observed (Table 4).
None of the studied circulating parameters (VWFAc, MMP-10, and TAFIa) were associated with mortality after ischemic stroke in our cohort, nor with stroke TOAST subtypes (data not shown).
DISCUSSION
In this study, we demonstrate the presence of MMP-10 and TAFI in all thrombi retrieved from LVO stroke patients at the interface between RBC- and platelet-rich areas, matching leukocytes. Thrombus MMP-10 and TAFI content correlate independently of confounding factors, with local TAFI expression being significantly lower in patients who died within 3 months after stroke onset. Additionally, we show that thrombus MMP-10 inversely correlates with VWF content, which is also associated with 3-month mortality. Interestingly, the presence of platelets in the thrombus is associated with thrombolysis treatment as well as with thrombus VWF. Finally, plasma TAFI activity is associated with blood and thrombus platelets, whereas plasma MMP-10 is related to cardiovascular risk factors and 3-month functional dependence. Taken together, in situ analysis of different hemostatic and proteolytic parameters has prognostic implications in IS patients. These findings will help to understand thrombus stability and the response to IS therapies, leading to the development of individualized treatment strategies based on clot composition, which ultimately will improve patient outcome.

TAFI is a metallocarboxypeptidase activated by thrombin/thrombomodulin and plasmin that removes C-terminal lysine residues from partially degraded fibrin, preventing tPA-mediated plasminogen activation and inhibiting fibrinolysis. Previous reports showed the role of TAFI in the stabilization of newly formed fibrin clots (25). It was proposed that thrombin-induced activation of TAFI renders newly formed fibrin clots more resistant to plasmin degradation (26). In vivo evidence for the role of TAFI in fibrinolysis was obtained in experimental venous and arterial thrombosis models using TAFI inhibitors (27-30). Decreased TAFI activity in rodent models of transient middle cerebral artery occlusion treated with a TAFI inhibitor resulted in signs of lower microvascular thrombosis such as reduced fibrin deposition, regardless of infarct volume (29, 30). However, data from TAFI knockout mice indicated that TAFI deficiency did not have a significant impact on the rate of thrombus formation in arterial and venous thrombosis models (26, 31). Beyond fibrinolysis, TAFI also plays a role in inflammatory conditions, processing C-terminal arginine or lysine from bradykinin, complement factors C5a and C3a, etc., leading to a reduced inflammatory/immune response (32). In this regard, our group reported that TAFI deficiency increased brain damage and circulating microvesicles in an IS model under thrombolysis, suggesting a higher inflammatory status in these mice (33). In line with these data, this study reports a significant association of thrombus TAFI with lower mortality, suggesting that TAFI could be implicated in IS at various levels, linking coagulation/fibrinolysis and the inflammatory/immune systems.
Furthermore, we previously demonstrated that MMP-10 cleaves TAFI, preventing its activation and enhancing tPA-induced fibrinolysis in vitro and in experimental models of thrombosis (17). In this study, we first identified TAFI and MMP-10 in human thrombus sections. Both proteins were localized in the same areas associated with leukocytes, and their staining even colocalized at specific points of the thrombus surface, suggesting that the processing of TAFI by MMP-10 could be operational locally due to their proximity. Moreover, the strong correlation between both proteins reinforces the idea that their coexpression at the surface of the thrombus might favor TAFI inactivation by MMP-10, promoting thrombus lysis.
Interestingly, an inverse linear correlation was also observed between MMP-10 and VWF content, the latter previously associated with platelet-rich clots, dense fibrin structures, and poor revascularization outcome (6,34). Our data suggest that higher expression of MMP-10 in thrombi might be associated with more effective fibrin lysis, lower VWF-fibrin structures, and better recanalization-related outcome. Moreover, we have also demonstrated an association between higher thrombus content of VWF and leukocytes with 3-month mortality in multivariate analysis. VWF is a large, multimeric glycoprotein that is crucial for normal hemostasis due to its role in the stable platelet plug formation at sites of vascular injury. Not surprisingly, different studies have identified VWF as an important constituent of stroke thrombi with a direct impact on thrombolysis (5, 6).
Next, we studied VWFAc, TAFIa, and MMP-10 in blood and their expression in thrombi. No significant correlation was found between circulating levels of the studied proteins and their thrombus content, suggesting a different role of VWF, TAFI, and MMP-10 in circulation and locally, where they might be involved in thrombus formation and/or in cell-dependent thrombolysis. For instance, systemic VWFAc was associated with thrombus lymphocytes. The important role of immune cells in stroke progression is well established; likewise, immune cells interact with molecules involved in platelet signaling, such as VWF, contributing to thrombus formation (35).
Furthermore, plasma TAFIa was correlated with platelets in the blood and thrombus. An association between higher plasma TAFI levels and the occurrence of IS was reported in a number of clinical studies (32-34, 36-38). It has been described that TAFI secreted upon platelet activation (39) might contribute to its variations in plasma. Our results support these data, demonstrating a correlation between plasma TAFI activity and circulating platelets and, locally, showing their colocalization in thrombi. Even if TAFI and platelet content in thrombectomies did not correlate, their association within thrombi might suggest a role of locally secreted platelet-derived TAFI in the systemic crosstalk between coagulation and fibrinolysis, protecting the thrombus against lysis.
In addition, higher circulating VWFAc was found in patients treated with tPA and in those with greater stroke severity, supporting previous studies showing that increased VWF levels were associated with elevated baseline stroke severity (by the NIHSS score) (40, 41). Moreover, elevated VWF antigen concentrations immediately after and 24 h post-thrombolysis have also been associated with poor functional outcomes 3 months after ischemia (41), and tPA has been shown to be potentially implicated in brain microvascular endothelial injury during post-ischemia in experimental models (42). Thus, it could be hypothesized that the increased levels of VWFAc after thrombolysis could be due to increased VWF antigen following endothelial damage caused by the thrombolytic agent.
Moreover, patients treated with tPA who underwent thrombectomy presented higher platelet fraction in thrombi. This fact has not been previously described but has been suggested in some studies (43). A paradoxical platelet activation has been reported secondary to fibrinolysis (44) as responsible for delayed thrombosis in some patients with tPA-resistant thrombi causing reocclusion and rethrombosis (45). Other additional mechanisms have been implicated in a higher platelets content in stroke thrombi of patients treated with tPA. An outer shell composed of platelets, extracellular DNA, and tight cross-linking of fibrin that confers resistance to fibrinolysis has been described in acute IS thrombi (46) and could support the higher platelet percentage found in thrombus of tPA-treated patients.
On the other hand, thrombus composition has been shown to be related to interventional times and the efficacy of mechanical thrombectomy treatment for LVO stroke (9, 47). In this line, in our cohort, a higher macrophage presence in the thrombus was associated with lower frequencies of recanalization with the first pass of the device. There are previous data reporting that fibrin-organized thrombi need longer recanalization times (47) or a higher number of maneuvers during mechanical thrombectomy (9); thus, further studies are needed to analyze this association more deeply.
Finally, in this study, we observed an association of plasma MMP-10 levels with cardiovascular risk factors and 3-month functional dependence. In line with these results, we had previously reported that higher serum MMP-10 levels were associated with inflammatory markers and the presence of atherosclerotic plaques in asymptomatic subjects (48). Moreover, in IS patients, serum proMMP-10 concentration was independently associated with higher infarct volume, severe brain edema, neurological deterioration, and poor functional outcome at 3 months (49). Altogether, this study confirms that plasma MMP-10 might play a key role in cardiovascular diseases and therefore could be a potential biomarker for LVO stroke patients.
There are some limitations to this report that are worth considering. First, the modest sample size and the retrospective analysis of prospectively collected data are important methodological shortcomings. Second, only thrombi from those LVO patients in whom the thrombus could be partially or totally retrieved were available for study, whereas non-recovered clots or clots dissolved after tPA treatment could not be studied, which impedes evaluation of tPA susceptibility and thrombectomy resistance. Third, the observational study design and the use of correlations to evaluate the association between variables do not allow causal relationships to be established and are only a rough approach to the probably complex interrelationships between components in thrombi.
CONCLUSION
The histological structure of thrombi is crucial to better understand their pathogenesis, properties, and clinical management in IS. The present findings suggest that the histological composition and distribution of different thrombus hemostatic components have prognostic implications and would most likely determine the clinical impact of pharmacological and mechanical strategies, guiding personalized therapies for stroke patients.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the ethics committee of the Navarra Government (84/2018). The patients or their legally authorized representative provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
JM-E participated in the experimental work, analysis of data, and edited and reviewed the manuscript. MN-O participated in the design of the project, experimental work and wrote, reviewed, and edited the manuscript. RM participated in the design of the project, samples collection, and reviewed the manuscript. GZ, RL, MM, JO-A, and JAP participated in the design of the project and reviewed the manuscript. CR and JO-A were in charge of the whole project design, supervised the work, and wrote, edited, and reviewed the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by CIBERCV (CB16/11/00371), Sociedad Española de Trombosis (SETH), project PI19/00065, funded by Instituto de Salud Carlos III and co-funded by UE (FEDER) 'Una manera de hacer Europa' , and Virto S.A. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication. | 2021-02-22T14:15:28.765Z | 2021-02-22T00:00:00.000 | {
"year": 2021,
"sha1": "3691a1628f2908ade4c20c895f3f144fae5e9d02",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.599498/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3691a1628f2908ade4c20c895f3f144fae5e9d02",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253838665 | pes2o/s2orc | v3-fos-license | Clearing an ESKAPE Pathogen in a Model Organism; A Polypyridyl Ruthenium(II) Complex Theranostic that Treats a Resistant Acinetobacter baumannii Infection in Galleria mellonella
Abstract In previous studies we have described the therapeutic action of luminescent dinuclear ruthenium(II) complexes based on the tetrapyridylphenazine, tpphz, bridging ligand on pathogenic strains of Escherichia coli and Enterococcus faecalis. Herein, the antimicrobial activity of the complex against pernicious Gram-negative ESKAPE pathogenic strains of Acinetobacter baumannii (AB12, AB16, AB184 and AB210) and Pseudomonas aeruginosa (PA2017, PA_007_IMP and PA_004_CRCN) is reported. Estimated minimum inhibitory concentrations and minimum bactericidal concentrations revealed that the complex shows potent activity against all A. baumannii strains, in both glucose-defined minimal media and standard nutrient-rich Mueller-Hinton-II. Although the activity was lower in P. aeruginosa, a moderately high potency was observed and retained in carbapenem-resistant strains. Optical microscopy showed that the compound is rapidly internalized by A. baumannii. As previous reports had revealed the complex exhibited no toxicity in Galleria mellonella up to concentrations of 80 mg/kg, the ability to clear pathogenic infection within this model was explored. The pathogenic concentrations for the larvae were determined to be ≥10^5 CFU/mL for AB184 and ≥10^3 CFU/mL for PA2017. It was found that a single dose of the compound totally cleared a pathogenic A. baumannii infection from all treated G. mellonella within 96 h. Uniquely, under these conditions, thanks to the imaging properties of the complex, the clearance of the bacteria within the hemolymph of G. mellonella could be directly visualized through both optical and transmission electron microscopy.
Introduction
[3][4] In fact, both climate change [5][6][7][8] and the ongoing COVID-19 pandemic [9][10][11][12] are exacerbating the emergence of therapeutically resistant pathogens. There is now a real threat that public health gains made over the last century in areas such as infant mortality may be reversed. [13] In this context, many studies have highlighted particular concerns over the ESKAPE group of pathogens, [14][15][16] which produce the majority of nosocomial infections, and have been classified by the WHO as pathogens that urgently require the development of new treatments. [17,18] Acinetobacter baumannii [19] and Pseudomonas aeruginosa [20] are particularly problematic members of the ESKAPE group as many of their strains have intrinsic or acquired resistance to the majority of clinically available antimicrobial agents. [21,22] While management of A. baumannii infections is becoming increasingly challenging due to its innate ability to survive in hospitals and persist on surfaces for extended periods of time, [23] P. aeruginosa is a leading cause of nosocomial infections as it is responsible for 10% of hospital-acquired infections. [20][25][26] Although there is a critical need to develop and assess new treatments for these pathogens, [27] and Gram-negative bacteria in general, [28] this goal has been hampered by several difficulties.
[31] Furthermore, several analyses have revealed that existing antibiotics active against Gram-negative bacteria have quite distinctive chemical properties compared to typical therapeutics; [32,33] for example, they tend to be more polar, more rigid, and less globular than Gram-positive antibiotics, yet traditional medicinal chemistry has a bias toward less polar, more hydrophobic small molecules. Apart from the difficulty in designing new active molecules, the identification and development of promising antimicrobial leads through traditional in vitro screening methods is also problematic.
The activity of new leads can only be optimized if their therapeutic targets are established and their bacterial uptake is quantified. Consequently, a range of analytical methods to quantify the uptake of antibiotics have been reported. [37,38] A second common difficulty in screening arises from antimicrobial efficacy tests, such as disk diffusion assays, commonly used to assess in vitro therapeutic activity, as these methods quite often do not correlate with the in vivo efficacies [39,40] later assessed in animal models, such as mice and rats, which in themselves require costly and time-consuming specialized facilities. In this context, Galleria mellonella (the Greater Wax Moth caterpillar) has recently emerged as a convenient and viable alternative for such studies. [41,42] Its ethical and logistical advantages, as well as low costs, make G. mellonella an attractive model for the study of host-pathogen interactions, especially as it has been widely established that there is a striking correlation between bacterial virulence in mammals and in Galleria. [42,43] Indeed, G. mellonella has been used to study the dynamics and virulence of P. aeruginosa [44,45] and A. baumannii infections. [46,47] Unlike other non-vertebrate models, but similar to mammals, insects have a complex innate immune system comprised of humoral and cellular responses. [48] As the larvae's hemolymph cells can phagocytose foreign microbial invaders and even produce antimicrobial peptides, this model provides antimicrobial defense information related to that observed in mammalian infection processes. [43,49,50] So, it is unsurprising that G. mellonella is becoming increasingly employed in studies on the in vivo activity of novel antimicrobial agents, particularly as it also gives information on therapeutic dosage and toxicity. A key factor in this model's burgeoning use [51] is that benchmarking studies show that it provides results that are comparable to traditional in vivo models, [52,53] such as mice, but it is more ethically compliant with the 3R's principle. [54] Furthermore, as its complete genome sequence is now available, [55] immune-system mapping that facilitates a detailed molecular understanding of host responses is now possible.
Given the issues discussed above, it is perhaps unsurprising that studies involving metal complexes as novel antimicrobial leads have attracted increasing attention. [56,57] Yet, although polypyridyl Ru II complexes have been extensively studied as imaging probes [58][59][60][61][62][63][64] and anticancer therapeutics, [65][66][67][68][69] and despite the fact that as early as the 1950s the Dwyer group had demonstrated that [Ru(phen) 3 ] 2 + (phen = 1,10-phenanthroline) and its methylated derivatives were active against a range of Gram-positive bacteria, [70,71] apart from a few notable exceptions [72][73][74] the potential of this class of compounds as antimicrobials has only recently begun to be explored more widely. [75] Also, more recently, studies in G. mellonella have begun to emerge. In 2019, Ude et al. used G. mellonella to investigate the toxicity of Cu II complexes that display activity against Gram-positive pathogens, [76] while Güntzel and colleagues demonstrated that treatment with CO-releasing Mn I complexes improves the survival rate of Galleria infected with A. baumannii and (to a lesser extent) P. aeruginosa. [77] Very recently, O'Shaughnessy et al. have shown that phenanthroline complexes containing Cu II , Mn II and Ag I centers can potentiate the effect of the conventional antibiotic gentamicin and reduce the mortality rate in G. mellonella infected with therapeutically resistant P. aeruginosa. [78] In this context, and as part of a program to develop new therapeutics [79][80][81][82] and phototherapeutics [83][84][85][86][87] based on luminescent metal complexes, the Thomas group has previously reported on several antimicrobial leads, including a dinuclear Ru II compound, [1]Cl 4 (Figure 1), that displays high therapeutic activity against a range of Gram-negative bacteria, including pathogenic multi-drug resistant strains such as E. coli EC958, [88][89][90] and is also active on resistant Gram-positive bacteria like Staphylococcus aureus. [91] One of the attractions of exploiting such systems is that they are genuine theranostics. As [1]Cl 4 is luminescent, its internalization can be directly visualized through optical microscopy, and the incorporation of two electron-dense metal ions means it is also an excellent contrast probe for transmission electron microscopy, TEM. [88,91] These intrinsic imaging properties facilitated the identification of the action mechanism of the lead. Super-resolution STED nanoscopy imaging, TEM, and membrane damage assays all confirmed that the complex disrupts the bacterial membrane structure before internalization, where it then binds bacterial DNA. [88,91] A subsequent transcriptomics-based analysis confirmed that a pathogenic AMR E. coli strain (EC958) exposed to 1 4 + displayed downregulation of genes involved in membrane transport, but increased activity of an outer membrane repair mechanism. [90]
Figure 1. Structure of the cationic polypyridyl Ru II complex, 1 4 + , studied in this report.
As in vitro and in vivo studies revealed that the complex is not toxic to eukaryotes, even at concentrations that are several orders of magnitude higher than its minimum inhibitory concentration in AMR pathogens, [88] we set out to investigate its therapeutic potential in a standard infection model.
In this report, the in vitro potency of [1]Cl 4 against a number of highly pathogenic strains of both P. aeruginosa and A. baumannii is explored, and G. mellonella is used as a model to investigate its in vivo efficacy in the treatment of A. baumannii. In these latter studies, the optical and transmission electron microscopy imaging properties of [1]Cl 4 were exploited to visualize bacterial clearance and confirm that a single dose of the complex totally eradicates the pathogenic infection in all treated larvae without any detectable deleterious effects on the Galleria.
Assessing MIC, MBC, and localization
The cationic complex 1 4 + was synthesized as a hexafluorophosphate salt through a reported procedure [88] and was studied as its water-soluble chloride salt, which was obtained via anion metathesis.
Our previous work revealed that, in contrast to most antimicrobials, including Ru II systems, [72][73][74] the complex exhibits higher activity against Gram-negative bacteria, such as a uropathogenic E. coli strain, than against the Gram-positive species E. faecalis and S. aureus. [88,91] In this study, we built upon those results and investigated seven pathogenic strains of two different Gram-negative ESKAPE bacteria. As both bacteria have common strains that exhibit carbapenem resistance, they are on the WHO's list of Priority 1 (critical) antibiotic-resistant 'priority pathogens' for research and development. [18] A. baumannii strains (AB12, AB16, AB184 and AB210) were chosen as they represent currently important clonal groups in the UK. [92,93] All strains have been shown to exhibit multi-drug resistance, including carbapenem resistance. Additionally, two carbapenem-resistant clinical isolate strains of P. aeruginosa from Public Health England, PA1 (PA_007_IMP; IMP metallo-β-lactamase producing) and PA2 (PA_004_CRCN; carbapenem and cephalosporin resistant), as well as a pan-drug resistant clinical isolate strain (PA2017) from the University of Surrey, were tested.
The minimum inhibitory concentrations, MIC, of the complex were obtained in both glucose defined minimal media (GDMM) and nutrient rich Mueller-Hinton-II (MH-II). Both media have been used in antimicrobial reports on metal complexes, but MH-II is the medium recommended by the European Committee on Antimicrobial Susceptibility Testing [94] and it more closely replicates the conditions the bacteria will experience in G. mellonella. As in previous studies, the complex exhibited a higher activity in GDMM; however, comparable activities in MH-II were observed (Table 1).
Strikingly, the MIC values are very low for all the A. baumannii strains. While comparative values are higher for P. aeruginosa, this is a notoriously difficult pathogen to treat, as it exhibits innate resistance to a wide range of antibiotics, partly because its cell membranes host several multidrug efflux pumps. [21,95] Notably, 1 4 + continues to exhibit potent activity in the P. aeruginosa strains exhibiting carbapenem resistance. As far as we are aware, this is the first inert Ru II polypyridyl complex to exhibit activities comparable to clinical antibiotics against any P. aeruginosa strain.
Estimates of minimum bactericidal concentrations, MBC, for 1 4 + were also obtained and are summarized in Table 2. The same increase in MBC values between GDMM and MH-II is observed as for the MICs. Again, a lowered potency is observed for P. aeruginosa.
As antibacterial agents are usually considered bactericidal if the MBC/MIC ratio is no more than four, the data summarized in Tables 1 and 2 indicate that 1 4 + is bactericidal across all strains of these two pathogens in both media, causing a ≥ 99.9 % reduction in the viability of the initial bacterial inoculum.
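As a minimal illustration of the MBC/MIC criterion described above, the short Python sketch below classifies an agent as bactericidal for a strain when the ratio is at most four; the strain names and concentration values in it are placeholders, not the measured data in Tables 1 and 2.

```python
def classify_activity(mic, mbc, threshold=4.0):
    """Classify an agent for one strain from its MIC and MBC (same units)."""
    ratio = mbc / mic
    label = "bactericidal" if ratio <= threshold else "bacteriostatic"
    return ratio, label

# Hypothetical example values for illustration only.
example_strains = {"strain A": (1.0, 2.0), "strain B": (4.0, 32.0)}
for name, (mic, mbc) in example_strains.items():
    ratio, label = classify_activity(mic, mbc)
    print(f"{name}: MBC/MIC = {ratio:.1f} -> {label}")
```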
The lineage of AB184 was initially associated with casualties returning from the Iraq conflict and, as a consequence, it is a very common A. baumannii clonal group in both the UK and the USA. [96] As it was observed that 1 4 + displayed high activity against AB184, the luminescent properties of the complex were used to investigate uptake by this strain through super-resolution structured illumination microscopy (SIM), which allows sub-diffraction-limited resolutions of ~100 nm (Figure 2).
The uptake and localization behavior in A. baumannii was entirely consistent with our previous detailed studies. [88,90,91] Complex 1 4 + initially binds to the membrane of cells, which is again consistent with binding to anionic lipopolysaccharides embedded within the membrane. [88] At 60 minutes, luminescence from 1 4 + is no longer observed from the outer membrane; instead, the compound has internalized, likely binding to DNA as it does in other Gram-negative pathogens such as the EC958 strain of E. coli. [88] Indeed, these observations are consistent with those obtained with other cationic species which bind to the glycerophospholipids that make up the inner membrane of A. baumannii cells. [97] It is known that some Ru II complexes can function as photoactivated antimicrobials, [98] most often through the well-delineated mechanism of singlet oxygen sensitization. [75,99] However, although the bacterial studies were carried out in the dark, we saw no evidence that exposure to light increased toxicity. Indeed, when singlet oxygen quantum yields were directly measured by assessing luminescence at 1270 nm following photoexcitation of [1](PF 6 ) 4 in acetonitrile, a ϕ( 1 O 2 ) estimate of only 10 % was obtained.
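For context, singlet oxygen quantum yields determined from 1270 nm emission are commonly obtained by a relative method against a reference photosensitizer; the expression below is one such standard form, given here only as an illustration, since the reference standard and exact protocol behind the 10 % estimate are not specified in the text.

```latex
% Relative determination of the singlet oxygen quantum yield from the
% integrated 1270 nm emission S and the absorbance A at the excitation
% wavelength, measured for sample and reference under matched conditions.
\[
\Phi_{\Delta}^{\mathrm{sample}}
  = \Phi_{\Delta}^{\mathrm{ref}}
    \times \frac{S^{\mathrm{sample}}_{1270}}{S^{\mathrm{ref}}_{1270}}
    \times \frac{1 - 10^{-A^{\mathrm{ref}}}}{1 - 10^{-A^{\mathrm{sample}}}}
\]
```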
As previous studies have determined the toxicity of 1 4 + against a representative non-cancerous human cell line, HEK293, to be IC 50 = 135 μM, [88] therapeutic indices for the compound against all the A. baumannii and P. aeruginosa strains were determined (Table 3). Although the therapeutic index is high across all the pathogenic strains, it is significantly higher for the A. baumannii strains, which reflects the compound's higher potency against this pathogen and indicates that it could be a particularly effective treatment for A. baumannii infections.
Bacterial infection screen
With the aim of assessing whether the in vitro activity of 1 4 + against AMR strains is carried through into an in vivo model, the potential of employing G. mellonella as an infection model for these multidrug-resistant pathogens was first explored. Again, AB184 was chosen for this study as it is a representative MDR strain of A. baumannii, and the highly resistant PA2017 strain was chosen as the P. aeruginosa strain.
In these experiments, it was found that PA2017 at concentrations of 10 3 CFU/mL or above killed 100 % of all inoculated G. mellonella within 24 h (see Supporting Information, Figures S1-S3). In fact, this strain is so virulent that a reliable and statistically valid infection model could not be developed, even at very low concentrations. Therefore, the infection model was carried forward solely with AB184 (SI, Figures S1, S4 and S5).
In developing the model, G. mellonella activity over 120 h was scored from 0 to 4 (0: no movement, 1: minimal movement, 2: movement on stimulation, 3: movement without stimulation), and melanization was scored from 0 to 3. An extra activity score was given for cocoon formation, as evidence of cocoon formation was observed in the non-injected controls at the 120-hour timepoint. Additionally, at each time point, the concentration of bacteria within the larval hemolymph was determined via CFU/mL counts after extraction.
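The snippet below is a rough sketch of how CFU/mL values of this kind are typically back-calculated from serial-dilution plate counts of extracted hemolymph; the colony count, dilution factor, and plated volume are assumed placeholder values, not figures from this study's protocol.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """CFU/mL = colonies counted x dilution factor / volume plated (mL)."""
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical example: 42 colonies on a plate from a 1:1000 dilution,
# with 0.1 mL of the diluted hemolymph plated.
print(cfu_per_ml(42, 1_000, 0.1))  # 420000.0 CFU/mL
```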
These experiments revealed that AB184 causes a detectable infection in the hemolymph. As there is no decrease in colony-forming units over time, this infection is pathogenic, with the larvae unable to clear it using their innate immune system. This indicates that, under these conditions, AB184 is suitable for use in a G. mellonella infection model, as any decrease in CFU/mL upon treatment with an antimicrobial compound can be attributed to the effects of the compound and not to the larvae's immune system. Before investigating treatment regimes, studies confirmed our original report that 1 4 + is non-toxic to G. mellonella up to concentrations of at least 80 mg/kg (SI, Figures S6 and S7), the average maximum daily dose for a clinical antibiotic. [88] Therefore, in the final infection model, larvae were injected with AB184 at concentrations of 10 5 or 10 6 CFU/mL, then 30 minutes later with 1 4 + at 40 or 80 mg/kg, and then incubated in the dark. The larvae were then monitored over 120 hours and scored using the previously delineated scheme. Survival curves were plotted for AB184-infected larvae treated with 1 4 + , alongside water and AB184-only controls.
For both AB184 control concentrations (10 5 and 10 6 CFU/mL), log-rank tests indicated a significant (**) difference in larval survival relative to the water controls (P values of 0.0048 and 0.0045, respectively). It was therefore determined that AB184 is pathogenic to G. mellonella at concentrations of 10 5 CFU/mL and above. Significantly, survival curves for AB184-injected larvae treated with 1 4 + at both 40 mg/kg and 80 mg/kg showed no significant difference from the water controls, indicating that the compound kills the bacteria and clears the larval infection (for activity and melanization scores, see Supporting Information Figures S8 and S9).
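A minimal sketch of this kind of survival comparison, using the log-rank test from the lifelines Python package, is shown below; the event times and censoring flags are invented placeholders rather than the study data, and significance thresholds would follow whatever convention the original analysis used.

```python
from lifelines.statistics import logrank_test

# Hours until death, capped at the 120 h observation window; event flag is
# 1 if death was observed and 0 if the larva was still alive (censored).
water_times = [120] * 10
water_events = [0] * 10
infected_times = [24, 36, 48, 48, 72, 72, 96, 120, 120, 120]
infected_events = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

result = logrank_test(water_times, infected_times,
                      event_observed_A=water_events,
                      event_observed_B=infected_events)
print(result.p_value)
```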
To ensure that the observed difference between the survival of the co-injected larvae and the bacteria-injected controls was a result of the antibacterial action of 1 4 + , CFU/mL counts were conducted on extracted hemolymph (Figure 3). From 96 h onward, no bacterial colonies were formed, confirming that the AB184 infection was completely eradicated by treatment with 1 4 + as a single dose at either of the tested concentrations.
Using 1 4 + to visualize bacterial infection and its clearance within hemolymph
In controls run before the infection experiments, it was noticed that injection of the compound caused the larvae's hemolymph to luminesce red, confirming its localization within the G. mellonella circulatory system. To quantify this effect, the amount of ruthenium in the larvae's hemolymph was assessed using ICP-AES. Hemolymph was extracted from a small incision beneath the larva's head, and ICP-AES experiments were used to monitor the Ru content over 120 h for larvae treated with 20 and 80 mg/kg of 1 4 + (Figure 4).
Given that 1 4 + concentrations are high in the hemolymph and that, as illustrated by Figure 2, the complex is taken up by and images A. baumannii cells in vitro, the possibility of directly monitoring infection clearance through optical microscopy was investigated. Hemolymph from live larvae was extracted 24 hours after infection and treatment with 1 4 + . Extractions were performed under anaesthetized conditions, and bacterial cells were imaged using confocal microscopy (Figure 5). The images confirm that once injected into larvae, 1 4 + preferentially localizes in the bacterial cells. This experiment also revealed an interesting phenomenon.
As complex 1 4 + incorporates two electron-dense ruthenium centers, it is also an excellent contrast stain for transmission electron microscopy (TEM); [100][101][102] we therefore also investigated whether the clearance of the A. baumannii infection could be monitored through this technique.
Hemolymph from infected larvae was extracted at 24, 48 and 96 h post-treatment and imaged through TEM using 1 4 + as the sole contrast stain (Figure 6).
The images clearly reveal that there are still intact A. baumannii cells within the extracted hemolymph at 24 h. However, while live and dead A. baumannii cells were observed at 48 h in both the hemolymph and in hemocyte cells, by 96 hours no bacteria were observed within either the hemolymph or the hemocyte cells. Strikingly, the hemocyte cells were still intact with no visible damage. As samples were only stained with 1 4 + and sites of contrast arise from binding of the compound, these experiments offer further evidence that 1 4 + is taken up by A. baumannii, but they also reveal that, like previously reported analogues, [86,102] it appears to bind to mitochondria and lysosomes within the hemocyte cells.
Taken together, the optical microscopy and TEM images are consistent with the results from the CFU hemolymph extraction assays and live/dead scoring, confirming that a single-dose treatment with 1 4 + results in total clearance of a pathogenic A. baumannii infection of Galleria larvae within 96 h.
Conclusions
Our in vitro experiments show that antimicrobial lead 1 4 + is active against resistant strains of A. baumannii and, albeit to a lesser extent, P. aeruginosa. Together, these bacteria present some of the greatest global threats to health and are two of the most resistant organisms encountered in clinical practice; A. baumannii alone has been estimated to cause one million infections a year, with carbapenem resistance rates as high as > 95 %. The need to develop new therapeutic leads against these pathogens is particularly urgent, as strains that are resistant to the last-line antibiotics polymyxins and tigecycline are emerging.
Significantly, the complex also cleared an infection of a multidrug-resistant A. baumannii strain within G. mellonella in a single dose. In terms of treatment, two key observations arise from the infection model study: first, the infection was cleared in all treated larvae, even at the lowest doses of 1 4 + ; second, the infection was successfully treated at concentrations of the complex that produced no detectable toxicity effects in G. mellonella. Furthermore, the fact that 1 4 + is intrinsically luminescent and taken up by the infecting bacteria means it is a genuine theranostic, as the larval hemolymph infection could be directly monitored until the host was clear of infection. This is the first time that any single agent combining imaging properties capable of monitoring an infective agent with the ability to clear a highly resistant ESKAPE infection in an in vivo model with a single dose has been reported. This study underlines the potential metal complexes offer as new and novel therapeutics, particularly as Ru II complexes have very recently been shown to offer promise as potential leads against pernicious mycobacterial pathogens. [103] Further studies in G. mellonella and murine models, aimed at optimizing the therapeutic properties of 1 4 + and its derivatives and at identifying new leads, are currently underway.
Figure 3 .
Figure 3. Galleria mellonella infection model. Colony-forming unit counts from larval hemolymph extractions. Larvae were injected with (A) 10 5 CFU/mL of AB184 or (B) 10 6 CFU/mL of AB184. In both cases, the G. mellonella were treated with 1 4 + (40/80 mg/kg) and the results were compared to untreated larvae. Extractions were taken at 24 and 120 h. Protocol: larvae were injected with bacteria in their right pro-leg; those that were treated then received a dose of 1 4 + 30 minutes later in their left pro-leg. Larvae were incubated for 120 h at 37.5 °C. Error bars represent the results from three repeats.
Figure 5 .
Figure 5. Confocal microscope images confirming localization of 1 4 + in A. baumannii AB184 cells within the larvae's hemolymph. A: Selected cell images using the emission of 1 4 + on excitation at 450 nm with an A568 filter. B: Combined phase contrast/emission image. Extracted hemolymph cells were washed with nitric acid before fixing with paraformaldehyde (16 %).
Table 3 .
Therapeutic index: IC 50 /MIC ratio for all strains of bacteria in GDMM. | 2022-11-25T06:17:30.614Z | 2022-11-24T00:00:00.000 | {
"year": 2023,
"sha1": "473a788eab84d1ad5c4a92099fbccc9540113d63",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/chem.202203555",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5637231b26f07e20bc538e0b8c3d67a9a2ff022a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253322343 | pes2o/s2orc | v3-fos-license | Current Situation of Staff Providing Social Work Services to Children with Autism Spectrum Disorder in Vietnam
Autism spectrum disorder is a common syndrome in many countries around the world. According to the General Statistics Office (2018), Vietnam has about one million autistic people (out of a total of 6.2 million people with disabilities aged two years and over). The estimated prevalence of children with autism spectrum disorder is 1% of all newborns. To meet the need for assisting families and children with autism spectrum disorder, social work services have been established. However, these services are still being built up and are affected by many different factors. The following article presents research results on the status of social work service providers for families and children with autism spectrum disorder in Vietnam. Service providers are important players in the service supply chain for any group of clients; they contribute decisively to the quality and efficiency of services, relying on their capabilities to create quality services suitable for each client group. The study's results provide important suggestions for improving the quality of human resources in implementing social work service models for families and children with autism spectrum disorder. Keywords: families of children with autism spectrum disorder; children with autism spectrum disorder.
On the other hand, it is necessary to have practical studies on the situation of social work service providers for families and children with ASD in order to develop strategies and specific solutions to improve the effectiveness of social work services for children with autism and their families. But what is the current situation of social work services in supporting families and children, and to what extent do they meet their needs? What factors affect the quality of social work services? These are still questions that need more research to answer. In studying social work services for families and children with ASD, it is essential to learn about the staff providing services, specifically social workers. Among the components of social work (subjects of social work, problems of clients, social agencies, and problem-solving processes), social workers belong to the social agencies and take part in the problem-solving process. Social workers are described as "persons who provide services or implement social assistance programs... They are trained people with professional knowledge and skills" (Bui Thi Xuan Mai, 2014, p. 99). This study describes the situation of social work service providers in several public and non-public centers in five provinces/cities, namely Yen Bai, Bac Giang, Bac Ninh, Hanoi, and Nghe An, thereby giving an overview of their significant features such as age, gender, qualifications, training majors, and work experience. In addition, the results of in-depth interviews with leaders and service providers are used to analyze the factors affecting the effectiveness of social work services, approached from the perspective of staff capacity. This provides the basis for recommendations on support activities for service providers, thereby improving the effectiveness of social work services in the field of autism spectrum disorder.
Research Methodology
The study surveyed social work service providers at 15 public and non-public establishments in five areas: Yen Bai, Bac Giang, Bac Ninh, Hanoi, and Nghe An. A total of 133 staff members were selected to participate in this study using a snowball sampling method; all held positions in which they directly provide services to families and children with autism spectrum disorder. The questionnaire focused on the interviewees' personal information, such as gender, age, qualifications, training major, and work experience. From there, some discussion is provided on how to improve the quality of service delivery for children and families by creating better working conditions and upgrading the qualifications of staff at service providers.
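As an illustration of how the demographic proportions reported in the findings below could be tabulated from such questionnaire responses, the short Python/pandas sketch that follows uses entirely hypothetical column names and rows, not the actual survey data.

```python
import pandas as pd

# Placeholder records standing in for the 133 questionnaire responses.
responses = pd.DataFrame({
    "gender": ["female", "male", "female", "female"],
    "age_group": ["20-29", "30-39", "20-29", "40+"],
    "degree": ["bachelor", "college", "master", "bachelor"],
})

# Percentage breakdown of each demographic variable, as reported in the text.
for column in ["gender", "age_group", "degree"]:
    shares = responses[column].value_counts(normalize=True).mul(100).round(1)
    print(shares, "\n")
```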
Key findings of the study
The main findings of this study focus on the situation of social work service providers in the public and non-public centers, including information such as age, qualifications, training majors, and work experience. From there, there is some discussion about improving the quality of service delivery for children and families through creating better working conditions and improving the qualifications of staff at service providers.
Age and gender of service staff
When conducting research on social work service providers for families and children with autism spectrum disorder, the results of the demographic survey show that there is a gender disparity among these staff: women accounted for 87.2% and men for 12.8%. This is explained on a number of grounds as follows. First, the staff working in the field of autism spectrum disorders are bachelors in psychology, pedagogy, special education, and social work, which are female-dominated sectors; this is the reason for the gender disparity in the service staff. In addition, due to gender characteristics, women often have advantages in working with children with autism spectrum disorder, such as being caring, sensitive, sweet, often encouraging, gentle, warm, affectionate, emotional, devoted, and understanding (Braggin 1982; Burke 2009; Kite 2001; Worell 2001). These are essential qualities for social work activities. Regarding the age of the staff, young staff from 20 to under 30 years old currently account for the highest proportion (49.6%), staff from 30 to under 40 years old account for 45.9%, and staff over 40 years old account for 3%. This is a strength of the social service staff. Young staff are human resources who quickly absorb new knowledge and new technologies in the working process, adapt easily to different conditions and circumstances, and are in good health. Besides, the staff aged 30-40 are also expected to be a human resource with rich working experience. This will be an additional resource alongside the young staff. These two forces complement and support each other in the work process, contributing to the effectiveness of providing the best services for families and children with autism spectrum disorder.
Professional qualifications
The survey revealed that the percentage of employees with a bachelor's degree was the highest (47.7%), while the percentage of social workers with a master's degree was the lowest (3%). Thus, the human resources are of quite high quality. Other qualifications, such as graduation from high school, technical school, and college, accounted for a total of 50%, of which the lowest proportion were those who had graduated only from high school (0.8%) (cf. Fig. 2). The high proportion of staff with university degrees or higher is a positive signal that staff and institutions have planned for and invested in capacity development, through recruitment and by creating conditions for staff to participate in capacity-building courses. A high level of expertise is one factor that ensures the quality of staff in the provision of services.
With a team of service providers educated mainly to college and university bachelor level (accounting for 84.2%), there is a foundation for providing intensive training in the field of autism through short-term and long-term courses to improve the quality of service delivery. Methods of improving and updating knowledge and skills for specialized fields of social work through dedicated training courses are also being used effectively in many countries around the world, for example in social work for disability, drug rehabilitation, and judicial assistance.
Specialized training of service staff
One of the issues identified in previous summary reports on the field of social work is the shortage of staff with the right qualifications. The reason is that social work was only officially recognized as a profession in 2010. When establishments switched functions to perform social work activities and services, limited staffing made it difficult to recruit new employees, so staff had to be recruited from other departments whose professional qualifications did not match the field of social work. This problem has recently been addressed thanks to greater awareness of the expertise required in this area. Non-public and public institutions have therefore paid more attention to recruitment, emphasizing people with degrees in social work. Specific results showed that staff with specialized degrees in social work accounted for the largest proportion (39.1%).
According to the survey results, the staff providing services to families and children with ASD are mainly graduates of training disciplines such as social work, special education, psychology, delinquency studies, medicine, and other majors. Except for special education majors, which include modules directly related to children with ASD, the other majors do not have such modules (Nguyen Phuong Anh, Nguyen Thi Thai Lan, 2022). Therefore, the lack of well-grounded professional training is one of the main factors preventing service providers' knowledge base from fully meeting the demands of working with families and children with ASD. Regarding the frameworks of social work training programs at universities and institutes (15 institutions), sections related to social work for autism spectrum disorder (ASD) are almost entirely absent. Among the 15 training institutions, only one has a module directly connected to ASD, namely the subject of early intervention for children with ASD (Nguyen Phuong Anh, Nguyen Thi Thai Lan, 2022).
Work experience with children affected by autism spectrum disorders
Work experience has a direct impact on service delivery efficiency; this was the general opinion of employees when asked to evaluate the relationship between their experience and work effectiveness. Work experience with children with autism spectrum disorder is measured by the time the staff member has spent performing work related to children with ASD and their families, as well as working with the community on ASD.
This work involves directly providing services to families and children with ASD and to the community. For children with ASD, the services are early intervention, autism screening/diagnosis, intervention/therapy, community integration, case management, and rehabilitation. For the families of children with ASD, the services are prevention, psychological counseling, resource mobilization, advocacy, policy counseling, and counseling to raise knowledge levels. Specific services for the community include propaganda/communication, raising awareness, and mobilizing resources.
The results show that the work experience of the staff providing services in this field is one of the important limitations. The number of employees with work experience of less than one year accounted for 21.1%. Staff with work experience from 1 to 3 years accounted for the highest proportion (33.8%).
In particular, the number of employees with work experience of 5-7 years or more than seven years accounts for only a small proportion, at 14.3% and 7.5%, respectively. The lack of experience, especially in interventions supporting children with autism spectrum disorder, is a limitation in ensuring the quality of services. However, this reflects the reality that working in this field involves many challenges and requires patience and dedication to the profession. When some center leaders were interviewed in depth about why service providers have so few years of experience, mainly less than five years, one reason given was that staff often have to be replaced because they become less engaged with the profession, especially in non-public centers. This situation arises because these service providers often work under a lot of pressure: pressure from families about the child's progress, stress and fatigue, and even health effects or injuries at work caused by some typical behaviors of children with ASD, such as yelling, acting out when their demands are not met, and even smashing furniture, causing injury to themselves and others (Nguyen Sinh Phuc et al., 2017). In addition, according to this study's results, the staff's income is also relatively low: 14.3% earn below 5 million VND and 60.9% earn from 5 million to less than 10 million VND. Low income, not enough to cover living costs, is a factor that makes it difficult for employees to stay in the profession. The survey results also show that the fewer the years of experience, the lower the salary. When asked about the difficulties affecting service providers, up to 41.6% of respondents chose the low-income factor, which is not enough to cover their living costs.
Chart 4. Service staff's working experience
Discussion
The study's results show that the service staff meet the requirements for qualifications and training majors. In addition, their age and work experience offer certain advantages for providing services in the field of autism spectrum disorder in Vietnam today. However, this staff also has some areas that need to change to better meet the needs of children and families. From the situation of social work services for families and children with ASD described through the main findings of the study of 133 respondents above, some recommendations can be made on strengthening staff development to ensure the highest efficiency of the services provided to beneficiaries in the field of autism spectrum disorder. The first, for training institutions, is to develop an interest in training social workers in the field of autism spectrum disorder, thereby providing qualified human resources that meet the needs and characteristics of this client group. As analyzed above, according to the survey results, the staff providing services to families and children with ASD are mainly graduates of training disciplines such as social work, psychology, sociology, and special education. Except for special education majors, which include modules directly related to children with ASD (early intervention for children with autism spectrum disorder, life skills education for children with autism spectrum disorder, and development of communicative language for children with autism spectrum disorder), the other majors do not have these modules (Nguyen Phuong Anh, Nguyen Thi Thai Lan, 2022). Therefore, the lack of well-grounded professional training is one of the main factors preventing service providers' knowledge base from fully meeting the demands of working with families and children with ASD. Thus, to improve staff knowledge, attention must be paid to developing an appropriate system of training institutions, after which human resources in this field will be better assured. This can be done in two ways: developing professional training programs at the university level, and developing intensive training courses in the field of autism spectrum disorder.
Secondly, for service providers, it is necessary to have a plan to train, foster, and improve the qualifications of working staff through topics covering in-depth knowledge of each type of service, working skills, and coping with stress, thereby helping staff who directly provide services adapt better to their work. A scarcity of experienced workers arises when individuals quit their positions because they cannot overcome the difficulties and challenges involved. Professional work skills are a driving factor for success in any profession. Working with families and children with ASD is of particular interest because this is a group of clients with many unique characteristics, and the problems they face are also very diverse. However, it is currently observed that service providers are quite focused on, and doing well in, specific skills in children's education, while skills for working with families and communities are still weak. This is why they encounter difficulties in the process of working with families and communities. Some skill groups that are quite specific to families and children with ASD are mentioned for work with the family, such as listening, observing, asking questions, responding, and handling crises, as well as skills for working with the community, such as building networks, mobilizing resources, and coordination (Nguyen Trung Hai et al., 2017). However, the implementation of these skills is still very limited, for several reasons. First, the field of children with ASD has not been well covered in training, so the learning and practice of these skills are limited. Second, when it comes to families affected by ASD, the community tends to care only about the children and often overlooks the family as an important factor in raising the child and changing the child's condition. As a result, the pressure falls on the service providers, with families expecting results and leaving the children at the center; if the results are not as desired, families put pressure on the staff and centers providing services. This becomes a circle of consequences in which the person trapped is the service provider. Because of the lack of skills to work with families, the effectiveness of service delivery is often affected. Thus, helping the family understand and accept their child's problems is also a task that requires family-work skills, thereby improving the efficiency of service delivery and bringing the best benefits to children and families. | 2022-11-05T15:38:18.498Z | 2022-10-30T00:00:00.000 | {
"year": 2022,
"sha1": "c565e8c68b02b100294ac92c95118474ad980baf",
"oa_license": null,
"oa_url": "https://msocialwork.com/index.php/aswj/article/download/229/135",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "db62a882b34b28690a66b3d9f27a215ceca35611",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"extfieldsofstudy": []
} |
23892926 | pes2o/s2orc | v3-fos-license | Meningiomas of the Anterior Clinoid Process: Is It Wise to Drill Out the Optic Canal?
Introduction: Meningiomas of the anterior clinoid process are uncommon tumors, acknowledged by most experienced surgeons to be among the most challenging meningiomas to completely remove. In this article, we summarize our institutional experience removing these uncommon and challenging skull base meningiomas. Methods: We analyzed the clinical outcomes of patients undergoing surgical removal of anterior clinoid meningiomas at our institution over an 18-year period. We characterized the radiographic appearance of these tumors and related tumor features to symptoms and the ability to obtain a gross total resection. We also analyzed visual outcomes in these patients, focusing on visual outcomes with and without optic canal unroofing. Results: We identified 29 patients with anterior clinoid meningiomas who underwent surgical resection at our institution between 1991 and 2007. The median length of follow-up was 7.5 years (range: 2.0 to 18.6 years). Similar to others, we found gross total resection was seldom safely achievable in these patients. Despite this, only 1/20 patients undergoing subtotal resection without immediate postoperative radiosurgery experienced tumor progression. The optic canal was unroofed in 18/29 patients in this series, while in 11/29 patients it was not. Notably, all five patients experiencing visual improvement underwent optic canal unroofing, while three of four patients experiencing visual worsening did not. Conclusions: These data provide some evidence suggesting that unroofing the optic canal in anterior clinoid meningiomas might improve visual outcomes in these patients.
Introduction
Meningiomas of the anterior clinoid process are uncommon tumors, acknowledged by most experienced surgeons to be among the most challenging meningiomas to completely remove due to their propensity to encase the internal carotid artery (ICA) and its branches, and invade the cavernous sinus and the optic canal [1][2][3][4]. In many cases, the tumor is densely adherent to the carotid artery, rendering complete tumor removal impossible, even in experienced hands [4][5][6][7] (Figures 1-3).
FIGURE 3: Coronal image of tumor and surrounding structures
As tumors enlarge in the cisternal supraclinoid segment, they displace the optic nerve and ICA inferiorly and medially, covering these structures from view to the operating surgeon via an intradural approach.
To date, most surgical outcome studies of meningioma patients have focused on presenting and analyzing outcomes of a cohort of patients undergoing surgery for a specific type of meningioma as single unified patient cohort [1][2][3][4][5][6][8][9][10]. However, it is hard to argue that any group of skull base meningiomas represent a unified group of uniform pathologic anatomy. While some skull base meningiomas present as a localized mass, others present as a diffuse mass, infiltrating the cavernous sinus, encasing vessels, and invading cranial nerve foramina. Most skull base surgeons are well aware that not all clinoid meningiomas are the same. However, due to the rarity of these lesions, it has been difficult to sub-stratify and subanalyze these lesions differently based on differing radiographic features. Thus, the literature to date has generally not analyzed outcomes for clinoidal meningiomas in the same way that skull base surgeons think of them when they are planning an operation.
While complete tumor removal, if possible, is the goal of all meningioma surgery, the evolution of stereotactic radiosurgery and 3D-conformal radiotherapy as effective, less invasive treatment options for addressing residual tumor postoperatively has paradoxically made evidence-based intraoperative decision in these cases more complex [11][12][13][14]. We would argue that in present neurosurgical practice, surgery of these complex, multi-faceted lesions can best be described as a series of risk-benefit comparisons in which the surgeon weighs the risks and benefits of surgically removing each portion of the tumor against leaving all or part of that portion of the tumor and treating with radiosurgery, or simply observing the residual disease with serial imaging.
In this article, we summarize our institutional experience removing these uncommon and challenging skull base meningiomas. We have specifically targeted our analysis towards radiographically characterizing the frequency of specific radiographic characteristics of surgically treated lesions at our institution. We further characterize the significance of these tumor characteristics for surgical decision-making and clinical outcome, with a specific emphasis on analyzing the impact of the decision to open the optic canal on visual outcomes.
Patient population
The patients were adults (age ≥ 18 years) who underwent surgery at the University of California at San Francisco (UCSF) between 1991 and 2009, had preoperative and postoperative (< 72 hours) magnetic resonance imaging (MRI), and had at least one year of clinical follow-up. Patients with hemangiopericytomas were excluded from the study. Patients were included in this analysis only if radiosurgery did not seem an appropriate alternative treatment option given the clinical and radiographic characteristics of the specific case. In general, these cases involved tumors greater than 2.5 cm in largest diameter, tumors with imaging features concerning for higher histologic grade (i.e., irregular borders, an indistinct interface with the cortical surface), tumors growing rapidly on serial imaging, and tumors with significant symptoms referable to mass effect.
This study was approved by the UCSF Committee on Human Research (approval #H7828-29842-03). Informed patient consent was obtained for all patients undergoing treatment.
Microsurgical technique, surgical strategy, and perioperative management
Intraoperative neuronavigation was used routinely in order to help identify the internal carotid artery (ICA), displayed as a red object using 2D magnetic resonance angiography merged with axial T1-weighted images. This was helpful for the larger tumors in which the ICA was enveloped by tumor. For most cases, a cranio-orbital skull base approach was used while attempting a Simpson Grade 1 resection whenever possible. Preoperative embolization was generally not performed for these tumors given that they are often supplied largely by small ICA perforators, which cannot be sacrificed. Tumors were generally approached using a standard frontotemporal craniotomy or with a two-part frontotemporal orbitozygomatic osteotomy. The decision whether or not to perform an extradural anterior clinoidectomy and unroof the optic canal was based on attending practice patterns (some attending physicians unroof the optic canal routinely, as was the practice of the senior author; others routinely do not).
For those cases in which the optic canal was drilled out, an extradural method was used. If there was involvement of the infra-clinoid region then an extradural clinoidectomy was performed after opening the optic canal. The senior author's rule was: in order to do a safe clinoidectomy you first need to open the optic canal, but you don't need a clinoidectomy to only open the optic canal. To open the optic canal, the extradural dissection proceeded medially along the sphenoid wing until the orbitomeningeal fold was identified. The first 6-8 mm of the fold were cut to allow exposure of the length of the clinoid. The bone of the roof of the orbit toward the orbital apex was removed using a 2 mm diamond drill bit with constant irrigation (Figures 4-5).
FIGURE 4: Roof of optic canal
Intraoperative image of the initial drilling of the roof of the optic canal.
FIGURE 5: Extradural approach
Artists depiction of this phase of the extradural approach.
Ultrasonic aspirators were never used for bone removal around the canal or clinoid. A trough was created on the medial and lateral sides of the canal, using the marrow space of the clinoid as the marker for the lateral limit of the canal and the posterior ethmoid air cells as the medial limit (Figures 6-7), and then the central two-thirds of the roof was removed with a #4 Rhoton microdissector (Figures 8-9).
FIGURE 6: Optic Canal Exposed
Intraoperative image of the cortical bone of the roof of the optic canal exposed. The medial border of the waxed ethmoid air cells and the lateral border of the cancellous bone of the clinoid are marked with arrows. Artist's illustration of the image.
FIGURE 8: Removal optic canal roof
After drilling a medial and lateral gutter through cortical bone of the optic canal, the central 2/3 of the roof is dissected off optic nerve sheath using micro instrument (Rhoton #4). Artist's depiction of image.
It is thought that limiting drilling of the cortical bone of the roof of the optic canal to the medial and lateral sides may reduce the chance of optic nerve injury from the transmission of mechanical energy or heat. If a clinoidectomy was required, it was completed at this stage using the posterolateral aspect of the optic canal as a marker for the medial drilling of the base of the optic strut. Intradural exposure of the tumor was then performed from an inferior frontal approach, with tumor debulking until the basal parts of the tumor needed attention (Figure 10). Here, the basal frontal dura was incised back towards the optic canal and nerve sheath (Figures 11-12).
FIGURE 10: Intradural tumor removal
Artist's depiction of first steps in intradural tumor removal towards base near clinoid/canal region.
FIGURE 16: Identification of displaced optic nerve
Optic nerve identified, displaced posteriorly, inferiorly and medially by tumor. This would not be seen until the end of the dissection if an intradural dissection were performed without opening of the optic canal.
Just deep to the displaced optic nerve, the ICA can be found after removing the tumor in the proximal, superior, and lateral aspect of the optic canal ( Figure 17).
FIGURE 17: Identification of ICA
After identifying the optic nerve, the ICA can be found after removing tumor entering the lateral and superior part of the optic canal.
With the two critical structures to be preserved now identified and the tumor detached from the base, further dissection of the infraclinoid and cisternal tumor is facilitated (Figure 18).
FIGURE 18: Dissection of ICA
Dissection of subarachnoid planes performed along ICA after detachment of tumor from the base and having identified both the optic nerve and ICA early in the dissection.
Once intradural, the tumor was generally debulked from within using an ultrasonic aspirator. Careful attention was paid to identifying and respecting the arachnoid plane at the tumor-brain interface, which facilitates complete resection and minimizes pial vessel injury. Whenever possible, the involved dura was resected or cauterized.
While the goal of the operation from the onset was total tumor removal, the discovery of significant tumor adherence to the cranial nerves or the internal carotid artery, or significant invasion of the cavernous sinus, generally prompts us to seek near total resection, leaving a small amount of tumor in the involved region.
Intraoperatively, all patients received decadron (10 mg), mannitol (1 g/kg), and ceftriaxone (1 or 2 gm) at the time of incision. Postoperatively, all patients were cared for in a neuro-intensive care unit for one day before returning to the ward. On postoperative day 2, a prophylactic dose of enoxaparin (40 mg SC each day) was initiated in all patients and continued for one week. Routine use of venous thrombosis prophylaxis was not started until after 2001 [15]. The incidence of postoperative intracranial hemorrhage was no different in the patient groups before or after prophylaxis was begun [16]. Irrespective of preoperative seizure history, all patients were also loaded with an antiepileptic agent at the time of surgery (Dilantin initially, Keppra more recently), which was continued for one week postoperatively and then discontinued.
Data collection
Preoperative MR imaging was reviewed for each patient in order to confirm the diagnosis of an anterior clinoid meningioma. We routinely perform formal assessments of visual function using formal visual acuity testing and formal perimetric visual field testing, both pre- and postoperatively. Improvement in visual function was defined as > 30% reduction in visual field deficit and/or meaningful improvement in visual acuity on postoperative examination. Worsening in visual function was defined as any new visual field cut, or any significant decline in visual acuity postoperatively. Visual function was defined as "unchanged" if no change or only minor change occurred between tests. All patients were seen by ophthalmologists for visual follow-up, and the visual outcome was obtained from written objective assessments of visual improvement and patient reports. Conversion from written or printed forms to electronic medical records did not allow for scanning of visual fields into the EMR in all but one case. Paper records were subsequently destroyed after the digital conversion, limiting our ability to display postoperative visual field patterns.
Central pathology review was performed on the basis of the most recent World Health Organization (WHO) guidelines [17]. Clinical data were collected from the patient records and telephone interviews. All clinical assessments were performed by a neurosurgeon. In each case, the extent of resection and Simpson Grade [18][19] were determined using a combination of the surgeon's assessment and MR imaging.
Statistical analysis
Binary variables were compared using Pearson's χ 2 test. Continuous variables were compared using an independent samples t-test or ANOVA, after statistical confirmation of normality. Continuous variables are presented as mean ± SE. Statistical tests were considered significant when p < 0.05 after correcting for multiple comparisons using the Bonferroni method.
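A minimal sketch of these comparisons in Python with SciPy is given below; the contingency counts and group measurements are synthetic placeholders, and the Bonferroni step simply scales each p-value by the number of comparisons, which is one common way to apply the correction described above.

```python
import numpy as np
from scipy import stats

# Pearson's chi-squared test on a 2x2 table of binary variables (placeholder counts).
table = np.array([[3, 12],
                  [4, 10]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Independent-samples t-test and one-way ANOVA on continuous variables (synthetic data).
rng = np.random.default_rng(0)
g1, g2, g3 = rng.normal(3.0, 0.5, 10), rng.normal(3.2, 0.5, 12), rng.normal(2.8, 0.5, 9)
t_stat, p_t = stats.ttest_ind(g1, g2)
f_stat, p_anova = stats.f_oneway(g1, g2, g3)

# Bonferroni correction: multiply each p-value by the number of tests, capped at 1.
p_values = [p_chi2, p_t, p_anova]
p_bonferroni = [min(p * len(p_values), 1.0) for p in p_values]
print(p_bonferroni)
```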
Patient and tumor demographics
We identified 29 patients with anterior clinoid meningiomas who underwent surgical resection at our institution between 1991 and 2007. The demographic characteristics of individual patients are listed in Table 1. The median length of follow-up was 7.5 years (range: 2.0 to 16.9 years). The median patient age was 53 years old at the time of surgery (range: 21 to 78 years old). The patient population was largely female, which is not unusual for a series of meningioma patients. Twenty-seven of 29 patients had WHO Grade 1 meningiomas. All patients underwent surgery as opposed to radiotherapy either due to large tumor size, proximity of the tumor to the optic apparatus, or both.
Relationship between preoperative radiographic characteristics and presenting symptoms
The relationship between presenting symptoms and radiographic tumor characteristics is summarized in Table 2. We found that 25/29 tumors had a supraclinoidal origin, while 4/29 were infraclinoidal in origin. All four patients with an infraclinoidal origin demonstrated cavernous sinus invasion, while only 3/25 patients with a supraclinoidal tumor origin did. About half of the tumors (15/29) invaded the optic canal, and slightly less than half encased the supraclinoid carotid artery (14/29). Sellar invasion was present in 12/29 of the patients.
TABLE 2: Imaging Characteristics and Symptoms
The frequency of various imaging characteristics of anterior clinoid meningiomas, and the relationship between these findings and various presenting symptoms.
Interestingly, while patients with optic canal invasion by tumor usually presented with decreased preoperative vision (12/15), over half of the patients without optic canal invasion did as well (8/14), suggesting that radiographic optic canal invasion is not necessary for visual compromise in these tumors. Also interesting was the complete absence of hypopituitarism in these patients, despite sellar invasion being present in a large number of them. Palsies of cranial nerves III, IV, and/or VI were present in only one patient preoperatively, which is similarly interesting given the proximity of these tumors to the nerves of the superior orbital fissure.
Relationship between preoperative radiographic characteristics and extent of resection
Similar to others [2], we found gross total resection was seldom safely achievable in these patients. Simpson Grade 1 resection was achieved in three patients, Grade 2 resection was achieved in three patients, and Grade 3 resection was achieved in one patient. We were unable to achieve gross total resection in any cases with an infraclinoidal tumor origin or cavernous sinus invasion ( Table 3). We were, however, able to obtain gross total resections (Simpson Grades 1-3) in a few patients with optic canal invasion (3/15), vessel encasement (2/14), and sellar invasion (2/10).
TABLE 3: Imaging & Extent of Resection
The relationship between imaging characteristics and extent of resection achieved in these patients.
Despite the frequent need for subtotal resection for these tumors, we found that these tumors seldom progressed following subtotal resection, even without radiosurgery. Of the 22 patients in this series who received a Simpson Grade 4 resection, two patients underwent radiosurgery for the residual disease in the cavernous sinus shortly following surgery. Of the remaining 20 patients not undergoing upfront adjuvant postoperative radiosurgery, only 1/20 (5%) has experienced documented growth of their residual tumor during the follow-up period (Table 1). This cohort interestingly includes two WHO Grade 2 tumors, which have not recurred to date.
Postoperative visual outcome with or without optic canal unroofing
The optic canal was unroofed in 18/29 patients in this series. In 11/29 patients, the meningioma was removed and, when possible, the dura overlying the anterior clinoid process was coagulated; however, the optic canal was not unroofed, and tumor invasion into the optic canal was not addressed surgically. Notably, all five patients experiencing visual improvement underwent optic canal unroofing, while three of four patients experiencing visual worsening did not (Table 4). While statistical significance is difficult to achieve in cohorts of this size, these data did display a statistical trend towards improved outcomes with unroofing the optic canal (χ2 p = 0.13).
TABLE 4: Optic Canal Opening and Vision
Visual outcomes of patients who underwent optic canal unroofing and those who did not.

To address the possibility that extent of resection might impact visual outcomes, we compared visual outcomes in patients stratified by the Simpson Grade of resection (Table 5). Again, it is difficult to draw firm conclusions in a cohort this size; however, these data do not obviously suggest a significant relationship between subtotal resection and improved/worsened visual outcomes, as some patients receiving Simpson Grade 4 resection experienced improved visual outcomes, while others experienced worse outcomes.
TABLE 5: Simpson Grade and Visual Outcome
A summary of extent of resection and visual outcome in the 29 patients in this study.
Surgical morbidity and mortality after resection of anterior clinoid meningiomas
Seven of 29 patients (24%) in this series experienced at least one medical, neurosurgical, or neurologic complication resulting from the surgical procedure. Four of these seven patients experienced worsening visual function as described in the previous section. One of the remaining patients suffered a retraction injury causing word-finding difficulties, which had largely resolved at long-term follow-up. One patient developed a venous infarction, which caused facial nerve weakness that also resolved by six months postoperatively. One patient developed new hydrocephalus, which eventually required ventriculoperitoneal shunting. There were no wound complications and no medical complications (e.g., DVT/PE, UTI, or cardiac, renal, pulmonary, or hepatic complications) in this cohort. The six-month mortality rate in these patients was 0%.
In addition, one patient (patient #17) ( Table 1) who had a tumor with significant cavernous sinus involvement and proximity to the optic nerve, underwent craniotomy and removal of the components of the tumor near the optic apparatus and in the optic canal. The craniotomy was uneventful. Four months postoperatively, she subsequently underwent Gamma Knife radiosurgery with 15 Gy to the 50% isodose line administered to the cavernous sinus disease. Eight months post-Gamma Knife, she presented with intermittent motor symptoms and eventually underwent angiography demonstrating complete occlusion of the cavernous carotid. Given that she had good collateral filling through the posterior communicating artery, she was treated with aspirin alone with good symptom resolution.
Discussion
In our opinion, anterior clinoid meningiomas should be thought of conceptually as three different tumors in close proximity: the cisternal portion, the cavernous/carotid portion, and the optic canal portion, although not all of these portions are present in all cases. In this conceptual framework, each of these three "tumors" poses different issues regarding its proximity to the optic apparatus, its relationship to important neurovascular structures, and the challenges of the surgical maneuvers necessary to remove tumor from the relevant anatomic region. Thus, the decision to remove the tumor from the optic canal represents a risk-benefit decision comparing the relative merits of drilling out the optic canal and removing the tumor, versus leaving tumor behind and observing it. With this question formally posed, the relative merits of different treatment approaches can potentially be systematically studied, and the decision about whether this surgical technique is a worthwhile risk can be made based on data targeted at specifically answering this question. Due to the rarity of these lesions, it is difficult for any one center to definitively answer this question alone, and thus, our study represents the first formal contribution of data towards this answer.
The present study reports data regarding the frequency of various preoperative anatomic characteristics and clinical outcomes for a moderately sized series of patients treated surgically for anterior clinoid meningiomas at our institution. While our series adds to a growing literature regarding outcomes of patients undergoing surgery for these difficult lesions [1-6, 8-10, 20], due to the rarity of these lesions, it is unlikely that any one center treating them can acquire enough experience with anterior clinoid meningiomas to definitively answer important questions, such as "Should surgeons drill out the optic canal and remove tumor from the canal?" and "What is the fate of residual tumor left in the cavernous sinus or on the ICA?" Due to the variability of anatomic presentation of skull base meningiomas, the rarity of many meningioma subtypes (in this case, clinoid meningiomas), and the long period over which these patients need to be observed postoperatively, it is very likely that these questions can only be answered through the collaborative efforts of multiple centers over many years. It is important, however, that such data collection follows a standardized, detailed, and rational methodology, so that important confounding variables can be controlled for and well-conducted studies addressing these important questions can be structured.
Our analysis of surgical outcomes of patients with anterior clinoid meningiomas aimed to provide data for two major questions regarding these tumors. The first question was whether it is wise to unroof the optic canal and attempt to remove the tumor in these cases. Interestingly, we found that all patients who experienced visual improvement underwent optic canal unroofing, while most patients whose vision worsened postoperatively did not. These data are certainly not definitive; however, they suggest that optic canal unroofing is at least not clearly a bad idea and might be helpful. Interestingly, two of the patients whose vision improved did not have obvious radiographic evidence of tumor invasion into the canal, and the optic canal unroofing was performed as part of the extradural clinoidectomy, suggesting that decompression of the optic nerve by tumor removal from the optic canal cannot by itself explain the visual improvement in these cases. Possible benefits of optic canal unroofing in these cases include protection of the optic nerve from vibratory and thermal injury during surgery, elimination of a point of kinking of a compressed optic nerve, and reduction of pressure around the optic nerve in the perioperative period when tumor and/or nerve swelling might occur [1-3, 6]. The latter might be of particular importance given the frequent need to leave behind tumor in these cases. It is important to note that these findings may not necessarily extend to optic canal invasion from other meningiomas, such as those arising from the tuberculum sella, as the different origin of these tumors might cause different tumor-optic nerve orientations and different arachnoidal planes than those seen with anterior clinoid meningiomas [1,2]. As such, these data deserve independent analysis.
An additional question we sought to study in our dataset was: Given the frequent need for subtotal resection in these cases, is upfront postoperative adjuvant radiosurgery or radiotherapy indicated to prevent growth of the residual disease [14]? We found that even without conformal radiotherapy or radiosurgery, most residual tumors did not regrow over a period of several years of follow-up, suggesting that adjuvant therapy can usually be avoided with close imaging follow-up. Given the close proximity of these tumors (and their postsurgical remnants) to the highly radiosensitive optic nerve [11,14], the fact that radiation can often be avoided or delayed in many patients with subtotally resected WHO Grade 1 tumors is not a trivial point. This concept is further highlighted by the post-radiosurgery carotid occlusion experienced by patient #17, which highlights the fact that radiosurgery, while generally safe and effective, is not entirely benign when administered in this region and should be reserved for patients with residual disease demonstrating growth on follow-up imaging studies. On the other hand, observation is also not entirely free of risk, and we recommend performing annual follow-up imaging on these patients, as a delay in the re-treatment of persistent tumor regrowth can leave the patient with a much more difficult problem to manage.
Conclusions
In conclusion, we present data from our series of patients with surgically treated anterior clinoid meningiomas, which suggest that while a conservative approach to these lesions can still provide reasonable rates of tumor control, even without upfront radiosurgery/radiotherapy to the residual disease, some aggressive maneuvers, such as unroofing the optic canal, might be beneficial. At a minimum, these data contribute to what we hope will be the beginning of a collaborative effort towards a systematic approach to assessing the risks and benefits of the techniques we utilize in skull base surgery.
Additional Information

Disclosures
Animal subjects: This study did not involve animal subjects or tissue. Human subjects: UCSF Committee on Human Research issued approval H7828-29842-03. | 2018-04-03T01:45:26.617Z | 2015-09-01T00:00:00.000 | {
"year": 2015,
"sha1": "0ec942a0511ed6884b4d2c6a7068bc29e01598e1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7759/cureus.321",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3df675f4c0c0a055f3da7f160432bb43738316a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56576597 | pes2o/s2orc | v3-fos-license | Phase Diagram of the Frustrated Square-Lattice Hubbard Model: Variational Cluster Approach
The variational cluster approximation is used to study the frustrated Hubbard model at half filling defined on the two-dimensional square lattice with anisotropic next-nearest-neighbor hopping parameters. We calculate the ground-state phase diagrams of the model in a wide parameter space for a variety of lattice geometries, including square, crossed-square, and triangular lattices. We examine the Mott metal-insulator transition and show that, in the Mott insulating phase, magnetic phases with Néel, collinear, and spiral orders appear in relevant parameter regions, and in an intermediate region between these phases, a nonmagnetic insulating phase caused by the quantum fluctuations in the geometrically frustrated spin degrees of freedom emerges.
Introduction
The effect of geometrical frustration in strongly correlated electron systems has been one of the major issues of condensed matter physics. In particular, a spin-liquid state caused by the frustration has been interpreted as an exotic state of matter, where the magnetic long-range order is destroyed, yielding a quantum paramagnetic (or nonmagnetic) state at zero temperature, 1) or even as a route to exotic mechanisms of high-temperature superconductivity. 2) The Hubbard, Heisenberg, and related models defined on the two-dimensional square and triangular lattices with geometrical frustration have been studied in this respect to find such novel quantum disordered states by means of a variety of theoretical methods.
In the square-lattice cases, the J 1 -J 2 Heisenberg model with the nearest-neighbor (J 1 ) and next-nearest-neighbor (J 2 ) exchange interactions has been studied for more than two decades. At J 2 = 0, where the frustration is absent, the model is known to have the Néel-type antiferromagnetic long-range order. With increasing J 2 , the frustration increases, but at J 2 = J 1 , the model again has a ground state with the collinear antiferromagnetic long-range order. The strongest frustration occurs around J 2 /J 1 = 0.5, where nonmagnetic disordered states such as a valence-bond state 4,6,8,10,11,14-16,18,22,29) and a spin-liquid state 12,24-28) have been suggested to appear, the region of which has recently been studied further in detail. 31,32) The t 1 -t 2 Hubbard model with the nearest-neighbor (t 1 ) and next-nearest-neighbor (t 2 ) hopping parameters has also been studied, where it has been shown that the ground state is the Néel order in a small t 2 /t 1 region and a collinear order around t 2 = t 1 , 33,34) and that the quantum disordered state appears between these ordered states. 35,36) In the triangular-lattice cases, the anisotropic J-J ′ triangular Heisenberg model has been studied. In the isotropic case (J = J ′ ), the 120° spiral ordered phase is known to be stable. 37) In the anisotropic case, the Néel order is realized when J ′ /J is small and the spiral order is realized around J ′ /J = 1, 38-49) and between these phases, a dimer ordered phase 39) or a spin-liquid phase 47,48) has been predicted to appear. The anisotropic t-t ′ triangular Hubbard model has also been studied 50-54) and a quantum disordered state has also been observed between the Néel and spiral phases. 51,52,54) Recently, the magnetic orders in the triangular-lattice Heisenberg model with the nearest-neighbor (J 1 ) and next-nearest-neighbor (J 2 ) exchange interactions have also been studied, where the quantum disordered phase is shown to appear between the spiral and collinear phases. 55-59)
We will thereby show that the magnetic phases with the Néel, collinear, and spiral orders appear in relevant regions of the parameter space of our model and that the quantum disordered phase caused by the effect of frustration emerges in a wide parameter region between the ordered phases obtained. The orders of the phase transitions will also be determined. We will summarize our results as a ground-state phase diagram in a full two-dimensional parameter space. This phase diagram will make the characterization of the quantum disordered phase more approachable although it is beyond the scope of the present paper.
Model and method
We consider the frustrated Hubbard model defined on the two-dimensional square lattice at half filling as illustrated in Fig. 1. The Hamiltonian is given by
$$H = -t_1\sum_{\langle i,j\rangle\sigma}\bigl(c^\dagger_{i\sigma}c_{j\sigma}+\mathrm{H.c.}\bigr) - t_2\sum_{\langle\langle i,j\rangle\rangle\sigma}\bigl(c^\dagger_{i\sigma}c_{j\sigma}+\mathrm{H.c.}\bigr) - t'_2\sum_{\langle\langle i,j\rangle\rangle'\sigma}\bigl(c^\dagger_{i\sigma}c_{j\sigma}+\mathrm{H.c.}\bigr) + U\sum_i n_{i\uparrow}n_{i\downarrow} - \mu\sum_{i\sigma}n_{i\sigma},$$
where $c^\dagger_{i\sigma}$ is the creation operator of an electron with spin $\sigma$ at site $i$ and $n_{i\sigma}=c^\dagger_{i\sigma}c_{i\sigma}$. $\langle i,j\rangle$ runs over the nearest-neighbor bonds with an isotropic hopping parameter $t_1$, and $\langle\langle i,j\rangle\rangle$ and $\langle\langle i,j\rangle\rangle'$ run over the two sets of next-nearest-neighbor bonds with anisotropic hopping parameters $t_2$ and $t'_2$, respectively [see Fig. 1(a)]. $U$ is the onsite Coulomb repulsion between electrons and $\mu$ is the chemical potential maintaining the system at half filling. In the large-$U$ limit, the model can be mapped onto the frustrated spin-1/2 Heisenberg model by second-order perturbation theory in the hopping parameters. We define the spin-1/2 operator $\bm{S}_i=\frac{1}{2}\sum_{\alpha\beta}c^\dagger_{i\alpha}\bm{\sigma}_{\alpha\beta}c_{i\beta}$, where $\bm{\sigma}_{\alpha\beta}$ is the vector of Pauli matrices. The exchange coupling constants are then given by $J_1=4t_1^2/U$, $J_2=4t_2^2/U$, and $J'_2=4t_2'^2/U$ for the corresponding bonds in Fig. 1(a). Because in this paper we are interested in the geometrical frustration in the spin degrees of freedom of the model and want to compare our results with those of the Heisenberg model for which related studies have been accumulated, we restrict ourselves to a large-$U$ region assuming a value $U/t_1 = 60$, so that we can preclude the Mott metal-insulator transition. We treat a wide parameter space of $0 \le t_2/t_1 \le 1$ and $0 \le t'_2/t_1 \le 1$, including three limiting cases: (i) at $t_2 = t'_2 = 0$ (square lattice, see Fig. 1), where the Néel order is realized, (ii) at $t_2 = t'_2 = t_1$ (crossed-square lattice), where the collinear order is realized, and (iii) at $t_2 = t_1$ and $t'_2 = 0$ [triangular lattice, see Fig. 1(d)], where the 120° spiral order is realized. We will calculate how the above three ordered phases change when the hopping parameters are varied in the ranges $0 \le t_2 \le t_1$ and $0 \le t'_2 \le t_1$. We employ the VCA, which is a quantum cluster method based on the SFT, [60][61][62] where the grand potential Ω of the original system is given by a functional of the self-energy. By restricting the trial self-energy to that of the reference system $\Sigma'$, we obtain the grand potential in the thermodynamic limit as
$$\Omega = \Omega' + \mathrm{Tr}\ln\bigl(G_0^{-1}-\Sigma'\bigr)^{-1} - \mathrm{Tr}\ln G',$$
where $\Omega'$ and $G'$ are the exact grand potential and Green function of the reference system, respectively, and $G_0$ is the noninteracting Green function. The short-range electron correlations within the cluster of the reference system are taken into account exactly. The advantage of the VCA is that the spontaneous symmetry breaking can be treated within the framework of the theory. Here, we introduce the Weiss fields for magnetic orderings as variational parameters. The Hamiltonian of the reference system is then given by
$$H' = H + h'_{\rm N}\sum_i e^{i\bm{Q}_{\rm N}\cdot\bm{r}_i}S^z_i + h'_{\rm C}\sum_i e^{i\bm{Q}_{\rm C}\cdot\bm{r}_i}S^z_i + h'_{\rm S}\sum_i \bm{e}_{a_i}\cdot\bm{S}_i,$$
where $h'_{\rm N}$, $h'_{\rm C}$, and $h'_{\rm S}$ are the strengths of the Weiss fields for the Néel, collinear, and spiral orders, respectively. The wave vectors are defined as $\bm{Q}_{\rm N}=(\pi,\pi)$ for the Néel order and $\bm{Q}_{\rm C}=(\pi,0)$ or $(0,\pi)$ for the collinear order. For the spiral order, the unit vectors $\bm{e}_{a_i}$ are rotated by 120° to each other, where $a_i$ (= 1, 2, 3) is the sublattice index of site $i$. The variational parameter is optimized on the basis of the variational principle $\partial\Omega/\partial h' = 0$ for each magnetic order. A stationary solution with $h' \ne 0$ corresponds to the ordered state.
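As a purely schematic illustration of the variational step $\partial\Omega/\partial h' = 0$ described above, the Python sketch below scans a single Weiss field and locates stationary points of the grand potential. The function grand_potential is a toy placeholder standing in for the exact-diagonalization evaluation of Ω on the 12-site reference cluster, so the numbers are illustrative only and do not reproduce the paper's calculation.

```python
# Schematic VCA-style search for stationary points of the grand potential
# with respect to a single Weiss field h'.  The cluster solver is stubbed out:
# grand_potential() below is a toy double-well placeholder, NOT the exact
# diagonalization of the 12-site reference system used in the paper.
import numpy as np

def grand_potential(h_prime: float) -> float:
    # Placeholder: a smooth function with a nontrivial minimum at h' != 0,
    # mimicking a symmetry-broken (ordered) solution.
    return 0.25 * h_prime**4 - 0.5 * h_prime**2

def stationary_points(omega, h_grid):
    """Return grid points where dOmega/dh' changes sign (stationary points)."""
    omega_vals = np.array([omega(h) for h in h_grid])
    derivative = np.gradient(omega_vals, h_grid)
    roots = []
    for i in range(len(h_grid) - 1):
        if derivative[i] == 0.0 or derivative[i] * derivative[i + 1] < 0.0:
            roots.append(0.5 * (h_grid[i] + h_grid[i + 1]))
    return roots

h_grid = np.linspace(0.0, 2.0, 401)
for h_star in stationary_points(grand_potential, h_grid):
    print(f"stationary point near h' = {h_star:.3f}, "
          f"Omega = {grand_potential(h_star):.4f}")
# In an actual VCA calculation, a stationary point at h' > 0 with lower Omega
# than the h' = 0 solution signals a magnetically ordered phase.
```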
We use a 12-site cluster shown in Fig. 2 as the reference system. This cluster is convenient because we can treat the two-sublattice states (Néel and collinear states) with an equal number of up and down spins, and at the same time the three-sublattice state (spiral state) with an equal number of sites in each of the three sublattices. Note that longer-period phases, such as a spiral phase mentioned for a different system, 52) cannot be treated in the present approach.
Results of calculations
First, let us present the entire phase diagram of our model in Fig. 3, where the result for our Hubbard model in the (t 2 /t 1 , t ′ 2 /t 1 ) plane as well as the same result converted to the Heisenberg-model parameters (J 2 /J 1 , J ′ 2 /J 1 ) are shown. We find three ordered phases: the Néel ordered phase around (t 2 /t 1 , t ′ 2 /t 1 ) = (0, 0), the collinear ordered phase around (t 2 /t 1 , t ′ 2 /t 1 ) = (1, 1), and the spiral ordered phase around (t 2 /t 1 , t ′ 2 /t 1 ) = (1, 0) and (0, 1). The quantum disordered phase, which is absent in the classical system, appears in an intermediate region between the three ordered phases. As shown below, the phase transition to the collinear phase is of the first order (or discontinuous) and the phase transitions to the Néel and spiral phases are of the second order (or continuous). This phase diagram is determined based on the calculated ground-state energies E = Ω + µ (per site) and magnetic order parameters M (per site) defined as $M_{\rm N}=(2/L_c)\langle\sum_i e^{i\bm{Q}_{\rm N}\cdot\bm{r}_i}S^z_i\rangle$ for the Néel order, $M_{\rm C}=(2/L_c)\langle\sum_i e^{i\bm{Q}_{\rm C}\cdot\bm{r}_i}S^z_i\rangle$ for the collinear order, and $M_{\rm S}=(2/L_c)\langle\sum_i \bm{e}_{a_i}\cdot\bm{S}_i\rangle$ for the spiral order, where $\langle\cdots\rangle$ stands for the ground-state expectation value and L c is the number of sites in the reference system. In the following, we will circumstantiate the obtained phases, in particular along the lines (i), (ii), and (iii) drawn in Fig. 3(a), whereby we will discuss some details of our calculations.

Along the line (i): The results are shown in the left panel of Fig. 4, where we assume t 2 = t ′ 2 . At t 2 = 0, the ground state is the Néel order, and with increasing t 2 , the energy of the Néel order gradually approaches the energy of the disordered state. At t 2 /t 1 = 0.73, the energy of the Néel order continuously reaches the energy of the disordered state and the Néel order disappears. The calculated order parameter indicates the continuous phase transition. At t 2 /t 1 = 1, on the other hand, the ground state is the collinear order. The ground-state energy of the collinear order increases with decreasing t 2 , and at t 2 /t 1 = 0.79, it crosses that of the disordered state, resulting in a discontinuous phase transition as the calculated order parameter indicates. The disordered state thus appears at 0.73 < t 2 /t 1 < 0.79, which corresponds to the region 0.53 < J 2 /J 1 < 0.63 in the Heisenberg-model parameters. In comparison with previous studies on the J 1 -J 2 square-lattice Heisenberg model, which have estimated the transition point between the Néel and disordered phases to be at J 2 /J 1 = 0.40−0.44, 17,22,24,31,32) our result slightly overestimates the stability of the Néel order. This overestimation may be caused by the cluster geometry used in our calculations; if we use the 2 × 2 site cluster as the reference system, the transition occurs at J 2 /J 1 = 0.42, 36) which is in good agreement with the previous studies. The transition point between the collinear and disordered phases, on the other hand, has been estimated to be at J 2 /J 1 = 0.59−0.62, 17,22,24,32) which is in good agreement with our result.

Fig. 4. (Color online) Calculated results for the ground-state energies (upper panels) and order parameters (lower panels) for the Néel, collinear, spiral, and disordered phases as a function of t 2 /t 1 or t ′ 2 /t 1 . The left, middle, and right panels correspond to the lines (i), (ii), and (iii) in Fig. 3(a), where we assume t 2 = t ′ 2 , t ′ 2 = 0, and t 2 = t 1 , respectively. The inset in (c) and (e) displays the energy difference between the spiral and disordered phases, ∆E = E S − E D , and other insets enlarge the region near the phase boundary.
Along the line (ii): The results are shown in the middle panel of Fig. 4, where we assume t ′ 2 = 0. With increasing t 2 from t 2 = 0 at which the ground state is the Néel order, the energy of the Néel order gradually approaches the energy of the disordered state, and at t 2 /t 1 = 0.88, the Néel order disappears continuously. The calculated order parameter indicates the continuous phase transition. At t 2 /t 1 = 1, on the other hand, the ground state is the spiral order although the energy difference between the spiral and disordered states is very small [see the inset of Fig. 4(c)] due to the strong geometrical frustration of the triangular lattice. With decreasing t 2 from t 2 /t 1 = 1, the ground-state energy of the spiral order increases gradually and approaches the energy of the disordered state, and at t 2 /t 1 = 0.89, the spiral order disappears continuously, in agreement with the calculated order parameter. Thus, the disordered phase appears in a very narrow region 0.88 < t 2 /t 1 < 0.89. The corresponding Heisenberg-model parameters where the Néel and spiral orders disappear are around J 2 /J 1 = 0.79. The previous studies for the anisotropic triangular-lattice Heisenberg model done by the coupled-cluster and exact-diagonalization methods 42,44) have given the value around J 2 /J 1 = 0.80−0.87 for the transition point, which is in good agreement with our result.
Along the line (iii): The results are shown in the right panel of Fig. 4, where we assume t 2 = t 1 . At t ′ 2 = 0, the ground state is the spiral order although the energy difference from the disordered state is very small [see the inset of Fig. 4(e)]. With increasing t ′ 2 , the energy of the spiral order gradually approaches the energy of the disordered state, and at t ′ 2 /t 1 = 0.34, the spiral order disappears continuously, in agreement with the calculated order parameter. On the other hand, with decreasing t ′ 2 from t ′ 2 /t 1 = 1 at which the collinear order is stable, the ground-state energy of the collinear order increases and crosses the energy of the disordered state at t ′ 2 /t 1 = 0.59. The transition is thus discontinuous, in agreement with the calculated order parameter. The disordered state therefore appears at 0.34 < t ′ 2 /t 1 < 0.59, which corresponds to the region 0.11 < J ′ 2 /J 1 < 0.34 if we use the Heisenberg-model parameters. To our knowledge, no comparable calculations have been made for the frustrated Heisenberg model in this parameter region.
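Since the calculation is carried out at U/t 1 = 60, the Heisenberg-model regions quoted along the three cuts follow directly from the second-order mapping J = 4t²/U, which gives J 2 /J 1 = (t 2 /t 1 )² and J ′ 2 /J 1 = (t ′ 2 /t 1 )². The short Python check below only restates this arithmetic for the quoted transition points, not the VCA calculation itself; the printed values agree with the ranges quoted in the text to within the rounding of the transition points.

```python
# Convert the hopping-ratio transition points quoted in the text into
# Heisenberg-model parameters via the large-U mapping J = 4 t^2 / U,
# which gives J2/J1 = (t2/t1)^2 and J2'/J1 = (t2'/t1)^2.
transition_points = {
    "line (i),  Neel -> disordered      (t2/t1)": 0.73,
    "line (i),  disordered -> collinear (t2/t1)": 0.79,
    "line (ii), Neel/spiral boundary    (t2/t1)": 0.88,
    "line (iii), spiral -> disordered   (t2'/t1)": 0.34,
    "line (iii), disordered -> collinear (t2'/t1)": 0.59,
}
for label, t_ratio in transition_points.items():
    j_ratio = t_ratio ** 2
    print(f"{label}: t-ratio {t_ratio:.2f} -> J-ratio {j_ratio:.2f}")
```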
Summary
In summary, we have used the VCA based on the SFT to study the two-dimensional frustrated Hubbard model at half filling with the isotropic nearest-neighbor and anisotropic next-nearest-neighbor hopping parameters. We have in particular focused on the effect of geometrical frustration on the spin degrees of freedom of the model in the strong correlation regime at zero temperature, and have investigated the magnetic orderings and emergence of the quantum disordered phase in a wide parameter space including the square, crossed-square, and triangular lattices.
We have thereby presented the ground-state phase diagram of the model, which includes the magnetic phases with the Néel, collinear, and spiral orders. We have also shown that the quantum disordered phase caused by the effect of frustration emerges in a wide parameter region between the three ordered phases obtained and that the phase transition from the Néel and spiral orders to the disordered phase is continuous (or second-order transition), whereas the transition from the collinear order to the disordered phase is discontinuous (or first-order transition). We have compared our results with the results of the corresponding Heisenberg-model calculations that have been made so far and found that the agreement is good whenever the comparison is possible. We hope that our results for the phase diagram will encourage future studies on the characterization of the quantum disordered state obtained. | 2017-02-01T04:33:52.000Z | 2015-04-09T00:00:00.000 | {
"year": 2015,
"sha1": "87fb7e9fa704fc9a9228bc9c93c1b911873b734c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.02213",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "87fb7e9fa704fc9a9228bc9c93c1b911873b734c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
226690096 | pes2o/s2orc | v3-fos-license | Modeling Cost Saving and Innovativeness for Blockchain Technology Adoption by Energy Management
In developed nations, the advent of distributed ledger technology is emerging as a new instrument for improving the traditional systems of developing nations. Indeed, adopting blockchain technology is a necessary condition for the coming future of organizations. Distributed ledger technology provides better transparency and visibility. This study investigated the features that may influence the behavioral intention of energy experts to implement distributed ledger technology for the energy management of developing countries. The proposed model is based on the Technology Acceptance Model constructs and the diffusion of innovation construct. Based on a survey of 178 experts working in the energy sector, the proposed model was tested using structural equation modeling. The findings showed that perceived ease of use, perceived usefulness, attitude, and cost saving had a positive and significant impact on blockchain technology adoption. However, innovativeness showed a positive effect on the perceived ease of use but an insignificant impact on the perceived usefulness. The present study offers a holistic model for the implementation of innovative technologies and, for developers, it suggests directions for emerging disruptive technology solutions.
Introduction
Digitalization and technological development are the backbone of economic growth and environmental sustainability for any country [1]. All countries of the world are adopting modern methods and technologies to remain competitive and to get work done in a strategic way. Innovation and technological adoption are essential for economies to retain their business and achieve their targets [2]. The energy sector of any country is crucial for accomplishing efficiency and fulfilling the demand of the country and its residents [3]. Worldwide energy consumption and energy requirements are anticipated to increase by 28% from the year 2015 to 2040. In the case of the Asian region, the expected rise in energy demand is 51%, which is the highest among the regions of the world [4]. Currently, worldwide renewable energy production is highly emphasized, and developing countries are also moving toward the proper implementation of renewable energy solutions. Presently, developing countries are facing serious problems in the production and distribution of energy. In such circumstances, it is a big challenge to meet the energy needs of both the industrial and residential sectors [5]. So, the advent of disruptive technology for energy management in developed states is emerging as a paradigm to improve the traditional energy system in developing states. The use of distributed ledger technology in renewable energy will have a substantially significant effect on energy's sustainable usage by offering greater convenience for the customer. Distributed ledger technology can be useful in the energy sector for carbon management, distributed trading, and the popularization of renewable energy [6]. In particular, distributed ledgers can aid in lessening transaction costs and enhancing flexibility in the funding of energy projects [7]. The blockchain can provide better privacy for transactions within the energy wholesale trading phase [8]. Moreover, distributed ledgers can improve the clearing and settlement mechanism in retail trading practice, promoting community involvement in the procurement cycle, in new energy use, and in the reduction of carbon emissions [9].
Indeed, innovation in technology is a critical engine of energy transitions. One such breakthrough is the smart grid. Consequently, building on developments in the digital industry is beneficial [10]. According to [11], "the technology revolution reverses the industrial revolution and in this way changes the structure of the markets". The payment system is experiencing remarkable change, with an increase in cashless associations, P2P transactions, and social networking micropayments [12]. Marketplaces are gradually becoming decentralized, with multiple dealers, where trust affects the transaction costs. The traditional centralized system is inefficient in the energy sector [13]. It needs greater digital technology, data security, and information trustworthiness [14]. The smart grid has been considered as the "energy internet" for the networking of multi-energy projects [15]. Blockchain technology can provide transparent, decentralized, and secure frameworks for the energy internet [16]. Distributed ledger technology has the ability to provide P2P microgrids with prosumers [17]. Distributed ledgers are grounded on consensus algorithms [18], which can lessen the exchange cost, increase efficiency, enhance trust, and enable fast P2P transactions on multiple scales [19]. The disruptive technology is a suitable framework for any crowd system type: tracking, smart contracts, proof of ownership (provenance), and identity management (prosumer and machine). The basic blockchain-based network for a crowd system is shown in Figure 1. The energy market, and the electricity market in particular, is in a transitional stage, based on administrative monitoring and technological developments. The decentralized electricity market is characterized by a great number of dealers with consistent transactions. So, the applicability of distributed ledgers in energy management determines safety and trust [21]. In a blockchain-based network, all the participants agree on the validity of the data. All members can check and access the data within a specific time, confirming that this ecosystem is transparent. In addition, transparency without a declaration of identity is guaranteed. The appraisal is improved further if we consider the brainchild of Nick Szabo, referred to as the smart contract [22]. It allows trusted transactions to take place between disparate anonymous parties without the need for a central authority mechanism [23]. Consequently, distributed ledgers provide automation for exchange processes, specifically in P2P energy management.
Existing research on the implementation of distributed ledger technology for energy mostly concerns advanced economies like the US [24,25]. Our study focused on disruptive technology adoption for developing economies. Previous studies mostly focused on the technology-organization-environment framework [26]. In this study, we utilized a hypothetical framework based on TAM constructs [27] extended with cost saving [28] and innovativeness [29], together with an in-depth online survey for the measurements. After studying several papers, we conclude that this is the first paper to evaluate disruptive technology adoption in energy management. The findings play a vital role for both practitioners and policymakers seeking to adopt distributed ledger technology in energy management. The present study was conducted to answer the subsequent research questions.
RQ1. What are the aspects that drive the attention of the energy sector to implement distributed ledger technology?

RQ2. Among the factors, which has the greater impact on the disruptive technology acceptance intention?
The structure of the manuscript is organized as follows: Section 2 presents the literature review. Section 3 presents the proposed model. Section 4 explains the methodology. Section 5 presents the results, and finally the work ends with a discussion and implications.
Distributed Ledger Technology
Distributed ledger technology is the brainchild of Satoshi Nakamoto [30]. Blockchain technology affords remarkable features to various sectors of organizations. The blockchain records transactional details between businesses with a very high level of security [31]. Distributed ledger technology reduces transaction costs, brings transparency to the supply chain, and increases traceability in the manufacturing domain for anti-counterfeiting measures [32]. Distributed ledgers automatically deliver the required results instantly [33]. The disruptive technology enables the entire world to make contracts embedded in digital code, where all the data is saved authentically without the fear of deletion, revision, or tampering [34]. Distributed ledger technology gives every agreement, every process, and every task a higher level of validity; it provides verification of digital signatures and identification of contracts [35]. Many of the intermediaries of daily life, including bankers, administrators, lawyers, and stock exchange brokers, might no longer be required [36]. Machines, organizations, individuals, and algorithms will interact with users with little effort [37]. Entire businesses and economies are being virtually revolutionized by the blockchain. In the future, distributed ledger technology will transform businesses and governments in new ways that lead to the lowest-cost solutions [38]. Blockchain is a recent, far-reaching technology that creates innovation and cost reduction in different fields of the economy. Blockchain innovates economic functions through peer-to-peer models, boosting small economies and sustainable societies [8].
Blockchain technology provides persistency, automation, auditability, and immutability [39]. These benefits stem from the cryptographic hash structure of the distributed ledger, the digital signatures of smart contracts, and the distributed network of the consensus algorithm [18]. Still, the exact process depends on the consensus mechanism. Three phases of distributed ledger technology applications can be distinguished. Blockchain 1.0 indicates the virtualization of digital currencies like bitcoin [30]. Blockchain 2.0 includes smart contracts for the transaction process [40]. The next phase, blockchain 3.0, enables a high level of independence through decentralized autonomous organizations based on smart contracts with predefined complex rules [18]. In addition, in a public/permissionless blockchain, like bitcoin and ethereum, anyone can participate in and access the ledgers [41]. It is mainly based on the proof-of-work algorithm; anyone can add new blocks. In a private/permissioned blockchain, like the Hyperledger Fabric network, only members can read and write the data. It is mainly based on proof of authority and proof of stake [26]. Moreover, a consortium blockchain is a combination of both permissionless and permissioned blockchains based on predefined rules. Apart from these, Tendermint is most prominently used for allowing a unified swap of tokens among several blockchains [13]. In conclusion, blockchain is a decentralized technology that can lower transaction costs, provide better security, increase transparency, and improve the traditional systems of organizations [42].
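To make the chained-hash idea behind these properties concrete, the toy Python sketch below links blocks by including each block's hash in its successor, so that altering any recorded transaction invalidates every later block. This is only a minimal illustration of tamper evidence under hypothetical data; it is not a model of any production ledger, consensus mechanism, or the frameworks discussed above.

```python
# Toy illustration of tamper-evident chaining via cryptographic hashes.
# This is NOT a real ledger: no consensus, no signatures, no networking.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON serialization of the block contents.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def make_block(index: int, transactions: list, prev_hash: str) -> dict:
    return {"index": index, "transactions": transactions, "prev_hash": prev_hash}

# Build a small chain of energy-trade records (hypothetical data).
chain = [make_block(0, [{"from": "grid", "to": "prosumer-A", "kwh": 5}], "0" * 64)]
chain.append(make_block(1, [{"from": "prosumer-A", "to": "consumer-B", "kwh": 2}],
                        block_hash(chain[0])))
chain.append(make_block(2, [{"from": "prosumer-A", "to": "consumer-C", "kwh": 1}],
                        block_hash(chain[1])))

def is_valid(chain: list) -> bool:
    # Each block must reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print("chain valid:", is_valid(chain))           # True
chain[1]["transactions"][0]["kwh"] = 200          # tamper with a past record
print("after tampering:", is_valid(chain))        # False
```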
Blockchain in Energy Sector
Blockchain imparts its advantages in various fields, such as automotive, finance, manufacturing, internet, and networking. Blockchain technology also brings substantial benefits to the energy sector [32]. The world is rapidly shifting toward renewable energy sources due to the harmful effects of non-renewable energy on the environment. Several countries, such as Denmark, aim for a 100% shift toward renewable energy sources by 2050 [43]. Recently, some countries (China, Spain, and Germany) have planned to achieve a 70% implementation of renewable energy sources. The renewable energy transformation is only possible with technological innovations, which blockchain technology can help achieve. Distributed ledger technology provides services directly from the source without a middle man, so it reduces costs and develops trust among clients [8]. Blockchain implementation in Japan's energy sector has been analyzed in terms of technology, economics, society, environment, and institutions. The results emphasized that blockchain will support improvement of the energy sector and the attainment of zero-carbon production by 2050 [8].
The research work of [44] focused on the blockchain in the energy sector of China, where there is concern about environmental sustainability in renewable energy production; distributed ledger technology provides a reduction in cost and ease of consumption for clients. In addition, the research of [7] indicates that blockchain technology became one of the top 10 successful technologies in the year 2018. It works in many areas as a promising technology for energy technological development and is featured in future technological advancements in the energy sector. The specific energy applications of blockchain include P2P trading, energy storage arrangements, and manageable loads. The authors of [45] studied distributed ledger technology applications for an energy marketplace setting containing two manufacturers and one customer; they discussed the viability of disruptive technology for Industry 4.0 and concluded that distributed ledgers play a pivotal role in the energy market. The authors of [45,46] proposed a blockchain-based distributed demand-side management model that could match energy demand with production. The authors of [47] investigated the smart grid concept with blockchain technology from the perspective of energy production; the smart grid replaces the conventional method of energy production, and the research concluded that the blockchain provides new and secure ways of energy production. The authors of [13] studied 140 blockchain projects and their possible effects on energy companies. The findings indicate that disruptive technology greatly lessens exchange costs, such as data processing and confirmation, which led the marketplace to embrace minor distributed generators. The authors of [48] proposed a distributed ledger-based model for the development of distributed microgrid energy trading algorithms. The authors of [49] conceived an energy blockchain-based scheme for safe electric vehicle-charging services in the smart city. The authors of [50] proposed a decentralized market network in which prosumers and consumers could use the blockchain to exchange local electricity; they analyzed their decentralized market based on 100 households, indicating that this could lessen future costs. In conclusion, the disruptive technology is useful in energy transactions, the supply chain, and the energy internet. A distributed ledger-based energy framework could bring efficiency to the traditional energy system and consequently lower the energy cost for end consumers.
Technology Adoption Model
Technological development and advancement always play a vital role in the financial growth of a country. Various studies have focused on technology adoption models; for example, [51] examined consumer adoption of renewable energy consumption by comparing perceived attributes and attitude intentions, [52] studied the case of solar photovoltaic cell installation in the USA, and [53] analyzed efficiency programs for the adoption of new and used energy technology using a spatial energy growth model. Technological development in the energy sector can enhance the environmental sustainability of a country. The above-mentioned technology projects are for the betterment of society and consumers' well-being, but their success is subject to technology adoption.
TAM
The technology acceptance model (TAM) was proposed by [27] as an adaptation of the theory of reasoned action (TRA) especially designed for user adoption behavior. Accordingly, [27] applied the TAM to the implementation of computer-based information systems in organizations to obtain enhanced organizational performance. To increase user acceptance, it is necessary to explain why people accept or reject computer information systems [54]. TAM emphasizes the determinants of computer acceptance, explaining user behavior across end-user computing technologies and a broad range of user populations [55]. TAM is not only helpful for prediction but also aids both practitioners and researchers in identifying why a system may be unacceptable and in pursuing appropriate corrective steps [56]. The TAM is based on two main concepts: perceived usefulness (PU) and perceived ease of use (PEOU). It is one of the prominent models that predicts user behavioral intention to accept a new technology [54,57] and is the leading model [58] in the literature. The recent literature on TAM is presented in Table 1.
Table 1. Recent literature on TAM.

[59] Applied TAM with four variables in the energy sector (perceived behavior, moral norms, awareness, and social norms). The paper examined TAM in the renewable energy sector in Iran; the findings confirm a significant relationship between these variables and intentions, and a negative relationship between social norms and intentions.

[60] Implementation of Green IT using an extended TAM; the study treats environmental IT as an emerging trend. The study extended the TAM for Green IT using injunctive, descriptive, and personal norm constructs. The results describe that environmental beliefs, descriptive and personal norms, and perceived usefulness directly impact intentions towards green IT. Moreover, environmental beliefs and government policies have significant effects on the normative variables.

[61] Focus on the importance of psychological ownership for user attitudes in the organization. The research connects the TAM with the antecedents and outcomes of psychological ownership. TAM has a significant relationship with long-term customer loyalty and customer engagement in media use.

[62] Perceived usefulness and how users come to use blockchain technology in digital-world transactions, focusing specifically on Twitter insights of users. Blockchain technology is a modern emergence in the digital world. The paper explores individual acceptance of disruptive technology models and exchanges and concludes that users are inclined towards security, ease of use, traceability, verification, and digital transactions. The paper explains the managerial implications and the future of blockchain technology.

[63] The adoption cycle of cryptocurrency. The research discussed the TAM from the perspective of blockchain technology and explains consumers' acceptance behavior in using digital currencies.

[64] Blockchain technology as a decentralized business: a sharing-economy perspective with the technology adoption model (TAM). The work focused on blockchain technology adoption in business and the economy, explaining business transactions that are decentralized and more secure using BT, and elaborates on ease of use and technology adoption models using blockchain technology.

[65] Blockchain technology in terms of business sustainability and adoption behaviors of users in SMEs, hospitality, and the tourism sector. The paper examined the implementation of cryptocurrency in small and medium-sized firms, such as the hospitality sector, small businesses, and tourism, under the technology adoption model for business transactions. The results declared that managers of the organizations play a key role in implementing blockchain technology, and perceived usefulness works as a mediating factor for strategic orientation.

[66] Adoption of blockchain technology for financial development. The study focused on the expansion of India's supply chain structure to rural areas; the authors analyzed the implementation of blockchain technology in remote areas to achieve economic development. BT connects rural areas with global business, and it is concluded that technology adoption is necessary for economic growth.
Proposed Model
In Section 2, we discussed several research papers regarding technology adoption models. Our study assimilates TAM constructs with cost saving and innovativeness for the following reasons. First, the customer's intention to implement an innovative technology can be discussed through TAM [67]. Second, TAM is based on system-specific perceptions, and cost saving is the money saved by using an advanced technology [68]. Third, innovativeness is considered one of the sparks of the technology [69]. Therefore, the present study expands the TAM constructs with the cost saving construct proposed by [28] and the innovativeness construct proposed by [29] to comprehend the acceptance of blockchain in the energy management context of developing countries. How the behavioral intention and attitude are established and what roles perceived ease of use and perceived usefulness play will be evaluated using the technology adoption model. The proposed model is shown in Figure 2.
Hypothesis Development
The TAM construct perceived usefulness is the customer's personal belief that using some advanced method will increase his or her job performance in the organization, while perceived ease of use emphasizes that the adopted technology or system is comfortable to use. Moreover, TAM provides an effective way to capture the influence of external factors on internal beliefs, behavioral intention (BI), and attitude (ATT). Attitude is a user's favorable or unfavorable assessment of the conduct being referred to [70]. Attitude with regard to user acceptance of IT is characterized as a person's general affective response (liking, delight, happiness, and joy) to utilizing technology [27].
The results of past research suggest that perceived ease of use has a significant impact on perceived usefulness [67,71,72]. Moreover, perceived ease of use has a positive impact on attitude [73][74][75]. Perceived usefulness positively impacts attitude [76,77]. Attitude has a positive impact on behavioral intention [27,[78][79][80][81]. Perceived usefulness has a positive effect on the user's intention [82][83][84]. Similarly, this study also expects that the TAM constructs, along with cost saving and innovativeness, will show a noteworthy effect on the user's intention to adopt blockchain in energy management. So, we postulate the following hypotheses:

Hypothesis 1. Perceived ease of use has a positive effect on the perceived usefulness of blockchain technology.

Hypothesis 2. Perceived ease of use has a positive effect on attitude toward blockchain technology.

Hypothesis 3. Perceived usefulness has a positive effect on attitude toward blockchain technology.

Hypothesis 4. Attitude has a positive effect on the behavioral intention to adopt blockchain technology.

Hypothesis 5. Perceived usefulness has a positive effect on the behavioral intention to adopt blockchain technology.
Cost Saving
It refers to the time and money saved by using an advanced technology [68]. The perceived cost savings are considered to be "the extent by which user thinks about use of a specific framework will save money spent on service operation" [85]. Moreover, [86] listed the saved money factor as one of the sub-categories that pushes clients to select self-services. The authors of [87] discovered that price and cost savings were one of the major benefits that favored self-service. The authors of [88] identified that the higher the effort taken by the user to participate in self-service, the lesser the amount the user usually expects to pay for that service. The previous findings confirm that cost saving has a positive effect on perceived ease of use [89][90][91]. Moreover, cost saving has a positive effect on perceived usefulness [92][93][94]. Accordingly:

Hypothesis 6. Cost saving has a positive effect on the perceived usefulness of blockchain technology.

Hypothesis 7. Cost saving has a positive effect on the perceived ease of use of blockchain technology.
Innovativeness
The innovativeness construct is derived from the technology readiness index [29]. It is a desire to be a technology leader and visionary [95]. Optimism serves as a guide to a positive outlook on creativity and as confidence that the technology can create efficiency and adoptability. Innovativeness is measured as the incentives of the technology [69]. Previous findings indicate that innovativeness has a significant effect on perceived usefulness [96,97]. Moreover, innovativeness has a positive impact on perceived ease of use [98][99][100]. Thus:

Hypothesis 8. Innovativeness has a positive effect on the perceived usefulness of blockchain technology.

Hypothesis 9. Innovativeness has a positive effect on the perceived ease of use of blockchain technology.
Data Collection
An online survey approach was used for the current analysis, using the Google Form service, to investigate the connections among the conceptual model constructs. The online instrument, in the official English language, was developed to get feedback from experts working in the energy sector of a developing country. To assess the feedback, a 5-point Likert scale closed-ended questionnaire and a pilot testing process were used [101,102]. For four months (January 2020-April 2020), an online survey was conducted for four major electric supply companies of a developing economy in Asia, namely IESCO, FESCO, PESCO, and LESCO. Due to the pandemic situation, in four months, 178 complete questionnaires were received and used for the measurement model. The final sample consisted of 178 experts representing the four major supply companies. The sample size satisfied the standard requirement of 5 observations per parameter [103]. In the current research, we selected 19 factors with a minimum requirement of 165 respondents. Moreover, [104] suggested that a small sample size is enough for an energy study. So, the sample size of 178 experts was acceptable for the structural model analysis. The top companies for the study were IESCO (30.33%) and FESCO (24.71%). The designation of deputy secretary represents the highest percentage (38.20%). More data were collected from experts, representing 16.29%. The details of the respondents' demographic profile are presented in Table 2.
Structural Equation Modeling
For the current analysis, partial least squares structural equation modeling (PLS-SEM) was used [88,105]. First-generation techniques were not used because of their limited capability with regard to causal and complex modeling [106]. Among the second-generation analysis techniques, PLS-SEM is widely adopted and accepted [107,108]. SmartPLS in particular is widely used for studying technology adoption models. The details of the measurement items are presented in Table 3.
Sample behavioral intention items from Table 3:
BI3. It is expected that energy firms will take advantage of blockchain applications in manufacturing and service operations.
BI4. By developing blockchain technology, the energy sector would increase resource usage and provide better services.
Common Method Bias Issues
For sample characteristics, a Kolmogorov-Smirnov test (p > 0.05) was applied to examine the sample distribution of early and late respondents for non-response bias [115,116]. As indicated by [117], the mean responses to all the constructs in the proposed model provided by the 46 respondents who replied during the last six weeks were compared with those of the 132 respondents who replied during the first ten weeks, to determine whether any significant differences occurred. This check is appropriate because late respondents are assumed to resemble non-respondents [118]. The non-response bias findings are presented in Table 4. Moreover, the use of a single instrument to assess both exogenous and endogenous constructs usually raises concerns about common method bias [119]. Therefore, both procedural and statistical remedies were used to prevent common method bias problems. The statistical check was implemented with Harman's single-factor test; the findings showed that the first factor accounted for 38.274% of the data variance. Since this outcome is below 50%, it can be assumed that there is no common method bias problem [120]. Moreover, the variance inflation factor (VIF) was examined before assessing the structural model to detect highly correlated constructs. The findings showed that the highest VIF value among the constructs was 3.261, below the standard cut-off threshold of 5 [121]. The results indicate that this research does not pose a significant multicollinearity problem and is suitable for the measurement model. The variance inflation factors are reported in Table 5.
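As an illustrative sketch only (not the authors' code), the two statistical checks above can be reproduced on a generic respondent-by-item matrix: the first-principal-component variance share for Harman's single-factor test, and the variance inflation factors. The data below are random placeholders, not the study's responses.

import numpy as np

def harman_single_factor_share(X):
    # Share of total variance captured by the first principal component of the
    # item correlation matrix; values below 0.50 suggest no dominant common factor.
    R = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)[::-1]
    return eigvals[0] / eigvals.sum()

def vif(X):
    # Variance inflation factor of every column (rule of thumb: VIF < 5).
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    out = []
    for j in range(X.shape[1]):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - np.sum((y - Z @ beta) ** 2) / np.sum(y ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

X = np.random.default_rng(0).normal(size=(178, 19))   # placeholder responses
print(harman_single_factor_share(X), vif(X).max())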
Results
The conceptual model was tested in a two-step process. First, we performed the reliability and validity checks. In step two, we analyzed the structural equation model.
Measurement Model
For the measurement model, validity is the degree to which the data-collection instruments measure what they were intended to measure. Therefore, the following analyses were implemented for the proposed model. Convergent validity requires that the hypothetical constructs established for the model be highly correlated with the elements used to measure them; in other words, the proportion of variance shared among the measures of a particular construct must be high. In the proposed model, we tested the six constructs and, as per the guidelines, performed the following validity checks.
• First, we checked the factor loadings. Consequently, all loadings were above the standard of 0.5, as suggested by [122]. The factor loadings are presented in Table 6, and the measurement model is shown in Figure 3.
Construct Reliability
After checking the factor loadings, we tested the composite reliability (CR), as suggested by [122], and the average variance extracted (AVE), as proposed by [123]. The findings indicate that all values were above the recommended thresholds (0.7 for CR and 0.5 for AVE), as presented in Table 7.
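A minimal sketch of how these two indices are computed from standardized outer loadings follows; the loading values are placeholders, not those reported in Table 7.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.78, 0.81, 0.74, 0.86]   # hypothetical four-item construct
print(composite_reliability(loadings) > 0.70, average_variance_extracted(loadings) > 0.50)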
Discriminant Validity
After checking the CR and AVE, we tested the discriminant validity (DV), as recommended by [123]. Under the Fornell-Larcker criterion, the square root of the AVE of each latent variable in the proposed model should exceed its correlations with the other constructs; that is, each construct should share more variance with its own measures than with the other constructs. The DV for each construct is well established and is presented in Table 8. In addition, the HTMT ratios used as a complementary check of discriminant validity are presented in Table 9.
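The two criteria can be sketched as follows (illustrative only; the construct AVEs, the latent correlation, and the item blocks below are placeholders, not the study's values).

import numpy as np

def fornell_larcker_ok(ave_i, ave_j, corr_ij):
    # Square root of each AVE must exceed the inter-construct correlation.
    return np.sqrt(ave_i) > abs(corr_ij) and np.sqrt(ave_j) > abs(corr_ij)

def htmt(Xi, Xj):
    # Heterotrait-monotrait ratio; common cut-offs are 0.85 or 0.90.
    k = Xi.shape[1]
    hetero = np.abs(np.corrcoef(Xi.T, Xj.T)[:k, k:]).mean()
    def mono(X):
        R = np.abs(np.corrcoef(X, rowvar=False))
        return R[np.triu_indices_from(R, k=1)].mean()
    return hetero / np.sqrt(mono(Xi) * mono(Xj))

rng = np.random.default_rng(1)
Xi, Xj = rng.normal(size=(178, 4)), rng.normal(size=(178, 3))   # placeholder item blocks
print(fornell_larcker_ok(0.64, 0.58, 0.55), htmt(Xi, Xj))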
Structural Model
In the second phase, we applied the bootstrapping procedure to assess the significance of the estimates. In this process, a large number of subsamples (5000) were drawn with replacement from the original sample to compute standard errors. The procedure provides the t-values used to judge the significance of the path coefficients. The bootstrapping output for the structural model is shown in Figure 4.
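A stripped-down illustration of the resampling logic follows (this is not the SmartPLS implementation; a single path is approximated here by an OLS slope on placeholder construct scores).

import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    slope = lambda a, b: np.polyfit(a, b, 1)[0]
    estimate = slope(x, y)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))   # resample respondents with replacement
        boots[b] = slope(x[idx], y[idx])
    return estimate, estimate / boots.std(ddof=1)   # (path estimate, t-value)

rng = np.random.default_rng(2)
att = rng.normal(size=178)                           # placeholder attitude scores
bi = 0.6 * att + rng.normal(scale=0.8, size=178)     # placeholder behavioral intention
print(bootstrap_path(att, bi))                       # |t| > 1.96 indicates significance at 5%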
Goodness of Model Fit
This study's goodness of model fit was assessed after the item exclusion process. Five measures were applied, namely SRMR, d_ULS, d_G, chi-square, and NFI. The tested model meets the relevant criteria (especially the SmartPLS-reported SRMR and NFI): according to [124], the standard value for SRMR is below 0.08 and for NFI above 0.9. The model fit indices are presented in Table 10. In addition, the path coefficients are shown in Table 11.
Major Findings
The current study confirms that perceived ease of use has a positive effect on perceived usefulness, in line with previous studies [84,[125][126][127]. Moreover, perceived ease of use shows a positive and significant impact on attitude, supported by other studies [128][129][130]. Perceived usefulness shows a positive effect on attitude, supported by previous studies [131][132][133]. Attitude shows a positive effect on the behavioral intention to use blockchain technology, supported by other studies [84,129,134]. In addition, perceived usefulness shows a significant effect on the behavioral intention to use distributed ledger technology for energy management, supported by previous studies [84,135,136]. The findings indicate that the adoption of distributed ledger technology will improve technical features (privacy and speediness) for distributed energy resources' businesses and increase flexibility [137].
It was interesting to find that cost saving shows a positive effect on perceived ease of use, supported by other studies [89,[92][93][94]. Moreover, cost saving also shows a positive effect on perceived usefulness, supported by previous studies [89][90][91]. The findings indicate that distributed ledger technology could lessen transaction costs while delivering transparent information to many parties, including parties that verify monitoring compliance. Thus, distributed ledger technology could eliminate the central authority and potentially increase trade volumes, thereby helping small-scale customers to participate in energy markets [44]. The results also indicate that innovativeness shows a significant effect on perceived ease of use but an insignificant effect on perceived usefulness. In this context, the result may be due to the lack of awareness about blockchain technology in developing countries, where the technology is still in its beginning phase. The findings suggest that firms and their advertising agencies should not only focus on developing awareness about distributed ledger technology but also procure blockchain applications for actual use in their organizations [138].
Theoretical Implications
The current study responds to a call by [139], who emphasized a vital need to advance the contemporary state of the blockchain topic. Indeed, until now, the literature on distributed ledger technology has largely consisted of review-type studies such as [8,13,140,141]. By integrating the TAM constructs with cost saving and innovativeness and providing empirical evidence from the energy sector, the current study therefore complements the limited literature on distributed ledger acceptance models for technology innovation by analyzing an empirical model. Our study thus plays a key role in the field of information technology implementation for energy management, given the anticipated impact of blockchain technology. The present study is one of the initial studies using SmartPLS to report findings from a statistically confirmed model showing that the TAM constructs together with cost saving can serve as a basis for blockchain acceptance in energy management. Our proposed model offers related insights that can help practitioners as well as scholars recognize and progress their work when they incorporate this disruptive technology in their energy management.
Practical Implications
Based on the findings, the current study indicates that the proposed model holds strong explanatory power (R² = 0.594 and adjusted R² = 0.589), explaining 59.4% of the variance of behavioral intention. Moreover, attitude exhibits a variance of R² = 0.664 (adjusted R² = 0.660). Similarly, perceived ease of use shows a variance of R² = 0.420 (adjusted R² = 0.413), and perceived usefulness exhibits a strong variance of R² = 0.707 (adjusted R² = 0.702). Developing countries have begun to explore distributed ledger technology adoption in energy management [8]. There are movements toward proper implementation of renewable energy sources [5], and the adoption of distributed ledger technology is regarded as an optimistic opening to become economical worldwide [142]. Distributed ledger technology implementation should bring cost reductions and ease of consumption for clients [44]. By virtue of these benefits, distributed ledgers could advance energy cybersecurity and, serving as a backup technology, improve the privacy of supply, ultimately encouraging sustainability by supporting renewable generation as a low-carbon solution.
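As a quick consistency check (my own sketch, not part of the study), the reported adjusted values follow from the usual adjustment formula with n = 178 once a predictor count is assumed for each endogenous construct; the counts below are read off the hypothesized model and are therefore assumptions.

def adjusted_r2(r2, n, k):
    # Standard adjustment for the number of predictors k and sample size n.
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

n = 178
cases = {
    "behavioral intention (ATT, PU)":        (0.594, 2),
    "attitude (PEOU, PU)":                   (0.664, 2),
    "perceived ease of use (CS, INN)":       (0.420, 2),
    "perceived usefulness (PEOU, CS, INN)":  (0.707, 3),
}
for name, (r2, k) in cases.items():
    print(f"{name}: R2 = {r2:.3f}, adjusted R2 = {adjusted_r2(r2, n, k):.3f}")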
Limitations and Conclusions
Like other studies, the current study has some limitations. First, it was conducted only in the energy sector of one country. In the future, a cross-country study with technologically advanced neighbors such as China could be carried out, and the results of such a study would be more informative. Second, the current study integrated the TAM constructs (perceived ease of use, perceived usefulness, attitude, and behavioral intention) with cost saving and innovativeness; in the future, TAM may be integrated with other traditional adoption theories such as the theory of planned behavior. Third, blockchain is not a standalone technology. In the current study, we did not integrate distributed ledger technology with other technologies; in the future, it may be integrated with technologies such as the Internet of Things, and the findings of such studies would be more helpful for organizations. Fourth, few studies have examined the costs related to distributed ledger technology adoption apart from prototype research [141]. Further research is therefore required on this aspect, as companies that plan to integrate distributed ledger technology into their traditional trade will need to pay more attention to the need for it.
In conclusion, the current study extends the technology acceptance model constructs with cost saving and innovativeness for the acceptance of blockchain in energy management. In response to RQ1, the results confirm that perceived ease of use, perceived usefulness, and attitude, together with cost saving, show a positive effect on users' intention to accept this disruptive technology for energy management. However, innovativeness shows a significant effect on perceived ease of use but an insignificant effect on perceived usefulness. Pertaining to RQ2, the study findings show that perceived ease of use matters most in the implementation of blockchain. Moreover, an important contribution of this research is that most technology adoption approaches have been studied in developed states [143]; therefore, this study is unique to such a context. The current study offers a holistic model for the implementation of innovative technologies and, for developers, suggests valuable insights for advancing disruptive technology solutions. The adoption of distributed ledger technology for regional peer-to-peer (P2P) energy marketplaces will provide a solution for regional energy system optimization that can reduce power network strain or defer costly reinforcement. Additionally, domestic markets might deliver extra revenue sources for RES producers and could possibly reduce the energy cost for end consumers. | 2020-10-28T18:01:27.986Z | 2020-09-14T00:00:00.000 | {
"year": 2020,
"sha1": "3140917f8000c5ab6adf83fbdb920a4d9c5140d6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/13/18/4783/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "971e97ab475bac1981f5830afe36c5c3ef324f38",
"s2fieldsofstudy": [
"Environmental Science",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
} |
119196325 | pes2o/s2orc | v3-fos-license | Neutrino phenomenology of a high scale supersymmetry model
CP violation in the lepton sector, and other aspects of neutrino physics, are studied within a high scale supersymmetry model. In addition to the sneutrino vacuum expectation values (VEVs), the heavy vector-like triplet also contributes to neutrino masses. Phases of the VEVs of relevant fields, complex couplings and Zino mass are considered. The approximate degeneracy of neutrino masses $m_{\nu_1}$ and $m_{\nu_2}$ can be naturally understood. The neutrino masses are then normal ordered, $\sim$ 0.020 eV, 0.022 eV, and 0.054 eV. Large CP violation in neutrino oscillations is favored. The effective Majorana mass of the electron neutrino is about 0.02 eV.
I. INTRODUCTION
Neutrino physics, like leptonic CP violation, is an interesting topic [1] in the current research of particle physics. Among other things, it might be the final place where experiments of particle physics will give definite results in the near future. The results will check various theoretical models about the fermion masses of the Standard Model (SM).
We proposed that supersymmetry (SUSY) [2] can be the theory underlying the fermion masses in Refs. [3][4][5]. The basic idea is the following. It assumes a flavor symmetry. The flavor symmetry is broken after the sneutrinos obtain nonvanishing vacuum expectation values (VEVs). (In this way, SUSY is motivated.) These VEVs result in a nonvanishing neutrino mass. The empirical smallness of neutrino masses needs very large SM super partner masses to be understood which are about 10 12 GeV. Thus, our SUSY is of high scale breaking [6][7][8].
A further natural assumption is that the flavor symmetry breaks softly. Namely the soft SUSY breaking masses of the sfermions do not obey the flavor symmetry either. The theoretical reason is that the soft masses are due to the supergravity effect which generically breaks any global symmetry. Soft breaking of the flavor symmetry implies that the lepton number violation due to sneutrino VEVs is explicit instead of being spontaneous. Therefore there is no any massless Nambu-Goldstone boson related to nonvanishing sneutrino VEVs.
Actually the large masses of the model make the low energy effective theory just the SM via Higgs mass fine tuning, except for that we have an understanding of the hierarchical pattern of the charged lepton masses, or that of the SM Yukawa coupling constants.
To briefly review the model in a simple way, the SM is SUSY generalized. The flavor symmetry is Z 3 cyclic among the three generation SU(2) L lepton doublets L 1 , L 2 and L 3 . The with α and β denoting the SU(2) L indices. In terms of the following redefined lepton superfields, , the above Z 3 invariant combinations are L τ and ǫ αβ L α e L β µ , respectively. The superpotential is then where H u and H d are the two Higgs doublets, the right-handed lepton singlet E c τ is defined as the one which couples to L τ , and E c µ is that orthogonal to E c τ and with a coupling to L e L µ . y τ , λ τ and λ µ are coupling constants. (Note that considering the mixing between L τ and H d gives the same form of the above superpotential [4].) It is seen that the electron is massless, because E c e is always absent in the Lagrangian. This is true whenever SUSY is conserved, the nonvanishing electron mass is due to SUSY breaking (together with electroweak gauge symmetry and flavor symmetry breaking via loops). Note that all the coupling constants in our superpotential are assumed to be natural values, say typically ∼ 0.01 − 1, and the mass parameterμ is taken to be large ∼ 10 12 GeV. The SM fermion mass hierarchy is due to symmetries and their breaking.
In addition, a heavy vector-like SU(2) L triplet field T (T ) with hypercharge 2(−2) needs to be introduced so as to make the Higgs mass realistic [5,6]. This triplet field also contributes to neutrino masses. In terms of the redefined fields, the flavor symmetric superpotential relevant to the triplet T andT fields is with M T the mass ∼ 10 13 GeV. The braces denote that the two doublets form an SU(2) L triplet representation.
The soft SUSY breaking terms in the Lagrangian are in general form which also break the flavor symmetry [3][4][5]. All the mass parameters of the model are taken to be about 10 12 − 10 13 GeV. The spontaneous gauge symmetry breaking of the SM occurs. Through fine tuning, the right electroweak vacuum is obtained. By including contribution due to the triplet field, this model can give reasonable neutrino spectrum and the mixing pattern, and predicted the right order of θ 13 [4,5]. (The quark sector was considered in Ref. [4].) Roughly speaking about the electroweak symmetry breaking. There are five scalar doublets, the mass parameters are all large ∼ 10 12 GeV. Eigenvalues of their mass-squared matrix are generically large. However, one of these values can be exceptional, because it is a difference between two large parameters. It is this difference that makes the fine-tuning possible. Whence the difference is tuned to be about −(100 GeV) 2 , correct electroweak symmetry breaking occurs. The corresponding eigenstate field is one superposition of the five doublets. It is the only light scalar doublet, and is just the SM Higgs field from the point of view of the low energy effective field theory. The SM Higgs gets a VEV is equivalent to that the original two Higgses and sleptons get their VEVs [4,5].
II. COMPLEX COUPLINGS AND SNEUTRINO VEVS
In this paper, we will carefully consider CP violation of the lepton sector, and completely analyze the neutrino masses and mixing. In general, the coupling constants are complex, however, because of the flavor symmetry, many of them can be made real via field phase rotation. In the superpotential Eq. (1) for charged leptons, all the couplings can be adjusted to be real. On the other hand, in the superpotential Eq. (2) for neutrino masses, the couplings cannot be all taken real, as can be seen in the following way. The mass parameters µ and M T are taken real, thus H u and H d always have opposite phases, and so do T andT .
In such a phase convention, only y ν , λ ν 1 and λ ν 3 can be complex. The λ ν 1 term will contribute to the neutrino masses, which was omitted in our previous analysis [5].
In the soft SUSY breaking terms, the mass parameters and coupling constants are generally complex, and there is no enough freedom to rotate all of the phases away.
The scalar potential relevant to the electroweak symmetry breaking is where g and g ′ are SM gauge coupling constants. h u and h d denote the scalar components In considering CP violation of the scalar potential, the essential point lies in the soft bilinear terms where the mass parameters are complex. Field redefinition of h d andl α may remove phases of B µ and B µα respectively, however, the phases of m 2 dα and off-diagonal terms of m 2 αβ are still there. This means that after the electroweak symmetry breaking, Higgs and sneutrino VEVs are complex in general. (Previously we took all the VEVs real.) In the analysis, we still have the freedom to choose the VEV of Higgs field h u to be real, and VEVs of the Higgs and the sneutrino fields are denoted as (v u v lτ e iδ lτ ) where the phases have been explicitly written down. These VEVs enter the lepton mass matrices and thus contribute to CP violation in the leptonic mixing.
III. NEUTRINO MASSES
The sneutrino VEVs result in a nonvanishing neutrino mass, where a = (g 2 + g ′2 )/2, MZ is the Zino mass which is the typical superpartner mass, and the phase of Zino mass term, δ Z , is explicitly written. This is due to gauge interactions, it is natural realization of the type-I seesaw mechanism [9] where the role of right-handed neutrinos is replaced by the Zino. In addition, the superpotential (2) contributes following neutrino masses [5], where the phase of coupling λ ν 1 has been explicitly written. This part of neutrino mass generation is realization of the type-II seesaw mechanism [10].
The full neutrino mass matrix is Note this is the full neutrino mass matrix of the model. It is due to tree level contribution of lepton number violation. The loop level contribution due to R-parity violation is negligible [4], because the sparticles in the loops are very heavy.
The physics analysis including λ ν 1 is different from our previous one [5]. We observe that it is natural to take that M ν 1 is numerically dominant over M ν 0 , then there appears a degeneracy between the first two neutrinos. This roughly fits the neutrino spectrum obtained from neutrino oscillation experiments. This degeneracy is perturbed by M ν 0 which also contributes neutrino mixing. Furthermore, it is interesting to note that inclusion of λ ν 1 in certain cases does not really increase difficulty in the analysis because M ν 1 is diagonal. We rewrite M ν by adjusting the diagonal part M ν 1 to be proportional to identity matrix, Generally, M ν is complex, the phases make further analytical calculation [11] difficult. For illustration and an easy analysis, and without losing generality about CP violation, we simply take δ lα = 0 and δ λ = −δ Z in the following. Then, up to an overall factor,M ν 0 is a real symmetric matrix and can be diagonalized by an orthogonal matrix. It just needs diagonalizingM ν 0 , becauseM ν 1 is essentially an unit matrix which does not affect this diagonalization. By further assuming that v 2 does not violate the Z 3 flavor symmetry, it is found thatM ν 0 is diagonalized by, with eigenvaluesM In fact, O ν diagonalizes M ν , Noticing that the diagonalized matrix is still complex, we further write that the neutrino masses in our model are with the phases
According to neutrino oscillation experiments [12], $\Delta m^2_{12} = 8.0 \times 10^{-5}$ eV$^2$ and $|\Delta m^2_{23}| = 2.4 \times 10^{-3}$ eV$^2$, and this model then typically gives the masses quoted in the Summary (about 0.020, 0.022, and 0.054 eV). Naturally, the phases in the above formulae are O(1). This allows us to take all the cosines to be O(1) for simplicity in estimating the neutrino masses, and $m_{\nu_3}$ is numerically fixed by choosing $\lambda'_2$ and $v^2_{l\tau}$. Finally, we obtain the unitary matrix $U_\nu$ which diagonalizes $M_\nu$, with $P$ being the pure phase matrix appearing in Eq. (13).
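As a simple numerical cross-check (mine, not part of the original text), the normal-ordering spectrum quoted in the Summary follows directly from these splittings once the lightest mass is set to about 0.020 eV:

import numpy as np

dm2_21 = 8.0e-5          # eV^2, solar splitting quoted above
dm2_23 = 2.4e-3          # eV^2, atmospheric splitting (absolute value)
m1 = 0.020               # eV, lightest mass taken from the Summary

m2 = np.sqrt(m1**2 + dm2_21)
m3 = np.sqrt(m2**2 + dm2_23)
print(f"m1 = {m1:.3f} eV, m2 = {m2:.3f} eV, m3 = {m3:.3f} eV, sum = {m1 + m2 + m3:.3f} eV")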
IV. CHARGED LEPTON MASSES
From Eq. (1), the charged lepton mass matrix is obtained. Considering the sneutrino and Higgs VEVs are complex, it is Here the electron mass is neglected. In this model, the electron mass would be a loop contribution of SUSY breaking terms which also break the flavor symmetry and the electroweak symmetry [3,4]. M l in the above equation basically fixes the mixing due to charged leptons with a precision of m e /m µ . It is standard to find the unitary matrix U l which diagonalizes It can be expressed as
V. LEPTON MIXING MATRIX
The lepton mixing matrix is V = U † l U ν . It is obtained that ν e − ν µ mixing is The ν µ − ν τ mixing is The ν e − ν τ mixing is Experimental data for best values of these mixings are |V e2 | ≃ 0.54, |V µ3 | ≃ 0.65, and |V e3 | ≃ 0.15 [12]. Obviously, taking v lµ ≃ 2v le , |V e2 | is in agreement with data. The value of v lτ is taken to be larger and still in the natural range, v lτ ≃ 3v lµ . Choosing ∆λ ≃ 0.3v 2 lτ , it is easy to get |V e3 | ≃ 0.3|V e 2 |.
For |V µ3 |, there are two terms in Eq.(24), neglecting the first term for simplicity, this mixing would be maximal if λ τ v lµ 2 + v le 2 = y τ v d , namely λ τ ≃ 0.8. Of course, a smaller λ τ is more natural. Therefore this model slightly favors the atmospheric neutrino angle to be in the first octant.
The important CP violation in neutrino oscillations is quantified by the invariant parameter J [13], and the CP phase $\delta$ is expected to be large, namely $|\sin\delta| \sim 0.1-1$. This agrees with current preliminary experimental results [15].
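For orientation only (this computation is mine, not the paper's), the standard Jarlskog invariant $J = s_{12}c_{12}s_{23}c_{23}s_{13}c_{13}^2\sin\delta$ can be evaluated from the mixing moduli quoted in this section ($|V_{e2}|\simeq 0.54$, $|V_{\mu 3}|\simeq 0.65$, $|V_{e3}|\simeq 0.15$):

import numpy as np

s13 = 0.15
c13 = np.sqrt(1 - s13**2)
s12, s23 = 0.54 / c13, 0.65 / c13        # |V_e2| = s12*c13, |V_mu3| ~ s23*c13
c12, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s23**2)

J_max = s12 * c12 * s23 * c23 * s13 * c13**2     # value at sin(delta) = 1
for sin_delta in (0.1, 0.5, 1.0):                # the text expects |sin(delta)| ~ 0.1-1
    print(f"sin(delta) = {sin_delta}: J = {J_max * sin_delta:.4f}")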
VI. MAJORANA NEUTRINO MASS
The effective Majorana mass in the neutrinoless double beta decay is $m_{ee} = |\sum_i V_{ei}^2\, m_{\nu_i}|$. In this work, it is about 0.02 eV. In this expression, the $V_{e3}$ term has a Majorana phase dependence, which is negligibly small anyway.
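A back-of-the-envelope evaluation (mine, with the Majorana phases simply set to zero as an assumption) reproduces the quoted value using the masses and $|V_{e2}|$, $|V_{e3}|$ given above:

import numpy as np

m = np.array([0.020, 0.022, 0.054])          # eV, masses from this work
Ve2, Ve3 = 0.54, 0.15
Ve1 = np.sqrt(1 - Ve2**2 - Ve3**2)
alpha, beta = 0.0, 0.0                        # assumed Majorana phases

m_ee = abs(Ve1**2 * m[0] + Ve2**2 * m[1] * np.exp(1j * alpha) + Ve3**2 * m[2] * np.exp(1j * beta))
print(f"effective Majorana mass ~ {m_ee:.3f} eV")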
VII. DISCUSSIONS
Like gauge theories which are used to describe the elementary particle interactions, SUSY is used for fermion masses. Our model is the minimal SUSY SM with a vector-like triplet field extension, but SUSY breaks at a high scale and the R-parity (lepton number) is not required. The sneutrino VEVs result in a neutrino mass which is suppressed by the Zino mass. This is a nice realization of the type-I seesaw mechanism which, even does not need to introduce any right-handed neutrino. The triplet field is originally for the realistic Higgs mass. However, it also contributes to neutrino masses through a type-II seesaw mechanism.
The Zino related seesaw mechanism results in only one massive neutrino. By including the triplet contribution, the neutrino masses can be realistic. Compared to our previous studies [4,5], a more natural pattern for neutrino masses is obtained.
To be numerically natural, let us return back to the original superpotential in the beginning. The couplings are assumed to be taken natural values. The field VEVs are mainly fixed by the soft parameters in the Lagrangian, in addition to those in the superpotential. To fit the lepton spectrum and mixing, we take v le ≃ 1 GeV, v lµ ≃ 2 GeV, v lτ ≃ 6 GeV, v d ≃ 10 GeV, and v u ≃ 228 GeV. Note v lτ does not break the flavor symmetry, it is natural that its value is more close to v d . And the large v u /v d ratio is for explaining the top quark mass [4].
It is necessary to check the reliability of our approximation in estimating the neutrino masses. That approximation about the phases can be good when the quantities appear in the mass formulae are hierarchical, say if λ ′ 1 ≫ v 2 le + v 2 lµ . As it has been seen that this is indeed the case for m ν 2 . In m ν 3 (Eq. (14)), λ ′ 2 , λ ′ 1 and v 2 lτ are of the same order. This allows us to look at an extreme case where the phase is π. In this case, there is a possibility of inverted neutrino mass hierarchy, namely a very small m ν 3 . But this is achieved through a large cancellation between λ ′ 2 and v 2 lτ . Although this is possible, it is unnatural. The physics of neutrinos in this work is quite different from that in Refs. [4,5]. This is mainly due to the triplet. In Ref. [4], we introduced a singlet, the neutrino mass matrix M ν 1 was that with only the 33 matrix element nonvanishing. And in [5], the triplet replaced the singlet for the Higgs mass in the beginning, however, in the neutrino mass analysis, we took λ ν 1 to be zero which essentially was the same as that for the singlet case. Taking λ ν 1 to be zero was actually unreasonable because our principle is to treat all the basic couplings close to 0.01 − 1. As a result, in Refs. [4,5], there was always one massless neutrino. That led to that the Majorana mass m ee is about 10 −3 eV. In addition, in [5] it was wrong to say CP violation is small in the lepton sector.
VIII. SUMMARY
In summary, in the model of high scale SUSY for understanding the fermion mass hierarchies, we have studied CP violation in the lepton sector, and other aspects of neutrino physics in detail. In the analysis, the phases of the Higgs and sneutrino VEVs, and contribution of the λ ν 1 term in superpotential (2), have been included. This analysis is more complete than previous consideration. The neutrino mass matrix, and the charged lepton one, are fixed by the model. Its specific feature is the triplet contribution, the approximate degeneracy of neutrinos ν 1 and ν 2 can be naturally explained.
This model could not predict exact values of the fermion masses because of the flavor symmetry breaking as well as SUSY breaking. However, the principle we follow is that all the coupling constants should be in the natural parameter range which is about (0.01 − 1).
Taking triplet contribution dominant, and inputting relevant experimental data on leptons, we obtain that (i) m ν 1 ≃ 0.020 eV, m ν 2 ≃ 0.022 eV, m ν 3 ≃ 0.054 eV. This normal ordering neutrino spectrum is to be checked in JUNO experiment [14]. (ii) CP violation in neutrino oscillation most probably is large. There have been some experimental hint on this [15]. CP violation in neutrino oscillations is a great study task experimentally [16]. (iii) The effective Majorana neutrino mass in the neutrinoless double beta decay is about 0.02 eV, it is within the detection ability of future measurements [17]. (iv) θ 23 is slightly favored being in the first octant. (v) The electron neutrino mass to be measured in β decays is about 0.02 eV. This is, however, still one order of magnitude lower than the future limit of direct measurements [18]. (vi) The sum of three neutrino masses is close to m ν ≃ 0.1 eV. If the standard cosmology is correct, astrophysics measurements on the cosmic microwave background has constrained this sum to be < 0.15 eV [19]. It is interesting to note that a recent analysis showed the sum is about ∼ 0.11 eV [20]. Most of the above predictions are close to their experimental limits, therefore, this model will soon be checked experimentally. | 2019-01-03T01:11:15.000Z | 2018-08-31T00:00:00.000 | {
"year": 2019,
"sha1": "a9aa29df3d4812bb72cc5c9eb9fbd64b33723617",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.10599",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a9aa29df3d4812bb72cc5c9eb9fbd64b33723617",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
33155713 | pes2o/s2orc | v3-fos-license | MKP-7, a Novel Mitogen-activated Protein Kinase Phosphatase, Functions as a Shuttle Protein*
Mitogen-activated protein kinase (MAPK) phosphatases (MKPs) negatively regulate MAPK activity. In the present study, we have identified a novel MKP, designated MKP-7, and mapped it to human chromosome 12p12. MKP-7 possesses a long C-terminal stretch containing both a nuclear export signal and a nuclear localization signal, in addition to the rhodanese-like domain and the dual specificity phosphatase catalytic domain, both of which are conserved among MKP family members. When expressed in mammalian cells MKP-7 protein was localized exclusively in the cytoplasm, but this localization became exclusively nuclear following leptomycin B treatment or introduction of a mutation in the nuclear export signal. These findings indicate that MKP-7 is the first identified leptomycin B-sensitive shuttle MKP. Forced expression of MKP-7 suppressed activation of MAPKs in COS-7 cells in the order of selectivity, JNK ≫ p38 > ERK. Furthermore, a mutant form MKP-7 functioned as a dominant negative particularly against the dephosphorylation of JNK, suggesting that MKP-7 works as a JNK-specific phosphatase in vivo. Co-immunoprecipitation experiments and histological analysis suggested that MKP-7 determines the localization of MAPKs in the cytoplasm.
Co-immunoprecipitation experiments and histological analysis suggested that MKP-7 determines the localization of MAPKs in the cytoplasm.
The activation of the mitogen-activated protein kinase (MAPK) 1 cascade plays a key role in transducing various extracellular signals to the nucleus to induce responses such as gene expression, cell proliferation, differentiation, cell cycle arrest, and apoptosis (1,2). MAPKs consist of three major subfamilies, extracellular signal kinases (MAPK/ERK), stressactivated kinase/c-Jun N-terminal kinases, and homologues of the budding yeast HOG1 protein (p38). For full activation of these MAPKs, phosphorylation of both threonine and tyrosine residues found in TXY motifs is required by dual specificity kinases, known as MAPK kinases. Thus dephosphorylation of the TXY motif is critical for negative regulation of MAPK activity (3).
MKPs are potential negative regulators of MAPK cascades and as such are assumed to be involved in carcinogenesis by regulating cell proliferation and apoptosis. As a result, all human MKP genes have been mapped. Among them, MKP-2 and MKP-3 are mapped to a gene locus encoding tumor suppressors for prostate and pancreatic cancer, respectively (20,21). MKP-X and MKP-5 are mapped to 3p21 (21) and to 1q41 (13), respectively, where frequent deletions are reported in a number of different tumors.
Activation/phosphorylation of MAPKs leads to their nuclear translocation and phosphorylation of certain DNA-binding proteins that contribute to transcriptional regulation. The mechanism of nuclear-cytoplasmic transport of MAPKs is, however, not clear. Recent reports indicate that several different proteins contain an intrinsic nuclear export signal (NES) motif mediating their subcellular localization and nuclear-cytoplasmic shuttling through association with the export receptor, CRM1/exportin 1 (22)(23)(24). Among them, MEK1, one of MAPK kinases (MAPKKs) has been well characterized (25,26). Its NES motif was shown to function as an anchor protein of ERK in the cytoplasm when cells are unstimulated, thereby suppressing cell transformation. Leptomycin B (LMB), a specific inhibitor of nuclear export that blocks binding between the NES and CRM1 (27)(28)(29), caused nuclear accumulation of MEK1. Substitutions of crucial leucines in the NES motif with alanines caused nuclear accumulation of MEK1 and ERK (30).
By screening an EST library, we identified a human cDNA clone encoding a novel member of the MKP family, MKP-7.
Interestingly MKP-7 contains predicted functional motifs such as a nuclear export signal (NES) and nuclear localization signals (NLSs), suggesting that it functions as a shuttle protein and a MAPK phosphatase. In this report, the substrate specificity, subcellular localization, and regulation of MKP-7 are presented and discussed.
EXPERIMENTAL PROCEDURES
Identification of a Novel MKP cDNA-By using the amino acid sequence of human MKP-4, we screened an expressed sequence tag data base, dbEST, and identified a novel MKP. A human clone (GenBank TM accession number AI274662, IMAGE clone ID 1986459) and a mouse clone (GenBank TM accession number AA879894, IMAGE clone ID 1230637) had high sequence homology to human MKP-4. The human and mouse clones were obtained from Research Genetics, Inc. (Huntsville, AL), and their nucleotide sequences were determined using the dideoxynucleotide chain termination method on a 373A DNA sequencer (Applied Biosystems, Foster City, CA), with a Dynamic dye terminator cycle sequencing kit (Amersham Pharmacia Biotech). We performed 5Јand 3Ј-RACE using cDNA derived from Jurkat cells as template and primers based on the human clone. Nucleotide sequencing analysis showed that the deduced amino acid sequences of the PCR product are identical to those of clone AI 274662 with the presence of some polymorphism in 5Ј-and 3Ј-UTR regions (data not shown). The obtained PCR fragment had an ORF of 1995 base pairs. The mouse clone lacked the first ATG; therefore, 5Ј-and 3Ј-RACE using cerebellar mRNA was performed, resulting in a PCR fragment containing an ORF of 1980 base pairs. All RACE methods were performed with a SMART TM RACE cDNA amplification kit (CLONTECH, Palo Alto, CA) according to the manufacturer's protocol.
Northern Blot Analysis-Total RNAs from various tissues of 6-weekold male mice were isolated by acid guanidinium thiocyanate extraction (31). The RNAs were fractionated on a 1.5% formaldehyde-agarose gel and transferred to nitrocellulose membranes (Schleicher & Schuell). The membranes were hybridized with a 32 P-labeled 1.8-kb insert of mouse MKP-7 cDNA, which contains the full-length ORF. Hybridization was performed at 42°C in 50% formamide, 0.65 M NaCl, 5 mM EDTA, 1ϫ Denhardt's solution, 10% dextran sulfate, 0.1 M PIPES, pH 6.8, 0.1% SDS, and 100 g/ml denatured salmon sperm DNA. The membranes were washed twice with 2ϫ SSC containing 0.1% SDS at room temperature for 5 min, followed by sequential washes with 0.5ϫ SSC containing 0.1% SDS and 0.2ϫ SSC containing 0.1% SDS at 50°C, each for 15 min. The filters were exposed to an x-ray film using an intensifying screen at Ϫ80°C. MKP-7 Expression Plasmids-To construct pEGFP-MKP-7, the coding region of human MKP-7 cDNA was amplified by PCR to introduce a BglII site on the 5Ј end and a SalI site on the 3Ј end using Platinum Pfx DNA polymerase (Life Technologies, Inc.) and ligated to BglII and SalI-digested pEGFP-C2 vector (CLONTECH) in frame with the EGFP-coding sequence. To construct pFLAG-MKP-7, the same region was amplified by PCR to introduce a NotI site on the 5Ј end and a SalI site on the 3Ј end using Platinum Pfx DNA polymerase and ligated to NotI-and SalI-digested pFLAG-CMV2 vector (Sigma) in frame with the FLAG epitope sequence. Several constructs encoding MKP-7 mutant proteins, including DA (D213A), CS (C244S), delC1 (residues 1-290), delC2 (residues 1-370), delC3 (residues 1-604), LA (L380A, L383A, and L385A), delR (residues 162-665), and delR-LA (residues 162-665 of LA), were constructed by PCR and subcloned into pFLAG-CMV2 (Sigma). The final PCR products were cloned into pGEM-T Easy (CLONTECH) and sequenced. No substitution was found except for the targeted mutation.
Cell Culture and Transient Transfection-HeLa and COS-7 cells were maintained in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum at 37°C under 5% CO 2 . Cells were co-transfected with pFLAG-MKP-7 (wild type or mutant constructs) together with SR␣-HA-ERK2, SR␣-HA-JNK1, or pMT3-HA-p38␣. For transient assays, cells were transfected using Fugene-6 (Roche Molecular Biochemicals) according to the manufacturer's recommendation. Twenty four hours after transfection, cells were maintained with or without serum for 18 h and then stimulated with either 5 ng/ml PMA for 10 min for ERK2 activation or with 0.4 M sorbitol for 30 min for JNK1 and p38␣ activation.
Leptomycin B Treatment-Twenty four hours after transfection, HeLa cells were maintained without serum for 18 h and then treated with 5 nM LMB (provided by Dr. M. Yoshida) for the indicated periods.
Co-immunoprecipitation-Transfected COS-7 cells were lysed on a plate (300 l/60-mm plate) in co-IP buffer (50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 2 mM EDTA, 10% glycerol, 0.5% Triton X-100, 1 mM phenylmethylsulfonyl fluoride, 10 g/ml leupeptin, and 10 g/ml aprotinin). The cell lysate was clarified by centrifugation, and protein concentrations were measured as above. The supernatant (500 g) was incubated with 2 g of mouse anti-FLAG M2 antibody and 15 l of protein G-Sepharose 4 fast flow (Amersham Pharmacia Biotech), which had been equilibrated with the co-IP buffer in a 500-l tube. After 1 h rotation at 4°C, the beads were washed 5 times with 500 l of the co-IP buffer. The immunoprecipitates were resuspended in 40 l of 1 ϫ Laemmli's SDS sample buffer, boiled for 2 min, separated by SDSpolyacrylamide gel electrophoresis on 10% gels, and transferred to a nitrocellulose membrane (Amersham Pharmacia Biotech). FLAG-or HA-tagged proteins were detected by the respective antibodies using ECL reagents.
Cell Staining-HeLa cells on coverslips coated with Vitrogen 100 (Collagen Biochemical, Palo Alto, CA) were transfected with wild type or mutant forms of pFLAG-MKP-7. Transfected cells were fixed in PBS containing 3.7% formaldehyde for 10 min and then permeabilized with PBS containing 0.5% Triton X-100 for 5 min. After incubation in PBS containing 3% BSA (PBS-B) for 2 h, cells were incubated with anti-FLAG M2 antibody or anti-FLAG polyclonal antibody (provided from Dr. K. Yamashita) to detect FLAG-tagged proteins and an anti-HA (12C5) antibody to detect HA-tagged proteins in PBS-B overnight at 4°C. After three washes with PBS, cells were incubated for 20 min at 37°C with Cy3-conjugated goat anti-mouse IgG ϩ IgM (H ϩ L) antibody (Chemicon International), fluorescent isothiocyanate-conjugated goat anti-mouse IgG (H ϩ L) (Kirkegaard & Perry Laboratories, Inc.), or AlexaFluor 546-conjugated goat anti-rabbit IgG (H ϩ L) (highly crossabsorbed) (Molecular Probes, Eugene, OR) in PBS-B. After three washes with PBS, coverslips were mounted with PBS containing 90% glycerol. Fluorescence signals were visualized by a fluorescence microscope.
Isolation of a Human and Mouse MKP-7-To search for novel
MKPs, we screened a human dbEST using the amino acid sequence of human MKP-4 as a probe. This clone (GenBank TM accession number AI274662, IMAGE clone ID 1986459) also showed high sequence homology to human MKP-4. The nucleotide sequence of this clone was determined, and then 5Ј-RACE was performed to identify the first methionine codon and sequences of the 5Ј-UTR region. The full-length clone was obtained by reverse transcriptase-PCR using mRNA from Jurkat cells. The nucleotide sequences were verified by comparing three independent clones (Fig. 1). The open reading frame (ORF) of this cDNA was predicted to encode 665 amino acids. A dual specificity phosphatase (DSP) catalytic site motif, VXVH-CXAGXSRSXTXXXAYXM, which is essential for phosphatase activity, was found in this clone (32). A MAPK-docking motif composed of a kinase-interacting motif (33) at residues 51-65 and ␦-like domain (34 -36) at residues 161-169 were also present. Since this ORF contains these two essential sequences, we designated the clone a new member of MKP gene family (Fig. 1). To date, 6 species among 12 DSPs have been designated MKP-1 to MKP-6 based on their structural similarity and substrate specificity toward MAPKs. Following this nomenclature, we named our novel DSP MKP-7 (Fig. 1).
MKP-7 exhibits several other predicted functional motifs. Two bipartite NLS motifs (37) were located at amino acid residues 296 -313 (NLS1) and 610 -627 (NLS2). One leucinerich NES motif was located at amino acid residues 376 -385. The presence of a NES as well as an NLS suggested that MKP-7 acts as a shuttle protein. PEST sequences, which are thought to be involved in rapid degradation through ubiquitinmediated proteolysis (38), were found at residues 332-353 and 441-462 residues. The presence of these motifs suggested that a C-terminal stretch of MKP-7 is important for localization and stability of the protein.
To obtain the mouse homologue, we screened mouse dbESTs and found one clone (GenBank TM accession number AA879894, IMAGE clone ID 1230637) that had high sequence homology to human MKP-7 but lacked the 5Ј-half of the ORF. We obtained a full-length cDNA clone by RACE using mouse cerebellum RNA as a template. The nucleotide sequence was verified using three independent reverse transcriptase-PCR fragments covering the entire ORF of mouse MKP-7. In Fig. 2A the deduced amino acid of mouse MKP-7 (lower line) is aligned with human MKP-7 (upper line). The identity between mouse and human MKP-7 is 90.4% at the amino acid level. MKP-7 appears to be composed of three domains. Domain 1 is a rhodanese-like domain that contains two CH2 domains and a kinase-interacting motif. Domain 2 is a DSP catalytic domain that contains ␦-like domain and catalytic motif, and domain 3 is a long C-terminal stretch. Domains 1 and 2 are highly conserved between human and mouse MKP-7s.
The alignment of domain structures of MKP-7 with those of other MKPs is shown in Fig. 2B. MKP-7 has similar domain structures and the highest sequence similarity to hVH5. The similarities between human MKP-7 and hVH5 are 76.1, 59.6, and 30.5% in the DSP domain, rhodanese-like domain, and C-terminal stretch, respectively.
Chromosomal Location of the MKP-7 Gene-The location of the MKP-7 gene on human chromosomes was determined by the identification of MKP-7 cDNA between two sequencetagged site markers, G24001 and G41293, in Homo sapiens 12p BAC RP11-253I19 (GenBank TM accession number AC007619.22). This gene was localized to human chromosome 12p12 as shown in Fig. 3A. By comparing the nucleotide sequences of the MKP-7 genomic clone with the cDNA, the MKP-7 gene was shown to be composed of at least seven exons (Fig. 3B). The catalytic core and C-terminal stretch were each encoded on a single exon, whereas the rhodaneselike domain was contained on three exons. Since the 5Ј-UTR sequence did not match any sequences in the NCBI data base or others, we did not identify the promoter region of MKP-7.
Theoretically, DSPs have tumor suppressor activity. The Recurrent Chromosome Aberrations in Cancer Data base searcher (www.cgap.nci.nih.gov/Chromosomes/Recurrent Aberrations) showed that chromosome 12p12 is a region where deletions often occur in acute lymphoblastic leukemia, acute and chronic myeloid leukemia, and myelodysplastic syndrome, suggesting that tumor suppressor genes for leukemia lie within this region.
Tissue Distribution of MKP-7 mRNA-The expression pattern of MKP-7 mRNA in mouse tissues was examined by Northern blot analysis using mouse MKP-7 cDNA containing the entire ORF as a probe. As shown in Fig. 4, two mRNA species of 4.1 kb as a major transcript and 2.1 kb as a minor transcript were detected. The 4.1-kb transcript was abundantly expressed in the brain, kidney, intestine, and testis but expressed at low levels in the thymus, spleen, and bone marrow. The 2.1-kb transcript was detected only in the testis.
In order to analyze specificity further, we used two types of catalytically inactive proteins with mutations in conserved residues, MKP-7-CS (C244S) and MKP-7-DA (D213A). Both mutant proteins significantly enhanced phosphorylation of HA-JNK1 but had little effect on activation of HA-ERK2 and HA-p38␣ (Fig. 5B, lanes 10 and 11), indicating that both mutant proteins function as dominant negatives toward JNK. Under the same conditions, MKP-5 inactivated JNK1 and p38␣ more strongly than ERK2, as reported (Fig. 5, A-C, lane 12, and Refs. 11 and 12). Therefore, MKP-7 blocked activation of MAP kinases in the order of selectivity, JNK1 Ͼ Ͼ p38␣ Ͼ ERK2.
In Vivo Interaction between MKP-7 and MAPKs-Since MKP-7 inactivated JNK1 and p38␣ in vivo, we asked whether MKP-7 binds MAPKs in vivo. We tested an in vivo interaction between MKP-7 and MAPKs by co-immunoprecipitation experiments to determine whether MKP-7 has a binding preference among MAPKs and, if a direct interaction occurs, whether MAPKs must be phosphorylated for that interaction.
FLAG-MKP-7 and HA-MAPKs (either HA-ERK2, HA-JNK1, or HA-p38␣) were co-expressed in COS-7 cells (Fig. 6A). As expected, HA-JNK1 was co-immunoprecipitated with FLAG-MKP7 (Fig. 6A, lanes 10 -13); however, stimulation did not affect this interaction (Fig. 6A, compare lanes 12 and 13). The interaction was observed even under culture conditions lacking starvation or stimuli (Fig. 6A, lane 10). When we expressed FLAG-MKP-7DA, an inactive mutant, interaction of FLAG-MKP-7DA and HA-JNK1 was similar to that of FLAG-MKP-7 and HA-JNK1 (data not shown), suggesting that MKP-7 binds the dephosphorylated as well as the phosphorylated protein. It should be noted that MKP-7 binds not only JNK1 but also ERK2 and p38␣. Under standard conditions, we did not observe any binding preference of MKP-7 toward a specific MAPK. Under these conditions, the MAPK binding specificity of MKP-5 (Fig. 6B) and MKP-2 (data not shown) was confirmed as already reported (11,39), which excludes a possibility that the binding of MKP-7 and MAPKs is due to be artificial by overexpression. Taken together, we conclude that MKP-7 interacts with ERK2 and p38␣ as well as JNK1 with similar preference in vivo and that such interaction does not depend on the phosphorylation state of MAPKs.
An 6 -9, and B, lanes 1 and 2), SR␣-HA-JNK1 (A, lanes 10 -13, and B, lanes 3 and 4), or pMT3-HA-p38␣ (A, lanes 14 -17, and B, lanes 5 and 6) in 60-mm dishes. Twenty four hours after transfection, the cells were maintained with or without serum for 18 h and then stimulated with either 5 ng/ml PMA for 10 min (ERK2 activation) or 0.4 M sorbitol for 30 min (JNK1 and p38␣ activation). An immunoprecipitation (IP)-Western was done by using anti-FLAG M2 antibody for immunoprecipitation and blotted with anti-HA antibody. The expression levels of FLAG-MKPs (FLAG-MKP-7 and FLAG-MKP-5) and HA-MAPKs (HA-ERK2, HA-JNK1 and HA-p38␣) were assessed by immunoblot using anti-FLAG or anti-HA antibody. Data are representative of three independent experiments. ined (Fig. 6), we further tested the effect of MKP-7-inactive mutants CS and DA on the phosphorylation state of MAPKs under unstimulated conditions (Fig. 7). Expression levels of either FLAG-MKP-7 proteins or HA-MAPKs were similar in each lane (data not shown). A dominant negative effect against dephosphorylation of JNK was observed even in unstimulated cells (Fig. 7B, lanes 5 and 7). A similar effect was observed for ERK (Fig. 7A, lanes 5 and 7) but not for p38␣ (Fig. 7C, lanes 5 and 7).
Next we examined those effects under conditions lacking starvation or stimulation (Fig. 7, D-F). Accumulation of phosphorylated forms of ERK2 and JNK1 was evident (Fig. 7, D and E, lanes 5 and 7) but that of p38␣ was not (Fig. 7F, lanes 5 and 7). These results suggest that MKP-7 may block ERK and JNK phosphorylation/activation when cells are unstimulated.
MKP-7 Is Localized in the Cytoplasm-In order to understand the function of MKP-7, we investigated the subcellular localization of MKP-7. The localization of EGFP-MKP-7 in HeLa cells is shown in Fig. 8A. The control EGFP protein was distributed evenly in transfected cells, whereas EGFP-MKP-7 was specifically localized in the cytoplasm (Fig. 8A, a and c). In a separate experiment, we used FLAG-tagged MKP-7 to ensure that localization of MKP-7 to the cytoplasm was not an artifactual result of the green fluorescent protein domain being fused to the phosphatase (Fig. 8A, e). We examined the subcellular distribution of FLAG-MKP-7 in several cell lines, including COS-7, NIH3T3, and 293 cells. FLAG-MKP-7 was localized exclusively in the cytoplasm of all these cell lines (data not shown).
MKP-7 Is an LMB-sensitive Shuttle
Protein-To determine whether the predicted NES is functional, we examined the effect of LMB on distribution of FLAG-MKP-7 when the cells were starved (Fig. 8B). Without LMB treatment, FLAG-MKP-7 was localized exclusively in the cytoplasm, but it accumulated in the nucleus in a manner proportional to incubation time with LMB. By 120 min of LMB treatment, FLAG-MKP-7 had exclusively accumulated in the nucleus. Similar results were obtained in cells without starvation (data not shown). These data suggested that MKP-7 shuttles between the nucleus and the cytoplasm and that nuclear export of MKP-7 is LMB-sensitive.
Analysis of the Sequences Required for Shuttling-To determine the importance of C-terminal stretch for nuclear transport and export, we analyzed localization of the following three deletion mutant proteins: FLAG-MKP-7-delC1, FLAG-MKP-7-delC2, and FLAG-MKP-7-delC3 (Fig. 9A). In sharp contrast to the wild type protein, FLAG-MKP-7-delC1 lost its specific localization and was evenly distributed in the cell. It also lacked sensitivity to LMB (Fig. 9), strongly suggesting that the cytoplasmic localization and LMB sensitivity of MKP-7 is determined by the C-terminal stretch. FLAG-MKP-7-delC2 localized mainly in the nucleus and its localization was not affected by LMB, whereas FLAG-MKP-7-delC3 was localized in the cytoplasm and this localization was sensitive to LMB. These data show that the region (residues 291-370) containing NLS1 functions for nuclear import, and the region containing the NES functions for LMB-sensitive nuclear export. NLS2 appears not to be critical since the localization of FLAG-MKP-7delC3 is the same as that of the wild type protein.
To verify the importance of the NES motif, FLAG-MKP-7-LA, which has crucial three leucines substituted with alanines, was expressed in HeLa cells. This mutant protein was com-pletely accumulated in the nucleus. These results were observed in other cell lines such as COS-7, NIH3T3, and 293 cells (data not shown).
We also investigated the involvement of the rhodanese-like domain in subcellular localization. FLAG-MKP-7-delR and FLAG-MKP-7-delRLA did not translocate to the nucleus even with LMB treatment. These results also support the idea that MKP-7 is a shuttle protein between the nucleus and the cytoplasm. NLS1, in collaboration with rhodanese-like domain, seems function to allow nuclear import, and the NES in the C-terminal stretch is critical for the nuclear export.
MKP-7 Determines MAPK Localization-The observation that MKP-7 was localized exclusively in the cytoplasm (Fig. 8) and that it co-immunoprecipitated with MAPK ( Fig. 6) led us to analyze effect of MKP-7 on localization of MAPKs. Without stimulation, HA-ERK2 is localized in the cytoplasm; however, HA-JNK1 and HA-p38␣ are distributed evenly in the nucleus as well as the cytoplasm as reported (Fig. 10A) (40 -45). However, following co-transfection with FLAG-MKP7, HA-JNK1 or HA-p38␣ became accumulated in the cytoplasm. Localization of co-expressed FLAG-MKP-7 was similar to that of HA-ERK2, HA-JNK1, and HA-p38␣. (g and h). B, after transfection of pFLAG-MKP7, HeLa cells were maintained without serum for 18 h and then exposed to 5 nM LMB for the indicated periods. FLAG-MKP-7 was detected by immunofluorescence using an anti-FLAG M2 antibody with Cy-3 conjugated goat anti-mouse secondary antibody. appears to be regulated by both an NLS and an NES located in the C-terminal stretch of the protein.
Since MKP is a potential negative regulator of the MAPK cascade, it could play a role in carcinogenesis by regulating cell proliferation and apoptosis. To further our understanding of the relationship of MKPs to diseases, six human MKP genes have been already mapped. Among them, hVH2/MKP-2 and MKP-4 map to suppressor gene loci for prostate cancer and pancreatic cancer, respectively. PYST2/MKP-X maps to 3p21, where frequent deletions are found in several different tumors. We have mapped MKP-7 to 12p12, where deletions often occur in acute lymphoblastic leukemia, acute and chronic myeloid leukemia, and myelodysplastic syndrome. Since MKP-7 was identified as a phosphatase specific for JNK, it could also function as a tumor suppressor in cancers through negative regulation of the JNK pathway. Whether the MKP-7 gene is deleted or mutated in such tumors is currently under investigation, although levels of MKP-7 gene expression were very low in hematopoietic and lymphoid cells.
In Northern blots a mouse MKP-7 probe detected 4.1-and 1.8-kb mRNAs as ubiquitous and testis-specific transcripts, respectively. An interesting feature of MKP-7 expression is that the level of the 4.1-kb mRNA is very low in some tissues, such as thymus, spleen, and bone marrow. It is possible that expression is down-regulated in hematopoietic or proliferating cells. Recently we reported that testis-and skeletal musclespecific DSP TMDP is abundantly expressed in the testis (46), and MKP-5 is expressed as a shorter transcript in the testis (13). It is interesting that both testis-specific transcripts were expressed specifically during meiosis in the testis. The structure and properties of the testis-specific MKP-7 transcript are being investigated and compared with those of TMDP and the testis-specific MKP-5 transcripts. MKP-7 is likely to function as a JNK phosphatase. MKP-7 was more effective toward phosphorylated and activated JNK1 than ERK2 and p38␣ (JNK1 Ͼ Ͼ p38␣ Ͼ ERK2). Also the finding that inactive mutants of MKP-7 worked as strong dominant negatives against dephosphorylation of JNK supports the idea that MKP-7 functions as a JNK phosphatase in vivo. The substrate specificity of MKP-7 toward MAPKs was similar to that of hVH-5/M3/6 (JNK Ϸ p38 Ͼ Ͼ ERK) (47) and MKP-5 (JNK Ϸ p38 Ͼ Ͼ ERK) (11,12) but very different from that of PAC-1 (ERK ϭ p38 Ͼ JNK) (48 -50), MKP-2 (ERK ϭ JNK Ͼ p38) (50), MKP-3 (ERK Ͼ Ͼ JNK ϭ p38) (9,47), and MKP-4 (ERK Ͼ p38 ϭ JNK) (10). MKP-7 and hVH-5/M3/6 have high sequence homology, similar domain structures, and similar substrate specificities toward MAPKs. Compared with the wild type protein, FLAG-MKP-7delC2 showed higher activity toward p38␣, although its activity toward JNK1 and ERK2 was unchanged (data not shown). These results suggest that 1) the high specificity to JNK1 depends on conserved sequences between MKP-7 and hVH-5/M3/6, which include the rhodaneselike domain and catalytic domain, and 2) the C-terminal stretch of MKP-7 may interfere with its recognition of p38␣ as a substrate.
It is unclear why, despite its high specificity toward JNK1, MKP-7 can bind ERK2 and p38α as well as JNK1 with similar affinity. This observation suggests that binding is necessary but not sufficient for determination of substrate specificity. To address this issue, experiments either substituting the catalytic domain of MKP-7 with the corresponding domain from other MKPs or mutating the catalytic domain of MKP-7 will be required. It is of note that inactive MKP-7 mutants increased levels of phosphorylated HA-ERK2 as well as HA-JNK1 when cells were unstimulated (Fig. 7). MKP-7 may play a role as a gatekeeper for ERK as well as JNK by setting a high threshold for stimulation.
To our knowledge, MKP-7 is the first identified shuttle MKP. By substitution experiments and LMB treatment, we showed that the NES in the C-terminal stretch is functional. It is of interest that the NES motif of MKP-7 (LXXXLXXLXL) is identical to that of MEK1 (25,30). For nuclear import, the NLS1 region in addition to the rhodanese domain was identified as functional. The rhodanese-like domain may be involved in conformational changes of MKP-7, but the details remain to be clarified. An important question is the function and role of MKP-7 as a shuttle molecule. Based on the results shown in Fig. 10, in which MKP-7 trapped MAPKs in the cytoplasm, we propose two models. One model is that MKP-7 translocates into the nucleus, interacts with activated MAPKs, and then dephosphorylates and transports them back to the cytoplasm. Another is that MKP-7 remains in the cytoplasm to anchor and dephosphorylate MAPKs. To distinguish between these two models, we analyzed the activity of FLAG-MKP-7 LA. This mutant was localized in the nucleus (Fig. 9) and showed activity toward MAPKs similar to that of the wild type protein (data not shown), supporting the former model. Future experiments focusing on MKP-7 will address the question of how the localization and activity of MAPKs are regulated. | 2018-04-03T02:20:47.420Z | 2001-10-19T00:00:00.000 | {
"year": 2001,
"sha1": "d53215eefc1705fbd653d5be81cdb5ee229abd8b",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/276/42/39002.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "0d0286deedc613d61a642b985bdff47cd5734f38",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
119250776 | pes2o/s2orc | v3-fos-license | Is Compton cooling sufficient to explain evolution of observed quasi-periodic oscillations in Outburst sources?
In outburst sources, the quasi-periodic oscillation (QPO) frequency is known to evolve in a certain way: in the rising phase, it monotonically goes up until a soft intermediate state is achieved. In the propagating oscillatory shock model, oscillation of the Compton cloud is thought to cause QPOs. Thus, in order for the QPO frequency to increase, the Compton cloud must collapse steadily in the rising phase. In the declining phase, exactly the opposite should be true. We investigate the cause of this evolution of the Compton cloud. The same viscosity parameter that increases the Keplerian disk rate also moves the inner edge of the Keplerian component inward, thereby reducing the size of the Compton cloud and its cooling time scale. We show that cooling of the Compton cloud by inverse Comptonization is enough for it to collapse sufficiently to explain the QPO evolution. In the Two Component Advective Flow (TCAF) configuration of Chakrabarti-Titarchuk, the centrifugal-force-induced shock represents the boundary of the Compton cloud. We take the rising phase of the 2010 outburst of the Galactic black hole candidate H~1743-322 and estimate that the $\alpha$ parameter of the sub-Keplerian flow rises monotonically from $0.0001$ to $0.02$, well within the range suggested by magneto-rotational instability. We also estimate the inward velocity of the Compton cloud to be a few meters per second, which is comparable to what was found in several earlier studies of our group by empirically fitting the shock locations with the time of observations.
INTRODUCTION
Study of temporal variability, including signatures of quasi-periodic oscillations (QPOs), is an important aspect of the astrophysics of black holes. Several models in the literature attempt to explain the origin of low frequency QPOs. They include perturbation inside a Keplerian disk (Trudolyubov et al. 1999), global disk oscillation (Titarchuk & Osherovich 2000), oscillation of a warped disk (Shirakawa & Lai 2002), and accretion ejection instability at the inner radius of a Keplerian disk (Rodriguez et al. 2000). Titarchuk et al. (1998) envisage a bounded region surrounding compact objects, called the transition layer (TL), and identify low frequency QPOs with the viscous magneto-acoustic resonance oscillation of the bounded TL. Chakrabarti and his collaborators (Molteni et al. 1996; Chakrabarti et al. 2004) showed that the oscillations of centrifugal pressure supported accretion shocks (Chakrabarti 1990a) could cause the observed low frequency QPOs. According to the two-component advective flow (TCAF) model (Chakrabarti & Titarchuk 1995), the post-shock region itself is the Compton cloud. Because the shock is formed due to centrifugal force, where energy is dissipated and angular momentum is redistributed, the post-shock region is also known as the CENtrifugal pressure supported BOundary Layer (CENBOL) of the black hole. This TCAF solution has been proven to be of stable configuration (Giri & Chakrabarti 2013), and Monte-Carlo simulations of spectral and timing properties through a time dependent radiative hydrodynamic code showed the formation of QPOs very similar to what is observed (Garain et al. 2014). The Compton cloud becomes smaller because of higher viscosity as well as higher cooling. Higher viscosity causes the Keplerian disk on the equatorial plane to move in. This causes the Compton cloud to cool down. This picture is clear from the two component model of Chakrabarti & Titarchuk (1995) and the outburst source picture given in Ebisawa et al. (1996) based on it. To our knowledge, except for TCAF, no other model is found to be capable of explaining continuous and simultaneous variation of spectral and timing properties (see Debnath et al. 2008, 2010, 2013, 2014a; Nandi et al. 2012).
There are mainly two reasons behind the oscillation of a shock wave in an accretion flow: i) Resonance oscillation: this type of oscillation occurs when the cooling time scale of the flow is comparable to the infall time scale (Molteni et al. 1996). Such cases can be identified by the fact that when the accretion rate of the Keplerian disk is steadily increased, QPOs may occur in a range of accretion rates, and the frequency should go up with accretion rate. Not all QPOs may be of this type. Some sources (for example, the 2010 GX 339-4 outburst) show signatures of sporadic QPOs during the rising soft-intermediate state (where QPO frequencies of ∼6 Hz were observed for around 26 days; Nandi et al. 2012), despite rising accretion rates. In these cases the shock strength has to change in order for the resonance condition to continue to hold. ii) Non-steady solution: in this case, the flow has two saddle type sonic points, but the Rankine-Hugoniot conditions which were used to study standing shocks in Chakrabarti (1989) are not satisfied. Examples of these oscillations are given in Ryu et al. (1997), where no explicit cooling was used. This type of QPO is possible at all accretion rates, outside the regime of type (i) QPOs mentioned above. QPO frequencies depend on viscosity (higher viscosity will remove angular momentum, bring shocks closer to the black hole, and produce higher frequency QPOs), but not explicitly on accretion rate. In any case, the observed QPO frequency is inversely proportional to the infall time (t_infall) in the post-shock region. So, when low frequency (e.g., mHz to a few Hz) QPOs are observed, generally during the very early or very late phase of an outburst of transient black hole candidates (BHCs), the shocks are located very far away from the black hole and the size of the CENBOL is large. As a result, the amount of cooling by photons from the Keplerian disk (Shakura & Sunyaev 1973) is high (Chakrabarti & Titarchuk 1995; Mondal & Chakrabarti 2013, hereafter Paper-I), and the CENBOL pressure drops, moving the shock closer towards the black hole (Molteni et al. 1996; Das et al. 2010; Mondal et al. 2014b; Paper-I) until the pressure (including the centrifugal contribution) is strong enough to balance the inward pull. A lower shock location increases the QPO frequency. Different BHCs show different oscillation frequencies during their evolution (both in rising and declining phases). Using the Propagating Oscillatory Shock (POS) model, Chakrabarti and his collaborators (Chakrabarti et al. 2005, 2009; Debnath et al. 2010, 2013; Nandi et al. 2012) satisfactorily explained the origin and day-wise evolution of QPO frequencies during the rising and declining phases of outbursting BHCs. During the rising phase the shock moves towards the black hole, increasing the QPO frequencies monotonically with time, and the opposite scenario is observed during the declining phase, mainly in the hard and hard-intermediate spectral states of the outbursts (see Debnath et al. 2013).
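To make the inverse relation between QPO frequency and infall time concrete, the following Python sketch evaluates a simple order-of-magnitude form of this scaling, t_infall ≈ R X_s^(3/2) r_g/c, with R the shock compression ratio; the black hole mass and the value of R used here are illustrative assumptions, not quantities taken from this paper.

G, c, M_sun = 6.674e-11, 3.0e8, 1.989e30   # SI units

# Illustrative assumptions (not from this paper): a ~10 solar-mass black hole
# and a post-shock compression ratio R of 4.
M_bh = 10.0 * M_sun
R_comp = 4.0

r_g = 2.0 * G * M_bh / c**2                 # Schwarzschild radius in metres

def qpo_frequency(x_s):
    """Order-of-magnitude QPO frequency for a shock at x_s (in units of r_g)."""
    t_infall = R_comp * x_s**1.5 * r_g / c  # infall time through the post-shock region, in s
    return 1.0 / t_infall                   # Hz

for x_s in (350.65, 191.69, 64.99):         # shock locations quoted later in the text
    print(f"X_s = {x_s:7.2f} r_g  ->  nu_QPO ~ {qpo_frequency(x_s):.2f} Hz")

With these assumed numbers the frequency rises from a few tenths of a Hz for a shock near 350 r_g to a few Hz near 65 r_g, illustrating how a shrinking post-shock region raises the QPO frequency.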
Recently, Debnath et al. (2014a) showed that observed QPO frequencies can be predicted from detailed spectral analysis using the Two Component Advective Flow (TCAF) model as a local additive table model in XSPEC. Mondal et al. (2014a) and Debnath et al. (2014b) also showed the physical reason behind spectral state transitions using the fitted parameters of the TCAF model for two different Galactic BHCs, H 1743-322 and GX 339-4, during their outbursts. Basically, the same shock location that is obtained by fitting the spectra produces QPOs through its oscillation. So spectral properties are interlinked with timing properties as far as the TCAF solution is concerned.
In this Paper, our goal is to explain the origin of the observed QPO evolution from a purely analytical point of view using Compton cooling. The biggest uncertainty being that of the viscosity parameter, we would like to have an idea of how viscosity typically varies with distance in a known source. We consider the transient BHC H 1743-322 during its 2010 outburst. We hope that in future, this behavior could be used to better predict QPO evolution.
In August 2010, H 1743-322 was found to be active in X-rays (Yamaoka et al. 2010), with temporal and spectral evolution (Debnath et al. 2013) similar to that observed in other transient BHCs (see, for a review, Remillard & McClintock 2006). A detailed description of the source is already in the literature (Debnath et al. 2013; Mondal et al. 2014a, and references therein).
The paper is organized in the following way: in the next Section, we discuss the governing equations of the modified Rankine-Hugoniot (R-H) shock conditions in the presence of Compton cooling. In §3, we present the observed QPO evolution and what it tells us about the viscosity variation in the disk as a function of radial distance. We also present phase space diagrams of the flow on progressive days. Finally, in §4, we briefly discuss our results and make our concluding remarks.
SHOCK CONDITION AND SHOCK CONSTANT
We assume the accreting flow to be thin, axisymmetric and rotating around the vertical axis. To avoid integrating in a direction transverse to the flow motion, we consider that the flow is in hydrostatic equilibrium in the vertical direction, as in Chakrabarti (1989). In TCAF, the CENBOL is basically the post-shock region of a low angular momentum, sub-Keplerian flow. It is comparatively hotter, puffed up, and much like an ion supported torus (Rees et al. 1982). Due to the inverse Compton cooling effect of intercepted low energy photons from a Keplerian disk, the energy of the CENBOL decreases and is radiated away. The energy equation at the shock is therefore modified to ε_+ = ε_- − Δε, where Δε is the energy loss due to Comptonization and the subscripts '−' and '+' denote pre- and post-shock quantities. The baryon number conservation equation at the shock is Ṁ_+ = Ṁ_-. Since the gas is puffed up, the R-H conditions (Landau & Lifshitz 1959) have to be modified, so that only vertically integrated pressure and density parameters are important. This modification was first carried out in Chakrabarti (1989), where the pressure balance condition was written using vertically integrated values: W_+ + Σ_+ v_+^2 = W_- + Σ_- v_-^2. Here, W and Σ are the pressure and density integrated in the vertical direction (Matsumoto et al. 1984). In our solution, we use Eq. (8a) of Paper-I as an invariant quantity at the shock; in that expression, M, v and γ are the Mach number, radial velocity and adiabatic index of the flow respectively, and the cooling enters through a factor ζ that is proportional to 2Δε(γ−1) and involves the adiabatic sound speed a. We follow the same mathematical procedure and methodology as in Paper-I to find the shock location for a given cooling rate. In the standard theory of thin accretion flows around black holes (Shakura & Sunyaev 1973), viscosity plays a major role. Giri & Chakrabarti (2013) showed the formation of a Keplerian disk for a super-critical α parameter (Chakrabarti 1990b). Angular momentum of the inflow is transported outward by viscosity, allowing matter to fall into the black hole. As the shock moves closer, the angular momentum must be adjusted by viscosity so that shock formation remains theoretically allowed. For our viscosity calculation, we use the angular momentum transport relation of Chakrabarti (1990b), in which W_rφ = −αP is the viscous stress, α being the Shakura & Sunyaev (1973) viscosity parameter. From this relation (Eq. (3)), the angular momentum variation can be written in terms of Δλ = (λ − λ′), the change in angular momentum (λ) due to viscous transport.
Methodology of ∆ε calculation
We analyze archival data of 8 observational IDs of the RXTE/PCA instrument (only PCU2, all layers), starting from 2010 August 9 (Modified Julian Day, i.e., MJD = 55417.2) to 2010 August 16 (MJD = 55424.1), selected from the rising phase of the 2010 outburst of H 1743-322. We carry out data analysis using the FTOOLS software package HeaSoft version HEADAS 6.14 and XSPEC version 12.8. For the generation of source and background '.pha' files and spectral fitting (in the 2.5-25 keV energy range) using combined disk blackbody and power-law models, we use the same method as described in Debnath et al. (2013). After achieving the best fit based on the reduced chi-square value (χ²_red ∼ 1), we integrate only the power-law component of the spectrum between E_l and E_u, the lower and upper limits of energy. For the interstellar photo-electric absorption correction, we follow the prescription of Morrison & McCammon (1983). To calculate the cooling time of the Compton cloud (CENBOL) from the observed spectrum, we apply the distance correction in the following way: we multiply the integrated spectrum by the model normalization value (norm) of 4πD²cos(i), where D is the source distance in units of 10 kpc and i is the disk inclination angle. In the case of H 1743-322, we use a source distance d = 8.5 kpc and i = 75° (Steiner et al. 2012). We keep the hydrogen column density (N_H) frozen at 1.6 × 10²² atoms cm⁻² for the absorption model wabs and assume a 1.0% systematic error for all observations.
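As a rough numerical sketch of this step (not the actual reduction pipeline), the Python fragment below integrates an assumed power-law photon spectrum N(E) = K E^(-Γ) between E_l and E_u and applies the distance factor described above; the photon index and normalization are placeholder values, not fitted results from the paper.

import numpy as np

# Placeholder best-fit power-law parameters (hypothetical values, not fit results):
Gamma = 1.8                  # photon index
K = 0.5                      # photons / cm^2 / s / keV at 1 keV

E_l, E_u = 2.5, 25.0         # integration limits in keV, as in the spectral fits
keV_to_erg = 1.602e-9

# Energy flux of the power-law component: integral of E * N(E) dE over [E_l, E_u]
E = np.linspace(E_l, E_u, 5000)
flux = np.trapz(E * K * E**(-Gamma), E) * keV_to_erg        # erg / cm^2 / s

# Distance/inclination factor as described in the text (D = 8.5 kpc, i = 75 deg)
D_cm = 8.5 * 3.086e21                                       # 8.5 kpc in cm
factor = 4.0 * np.pi * D_cm**2 * np.cos(np.radians(75.0))
energy_loss_rate = flux * factor                            # erg / s
print(f"Integrated power-law flux:  {flux:.3e} erg/cm^2/s")
print(f"Distance-corrected rate:    {energy_loss_rate:.3e} erg/s")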
RESULTS
In this Paper, we study the origin and evolution of QPOs in the outbursting BHC H 1743-322 from a purely analytical point of view. In Chakrabarti & Titarchuk (1995) and Das & Chakrabarti (2004), it was shown that matter from the companion is heated up due to compression and puffed up due to the centrifugal barrier to form the CENBOL. Low energy photons from a Shakura & Sunyaev (1973) disk with an accretion rate of ṁ_d are intercepted by the CENBOL and are emitted as high energy photons after inverse Comptonization. In Fig. 1a, we show the rate of cooling of the CENBOL on progressive days during the rising phase of the outburst. As the days progress, the amount of cooling increases and the shock moves towards the black hole (MSC96), as shown in Fig. 1b. On the first observed day of the outburst (MJD = 55417.2), the location of the shock (X_s, in units of the Schwarzschild radius r_g = 2GM/c²) was at 350.65 r_g, and at the end of our observations (MJD = 55424.1) it reaches ∼64.99 r_g. In Fig. 1c, we show the Mach number variation of the flow on day 1 (solid curve, shock at 350.65 r_g), day 5.05 (dashed curve, shock at 191.69 r_g) and day 7.81 (dotted curve, shock at 64.99 r_g) of the outburst. We calculate the velocity of the shock movement to be ∼13.11 m s⁻¹, which roughly matches the final velocity of the shock wave calculated from the POS model fit of the QPO frequency evolution (see Debnath et al. 2013). In Fig. 2a, we show the variation of the observed QPO frequencies with time. If the viscosity parameter α were constant throughout the outburst, the variation of the theoretically calculated QPO frequencies would be different. The dotted curve, drawn for a viscosity parameter α = 0.001, shows that the QPO frequency would then increase at an almost constant rate. The dashed curve of Fig. 2a includes the effect of the non-linear variation of the viscosity, which is shown in Fig. 2b. As the days progress, viscosity adjusts in such a way that the angular momentum can produce a shock at a suitable place satisfying the R-H conditions. Chakrabarti & Molteni (1995) and Giri & Chakrabarti (2012), with their extensive numerical simulations, showed that the angular momentum distribution depends on the viscosity parameter. In our solution, at the beginning of the outburst during the hard state, from MJD = 55417.2 to MJD = 55420.2 (Debnath et al. 2013; MDC14), α varies from 1.3e-4 to 5.9e-4. During the hard-intermediate state, from MJD = 55421.3 to MJD = 55424.1 (Debnath et al. 2013; Mondal et al. 2014a), α varies rapidly from 1.6e-3 to 1.9e-2. Our α calculation is for the sub-Keplerian component only. In Fig. 2c, we show the variation of α with shock location. The dashed curve shows the variation from our analytical solution, whereas the dotted curve is a fitted polynomial, which gives a general trend and could be used in other systems. We see that α ∼ K X_s^(−q), where K (= 350.2, with asymptotic standard error 6.29%) and q (= 2.34, with asymptotic standard error 0.61%) are constants for this BHC. It is to be noted that this viscosity parameter is computed for the sub-Keplerian flow component only.
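A minimal Python sketch of how this fitted trend can be evaluated is given below; the constants are the best-fit values quoted above, the three shock locations are simply the values mentioned in this section, and the relation is only claimed to hold for this particular source.

# Evaluate the fitted trend alpha ~ K * X_s**(-q) with the quoted best-fit constants.
K, q = 350.2, 2.34

def alpha_at_shock(x_s):
    """Viscosity parameter implied by the fitted trend for a shock at x_s (in r_g)."""
    return K * x_s ** (-q)

for x_s in (350.65, 191.69, 64.99):
    print(f"X_s = {x_s:7.2f} r_g  ->  alpha ~ {alpha_at_shock(x_s):.2e}")

Because the power law is only a smoothed description of the day-by-day solution, the value it returns for the first day differs somewhat from the directly computed 1.3e-4 quoted above.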
DISCUSSIONS AND CONCLUDING REMARKS
QPOs in black hole candidates are very stable features. They are seen day after day, though the frequency may drift slowly as the object goes from the hard to the soft state in the rising phase. This is generally observed in most of the outbursting BHCs (Nandi et al. 2012; Debnath et al. 2013, and references therein). The propagating oscillatory shock solution can explain such frequency drifts very well (Debnath et al. 2013). This phenomenological model is found to be justified when we actually compute shock drifts from the radiated energy loss using a self-consistent transonic solution. We find that in order to have the Rankine-Hugoniot conditions satisfied on each day, the viscosity parameter must be evolving too. If the outer boundary condition is kept fixed, an increase in the viscosity parameter causes the shock to drift outward (Chakrabarti & Molteni 1995; Giri & Chakrabarti 2012), but if the inner boundary condition is kept fixed, the shock moves inward (Chakrabarti 1990a; Das & Chakrabarti 2004). We find support for the latter phenomenon in an outbursting source where the matter supply is changing and viscosity enhancement steadily brought the shock closer to the black hole. Cooling is found to rise day by day, and so is α. Such a movement of the shock increases the QPO frequency, as is observed. Our result establishes consistency in the theoretical understanding of the observed data: as cooling increases, the observed QPO frequency increases due to drifting of the shock towards the black hole in such a way that the cooling time scale roughly matches the infall time scale. This process brings the object towards the softer states, as is observed. Shock locations were found to be located at the right place (i.e., the R-H conditions are satisfied) only if the viscosity is not strictly constant, but gradually rises from 0.0001 to 0.02 from the first day to the ∼seventh day. It is to be noted that there are alternative models (Titarchuk & Fiorito 2004; Titarchuk et al. 1998) where the corona is supposed to oscillate at its eigen frequency and the viscosity required in that case is around 0.1-0.5. This appears to be too high as compared to what we find in the present paper. The discrepancy could be due to the fact that the latter models rely on oscillations of a Keplerian disk with high angular momentum, and they require higher viscosity to reduce it drastically. In our case, on the contrary, the oscillating CENBOL is highly sub-Keplerian to begin with. Therefore, a little viscosity is enough to transport the requisite angular momentum. The range of α we require is in the same ballpark as obtained from numerical simulations (Balbus & Hawley 1991; Arlt & Rüdiger 2001; Masada & Sano 2009) of the magnetorotational instability (MRI). | 2014-12-20T05:48:42.000Z | 2014-10-30T00:00:00.000 | {
"year": 2014,
"sha1": "d25cc052bd1911f564b9b386cfb761eae77b633a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1410.8266",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d25cc052bd1911f564b9b386cfb761eae77b633a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
268299332 | pes2o/s2orc | v3-fos-license | Scheduled and Breakthrough Opioid Use for Cancer Pain in an Inpatient Setting at a Tertiary Cancer Hospital
Background: Our aim was to examine the frequency and prescription pattern of breakthrough (BTO) and scheduled (SCH) opioids and their ratio of use (BTO/SCH ratio), prior to and after referral to an inpatient supportive care consult (SCC) for cancer pain management (CPM). Methods and Materials: Patients admitted to the MD Anderson Cancer Center and referred to a SCC were retrospectively reviewed. Cancer patients receiving SCH and BTO opioids for ≥24 h were eligible for inclusion. Patient demographics and clinical characteristics, including the type and route of SCH and BTO opioids, daily opioid doses (MEDDs) of SCH and BTO, and BTO/SCH ratios were reviewed in patients seen prior to a SCC (pre-SCC) and during a SCC. A normal BTO ratio was defined as 0.05–0.2. Results: A total of 665/728 (91%) patients were evaluable. Median pain scores (p < 0.001), BTO MEDDs (p < 0.001), scheduled opioid MEDDs (p < 0.0001), and total MEDDs (p < 0.0001) were higher, but the median number of BTO doses was lower (2 vs. 4, p < 0.001), among patients seen at the SCC compared to pre-SCC. A BTO/SCH ratio over the recommended ratio (>0.2) was seen in 37.5% of patients. The BTO/SCH ratios in the pre-SCC and SCC groups were 0.10 (0.04, 0.21) and 0.17 (0.10, 0.30), respectively, p < 0.001. Hydromorphone and Morphine were the most common BTO and SCH opioids prescribed, respectively. Patients in the early supportive care group had higher pain scores and MEDDs. Conclusions: BTO/SCH ratios are frequently prescribed higher than the recommended dose. Daily pain scores, BTO MEDDs, scheduled opioid MEDDs, and total MEDDs were higher among the SCC group than the pre-SCC group, but the number of BTO doses/day was lower.
Introduction
Pain is a frequent and distressing symptom among cancer patients [1,2]. Pain in these patients may be due to their disease, its treatment, or nonmalignant causes. Approximately 70-80% of cancer patients will have cancer-related pain, and this prevalence is even higher in advanced cancer patients. Despite recent advances in cancer pain management (CPM), cancer pain often remains inadequately controlled [3][4][5]. Pain interferes with patients' work, mood, and enjoyment of life. Despite significant progress in CPM over the last few decades, recent studies indicate unsatisfactory outcomes due to poor assessments and suboptimal treatments [6]. Most importantly, uncontrolled pain remains the most common reason for inpatient supportive/palliative care consultation [7][8][9]. Background pain is a continuous and constant pain present at rest [10]. Breakthrough pain is a transitory increase of pain in cancer patients with adequately controlled background pain [10]. Breakthrough pain has a prevalence of nearly 50%; it is usually characterized by rapid onset, short duration, and severe intensity, and averages four episodes a day [9][10][11][12]. The presence of breakthrough pain has been considered a negative prognostic factor that interferes with patients' quality of life [9,[12][13][14][15]. For effective pain management, the cancer pain guidelines recommend pain screening, comprehensive assessment, and monitoring of pain after the start of pain treatments [16]. Pain is often a complex multidimensional experience, consisting of sensory and affective dimensions. Cancer pain can be regarded as "total pain", consisting of physical, psychological, spiritual, and social dimensions. Standardized and validated assessment tools that can be routinely used in clinical practice are often used to assess pain. Commonly used tools are the 0-10 numeric rating scale and the visual analog scale. The Edmonton symptom assessment scale (ESAS) is frequently used by supportive care clinicians, as it assesses not only pain but also other symptoms related to pain, such as depression, anxiety, or drowsiness, and thereby helps to assess the 'total pain'; its serial assessment may provide an overall picture of pain control during an inpatient hospital admission. In addition to assessing the severity of pain, assessment of other 'pain characteristics' is important, which usually includes the type of pain, both nociceptive (somatic, visceral) and neuropathic, the presence of risk behaviors for non-medical opioid use, and delirium. In an ambulatory setting, pain assessment can be accomplished using pain diaries or, more recently, using digital health applications such as mobile apps for pain assessment. These mobile phone apps help with regular pain assessments, provide timely feedback to patients and their clinicians, facilitate patient education, and help physicians to make timely medication changes and improve patient-physician communication [17]. However, there are still challenges which prevent the routine use of mobile phone apps. Some of the main barriers are socioeconomic status, data protection, and the need for evidence-based app validation [18].
As opioids are the primary treatment for moderate-to-severe cancer pain, successful CPM involves the use of effective opioid doses (both scheduled and breakthrough, or rescue, opioid doses) and is an indicator of end-of-life quality of care in palliative care. Adequate treatment of breakthrough pain is important because breakthrough pain is associated with significant anxiety, depression, and low quality of life [12,14,[19][20][21][22]. Traditionally, scheduled long-acting opioids are used for background pain, and short-acting immediate-release oral opioids are used on an "as needed" basis to treat breakthrough pain [13,[23][24][25]. There are different types of short- and long-acting opioids, such as Morphine, Hydromorphone, Oxycodone, Methadone, Oxymorphone, and transdermal and transmucosal Fentanyl, for scheduled and breakthrough pain [24,26,27]. In hospitalized cancer patients, the use and ratios of scheduled (SCH) and breakthrough or rescue (BTO) opioids have a very wide range [24,27,28]. Scheduled opioids have no ceiling effect, and therefore no dosing limit may be required until they are associated with unmanageable side effects such as constipation, confusion, or delirium. The dose of BTO opioids used is proportionate to the opioid daily dose [7]. The current recommendation for BTO dosage ranges from 5-20% of the daily opioid dose every one or two hours, as required. In the elderly, the BTO dosage is usually a lower percentage (5%) of the total daily opioid dose every 4 h, as required [29][30][31][32][33].
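To illustrate the arithmetic behind this recommendation, the short Python sketch below computes the suggested per-episode breakthrough dose range from a total daily opioid dose; the example MEDD value is arbitrary, and the 5-20% band (about 5% for the elderly) is taken from the text above rather than from any institutional protocol.

def breakthrough_dose_range(daily_medd_mg, elderly=False):
    """Suggested per-episode breakthrough dose (mg of morphine equivalent),
    taken as 5-20% of the total daily opioid dose, or about 5% for the elderly."""
    if elderly:
        return 0.05 * daily_medd_mg, 0.05 * daily_medd_mg
    return 0.05 * daily_medd_mg, 0.20 * daily_medd_mg

# Example with an arbitrary total daily dose of 120 mg oral morphine equivalent
low, high = breakthrough_dose_range(120.0)
print(f"Suggested breakthrough dose: {low:.0f}-{high:.0f} mg morphine equivalent per episode")

The BTO/SCH ratio analyzed later in the paper is simply the breakthrough MEDD divided by the scheduled MEDD, so a value above 0.2 falls outside this band.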
However, the use of scheduled and BTO opioids and their ratio in routine clinical practice in inpatient cancer patients is not clear, especially regarding supportive/palliative care vs. oncology care teams. This information may be helpful to determine strategies to better control cancer pain, especially breakthrough pain, and minimize the side-effects of opioids. Our aim was to examine the frequency and prescription pattern of BTO and scheduled opioids and their ratio (BTO/SCH ratio) of use prior to and after referral to an inpatient supportive care consult (SCC) for CPM.
Materials and Methods
This retrospective study was approved by the Institutional Review Board of MD Anderson, which waived the requirement for informed patient consent.
The goal of this study was to capture the prescription use of scheduled and breakthrough opioids in routine clinical practice in a tertiary cancer setting. We used a retrospective design because of its ability to analyze long-term trends through existing medical records and to capture real-time practice patterns in cancer hospital settings. The retrospective design was also more feasible for obtaining the outcomes aimed for in our study than a prospective design, in terms of resources, time, and expense [34].
Patients admitted in the inpatient setting at the MD Anderson Cancer Center who had a SCC between 1 June 2017 and 31 May 2018 were reviewed retrospectively. Patients were eligible if they were older than 18 years and had been receiving opioids for at least 24 h. SQUIRE guidelines were used to ensure completeness and transparency [35].
Process of the Supportive Care Service and Decision-Making Process for Opioid Prescription Adjustments
At The University of Texas M. D. Anderson Cancer Center, Houston, USA, a 698-bed comprehensive cancer center, inpatient supportive care consultations have been available since October 1999. The full-time supportive care team consists of board-certified palliative care specialists, palliative care and oncology fellows, advanced practice providers, pharmacists, chaplains, social workers, case managers, psychologists, and counselors. The program provides symptom control and palliative care in all areas of the cancer center by consultation or mobile team on a daily basis. In addition, the program includes an outpatient supportive care clinic and an acute palliative and supportive care unit, wherein patients with distressing symptoms are admitted for control of their symptoms and for help in transitioning home or to hospice care. The mobile team comprises the above team members, except that social workers and case managers vary based on the type of cancer; e.g., patients with genitourinary cancers have a separate social worker and case manager from the ones for patients with lung cancer. The team's primary focus is on pain, symptom control, palliative care, and end-of-life issues. The care of all patients follows a standardized management plan [36][37][38]. Patients and their families are initially assessed by palliative care or oncology fellows or advanced practice providers, using tools such as the ESAS [39], the Memorial Delirium Assessment Scale (MDAS) [40], and constipation and family support questionnaires. The findings are then discussed with a palliative care specialist, who then conducts an interview with the patient and the family and performs a physical examination. The physician then requests appropriate members of the interdisciplinary team to participate based on the patient's and family's individual needs. These interventions and the care provided by the interdisciplinary team follow palliative care guidelines established by the National Comprehensive Cancer Network and the National Consensus Project and have been outlined elsewhere. These guidelines focus on (a) assessing and managing cancer-related symptoms, including pain, fatigue, anorexia, anxiety, depression, sedation, dyspnea, sleep disturbance, and impaired feeling of wellbeing; (b) providing assistance to the patient's and caregivers' understanding of the disease and treatment goals; and (c) providing assistance to the patient and their caregivers in coping with life-threatening illness and in decision-making.
Opioids are the primary treatment for cancer pain management. The decision-making process for opioid prescription adjustments is made after the patient assessment described above, which includes assessing pain and symptoms using the ESAS and MDAS. When patients present with poor pain control, opioid-induced side effects such as nausea and constipation, opioid-induced neurotoxicity such as myoclonus or hallucinations, or when the current route or formulation is no longer feasible, the opioid is either replaced by another (opioid switching or opioid rotation), its route changed (e.g., from oral to intravenous), or its dose changed (increased). The patient is always prescribed a scheduled opioid to control the background pain, and breakthrough doses, usually at 1-4 h intervals, to control the breakthrough pain episodes inherent to cancer pain. The dosage for breakthrough opioids is usually 10-15% of the scheduled dosing. Following a changed pain prescription, the patient is monitored daily with assessments as done at the initial visit, and medications are adjusted until hospital discharge by the supportive care team. Most importantly, once the supportive care team is involved in the pain management of a given referred patient, the responsibility for the patient's pain control and opioid prescriptions is solely under their control, and no changes are made, either by the primary oncology team responsible for the patient's care in the hospital or by the other teams involved in the patient's care, without consulting the supportive care team. In addition to opioids, other treatments for cancer pain management are provided based on the assessment. These may include the use of non-steroidal anti-inflammatory medications, steroids, antidepressants such as duloxetine, referral for physical therapy, counseling, chaplaincy, radiation therapy, or procedures such as neurolytic blocks, cordotomy, etc.
Data prior to and after a SCC referral occurring after 72 h of hospital admission were collected. The data obtained prior to the referral to a SCC were termed the "pre-supportive care group", and the data from after referral the "supportive care group (after referral to SCC)", respectively. In the pre-supportive care group, cancer pain was managed by the primary oncology care team prior to the referral to a SCC. Patients who were referred to a SCC within 72 h of their admission were classified as the "early supportive care group". The rationale for the use of a pre-supportive care group and a supportive care group was to understand patient characteristics, pain, and prescription use of scheduled and breakthrough opioids after systematic assessment and management by a specialized supportive care team. The early supportive care cohort was evaluated to assess patient characteristics, pain, and opioid prescription use after systematic assessment and management by a specialized pain/supportive care team at the time of hospital admission rather than later (after 72 h of admission).
Assessments
Demographics and clinical information were collected for all enrolled patients: performance status, cancer diagnosis, the relationship between SCH and BTO opioids, the type, route, and frequency of administration of SCH and BTO opioids, and dosage, assessed using Morphine Equivalent Daily Dose (MEDD) scores, as well as Edmonton symptom assessment scale (ESAS) scores, CAGE-AID questionnaire scores, and Memorial delirium assessment scale (MDAS) scores.
ESAS, CAGE-AID, and MDAS scores are part of the standard of care in SCCs and ambulatory clinical visits.
The ESAS is a tool designed by our group to assist in the assessment of ten symptoms common in cancer patients over the prior 24 h: pain, fatigue, nausea, depression, anxiety, drowsiness, shortness of breath, appetite, sleep, and feelings of well-being [38]. The severity of each symptom at the time of assessment is rated from 0 to 10 on a numerical scale, 0 meaning that the symptom is absent and 10 meaning that it is of the worst possible severity. A higher score indicates higher symptom intensity. The instrument is both valid and reliable for the assessment of the intensity of symptoms in cancer populations. All the assessments were completed by the patients themselves, or with the help of a nurse or a caregiver. The pain item of the ESAS is the one used to assess pain in the inpatient setting of a cancer hospital. The optimal cutoffs were ≥1 point on the ESAS pain item for improvement of pain, and ≤1 point for deterioration of pain [41].
The CAGE-AID (cut down, annoyed, guilty, eye opener) questionnaire is a four-item validated tool that is used to screen for a history of alcoholism or drug abuse and for the presence of severe symptom distress and potential non-medical opioid use in cancer patients. A score ≥ 2 was considered positive and has been reported to be more than 85% sensitive and 90% specific for the diagnosis of alcohol/drug abuse and/or dependence [42].
The MDAS is structured as a ten-item, four-point clinician-rated scale designed to quantify the severity of delirium in medically ill patients [40]. Items included in the MDAS reflect the diagnostic criteria for delirium in the DSM-IV. Scale items assess disturbances in arousal and level of consciousness, as well as several areas of cognitive functioning (memory, attention, orientation, and disturbances in thinking) and psychomotor activity. The MDAS yields a global score ranging from 0 to 30, with a suggested cut-off score of 7 for delirium.
Study data were collected and managed using Research Electronic Data Capture (REDCap) electronic data capture tools hosted at the MD Anderson Cancer Center [43,44]. REDCap is a secure, web-based software platform designed to support data capture for research studies, providing (1) an intuitive interface for validated data capture; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages; and (4) procedures for data integration and interoperability with external sources. All data were collected by trained researchers.
Statistical Analysis
Data were summarized using standard descriptive statistics, such as mean, standard deviation, median and range for continuous variables, and frequency and proportion for categorical variables. Associations between categorical variables were examined by the Chi-squared test, Fisher's exact test, or McNemar's test, as appropriate. Differences in continuous variables before and after referral to supportive care were compared using the Wilcoxon signed-rank test. For the comparison between the early (before 72 h of hospitalization) and late (after 72 h of hospitalization) referral supportive care groups, the Wilcoxon rank-sum test was used to examine differences in continuous variables between groups.
Sample size calculation: For the primary objective, we compared MEDD scores for scheduled pain control and breakthrough pain control before (pre-supportive care) and after (supportive care) referral after 72 h. With 364 patients, we would have 90% power to detect an effect size of 0.185 in the MEDD score difference for scheduled pain control and for breakthrough pain control, using a paired t-test with a 2-sided type I error rate of 0.025.
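As a sanity check on the stated power, the brief Python sketch below reproduces the calculation for a paired (one-sample) t-test using the statsmodels package; it re-evaluates only the numbers quoted above and introduces no new study parameters.

from statsmodels.stats.power import TTestPower

# Paired t-test power for the quoted design: effect size 0.185,
# n = 364 pairs, two-sided type I error rate of 0.025.
power = TTestPower().solve_power(effect_size=0.185, nobs=364,
                                 alpha=0.025, alternative='two-sided')
print(f"Computed power: {power:.3f}")   # close to 0.90, matching the text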
All computations were carried out in SAS 9.4 (SAS Institute Inc., Cary, NC, USA).
Results
Figure 1 shows the study flow diagram. In total, 665/728 (91%) patients were evaluable. Of these, 362 patients were referred to the supportive care service after 72 h of hospitalization, and data were compared before (pre-supportive care, n = 330) and after (n = 292) referral to supportive care. Due to the absence of opioid use, 32 and 38 patients were excluded from analysis from the pre-supportive care and supportive care groups, respectively. A total of 355 of the 366 patients referred to the supportive care service before 72 h of hospitalization (early supportive care) were analyzed. Eleven patients were excluded due to the absence of opioid use.
Patients in the early supportive care group were younger (p = 0.0018), more often female (p = 0.09), and had higher illicit drug use (p = 0.016), higher pain scores (p = 0.007), higher MEDDs (p < 0.001), and higher BTO MEDDs (p = 0.018). Whether patients were referred earlier (early supportive care group) or later (supportive care group), Hydromorphone (36.3% in the supportive care group vs. 42.6% in the early supportive care group, p = 0.26) and Morphine (33.3% in the supportive care group vs. 32.8% in the early supportive care group, p = 0.30) were the most common BTO opioids prescribed. Morphine was the most commonly prescribed SCH opioid in both groups (36% in the supportive care group and 38.1% in the early supportive care group, p = 0.30). A BTO/SCH ratio over the recommended ratio (>0.2) was seen in 490 patients (51%).
The clinical and demographic characteristics of patients in the pre-supportive care and supportive care groups are summarized in Table 1. There were no significant differences in age, gender, marital status, cancer diagnosis, race, smoking status, illicit drug use, CAGE-positive scores, MDAS scores, and number of follow-up visits between the pre-supportive and supportive care groups. Pain scores were higher among the patients in the supportive care group both at the initial visit (p < 0.001) and at follow-up (p < 0.001) when compared to the pre-supportive care group. The change in pain from the initial visit to the follow-up visit was −1 point on the ESAS 0-10 scale, which is equivalent to the minimal clinically important difference for the ESAS pain item. The median number of BTO doses was lower in the supportive care group, 2 vs. 4 (p < 0.001). BTO MEDDs (p < 0.0001), scheduled opioid MEDDs (p < 0.0001), and total MEDDs (p < 0.0001) were higher among the patients in the supportive care group.
Hydromorphone (34.7% in the pre-supportive care group vs. 40.7% in the supportive care group) and Morphine (42.9% in the pre-supportive care group vs. 40.71% in the supportive care group) were the most common BTO opioids prescribed in both groups (Figure 2). Morphine (39.6% in the pre-supportive care group vs. 37.7% in the supportive care group) and Fentanyl (23% pre-supportive care vs. 23.4% supportive care) were the most common scheduled opioids prescribed in both groups.
Discussion
There are limited studies investigating scheduled and breakthrough opioid use patterns for cancer pain management by inpatient supportive care teams. In this study, we found that patients receiving cancer pain management from supportive care, when compared to prior to the supportive care consultation (pre-supportive care group), had higher pain scores, MEDDs, and BTO daily doses. The supportive care group had lower numbers of BTO doses used and improved pain scores at follow-up visits compared with the pre-supportive group. These results may suggest better opioid prescription use, but further studies are needed to validate these findings. Most of the patients in the pre-supportive and supportive care groups received higher-than-recommended BTO/SCH ratios. Patients in the early supportive care group had higher pain scores and daily opioid use (MEDDs), and higher BTO MEDDs, suggesting higher levels of distress and supportive care needs. Even though this is a retrospective study, it is an interesting report because it captures the routine daily practice of opioid pain management at one of the main tertiary cancer centers in the United States of America.
In a study published by Mercadante et al. (2010), Morphine and oral transmucosal Fentanyl were the most common BTO opioids prescribed among patients admitted to an acute palliative care unit, with Morphine being prescribed for 386 episodes of breakthrough pain and oral transmucosal Fentanyl for 152 episodes [45]. Qian et al. (2020) showed that Morphine (38%), Hydromorphone (17%), and Fentanyl (15%) were the most common SCH opioids prescribed among hospitalized patients receiving a palliative care consultation [46]. Mercadante et al. (2017) found that sublingual Fentanyl for the treatment of breakthrough pain was associated with safe and effective analgesia in cancer patients receiving low doses of SCH opioids [47]. In another study by Mercadante et al. (2020), cancer patients on lower doses of SCH opioids (MEDD < 60 mg/day) had fewer episodes of breakthrough pain, with less severe pain intensity and earlier pain onset, as well as longer times to meaningful pain relief after taking breakthrough opioids and less satisfaction with BTO [48]. Currow et al. (2020) found that oral Morphine was ineffective as a BTO treatment for breakthrough pain at dose ratios of 0.16, 0.12, and 0.08 proportional to the SCH opioid dose [49]. Azhar et al. (2019) found that BTO doses were 10% of scheduled opioid doses [29]. However, the results reported were based on patient satisfaction rather than actual pain scores. In our study, the median ratio was over 0.15 in all groups, and more than 37% of patients had ratios that were over 0.2 [49][50][51][52]. The ratios were higher when Hydromorphone or Morphine was used as a SCH medication.
Compared to patients referred to supportive care after 72 h, patients in the early supportive care group were younger, more often female, and had higher levels of pain and higher BTO daily doses. However, the BTO/SCH ratio for Hydromorphone, the most common opioid in patients referred to supportive care after 72 h, was 0.73, whereas it was 0.23 in the early supportive care group. These findings suggest that the early supportive care group was associated with a higher risk for poor pain control and higher opioid dose needs for breakthrough pain. Further, in a study conducted by Azhar et al. (2019), young age and higher ESAS pain scores had a significant association with poor responses to immediate-release opioids for breakthrough pain, which could explain why the total and BTO MEDDs were higher among the patients who were referred earlier to supportive care [29]. Further studies are needed to determine whether there is an association between the BTO/SCH ratio and better pain outcomes, such as pain improvement, attainment of personalized pain goals, non-medical opioid use, and toxicity.
Future studies are needed to optimize the use of opioid (scheduled and breakthrough) prescriptions to improve pain control and thereby patients' overall quality of life. Some of the interventions to consider include the use of a patient's genetic data, such as single nucleotide polymorphisms of candidate genes (for example, inflammatory genes), to determine the sensitivity of a given patient to a particular opioid type, the dose needed for an opioid to be effective, and the risk of opioid-related side effects [53]. In a recent study by our team, we assessed the genetic factors associated with pain severity, daily opioid dose, and pain response to opioids. The results of that study suggest that single nucleotide polymorphisms of the OPRM1, COMT, NFKBIA, CXCL8, IL-6, STAT6, and ARRB2 genes were significantly associated with pain severity, opioid daily dose, and pain response [53]. Similar findings were reported in other studies investigating the use of pharmacogenomics for personalized pain management [54]. Further studies are needed.
In recent years, there has been increased use of Artificial Intelligence (AI) in patient care, as well as in pain research. AI may have the potential to provide better pain control, since traditional pain assessment and management methods are subject to considerable variability in patient-reported pain scores, in the perception of pain by different individuals, and in the algorithms for pain management, which include the use of opioids and other pain interventions. AI technologies such as machine learning, deep learning, and natural language processing have been used for pain assessment, surveillance and monitoring, and opioid misuse risk prediction. However, there is limited published research on the use of AI for pain management [54][55][56][57]. Future studies on patients with cancer pain using AI are needed, and these should utilize recent advances in pain assessment such as facial image analysis [54]. Better cancer pain management may be facilitated by predictive clinical decision systems which incorporate patients' clinical data, patient data obtained from wearable devices which assess pain, sleep, and activity, and biomarkers as discussed above, such as single nucleotide polymorphisms of various candidate genes, which may predict the sensitivity to certain opioids, the response of a given pain type to an opioid, or other pain treatments. However, AI may only supplement clinician decision-making processes for the management of cancer pain rather than replace them, due to its inherent limitations.
Our study has several limitations. It is a single-center study, and therefore some of the results may not be generalizable to other cancer and community hospital settings. The data used in our study were obtained prior to the COVID-19 pandemic; however, recent research suggests that the issues discussed here regarding SCH and BTO opioid use in cancer patients are still applicable to the current economic and social landscape, based on recently published studies [50][51][52][58]. A further limitation is that our study found a lower number of BTO doses in the supportive care group but was unable to capture whether the opioid prescription changes made by the supportive care team had any impact on breakthrough pain episodes. Further studies are needed.
Conclusions
BTO/SCH ratios were frequently prescribed higher than the recommended dose. Daily pain scores, BTO MEDDs, scheduled opioid MEDDs, and total MEDDs were higher among patients seen in the supportive care group, but their number of BTO doses/day was smaller. Further studies are needed.
Figure 2.
Figure 2. Opioid prescription at Pre-Supportive Care and Supportive Care. Medication prescription per patient. Codeine and Oxymorphone are not shown due to low N. No statistically significant differences between groups (McNemar's test); pre-supportive care: data from patients referred after 72 h of hospitalization and before supportive consult; supportive care: data from patients referred after 72 h of hospitalization and after supportive consult.
Table 1 notes: supportive care: data from patients referred after 72 h of hospitalization and after the supportive care consult; early supportive care: data from patients referred to supportive care before 72 h of hospitalization. * Categorical variables examined by McNemar's test; continuous variables examined by the Wilcoxon signed-rank test; ** MDAS: Memorial delirium assessment scale; *** MEDD: morphine equivalent daily dose.
Table 2.
Ratio of MEDD Breakthrough/MEDD Scheduled opioids. MEDD: morphine equivalent daily dose. IQR: interquartile range. BTO: breakthrough opioids. * Continuous variables examined by Wilcoxon signed-rank test; categorical paired variables were evaluated by McNemar's test. Ratio between opioids classified as under if the BTO/Scheduled MEDD ratio ≤ 0.1, normal if the BTO/Scheduled MEDD ratio > 0.1 and ≤ 0.2, or over if the BTO/Scheduled MEDD ratio > 0.2.
Table 3.
Ratio of MEDD Breakthrough/Scheduled opioid per medication. MEDD: morphine equivalent daily dose. IQR: interquartile range. ER: extended release. Ratio of breakthrough/scheduled MEDD per medication; data for the two most commonly prescribed breakthrough medications. * Continuous variables examined by Wilcoxon signed-rank test. | 2024-03-11T16:52:57.519Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "7712e8dbb25a50ecdca95382e3b88fe8de0fd163",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1718-7729/31/3/101/pdf?version=1709605056",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75638fa2acbd1a2d5c3b56b069c49c531ab7b407",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
22335590 | pes2o/s2orc | v3-fos-license | New guidelines for diagnosis and treatment of insomnia
The Brazilian Sleep Association brought together specialists in sleep medicine in order to develop new guidelines on the diagnosis and treatment of insomnia. The following subjects were discussed: concepts, clinical and psychosocial evaluations, recommendations for polysomnography, pharmacological treatment, behavioral and cognitive therapy, comorbidities, and insomnia in children. Four categories of recommendation were envisaged: standard, recommended, optional and not recommended. For the diagnosis of insomnia, psychosocial and polysomnographic investigations were recommended. For non-pharmacological treatment, cognitive behavioral treatment was considered to be standard, while for pharmacological treatment, zolpidem was indicated as the standard drug because of its hypnotic profile, while zopiclone, trazodone and doxepin were recommended.
Even today, insomnia remains a clinical entity that is difficult to diagnose and complex to treat, demanding an approach with appropriate strategy and planning. Insomnia, as a symptom, syndrome or disease, has serious social and professional consequences, affecting daily activities and rendering individuals incapable of performing their tasks. It therefore generates a high cost for society.
In November 2008, the Brazilian Sleep Society brought together doctors who specialize in sleep medicine, in São Paulo. The meeting aimed to provide new guidelines for diagnosing and treating insomnia. During this meeting, the following subjects were considered: concepts, clinical and psychosocial evaluations, recommendations for polysomnography, pharmacological treatment, behavioral and cognitive therapy, comorbidities and insomnia in children.
METHOD
Based on searches in the literature for articles, reviews and meta-analyses, five levels of evidence were put forward as the basis for recommendations on managing insomnia:
Level I - Randomized trials with low false-positive (alpha) and low false-negative (beta) errors (high power); evidence obtained from meta-analyses of randomized controlled trials.
Level II - Randomized trials with high false-positive (alpha) and/or high false-negative (beta) errors (low power); evidence obtained from at least one randomized controlled trial.
Level III - Nonrandomized concurrent cohort comparisons between patients who did and did not receive a concomitant intervention; evidence obtained from at least one well-designed, controlled study without randomization.
Level IV - Nonrandomized historical cohort comparisons between current patients who received an intervention and former patients (from the same institution or from the literature) who did not.
Level V - Case series without controls; evidence obtained from expert committee reports or opinions and/or clinical experiences of respected authorities.
Based on these five levels of evidence, the recommendations for interventions were considered to be: standard (levels I and II), recommended (levels III and IV), optional (level V) and not recommended, when no level of evidence existed 1,2 .
Concept (standard)
Insomnia is defined as a disorder that is characterized by difficulty in falling asleep or maintaining sleep. Furthermore, insomnia is also related to dissatisfaction with the quality of sleep, thus resulting in daily physical and emotional symptoms that have an impact on social and cognitive performance 3 .
Classification (standard)
According to the latest classification of sleep disorders (2005), insomnia is divided into the following forms: acute insomnia, psychophysiological insomnia, paradoxical insomnia, idiopathic insomnia, insomnia associated with mental disorders, insomnia associated with systemic diseases and insomnia associated with inadequate habits [4][5][6] .
Acute insomnia, transitory insomnia or adjustment insomnia
The essential element for this diagnosis is the presence of symptoms of acute insomnia caused by a triggering causal factor that is clearly identified in an individual who previously had a normal sleeping pattern, without insomnia complaints. This clinical condition lasts no longer than one month 4 .
Primary chronic insomnia
In the etiopathogenesis of primary insomnia, three points should be considered: predisposing (genetic and constitutional), precipitating and perpetuating factors.Predisposing factors depend on hyperactivity of the awakening system (stress response mechanisms), hyperactivity of the hypothalamic-pituitary-adrenal axis, anxiety and depression, abnormalities in the mechanisms of sleep-wakefulness homeostasis, abnormalities in the circadian rhythm (circadian sleep-wakefulness control) and abnormalities of the intrinsic mechanisms of sleep-wakefulness control [7][8][9][10][11] .The precipitating and perpetuating factors depend on psychosocial factors, behavioral changes and cognitive characteristics.
Primary insomnia can be divided into three subtypes, namely psychophysiological, idiopathic and paradoxical 4 . Psychophysiological insomnia occurs concomitantly with a cognitive hyperalert state that is characterized by anxiety related to the act of sleeping and the presence of neurocognitive symptoms such as fatigue and irritability. Idiopathic insomnia starts before puberty and persists throughout adulthood, and a family history of insomnia is often present. In paradoxical insomnia, subjective complaints of poor-quality sleep can be observed despite the lack of objective sleep abnormalities on polysomnography. This subtype of insomnia is related to sleep misperception.
[2] Inadequate sleep hygiene - This is related to habits that are inappropriate for good sleep quality, for example psychologically stressful activities, consumption of caffeine, nicotine, alcohol and heavy meals, vigorous physical activity close to bedtime, inconsistent times for going to sleep and waking up, and long naps or naps near the main time for sleeping 4 .
[3] Medical condition -This sleep disorder is related to particular medical conditions, for example painful syndromes, infections, metabolic diseases, hyperthyroidism and neurological diseases 4 .
[4] Use of substances or medication - This sleep disorder is related to the use of a drug or substance such as alcohol, stimulants (amphetamine and derivatives) or antidepressants 4 .
Obstructive sleep apnea
In 1973, Guilleminault et al. described the association between insomnia and obstructive sleep apnea and called it "sleep-insomnia apnea syndrome" 19 . The relationship between these two common sleep disorders is complex and unclear. There is a higher incidence of breathing disorders in insomniac patients than in the general population 20,21 . The severity of insomnia symptoms is strongly correlated with the severity of apnea, thereby characterizing comorbidity. Lichstein et al. demonstrated that high proportions of individuals, particularly the elderly, present this combined condition of undiagnosed sleep apnea and insomnia 22 . Therefore, polysomnography (PSG) can help identify a substantial number of breathing disorders that are associated with insomnia 23,24 .
Women in the premenopausal and menopausal periods are more likely to develop sleep complaints and disorders than women of fertile age. Conjugated hormonal therapy (estrogen and progesterone) has been shown to efficiently improve general sleep complaints, as well as insomnia and obstructive sleep apnea syndrome (OSAS) 25 . Benzodiazepine drugs are associated with reductions in wakefulness, reductions in airway muscle tonus and decreases in the ventilatory response to hypoxemia. Therefore, these drugs are considered inappropriate for treating these comorbidities. The use of CPAP or oral devices also interferes negatively with the quality of sleep, particularly during the adaptation phases.
Fibromyalgia
Patients with fibromyalgia present persistent tiredness and physical fatigue, associated with non-restorative sleep and diffuse muscle pain. Usually, these patients have the perception of a sleep disorder associated with fatigue. Pharmacological treatment mainly consists of tricyclic antidepressants and cyclobenzaprine [26][27][28][29][30][31][32] .
Restless legs syndrome and periodic movements of limbs
The restless legs syndrome is characterized by sensory disorders that mainly affect the lower limbs, particularly before falling asleep, thus leading to difficulty in falling asleep. Periodic movements of limbs usually accompany the restless legs syndrome during sleep, leading to a fragmented sleeping pattern, which affects the quality of sleep. Periodic movements of the lower limbs can occur during sleep independently of the existence of restless legs syndrome. In these cases, the repercussions on the sleep profile, with insomnia or daytime hypersomnolence, must be analyzed case by case 43 .
Evaluation
When considering the etiopathogenesis of insomnia, it is important to highlight that insomnia may be of biological, environmental, behavioral or psychological nature. Likewise, the factors causing and perpetuating insomnia are interrelated with social, professional and family factors. Therefore, insomnia evaluations need to be broad-based, covering the patients' medical, psychological and social characteristics.
Medical evaluation (standard)
Evaluations on insomniac patients should begin by taking a rigorous and detailed medical history in which the history of symptoms is recorded, including the start of insomnia and its progression to a chronic condition, along with treatments already used and repercussions of the abnormal sleeping pattern during the day, such as somnolence, tiredness, fatigue and reduction of attention, concentration and memory 44 .
Nighttime habits that should be recorded include: bedtime, activities in bed, turning off lights, time to fall asleep, time to waking up in the morning, time to getting up, sleep quality, number of awakenings, time spent awake during the night and reports of snoring and leg movements.
Day habits that should be recorded include: mealtimes, work and study periods, daytime naps, physical activity, smoking habit, alcohol intake, use of drugs and medications.
Bedroom conditions that should be recorded include: condition of the bed, mattress and pillows, number of people who sleep in the same bed, luminosity, noise, temperature and presence of a TV, computer or audio equipment in the bedroom.
Psychosocial evaluation (recommended)
This has the aim of investigating, in greater detail, the main precipitating and perpetuating factors of insomnia. A psychosocial evaluation must be carried out taking into account the systemic focus, i.e. the insomnia symptoms are analyzed within the context of patients' lives, and what these symptoms allow or cover [45][46][47][48] .
Subsidiary examinations (recommended)
It is recommended that every insomniac patient should undergo complementary examinations when there is a suspicion of any systemic disease.
Questionnaires (recommended)
The use of a sleep diary, as well as other questionnaires, is fundamental to cognitive-behavioral therapy.
Polysomnography (recommended)
In order to investigate comorbidities such as obstructive sleep apnea, and for objective evaluation of sleep in cases of suspected sleep misperception, polysomnography is recommended as an auxiliary method for the diagnosis of insomnia, whenever possible [49][50][51][52] .
Treatment of primary insomnia
Cognitive-behavioral therapy (standard)
Today, cognitive-behavioral therapy (CBT) is a standard treatment for primary insomnia. It must not be used alone but, rather, in association with pharmacological therapy [53][54][55][56][57][58] . CBT presents an advantage over pharmacological treatment: the low risk of side effects and the long-term maintenance of sleep pattern improvement. CBT has a limited and defined period of use, from four to eight sessions. It is a focal and direct type of therapy, in which patients play an active role and are co-responsible for their treatment. It can be undertaken individually or in groups [59][60][61][62] .
The interventions are educational, behavioral and cognitive, and their theoretical basis is the behavioral model of insomnia proposed by Spielman.According to this model, three main factors can cause insomnia: predisposing, precipitating and perpetuating factors.The main CBT targets are the precipitating and perpetuating factors.The main behavioral and cognitive techniques are sleep hygiene, stimulus-control therapy, therapy of bedtime and sleeping time restriction, relaxation techniques, cognitive restructuring, paradoxical intention and cognitive therapy in sleep misperception disorders [63][64][65][66][67] .
[1] Sleep hygiene: This is a psychoeducational intervention containing basic information on sleep habits and hygiene.It includes instructions for establishing regular sleeping times; going to bed only when feeling sleepy and not using the bed as a means of trying to sleep; not spending the day worrying about sleeping time; having control over time; avoiding the use of stimulants (coffee, cigarettes, drugs, black tea, Coca-Cola and chocolate); avoiding alcohol consumption before sleeping; and avoiding high liquid consumption before sleeping.It includes suggestions for dinner (light foods) not less than two hours before going to sleep, and for regular physical activity, preferably in the mornings.It evaluates the bedroom conditions: comfort, temperature, noise, and stresses the importance of having a bedroom that is silent, aired, clean and organized.
[2] Stimulus-control therapy: This aims towards educating insomniac patients on how to establish a more appropriate sleep-wakefulness rhythm and limit the time awake and the behavior allowed in the bedroom/bed. The main instructions for patients include the following items: to go to bed only when feeling sleepy; avoid any behavior other than sleep or sex in the bedroom/bed; if feeling incapable of sleeping, the patient should get up from bed and go to another place to do some relaxing activity in an environment with little light, and only go back to bed when feeling somnolence again; to keep to a fixed time for waking up, seven days per week, independently of the amount of sleep obtained; not to nap or to lie down during the day; to remove the TV, stereo and computer from the bedroom; not to eat, read, work, watch TV or use a computer in the bedroom/bed.
[3] Therapy of bedtime and sleeping time restriction: The aim of this therapy is to consolidate sleep through restricting the time that patients spend in bed to the average time they spend sleeping (i.e. the number of hours that they really spend sleeping), based on the information in the sleep diary. This technique creates a mild state of sleep deprivation that may cause daytime somnolence. However, at the same time, it provides sleep consolidation, thus making it easier to fall asleep, improving sleep efficiency and decreasing latency and variability between nights. It is not recommended to have less than four to five hours of sleep, and the necessary adjustments must be made in relation to time spent in bed, according to patients' responses to the proposed treatment. If patients reach 90% sleep efficiency, 15 minutes are added to the time allowed in bed and, if the efficiency is less than 85%, 15 minutes are taken away.
[4] Relaxation techniques: The aim of teaching relaxation techniques is to show patients how tense and hypervigilant they are during both day and night. Progressive relaxation is the insomnia treatment that has been studied the most. Patients are guided to tense and relax the major muscle groups sequentially, while observing the sensation of tension and relaxation.
[5] Cognitive restructuring: This is mainly based on cognitive symptoms that can cause or perpetuate insomnia.Cognitive restructuring works on concerns, thoughts, false attitudes, irrational beliefs about sleep and amplification of its consequences, false ideas about the causes of insomnia and disbelief about sleep induction practices and about their own capacity to sleep.The idea is to make patients abandon the symptoms of insomnia, by reminding them that the way in which events are thought about or judged determines the way that individuals feel about them.
[6] Paradoxical intention: This technique reduces the anticipatory anxiety associated with the fear of trying to fall asleep and not being capable of doing so, since insomniacs usually believe that they have lost their natural capacity to fall asleep. Patients are instructed to go to bed, stay awake and try not to sleep; this makes them more relaxed and not under obligation to fall asleep. They consequently fall asleep faster.
[7] Cognitive therapy for sleep misperception disorders: This therapy works on the relationship between patients' subjective perceptions of total sleeping time and the total sleeping time obtained through PSG.The intention of this approach is to give patients objective data on sleep efficiency obtained through PSG and make them comprehend that they are sleeping for longer than they think.This technique also makes them more relaxed regarding the quantity of sleep they consider necessary, and it enables them to fall asleep more easily when this new reality is acquired [51][52] .
Pharmacological treatment
Pharmacological treatment consists of the use of hypnotic drugs that induce sleep, mainly because they act on the main inhibitory system of the central nervous system, the GABA system.Additionally, substances presenting sedative effects, such as antidepressants, may be used.More recently, medications that act on melatoninergic receptors have been considered promising as drugs for treating insomnia [68][69][70][71][72] .
GABA-A receptor-selective agonist hypnotics
[1] Zolpidem (standard): This is the standard hypnotic drug for treating insomnia. Zolpidem is an imidazopyridine that was developed in 1980 and has been used since 1990. It was the first selective α1 agonist. It is rapidly absorbed (in approximately one hour) and presents a short half-life of 2.5 hours. Its bioavailability ranges from 65% to 70%. Plasma concentration peaks occur 1.5 hours after drug intake. The therapeutic doses range from 5 to 10 mg, and the drug is metabolized in the liver and eliminated by the kidneys. In older people, and in cases of liver or kidney failure, the recommended dose is 5 mg 73 . Although the use of sleep inducers for treating chronic insomnia is only recommended for one month, clinical trials have suggested that zolpidem remains effective and safe for a prolonged period of use, i.e. more than 35 days, at a 10 mg dose 74,75 . The use of zolpidem reduces the cyclic alternating pattern types A1 and A2, even when in intermittent use 76,77 .
Slow-release zolpidem (zolpidem MR, still not available in Brazil) is a new formulation used for patients with difficulty in maintaining their sleep.This formulation comprises pills with immediate release and pills for prolonged release, which maintains plasma concentrations for three to six hours after intake 78,79 .Zolpidem can also be used intermittently over the long term, in accordance with patient needs, without rebound insomnia appearing [80][81][82] .
[2] Zopiclone (recommended): This is a hypnotic drug that is recommended for treating insomnia.Zopiclone is a cyclopyrrolone that differs from zolpidem because of its longer half-life (5.3 hours) and its action on receptors containing the subunits α1 and α2.The recommended dose is 3.7 to 7.5 mg.A few side effects after withdrawal have been described; however, the residual effects on the following day may be attributed to its long half-life 83 .
[3] Zaleplon (recommended) - not available: This is a pyrazolopyrimidine that binds to the α1 receptor, making the drug a hypnotic agent that can be recommended for treating insomnia. The recommended dose is 10 mg and its half-life is approximately one hour. Because of these characteristics, zaleplon is indicated for sleep induction, while showing little effect on sleep maintenance. Zaleplon was previously available on the Brazilian market but was withdrawn, which limits its use in this country 84 .
[4] Eszopiclone (recommended) -not available: This is a zopiclone isomer of cyclopyrrolone that is recommended for treating insomnia.Eszopiclone is rapidly absorbed and presents a relatively long half-life.The dose must be individualized, but ranges from 1 to 3 mg before going to bed [85][86][87] .
[5] Indiplon (recommended) -not available: This is a pyrazolopyrimidine with similarities to zolpidem, zopiclone and zaleplon that is selective for receptors that contain a subunit α 1.It is a hypnotic drug recommended for treating insomnia.This drug has a formulation for immediate release (indiplon IR), which is indicated for initial insomnia, and a controlled formulation (indiplon MR), which lasts six to eight hours and is indicated for patients with complaints regarding sleep maintenance.The recommended dose ranges from 15 to 30 mg, taken just before going to bed 88 .
Antidepressants
Sedative antidepressants (tricyclics, trazodone, doxepin and mirtazapine) are alternatives for pharmacological treatment of insomnia. However, there are no double-blind randomized studies proving the efficacy and safety of these agents. Some tricyclic antidepressants such as amitriptyline improve sleep continuity and efficiency and produce sedation during the day 89 .
[1] Trazodone (recommended): Trazodone seems to be the second most commonly prescribed agent for treating insomnia. It belongs to the pharmacological group of serotonin reuptake inhibitors, and has antagonist action on α1-adrenergic, 5-HT1A and 5-HT2 receptors. Trazodone slightly suppresses REM sleep and improves sleep continuity. The recommended dose is 50 mg/day 90 .
[2] Doxepin (recommended): This is a tricyclic antidepressant with an antagonist effect on histamine H1/H2 receptors. It has been shown to be efficient when used in small doses (1 to 6 mg/night) for treating insomnia. It does not cause clinically significant residual or anticholinergic effects 91 .
[3] Mirtazapine (optional): This is an atypical antidepressant. Its mechanism of action depends on the increased noradrenergic activity provided by the antagonist effect of the drug on alpha-2a adrenergic receptors, and nonspecific blockage of serotonergic reuptake. Mirtazapine is a postsynaptic antagonist (blocker) of 5-HT2A, 5-HT2C and 5-HT3 receptors, with sedative and anxiolytic effects. Its antagonist activity at histamine H1 receptors explains the strong sedative effect, and this is the antidepressant with the greatest sedative effect among the currently available drugs. The recommended doses range from 7 to 30 mg 92 .
[4] Amitriptyline (optional): This presents significant sedative effects due to its anticholinergic, anti-histaminic and anti-alpha 1 profile, and also due to the blockage of 5-HT2A and 5-HT2C receptors. The sedative effects are immediate, preceding the antidepressant effects, and decrease after a few weeks of treatment. The recommended dose ranges from 12.5 to 50 mg.
[5] Mianserin (optional): This is an atypical antidepressant with a sedative effect that occurs through antagonism of histamine H1 and 5-HT2A/2C receptors. There are no long-term studies proving the efficacy and safety of mianserin for treating insomnia.
Valerian (optional)
Valerian (valepotriates) may be an option for treating insomnia and is used as an auxiliary medication when discontinuing benzodiazepines among chronic users. Some studies have reported that its mechanism of action is related to GABA. Valerian may also act during sleep through other mechanisms: through the MT1 and MT2 (melatonin) receptors, the A1 adenosinergic receptor and some subtypes of 5-HT receptors 93 .
Benzodiazepines (optional)
Benzodiazepines (BZDs) link nonspecifically to the alpha-1 and alpha-2 subunits of the GABA-A postsynaptic receptor and to any subunit of the gamma type.BZDs increase the affinity of the GABA-A postsynaptic receptor with endogenous GABA, and increase the intensity and duration of the inhibitory effects through boosting chloride channels.The link to the subunit alpha-1 is responsible for the hypnotic and cognitive effects of this drug, while the link to the subunit alpha-2 is responsible for the anxiolytic, anti-convulsion and muscle-relaxing effects.Withdrawal of BZDs may bring back the insomnia or cause rebound insomnia in patients, with worse symptoms than those presented before treatment.The presence of anxiety and the intensity of insomnia depend on patients' psychological profiles.Gradual and slow discontinuation of BZDs, with technical support, is recommended.The abstinence symptoms when discontinuing BZDs depend on a variety of factors.Many chronic users will be able to discontinue treatment successfully, provided that it is done with an appropriate technique [94][95] .
Medication abuse often occurs among chronic users. Tolerance reflecting the progressive increase of BZD doses also depends on several factors. However, there are patients who do not develop tolerance after using BZDs for a long time. There are studies demonstrating the existence of a correlation between prolonged use of BZDs and increased risk of death. Amplification of obstructive ventilatory disorders during sleep, sedation, suppression of self-care, falls, confusion, amnesia and other possible drug-related symptoms may explain the increased mortality. BZDs are not indicated for individuals with drug addiction and alcohol abuse. Special care is necessary with elderly individuals, patients with kidney, liver and lung dysfunctions, and patients with psychiatric problems. BZDs may worsen the ventilatory disorders during sleep and are not indicated during pregnancy, or for individuals whose work may require prompt waking up and quick decision-making.
Melatonin receptor agonists (optional)
[1] Ramelteon: This is a new hypnotic drug that has been approved for treating chronic insomnia. It is an agonist with high selectivity for melatonin MT1 and MT2 receptors 96 . The 8 mg recommended dose is rapidly absorbed (0.75-0.94 hour) and presents a half-life of 1.3 hours. Due to its short half-life, ramelteon is indicated for treating initial insomnia [97][98][99] . It is not effective for sleep maintenance. Ramelteon is safe with regard to cognitive effects on the following day, and has not been shown to cause rebound insomnia when discontinued after chronic use. It has not shown any potential for abuse or dependence [100][101][102] .
[2] Agomelatine: This is an antidepressant with agonist action on melatonin receptors 1 and 2, and antagonist effect on serotoninergic 5-HT2C receptors.Because of its melatoninergic agonist effect, agomelatine may be a potential regulator of the circadian rhythm of depressed patients, thus leading to an added contribution for improving depression.Use of this medication at a dose of 25 to 50 mg has been shown to improve sleep quality, with reduced sleep latency, reduced awakening and increased slow-wave sleep 103,104 .
Other pharmacological treatments and new perspectives
Antihistamines are optional, while antipsychotics are not recommended.
New GABAergic drugs, such as tiagabine (a GABA reuptake inhibitor) and gaboxadol (a GABA-A receptor agonist), are still not available in Brazil and are not recommended; they are among the new perspectives for treating insomnia [105][106][107][108][109][110] .
Classification (standard)
Insomnia during childhood is divided into behavioral insomnia, psychophysiological insomnia, insomnia in special populations, insomnia associated with clinical conditions and insomnia associated with the use of medications. The most common clinical causes of insomnia during childhood are pain or cramps, recurrent otitis, reflux, medications (stimulants or corticoids), night asthma attacks and airway obstructions 111 . The main type of insomnia in children is behavioral insomnia, but this is a diagnosis of exclusion. During the first approach towards the child, the clinical causes of insomnia must always be ruled out.
[1] Behavioral insomnia during childhood: This occurs in 10 to 30% of preschool children.The International Classification of Sleep Disorders (ICSD-2005) defines children's difficulty in falling asleep and/or maintaining sleep as the essential characteristic of behavioral insomnia.These problems are associated with certain attitudes among children or their parents, and they can be classified into two types: association disorder and lack-of-limit disorder 112 .
[2] Association disorder: There are certain conditions associated with the start of sleep that are necessary for children to fall asleep and for them to go back to bed after each awakening during the night.Positive associations are conditions that children can provide for themselves (pacifiers/dummies or teddy bears), while negative associations need assistance from someone else (baby bottles or rocking).The negative associations also include external stimuli (television or toys) or different situations (parents' bed or a car ride).When the condition associated with sleep is present, the child falls asleep rapidly.If the condition associated with sleep is not present, the child presents frequent and long-duration nighttime awakenings.
The diagnostic criteria consist of findings that falling asleep is a slow process that requires special conditions, and that associations with falling asleep are problematic and require much effort.When association elements are absent, the start of sleep is significantly delayed or sleep is fragmented.Nighttime awakening requires intervention so that these children can fall asleep again.
[3] Lack-of-limit disorder: This is presented as a refusal or delay in going to bed at the established time. Delaying the time for going to sleep might include several requests (feeling thirsty, needing the bathroom or asking for one more goodnight kiss) or additional activities (watching TV or reading one more story). Once these children fall asleep, their sleep quality is normal and they tend to have few awakenings. However, children with lack-of-limit disorder normally have a sleeping time that is 30 to 60 minutes shorter.
The diagnostic criteria consist of difficulty in falling asleep or maintaining sleep; postponing or refusing to go to bed at the appropriate time or refusing to go back to bed after nighttime awakening; inability of the parents to establish appropriate sleep behavior for the child; lack of explanation for the sleep disorder in terms of other sleep disorders, clinical conditions, mental or neurological diseases, or use of medications.
[4] Insomnia associated with neurological and psychiatric conditions: Most syndromes with central nervous system dysfunction present some kind of sleep abnormality in their clinical presentation.
Diagnosis
Medical evaluation (standard) -The main questions in evaluating sleep disorders in pediatric cases include duration of sleep, sleep routines, events associated with sleep, daily behavior, humor and cognitive function.It is also essential to find out about significant events in the child's life, such as parents' divorce, changes of school or moving house, or events involving siblings.A sleep diary must be kept over a one or two-week period, and this is always useful for finding out about sleeping patterns and for following them over time.Parents are asked to write details about what time the child went to bed, how long the child took to fall asleep, the frequency and duration of nighttime awakening, the time and duration of daily naps, the time of waking up in the morning and the total duration of sleep 113 .
Polysomnography (optional): Polysomnographic testing and actigraphy are optional in diagnosing and treating insomnia in children.They are indicated only when necessary.
Consequences
Children with insufficient duration of sleep present fatigue and irritability.Parents may present negative feelings towards their children and, in order to avoid frustrations during sleeping times, they may postpone the sleep routine, which delays the start of sleep even more and prolongs the cycle of addiction.
Treatment
[1] Behavioral approach (standard): Time for going to sleep: The appropriate time for a child to go to sleep, from infancy to preschool age, should be between 7:00 and 8:30 pm.When bedtime is later than this, children get very tired, irritated and have difficulty in sleeping.The time for going to sleep should not vary between weekdays and weekends.Daytime naps are essential for the child.The need for daytime naps tends to disappear between the ages of three and six years 114 .
Bedtime routine: Establishing a routine is very important for children's lives.The bedtime routine can be started at three months of age, through establishing a constant time for going to sleep.Any electronic equipment near the child must be turned off before starting the ritual for going to sleep.
Falling asleep independently: Children with insomnia are incapable of falling asleep without their parents' intervention, such as rocking or feeding.Children must be put in the cradle or go to bed when they are sleepy, but still awake, and then they must fall asleep independently.
There are several methods that help children fall asleep by themselves, for example, "extinction" alone, gradual "extinction", positive routines, brief visits and weaning children from their parents' presence 115 . "Extinction" alone consists of letting the child cry until falling asleep. "Extinction" is based on the theory that behavior that is reinforced increases in frequency, while behavior that is ignored will disappear with time. If parents are consistent and do not respond to their child's calls, in general, the child will be able to sleep alone after three to five nights. Gradual extinction is an alternative for parents who do not want to use extinction alone. This method consists of putting the sleepy, but awake, child in the cradle and then ignoring the calls or crying for gradually increasing periods. When observing the child at night, the visit must be short and uniform, without lights and without speaking loudly or touching the child.
The gradual reduction of the mother's presence includes an initial phase in which physical contact is reduced at bedtime. Mothers who feed their children at bedtime must do this activity earlier in another room and only rock the child to sleep. After achieving success with this strategy, the child must be put in the cradle and the mother must caress the child's head or arm until the child falls asleep. In the second step, the mother's presence in the bedroom must be reduced. The third step consists of increasing the time between each visit. Positive routines aim to create a pleasant and positive environment not only for the child but also for the parents.
[2] Pharmacological treatment (optional): Pharmacological treatment must be considered as the last option. Most medications prescribed for insomnia among adults are not recommended for children. However, in specific cases, generally when there is an underlying neurological or psychiatric disease, BZDs can be used (clonazepam, clobazam, midazolam or diazepam), as well as zolpidem, zopiclone, chloral hydrate, levomepromazine, promethazine, carbamazepine, clonidine, risperidone and melatonin, always considering the age of the child and the risk/benefit associated with the use of these drugs 116 . | 2017-06-18T03:10:30.232Z | 2010-08-01T00:00:00.000 | {
"year": 2010,
"sha1": "d790a4e7c5250cf5ca6ef804964a9ca4f26c94f2",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/anp/a/9WGDfTBmLQ3pThSFxCk7Hhj/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d790a4e7c5250cf5ca6ef804964a9ca4f26c94f2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55823238 | pes2o/s2orc | v3-fos-license | Genome-Wide Identification and Expression Profiling Analysis of the Galactinol Synthase Gene Family in Cassava (Manihot esculenta Crantz)
Galactinol synthases (GolSs) are the key enzymes in raffinose family oligosaccharide (RFO) biosynthesis, which plays a major role in modulating plant growth and the response to biotic or abiotic stresses. To date, no systematic study of this gene family has been conducted in cassava (Manihot esculenta Crantz). Here, eight MeGolS genes are isolated from the cassava genome. Based on phylogenetic background, the MeGolSs are clustered into four groups. Prediction of the cis-elements in their promoters showed that all MeGolS members contain hormone-, stress-, and tissue-specific related elements to different degrees. MeGolS genes exhibit incongruous expression patterns in various tissues, indicating that different MeGolS proteins might have diverse functions. MeGolS1 and MeGolS3–6 are highly expressed in leaves and midveins. MeGolS3–6 are highly expressed in fibrous roots. Quantitative real-time polymerase chain reaction (qRT-PCR) analysis indicates that several MeGolSs, including MeGolS1, 2, 5, 6, and 7, are induced by abiotic stresses. microRNA prediction analysis indicates that several abiotic stress-related miRNAs, such as mes-miR156, 159, and 169, target the MeGolS genes, which also respond to abiotic stresses. The current study is the first systematic analysis of GolS genes in cassava, and its results provide a basis for further exploration of the functional mechanisms of GolS genes in cassava.
Introduction
Galactinol synthases (GolS, EC 2.4.1.123) are the enzymes that catalyze the reaction of myo-inositol and UDP-galactose to generate galactinol. Galactinol is one of the organic osmolytes in plants [1], and is also considered to be one of the plant defense molecules [2]. It serves as the galactosyl donor in the raffinose family oligosaccharide (RFO) biosynthesis pathway for the generation of raffinose, stachyose, verbascose, and other larger soluble oligosaccharides [3]. Galactinol and RFOs participate in a variety of physiological and developmental functions in plant life [4]. The RFOs accumulate in seeds to protect the embryo from dehydration damage, are associated with seed longevity [5,6], and act as galactose stores for rapid germination requirements [7]. They also accumulate in vegetative tissues and act as signaling molecules in response to a series of biotic [2,8] and abiotic [1,9,10] stresses suffered by plants.
GolS participates in the initial stage of RFO biosynthesis. Since it plays important roles in plants, more and more GolS genes have been investigated in various plants, for instance, Arabidopsis (Arabidopsis thaliana L.) [11], rapeseed (Brassica napus L.) [12], tobacco (Nicotiana tabacum L.) [12], maize (Zea mays L.) [13], tea plant (Camellia sinensis (L.) O. Ktze) [14], wheat (Triticum aestivum L.) [15], tomato (Solanum lycopersicum Mill.) [16], and chickpea (Cicer arietinum L.) [17]. Previous studies indicate that the expression of GolS genes is closely associated with abiotic stress. Seven GolS genes have been characterized in Arabidopsis, in which AtGolS1 and AtGolS2 responded to drought and salinity while AtGolS3 responded to cold [11]. Some studies have also revealed that AtGolS1 responded to heat stress [18][19][20]. Over-expression of AtGolS2 in rice significantly improved the drought tolerance of transgenic plants and increased the grain yield under drought conditions [21]. In chickpea, CaGolS1 and CaGolS2 responded to abiotic stresses, and CaGolS1 was induced prior to CaGolS2 under both heat and oxidative stressors [17]. In wheat, TaGolS1 and TaGolS2 responded to cold stress, but not to drought, heat, or abscisic acid (ABA). Over-expression of TaGolS1 or TaGolS2 in rice showed that the transgenic rice had improved cold tolerance and accumulated significantly higher levels of galactinol and raffinose [22]. Also in wheat, TaGolS3 was upregulated by heavy metal, cold, and salinity stressors. Over-expression of TaGolS3 in transgenic Arabidopsis and transgenic rice revealed that TaGolS3 can improve zinc stress tolerance by regulating reactive oxygen species (ROS) production [15]. In Ammopiptanthus nanus (M. Pop.) Cheng f., AnGolS1 responded to cold, salinity, and drought stresses [23].
Cassava (Manihot esculenta Crantz), which belongs to Euphorbiaceae, is a starch-rich root crop found in tropical and subtropical areas. It has outstanding soil nutrient- and water-use efficiency and can grow in barren and arid soil where other crops cannot grow and develop normally. It has a strong tolerance to drought and can withstand 4-6 months of continuous drought stress, quickly restoring growth and development when the rainy season arrives [24]. However, it is sensitive to low temperatures and salinity [25], and the storage root deteriorates easily after harvest. The molecular processes of cassava in responding to abiotic stresses are still unclear. Although several papers have reported the functions of GolS genes in other species, their features and functions in cassava remain unknown. Considering the significant roles of galactinol synthase in plant growth and response to environmental stressors, eight MeGolS genes in cassava are amplified based on the latest cassava genome database available online (http://www.phytozome.net/cassava). The phylogenetic relationships, gene structures, conserved motifs, chromosome localization, and expression patterns of MeGolSs in plant organs or tissues, as well as their response to abiotic stressors, are systematically analyzed. The results will lay a basis for further investigation of the regulatory functions of MeGolS genes in cassava.
Plant Materials and Treatments
One-month-old cassava seedlings (Manihot esculenta Crantz cultivar, SC8) were treated with 200 mM NaCl for salt stress, 20% PEG6000 for drought stress, and 4 °C for cold stress for inducible expression analysis of MeGolS genes under abiotic stresses. One-month-old cassava seedlings without NaCl, PEG6000, or 4 °C treatment were used as the control. Young leaves (the first to third fully expanded leaves from the top position) were harvested at 0 h, 3 h, 6 h, 9 h, 12 h, and 24 h after the three abiotic stress treatments, respectively. Three biological replicates were set in this experiment. The harvested leaves were immediately frozen in liquid nitrogen and analyzed by qRT-PCR.
Identification, Isolation, and Analysis of MeGolS Family Genes
To identify the MeGolS members in cassava, the previously identified galactinol synthase amino acid sequences from the Arabidopsis thaliana genome database were downloaded and used as queries for a BLASTP search against the Manihot esculenta v6.1 database. Reverse transcription PCR was performed to obtain the full-length cDNAs of GolS from cassava, and the primers utilized are listed in Table S1. The PCR products were confirmed by sequencing and uploaded to GenBank. The predicted amino acid sequences of MeGolSs were analyzed by Jellyfish software (LabVelocity Inc., Los Angeles, CA, USA). Each amino acid sequence of MeGolSs was further verified by the Pfam [26] and SMART [27] online tools. The physicochemical properties of each MeGolS protein were calculated by the online tool ProtParam, including molecular weight (MW), theoretical isoelectric point (pI), and grand average of hydropathicity (GRAVY) [28]. The possible subcellular locations of GolS proteins from cassava and Arabidopsis were predicted using the CELLO2GO tool [29].
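The physicochemical values described here were obtained with the online ProtParam tool; equivalent numbers can also be computed locally. The following is a minimal sketch assuming Biopython is available; the input peptide is a hypothetical placeholder, not an actual MeGolS sequence.

# Minimal sketch: MW, theoretical pI, and GRAVY with Biopython's ProtParam module.
# The peptide below is a made-up placeholder, not a MeGolS protein.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def physicochemical_summary(aa_sequence):
    analysis = ProteinAnalysis(aa_sequence)
    return {
        "length_aa": len(aa_sequence),
        "molecular_weight_kDa": analysis.molecular_weight() / 1000.0,
        "theoretical_pI": analysis.isoelectric_point(),
        "GRAVY": analysis.gravy(),
    }

print(physicochemical_summary("MAPKKLVIGASTEWLQ"))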
Chromosomal Localization of MeGolS Family Genes
The information of chromosomal localization of the MeGolS family genes from the cassava genomics resource (Bioproject: PRJNA234389) were searched.A chromosomal localization map of these genes was drawn by Mapinspect software (Mike Lischke, Berlin, Germany) based on the relative location of each gene on each chromosome.Then the Figure was enhanced by Photoshop software CS (San Jose, CA, USA).
Prediction of MeGolS Gene Structures and the Conserved Motifs
The GolS amino acid sequences from Arabidopsis and cassava were aligned by ClustalW to produce the phylogenetic tree based on the neighbor-joining (NJ) method using Molecular Evolutionary Genetics Analysis software (MEGA 7.0, Institute of Molecular Evolutionary Genetics, The Pennsylvania State University, University Park, PA 16802, USA) [30]. The gene structures of AtGolSs and MeGolSs were drawn by the online software GSDS (Gene Structure Display Server, Center for Bioinformatics (CBI), Peking University, Beijing, China) [31]. The conserved motifs in GolSs were analyzed using the online motif alignment and search tool MEME Suite 4.12.0 [32].
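The alignment and neighbor-joining tree were produced with ClustalW and MEGA; purely as an illustration, the same kind of NJ tree can be sketched from a pre-computed alignment with Biopython. The file name and the simple identity-distance model below are assumptions, not the settings used in the study.

# Minimal sketch: neighbor-joining tree from an aligned protein FASTA (Biopython).
# "gols_alignment.fasta" is a hypothetical file of aligned GolS sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("gols_alignment.fasta", "fasta")
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
nj_tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining method
Phylo.draw_ascii(nj_tree)  # quick text rendering of the tree topology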
Cis-Element Analysis within the MeGolS Family Gene Promoters
To analyze the cis-elements of the MeGolS promoters, the 1500 base pair (bp) sequences upstream of the initiation codon (ATG) of each MeGolS gene were obtained from the cassava genome database. The PlantCARE database (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/) was used to predict the cis-elements within the MeGolS promoters. The data were processed with Microsoft Office Excel to produce the table and figure.
The Expression Profiles of MeGolS Genes in Cassava
To further understand the diverse gene expression profiles of the MeGolS family genes, the authors downloaded the publicly available M. esculenta RNA sequencing (RNA-seq) data from the NCBI website and constructed a heatmap using the OmicShare tools (http://www.omicshare.com/tools).Expression data of the 8 MeGolS genes were drawn from the Transcriptome sequencing datasets (Bioproject ID PRJNA324539).The 11 different cassava tissues were the leaf blade, leaf mid-veins, petioles, stems, lateral buds, shoot apical meristems (SAM), storage roots, fibrous roots, root apical meristems (RAM), somatic organized embryogenic structures (OES), and friable embryogenic callus (FEC).
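The published heatmap was generated with the OmicShare tools from log2-scaled FPKM values; the sketch below shows the same kind of plot in Python and is illustrative only, with a hypothetical CSV layout (rows = MeGolS genes, columns = the 11 tissues).

# Minimal sketch: log2 FPKM heatmap with pandas/seaborn (file name is hypothetical).
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

fpkm = pd.read_csv("megols_fpkm_by_tissue.csv", index_col=0)
log_fpkm = np.log2(fpkm + 1)  # +1 keeps tissues with zero expression defined

sns.heatmap(log_fpkm, cmap="viridis", cbar_kws={"label": "log2(FPKM + 1)"})
plt.tight_layout()
plt.savefig("megols_tissue_heatmap.png", dpi=300)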
MeGolS Gene Expression Profiles under Drought, Cold, and Salt Stresses
The relative expression profiles of MeGolSs were obtained by qRT-PCR using the ABI 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). To evaluate the relative expression level of the MeGolS genes, the 2^-ΔΔCt method was used. The relative expression levels (mean values) were log2-transformed to draw the heatmap with the OmicShare tools (http://www.omicshare.com/tools). The MeGolS gene primers and internal reference gene primers used for qRT-PCR are listed in Table S1. Each sample was analyzed with three technical replicates.
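As a worked illustration of the 2^-ΔΔCt calculation (the Ct values below are invented for the example, not measured data):

# Minimal sketch of the 2^-ddCt relative-expression calculation.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for one MeGolS gene and an internal reference gene.
fold_change = relative_expression(24.1, 18.3, 27.5, 18.2)
print(round(fold_change, 2))  # ~11.3; values > 1 indicate induction vs. control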
Prediction of microRNAs Targeting to MeGolS Genes
Genome sequences of the 8 MeGolS genes were used in an online tool (psRNATarget Server, http://plantgrn.noble.org/psRNATarget/) to search against cassava microRNAs to predict the potential miRNAs targeting MeGolS genes. The interaction network between the MeGolS genes and the predicted miRNAs was built with Cytoscape software (http://www.cytoscape.org/).
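The interaction network itself was built in Cytoscape; purely as an illustration, an equivalent bipartite graph can be assembled in Python with networkx. The specific miRNA-gene pairings below are hypothetical examples, although mes-miR156, 159, and 169 are among the miRNAs named in this study.

# Minimal sketch: miRNA-target network as a bipartite graph (edge list is illustrative).
import networkx as nx

predicted_pairs = [
    ("mes-miR156", "MeGolS1"),
    ("mes-miR159", "MeGolS5"),
    ("mes-miR169", "MeGolS6"),
]

network = nx.Graph()
for mirna, gene in predicted_pairs:
    network.add_node(mirna, kind="miRNA")
    network.add_node(gene, kind="gene")
    network.add_edge(mirna, gene)

# Nodes with the highest degree are the hubs of the predicted network.
print(sorted(network.degree, key=lambda item: item[1], reverse=True))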
Identification and Cloning of MeGolS Family Genes in Cassava
Eight GolS genes in the M. esculenta genome were identified by BLASTP searches using the known AtGolS amino acid sequences as references. The full-length CDS sequences of MeGolS1-8 were isolated by reverse transcription PCR. Detailed information about the CDS sequences of MeGolS1-8 and the corresponding amino acid sequences was deposited in GenBank; the accession numbers can be found in Table 1. The protein lengths and molecular weights of these MeGolS proteins ranged from 320 to 338 aa and from 37.14 to 38.60 kDa, respectively. All the MeGolS proteins were acidic, since their isoelectric points were smaller than seven. All the MeGolS proteins were hydrophilic, since their GRAVY scores were negative. Subcellular localization prediction indicated that MeGolS3 was located in two organelles, the cytoplasm and the chloroplast. The other seven MeGolSs were located in one organelle each; MeGolS1, 2, 4, and 8 were located in the cytoplasm, MeGolS5 and MeGolS6 were located in the plasma membrane, and MeGolS7 was located in the ER (Table 1).
Analysis of Phylogenesis, Conserved Motifs, and Gene Structures of GolS Family Genes in Cassava and Arabidopsis
Multiple sequence alignment was used to manage the GolS protein sequences from cassava and Arabidopsis.After that, a neighbor-joining tree was established by MEGA7.0 (Figure 2A).All the GolSs were divided into four (groups I-IV) groups.Group I contained MeGolS5-7, AtGolS2-3.Group II contained MeGolS4, MeGolS8, and AtGolS1.Group III contained MeGolS1-2, AtGolS5-6.Group IV contained MeGolS3, AtGolS4, and AtGolS7.To investigate the GolS gene structures, the GSDS online software was used to analyze the MeGolS and AtGolS gene exon-intron structures.Except for MeGolS8 having 5 exons and 4 introns, all the other GolS genes had 4 exons and 3 introns.Both MeGolS8 and AtGolS7 did not have upstream and downstream factors.MeGolS7, AtGolS5, and AtGolS6 contained only downstream and no upstream features.The other genes had both upstream and downstream aspects.Obviously, the first intron length of MeGolS8 was significantly longer than the other genes, at more than 8 Kb (Figure 2B).To predict the function of MeGolS proteins, the conserved motifs of GolSs from cassava and Arabidopsis were compared.Ten conserved motifs of GolS proteins were detected by the MEME online tool (Figure 2C, Table S2).These conserved motifs ranged in length from 11 to 50 aa.All the GolS proteins from cassava and Arabidopsis contained the motifs of 1, 2, 3, 4, 5, 7, 8, and 9.However, motif 6 was found in all the GolS proteins except MeGolS8; Motif 10 was only distributed on MeGolS4, 5, 6, 7, 8, not on any AtGolSs.Thus, it was deduced that the same conserved motifs in the homologous GolSs might have similar functions; and motif 10 in MeGolSs might have a special function that is different from Arabidopsis.
The Cis-Element Analysis within MeGolS Family Gene Promoters
To better uncover the potential function of the MeGolS genes, the cis-elements within these promoters were identified by PlantCARE.A total of 74 types of cis-acting elements were detected in the promoters of MeGolSs (Table S3), among which were 10 types of hormone-related, 7 tissue specific-related, and 7 stress-related elements (Figure 3).These hormone-related elements included abscisic acid responsive elements (ABRE), MeJA-responsive elements (CGTCA-motif and TGACG-motif), ethylene-responsive elements (ERE), salicylic acid responsive elements (TCA-element), gibberellin-responsive (GARE-motif, P-box and TATC-box), and auxin-responsive elements (TGA-box and TGA-element).ABRE elements were found to be the most abundant hormone-related elements in six of eight MeGolSs.MeGolS4 had nine of the 10 hormone-related elements.MeGolS1 and MeGolS2 had one hormone-related element ABRE or TCA-element, respectively.These tissue specific-related elements included a shoot-specific expressive element (as-2-box), meristem expressive elements (CAT-box and CCGTCC-box), endosperm expressive elements (GCN4-motif and Skn-1-motif), a seed-specific expressive element (RY-element), and a nodule-specific expressive element (Nodule-site2).The Skn-1 motif was the most abundant tissue specific-related element in MeGolS promoters.The numbers of tissue specific-related elements in each MeGolS promoter ranged from one to three.These stress-responsive elements included heat stress (HSE), low-temperature stress (LTR), drought stress (MBS), wound stress (WUN-motif), anaerobic stress (ARE), anoxic stress (GC-motif), and defense and stress responsive element (TC-rich repeats).At least three of these stress-responsive elements were found to be included in each MeGolS promoter.HSE was the most abundant stress responsive related element and was found in all the 8 MeGolS promoters.
Tissue-Specific Expression of MeGolS in Cassava
To understand the expression abundance of the MeGolS gene family in different tissues or organs, 11 tissues or organs of cassava were analyzed based on the RNA-seq datasets from cassava.The heatmap results showed that the eight MeGolS genes had different expression patterns in tissues and organs (Figure 4).Seven of eight MeGolS genes were expressed in mid-vein (except MeGolS7), fibrous roots (FR) (except MeGolS1), stem (except MeGolS7), and petiole (except MeGolS2); six of eight MeGolS genes were expressed in root apical meristem (RAM) (except MeGolS1, and MeGolS7); five of eight MeGolS genes were expressed in somatic organized embryogenic structures (OES) (except MeGolS1, 2, and 7), shoot apical meristem (SAM) (except MeGolS1, 2, and 8); only 4 MeGolS genes (MeGolS3, 4, 6, and 8) were expressed in storage roots (SR).The genes with the highest expression levels were, MeGolS3 in leaf, mid-vein, stem, and petiole; MeGolS4 in the lateral bud, OES, FEC, SR, and SAM, MeGolS5 in RAM; and MeGolS6 in FR.At gene expression levels, three MeGolS genes (MeGolS3, 4 and 6) were expressed in all 11 tissues and organs; MeGolS5 and 8 were slightly expressed in 10 tissues and organs, respectively, except MeGolS5 in SR and MeGolS8 in SAM; MeGolS2 is slightly expressed in seven tissues and organs except OES, SR, petiole, and SAM; MeGolS1 was only expressed in leaf, mid-vein, lateral bud, FEC, stem, and petiole; while MeGolS7 was only slightly expressed in leaf, lateral bud, FEC, FR, petiole, and SAM.
The Inducible Expressions of MeGolSs under Abiotic Stress
To establish the MeGolS gene expressions under abiotic stresses, quantitative real-time RT-PCR data from different abiotic stressors (drought, cold, and salinity) and different treatment times (3 h, 6 h, 9 h, 12 h, and 24 h) were gathered to construct a heatmap. The expression abundance of each MeGolS was obviously different among the different stresses and treatment times (Figure 5). The expression patterns of MeGolS5 and MeGolS6 were similar in response to drought stress, during which both were upregulated and peaked at 3 h; MeGolS1 and MeGolS2 had similar expression patterns and peaked at 12 h; MeGolS3, 4, and 8 were downregulated. Regarding the response to cold stress, six of the eight tested MeGolSs (all except MeGolS3 and 8) were upregulated, during which MeGolS5 and MeGolS6 had similar expression patterns, with higher expression from 3 h to 24 h except at 12 h; MeGolS1, MeGolS2, and MeGolS7 peaked at 12 h; MeGolS4 was slightly upregulated and peaked at 24 h; however, MeGolS3 and 8 were downregulated at all time points. MeGolS1 was highly induced at all time points in response to salinity and peaked at 12 h; MeGolS5 and MeGolS7 had similar expression patterns, being downregulated at 3 h, then upregulated from 6 h to 24 h, with a peak expression at 12 h; MeGolS2 and 6 were downregulated from 3 h to 9 h, and upregulated from 12 h to 24 h; MeGolS2 was highly expressed at 24 h; and MeGolS4 was upregulated at 6 h and 24 h. MeGolS3 and 8 were downregulated in all stages.
Discussion
Cassava has a strong tolerance to drought and infertile soils but is sensitive to low temperature and salinity. However, there is still insufficient information on the mechanisms by which cassava copes with various abiotic stresses. Raffinose family oligosaccharides (RFOs) have been reported to be important metabolites in the response to various abiotic stresses, including drought, cold, and salinity. Galactinol synthase, a key enzyme in RFO synthesis, plays vital roles in various aspects of plant growth, including seed maturation and tolerance to abiotic and biotic stresses. However, a systematic analysis of the GolS gene family in cassava has been lacking. GolS family genes have been identified in many species, including seven GolS genes in Arabidopsis [11], nine in poplar (Populus trichocarpa Torr. & Gray) [33], four in tomato [16], nine in tobacco [12], and three in tea plant [14]. Eight GolS genes were isolated from cassava in this study. The diversity in gene number might be associated with differences in evolution, genome duplication, or genome size among plants.
Phylogenetic analysis revealed that the GolS proteins of cassava and Arabidopsis are divided into four groups. This classification is consistent with that previously reported for other species, such as poplar [33], tomato [16], rapeseed [12], tobacco [12], and sesame (Sesamum indicum L.) [34]. Proteins that cluster together might have similar functions. In this study, MeGolS4 and MeGolS8 are clustered with AtGolS1, MeGolS5-7 are clustered with AtGolS2-3, MeGolS1-2 are clustered with AtGolS5 and AtGolS6, and MeGolS3 is clustered with AtGolS4 and AtGolS7 (Figure 2A). Previous studies indicated that AtGolS1, AtGolS2, and AtGolS3 are involved in responses to abiotic stresses such as drought, salinity [11], and temperature stress [19]. Thus, MeGolS4-8 might function in responses to abiotic stresses.
Genomic comparison is a rapid means of obtaining knowledge about a less-studied taxon. The eight MeGolS protein sequences identified and isolated in this study have very similar amino acid lengths (320-338 amino acids) and molecular weights (37.1-38.6 kDa), with pI values in the range of 5.04-5.47. Similar results have been found in other plant species, including coffee (Coffea canephora Pierre ex Froehn) [35], cotton (Gossypium hirsutum L.) [36], tomato [16,37], and purple false-brome (Brachypodium distachyon) [16]. All eight MeGolS genes in this study have four exons and three introns. According to previous research, the exon numbers of GolS genes in different species vary between two and four; for example, GolS genes in cotton have three exons [36], those in pea (Pisum sativum L.) have at least three exons [38], those in tomato have three to four exons, and those in B. distachyon have four exons [16]. This suggests a slight diversity in the gene structure of GolS genes among species.
Analysis of the expression abundance of MeGolS genes in different tissues and organs of cassava revealed that MeGolS genes exhibit distinct expression patterns in various tissues (Figure 4), indicating that different MeGolS proteins might have diverse functions. Tissue-specific expression of GolS genes has been described in other species, including poplar [33], tomato [16], rapeseed [12], tobacco [12], and sesame [34]. Previous research indicates that GolS expression in leaves is associated with phloem export, carbon storage, and plant protection against abiotic stress and oxidative damage [39], whereas GolS expression in roots is associated with osmotic stress [40]. In this study, MeGolS1 and MeGolS3-6 are highly expressed in leaves and midveins, and MeGolS3-6 are highly expressed in fibrous roots. It has been reported that the Pa3gGolSI homologue and BnGolS3 subfamily members are highly expressed in leaves [3,12]. However, not all plants show high GolS gene expression in leaves; for example, no highly expressed GolS genes were observed in tobacco leaves [12]. Thus, the MeGolS genes with tissue-specific expression might have important functions in phloem export, carbon storage, and plant protection against abiotic stress and oxidative damage.
Plants frequently face abiotic stressors such as drought, cold, and high salinity. To help plants avoid damage from adverse conditions, researchers are continually exploring genetic resources that protect plants against cold [41][42][43][44], drought [45][46][47], salt [48][49][50][51], and other stresses. In this study, qRT-PCR was employed to measure the transcript levels of each MeGolS under different abiotic stressors. Differential expression of GolS genes linked to abiotic stresses has been reported in several plants [11,12,16,33,34,52]. In the current study, MeGolS genes responded strongly to salt, low-temperature, and drought stresses. The responses to the three treatments varied among the respective MeGolS genes (Figure 4), as has also been found in other plants; for instance, in Verbascum phoeniceum L., VpGolSs showed diverse expression under salt, cold, and drought stresses [53]. In poplar, the expression of the Pa × gGolSII isoform was highly temperature-related, while the Pa3gGolSI isoform was expressed persistently throughout the year [3], which suggests that each member of the gene family may play a different physiological role. MeGolS5 and MeGolS6 were induced by drought stress and peaked rapidly at three hours after drought treatment. MeGolS1, 2, and 7 were also responsive to drought stress (Figure 4). As described in the phylogenetic analysis, MeGolS5-7 are clustered with AtGolS2 and 3 (which have been shown to be significant for drought stress [11]), so it is not surprising that they were induced by drought stress. MeGolS5 was highly responsive to cold, followed by MeGolS6 (Figure 4). These results indicate that the mechanisms of GolS gene expression under drought and cold stress might be similar. MeGolS1 responded rapidly to salt stress and reached peak expression at 12 h, and is thus a very salt-sensitive MeGolS gene. Additionally, MeGolS2 and 7 are involved in the salt stress response, but their responses were comparatively weaker than that of MeGolS1. GolS genes upregulated by salt stress have also been found in other plants, such as AtGolS1 and 2 from Arabidopsis [11] and PtrGolS3, 4, and 6 from P. trichocarpa [52]. However, in cassava, the most salt-sensitive gene, MeGolS1, is not clustered with AtGolS1 and AtGolS2 in the phylogenetic analysis. Thus, the current results indicate that cassava might have a specific mechanism for responding to salt stress. Overall, several MeGolS genes are induced by salt, drought, and cold treatments, indicating that these genes might be crucial for the cassava response to abiotic stresses. miRNAs are crucial for regulating gene expression to adapt to biotic and abiotic stresses [54][55][56][57]. There were 14 miRNAs in wild eggplant roots (Solanum linnaeanum L.) [58], 71 miRNAs in radish (Raphanus sativus L.) [59], 123 miRNAs in tomato [60], and 37 miRNAs in maize (Zea mays L.) [61] that showed significant changes in expression under salt stress. There were 18 miRNAs in rice (Oryza sativa L.)
[62] and 30 miRNAs in Populus tomentosa Carr. [63] that were identified as cold-responsive. There were also 35 miRNAs in tef (Eragrostis tef (Zucc.) Trotter) [64], 13 miRNAs in wild emmer wheat (Triticum dicoccoides ssp. dicoccoides (Korn.) Thell.) [65], and 34 miRNAs in citrus (Citrus junos Siebold) [66] that respond to drought stress. Regarding cassava, 881 miRNAs have been identified by high-throughput sequencing [67], and a further 38 new miRNAs have since been identified from cassava [68]. Here, 70 mes-miRNAs were predicted to target the eight MeGolS genes in cassava. These miRNAs belong to 14 miRNA families: mes-miR156, 159, 164, 169, 171, 172, 319, 397, 403, 827, 828, 2111, 1446, and 2275. Previous studies indicate that miR156, 159, and 169 are related to temperature, drought, and salt stresses [69][70][71][72][73][74][75][76]. Thus, the cleavage or translational repression activities of mes-miR156, 159, and 169 targeting MeGolS1, MeGolS5, MeGolS6, or MeGolS7 might account for the qRT-PCR results showing these genes' responses to abiotic stresses. The prediction of miRNAs targeting MeGolSs may be useful for understanding the regulatory network through which MeGolSs respond to abiotic stress in cassava.
Conclusions
In the cassava genome, eight MeGolSs were identified, and their physicochemical properties, gene structures, protein motifs, and classification were investigated. By combining promoter cis-element prediction, expression patterns in different tissues and in response to abiotic stress, and the prediction of miRNAs targeting the MeGolSs, we have expanded the functional knowledge of MeGolS genes. These results should be useful for further studies of the biological functions of MeGolSs in cassava.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2073-4395/8/11/250/s1, Table S1: Primers used for gene cloning and qRT-PCR analysis of MeGolS genes, Table S2: List of the putative motifs of GolS proteins from cassava and Arabidopsis, Table S3: Predicted cis-acting elements in the promoter regions of MeGolSs, Table S4: Predicted miRNAs targeting MeGolS genes.
Author Contributions: R.L. and Y.H. were responsible for all aspects of the research, including experimental design, data acquisition and analysis, and manuscript preparation. S.Y., J.F., Y.Z., T.Q. and X.L. worked on the preparation of the studied materials, gene cloning, and qRT-PCR. Y.Y., J.L. and S.F. worked on primer design and the technical and informatics analyses of these genes. X.H. and J.G. were responsible for the programs and all experiments, critically revised the manuscript, and provided the final approval of the article.
Figure 1 .
Figure 1. Chromosomal locations of the 8 predicted MeGolS genes. The chromosome number is marked above each chromosome. The red arrows indicate gene orientation on the chromosomes. The number below each chromosome indicates chromosome size. The number next to each gene represents the position of the gene on the chromosome.
Figure 2 .
Figure 2. Comparative analysis of the phylogenetics, structure, and conserved motifs of GolSs from cassava and Arabidopsis. (A) The phylogenetic tree of MeGolSs and AtGolSs was constructed using MEGA 7.0, and all the GolSs were classed into four groups. (B) GSDS software was employed to generate the gene structures of MeGolSs and AtGolSs. The yellow boxes are CDS, the blue boxes are 5′UTR or 3′UTR, and the black lines are introns. (C) Conserved motifs of MeGolS and AtGolS proteins from cassava and Arabidopsis. The gene order in B and C is the same as in A.
Figure 3 .
Figure 3. Cis-acting elements in cassava GolS gene promoters. (A) Number of each selected cis-acting element in the MeGolS gene promoters. (B) Statistics for the total number of MeGolS genes containing the corresponding cis-acting elements (red dots) and the total number of cis-acting elements in each MeGolS (blue boxes).
Figure 4 .
Figure 4. Expression abundance of MeGolSs in different tissues and organs of cassava. Leaf, leaf blade; midvein, leaf mid-vein; SAM, shoot apical meristem; SR, storage roots; FR, fibrous roots; RAM, root apical meristem; OES, somatic organized embryogenic structures; FEC, friable embryogenic callus. The bar in the upper right corner indicates the FPKM (fragments per kilobase of exon per million reads mapped) values, log2-transformed; different colors indicate different expression levels.
Figure 5 .
Figure 5. Expression abundance of MeGolSs under drought, cold, and salinity stresses in cassava. The bars indicate the relative gene expression levels, calculated with the 2^-ΔΔCt method. The expression levels shown are mean values, log2-transformed.
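As a point of reference, the relative expression calculation named in the caption can be sketched as follows. This is a generic illustration of the 2^-ΔΔCt method rather than the authors' analysis script; the Ct values and the choice of reference gene in the usage example are hypothetical.

```python
import numpy as np

def relative_expression_log2(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Return the log2 relative expression via the 2^-ddCt method.

    ct_target / ct_reference: Ct of the gene of interest and of the internal
    reference gene in the treated sample; ct_*_ctrl: the same Ct values in the
    untreated control sample.
    """
    d_ct_treated = ct_target - ct_reference            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_treated - d_ct_control                # compare to control condition
    fold_change = 2.0 ** (-dd_ct)                      # 2^-ddCt
    return np.log2(fold_change)                        # equals -ddCt

# Hypothetical triplicate Ct values for one MeGolS gene at one stress time point;
# the reference gene is assumed (e.g., a housekeeping gene such as actin).
log2_fc = relative_expression_log2(np.mean([22.1, 22.3, 22.0]), np.mean([18.5, 18.4, 18.6]),
                                   np.mean([24.9, 25.1, 25.0]), np.mean([18.6, 18.5, 18.7]))
print(f"log2 fold change: {log2_fc:.2f}")
```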
Funding:
This research was funded by the National Natural Science Foundation of China (No.31601359; 31600196 and 31671767), the key research and development projects of Hainan Province (No. ZDYF2017073), the Earmarked Fund for China Agriculture Research System (No.CARS-11-HNGJC), and Central Public-interest Scientific Institution Basal Research Fund for Chinese Academy of Tropical Agricultural Sciences (No. 1630052016004). | 2018-12-16T01:53:56.978Z | 2018-11-03T00:00:00.000 | {
"year": 2018,
"sha1": "4c4b75e7e441446543fbad37013a96ad9b191820",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/8/11/250/pdf?version=1541239477",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4c4b75e7e441446543fbad37013a96ad9b191820",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
268042840 | pes2o/s2orc | v3-fos-license | Analysis of fate determination of vegetative cells with reduced phycocyanin content in a multicellular cyanobacterium using Raman microscopy
The one-dimensional multicellular cyanobacterium Anabaena sp. PCC 7120 exhibits two different cell types under nitrogen-deprived conditions. We found that the intensity of the Raman band at 1,629 cm −1 , which is associated with phycocyanin, was higher in undifferentiated cells (vegetative cells) than in differentiated cells (heterocysts). We observed cells whose band intensity at 1,629 cm −1 was statistically lower than that of vegetative cells, and named them “proheterocysts”. We found that proheterocysts did not necessarily differentiate, and could divide or revert to being vegetative cells, as defined by having a higher band intensity at 1,629 cm −1 .
Description
Anabaena sp. PCC 7120 (hereafter referred to as Anabaena) is a multicellular cyanobacterium that forms one-dimensional filaments composed of many vegetative cells. These cells express Photosystem I (PSI) and II (PSII) to carry out photosynthesis. Under conditions free of nitrogen compounds, several vegetative cells differentiate into nitrogen-fixing heterocysts at roughly 10-cell intervals (Golden et al., 2003; Kumar et al., 2009; Ishihara et al., 2015) (Figure A). Once differentiated, a heterocyst never divides or dedifferentiates. After the number of vegetative cells increases through division, new heterocysts differentiate approximately midway between older heterocysts. Heterocysts can be easily identified with optical microscopy because they are larger and rounder than vegetative cells. In addition, phycobilisome complexes, which are a crucial element of PSII, chemically decompose or are not activated in heterocysts (Zhang et al., 2006). As photosynthesis and nitrogen fixation are incompatible reactions (Meeks et al., 2002), heterocysts and vegetative cells exchange their respective metabolites with each other (Figure B).
We previously studied the pigment composition of vegetative cells and heterocysts of Anabaena with Raman spectral measurements (Ishihara et al., 2013; Ishihara and Takahashi, 2023; Ishihara and Imai, 2023) (Figures C and D). The Raman spectral bands were assigned to vibrations of the light-harvesting pigments chlorophyll a, carotene, phycocyanin, and allophycocyanin at an excitation wavelength of 785 nm (Ishihara and Takahashi, 2023). The band positions in the Raman spectra of the vegetative cells were almost identical to those of the heterocysts; however, the band intensities exhibited some distinctive differences. In particular, the intensity of the Raman band for phycocyanin was significantly lower in heterocysts than in vegetative cells (Ishihara and Takahashi, 2023; Ishihara and Imai, 2023). Considering that phycocyanin is a component of the light-harvesting phycobilisome complex, the findings from our previous study correlate well with those from earlier studies reporting that phycobilisomes decompose during differentiation (Asai et al., 2009). Furthermore, our study established the normalized intensity at 1,629 cm −1 as representative of phycocyanin content (Ishihara and Takahashi, 2023; Ishihara and Imai, 2023).
Vegetative cells that exhibited a statistically significant decrease in Raman band intensity at 1,629 cm −1 were considered candidates for future heterocysts (Ishihara and Imai, 2023). In our previous study, we named such cells "proheterocysts"; these cells have begun to lose their phycobilisome structures, yet are still functionally and morphologically vegetative cells (Ishihara and Imai, 2023). In that study, we identified seven proheterocysts in three Anabaena filaments and confirmed that one of them eventually differentiated into a heterocyst (Ishihara and Imai, 2023). However, it was not possible to check whether the other six proheterocysts had differentiated, as observation extended only 8 hours after the Raman spectral measurement. Thus, the aim of the current study was to investigate how proheterocysts form and to determine whether they invariably differentiate into mature heterocysts.
To address these questions, we first measured the Raman spectra of individual cells along an Anabaena filament at 120, 130, and 140 hours after nitrogen depletion. At 130 hours, we identified proheterocysts among the vegetative cells by their statistically lower band intensities at 1,629 cm −1 . We then determined the intensity value at 1,629 cm −1 and the phenotype (vegetative cell or heterocyst) at 120 and 140 hours for all cells identified as proheterocysts at 130 hours. We adopted the definition of proheterocyst used in our previous study (Ishihara and Imai, 2023): the band intensity at 1,629 cm −1 of a proheterocyst is lower than the average band intensity at 1,629 cm −1 of all vegetative cells in the same filament by more than two standard deviations.
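For illustration, the classification rule described above can be expressed as a short computational sketch. The function and variable names are hypothetical, this is not the authors' code, and per-filament grouping is assumed to be handled by the caller.

```python
import numpy as np

def classify_proheterocysts(intensities_1629, heterocyst_mask):
    """Flag proheterocysts along one filament.

    intensities_1629 : normalized Raman band intensities at 1,629 cm^-1, one per cell.
    heterocyst_mask  : boolean array, True where a cell is already a heterocyst.

    A cell is flagged as a proheterocyst when it is not a heterocyst but its band
    intensity lies more than two standard deviations below the mean intensity of
    all vegetative (non-heterocyst) cells in the same filament.
    """
    intensities = np.asarray(intensities_1629, dtype=float)
    is_het = np.asarray(heterocyst_mask, dtype=bool)
    vegetative = intensities[~is_het]
    threshold = vegetative.mean() - 2.0 * vegetative.std(ddof=1)
    return (~is_het) & (intensities < threshold)
```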
We analyzed the Raman spectra of individual cells from three Anabaena filaments (Filaments A-C). The segment length, defined as the number of vegetative cells bounded by two heterocysts, was 19.33±13.35, 15.50±4.65, and 21.33±8.74 (avg±s.d.) in Filaments A, B, and C, respectively, at 130 hours. The Raman spectra were obtained as described previously (Ishihara and Takahashi, 2023; Ishihara and Imai, 2023) and as explained in the Methods. The intensity values of the Raman spectra from 990 to 1,770 cm −1 were normalized to unity. The ranges of the band intensities at 1,629 cm −1 among the vegetative cells and heterocysts were 0.00295-0.0126 and 0.00134-0.00243, respectively. Notably, the upper limit of the band intensity among the heterocysts was lower than the lower limit among the vegetative cells in all filaments; that is, all vegetative cells exhibited distinctively higher phycocyanin band intensities than heterocysts.
We first plotted the normalized band intensity at 1,629 cm −1 against cell area at 130 hours to detect fluctuations in phycocyanin content in individual cells (Figures E-G). There were three, four, and three proheterocysts in Filaments A, B, and C, respectively (red dots in Figures E-G). Given that the cell areas of the proheterocysts were comparable to those of the vegetative cells, phycocyanin decomposition was considered to have already begun in the proheterocysts. Next, we superimposed the band intensities at 1,629 cm −1 and the cell areas at 120 and 140 hours of cells identified as proheterocysts at 130 hours onto the scatter plots of data from their respective filaments at 130 hours (Figures H-R). As Figures H-R show, the band intensities at 1,629 cm −1 of cells identified as proheterocysts at 130 hours were not all statistically lower than those of vegetative cells at 120 hours. This suggests that the cells transitioned from vegetative cells to proheterocysts during this 10-hour period. At 140 hours, however, the proheterocysts identified at 130 hours exhibited three distinct cellular fates. First, proheterocysts d and j in Filaments A and C at 130 hours had differentiated into heterocysts by 140 hours (Figures K and Q). The new heterocysts were as large as the preexisting heterocysts, and their band intensities at 1,629 cm −1 had decreased to a level comparable to those of heterocysts at 140 hours. Second, proheterocysts b, c, f, i, and k in Filaments A-C at 130 hours had each divided into two vegetative cells by 140 hours (Figures I, J, M, P, and R). The cell areas and band intensities at 1,629 cm −1 of the newly divided cells were similar to those of the preexisting vegetative cells. Third, proheterocysts a, e, g, and h in Filaments A-C at 130 hours had neither differentiated nor divided by 140 hours, and exhibited a phycocyanin content similar to that of vegetative cells at 140 hours (Figures H, L, N, and O). Their cell areas had increased but were still within the range of vegetative cell areas, and their band intensities at 1,629 cm −1 were no longer statistically lower at 140 hours.
In conclusion, proheterocysts do not necessarily differentiate into heterocysts. When proheterocysts do not differentiate, they may divide, or they may neither divide nor differentiate. When the proheterocysts divided, the Raman band intensities at 1,629 cm −1 of the daughter cells were restored to levels similar to those exhibited by the other vegetative cells (excluding proheterocysts). However, even when the proheterocysts did not divide or differentiate, their Raman band intensities at 1,629 cm −1 were restored; that is, the proheterocyst reverted to being a vegetative cell. In addition, all of the proheterocysts were generated from vegetative cells that exhibited Raman band intensities at 1,629 cm −1 similar to those of other vegetative cells. Therefore, we propose that proheterocysts represent a transient precursor state to differentiation and may reversibly revert to the vegetative cell state.
Bacterial strains and culture
Anabaena sp. PCC 7120 (wild type) was cultured in 25 ml of BG-11 0 (lacking sodium nitrate) liquid medium at 30℃ under white fluorescent light (FL30SW-B, Hitachi Co.) at 45 μmol photons m -2 s -1 . The culture was shaken and incubated at 120 rpm until the optical density at 730 nm (OD 730 ) was 0.4-0.5. The liquid culture was washed three times with BG-11 0 liquid medium, diluted to an OD 730 of about 0.2, and placed under a fresh BG-11 0 solid medium plate containing 1.5% agar (Becton, Dickinson and Company, USA) in a glass-bottom dish. The sample was placed in the Raman microscope (see below), kept at 30℃, and illuminated with a white fluorescent lamp at 45 μmol photons m -2 s -1 .
We used an inVia confocal Raman spectrometer equipped with a CCD camera (inVia Reflex, Renishaw) to measure the Raman spectra. The excitation wavelength was 785 nm. The central point of each cell was chosen along each filament, and the Raman spectra of individual vegetative cells and heterocysts were measured. A typical Raman spectrum of a small confocal volume in the cytoplasm (horizontal diameter, ~ 1 μm) of a single living vegetative cell (~ 3 μm diameter) provides a signal-to-noise ratio sufficient for the analysis (~1 s per pixel, with a 785 nm laser at ~20 mW directed at the confocal volume). In this study, the baselines of the Raman spectra were corrected. The baseline-corrected Raman spectrum y'(ν) was calculated as y'(ν) = y(ν) - y poly (ν), where y poly (ν) is a fitted polynomial curve constructed using the following procedure. (i) For a spectrum truncated between the minimum and maximum Raman shift positions ν min and ν max , a polynomial function of order d was chosen to fit the baseline (d = 3). (ii) Using the least squares method, the polynomial function y poly was fitted to the Raman spectrum y. (iii) The data points of the Raman spectrum y were split into those lying above and those lying below the fitted baseline y poly . (iv) The number of data points above y poly was defined as N A , and the number of data points below y poly was defined as N B . When N B was larger than N A , the points above y poly were removed, the Raman spectrum y was replaced by the remaining lower part of the spectrum, and procedure (ii) was repeated. When N A was larger than N B , the baseline was considered to be the best fit and optimal.
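As an illustration, the iterative baseline-fitting procedure described in steps (i)-(iv) could be sketched as follows. This is not the authors' code; it simply transcribes the stopping rule stated above, and the maximum iteration count and the guard for the case of no points above the fit are our own assumptions.

```python
import numpy as np

def subtract_baseline(shift, intensity, degree=3, max_iter=100):
    """Iterative polynomial baseline correction (a sketch of the described procedure).

    shift, intensity : Raman shift (cm^-1) and raw intensity arrays, already
                       truncated to [shift_min, shift_max].
    degree           : order of the baseline polynomial (d = 3 in the text).
    """
    x, y = np.asarray(shift, float), np.asarray(intensity, float)
    keep_x, keep_y = x.copy(), y.copy()
    coeffs = np.polyfit(keep_x, keep_y, degree)        # step (ii): least-squares fit
    for _ in range(max_iter):
        fit = np.polyval(coeffs, keep_x)
        above = keep_y > fit                           # step (iii): split about the fit
        n_a, n_b = int(above.sum()), int((~above).sum())
        if n_a > n_b or n_a == 0:                      # step (iv): N_A > N_B -> accept baseline
            break
        keep_x, keep_y = keep_x[~above], keep_y[~above]  # drop points above the fit and refit
        coeffs = np.polyfit(keep_x, keep_y, degree)
    baseline = np.polyval(coeffs, x)
    return y - baseline                                # y'(v) = y(v) - y_poly(v)
```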
Measurement of the cell area
We measured the cell area using ImageJ software (ver. 1.54f, NIH).
Figure 1 .
Figure 1.Raman band intensities at 1,629 cm -1 of individual cells at 130 hours after nitrogen depletion and transitions in cellular state at 120 and 140 hours of cells that were identified as proheterocysts at 130 hours.: (A) Heterocyst differentiation under nitrogen source starvation conditions.Heterocysts are visible as expanded cells in which the phycobilisome complexes have been degraded.A bright field micrograph and a phycobilisome fluorescence micrograph are shown on the left and right panels.Black arrows indicate heterocysts.Scale bar = 5 μm.(B) Heterocysts and vegetative cells exchange nitrogen compounds and carbohydrates by periplasmic diffusion along the Anabaena filament.(C, D) An example of the normalized Raman spectra obtained from vegetative cells and heterocysts at an excitation wavelength at 785 nm.This panel is from Ishihara and Imai (2023).(E-G) Scatter plot of normalized band intensities at 1,629 cm −1 and cell areas at 130 hours after nitrogen depletion.The cell area is shown as the ratio of the area of each cell in each filament to the average area of the vegetative cells in the corresponding filament.Black and blue points represent vegetative cells and heterocysts in each filament, and red points indicate proheterocysts.The number of the data points was 152, 125, and 110, which corresponds to the number of cells in Filaments A-C, respectively.Filaments A, B, and C included 7, 5, and 4 heterocysts, respectively.(H-R) Normalized band intensities at 1,629 cm −1 and cell areas at 120 and 140 hours of cells identified as proheterocysts at 130 hours were superimposed on the scatter plots of the respective filaments at 130 hours (that is, Figures E-G).The gray points in each graph are the same as the black and blue points in Figures E-G.The magenta points represent data from proheterocysts identified at 130 hours ("ph" on Figure H-R stands for proheterocyst).The cyan and yellow points represent data at 120 and 140 hours from cells identified as proheterocysts at 130 hours.Thus, the cyan, magenta, and yellow points show data from the same cells at different time points after nitrogen depletion.When two yellow points are shown in a single graph (Figures I, J, M, P, and R), they represent the sister cells generated by division of a proheterocyst identified at 130 hours. | 2024-02-29T11:20:33.031Z | 2024-02-20T00:00:00.000 | {
"year": 2024,
"sha1": "6b2dec67b70ebb34b3c2dd2f24f0bd3bc0e54dda",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6b2dec67b70ebb34b3c2dd2f24f0bd3bc0e54dda",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
258928727 | pes2o/s2orc | v3-fos-license | No evidence for seasonal variations of the incidence of testicular germ cell tumours in Germany
The pathogenesis of testicular germ cell tumours (GCTs) is still incompletely understood. Any progress in its understanding must derive from observational studies. Recently, it has been suggested that the incidence of GCTs may follow a seasonal pattern based on circannual changes in the Vitamin D serum levels, with maximum incidence rates in winter months. To examine this promising hypothesis, we studied monthly incidence rates of testicular GCTs in Germany by analysing 30,988 GCT cases aged 15–69 years, diagnosed during 2009–2019. Monthly incident case numbers with data regarding histology and patient age were obtained from the Robert Koch Institut, Berlin, along with annual male population counts. We used precision weighting for deriving pooled monthly incidence rates for GCTs of the period 2009–2019. We stratified pooled rates by histology (seminoma and nonseminoma) and age (15–39 and 40–69 years). By assuming a cyclical effect, we used an estimator of the intensity of seasonal occurrence and report seasonal relative risks (RR). The mean monthly incidence rate was 11.93/105 person-months. The seasonal RR for testicular cancer over-all is 1.022 (95% CI 1.000–1.054). The highest seasonal RR was found in the subgroup of nonseminoma aged 15–39 years, with a RR 1.044 (95% CI 1.000–1.112). The comparison of the pooled monthly rates of the winter months (October—March) with the summer months (April-September) revealed a maximum relative difference of 5% (95% CI 1–10%) for nonseminoma, aged 15–39 years. We conclude that there is no evidence of a seasonal variation of incidence rates of testicular cancer. Our results are at odds with an Austrian study, but the present data appear sound because the results were obtained with precision weighted monthly incidence rates in a large population of GCT cases.
Introduction
There is wide-spread international consensus that adult testicular germ-cell tumours (GCTs) derive from germ cell neoplasia in situ (GCNis). These precursor cells originate from primordial germ cells that fail to follow the normal maturation process of embryonic germ cells during embryogenesis [1]. As these cells keep their embryonic pluripotency characteristics during later life, they may develop into germ cell neoplasms after puberty [2,3]. While the basic principles of this theory are undisputed, the details of the pathogenetic pathway are widely unknown [4]. As there is no experimental model of GCT and as animal testicular tumours are different from their human counterparts [5], any progress in understanding the pathogenesis of human GCTs must rely on systematic observational studies in all fields of clinical and preclinical medicine. Epidemiological studies in particular hold great potential for generating hypotheses. Recently, a study by the National Austrian Cancer Registry reported a significant seasonal variation in the incidence of GCT with peak incidence rates in the winter months, October to December and January to March [6]. The authors suggested that sun-exposure-related seasonal variations of vitamin D3 serum levels are associated with the changes in the GCT incidence rates. This hypothesis appears quite appealing because, if confirmed in a larger patient population, it could also be a clue to other epidemiological peculiarities of GCTs such as the north-south gradient of incidence [7]. Also, as GCTs afflict the male reproductive organs, seasonal variation of the GCT incidence would be consistent with the recently documented marked circannual variations of sperm parameters [8]. Therefore, we studied the monthly incidence rates of adult GCTs arising in Germany during the last decade.
Material & methods
In Germany, there is no national cancer registry, but reporting incident cancer cases to the cancer registries of the federal states is compulsory for all cancer-care providing institutions. The Centre for Cancer Registry Data (Zentrum für Krebsregisterdaten, ZfKD), a subdivision of the Robert Koch-Institut (RKI), Berlin, routinely collects records from all population-based cancer registries in Germany. After quality control of the incoming records, the data are merged into a central national database annually. We received data from the ZfKD on all incident cases of primary malignant testicular cancer (ICD-10: C62) aged 15-69 years and diagnosed 2009-2019. We included only data from the federal states of Niedersachsen, Schleswig-Holstein, Hamburg, Bremen, Nordrhein-Westfalen, Saarland, Hessen, Rheinland-Pfalz, Baden-Württemberg, and Bayern, because these states have an estimated completeness of registration above 90% in each year of the period 2009-2019. Federal states reporting data for only parts of the entire observation period were excluded from the present analysis. The proportion of cases identified by death certificate only (DCO) was 2%. In addition, we received the official German population counts for each of the calendar years 2009-2019 from the RKI by age group [9]. As no monthly population counts were available, we assumed an even distribution over the year and calculated the assumed monthly population counts by simply dividing the annual count by 12.
As the focus of the present study was specifically on testicular germ-cell tumours, we used ICD-O morphology codes to categorize testicular neoplasms as seminoma or nonseminoma. We did not differentiate spermatocytic tumour cases from classical seminoma because this entity is clinically very similar to seminoma, it is very rare (< 1% of all seminomas), and it is characterized by frequent patho-histological misclassifications [10].
Ethical approval was provided by the ethics committee of the Ärztekammer Hamburg on May 20, 2022 (2022-100828-BO-ff). The research was carried out at the Asklepios Klinik Altona, Hamburg, Germany, and at the Institut für Medizinische Informatik, Biometrie und Epidemiologie, Universitätsklinikum Essen, Germany. The need for informed consent of patients was waived by the ethics committee since only registry-based anonymous data were used for analysis.
Statistical methods
We first calculated the annual incidence for the entire population of testicular cancer for the observation period 2009-2019 (cases/10 5 person-years [py]). We then calculated monthly incidence rates for each year, stratified by histologic group (seminoma and nonseminoma) and age (15-39 years and 40-69 years). Stratification by age categories was done because the clinical features of younger patients (<40 years) differ somewhat from those of older patients, and it thus appeared rational to look for differences regarding epidemiological characteristics. Another reason for analysing age categories was to be consistent with the Austrian report [6]. We weighted each monthly rate by its precision, that is, by the inverse of its variance, and thereafter pooled the monthly rates across the 11 years, month by month (inverse-variance weighting) [11]. For each pooled incidence rate, we calculated 95% confidence intervals (CIs). Monthly incidence rates are reported as cases per 10 5 person-months (pm).
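As an illustration of the precision weighting described above, the pooling step could be sketched as follows. This is not the authors' code; the Poisson approximation used for the variance of each monthly rate is an assumption, and months with zero cases would need special handling.

```python
import numpy as np

def pooled_monthly_rates(cases, person_months):
    """Inverse-variance (precision-weighted) pooling of monthly incidence rates.

    cases, person_months : arrays of shape (n_years, 12) with monthly case counts
    and population time at risk. Rates are returned per 10^5 person-months, with
    var(rate) approximated from the Poisson assumption as cases / person_months^2.
    """
    cases = np.asarray(cases, float)
    pm = np.asarray(person_months, float)
    rates = cases / pm * 1e5
    var = cases / pm**2 * 1e10                     # Poisson variance of the scaled rate
    weights = 1.0 / var                            # precision weights
    pooled = (weights * rates).sum(axis=0) / weights.sum(axis=0)
    pooled_se = np.sqrt(1.0 / weights.sum(axis=0))
    ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return pooled, ci_low, ci_high                 # one pooled rate and 95% CI per calendar month
```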
For the graphical display of pooled monthly rates, we also calculated the average of monthly incidences across all pooled monthly rates to make deviations from the annual average of the rates easily visible.
To estimate the intensity of seasonal occurrence of GCT, we used an estimator based on the assumption of a single cyclical effect (harmonic) that can be well approximated by a sine curve [12,13]. We used the EpiSheet workbook to estimate the peak/low ratio [14]. The estimated peak/low ratio is also called the seasonal relative risk (RR).
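For readers without access to the EpiSheet workbook, an equivalent single-harmonic estimate of the peak/low ratio can be sketched as follows. This is an illustrative approximation, not the procedure implemented in EpiSheet, and it omits the confidence interval calculation.

```python
import numpy as np

def seasonal_peak_low_ratio(monthly_rates):
    """Estimate the seasonal relative risk (peak/low ratio) from 12 pooled monthly
    rates by fitting a single harmonic:
        rate(m) ~ a0 + a1*cos(2*pi*m/12) + a2*sin(2*pi*m/12)
    """
    m = np.arange(12)
    X = np.column_stack([np.ones(12), np.cos(2 * np.pi * m / 12), np.sin(2 * np.pi * m / 12)])
    a0, a1, a2 = np.linalg.lstsq(X, np.asarray(monthly_rates, float), rcond=None)[0]
    amplitude = np.hypot(a1, a2)                   # amplitude of the fitted sine curve
    peak, low = a0 + amplitude, a0 - amplitude
    return peak / low                              # seasonal RR; 1.0 means no seasonality
```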
An additional analysis compared the winter months (October-March) with the summer months (April-September). The incidence rates for these two seasons were computed after precision-weighted pooling of the monthly rates (October-March and April-September). Finally, we determined the ratio of the winter and summer rates and calculated 95% CIs.
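A minimal sketch of the winter/summer comparison, assuming Poisson case counts for the standard error of the log rate ratio, is shown below; it is an illustration rather than the authors' implementation.

```python
import numpy as np

def rate_ratio_ci(cases_winter, pm_winter, cases_summer, pm_summer, z=1.96):
    """Winter/summer incidence rate ratio with a 95% CI on the log scale,
    assuming Poisson case counts: SE(ln RR) = sqrt(1/a + 1/b)."""
    rr = (cases_winter / pm_winter) / (cases_summer / pm_summer)
    se_log = np.sqrt(1.0 / cases_winter + 1.0 / cases_summer)
    return rr, rr * np.exp(-z * se_log), rr * np.exp(z * se_log)
```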
Results
A total of 30,988 cases were included in the present analysis, thereof 19,936 (64.3%) cases with seminoma, 10,164 (32.8%) with nonseminoma, 17 (0.05%) with dysgerminoma not otherwise specified, and 871 (2.8%) with germinoma not otherwise specified. A total of 131 cases with spermatocytic tumour (ICD-O-3: 9063/3; formerly called spermatocytic seminoma) were included in the group of seminoma. Based on the population counts for 2009-2019, the over-all incidence rate during the complete observation period was 12.02/10 5 py.
The mean monthly incidence rate across all precision-weighted monthly incidence rates for the entire population of GCT during 2009-2019 was 11.93/10 5 pm. Monthly incidence rates showed barely any variation over the year (Fig 1).
Stratification by histology (Table 2) revealed an over-all monthly incidence of 7.65/10 5 pm and 3.93/10 5 pm for seminoma and for nonseminoma, respectively. The seasonal RR was 1.022 (95% CI 1.000-1.054). As shown in Fig 2a and 2b, monthly incidence rates showed barely any variation over the year in either histologic group.
Stratification of the monthly incidence rates by age groups (Fig 3a and 3b) revealed over-all incidences of 15.88/10 5 pm and 8.94/10 5 pm in men aged 15-39 years and in those aged 40-69 years, respectively. Monthly deviations from the average were very small in both age categories.
Detailed results of the seasonal analyses of all subpopulations are given in Table 2. The seasonal RR is close to 1.0 in all subgroups with a range of 1.005 (minimum) to 1.057 (maximum), indicating that the highest monthly rate was at most 5.7% higher than the lowest monthly rate.
Comparisons of the pooled monthly rates in the winter season (October-March) with the pooled rate in the summer season (April-September) regarding the total population and its subpopulations are listed in Table 3. Consistent with our other findings, there was very little difference of the incidence rates between the two seasons for the over-all group, the histologyspecific groups and age -specific groups. This result is exemplified by the incidence rate for the entire population of testicular cancer (C62 overall, age group 15-69 years) which was relatively 3% higher in winter than in summer (12.16 versus 11.84 per 10 5 pm, rate ratio 1.03, 95%CI 1.00-1.05).
Discussion
There was barely any seasonal variation of the incidence of GCTs in Germany for testicular cancer over-all, the two major histologic subgroups and for age groups. Furthermore, there was practically no difference between the incidence rates in the winter and summer periods.
The results of the present study appear methodologically sound, since they were derived from a thorough statistical analysis of a rather large population of GCT patients using precision-weighted monthly incidence rates. The over-all annual incidence rate of GCT in Germany [15], the higher incidence of seminoma compared to that of nonseminoma [16,17], and the much higher incidence of GCT at younger than at older ages are in line with previous publications [7,17,18]. However, our data are at odds with the results of the Austrian group, who reported significant increases of the GCT incidence in autumn and winter months [6]. Yet, a closer look at the data of the Austrian study reveals that the difference between the summer and winter period was restricted to the histologic subgroup of seminoma and particularly to cases with localized disease. Among the nonseminomas, no variation of month-specific incidence rates was found. In the present study, no such variations were observed. On the other hand, it is well documented that in seminoma there is usually a very long diagnostic delay, which also applies to localized stages [19][20][21]. Accordingly, in patients with seminoma, the time-points of establishing the clinical diagnosis and of the patient perceiving first symptoms are markedly apart, which implies that there is likewise a long time interval between the time points of clinical detection and the biologic onset of the disease. These long symptomatic intervals may relate to the usually rather slow growth rate of seminoma compared to the more aggressive course of nonseminoma [22]. In light of the exceptionally long lag time between biologic onset of disease and clinical diagnosis of seminoma, the seasonal variation of the incidence of seminoma found by the Austrian group is likely to reflect effects unrelated to the disease, such as patient-related variations of bodily self-perception or health-care-system-associated temporal changes of diagnostic capacities. The hypothesized association of GCT pathogenesis with vitamin D serum levels is thus likewise only weakly substantiated by our data.
No other epidemiological study has so far analysed seasonal variations of the incidence of testicular cancer. However, indirect support for the present results comes from a study of the Swedish National Cancer Registry that investigated seasonal variations of all cancers in that country [23]. Actually, seasonal variations were found only in four malignancies: melanoma and cancers of the breast, prostate, and thyroid. As all other cancers did not exhibit seasonal variations of incidence, it must be assumed, although not unequivocally specified, that there was no seasonal effect in testicular cancer.
Curiously, a seasonal pattern had been reported regarding the months of birth of patients with testicular GCT in the UK and in Hungary, and the finding was suggested to be related to prenatal infections [24,25]. However, other studies noted this effect only in selected histologic subgroups, only in patients succumbing to the disease, or solely in particular geographic regions [26][27][28]. Accordingly, no further consideration has been given to the birth date hypothesis in recent major reviews on the aetiology and pathogenesis of testicular cancer [1,22,29,30].
In a small number of malignant diseases, seasonal variations of the incidence have been documented. In acute myeloid leukemia, the association was suggested to relate to seasonally changing environmental factors or to infectious agents [31,32]. Malignant melanoma is more frequently diagnosed in summer months than in winter [33], and this finding conceivably relates to melanoma-promoting sun exposure but also to easier detection due to light summer clothing [23]. Breast cancer has also repeatedly been found to occur more frequently in winter months [34], but this finding has been linked to mammography screening programmes that are usually attended less frequently in summer months [23]. Finally, thyroid cancer and prostatic cancer have been reported to be diagnosed with circannual rhythms, and in these diseases the changing frequencies of clinical diagnoses have been linked to health-system-associated temporal changes of diagnostic capacities [23].
In spite of the large sample size, there are several limitations that need to be borne in mind in interpreting the present results. Histological subtyping of testicular cancer was based on the coding of the cancer registries without a central pathologic review. Some histologic misclassification is expected, since GCT is a rare disease and less-experienced local pathologists may sometimes fail to classify testicular neoplasms correctly [35]. The cases with spermatocytic tumour were included in the subgroup of seminoma although this entity is pathogenetically different from seminoma according to the most recent patho-histological classification system [36]. Cases registered by death certificate only (DCO) were included in the analysis with their date of death as a surrogate of the date of diagnosis of testicular cancer if not otherwise specified on the death certificate. However, we believe that both the spermatocytic tumour cases and the DCO cases had no major impact on the over-all results of this study, because the basic findings are very clear-cut and because both groups involve very small numbers relative to the over-all large sample size. A minor limitation of the study might result from the lack of monthly population counts and from the assumption of an even distribution of population counts over the respective years. The present evaluation comprised cases from Germany only and mostly included Caucasians. Thus, it is unclear whether our results can be generalized to other ethnicities.
In conclusion, we found barely any seasonal variation of the incidence of testicular cancer. Our results are in conflict with a recent Austrian study [6]. However, as the present evaluation involves an almost ten-fold larger patient population than the Austrian study, the weight of evidence of the present investigation appears greater. In conjunction with a Swedish cancer registry study that indirectly reported a null finding, too, there is apparently no evidence for a seasonal variation of the incidence of testicular GCT. | 2023-05-28T05:07:04.695Z | 2023-05-26T00:00:00.000 | {
"year": 2023,
"sha1": "ef4024be278ee14b17e86ffb80c5e96bb37476d6",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ef4024be278ee14b17e86ffb80c5e96bb37476d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229679910 | pes2o/s2orc | v3-fos-license | Convexity and AFPP in the Digital Plane
We examine the relationship between convexity and the approximate fixed point property (AFPP) for digital images in Z^2.
Introduction
The study of fixed points is prominent in many branches of mathematics. In digital topology, it has become worthwhile to broaden the study to "approximate fixed points." The Approximate Fixed Point Property (AFPP), a generalization of the classical fixed point property (FPP), was introduced in [7]. In this paper, we show that for digital images X ⊂ Z 2 , convexity can help us show whether (X, c 2 ) has the AFPP.
Preliminaries
Much of this section is quoted or paraphrased from papers that are listed in the references, especially [4,5,6,7].
We use Z to indicate the set of integers; R for the set of real numbers.
Adjacencies
A digital image is a graph (X, κ), where X is a subset of Z n for some positive integer n, and κ is an adjacency relation for the points of X. The c u -adjacencies are commonly used. Let x, y ∈ Z n , x ≠ y, where we consider these points as n-tuples of integers: x = (x 1 , . . . , x n ), y = (y 1 , . . . , y n ).
Let u ∈ Z, 1 ≤ u ≤ n. We say x and y are c u -adjacent if

• There are at most u indices i for which |x i − y i | = 1.
• For all indices j such that |x j − y j | ≠ 1 we have x j = y j .
Often, a c u -adjacency is denoted by the number of points adjacent to a given point in Z n using this adjacency. E.g.,

• In Z 1 , c 1 -adjacency is 2-adjacency.

• In Z 2 , c 1 -adjacency is 4-adjacency and c 2 -adjacency is 8-adjacency.
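For readers who prefer an operational view, the definition of c u -adjacency can be transcribed directly into code; the sketch below is illustrative and the function name is our own.

```python
def c_adjacent(x, y, u):
    """True if distinct points x, y in Z^n are c_u-adjacent (1 <= u <= n)."""
    if tuple(x) == tuple(y):
        return False
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if any(d > 1 for d in diffs):                  # every coordinate may differ by at most 1
        return False
    return sum(d == 1 for d in diffs) <= u         # at most u coordinates differ by exactly 1

# In Z^2: c_1-adjacency is 4-adjacency, c_2-adjacency is 8-adjacency.
print(c_adjacent((0, 0), (1, 1), 1))   # False: two coordinates change
print(c_adjacent((0, 0), (1, 1), 2))   # True
```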
We write x ↔ κ x ′ , or x ↔ x ′ when κ is understood, to indicate that x and x ′ are κ-adjacent. Similarly, we write x ⌣ κ x ′ , or x ⌣ x ′ when κ is understood, to indicate that x and x ′ are κ-adjacent or equal.
A subset Y of a digital image (X, κ) is κ-connected [12], or connected when κ is understood, if for every pair of points a, b ∈ Y there exists a sequence {y i } m i=0 ⊂ Y such that a = y 0 , b = y m , and y i ↔ κ y i+1 for 0 ≤ i < m.

Given a digital image (X, κ) and x ∈ X, we denote by N * (X, κ, x) the set {y ∈ X | y ⌣ κ x}.
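Similarly, κ-connectedness of a finite digital image can be checked by a breadth-first search over c u -adjacency; the sketch below reuses the c_adjacent helper from the previous example and is illustrative only.

```python
from collections import deque

def neighborhood_star(X, x, u):
    """N*(X, c_u, x): the points of X that are c_u-adjacent to x, together with x."""
    return {y for y in X if y == x or c_adjacent(x, y, u)}

def is_connected(Y, u):
    """True if the finite digital image Y (a set of points of Z^n) is c_u-connected."""
    Y = set(Y)
    if not Y:
        return True
    start = next(iter(Y))
    seen, queue = {start}, deque([start])
    while queue:                                   # breadth-first search over adjacency
        p = queue.popleft()
        for q in Y:
            if q not in seen and c_adjacent(p, q, u):
                seen.add(q)
                queue.append(q)
    return seen == Y
```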
Digitally continuous functions
The following generalizes a definition of [12].
When the adjacency relations are understood, we will simply say that f is continuous. Continuity can be expressed in terms of adjacency of points: f is continuous if and only if, whenever x and x ′ are adjacent in the domain, f (x) and f (x ′ ) are equal or adjacent in the codomain. See also [8,9], where similar notions are referred to as immersions, gradually varied operators, and gradually varied mappings.
Let Y ⊂ X and let r : X → Y be (κ, κ)-continuous such that r(y) = y for all y ∈ Y . Then r is a κ-retraction.
Approximate fixed points and the AFPP
Let f ∈ C(X, κ), the set of (κ, κ)-continuous functions X → X, and let x ∈ X. We say • If f (x) ⌣ κ x, then x is an almost fixed point [12,14] or approximate fixed point [7] of (f, κ).
• A digital image (X, κ) has the approximate fixed point property (AFPP) [7] if for every f ∈ C(X, κ) there is an approximate fixed point of f . This generalizes the fixed point property (FPP): a digital image (X, κ) has the FPP if every f ∈ C(X, κ) has a fixed point.
The AFPP gathered attention in part because only a digital image with a single point has the FPP [7].
Let I be a digital picture, and let f be a continuous function from I into I; then there exists a point P ∈ I such that f (P ) = P or is a neighbor or diagonal neighbor of P .
We quote from [4]: Several subsequent papers have incorrectly concluded that this [Rosenfeld's] result implies that I with some c u adjacency has the AF P P S . By digital picture Rosenfeld means a digital cube, I = [0, n] v Z . By a "continuous function" he means a (c 1 , c 1 )-continuous function; by "a neighbor or diagonal neighbor of P " he means a c v -adjacent point.
Digital convexity, disks
Material in this section is quoted or paraphrased from [6].
Remark 2.6. [6] A digital line segment must be vertical, horizontal, or have slope of ±1. We say a segment with slope of ±1 is slanted.
These requirements are necessary for the Jordan Curve Theorem of digital topology, below, as a c 1 -simple closed curve in Z 2 needs at least 8 points to have a nonempty finite complementary c 2 -component, and a c 2 -simple closed curve in Z 2 needs at least 4 points to have a nonempty finite complementary c 1component. Examples in [11] show why it is desirable to consider S and Z 2 \ S with different adjacencies.
One of the κ ′ -components of Z 2 \ S is finite and the other is infinite. This suggests the following. Note a disk may have multiple distinct bounding curves [6]. More generally, we have the following.
Then {S j } n j=1 is a set of bounding curves of X. As above, X may have multiple distinct sets of bounding curves. A set X in a Euclidean space R n is convex if for every pair of distinct points x, y ∈ X, the line segment xy from x to y is contained in X. The convex hull of Y ⊂ R n , denoted hull(Y ), is the smallest convex subset of R n that contains Y . If Y ⊂ R 2 is a finite set, then hull(Y ) is a single point if Y is a singleton; a line segment if Y has at least 2 members and all are collinear; otherwise, hull(Y ) is a polygonal disk, and the endpoints of the edges of hull(Y ) are its vertices.
A digital version of convexity can be stated for subsets of the digital plane Z 2 as follows.
• Y is a digital disk with a bounding curve S such that the endpoints of the maximal digital line segments of S are the vertices of hull(Y ) ⊂ R 2 .
3 Retractions, convexity, and the AFPP

Due to assertions (3) and (6) of Theorem 2.3, the following theorem can be useful in determining whether (X, c 2 ) has the AFPP, for X ⊂ Z 2 .
Theorem 3.1. Let X ⊂ Y ⊂ Z 2 such that X is a digitally convex disk. Let S be a bounding curve for X. Then there is a c 2 -retraction r : Y → X such that r(Y \ Int(S)) = S.
Proof. We define a function r : Y → X as follows. For y ∈ X, r(y) = y.
For y ∈ Y \ X we proceed as follows. Let y = (a, b). -If x = (a, c) ∈ S is such that b > c = max{n | (a, n) ∈ X}, then r(y) = x.
(See y 1 , y 2 , y 3 in Figure 1.) (See y 4 in Figure 1.) • Suppose y = (a, b) for a < m; then there is a unique nearest (in the Euclidean metric) y ′ ∈ L to y, determined as follows. Let • Suppose y = (a, b) for a > M ; then there is a unique nearest (in the Euclidean metric) y ′ ∈ L to y, determined as follows. Let In order to show r is a c 2 -retraction, we must show r ∈ C(X, c 2 ). Let y ↔ c2 y ′ in X.
-If y ′ ∈ X then we must have y ∈ S. Then either r(y ′ ) = r(y), or, since X is convex, it follows from Remark 2.6 that r(y ′ ) ↔ c2 y = r(y).
• Suppose y is vertically above or below a point x ∈ S, so r(y) = x. Since X is convex, it follows from Remark 2.6 that r(y ′ ) c2 x = r(y).
Thus r ∈ C(X, c 2 ). Therefore, r is a retraction. Clearly, r(Y \ Int(S)) = S. This completes the proof.
Theorem 3.2. Let X ⊂ Z 2 such that X is digitally convex. Then (X, c 2 ) has the AFPP.
Corollary 3.3. Let X = X 1 × X 2 , where X 1 ⊂ Z n , (X 1 , c n ) has the AFPP, X 2 ⊂ Z 2 , and X 2 is a digitally convex disk. Then (X, c n+2 ) has the AFPP.

Figure 1: Retraction r of a digital image Y to a subset X that is a convex disk as in Theorem 3.1. Here, s 0 = 2, s 1 = 4, s 2 = 3, s 3 = 6. a) Each point vertically above or below the disk is mapped to its nearest vertical neighbor in X, e.g., r(y i ) = x i , i ∈ {1, 2, 3, 4}. b) Each point to the left (not necessarily horizontally) of X is mapped to the nearest member of X with minimal first coordinate, e.g., r(y i ) = x i , i ∈ {5, 6, 7}. c) Each point to the right (not necessarily horizontally) of X is mapped to the nearest member of X with maximal first coordinate.

Proof. By Theorem 2.3 (4), (X 1 × Y, c n+2 ) has the AFPP. By Theorem 3.1, there is a c 2 -retraction r : Y → X 2 . Then id X1 × r : X 1 × Y → X is a c n+2 -retraction. The assertion follows from Theorem 2.3 (3).
Then there is a c 2 -retraction of X onto S.
Proof. By Theorem 3.1, there is a c 2 -retraction r : X ∪ X ′ → X ′ such that r(X) = S. Then r| X : X → S is a retraction.
Then (X, c 2 ) does not have the AFPP.
Proof. By Proposition 3.4, there is a c 2 -retraction r : X → S. Let F be as described above and let f : X → X be the function f (x) = F • r(x). Since composition preserves continuity, we have f ∈ C(X, c 2 ). Consider the following cases.
• Suppose there is no x ∈ S with y ⌣ c2 x. Then in particular we do not have y ⌣ c2 f (y), so y is not an approximate fixed point of f .
• Suppose y ⌣ c2 x for some x ∈ S. Then the continuity of f implies f (y) ⌣ c2 f (x). It follows from (1) that y is not an approximate fixed point of f .
Thus f does not have an approximate fixed point. The assertion follows.

(See Figure 2.) As a bounding curve for A, we can take S = {(x, y) ∈ Z 2 | |x|+|y| = 2}. Then S is a c 2 -simple closed curve. Let F : S → S be the map F (x, y) = (−x, −y). Then we may apply Theorem 3.5 to conclude that (X, c 2 ) does not have the AFPP.
Further remarks
We have explored relationships between the convexity of digital images in Z 2 and the AFPP.
In classical topology, every absolute retract (a contractible compactum with certain "nice" local properties for which we need not consider analogs in digital topology) has the FPP [1]. Since under the definition of digital homotopy in [2], a digital simple closed curve of 4 points is digitally contractible [3], Example 2.5 shows that contractibility based on [2] is not sufficient for the AFPP. Recent papers of Staecker [13] and Lupton, Oprea, and Scoville [10] have developed a different notion of homotopy under which a digital simple closed curve of 4 points is not digitally contractible; Staecker calls this strong homotopy. This suggests the following questions concerning possible extensions of Theorem 3.2.
Question 4.1. Let X ⊂ Z 2 be finite and c 2 -strongly contractible, i.e., contractible with respect to strong homotopy. Does (X, c 2 ) have the AFPP?
If Question 4.1 and the following Question 4.2 both have affirmative answers, the latter result would be contained in the former. Question 4.2. Let X ⊂ Z 2 be a digital disk. Does (X, c 2 ) have the AFPP? | 2020-12-29T02:15:51.701Z | 2020-12-27T00:00:00.000 | {
"year": 2020,
"sha1": "e528980f21e00995df4c6bbe2c242528f34b7b65",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1a6df69ae1bec6f70c041a7d23b0dffc85f5d80d",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
252979758 | pes2o/s2orc | v3-fos-license | Clinical and Economic Impact of Rapid Blood Pathogen Identification Via Verigene
Introduction: Bloodstream infections (BSIs) are associated with increased morbidity and mortality if not treated appropriately. Rapid identification of microorganisms gives clinicians the opportunity to modify initial broad-spectrum antibiotic therapy and improve patient outcomes in bacteremia. We aimed to evaluate the impact of the Verigene Gram-positive blood culture (BC-GP) technology on the time to modification of antibiotic therapy by clinicians. Methods: This was a retrospective study conducted at Corpus Christi Medical Center. Verigene BC-GP technology was employed to rapidly identify microorganisms in patients with suspected Gram-positive bacteremia. Empiric antibiotic therapy was modified via de-escalation or escalation when culture results became available. The primary outcome for this study was the mean time to modification of antibiotic therapy after Verigene BC-GP results became available. Data collected between January 2015 and August 2017 were analysed to assess the clinical and pharmacoeconomic impact of BC-GP. Results: Data were collected on 159 patients, with 123 of 159 (77%) meeting the inclusion criteria. The mean age was 66 ± 14.9 years, with 53/123 (43%) females and 70/123 (57%) males. Positive cultures identified were as follows: Streptococcus species (34); Staphylococcus species (72), of which 31/72 (43%) were MRSA; and Enterococcus species (19), of which 4/19 (21%) were vancomycin-resistant Enterococcus (VRE). Antibiotic therapy was escalated in 31 of 123 patients (25%) and de-escalated in 29 of 123 (24%). Therapy was determined to be appropriate based on culture results in 63 of 123 (51%) patients, and thus therapy was not modified in this group. The mean time to escalation of therapy was 6.2 ± 6 h and to de-escalation 9.2 ± 12.1 h. The average time to modification of antibiotic therapy was 7.6 ± 9.5 h. The conventional approach would take approximately 24-72 h for pathogen identification. Cost savings are estimated at approximately $4000 per intervention. Based on this model, we estimate approximately $240,000 in cost savings from the 60 cases where interventions occurred. Conclusion: Compared to the conventional approach, the Verigene BC-GP system offers a significant time advantage for pathogen identification and therapy modification, as well as a pharmacoeconomic benefit, which translates to positive patient outcomes.
Introduction
Bloodstream infections (BSIs) are associated with increased morbidity and mortality if not treated appropriately [1][2]. Rapid identification of the offending microorganism has been shown to shorten the length of hospital stay, reduce healthcare costs, and lower mortality rates [3][4][5][6]. Empiric broad-spectrum antibiotic therapy is initiated for BSIs to impair or eliminate potential bacteria that could be the culprit of the infection. Initial broad-spectrum antibiotics are warranted, but the need to de-escalate the antibiotics promptly is critical as antibiotic use is associated with various adverse effects, antibiotic resistance, and drug toxicity [7]. According to the United States Centers for Disease Control (CDC) and Prevention, it is estimated in 2013 that there are over two million illnesses associated with antibiotic resistance each year that results in approximately 23,000 deaths [8]. Calculating the medical cost of antimicrobial resistance has been challenging and varies amongst studies but can be as high as $20 billion annually in the United States [8]. A previous study examining six multi-drug resistant (MDR) infections resulted in approximately $1.9 billion in medical costs amongst older adult patients in the United States in 2017 [9]. It was calculated that more than 400,000 inpatient days occurred and 11,852 deaths resulted due to MDR infections in 2017 [9]. The use of any antibiotics also contributes to the cause of Clostridium difficile infections where the CDC estimates approximately 250,000 C. difficile infections each year which is associated with antibiotic use which result in approximately 14,000 deaths [8]. C. difficile infection is a complication that can be prevented if healthcare professionals can de-escalate antibiotics as soon as lab culture results are available and are clinically appropriate. The standard of practice for diagnosing BSIs is through the use of blood cultures. Positive blood cultures are then Gram-stained to determine if the bacteria present is Gram-positive or Gram-negative. Additional growth in vitro (transferring the sample onto agar plates and then incubated) to allow for the identification of the specific microorganism is then required. This process of identifying the microorganism of interest could take anywhere between 24 and 72 h depending on how quickly the microorganisms grow, delaying the opportunity for clinicians to de-escalate or broaden initial empiric antibiotic therapy if necessary [10].
With the rise in antibiotic resistance and the necessity to improve antimicrobial stewardship in the clinical setting, various new technologies have been created to allow the rapid identification of microorganisms and their resistance markers. Previous studies have examined various technologies for the rapid identification of microorganisms in positive cultures and their relationship to hospital length of stay and economic benefits. Peptide nucleic acid (PNA) fluorescent in situ hybridization (FISH) staining is an example of a technology used to rapidly identify pathogens in positive blood cultures [11]. PNA-FISH identifies specific pathogens by targeting specific rRNA that is produced in growing bacteria and yeast. PNA-FISH identifies various pathogens including Staphylococcus aureus, coagulase-negative staphylococci (CoNS), Enterococcus faecalis, Enterococcus species, Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Candida species [11]. Despite the technology's ability to identify various microorganisms, PNA-FISH does not have the ability to identify resistance markers that some pathogens may carry. Results are available within approximately 90 min. A previous retrospective study examined the use of PNA-FISH at a 650-bed academic medical center to assess the median hospital length of stay and the economic benefit of the technology [12]. The study determined a decrease in median hospital length of stay of two days and cost savings of approximately $4005 per patient [12].
The Verigene Gram-positive blood culture (BC-GP) test is a multiplex nucleic acid assay that identifies microorganisms and resistance markers from positive blood cultures [13]. Microorganisms are identified through the presence of specific nucleic acid sequences. The BC-GP assay detects various Gram-positive bacterial pathogens that include Staphylococcus species, Staphylococcus aureus, Staphylococcus epidermidis, Staphylococcus lugdunensis, Streptococcus species, Streptococcus pyogenes, Streptococcus agalactiae, Streptococcus anginosus group, Streptococcus pneumoniae, E. faecalis, Enterococcus faecium, and Listeria species. In addition, the BC-GP assay also detects the presence of three resistance markers, mecA, vanA, and vanB [13]. When S. aureus bacteria carry the mecA gene, the bacteria become MRSA (methicillin-resistant S. aureus). The mecA gene allows the bacteria to encode PBP2a (penicillin-binding protein 2a), which has a low affinity for all beta-lactam antibiotics including penicillins, cephalosporins (except the fifth generation), and carbapenems [14]. The vanA and vanB genes are most commonly found in vancomycin-resistant Staphylococcus aureus and vancomycin-resistant Enterococcus. Vancomycin normally binds to the D-alanine-D-alanine (target site) terminus of the bacterial peptidoglycan cell wall. When the vanA or vanB gene is present, the terminus is altered to D-alanine-D-lactate, preventing vancomycin from being able to bind to its target site [14]. The BC-GP assay provides results within approximately 2.5 h, which is significantly less time compared to the standard of practice (24-72 h). Previous studies have demonstrated a 92%-95% overall agreement rate for Gram-positive microorganism identification between the BC-GP assay and conventional methods [15][16].
The aim of this study is to evaluate the impact of Verigene BC-GP on time to modification of antibiotic therapy by clinicians.
Materials And Methods
This was a single-center, retrospective study analyzing the utilization of the Verigene BC-GP technology to help guide antibiotic therapy. The study was conducted at Corpus Christi Medical Center (CCMC) after approval from the Institutional Review Board of CCMC. The primary outcome of the study was to determine the average time it took physicians to modify antibiotic therapy after BC-GP assay results were available. In addition, the secondary outcome examined the pharmacoeconomic benefits of using the BC-GP technology. Patients were identified by utilizing the microbiology laboratory's data of patients who had their blood cultures tested with the BC-GP assay. Data were collected and analyzed for January 2015 through August 2017. Patients were included in the study if they were 18 years or older and had positive blood cultures for Staphylococcus species, S. aureus, S. epidermidis, S. lugdunensis, Streptococcus species, S. pyogenes, S. agalactiae, S. anginosus group, S. pneumoniae, E. faecalis, E. faecium, and Listeria species. In addition, patients were excluded if they presented to the emergency department but were not admitted, expired prior to the availability of BC-GP assay results, had concomitant Gram-negative bloodstream infections, or had Gram-positive BSIs not listed in the inclusion criteria (Figure 1). The most common reason for exclusion was emergency room patients who were not admitted. At CCMC, the BC-GP assay was only utilized if the Gram stain of the blood sample showed Gram-positive microorganisms. The microbiology laboratory only tested blood cultures using the BC-GP assay Monday through Friday between 0600 and 1300, when there was microbiology staff available to read and report the results to the healthcare team. In addition to the BC-GP assay results being reported on the patient's electronic medical record, the microbiology lab would also attempt to contact the physician to report the results. When the microbiology lab was unable to contact the physician, the patient's nurse was contacted to follow up with the physician with the results. In 32/123 (26%) cases, the clinical pharmacists were also contacted with the results. Clinical data were collected from the patient's electronic medical record. The time at which blood cultures were drawn was documented, along with the time when the BC-GP assay results were available to the medical team. Antibiotics that the patient was receiving up until the BC-GP assay results, excluding antibiotics that were given but discontinued prior to results, were also noted.
Data for this study were analyzed using descriptive statistics for patient characteristics and mean and standard deviation for continuous data that was collected. The time of modification of therapy was determined from the time that the medical team was contacted with the results to the time that the therapy was either de-escalated or escalated. This data was analyzed as a whole to determine the average time that antibiotics were modified and then further broken down to determine the average time to either de-escalate or escalate antibiotics.
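As a rough illustration of this analysis step (a sketch of our own, not the code used in the study), the mean and standard deviation of the time to modification can be computed from notification and order timestamps; the file name and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical export: one row per included patient, with the timestamp at which the
# BC-GP result was reported and the timestamp of the antibiotic order change, plus an
# "action" column ("escalated" / "de-escalated", empty if therapy was unchanged).
cases = pd.read_csv("bcgp_cases.csv", parse_dates=["result_notified", "therapy_changed"])
cases["hours_to_change"] = (cases["therapy_changed"]
                            - cases["result_notified"]).dt.total_seconds() / 3600

modified = cases.dropna(subset=["hours_to_change"])
for action, grp in modified.groupby("action"):
    print(action, f"{grp.hours_to_change.mean():.1f} ± {grp.hours_to_change.std():.1f} h")
print("overall", f"{modified.hours_to_change.mean():.1f} ± "
      f"{modified.hours_to_change.std():.1f} h")

# Pharmacoeconomic estimate used in the study: ~$4,000 saved per intervention.
print("estimated savings: $", 4000 * len(modified))   # about $240,000 for 60 interventions
```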
Results
A total of 159 patients with Gram-positive bacteremia were assessed. Due to the exclusion criteria listed above, only 123 patients were included in the data analysis. There was a similar number of males and females included in the study, with an average age of 66 years (Table 1). The organisms identified were Streptococcus species (34), Staphylococcus species (72), and Enterococcus species (19) (Table 2). Out of 123 blood cultures, two were found to be polymicrobial with two Gram-positive organisms growing in each blood culture, giving a total of 125 microorganisms identified. The Verigene BC-GP assay does not provide sensitivities; therefore, a change in antibiotic therapy (de-escalation or escalation) is only considered a modification of therapy due to BC-GP assay results if it occurred prior to the availability of sensitivities. Modification of antibiotic therapy that occurred after sensitivities were available was not considered a modification of antibiotic therapy as a result of the Verigene BC-GP assay.
TABLE 1: Patient characteristics (N=123).
There were 60 patients who had their antibiotic therapy modified, based on the BC-GP assay results. On average, it took physicians approximately 7.6 ± 9.5 h to modify (de-escalate or escalate) antibiotic therapy ( Table 3). Within the subgroup analysis, we determined that 31 patients had their antibiotics escalated with a mean time of 6.2 ± 5.9 h. On the other hand, de-escalation occurred in only 29 patients with a mean time of 9.2 ± 12.1 h. There were 63 patients whose antibiotic therapy was not modified based on BC-GP assay results. The physicians determined that the antibiotic therapies in these 63 patients were appropriate or necessary, and therefore no modifications occurred.
TABLE 3: Mean time to antibiotic modification after blood culture results were available.
Cost savings per intervention is estimated to be approximately $4,000 based on previous studies [15]. With a total of 60 interventions that occurred, we estimate that there were approximately $240,000 in cost savings for the hospital between January 2015 and August 2017 from the utilization of the Verigene BC-GP assay.
Discussion
Studies have shown a significant difference in time to antimicrobial optimization through the use of the BC-GP assay. A prior study done at Baylor University Medical Center at Dallas analyzed the de-escalation of empiric antibiotic therapy for methicillin-sensitive S. aureus (MSSA) and VRE bacteremia, and it was determined that the mean time to the first dose of optimal antibiotic therapy was reduced by 18.9 h when the BC-GP assay was utilized [17]. Another study done at a pediatric hospital demonstrated that the BC-GP assay helped reduce the time to antibiotic optimization by 12.5 h [18]. Our research study at CCMC also resulted in major time differences in antibiotic therapy modification when the BC-GP assay rapid identification technology was used compared to conventional approaches in identifying microorganisms in blood cultures.
The results of this study showed that on average, antibiotic therapy modifications (de-escalation or escalation) occurred in less than 10 h after BC-GP assay results were reported to the healthcare team. This reduction in time to modification of broad-spectrum antibiotics translated into pharmacoeconomic benefits and improvement in antimicrobial stewardship at CCMC. However, modification of broad-spectrum antibiotics only occurred in less than half of the patient cases analyzed.
There are some limitations to this research study. Utilization of the BC-GP assay only occurred Monday through Friday between 0600 and 1300 due to the lack of microbiology staff to report the results to the healthcare team. This confounding factor will result in an underestimation of the potential benefits of the BC-GP assay. Furthermore, pharmacists were only contacted in 26% of the cases with results from the BC-GP assay. Initially, pharmacists were not contacted with BC-GP assay results when CCMC first acquired the technology. Pharmacists were only contacted with results in the later portion of the study time period. The lack of pharmacist involvement early on could have potentially affected how quickly antibiotic therapy was optimized.
Another limitation of this study was that there was incomplete data provided by the microbiology lab to identify potential patients who could have been included in the study. Patient medical record numbers associated with specific blood cultures were unavailable for the first quarter of 2017 (January 2017 and March 2017), therefore, we were unable to analyze this data. We also excluded patients who were discharged home from the emergency department but were contacted to return to the hospital to receive IV antibiotics due to their positive blood cultures. It was determined that including these patients would alter the overall benefit of the BC-GP assay due to the delay in the time patients would return to the hospital to receive their intravenous antibiotics. Additionally, the exact time healthcare team members were notified of BC-GP assay results was not always reported. When this time was missing, the time that the microbiology lab called gram stain results to the nurse was used in the data analysis instead. At CCMC, Gram stain results are reported to the patient's nurse prior to utilizing the BC-GP assay. Using the time that the Gram-stain results were reported to nurses when the actual time the healthcare team was contacted with BC-GP assay results were missing, negatively affects the average time that antibiotic therapies were modified. Modification of therapy was based on the time new antibiotics were scheduled to start and not the time that the new antibiotics were administered.
In addition to these limitations, this study did not have a comparator group to determine statistical significance for the mean time that antibiotics were modified when the conventional method is used vs the BC-GP assay. With the lack of a comparator group, we were unable to directly analyze the impact of the BC-GP assay on patient outcomes vs the conventional method.
There have been studies that showed a reduction in hospital length of stay when the BC-GP assay was used. A previous study analyzing the clinical outcomes of the BC-GP assay to optimize antimicrobial therapy for Enterococcus bacteremia determined that the mean hospital length of stay was significantly shorter, by 21.7 days (P=0.0484) [19]. This study determined that the attributed mortality rates were not significantly different between pre-BC-GP and post-BC-GP groups (2.1% vs 14.2%, P=0.065), but the study was not powered to assess mortality rates [19]. In another study, researchers found that usage of the BC-GP assay resulted in a significant reduction in median hospital length of stay, which was 1.5 days (P=0.04) shorter in a general pediatric unit and a median of 5.6 days (P=0.01) shorter when they specifically analyzed S. aureus bacteremia [18].
At this time, there are a few studies examining the mortality benefits of the BC-GP assay but with varied outcomes. In a study done by Roshdy and colleagues, there was no difference in mortality between the pre-BC-GP and post-BC-GP groups when they assessed patients with Streptococcus and Enterococcus bacteremia [20]. In another study done by Box and colleagues, it was determined that there was no significant difference in mortality rates (9.1% vs 9.2%, P=0.98) found between the pre-intervention and post-intervention groups with BC-GP [21]. There was also no significant difference found in mortality rates (15% vs 18%, P=0.40) between the pre-BC-GP and post-BC-GP treatment groups in a study done by Neuner and colleagues at Cleveland Clinic [22].
Conversely, there have been studies that demonstrated mortality benefits after the implementation of the BC-GP assay. One study examining 226 patients with S. aureus bacteremia using the conventional culture method vs the BC-GP assay showed that there was lower in-hospital mortality (13.2% vs 5.8%, P=0.047) and lower 30-day mortality (17.9% vs 8.3%, P=0.025) [23]. Another study done by Mahrous and colleagues examining the benefits of both Verigene Gram-positive and Gram-negative assays together showed that there was a significant difference in in-hospital mortality (18% vs 10%, P=0.034) in the post-intervention phase [24]. In addition, published literature has clearly shown that there is increased mortality when vancomycin is used to treat MSSA bacteremia vs beta-lactam therapy [25]. The BC-GP assay can also quickly identify the presence of MRSA bacteremia. We can extrapolate this information and confidently say that if MRSA is not identified, empiric vancomycin therapy can be de-escalated to an appropriate beta-lactam antibiotic, which can result in mortality benefits. The differences in findings when assessing for mortality benefits amongst the studies discussed could be due to the patients included and excluded from the studies as well as other outliers.
The BC-GP assay also has limitations of its own. It was designed to identify only 12 of the most common Gram-positive bacteria that cause bacteremia and does not cover every Gram-positive organism, such as Micrococcus. Additionally, a positive blood culture and a Gram stain confirming the presence of Gram-positive bacteria are required prior to using the assay.
Conclusions
The increasing prevalence of MDR bacteria is the result of the misuse of antibiotics. Rapid identification diagnostic technology is becoming increasingly important in all institutions because of the growing problem of antibiotic resistance. There are various rapid microbial identification technologies present at this time, including the Verigene BC-GP and Gram-negative blood culture (BC-GN) assays, PNA-FISH, BioFire FilmArray systems, and many others that vary in what bacteria they can identify and how quickly. Unfortunately, this technology is not available at all institutions. At CCMC, Verigene BC-GP was an expensive technology incorporated into the hospital's budget and required a change in the workflow and staffing of the microbiology department. It may be for these same reasons that such technology is not yet universal across institutions. The use of rapid identification technology can help improve antimicrobial stewardship, reduce the risk of antibiotic adverse events, and prevent complications associated with antibiotic use, such as C. difficile infections. At CCMC, the Verigene BC-GP assay has the potential to help guide physicians, pharmacists, and other healthcare providers in preventing the inappropriate use of broad-spectrum antibiotics to improve patient outcomes and reduce healthcare costs. Due to the lack of studies evaluating the direct impact of BC-GP on various aspects of patient outcomes, more studies are needed to address its potential benefits.
"year": 2022,
"sha1": "773b89b0a535894e555e5df92fbf1089809dd527",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/115752-clinical-and-economic-impact-of-rapid-blood-pathogen-identification-via-verigene.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64c4fe1d9c922469980a30a7284bb01fc824c8d8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
230624583 | pes2o/s2orc | v3-fos-license | Juvenile Offenders: Reasons and Characteristics of Criminal Behavior
: The article examines the phenomenon of “juvenile delinquency”, assesses its actual state and establishes the tendencies of its manifestations. Juvenile delinquency in Ukraine as a part of crime in a broad sense arises and develops under the influence of certain determinants. The study of the causes and conditions of juvenile delinquency remains relevant today, which indicates the special danger of this kind of crime for the development of society. The purpose of the article is to study the state of the problem in Ukraine and the experience of other countries in minimising the criminal behaviour of minors in the process of property and non-property relations. The leading approach that was used when writing the article is the comparison and analysis of modern materials on the problems of criminal behaviour of criminals who have not reached the age of majority. As a result, it was possible to identify the social characteristics of juvenile criminals and the reasons for their criminal behaviour. Considerable attention is paid to the factors influencing the commission of crimes: a dysfunctional family, shortcomings of the educational process, the problem of alcohol and drug use by minors. In addition, some directions for the prevention of juvenile delinquency were developed. The applied value is the ability to change legislation in terms of work and correction of minor criminals’ behaviour.
INTRODUCTION
Radical transformations in the political, social, economic conditions and in the public consciousness of the citizens of Ukraine, which brought a lot of positive things into public life, led at the same time to exacerbate contradictions in the youth environment.The ability to isolate them, understand the causes and interconnectedness, provide for ways to solve them in the interests of young people at the state, professional and individual levels could significantly improve the position of young people in the country and the opportunities for their life self-determination, intellectual, moral and physical development, the realisation of creative potential in the interests of both their own and Ukraine.The historical development of society largely depends on the extent to which such an effective factor of socio-political development as minors is used.First, it is determined by the share of young people in the structure of the population of each country.Secondly, at all times and among all peoples, young people have been at the forefront of social movements, a kind of catalyst and engine of social transformations.One of the most acute problems of the present, affecting essentially all aspects of public life and, in particular, creating an immediate threat to economic and political transformations (a factor of social destabilisation in the society) is the steady growth of crime.Numerous studies have shown that the overwhelming majority of offenders enter the criminal path precisely at a minor age.The elimination of the causes of this phenomenon greatly contributes to the elimination of crimes that are committed not only by minors but also by adults.
The phenomenon of juvenile delinquency, regardless of the duration of its course, provides for a certain procedure for studying its genesis: determining the prerequisites for the occurrence of a crime; analysis of the appearance of its individual elements, their integration; identification of its internal mechanism. The study of the nature, causes, consequences, tendencies of a given social phenomenon has both scientific and practical significance. It can be the basis for improving social relations and civil society institutions, social norms and the practice of their application, for strengthening the system of social control, consistent implementation of measures of moral and legal education, social prevention and responsibility. All of them are aimed at ensuring maximum personal protection, meeting the interests of citizens, democratising and humanising society.
The article attempts to provide a criminological characteristic of the personality of a juvenile offender, considers the reasons and conditions that induce minors to commit illegal acts, and emphasises the necessity to take comprehensive measures to combat juvenile delinquency, using their criminological characteristics. Much attention is paid to the problem of juvenile delinquency in criminology. At different times, this problem has been studied by various scientists (Bugera 2014; Golina et al. 2006; Dedkovskaya 2016; Rybalko 1990; Aksonova, Vakulenko, and Vasiliev 2015; Yuzikova 2015). Most of them investigated a wide range of issues related to juvenile delinquency. Based on the results of their scientific research, new working hypotheses were put forward about: criminological characteristics of the most common crimes among juveniles; typical personality traits of a juvenile offender; determination of crime, as well as preventive measures. However, the developed theoretical provisions, conclusions and practical recommendations largely reflect the views on the problem of juvenile delinquency that were formed back in Soviet times. There is a lack of domestic studies of the current state of this phenomenon. An analysis of recent studies indicates that criminologists have established several features of the spread of juvenile delinquency and certain historical patterns of its development. Among them are: selfish orientation and predominantly group and street nature of criminal encroachments; the increased criminal activity of pupils of socially disadvantaged families, as well as children with mental and behavioural disorders; excessive aggression and unmotivated cruelty towards victims of violent crimes. Therefore, the still unresolved problems include the necessity to characterise the phenomenon of juvenile delinquency and a description of its latest manifestations, as well as an objective assessment of the current state of this issue.
MATERIALS AND METHODS
For the article, both materials of Ukrainian criminologists, lawyers, counsellors and doctors of sciences (Bandurka, Bocharova, and Zemlyanskaya 1998;Didorenko 2007;Steblinskaya 2013;Smetanina 2013;Golovkin 2013;Grabbe 2013), and research in the field of criminology of minors by foreign authors were used.When conducting criminological research, all methodological and methodical requirements that may be applicable to social research were taken into account.The main methods used in writing this article included:
1) Reading the scientific literature, the results of research conducted earlier, and social practice, as well as the analysis of the provisions that were covered in the literature, their evidence and an assessment of their theoretical and practical significance.
2) The research methodology was developed in such a way that the collected information contained information of both objective and subjective character. Only a combination of objective and subjective indicators (an objective-subjective complex) is one of the indispensable conditions for obtaining reliable results.
3) Methodology of sociological research: a set of methods for establishing specific social factors and means of obtaining and processing primary sociological information. This is a system of techniques that allow one or another method to be applied in a specific subject area in order to accumulate and systematise empirical material. It is about both the methods of obtaining the required data and the methods of processing the material. It is known that processing is an independent stage of research, and the methods that were applied were taken into account already at the first stages of the research; with their consideration, the methods of obtaining primary information and the methodological documents themselves (questionnaires, programs, interviews, etc.) were adjusted.
4) One of the methods of collecting information on juvenile delinquency was the document analysis method. The analysis of documents plays an essential role in social cognition, which is due to their place in public life. The documents reflect the spiritual and material life of society with varying degrees of completeness; they contain information about the processes and results of the activities of individuals, collectives, large groups of the population and society as a whole. Consequently, documentary information was of certain interest for this study.
RESULTS AND DISCUSSION
According to its content, the phenomenon of juvenile delinquency is the criminal activity of children aged 11 to 18 years.The nature and direction of the criminal activity of children are determined by the unfavourable conditions for the formation and development of their personality during puberty, agerelated characteristics of motivation, lifestyle, as well as the influence of persons with criminal experience.Minors take the path of committing crimes due to 4 main reasons: first, they are drawn into criminal activities by adults who have criminal experience; secondly, with the help of prohibited (illegal) behaviour, children express themselves in a play or protest form, distortedly exercise their right to independence (adulthood); third, the commission of crimes is a defensive reaction to social helplessness, feelings of abandonment, uncertainty and fear of the future; fourthly, criminal behaviour acts as a means of adaptation to difficult living conditions, the struggle for survival in any situation.The consequences of the growing level of juvenile delinquency is an increase in the rates of recidivism of crimes committed by adults.All the authors shared a similar opinion, incl.foreign ones who were engaged in the study of the crime situation in unfavourable areas of large industrial cities, incl.USA.These conclusions formed the basis of fundamental international legal acts against juvenile delinquency.At the same time, it should be remembered that persons who turned 16 years old before committing a crime are subject to criminal liability in Ukraine.However, for the commission of certain types of crimes, minors between the ages of 14 and 16 are prosecuted.Such types of punishments can be applied to minors found guilty of a crime, as fines, community service, correctional labour, arrest or imprisonment.Also, minors may be subject to additional punishments in the form of a fine and deprivation of the right to hold certain positions or engage in certain activities.
In addition, special, less strict and more humane conditions of criminal responsibility and punishment are provided for minors, in comparison with adult criminals, namely: under certain conditions, it is possible to release a juvenile from criminal responsibility with the use of compulsory educational measures; the types of punishments have been reduced and the terms of established punishments have been limited; there are softer requirements (conditions) for exemption from criminal punishment; the terms, after which it is possible to apply early conditional release to minors, as well as the terms of repayment and removal of conviction have been reduced.When sentencing juvenile offenders in Ukraine, a court takes into account the severity of a crime committed, a personality of a perpetrator and the circumstances that mitigate and aggravate a punishment, as well as conditions of his life and upbringing, the influence of adults, the level of development and other characteristics of a personality of a minor.Also, the minor age of a person, in itself, is a circumstance that mitigates punishment -an interesting fact that must be taken into account when sentencing, regardless of whether a defendant has reached the age of majority at the time of a trial.The peculiarity of working with juvenile criminals is that they can be exempted from criminal punishment -Ukrainian legislation provides for this possibility, but under certain conditions.First, a minor can be released from punishment with a probationary period.Such release is possible if a person is sentenced to arrest or imprisonment.The probationary period is established for a duration of 1 to 2 years.Secondly, a minor can be released from punishment subject to the application of compulsory educational measures -if a minor has committed a crime of little or medium gravity, he can be released from punishment.However, it must be recognised that as a result of sincere remorse and further impeccable behaviour, a juvenile offender ceased to be dangerous to society.A similar practice of combating juvenile crime is found in the countries of the UN, the European Union and other international associations.
In Ukraine, as in many other developed countries, a minor can be imprisoned.Deprivation of liberty for a specified period is the most severe punishment for: repeated offences of little gravity (for a period not exceeding 1 year 6 months), a crime of average gravity (for a period not exceeding 4 years), a grave crime (for a period not exceeding 7 years), particularly grave crime (for a period not exceeding 10 years), particularly grave crime involving premediated homicide (for a period up to 15 years).General types of exemption from criminal liability can be applied to minors: in connection with remorse; in connection with the reconciliation of a guilty person and a victim; in connection with the transfer of a guilty person on bail; due to a change in a situation.Parole can also be applied to minors from serving a sentence.However, the latter type applies only to those who have been sentenced to imprisonment.Separately, it can be noted that the coordinator of all areas of work related to the reform of the sphere of justice for children in Ukraine is the Ministry of Justice.Ukraine has introduced the National Strategy for Reforming Justice for Children for the Period until 2023, within the framework of which a draft law "On Child-Friendly Justice" was developed, and the project "From Dream to Action" was launched, the purpose of which is to prevent juvenile delinquency.Additionally, the Ministry of Justice, together with the Prosecutor General's Office, launched a pilot project "Rehabilitation program for minors who are suspected of committing a crime" based on the system for providing BPD in Donetsk, Odesa, Lviv, Lugansk, Mykolaiv and Kharkiv regions.The key conditions for using the program are additional measures that will help the minor to build social connections, find a new hobby and change his behaviour.For this, specialised institutions and psychologists with experience in resocialisation of children are involved.Further, according to the results of the Recovery Program, if a minor compensates for the damage and reconciles with the victim, the criminal proceedings are closed.In this case, the minor will undergo resocialisation programs.According to the statistics of the Prosecutor General Office, in 70% of cases, a juvenile offender who is imprisoned for more than one year is sent to prison again.Therefore, restorative justice offers a chance to return to normal life.
Crimes are usually committed by juvenile offenders for specific reasons.Analysis of the information available today by domestic and foreign forensic experts has revealed some of the reasons for the criminal behaviour of minors.According to scientists, teachers, employees of various institutions that deal with minors (criminal police for minors, special institutions for minors, etc.), the main cause of juvenile delinquency is the unfavourable situation in the family and its negative impact.The family, in accordance with its nature, has an initial and, moreover, a very longterm function of raising children.It is the bearer of an emotional and psychological microclimate based on the unique closeness of educators and a child, and therefore directs the development of children's communication in all spheres of family, neighbours, educational, leisure, labour contacts and relationships.It is the family that provides lessons on gender relations and future family life; forms the attitude towards education and work activities, the requirements of responsibility to society, mutual assistance; determines the worldview, ideological, moral, legal values of the society; forms character, selfesteem and self-criticism; simulates leisure; ensures control over children and adolescents as members of society who are in the stage of intensive development and have not yet fully mastered the skills of social interaction.A special and very important component is the specificity of the process of family education itself.Speaking about family education, it is necessary first of all to note its continuity, duration, versatility.In this, no other educational public institution can compare with the family.Deficiencies and violations in family education are the main sources of the formation of those distortions of a personality of a teenager, which determine the commission of a crime.They cause up to 80% of juvenile misconduct cases.It should be borne in mind that the influence of other sources of criminal "infection" of minors is largely stimulated by the position of the family.
The problem of juvenile delinquency is far from being limited to dysfunctional families, although for minors who grew up in them, the criminal risk increases four to five times compared to peers from families where there are no clear examples of daily antisocial behaviour.According to authors' data, 15.8% of the examined juvenile convicts lived in families where there were previously convicted persons among adults; 13.1% -where there were constant quarrels; 14.3%where alcoholic beverages were abused.For 10.8% of families of convicted adolescents, hostile attitudes towards other people are characteristic.To neutralise unfavourable conditions in a family, their negative impact on the criminalisation of minors, a state program is necessary to overcome all types of family problems.The manifestations of juvenile delinquency are about ten of the most common types of crime among children.Children are also victims of unlawful attacks by minors in more than a third of cases.More than half of the crimes committed by minors are classified as grave and particularly grave.In the regional context, juvenile delinquency is spreading more intensively on the territory of densely populated industrial eastern and south-eastern regions of Ukraine, where a complex crime situation is always observed.Children living in depressed areas of large cities, regional and district centres are characterised by increased criminal activity.The consequence of the increase in the level of juvenile delinquency is an increase in the rate of recidivism of crimes committed by adults after a certain time.Based on these general provisions, the authors will try to give a quantitative and qualitative description of juvenile delinquency.
The legislator limited the period of committing crimes by minors according to the criterion of the lower limit of reaching the age of criminal responsibility (14 years) and the upper limit of reaching majority (18 years).However, in fact, the boundaries of the existence of the phenomenon of juvenile delinquency are determined by the very criminal reality that has objectively developed among minors.Practice shows that children begin to experiment with committing offences and socially dangerous acts, as a rule, from the age of 11.From this age, for committing socially dangerous acts, for which the Criminal Code of Ukraine provides punishment in the form of imprisonment for a term of over five years, juvenile offenders are placed in reception centres for children.In general, juvenile offenders are characterised by two main models of criminal behaviour -poly-motivational and monomotivational.The first is characterised by the ambivalence of desires and feelings, competition of needs and interests, scattering of goals, the uncertainty of intent regarding the ways, methods and means of unlawful encroachments, high dependence of the implementation of intentions on collective decisions, favourable situation development and victim behaviour.This model of behaviour is more typical for minors who are just taking the path of committing crimes and experimenting with various forms of dangerous behaviour and thereby strive to acquire a primary criminal experience and raise their status in the reference group.The mono-motivational model of juvenile criminal behaviour is based on homogeneous needs and interests, common motives and goals, priority forms and methods of criminal behaviour, which are covered by a single intent.Most often, such crimes are planned and committed in advance in criminal groups of mixed age composition of minors and adult criminals.Mostly these are crimes against property or against human life and health.
CONCLUSION
The criminological characteristics of a personality of a juvenile offender contain detailed information about a juvenile (age, state of health, level of development, other socio-psychological traits and properties), the presence of adult instigators and other accomplices in a criminal offence, negative inclinations (alcoholism, drug addiction, gambling addiction), mitigating and aggravating punishment of circumstances, the presence of causal relationships between motives, actions and the result of a committed unlawful act.The study of the criminological characteristics of a juvenile offender is necessary for organising counteraction to relevant crimes, developing a system of measures of state institutions and public organisations aimed at eliminating negative phenomena and processes that give rise to juvenile delinquency.The conducted research has allowed revealing some social characteristics of juvenile criminals, the main determinants of their criminal behaviour.Crime belongs to the phenomena of social pathology, the consequences of which are dysfunctional, damaging society and an individual.Strengthening democratic institutions and building a civil society is impossible without reducing the negative effects of this type of deviation.The development of this problem may have not only theoretical but also practical interest, and its further study will provide additional opportunities for correct and timely conclusions regarding not only the present, but also the future in terms of creating favourable conditions for the harmonious development of youth, meeting the needs for voluntary choice of a behaviour type not prohibited by law, active participation in creative, cultural, sports and recreational activities.
In extreme conditions and in connection with the accelerated reform of law enforcement agencies, as well as the beginning of the development of criminal justice in relation to minors in Ukraine, the state clearly underestimates the threat from juvenile delinquency and does not pay sufficient attention to counteraction.Despite the optimistic statistical data in recent years, juvenile delinquency has been on the rise since 2014.This is due to the general complication of a crime situation in the state and the rapid criminalisation of the deviant teenage environment.The current state of juvenile delinquency is characterised by the following tendencies: exaggeration of selfish motivation, predetermination of common crimes by difficult life circumstances and the struggle for survival in an aggressive environment, an increase in the level of street violence in cities, the convergence of various forms of criminal behaviour, an increase in the proportion of repetition and recidivism, the involvement of minors in criminal activity by their parents, close relatives and other persons with criminal experience.Juvenile offenders are more and more focused on the seizure of money and property for a wide range of economic purposes.However, their criminal behaviour is predominantly unstable.Modern juvenile delinquency, on the one hand, is acquiring signs of a hybrid combination with offences, and on the other, it manifests itself in an increase in the proportion of grave and particularly grave criminal offences.The established trend has a negative impact on the effectiveness of prevention of this category of illegal acts in Ukraine and should be revised taking into account the identified models of behaviour of juvenile criminals. | 2020-12-10T09:02:37.147Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "eed3c261e82409c34825a4184562daaafef271b5",
"oa_license": "CCBYNC",
"oa_url": "https://lifescienceglobal.com/pms/index.php/ijcs/article/download/7873/4141",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e005456c8236ae8e6704c4373b2c2e2a84ecd546",
"s2fieldsofstudy": [
"Sociology",
"Law"
],
"extfieldsofstudy": [
"Psychology"
]
} |
118936074 | pes2o/s2orc | v3-fos-license | Dimensional dependence of the metal-insulator transition
We study the dependence on the spatial dimensionality of different quantities relevant in the description of the Anderson transition by combining numerical calculations in a $3 \leq d \leq 6$ disordered tight binding model with theoretical arguments. Our results indicate that, in agreement with the one parameter scaling theory, the upper critical dimension for localization is infinity. Typical properties of the spectral correlations at the Anderson transition, such as level repulsion or a linear number variance, are still present in higher dimensions, though eigenvalue correlations get weaker as the dimensionality of the space increases. It is argued that such a critical behavior can be traced back to the exponential decay of the two-level correlation function in a certain range of eigenvalue separations. We also discuss to what extent different effective random matrix models proposed in the literature to describe the Anderson transition provide an accurate picture of this phenomenon. Finally, we study the effect of a random flux on our results.
I. INTRODUCTION
Almost fifty years after the landmark paper by Anderson 1 on localization, the study of the properties of a quantum particle in a random potential is still one of the central problems of modern condensed matter physics.
In the early days of localization theory, research was largely focused on the determination of the critical disorder at which the metal-insulator transition (also referred to as the Anderson transition) occurs as a function of the connectivity of the lattice. In the original Anderson paper this was achieved by looking at the limits of applicability of a locator expansion. 1,2 Later on, a more refined estimation based on the solution of a self-consistent equation 3 provided a similar answer. The self-consistent method is only exact in the case of the Cayley tree, but it is believed to be accurate if the spatial dimensionality is large enough. We note that in the locator expansion 1 the metal-insulator transition is induced by increasing the hopping amplitude of an initially localized particle.
In the seventies the application of ideas and techniques from the theory of phase transitions, such as scaling and the renormalization group, 4 opened new ways to tackle the localization problem, especially in low dimensional systems. This progress led eventually to the proposal of the one parameter scaling theory, 5 which, despite being still under debate, has become the 'standard' theory of localization. In the one parameter scaling theory, localization in a given disordered sample is described by the dimensionless conductance g. This quantity 6 is defined either as the sensitivity of a given quantum spectrum to a change of boundary conditions in units of the mean level spacing ∆, or as g = E_c/∆, where E_c, the Thouless energy, is an energy scale related to the classical diffusion time to cross the sample. The dimensionless conductance g is sensitive to localization effects. In a metal (insulator), it increases (decreases) monotonically with the system size L.
Under the assumption that the dimensionless conductance is only a function of the system size, and by using simple scaling arguments, the one parameter scaling theory predicts that the metal-insulator transition is characterized by a scale invariant dimensionless conductance g = g_c. The lowest dimension in which the metal-insulator transition occurs is d > 2. In two and lower dimensions, destructive interference caused by backscattering produces exponential localization of the eigenstates in real space for any amount of disorder in the limit L → ∞. In this picture, the Anderson transition is considered as a standard second order transition with critical exponents s, ν that control how the conductivity σ ∝ |W − W_c|^s vanishes or the localization length ξ ∝ |W_c − W|^{−ν} diverges as the critical disorder W_c is approached.
In d = 2 + ε (ε ≪ 1) the transition occurs in the weak disorder region and consequently an analytical treatment is possible. Diagrammatic perturbation theory and field theory techniques 4,7 predict that ν ∼ 1/ε and W_c ∝ ε. By contrast, the critical exponent associated with the Cayley tree, which should be close to that of a disordered conductor in d ≫ 2 dimensions, is ν = 1/2. 8 In the context of second order phase transitions this value corresponds to the upper critical dimension d_u, namely, for d ≥ d_u fluctuations are irrelevant and the mean field approximation becomes exact. For the localization problem different values d_u = 4, 6, 8, ∞ of the upper critical dimension have been reported. 9 The results of this paper discard d_u = 4, 6 and indicate that d_u → ∞ is the upper critical dimension. However, we would like to point out that the exact significance of the upper critical dimension for localization is unclear. It is not known what fluctuations are suppressed at the upper critical dimension and to what extent spectral or transport properties at criticality are affected.
The Anderson transition in a disordered conductor is a consequence of a highly nontrivial interplay between quantum destructive interference effects and quantum tunneling. In low dimensions, d ∼ 2, weak quantum destructive interference effects induce the Anderson transition. Analytical results are available based on perturbation theory around the metallic state. 4,7 In high dimensions, d ≫ 2, quantum tunneling is dominant and the locator expansion 1 or the self-consistent formalism 3 can be utilized to describe the transition. We note that in these papers corrections due to interference of different paths are neglected.
The progress in numerical calculations during the last twenty years has dramatically increased our knowledge 10,11,12,13,14 of the metal-insulator transition, especially in intermediate dimensions such as d = 3, 4, for which a rigorous analytical treatment is not available. Below we cite a few of the most relevant results.
It was verified that for a disorder strength below the critical one, the system has a mobility edge at a certain energy which separates localized from delocalized states. 10 Its position moves away from the band center as the disorder is decreased. Delocalized eigenstates, typical of a metal, are extended through the sample and the level statistics agree with the random matrix prediction 15 for the appropriate symmetry. The spectral correlations at the Anderson transition, usually referred to as critical statistics, 11,16 are scale invariant and intermediate between the predictions for a metal and for an insulator. 11 By scale invariant we mean that any spectral correlator utilized to describe the spectral properties of the disordered Hamiltonian does not depend on the system size.
Eigenfunctions at the Anderson transition are multifractals 10,13 (for a review see 17,18 ), namely, their moments present an anomalous scaling, P_q = ∫ d^d r |ψ(r)|^{2q} ∝ L^{−D_q(q−1)}, with respect to the sample size L, where D_q is a set of exponents describing the Anderson transition.
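As a schematic illustration of how such exponents are extracted in practice (our own sketch, not the procedure of Refs. 10,13), D_q follows from the slope of log P_q versus log L for critical eigenstates obtained at several system sizes:

```python
import numpy as np

def multifractal_dimension(q, sizes, psi_by_size):
    """Estimate D_q from P_q(L) = sum_r |psi(r)|^(2q) ~ L^(-D_q (q-1)).
    psi_by_size maps a linear size L to one normalized critical eigenvector."""
    logL = np.log(np.asarray(list(sizes), dtype=float))
    logP = np.log([np.sum(np.abs(psi_by_size[L]) ** (2 * q)) for L in sizes])
    slope = np.polyfit(logL, logP, 1)[0]      # slope of log P_q vs log L
    return -slope / (q - 1)
```

In a real calculation the moments are of course averaged over many disorder realizations before the fit.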
The main features of the Anderson transition only depend on the dimensionality of the space and on the universality class, 20,21 namely, the presence or not of a magnetic field (or other time reversal breaking mechanism) or of a spin-orbit interaction. The dependence on the universality class diminishes as the spatial dimension increases. It has also been reported that certain spectral correlators at the Anderson transition are sensitive to different boundary conditions. 22 All of these numerical findings are compatible with the one parameter scaling theory. The applicability of the ε-expansion (d = 2 + ε) is by contrast much more restricted. A naive extrapolation to ε = 1, 2 yields ν_{3D} ∼ 1/ε = 1 and ν_{4D} = 1/2, thus suggesting that the upper critical dimension is four. However, numerical calculations 12,23 show undoubtedly that ν_{3D} ∼ 3/2 and ν_{4D} ∼ 1. Similarly, up to d = 4, the self-consistent theory overestimates by more than a factor of two the value of the critical disorder at which the Anderson transition occurs. This suggests that none of the theories traditionally utilized to describe the metal-insulator transition can really be extrapolated to the physically relevant case of d = 3. In order to make progress in this difficult problem a new basis for the study of the Anderson transition in any dimension is necessary. In this paper we have a more modest goal: a detailed exploration of the dependence of different quantities defining the Anderson transition on the spatial dimensionality.
We propose simple relations that describe how the parameters defining the Anderson transition depend on the dimensionality of the space. It is argued that the upper critical dimension must be infinite. Our results are supported by numerical evidence from a disordered Anderson model in a hypercubic lattice in 3 ≤ d ≤ 6. This is the first time that the Anderson transition in d = 5, 6 is investigated numerically in the literature (for some recent results in a small asymmetric lattice in d = 5 we refer to Ref. 24 ).
The organization of the paper is as follows. In Sec. II, we introduce the model to be studied, explain the technical details of the numerical simulation, locate the mobility edge in different dimensions, and investigate how the critical exponent and the critical disorder depend on the spatial dimensionality. In Sec. III, spectral correlations at the Anderson transition are investigated: three different regions of the two level correlation function are distinguished, and we study the dependence of the slope of the number variance and of the asymptotic decay of the level spacing distribution on the spatial dimensionality. We also discuss the range of validity of certain phenomenological models commonly used in the literature to describe the spectral correlations at the Anderson transition. Finally, we examine the effect of a magnetic flux on our results.
II. CRITICAL DISORDER, CRITICAL EXPONENTS AND UPPER CRITICAL DIMENSION
In this section we determine the critical disorder and critical exponents for different dimensions and then discuss their dependence on the spatial dimensionality.
A. The model: Technical details
Our starting point is the standard tight-binding Anderson model on a hyper-cubic L^d lattice with d = 3, . . . , 6,

H = Σ_i ε_i a_i† a_i + Σ_⟨i,j⟩ t_ij a_i† a_j ,   (1)

where the operator a_i (a_i†) destroys (creates) an electron at the ith site of the lattice and t_ij is the hopping integral between sites i and j, which is non zero only for nearest neighbors. In the following we take t_ij = 1 and the lattice constant equal to unity, which sets the energy and length units, respectively. The uncorrelated random energies ε_i are distributed with constant probability within the interval (−W/2, W/2), where W denotes the strength of the disorder, and hard-wall boundary conditions are imposed in all directions.
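A minimal sketch (ours, not the production code behind this paper) of how the Hamiltonian of Eq. (1) can be assembled as a sparse matrix is the following; hard-wall boundaries are implemented simply by omitting the wrap-around hoppings.

```python
import numpy as np
import scipy.sparse as sp

def anderson_hamiltonian(L, d, W, rng=None):
    """Sparse Anderson Hamiltonian, Eq. (1), on an L^d hypercubic lattice with
    t_ij = 1, hard-wall boundaries and site energies uniform in (-W/2, W/2)."""
    if rng is None:
        rng = np.random.default_rng()
    N = L ** d
    H = sp.lil_matrix((N, N))
    H.setdiag(rng.uniform(-W / 2, W / 2, size=N))     # diagonal disorder
    coords = np.indices((L,) * d).reshape(d, N).T      # integer coordinates of each site
    for i, r in enumerate(coords):
        for mu in range(d):
            if r[mu] + 1 < L:                          # hard wall: no wrap-around
                j = i + L ** (d - 1 - mu)              # neighbor one step along direction mu
                H[i, j] = H[j, i] = 1.0                # nearest-neighbor hopping
    return H.tocsr()
```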
In order to proceed we compute eigenvalues of the Hamiltonian Eq. (1) for different volumes and disorders by using techniques for large sparse matrices, in particular a Lanczos tridiagonalization without reorthogonalization. 25 We restrict ourselves to a small energy window (−2, 2) around the center of the band. Calculations have been carried out in samples of sizes up to L = 30 for d = 3, L = 12 for d = 4, L = 10 for d = 5, and L = 7 for d = 6. The number of random realizations is such that for a given triad {d, L, W} the number of eigenvalues obtained is at least 3 × 10^5. In order to study the level statistics around the mobility edge more accurately, this number was increased to 20 × 10^6 at the critical disorder. Eigenvalues thus obtained are appropriately unfolded, i.e., they are rescaled so that the spectral density on a spectral window comprising several level spacings is unity.
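The diagonalization and unfolding steps can be sketched as follows (again an illustration rather than the authors' Lanczos-without-reorthogonalization code; a polynomial fit to the spectral staircase is used here as one common unfolding recipe, whereas the paper rescales locally over a window of a few spacings).

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def central_eigenvalues(H, k=200):
    """Eigenvalues closest to the band center E = 0 via shift-invert Lanczos (scipy)."""
    return np.sort(eigsh(H, k=k, sigma=0.0, return_eigenvectors=False))

def unfold(evals, deg=7):
    """Rescale eigenvalues so that the mean level spacing is unity, by fitting a
    smooth polynomial to the integrated density of states (spectral staircase)."""
    evals = np.sort(evals)
    staircase = np.arange(1, evals.size + 1)
    return np.polyval(np.polyfit(evals, staircase, deg), evals)
```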
B. Location of Wc(d) and ν(d)
Our first task is to find the critical disorder W_c and the critical exponent ν for different dimensions in the small spectral window (−2, 2) around the origin. In order to proceed we determine the location of the mobility edge close to the band center by using the finite-size scaling method. 11 First we evaluate a certain spectral correlator for different sizes L and disorder strengths W. Then we locate the mobility edge by finding the disorder W_c such that the spectral correlator analyzed becomes size independent. In our case we investigate the level spacing distribution P(s) (probability of finding two neighboring eigenvalues at a distance s = (ε_{i+1} − ε_i)/Δ, with Δ being the local mean level spacing). The scaling behavior of P(s) is examined through the following function of its variance, 26

η = [var(s) − var_WD] / [var_P − var_WD],    (2)

which describes the relative deviation of var(s) from the Wigner-Dyson (WD) limit. In Eq. (2), var(s) = ⟨s²⟩ − ⟨s⟩², where ⟨. . .⟩ denotes spectral and ensemble averaging, and var_WD = 0.286 and var_P = 1 are the variances of the WD and Poisson statistics, respectively. Hence η = 1 (0) for an insulator (metal). Any other intermediate value of η in the thermodynamic limit is an indication of a mobility edge. In Fig. 1 we plot the W dependence of η for different system sizes in d = 5 (left panel) and d = 6 (right panel). The critical disorder W = W_c signaling the Anderson transition corresponds to the point at which η is independent of L. For weaker (stronger) disorder, η tends to the metallic (insulating) prediction. This is the first time that an Anderson transition has been found in such a high-dimensional disordered system. For a precise determination of the critical disorder W_c and the critical exponent ν we look at the correlation length near W_c,

ξ(W) = ξ_0 |W − W_c|^{−ν},    (3)

where ξ_0 is a constant. The numerical values of W_c and ν are obtained by expressing η(L, W) = f[L/ξ(W)] and then performing an expansion around the critical point,

η(L, W) ≈ Σ_n a_n [(W − W_c) L^{1/ν}]^n.    (4)

In practice, we have truncated the series at n = 4. For each dimension (d = 5, 6) we have performed a statistical analysis of the data in the windows shown in Fig. 1 with the Levenberg-Marquardt method for nonlinear least-squares models. The most likely fit is determined by minimizing the χ² statistic of the fitting function (4). We found the critical disorders W_c = 51.4 ± 0.4 in d = 5 and W_c = 74.5 ± 0.7 in d = 6, and the corresponding critical exponents are ν = 0.84 ± 0.06 and ν = 0.78 ± 0.06, respectively. A similar analysis for the d = 3 and d = 4 systems results in W_c = 15.22 ± 0.08 and ν = 1.52 ± 0.06 for d = 3, and W_c = 29.8 ± 0.2 and ν = 1.03 ± 0.07 for d = 4. We note that in the d = 3 case, the deviation of W_c from the accepted value W_c ∼ 16.5 is due to the use of hard-wall boundary conditions. See Fig. 2 for a plot of W_c and ν as a function of the spatial dimensionality.
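The finite-size scaling fit itself reduces to a nonlinear least-squares problem. The following sketch illustrates the procedure on synthetic data; the truncation order follows the description above, but the fabricated data set, noise level, and variable names are placeholders rather than the actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Scaling ansatz: eta(L, W) = sum_n a_n [(W - Wc) L^(1/nu)]^n, truncated at n = 4
def eta_model(X, Wc, nu, a0, a1, a2, a3, a4):
    L, W = X
    x = (W - Wc) * L ** (1.0 / nu)
    return a0 + a1 * x + a2 * x**2 + a3 * x**3 + a4 * x**4

# Fabricated (L, W, eta) values standing in for the measured spacing-variance data
rng = np.random.default_rng(1)
L_grid = np.repeat([6, 8, 10, 12], 7)
W_grid = np.tile(np.linspace(50.5, 52.5, 7), 4)
true_params = (51.4, 0.84, 0.45, 0.015, 2e-4, 0.0, 0.0)
eta_obs = eta_model((L_grid, W_grid), *true_params) + rng.normal(0, 0.01, L_grid.size)

# Levenberg-Marquardt nonlinear least squares (curve_fit's default without bounds)
p0 = (50.0, 1.0, 0.4, 0.01, 0.0, 0.0, 0.0)
popt, pcov = curve_fit(eta_model, (L_grid, W_grid), eta_obs, p0=p0, maxfev=20000)
err = np.sqrt(np.diag(pcov))
print(f"Wc = {popt[0]:.2f} +/- {err[0]:.2f},  nu = {popt[1]:.2f} +/- {err[1]:.2f}")
```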
C. Theoretical analysis of Wc(d) and ν(d)
In certain limiting cases W_c and ν are known analytically. For instance, if effects related to interference among different paths are neglected, 3 the standard tight-binding Anderson model is effectively defined on a Cayley tree and ν = 1/2. On the other hand, if only interference corrections to the metallic limit are considered, then ν = 1/(d − 2). 7 The former prediction is supposed to be approximately valid for d ≫ 2 and the latter for d = 2 + ε with ε ≪ 1. From the above numerical results it is clear that neither of these limits is appropriate in the range of intermediate dimensions of interest. Additionally, it is believed 3 that corrections to the ν = 1/2 result should go as ∼ 1/d, since this is the dependence on the spatial dimensionality of the neglected diagrams describing interference effects. Combining these two facts we propose that ν(d) = 1/2 + 1/(d − 2) for all dimensions. As is shown in Fig. 2 (right panel), this relation satisfies all limiting cases and reproduces the numerical results accurately. According to the above relation, the upper critical dimension for localization is infinite. This result is fully supported by the analysis of the spectral correlations (see next section). Moreover, in a recent paper 27 it has been proved rigorously that the level statistics of a disordered system on a Cayley tree (ν = 1/2) are Poisson, as for an insulator.
As was mentioned previously, the Cayley tree corresponds to a d-dimensional conductor in which all interference effects between different paths are neglected. It is thus expected to be an accurate description of a disordered conductor only in the limit d ≫ 2.
On the other hand, if the one-parameter scaling theory is valid, quantum diffusion never stops (see next section) for any finite dimension. Level repulsion typical of a metal will be present in any finite dimension, so the level statistics at criticality can be those of an insulator only in the d → ∞ limit. But this is precisely the result for the Cayley tree, 27 which corresponds to the upper critical dimension for localization. It is thus clear that the upper critical dimension must be infinite.
A similar analysis can be carried out for the critical disorder W_c. The original estimate of Anderson, 1 W_c = 4K ln(W_c/2) (where K is the connective constant, which is slightly less than the number of nearest neighbors minus one), greatly overestimates W_c. This is hardly surprising since Anderson's calculation involves crude approximations and consequently should be considered an order-of-magnitude estimate rather than an accurate prediction. For instance, deviations coming from interference effects are neglected in this scheme. Roughly speaking, they tend to reduce W_c by an amount of order W_c/d.
In the opposite limit, d = 2 + ε, simple perturbation theory 7 yields W_c ∝ d − 2. The discrepancy observed with the analytical results in the limit of high dimensionality prevents us from proposing an interpolating relation as in the case of the critical exponent. However, we have noticed that much better agreement with the numerical results is achieved if an effective connective constant K_eff = K/2 is utilized (solid line in the left panel of Fig. 2). Furthermore, the remaining deviation gets smaller as the spatial dimensionality increases, thus suggesting that it may be produced by destructive interference effects (∼ W_c/d).
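To make these estimates concrete, Anderson's implicit relation can be solved by simple fixed-point iteration, and the same routine can be evaluated with K_eff = K/2. The choice K = 2d − 1 below is an assumption made purely for illustration; as stated above, the true connective constant is slightly smaller than this value.

```python
import numpy as np

def anderson_wc(K, w0=20.0, tol=1e-10, max_iter=10_000):
    """Fixed-point solution of Anderson's implicit estimate Wc = 4K ln(Wc/2)."""
    w = w0
    for _ in range(max_iter):
        w_new = 4.0 * K * np.log(w / 2.0)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

for d in (3, 4, 5, 6):
    K = 2 * d - 1                     # illustrative stand-in for the connective constant
    print(d, anderson_wc(K), anderson_wc(K / 2.0))   # original estimate vs K_eff = K/2
```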
III. LEVEL STATISTICS
In this section we investigate the level statistics at the Anderson transition. We shall mainly focus on their dependence on the spatial dimensionality and on the exact functional form of the two-level correlation function (TLCF).
A. Theoretical analysis of R2(s)
Our starting point is the connected TLCF,

R_2(s) = ⟨ρ(ε) ρ(ε + ω)⟩ / ⟨ρ(ε)⟩²,    (6)

where ρ(ε) is the density of states at energy ε, ⟨. . .⟩ denotes averaging over disorder realizations, and s = ω/Δ, where Δ = 1/[L^d ⟨ρ(ε)⟩] is the mean level spacing. Once the spectrum has been unfolded, R_2(s) can be written simply as

R_2(s) = δ(s) + Σ_n p(n; s),    (7)

where p(n; s) is the distribution of distances s between levels separated by n other energy levels and δ(s) describes the self-correlation of levels. 15 In numerical computations we use Eq. (7) since it gives much more accurate results than Eq. (6). According to the one-parameter scaling theory, the spectral properties depend on the dimensionless conductance g, which is a function of the system size L only. In a metal g → ∞ for L → ∞, the Hamiltonian can be accurately approximated by a random matrix with the appropriate symmetry, and Wigner-Dyson statistics applies. 15 For instance, for broken time-reversal invariance, R_2(s) = δ(s) + 1 − sin²(πs)/(π²s²). In an insulator, eigenvalues are uncorrelated, Poisson statistics applies, and R_2(s) = δ(s).
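In practice, Eq. (7) amounts to accumulating histograms of the distances between each level and its n-th neighbour over many unfolded spectra. The sketch below illustrates this; the bin width, cut-off, and normalization conventions are illustrative choices rather than those actually used.

```python
import numpy as np

def r2_from_spectra(spectra, s_max=20.0, n_bins=400):
    """Estimate the smooth part of R2(s) from unfolded spectra via Eq. (7),
    i.e. by histogramming the distances between each level and its n-th
    neighbour (n - 1 intermediate levels) and summing over n."""
    edges = np.linspace(0.0, s_max, n_bins + 1)
    counts = np.zeros(n_bins)
    n_levels = 0
    for e in spectra:                     # one unfolded spectrum per realization
        e = np.sort(np.asarray(e))
        n_levels += e.size
        for n in range(1, e.size):
            dist = e[n:] - e[:-n]         # distances to the n-th neighbour
            dist = dist[dist < s_max]
            if dist.size == 0:
                break                     # larger n only gives larger distances
            counts += np.histogram(dist, bins=edges)[0]
    ds = edges[1] - edges[0]
    # normalize per level and per bin width (edge effects at the ends of the
    # spectral window are ignored in this sketch)
    s = 0.5 * (edges[1:] + edges[:-1])
    return s, counts / (n_levels * ds)
```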
Right at the Anderson transition, the dimensionless conductance g = g_c is size independent and the level statistics are supposed to be universal and intermediate between Wigner-Dyson and Poisson statistics. Unfortunately, there are few analytical results for the TLCF at criticality. In the d ≳ 2 region, 4 g_c ∼ 1/(d − 2) ≫ 1. The TLCF can only be evaluated explicitly in the limit d → 2 28 and g ≫ 1, where R_2(s) ∼ s (time-reversal invariant case) for s ≪ g_c, as for a metal, and R_2(s) ∼ e^{−As/g_c} for s ≫ g_c, with A a factor of order unity. The Anderson transition is thus characterized by level repulsion combined with an exponential decay of the TLCF.
In higher dimensions the exact form of the TLCF is not known. However, we note that the scale invariance of the spectral correlations at the Anderson transition restricts the decay of the TLCF in the s ≫ g_c region to be either power-law or exponential. 19 Our numerical results (see Fig. 3) for d ≥ 3 also support this picture.
The limit of long times and small energy differences, s ≪ g_c, is well understood in high dimensions as well. Level repulsion of neighboring eigenvalues, R_2(s) ∝ s, typical of a metal, should be a generic feature in any dimension. According to the one-parameter scaling theory, the averaged moments of the particle position at the Anderson transition increase asymptotically as t → ∞ as ⟨r(t)^{2m}⟩ ∼ t^{2m/d}, where m is a positive integer. As the spatial dimensionality d increases the diffusion is slowed down, but it never stops, even at long times. This is an indication that the spectral correlations for sufficiently small energy intervals are similar to those of a metal and, as a consequence, R_2(s) ∼ s for s ≪ g_c (see Fig. 3).
Finite-size effects modify the TLCF in the critical region. 29 In any finite system at criticality the localization length ξ ∝ |E_c − E|^{−ν} (E_c is the location in energy of the mobility edge) is finite, and the dimensionless conductance is not, strictly speaking, scale invariant: g = g_c[1 + (L_ξ/L)^{1/ν}], where L_ξ is the localization length for a given E ∼ E_c. As a consequence, 29 the TLCF develops a power-law tail, R_2^tail(s) ∝ s^{γ−2} with γ = 1 − 1/(νd), for s > Δ_ξ/Δ, where Δ_ξ is the mean level spacing in a localization volume ξ^d. This tail is not related to the properties of the critical point but rather to how the system approaches it. In d = 2 + ε, ν = 1/ε ≫ 1 and R_2^tail(s) ∼ 1/s. In summary, we can distinguish three different regions in the critical TLCF: for s ≪ g_c, R_2(s) ∝ s; for s ≫ g_c, R_2(s) decays exponentially; and for s > Δ_ξ/Δ it decays as a power law due to finite-size effects. In order to observe the exponential decay related to the critical point our system size must be such that g_c > Δ_ξ/Δ. Finally, we note that the exact dependence of g_c on the spatial dimensionality is not known. We are only aware of the prediction of Vollhardt and Wölfle, 7 obtained using a self-consistent diagrammatic theory valid for 2 < d < 4, which expresses g_c in terms of d and S_d, the surface of a d-dimensional sphere of unit radius. In principle it should be accurate only for d ≳ 2, though its exact range of validity is unclear.
B. Numerical analysis of R2(s)
After the theoretical analysis we are now ready to present our numerical results for R_2(s) at the Anderson transition in d = 3-6 dimensions. Our motivation is to study the existence and extent of the three regions introduced above: level repulsion, power-law decay, and exponential decay. Indeed, our numerical results clearly show these three regimes in all dimensions d = 3-6 investigated.
We have first verified (not shown) that for sufficiently large s, R_2(s) ∼ 1/s^γ. The numerical value of the exponent γ was in full agreement with the theoretical prediction γ = 1 − (νd)^{−1}.
Then we investigate to what extent the level repulsion typical of the Anderson transition in d = 3, 4 is still present in higher dimensions. As is observed in Fig. 3 (left), for sufficiently small s, R_2(s) ∼ s for all dimensions studied. The solid lines are linear fits of the form R_2(s) = C + Ds with fitting parameters D = 6.6 ± 0.8 for d = 3, 15.0 ± 1.2 (d = 4), 101 ± 5 (d = 5), and 373 ± 32 (d = 6). The parameter C is equal to zero within the error bars in all cases. This is consistent with the prediction of the one-parameter scaling theory that quantum diffusion never stops. However, the range in which level repulsion is observed decreases dramatically with the spatial dimensionality, thus suggesting that the critical conductance g_c also decreases rapidly with the dimension. It is hard to give a more quantitative prediction of g_c as a function of the spatial dimensionality: the estimation of Vollhardt and Wölfle 7 mentioned previously fails for d > 3. Another option is to extrapolate the result in the diffusive regime, 30 R_2(s) ∼ s(1 + a_d/g²) for s ≪ g_c, to the critical one. However, the geometrical coefficient a_d also diverges for d > 3.

FIG. 3 (caption): In the left panel we look at the region of small s where level repulsion is still observed. As the dimensionality increases the Anderson transition occurs at stronger disorder and the region of level repulsion is smaller. In the right panel the window of s in which exponential decay is observed is shown. Such decay is responsible for typical features of the Anderson transition such as a linear number variance or a scale-invariant spectrum. For the sake of clarity we have removed the power-law contribution R_2(s) ∝ 1/s^γ. It is well established that this term does not really describe the properties at the Anderson transition but rather how the system approaches it. Moreover, its contribution to the number variance and other spectral correlators is negligible with respect to the exponential contribution.
Our numerical results (see Fig. 3, right) show that for larger spectral separations, s ≥ g_c, the linear repulsion is replaced by an exponential decay. The solid lines correspond to a linear fit ln[1 − R_2(s)] = C − Ds with fitting parameters D = 3.7 ± 0.1 for d = 3, 4.7 ± 0.1 (d = 4), 5.6 ± 0.5 (d = 5), and 9.5 ± 0.2 (d = 6). The maximum value of s plotted was chosen according to technical criteria: for larger values of s, 1 − R_2(s) fluctuates around zero, suggesting that the maximum precision of the computer has been reached.
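Extracting the decay constants quoted above is then a linear fit of ln[1 − R_2(s)] against s over the window where the decay looks exponential; a short sketch with placeholder window limits:

```python
import numpy as np

def fit_exponential_decay(s, r2, s_min, s_max):
    """Fit ln[1 - R2(s)] = C - D*s on the window [s_min, s_max]."""
    mask = (s >= s_min) & (s <= s_max) & (r2 < 1.0)   # keep points with 1 - R2 > 0
    slope, intercept = np.polyfit(s[mask], np.log(1.0 - r2[mask]), 1)
    return -slope, intercept                           # D and C

# example usage: D, C = fit_exponential_decay(s, r2, s_min=1.0, s_max=4.0)
```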
We note that such exponential decay has already been observed in certain one-dimensional disordered systems with long-range hopping 13 and in phenomenological short-range plasma models 34 whose spectral properties are strikingly similar to those of a disordered system with short-range hopping at the Anderson transition. In these one-dimensional systems it can be proved analytically that R_c(s) ∼ e^{−As/g}, where A is a constant of order unity. It is thus tempting to speculate that in our case g_c ∼ 1/D, with D the fitting parameter above. However, a deeper analytical understanding of the Anderson transition is needed to rule out the possibility that additional geometrical factors (such as a_d above) enter the exponent of R_2(s), which would make the relation between g_c and D less direct. (Figure caption fragment: the system tends to the Poisson limit (P) as the dimension is increased; WD denotes the Wigner-Dyson distribution.)
C. Spectral correlators
Level statistics at the Anderson transition are usually investigated by computing certain spectral correlators from the TLCF or from higher n-level correlation functions. The level spacing distribution P(s) is a popular choice to study the correlations of eigenvalues separated by short distances, of the order of the mean level spacing. On the other hand, the number variance Σ²(ℓ) = ⟨(N_ℓ − ⟨N_ℓ⟩)²⟩ (where N_ℓ is the number of eigenvalues in an interval of length ℓ) provides useful information about spectral correlations for distances much larger than the mean level spacing.
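Both observables follow directly from the unfolded levels. The sketch below shows one way of estimating Σ²(ℓ) and the nearest-neighbour spacing variance that enters η; the number of sampled windows and other choices are illustrative.

```python
import numpy as np

def number_variance(spectrum, ells, n_windows=2000, rng=None):
    """Sigma^2(l) = <(N_l - <N_l>)^2> from one unfolded spectrum, sampling
    windows of length l at random positions inside the spectrum."""
    rng = rng or np.random.default_rng()
    e = np.sort(np.asarray(spectrum))
    sigma2 = []
    for ell in ells:
        starts = rng.uniform(e[0], e[-1] - ell, size=n_windows)
        counts = np.searchsorted(e, starts + ell) - np.searchsorted(e, starts)
        sigma2.append(np.var(counts))
    return np.array(sigma2)

def spacing_variance(spectra):
    """var(s) of nearest-neighbour spacings pooled over realizations,
    as used in the definition of eta above."""
    s = np.concatenate([np.diff(np.sort(np.asarray(e))) for e in spectra])
    s = s / s.mean()            # unfolded spectra should already have <s> = 1
    return np.var(s)
```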
Numerical calculations in d ≤ 4 at the Anderson transition have found that P(s) → 0 for s → 0, as in a metal. However, the number variance is asymptotically linear, Σ²(ℓ) ∼ χℓ, as in an insulator but with a slope χ < 1. The origin of this linear behavior can be explained heuristically 31 by using the one-parameter scaling theory and making the plausible approximation that eigenvalues interact only if their separation (in units of the mean level spacing) is smaller than g_c. In the critical region it is also expected that P(s) ∼ e^{−As} (A > 1) for s ≫ 1, similar to the insulator limit P(s) = e^{−s}.
A natural question to ask is whether these spectral features also hold at the Anderson transition in higher dimensions, d = 5, 6. Our numerical results (see Fig. 4) fully confirm that both P(s) and Σ²(ℓ) have all the signatures of critical statistics. The plots of P(s) and Σ²(ℓ) correspond to the maximum L used in each dimension, though almost identical results are obtained for smaller volumes (not shown). The straight lines in Fig. 4 are fits of the form Σ²(ℓ) = C + χℓ and ln P(s) = D − As. The best fitting parameters χ and A are plotted in Fig. 5 as a function of the spatial dimensionality. It is clearly observed that the slope of the number variance χ increases and A decreases with the spatial dimensionality, but neither reaches the Poisson limit χ = A = 1. This confirms that the upper critical dimension must be d_u > 6 and strongly suggests that it is indeed infinite, as this is, according to the fit in Fig. 5, the dimension at which χ = A = 1.
We are especially interested in the specific dependence of χ and A on the spatial dimensionality. In d = 2 + ε the Anderson transition occurs in the weak-disorder region, g_c ≫ 1 and χ ∼ 1/g_c ∼ d − 2 ≪ 1. On the other hand, the prediction for d → ∞ (Cayley tree) is A = χ = 1. 27 In principle, corrections to the Cayley tree limit due to interference between different paths decay as 1/d or faster, so it is tempting to conjecture that χ = 1 − C/(d − 2) and A = 1 + D/(d − 2). The numerical results of Fig. 5 confirm this dependence, especially for the parameter A. In the case of χ the situation is less clear. A reason for the discrepancy with the theoretical prediction could be that d ∼ 3 is still far from the limit d ≫ 2 in which the conjectured relation holds. Indeed, we have observed that our numerical data are better described (dotted line in Fig. 5) by χ = tanh[C(d − 2)] with C = 0.29 ∼ 1/π. Such a dependence of χ on hyperbolic functions has already been reported for generalized random matrix models 13 whose spectral correlations are strikingly similar to the ones at the Anderson transition. The straight lines in Fig. 5 are linear fits to the conjectured relations with fitting parameters C = 0.78 ± 0.06 and D = 0.55 ± 0.01. From a physical point of view, these numerical results are a further confirmation that analytical approaches to the Anderson transition that start from the metallic limit and add interference corrections, or that start from the insulating state and induce the transition to a metal by increasing the tunneling amplitude, fail to capture key features of the Anderson transition in intermediate dimensions, where both mechanisms are at work.
D. Random matrix models and the Anderson transition
Typical signatures of critical statistics have also been found both in generalized random matrix models, 13,32,33 whose joint distribution of eigenvalues can be mapped onto the Calogero-Sutherland model at finite temperature, and in phenomenological short-range plasma models, whose joint distribution of eigenvalues 34 is given by the classical Dyson gas with the logarithmic pairwise interaction restricted to a finite number k of nearest neighbors (the spectral correlations of this model are usually referred to as semi-Poisson statistics, though strictly this name refers to the case k = 2). In the latter, explicit analytical solutions for all correlation functions are available for general k. Although these models reproduce typical properties of critical statistics, such as spectral scale invariance, level repulsion and a linear number variance, they are quantitatively different. In the generalized matrix models the joint distribution of eigenvalues can be considered as that of an ensemble of free particles at finite temperature with a nontrivial statistical interaction. The statistical interaction resembles the Vandermonde determinant, and the effect of a finite temperature is to suppress smoothly the correlations of distant eigenvalues. In the case of the short-range plasma model 34 this suppression is abrupt, since only nearest-neighbor levels interact with each other. A natural question to ask is which of these mechanisms is dominant at the Anderson transition studied in this paper. We have found a method to distinguish between them. In the short-range plasma model Aχ = 1 (A describes the exponential decay of P(s) ∼ e^{−As}). By contrast, in the generalized random matrix models Aχ ranges from 1/2 in the weak-disorder region to unity in the strong-disorder region. On the other hand, in our case, a disordered tight-binding model at the Anderson transition, Aχ ranges from 0.44 in d = 3 to 0.9 in d = 6, in agreement with the prediction of the generalized random matrix models. Our results thus suggest that the abrupt suppression of spectral correlations typical of semi-Poisson statistics can describe the spectral correlations at the Anderson transition in d ≫ 2 but not in intermediate dimensions.
E. Effect of a magnetic flux
So far all the results we have presented correspond to the case of time-reversal invariance. We have also investigated the effect of a random flux at criticality in d = 3-6. This has been achieved through the substitution t_ij → t_ij e^{iθ_ij} in the Hamiltonian Eq. (1). The phases θ_ij were chosen to be uniformly distributed in the interval [−π, π]. In d = 3, in agreement with previous claims in the literature, 21 small differences with respect to the time-reversal invariant case were found in W_c and in P(s) in the s ≪ 1 limit. Typically these effects are related to weak-localization-like corrections, which are strongly affected by the flux. However, in d = 5, 6 the time-reversal invariant and broken time-reversal cases were almost indistinguishable. This suggests that the mechanism of localization leading to weak-localization corrections based on destructive interference is less important in d ≫ 2 dimensions.
IV. CONCLUSIONS
We have studied the dependence on the spatial dimensionality of different quantities relevant to the description of the Anderson transition. As a result we have concluded that the upper critical dimension for localization is infinite. The level statistics tend to Poisson statistics, typical of an insulator, as the upper critical dimension is approached. We have also proposed that the exponential decay of the TLCF observed in numerical calculations is a signature of an Anderson transition. Neither the self-consistent theory of localization, exact on the Cayley tree, nor the ε-expansion formalism is accurate for intermediate dimensions. A new basis for the localization problem is thus called for. Finally, the effect of a magnetic flux and the validity of certain effective models to describe the spectral correlations at the Anderson transition have been discussed. | 2019-04-14T02:13:10.967Z | 2006-12-18T00:00:00.000 | {
"year": 2006,
"sha1": "ebefbaf0202190a39abc5f22c36f93f391747bed",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0612454",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ebefbaf0202190a39abc5f22c36f93f391747bed",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
53714618 | pes2o/s2orc | v3-fos-license | Dissecting the Genomic Architecture of Resistance to Eimeria maxima Parasitism in the Chicken
Coccidiosis in poultry, caused by protozoan parasites of the genus Eimeria, is an intestinal disease with substantial economic impact. With the use of anticoccidial drugs under public and political pressure, and the comparatively higher cost of live-attenuated vaccines, an attractive complementary strategy for control is to breed chickens with increased resistance to Eimeria parasitism. Prior infection with Eimeria maxima leads to complete immunity against challenge with homologous strains, but only partial resistance to challenge with antigenically diverse heterologous strains. We investigate the genetic architecture of avian resistance to E. maxima primary infection and heterologous strain secondary challenge using White Leghorn populations of derived inbred lines, C.B12 and 15I, known to differ in susceptibility to the parasite. An intercross population was infected with E. maxima Houghton (H) strain, followed 3 weeks later by E. maxima Weybridge (W) strain challenge, while a backcross population received a single E. maxima W infection. The phenotypes measured were parasite replication (counting fecal oocyst output or qPCR for parasite numbers in intestinal tissue), intestinal lesion score (gross pathology, scale 0–4), and for the backcross only, serum interleukin-10 (IL-10) levels. Birds were genotyped using a high density genome-wide DNA array (600K, Affymetrix). Genome-wide association study located associations on chromosomes 1, 2, 3, and 5 following primary infection in the backcross population, and a suggestive association on chromosome 1 following heterologous E. maxima W challenge in the intercross population. This mapped several megabases away from the quantitative trait locus (QTL) linked to the backcross primary W strain infection, suggesting different underlying mechanisms for the primary- and heterologous secondary- responses. Underlying pathways for those genes located in the respective QTL for resistance to primary infection and protection against heterologous challenge were related mainly to immune response, with IL-10 signaling in the backcross primary infection being the most significant. Additionally, the identified markers associated with IL-10 levels exhibited significant additive genetic variance. We suggest this is a phenotype of interest to the outcome of challenge, being scalable in live birds and negating the requirement for single-bird cages, fecal oocyst counts, or slaughter for sampling (qPCR).
INTRODUCTION
Coccidiosis is an intestinal disease caused by intracellular protozoan parasites of the genus Eimeria. The control of coccidiosis is a challenge to the international poultry industry, with economic losses estimated at USD 3 billion annually (Dalloul and Lillehoj, 2006). Current control of coccidiosis relies on the prophylactic use of anticoccidial drugs, or vaccination with formulations of live wild-type or attenuated parasites (Crouch et al., 2003; McDonald and Shirley, 2009). However, use of some anticoccidial drugs has been curtailed by legislation, while the limited production capacity and costs of live attenuated vaccines compromise their utility in broiler flocks. Thus, there is a need for complementary strategies to control coccidiosis in poultry. A promising approach would be to breed chickens for increased genetic resistance and increased vaccine response to Eimeria parasitism, since there is evidence for relevant host genetic variation (Johnson et al., 1986; Bumstead and Millard, 1992).
Coccidiosis in poultry is caused by seven distinct Eimeria species (Reid et al., 2014), with Eimeria maxima being one of the most common causes of coccidiosis in commercial broilers. Immunity induced by primary infection (vaccination) against E. maxima is commonly strain-specific, with immune escape contributing to sub-clinical coccidiosis symptoms that include decreased feed conversion efficiency, marked weight loss and low performance (Fitz-Coy, 1992; Blake et al., 2005). Johnson et al. (1986) demonstrated variance in coccidiosis susceptibility in chickens as a prerequisite to selective breeding for resistance. A subsequent study using several inbred White Leghorn lines established variance for benchmark phenotypes when chickens were infected with controlled doses of Eimeria spp. (Bumstead and Millard, 1987; Bumstead and Millard, 1992). The between-line variation observed in oocyst production by the different lines was not correlated with weight loss or mortality, indicating that within-trait observations were a result of effect accommodation rather than parasite restriction. The greatest differences in parasite replication (PR) were between line 15I and line C major histocompatibility complex (MHC) haplotype B12 (C.B12) chickens, which produced relatively high and low numbers of oocysts, respectively (Bumstead and Millard, 1987; Smith et al., 2002). Most notably, primary infection with the Houghton or Weybridge reference E. maxima strains induces 100% protection against secondary homologous challenge in 15I and C line chickens (Smith et al., 2002). However, the outcome of heterologous challenge varied by parasite strain and host genotype combination (Smith et al., 2002; Blake et al., 2004, 2005). Regardless of the substantial financial losses to industry caused by coccidiosis, few studies have attempted to identify quantitative trait loci (QTL) for resistance to E. maxima infection, and there are no relevant studies on the genetics of the response to heterologous secondary challenge.
The present study extends previous work in inbred chicken lines to determine the genetic architecture of E. maxima resistance, i.e., lack of PR, and of protection against secondary challenge with a heterologous E. maxima strain. First, an F2 intercross of the inbred White Leghorn chicken lines C.B12 × 15I was initially infected with E. maxima H, followed 3 weeks later by challenge with E. maxima W to investigate the response to challenge with the heterologous strain. Fecal oocyst output was counted to determine the severity of challenge. Second, a backcross population from the same two inbred lines [(C.B12 × 15I) × C.B12] was infected with E. maxima W to study primary resistance to parasitism. Three phenotypes were determined for these birds following infection: PR by qPCR for parasite numbers in intestinal tissue, intestinal lesion score (LS) (gross pathology, scale 0-4), and levels of serum interleukin-10 (IL-10), a novel biomarker found to be positively correlated with the pathology trait in chickens infected with E. tenella (Wu et al., 2016; Boulton et al., 2018). All birds were then genotyped using a 600K Affymetrix Axiom HD array (Kranis et al., 2013), enabling genome-wide association studies (GWASs), followed by pathway analysis to identify candidate genomic regions, pathways, networks and genes for resistance to E. maxima primary infection and for effective responses to challenge with a heterologous strain.
Ethics Statement
These trials were conducted under Home Office Project Licence in accordance with Home Office regulations under the Animals (Scientific Procedures) Act 1986 and the guidelines set down by the Institute for Animal Health and RVC Animal Welfare and Ethical Review Bodies.
Parasites
The E. maxima Houghton (H) and Weybridge (W) strains were used throughout these studies (Norton and Hein, 1976). Routine parasite passage, sporulation, and dose preparation were undertaken as described previously (Eckert et al., 1995) using specific pathogen free Light Sussex or Lohman LSL chickens. Oocysts were used within 1 month of harvest.
Animals
Inbred chicken lines 15I and C, derived from White Leghorn flocks at the USDA-ARS Avian Disease and Oncology Laboratory in East Lansing, MI, United States, were maintained by random mating within the specified-pathogen-free (SPF) flocks at the Pirbright Institute [formerly the Institute for Animal Health (IAH)], United Kingdom, since 1962 and 1969, respectively. F2 intercross birds (n = 195) were generated by crossing nine F1 (C.B12 × 15I) male progeny with 27 unrelated F1 female progeny at the IAH (Compton site). Six birds from each of the two parental lines, 15I and C.B12, were also hatched and kept under the same experimental conditions as the F2 (individual cages post-challenge).
To generate the backcross (n = 214), 20 F1 (C.B12 × 15I) male progeny were crossed with 100 unrelated C.B12 line females. The breeding was performed in the SPF Bumstead facility at the Roslin Institute, The University of Edinburgh, United Kingdom. Day old chicks were transported in isolated SPF containment to the Royal Veterinary College poultry barn, University of London, United Kingdom, where the primary infection with E. maxima W sporulated oocysts were conducted in floor pens.
Intercross Population
F2 intercross (n = 195) and 12 parental line birds were initially infected by oral gavage with 100 sporulated oocysts of E. maxima H at 25 days of age and moved to individual cages. Feces were collected from each bird on a daily basis during the 5-10 day post-infection (pi) period. Three weeks later (at 47 days of age) a secondary challenge was initiated by oral gavage of 250 sporulated oocysts of E. maxima W. Feces were again collected from each bird on a daily basis during the 5-10 day post-challenge period.
Backcross Population
At 21 days of age, chickens were inoculated by oral gavage with either 1 ml distilled water (control group, n = 20) or 100 sporulated oocysts of E. maxima W (infected group, n = 194). To avoid cross-infection the control group was housed separately. Birds were euthanised humanely at day 7 pi, coinciding with the peak pathological effects of E. maxima (Rothwell et al., 2004), providing the greatest sensitivity for parasite genome detection (Blake et al., 2006). A blood sample from each bird was collected post-mortem via aortic rupture into 1.5 ml microcentrifuge tubes (Sigma-Aldrich, Dorset, United Kingdom). Bijou tubes (7 ml, Sterilin) containing 5-10 volumes of room-temperature RNAlater (Life Technologies, Carlsbad, CA, United States) were used to store 5.0 cm of intestinal tissue and content from either side of Meckel's diverticulum.
Phenotyping
Individual oocyst output was used to study the outcome of the E. maxima H primary infection and secondary heterologous E. maxima W challenge in the intercross chicken population. Oocysts were quantified daily (5 to 10 days post-infection and challenge) using a microscope and saturated salt flotation in a McMaster counting chamber (Eckert et al., 1995;Smith et al., 2002). Daily totals were combined to provide a total count for oocyst output per bird for both the primary infection and secondary challenge. Oocyst counts were log-transformed to approximate normal distribution.
The phenotypes used to study resistance to E. maxima W primary infection in the backcross population were relative intestinal Eimeria genome copy number (PR, measured using quantitative PCR as parasite genomes per host chicken genome), intestinal LS (pathology, on a scale of 0-4), and serum IL-10 level (IL-10). Quantitative real-time PCR targeting the E. maxima microneme protein 1 (EmMIC1) and Gallus gallus β-actin (actb) loci was performed using total genomic DNA extracted from a 10 cm length of intestinal tissue centered on Meckel's diverticulum using a DNeasy Blood and Tissue kit (Qiagen, Hilden, Germany). Briefly, each complete tissue sample was disaggregated using a Qiagen TissueRuptor and an aliquot was processed for extraction of combined host and parasite DNA (see Blake et al., 2006, for full details). A CFX96 Touch Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, United States) was used to amplify each sample in triplicate (Nolan et al., 2015), with an additional bead-beater homogenization step prior to buffer ATL treatment (1 volume of 0.4-0.6 mm glass beads, 3,000 oscillations per minute for 1 min). Intestinal pathology was assessed by the same experienced operator scoring lesions according to Johnson and Reid (1970). A capture ELISA was used to measure IL-10, employing ROS-AV164 and biotinylated ROS-AV163 as capture and detection antibodies, respectively (see Wu et al., 2016, for full details). IL-10 levels and parasite genome numbers were log-transformed to approximate normal distributions.
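For illustration only, the conversion of raw qPCR outputs into the PR phenotype can be sketched as below, assuming per-assay standard curves of the usual form Cq = slope × log10(copies) + intercept for the EmMIC1 and actb targets. The slopes, intercepts, and Cq triplicates shown are placeholders and do not reproduce the calibration described in Blake et al. (2006).

```python
import numpy as np

def copies_from_cq(cq_triplicate, slope, intercept):
    """Standard-curve quantification: Cq = slope * log10(copies) + intercept."""
    return 10 ** ((np.mean(cq_triplicate) - intercept) / slope)

# Placeholder standard-curve parameters and triplicate Cq values
emmic1_copies = copies_from_cq([24.1, 24.3, 24.2], slope=-3.32, intercept=38.0)
actb_copies   = copies_from_cq([18.6, 18.5, 18.7], slope=-3.35, intercept=36.5)

parasite_per_host_genome = emmic1_copies / actb_copies
log_pr = np.log10(parasite_per_host_genome)   # log-transformed PR phenotype
```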
Phenotypic Correlations
Following log-transformation of PR and IL-10, all backcross phenotypic traits were rescaled to account for differences in units of measurement. Then, fitting host sex as a fixed effect in a multivariate linear model, phenotypic correlations (r_P) were estimated using ASReml 4.1 (Gilmour et al., 2015).
Genome-Wide Association Studies
Sixty-seven F2 birds exhibiting the most extreme phenotypes, plus the 12 intercross parental line birds and the entire backcross generation, were genotyped using the 600K Affymetrix Axiom HD genotyping array (Kranis et al., 2013). Although each data set was analyzed separately, the same GWAS steps were used for both populations. The marker genotype data were subjected to quality control using the following thresholds: minor allele frequency < 0.02 and call rate > 90%. Deviation from Hardy-Weinberg equilibrium was not considered a reason for excluding markers since these were experimental populations derived from inbred lines. After quality control, 203,845 intercross and 204,072 backcross markers remained and were used, respectively, to generate separate intercross and backcross genomic relationship matrices (GRMs) to investigate the presence of population stratification. Next, each GRM was converted to a distance matrix that was analyzed with classical multidimensional scaling using the GenABEL package of R (Aulchenko et al., 2007) to obtain principal components. These analyses revealed three principal components in the intercross population (one for each parental line and one for the F2 birds), but no substructure in the backcross. GWAS for each trait were then conducted using GenABEL based on a mixed model, with the population principal components fitted as covariates (intercross population only), sex fitted as a fixed effect in both studies, and the GRM fitted as a random polygenic effect to adjust for population sub-structure. In the GWAS for the heterologous secondary challenge response, the oocyst output following the first challenge was also fitted as a covariate to account for the effect of the first challenge. After Bonferroni correction for multiple testing, significance thresholds were P ≤ 2.45 × 10^-7 and P ≤ 4.90 × 10^-6 for genome-wide (P ≤ 0.05) and suggestive (namely, one false positive per genome scan) significance levels, corresponding to −log10(P) values of 6.61 and 5.30, respectively. The extent of linkage disequilibrium (LD) between significant markers located in the same chromosome regions was calculated using the r-square statistic of PLINK v1.09 (Purcell et al., 2007).
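The population-structure step can be summarized as follows. This is a loose sketch of the GRM and multidimensional scaling logic rather than the GenABEL implementation: a VanRaden-style GRM is built from a 0/1/2 genotype matrix, converted to a distance matrix, and decomposed by classical MDS (principal coordinates). All function and variable names are illustrative.

```python
import numpy as np

def grm(genotypes):
    """VanRaden-style genomic relationship matrix from an
    (individuals x markers) array of 0/1/2 allele counts."""
    p = genotypes.mean(axis=0) / 2.0                   # per-marker allele frequency
    Z = genotypes - 2.0 * p                            # centre each marker
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def classical_mds(G, n_components=3):
    """Classical MDS on the distance matrix derived from a GRM."""
    D2 = np.add.outer(np.diag(G), np.diag(G)) - 2.0 * G     # squared distances
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                     # double-centring matrix
    B = -0.5 * J @ D2 @ J
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:n_components]
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))

# usage: pcs = classical_mds(grm(geno_matrix)); inspect pcs[:, 0] vs pcs[:, 1]
```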
Effects of the significant markers identified in each GWAS were re-estimated in ASReml 4.1 (Gilmour et al., 2015) by individually fitting the markers as fixed effects in the same model as used for the GWAS analyses. Effects were calculated as follows: additive effect, a = (AA − BB)/2; dominance effect, d = AB − (AA + BB)/2, where AA, BB, and AB are the predicted trait values for each genotype class.
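These effect definitions are simple contrasts of the predicted genotype-class means; for example, with hypothetical values:

```python
# Predicted trait values for the three genotype classes (placeholder numbers)
AA, AB, BB = 1.84, 1.52, 1.10

a = (AA - BB) / 2.0          # additive effect
d = AB - (AA + BB) / 2.0     # dominance effect
```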
All significant markers identified in the GWAS for responses to primary infection and secondary E. maxima W challenge were mapped to the reference Gallus gallus domesticus genome and annotated using the Variant Effect Predictor tool (1) within the Ensembl (genome browser 92) database and the Gal-gal5 assembly (2). Furthermore, genes located within 100 kb up- and downstream of the significant markers were annotated using the BioMart data mining tool (3) and the Gal-gal5 assembly. This method of annotation enabled all genes located in the vicinity of the identified significant markers to be identified and cataloged.
Re-sequencing Data Analysis
To identify possible protein-coding genes associated with the detected QTL, genomic sequences in the regions of interest from the line 15I and C.B12 chickens were compared. The two parental chicken lines were entirely re-sequenced at 15-20 fold coverage, using pools of 10 individuals per line, on an Illumina GAIIx platform with a paired-end protocol (Krämer et al., 2014). Re-sequencing data for the candidate regions (i.e., 1 kb up- and downstream of the candidate gene end sites), for resistance to primary infection and to heterologous challenge derived from the intercross and backcross analyses, were then extracted and examined separately. Using the Mpileup tool for variant calling (SAMtools v0.1.7; Li et al., 2009), single nucleotide variants (SNVs) between the two parental lines and the reference genome in these regions were detected. These were then annotated using the same Variant Effect Predictor software as above. Information for all SNVs [intergenic, intronic, exonic, splicing, 3′ and 5′ untranslated regions (3′ UTR, 5′ UTR)] present in the regions of interest was collated. Intergenic, intronic, and exonic synonymous variants were then filtered out, along with SNVs that were common to the two parental lines but different from the reference genome. Thus, only sites that differed between the parental lines and had an effect on the coding sequence (nonsense, missense, splicing) or a potential effect on gene expression (3′ UTR and 5′ UTR) were retained for further study.

(1) http://www.ensembl.org/Tools/VEP (2) https://www.ncbi.nlm.nih.gov/assembly/GCF_000002315.4/ (3) http://www.ensembl.org/biomart/martview/
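The variant filtering described above can be expressed as a simple predicate over annotated sites. The sketch below uses hypothetical field names (the consequence terms follow the Ensembl VEP vocabulary) and is not the pipeline actually used.

```python
KEEP_CONSEQUENCES = {
    "stop_gained", "missense_variant",
    "splice_acceptor_variant", "splice_donor_variant",
    "3_prime_UTR_variant", "5_prime_UTR_variant",
}

def filter_candidate_snvs(variants):
    """variants: iterable of dicts with hypothetical keys
    'allele_15I', 'allele_CB12', and 'consequence'."""
    kept = []
    for v in variants:
        differs_between_lines = v["allele_15I"] != v["allele_CB12"]
        if differs_between_lines and v["consequence"] in KEEP_CONSEQUENCES:
            kept.append(v)
    return kept
```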
Pathway, Network, and Functional Enrichment Analyses
Identification of potential canonical pathways and networks underlying the candidate genomic regions associated with outcomes of primary infection and heterologous secondary E. maxima challenge were performed using the ingenuity pathway analysis (IPA) program 4 . IPA constructs multiple possible upstream regulators, pathways, and networks that serve as hypotheses for the biological mechanism underlying the phenotypes based on a large-scale causal network derived from the Ingenuity Knowledge Base. After correcting for a baseline threshold and calculating statistical significance, the most likely pathways involved are inferred (Krämer et al., 2014). The constructed networks can then be ranked using their IPA score based on the P-values obtained using Fisher's exact test [IPA score or P-score = −log 10 (P-value)].
The gene lists for each phenotype were also analyzed using the Database for Annotation, Visualization and Integrated Discovery (DAVID; Dennis et al., 2003). To understand the biological meaning behind these genes, gene ontology (GO) was determined, and functional annotation clustering analysis was performed using the integral G. gallus background. The enrichment score (ES) of DAVID is a modified Fisher exact P-value calculated by the software, with higher ES reflecting more enriched clusters. An ES > 1 means that the functional category is overrepresented.
Descriptive Statistics
Phenotypic distributions for oocyst counts following primary infection with E. maxima H and secondary challenge with E. maxima W in the intercross and parental populations, along with relative parasite DNA and IL-10 levels in the backcross population after primary infection with E. maxima W, are presented in Figures 1A-C. After primary infection the pure line C.B12 birds produced lower E. maxima oocyst counts than the pure line 15I and F2 birds, with the highest oocyst output recorded in the pure line 15I group. Conversely, inverse findings regarding oocyst output were recorded in the two parental lines following secondary challenge with the heterologous strain. These results agree with previous findings showing that line C.B12 birds develop no cross-protection between primary H and secondary W strain challenges, while line 15I birds develop significant cross-protection when infected in this order (Smith et al., 2002; Blake et al., 2005). As expected, for both primary and [...] Among the backcross chickens, following infection with E. maxima W, phenotypic scores for intestinal lesions were low (0-2); however, significant variance (P = 0.05) was noted (Table 1). Estimated phenotypic correlations between the three measured traits ranged from 0.08 to 0.15, with only the correlation between LS and IL-10 being statistically significant (r_LS,IL-10 = 0.15 ± 0.07; Figure 1D and Table 1).
Intercross Study
Genome-wide association study analysis of oocyst output following primary infection of the intercross population with E. maxima H did not reveal significant associations after the strict Bonferroni correction. However, an association with markers on chromosome 2, just below the suggestive threshold, was observed (results not shown). GWAS analysis following secondary challenge with the heterologous E. maxima W strain identified 11 markers on chromosome 1, all having suggestive associations with the trait in the intercross population. These 11 markers belonged to the same LD block (499 bp, r² = 1; Figure 2 and Table 2). The corresponding Q-Q plot for the intercross GWAS result is shown in Figure 2.
The 11 significant markers associated with the outcome of secondary challenge by the heterologous E. maxima strain were all located in intronic, upstream, and downstream regions of the phenylalanine hydroxylase (PAH) gene (Supplementary Table S1). In the 0.5 Mb candidate region for enhanced response to heterologous secondary E. maxima challenge only 16 protein coding genes were located (Supplementary Table S2).
Backcross Study
Genome-wide association study results for resistance to E. maxima W primary infection in the backcross population revealed several significant genomic associations for each of the measured phenotypes. However, there was no overlap of the candidate genomic regions linked to parasite reproduction, intestinal pathology, or IL-10 induction (Figure 3 and Table 3). Specifically, a single marker on chromosome 3 had a suggestive association with PR (Figure 3A and Table 3). Four suggestive marker associations were identified with markers on chromosomes 1, 2, and 3 for intestinal pathology (i.e., lesion damage; Figure 3B and Table 3). A further four associations were found for IL-10 on chromosomes 1, 2, and 5 (Figure 3C and Table 3). None of the markers found on chromosome 2 for LS and IL-10 were in common, nor were they in LD. However, the candidate QTL region for IL-10 on chromosome 2 was in proximity to an intercross marker, identified following primary infection with E. maxima H in the intercross population, that fell below the suggestive threshold. The corresponding Q-Q plots for the GWAS are displayed in Figure 4. All significant markers identified in both studies exhibited significant (P < 0.01) additive genetic effects (Table 3).

FIGURE 2 | (A) Manhattan and (B) corresponding Q-Q plot for the GWAS of oocyst output measured in the intercross chickens following heterologous secondary challenge. The -log10 P-value (on the y axis) indicating genome-wide significance is represented by the red line, while the blue line represents suggestive significance. The positions of the markers analyzed on the 28 main chicken autosomes (1-28), the sex chromosomes Z and W (29 and 30, respectively), and the microchromosomes (31) are represented on the x axis. In (B), the expected chi-squared (χ²) values are plotted on the x axis, whereas the observed χ² values are presented on the y axis, with the red line indicating the anticipated slope.
All of the significant markers identified for resistance to primary E. maxima W infection in the backcross population were located in intronic or intergenic regions (Supplementary Table S3). The candidate regions for response to primary E. maxima W infection contain a small number of genes: 36 protein-coding genes and four microRNAs (Supplementary Table S4).
Resequencing Analysis
In total, 3,230 variants were identified in the candidate regions associated with resistance to primary E. maxima infection. SNVs located in exonic regions accounted for less than 3% of the total, while the remaining SNVs (97%) were located in intronic, upstream, and downstream regions. Genes with SNVs that could potentially lead to non-functional transcripts were not detected. However, six genes contained missense SNVs that may affect the function of the encoded proteins. More specifically, the LONRF2, CHST10, PDCL3, and TBC1D8 genes on chromosome 1, FAM69C on chromosome 2, and IPCEF1 on chromosome 3 carried missense SNVs with moderate predicted effects. These genes also contained 3′/5′ UTR variants that may affect their expression. Details of the missense variants identified in the candidate regions for resistance to primary E. maxima infection are presented in Supplementary Table S5.
In total, 2,165 SNVs were detected in the candidate region on chromosome 1 for the response to heterologous secondary E. maxima W challenge. Most of the identified SNVs (95%) were located in intronic, upstream, and downstream regions; 5% were located in exonic regions, mostly in the 3′ and 5′ UTRs.

(Table legend fragment) Measured traits: parasite replication per host genome (PR), lesion score (LS), and serum interleukin-10 (IL-10). Details provided: Affymetrix marker identifier; chromosome and position of markers in the Gal-gal5 assembly (Chr:Mb); the additive genetic effect (G_A) and significance values (P-value).
Nevertheless, three genes (PMCH, TBXAS1, THL3) containing missense variants with moderate predicted effects, as well as 3′/5′ UTR variants, were detected. Details of the missense variants identified in the candidate region for the heterologous secondary E. maxima W challenge are presented in Supplementary Table S6.
Pathway, Network, and Functional Enrichment Analyses
The analyses for resistance to primary E. maxima infection revealed pathway enrichment for immune response involvement, including IL-10, interleukin-6 (IL-6), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and Toll-like receptor signaling (Figure 5). Using the list of candidate region genes, two networks were constructed, comprising molecular interactions related to inflammatory response and disease, cell death and survival, cellular compromise, and cell cycle (IPA scores = 25; Figures 6A,B). A single enriched cluster was found, related to immune response linked to interleukin-1 (IL-1), Toll/IL-1 response and cytokine-cytokine receptor response (ES = 2.2, with the IL1R1, IL1RL1, IL2R, IL19R18, PTPRM, and COL14A genes involved). The pathway analyses for the response to heterologous secondary challenge with the E. maxima W strain revealed enrichment for both immune (prostanoid biosynthesis, retinoic acid mediated apoptosis signaling, eicosanoid signaling) and metabolic pathways (Figure 7). Two gene networks were constructed, related to cell signaling, nucleic acid metabolism and small molecule biochemistry (IPA score = 20), and to cellular development, tissue development and function (IPA score = 45), respectively (Figures 8A,B). Accompanying functional annotation clustering analysis revealed the presence of two enriched clusters related to cell-to-cell signaling (ES = 1.7) and metal-ion binding (ES = 1.3).
DISCUSSION
Coccidiosis remains one of the costliest diseases for the international poultry industry. Selectively breeding chickens for enhanced resistance to Eimeria challenge, and for improved breadth of vaccine response, could provide a tractable strategy to improve coccidiosis control. We conducted two studies using different crosses between the White Leghorn inbred lines 15I and C.B12. Our data confirm that line 15I birds are more susceptible to primary infection with E. maxima than line C.B12 by overall PR (Smith et al., 2002;Blake et al., 2006). While the two inbred lines exhibit similar resistance/susceptibility profiles following primary infection with either of the two antigenically distinct E. maxima strains, they show radically different levels of protection against heterologous secondary challenge by antigenically distinct strains of the same pathogen (Smith et al., 2002). We therefore investigated the genetic background of resistance to primary and heterologous secondary E. maxima W challenges.
The resistance of chickens to Eimeria infection has traditionally been quantified using measures such as oocyst output and LS, indicating resistance to PR and to parasite-induced pathology, respectively. For the former, the fewer oocysts excreted, the more resistant the chicken. Thus, oocyst shedding is considered an indicative trait and an accurate phenotype for assessing resistance to primary infection and to subsequent parasite challenges, and this method was used in the intercross experiment. However, calculation of oocyst output by fecal flotation and microscopy is labor intensive. Thus, quantitative real-time PCR for parasite genome copies in intestinal tissues was used as an alternative measure of PR in the more recent backcross experiment (Blake et al., 2006). A third trait, serum IL-10, was also quantified for these latter chickens, providing a measure of the innate immune response to Eimeria infection (Rothwell et al., 2004; Boulton et al., 2018). IL-10 is produced after E. maxima and E. tenella primary infection of White Leghorn chickens (lines 15I and C.B12) and after E. tenella primary infection of commercial broilers (Rothwell et al., 2004; Wu et al., 2016; Boulton et al., 2018). In all these cases, IL-10 was expressed at high levels in infected birds only, and was significantly correlated with pathology (lesion scores). Here, GWAS from the backcross experiment identified markers associated with IL-10 that exhibit significant additive genetic variance. These findings, in conjunction with indications that IL-10 is significantly correlated with gross pathology in a commercial population following primary infection with E. tenella (Boulton et al., 2018), support the use of IL-10 as an accessible early-life biomarker in breeding programs aiming to enhance resistance to Eimeria challenge or to reduce pathology.
Although the significance of E. maxima in field coccidiosis has been recognized for many years, there have been only a limited number of genetic studies investigating host resistance to E. maxima primary infection and challenge. A recent study that investigated the genetic background of resistance to high-level E. maxima infection using the same HD genotyping array, but measuring three different phenotypes (body weight gain, plasma coloration, and β2-globulin in blood plasma), identified several QTL on chromosomes 1, 2, 3, 5, and 10 in commercial Cobb500 broilers (Hamzic et al., 2015). Similar to our findings, Hamzic et al. (2015) found no QTL overlap among their different phenotypes. Interestingly, the QTL identified by Hamzic et al. (2015) on chromosome 1 for β2-globulin in blood plasma lies near (within approximately 2 Mb of) the QTL found in our study linked to resistance to heterologous secondary E. maxima W challenge. Similar enriched biological pathways related to innate immune responses and metabolic processes were also detected in the two studies with this parasite species.
In other comparable work, Zhu et al. (2003) performed a linkage analysis study investigating chicken resistance in terms of oocyst output following controlled E. maxima infection using an F2-intercross between two broiler lines with different susceptibility to primary E. maxima infection. Using 119 microsatellite markers one locus associated with E. maxima resistance was identified on chromosome 1 (Zhu et al., 2003). Expanding this work, Kim et al. (2006) used nine microsatellite markers located on chromosome 1 to refine this region.
According to their results, the peak of QTL was located a considerable genetic distance (i.e., 254 cM) away from the chromosome 1 QTL identified here and in the Hamzic et al. (2015) study. This could be attributed to the use of different chicken lines, E. maxima strains, analysis methods, and/or genotyping tools. It is worth mentioning that the power to detect QTL as well as the resolution of their location using a few microsatellites is limited compared to HD genotyping platforms.
Comparison of the re-sequencing data of the two parental chicken lines identified a small number of genes that differ regarding the presence of exonic variants with a putative functional effect on the encoded proteins. Two genes of interest with missense variants located in the candidate regions for resistance to E. maxima primary infection encode Phosducin Like 3 (PDCL3) and TBC1 Domain Family Member 8 (TBC1D8) proteins. These immune-related genes were included in the two networks related to inflammatory response, and cell death and survival, constructed by IPA. PDCL3 acts as a chaperone for the angiogenic vascular endothelial growth factor receptor, controlling its abundance and inhibiting its ubiquitination and degradation, and also modulating activation of caspases during apoptosis (Wilkinson et al., 2004;Srinivasan et al., 2013). TBC1D8 is involved in the regulation of cell proliferation, calcium ion transportation, and also has GTPase activator activity (Ishibashi et al., 2009).
The genes encoding Thromboxane A Synthase 1 (TBXAS1) and Pro-Melanin Concentrating Hormone (PMCH) are located in the candidate region and are of interest in resistance to secondary challenge by heterologous E. maxima W. TBXAS1 encodes a member of the cytochrome P450 superfamily of enzymes involved in both immune response and metabolism; it plays a role in drug metabolism, platelet activation and metabolism, and synthesis of cholesterol, steroids, and other lipids (Yokoyama et al., 1991;Miyata et al., 1994). The proinflammatory actions of thromboxane receptors have been demonstrated to enhance cellular immune responses in a mouse model (Thomas et al., 2003). PMCH encodes a preproprotein that is proteolytically processed to generate multiple protein products, including melanin-concentrating hormone (MCH) that stimulates hunger and may additionally regulate energy homeostasis, reproductive function, and sleep (Viale et al., 1997;Chagnon et al., 2007). In a further mouse model, MCH has also been reported as a mediator of intestinal inflammation (Kokkotou et al., 2008). Although, the genes mentioned above are good functional candidates for resistance to primary infection and heterologous challenge with E. maxima, further studies are needed to confirm the present results and identify the actual causative genes and mutations.
The immune interactions between an intracellular pathogen and a host are complex and vary as a consequence of the survival mechanisms that have evolved in both (Blake et al., 2011;Blake and Tomley, 2014). It has been suggested that host control of challenge with Eimeria, an obligate intracellular pathogen, requires a strong inflammatory, mostly cell-mediated response (Dalloul and Lillehoj, 2006). Also, host innate immune responses have been detected during initial pathogen exposure in several studies (Kim et al., 2008;Pinard-van der Laan et al., 2009;Wu et al., 2016;Boulton et al., 2018). According to our findings, several gene networks and pathways relating to innate, humoral, and cell-mediated immune responses were highlighted from the gene products located in the candidate regions for resistance to primary Eimeria infection. Among the canonical pathways, IL-10 signaling was the most significant, with relevance as a regulator of cytokines such as interferon-γ (IFN-γ). These findings agree with previous studies of Eimeria resistance that have highlighted IFN-γ and tumor necrosis factor (TNF) nodes as crucial (Pinard-Van Der Laan et al., 1998;Smith and Hayday, 2000a,b;Bacciu et al., 2014), since IL-10 downregulates IFN-γ production (Schaefer et al., 2009).
CONCLUSION
We identified genomic regions, putative candidate genes, canonical pathways and networks involved in the underlying molecular mechanisms of chicken resistance to E. maxima primary infection and to secondary heterologous E. maxima strain challenge. More emphasis should be placed on the relevant mechanisms for disease resistance, the response to secondary heterologous strain challenge and the role of IL-10 induction in immune responses to intestinal challenge in the future selective breeding of chickens.
AVAILABILITY OF SUPPORTING DATA
The resequencing data used in this study is available in NCBI dbSNP at the following web page: http://www.ncbi.nlm.nih.gov/ SNP/snp_viewBatch.cgi?sbid=1062063.
AUTHOR CONTRIBUTIONS
AS, PK, SB, FT, and DB devised the overall strategy and obtained funding. PK, SB, FT, and DB conceived the backcross experiments. PMH and KB devised the backcross breeding. MN managed the backcross trials and performed qPCR and DNA extraction assisted by KH and KB. Backcross phenotype collection was carried out by MN, KH and KB, while DB scored lesions. ZW performed IL-10 assays assisted by KB. KB prepared backcross DNA for genotyping and carried out all backcross analyses with input from AP, VR, and OM. AS designed the intercross trials with input from NB and these were carried out by PH and AA. AP performed an initial analysis of the intercross data with input from OM and KB. Pathway and resequencing analyses were performed by AP and KB. The manuscript was drafted by KB and AP with input from all other authors except PMH, SB, NB, and PK. AS, DB, FT, DH, and AP assisted in the interpretation of results.
FUNDING
The backcross work was funded by the BBSRC through the Animal Research Club (ARC) program under grants BB/L004046 and BB/L004003, while DEFRA OD0534 and BBSRC BB/E01089X/1 funded the intercross study. | 2018-11-26T14:03:10.467Z | 2018-11-26T00:00:00.000 | {
"year": 2018,
"sha1": "70ac21dd265b2773c8f66291829c80a74857476e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2018.00528/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70ac21dd265b2773c8f66291829c80a74857476e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
239514232 | pes2o/s2orc | v3-fos-license | Elevated C-Reactive Protein (CRP) during First-Trimester For Gestational Diabetes Screening
Background: Studies investigating the impact of inflammatory factors on gestational diabetes mellitus (GDM) are extremely sparse. This study aimed to find out the association of inflammation, as defined by C-reactive protein (CRP), with gestational diabetes. Methods: This prospective cohort study, conducted in Patel Hospital Karachi from September 2020 to February 2021, enrolled 172 healthy gravid women at ≤ 10 weeks of pregnancy. A structured proforma was used to record age, education, past medical and obstetric history (parity, number of miscarriages, mode of deliveries, gestational age), blood pressure, height, and weight of all patients. Serum CRP concentration was measured, and levels >1 mg/dl were considered high. An oral glucose tolerance test was performed at 24-28 weeks to diagnose GDM. Data were analysed using SPSS, and the Chi-squared test was applied to compare the groups with and without diabetes; a p-value ≤ 0.05 was considered significant. Results: In total, 93 patients had raised CRP, of whom 8 (4.6%) developed GDM (p<0.00001). Advanced age (p=0.00042) and weight (p<0.00001) were found to be independent risk factors. It was observed that CRP levels rise with increasing weight (p=0.0095); however, parity and blood pressure had no effect on GDM development. Women who developed diabetes had a higher BMI (p<0.00001), showing that increasing weight was an independent risk factor. The sensitivity and specificity of CRP in detecting GDM were 100% and 48.17%, respectively. Conclusion: Raised C-reactive protein levels in the first trimester can lead to subsequent development of hyperglycaemia of pregnancy and can thus be considered an easy and simple screening test for GDM.
INTRODUCTION
Gestational Diabetes Mellitus (GDM) is defined as hyperglycaemia that is first recognized in the second or third trimester of pregnancy in women with previously normal sugar levels and can lead to numerous adverse outcomes for mother and baby 1,2 . GDM prevalence varies across the world, affecting approximately 7-10% of pregnancies worldwide [3][4][5] . The earlier the disease is detected in pregnancy, the better the outcome. Women with a history of hyperglycaemia in pregnancy have an increased chance of developing adult-onset diabetes later in life 2 .
Diabetes of pregnancy is more prevalent in women who are above 30 years of age, are overweight or obese, have a family history of diabetes, have a previous history of stillbirth or an anomalous baby, or had gestational diabetes in a previous pregnancy 2 . GDM screening tests are done after assessing risk factors. There is increased insulin resistance in pregnancy, making it a diabetogenic condition 2 . Insulin resistance is linked with inflammatory conditions and the development of type 2 diabetes. Several studies have shown that increased inflammation, as measured by CRP, is an independent risk factor for the development of diabetes 6 .
Although pregnancy itself is an anti-inflammatory condition, there is an increased level of inflammation during the early stages of pregnancy, such as during implantation, leading to increasing levels of various inflammatory mediators 7 . High maternal CRP is associated with miscarriage, premature labour and rupture of membranes, toxemia of pregnancy, fetal growth restriction and chorioamnionitis 8,9 . A possible mechanism by which inflammation leads to diabetes could be that increased blood sugar and glycosylated haemoglobin (HbA1c) levels cause the release of CRP 10 . In addition, obesity, which is a risk factor for GDM, causes an increased release of pro-inflammatory cytokines from adipocytes 6 . Currently, it is hypothesized that systemic inflammation may be involved in the establishment of GDM 11 . Early diagnosis and treatment are crucial to prevent adverse maternal and neonatal outcomes. Therefore, this study was performed to define the role of CRP in the screening of gestational diabetes.
METHODS
It was a prospective cohort study conducted from September 2020 until February 2021. A total of 172 pregnant patients attending antenatal checkups in the Department of Obstetrics and Gynaecology of Patel Hospital, Karachi were selected. All pregnant women aged 18-35 years with a singleton pregnancy of less than 10 weeks of gestation, based on their last menstrual period, were included in the study after written informed consent. All patients were assured of the confidentiality of their identity and were free to withdraw from the study at any point. Ethical approval was obtained from the institutional ethics committee (Ref no.102/2020).
A structured proforma was used to record age, education, past medical and obstetric history (parity, number of miscarriages, mode of deliveries, gestational age), blood pressure, height, and weight of all patients at booking. Patients with previous history of hyperglycaemia in pregnancy, pre-existing or family history of diabetes, random blood sugar (RBS) of more than 140mg/dl, history of macrosomia, stillbirth, recurrent miscarriage, hypertension and polycystic ovarian syndrome were excluded from the study. Patients with chronic hypertension, thyroid disorder, chronic kidney disease, cardiovascular disease, autoimmune and chronic inflammatory disease, current active infection, antibiotic use within two weeks before sampling, seasonal allergy and those taking corticosteroids or non-steroidal anti-inflammatory drugs were also excluded.
A blood sample was taken for serum CRP, in addition to the standard antenatal tests, from all patients in the first trimester. Participants were followed up until 24-28 weeks of pregnancy, and a 75 g oral glucose tolerance test (OGTT) was carried out at that time. The CRP level was measured by a latex agglutination semi-quantitative test kit. CRP >1 mg/dl was considered raised. Gestational diabetes was diagnosed by the Hyperglycaemia and Adverse Pregnancy Outcome (HAPO) criteria (fasting blood glucose ≥92 mg/dl, 1-hour ≥180 mg/dl, 2-hour ≥153 mg/dl) 12 .
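As a concrete illustration of this diagnostic rule, the sketch below classifies a 75 g OGTT result against the HAPO thresholds quoted above; the function name and input format are illustrative and not part of the study protocol.

```python
# Minimal sketch of the HAPO-based GDM classification described in the text.
# Thresholds (mg/dl): fasting >= 92, 1-hour >= 180, 2-hour >= 153;
# any single exceedance flags GDM.

HAPO_THRESHOLDS = {"fasting": 92, "1h": 180, "2h": 153}

def classify_gdm(ogtt: dict) -> bool:
    """Return True if any OGTT value meets or exceeds its HAPO threshold."""
    return any(ogtt[key] >= limit for key, limit in HAPO_THRESHOLDS.items())

# Example with hypothetical patient values (mg/dl):
print(classify_gdm({"fasting": 88, "1h": 185, "2h": 140}))  # True (1-hour value)
```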
Data were analyzed using SPSS version 22. Means, standard deviations, and frequency tables were used for data presentation. The Chi-square test was used to compare the difference between groups with and without diabetes. A 2 × 2 contingency table was used for calculating sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. The level of significance was kept at a p-value ≤ 0.05.
RESULTS
The average age of patients was 26.5±5.33 years and mean gestation of pregnancy was 7.5±1.87 weeks. The majority of participants were multipara (74.41%), educated (72.6%) and working women (76.1%). The average body mass index was 24.03±3.83 kg/m 2 . The mean random blood sugar was 99.5±22.7mg/dl. The association of CRP and GDM with baseline characteristics is shown in Table 1.
Out of 172 patients, 93 had raised CRP, of whom 8 (8.6%) developed GDM (p<0.00001), showing a significant association between high serum C-reactive protein and gestational diabetes, as shown in Figure 1. It is shown in Table 2 that with increasing levels of CRP the risk of GDM also increases. It was found that CRP levels rise with increasing weight (p=0.0095), indicating that the CRP level is independently related to BMI, as can be seen in Figure 2. The risk of GDM increased with advanced maternal age (p=0.00042). However, parity and blood pressure had no effect on GDM development. It was also found in our study that women who had diabetes had a higher body mass index (BMI) (p<0.00001; Table 1). Thus, increasing weight was an independent risk factor for GDM, as shown in Figure 2.
The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of CRP in detecting GDM were 100%, 48.17%, 8.6%, 100% and 50.58%, respectively, as shown in Table 3.
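These screening metrics can be reproduced from the 2 × 2 contingency table implied by the reported results (93 women with raised CRP, of whom 8 developed GDM; a sensitivity of 100% implies no GDM cases among the 79 women with normal CRP). The sketch below is only an arithmetic consistency check, not part of the original analysis.

```python
# 2x2 contingency table implied by the reported counts of women.
tp, fp = 8, 85   # raised CRP: 8 developed GDM, 85 did not (93 total)
fn, tn = 0, 79   # normal CRP: 100% sensitivity implies 0 missed GDM cases

sensitivity = tp / (tp + fn)                 # 1.000  -> 100%
specificity = tn / (tn + fp)                 # 0.4817 -> 48.17%
ppv = tp / (tp + fp)                         # 0.086  -> 8.6%
npv = tn / (tn + fn)                         # 1.000  -> 100%
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 0.5058 -> 50.58%

print(f"Sensitivity {sensitivity:.2%}, Specificity {specificity:.2%}, "
      f"PPV {ppv:.2%}, NPV {npv:.2%}, Accuracy {accuracy:.2%}")
```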
X-axis: number of patients; Y-axis: body mass index (BMI, kg/m²).
PPV: Positive Predictive Value, NPV: Negative Predictive Value, CI: Confidence Interval.

DISCUSSION

Our results showed a significant association between raised first-trimester CRP and the subsequent development of GDM, showing that high C-reactive protein can result in GDM. This is consistent with previous research studies 13,14 . Increased levels of CRP in the blood can lead to hyperglycaemia by influencing insulin resistance 15 . It has been shown in other studies that elevated CRP levels in blood have a notable relation with subsequent development of hyperglycaemia of pregnancy 16 . Patients with gestational diabetes tend to have higher CRP levels as compared to normal pregnant women 16 . Hence, the quantitative C-reactive protein level is an acceptable test in predicting diabetes in pregnancy 17 . In contrast to the above, the studies by Corcoran and Korkmazer showed no association between CRP and GDM 18,19 . Gestational diabetes resolves spontaneously after delivery but is linked to complications in both the mother and the newborn 21 . It has been proven that intrauterine exposure to maternal GDM can lead to glucose intolerance, obesity and type 2 diabetes in offspring 22 . Recently, there has been increased interest in disclosing the role of inflammation in GDM development. In a previous study, increased levels of CRP and IL-6 were observed in pregnant women with glucose intolerance and GDM, suggesting that these may have a role in the pathophysiology of glucose intolerance and can serve as potential serum markers for the early screening of glucose intolerance 23 . Furthermore, the inflammatory response is elevated with increasing age and BMI and has a significant association with GDM 24 . Studies have suggested that there is a strong correlation between body fat mass and serum CRP levels 24 . Obesity has been identified as a predictor of elevated CRP, which is a risk factor for cardiovascular and coronary heart disease 25 . Furthermore, C-reactive protein was highly correlated with body mass index in some previous research studies 6 . In the current study it was observed that patients with high BMI tended to have increased CRP levels (p=0.0095) and there was a remarkable correlation between GDM and obesity (p<0.00001).
Figure 2: Association of body mass index (BMI) with C-reactive protein (CRP) levels and gestational diabetes (GDM).
It is known that inflammation can cause diabetes in pregnancy by causing insulin resistance 6 . The increase in blood sugar levels accelerates the synthesis of glycosylated haemoglobin and the expression of macrophages, thus causing the release of inflammatory markers like CRP 10 . CRP is a commonly available laboratory test and, if combined with maternal risk factors assessed from history and demographic features, can be predictive of GDM 11 . Thus, it helps identify patients at higher risk of developing hyperglycaemia of pregnancy as early as the first trimester.
Recognizing these patients earlier can help in better maternal and neonatal outcomes by intervening through treatment and lifestyle modification.
However, a single measurement of serum CRP level does not provide a measure of maternal inflammation status. Besides, C reactive protein levels rise during normal pregnancy 15 . Further comprehensive studies are needed to assess diagnostic accuracy of serum CRP in GDM screening. In addition, inflammatory cytokine levels fluctuate throughout pregnancy. Therefore, the levels should be monitored multiple times for finding a specific correlation between inflammation and GDM or glucose intolerance.
CONCLUSION
Women with hyperglycaemia of pregnancy have increased inflammatory reactions during the first trimester. C-reactive protein measurement could be a useful first-trimester screening test for the prediction of GDM, offering a fast and reliable screening approach. However, further research studies are needed to strengthen these findings. CRP is easy to measure and can also predict the risk of other pregnancy complications, such as preeclampsia, preterm labour, and intrauterine growth restriction, allowing better surveillance during gestation.
ACKNOWLEDGEMENTS
Special thanks to Dr. Tashmina Taha and Dr. Durriya Rehmani for their help in sample collection.
CONFLICT OF INTEREST
There is no conflict of interest.
ETHICS APPROVAL
The ethical board of Patel Hospital (Ref no.102/2020) approved the following study.
PATIENTS CONSENT
Verbal and written informed consents were obtained from all patients.
AUTHORS' CONTRIBUTION
SY was involved in randomization of patients, data collection and analysis, literature review, discussion, results and reference writing and authored the manuscript. | 2021-10-24T15:13:47.360Z | 2021-10-05T00:00:00.000 | {
"year": 2021,
"sha1": "c2d4cb79275210f8589db9293eeed88e737b1fd6",
"oa_license": "CCBY",
"oa_url": "https://pjmd.zu.edu.pk/wp-content/uploads/2021/10/PMJD-10.4-October%E2%80%8B-December-2021-ORIGINAL-ARTICLE-Elevated-C-Reactive.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4de2d26c4fc12941f564620f9e53de92edc4891a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
204832822 | pes2o/s2orc | v3-fos-license | Effects of temperature and salinity on respiratory losses and the ratio of photosynthesis to respiration in representative Antarctic phytoplankton species
The Southern Ocean (SO) is a net sink for atmospheric CO2 whereby the photosynthetic activity of phytoplankton and sequestration of organic carbon (biological pump) plays an important role. Global climate change will tremendously influence the dynamics of environmental conditions for the phytoplankton community, and the phytoplankton will have to acclimate to a combination of changes of e.g. water temperature, salinity, pH, and nutrient supply. The efficiency of the biological pump is not only determined by the photosynthetic activity but also by the extent of respiratory carbon losses of phytoplankton cells. Thus, the present study investigated the effect of different temperature and salinity combinations on the ratio of gross photosynthesis to respiration (rGP/R) in two representative phytoplankton species of the SO. In the comparison of phytoplankton grown at 1 and 4°C the rGP/R decreased from 11.5 to 7.7 in Chaetoceros sp., from 9.1 to 3.2 in Phaeocystis antarctica strain 109, and from 12.4 to 7.0 in P. antarctica strain 764, respectively. The decrease of rGP/R was primarily dependent on temperature whereas salinity was only of minor importance. Moreover, the different rGP/R at 1 and 4°C were caused by changes of temperature-dependent respiration rates but were independent of changes of photosynthetic rates. For further interpretation, net primary production (NPP) was calculated for different seasonal conditions in the SO with specific combinations of irradiance, temperature, and salinity. Whereas, maximum photosynthetic rates significantly correlated with calculated NPP under experimental ‘Spring’, ‘Summer’, and ‘Autumn’ conditions, there was no correlation between rGP/R and the respective values of NPP. The study revealed species-specific differences in the acclimation to temperature and salinity changes that could be linked to their different original habitats.
Introduction
The Southern Ocean (SO) plays a pivotal role for Earth's climate by controlling the amount of dissolved inorganic carbon stored in the ocean. The SO is considered as a net sink for atmospheric CO 2 due to the cooling of southward directed subtropical surface waters which increases the solubility of CO 2 . This mechanism represents the so-called solubility pump whereby the majority of dissolved CO 2 is sequestered in the deep ocean [1]. However, about 10% of the total amount of CO 2 sequestration is assigned to the biological pump [2]. Accordingly, in the euphotic zone, phytoplankton cells photosynthetically assimilate inorganic carbon, which is then transferred as organic carbon to the deep ocean by sedimentation. The efficiency of this carbon transfer depends on the physical characteristics of the SO such as water temperature, extent of sea ice cover, wind speed, stratification, changes in nutrient dynamics, pH, light conditions, and salinity of surface waters ([3], reviewed in [4]). All these parameters will be altered by climate change and, as a consequence, will also influence the physiology (e.g. photosynthesis and respiration activity) and ecology (e.g. species composition) of phytoplankton in the SO [5,6]. The physiological response of phytoplankton cells to the expected changes in the SO's physics is poorly understood [7,8] and represents a considerable gap of knowledge. For instance, it is not known how the balance between photosynthetic carbon assimilation and respiratory carbon losses depends on environmental conditions and seasonal changes. Although the data base to estimate the proportion of phytoplankton respiration to total microbial respiration is scarce, from the observed correlation of community respiration rates and chlorophyll concentrations it could be assumed that phytoplankton respiration contributes to a large part of community respiration at least in coastal waters of the SO [9] (and ref. therein). Since phytoplankton in the SO experience extreme variations in daily solar irradiance during the growth period due to changing day lengths (ranging from very few hours in winter to 20 hours in summer [10]) and sea ice cover extent, it could be assumed that respiratory losses have a stronger impact on the net primary production (NPP; equal to the difference between gross photosynthesis rate, GP, and respiration rate) in Antarctic waters than in temperate waters. Particularly, under short daylength or under deep-mixing conditions phytoplankton respiration strongly influences the algal biomass balance [11]. Unfortunately, there is a strong methodological limitation for the determination of phytoplankton respiration rates in natural habitats (e.g. distinction from heterotrophic respiration, limited comparability between alternative methods like O 2 and 14 C [12]) and only scarce information about respiration rates and the variability of the ratio gross photosynthesis to respiration (rGP/R) in phytoplankton of the SO. Nevertheless, studies have shown that rGP/R is indirectly correlated with temperature in phytoplankton from the SO [11,13]. Importantly, changes in rGP/R are primarily due to the fact that respiration rates are more temperature-dependent than photosynthetic rates [11]. It is known that respiratory losses in the euphotic zone of coastal waters range from 7 to 34% of GP and could reach even 50% of GP under bloom conditions with high chlorophyll concentrations [9,14]. 
Thus, the knowledge on the variability of respiratory losses and rGP/R is important for the evaluation of the carbon budget of the SO and particularly for the prediction of future development in the light of climate change. However, to our knowledge species-specific respiratory losses and the variability of rGP/R in phytoplankton from the SO were not investigated systematically under a combination of relevant different environmental factors (e.g. temperature and salinity).
In the present study, we investigated the physiological response of two Antarctic phytoplankton species in an experimental setup with combined changes of temperature and salinity. Accordingly, the diatom Chaetoceros sp. and two strains of the Haptophyte Phaeocystis antarctica were chosen as typical representatives of SO phytoplankton. Whereas Chaetoceros sp.
shows a high abundance in sea ice and represents a dominating diatom species [15,16], P. antarctica is a typical pelagic phytoplankton species living in deeply mixed water [17,18]. Diatoms and Haptophytes differ in their light acclimation potential and show different physiological plasticity ([6] and ref. therein). For P. antarctica, two strains were investigated from very different geographic origins, namely from the Lazarev Sea and from Prydz Bay.
With respect to seasonal dynamics and the expected future changes in SO's physics, the algae were cultivated under nine different combinations of salinity and temperature conditions. With this combination of phytoplankton species and experimental conditions it was intended to investigate the influence of temperature and salinity on rGP/R in general but also in the light of possible species-specific differences. In addition, the impact of changes in rGP/R on NPP under different seasonal conditions was evaluated.
Culture conditions
Cultures of the Antarctic diatom Chaetoceros sp. and two strains of the Haptophyte Phaeocystis antarctica were obtained from Dr. Steffi Gäbler-Schwarz (AWI Bremerhaven, Germany). The Phaeocystis strains were sampled and isolated on RV Polarstern cruises and at an Antarctic research station between 2005 and 2007 [19] whereby strains 109_27 and 764_48 were isolated from the Lazarev Sea (ANT XXIII-2) and from Prydz Bay (ANT XXIII-9), respectively. All cultures were grown in GP5 Medium [20] modified in this study with respect to the use of specific amounts of marine salt (Dupla Marin, Dohse Aquaristic, Koblenz, Germany) instead of seawater to yield the desired salinity of the medium (details below). The cultures were maintained in polystyrene culture flasks with filter screw caps (Carl Roth) in a climate chamber (Economic Lux Chamber, Snijders Labs) under low-light conditions (10 μmol photons m -2 s -1 ; 16:8 hours light-dark cycle). The cultures were used for experiments in their exponential growth period between 6 and 10 days post inoculation. The number of replicates (n) given in the results section is equivalent to the number of biological replicates (a detailed list of the number of replicates is presented in S1 Table). Since the measurements of oxygen evolution rates were characterized by a relatively low signal-to-noise ratio the number of biological replicates for this type of measurements was expanded up to n = 11 to enhance the statistical significance.
Three different temperature treatments were applied, namely -1˚C, 1˚C, and 4˚C (± 0.5˚C), in combination with different salinities of the growth medium: 20, 35, 50, and 70 practical salinity units (PSU; S1 Table). More precisely, growth temperature of -1˚C was combined with salinities of 35, 50, and 70 PSU whereas growth temperatures of 1˚C and 4˚C were combined with salinities of 20, 35, and 50 PSU, respectively. The combinations of 20 PSU at -1˚C and 70 PSU at 1 or 4˚C were omitted since they are practically impossible. A salinity well below 35 PSU can be found only in regions with melting sea ice (T > 0˚C) whereas salinities as high as 70 PSU can be reached only in the brine channels of sea ice (T < 0˚C). The salinity of the medium was adjusted by the addition of the respective amount of marine salt. Depending on the growth rates of the cultures under the different experimental conditions, the cultures were acclimated for a period of at least two weeks (usually four weeks) to the new condition before starting physiological measurements.
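For clarity, the temperature-salinity design described above can be written out explicitly; the sketch below simply enumerates the nine applied combinations (the variable names are ours and purely illustrative).

```python
# Enumerate the temperature/salinity combinations used for acclimation:
# -1 degC was combined with 35, 50, 70 PSU; +1 and +4 degC with 20, 35, 50 PSU.
design = {-1: [35, 50, 70], 1: [20, 35, 50], 4: [20, 35, 50]}

conditions = [(t, s) for t, salinities in design.items() for s in salinities]
print(len(conditions), "conditions:", conditions)  # 9 combinations in total
```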
Chlorophyll a determination
Chlorophyll a (Chla) concentrations were determined spectrophotometrically by extraction with 90% acetone according to the protocol from [21]. Algal samples (5 mL) were collected on glass-fiber filters, 2.5 mL acetone was added, and cells were broken in a cell homogenizer (Precellys Evolution, Bertin Technology, France). After centrifugation (2 min, 12.500 x g, Sigma 1-14, Sigma, Germany), absorbance of the pigment extract was measured with a spectrophotometer (Hitachi U2000, Tokyo, Japan) at 664 and 630 nm.
Measurements of photosynthesis rates and variable chlorophyll fluorescence
Oxygen-based (P O ) and fluorescence-based (P F ) photosynthesis rates were measured and calculated as described in detail in [22]. Essentially, oxygen evolution and variable Chlorophyll (Chl) fluorescence were measured by light-irradiance curves (P-E curves) in a so-called Light pipette equipped with a special cuvette (Topgallant LLC, Salt Lake City, UT, USA) that allows the connection to a PAM-fluorometer (PAM 101/103, Walz, Effeltrich, Germany). A 3-ml aliquot of cells (equals a Chla concentration of 4-6 μg mL -1 ) from each experimental condition was transferred into the cuvette and maintained at the respective growth temperature under continuous stirring in darkness for 5 min. For P-E curves, six actinic light levels (21,50,107,207,415, 713 μmol photons m −2 s −1 ) were applied for 4 min each. These light periods alternated with dark periods of 4 min length each. Measurements of P-E curves always started with an initial 4-min dark period yielding a total dark adaptation period of 9 min duration. Oxygen evolution was measured using a Clark-type electrode (MI 730, Microelectrodes Inc., NH, USA). For the calculation of P O (μmol O 2 [mg Chla] -1 h -1 ) the oxygen solubility corrected for the medium salinity and the measuring temperature [23] was taken into account. Net oxygen evolution and dark respiration rates were derived from the average oxygen evolution rates measured during the last minute of each light and dark period, respectively. A representative example of light-dependent net oxygen evolution for Chaetoceros sp. and P. antarctica is shown in S1 Fig. Gross oxygen production was derived by correcting net oxygen evolution rates for the corresponding dark respiration (R; μmol O 2 [mg Chla] -1 h -1 ) measured after the respective light periods. It should be noted that no enhanced post-illumination respiration [24] was observed in the measurements. Moreover, the respiration rates showed very little variability with respect to the preceding irradiance levels.
The ratio of photosynthesis to respiration (rGP/R) was derived from the maximum value of fitted (details see below) gross photosynthesis (GP max ) divided by the mean value of all respiration rates measured within a specific P-E curve.
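To make the derivation of gross photosynthesis and rGP/R from the raw P-E measurements explicit, a minimal sketch is given below. It assumes that net oxygen evolution rates (one per actinic light step) and dark respiration rates (one per following dark period) have already been averaged over the last minute of each period, as described above; the array names and numerical values are illustrative, and GP max is taken here simply as the maximum measured gross rate rather than from the curve fit used in the study.

```python
import numpy as np

# Net O2 evolution per light step and dark respiration after each step
# (umol O2 [mg Chla]^-1 h^-1); values are illustrative placeholders.
net_o2 = np.array([10., 25., 48., 70., 82., 85.])    # six actinic light levels
resp   = np.array([-8., -8.5, -9., -8.7, -9.2, -9.])  # negative = O2 uptake

# Gross photosynthesis: net rate corrected for the respiration measured
# after the respective light period (subtracting a negative rate adds |R|).
gross = net_o2 - resp

# rGP/R as defined in the text: maximum gross rate over mean respiration.
rgp_r = gross.max() / abs(resp.mean())
print(gross, rgp_r)
```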
In parallel with oxygen evolution, the variable Chl fluorescence parameters were determined, whereby Fo and Fm are the minimum and maximum fluorescence in darkness, respectively, and F and Fm' are the steady-state minimum and maximum fluorescence under actinic illumination, respectively. Fluorescence-based photosynthetic rates (μmol O 2 [mg Chla] -1 h -1 ) were estimated as

$P_F = \frac{0.5 \cdot 0.25 \cdot \Phi_{PSII} \cdot Q_{phar}}{d \cdot Chl}$,

where Φ PSII is the effective quantum yield of PSII [25], Q phar is the amount of absorbed radiation (see below), d is the optical path length of the measuring cuvette, and Chl is the Chla concentration of the algal suspension. The factors 0.5 and 0.25 are based on the assumption that the linear transport of one electron requires two quanta and that four electrons are required for the evolution of one molecule of oxygen, respectively. It is thus assumed that P F represents the maximum amount of electrons (expressed as oxygen equivalents) transported through the electron transport chain, whereas P O is the oxygen evolution rate of PSII biased by alternative electron pathways, such as the Mehler-reaction or cyclic electron transport [26]. Therefore, the ratio P F /P O describes the activity of alternative electron-consuming reactions [27,28]. A representative example of fluorescence-based P-E curve measured in Chaetoceros sp. and in P. antarctica is shown in S1 Fig.
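The sketch below evaluates this expression directly from the quantities defined above. The factors 0.5 and 0.25 are those stated in the text; the conversion from per-second to per-hour (factor 3600) and the exact normalization are our reading of the definition and should be checked against [22]. All numerical inputs are illustrative.

```python
def fluorescence_based_rate(phi_psii, q_phar, d, chl):
    """
    Fluorescence-based photosynthetic rate P_F (umol O2 [mg Chla]^-1 h^-1).

    phi_psii : effective PSII quantum yield (dimensionless)
    q_phar   : absorbed radiation (umol photons m^-2 s^-1)
    d        : optical path length of the cuvette (m)
    chl      : Chla concentration of the suspension (mg m^-3)

    Factors: 0.5 (two quanta per transported electron) and 0.25 (four
    electrons per evolved O2), as stated in the text. The factor 3600
    converts from per second to per hour (our assumption).
    """
    return 0.5 * 0.25 * phi_psii * q_phar / (d * chl) * 3600.0

# Illustrative values only (Chla of 5000 mg m^-3 corresponds to ~5 ug mL^-1):
print(fluorescence_based_rate(phi_psii=0.45, q_phar=150.0, d=0.01, chl=5000.0))
```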
The oxygen-based and fluorescence-based P-E curves were fitted according to [29]. The derived fitting parameters (a, b, and c) were used to calculate GP max and the light saturation index (E k value) according to [29]. In addition to the estimation of P F , the variable fluorescence parameters were used to calculate the extent of non-photochemical quenching (NPQ) according to [30] as

$NPQ = \frac{Fm - Fm'}{Fm'}$,

where Fm is the maximum fluorescence measured at the end of the initial dark period of P-E curve measurements. The maximum NPQ values (NPQ max ) and the half-saturation irradiance of NPQ max (E 50 ) were derived from fitting of the light-response curves of NPQ using the Hill equation (according to [31]). A representative example of the fitted light-dependent NPQ measured in C. sp. and in P. antarctica is shown in S1 Fig.
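A sketch of the NPQ calculation and of a Hill-type fit for NPQ max and E 50 is given below. The exact parameterization of the Hill equation used in [31] is not reproduced in the text, so the fitted form (including the Hill coefficient n), the use of SciPy, and all fluorescence values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fluorescence values: Fm from the initial dark period and
# Fm' measured under each of the six actinic light levels.
fm = 1.80
fm_prime = np.array([1.71, 1.57, 1.24, 0.95, 0.78, 0.73])
irradiance = np.array([21., 50., 107., 207., 415., 713.])  # umol photons m^-2 s^-1

npq = (fm - fm_prime) / fm_prime          # NPQ = (Fm - Fm') / Fm'

def hill(e, npq_max, e50, n):
    """Hill-type light response of NPQ (exact form of [31] assumed)."""
    return npq_max * e**n / (e50**n + e**n)

popt, _ = curve_fit(hill, irradiance, npq, p0=[1.5, 200.0, 2.0])
npq_max, e50, n = popt
print(f"NPQ_max = {npq_max:.2f}, E50 = {e50:.0f}, n = {n:.2f}")
```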
Cellular optical properties
The in vivo-absorption spectra of algal cells were measured in a dual-beam spectrophotometer (M500, Zeiss, Jena, Germany). The photometer was equipped with an adapter for dispersive samples (Zeiss) to allow a very close placement of the sample to the detector and to correct for light scattering. The Chla-specific in vivo-absorption coefficient, a*phy (cm 2 [mg Chla] -1 ), was calculated as

$a^{*}_{phy} = \frac{2.3 \cdot A}{d \cdot Chl}$,

where 2.3 is the conversion factor from log10 to ln, A is the absorption of the sample (400-700 nm), d is the path length of the cuvette (0.01 m), and Chl is the Chla concentration of the sample (mg m -3 ). In the results section, the mean values of the Chl-specific absorption (ā*phy) are given.
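The coefficient can be computed per wavelength and then averaged over 400-700 nm, as in the definition above. The sketch below follows that reading; the handling of the unit conversion between m² and cm², and the synthetic absorbance spectrum, are our assumptions.

```python
import numpy as np

def chl_specific_absorption(absorbance, d, chl):
    """
    Chla-specific in vivo absorption a*_phy = 2.3 * A / (d * chl).

    absorbance : measured absorption A(lambda) over 400-700 nm (dimensionless)
    d          : cuvette path length (m), here 0.01 m
    chl        : Chla concentration (mg m^-3)

    Returns a*_phy in m^2 (mg Chla)^-1; multiply by 1e4 for cm^2 (mg Chla)^-1.
    """
    return 2.3 * np.asarray(absorbance) / (d * chl)

# Mean a*_phy over the PAR range, using an illustrative absorbance spectrum
# with a red Chla absorption peak around 675 nm:
wavelengths = np.arange(400, 701)                                   # nm
a_spectrum = 0.10 + 0.15 * np.exp(-((wavelengths - 675) / 20.0) ** 2)
a_star = chl_specific_absorption(a_spectrum, d=0.01, chl=5000.0)
print(a_star.mean() * 1e4, "cm^2 (mg Chla)^-1 (mean over 400-700 nm)")
```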
The knowledge of the emission spectrum of the light source and of a*phy allows the estimation of the amount of photosynthetically active radiation absorbed by the algal cultures, Q phar . The estimation follows [32], where Q phar is the photosynthetically absorbed radiation (μmol m -2 s -1 ), Q is the photosynthetically available (incident) radiation (μmol m -2 s -1 ), and d is the optical path length (m).
Estimation of net primary production
To describe the potential effect of different rGP/R on NPP under different seasonal conditions (see below), the expected daily NPP was estimated from measured oxygen-based P-E curves (P O ; see above) and considering the measured respiration rates. For the respective experimental conditions, the mean values of light-dependent GP (derived from measured P-E curves, see above) were fitted according to [29]. It should be noted that the fit function does not include a term for the initial respiration rate. Therefore, only P-E curves based on GP can be fitted in this way. The derived fitting parameters (a, b, c) were used to estimate daily NPP (μmol O 2 [mg Chla] -1 d -1 ) as the fitted gross photosynthesis at the incident irradiance E (μmol photons m −2 s −1 ; see below) minus the respiration rate R. The respiration rates were derived from the mean value of all respiration rates measured within a specific P-E curve. The incident irradiance was based on four daily light climates (S2 Fig) representing model estimates of different seasonal in situ-light conditions (adopted from [10]): winter sea ice, spring melt water, summer pelagic water, and autumn new sea ice. These light climates were combined with the fitting parameters derived from specific temperature and salinity conditions that reasonably represent the seasonal conditions during spring, summer, autumn, and winter ( Table 1). To take into account the dynamics of light conditions, NPP was estimated for 10-min time intervals and integrated over 24 h.
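A sketch of this daily integration is shown below. Since the exact P-E fit function of [29] is not reproduced in the text, a generic saturating light-response (Webb-type exponential) stands in for the fitted GP(E); that substitution, the synthetic daily light climate, and all parameter values are assumptions for illustration only.

```python
import numpy as np

def gp_webb(e, gp_max, e_k):
    """Stand-in saturating P-E model: GP(E) = GP_max * (1 - exp(-E / E_k))."""
    return gp_max * (1.0 - np.exp(-e / e_k))

def daily_npp(irradiance_10min, gp_max, e_k, resp):
    """
    Daily NPP (umol O2 [mg Chla]^-1 d^-1) integrated over 10-min intervals:
    NPP = sum_t [GP(E_t) - R] * dt, with dt = 1/6 h and R the mean respiration.
    """
    dt_hours = 10.0 / 60.0
    return float(np.sum((gp_webb(irradiance_10min, gp_max, e_k) - resp) * dt_hours))

# Synthetic 'Summer' light climate: 20 h of light, sinusoidal up to
# 400 umol photons m^-2 s^-1, followed by 4 h of darkness.
t = np.arange(0, 24, 10.0 / 60.0)                        # 144 intervals of 10 min
e = np.where(t < 20, 400.0 * np.sin(np.pi * t / 20.0), 0.0)
e = np.clip(e, 0.0, None)

print(daily_npp(e, gp_max=250.0, e_k=80.0, resp=25.0))   # positive under long days
```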
Statistical analysis
Two-way analysis of variance (ANOVA) followed by Bonferroni post-tests (p-value < 0.05) were performed on the physiological data (GP max , R, rGP/R, NPQ max , P F /P O , a � phy ) to test for differences of the algal species in response to culture conditions (temperature, salinity). The different salinity and temperature conditions were used as treatment factors. The data set was checked for normality by Shapiro-Wilk test (SigmaPlot 12.5), and all random samples passed the test. Correlation was calculated by Spearman rank correlation test (two-tailed test of significance with 95% confidence interval).
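For reference, the normality check and the rank correlation described above can be reproduced with standard statistical libraries. The sketch below uses SciPy purely as an illustration; the original analysis used SigmaPlot, and the data shown here are random placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rgp_r = rng.normal(8.0, 2.0, size=12)                     # placeholder rGP/R values
respiration = 30.0 - 2.5 * rgp_r + rng.normal(0, 1.0, 12)  # correlated placeholder

# Shapiro-Wilk test for normality of a sample
w_stat, p_norm = stats.shapiro(rgp_r)

# Two-tailed Spearman rank correlation between rGP/R and respiration
rho, p_corr = stats.spearmanr(rgp_r, respiration)

print(f"Shapiro-Wilk p = {p_norm:.3f}; Spearman rho = {rho:.2f}, p = {p_corr:.4f}")
```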
Physiological key parameters
The data from P-E curves were used to compare the maximum gross photosynthesis rates (GP max ), respiration rates (R), ratio of GP max to respiration (rGP/R), NPQ max , and ratio of maximum fluorescence-based/maximum oxygen-based photosynthesis rates (P F /P O ) for all experimental conditions and for the three algal strains used in this study (see below). With this wide set of experimental conditions it was intended to find general physiological responses of the investigated species to different temperature and salinity conditions. It has to be mentioned that at a growth temperature of -1˚C the two strains of P. antarctica did not grow sufficiently well at 70 PSU to obtain sufficient biomass for physiological measurements. Therefore, under this temperature/salinity combination physiological measurements were performed for Chaetoceros sp. only. In addition to the determination of physiological parameters, data of P-E curves were also used to apply a curve fit according to [29] and to finally estimate the effects of changes in rGP/R on NPP for different environmental scenarios (see below).

Table 1. Experimental conditions and assumed light conditions used for the estimation of daily net primary production (NPP) under different seasonal conditions from measured photosynthesis and respiration rates. In case of a given range of temperature or salinity values, NPP was calculated as the mean value of the respective NPP at the specific conditions. Light conditions were adopted from [10].

Fig 1A shows the mean values of GP max (gross oxygen-based photosynthesis) at three different growth temperatures and in combination with different salinities. For Chaetoceros sp. no significant effect of temperature on GP max was observed, which is in contrast to P. antarctica. At the salinities 35 and 50 PSU, P. antarctica strain 109 showed significantly higher GP max at -1˚C than at 1˚C (p < 0.001), whereas no temperature effect was detected for the comparison of GP max measured at 1 and 4˚C. For P. antarctica strain 764, a comparable increase of GP max from 1˚C to -1˚C (p < 0.001) was found for 35 PSU only. Moreover, only for P. antarctica strain 764 a significant increase of GP max from 1˚C to 4˚C was observed at 20 and 35 PSU (p < 0.001). For all tested species an influence of salinity on GP max was observed at 4˚C with significantly lower GP max values at 50 PSU than at 20 PSU (p < 0.01) and 35 PSU (p < 0.05), respectively. Significant species-specific differences were found at a growth temperature of 1˚C with significantly higher GP max values in Chaetoceros sp. (p < 0.01) than in both strains of P. antarctica. Significant differences in GP max in comparison of strain 109 and 764 of P. antarctica were observed at 4˚C (p < 0.05) only.

Fig 1B depicts the respiration rates under the applied experimental conditions. In Chaetoceros sp., a trend of increasing respiration rates with temperature was found at a salinity of 35 PSU with significant differences between -1 and 4˚C (p < 0.001). In P. antarctica strain 109 a comparable effect was observed at 35 and 50 PSU with a significant increase of respiration rates at 4˚C compared to 1˚C (p < 0.01) and at 4˚C compared to -1˚C (p < 0.01). In P. antarctica strain 764 a significant increase of respiration rates with temperature was observed only at a salinity of 35 PSU in the comparison of 4 to 1˚C (p < 0.001). In the comparison of the different species the most prominent result is the significantly higher respiration rate at 4˚C/35 and 50 PSU in P. antarctica strain 109 compared to strain 764 (p < 0.01) and to Chaetoceros sp. (p < 0.001).
Fig 1C depicts the ratio GP max over respiration (rGP/R). At first sight, the temperature- and salinity-induced changes of GP and R seem to influence the ratio GP/R rather randomly. However, a few general trends could be deduced. Accordingly, for all investigated species rGP/R decreased from 1˚C to 4˚C at 35 PSU (p < 0.05). This trend of lower rGP/R with increasing temperature was measured in Chaetoceros sp. and P. antarctica strain 109 also in the comparison of -1 to 4˚C (at 35 PSU; p < 0.01). Salinity was of minor importance for changes in rGP/R. In both strains of P. antarctica, only at a growth temperature of 1˚C was a significantly higher rGP/R observed at 35 PSU compared to 20 and 50 PSU, respectively (p < 0.01). Significant species-specific differences were detected particularly at 1˚C with higher rGP/R in Chaetoceros sp. than in both strains of P. antarctica (at 20 and 50 PSU; p < 0.05). Another important general trend was found in the relation of rGP/R to R and GP max , respectively. Whereas rGP/R significantly correlated with changes in respiration rates (p < 0.01), there was no correlation of rGP/R with changes in GP max (Fig 2).
The comparison of NPQ max values revealed the largest interspecies differences between Chaetoceros sp. and P. antarctica (Fig 1D). At all growth conditions, NPQ max values in Chaetoceros sp. were significantly higher (1 and 4˚C with p < 0.001; -1˚C with p < 0.05) than in P. antarctica. In contrast, there was no significant influence of temperature or salinity on NPQ max in either Chaetoceros sp. or the two Phaeocystis strains. The species-specific differences in NPQ max were further supported by the ratio of the half-saturation irradiance of NPQ max (E 50 ) over the photoacclimation parameter E k (derived from fluorescence-based photosynthetic rates P F ). Thus, the ratio E 50 /E k describes the light-dependent NPQ induction status in relation to the saturation level of the electron transport chain. It is evident that the mean value of E 50 /E k for all experimental conditions was significantly higher in Chaetoceros sp. (mean E 50 /E k = 4.0) compared to both strains of Phaeocystis (mean E 50 /E k = 2.0; S3 Fig).
The ratio of maximum fluorescence-based to maximum oxygen-based gross photosynthetic rates (P F /P O ) is depicted in Fig 3A. For all investigated species no significant influence of temperature or salinity on P F /P O was observed at growth temperatures of 1 and 4˚C. Only at -1˚C was P F /P O significantly increased in Chaetoceros sp. and in P. antarctica strain 764 compared to 1 and 4˚C, respectively (at 50 PSU; p < 0.001). As a consequence, P F /P O was significantly lower at -1˚C/50 PSU in P. antarctica strain 109 than in strain 764 (p < 0.01).
Fig 2. Relationship between a) ratio of gross photosynthesis to respiration (rGP/R) and respiration and b) rGP/R and maximum gross photosynthetic rates (GP max ) in
The mean value of the Chl-specific in vivo-absorption (a*phy) describes the absorption efficiency of algal cells. Under the experimental conditions, Chaetoceros sp. showed the lowest variation of a*phy values with no significant influence of either temperature or salinity (Fig 3B). Similarly, there was no significant influence of salinity on the absorption efficiency of both strains of P. antarctica. On the other hand, in both strains of P. antarctica a large variation of a*phy values was observed. Accordingly, at a growth temperature of 1˚C and 4˚C the a*phy values were significantly lower in both strains of P. antarctica at all salinities than in Chaetoceros sp. (p < 0.001). In addition, P. antarctica strain 764 showed significantly higher a*phy values than strain 109 at a growth temperature of 4˚C. In contrast, at -1˚C growth temperature a*phy values were in a comparable range for all three algal species. Interestingly, this resulted in a specific pattern of a*phy changes with respect to those experimental conditions that represent different seasonal conditions (S4 Fig; see Table 1 for experimental conditions). Whereas Chaetoceros sp. showed constant a*phy values over all seasonal conditions, in P. antarctica strain 109 significantly higher a*phy values were observed in the winter condition than in the other seasonal conditions (p < 0.01). Thereby, strain 109 reached similar a*phy values under winter conditions as Chaetoceros sp. In P. antarctica strain 764 significantly higher a*phy was observed in the winter condition than in spring and autumn (p < 0.01).
rGP/R and NPP under seasonal conditions
The large range of applied experimental conditions was chosen to investigate the general influence of salinity and temperature on the physiology of Antarctic phytoplankton. However, phytoplankton will not be confronted with all of these conditions in their natural environment. Therefore, Fig 4A depicts the changes in rGP/R under those experimental conditions that represent the salinity/temperature combinations of different seasonal conditions. Whereas, no significant changes of rGP/R under different seasonal conditions were observed in Chaetoceros sp., there was a significant increase of rGP/R from the spring/summer to the autumn/winter conditions (p < 0.05) in both strains of P. antarctica.
It was additionally intended to evaluate the effect of changes in rGP/R on NPP. For this purpose, daily-integrated NPP for different seasonal conditions (Fig 4B) were calculated on the basis of the measured photosynthesis and respiration rates under the specific experimental conditions in combination with season-specific in situ light conditions (see Materials & methods for details). As expected from seasonal in situ light conditions (with respect to maximum irradiance and daylength), the highest NPP was calculated for summer conditions, whereas under winter conditions a barely positive NPP was calculated for Chaetoceros sp. and P. antarctica strain 109, but not for strain 764. Although, there was the trend of higher rGP/R in autumn/winter than in spring/summer, Fig 5 reveals that there is no correlation between rGP/ R and NPP for different seasonal conditions. Instead, a significant, positive correlation between GP max and NPP is found for the spring, summer, and autumn conditions (p < 0.01). For the winter condition, there is no correlation between GP max and NPP.
Effects of temperature and salinity on photosynthetic and respiration rates
Several studies highlighted the importance of phytoplankton on microbial respiration and gross carbon production in the SO [9]. However, to our knowledge there is no study dealing with the influence of multiple stressors on both, photosynthesis and respiration rates, in Antarctic phytoplankton. Thus, the present study focussed on the investigation of the ratio photosynthesis to respiration under different combinations of temperature and salinity in two typical phytoplankton species of the SO.
Accordingly, the analysis of photosynthesis rates revealed no clear trend of temperature or salinity-dependent changes in GP max values in Chaetoceros sp. and Phaeocystis antarctica. Although this observation is in accordance with previous studies (e.g. [11,33]) it is rather unexpected because GP max is mainly defined by the activity of the enzyme RubisCO, whose activity should be directly correlated with temperature changes. A possible explanation for this observation could be that the solubility of CO 2 with decreasing temperature increases more than that of O 2 [34] and that the temperature effect can be compensated by a higher cellular RubisCO content at lower temperature [35].
The present study further revealed that only Chaetoceros sp. but not P. antarctica was able to grow at a combination of -1˚C and a salinity of 70 PSU. To our knowledge this was not shown before and it could explain that P. antarctica is usually found in younger sea ice with conditions comparable to the water column but is rarely found in older sea ice (with higher salinity) [36].
In contrast to photosynthetic rates, a general trend of increasing respiration rates with the increase of growth temperature from 1 to 4˚C at a salinity of 35 PSU was observed in both Chaetoceros sp. and P. antarctica. In P. antarctica, this trend was also found at a salinity of 50 PSU. These changes in respiration rates also influenced the temperature-dependent changes of the ratio of gross photosynthesis to respiration. At a salinity of 35 PSU, Chaetoceros sp. and both strains of P. antarctica showed decreasing rGP/R with increasing growth temperature from 1 to 4˚C. From these results two major conclusions could be drawn: first, the changes in rGP/R were primarily due to variations in respiration but not in photosynthetic rates, and second, rGP/R is primarily temperature-dependent, whereas the impact of the salinity is of minor importance for rGP/R. The novel finding of the present study is that salinity influenced the temperature dependence of respiration to a very small degree in Chaetoceros sp., whereas in P. antarctica an effect of salinity was observed specifically in combination with low salinity (20 PSU). Moreover, this study provides values of taxon-specific respiratory losses in SO phytoplankton. Accordingly, for all investigated experimental conditions, the respiratory losses in relation to GP were in the range of 8-14% in Chaetoceros sp., 8-25% in P. antarctica strain 764, and 8-33% in P. antarctica strain 109, with the lowest and highest losses at -1˚C and 4˚C, respectively. More specifically, P. antarctica showed significantly higher rGP/R values in autumn/winter compared to spring/summer, whereas the season-specific rGP/R values did not vary significantly in Chaetoceros sp. In the light of these species-specific variations of rGP/R and of the observed species-specific temperature dependence of respiration it could also be concluded that the Q 10 rule may not be systematically applicable in Antarctic phytoplankton.

Fig 4. Daily NPP (μmol O 2 [mg Chla] -1 d -1 ) was calculated from fitted gross oxygen production rates (GP) minus measured respiratory losses considering the light conditions under seasonal conditions. The asterisks represent significant differences between the species (* p < 0.05, ** p < 0.01, *** p < 0.001). https://doi.org/10.1371/journal.pone.0224101.g004

Fig 5 (Chaetoceros sp. and P. antarctica strains 764 and 109). Estimation of NPP is based on mean values of fitted Photosynthesis-Irradiance curves for different experimental conditions that represent specific seasonal in situ-conditions (see text for details). The calculated NPP values were plotted against the respective mean values of maximum gross photosynthesis (GP max ) and against rGP/R, respectively. The correlation was calculated using Spearman rank correlation (correlation coefficient r s ). In a), the correlation between NPP and GP max data was calculated separately for 'Spring', 'Summer', and 'Autumn' (filled squares) and for the 'Winter' condition (open squares).
Respiratory losses and net primary production
An important aim of the present study was the evaluation of the impact of different rGP/R on NPP estimates in representative phytoplankton species from the SO. Therefore, the present data set was used to calculate NPP for specific irradiance, temperature, and salinity combinations that represent different seasonal conditions. The comparison of species-specific NPP for the different seasons showed a comparable pattern for all investigated species. The highest NPP was calculated for the 'Summer' condition with high irradiance, short dark period, and high water temperatures. Compared to the 'Summer condition', NPP calculated for 'Spring' and 'Autumn' ranged at 28% in P. antarctica strain 764 and between 60-80% in Chaetoceros sp. and P. antarctica strain 109 (Fig 4B). Despite the large season-specific differences in rGP/ R, the comparison of the calculated NPP with season-specific rGP/R and GP max values in the 'Spring', 'Summer', and 'Autumn' conditions, respectively, revealed that the NPP is clearly correlated with the photosynthetic potential of the investigated phytoplankton species but not with their respiratory losses. This does not hold true for the 'Winter' condition and could be due to its very short light period (6/18h, L/D). Here, a positive NPP was calculated for Chaetoceros sp. and P. antarctica strain 109, only. This means that these algal strains are able to keep respiratory losses at a minimum and to maintain the cells in an energetic balance during 'Winter' condition. This is in line with results of [33] where strongly reduced but still positive carbon uptake rates were measured under a combination of low irradiance (5 μmol m -2 s -1 ) and low temperature (-1.5˚C) in the diatom Chaetoceros.
Species-specific differences in the acclimation to variations in temperature and salinity
A distinctive species-specific difference in the acclimation to different temperature and salinity combinations is based on the observation of lower variations of some physiological parameters (R, rGP/R, a*phy, P F /P O ) in Chaetoceros sp. than in P. antarctica. Obviously, cells of Chaetoceros sp. are able to cope with strongly changing temperature and salinity conditions within the range of their actual physiological capacity, whereas cells of P. antarctica were forced to specifically acclimate their physiological cell status according to the experimental conditions. This could be interpreted as different acclimation strategies of phytoplankton, which is also reflected by significantly higher NPQ max values, higher rGP/R (at 1 and 4˚C), higher P F /P O (at 1 and 4˚C), and a generally higher ratio E 50 /E k in Chaetoceros sp. than in P. antarctica. The ratio E 50 /E k describes the half-saturation light intensity of NPQ max in relation to the beginning saturation of photosynthetic rates. The significantly higher E 50 /E k in Chaetoceros sp. could be interpreted in the way that the full potential of light protection in the investigated species was required at very high irradiance only, which is an indication of a very high overall potential of light protection. Thus, in our opinion, the higher NPQ max values in Chaetoceros sp. compared to P. antarctica are not an indication of photoinhibitory stress but of a high photoprotective potential. The species dependence of NPQ max values in the comparison of different Antarctic phytoplankton species and, in particular, the higher NPQ max in diatoms than NPQ max in P. antarctica was also shown in previous publications [18,37,38]. The non-photochemical quenching is designated as a very important mechanism to adapt to dynamic light conditions as experienced by the phytoplankton in their natural habitats (e.g. [39]). The most important component of NPQ is the energy-dependent quenching that depends on the presence of a proton gradient across the thylakoid membrane, of de-epoxidized xanthophyll cycle pigments, and of specific light-harvesting proteins (Lhcx) [40]. It is therefore likely that the higher NPQ capacity in Chaetoceros sp. than in P. antarctica is due to a larger pool size of xanthophyll cycle pigments and/or to an increased Lhcx protein content of the cells [41].
With respect to the photoprotective potential of phytoplankton, the extent of alternative electron transport is of importance. The significantly higher ratio P F /P O in Chaetoceros sp. than in P. antarctica (at 1 and 4˚C with 35 PSU) could be interpreted as a higher activity of alternative electron pathways [10]. Alternative electrons are not used for the reduction of NADP + . Instead, they contribute to e.g. cyclic electron transport at PSII and PSI, to the waterwater cycle, to photorespiration, to the reduction of nitrate and sulphate [42,43] and, thus, to the generation of the trans-thylakoid pH gradient. Therefore, it is assumed that the activity of alternative electron transport changes the photosynthetic NADPH/ATP ratio in favour of ATP which in turn decreases the energetic pressure on the photosynthetic electron transport chain [27]. It is not known whether this additional ATP production could compensate for ATP production by e.g. lower respiration rates. However, notably high P F /P O values were observed in Chaetoceros sp. and P. antarctica strain 764 at the 'Winter' condition (-1˚C, 50 PSU) where at the same time low relative respiratory losses were measured. Thus, alternative electron sinks could contribute to dissipate excessively absorbed light energy to maintain the cellular energy balance under unfavourable conditions [42].
The different acclimation strategies to changing temperature and salinity conditions could also be deduced from the comparison of a*phy values. Whereas a*phy did not vary significantly in Chaetoceros sp., a significant increase of a*phy in both strains of P. antarctica was observed in the 'Winter' condition (S4 Fig). The a*phy value describes the wavelength-dependent and Chl a-normalized absorptivity (spectrally integrated optical absorption cross section) of phytoplankton cells. It depends to a large extent on the cellular Chl concentration and the related package effect of pigments, but also on the content of accessory pigments. Thus, an increase of a*phy is typically induced by a decrease of the cellular Chl content [44]. It could be concluded that the cellular Chl content did not change in Chaetoceros sp. under the applied experimental conditions, whereas the results indicate a decreased cellular Chl content in P. antarctica at low temperature in combination with high salinity.
In summary, the species-specific differences observed in the present study might reflect the specific adaptation of Antarctic phytoplankton to different environmental conditions, e.g. to sea ice or highly stratified water conditions in the case of Chaetoceros sp., in contrast to deeply mixed waters in the pelagic zone in the case of P. antarctica [6].
Conclusions
In light of the importance of the SO for the atmospheric CO2 level, it is essential to understand the influence of combined changes of environmental factors on respiratory losses in relation to the photosynthetic activity of the phytoplankton. The present study on two different species of Antarctic algae has shown that temperature changes in particular induce variations of rGP/R. However, these variations did not influence NPP. It can therefore be concluded that the assumption of constant respiratory loss rates in the range of 10-15% of GP within the annual growth period appears appropriate under field conditions when measured respiration data are not available. It should be emphasized that changes of other environmental factors (e.g. nutrient availability, grazing pressure) may induce stronger variations of rGP/R. In this case, the impact on NPP needs to be re-evaluated.
Supporting information S1 Table. Temperature and salinity of the applied experimental conditions and numbers of biological replicates for the measured parameters. The numbers in the table represent the numbers of biological replicates for the measured physiological parameters under the applied experimental conditions: GPmax, maximum gross photosynthetic rate; R, respiration rate; rGP/R, ratio of maximum gross photosynthetic rate to respiration rate; NPQ, non-photochemical quenching; PF/PO, ratio fluorescence-based to oxygen-based gross photosynthetic rate; a*phy, chlorophyll-specific absorption coefficient. | 2019-10-23T13:06:35.434Z | 2019-10-21T00:00:00.000 | {
"year": 2019,
"sha1": "12d58c9e819a9fbc5f80c58293fe60ad9bc727b4",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0224101&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb1fe72d87ed5b704b0bf54cebdfff9b262d90b3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
225708445 | pes2o/s2orc | v3-fos-license | STUDIES AND RESEARCH ON THE CONTROL OF THE RESIDUAL GAS LEAKS IN A CASTING METAL PARTS WORKSHOP
The paper presents an analysis regarding the control of possible dangerous carbon monoxide leaks in spaces intended for the casting of metal parts. Such carbon monoxide leaks can occur mainly in the sector where the metal alloy to be cast into parts is made. Direct measurements were made primarily at this working point, during different phases of the alloy elaboration process, but also during periods when the workshop had no production activity. The measurements were made with the Fluke 975 Anemometer, and the results obtained were compared with the legislation in force regarding safety and health at work.
Introduction
Environmental pollution is a reality today; it is found both in industrial areas and in urban settlements (and, to a lesser extent, in rural areas). While natural pollution cannot be predicted and, in this context, can be controlled only to a small extent, artificial pollution is induced by human activity (regardless of the type of activity undertaken), and it depends only on us to limit its effects on air, water, and soil [1].
Steel is a ubiquitous material in our daily lives; it is a 100% durable and recyclable material. No matter how much steel is recycled, it remains just as strong and durable [2,3].
There are a number of new, modern steelmaking processes in operation in industrial practice. The technological advances made in recent years have had as their main goals the minimization of electricity consumption and the maximization of energy efficiency in the manufacturing process.
During these activities, greenhouse gas emissions occur due to the following causes: -chemical processes that take place in the production of steel; -burning fuel for technological purposes, or for heating workspaces.
The process of elaboration and casting of steels, in electric arc furnaces, is one of the determining factors for the formation of gas and dust emissions.
The process of elaboration of steels in electric arc furnaces is based on physico-chemical processes, which take place at high temperatures, ensuring, based on specific technological instructions, the obtaining of steels, in the prescribed composition and quality.
The raw material used is scrap iron, and the main auxiliary materials are: iron ores, lime, dolomite, ferroalloys, coke, fluorine. The melting of the metal charge is done by means of the electric arc, formed between the electrodes of the electric furnace and the charge [4].
The process of making steel in electric arc furnaces includes several technological stages, characterized by the technological operations which must be performed:
- adjustment (hot furnace repairs - shotcreting);
- loading (handling and loading of raw materials into the furnace);
- melting (melting of the charge, use of oxygen, oxy-gas burners);
- oxidation (insufflation of oxygen, evacuation of the oxidizing slag);
- refining - deoxidation and alloying (addition of materials with deoxidizing and alloying action);
- evacuation (slag evacuation and steel evacuation, respectively).
In the current stage of technological development of the steel industry, the modernized electric furnace, transformed into a melting machine, transfers the refining operations to ladle metallurgy aggregates, but here too the [...] [5,6].
Due to the physico-chemical processes, which take place at high temperatures, the elaboration is accompanied by the intense formation of gases, which contain an appreciable amount of dust.
The phenomenon is specific to all the mentioned technological stages, but especially to the melting and oxidation periods.
The amount of gas and dust emissions is directly proportional to the intensity of the physico-chemical processes, in the metal bath, and in the working space of the furnace, and depends on the specific periods of the elaboration process, presented below.
During the melting period:
- a brown smoke, due to the oxidation of metal vapours, is emitted from the area of the electric arcs of the furnace, where the temperature is much higher than the evaporation temperature of the metals;
- gas emissions are due to the oxidation of carbon in the area of the electric arc, where the temperature of the metal is at its maximum, and to the burning of oils that contaminate the charge when it collapses during melting.
Quantitatively, dust and gas emissions are variable, being influenced by two categories of factors:
a) technological factors:
- supply intensity and voltage of the furnace;
- the means used to intensify the melting (use of oxygen, oxy-gas burners);
- compactness of the charge;
- the steel grade being produced;
b) random factors:
- unforeseen collapse of the charge due to uneven melting;
- short circuits between the charge and the electrodes;
- damage to the electrodes.
During the oxidation period:
- gaseous emissions, in particular carbon oxides, are produced, because the use of gaseous oxygen makes decarburization occur at high speed;
- a dense brown smoke is emitted, due to the vaporization of metals; this is explained by the fact that the decarburization process is accompanied by an intense release of heat, which raises the metal bath temperature in the reaction zone above the vaporization temperature of some metals dissolved in it; also, when the CO bubbles float up and leave the metal bath over its entire surface, metal and slag particles are mechanically extracted and entrained by the gas flow.
From a quantitative point of view, this is the stage in which the gas and dust emissions are maximum (usually the gas emissions from this period exceed by 20% the gas emissions from the melting period).
Sampling for gas measurement can be performed either as a network (grid) measurement or as a point measurement. Taking samples at a single measuring point in the measuring plane means that the chosen point is representative of the entire measuring cross section [7-9].
Experimental results
In the case of installations which operate under unchanging conditions, at least three discontinuous measurements are performed for long-term normal operation with maximum emission, and at least one further measurement for operating situations which repeat regularly and show an oscillating emission behaviour.
In the case of installations whose operating conditions undergo temporal changes, a sufficient number of measurements is made, but at least 6, for the operating conditions which can lead to maximum emissions.
The duration of a discontinuous measurement must not exceed half an hour; measured values are noted as values/half an hour. The appliance also determines the temperature in the combustion zone, where the measurements are made.
The verification is performed in two measurement planes, at the beginning and at the end of the elaboration. This device measures the fraction of heat transferred by convection; the heat of radiation is not taken into account.
Fig. 1. Fluke 975 Anemometer
The experimental determinations were made in a workshop for casting metal parts, at 2 working points: in the steelmaking area, at the electric arc furnace (point 1), and in the non-ferrous alloy making area, at the flame furnace (point 2). Determinations were also made while the production activity was interrupted. The measurements were made at 10-minute intervals during the operation of the workshop, over an interval of 8 hours within a working day.
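As a practical illustration of the sampling scheme described above (10-minute readings over an 8-hour working day, with discontinuous values reported per half hour), the following sketch aggregates such a series into half-hour mean values. The function name and the reading values are ours and purely illustrative; they are not taken from the paper.

```python
# Aggregate 10-minute gas readings (ppm) into half-hour values, matching the
# convention that discontinuous measurements are noted as values per half hour.
# The reading values below are invented placeholders, not measured data.

def half_hour_means(readings_ppm, samples_per_half_hour=3):
    """Group consecutive 10-minute readings into half-hour averages."""
    means = []
    for i in range(0, len(readings_ppm) - samples_per_half_hour + 1, samples_per_half_hour):
        block = readings_ppm[i:i + samples_per_half_hour]
        means.append(sum(block) / len(block))
    return means

# An 8-hour shift sampled every 10 minutes gives 48 readings (placeholder values).
readings = [5.0 + 0.1 * k for k in range(48)]
print(half_hour_means(readings))  # 16 half-hour mean values
```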
In order to describe a gas flow as clearly as possible, it is necessary to monitor the following waste-gas parameters, which can be considered boundary conditions for the waste gases: waste gas density; humidity; flow rate and static pressure; temperature [10]. The inspection instrument used for complete air quality testing is the Fluke 975 Anemometer, which combines five air monitoring instruments into a single, robust and easy-to-use device. The Fluke 975 is used to verify the efficient operation of heating, ventilation and air conditioning systems, and to test for hazardous carbon monoxide leaks in all types of buildings (Fig. 1).
The determined values were centralized in Table 1 and compared with the legislation in force regarding occupational exposure to chemical agents, according to GD No. 1218/2006 [11][12][13]. Concentrations of chemicals in the air are usually measured as the mass of the chemical (milligrams, micrograms, nanograms or picograms) per volume of air (cubic meters or cubic feet). Concentrations can also be expressed as parts per million (ppm) or parts per billion (ppb) by using a conversion factor. This conversion factor is based on the molecular weight of the chemical and is different for every chemical. The temperature of the atmosphere also influences the calculation.
To convert ppm to mg/m3, the following formula was used: concentration (mg/m3) = 0.0409 × concentration (ppm) × molecular weight.
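As a worked example of the conversion just quoted (the factor 0.0409 is the reciprocal of the molar volume of an ideal gas, about 24.45 L/mol at roughly 25 °C and 1 atm), the sketch below converts between ppm and mg/m3. The molecular weight used for carbon monoxide and the helper-function names are our own illustrative choices, not values taken from the paper.

```python
# Convert a gas concentration between ppm and mg/m3 using the formula above:
#   concentration (mg/m3) = 0.0409 * concentration (ppm) * molecular weight
# where 0.0409 = 1 / 24.45, with 24.45 L/mol the molar volume at ~25 degC, 1 atm.

def ppm_to_mg_per_m3(ppm, molecular_weight):
    return 0.0409 * ppm * molecular_weight

def mg_per_m3_to_ppm(mg_per_m3, molecular_weight):
    return mg_per_m3 / (0.0409 * molecular_weight)

# Example for carbon monoxide (molecular weight about 28.01 g/mol):
co_limit_ppm = 30.0  # workplace limit quoted later in the text
print(ppm_to_mg_per_m3(co_limit_ppm, 28.01))  # roughly 34.4 mg/m3
```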
[Figure legend fragment: "Carbon ..."; "3 - Area without production activity".]
As can be seen from the graphs above, the maximum values were determined at the flame furnace, especially during the melting and oxidation periods.
Although high values have been obtained, they fall within the limits prescribed by GD. No. 1218/2006, regarding professional exposure to chemical agents.
The carbon dioxide (CO2) content of the air is an indicator of ambient air quality, assuming that human respiration is the main source of CO2 emissions. Measuring instruments with CO2 sensors allow this important value to be monitored reliably, because when the ambient air quality decreases (i.e. the CO2 content of the air increases), performance decreases [11].
The highest determined value for CO2 was detected in the working area of the electric arc furnace. In the flame furnace, a lower CO2 value was determined, at a higher temperature (Fig. 2).
When assessing ambient air quality by measuring the CO2 concentration and other parameters, the CO2 concentrations should not exceed 1000 ppm.
Carbon monoxide is colourless and odourless, and it prevents the absorption of oxygen by the blood when inhaled at too high a concentration. Breathing CO at a concentration of 700 ppm in an enclosed space leads to death in about 3 hours.
It is a suffocating, toxic gas, which arises from the incomplete combustion (oxidation) of carbon-containing substances. The most common everyday sources of CO are gasoline engines, gas ovens, heating systems, and solid fuels such as wood and coal.
The maximum allowable concentration of carbon monoxide at work is 30 ppm (parts per million). Carbon monoxide can accumulate very quickly, to a life-threatening extent, in enclosed and semi-enclosed spaces. A carbon monoxide detector allows reliable detection of this insidious gas [12].
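A minimal helper that flags a CO reading against the two levels quoted above (the 30 ppm maximum allowable workplace concentration and the roughly 700 ppm level described as lethal within about 3 hours in an enclosed space) might look as follows; the function name, return labels and sample readings are illustrative assumptions.

```python
# Flag CO readings against the limits quoted in the text:
#   30 ppm  - maximum allowable workplace concentration,
#   700 ppm - lethal within ~3 hours in an enclosed space.

def classify_co(ppm):
    if ppm >= 700:
        return "danger: potentially lethal within hours"
    if ppm > 30:
        return "above the 30 ppm workplace limit"
    return "within the workplace limit"

for reading in (5, 45, 750):  # placeholder readings in ppm
    print(reading, "ppm ->", classify_co(reading))
```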
The average of the carbon monoxide determinations at all the monitored working points falls within the allowed values according to the norms in force regarding air quality in closed working spaces, with a maximum at the flame furnace working point (Fig. 3). The humidity formed in a room depends mainly on the following factors: moisture production in the room, air exchange with the outside air, the moisture absorption capacity of walls and furniture, and the transport of moisture through external construction components.
The parameters air temperature and relative humidity are important for planning, selecting, and installing ventilation and air-conditioning systems.
The humidity of the air decisively influences the feeling of comfort of the people in a room. The ideal ambient humidity is between 30 and 65%. Of course, in addition to this, the temperature is also decisive. High air humidity can be extremely unpleasant at high temperatures [13].
For this reason, humidity measuring devices are generally equipped with humidity and temperature sensors.
The humidity values determined at all working points, as well as in the workshop when there is no production activity, are within the norms regarding air quality in closed industrial workspaces (Fig. 4).
Conclusions
The paper presents the data necessary to investigate the occupational exposure to noxious pollution, regarding the control of hazardous waste gas leaks, in a workshop for casting metal parts.
The evaluation involves the investigation of the working environment conditions (knowledge of technological processes, choice of noxious substances, air sampling, method used, interpretation of results, etc.).
In order to create appropriate working environment conditions, a series of technical and organizational measures are required, as follows:
Organizational measures:
- training on the need to use work-specific protective equipment;
- periodic medical check-ups aimed at detecting diseases that have developed, or are developing, in the respiratory system;
- determinations of the air quality in the workplace atmosphere, with the periodicity required by law;
- appropriate signalling of risks at workplaces.
Technical measures:
- Acquisition of E.I.P. (individual protective equipment) corresponding to the activity to be carried out, according to the regulations in force.
-Identification and design / redesign of the working conditions, in accordance with the legislation in force regarding the minimum safety and health requirements for the workplaces. | 2020-07-02T10:36:11.111Z | 2020-06-15T00:00:00.000 | {
"year": 2020,
"sha1": "7cafefd215c5c05f226732370e56655228631c44",
"oa_license": "CCBYNC",
"oa_url": "https://www.gup.ugal.ro/ugaljournals/index.php/mms/article/download/3462/3091",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ebcf048e44bb79001c44c310a84073e69b0983bb",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119251483 | pes2o/s2orc | v3-fos-license | On the near-threshold $\bar pp$ invariant mass spectrum measured in $J/\psi$ and $\psi'$ decays
A systematic analysis of the near-threshold enhancement in the $\bar pp$ invariant mass spectrum seen in the decay reactions $J/\psi \to x \bar pp$ and $\psi (3686) \to x \bar pp$ $(x = \gamma,\, \omega,\, \rho,\, \pi,\, \eta)$ is presented. The enhancement is assumed to be due to the $\bar NN$ final-state interaction (FSI) and the pertinent FSI effects are evaluated in an approach that is based on the distorted-wave Born approximation. For the $\bar NN$ interaction a recent potential derived within chiral effective field theory and fitted to results of a partial-wave analysis of $\bar pp$ scattering data is considered and, in addition, an older phenomenological model constructed by the J\"ulich group. It is shown that the near-threshold spectrum observed in various decay reactions can be reproduced simultaneously and consistently by our treatment of the $\bar pp$ FSI. It turns out that the interaction in the isospin-1 $^1S_0$ channel required for the description of the $J/\psi \to \gamma \bar pp$ decay predicts a $\bar NN$ bound state.
I. INTRODUCTION
The origin of the enhancement in the antiprotonproton (pp) mass spectrum at low invariant masses observed in heavy meson decays like J/ψ → γpp, B → Kpp andB → Dpp, but also in the reaction e + e − ↔pp, is an interesting and still controversially discussed issue. In particular, the spectacular near-threshold enhancement in thepp invariant mass spectrum for the reaction J/ψ → γpp, first observed in a high-statistics and high-mass-resolution experiment by the BES Collaboration [1], has led to numerous publications with speculations about the discovery of a new resonance [1] or of app bound state (baryonium) [2][3][4], and was even associated with exotic glueball states [5][6][7]. However, in the above processes the hadronic final-state interaction (FSI) in thē pp system should play a role too. Indeed, the group in Jülich-Bonn [8,9] but also others [10][11][12][13][14][15][16][17] demonstrated that the near-threshold enhancement in thepp invariant mass spectrum of the reaction J/ψ → γpp could be simply due to the FSI between the outgoing proton and antiproton. Specifically, the calculation [8,9] based on the realistic Jülich antinucleon-nucleon (N N ) model [18][19][20], the one by the Paris group [15], utilizing the Paris N N model [21], and that of Entem and Fernández [14], using aN N interaction derived from a constituent quark model [22], explicitly confirmed the significance of FSI effects estimated in the initial studies [10][11][12] within the effective range approximation.
In the present work we perform a systematic analysis of the near threshold enhancements in the reactions J/ψ → xpp and ψ ′ (3686) → xpp (x = γ, ω, ρ, π, η) with emphasis on the role played by thepp interaction. The aim is to achieve a simultaneous and consistent descrip-tion of allpp invariant mass spectra measured in the various reactions. FSI effects for different decay channels cannot be expected to be quantitatively the same. In particular, with regard topp, the two baryons have to be in different states if the quantum numbers of the third particle in the decay channel differ, in accordance with the general conservation laws. Furthermore, it is possible that dynamical selection rules, reflecting the details of the reaction mechanism, could suppress the decay intō pp S-waves for some decays near threshold. Thus, in different decay modes the finalpp system can and must be in different partial waves and, accordingly the FSI effects will differ too.
As mentioned, initial studies of FSI effects in the decay J/ψ → γpp were done in the rather simplistic effective range approximation. Later investigations, like the ones performed by us [8,9], employed directly scattering amplitudes from realisticN N potential models. Still also here the treatment of the FSI is done within the so-called Migdal-Watson approach [23,24] where the elementary decay (or production) amplitude is simply multiplied with thepp T -matrix. It is known that this approach works reasonably well for reactions with a final N N system [25]. In this case the scattering length a is fairly large, for example, a ≈ −24 fm for a final np system (in the 1 S 0 state). Measurements of the level shifts in antiprotonic hydrogen atoms suggest that the scattering lengths forpp scattering are presumably only in the order of 1 to 2 fm [26]. Moreover, those scattering lengths are complex due to the presence of annihilation channels. Therefore, in the present paper we consider an alternative and more refined approach for taking into account the FSI. Specifically, we use the Jost function which is calculated directly from realisticN N potentials. FSI effects are then taken into account by multiplying the reaction amplitude with the inverse of this Jost function. This is practically equivalent to a treatment of such decay reactions within a distorted-wave Born approximation. Note that this is different from the popular Jost-function approach based on the effective range approximation [27] which is widely used in investigations of FSI effects.
We present results for the decays J/ψ → xpp with x = γ, ω, π 0 , η, which all have been measured. For the last three cases parity, G-parity, and isospin are conserved so that each of those channels allows one to explore thepp system in a distinct partial wave. At the same time the analoguous reactions ψ ′ → xpp are studied. In this case there are data for x = γ, π 0 , η. Clearly, ifpp FSI effects are responsible for the enhancements seen in specific J/ψ decays, then very similar effects should occur in the corresponding ψ ′ decays because the selection rules are the same.
As far as theN N interaction is concerned we employ again the phenomenological model A(OBE) of the Jülich group [18] used in our earlier works [8,9,28,29]. In addition, and as a novelty, we utilize also aN N interaction derived in the framework of chiral effective field theory (EFT) [30]. The latter interaction incorporates results of a recent partial-wave analysis (PWA) ofpp scattering data [31]. In particular, this EFT potential has been constructed in such a way, that it reproduces the amplitudes determined in the PWA well up to laboratory energies of T lab ≈ 200 − 250 MeV [30], i.e. in the low-energy region where we expect that FSI effects are important.
As pointed out at the beginning, also in decays of the B and Υ mesons to final states with aN N pair enhancements at low invariant masses have been observed [32][33][34][35][36][37][38][39]. However, in the majority of those experiments the invariant-mass resolution of theN N spectrum is relatively low and often there are only two or three data points in the (relevant) near-threshold region. Therefore, we refrain from looking at those data in detail. Note also, that in case of weak decays like B → Kpp or B → Dpp parity is not conserved and, as a consequence, there is less restriction on the possible partial waves of theN N final state. The situation is different for the reaction e + e − ↔pp. As shown by us in recent studies [40,41], employing the same formalism and the sameN N interactions as in the present work, the FSI mechanism can indeed explain the near-threshold enhancement seen in the data taken by the PS170 [42], the FENICE [43] and the BaBar [44] Collaborations.
The paper is structured in the following way: In Section II we provide a summary of the formalism that we employ for treating the FSI due to theN N interaction. We discuss also the selection rules for the decay channels considered. Results of our calculations are presented in Section III. First we analyze hadronic decay channels of J/ψ and ψ ′ (where isospin is assumed to be conserved) and compare our predictions with measurements of thē pp invariant mass spectrum for the π 0p p, ηpp, and ωpp channels. Subsequently we consider radiative decays. Since it turns out that thepp invariant mass spectrum of J/ψ → γpp can no longer be described with the employed and previously establishedN N interactions, once the more realistic treatment of FSI effects is utilized, we perform and present a refit of the chiral EFTN N potential that reproduces the γpp data and stays also very close to the result of the PWA (and to the original EFT potential [30]) for the relevant ( 1 S 0 ) partial wave. The paper ends with a summary. Results of the refitted 1 S 0 N N potential are presented in an appendix, and compared with the PWA and the previously published EFT potential [30].
II. TREATMENT OF THE N̄N FINAL STATE INTERACTION
Our study of the processes of J/ψ (or ψ ′ ) decaying to xpp (x = γ, ω, π, η) is based on the distorted wave Born approximation (DWBA) where the reaction amplitude A is given by Here A 0 is the elementary (or primary) decay amplitude, GN N the freeN N Green's function, and TN N theN N scattering amplitude. For a particular (uncoupled)N N partial wave with orbital angular momentum L, Eq. (1) reads where T L denotes the partial-wave projected T -matrix element, and k and E k are the momentum and energy of the proton (or antiproton) in the center-of-mass system of theN N pair. The quantity T L (p, k; E k ) is obtained from the solution of the Lippmann-Schwinger (LS) equation, for a specificN N potential V L . In case of coupled partial waves like the 3 S 1 -3 D 1 we solve the corresponding coupled LS equation as given in Eq. (2.20) of Ref. [30], and use then T LL in Eq. (2). In principle, the elementary production amplitude A 0 L in Eq. (2) has an energy dependence and it depends also on theN N momentum and the photon momentum relative to theN N system. However, in the near-threshold region the variation of the production amplitude with regard to those variables should be rather small as compared to the strong momentum dependence induced by theN N FSI and, therefore, we neglect it in the following. Then Eq. (2) can be reduced to Here, we have separated the factor k L which ensures the correct threshold behaviour for a particular orbital angular momentum so thatĀ 0 L is then a constant. The quantity in the bracket in Eq. (4) is the so-called enhancement factor [27]. Introducting a suitably normalized wave function for thepp pair in the continuum [27], ψ (−) * k,L (0), this quantity is just the inverse of the Jost function, i.e. ψ . We want to emphasize that in the present work we calculate the enhancement factor for the consideredN N interactions explicitly, which amounts to an integral over the pertinent (half-offshell) T matrix elements, see Eq. (4). This should not be confused with the popular Jost-function approach which relies simply on the effective range approximation. In any case, the latter cannot be easily applied in theN N case because now the scattering length as well as the effective range are complex quantities. For a thorough discussion of various aspects of the treatment of FSI effects due to baryon-baryon interactions, see Refs. [45][46][47].
The differential decay rate for the processes X → xpp (X = J/ψ, ψ ′ ) can be written in the form [8,48] after integrating over the angles.
Here the Källén function λ is defined as λ(x, y, z) = [(x − y − z)^2 − 4yz]/(4x), M ≡ M(p̄p) is the invariant mass of the p̄p system, m_X, m_p, m_x are the masses of the J/ψ (or ψ′), the proton, and the meson (or γ) in the final state, while A is the total (dimensionless) reaction amplitude. Note that in Eq. (5) we have assumed that averaging over the spin states has been already performed [48]. In the present manuscript we will consider only individual partial wave amplitudes and, therefore, use a specific A_L in Eq. (5).
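As a small numerical cross-check of this definition (note that with the 1/(4x) factor the function evaluated at (M², m_p², m_p²) equals the squared center-of-mass momentum of the p̄p pair), the sketch below computes that momentum just above threshold. The proton mass is the standard PDG-like value; the code is only an illustration of the kinematics, not part of the authors' analysis.

```python
import math

# Kallen-type function as defined in the text:
#   lam(x, y, z) = ((x - y - z)**2 - 4*y*z) / (4*x)
# Evaluated at (M^2, m_p^2, m_p^2) this equals the squared c.m. momentum
# of the p-pbar pair at invariant mass M.

def lam(x, y, z):
    return ((x - y - z) ** 2 - 4.0 * y * z) / (4.0 * x)

M_P = 0.938272  # proton mass in GeV

def pbarp_momentum(M):
    """Center-of-mass momentum (GeV) of the p-pbar pair at invariant mass M (GeV)."""
    return math.sqrt(lam(M * M, M_P * M_P, M_P * M_P))

# 10 MeV above the p-pbar threshold the pair momentum is already ~0.097 GeV.
M = 2.0 * M_P + 0.010
print(pbarp_momentum(M))
```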
Let us come back to A 0 and, specifically, to the assumption that it is constant in the region near theN N threshold where we perform our calculation. Such an assumption is sensible if there are no dominant one-, twoor even three-particle doorway channels, with masses or thresholds close to theN N threshold, for the transition from J/ψ (or ψ ′ ) toN N . For example, a dominant N N production via ρ, ππ or πππ intermediate states would definitely not invalidate this assumption. However, a genuine resonance with a mass comparable to the X(1835) found by the BES Collaboration in the reaction J/ψ → γπ + π − η ′ [49,50] would render it already somewhat questionable, if it constitutes indeed the dominant doorway channel for the decay into theN N system. In any case, and as in all previous works that exploit FSI effects, it should be clear that the assumption of a con-stantĀ 0 L is first and foremost a working hypothesis. The question that can be addressed in our study is simply, whether the energy dependence generated by theN N interaction in the final state alone suffices to describe thepp invariant mass spectra or not. A possible genuine energy dependence of the primary production amplitude itself cannot be excluded. Conservation of the total angular momentum, together with parity, charge conjugation and isospin conservation for the strong interactions, put strong constraints on the partial waves of the producedpp system. We list the allowed quantum numbers for various decay channels in Table I for orbital angular momentum L ≤ 1, i.e. S and P waves. We use the standard notation (2S+1) L J , where L, S, J are the orbital angular momentum, the total spin and the total angular momentum. The isospin I is sometimes indicated by the notation (2I+1)(2S+1) L J . In the actual calculation we consider, in general, only the lowest partial wave, i.e. either the 1 S 0 or the 3 S 1 . Those should be the dominant partial waves for energies near thepp threshold. As already said, we assume also that a single partial wave saturates (or dominates) in the energy range covered, i.e. up to excess energies of M (pp) − 2m p ≈ 100 MeV considered also in the earlier works [8, 9, 13-15, 28, 29]. In principle, higher partial wave may well play a non-negligible role around 100 MeV (or even at somewhat lower energies) and one could limit oneself to excess energies up to ≈ 50 MeV, say, to be on the safe side. Or one could introduce a cocktail of amplitudes. However, at present there is very little experimental information to constrain the relative weight of the partial waves and also their interference. Hopefully in the future, with a larger data set and more precise measurements of angular distributions a more refined analysis will become feasible. Note that it is possible that dynamical selection rules lead to a suppression of the lowest partial waves in theN N system. This could be also detected by measuring the angular distributions of the decay products.
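The selection rules summarized above follow from the standard quantum numbers of a fermion-antifermion pair, P = (−1)^(L+1), C = (−1)^(L+S) and G = C·(−1)^I; the short sketch below evaluates them for the lowest N̄N partial waves. It only reproduces these textbook relations and is not code from the paper.

```python
# Quantum numbers of an N-Nbar state with orbital angular momentum L,
# total spin S and isospin I (fermion-antifermion pair):
#   P = (-1)**(L + 1),  C = (-1)**(L + S),  G = C * (-1)**I

def nnbar_quantum_numbers(L, S, I):
    parity = (-1) ** (L + 1)
    c_parity = (-1) ** (L + S)
    g_parity = c_parity * (-1) ** I
    return parity, c_parity, g_parity

waves = {(0, 0): "1S0", (0, 1): "3S1", (1, 0): "1P1", (1, 1): "3PJ"}

for (L, S), name in waves.items():
    for I in (0, 1):
        P, C, G = nnbar_quantum_numbers(L, S, I)
        print(f"{name:4s} I={I}:  P={P:+d}  C={C:+d}  G={G:+d}")
```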
III. RESULTS
Most of the studies of FSI effects in the reaction J/ψ → γp̄p (and related decays) in the literature are performed in the Migdal-Watson approach [23,24]. In this approximation, instead of evaluating the integral equation that arises in the DWBA, see Eq. (2), the FSI is simply accounted for by multiplying the elementary reaction amplitude by the on-shell N̄N T-matrix, i.e.
where N is an arbitrary normalization factor. It is known from pertinent studies that the applicability of the Migdal-Watson approach is limited to a fairly small energy range [46]. In particular, it works only reasonably well if the scattering length is rather large - which is the case for NN scattering with values of a ≈ −24 fm for the interaction in the (np) 1S0 partial wave. However, for N̄N scattering the values of the scattering lengths are typically in the order of only 1-2 fm [26,30].
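To make this contrast concrete, the following sketch evaluates the simplest scattering-length-only FSI weight, |f(k)|² with f(k) = 1/(−1/a − ik), once for a large real np-like scattering length and once for a complex value of order 1 fm as is typical for p̄p. This is only the effective-range-type estimate discussed in the text, not the Jost-function (DWBA) treatment actually used in the paper, and the complex p̄p value is an illustrative placeholder rather than a fitted number.

```python
import math

HBARC = 197.327  # MeV * fm

# Scattering-length-only amplitude f(k) = 1 / (-1/a - i*k); a Migdal-Watson
# style FSI weight is taken proportional to |f(k)|^2.  This is the simple
# effective-range-type estimate mentioned in the text, not the Jost-function
# (DWBA) treatment used in the paper.

def fsi_weight(k, a):
    """|f(k)|^2 for momentum k (fm^-1) and (possibly complex) scattering length a (fm)."""
    return abs(1.0 / (-1.0 / a - 1j * k)) ** 2

a_np = -23.7          # fm: np 1S0 scattering length, large and real
a_pbarp = 1.0 - 0.6j  # fm: pbar-p-like value of order 1 fm, complex (illustrative only)

for excess_mev in (1.0, 10.0, 50.0):             # pair energy above threshold
    k = math.sqrt(938.272 * excess_mev) / HBARC  # non-relativistic pair momentum in fm^-1
    print(f"excess {excess_mev:5.1f} MeV: "
          f"np-like weight {fsi_weight(k, a_np):7.1f} fm^2, "
          f"pbarp-like weight {fsi_weight(k, a_pbarp):5.2f} fm^2")
```

The np-like weight varies by a large factor over the first few MeV, while the p̄p-like weight stays comparatively flat, which is exactly the point made in the text about the limited applicability of this simple treatment for N̄N.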
Entem and Fernández have presented results based on the Migdal-Watson approximation and on the DWBA [14] and those suggest drastic differences between the two approaches. Indeed, we can confirm this with our own calculation employing theN N potential A(OBE) [18] that we used in our earlier studies [8,9,28,29]. Corresponding results are presented in Fig. 1. The dashdotted curve is the prediction for thepp invariant mass based on the I = 1 1 S 0 amplitude in the Migdal-Watson approximation, as published in Ref. [8] which reproduces rather well the energy dependence found in the experiments [1,51,52]. The result for the sameN N interaction but based on the more refined treatment of the FSI, Eq. (4), is no longer in agreement with the data, see the solid curve in Fig. 1. At first sight, this is certainly disturbing. However, we want to emphasize that it would be premature to see the observed discrepancy as signal for the failure of the FSI interpretation of the enhancement in the near-thresholdpp invariant mass spectrum. Rather it could be simply an evidence for certain shortcomings of the employedN N interaction in the 1 S 0 partial wave. In addition, isospin is not conserved in the reaction J/ψ → γpp and, therefore, the actualN N FSI can involve any combination of the I = 0 and I = 1 1 S 0 amplitudes. We will address these issues in detail later in this section. First we want to look at purely hadronic (J/ψ and ψ ′ ) decay channels with app final state where nominally isospin is conserved.
But before that we would like to comment on the normalization. Usually only event rates are given for the various experiments. These differ for different experiments and also for different invariant-mass resolutions. For the figures presented below, in general, we fix the scale according to the experiment with the highest resolution. Which data set is used to fix the scale will be emphasized in the pertinent caption. The data (and error bars) from other experiments with lower resolution are then renormalized to this scale, guided by the eye. Also our theory results are renormalized to this scale (guided by the eye) by an appropriate choice of Ā^0_L in Eq. (4). The only exception is the J/ψ → γp̄p reaction, where the constant Ā^0_L is fixed via a fit to the p̄p invariant-mass spectrum. Note also that the actual values of (most of) the data presented in the various figures were not directly available to us. We use here values obtained from digitizing the figures of the original publications. Finally, for some decays the BES Collaboration has published data sets with different statistics but with the same momentum resolution. Since we wanted to include both sets in the same figure we shifted the ones from the earlier measurement slightly to the right (by 1 MeV) so that one can distinguish the two data sets easier in the figure. This concerns the γp̄p and the ωp̄p channels.
[Fig. 1 caption fragment: ... based on Eq. (4), while the dash-dotted curve is based on the Migdal-Watson approximation [8]. Data are taken from Refs. [1,51,52]. The measurement of Ref. [51] is adopted for the scale. The data for the BES measurement from 2003 have been shifted slightly to the right, cf. text.]
A. Decays into three hadrons
Besides J/ψ → γpp there is also experimental information on J/ψ and ψ ′ decays into three-body channels involving app pair and a pseudo-scalar (π, η) [1,[52][53][54][55][56] or vector (ω) [57,58] meson. There is, however, a strong variation in the quality of the data. While in case of J/ψ → π 0p p and J/ψ → ωpp the momentum resolution is excellent and comparable to the one for J/ψ → γpp, the bin widths for the other reactions are much larger. pp spectrum for the decay J/ψ → π 0p p. The band represents the result based on theN N FSI generated from the chiral EFT potential [30] while the solid curve is the result for theN N interaction A(OBE) [18]. The dashed curve denotes the phase space behavior. Data are taken from Refs. [1,53].
The measurement of Ref. [1] is adopted for the scale.
Let us first consider channels with pseudo-scalar mesons. The processes of J/ψ and ψ ′ decaying to πpp or ηpp involve the 3 S 1 partial wave, see Table I. The event rates calculated via Eqs. (4) and (5) are shown in Fig. 2 for the decay J/ψ → π 0p p, in Fig. 3 for J/ψ → ηpp, in Fig. 4 for ψ ′ → π 0p p, and in Fig. 5 for ψ ′ → ηpp. Results for ourN N potential derived in chiral EFT are presented as bands. This band is generated from the four cutoff combinations {Λ ,Λ} considered in the construction of the EFTN N potential [30] and can be viewed as a (rough) estimate of the theoretical uncertainty, see the corresponding discussions in Refs. [30,59]. The solid line is the prediction for the meson-exchange potential A(OBE). The dashed line represents the phase space behavior and follows from Eq. (5) by setting the production amplitude A to a constant. In general, the latter is normalized in such a way that it coincides with the results for the EFT interaction for excess energies around 70 − 80 MeV. We want to stress once more that in Fig. 2 and in the other figures in this section all normalizations are arbitrary. We are only interested in the energy dependence as it follows from the FSI effects predicted by the employedN N interactions.
Obviously, in all cases our predictions are in line with the data. Specifically, the results for J/ψ → π 0p p are in nice agreement with the experiment. Here the FSI generates a moderate but noticable enhancement at small pp invariant masses as compared to the phase space and yields app spectrum which is seemingly closer to the trend exhibited by the data than the phase-space curve. It is interesting to see that the results based on the chiral potential and on A(OBE) are fairly similar. In this context let us remind the reader that we had to introduce some phenomenological adjustments in our earlier study based on the Migdal-Watson approximation (and with A(OBE)) in order to be able to reproduce that experimental invariant mass spectrum, cf. Eq. (8) in Ref. [8]. Now the behavior follows directly from the refined treatment of FSI effects via Eq. (4).
The results for the other channels are less conclusive. The invariant-mass resolution in the pertinent measurements is only in the order of 30 MeV or so and, consequently, there are only three or four data points below the excess energy of 100 MeV. Whether or not the present data require the enhancement provided by theN N FSI is difficult to judge. Hopefully, future measurements with much higher statistics as well as much higher resolution will provide a more serious test for FSI effects.
Let us now look at the decay J/ψ → ωp̄p. In this case the p̄p state is produced in the 1S0 partial wave with isospin I = 0, cf. Table I.
[Figure caption fragment (Fig. 6): The measurement of Ref. [58] is adopted for the scale. The data for the measurement from 2008 have been shifted slightly to the right, cf. text.]
Results for the N̄N
model A(OBE) [18] (solid curve) and the chiral potential constructed in Ref. [30] (band) are shown in Fig. 6 and compared to data from the BES Collaboration [57,58]. As can be seen from Fig. 6, the predictions agree rather well with the measuredpp invariant mass spectrum in the energy range considered. Also here differences between the results based on the chiral potential and A(OBE) are small. Actually, it seems that for this particularN N partial wave there is no strong dependence on the employed FSI formalism. Our results for A(OBE) based on the Migdal-Watson approximation, published in [28], are qualitatively very similar to the ones we get now within the DWBA.
B. Radiative decays
In the J/ψ → γpp and ψ ′ → γpp decays the isospin is no longer conserved and, in principle, the finalpp state can have any admixture of the isospin 0 and 1 components. In our previous works the I = 1 amplitude was used for J/ψ → γpp [8] while for ψ ′ → γpp the isospin averaged amplitude, Tp p = (T I=0 + T I=1 )/2, was found to yield a good agreement with the measurements [29]. Results based on anN N interaction derived from the quark model, presented in Ref. [14], suggest that the FSI effects of both isospin components might be roughly in line with the data, while apparently in Refs. [13,15,17] only the isospin 0 amplitude was considered. The BES Collaboration argues in favor of a decay into a pure I = 0pp state, guided by the experimental observation that apparently I = 1 states are suppressed in J/ψ radiative decays [49]. Indeed, the branching fraction of J/ψ → γπ 0 is very small as compared to J/ψ → γη [60]. But one must also say that there are only a few candidates listed in [60] for a decay of J/ψ into γ and a pure I = 1 hadronic channel. And, in case of J/ψ → γρω for example, only an upper limit of the branching fraction is known.
Note that for the reaction J/ψ → γpp a partial-wave analysis has been performed [51]. It suggests that the near-threshold enhancement is dominantly in the J P C = 0 −+ state, which means that the pp system should be in the 1 S 0 partial wave.
As already shown above, using the Jülich model A(OBE) as input, the mass dependence of the near-threshold p̄p spectrum (and specifically the pronounced peak) is no longer reproduced when the refined treatment of the FSI is employed. It turns out that the same is also the case for the chiral EFT potential of Ref. [30].
In the present study we adhere to the hypothesis that the enhancement in the γpp channel is connected with thepp FSI. Then, there are two options: First, we can dismiss the assumption that the producedpp state consists only of the I = 1 component alone (made in our earlier work [8] and also in the calculation based on the EFT interaction mentioned right above) and allow for an arbitrary mixture of the I = 0 and 1 amplitudes. Second, we can question the amplitude in the 1 S 0 partial wave as predicted by the employed Jülich A(OBE) and chiral EFTN N potentials. Since the one produced by the latter interaction was fixed by a fit to the partialwave analysis of Zhou and Timmermans [31] this implies that we have to depart from the results of that analysis.
Clearly, for physical (and practical) reasons we still want to stay as close as possible to the solution given in Ref. [31] which reproduces the consideredN N data very well. Thus, we allow only minimal variations in the 1 S 0 partial wave and keep all other partial waves fixed. Furthermore, we require that allN N scattering observables in the low-energy region remain practically unchanged. This concerns the total, elastic (pp →pp), and chargeexchange (pp →nn) cross sections, and also the differential cross sections. Since at low energies those observables are dominated by the 3 S 1 partial wave and the weight of the 1 S 0 amplitude is fairly small, there is some freedom for variations even under such strict constraints.
We will consider only variations in the 31 S 0 partial wave, i.e. in the I = 1 amplitude. The 11 S 0 potential is kept as in Ref. [30]. Given the fact that thepp invariant mass spectrum for J/ψ → ωpp is well reproduced by the 11 S 0 amplitude we do not see any reasons to introduce modifications in this partial wave. Recall that the γpp and ωpp channels involve the very same amplitudes, see the selection rules in Table I. Thus, the assumption that isospin is conserved in the hadronic decay rules out that the strong enhancement seen for γpp can be directly associated with FSI effects due to the I = 0 amplitude. Indeed, any appreciable modification of the I = 0 amplitude would automatically spoil the reproduction of the ωpp data. Note, however, that, in principle, one cannot exclude that isospin conservation is also violated in hadronic decays, see, e.g., Ref. [61].
In the following we examine the two options jointly. We regard two exemplary combinations of the two isospin amplitudes, namely the "standard" one, T = Tp p = (T 0 + T 1 )/2, and also one with a predominant I = 0 component, T = (0.7 T 0 + 0.3 T 1 ). For both cases we then perform a combined fit to thepp invariant mass spectrum for J/ψ → γpp (up to excess energies of 67.5 MeV) and to theN N partial-wave cross sections of the 1 S 0 amplitude as determined in the PWA of Zhou and Timmermans [31]. Results for thepp invariant mass spectrum are reported below while details and results for thē N N sector are summarized in the Appendix. The decay rate for J/ψ → γpp based on the refitted N N interaction is shown in Fig. 7. The results are for the combination T = (T 0 + T 1 )/2. One can see that now the pronounced peak near 10 MeV is very well described by the FSI. At the same time our (former)N N results are also reproduced, c.f. the Appendix. Interestingly, the modified potential generates a bound state in the 31 S 0 channel which was not the case for the interaction presented in Ref. [30]. For example, for the cutoff combination {Λ,Λ} = {450 MeV, 500 MeV} the bound state is located at E B = (−36.9 − i 47.20) MeV, where the real part denotes the energy with respect to theN N threshold. As it happens, this bound state is not very far away from the position of the X(1835) resonance found by the BES Collaboration in the reaction J/ψ → γπ + π − η ′ [49,50]. That resonance was interpreted as a possible signal for aN N bound states in several investigations. But, be aware, our bound state is in the I = 1 channel and not in I = 0 as advocated in publications of the BES Collaboration [49] and of other authors [15]. In any case, we want to stress that the actual value we get for the binding energy should be viewed with caution. As mentioned, we examined also the combination T = 0.7 T 0 + 0.3 T 1 , and with it we can achieve likewise a simultaneous description of the J/ψ → γpp data and thepp scattering cross section with similar quality. However, in this case the position of the bound state is around E B = (−14.8 − i 39.7) MeV. Clearly, the data above theN N threshold do not allow to determine the binding energy reliably given that the bound state might be 30 or 40 MeV below the threshold and has a sizable width.
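For orientation, the complex pole positions quoted above can be translated into a mass and a width via M = 2m_p + Re(E_B) and Γ = −2 Im(E_B), assuming the usual convention E = E_R − iΓ/2 for a pole energy measured from the N̄N threshold. The short computation below uses only the proton mass (PDG-like value) and the two pole positions quoted in the text; it is an illustration added here, not part of the original analysis.

```python
# Translate the quoted pole positions E_B (relative to the N-Nbar threshold)
# into a mass and width, assuming the usual convention E = E_R - i*Gamma/2.
# Only the proton mass is added; the pole values are the ones quoted above.

M_PROTON = 938.272  # MeV

def mass_and_width(e_b):
    """Return (mass, width) in MeV for a complex pole energy e_b in MeV."""
    mass = 2.0 * M_PROTON + e_b.real
    width = -2.0 * e_b.imag
    return mass, width

for label, pole in [("T = (T0 + T1)/2 fit", complex(-36.9, -47.20)),
                    ("T = 0.7*T0 + 0.3*T1 fit", complex(-14.8, -39.7))]:
    m, g = mass_and_width(pole)
    print(f"{label}: mass ~ {m:.1f} MeV, width ~ {g:.1f} MeV")
```

The first combination corresponds to a mass of about 1840 MeV with a width of roughly 94 MeV, which makes the proximity to the X(1835) mentioned in the text explicit.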
Note that we do not show in Fig. 7 the data points in the lowest bin from the BES experiments. For energies below 5 MeV the Coulomb interaction has a significant influence and likewise the difference between thepp and nn thesholds plays a role. Both effects are not included in the present calculation. Indeed, because of the strong energy dependence very near threshold, one would need to take into account also the finite momentum resolution of the experiment for a sensible comparison with the data.
There are also experimental results for the decay ψ ′ → γpp [51,52]. While the statistics is not as high as for the J/ψ → γpp case, nonetheless, the recent data from the BES Collaboration [51] provide clear evidence that, contrary to J/ψ → γpp, in this channel there is no prominent near-threshold peak, but still a significant enhancement as compared to the pure phase-space behavior, see Fig. 8. This is interesting because the quantum numbers of the particles involved in the two reactions are identical and, therefore, one would expect naively to see similar effects from thepp FSI. However, in the ψ ′ → γpp decay isospin is likewise not conserved and, in particular, the reaction amplitude can have a different admixture of the isospin-0 and isospin-1 components. Indeed, when we assume, for example, that for ψ ′ → γpp the finalpp state is given by the combination T = 0.9 T 0 + 0.1 T 1 we can describe thē pp invariant mass spectrum measured in this reaction very well, as demonstrated in Fig. 8. But a somewhat smaller or larger admixture (± 5-10 %) of the isospin-0 component would still yield results that are compatible with the data. Note also that the isospin-1 T -matrix from the refitted 31 S 0 potential is employed here, i.e. the same amplitude as in our J/ψ → γpp calculation. Results based on theN N model A(OBE) are also shown in Fig. 8 (solid lines). Here agreement is found for the isospin combination T = (T 0 + T 1 )/2.
The branching ratios of ψ′ → γχ_cJ (J = 0, 1, 2) are around 10% for each of the χ_cJ's [60]. Together they amount to about 30%, which is orders of magnitude larger than all other radiative decay modes. Thus, it is quite possible that in the radiative decay of the ψ′ the p̄p pair is produced predominantly via one of the χ_cJ resonances acting as a doorway state. If so, then the p̄p state must emerge in a P-wave, see Tab. I. Therefore, we also performed calculations where we explored such a scenario. It turned out that the assumption of a transition via the χ_c0 resonance, which then leads to a p̄p final state in the 3P0 partial wave, yields results that agree fairly well with the data. The corresponding event distribution for the final p̄p pair is presented in Fig. 9, where the isospin-0 amplitude predicted by the considered N̄N interactions was employed. Anyway, the masses of the χ_cJ, J = 0, 1, 2 states are 3415, 3511, and 3556 MeV, respectively [60]. Thus, the N̄N threshold is very far away from the nominal masses of those resonances and, therefore, only the very tail of the χ_cJ's can contribute to the p̄p spectrum at the low invariant masses considered in our investigation.
[Fig. 9 caption fragment: ... Fig. 8, however, the 13P0 partial wave is used for generating the N̄N FSI effects. Data are taken from Ref. [51].]
C. Discussion
The scenario outlined above allows us to describe consistenly (and quantitatively) the near-threshold enhancement seen in thepp invariant mass spectrum of various J/ψ and ψ ′ decays in terms of FSI effects. In particular, we can reproduce the moderate enhancement seen in the reactions J/ψ → ωpp and ψ ′ → γpp as well as the rather large enhancement in the J/ψ → γpp channel. The analysis of the latter indicates the possible existence of aN N bound state. However, contrary to the suggestion of the BES Collaboration [49] and the theoretical studies of the Paris group [15], this bound state would be in the isospin-1 channel and not in isospin 0! Therefore, in the following, let us discuss our scenario and possible alternatives in detail.
Near thepp threshold the reactions J/ψ → γpp, ψ ′ → γpp, and J/ψ → ωpp are all governed by the samē N N partial wave, namely the 1 S 0 (cf. Table I). The assumption that isospin is conserved in the hadronic decay J/ψ → ωpp, together with the observed moderate enhancement in the pertinentpp invariant mass spectrum, practically excludes that the exceptionally large enhancement in the J/ψ → γpp decay has anything to do with the isospin-0N N amplitude. Actually, as shown in our analysis, the two measurements can be only reconciled if we assume that the decay into γpp involves a substantial isospin-1 amplitude. Of course, it could be possible that there is a strong violation of isospin conservation in the hadronic decay J/ψ → ωpp. However, we believe that this is much less likely than a sizable isospin-1 admixture in the radiative reaction J/ψ → γpp where isospin is not conserved anyway. Another option would be that the decay J/ψ → ωpp leads predominantly tō N N P -waves -even close to threshold -and only the reaction J/ψ → γpp is dominated by the decay into the 1 S 0 partial wave. While a dominance of P -waves might be indeed plausible for ψ ′ → γpp, as discussed above, at the moment there is no experimental evidence that it could be also the case for the ω channel. Clearly, here measurements of the angular distributions for the ωpp case, analogous to those available for γpp [51], would be very useful. What if a genuine resonance is responsible for the enhancement observed in the decay J/ψ → γpp? Of course, such a resonance should not couple strongly to thē N N channel, because otherwise it will contribute significantly to the (direct)N N interaction. Then, in turn, it would contribute to theN N FSI effects in the pertinent channel, i.e. it should be also seen in ωpp, for example. A resonance that couples strongly to J/ψ and only rather weakly toN N should be seen in other J/ψ decay channels. In principle, the X(1835) found by the BES Collaboration in the reaction J/ψ → γπ + π − η ′ [49,50] could be a candidate for such a resonance. But then we expect it to be absent in the corresponding reaction J/ψ → ωπ + π − η ′ , say -otherwise one would again have difficulties to explain simultaneously the rather moderate enhancement for the ωpp channel. Indeed, it would be interesting to investigate the latter J/ψ decay channel experimentally.
In any case, the scenario favored by us where the exceptionally strong near-threshold enhancement in the reac-tion J/ψ → γpp is primarily due to strong FSI effects in the 1 S 0N N amplitude with isospin I = 1 can be tested experimentally. If this scenario is correct then one should see a similarly strong enhancement in other decay channels where near threshold theN N system is produced in the same partial wave. This applies first of all to the reaction J/ψ → ρpp where theN N state has to have I = 1, provided that isospin is conserved in this strong decay. We present our predictions for the corresponding invariant mass spectrum in Fig. 10.
A measurement of χ c0 decaying into π − pn would be also rather interesting. In this case, near threshold the pn state is likewise produced in the 1 S 0 partial wave and, moreover, it has to be in isospin I = 1, see Table I. Data reported in Ref. [62] suggest that there is a large enhancement in the pn invariant mass spectrum in the low-energy region. However, the invariant-mass resolution is still fairly poor and does not allow for any reliable conclusions.
IV. SUMMARY
In the present paper we have provided a systematic analysis of the near-threshold enhancement in thepp invariant mass spectrum, as observed in various experiments of the decay reactions J/ψ → xpp and ψ ′ (3686) → xpp, with x = γ, ω, π, η. The enhancement is assumed to be due to theN N final-state interaction (FSI) and the pertinent FSI effects are evaluated in an approach that is based on the distorted-wave Born approximation. For theN N interaction a potential derived within chiral effective field theory and fitted to results of a recent partial-wave analysis ofpp scattering data [31] is employed. For comparison, a phenomenological model constructed by the Jülich group and used by us in earlier studies of J/ψ and ψ ′ decays is also utilized. It is found that the near-threshold spectrum of all considered decay reactions can be reproduced simultaneously and consistently by our treatment of thepp FSI. Specifically, the moderate enhancement seen for π 0p p, ηpp, and ωpp final states is well described by theN N interaction in the relevant 3 S 1 and 1 S 0 partial waves as determined in the partial-wave analysis.
The situation is more complicated for the process J/ψ → γpp where there is a rather large near-threshold enhancement. While the pertinentpp invariant mass spectrum was reproduced in our previous work [8] that was based on the Migdal-Watson approach, this is no longer the case for the more realistic treatment of FSI effects employed in the present study. However, we can show that a modest modification of the interaction in the I = 1 1 S 0N N channel -subject to the constraint that the corresponding partial-wave cross sections forpp →pp andpp →nn remain practically unchanged at low energies -allows one to reproduce the events distribution of the radiative J/ψ decay, and consistently all other decays. In this context the decay J/ψ → ωpp plays a crucial role. The moderate enhancement observed in this channel, together with the fact that the producedpp system has to be in I = 0 (assuming that isospin is conserved in this purely hadronic decay) implies that the strong variation seen in the γpp case has to come primarily from the I = 1 1 S 0N N interaction.
It turns out that the modified I = 1 1 S 0 interaction that can reproduce thepp invariant mass spectrum in the reaction J/ψ → γpp predicts aN N bound state. Previous investigations suggested that there could be such a bound state, but in the isospin I = 0 channel [15]. Also the BES Collaboration favored an I = 0 bound state, being led by their observation of the X(1835) resonance in the reaction J/ψ → γπ + π − η ′ [49]. Interestingly, the value we get for the binding energy is comparable to the mass of the X(1835). However, we want to stress that one should view our value with great caution. First, due to the unknown fraction of the I = 0 and I = 1 components in the finalpp state for the radiative decay there is a sizable uncertainty in the actual value. Moreover, one should be aware that, in general, any data above the reaction threshold, like thepp invariant mass spectrum in the present case, do not allow to pin down the binding energy reliably given that the bound state might be 30 or 40 MeV below the threshold and has a sizable width. Actually, at this stage we cannot exclude that an alternative fit of similar quality to the invariant mass spectrum and to the near-thresholdN N scattering data is possible without a bound state in the I = 1 1 S 0N N partial wave.
Another interesting implication of our study is that p̄p invariant mass spectra as measured in heavy meson decays could indeed be very useful as further constraints for the determination of the N̄N partial-wave amplitudes, provided that those data are of high statistics and high resolution like the ones for J/ψ → γp̄p. This is of specific relevance for the near-threshold region. Here the available N̄N observables are dominated by the ³S₁ partial wave whereas the weight of the ¹S₀ amplitude is fairly small. At such low energies direct p̄p scattering experiments for measuring spin-dependent observables that would allow one to disentangle the spin-singlet and triplet contributions are rather difficult (if not impossible) to perform.

The squares represent the results for the published NNLO potential [30] with the cutoff {450 MeV, 500 MeV} while the bands show our calculation with the refitted isospin-1 ¹S₀ amplitude. We see that the latter reproduces the former results very well. The circles are the partial-wave cross sections for the PWA of Ref. [31]. Finally, in Fig. 12 we present phase shifts for the ¹S₀ partial wave. Here the results from the refit are shown by a filled band while those of the published NNLO potential [30] are indicated by the hatched band. For convenience we reproduce here also the results for the isospin 0 case from [30] and those of the employed Jülich N̄N potential.

TABLE III. ¹S₀ scattering lengths a and hadronic shifts and broadenings in hyperfine states of p̄H for 1¹S₀. Results based on the refitted isospin-1 ¹S₀ LECs are given and compared with the ones given in Ref. [30] and with empirical information. The 1¹S₀ scattering length is taken over from Ref. [30].
                refit                                  NNLO [30]                              empirical
a_I=1 (fm)      (0.97 ··· 1.07) − i (0.63 ··· 0.70)    (1.02 ··· 1.04) − i (0.57 ··· 0.61)
∆E (eV)         −(329 ··· 376)                         −(302 ··· 361)                         −740 ± 150 [63]; −440 ± 75 [64]
Γ (eV)          (1596 ··· 1659)                        (1545 ··· 1589)                        1600 ± 400 [63]; 1200 ± 250 [64]
| 2015-02-03T15:00:22.000Z | 2015-02-03T00:00:00.000 | {
"year": 2015,
"sha1": "ec45ed49087c38e17c7ef7c701c14f8a57d74376",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1502.00880",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ec45ed49087c38e17c7ef7c701c14f8a57d74376",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
213635813 | pes2o/s2orc | v3-fos-license | Experimental modeling of subsurface gas traps on Mars
The seasonal methane variations observed by the MSL mission and possible variations of the atmospheric mass on timescales of 10^5-10^6 years are among the most intriguing problems in Mars exploration. These variations are connected with hypothetical biospheric activity in the subsurface Martian soil and with the existence of liquid water on the Martian surface in the modern era. Stability of liquid water at the surface requires a higher atmospheric pressure than the present-day value. CO2 cannot be lost through the known mechanisms of atmospheric escape; therefore, the main part of the required CO2 must be buried in the upper layers of the Martian soil. Localized, seasonally variable sources together with fast methane destruction are needed to explain the strong seasonal variations of the methane concentration in the air at the Martian surface. Gas reservoirs containing biogenic or abiogenic methane could be such seasonal sources of methane. In this work we experimentally study the stability of gas reservoirs covered by a mixture of regolith and water ice with perchlorates. The thickness of the covering regolith layer was about 10 mm. In the experimental runs we increased the temperature of the gas traps and monitored possible diffusion of gases through the sealing layer with a mass spectrometer. The gas traps stayed stable at gas pressures up to 1 bar, and we did not detect any diffusion before the mechanical destruction of the reservoirs at gas pressures over 1 bar. We show that large subsurface gas reservoirs can exist for a long time before cracking, owing to the slow sublimation of the water ice driven by climatic and seasonal variations of the subsurface temperature.
Introduction
The possibility of life, or of traces of past life, is one of the most discussed questions in Mars exploration, and it is closely connected with the study of the atmosphere. On the one hand, liquid water is not stable on the surface of Mars at the present atmospheric pressure; on the other hand, liquid water can exist in shallow subsurface reservoirs on modern Mars. It is also known that liquid water probably existed on the surface in the past (a billion years ago and less) [1].
Therefore, the existence of a denser atmosphere in the recent past and the mechanisms of atmospheric loss are widely discussed today. The atmospheric compositions of the terrestrial planets are similar, so the minimal amount of CO2 outgassed on Mars is estimated at 0.5-1 bar. The Martian 12C/13C isotope ratio is comparable to that of Earth, so the loss of CO2 by atmospheric escape should have been minimal, because escape leads to a strong enrichment of the 13C/12C ratio. Hence the main part of the outgassed CO2 must be buried on the planet.
Carbon dioxide can form carbonates in the presence of liquid water, but only small quantities of carbonates have been discovered on Mars so far. Another way to bury a large mass of carbon dioxide is adsorption into the regolith. Adsorption of atmospheric CO2 in high-latitude regions is possible during low-obliquity periods. The subsequent slow transport of water vapor to "cold traps" could create a water-ice crust covering the absorbed CO2 ice, leading to the formation of subsurface gas reservoirs. Massive CO2 ice deposits have been discovered in the South Polar Layered Deposits of Mars at depths of more than hundreds of meters [2]; moreover, their mass is comparable to the present mass of the Martian atmosphere. However, the temperature at these depths is stable against the seasonal and long-term (about 120,000 years) surface temperature variations [3]. Therefore, gas reservoirs in shallow subsurface layers are more interesting, because they are subjected to the long-term and seasonal temperature variations.
Methane concentrations in the Martian atmosphere change both on a decadal timescale [4] and over several months [5,6]. Localized, seasonally variable sources and fast methane destruction are needed to explain such strong seasonal variations of the methane concentration in the air at the Martian surface. On 15-16 June 2013 a methane spike was registered near Gale Crater by two independent methods [7,6]; the methane concentration in this region increased to 15 ppbv. Shallow gas reservoirs containing biogenic or abiogenic methane could be possible seasonal sources of this methane.
In this work we experimentally study the stability of CH4 and CO2 reservoirs covered by a mixture of regolith and water ice with perchlorates. We also estimate the regions where such gas traps are stable and the regions where they could be destroyed by seasonal temperature variations, and we show that gas-trap destruction is a threshold process.
Experimental facility for looking for microseepage
Figure 1 shows the experimental facility used to search for gas diffusion through the traps. The sample (2) was connected to a buffer chamber (3) equipped with a pressure sensor and two valved tubes: one for pumping out and one for the inlet of the studied gas. At the beginning the tube from the buffer chamber to the sample and the gas-inlet tube were closed. The chamber was then pumped out, and the studied gas was admitted through the inlet tube up to the required pressure. All chamber inputs were then closed and the output to the sample was opened. The sample surface was purged with nitrogen vapour, which entered the membrane inlet of the mass spectrometer. In the case of gas diffusion through the icy regolith the mass-spectrometer signal rises above the background. The sample was cooled with dry ice (1). In each run we increased the pressure until a mass-spectrometer signal appeared.
Laboratory modeling of gas trap formation
A special facility was developed for laboratory modeling of gas-trap formation (see figure 2). A brass tube has a radial protrusion, and a dry-ice cylinder (diameter 10 mm, thickness 2 mm) was set on this protrusion. The bottom of the tube was cooled by liquid nitrogen, and model regolith was poured into the tube. A thermocouple was inserted into the regolith, which was then saturated with the perchlorate solution. The cooling was stopped after the solution had frozen, and we waited until the dry ice had completely evaporated. In the next step the tube was inserted into a thermostat with dry ice and connected to the facility for the microseepage search.
We used two methods for freezing the regolith. In the first method the bottom of the tube was open and the dry ice evaporated directly to the atmosphere; in the second method the bottom of the tube was closed and the dry ice evaporated through the regolith.
Mass-spectrometer
For methane detection we used an experimental static double-focusing mass spectrometer developed at the Ioffe Institute. Its ion-optical scheme is described in [10], and its dual membrane inlet system in [11]. The ion source of the mass spectrometer uses electron ionization [12]. The mass resolution of the instrument is 250, and the range of measured masses is from 12 up to 300 a.m.u.
We measured a calibration line for the calculation of the methane concentration, using air-methane mixtures with methane volume fractions of 100 ppm, 1000 ppm, 10000 ppm and 100000 ppm. The methane counts were normalized by the argon counts. Figure 3 shows the calibration curve.
The error of the methane fraction determination was less than 7%. Methane was detected via the ion at m/z = 15 a.m.u., and the methane detection limit was 55 ppmv.
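As an illustration of how such a calibration line can be used, the following sketch fits a power-law relation between the argon-normalized methane counts and the known mixing ratios and then inverts it for an unknown sample. The count ratios in the array below are hypothetical placeholders, not the measured values behind figure 3.

import numpy as np

# Methane volume fractions of the calibration mixtures (ppmv).
fractions_ppm = np.array([100.0, 1000.0, 10000.0, 100000.0])

# Methane counts normalized by argon counts; hypothetical placeholder values.
normalized_counts = np.array([0.011, 0.105, 1.02, 9.8])

# Fit a straight line in log-log space: log10(counts) = a * log10(fraction) + b.
a, b = np.polyfit(np.log10(fractions_ppm), np.log10(normalized_counts), 1)

def methane_fraction_ppm(normalized_count):
    """Invert the calibration line to estimate the methane fraction (ppmv)."""
    return 10.0 ** ((np.log10(normalized_count) - b) / a)

# Example: a measured normalized count; values corresponding to less than the
# 55 ppmv detection limit quoted above should be treated as non-detections.
print(f"estimated CH4 fraction: {methane_fraction_ppm(0.05):.0f} ppmv")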
In all experiments we used a saturated aqueous perchlorate solution (2.09 g NaClO4 per 1 g H2O).
Results
Figures 4 and 5 present the joint evolution of the temperature, the gas concentration outside the gas trap, and the overpressure in the gas trap for experimental runs with CH4 and CO2. They show the time diagrams for experiments with SiO2 samples of thickness 3 mm and 10 mm (created by method 1 and method 2). No diffusion was observed before the moment of gas-trap destruction.
Discussions
We see that even thin regolith layers can hold a high pressure, so liquid water can exist in such traps. We measured the sublimation rate of the model regolith as a function of temperature in order to compute the stability of the studied gas traps (see figure 6). Using the measured Martian regolith temperature oscillations [3], we computed a map of gas-trap stability for a 1 bar overpressure (see figure 7).
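The stability map mentioned above can be thought of as the following bookkeeping exercise: integrate a temperature-dependent sublimation rate of the ice-cemented layer over a seasonal temperature cycle and compare the annual ice loss with the layer thickness. The rate law and the temperature cycle below are hypothetical placeholders standing in for the measured curve of figure 6 and the temperature data of [3].

import numpy as np

# Hypothetical seasonal subsurface temperature cycle (K) over one Martian year.
phase = np.linspace(0.0, 1.0, 669)
temperature = 200.0 + 30.0 * np.sin(2.0 * np.pi * phase)

# Hypothetical sublimation rate (mm of ice per sol) as a function of temperature,
# standing in for the measured dependence shown in figure 6.
def sublimation_rate_mm_per_sol(T):
    return 1.0e-4 * np.exp((T - 200.0) / 10.0)

# Mean loss rate over the cycle, annual loss, and lifetime of a 10 mm sealing layer.
sols_per_year = 669.0
mean_rate = sublimation_rate_mm_per_sol(temperature).mean()   # mm per sol
annual_loss_mm = mean_rate * sols_per_year
lifetime_years = 10.0 / annual_loss_mm
print(f"annual ice loss: {annual_loss_mm:.3f} mm; layer lifetime: {lifetime_years:.0f} Mars years")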
Although liquid water cannot exist stably on the Martian surface nowadays, the pressure in gas traps can be sufficient for liquid regions to exist. The environment in such regions could be suitable for methanogens. In this case CO2 can build up the pressure necessary for trap destruction as the temperature increases, and the trap destruction would then be observed as a local methane release.
Conclusion
In this work we have shown that such gas traps can exist. Their destruction can act as a sporadic source of methane and explain the fast atmospheric methane variations. | 2019-12-12T10:32:36.521Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "e7549ba32ec774b2edf22333094585711660c1b6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1400/2/022046",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2e600b415077b0184c10d7ff6ec099928560d1c4",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
119325374 | pes2o/s2orc | v3-fos-license | Lipschitz regularity results for nonlinear strictly elliptic equations and applications
Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using some ideas coming from Ishii-Lions' method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.
Introduction
The main goal of this work is to obtain gradient bounds, which are uniform in ǫ > 0 and t respectively, for the viscosity solutions of a large class of nonlinear strictly elliptic equations (1.1) and of the associated evolution equations (1.2). We work in the periodic setting (T N denotes the flat torus R N /Z N ) and assume for simplicity that A(x) = σ(x)σ(x) T with σ ∈ W 1,∞ (T N ; M N ). Let us mention that all the results of this paper hold true if σ ∈ C 0,1/2 (T N ; M N ). We recall that a diffusion matrix A is called strictly elliptic if there exists ν > 0 such that A(x) ≥ νI, x ∈ T N .
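For orientation, the equations (1.1) and (1.2) referred to throughout presumably have the following form; this is inferred from the operator appearing in (4.10) and from the ergodic limit ǫv ǫ → −c discussed in Section 4, and is written here only as a hedged reading aid:

\[
  \epsilon v^{\epsilon} \;-\; \operatorname{tr}\!\big(A(x)\,D^{2} v^{\epsilon}\big) \;+\; H(x, Dv^{\epsilon}) \;=\; 0
  \quad \text{in } \mathbb{T}^{N}, \tag{1.1}
\]
\[
  \partial_{t} u \;-\; \operatorname{tr}\!\big(A(x)\,D^{2} u\big) \;+\; H(x, Du) \;=\; 0
  \quad \text{in } \mathbb{T}^{N} \times (0,+\infty), \qquad u(\cdot,0) = u_{0}. \tag{1.2}
\]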
Most Lipschitz regularity results for elliptic equations are obtained for a suitable growth power with respect to the gradient variable (subquadratic for instance, see Frehse [14]). In this article, we establish gradient bounds
|Dv ǫ | ∞ ≤ K, where K is independent of ǫ, (1.4)
|Du(·, t)| ∞ ≤ K, where K is independent of t, (1.5)
for strictly elliptic equations whose Hamiltonians H have arbitrary growth power in the gradient variable, which is unusual.
An important feature of our work is that we look for gradient bounds which are uniform in ǫ or t. In many results, the bounds depend crucially on the L ∞ norm of the solution (which looks like O(ǫ −1 ) or O(t)), something we want to avoid in order to be able to solve some ergodic problems by sending ǫ → 0 or to study the large time behavior of u(x, t) when t → +∞. These applications are discussed in more detail below and are carried out in Section 4. We focus now on the more delicate part, i.e., the Lipschitz bounds for (1.1).
Let us start by recalling the existing results when H is superquadratic and coercive. Hölder regularity of the solution is proved under a very general assumption, see Capuzzo Dolcetta et al. [10], Barles [7], Cardaliaguet-Silvestre [11], Armstrong-Tran [3]. But there are only few results as far as Lipschitz regularity is concerned. In general they are established using the Bernstein method [15,19] or the adaptation of this method in the context of viscosity solutions, see Barles [5], Barles-Souganidis [8], Lions-Souganidis [21], Capuzzo Dolcetta et al. [10]. This approach requires some structural assumptions on H which are often close to "convexity-type assumptions". They appear naturally when differentiating the equation, a drawback of the original Bernstein method. Even if the weak Bernstein method [5] is less restrictive as far as the regularity of the data is concerned (Lipschitz continuity is enough), we do not consider this approach here, in order to be able to deal with Hamiltonians having little regularity, such as Hölder continuous Hamiltonians. Actually most of our assumptions do not even require the Hamiltonian to be continuous as soon as a continuous solution to the equation exists. However, let us mention that the weak Bernstein method also has several advantages: the method may be used for degenerate equations in some cases and the Hamiltonian may have arbitrary growth, see for instance [8,10]. Instead, in this work, we use the Ishii-Lions method introduced in [16], see also [12,6]. This method takes advantage of the strict ellipticity of the equation to control the strong nonlinearities of the Hamiltonian. In Ishii-Lions [16] and Barles [4], weak regularity assumptions are made on H, merely a kind of balance between some Hölder continuity in x and the growth size of H with respect to the gradient, namely
|H(x, p) − H(y, p)| ≤ ω(|x − y|)|x − y| τ |p| 2+τ + C in [16, Assumption (3.2)], (1.6)
or
|H(x, p) − H(y, p)| ≤ C|x − y||p| 3 + C(1 + |p| 2 ) in [4, Assumption (3.4)], (1.7)
where x, y ∈ T N , p ∈ R N , τ ∈ [0, 1], ω is a modulus of continuity and C > 0. These assumptions are designed for subquadratic (or growing at most like |p| 3 ) Hamiltonians. This is not surprising since it is known that, in general, the ellipticity is not powerful enough to control nonlinearities which are more than quadratic [10]. Under these assumptions, the authors prove a Lipschitz bound, which depends, however, on the L ∞ norm of the solution.
Before giving some comments about these results, let us explain in a formal way the strategy to establish them. The proof follows roughly the same lines as the one in [4]. We aim at proving that the maximum is nonpositive, choosing in a first step ψ(r) = Lr α , α ∈ (0, 1), to obtain a Hölder bound, and, in a second step, ψ(r) = L(r − r 1+α ), to improve the Hölder bound into a Lipschitz one. To do this, we use in a crucial way the strictly concave behavior of ψ near 0 to take advantage of the strict ellipticity of the equation, as usual in Ishii-Lions' method.
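In formulas, the strategy just described amounts to studying a doubled-variable maximum of the type

\[
  M \;=\; \max_{x,y \in \mathbb{T}^N} \Big\{ v^\epsilon(x) - v^\epsilon(y) - \psi(|x-y|) \Big\},
  \qquad
  \psi(r) = L r^{\alpha} \ \text{(H\"older step)}, \quad
  \psi(r) = L\,(r - r^{1+\alpha}) \ \text{(Lipschitz step)};
\]

showing M ≤ 0 yields v ǫ (x) − v ǫ (y) ≤ ψ(|x − y|) for all x, y, and the strict concavity ψ''(r) < 0 near r = 0 is what lets the elliptic term dominate the Hamiltonian contributions. The precise test-functions used in the proofs are those of (2.7) below.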
The first notable difference with the previous works is that we are able to force the maximum to be achieved at (x, y) with r := |x − y| close enough to 0 without increasing L in terms of the L ∞ norm of v ǫ . This is a consequence of an a priori oscillation bound (1.12) obtained by the authors [18] for any continuous solution of (1.1) when merely (1.8) holds. Let us underline that this oscillation bound is a crucial tool in our work and that the assumption (1.8) is very general; it is satisfied, for instance, as soon as H is superlinear. We extend the oscillation bound to the parabolic setting, see Lemma 4.5, and give an application.
The second step starts by noticing that, once we have a Hölder bound at hand, the strength of the nonlinearity is weakened. We can then apply Ishii-Lions' method again in a context where the ellipticity is reinforced compared to the nonlinearity, even when the Hamiltonian has a large growth with respect to the gradient. This allows us to improve the regularity up to Lipschitz continuity, and it is one of the main novelties needed to obtain the gradient bounds. Then, a careful study of the balance between both terms finally gives the best exponents.
Let us comment on our results. Theorem 1.1 reduces to [4, III.1] when α = 1. But notice that our Lipschitz bound does not depend on the L ∞ bound of the solution and we are able to deal with Hamiltonians having less regularity with respect to x. For instance, our result applies when H is of the form (1.14) and G satisfies (1.13) (superlinearity) and |G(x, p)| ≤ C(1 + |p| 2 ) (subquadratic growth), without any regularity condition on G.
In Theorem 1.2, the coercivity assumption (1.10) is the one needed to obtain the Hölder regularity with exponent (k−2)/(k−1) in [10]. Notice that, this estimate being independent of ǫ, we get for free the oscillation bound (1.12). The first step in this case consists in showing that the solution is γ-Hölder continuous for any γ ∈ ((k−2)/(k−1), 1). It then allows us to improve the regularity up to Lipschitz continuity. In (1.11), the growth power with respect to the gradient variable can be much greater than k > 2, which enlarges the class of Hamiltonians under which our result applies. Let us emphasize that the situation is very different compared to Theorem 1.1, where we can start with any Hölder exponent to get the Lipschitz regularity. Here, starting with a Hölder exponent equal to (k−2)/(k−1) seems crucial to be able to improve the regularity when H has a strong growth with respect to the gradient.
As examples of applications of Theorem 1.2, we can deal with some new classes of Hamiltonians for which the existing regularity theory does not apply. We can first consider again (1.14), where now there exists k > 2 such that Notice that even if Σ is now assumed to be nondegenerate, this Hamiltonian is not necessarily convex.
The Hamiltonian, where a is merely continuous and positive, satisfies all the assumptions of Theorem 1.2 and is not convex in general. Let us give another example, which will be used in Section 4.2 to extend the results to the parabolic case (1.2) and in Section 4.4 to prove an existence result in a quite surprising situation. Let K be any continuous function satisfying K(x, p) ≤ C(|p| M + 1), for any x ∈ T N , p ∈ R N , M > 2. Then, the corresponding function satisfies all the assumptions of Theorem 1.2. These examples also illustrate how few regularity assumptions on the data are needed.
Our work takes place in the periodic setting in order to take advantage of the compactness and the absence of boundary of T N . The issue of extending our results to a bounded domain is very interesting and not obvious. In the case of Neumann boundary conditions, it should be true, but the case of Dirichlet boundary conditions faces the problem of loss of boundary conditions when H is superquadratic [9]. Notice that we cannot expect such general results to be true in a general bounded set since it is known [10] that (k−2)/(k−1)-Hölder continuity is optimal in general. Our results can be extended to A = σσ T with σ ∈ C 0,1/2 (T N ; M N ), to quasilinear equations when A = A(x, p), and to fully nonlinear equations of Bellman-Isaacs type, see Section 2.5 for a discussion.
To study the well-posedness of (1.1) under the assumptions of Theorems 1.1 and 1.2, we first have to prove a comparison principle (Theorem 3.2), whose proof is not classical since the Hamiltonian is not Lipschitz continuous with respect to the gradient. Instead, we use the same ideas as for the proof of the Lipschitz bounds. As a consequence, we obtain the existence and uniqueness of a continuous viscosity solution to (1.1). Moreover this solution is Lipschitz continuous and, if the data are C ∞ , then the solution is C ∞ thanks to the classical elliptic regularity theory. Let us mention that our approach also allows us to construct Hölder continuous solutions to (1.1) (Theorem 4.9) under the general assumption (4.20), which is not sufficient to provide a comparison principle.
We then give several applications of our results. A straightforward consequence of the bound (1.4) is the solvability of the ergodic problem associated with (1.1), see [20,2] and Theorem 4.1.
The next application is the study of the parabolic equation (1.2). The natural idea to extend the gradient bound for (1.1) to (1.2) is to prove first a bound for the time derivative |∂u/∂t| ∞ and then to apply the results obtained for the stationary equation. This approach does not work directly for several reasons. On the one hand, the bound for the time derivative is usually obtained as a consequence of the comparison principle, which is not available here. On the other hand, our a priori stationary gradient bounds are valid for continuous solutions and not for subsolutions. We overcome these difficulties by considering a tricky approximate equation where H is replaced by a suitable approximation (see Section 4.2). We finally apply all the previous results to prove the large time behavior of the solution of (1.2). With the gradient bound (1.5), a solution of the ergodic problem (1.15) and the strong maximum principle at hand, the proof is classical [8].
The paper is organized as follows. In Section 2, we prove the stationary gradient bounds, Theorems 1.1 and 1.2. Section 3 is devoted to establish the well-posedness of (1.1). Finally, the applications are presented in Section 4. We start by solving the ergodic problem, then a study of the parabolic equation (1.2) is provided. We end with the long-time behavior of the solution of (1.2) and the construction of Hölder continuous solutions to equations with Hamiltonians of arbitrary growth without the use of comparison principle. Acknowledgement. This work was partially supported by the ANR (Agence Nationale de la Recherche) through HJnet project ANR-12-BS01-0008-01 and WKBHJ project ANR-12-BS01-0020.
where L is the constant (independent of ǫ) which appears in (1.8).
An immediate consequence follows. To make the article self-contained, we present the proof of this result in the Appendix.
2.2. Preliminary lemma for Ishii-Lions' method. The following technical lemma is a key tool in this article.
Lemma 2.2. Let Ψ : R + → R + be an increasing concave function such that Ψ(0) = 0 and the maximum of is achieved at (x, y). If we can write the viscosity inequalities for v ǫ at x and y, then for every and the following estimate holds If, in addition, (1.3) holds, then there exists C̃ = C̃(N, ν, |σ| ∞ , |σ x | ∞ ) (given by (5.3)) such that and, if the maximum is positive, then The first part of the result is a basic application of Ishii's Lemma in viscosity theory, see [12]. The trace estimates can be found in [16,4,8] and (2.6) takes advantage of the ellipticity of the equation and allows one to apply Ishii-Lions' method introduced in [16]. For the reader's convenience, we provide a proof in the Appendix.
2.3. Proof of Theorem 1.1. The proof relies on some ideas of [4]. The main difference is that, thanks to the uniform oscillation bound presented in Lemma 2.1, we can obtain a gradient bound independent of the L ∞ norm of the solution.
Step 1. Hölder continuity. We claim that there exist some constants γ ∈ (0, 1], K 0 > 0 independent of ǫ such that We skip the ǫ superscript in v ǫ hereafter for the sake of notation. Thanks to Lemma 2.1, the oscillation of v is uniformly (in ǫ) bounded by a constant O. Consider where Ψ(s) = K 0 s γ . Our goal is to choose γ ∈ (0, 1], K 0 > 0, which depend only on C, α given by the hypothesis (1.9), such that the above maximum is nonpositive. To do so, we assume by contradiction that the maximum is positive and hence, it is achieved at (x, y) with x ≠ y thanks to the continuity of v. We next choose r depending on K 0 such that With such a choice of r, it is clear that |x − y| < r. Denote s := |x − y|. From Lemma 2.2 and (1.9), we will have a contradiction if we can choose K 0 , γ such that It is clear that νK 0 γ(1 − γ)s γ−2 ≥ C̃K 0 γs γ + C when r is small enough. Hence, the above inequality holds true if we can choose K 0 , γ such that the two following inequalities hold, Since K 0 s γ ≤ K 0 r γ ≤ O + 1, both inequalities hold true when γ is small enough depending on the oscillation O (but not on K 0 ). This proves the claim.
Step 2. Improvement of the Hölder regularity to Lipschitz regularity. From the previous step, v is γ-Hölder continuous (γ is possibly small) and the Hölder constant K 0 can be chosen to be independent of ǫ. We fix such a γ. We also recall that, from Lemma 2.1, the oscillation of v is bounded by a constant O independent of ǫ.
We first construct a concave function Ψ as in (2.7), where r, A 1 , A 2 > 0, which depend only on C, α, β given by the hypothesis (1.9), will be specified later. We extend Ψ to R + by defining Ψ(s) = Ψ(r) for s ≥ r.
We compute, for 0 ≤ s < r, We then choose r depending on A 2 (A 2 may vary in the next arguments) such that It is straightforward to see that Ψ is a smooth concave increasing function on [0, r) satisfying Ψ(0) = 0 and, for all s ∈ [0, r], If M ≤ 0 then the theorem holds with K = A 1 A 2 . The rest of the proof consists in proving that M is indeed nonpositive for A 2 big enough. We argue by contradiction assuming that M > 0. This maximum is achieved at (x, y) with x ≠ y. With the choice of r in the condition (2.8) and the fact that Ψ is non-decreasing, it is clear that |x − y| < r.
Denote s := |x − y|. From (1.9) and Lemma 2.2, we have which gives us The goal now is to reach a contradiction in the above inequality for large A 2 .
We first note that it is possible to increase A 2 in order that Indeed, the inequality is true for all A 2 ≥ 1 if s ≥ 1 and, when s ≤ 1, it is sufficient to Therefore, it is enough to show that we may choose A 2 such that the following inequalities hold true, We first prove that it is possible to choose A 2 such that (2.11) holds true. We know that Ψ is concave and γ-Hölder continuous, so we have and it follows that (2.11) is true provided and (2.14) is true if Finally, (2.16) indeed holds for A 2 big enough since α + 2 − α/(1−γ) < 2. We now prove that it is possible to choose A 2 such that (2.12) holds true. At first, from (2.15), we have νA 1 A 1+γ From (2.9) and (2.13), The proof of the theorem is complete.
2.4. Proof of Theorem 1.2. From [10], v ǫ is (k−2)/(k−1)-Hölder continuous with a constant K 0 , where k > 2 is given by the assumption (1.10). In [10], the authors prove that the Hölder constant depends only on N, k, |ǫv ǫ | ∞ and, since |ǫv ǫ | ∞ ≤ |H(x, 0)| ∞ , K 0 can be chosen independent of ǫ. A by-product of the above result (or of Lemma 2.1) is that the oscillation of v ǫ is bounded by a constant O > 0 independent of ǫ.
Hereafter we write v for v ǫ .
We set where K > 0, which depends only on C, α, β given by the hypothesis (1.11), will be specified later. We fix a constant r which depends on K as follows If the maximum is nonpositive then the theorem holds. From now on, we argue by contradiction assuming that the maximum is positive. The maximum is achieved at (x, y) with x ≠ y. With the choice of r in (2.19), it is clear that |x − y| < r.
Denote s := |x − y|. From (1.11) and Lemma 2.2, we have We can rewrite the above inequality as At first, from (2.19), it is possible to increase K such that r is small enough in order to have Hence, to get a contradiction in the above inequality, we only need to choose K such that the two following inequalities hold, Step 1.1. Choosing K large enough such that we have (2.21). Writing that the maximum (2.20) is positive and using the concavity of Ψ and (2.
which is a constant independent of K, s, we rewrite the above desired inequality as From (2.23) and the choice χ > (k−2)/(k−1), it follows that inequality (2.24) holds true if Finally (2.25) holds true for large K since r → 0 as K → +∞ by (2.19). This proves (2.21).
Step 1.2. Choosing K large enough such that we have (2.22). We have, using (2.19), that |p| → +∞ as K → +∞. We then obtain that the above inequality holds true for large K, concluding (2.22). This ends Step 1.
Step 2. Improvement of the new Hölder exponent to Lipschitz continuity. We are now ready to prove the Lipschitz continuity.
The beginning of the proof is similar to the one of Theorem 1.1. We consider the increasing concave function Ψ given by (2.7) for any γ ∈ (0, 1) and A 1 , A 2 , r > 0 satisfying (2.8) and set We are done if the maximum is nonpositive. Assuming by contradiction that the maximum is positive, we know it is achieved at (x, y) with s := |x − y| < r. Applying Lemma 2.2 and (1.11), we see that we reach the desired contradiction if the following inequalities hold The next substeps are devoted to proving that we can fulfill the two above inequalities by choosing A 2 large enough. This then leads to a contradiction, which implies that the maximum M is nonpositive, concluding that v is Lipschitz continuous with constant A 1 A 2 and ending the proof of Theorem 1.2.
Step 2.1. Choosing A 2 such that (2.26) holds true. From Step 1, we know that v is χ-Hölder continuous for any χ ∈ ((k−2)/(k−1), 1). It follows that (2.26) holds provided Recalling that 1/s ≥ 1/r > A 2 , the above inequality is true if First of all, we have β < k − 1 < 1/(1−χ), so, by (2.26), ω (1 + Ψ ′ (s) β )s is small for small s. Therefore, to fulfill (2.26), it is enough to fix χ close enough to 1 such that and to take A 2 large enough.
Step 2.2. Choosing A 2 such that (2.27) holds true. We need to choose A 2 such that Using (2.28) and (2.29) again, we see that the above inequality is true provided We fix χ ∈ ((k−2)/(k−1), 1) close enough to 1 such that (2.30) holds and k − (1−γ)/(1−χ) < 1 + γ. Noticing that Ψ ′ (s) = |p| → +∞ when A 2 → +∞, the previous inequality holds when A 2 is big enough. Therefore (2.27) holds. The proof of the theorem is complete.
More precisely, we consider the corresponding Dirichlet problem, where Ω ⊂ R N is an open bounded set with ∂Ω ∈ C 1,1 , g ∈ C(∂Ω), ǫ > 0, and we need to assume that H ∈ C(Ω × R N ; R) to prove the comparison principle.
The comparison principle follows easily from the ad-hoc inequality (3.3) below. Then, there exists a constant C such that The proof of the proposition follows the same ideas as the proof of Theorem 1.2. We only sketch the minor changes between the two proofs.
Proof. We make the proof under assumptions (1.10)-(1.11), the other case being simpler. With the assumption ∂Ω ∈ C 1,1 and (1.10), the result of [10] gives where k is given by the assumption (1.10).
Since u, v are bounded, we can set By the upper semi-continuity of u and the compactness of ∂Ω, there exists r > 0 such that u(x) − u(y) ≤ d for all y ∈ ∂Ω, x ∈ Ω and |x − y| ≤ r, v(x) − v(y) ≤ d for all x ∈ ∂Ω, y ∈ Ω and |y − x| ≤ r.
Hence, using u ≤ v on ∂Ω, there exists r > 0 such that This implies that for C = U r , we have
Step 1. Now, we prove that for any χ ∈ ((k−2)/(k−1), 1), there exists a constant K such that max where K > 0 depends only on C, α, β given by the hypothesis (1.11) and will be specified later.
We argue by contradiction assuming that the maximum is positive for any K > 0. It is therefore achieved at (x, y) with x ≠ y. Denote s := |x − y|. We have It follows from (3.5) that s tends to zero as K → +∞. Thanks to (3.6), we then infer that necessarily x, y ∈ Ω for K big enough. Therefore, for K big enough, we can write the viscosity inequalities for u at x and v at y.
From this point, the next arguments follow exactly those of Parts 1 and 2 in the proof of Theorem 1.2. The only minor difference is the way we get (2.23). From (3.8) and (3.4), setting Ψ(t) = Kt χ , we obtain which is exactly the estimate (2.23), as desired.
Consider the function Ψ given by (2.7).
If the maximum is nonpositive, (3.3) holds with C = A 1 A 2 . From now on, we argue by contradiction assuming that the maximum is positive and achieved at (x, y). With the choice of r in (2.8), we have 0 < s := |x − y| < r. Using the same arguments as in the beginning of Step 1, up to taking A 2 big enough, we can assume that x, y ∈ Ω and therefore we can write the viscosity inequalities for u at x and v at y. The next arguments follow exactly those of Part 3 in the proof of Theorem 1.2. The only minor difference is the way we get (2.28). Fix any χ ∈ ((k−2)/(k−1), 1). From Step 1, we obtain We then have sΨ ′ (s) ≤ Ψ(s) < Ks χ , hence This is exactly Estimate (2.28), as we want. Having (2.28) at hand, we readily repeat the arguments of Part 3 in the proof of Theorem 1.2 to conclude.
We now prove the comparison principle (Theorem 3.2). Notice that we assume that the Dirichlet boundary conditions hold in the classical viscosity sense on ∂Ω. This is a little restrictive, especially when working with superquadratic Hamiltonians, since it is known that loss of boundary conditions may happen, see [9] for instance. But it is enough for our purpose here since we work in the periodic setting without boundary conditions.
Proof. The proof of this result follows quite easily from the estimate (3.3). Define d as in (3.2). We assume that d > 0 and try to get a contradiction. Since u ≤ g ≤ v on ∂Ω, any z such that d = u(z) − v(z) lies in Ω. The maximum is achieved at (x η , y η ) ∈ Ω × Ω. If there is a sequence η → 0 such that x η , y η → x ∈ ∂Ω, then which is a contradiction. Therefore, (x η , y η ) ∈ Ω × Ω for η small enough. The theory of second order viscosity solutions yields, for every ̺ > 0, the existence of (p η , X) ∈ J 2,+ u(x η ) such that and the following viscosity inequalities hold Thanks to Proposition 3.1, we have This implies that p η is bounded independently of η. Subtracting the viscosity inequalities and using (2.4), we get ǫd ≤ H(y η , p η ) − H(x η , p η ) + O(η) + O(̺), which leads to a contradiction when ̺ → 0, η → 0, thanks to the uniform continuity of H on compact subsets.
As a consequence of the previous results, we obtain the well-posedness for (1.1) in the class of Lipschitz continuous functions.
Proof. Thanks to the comparison principle, Theorem 3.2, we can construct a unique continuous viscosity solution to (1.1) with Perron's method. To apply this method, it is enough to build some sub- and supersolutions to (1.1), which is easily done by considering v ± (x) = ±(1/ǫ)|H(·, 0)| ∞ . The Lipschitz regularity of the solution is then obtained from Theorems 1.1 and 1.2. When A and H are C α in x, the C 2,α regularity of v ǫ is a consequence of the Lipschitz bounds and the classical elliptic regularity theory [15, Theorems 6.13 and 6.14].
Proof. With Theorems 1.1 and 1.2 at hand, the result is an easy application of the method of [20] and the strong maximum principle. We only give a sketch of proof. Let v ǫ be the Lipschitz continuous solution of (1.1) given by Corollary 3.3. Since |ǫv ǫ | ≤ |H(·, 0)| ∞ and |Dv ǫ | ∞ ≤ K, the sequences ǫv ǫ and v ǫ − v ǫ (0) are bounded and equicontinuous in C(T N ) for all ǫ > 0. By the Ascoli-Arzela theorem, they converge, up to subsequences, to −c ∈ R and v 0 ∈ W 1,∞ (T N ) respectively. By stability, (c, v 0 ) is a solution of (4.1). To prove the uniqueness part of the theorem, assume we have two solutions (c 1 , v 1 ) and (c 2 , v 2 ) of (4.1). Then ũ 1 (x, t) := v 1 (x) − c 1 t − (|v 1 | ∞ + |v 2 | ∞ ) and ũ 2 (x, t) := v 2 (x) − c 2 t are respectively subsolution and supersolution of the associated evolution problem (1.2) with initial data ũ 1 (x, 0) ≤ ũ 2 (x, 0). Since both ũ 1 and ũ 2 are Lipschitz continuous, we have a straightforward comparison principle for the evolution problem, which yields ũ 1 (x, t) ≤ ũ 2 (x, t) for all (x, t) ∈ T N × [0, +∞). Sending t → +∞, we infer c 1 ≥ c 2 and, exchanging the role of the two solutions, we conclude c 1 = c 2 . It is then easy to prove, using the Lipschitz continuity of v 1 , v 2 and H with respect to the gradient, that If, in addition, A, H and u 0 are C ∞ , then u ∈ C ∞ (T N × [0, +∞)).
To prove the theorem, we adapt the proofs of Theorems 1.1 and 1.2. The proof under the set of assumptions (1.10)-(1.11) is more delicate since the proof of Theorem 1.2 first requires constructing a solution to (1.2) which is (k−2)/(k−1)-Hölder continuous. Due to the lack of a comparison principle for (1.2) in our case, and since the Hölder regularity result of [10] does not apply directly to evolution equations, the task is difficult. We need to extend the result of [10] to subsolutions of (1.2) which are Lipschitz continuous in time (see Lemma 4.3) and to construct an approximate solution of (1.2) which is indeed Lipschitz continuous in time.
Proof of Theorem 4.2.
Step 1. Proof when (1.8)-(1.9) hold. We truncate the Hamiltonian H by defining Notice that, on the one hand, for n ≥ L, H n satisfies (1.8). On the other hand, for all n, H n satisfies (1.9) with the same constant C as for H. Moreover H n converges locally uniformly to H as n → +∞. By construction, H n ∈ BUC(T N × R N ; R). It follows that the comparison principle holds for (1.2) where H is replaced by H n . Since H n (x, Du 0 (x)) = H(x, Du 0 (x)) for n large enough, the corresponding barrier functions are respectively super- and subsolutions of (1.2) with H n , and Perron's method yields a unique continuous viscosity solution u n of this latter equation.
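One standard way to realize such a truncation, given here only as a plausible example and not necessarily the authors' exact definition, is

\[
  H_n(x, p) \;:=\; H\!\Big(x,\; p\,\min\{1,\, n/|p|\}\Big) \ \ (p \neq 0), \qquad H_n(x, 0) := H(x, 0),
\]

i.e. H is frozen outside the ball {|p| ≤ n}; in any case, the only properties used in the argument are exactly those listed in the preceding sentences.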
By Theorem 4.1, there exists a solution (c n , v n ) ∈ R × W 1,∞ (T N ) of (4.1) where H is replaced by H n . Notice that, since H n satisfies (1.8)-(1.9) with constants independent of n for n > L, both |v n | ∞ and |Dv n | ∞ are bounded independently of n. Choosing A independent of n such that A ≥ |v n | ∞ + |u 0 | ∞ , the functions (x, t) → v n (x) − c n t ± A are respectively viscosity super- and subsolutions of (1.2) with H n . By comparison with u n we get It follows that It is now possible to mimic the proof of Theorem 1.1 for u n .
We begin by proving that u n is γ-Hölder continuous with a constant independent of t, n for some γ ∈ (0, 1). For any η > 0, consider where Ψ(s) = Ks γ , 0 < γ < 1. If the maximum is nonpositive for some K > 1 and all η > 0, then we are done. Otherwise, for all K > 1, there exists η > 0 such that the maximum is positive. It is achieved at some (x, y, t) with x ≠ y.
If t = 0, then, using that |x − y| ≤ √ N , we have It follows that, for K big enough, the maximum is achieved at t > 0 and we can write the viscosity inequalities for u n using the parabolic version of Ishii's Lemma [12,Theorem 8.3]. Using Lemma 2.2 in this context, we get We then obtain a contradiction in the above inequality repeating readily the proof of Step 1 of Theorem 1.1 with O := sup t>0 osc(u n (·, t)).
With the same adaptations as above in this parabolic context, we can reproduce the rest of the proof of Theorem 1.1. We conclude that u n is Lipschitz continuous in space with a constant independent of t, n since we used (1.8)-(1.9) with constants independent of n and since osc(u n (·, t)) is bounded independently of t, n.
By Ascoli-Arzela Theorem, up to extract subsequences, u n converge locally uniformly in T N × [0, +∞) as n → +∞ to a function u which is still Lipschitz continuous in space with a constant independent of t. By stability, u is a solution to (1.2).
The proof of the Lipschitz continuity of u in time requires u 0 to be C 2 and can be done exactly as in the second case below.
We have a comparison principle for (4.5) since H n ∈ BUC(T N × R N ) and (1/q)|p| M is a nonlinearity which is independent of x; when subtracting the viscosity inequalities, this term disappears since we are in T N and there is no need to add a localization term in the test-function to achieve the maximum. Moreover, since the functions in (4.4) are still super- and subsolutions of (4.5), by means of Perron's method, we can build a continuous viscosity solution u qn of the problem (4.5).
The next lemma extends the result of [10] to USC subsolutions of parabolic equations with coercive Hamiltonians satisfying (1.10). The proof is postponed to the end of the section. Lemma 4.3. Assume that (1.10) holds. Let U ∈ USC(T N × [0, +∞)) be a subsolution of (1.2) which is bounded and Lipschitz continuous in time with constants independent of t. Then, there exists C̃ > 0 which depends on k, A, Λ (appearing in (1.10) and (4.7)) but not on t such that We are going to prove that u qn satisfies the assumptions of Lemma 4.3. We first claim that there exists a constant c qn , bounded with respect to n, such that u qn + c qn t is bounded in T N × [0, +∞) by a constant depending on q but not on n. The equation satisfies Assumptions (1.10)-(1.11) of Theorem 1.2 with k = M and a constant C depending on q but not on n. By Theorem 4.1, there exists a solution (c qn , v qn ) ∈ R × W 1,∞ (T N ) of the associated ergodic problem. By the maximum principle, |ǫv| ≤ |H n (·, 0)| ∞ ≤ |H(·, 0)| ∞ , so c qn is bounded independently of q, n. Moreover, since the constants in the assumptions of Theorem 1.2 may be taken independent of n, v qn is bounded and Lipschitz continuous with constants independent of n. The functions ṽ qn (x, t) = v qn (x) − c qn t ± A q are respectively viscosity super- and subsolutions of (4.5) when A q ≥ |v qn | ∞ + |u 0 | ∞ (A q may be chosen independent of n). By comparison with u qn we get and the claim is proved.
We then claim that u qn is Lipschitz continuous in time, i.e., there exists Λ > 0 independent of t, q, n such that The proof is classical and relies on the comparison principle together with the fact that u 0 ∈ C 2 (T N ). We only give a sketch of proof. Since A and the Hamiltonian in (4.5) do not depend on t, for all h > 0, u qn (·, · + h) is a solution to (4.5) with initial data u qn (·, h). By comparison, we obtain Setting Λ := |trace(AD 2 u 0 )| ∞ + |Du 0 | M ∞ + |H(·, 0)| ∞ (notice that Λ depends neither on q nor on n), we have that u 0 (x) ± Λt are respectively super- and subsolutions of (4.5). By comparison, it follows that |u qn (x, t) − u 0 (x)| ≤ Λt. Using this inequality in (4.8), we obtain (4.7).
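For the reader's convenience, the two comparison arguments just sketched can be recorded in one line, under the same assumptions and with the notation of the preceding paragraph:

\[
  |u_{qn}(\cdot, t+h) - u_{qn}(\cdot, t)|_{\infty}
  \;\le\; |u_{qn}(\cdot, h) - u_{0}|_{\infty}
  \;\le\; \Lambda\, h,
  \qquad t, h \ge 0,
\]

which gives the claimed Lipschitz continuity in time, uniformly in q and n.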
Therefore, we can apply Lemma 4.3 to U(x, t) = u qn (x, t) + c qn t, which is Lipschitz continuous in time with a constant independent of t, q, n since c qn is bounded independently of q, n. We obtain that u qn (x, t) + c qn t, and so u qn , is (M−2)/(M−1)-Hölder continuous in space with a constant depending on q (but not on n, t). By the Ascoli-Arzela theorem, u qn converges, up to subsequences, locally uniformly in T N × [0, +∞) as n → +∞ to a function u q which still satisfies (4.7) (with the same constant Λ). Moreover, by stability, u q is a solution to (4.5) with H n replaced by H.
Arguing as above on (4.6) where H n is replaced by H, we can construct a solution (c q , v q ) to the ergodic problem associated to (4.6) with H. Using that, this time, (1.11) holds with data independent of q, we can prove that c q is bounded and v q is bounded and Lipschitz continuous with constants independent of q. By comparison, u q + c q t is bounded independently of q, t.
Applying Lemma 4.3 again to u q + c q t, but using (4.9), we obtain that u q is (k−2)/(k−1)-Hölder continuous with a constant independent of q now. Thanks again to the Ascoli-Arzela theorem, we can send q → +∞ to obtain, up to subsequences, a solution u of (1.2) which is still (k−2)/(k−1)-Hölder continuous with a constant independent of t.
We are now in a position to mimic the proof of Theorem 1.2 for this solution u, which is easily done by adapting the proof to the time-dependent case.
In conclusion, we built a Lipschitz continuous (in space and time) solution to (1.2) with constants independent of t.
Step 3. Uniqueness in the class of continuous functions and further regularity. Even if a strong comparison principle between semicontinuous viscosity sub- and supersolutions does not necessarily hold for (1.2) under our assumptions, it is easy to see that a comparison principle holds if either the subsolution or the supersolution is Lipschitz continuous. This allows us to compare any continuous viscosity solution of (1.2) with u.
The regularity of u when the data u 0 ∈ C 2,α and H is C α in x-variable is a consequence of the Lipschitz bounds and the classical parabolic regularity theory, see [17] for instance.
The proof of the theorem is complete.
Proof of Lemma 4.3. To prove the lemma, it is sufficient to prove that there exists C > 0 such that, for every t > 0, −trace(A(x)D 2 U(x, t)) + H(x, DU(x, t)) ≤ C for x ∈ T N in the viscosity sense. (4.10) Indeed, once (4.10) is established, we can readily repeat the proof of [10, Theorem 2.7].
We end this section with a general bound for the oscillation of continuous solutions to (1.2) when the comparison result holds. It is the analogue of Lemma 2.1 in the parabolic setting and is a result of independent interest. We give below, as an easy application, the convergence of u(x, t)/t towards a constant. (4.12) Then, the unique continuous solution u of (1.2) satisfies and y t such that u(y t , t) = min Notice that (4.12) is a parabolic version of (1.8), which holds as soon as H is superlinear. Remark 4.6. Assuming that the comparison principle holds is a bit restrictive in this context, but we did not succeed in removing it.
Proof of Lemma 4.5. Setting A := |H(·, Du 0 ) − trace(AD 2 u 0 )| ∞ , (4.14) we have that u 0 (x) ± At are respectively super- and subsolutions of (1.2). By comparison, it follows that |u(x, t) − u 0 (x)| ≤ At. By comparison again, we get where the constant L is the one in (4.12). If M ≤ 0, then (4.13) is straightforward. Otherwise, M ≥ Lδ > 0 for δ > 0 small enough. Thanks to (4.15), we can approximate φ(t) := min x∈T N u(x, t) from below over the compact interval [0, T ] by a sequence of smooth functions φ n (t) whose Lipschitz norm is bounded by A given by (4.14). Up to choosing n big enough, we may assume 0 ≤ φ − φ n ≤ δ. For n ∈ N, we consider It is clear that M n ≥ δ > 0. The above positive maximum is achieved at (x n , y n , t n ) with x n ≠ y n . Otherwise, u(x n , t n ) − Lu(x n , t n ) + (L − 1)φ n (t n ) ≥ δ, which is impossible since φ n (t) ≤ φ(t) = min x∈T N u(x, t). Moreover, by replacing L with max{L, ||Du 0 || ∞ } if necessary, we can see easily that t n > 0. The claim is proved and the maximum in M n is achieved at a differentiable point of the test-function. The theory of second order viscosity solutions [12, Theorem 8.3] yields, for every ̺ > 0, the existence of (a, p, X) ∈ J 2,+ u(x n , t n ) and (b/L, p/L, Y /L) ∈ J 2,− u(y n , t n ), with p = It follows Letting ̺ → 0 and applying (4.12) yields a contradiction. We end this section with an application of the oscillation bound.
Proposition 4.7. Assume (4.12) and suppose that a comparison principle for (1.2) holds. For every u 0 ∈ C(T N ), there exists c ∈ R such that the unique solution u of (1.2) satisfies u(·, t)/t → −c uniformly in T N as t → +∞. For related results in the case of Bellman equations, see [2,1].
Sketch of proof of Proposition 4.7. Without loss of generality, we assume that u 0 ∈ C 2 (T N ). The general case where u 0 ∈ C(T N ) can be handled using an approximation of u 0 in the class of C 2 functions and the comparison principle. Set m(t) = min T N u(·, t). Since (x, t) → u 0 (x) − At, where A is given by (4.14), is a subsolution of (1.2), we have m(t) ≥ −C(1 + t). Moreover, an easy application of the comparison principle yields that m is subadditive, namely m(t + s) ≤ m(t) + m(s) for all t, s ≥ 0. By the subadditive theorem, there exists c ∈ R such that m(t)/t → −c as t → +∞. By Lemma 4.5, 0 ≤ u(x, t) − m(t) ≤ L diam(T N ). This implies the uniform convergence of u(·, t)/t to −c.
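The subadditivity step can be recorded explicitly; this is just Fekete's subadditive lemma combined with the lower bound on m and the oscillation estimate of Lemma 4.5:

\[
  m(t+s) \le m(t) + m(s) \ \ \forall\, t, s \ge 0
  \quad\Longrightarrow\quad
  \frac{m(t)}{t} \;\longrightarrow\; \inf_{t > 0} \frac{m(t)}{t} \;=:\; -c \;>\; -\infty
  \quad (t \to +\infty),
\]
\[
  \Big|\frac{u(x,t)}{t} + c\Big|
  \;\le\; \frac{u(x,t) - m(t)}{t} + \Big|\frac{m(t)}{t} + c\Big|
  \;\le\; \frac{L\,\mathrm{diam}(\mathbb{T}^N)}{t} + o(1) \;\longrightarrow\; 0
  \quad \text{uniformly in } x,
\]

the finiteness of c coming from the bound m(t) ≥ −C(1 + t).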
Large time behavior of solutions of nonlinear strictly parabolic equations.
In this section, we use the uniform gradient bound proved in Theorems 1.1 and 1.2 to study the large time behavior of the solution of (1.2).
The first results on the large time behavior of solutions of second order parabolic equations were established in Barles-Souganidis [8]. They prove the uniform gradient bounds (1.4) and (1.5) for (1.1) and (1.2) in two cases. The first one is for Hamiltonians with a sublinear growth with respect to the gradient. A typical example is The second case is for superlinear Hamiltonians. The precise assumptions ([8, (H2)]) are more involved and require both local Lipschitz regularity properties and convexity-type assumptions on H. These assumptions are designed to allow the use of weak Bernstein-type arguments ([5]). The typical example, with a superlinear growth with respect to the gradient, is
H(x, p) = a(x)|p| 1+α + ℓ(x), α > 0, a, ℓ ∈ W 1,∞ (T N ) and a > 0. (4.17)
The proof of the large time behavior of the solution of (1.2) is then a consequence of the strong maximum principle (we give a sketch of proof below).
On the one hand, our results generalize the assumptions on sublinear Hamiltonians made in [8]. More importantly, our results allow us to deal with a class of superlinear Hamiltonians which is very different from the superlinear case of [8].
Theorem 4.8. (Large time behavior) Assume that either the assumptions of Theorem 1.1 or the assumptions of Theorem 1.2 hold. Moreover, suppose that H is continuous and locally Lipschitz with respect to p. Then, there exists a unique c ∈ R such that, for all u 0 ∈ C(T N ), the solution u of (1.2) satisfies u(x, t) + ct → v 0 (x) uniformly as t → +∞, (4.18) where (c, v 0 ) is a solution of (4.1).
Sketch of proof of Theorem 4.8. First of all, it is enough to assume that u 0 ∈ C 2 (T N ). The general case where u 0 ∈ C(T N ) can be handled using an approximation of u 0 in the class of C 2 functions and the comparison principle.
Passing to the limit with respect to j in m(t + t j ) we obtain Since u ∞ is a solution of (1.2) with c in the right-hand side and v 0 is a solution of (4.1), thanks to the Lipschitz continuity of u ∞ , v 0 with respect to x and of H with respect to the gradient, we obtain that there exists C > 0 such that (4.19) holds. Using (4.19) and the strong maximum principle ([13]), we infer that u ∞ (x, t) − v 0 (x) = ℓ for every (x, t) ∈ T N × [0, +∞). Noticing that ℓ + v 0 (x) does not depend on the choice of subsequences, we obtain u(x, t) + ct − ℓ − v 0 (x) → 0 uniformly in x as t → ∞.
4.4. Existence result of Hölder continuous solutions for equations without comparison principle. Usually, existence results for equations like (1.1) or (1.2) are a consequence of a strong comparison principle such as Theorem 3.2 together with Perron's method, or are obtained using the value function of an optimal control problem when H is convex. In this section, we use Theorem 3.2 and the result of [10] to build Hölder continuous solutions under assumptions which are too weak to expect any comparison principle. Theorem 4.9. Assume A ≥ 0, H is continuous and satisfies Then there exists a viscosity solution v ǫ of (1.1) which is (m−2)/(m−1)-Hölder continuous and, for every u 0 ∈ C 2 (T N ), a viscosity solution u of (1.2) which is (m−2)/(m−1)-Hölder continuous in space and Lipschitz continuous in t.
Proof. The proof follows the approach used in Step 2 of the proof of Theorem 4.2.
Step 1. Existence for the stationary problem (1.1). Equation (1.1) with H replaced by H q (x, p) = |p| M+1 /q + H(x, p) and A replaced by A + (1/q)I satisfies the conditions of Theorem 3.2, hence we have the strong comparison principle for this new equation. Therefore, we can apply Perron's method to obtain the existence of a continuous solution v ǫ q . From [10], v ǫ q is (m−2)/(m−1)-Hölder continuous. Using the Ascoli-Arzela theorem and stability as q → +∞, we obtain the existence of a viscosity solution v ǫ which is (m−2)/(m−1)-Hölder continuous (with a constant independent of ǫ).
Step 2. Existence of Hölder continuous solutions to the ergodic problem. We can reproduce the beginning of the proof of Theorem 4.1 with v ǫ : the sequences ǫv ǫ and v ǫ − v ǫ (0) are still equicontinuous and therefore we can build a solution (c, v 0 ) ∈ R × C 0,(m−2)/(m−1) (T N ) to (4.1).
Step 3. Existence for the parabolic problem. We now consider (4.5). This equation satisfies a strong comparison principle. We can readily follow the proof of Step 2 of Theorem 4.2 to obtain a Hölder continuous solution u q . Notice it is possible to build a solution to (4.1) as explained in Step 2 above. The comparison of u q with v q − c q t ± C, where C is a big constant, is no longer as straightforward as in the proof of Theorem 4.2, since v q is only Hölder continuous and not Lipschitz continuous. To continue, we need to adapt the proof of Theorem 3.2 to the parabolic case, which can be done easily since u q , v q are (m−2)/(m−1)-Hölder continuous in space. It is then possible to send a subsequence q → +∞ to obtain a Hölder continuous (in space) solution u to (1.2), as desired.
Appendix
Proof of Lemma 2.1. For simplicity, we skip the ǫ superscript in v ǫ . The constant L which appears below is the one of (1.8). Consider We are done if M ≤ 0. Otherwise, the above positive maximum is achieved at (x, y) with x ≠ y. Notice that the continuity of v is crucial at this step. The theory of second order viscosity solutions yields, for every ̺ > 0, the existence of (p, X) ∈ J 2,+ v(x) and (p, Y ) ∈ J 2,− v(y).
We now build a suitable basis to prove (2.4) and another one to prove (2.5).
If e 1 and ẽ 1 are collinear, then we complete the basis with orthogonal unit vectors e i = ẽ i ∈ e 1 ⊥ , 2 ≤ i ≤ N. Otherwise, in the plane span{e 1 , ẽ 1 }, we consider a rotation R of angle π/2 and define e 2 = Re 1 , ẽ 2 = −Rẽ 1 .
Since the maximum is supposed to be positive and Ψ ≥ 0, we have v(x) > v(y) and obtain − trace(A(x)X − A(y)Y ) + H(x, p) − H(y, p) < 0. | 2016-07-13T07:51:25.000Z | 2016-07-13T00:00:00.000 | {
"year": 2016,
"sha1": "06da32ac02abec6f3922377b39a4961f611e8796",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jde.2017.05.020",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "06da32ac02abec6f3922377b39a4961f611e8796",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |